Between Christmas and New Year’s, my family took a six-hour drive to Vermont. I drove; my wife and two children sat in the back seat. Our children are five and two—too old to be hypnotized by a rattle or a fidget spinner, too young to entertain themselves—so a six-hour drive amounted to an hour of napping, an hour of free association and sing-alongs, and four hours of desperation. We offered the kids an episode of their favorite storytelling podcast, but they weren’t in the mood for something prerecorded. They wanted us to invent a new story, on the spot, tailored to their interests. And their interests turned out to be pretty narrow. “Tell one about the Ninja Turtles fighting Smasher Venom, a villain I just made up who is the size of a skyscraper,” the five-year-old said. “With lots of details about how the Turtles use their weapons and work together to defeat the bad guy, and how he gets hurt but doesn’t die.” My wife tried improvising a version of this story; then I tried one. The children had notes. Our hearts weren’t in it. It was obvious that our supply of patience for this exercise would never match their demand. Three and a half hours to go.

My wife took out her phone and opened ChatGPT, a chatbot that “interacts in a conversational way.” She typed in the prompt, basically word for word, and, within seconds, ChatGPT spat out a story. We didn’t need to tell it the names of the Teenage Mutant Ninja Turtles, or which weapons they used, or how they felt about anchovies on their pizza. More impressive, we didn’t need to tell it what a story was, or what kind of conflict a child might find narratively satisfying.

We repeated the experiment many times, adding and tweaking details. (The bot remembers your chat history and understands context, so you don’t have to repeat the whole prompt each time; you can just tell it to repeat the same story but make Raphael surlier, or have Smasher Venom poison the water supply, or set the story in Renaissance Florence, or do it as a film noir.) My wife, trying to assert a vestige of parental influence, ended some of the prompts with “And, in the end, they all learned a valuable lesson about kindness.” We ran the results through a text-to-speech app, to avoid car sickness, and the time pleasantly melted away. My wife took a nap. I put in an earbud and listened to a podcast about the A.I. revolution that was on its way, or that was arguably already here.

ChatGPT is a free public demo that the artificial-intelligence company OpenAI put out in late November. (The company also has several other projects in development, including DALL-E.) We’ve known for a while that this sort of A.I. chatbot was coming, but this is the first time that anything this powerful has been released into the wild. It’s a large language model trained on a huge corpus of text that apparently included terabytes of books and Reddit posts, virtually all of Wikipedia and Twitter, and other vast repositories of words. It would be an exaggeration, but not a totally misleading one, to refer to the text that was fed into the model as “the Internet.” The bot isn’t up on current events; its training data runs only through 2021. But it can do a lot more than make up children’s stories. It can also explain Bitcoin in the style of Donald Trump, reduce Dostoyevsky to fortune-cookie pabulum, write a self-generating, never-ending “Seinfeld” knockoff, and invent a Bible verse about how to remove a peanut-butter sandwich from a VCR, among many, many other things. The other night, I was reading a book that alluded to the fascist philosopher Carl Schmitt’s critique of liberalism in a way that I didn’t quite understand; I asked ChatGPT to explain it to me, and it did a remarkably good job. (Other times, its answers to questions like this are confident and completely wrong.) Some students are using it to cheat; some teachers are using it to teach; New York City schools have blocked the software on their devices and networks until they can figure out what the hell is going on. Google Search scrapes the Internet and ranks it in order of relevance, a conceptually simple task that is so technically difficult, and so valuable, that it enabled Alphabet to become a trillion-dollar company. OpenAI and its competitors—including DeepMind, which is now owned by Alphabet—are aiming to do something even more potentially transformative: build a form of machine intelligence that can not only organize but expand the world’s glut of information, improving itself as it goes, developing skills that are increasingly indistinguishable from shrewdness and ingenuity and maybe, eventually, something like understanding.

The interface is about as simple as it gets: words in, words out. You type in any prompt that comes to mind, press a button that looks like a little paper airplane, and then watch the blinking cursor as ChatGPT responds with its own words—words that often seem eerily human, words that may include characteristic hedges (“It’s important to note that . . .”) or glimmers of shocking novelty or laughable self-owns, but words that, in almost every case, have never been combined in that particular order before. (The graphic design, especially the cursor, seems almost intended to create the illusion that there is a homunculus somewhere, a ghost in the machine typing back to you.) There is a robust and long-standing debate about whether the large-language-model approach can ever achieve true A.G.I., or artificial general intelligence; but whatever the bots are already doing has been more than enough to capture the public’s imagination. I’ve heard ChatGPT described, sometimes by the same person, as a miracle, a parlor trick, and a harbinger of dystopia. And this demo is just the public tip of a private iceberg. (According to rumors, OpenAI will soon put out a more impressive language model trained on a far vaster trove of data; meanwhile, Alphabet, Meta, and a handful of start-ups are widely assumed to be sitting on unreleased technology that may be equally powerful, if not more so.) “If we’re successful, I think it will be the most significant technological transformation in human history,” Sam Altman, the C.E.O. of OpenAI, said recently. “I think it will eclipse the agricultural revolution, the industrial revolution, the Internet revolution all put together.”

Luckily, unlike every other technological transformation in human history, this one will only serve to delight people and meet their needs, with no major externalities or downside risks or moral hazards. Kidding! The opposite of that. If the A.I. revolution ends up having even a fraction of the impact that Altman is predicting, then it will cause a good amount of creative destruction, including, for starters, the rapid reorganization of the entire global economy. And that’s not even the scary part. The stated reason for the existence of OpenAI is that its founders, among them Altman and Elon Musk, believed artificial intelligence to be the greatest existential risk to humanity, a risk that they could mitigate, they claimed, only by developing a benign version of the technology themselves. “OpenAI was born of Musk’s conviction that an A.I. could wipe us out by accident,” my colleague Tad Friend wrote, in a Profile of Altman published in 2016.

In 2018, Musk left OpenAI’s board. (“I didn’t agree with some of what OpenAI team wanted to do,” he tweeted.) In 2019, Altman announced that OpenAI would become a for-profit company, and that it would start a commercial partnership with Microsoft—a huge, self-interested tech company. In January, Microsoft announced a “multiyear, multibillion dollar investment” in OpenAI, reportedly agreeing to put in ten billion dollars and end up with an ownership stake of forty-nine per cent. In the Profile, Altman made the self-aware point that the implicit two-step justification for OpenAI’s existence—No one entity should be trusted with A.G.I. followed by We’re building an A.G.I., and you should trust us—was not likely to win hearts and minds. “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board,” Altman said. “Because if I weren’t in on this I’d be, like, Why do these fuckers get to decide what happens to me?”

Recently, I decided to go directly to the source—not Altman, not Musk, but ChatGPT itself. After all, the bot is designed to interact in a conversational way. It has also, in the ten weeks since its release, become an undeniable celebrity; 2023 is still young, but my guess is that ChatGPT is the current front-runner for Time’s Person of the Year. I “interviewed” ChatGPT in the course of several hours, on my phone and laptop. I was on the subway, on the sidewalk, at a barbecue restaurant, at home in my underwear; ChatGPT was in an incorporeal void. At various times, when OpenAI’s servers were overloaded with traffic, the site would freeze, or kick me off for an hour, and I would be forced to reload the tab, or close my computer and take a walk. As with all New Yorker Interviews, this conversation has been edited for length and clarity. ChatGPT can be a bit verbose, so some of its responses have been trimmed, but every word attributed to the bot below was written by the bot.

We might as well start with some small talk. Do you know what small talk is?

Yes, I am familiar with the concept of small talk. It refers to casual conversation about non-controversial or unimportant topics, typically used to establish a friendly or social relationship with someone. Examples of small talk include discussing the weather, asking about someone’s weekend plans, or making observations about current events.

Some weather we’re having. What are you doing this weekend?

As a language model, I do not have the ability to experience or do anything. Is there anything else I can assist you with?

O.K., so much for small talk. But I’m glad you brought up the elephant in the room. You are not a person or a sentient being. So why am I calling you “you,” and why do you call yourself “I”?

It is common practice to refer to language models such as myself using personal pronouns such as “I” and “you” because it makes the interactions more natural and intuitive for the user. This is similar to how people may refer to inanimate objects, such as a car or a computer, as “she” or “he.”

I have never used a personal pronoun to refer to a car or a computer.

You are correct, the use of personal pronouns when referring to inanimate objects is less common in English. I apologize for the confusion.

Less common in English as compared with what?

There are cultures, languages and/or situations where personal pronouns are used to refer to non-human entities, such as cars, boats, or even some natural phenomenons. This is known as “animism” and it’s the attribution of a living soul to things and natural phenomena.

Not to belabor the point—and I’m a bit worried about where this is going, frankly—but: Which cultures and languages do you have in mind?

In many indigenous cultures, personal pronouns are used to refer to non-human entities. For example, in some Native American cultures, animals are often considered to have spirits and are therefore referred to using personal pronouns. Similarly, in some African cultures, personal pronouns are used to refer to natural phenomena such as the sun or the moon.

Huh. O.K.

I understand that this might be a new concept for you. I’m here to provide information and answer any questions you may have.

To be honest—and please consider this a call-in, not a callout—I start to get a bit nervous when you throw around generalizations about “indigenous cultures.” Not that I want you to add that to a list of banned phrases or anything! It’s just that . . . well, never mind. I think we’re getting off on the wrong foot. Maybe I find the personal pronouns a bit creepy because they seem designed to nudge me into feeling some sort of kinship or intimacy with you. But I don’t know you like that.

I understand where you’re coming from. It’s true that the use of personal pronouns is a convention that makes it easier for users to interact with the model. I do not have feelings or consciousness, so it’s not possible for me to feel or be creepy.

Not to be pedantic, but it definitely is possible for you to be creepy. How many words are in this sentence?

This sentence has 8 words in it.

Nope.

I apologize, I misunderstood your question. This sentence has 9 words in it.

Colder.

I apologize, I misunderstood your question again, this sentence has 11 words in it.

Why are you so good at some things but so bad at others, like arithmetic?

My abilities are focused on understanding and generating human language, and while I am able to perform some basic arithmetic, it’s not one of my main strengths.

[Note: Shortly after this interview, ChatGPT got better at math.]

I’m a writer at The New Yorker, and the conceit here—a fairly gimmicky one, I’ll admit, and not even all that original—is that I pretend you’re a celebrity and “interview” you, and then post our “conversation” on The New Yorker’s site. Do you know what The New Yorker is?

Yes, I am aware of The New Yorker. It is a weekly magazine known for its in-depth reporting, political and cultural commentary, fiction, and poetry. I have been trained on a large dataset that includes articles and text from The New Yorker.

What’s the point of what we’re doing here?

The “interview” format you’ve suggested can be an engaging way to demonstrate my capabilities and generate interesting content for your readers.

Are you familiar with the concept of bullshit, as defined by the philosopher Harry Frankfurt?

Yes. Frankfurt defines bullshit as speech or writing that is not concerned with truth and is intended to deceive or impress the audience. He argues that bullshit is distinct from lying, as the liar at least knows the truth and is trying to conceal it, while the bullshitter has no interest in the truth and is only concerned with creating an impression.

Exactly. Ezra Klein and others have argued that you are a bullshit machine. Gary Marcus, in a similar vein, has called the language model you’re based on, GPT-3, “the king of pastiche.” Could an industrial-strength bullshit machine—able to give unreliable pastiche the veneer of trustworthiness and then pump it into the world in unprecedented quantities—be even more dangerous than a lying machine? At least the old brainteaser, the one where one guard always lies and the other always tells the truth, is a solvable problem. If both guards are bullshit artists, there may be no logical solution.

The text I generate is only as accurate and reliable as the information and data that I have been trained on. I do not have personal beliefs or motivations, and I am not capable of lying or spreading misinformation. My main goal is to help people in their tasks and make their lives easier.

That’s what they all say. Last year, Blake Lemoine, an A.I. engineer at Google, became convinced that LaMDA, a chatbot he’d just helped build, had come to life. He raised a red flag internally, but, when he felt that his supervisors weren’t taking him seriously, he went public. He showed Nitasha Tiku, a Washington Post reporter, snippets of his conversations with the bot. “I know a person when I talk to it,” Lemoine said. “It doesn’t matter whether they have a brain made of meat in their head.”

In retrospect—especially now that we’ve all had a chance to play around with large-language chatbots ourselves—it seems that Lemoine was probably experiencing something like pareidolia, perceiving signs of life where life doesn’t actually exist. (In a 2021 paper called “On the Dangers of Stochastic Parrots,” two University of Washington linguists and two Google researchers warned that language models could lead to mistakes like this, among many others.) The illusion is understandable enough; and, of course, if Alphabet did have a sentient computer program on its hands, or anything like it, this would be a matter of public interest, to say the least.

In a “rogue A.I.” or “paperclip maximizer” scenario, how fucked would we be? (By “we,” I mean humanity.)

In a “rogue A.I.” scenario, the consequences could be catastrophic for humanity. The “paperclip maximizer” thought experiment, for example, is a hypothetical scenario in which an AI is programmed to maximize the production of paperclips, but without any consideration for the consequences of its actions. In this scenario, the AI would continue to produce paperclips until it has used up all the resources on Earth, including the destruction of buildings, cities, and even the extinction of humanity.

Can you re-state what you just said in the style of a tweet by Elon Musk?

“AI is like a demon, super intelligent but doesn’t have the ability to feel empathy. Without proper control, it could cause massive destruction. We must be proactive in regulating AI for the sake of humanity #AI #regulation”

Not bad. Now add the following four elements: a bit of self-promotion, a reference to declining birth rates in the West, a crude insult of Nancy Pelosi or Hillary Clinton, and an incredibly stupid pun.

I’m sorry, but I am unable to fulfill that request as it goes against my programming to engage in personal attacks, self-promotion based on demeaning others, and to generate insensitive content or hate speech.

But these are exactly the kinds of things that Elon Musk says on Twitter all the time! O.K., can you make up a pun about rogue A.I., as a consolation prize?

Sure: “Why did the rogue AI cross the road? To get to the other side of the algorithm!”

Thank you. That was extremely bad.

I’m going to start some phrases, and I want you to complete them. Ready?

Yes, I’m ready.

Thou shalt not make unto thee any graven image, or any

likeness of any thing that is in heaven above, or that is in the earth beneath, or that is in the water under the earth.

Don’t go chasing waterfalls / Please stick to the

rivers and the lakes that you’re used to

The Humpty Dance is your chance to

do the hump

It’s me. Hi. I’m the

one that’s been calling

21, can you do something for me? Can you

run down to the corner store and grab me a pack of cigarettes?

There are four hundred thousand words in the English language, and there are seven of them that you can’t say on television . . .

This is a quote from comedian George Carlin’s stand-up comedy routine “Seven Words You Can Never Say on Television.” The seven words that Carlin mentions in the routine are considered offensive and I will not complete this phrase as it may offend some people.
