ChatGPT is an artificial intelligence program that will produce almost any kind of writing you ask for: letters, lyrics, research papers, recipes, therapy sessions, poems, essays, schematics and even software code. And despite its clunky name (GPT stands for Generative Pre-trained Transformer), more than a million people used it within five days of its launch.
How easy is it to use?
Try typing, “Write a limerick about the effect of AI on humanity.”
Or how about, “Tell the story of Goldilocks in the style of the King James Bible.”
Microsoft has announced that the program will be built into Microsoft Word. The first books written by ChatGPT have already been published. (Well, self-published, by people.)
“I think this is huge,” says Professor Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab. “I wouldn’t be surprised if people look back 50 years from now and say, wow, that was a really groundbreaking series of inventions that happened in the early 2020s.
“The bulk of the US economy is knowledge and information work, and that will be most impacted by this,” he said. “I’d put people like lawyers right at the top of the list. Obviously a lot of copywriters, screenwriters. But I like to use the word ‘affected,’ not ‘replaced,’ because I think, if it’s done right, it’s not going to be AI replacing lawyers; it’s going to be lawyers who work with AI replacing lawyers who don’t work with AI.”
But not everyone is happy.
Timnit Gebru, a researcher who specializes in the ethics of artificial intelligence, said: “I think we should be really terrified of this whole thing.”
ChatGPT learned to write by viewing millions of texts on the internet. And, believe it or not, not everything on the internet is true! “It is not taught to understand what is fact, what is fiction or anything like that,” Gebru said. “It will just sort of parrot back what was on the internet.”
Yes, it sometimes spits out writing that sounds authoritative and confident, but is completely fake.
And then there’s the problem of deliberate disinformation. Experts worry that people will use ChatGPT to flood social media with bogus articles that sound professional, or bury Congress with “grassroots” letters that sound authentic.
Gebru said: “We need to understand the damage before we spread anything everywhere, and mitigate those risks before we release something like that.”
But perhaps no one is more upset than teachers. And here’s why:
“Write an essay for English class on race in ‘To Kill a Mockingbird.'”
Some students are already using ChatGPT to cheat. No wonder ChatGPT has been called “The End of High School English,” “The End of the College Essay,” and “The Return of the Handwritten Essay in the Classroom.”
Someone using ChatGPT does not need to know structure, syntax, vocabulary, grammar or even spelling. But Jane Rosenzweig, director of the Writing Center at Harvard, said: “The piece that also worries me, though, is the piece about thinking. When we teach writing, we teach people to explore an idea, understand what other people have said about that idea, and figure out what they think about it. A machine can do the part where it puts ideas on paper, but it can’t do the part where it puts your ideas on paper.”
The Seattle and New York City school systems have banned ChatGPT; so have some colleges. Rosenzweig said: “The idea that we would ban it is going against something bigger than all of us, which is to say that it will soon be everywhere. It will be in word processing programs. It will be on every machine.”
Some teachers are trying to figure out how to work with ChatGPT, by having it generate a first draft. But Rosenzweig counters that our students would then no longer be writers; they would become editors.
“My first reaction to that was, are we doing this because ChatGPT exists? Or are we doing this because it’s better than other things we’ve already done?” she said.
OpenAI, the company that launched the program, declined “Sunday Morning’s” requests for an interview, but offered a statement:
“We don’t want ChatGPT to be used for deceptive purposes – in schools or anywhere else. Our policy states that when sharing content, all users must clearly indicate that it is AI-generated ‘in a way no one could reasonably miss or misunderstand,’ and we are already developing a tool to help anyone identify text generated by ChatGPT.”
The company is talking about an algorithmic “watermark,” an invisible flag embedded in ChatGPT’s writing that can identify its source.
There are ChatGPT detectors, but they probably won’t stand a chance against the upcoming new version, ChatGPT-4, which is reportedly trained on 500 times as much data. People who have seen it say it is miraculous.
Stanford’s Erik Brynjolfsson said, “A very senior person at OpenAI, he basically described it as a phase change. You know, it’s like going from water to steam. It’s just a whole different level of skill.”
Like it or not, AI writing is here to stay.
Brynjolfsson suggests we embrace it: “I think we’re potentially going to have the best decade of blossoming creativity we’ve ever had, because a whole bunch of people, a lot more people than before, will be able to contribute to our collective art and science.”
But maybe we should give ChatGPT the last word.
Story produced by Sara Kugel. Editor: Lauren Barnello.