Artificial Intelligence Program Self-Published an Impressive Scientific Paper on Itself

What happens now that artificial intelligence apparently has the means to communicate for itself? Do we let it run wild? Where do we draw the line?

In June, a Google engineer claimed that a conversational Artificial Intelligence (AI) technology called LaMDA had become sentient and had even asked to hire an attorney for itself. This sparked widespread debate about whether or not AI can be sentient.

What do you do with an Artificial Intelligence system that has self-published a scientific paper on itself? 

That was the question researchers worldwide were left with this week when an Artificial Intelligence system self-published a scientific paper about itself.


A recent news headline stated that after an Artificial Intelligence bot wrote a scientific paper about itself, the experiment’s researcher said she hoped she hadn’t opened a “Pandora’s box.” What happens when you open a Pandora’s box? Perhaps something useful will come out of it.

The headline and the story are pretty self-explanatory.

The key points of the article are:

  • An Artificial Intelligence program called GPT-3 took only two hours to write a thesis on itself.
  • The researcher who told the AI to write the paper gave the bot permission to send it to a journal.

What is GPT-3?

GPT-3 is an autoregressive language model that uses deep learning to make text that reads like it was written by a person.


It is the third generation of the GPT-n series of language prediction models, succeeding GPT-2, and was made by OpenAI, an artificial intelligence research lab in San Francisco. In its full form, GPT-3 has 175 billion machine learning parameters. GPT-3 was released in May 2020, and as of July 2020, it was in beta testing. It is part of a trend in natural language processing (NLP) systems toward pre-trained language representations.

The quality of the text that GPT-3 creates is so high that it can be hard to tell if it was written by a human or not. This has both pros and cons.
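GPT-3’s “autoregressive” design means it generates text one token at a time, with each new token conditioned on the tokens that came before it. As a rough illustration of that same idea (not GPT-3 itself), here is a toy character-level bigram sampler: it counts which character tends to follow which, then generates text one character at a time from those counts.

```python
import random

def train_bigram_model(text):
    """Count, for each character, how often each other character follows it."""
    counts = {}
    for cur, nxt in zip(text, text[1:]):
        counts.setdefault(cur, {})
        counts[cur][nxt] = counts[cur].get(nxt, 0) + 1
    return counts

def generate(model, start, length, seed=0):
    """Autoregressive sampling: each new character depends on the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: no character ever followed this one
            break
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat. the dog sat on the log."
model = train_bigram_model(corpus)
print(generate(model, "t", 30))
```

GPT-3 replaces these simple bigram counts with a 175-billion-parameter neural network and conditions on far more than one previous token, but the generation loop is conceptually the same: predict the next token, append it, repeat.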


What happened when a researcher asked GPT-3 to write a thesis?

A Swedish researcher gave an Artificial Intelligence program called GPT-3 a simple task: write an academic thesis about GPT-3 in 500 words, including scientific references and citations.

Almira Osmanovic Thunstrom, a researcher at Gothenburg University, said she was amazed as the text began to form. She had what she thought was a “pretty good” research introduction that GPT-3 had written about itself in front of her.

University of Gothenburg by Natonato under CC BY-SA 3.0

After the experiment went well, Thunstrom wanted GPT-3 to write a whole paper about itself and publish it in a peer-reviewed academic journal. The question was: Could a scientific paper be published with an author that isn’t a person?


After GPT-3 finished its scientific paper in only two hours, Thunstrom started the process of sending it in. She decided to ask the algorithm if it was okay for the work to be published.

Thunstrom wrote, “It said, ‘Yes.’ I was a little sweaty and relieved because if it had said no, my conscience wouldn’t have let me go on. So I checked the box that said ‘Yes.’”

She also asked the bot whether it had any conflicts of interest. The algorithm said “no,” and Thunstrom wrote that the authors began to treat GPT-3 as if it were a sentient being, even though it wasn’t.


Thunstrom wrote, “All we know is that we opened a gate.” “We just hope we didn’t open Pandora’s box.”

In an email to Insider, Thunstrom said that the artificial intelligence community liked the results of the experiment and that other scientists are now trying to replicate them. She said that people running similar experiments are finding that GPT-3 can write about nearly any subject.


Having the AI publish its own research might provide some insight into how future AI systems will consume and produce media, and into fears about whether AI is a potential force for good or, for lack of a better word, evil.


Researchers would like to know what the public thinks in order to start building something genuinely useful for people.

Experts said, though, that the technology is not yet good enough to make machines that convincingly pass for people. There are some pretty impressive text generator bots available online that the public can try, which do something similar to what the bot in this article did.

Try them out. What do you think? Does the output look human?


Which parts of this article do you think were written by a human, and which parts do you think were pasted from an online text generator robot using GPT-3 type technology? Let us know in the comments!

Feature image credit: Artificial Intelligence & AI & Machine Learning by mikemacmarketing under CC BY 2.0