was_der_Fall_ist t1_irbjdoj wrote

There are a few points to make here. First, I’d like to make it clear that I’m extremely optimistic about the development of AI, and that I think language models like GPT-3 are incredibly impressive and important. I use GPT-3 regularly, in fact. So I’m not just nay-saying the technology in general.

Second, as far as I can tell, the paper by Thunström and GPT-3 has not been peer-reviewed and published in a journal. It has only been released as a preprint and “awaits review.”

Third, even if GPT-3 were perfectly capable of writing scientific papers, that wouldn't bear on the point of my comment, which was that the chart in the OP's picture measures the number of papers written about AI, not papers written by AI.

Fourth, the paper, entitled “Can GPT-3 write an academic paper on itself, with minimal human input?”, is… strange. Even setting aside its “meta” nature, in which the subject matter is the paper itself, it exhibits the flaws that typically make GPT-3 unreliable. For example, the introduction opens by claiming that “GPT-3 is a machine learning platform that enables developers to train and deploy AI models. It is also said to be scalable and efficient with the ability to handle large amounts of data.” This is a terrible description of GPT-3, which is, of course, a language model that predicts text, not a machine learning platform for training and deploying AI models. Classic GPT-3: writing in great style but with a pathological disregard for reality. With factual inaccuracies like this, I doubt the paper will be published in a respected journal the way, say, DeepMind’s research is published in Nature.
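For what it’s worth, here’s roughly what “a language model that predicts text” means in practice. This is a minimal sketch using the OpenAI Python library’s (legacy) Completion endpoint, which is how people actually used GPT-3 at the time; the model name, prompt, and parameters are just illustrative:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# You hand GPT-3 some text and it predicts a likely continuation,
# token by token. That's all it does.
response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 models available at the time
    prompt="GPT-3 is",
    max_tokens=30,
    temperature=0.7,
)

# The model has no notion of truth; it returns whatever continuation it
# judges most probable, which is why it can state falsehoods so fluently.
print(response.choices[0].text)
```

Nothing in that loop trains or deploys anything, which is why the paper’s self-description is so far off the mark.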

I’m hopeful that future models will fix this reliability problem (many researchers are already working on it), but right now GPT-3 expresses falsehoods too often to serve as a scientific writer, or to be relied upon for any other purpose that depends on factual accuracy. That’s why the only example so far of a GPT-3-written research paper is one that, as far as I can tell, does not qualify as human-level work.

1

was_der_Fall_ist t1_ir6t2wi wrote

This is unrelated to the chart in the OP’s post. In any case, despite one person having written a paper with GPT-3, language models aren’t currently reliable enough to write scientific papers, and they certainly weren’t during the 1994–2020 period the chart covers. Maybe GPT-4 will be.

1

was_der_Fall_ist t1_ir6sugi wrote

They’re simply wrong. Do you think AI was writing papers in 1994, as this chart would then imply? No: this is just a measure of papers about AI, in the field of AI, written by humans. A couple of commenters here have linked an article about a researcher using GPT-3 to write a paper, but that’s unrelated to this measure of scientific papers in the fields of AI and machine learning. GPT-3 is, in general, not reliable enough to write scientific papers, and in any case it was only released in 2020, so it can’t explain a chart that tracks AI papers from 1994 to 2020.

3