
BUGFIX-66 t1_j6ei6ib wrote

These large language models can't write (or fix, or "understand") software unless they have seen human solutions to the same or a similar problem. They are essentially interpolators, interpolating between pieces of human work (https://en.m.wikipedia.org/wiki/Interpolation).

Don't believe me? I built a site to demonstrate this by testing OUTSIDE the training set. Try it:

https://BUGFIX-66.com

Copilot can solve 6 of these, and only the ones that appear in its training set. ChatGPT solves even fewer, maybe 3.

To test whether ChatGPT can code, you need to give it problems where it hasn't been trained on human solutions to similar or identical problems. Then you need to check the answer, because the language model is dishonest.

It's bogus.


WikiSummarizerBot t1_j6ei8g4 wrote

Interpolation

>In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable. A closely related problem is the approximation of a complicated function by a simple function.
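The quoted definition can be made concrete with a minimal sketch of 1-D linear interpolation: given a discrete set of known (x, y) samples, estimate y at an intermediate x by connecting the two neighboring samples with a straight line. The helper name `lerp_at` is illustrative, not from any library.

```python
def lerp_at(xs, ys, x):
    """Linearly interpolate y at x from sorted known points (xs, ys)."""
    for (x0, x1), (y0, y1) in zip(zip(xs, xs[1:]), zip(ys, ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)  # fractional position between neighbors
            return y0 + t * (y1 - y0)
    raise ValueError("x is outside the known range (extrapolation, not interpolation)")

# Known samples of some function, e.g. measurements of y = x**2 at x = 0, 1, 2
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]

print(lerp_at(xs, ys, 1.5))  # → 2.5 (true value is 2.25; interpolation only estimates)
```

Note the estimate is exact at the known points but only approximate in between, and it fails entirely outside the sampled range, which is the analogy the comment above is drawing.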

