MattAbrams t1_je07108 wrote

This isn't how science works. It's easy to say the machine works when you already have the papers you're looking for; that's validating in hindsight.

But this happens all the time in bitcoin trading, which is what I do. A model can predict lots of things with high probability, all of them far more likely than things that make no sense. But just because the predictions are plausible doesn't mean you have an easy way to choose which one is actually "correct."
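Here's a toy sketch of the point in Python (the candidate "theories" and probability numbers are made up for illustration): a model can assign every candidate a respectable plausibility score, but ranking by plausibility alone never identifies the one that's actually true; only an external test does, and the model can't run that test itself.

```python
import random

random.seed(42)

# Hypothetical candidates a predictor might emit, each with a
# model-assigned plausibility score. Numbers are invented.
candidates = {
    "price reverts to the 20-day mean": 0.61,
    "breakout continues after volume spike": 0.58,
    "range-bound until the next halving": 0.55,
}

# Every candidate is far more plausible than nonsense (baseline ~0.01),
# but plausibility alone can't tell us which one is correct.
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
for theory, score in ranked:
    print(f"{score:.2f}  {theory}")

def external_test(theory: str) -> bool:
    """Stand-in for real-world verification (backtest, experiment, ...)."""
    return random.random() < 0.5  # outcome unknown until you actually test

# The only way to resolve it is a test against the real world,
# which the predicting model itself cannot perform.
print("verified:", [t for t, _ in ranked if external_test(t)])
```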

If we ran this machine in year X, it would spit out a large number of papers in year Y. Some of them might be correct, but there still needs to be a way to actually test all of them, and that takes a huge amount of effort.

My guess is that there will never be an "automatic discoverer" that suddenly jumps 100x in an hour, because the testing process is long and the machines required to do the testing grow more complicated alongside the abilities of the computer; look at the size increases of particle accelerators, for example.

MattAbrams t1_je055b1 wrote

Why does nobody here consider that five years from now, there will be all sorts of software (because that's what this is) doing all sorts of things, each program better at certain tasks than the others?

That's just what basic computer science would predict: generality comes at the cost of efficiency. A true AGI that could do "everything" would be horribly inefficient at any specific thing. That's why I'm starting to believe people will eventually accept that the ideas about intelligent machines they've held for hundreds of years were wrong.

There are "superintelligent" programs all around us right now, and there will never be one that can do everything. There will be progress, but as we are seeing now, there are specific paradigms that are each best at doing specific things. The hope and fear around AI is partly based upon the erroneous belief that there is a specific technology that can do everything equally well.

MattAbrams t1_je04dx1 wrote

Artificial intelligence is software. There are different types of software, some more powerful than others. Some software generates images, some runs power plants, and some predicts words. If a piece of software outputs theorems, it's a "theorem prover," not something that can drive a car.

Similarly, I wouldn't need artificial intelligence to kill all humans. I could write software myself to do that, if I had access to an insecure nuclear weapons system.

This is why I see a lot of what's written in this field as hype, from the people predicting mass job losses to the people saying the world will turn into grey goo. We're writing SOFTWARE. It follows the same rules as any other software. The impact is whatever the software is programmed to do.

There isn't any AI that does everything, and there never will be. Humans can't do everything, either.

And by the way, GPT-4 cannot make new discoveries. It can spit out theories that sound correct, but then you click "regenerate" and it spits out a different one. I could write hundreds of theory papers a day without AI. There's no way to figure out which theories are correct other than testing them in the physical world, which GPT-4 simply can't do, because it does nothing other than predict words.
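A minimal sketch of why "regenerate" gives a different answer each time, assuming the usual stochastic sampling from a next-word distribution (the vocabulary and probabilities here are invented for illustration):

```python
import random

# Toy next-word distribution a language model might produce at one step.
# Words and weights are made up; real models have huge vocabularies.
vocab = ["gravity", "entropy", "symmetry", "resonance"]
probs = [0.40, 0.30, 0.20, 0.10]

def regenerate() -> str:
    """Sample a 'theory' the way a decoder samples words: stochastically."""
    word = random.choices(vocab, weights=probs, k=1)[0]
    return f"A new theory about {word}"

# Each click of "regenerate" is a fresh sample, so the output changes,
# and nothing in the sampling process checks any claim against reality.
for _ in range(3):
    print(regenerate())
```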
