Submitted by gaudiocomplex t3_zxnskd in Futurology
Introsium t1_j24d816 wrote
Reply to comment by MagneSTic in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
You could program a non-AI to perform any given task, but the entire point of my statement is that it casually passes the exam. It was not programmed to do that, but that doesn’t stop it from passing what’s commonly regarded as a very hard test. It simultaneously crushes programming challenges. But, most importantly, it can do most people’s jobs. It can’t do all of them perfectly, but it can do them so much cheaper than humans that the loss in quality is worth it.
You’re looking at a Fabricator and saying “but that other machine can build a car, this isn’t really impressive”, which is entirely missing the point.
DoktoroKiu t1_j24lxlt wrote
It may have passed the test, but I would not use that as an indication that it could represent you in court. Unless it is fundamentally different from the other large language models, it will confidently lie, because it is only really "motivated" to produce probable responses to a given prompt.
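To make "probable responses" concrete, here is a minimal sketch (assuming the Hugging Face transformers package, the small GPT-2 model, and made-up example sentences) of what a language model actually scores: how likely a word sequence is, with no check on whether the claim is real.

```python
# Minimal sketch: a language model assigns probability to text, not truth.
# Assumes `torch` and `transformers` are installed; sentences are invented for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss,
        # i.e. the negative average log-probability of the tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Both sentences are fluent English; the model has no notion of which
# citation actually exists, only of which word sequences are likely.
real = "The study was published in Nature in 2019."
fake = "The study was published in the Journal of Advanced Results in 2019."
print(avg_log_prob(real), avg_log_prob(fake))
```

A fluent fabricated citation can score about as well as a real one, which is exactly where the confident lying comes from.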
The AI that was trained only on research papers was shut down very quickly when it started generating very detailed lies, citing studies that seemed plausible yet didn't exist.
Now this is by no means an unsolvable problem, but solving it is not something we can just assume. AI alignment is not an easy problem.