Submitted by Neurogence t3_121zdkt in singularity
Fluglichkeiten t1_jdq2i5n wrote
Reply to comment by acutelychronicpanic in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Yeah, exactly this. It doesn’t necessarily need to be a general intelligence. The question then is: are any of the current AI models better than humans at the specific skills required to make AIs?
I don’t know the answer. I suspect not, but it feels like we’re not too far away. Current models seem to have achieved a kind of ‘creativity’, and they can be linked with other systems to shore up their deficiencies (such as maths). Maybe if one of the larger models were trained specifically to work on AI design… although what would that look like? Feed an LLM lots of academic papers paired with real-world implementations?
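To make the "linked with other systems" idea concrete, here's a minimal sketch of routing maths questions to a symbolic solver (SymPy) instead of letting the model guess. The `llm_generate` function is a hypothetical placeholder, not any real API:

```python
# Sketch: route arithmetic to a deterministic tool; everything else to the LLM.
import re
import sympy

def llm_generate(prompt: str) -> str:
    # Placeholder for a real LLM call (OpenAI, local model, etc.) -- assumption.
    return f"[LLM answer to: {prompt!r}]"

# Matches queries that are purely arithmetic expressions.
MATH_PATTERN = re.compile(r"^[\d\s+\-*/().^]+$")

def answer(query: str) -> str:
    expr = query.strip().rstrip("?=")
    if MATH_PATTERN.match(expr):
        # The solver handles the part LLMs are weak at, exactly.
        return str(sympy.sympify(expr.replace("^", "**")))
    return llm_generate(query)

print(answer("12345 * 6789"))          # exact arithmetic via SymPy
print(answer("Why is the sky blue?"))  # falls through to the LLM stub
```

Obviously a real system would let the model itself decide when to call the tool, but even this crude routing shows how the deficiency gets papered over.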
I’d be interested to see what the big labs have cooking behind the scenes.
acutelychronicpanic t1_jdqrppa wrote
Probably not? At least not any public models I've heard of. If you had a model architecture design AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.
LLMs show absolutely huge potential as a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine that it needs a specific kind of machine learning model to understand something, cook up an architecture, and choose appropriate data?
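Rough sketch of what that "conductor" pattern looks like in code: the model's only job is dispatching to specialist modules, in the spirit of the ChatGPT plug-ins. Here `choose_module` is a stand-in for the LLM's routing decision, and the tools are stubs; none of this is a real library API:

```python
# Sketch: an LLM-as-executive dispatching queries to specialist modules.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),  # toy only
    "search":     lambda q: f"[search results for {q!r}]",       # stub
    "vision":     lambda q: f"[image analysis of {q!r}]",        # stub
}

def choose_module(query: str) -> str:
    # Stand-in for the LLM's decision; a real version would prompt the model
    # with the query plus tool descriptions and parse its pick.
    if any(c.isdigit() for c in query):
        return "calculator"
    if query.lower().startswith(("find", "search")):
        return "search"
    return "vision"

def conduct(query: str) -> str:
    tool = choose_module(query)
    return f"{tool}: {TOOLS[tool](query)}"

print(conduct("2 + 2 * 10"))
print(conduct("find recent papers on mixture-of-experts"))
```

The step the comment is asking about is one further: the conductor noticing no existing tool fits and training a new one itself, which nothing public does yet.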
lehcarfugu t1_jds352j wrote
It seems like they are capped by the data they're trained on, so by their nature they can get about as smart as the collective human race, but not smarter. I think it's unlikely the singularity comes from the current approach.