sumane12 t1_iriwdqi wrote
Reply to comment by TopicRepulsive7936 in When do you think we'll have AGI, if at all? by intergalacticskyline
Yes
sumane12 t1_irif2r9 wrote
Reply to comment by TopicRepulsive7936 in When do you think we'll have AGI, if at all? by intergalacticskyline
Google works
sumane12 t1_irhve4c wrote
Reply to comment by TopicRepulsive7936 in When do you think we'll have AGI, if at all? by intergalacticskyline
Intelligence of smartest human.
Intelligence of dumbest human.
Intelligence of average human.
Human intelligence and sentient.
Human intelligence and not sentient.
Generalise from one task to a second.
Generalise from one task to multiple tasks.
Generalise from one task to every task achievable to a human.
Generalise from one task to every task achievable by every human.
The 'G' in AGI stands for general, meaning any AI that is able to generalise skills from one task to another, e.g. being trained on Go and transferring those skills to chess. That is the simplest definition of AGI.
sumane12 t1_irgqb5a wrote
Reply to comment by TopicRepulsive7936 in When do you think we'll have AGI, if at all? by intergalacticskyline
Yeah, but that's not true, though.
sumane12 t1_irgazhf wrote
Depends on how you define AGI: very primitive AGI by 2030, human-level AGI by 2035.
sumane12 t1_irdr88j wrote
Reply to comment by Devoun in How concerned are you that global conflict will prevent the singularity from happening? by DreaminDemon177
Yeah.. that's why it's scary as fuck 😞
sumane12 t1_irdoxqe wrote
Reply to comment by beachmike in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
This is true. The world will get better, as it always has, but true utopia is not in our nature. We always reach beyond our capabilities, which means we will always want something we can't have; ergo, we will never have utopia, no matter how good life is. Also, since everyone's definition of utopia is different, everyone would have to agree that we have achieved it. Not to mention, I'm sure heroin addicts believe they have utopia when they're high, but I'm pretty sure most of us would consider being in that state permanently to be a waste of life.
My personal hope is for a Star Trek-like existence with no war and no crime, and for ageing to be solved. It's a big ask, but that's my definition of utopia, which I think is achievable; we would still have problems we'd need to solve.
sumane12 t1_ir4et1k wrote
Reply to How Can We Profit From A.I.? by nexus3210
ARK ETF
sumane12 t1_iqx79hi wrote
Reply to Is ai countdown accurate? by Phoenix5869
The fact that we are now questioning the definition of AGI should tell you all you need to know. We have advanced to the point that, a few years ago, people would have been convinced AGI had been achieved given AI's current capabilities.
I think AI and human intelligence (HI) are different. AI has not had to endure 4 billion years of natural selection in a predator/prey environment; its goals (in my humble opinion) will never be comparable to our goals, and it might not even be able to have goals that are not dictated to it by us (much like our goals are dictated by natural selection). While those differences remain, people will still not be convinced AGI has been achieved (even if all of its capabilities surpass HI).
Personally, I think my version of AGI will be achieved by 2028: a chatbot that can have an engaging human-level conversation, can carry out basic requests, and can fully function as a worker in 90% of jobs. But hey, that's just my opinion 🙂
sumane12 t1_iqokbk8 wrote
Reply to Serious question: Why does so many want to fix aging? Without radically changing the economy, this basically makes you into a slave that can never retire or die from age. by [deleted]
Serious answer: I'd rather be forced to work than forced to die.
sumane12 t1_iriy7pl wrote
Reply to comment by DungeonsAndDradis in When do you think we'll have AGI, if at all? by intergalacticskyline
So I see a few problems with this. Number one, some of the smartest people I know make terrible coffee. Number two, I'm sure some people with really low intelligence can make great coffee. I can also imagine a closed-system narrow AI being trained on enough data to complete this task with no general intelligence. Fun fact: I just asked GPT-2 to describe the steps in making a cup of coffee, and it was extremely close (apart from boiling the water), so much so that I'm guessing GPT-3 would have no issue with it. Add some image recognition and some motor function, and I'm pretty sure a few current AIs could accomplish this in 99% of situations.