sumane12 t1_iriy7pl wrote

So I see a few problems with this. Number one, some of the smartest people I know make terrible coffee. Number two, I'm sure some people with really low intelligence can make great coffee. I can also imagine a closed-system narrow AI being trained on enough data to complete this task with no general intelligence. Fun fact: I just asked GPT-2 to describe the steps in making a cup of coffee and it was extremely close (apart from boiling the water), so much so that I'm guessing GPT-3 would have no issue with it. Add some image recognition and some motor function, and I'm pretty sure a few current AIs could accomplish this in 99% of situations.

2

sumane12 t1_irhve4c wrote

Intelligence of smartest human.

Intelligence of dumbest human.

Intelligence of average human.

Human intelligence and sentient.

Human intelligence and not sentient.

Generalise from one task to a second.

Generalise from one task to multiple tasks.

Generalise from one task to every task achievable by a human.

Generalise from one task to every task achievable by every human.

The 'G' in AGI stands for 'general', meaning any AI that is able to generalise skills from one task to another, e.g. being trained on Go and transferring those skills to chess. That is the simplest definition of AGI.

10

sumane12 t1_irdoxqe wrote

This is true. The world will get better, as it always has, but true utopia is not in our nature. We always reach beyond our capabilities, which means we will always want something we can't have; ergo, we will never have utopia no matter how good life is. Also, since everyone's definition of utopia is different, everyone would have to agree that we have achieved it. Not to mention, I'm sure heroin addicts believe they have utopia when they are high, but I'm pretty sure most of us would consider being in that state permanently to be a waste of life.

My personal hope is for a Star Trek-like existence with no war and no crime, and for ageing to be solved. It's a big ask, but that's my definition of utopia, which I think is achievable; we would still have problems we would need to solve.

2

sumane12 t1_iqx79hi wrote

The fact that we are now questioning the definition of AGI should tell you all you need to know. We have advanced to the point that, given current capabilities, people a few years ago would have been convinced AGI had already been achieved.

I think AI and human intelligence (HI) are different. AI has not had to endure 4 billion years of natural selection in a predator/prey environment, so its goals (in my humble opinion) will never be comparable to ours; it might not even be able to have goals that are not dictated to it by us (much like our goals are dictated by natural selection). While those differences remain, people will still not be convinced AGI has been achieved (even if all of its capabilities surpass HI).

Personally, I think my version of AGI will be achieved by 2028: a chatbot that can hold an engaging human-level conversation, carry out basic requests, and fully function as a worker in 90% of jobs. But hey, that's just my opinion 🙂

4