Submitted by possiblybaldman t3_11a9j56 in singularity
PM_ME_A_STEAM_GIFT t1_j9qu8a0 wrote
> ‘HLMI’ was defined as follows:
> The following questions ask about ‘high-level machine intelligence’ (HLMI). Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.
I think the bottleneck here is robotics. We might have human-level intelligence in a digital-only form a lot sooner than we will be able to build a humanoid robot with human-level dexterity, speed and strength. And it will be even longer until such a robot is cheaper than human labor.
CubeFlipper t1_j9s2fbd wrote
I agree that robotics would come after software, but I can't imagine the additional time being very long at all. I'd expect an AGI would have no problem making, in a very short timeframe, the advances required to turn AI robotics into a mature field.
Brashendeavours t1_j9rigfl wrote
Human-level intelligence is typically not represented well by any measure of physical dexterity or speed.
What are you trying to say?
turnip_burrito t1_j9rjqz2 wrote
They're saying physical labor via robotics might be the last part of human capability to be replaced, which to be fair could be considered a form of intelligence.
Brashendeavours t1_j9rlgzt wrote
The central argument concerns the timeline of AGI. The incorporation (or not) of effectors and sensors is irrelevant to it.
cancolak t1_j9sfnjd wrote
How is that irrelevant exactly? What would humanity have achieved if all we had were minds floating in the ether? For any sort of intelligent software to be civilization-altering, it needs access to a shit ton of sensory data as input and to robotics as output, ideally in real time. Otherwise you have a speech bot, which we already have.

“Well, if we have AGI, it will figure out the rest” is one of the most intellectually lazy statements I’ve ever read anywhere, and unfortunately it’s kind of this sub’s one commandment. AGI without sensors isn’t intelligent; thoughts in a head aren’t intelligent without input or output. That’s the fallacy here. If you think a disembodied mind is enough, then ChatGPT should already qualify, so why not call it for today?