sticky_symbols
sticky_symbols t1_j33icmp wrote
Reply to comment by LarsPensjo in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
No, it really doesn't. That's you asking it about its thinking and it responding. That's different from reflection, which would consist of it asking itself anything about its thinking.
sticky_symbols t1_j32rgwg wrote
Reply to comment by SeaBearsFoam in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
True.
It also occurred to me, though, that this might actually be what a high school teacher would say about ChatGPT. They might get it this wrong.
sticky_symbols t1_j32czsi wrote
ChatGPT is misrepresenting its own capabilities. It cannot reflect on its own thinking or self-improve.
sticky_symbols t1_j2qohoi wrote
Reply to comment by blueSGL in Companies can ‘hire’ a virtual person for about $14k a year in China by blueSGL
They're not accepting it; they just clicked a link once in the last year.
sticky_symbols t1_j2czd1v wrote
Reply to comment by GhostInTheNight03 in none of us are going to see the singularity by [deleted]
That is just not good reasoning. If there were a vague 50% chance of you being hit by a car, you'd get out of the road. All of the important predictions of the past have been less than 100% certain. We always take guesses about the future based on the available data.
sticky_symbols t1_j2cz1ib wrote
Reply to none of us are going to see the singularity by [deleted]
Anyone could be wrong. But I'm about as expert as anyone on AI and brain function, and by my estimate the first is catching up to the second very fast.
I'm not sure that's a good thing. At this rate the singularity is likely to wipe us out.
sticky_symbols t1_j22m6ap wrote
Reply to How many users in this sub are AI? by existentialzebra
None.
Nobody cares enough about this sub to create a bot that integrates ChatGPT with an account.
Sometime soon, though, we might see a lot of those everywhere, once the code becomes commonplace enough for amateurs to run just for fun.
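Purely to illustrate how little code that would take, here's a minimal sketch, assuming the `praw` and `openai` Python packages, the current `openai` client interface, and placeholder credentials; every account name, app credential, and subreddit below is hypothetical, not a real deployment:

```python
# Hypothetical sketch: a Reddit account that answers new comments with ChatGPT.
# Placeholder credentials; supply your own app keys and set OPENAI_API_KEY.
import praw
from openai import OpenAI

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_BOT_ACCOUNT",
    password="YOUR_PASSWORD",
    user_agent="demo chatbot by u/YOUR_BOT_ACCOUNT",
)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Watch a subreddit's comment stream and reply to each new comment.
for comment in reddit.subreddit("singularity").stream.comments(skip_existing=True):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": comment.body}],
    )
    comment.reply(response.choices[0].message.content)
```

That's the whole thing, minus rate limiting and moderation, which is why I expect hobbyists to run these for fun once the idea spreads.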
sticky_symbols t1_j1nj7jh wrote
Reply to comment by drewhead118 in Which Sci-fi book/Movie/Short Story has technology closest to something in our future. (What would be your guess) by Ortus12
I found it on Amazon and got the free sample. If I like it, I'll pay for the ebook. Thanks, I'm looking forward to it!
sticky_symbols t1_j1lvdql wrote
Reply to Which Sci-fi book/Movie/Short Story has technology closest to something in our future. (What would be your guess) by Ortus12
Rainbows End (sic) presents a vision of how AR technology might transform our world for the better, or at least the more amusing.
sticky_symbols t1_j1lux7n wrote
Reply to comment by drewhead118 in Which Sci-fi book/Movie/Short Story has technology closest to something in our future. (What would be your guess) by Ortus12
What is it? I'll check it out.
sticky_symbols t1_j1i5gpt wrote
Reply to comment by PoliteThaiBeep in Hype bubble by fortunum
I agree that we don't know when. The point people often miss is that we have high uncertainty in both directions: it could happen sooner than the average guess, as well as later. We now have around the same processing power as a human brain (depending on which aspects of brain function you measure), so it's all about algorithms.
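For what it's worth, here's a back-of-envelope version of that comparison, assuming the commonly cited (and heavily contested) figures of roughly 10^14 synapses, an average firing rate on the order of 10 Hz, and a modern accelerator's dense low-precision throughput of about 3x10^14 FLOP/s; treat every number as a rough assumption, good to an order of magnitude at best:

```python
# Back-of-envelope comparison, not a measurement. All figures are rough,
# commonly cited estimates and could easily be off by an order of magnitude or two.
synapses = 1e14            # ~10^14 synapses in a human brain
avg_firing_rate_hz = 10    # average firing rates are usually put somewhere in ~1-100 Hz
ops_per_synapse_event = 1  # crude assumption: one op per synaptic event
brain_ops_per_sec = synapses * avg_firing_rate_hz * ops_per_synapse_event  # ~1e15

accelerator_flops = 3e14   # dense low-precision throughput of one high-end GPU
gpus_to_match_brain = brain_ops_per_sec / accelerator_flops

print(f"Brain estimate: ~{brain_ops_per_sec:.0e} ops/s")
print(f"Roughly {gpus_to_match_brain:.0f} such accelerators to match it")
```

Under those assumptions a handful of current accelerators lands in the same ballpark as one brain, which is all I mean by "around the same processing power"; shift the assumptions and the answer moves, but not so far that algorithms stop being the bottleneck.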
sticky_symbols t1_j1giyvw wrote
Reply to comment by fortunum in Hype bubble by fortunum
I agree with all of this. But the definition of outrageous is subjective. Is it more outrageous to claim that we're on a smooth path to AGI, or to claim that we'll suddenly hit a brick wall, when progress has visibly accelerated in recent years? You have to get into the details to decide. I'd say Marcus and co. are about half right, but the reasons are too complex to state here.
sticky_symbols t1_j1gi6fn wrote
Reply to comment by fortunum in Hype bubble by fortunum
Here we go. This comment has enough substance to discuss. Most of the talk in this sub isn't deep or well informed enough to really count as discussion.
Perceptual and motor networks are making progress almost as rapidly as language models. If you think those are important, and I agree that they can only help, they are probably being integrated right now, and certainly will be soon.
I've spent a career studying how the human brain works. I'm convinced it's not infinitely more complex than current networks, and the computational motifs needed to get from where we are to brain-like function are already understood by handfuls of people; they merely need to be integrated and iterated upon.
My median prediction is ten years to full superhuman AGI, give or take. By that I mean something that makes better plans in any domain than a single human can do. That will slowly or quickly accelerate progress as it's applied to building better AGI, and we have the intelligence explosion version of the singularity.
At which point we all die, if we haven't somehow solved the alignment problem by then. If we have, we all go on permanent vacation and dream up awesome things to do with our time.
sticky_symbols t1_j179w2x wrote
The AI in Her is fully sentient. It is smarter than humans in every way.
I don't think people can fully fall in love with something way dumber than they are.
So AI will be charming to the very young and very old. Until they take over.
I could be wrong, though: desperate people may fall in love with non-sentient systems that are just somewhat better than ChatGPT with a long-term memory.
sticky_symbols t1_j16hdjl wrote
Impossible.
It will take some more analytical capabilities before AI can do a CEO's job.
sticky_symbols t1_izy81b1 wrote
Reply to comment by OldWorldRevival in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
Interesting thought. I think you might well be right.
I guess having a politician in charge of the world is better than having humanity ended. :) But they are people who want power, and I find that disturbing. This might be a case where I'd actually prefer a successful actor, and they're pretty good with popularity contests, too. I think that for that purpose, being positive and open-minded would be more important than any knowledge. And you wouldn't need to have someone that smart, just not stupid.
The Rock for god-emperor! Or maybe RDJ can reprise his role as the creator of Ultron...
:)
sticky_symbols t1_izy6q0b wrote
Reply to comment by OldWorldRevival in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
That's right. There are no other plausible proposals for making AGI truly and reliably aligned with humanity's values. But this seems simple enough that it could probably be implemented.
sticky_symbols t1_izy5p7j wrote
I think this is exactly right. And I've read a ton of professional writing on AGI safety.
I think some of the people involved are already thinking this way.
It's actually really nice that both of the clear leaders in AI seem to be really ethical people, the sort you'd like to have ruling the world, if someone has to. However, governments and militaries will probably catch on and take over those labs before they get all the way to AGI.
sticky_symbols t1_izy1gup wrote
Sounds like you are discounting the possibility of truly sentient AGI. Most experts are not. The world will change again when it arrives, maybe for the better, maybe for the worse.
You're also not accounting for possible political change. When the rich own the AI that creates everything, I hope we'll legislate for distributing that wealth.
Also, art will become meaningless to some creators, but not to consumers. For us, it will be better when AI makes it easier to produce. Imagine having more, better AAA games to choose from and play cooperatively and competitively with other people. I think the meaning of life for many will be based on our accomplishments and communities in virtual worlds.
sticky_symbols t1_iy2nswt wrote
Reply to Super Intelligent A.I. is Neither Necessary nor Desirable (11 min read) by BackgroundResult
Those doing AGI safety tend to agree. They also agree that it will likely happen anyway, based on economic and political forces.
sticky_symbols t1_iy2norl wrote
Reply to comment by HeinrichTheWolf_17 in Super Intelligent A.I. is Neither Necessary nor Desirable (11 min read) by BackgroundResult
I mean, sure, if you're okay with a world in which everything is turned into that AI's favorite thing. That sounds like the end of every good possibility to me.
sticky_symbols t1_iwxpp5k wrote
The Asimov stories were all about how those rules fail.
sticky_symbols t1_ivnhstz wrote
Reply to comment by mhornberger in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
You're responding to a bunch of stuff I'm not saying or thinking. Sometimes online conversations work, sometimes they don't. Maybe others will be edified.
sticky_symbols t1_ivnhl14 wrote
Reply to comment by mootcat in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
This is amazing. Thank you so much. I am interested in what you know about the economic angle, but I'll read and watch some of your sources before I ask for anything more.
sticky_symbols t1_j33ih3f wrote
Reply to comment by LarsPensjo in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
That's improvement, but it's definitely not self-improvement, since a human had to ask.