sticky_symbols t1_j5gc33l wrote
Look at Burning Man for what people do when their material needs are met.
They come up with fun things to give away and activities to share. And they do giant projects just because they're cool and people will be impressed.
sticky_symbols t1_j5gbw5y wrote
I think challenging, exciting, and fun games and projects will play a big role in finding meaning in a post-scarcity world.
sticky_symbols t1_j5ga2db wrote
Very few humans make great inventions, do much art, or write books. A lot of them still consider their lives meaningful, with or without religion.
Why? I'm pretty sure it's because they:
Accomplish goals and complete cool projects
Create meaningful relationships
Positively impact other people.
These things are all still possible when AI is better at everything.
sticky_symbols t1_j5g9nrm wrote
Agreed that people are striving for happiness. They just often have really incomplete theories about how to get it.
sticky_symbols t1_j5ftrlk wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
You're right, it sounds like you're accomplishing what you want.
sticky_symbols t1_j5duh63 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Ok, thanks for copping to it.
If you want more engagement, brevity is the soul of wit.
sticky_symbols t1_j5bndmp wrote
Reply to just out of curiosity can we create a more vivid and larger world than the real world? by Most_Confusion8428
I'm pretty sure we could do a highly compressed simulation that interacts with the way our brains compress information, instead of working at the level of atoms. And it would look as high-resolution as the real world.
Your brain can create high resolution simulations. Once when I had a lucid dream, I looked at the detail in a plant's leaves and was amazed at the full detail I saw when looking closely. If you only do detail where people are attending, it doesn't take much processing power.
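The idea in that last sentence is essentially foveated rendering: spend detail where the viewer is looking and approximate the periphery. A minimal sketch of the principle in Python, with all names hypothetical:

```python
# Hypothetical sketch: allocate simulation detail by attention rather
# than uniformly. Objects near the gaze point get full resolution;
# distant ones get cheap approximations, so total work stays small.

def detail_level(obj_pos, gaze_pos, max_detail=10):
    """Return a detail level that falls off with distance from gaze."""
    dx = obj_pos[0] - gaze_pos[0]
    dy = obj_pos[1] - gaze_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    # Full detail at the gaze point, dropping off with distance.
    return max(1, round(max_detail / (1 + dist)))

# Detail is concentrated where the viewer is attending.
scene = [(0.0, 0.0), (0.5, 0.0), (3.0, 4.0)]  # object positions
levels = [detail_level(p, gaze_pos=(0.0, 0.0)) for p in scene]
```

This is only a toy for the scaling argument: the cost of the scene grows with where attention goes, not with the number of atoms in it.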
sticky_symbols t1_j5ar3v0 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
For the most part, I'm just not understanding your argument beyond you just not liking the alignment problem framing. I think you're being a bit too loquacious :) for clear communication.
sticky_symbols t1_j598yl8 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Oh, is that what you mean? I didn't follow that from the post. That is a big part of the alignment problem in real professional discourse.
sticky_symbols t1_j598v86 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
The AI isn't stupid in any way in those misalignment scenarios. Read "the AI understands and does not care".
I can't follow any positive claims you might have. You're saying lots of existing ideas are dumb, but I'm not following your arguments for ideas to replace them.
sticky_symbols t1_j56iyym wrote
Reply to comment by leechmeem in Instead of escaping to virtual realities, what if we just made our reality as good as any virtual reality could be? by [deleted]
That's all true of VR now. We were originally discussing VR in a post-singularity and presumably post-scarcity world. That type of VR could be a lot better, and not necessitate leaving for base reality at all. I'd eventually like to have my mind uploaded and backed up, rather than dependent on a physical body.
sticky_symbols t1_j556zz2 wrote
Reply to comment by leechmeem in Instead of escaping to virtual realities, what if we just made our reality as good as any virtual reality could be? by [deleted]
One point people make is about internal consistency. That's a big part of what we call reality. If you could work to build things and make friends, and random chance frequently came along and changed everything, you wouldn't feel like that work was worth doing. I think we care more about whether our current reality is reliable than whether it happens to be base reality.
sticky_symbols t1_j5229zk wrote
Reply to comment by OldWorldRevival in Instead of escaping to virtual realities, what if we just made our reality as good as any virtual reality could be? by [deleted]
The other point of view is that you'd be depriving trillions of people of delightful lives so that you can live in base reality.
Even though you couldn't tell the difference in some simulations. You can simulate at arbitrarily high definition, and simulate full ecosystems and everything else you want.
It sounds like you haven't thought this through thoroughly yet. I hope this discussion is useful.
sticky_symbols t1_j51fc8p wrote
Reply to comment by OldWorldRevival in Instead of escaping to virtual realities, what if we just made our reality as good as any virtual reality could be? by [deleted]
You didn't respond to my points in the slightest, so I'm not going to respond to yours.
I have probed these topics specifically and in depth, but you're not getting that insight if you choose to ignore me and repeat yourself.
sticky_symbols t1_j5190fl wrote
Reply to Instead of escaping to virtual realities, what if we just made our reality as good as any virtual reality could be? by [deleted]
VR can feel every bit as real as the real world once it's fully developed and connects to our brains directly.
And it will take far fewer resources than the real world.
Why settle for one planet when we could have a million virtual ones, each unique, in the same space? And a million times the population to enjoy it.
sticky_symbols t1_j4wgu9h wrote
Reply to AI doomers everywhere on youtube by Ashamed-Asparagus-93
It's a serious discussion that, as usual, has some dumb points on both sides.
As well as some good ones.
Being excited about the future shouldn't mean that we just dismiss reasons to be careful.
sticky_symbols t1_j3m9zwl wrote
Reply to comment by FederalScientist6876 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It is not. It doesn't learn from its interactions with humans. At all.
That data might be used by humans to make a new version that's improved. But that will be done by humans.
It is not self-aware in the way humans are.
These are known facts. Everyone who knows how the system works would agree with all of this. The one guy who argued LaMDA was self-aware just had a really broad definition.
sticky_symbols t1_j3l2yzd wrote
Reply to comment by FederalScientist6876 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It COULD do something similar, but it currently does not. You can read about it if you want to know how it works.
Similar systems might reflect and self improve soon. That will be exciting and terrifying.
sticky_symbols t1_j38f635 wrote
Reply to comment by Darkhorseman81 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
GPT-4 will be even better, but it also does not reflect or self-improve unless they've added those functions.
sticky_symbols t1_j37qowh wrote
Reply to comment by bubster15 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
If you're arguing it can do both, you simply don't understand how the system works. You can read about it if you like.
sticky_symbols t1_j37qh3p wrote
Reply to comment by Jesus_of_NASDAQ in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Ha!
sticky_symbols t1_j37q6jw wrote
Reply to comment by LarsPensjo in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
They are not. They are questions about the flow of information in a system. Humans recirculate information in the process we call thinking or considering. ChatGPT does not.
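The information-flow distinction being drawn here can be caricatured in a few lines of Python (all names hypothetical): one function maps input straight to output, the other feeds its own output back in as input.

```python
# Hypothetical sketch of the contrast: a single forward pass versus
# a process that recirculates its own output ("thinking it over").

def single_pass(prompt):
    """One input -> one output; nothing further happens afterwards."""
    return f"response to: {prompt}"

def reflective_loop(prompt, steps=3):
    """Feed each output back in as the next input, step after step."""
    thought = prompt
    history = []
    for _ in range(steps):
        thought = single_pass(thought)
        history.append(thought)
    return history
```

The claim in the comment is that ChatGPT behaves like `single_pass` per query, while human deliberation looks more like `reflective_loop`.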
sticky_symbols t1_j37pynh wrote
Reply to comment by LarsPensjo in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Ultimately, yes. But humans can take many steps of thinking and self-improvement after that external event. ChatGPT is impacted by the event but simply does not think or reflect on its own to make further improvements.
sticky_symbols t1_j37jo6w wrote
Reply to comment by visarga in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It does not do what people call reflection even with that chat history. And it's improved slightly by having more relevant input, but I wouldn't call that self improvement.
sticky_symbols t1_j5gcn8b wrote
Reply to comment by Extofogeese2 in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
Spiritual enlightenment is one route to lasting happiness. It's not mystical at all. And nihilism is not the only alternative to believing in religion or enlightenment. You can believe that life is meaningful to other people and yourself, so impacting lives is meaningful in that strong sense.