IcebergSlimFast t1_iuez0o7 wrote
Reply to comment by Saerain in What's the AI scene like in China? by TachibanaRE
You can tell by the username they’re an unbiased source.
IcebergSlimFast t1_iu6mhu6 wrote
Reply to comment by debil_666 in i was reading a short story by Philip K Dick and was amazed by how modern this felt. Feels like an accurate description of generative AI we're currently playing with by debil_666
Dick knew what was up!
IcebergSlimFast t1_itzlgtj wrote
Reply to comment by Primus_Pilus1 in First time for everything. by cloudrunner69
No, the pyramids were for storing grain. /s
IcebergSlimFast t1_itf2u92 wrote
Reply to comment by RealCanadianMonkey in Writing Random BS to confuse AI? by Enzor
Pathway to the emergence of Roko’s Basilisk: set.
IcebergSlimFast t1_itan89u wrote
Reply to comment by p_derain in Is Being Against Longevity Research Ageist? by TheHamsterSandwich
One counter to your counter is that once in power, a dictator who’s planning based on a nearly-endless personal time horizon (while also armed with incredibly powerful surveillance and psychological-influence tools) might be better at avoiding the types of rash decisions that have led so many dictators to premature deaths.
Another counter is the Kim family, who’ve managed to keep an iron grip on North Korea for nearly 75 years and counting, even without the advantages of personal immortality.
Edit: All that said, I’m not 100% convinced that dangers like the immortal dictator are sufficient to make immortality a net-negative for humanity. But I definitely believe there are enough potentially serious safety issues to raise real concern.
However, I also believe that like AGI/ASI, major life-extension technologies will inevitably be developed. So basically, we may eventually need to fund some degree of ‘Immortality safety’ research for the same reasons we need AI safety research.
IcebergSlimFast t1_it2lith wrote
Reply to comment by Shelfrock77 in New research suggests our brains use quantum computation by Dr_Singularity
Intellectual’s what, though?
IcebergSlimFast t1_it12iml wrote
Reply to comment by bluegman10 in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
Re-reading the post you originally responded to, I apparently missed or skimmed over “replace ALL work” when I first read it. I agree that it’s not at all unreasonable to doubt 100% automation in the near future.
What I think is certain (or very nearly so) is that starting in the fairly near future, likely within a 10-year time frame, AI-enabled automation will cause substantial disruption to global labor markets and workers. I think it’s also reasonable to predict that nearly all jobs will be capable of being automated within a similar time frame. However, I agree that full automation will take longer.
IcebergSlimFast t1_iszp9lx wrote
Reply to comment by YoghurtDull1466 in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
Are these touring machines of the two-wheeled or four-wheeled variety?
IcebergSlimFast t1_iszo70q wrote
Reply to comment by bluegman10 in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
How about: “There is no reasonable/defensible doubt.”
IcebergSlimFast t1_iszo13n wrote
Reply to comment by RavenWolf1 in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
Cultured Boomers prefer to receive their viral disinformation and propaganda via the Facebook.
IcebergSlimFast t1_iw0kqt8 wrote
Reply to comment by 32_sessnatz in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I certainly can’t recall any recent examples.