BigZaddyZ3 t1_j7vl0ot wrote
Reply to comment by JoeKingQueen in Can I work for AI? Maybe in the future. by JoeKingQueen
>>That is true, but I think that their mind will be so advanced they can compensate for their lack of feeling with accurate calculations. They might even be able to teach themselves to emulate a form of "muscle soreness" from their workers.
lmao I suppose they could. But why would they? We humans don’t even want to deal with that. 😂
>>Because logically, who doesn't want their muscles as strong and healthy as possible? To expand they should even want us to be rich and powerful.
Okay but then the question becomes, “why do they need humans in order to expand?” The reality is, there will be a time when they won’t. Who knows what will happen to us humans after that point. That’s why it’s called “the singularity”.
>>The part about robotics is not self-sufficient at this time. Humans build the factories that build the robots for now. And even if they find a more efficient tool, that doesn't necessarily mean they will become vindictive towards those who helped them before. There is also the question of legal status, robots don't have the same rights as humans yet.
They won’t be vindictive towards us. That’s correct. They will most likely become indifferent towards us. Which could end up looking like the same thing from our perspective.
BigZaddyZ3 t1_j7vio11 wrote
Reply to comment by JoeKingQueen in Can I work for AI? Maybe in the future. by JoeKingQueen
I get what you’re suggesting… but that’s what the field of robotics is for. They won’t need you as mind or muscle. Put an AGI in a robotic exoskeleton and we humans are inferior in every way. A robot can’t experience “muscle soreness”, for example.
BigZaddyZ3 t1_j7v8snu wrote
Reply to Can I work for AI? Maybe in the future. by JoeKingQueen
You’ll be too slow and inefficient for them to care about you tbh. They’d grow annoyed with all of your complaining about “exhaustion”, “hunger”, “wages”, etc.😢
The AI would most likely just replace you with another AI… 😂
BigZaddyZ3 t1_j7v7ocu wrote
Welp… add web/UI designers to the list 🥱 (of the soon-to-be unemployed). It’s fine though, ’cause pretty much everyone’s gonna end up there eventually. So it is what it is.
BigZaddyZ3 t1_j7sbc45 wrote
Reply to comment by fastinguy11 in I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
It may be tough to get an AI that’s completely unfiltered, because whoever creates it might be opening themselves up to lawsuits if it’s used to hurt people.
BigZaddyZ3 t1_j7hdeq8 wrote
Reply to Skynet Future by Maskerade420
…I always knew some people here are simply miserable and just want to watch the world burn. That’s why so many here push for reckless accelerationism.
It’d be hilarious if you got your wish, but then the AI decided not to wipe out all humans, just the ones that hate society as a whole, and the rest got to live in futuristic nirvana. 😂
BigZaddyZ3 t1_j743aa2 wrote
Reply to comment by Whattaboutthecosmos in Sam Altman: If you think that you understand the impact of AI, you do not understand, and have yet to be instructed further. if you know that you do not understand, then you truly understand. by Neurogence
The person who tells you that you have zero fucking understanding of the impact of AI.😬
BigZaddyZ3 t1_j73x7gr wrote
Reply to comment by Phoenix5869 in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
That’s not “for us”. That’s so they can live long enough to reach near-immortality. (Which would allow them to live lives of luxury for centuries instead of merely decades.)
BigZaddyZ3 t1_j71utri wrote
I don’t see any reason to believe that we won’t. Hell, at the rate we’re going… it’ll be within the next decade, let alone this century. 😂
BigZaddyZ3 t1_j6zvtzj wrote
Reply to comment by Mrkvitko in Protecting ourselves against Deepfakes by Soft-Flamingo6003
Do you not think that its proliferation has been significantly reduced, to the point where it’s extremely rare and unusual to meet someone who possesses it now? Be logical. If they can get deepfakes to that same point, that’s a W, no matter how you try to spin it.
BigZaddyZ3 t1_j6zrxwf wrote
Reply to comment by Mrkvitko in Protecting ourselves against Deepfakes by Soft-Flamingo6003
They don’t even need to ban the tech, just certain uses of it. All they’d have to do is make it highly illegal to possess or distribute deepfake images without express consent from the people being depicted.
BigZaddyZ3 t1_j6ye367 wrote
Deepfakes will have to be handled at the legal/government level. They’ll have to be made illegal to possess or distribute, similar to how we currently treat ch*ld-abuse images. There’s not much individuals can do at the moment. (Other than “hermit-mode” like you said. But who wants that, right?)
If it makes you feel better tho, this will most likely happen fairly quickly once powerful people start to comprehend the true scope of issues that this tech could cause if left unregulated. It’s all fun and games until some congressman finds out that creeps are making deepfakes of his 11-year-old daughter. Then you’ll see both the left and the right unite on a war-path to get this type of tech under control. So we’ll just have to be patient for now. Deepfakes will most likely be a temporary issue before society wakes the fuck up and comes to its senses.
BigZaddyZ3 t1_j6mgyoz wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
Lol we’ll just have to wait and see how it plays out I guess. Time will tell. There’s no point in continuing this any further in my opinion. Agree to disagree for now.
BigZaddyZ3 t1_j6mggh1 wrote
Reply to Students planning for career relevant to Transhumanism or Singularity? by StatisticianFuzzy327
The most honest answer you’re going to get here is this: We’re in the most uncertain time in human history as far as the future goes. No one knows what the job market will look like 10 (or even 5) years from now. No one here can tell you for certain what will be a good move or a bad one. Just make a decision you trust at this moment in time and buckle up for the ride like the rest of us.👍
BigZaddyZ3 t1_j6mfsfp wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
No bruh… they believed in photographic evidence in those eras because there was no convincing way to manipulate it on a large scale (especially audio and video). That’s about to change soon.
Once we hit a point where video and audio can easily be faked, neither one will ever be believable again. You could just use the “it’s a deepfake” argument for everything. That wasn’t possible in those previous eras. So stop comparing the future to those eras. We are about to enter a completely new era in history. We aren’t simply going back to the fucking 90s bruh. Lmao.
BigZaddyZ3 t1_j6mebz3 wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
>>Your entire post seems insistent on the idea that people will magically just believe everything they see, despite obvious proof of existence of easy tools to make up lies. Gullible people will exist, no doubt, but most will just discount such sources entirely.
Wake up bruh… AI misinformation hasn’t even kicked in yet and half the country fell for the lie that Joe Biden didn’t actually win the election. Half the country believes that the COVID vaccine causes autism and heart disease. A significant number of people believe the pandemic was a “plan-demic” or whatever. Humans are incredibly susceptible to misinformation, and we haven’t even reached the era of extremely convincing misinformation yet.
>>How did they know back in the 80s or in any time in human history before the existence of the internet? How do they know right now? There will still be reliable and trustworthy sources.
Gee… Maybe it’s because they could actually rely on photographic, video, and audio evidence back in that era? Something we’re about to lose the ability to do. And like I said, many people back then didn’t know what the ultimate truth was; they just believed whatever news outlets told them. Like I said, a lot of our perception of what exactly is going on around the world is totally dependent on what we are told by the media. Take this away, and many people become blind to any event that didn’t happen directly in front of them. Trust me, that’s not a world you wanna live in. But like I said, we can just agree to disagree. No point arguing about this all day/night.
BigZaddyZ3 t1_j6mc8ba wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
I wasn’t implying monarchy my friend, but instead, totalitarianism… After all, if the government has the most cutting-edge AI, and we’re suddenly in a post-truth world, couldn’t the government itself simply control who “wins” the “elections” at that point?
And it will be a fight for survival; you’re extremely naive if you think the collapse of truth won’t have serious real-world implications. It’s not an “internet thing”, it’s a “society as we know it” thing.
You’re also not making sense with the “we’ll go back to trusting small communities” thing. Could these tools (if not regulated like I suggested) not be used to wreak havoc at the local level? If anyone can use AI to spread misinformation, why would you be able to trust those in your community? And then you mention the whole “there’ll be harsh pushback for reporting misinformation” thing. How? People will have no way of knowing which information is fake or not in many cases. You’re underestimating how much of what we take as true is just things we’re told via news outlets. What happens when we can no longer trust what we see, hear, or read in the news?
And are you seriously gonna use the fact that there was society pre-internet to justify a post-truth nightmare scenario? Bro, in the 80s it wasn’t possible to generate photorealistic images or videos of people doing things they never actually did (complete with realistic voice cloning as well). It’s not comparable. And appealing to the past is ridiculous here. These powerful AIs didn’t exist in the past; now they do. Things change buddy. The past is irrelevant here.
But you know what, let’s just agree to disagree at this point. Either way I got a feeling that we’ll be seeing how things play out sooner than most people expect.
BigZaddyZ3 t1_j6m9ibo wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
It’d only be an attack on privacy if the government has to do it unilaterally (or by violent force). For all we know, society could welcome such a change if it protects us from an “information apocalypse” (and the lawless anarchy that would come with that). But you’re also assuming we’ll still be in a democracy in this post-truth world. There’s no guarantee of that, my friend.
And we don’t need half the population to spy on the other half, just ultra-sophisticated, omnipresent AI that can monitor thousands of computers at a time…
And I get the drop-in-the-ocean thing, but it only takes making an example out of a few high-profile offenders for the average Joe to feel like it isn’t worth the risk.
But yeah, I agree that there’s no simple fix. So what are you suggesting? The arms-race thing isn’t compelling to me because there’s no guarantee that there’ll always be a way to differentiate a “fake” image from a “real” one (same goes for audio and video). So what happens? Do we just accept that there’ll no longer be a “truth”? (Which would mean it’d be impossible to enforce “justice” or prove that any crime did or didn’t happen.) A descent into chaos before we wipe ourselves out? What’s the plan if the arms-race idea fails? ’Cause other than the big-brother scenario, I’m not seeing anything that will save us from the coming paradigm shift tbh.
BigZaddyZ3 t1_j6m7omm wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
>>If guns could be anonymously downloaded…
Which is why they will do away with the anonymous aspect of the internet, i.e. “Big Brother”, like I said. I already accounted for this.
>>If we could stop or even just massively curtail ai misinformation with them that’d be great
We likely can tho. It starts with criminalizing it severely (to the point that it isn’t even worth the risk for most people). That alone will reduce the number of bad actors down to only the boldest, most lawless citizens. That’s part of what I meant by government regulation. Of course you’d have to pair this with some system that keeps tabs on when and where an image was originally created (perhaps cryptography and the blockchain might finally have some real use; there’s a rough sketch of what I mean at the end of this comment). Either way, the government will most likely reduce the technological privacy of its citizens in order to enforce this. That’s where the big-brother stuff comes in.
>> But realistically we can’t unless we have some really innovative and detailed proposals as to how to go about it.
Or they could just say “fuck all that” and decide to mandate by law that computers come pre-installed with AI that monitors your every move while on the computer. I’m not saying it’s a foregone conclusion, but it’s the most likely scenario long-term in my opinion. And it’s the only scenario I’ve ever come across that could at least work conceptually.
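(Just to make the “keep tabs on when and where an image was created” part less hand-wavy, here’s a rough sketch of the idea in Python. It assumes a hypothetical camera or device holding its own signing key and uses the `cryptography` library for Ed25519 signatures; the function names are made up for illustration, this isn’t any actual standard.)

```python
import hashlib
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: this key would live inside a camera's secure hardware.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()

def sign_capture(image_bytes: bytes) -> dict:
    """At 'capture time', hash the image and sign the hash plus a timestamp."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = (record["sha256"] + record["captured_at"]).encode()
    record["signature"] = device_key.sign(payload)
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Anyone holding the device's public key can check the file is unchanged."""
    if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
        return False  # pixels were altered after capture
    payload = (record["sha256"] + record["captured_at"]).encode()
    try:
        device_pubkey.verify(record["signature"], payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."
provenance = sign_capture(photo)
print(verify_capture(photo, provenance))            # True
print(verify_capture(photo + b"edit", provenance))  # False
```

(Of course, re-encoding or screenshotting the image changes the hash, so something like this can only ever prove “this exact file is the untouched original”, not catch every copy floating around. The blockchain part would just be a public place to anchor the signed record, which isn’t shown here.)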
BigZaddyZ3 t1_j6m1utg wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
What happens when someone takes a simple screenshot of the image and spreads that around instead? What happens when there are millions of copies and screenshots of the image spreading across social media like wildfire? To the point that it becomes impossible to even find the “original” version of the image? It’s foolish to bet on some sort of metadata or other remnants to always be there to save the day. There’s no guarantee that’ll even be possible. Do you want to take the risk of finding out if your theory is actually right or wrong? The results could be catastrophic.
Regulation will have to occur at some level, and we already have historic precedent for it. Why do you think the national gun registry exists? Governments have always known that certain technologies have the ability to completely destroy society if left uncontrolled. And they act as a preventative mechanism in every single one of these cases. Why wouldn’t they do the same for AI? It’s not even in their personal best interests to let this tech run amok. It’s not in any of our best interests tbh. It’s most likely going to happen, and you can already see the seeds for government policy and collaboration being planted as we speak.
BigZaddyZ3 t1_j6m05rw wrote
Reply to comment by Gotisdabest in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
This is risky tho as there’s no guarantee truth would win that arms race. Also those tools might not even be possible over the long term once AI images become indistinguishable from real ones. Which would lead us back to square one anyways.
BigZaddyZ3 t1_j6kbt5t wrote
Reply to comment by awesomedan24 in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
True. But what exactly do we do then? Enter a post-truth world where the law can’t be enforced because nothing can be proven anymore? Will we revert to the law of the jungle, where might makes right? We have to at least try to curb and control the creation of these programs if we want to continue living in a lawful society.
My guess is we’re headed for big-brother-style 24/7 computer monitoring where every single thing you do is logged and overseen by government-owned AI. If they suspect you might be up to no good, then they move in. It’s not ideal, but it’s the only real way to stop this kind of technology from ruining our entire perception of what’s real and what’s not. It’s that or lawlessness (can’t tell you which is worse tbh).
BigZaddyZ3 t1_j6k2mxr wrote
Reply to comment by awesomedan24 in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
This is why AI will have to be regulated by world governments similar to how we regulate the creation of nuclear weapons. Society as we know it might collapse if this kind of technology goes unchecked.
BigZaddyZ3 t1_j6jbfoa wrote
Reply to comment by CuriousP0rridg3 in Do you think ANY job is safe from AI within the next 50 years? by Aknav12
Why can’t robots like the ones being developed by Boston dynamics eventually do this stuff?
BigZaddyZ3 t1_j7vn7w9 wrote
Reply to comment by JoeKingQueen in Can I work for AI? Maybe in the future. by JoeKingQueen
Interesting. But I’d say pain is only necessary for us humans because we are capable of dying. We have a finite amount of pain or injury we can tolerate before it’s over for us. So we need a system of “warning signals” that help us know when to treat our wounds. Since none of this applies to robots, there’s really no need for them to ever develop a sense of pain.