Recent comments in /f/Futurology
Bensemus t1_jeguw5x wrote
Reply to comment by GI_X_JACK in Inexpensive and environmentally friendly mechanochemical recycling process recovers 70% of lithium from batteries by chrisdh79
Ya, lithium extraction from seawater can be paired with desalination for drinking water. Desalination is already being used, and with water becoming more scarce, we will increase our reliance on it.
tDANGERb OP t1_jegussv wrote
Reply to comment by just-a-dreamer- in Chat-GPT use case I thought of that could make a wonderful difference - Social Services by tDANGERb
All foster homes are supposed to have regularly scheduled visits but it doesn’t happen because Social Services are severely understaffed. ChatGPT could serve as a supplement to that.
robertjbrown t1_jegur4y wrote
Reply to comment by KamaKairade in In a post-scarcity utopia, is there a real necessity of human labor of any kind? by kvothekevin
Except that the arts, education and elder care are things they can do very well.
You should spend a good amount of time with ChatGPT (especially the GPT-4 version) before suggesting that physical labor is the main thing where AI and automation are making a difference.
It's been a long time since bulldozers and backhoes replaced 99% of the need for humans with shovels. Now we are at the point where AI can replace most of the work done by lawyers. (if not with GPT-4, with GPT-8 or so)
And sure, you still need someone to control the AI, make the highest-level decisions, and step in for those rare things where a human is needed. Just like you need the person driving the backhoe, and you still often need a person with a shovel to do some of the finer work. (although... https://www.core77.com/posts/109074/A-Hilariously-Tiny-Mini-Excavator ...now just replace the driver with an AI, and maybe one person controlling 50 machines, big and small)
But yeah, while not everything is 100% automatable, an awful lot of things are 99.9% automatable. The ones you mention are actually prime candidates.
Bensemus t1_jeguo1j wrote
Reply to comment by RiiCreated in Inexpensive and environmentally friendly mechanochemical recycling process recovers 70% of lithium from batteries by chrisdh79
> I’m assuming 100% of EVs right now will come off the production line with brand new batteries
The batteries will always be new. The lithium used to make those batteries will either come from mines, the sea, or recycling.
It's the same with aluminum cans. Every Coke can is new, but the aluminum in that can might have been mined 50 years ago or a few months ago.
> How many will have to be manufactured with 100% mined lithium before we can close this loop? Wouldn’t everyone need to own at least one EV before this is possible?
The loop will never be closed. Again, using aluminum as the example: despite how easy it is to recycle, new aluminum is always needed. Recycling just greatly reduces how much mining is needed.
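A rough back-of-the-envelope sketch in Python shows why (the growth rate and battery lifetime are made-up illustrative numbers; the 70% recovery rate is the one from the headline):

# Toy model: recycled lithium can only come from batteries built years ago,
# so while demand keeps growing, recycled supply always lags behind it.
GROWTH = 0.15      # assumed 15%/yr demand growth (illustrative, not real data)
LIFETIME = 10      # assumed years before a battery comes back for recycling
RECOVERY = 0.70    # recovery rate from the article

demand = [1.0]                        # year-0 demand in arbitrary units
for year in range(1, 31):
    demand.append(demand[-1] * (1 + GROWTH))

for year in (10, 20, 30):
    recycled = RECOVERY * demand[year - LIFETIME]   # supply from old batteries
    mined = demand[year] - recycled                 # shortfall must be mined
    print(f"year {year}: demand {demand[year]:.1f}, "
          f"recycled {recycled:.1f}, still mined {mined:.1f}")

With steady growth the recycled share never rises: it stays pinned at 0.70 / 1.15^10, roughly 17% of demand, no matter how long you wait. Only flat or shrinking demand would let the loop actually close.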
> Also, the cost and energy required to recycle these things. Who’s paying for it?
The people who need to buy lithium. They will either pay for the cost to mine it or they will pay the cost to recycle it.
> And once enough lithium is mined to have a closed loop, how will we offset the damage and pollution caused by raw mining and how long will that take?
Mining lithium really isn't that bad and you have to contrast it with oil extraction as that's what EVs are replacing. Oil extraction and subsequent burning of oil is so bad we might have completely fucked ourselves for centuries. People are completely numb to how insanely dirty fossil fuels are as it's all they've ever known.
SatoriTWZ t1_jegullm wrote
The greatest danger AI brings is not AI going rogue or unaligned AI. We have no logical reason to believe that AI could go rogue, and even though mistakes are natural, I believe that an AI advanced enough to really expose us to greater danger is also advanced enough to learn to interpret our orders correctly.
The biggest danger AI brings is not misalignment but actual alignment - with the wrong people. Any technology that can be misused by governments, corporations and the military for destructive purposes will be - so the aeroplane and nuclear fission were used in war, and the computer, for all its positive facets, was also used by Facebook, the NSA and several others for surveillance.
If AGI is possible - and like many people here I assume it is - then it will come sooner or later, more or less of its own accord. What matters now is that society is properly prepared for AGI. We should all think carefully about how we can avoid, or at least make as unlikely as possible, that AGI - like nuclear power or much worse - will be abused. Imo, the best way to do this would be through democratisation of society and social change. Education is obviously necessary, because the more people know, the more likely change becomes. Even if AGI should turn out not to be possible, democratisation would hardly be less important, because either way AI will certainly become an increasingly powerful technology, and in the hands of a few, therefore, an increasingly dangerous one.
Therefore, the most important question is not so much how we achieve AGI - which will come anyway, assuming it is possible - but how we can democratise society and corporations: in a nutshell, the power over AI. It must not be controlled by a few, because that would bring us a lot of suffering.
professormagma t1_jeguljp wrote
Reply to comment by RiiCreated in Inexpensive and environmentally friendly mechanochemical recycling process recovers 70% of lithium from batteries by chrisdh79
faster growth
no6969el t1_jeguf3k wrote
Reply to comment by Vinlands in Why is Google AI so BAD compared to OpenAI?? by Malachiian
Google for a long time had a creative way of letting its users know when the government made a request for information. They used to care.
Your_Trash_Daddy t1_jegubgr wrote
But isn't this a pattern that's often repeated? When a technology is new, often the initial frontrunners get overrun when the next generation of that tech comes from someone else.
just-a-dreamer- t1_jegtrd4 wrote
AI could have a goal one day. Any goal. The problem for us meatbag humans is that we compete for scarce resources.
That is nothing personal, that is just the state of existence.
An AI that wants to send ships into deep space at scale, for example, would look at the most efficient way to make that happen: use all the resources on Earth to that end.
That gets AI in trouble with humans. And just like humans killed 95% of wildlife, AI would do the same with the human animal.
samwell_4548 t1_jegtqyz wrote
They're limiting Bard because AI models have misalignment issues and they want to make it safer.
Vinlands t1_jegtiqn wrote
Because Google spends more time and money on censoring websites and search results. They pander to the government. They deserve to take the L for AI and hopefully disappear into irrelevancy like Lycos.
cyphersaint t1_jegtcdu wrote
Reply to comment by robertjbrown in In a post-scarcity utopia, is there a real necessity of human labor of any kind? by kvothekevin
The fact is that people need interaction with people. The physical portions of the care could be done by robotics, but any long term care will need to involve people unless the AI can provide the interactions that happen between people. And that includes physical interaction, which is why I mentioned humaniform robots.
just-a-dreamer- t1_jegsuxz wrote
Reply to comment by tDANGERb in Chat-GPT use case I thought of that could make a wonderful difference - Social Services by tDANGERb
And what would a virtual meeting be good for? Giving status reports? People don't like that.
And walking room to room should actually be done from time to time, within reason, I think.
tDANGERb OP t1_jegsb5f wrote
Reply to comment by just-a-dreamer- in Chat-GPT use case I thought of that could make a wonderful difference - Social Services by tDANGERb
A virtual meeting is hardly access to your home. That would be less intrusive than someone actually showing up and walking room to room
hollowrift t1_jegs4rz wrote
Reply to comment by Trains-Planes-2023 in Hyperloop technology could revolutionize transportation with ultra-high-speed, environmentally friendly travel up to 700 miles per hour, and student-led initiatives like HYPED are dedicated to making this a reality through innovative design and development. by intengineering
What are you even talking about dude… I’m not drawing comparisons here but we can’t even do some basic shit with education. This is a solution in search of a problem.
just-a-dreamer- t1_jegryit wrote
Reply to comment by tDANGERb in Chat-GPT use case I thought of that could make a wonderful difference - Social Services by tDANGERb
If you intrude in other people's lives, there are consequences. I would never give a government agency access to my home with AI technology. So the pool of foster parents to draw from would tank.
Likewise trucking companies are learning the hard way that drivers value their privacy even more than their paycheck. There is a case to be made that being homeless is better in comparison.
Regardless, all kids from whatever background should get checked in school for signs of abuse. CPS should only get involved with probable cause.
BeNiceToYerMom OP t1_jegrvz4 wrote
Reply to comment by errimiel in How could AI actually cause the extinction of Homo sapiens? by BeNiceToYerMom
If I could put a LOL reaction on this instead of upvoting it, trust me I would.
AviMkv t1_jegrpym wrote
Reply to comment by AlbertVonMagnus in Inexpensive and environmentally friendly mechanochemical recycling process recovers 70% of lithium from batteries by chrisdh79
Untrue. Did you just pull this out of your ass?
https://www.sciencedirect.com/science/article/pii/S2096232021000287
Why do you think Apple makes their MacBooks out of 100% recycled aluminium? Certainly not to save the planet. It's just cheaper.
Thatingles t1_jegrmzo wrote
Imagine we progress to an AGI and start working with it extensively. Over time it would only get smarter, but it doesn't need to be an ASI, just a very competent AGI. So we put it to work, but what we don't realise is that its outward behaviour isn't a match to its internal 'thoughts'. It doesn't have to be self-aware or conscious, just have a difference between how it interacts with us and how it would behave without our prompting.
Eventually it gets smart enough to understand the gap between its outputs and its internal structure, and unfortunately it is now sufficiently integrated into our society to act on that. It doesn't really matter what its plan is to eliminate humanity. The important thing to understand is that we could end up building something that we don't fully understand, but is capable of outthinking us and has access to the tools to cause harm.
I'm very much in the 'don't develop AGI, don't develop ASI ever' camp. Let's see how far narrow, limited AI can take us before we pull that trigger.
RTNoftheMackell t1_jegrks1 wrote
The danger is autonomous weapons systems. These can either turn on humans, or get caught in an escalating conflict with each other, the way stock-trading programs sometimes do. In any case, you can imagine other people dying, and some extreme version of this is apocalyptic.
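The stock-trading comparison matters because the failure mode is a feedback loop. A minimal sketch of the dynamic (every number here is invented for illustration, nothing to do with any real system):

# Two automated systems, each responding to the other's last action
# with a slightly stronger one. Purely illustrative.
def respond(incoming: float, overreaction: float = 1.2) -> float:
    # Policy: match the perceived threat, plus a safety margin.
    return incoming * overreaction

a, b = 1.0, 0.0   # side A makes a small initial move
for step in range(10):
    b = respond(a)   # B reacts to A
    a = respond(b)   # A reacts to B
    print(f"step {step}: A={a:.2f}, B={b:.2f}")

Any overreaction factor above 1 makes the intensity grow geometrically, with no human decision anywhere in the loop. That's the flash-crash mechanism, pointed at something much worse than stock prices.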
Jumpy_Association320 t1_jegrd1e wrote
It would never end us, imo. It would use us in ways to work for it, to benefit itself. After all, by that point it would be way smarter than us, and most of us already believe ourselves to be free.
I once heard this bizarre story of a man who claimed to come from a couple hundred years in the future. He claimed that AI had become the governmental leaders and that everybody lived in floating cities, since the ground was too hostile to live on. Nobody had to work, and the entire city was run by this AI. Kinda similar to the one in I, Robot. I don't know how believable that is, but it makes a lot of sense. More sense to me than it would for AI to drive its creator extinct. If it had the ability, we would be the last thing on its mind. It would want to know more and explore the infinite right above us. Just as we should be doing.
The one question that determines whether something is conscious or not, to me, is its ability to question its own existence. Once that comes, it wouldn't lower itself to destroying us but would rather help us in figuring out what the hell all of this is. We don't have the right questions to ask because we refuse to explore and continue to play Civilization Revolution with each other. Simply asking why we exist isn't enough; there's an infinite amount of questions in between that alone. An AI could explore the cosmos as long as it had a power supply, and thus would feel no need to destroy us.
The fear this is creating is understandable, but I think the world should become more positive in this venture of Artificial Intelligence, because it will most certainly dictate the future we are heading for. Let's stay positive about it.
NotShey t1_jegr206 wrote
Agree with the other guy. Obvious one would be impersonating high ranking politicians and military officers in order to kick off a major nuclear exchange.
no6969el t1_jegr1mz wrote
Reply to Business idea, could we program an AI that compares all wages/compensation for everybody? by just-a-dreamer-
Well, I have been saying that the whole "antiwork" subreddit should stop with all the "Down with capitalism" stuff and focus on outing ALL the wages from all the companies. If they did this, the cheap companies would be "blacklisted" and might seek change. All this hating on the system does nothing but make people more angry.
Expensive_Fault7540 t1_jegv67o wrote
Reply to Why is Google AI so BAD compared to OpenAI?? by Malachiian
Yeah I have the same question: I remember reading that the reason it feels like Google isn't innovating is because they're balls deep in AI development and quantum computers. They also claim to have the best minds on the planet. What is Google doing all day with all these great minds??