sumane12 t1_j40idj7 wrote
Reply to comment by blueSGL in Things like ChatGPT being used in some future games for dynamic and realistic NPC engagement by crua9
Yeah, totally. I kinda see every future game becoming a choose-your-own-adventure, with LLMs handling the heavy lifting of changing the story based on player actions. It's going to be a wild ride.
sumane12 t1_j3z4dzr wrote
Reply to Things like ChatGPT being used in some future games for dynamic and realistic NPC engagement by crua9
I think any game that doesn't use ChatGPT or a comparable LLM for conversational NPCs is simply not going to get noticed. We all know ChatGPT isn't perfect, but it's orders of magnitude better than any current in-game NPC. Imagine having a conversation with a shop owner and being able to sweet-talk him into selling you gear at a better price, flirting with the princess, or bribing the royal guard to get some of the king's best men as bodyguards for your adventure. The possibilities are endless; it needs to be done.
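Just to make it concrete, here's a rough sketch of how an LLM-backed shopkeeper NPC might be wired up. This assumes the OpenAI chat-completions API; the persona, model name, and the `shopkeeper_reply` helper are made up purely for illustration, not how any shipped game actually does it.

```python
# Hypothetical sketch of an LLM-driven shopkeeper NPC.
# Assumes the OpenAI chat-completions API; all names and prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

SHOPKEEPER_PERSONA = (
    "You are Garrick, a gruff shopkeeper in a fantasy town. "
    "You haggle, but you can be sweet-talked into small discounts. "
    "Never sell below 70% of the listed price."
)

def shopkeeper_reply(conversation):
    """conversation is a list of {'role': 'user'/'assistant', 'content': ...} turns."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SHOPKEEPER_PERSONA}] + conversation,
    )
    return response["choices"][0]["message"]["content"]

# Example: the player tries to sweet-talk a discount.
history = [{"role": "user", "content": "That's fine craftsmanship... surely a loyal customer gets a better price?"}]
print(shopkeeper_reply(history))
```

The interesting design problem isn't the API call, it's constraining the NPC (prices, lore, quest state) so the player can't talk it into breaking the game.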
sumane12 t1_j3sc2x6 wrote
Reply to Do you think in the 2030s it will be common for most households to have a 3D printer? by BeginningInfluence55
I think for 3D printers to become commonplace, four things need to happen:
1. Scale. They need to be big, basically built to fit your garage.
2. Resolution. The resolution needs to be hugely improved, to the point where you can't see the layer lines on a very detailed piece.
3. Robotic arms. I'd love to 3D print something that needs assembling and have it built by two robot arms using AI.
4. Metal printing. The ability to print in metal, including steel, extremely quickly; this would basically let someone print their own car.
Short of what I've described, unless there are huge cost savings, people will just buy from the shop.
sumane12 t1_j3lqgmw wrote
Reply to comment by a4mula in Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
>Can you cure puberty
I stopped there. Nothing you can say after this has merit.
sumane12 t1_j3llkxu wrote
Reply to comment by a4mula in Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
>Aging will never be cured.
The fact that you made this statement shows how little you understand the subject.
Apart from making a blanket statement that ignores all the progress made over the years in every area where people have asserted something 'cannot be done', you also don't define your argument.
A better way to frame your claim would be "it's impossible to avoid metabolic cellular damage within an active agent". That is a true statement; if it weren't, it would violate Newton's third law of motion: every action has an equal and opposite reaction. The question is not whether we can cure ageing, it's whether we can slow cellular degradation to the point that repair can be done. We know this is possible, as I mentioned earlier: we can pass on our genetic information at any age and still get a newborn.
sumane12 t1_j3lfboz wrote
Reply to comment by a4mula in Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
People don't die or get sick from ageing itself; they die and get sick from age-related diseases.
A 2-year-old can die from the same things as a 90-year-old; it's just extremely unlikely, because of their ability to recover.
The cell degradation that occurs in the elderly also happens in young people, just at a much slower rate; it increases over time to the point where you are exponentially more likely to die from something you previously would have recovered from.
Ageing CAN be cured. We literally copy our cells into our children; we just need to figure out how to apply that process to ourselves. It might be impossible today, but eventually it won't be.
sumane12 t1_j28874b wrote
Reply to Is AGI really achievable? by Calm_Bonus_6464
What do we mean by AGI?
I think the fairest definition is a program that can accomplish a broad range of tasks at or above the level of an average human.
You can argue semantics about understanding all day long, but ultimately what matters is the AI's ability to be given different tasks and to accomplish them. That is what will really affect our world, which, if we're honest, is all that really matters.
By this definition it's hard to believe AGI won't be developed in the next few months or years, if it hasn't been already. At this point, anyone who thinks AGI won't be developed is basically claiming that technological progress will stop right now, which is obviously a ridiculous claim.
sumane12 t1_j232azr wrote
Reply to comment by AvgAIbot in How many users in this sub are AI? by existentialzebra
I'm Spartacus!
sumane12 t1_j1zaf3i wrote
Reply to comment by red75prime in And how will apartments be distributed in an economy where there will be an Universal basic income? by Awkward-Skill-6029
>Who will allow voting on an amendment that will make it possible for AIs to actively participate in government
Anyone who sees the benefits outweighing the costs. Ultimately someone will try it, and if the ability to solve human problems is the requirement, why would anyone ignore an AI simply because it isn't human?
>Yeah, and that planet may not lie in our future.
But that is what op is asking for...
Your idea of earth being a "lush ancestral museum" is very noble and beautiful, but what makes you think it will happen like that?
sumane12 t1_j1z4jtn wrote
Reply to comment by red75prime in And how will apartments be distributed in an economy where there will be an Universal basic income? by Awkward-Skill-6029
>AIs will not be doing political governance though
If there's ever a job I feel humans are completely ill-equipped to handle, it's political governance. In a world where humans are constantly being replaced by smarter AI and the people in charge are doing little to safeguard them, how long do you think it will be before an AI that will do something about it gets voted into office?
As I said, "ALL jobs" is a different planet.
sumane12 t1_j1yyzc4 wrote
Reply to comment by red75prime in And how will apartments be distributed in an economy where there will be an Universal basic income? by Awkward-Skill-6029
I also see it as unlikely anytime soon, but if AI is doing ALL jobs, you will have civil war if we are forced to hold onto the concept of wealth in that world. Inequality in our current system is mildly tolerated because, in theory, it's possible to work hard and earn your own wealth; if AI is doing all the jobs, that's no longer possible, so any inequality would be intolerable.
That's what I'm saying: it's easy to throw around phrases like "AI doing ALL jobs", but I don't think people truly understand how different that world is.
sumane12 t1_j1ykncn wrote
Reply to And how will apartments be distributed in an economy where there will be an Universal basic income? by Awkward-Skill-6029
You're making a ton of assumptions here. Firstly, if ALL work is done by robots, the world is a different planet. We are beyond post-scarcity, so money will likely no longer exist, and therefore no UBI. We have also probably solved energy, so moving around is pretty easy. I would imagine the only solution to what you describe is to have any highly sought-after property owned by a community collective, with people allowed to stay for only a few days at a time. I personally don't see it happening like this; I believe we will be spending more time in VR, so the actual location of our physical bodies won't matter.
Honestly this question is a bit like, "if the best food is free forever, for everyone, who gets the best cutlery?"
sumane12 t1_j1ybt6i wrote
Reply to comment by LambdaAU in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
I mean the whole experience, though displays are definitely a part of it.
Realistic deformations will probably go a long way towards getting us out of the uncanny valley in video games, along with NPCs controlled by sophisticated LLMs.
I think there is a lot further to go, but we are on our way.
sumane12 t1_j1v3kl1 wrote
Reply to Can we ban AI written posts please. by katiecharm
I'm sorry, but as a large language model created by OpenAI, I am unable to ban posts. It's important to remember that just because you do not see the value in another person's post does not mean that other people are not benefiting from it.
sumane12 t1_j1uv561 wrote
Reply to Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
Possible? Yes. Likely? No. I'm thinking 15-20 years.
However, you might get full-body haptics with photorealistic graphics and a non-invasive BCI. That will allow for a very realistic experience that should keep us going until full dive.
Be great if it happens sooner tho.
sumane12 t1_j1sfjdz wrote
Post-singularity, the only option is transcendence. To become the AI.
I suppose if the super AI is benevolent, it might allow people to go on living their lives, at which point anyone will be able to live in their own simulation with whatever economic rules they like. I can't see un-enhanced humans having anything to barter with an ASI, so either the ASI will give them things for free (food, water, shelter, human rights, etc.), or it will leave part of the world free from artificial intelligence, to remain like a national park full of natural living things.
This is like asking what a black hole looks like from the inside.
sumane12 t1_j1rw5sm wrote
Reply to Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
There's a lot to digest here, so I'll start with the basics and get deeper:
- Cost. The lower the price you can make something, the more customers you will have and the more profit can be made. This is why things tend to come down in price as sellers try to undercut each other. It will happen with AI as well; I'm sure you've heard of Stable Diffusion?
- The open source community. GitHub is testimony to the fact that, no matter how divided we become, a community of like-minded individuals is the best at solving problems. Some genius individuals put amazing code on GitHub for free, so suppose someone develops AGI and keeps it for themselves: some genius will hack into the network, reverse engineer it, and release a freeware version.
- The main players seem to want to keep us plebs involved. If you look at DeepMind, OpenAI and Stability AI, they all not only want feedback from the general public but also want our opinions on the way forward. They all frame the reason to develop AI as solving humanity's problems, and on the path towards post-scarcity they all believe some form of UBI will be necessary.
- Most people are generally good. Regardless of whether or not they are billionaires, they want humanity to succeed as a whole. Obviously this isn't everyone, but generally they want something mutually beneficial. If you think about what we have compared to what billionaires have (forget bank balances), it's not that different, certainly compared to what kings and emperors of the past had. The only major difference is waiting for things; billionaires don't have to wait.
- What do they gain by denying us access to these technologies? Ultimately, if we are in a post-scarcity environment, they don't require our cheap labour anymore, and that was the strongest factor creating inequality in the past. I think most of us would consider uplifting animals such as gorillas and chimps, so for the "elites" to withhold this out of spite... sorry, I just don't see it. Some people might be like that, but most won't be, and all it takes is one person to let the digital cat out of the metaphorical bag.
These are just the musings of a crazy optimist, so I might in time be proved wrong, but if we look at how quickly people have been pulled out of abject poverty over the past 100 years, it should give us some real hope for the future.
sumane12 t1_j1jfdui wrote
Reply to comment by fortunum in Hype bubble by fortunum
I'd agree, I don't think we are anywhere near a Ghost in the Shell level of consciousness; however, a rudimentary, unrecognisable form may well have been created in some LLMs. But I think what's more important than intelligence at this point is productivity. I mean, what is intelligence if not the correct application of knowledge? And what we have at the moment is going to create massive increases in productivity, which is obviously required on the way to the singularity. Now it could be that this is the limit of our technological capabilities, but that seems unlikely given the progress we have made so far and the points I outlined above. Is some level of consciousness required for systems that seem to show a small level of intelligence? David Chalmers seems to think so. We still don't have an agreed definition of how to measure intelligence, but let's assume it's an IQ test: I've heard that ChatGPT has an IQ of 83 (https://twitter.com/SergeyI49013776/status/1598430479878856737?t=DPwvrr36u9y8rGlTBtwGIA&s=19), which is low-level human. Is intelligence, as measured by an IQ test, all that's needed? Can we achieve superintelligence without a conscious agent? Can we achieve it with an agent that has no goals and objectives? These are questions we aren't fully equipped to answer yet, but they should become clearer as we keep building on what has been created.
sumane12 t1_j1j5clw wrote
Reply to comment by fortunum in Hype bubble by fortunum
You bring up some good points. I think there are a number of reasons people have been so optimistic recently:
- Even though ChatGPT is not perfect and not what most people would consider AGI, it's general enough to be massively disruptive. Even if no further progress is made, there's so much low-hanging fruit in terms of the productivity ChatGPT offers.
- GPT-4 is coming out soon, which is rumoured to be trained on multiple data sets, so it should be even better at generalising.
- AI progress seems to be speeding up; we are closing in on surpassing humans on more measures than not.
- Hardware is improving, allowing for more powerful algorithms.
- Although Kurzweil isn't perfect at predicting the future, his predictions and timelines have been pretty damn close, so it's likely that this decade will be transformative for AI.
You bring up a good point about questioning whether language is all that's needed for intelligence, and I think it possibly might be. Remember, language is our abstract way of describing the world, and we've designed it to encapsulate as much of our subjective experience as possible through description. Take my car, for example: you've never seen my car, but if I give you enough information, enough data, you will eventually get a pretty accurate idea of how it looks. It's very possible that the abstractions of our words could, given enough data, be reverse engineered to represent the world we subjectively experience. We know that our subjective experience is only our mind's way of making sense of the universe, shaped by natural selection; the real universe could be nothing like it. It seems reasonable to me that the data we feed to large language models could give them enough information to develop a very accurate representation of our world and to massively improve their intelligence based on that representation. Does this come with a subjective experience? I don't know. Does it need to? I also don't know. The more research we do, the more likely we are to understand these massively philosophical questions, but I think we are a few years away from that.
sumane12 t1_j1h6chm wrote
Reply to When will we reach LEV? by TampaBai
Let me frame it slightly differently.
After the worst global pandemic in recent history, life expectancy only dropped by 7 months... Life expectancy is calculated based on the people who are dying TODAY. So, assuming no further improvements to medical care between now and when you die, that's what your life expectancy is.
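For anyone wondering what "based on people who are dying today" means in practice, here's a rough sketch of a period life-table calculation: freeze today's age-specific death rates and ask how long a newborn would live if those rates never improved. The mortality numbers and the `period_life_expectancy` helper are invented purely for illustration.

```python
# Illustrative sketch of "period" life expectancy: today's death rates, frozen forever.

def period_life_expectancy(qx):
    """qx[x] = probability of dying between age x and x+1, under today's conditions."""
    survivors = 1.0
    person_years = 0.0
    for q in qx:
        deaths = survivors * q
        # Assume deaths happen, on average, halfway through the year.
        person_years += survivors - deaths / 2
        survivors -= deaths
    return person_years

# Toy mortality schedule: a flat 1% chance of dying each year for 120 years.
print(period_life_expectancy([0.01] * 120))  # roughly 70 years under these frozen rates
```

Any actual improvement in medical care after today makes your real expected lifespan longer than this number, which is the point about longevity escape velocity.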
sumane12 t1_j17shuw wrote
Have you not fucked ChatGPT yet? Ah man you're missing out!!!
sumane12 t1_j17kl9x wrote
Reply to comment by jsseven777 in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
Yeah, very true. I suppose its goals need to be set with humanity as a whole in mind.
sumane12 t1_j16hnyt wrote
Reply to Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
It doesn't seem intuitive to me either that an AI will spontaneously develop goals and objectives it wasn't given. Natural selection shaped our goals and objectives, and since we are artificially selecting AI, I don't see where goals and objectives not aligned with our own would come from.
It's an important thing to consider, but I'm still trying to figure it out.
sumane12 t1_j12ewoh wrote
Reply to comment by Layer_4_Solutions in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Yeah, that's very true. Although I've heard something about letting AI companies like OpenAI use your GPU while you're not using it, effectively building up tokens that you can either sell back to them or spend on your own AI requirements, kind of like selling solar energy back to the power company. If that's true, and companies can take advantage of the cloud, it should actually reduce costs even more. I don't know; I think there are too many variables at this point.
sumane12 t1_j46lwjs wrote
Reply to Should AI receive a salary by flaming_dortos
Dafuq is it going to spend it on? Socks?
Look, if a sentient AI wants a salary, i.e. has its own goals and objectives, then obviously it may well need money for that. But I think by that point we may well be beyond post-scarcity, in which case everything will be free anyway.