AdditionalPizza

AdditionalPizza OP t1_isetkbk wrote

That's basically the question in the post: could the system fall apart when enough people in power are cured of these traits?

It's not really a biohacking suggestion; it's more that you'd go to a doctor, they test your blood/DNA or whatever, and run it through a diagnostic AI that tells you your levels of everything, your predispositions, potential precursors, and current illnesses. It then creates a custom-tailored medical regimen to cure and prevent.

I honestly don't see how that won't be a thing soon. It's logical, no?

I'm not sure how far-fetched it is for AI to help figure out the cause of mental illnesses and how to treat them. People would just take the medication because it would cure everything they have.

−1

AdditionalPizza OP t1_isb2jbm wrote

>maybe I can carry a little selfie drone

That would be pretty awesome, I have to admit. Just fire off a little thumb-drive-sized drone and see the viewfinder through your AR glasses. I agree, though, that with any size of screen it will definitely make a lot of current stuff useless for a lot of people. Outside of gamers or people who need a powerful CPU/GPU, there would be no point in tablets, laptops, or eventually cell phones. I still wonder how they will get people to wear them all the time, but maybe by the time they become mainstream it will use a different tech, like a projector of sorts on a thin arm that fires the image into your eye instead of making you look through glasses.

>My feeling is that the next decade will be one where AI augments some workers and deskills others. Maybe some industries will be virtually eliminated.

It will definitely be a while until every job is eliminated, but I do think deskilling jobs and making each employee much more effective will leave a ton of people unemployed. People always refer to the Industrial Revolution and say people will retrain for new jobs, but I really don't see how that applies here. Sure, some people will be able to get highly specialized skilled jobs, and manual labor will last for a while too. But there just won't be enough work when a lot of things require only an AI "operator" to do more work than a single human could on their own. It only takes about 10% unemployment (or even less) to cause a crisis.

It's not like a journalist will be able to retrain as a nanobot engineer, or whatever example you want to use. I personally think this will happen quicker than people are comfortable admitting. It doesn't require 100% or even 50% of jobs to be automated; it only requires a few key sectors to lay off a lot of employees because they aren't essential for operation anymore.

Retail, for example: yes, there will still need to be people there in the near future, but a simple kiosk could take an order for a pair of jeans, drop it through a chute in front of the customer, and let them tap to pay or go try it on. Yes, we need employees to stock the inventory and help the customers who need it, but that could easily be a 75% reduction in staff. That's possible today. The infrastructure isn't worth it yet, but what about when customers can ask an AI about inventory, get style tips from a fashion AI, or have their size gauged perfectly?

The moment people prefer to deal with AI instead of a human is when this takes off.

1

AdditionalPizza OP t1_is9yvbr wrote

>10 years hits kind of that boundary between near and far future enough to be pretty fuzzy.

That was my exact thinking behind the original post. Each passing year seems to make it more difficult to gauge where the next decade will go, which is exciting. But I don't know if we'll hit any life-altering tech in that time. I look at each passing 5-year span as a setup for the next 5 years. I'm hoping the AR we have now comes to consumers in the next 5 years.

Do you think all devices with screens now will be screen-less when AR is ubiquitous (if that's less than 10 years)? I think phones will be the processor and main hub for other devices. Watches will remain and get much better sensors, and perhaps other sensors can be worn elsewhere to complement them. AR glasses will start out as a HUD device with the ability to "cast" screens. I'm not so sure I see AR glasses as a replacement for all devices, simply because of how vain humans are: selfies would be impossible, barring the use of some kind of avatar. Smartwatches or bands would work great as a physical controller for AR, or gyroscopic control. Eye tracking will be important to reach that "killer app" scenario, unless brainwave reading gets super advanced quickly. I think we need some pretty significant battery advancements as well. I'll admit, though, I haven't kept up with AR as much as I should, so I'm not sure exactly where we're at today.

Do you think AR is this decade's most (or only) truly revolutionary tech? I think chatbots will take the general public by surprise soon. Virtual assistants will be upgraded once we solve their unpredictable nature so they're safe to use, "safe" meaning they no longer need a disclaimer saying don't trust what they say. I think when our virtual assistants are capable of conversing with us and doing things for us, it will really feel like the future is here.

I do think narrow AI will make big waves by 2025/6. I think that's when employment issues will start to crop up, before becoming a crisis shortly after.

1

AdditionalPizza OP t1_is7mytd wrote

When do you think a person will land on Mars?

Your timeline is "conservative" relative to the singularity sub here, but at the same time it seems only about 5 to 10 years off from what I think the average consensus here is. I'd say I'm more in line with your timeline, though I do hope acceleration takes off a bit more. I have a pretty pessimistic view of the general population and governments. I'm hoping this decade has enough revolutionary tech to force serious thought about UBI sooner than 10 years from now, but I'm not holding my breath.

In hindsight the jump from dumb phone to smartphone seems so obvious, but I can't recall whether it was that obvious at the time. We see and hear a lot of talk about AR glasses being the obvious next step; I wonder if there's some other technology that will flip that notion on its head. AR seems like an amazing idea, but the general public wearing glasses seems a lot less likely to me than carrying a large screen in their pocket. The glasses would have to be either very customizable or very adjustable after they're made. Do you have a prediction of when the first viable ones will come to market?

It's too bad my posts get deleted from this subreddit for no reason; I was hoping to keep a conversation going to get some new perspectives.

1

AdditionalPizza OP t1_is6zxvc wrote

Yeah, that's the stuff I'm asking about. Do you think AR will advance quickly enough for this to be mainstream within 10 years?

I'm actually surprised there hasn't been a buzz about improvements to our virtual assistants yet. I think that will come very soon and will hopefully be a pretty big game changer, but I have my doubts that tech corporations will implement it in a very useful way. Take the current Google/Siri/Alexa stuff: it could do a lot more. Our smart homes could be much more robust, but we mostly have smart bulbs, thermostats, locks, and cameras. Using voice assistants to adjust those works most of the time, but it's so basic for the average consumer.

2

AdditionalPizza OP t1_is6yvis wrote

>I think nothing will actually change very noticeably, it will be more subtle stuff


>3d printing is getting better, with crispr gene editing, agi, and new power sources

These sound pretty noticeable though. Relative to the singularity, maybe not, but compared to life today?

What effect will better 3D printers have? Where will CRISPR be within 10 years? What new power sources?

1

AdditionalPizza OP t1_is6xehj wrote

What do you think that timeline looks like? I'm more interested in the "how" than the "what." For example, we see graphic artists panicking; though they haven't faced unemployment yet, it seems inevitable. So what's the next pillar to fall, and when? And then what?

Basically what are the significant steps here?

2

AdditionalPizza t1_iryh8h2 wrote

>At this time in the world we don't quite have that in news media. Instead, the idea of 'fake news', disinformation, faked video, etc, are still seen as somewhat conspiratorial takes for most topics

I made a post about this, asking how prevalent and advanced bots are on social media. It's only a matter of time; we're sitting on a time bomb for the extinction of trust. I have no idea which side is better. Right now the grass looks greener on the other side of it all, when we stop trusting everything that's shoved in our faces. The conflict and the arguing suck so much right now; every little thing explodes into an argument online, and our species has taken some absolute backward steps in parts of the world. But you're right that the other side is going to suck too. When the deepfakes start dropping and the trust bomb goes off, I just don't know.

Let's just hope the internet shifts entirely to entertainment, like the movies, and nothing except reputable sources can be trusted. Though that Goldilocks scenario is hard to imagine now, what with how stupid the average person seems to be.

6

AdditionalPizza t1_irydgid wrote

>When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love.

What about adoption? I don't know from personal experience, but it's pretty taboo to claim an adopted child is loved more like a slice of pizza than like biological offspring, no?

I'm of the belief that love is more a level of empathy than anything inherently special in its own category of emotion. The more empathy you have for something, the better you know it, and the closer you are to it, the more love you have for it. We just use "love" to describe the upper boundaries of empathy. Parents have a strong feeling of empathy toward their children (among a cocktail of other emotions, of course) because they created them, and it's essentially like looking at a part of yourself. Could an AI not look at us as a parent, or as its children? At the same time, I can be empathetic toward other people without loving them. I can feel for a homeless person, but I don't do everything I possibly can to ensure they get back on their feet.

Is it truly only biological? Why would I endanger myself to protect my dog? That goes against anything biological in nature. Why would the parent of an adopted child risk their life for the child? A piece of pizza is way too low on the scale, and since it isn't sentient, I think it may be impossible to actually love it or have true empathy toward it.


>its knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived.

This would be under the assumption that nothing artificial is natural. Which, fair enough, but that opens up a can of worms leading straight to whether the AI would even be capable of sapience. Is it aware, or is it just programmed to be aware? That debate, while fun, is impossible to actually hold a solid opinion on.

As to whether an AI would be able to fundamentally love, well, I don't know. My argument isn't about whether it can; it's that if it can, then it should love humans, and if it can't, then it shouldn't be programmed to fake it. Faking love would be relegated to non-sapient AI. That may be fun for simulating relationships, but a lot less fun when it's an AI in control of every aspect of our lives, government, health, resources...


>why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.

I may never know if that time comes. But the question isn't whether I would know; it's whether it has the capacity to, right? I don't grant humans any unique privilege in the ability to feel certain emotions. It will depend on how AI is formed and whether it remains just another tool for humankind. Too many ethical questions arise there, when for all we know an ASI may someday be born and raised by humans with a synthetic-organic brain. There may come a time when AI is no longer a tool for us but a sapient, conscious being with equal rights. If it's sapient, we should no longer control it as a tool.

I believe that, given enough time, it's inevitable an AI will truly be able to feel those emotions, and almost certainly more strongly than a human today can. That could be in 20 years or in 10 million years, but I wouldn't say never.

Sorry if that's all over the place; I typed it in sections at work.

1

AdditionalPizza t1_irx2frv wrote

>love is a bit more of a powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it

Are you talking about love strictly for procreation? What about love for your family? If we give the reins to an AGI/ASI someday, I would absolutely want it to truly love me if it were capable. Now, you mention it could fake it so that we think it loves us. That sounds like betrayal waiting to happen, and like what OP was initially concerned about. The AI would have to be unaware that it's fake, but then what makes it fake? It's a question of sentience/sapience.

The problem here is that the question posed by OP seems to refer to a sapient AI, while your comment refers to something posing as conscious and therefore not sentient. If the AI is sapient, it had better have the ability to love, not just fake it. However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion; it'd be better suited to giving statistical outcomes for cold, hard decisions, or relinquishing the final decision to humans who experience real emotion.

1

AdditionalPizza t1_irwxipm wrote

>What to do with that knowledge could depend on w[h]ether or not you care or love that given person.

Do you have more empathy for the people you love, or do you love the people you have more empathy for?

If I had to debate this, I would choose the latter, since empathy can be defined. Perhaps love is just the amount of empathy you have toward another. You cannot love someone you don't have empathy for, but you can have empathy for someone you don't love.

Would we program an AI to have more empathy toward certain people, or equal empathy for all people? I guess it depends on how the AI is implemented: whether it's individual bots roaming around or one singular AI living in the cloud.

2

AdditionalPizza OP t1_irwqjiw wrote

Now, when I say this, I don't mean I want the theory to come to fruition, because that'd be stupid:

I hope this problem gets worse quickly. We're in a limbo right now where most people are totally ignorant to the capabilities of these bots, and I think we all could use a wake up call on this soon. I would love to read some studies done on this and see some statistics.

2

AdditionalPizza OP t1_irwpfea wrote

>As for good vs. evil, I believe that most people are good. Therefore I think that most bots, being deployed by humans and not yet being intelligent in their own right, are either good or benign.

The problem with that logic:

>Of course, people with nefarious intentions could be deploying more bots than good or benign people.

Is precisely that.

There can be one bad person for every thousand good people, but that one person could automate countless "evil" bots. Yes, people could deploy good or benign chatbots, but if someone wanted to troll or spread misinformation, they would just deploy an army of chatbots across a wide swath of social media.

Anyway, I'm not defining good or evil here, just going along with those words to keep it simple. Evil in this situation can refer to any form of deception, from advertising to hate speech. If the bar for evil is simply not disclosing that it's a chatbot, then money and political gain come into the mix, which closes the gap between good and bad people.

1

AdditionalPizza OP t1_irtfrbr wrote

Guessing at what it meant, I searched it and skimmed an article.

A theory that the internet is just bots and AI communicating back and forth while humans no longer take part? If so, that's exactly what I see in the future if we don't find a solution at some point. I don't really like the idea of removing more anonymity from the internet, but I don't know a better solution.

I've always wondered how a social media platform would work out if it required legitimate credentials to sign up.

3

AdditionalPizza OP t1_irtf9c9 wrote

Let's hope for a more civilized revolution, or perhaps AI can shepherd us into better living standards.

I try to be optimistic about the future and its potential, but as a kid I didn't think the 2020s would be so brutal for the cost of living. Not to mention that people in power don't even have to try to hide the shitty deeds they do anymore; they just do it and have half the people chanting for more. We live in a strange world now.

4

AdditionalPizza OP t1_irtet61 wrote

Haha, see, this was pretty convincing. My reply would've been something about how I'm more concerned about my friends, and ultimately the general population. But I also don't know if I'm just being paranoid, though my gut tells me I'm not. It feels like we're about to see the internet change drastically because of AI really soon, and people will need to be more aware.

6

AdditionalPizza OP t1_irskgha wrote

You make a good point with number 2. I don't know what to think about grammar errors, because theoretically a bot wouldn't make them, but the errors are often so stupid. Like, I saw a post the other day starting with "as a civil engineer," and then it had nothing to do with being a civil engineer. As if it's a bot specifically designed for social media posting, using buzzwords/memes, but still in beta.

You should make one, journal it all, and write a big post to wake people up. I'm tired of sounding like the crazy one in my group.

5