JenMacAllister t1_j73xfoi wrote

A non-biased, logic-based AI making arguments in polite and respectful debate, without the chance of political influence or money.

Where do I sign up?

2

Vince1128 t1_j73xz5r wrote

Are you sure an AI can't be influenced at all?

20

FacelessFellow t1_j744i33 wrote

I’m sure that true AI will have a firm grasp of objective reality. Otherwise it’s not a very good AI.

Truth is not subjective. If we can program AI to be nothing but truthful, then it cannot be corrupted. Right?

1

Fake_William_Shatner t1_j749lhc wrote

>If we can program AI to be nothing but truthful, then it cannot be corrupted.

It can be useful. It can be checked. But saying something "cannot be corrupted" is the wrong way to approach this.

8

FacelessFellow t1_j74dpzg wrote

It’s like saying a math equation can be corrupted. It can be wrong (human error), but if it’s correct, it cannot be corrupted. 2+2=4 cannot be corrupted. Can it?

0

Fake_William_Shatner t1_j74fi55 wrote

No, it isn't like saying that.

With 2+2 you already KNOW the answer. It's 4. You already know the inputted data is perfect.

Creating an AI to make decisions is drawing from HUMAN sources.

And, I think your idea that "objective reality" and "facts" are certain is not really a good take. We don't even observe all of reality. Our perceptions and what we choose to pay attention to are framed by our biases. And programming an AI requires we know what those are and know what data to feed it to learn from.

FACTS are just data. They are interpreted. "TRUTH" is based on the viewer's priorities and understanding of the world. The facts can be proven, but which facts to use? And TRUTH is a variable, different for everyone who says they know it.

8

FacelessFellow t1_j74vfea wrote

You don’t think there’s an objective truth/reality?

That’s a mind blowing concept for me.

−1

Fake_William_Shatner t1_j76u03l wrote

You can't really join the ranks of the wise people until you understand this. You don't think people with different perspectives and life histories and fortunes see a different "reality?"

If you get depressed -- doesn't that change what you see? If you take hallucinogenics, that alters your perspective. Your state of mind shapes how you interpret and experience life. Do you know if you are rich or poor until you have knowledge of what other people have or don't have?

Can you see the phone signals in the air, or do you ONLY get the phone call intended for you? You answer a call, and speak to someone -- you now have a different perspective and slice of reality than other people. Without the phone with that one number -- you walk around as if nothing was there. But, that data is there and ONLY affects some people.

Do you see in all of the EM spectrum? No. Visible light is a very small slice of it. If you had infrared or ultraviolet goggles, you would suddenly have information about your environment other people don't. Profoundly color-blind people don't see the Green or the Red traffic lights except by position. Someone who sees colors might forget if the Red light is on the bottom or the top -- they take it for granted that they can tell. And the blind now have auditory signals at the street level -- their "knowledge" of the reality sighted people have of the same environment has changed for the better in that regard.

That's the challenge of data and science and especially statistics; what do you measure? What is significant to evaluate is a choice. And your view of reality is always in context of the framework you have from society, your situation, your "luck", your state of mind.

A nice sunny day, and one person gets a phone call that their mother has died -- it's a different reality and "truth."

So, I hope you continue experimenting with this notion that there is not and never has been one reality because we all have a different perspective and we can't all look at the entire thing. We can't all hear it. We can't all feel it. We interpret the data differently and choose different parts to evaluate.

1

FacelessFellow t1_j770iej wrote

So atomic mass is subjective? The table of elements is subjective?

Your comment just made it sound like a perspective thing. It sounds like it’s all about people and their subjective reality.

Objectively, an atom has so many electrons. Or does the number of electrons change depending on who is observing?

If I put 3 eggs on the table, it will be 3 eggs for someone else. Even if they’re blind, they can touch the eggs. Or be told by someone that it’s 3 eggs. I don’t see what can change the fact that there’s 3 eggs on the counter.

1

Fake_William_Shatner t1_j77sjzw wrote

>So atomic mass is subjective? The table of elements is subjective?

So you can't compare SOCIAL ENGINEERING to something that is subjective -- you want to compare it to atomic mass?

There's no point discussing things with a person who breaks so many rules of logic.

>It sounds like it’s all about people and their subjective reality.

Yes. Like your reality where you think Atomic mass being a stable number everyone can determine ALSO covers whether they think their outfit makes them look fat.

There is "objective reality" -- well, as far as you know, so far, with humanity's limited perception of the Universe. But, people interpret everything. Some people do not eat eggs because they are Vegan. 3 Eggs is objective fact. The "Truth" that what you gave me is a good thing, is an interpretation. And you assume how other people think based on your experience.

Reality and truth are subjective as hell. Facts are data points and can be accurate, but WHICH FACTS are we considering? "FACT; there are three eggs -- I win!" Okay, what were the rules? "That's a secret."

1

FacelessFellow t1_j74dgpw wrote

If there’s only ONE objective/factual reality, then we can program AI to perceive only ONE objective/factual reality.

The sun is hot. Agree? You think a good AI would be able to say, “No, the sun is cold”?

The gases we release into the atmosphere affect climate. Agree? You think a good AI would be able to say, “No, humans cannot affect climate”?

Science aims to be as factual and accurate as possible. I imagine a true AI would know the scientific method and execute it perfectly.

Yes, some scientists are wrong, but the truth/facts usually prevail.

I don’t know if I’m making sense haha

−3

Outrageous_Apricot42 t1_j74jacv wrote

This is not how it works. Check out the papers on how ChatGPT was trained. If you use biased training data, you will get a biased model. This has been known since the inception of machine learning.
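The "biased data in, biased model out" point can be sketched with a toy next-word model. This is only an illustration with an invented four-sentence corpus, not how ChatGPT is actually trained, but the mechanism is the same in spirit: the model can only reproduce the regularities (and skews) present in its training data.

```python
from collections import Counter, defaultdict

def train(sentences):
    """Learn next-word counts from a corpus (a tiny bigram model)."""
    model = defaultdict(Counter)
    for s in sentences:
        words = s.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def complete(model, word):
    """Return the continuation seen most often in training."""
    return model[word].most_common(1)[0][0]

# A skewed, entirely invented corpus: "he" only ever appears near
# "engineer", and "she" only near "nurse".
corpus = [
    "he is an engineer",
    "he is an engineer",
    "she is a nurse",
    "she is a nurse",
]

model = train(corpus)
print(complete(model, "an"))  # → "engineer": the model echoes its training skew
print(complete(model, "a"))   # → "nurse"
```

Nothing in the code is "biased"; the skew lives entirely in the data, which is why curating training data matters so much.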

9

FacelessFellow t1_j74nqc6 wrote

Is AI not gonna change or improve in the near future?

Is all AI going to be the same?

−4

Sad-Combination78 t1_j74y7wa wrote

Think about it like this: Anything which learns based on its environment is susceptible to bias.

Humans have biases themselves. Each person has different life experiences and weighs their own lived experiences above hypothetical situations they can't verify themselves. We create models of perception to interpret the world based on our past experiences, and then use these models to further interpret our experiences into the future.

Racism, for example, can be a model taught by others, or a conclusion arrived at by bad data (poor experiences due to individual circumstance). I'm still talking about humans here, but all of this is true for AI too.

AI is not different. AI still needs to learn, and it still needs training data. This data can always be biased. This is just part of reality. We have no objective book to pull from. We make it up as we go. Evaluate, analyze, and expand. That is all we can do. We will never be perfect. Neither will AI.

Of course, one advantage of AI is that it won't have to reset every 100 years and hope to pass on as much knowledge to its children as it can. Still, this advantage will be one seen only with age.

6

FacelessFellow t1_j75215s wrote

So if a human makes an AI, the AI will have the human's biases. What about when AIs start making AIs? Once that snowball starts rolling, won’t future generations of AI be far enough removed from human biases?

Will no AI ever be able to perceive all of reality instantaneously and objectively? When computational powers grow so immensely that they can track every atom in the universe, won’t that help AI see objective truth?

Perfection is a human construct, but flawlessness may be obtainable by future AI. With enough computational power it can check and double check and triple check and so on, to infinity. Will that not be enough to weed out all true reality?

1

Sad-Combination78 t1_j75312i wrote

you missed the point

the problem isn't humans, it's the concept of "learning"

you don't know something, and from your environment, you use logic to figure it out

the problem is you cannot be everywhere all at once and have every experience ever, so you will always be drawing conclusions from limited knowledge.

AI does not and cannot solve this, it is fundamental to learning

6

FacelessFellow t1_j757skq wrote

But I thought AI was computers. And I thought computers could communicate at the speed of light. Wouldn’t that mean the AI could have input from billions of devices? Scientific instruments nowadays can connect to the web. Is it far-fetched to imagine a future where all collectible data from all devices could be perceived simultaneously by the AI?

1

Fake_William_Shatner t1_j74g1qn wrote

>If there’s only ONE objective/factual reality,

There isn't though.

There can be objective facts. But there are SO MANY facts. Sometimes people lie. Sometimes they get bad data. Sometimes they look at the wrong things.

Your simplification to a binary choice of a social issue isn't really helping. And there is no "binary choice" in what AI produces as writing and art at the moment. There is no OBVIOUS answer and no right or wrong answer -- just people saying "I like this one better."

>I imagine a true AI would know the scientific method and execute it perfectly.

You don't seem to understand how current AI works. It throws in a lot of random noise and data so it can come up with INTERESTING results. An expert system is one that is more predictable. A neural net adapts, but needs a mechanism to change after it adapts -- and what are the priorities? What does success look like?
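The "random noise for interesting results" idea above can be sketched as temperature sampling, one common way generative models trade predictability for variety. The options and scores here are invented for illustration, not taken from any real model:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax the scores and sample an index. Higher temperature flattens
    the distribution, so lower-scoring ("more interesting") options
    get picked more often."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

rng = random.Random(0)
options = ["the obvious answer", "a plausible answer", "a surprising answer"]
logits = [3.0, 2.0, 1.0]  # made-up scores for three candidate outputs

# Low temperature: almost always the top-scoring option.
# High temperature: the lower-scoring options show up regularly.
for t in (0.2, 2.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    print(f"temperature {t}: top option picked {picks.count(0)} times out of 1000")
```

At low temperature the output is nearly deterministic; at high temperature the same model produces varied, sometimes surprising choices from identical inputs.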

Science is a bit easier than social planning I'd assume.

4

Vince1128 t1_j7526vu wrote

An AI is influenced by its creator, in this case, the human race. An incorruptible AI is something from the movies, just impossible in our reality by concept. 100% objectivity is not achievable either, and if the AI were able to do it, it would include every one of us in the group of liars, evil people, or whatever you want to call it, because it would be judging us based on something unreal.

3

FacelessFellow t1_j758am3 wrote

But what about AI made by AI made by AI? Would the human influence still be there?

It sounds like you’re saying computers and computer software will never evolve past what we have now. We don’t even understand gravity yet; maybe future computers/software will be unimaginable.

0

demonicneon t1_j74prq5 wrote

A completely objective AI kind of scares me tbh. So much of human life is in the subjectivity and nuance.

2

FacelessFellow t1_j74utrs wrote

I can’t wait for AI to be able to tell us (objectively) which humans are trash. I could be on that list.

It will help uneducated voters to never vote against their own interests again 👍🏽 the politicians of the future will literally not be able to lie, because the AI will tell us the truth.

1

Chase_the_tank t1_j75y3g1 wrote

>I’m sure that true AI will have a firm grasp of objective reality. Otherwise it’s not a very good AI.

Prompt: "Does Donald Trump weigh more than a duck?"

Actual answer by ChatGPT: "I do not have current information on the weight of Donald Trump, *but it is unlikely that he would be heavier than a duck*. Ducks typically weigh between 2-4 kg, while the average weight for an adult human male is around 77 kg." [Emphasis added.]


>If we can program AI to be nothing but truthful, then it cannot be corrupted.

The ChatGPT greeting screen warns that the program "May occasionally generate incorrect information". Getting an AI to understand what is true and what isn't is an extremely difficult thing to do.

2

FacelessFellow t1_j75z2sr wrote

ChatGPT is not true AI, though, is it? People keep saying we don’t have true AI yet.

1

[deleted] t1_j73y8gl wrote

You have to be joking! The bias that has been baked into this AI is overwhelming.

12

__OneLove__ t1_j741xyx wrote

Some just don't get it. I don't think most are even vaguely aware of just how many AI projects have been cut/canceled due to the fact that ultimately 'we humans are training them' and therefore AI (at least currently) suffers from the same human traits at this juncture. AI is moving fast, and I fear too many are jumping on the AI bandwagon in full force prematurely, IMHO. ✌🏽

8

Fake_William_Shatner t1_j749s9d wrote

>The bias that has been baked into this AI is overwhelming.

You can fix these sorts of data models. It's likely SEEING the bias already in the system and not thinking like a human to obscure the unpleasantness.

1

__OneLove__ t1_j74cv72 wrote

Hmmm...who exactly is 'fix[ing] these sort of data models'? 🤔

2

Fake_William_Shatner t1_j74gyii wrote

Um, the people developing the AI.

To create art with Stable Diffusion, people find different large collections of images to get it to "learn from" and they tweak the prompts and the weightings to get an interesting result.

"AI" isn't just one thing, and the data models are incredibly important to what you get as a result. A lot of times, the data is randomized at it is learned -- because order of learning is important. And, you'd likely train more than one AI to get something useful.

In prompts, one technique is to choose words at random and have an AI "guess" what other words are there. This is yet another "type of AI" that tries to understand human language. Lot's of moving parts to this puzzle.

People are confusing very structured systems with Neural Nets, Expert Systems, Deep Data, and creative AI that uses random data and "removes noise" to approach a target image. The vocabulary in the mainstream is too limited to actually appreciate what is going on.
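The "hide words and have the AI guess them" technique mentioned above (roughly, masked-word prediction) can be sketched with a toy model. Real language models use neural networks over enormous corpora; this stand-in just counts which word appeared between each pair of neighbors in a tiny invented corpus:

```python
from collections import Counter, defaultdict

def build_context_model(sentences):
    """For each (left, right) neighbor pair, count which word sat between them."""
    model = defaultdict(Counter)
    for s in sentences:
        w = s.lower().split()
        for i in range(1, len(w) - 1):
            model[(w[i - 1], w[i + 1])][w[i]] += 1
    return model

def guess_masked(model, words, i):
    """Guess the hidden word at position i from its immediate neighbors."""
    candidates = model.get((words[i - 1], words[i + 1]))
    return candidates.most_common(1)[0][0] if candidates else None

# A tiny invented corpus.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the dog sat on the rug",
    "the cat slept on the mat",
]

model = build_context_model(corpus)

# Hide the middle word of a sentence and let the model guess it back.
sentence = "the cat sat on the mat".split()
print(guess_masked(model, sentence, 2))  # → "sat"
```

Note the model guesses the *most common* filler it has seen, which again means its answers are only as good as, and as skewed as, its training data.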

−1

__OneLove__ t1_j74pyql wrote

Respectfully, smoke & mirrors imo...

TLDR;

Um, the people developing the AI. 🤦🏻‍♂️

2

Fake_William_Shatner t1_j77i4ch wrote

>TLDR;

It's really a shitty thing about reddit that the guy who makes that comment gets more upvotes than the person attempting to explain. "Smoke and mirrors" -- which aspect of this are you saying that applies to? Be specific about the situation where they used AI to determine choices in business, society, planning. These are all different problems with different challenges, and there are so many ways you can approach them with technology.

And, this concept that "AI do this" really has to go. They are more different in their approaches than people are. They are programmed AND trained. There's a huge difference between attempts to simulate creativity, attempts to provide the best response that is accurate, and making predictions about cause and effect. The conversation depth on this topic is remedial at best.

AI can absolutely be a tool here. It just takes work to get right. However, the main problem is the goals and the understanding of people. What are they trying to accomplish? Do they have the will to follow through with a good plan? Do the people in charge have a clue?

0

__OneLove__ t1_j77m3wj wrote

Look, don’t take it personally. Ultimately, you’re stating ‘people’ (known to be naturally prone to bias) are going to ‘program the bias’ out of AI (speaks for itself imo). That was exactly the point I was making, & apparently other sub members agree. Simply put, it’s such a poor argument imo, to the point that I am not willing to sit here & read paragraphs of text to the contrary. I don’t state that to offend you (whom I don’t know), I’m just keeping it 💯 from my perspective. You are obviously entitled to your opinion as well, hence my keeping my response short/succinct vs. trying to convince you otherwise.

At a minimum, I might suggest not taking these casual internet discussions with strangers so personally. Nothing more than a suggestion…

Peace ✌🏽

1

Fake_William_Shatner t1_j77rmew wrote

>vs. trying to convince you otherwise.

Yes, that would require you to know more about what you are saying. "Succinct" would require you to actually connect your short observation to SOMETHING -- what you did was little more than just say, "Not true!" People didn't like my geek answer and how it made them feel, so you got the karma. I really don't care about the Karma; I care about having a decent conversation. I can't do that with "Smoke & Mirrors" when I could apply it to at least a dozen different aspects of this situation, and I have no idea what the common person thinks. And the idea that people have one point of view at a time -- that's foreign to me as well.

>At a minimum, I might suggest not taking these casual internet discussions with strangers so personally.

Oh, you think my observation about "this is a shitty thing" is me being hurt? No. It's ANNOYING. It's annoying that ignorant comments that are popular get upvotes. Usually I'm cracking jokes and sneaking in the higher concepts for those who might catch them -- because sometimes that's all you can do when you see more than they seem to.

I could make a dick joke and get 1,000 karma and explain how to manipulate gravity and get a -2 because someone didn't read it in a textbook.

However, the ability for people to think outside the box has gotten better over time, and it's not EVERYONE annoying me with ignorance, just half of them. That's a super cool improvement right there!

0

__OneLove__ t1_j77tajo wrote

Please, by all means, keep both proving my point & justifying my unwillingness to engage with this passive-aggressive drivel 🙂

...and yet this 🤡 continues to wonder/question why he warrants downvotes 🤔🤣✌🏽

1

Fake_William_Shatner t1_j78j1zv wrote

>why he warrants downvotes

Some people seem to think up and down votes prove the quality of the point being made. No, it's just the popularity in that venue at a given moment.

You could always explain what your comment meant. You don't have to, though. It's important not to take these comments too seriously. But, if you keep commenting on everything else BESIDES what you meant by "smoke and mirrors" then I will just not worry.

I have to commend you however on some top notch emoji usage.

1

__OneLove__ t1_j78jt4s wrote

Take care of yourself & have a nice life internet stranger. In the interim/simply put, I am blocking you. ✌🏽

1

JenMacAllister t1_j73zk7c wrote

It's easy to program out the bias. We have seen just how hard that is to do with humans. (Over and over and over....)

−3

__OneLove__ t1_j74d80j wrote

So who exactly is 'program[ming] out the bias'? 🤔

7

[deleted] t1_j740ayi wrote

Yes, you are technically correct. But around half of society live in a place where feelings are more important than facts. Remember the AI that was profiling potential criminals? Well, that feely segment of society didn't like the factual outcome and the AI was pulled. You will never get an objective outcome while feelings beat hard facts.

2

Fake_William_Shatner t1_j74ao1i wrote

>Remember the AI that was profiling potential criminals?

Oh, it doesn't sound like you are the "rational half" of society either.

I can definitely predict the risks of who will become a criminal by zip code. Predicting crime isn't as important as mitigating the problems that lead to crime.

Feelings are important. If people feel bad, you need to convince them, or, maybe have some empathy.

It's not everyone being entitled. Some people don't feel any control or listened to. And the point of not having "bias" is because cold hard logic can create bias. If for instance, you ONLY hire people who might 'fit the culture in tech support' -- then the bias would inherently look at who already has tech support jobs and who already goes to college for it. So, you have more of those demographics and reinforce the problem.

It's not necessarily LOGIC -- it's about what you are measuring and your goals. What is the "outcome" you want? If you ONLY go on merit, sometimes you don't allow people who don't yet have merit to get skills. Kids with parents who went to college do better in college -- so, are you going to just keep sending the same families to college to maximize who logically will do better? No. The people enjoying the status quo already have the experience -- but what does it take to get other people up to speed? Ideally, we can sacrifice some efficiency now for some harmony. And over time, hopefully it doesn't matter who gets what job.

Society and the common good are not something we are factoring in -- and THAT looks like putting your finger on the scale.

1

[deleted] t1_j74hh9v wrote

Cancel the AI project, some dude on reddit can predict by zip codes. Well, I guess that one is done! (joking!)

Feelings are important? Yes, they are, and that is why we should have real humans, with real families and real-life experience, acting as judges and juries; my reasoning follows.

But the Tech sector DOES employ people who fit the culture, just not in the way you suggest. Take a wild guess at how many people employed in Silicon Valley vote the same way, feel the same about Trans issues, feel the same about gun control, feel the same about Christianity, feel the same about abortion.

THIS is the key problem: the AI is being developed and maintained exclusively by this group. Let's say they make up half of the population -- where does that lead?

I feel AI is incredible, but I really think it needs to be given bounds: building better mouse traps (or cars, planes, energy generation, crop rotation, etc., etc.), NOT making decisions directly for human beings.

−1

Fake_William_Shatner t1_j77j8u5 wrote

>Take a wild guess at how many people employed in Silicon Valley vote the same way, feel the same about Trans issues, feel the same about gun control, feel the same about Christianity, feel the same about abortion.

They vote the way educated people tend to vote. Yes -- it's a huge monoculture of educated people eschewing people who ascribe light switches to fairy magic.

>THIS is the key problem,

No, it's thinking like yours that is the key problem when using a TOOL for answers. Let's say the answer to the Universe and everything is 42. NOW, what do you do with that?

>NOT making decisions directly for human beings.

That I agree with. But not taking advantage of AI to plan better is a huge waste. There is no putting this Genie back in the bottle. So the question isn't "AI or not AI"; the question is: what rules are we going to live by, and how do we integrate with it? Who gets the inventions of AI?

It's the same problem with allowing a patent on DNA. The concept of the COMMON GOOD and where does this go in the future has to take priority over "rewarding" someone who owns the AI device some geek made for them.

1

JenMacAllister t1_j741tz5 wrote

Yes it did. Anything created by humans will contain the biases of those humans. However, others will recognize this and point it out so it can be removed in future versions.

I don't expect this to be 100% non-biased on the first or even 100th version. I do not think all the humans on this planet could even agree on what that would mean.

But over time I'm sure we could program an AI to be far less biased than any human, and most humans would agree that it was.

−1

ElectroFlannelGore t1_j7428du wrote

Cue (or queue... They both sort of work) people reacting emotionally because of their worry about their own indiscretions.

I approve of AI making suggestions and having that reviewed with the option to override by human judges.

2

JenMacAllister t1_j743q7c wrote

I agree, the same way Doctors would use AI to diagnose patients because of the way the AI could access the entirety of human medical knowledge to make its suggestions. No reason why Lawyers and Judges could not do the same right now.

Over time the AI could earn more and more trust to where we might give up on those people and listen to the AI.

3

Fake_William_Shatner t1_j74bt45 wrote

Yeah, the computer doesn't "forget," so having such a thing get you at least 90% of the way is useful whether or not it can do 100% of the job.

1