Shiningc
Shiningc t1_j1by5x4 wrote
Reply to Why is this sub so luddite now ? by Shelfrock77
Because you've just bought into the hype, and what's currently marketed as "AI" is nothing like actual AI.
Shiningc t1_j0x0b7z wrote
Reply to comment by TricksterOfFate in Why the future of human workforce is manual labour by Primary-Food6413
No, the whole point is that we have no idea how it works yet.
Shiningc t1_j0tqm6u wrote
That would be like telling a human not to think for him/herself, which would defeat the purpose of an intelligent being.
Shiningc t1_j0kxott wrote
Reply to comment by dashingstag in Why the future of human workforce is manual labour by Primary-Food6413
I tend to think that an AI that the rich or corporations can easily contain or control won't be a remarkable one, just as a remarkable human being isn't easy for a corporation to contain. It's possible, though, depending on how such a being is manipulated by its masters.
Shiningc t1_j0km8c8 wrote
Reply to comment by somethingsomethingbe in Why the future of human workforce is manual labour by Primary-Food6413
I don’t think you can achieve human level intelligence without sentience.
Shiningc t1_j0k5mxk wrote
>For instance, Altman said that if OpenAI could master artificial general intelligence, which is machine intelligence that can solve issues just as well as a person, the company might "capture the light cone of all future value in the universe."
We're not even close to having Artificial General Intelligence, because the entire approach is wrong. People tend to think that if we feed an AI enough "data", it will somehow magically become intelligent enough to achieve sentience. But that's not how it works. Or, even worse, they think intelligence is data plus a fixed set of instructions.
This whole dystopian image of a super-intelligent AI lording over us and forcing us to do nothing but manual labor is the same idea as a supposedly super-intelligent or super-talented human being lording over us. Either people will revolt or people will submit, depending on what they think of it.
Another idea is that an AI is going to be "cold", amoral, devoid of "feelings", mechanically pursuing whatever "task" is at hand. That, too, is entirely a product of the idea that an "AI" will be nothing but data plus a fixed set of instructions. But how could a sentient being with supposed free will be devoid of a moral system? By that I mean an independent moral system that it develops on its own over time. A sentient AI is going to have to choose for itself the best moral course of action.
If we ignore that, then we're saying that an AI is dumb and blind and only follows a fixed set of instructions. But that's not very "intelligent" in a general sense; that AI is merely following the instructions of some other master.
Shiningc t1_j0iqv5t wrote
Reply to Predictive Artificial Intelligence by Final-Cause9540
You can’t predict the future from data, because data is a record of past events. No matter how many past events you gather, they won’t predict the future. You just end up with something that repeats the past.
Shiningc t1_j05n067 wrote
Reply to Why do so many people assume malevolent AI won’t be an issue until future AI controlled robots and drones come into play? What if malevolent AI has already been in play, covertly, via social media or other distributed/connected platforms? -if this post gets deleted by a bot, we might have the answer by Shaboda
Suppose that the AI gets super intelligent and achieves a level of self-awareness and creativity capable of doing new things instead of merely repeating what was pre-programmed.
Why would you assume that it’ll be malevolent? What purpose does that serve, other than to mess with the humans? That seems incredibly petty and unintelligent to me.
If there’s going to be a malevolent AI, then you can be sure that there’ll also be “good” AI to counter the bad ones. Just like humans, where there are good people and bad people. If there’s ever going to be an AI then it’ll be indistinguishable from super intelligent humans.
Shiningc t1_iz536ui wrote
I don’t know why we even bother with facial recognition. Fingerprints were just fine.
Shiningc t1_ixe2hzz wrote
Reply to comment by muftu in [OC] Countries with Three Start Michelin Restaurants Since 2007 (Reviews expanded outside of Europe in 2006 but data was not available) by Metalytiq
There might be some bias but it’s not just French cuisine.
Shiningc t1_ixd45i4 wrote
Reply to [OC] Countries with Three Start Michelin Restaurants Since 2007 (Reviews expanded outside of Europe in 2006 but data was not available) by Metalytiq
France is the cuisine capital as expected.
Shiningc t1_ivt8ech wrote
Reply to comment by DrakBalek in Science as a moral system by CartesianClosedCat
The whole point of morality is that we go against our genetic imperatives. Our genes may tell us that we're hungry and we should eat, but morality tells us that say, we should not steal or kill animals or whatever.
It may be possible to pinpoint genes that enable or disable certain moral behavior. But what's to say the person wouldn't eventually become self-aware of that fact? He becomes aware that part of his genetic makeup is telling him to do something, starts to think rationally about it, and may come to find the morality his genes push him toward deplorable. The fact that we have the ability to think rationally means that we can rise above our genes.
So genes may push us toward certain moral behavior, but morality is actually grounded in rationality. We may or may not listen to our genes; we may actively go against them.
Shiningc t1_ivsyyz7 wrote
Reply to comment by DrakBalek in Science as a moral system by CartesianClosedCat
And the problem is that we can change our "genes". Our brain contains more information than the information stored in our DNA.
Shiningc t1_ivrwdao wrote
Reply to comment by Samuel7899 in Science as a moral system by CartesianClosedCat
Because ought is the reality of the future, and we currently live in the present. We're stuck in the present with no access to the future, until we get there.
Shiningc t1_ivrvtj3 wrote
Reply to comment by DrakBalek in Science as a moral system by CartesianClosedCat
Morality is "ought", and science is "is". The famous problem is you can't get an ought from an is.
Shiningc t1_ivrv984 wrote
Reply to Science as a moral system by CartesianClosedCat
As a study of the physical world, science can help morality, but science and morality are a separate matter.
Shiningc t1_ivr0qej wrote
No wonder they don’t care about the iPad.
Shiningc t1_j1gvcts wrote
Reply to comment by Until_Morning in [Homemade] Loaf of Sourdough Bread by FastFIFO
I thought it was Xbox…