Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is"
Submitted by Smoke-away t3_yo6jeu in singularity
Reply to comment by MarkArrows
Computers and brains are, physically and phenomenally speaking, simply different. Their physical relationships to consciousness are not the same. In literal terms they are different mechanisms and different physical systems. Why would anyone settle for word associations, like how a chatbot talks for instance, or for mere behaviorisms?
If you're right and computers never gain true sentience, what's lost by being ethical to them? It'd be like saying "please" and "thank you" to Alexa or Siri. A meaningless gesture, but harmless overall.
But on the other hand, what if you're wrong with that assumption?
Not much is lost. But some of the sense that consciousness and life are unique and precious may be lost, if we're taking it literally, as opposed to doing it out of human mannerism.
I'm not wrong in my assumptions. And it's not an assumption anyway.
It is, in fact. You don't have any evidence to support the claim, "Machines can never be conscious individuals"; you've simply asserted it to be the case. Or do you in fact have an evidence-supported hypothesis about consciousness adequate for building novel ones?
The evidence is the fact that they are different to begin with. Computers can't be conscious; a machine that was conscious would be something different from a digital computer. That's what I meant. That's why I don't think this piece by Bostrom serves a good purpose: it's settling ethics on something incomplete.
> Computers can't be conscious; a machine that was conscious would be something different from a digital computer.
How do you know that? What evidence has led you to this conclusion other than "it's different"? Do you know that at various times and places, various humans have been regarded as not being conscious because "they're different"? What actual evidence do you have of this? Have you constructed a model of a conscious mind on a digital computer and had it fail to display consciousness? How did you discern whether it did or didn't? How do you know your model was accurate? How do I know any being in this universe aside from myself is conscious in a solid and grounded way, rather than just making the assumption?
Well, it wouldn't be a model, and generally speaking that's the point. And basically, "it's different" is observed in the fact that it just isn't firing like neurons do, and there's more besides.
Do you understand consciousness well enough to explain it such that no mystery remains?
No, but at this point there is still knowledge of a difference, which could be described at many points of cause and effect, and that is the important thing. It is simply scientific knowledge of a difference between how "AI" and "digital" systems operate as opposed to what brains do.
And a heavier-than-air plane will never fly. After all, how can it flap its wings fast enough?
What knowledge, exactly, are you claiming that lets you be so certain of this?
Because a simulation cannot be conscious; otherwise it becomes semantics.
So, there is no compelling reason that consciousness cannot exist within a digital system?
How can you objectively prove that you are conscious? Spoiler: you can't.
I can't, yet. I do not think that you have sufficient evidence to claim that it cannot be done, merely that we do not yet know a way to do so.
Do you believe that everything will eventually be explained?
Will? The prior on that is not sufficient to rise to the level that I would call belief.
Can? Yes.
That doesn't matter. Humans are conscious as a matter of fact, so it doesn't need "proving". That's just simply a fact.
It would be "settling" ethics at an incomplete place, by the very nature of what it would mean for a computer to simulate a consciousness, and of the related wording about computation or the math. By their very nature, the differences are exactly that. An identical system wouldn't be a computer. It should be obvious from cause and effect that, scientifically, it begins from this fundamental difference.
I did not say "simulating". I said "consciousness" and "exist".
Digital systems can only simulate.
That is a claim. What is the evidence for that claim?
That's what simulation means.
You are the one who keeps insisting that everything on a digital system is a simulation.
I keep asking: how do you know everything on a digital system is a simulation?
Can you please answer my question, instead of reiterating your claim?
That's what it means; otherwise it becomes semantics.
Okay, good talk. Be well!
> I'm not wrong in my assumptions. And it's not an assumption anyway.
https://utminers.utep.edu/omwilliamson/ENGL1311/fallacies.htm
This is literally the very first logical fallacy people run into: I'm right, and I am unable to entertain the notion that I could be wrong.
The point of logical reasoning is to be able to take assumptions you do not believe in and examine them starting from both sides: a serious attempt, not some pretend strawman. Once you have the full fallout of both sides, right or wrong, you can compare them.
Besides, the very fact that other people don't agree with your assumption in the first place shows you there's something more to it that you're not seeing or that they're not seeing. Whatever logic convinced you, it didn't convince others intuitively. From here, your question should be "Am I the strange one, or are they?" Instead, it seems more like you simply write other people off.
Start from the assumption that you're wrong and explore from that root downwards. It doesn't matter how you're wrong in this case, it's hypothetical. For example, some divinity shows up and tells the world outright that consciousness is a pattern, and computers are able to generate this pattern the same way we are. Or any number of reasons that you can't refute, make up your own if you want. We're interested in the fallout from that branch of logic.
It's actually established by first-order logic about the phenomenal. A straight line of reasoning determines it, along with evidence gathering on both empirical differences and non-empirical points. It's like 1+1=2, 1+1+1=3, 1+1+1+1=4, and so on in a series. Any confusion comes from treating it as belief-based reasoning, and it isn't truly belief. Exploring the notion of this being wrong is a waste of time, per the explanation above.
I'm a little impressed at how I show it's literally a logical fallacy to think "I can't be wrong because my argument has convinced myself." And your response is: "My argument has convinced myself, so it's a waste of time to consider alternate arguments."
RNA and DNA work on similar rulesets and determinism. If you look at the base of what makes cells function, you'll find plenty of similarities to mechanical true/false, if/else logic at the bottom of the pole. Everything ends up being math.
We wouldn't consider them conscious, but they are organic. A variation of all these rule-abiding proteins and microorganisms eventually evolved into us.
Thus, because machines follow a line of rules right now, there exists the possibility that they build on this until they're complex enough to form an artificial lifeform with consciousness, in the same way we did.
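As a toy illustration of that point (a sketch of my own, not anything referenced in the thread): simple, uniform true/false rules can compound into complex aggregate behavior. One step of Conway's Game of Life, where the entire "ruleset" is a pair of if/else checks per cell:

```python
def life_step(grid):
    """Apply one step of Conway's Game of Life to a 2D grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbors (cells off the edge count as dead).
            n = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            # The whole "ruleset" is two if/else checks.
            if grid[r][c] == 1:
                new[r][c] = 1 if n in (2, 3) else 0  # survival rule
            else:
                new[r][c] = 1 if n == 3 else 0       # birth rule
    return new

# A "blinker": three live cells in a row flip between horizontal and vertical.
blinker = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
print(life_step(blinker))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

From rules this simple, arbitrarily complex structures (including self-replicating ones) can emerge, which is the analogy being drawn to rule-following chemistry.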
That said, I think it's a lost cause to argue with you. You aren't even able to do the basics of debate, even when it's directly pointed out.
I'm not debating it or starting an argument. And the cells don't work as a comparison, because they are not a single conscious human being.
Also, it's not actually a fallacy at all to ignore arguments.
So? Why would a physical difference have anything to do with whether or not a different system can be conscious?
There is evidence that it is not, and not just by empirical means. I mean that the differences I am talking about are missing at the core of these computers.
Consciousness isn't material. It's not a substance but an information pattern. As long as you can run that pattern, the underlying mechanism is irrelevant.
> Computers and brains are, physically and phenomenally speaking, simply different.
Why does this matter if the output is the same?
> Their physical relationships to consciousness are not the same.
What physical relationship between the brain and consciousness can you concisely point to? Why would an AI not be conscious if it's aware of and responsive to its surroundings?
Those behaviors or outputs are subjective.
Apply the Turing test: if it walks like a duck and quacks like a duck...