Submitted by Rumianti6 t3_y0hs5u in singularity
MassiveIndependence8 t1_irscqub wrote
Reply to comment by Rumianti6 in Why does everyone assume that AI will be conscious? by Rumianti6
There’s nothing inherently “supernatural” about being biological; funnily enough, it’s the most “natural” thing out there. Pedantry aside, I understand where you’re coming from, so I’ll take a crack at your argument. You seem to have a problem with equating two sets of characteristics from two inherently different structures. After all, machines aren’t made from what we are made of, and aren’t structured the same way we are, so how can we compare the traits of two seemingly different machines and assert that they are somehow equivalent? How can we be sure that their “consciousness”, if we can call it consciousness at all, is similar to our consciousness? If you define consciousness this way and confine it to biological structure, then sure, I agree that consciousness can never arise from anything that is not biological.
But that’s not a very helpful definition. Say a highly intelligent group of aliens were to come down to Earth and we discovered that they are a silicon-based life form as opposed to our carbon-based one. Even worse, we realized that their biology and their brain structures are wired differently than ours. Would you then assert that these beings have no consciousness at all, simply because they are different from us? That a whole race with science, art and culture, which “seems” to feel pain, joy and every emotion out there, is just a collection of automatons?
Before you brush this off as a stupid hypothetical, it does point to an interesting fact, and to the dilemma that comes with it:
Every function out there can be modelled and recreated with a neural network.
That is a fact that has been mathematically proven (the universal approximation theorem; you can read up on it in your free time), but the main point I’m trying to make is that the human mind, just like anything else in the universe, is a function: a mapping from a set of inputs to a set of outputs. We temporally map our perceptions (sight, hearing, taste, …) into actions in the same way that a function maps an input to an output. Give a Turing machine enough computational power and it can simulate the way a human behaves. It’s only a matter of time and data until such a machine exists.
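To make the “mind as a function” framing concrete, here is a minimal sketch (my own illustration, not anything from the thread) of a one-hidden-layer network learning to reproduce an input-to-output mapping. The target function, layer width, and training settings are arbitrary assumptions chosen just to show the idea.

```python
# Illustrative sketch: a single-hidden-layer tanh network fit to y = sin(x),
# showing a network approximating an arbitrary input -> output mapping.
import numpy as np

rng = np.random.default_rng(0)

# Target "function": inputs x mapped to outputs y = sin(x)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

hidden = 64                                   # width of the single hidden layer
W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 1.0, (hidden, 1)) / np.sqrt(hidden); b2 = np.zeros(1)

lr = 0.01
for step in range(20000):
    # forward pass: tanh hidden layer, linear output
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # backward pass: gradients of the mean-squared error
    dpred = 2 * err / len(x)
    dW2 = h.T @ dpred
    db2 = dpred.sum(axis=0)
    dh = dpred @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)

    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean(err ** 2)))  # small value -> good approximation
```

The point of the sketch is only that the network never “knows” what sine is; it just learns to map the same inputs to the same outputs, which is the sense in which the comment treats behaviour as a function.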
But are those machines actually “conscious”? Sure, they act like we do in every scenario out there because they are functionally similar. But they aren’t like us, because they aren’t using the same hardware components to compute, or, even worse, they might not even perform the same computation as we do. They might arrive at the right answer, but they could do it differently than we do.
So there are two sides to the argument, depending on the definition you use. I’m on the side of “if it quacks like a duck, then it is a duck”. There’s no point in arguing about nomenclature that distinguishes something that is essentially indistinguishable to us from the outside.
visarga t1_irta9lz wrote
It's not just a matter of a different substrate. Yes, a neural net can approximate any continuous function, but not always in a practical or efficient way. The result is proven for networks of arbitrary (unbounded) width, not for the finite-sized networks we use in practice.
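A rough way to see the gap between “can approximate in principle” and “does so practically” is to fit the same target with narrow and wide hidden layers. This is my own sketch; the library, target function, widths, and iteration count are illustrative choices, not anything from the thread.

```python
# Illustrative sketch: the same target fit with narrow vs. wide hidden layers.
# A 2-unit net underfits a wiggly target; a 128-unit net fits it closely.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(-np.pi, np.pi, 512).reshape(-1, 1)
y = np.sin(3 * x).ravel()          # a slightly wigglier target function

for width in (2, 8, 128):
    net = MLPRegressor(hidden_layer_sizes=(width,), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(x, y)
    mse = np.mean((net.predict(x) - y) ** 2)
    print(f"hidden width {width:>3}: training MSE = {mse:.4f}")
```

In other words, the guarantee is existential: *some* sufficiently wide network works, but the finite network you can actually afford to train may fall well short.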
But the major difference comes from the agent's environment. Humans have human society, our cities, and nature as an environment. An AI agent, the kind we have today, has access to a few games and maybe a simulation of a robotic body. We are billions of complex agents, each more complex than the largest neural net; they are small and alone, and their environment is not real but an approximation. We can do causal investigation by intervening in the environment and applying the scientific method; they can't do much of that, because they don't have that kind of access.
The more fundamental difference is that biological agents are self-replicators and artificial agents usually are not (AlphaGo had an evolutionary element going). Self-replication leads to competition, which leads to evolution and to goals aligned with survival. An AI agent would need something similar to be guided toward evolving its own instincts; it needs to have "skin in the game", so to speak.
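For what it's worth, the self-replication -> competition -> evolution chain can be sketched in a few lines. This toy loop is my own illustration with an invented genome, fitness, and mutation scheme; it is not a claim about AlphaGo or any real system, only a picture of how selection alone pushes replicators toward survival-aligned "goals".

```python
# Toy sketch: replicators whose only score is how well they survive selection.
import random

random.seed(0)

def fitness(genome):
    """Survival score: how close the genome gets to an arbitrary 'niche' value."""
    return -abs(sum(genome) - 10.0)

# start with a random population of simple "replicators"
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(50)]

for generation in range(100):
    # competition: only the fitter half gets to replicate
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # replication with mutation: offspring are noisy copies of the survivors
    offspring = [[g + random.gauss(0, 0.1) for g in parent] for parent in survivors]
    population = survivors + offspring

best = max(population, key=fitness)
print("best survival score:", round(fitness(best), 3))   # approaches 0 over time
```

Nothing in the loop tells the replicators to "want" anything, yet after a few generations the population behaves as if it is pursuing the niche value, which is the sense in which survival pressure manufactures instincts.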