No_Ninja3309_NoNoYes t1_j3l7g1s wrote

ASI, if it is actually built, which is not a given, would be too busy with space colonization. Colonization does not have to entail human presence; self-replicating von Neumann probes could do just fine. You would need to update their software through powerful transmitters and communicate with them. Let the drones mine for resources and bring it all home. We can build a Dyson swarm in the solar system, and the robots can build Dyson swarms in other star systems. They can transform that energy into something that can be transported back home, or transmit it to our ever-growing home swarm.

1

No_Ninja3309_NoNoYes t1_j3gavap wrote

I take the view that if we don't understand it, it's irrelevant for science. Not literally, but in the sense that if we don't see a tree falling in a forest, it never happened. We might be living in a computer simulation, but if we can't observe it, who cares? Logic is not sufficient to prove everything; otherwise experiments would be unnecessary. You can use logic to build mental models, but models are not the real thing. Of course, there is no reason why artificial intelligence can't do experiments, even if they are only thought experiments. The problem is that every logic system has to start somewhere, with assumptions and simplifications. For instance, induction assumes that one step leads to the next. Causality assumes that cause leads to effect. But what was there before the universe? Only reality can answer questions without these concerns. And there is chaos: small changes in the initial conditions of a complex system can make its future unpredictable, because those small perturbations amplify over time. A butterfly flapping its wings in China can cause a storm elsewhere. Philosophy is fine, but science is needed too.
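To make that chaos point concrete, here is a toy sketch (my own illustration, not tied to any particular system): two runs of the logistic map in its chaotic regime, started a hair's breadth apart, part ways completely within a few dozen steps.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.2, 0.2 + 1e-10  # two nearly identical starting points

for step in range(51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```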

0

No_Ninja3309_NoNoYes t1_j377ss4 wrote

It's a provocative idea, and I want to offer a provocative comment. Yes, we can! But there are many roadblocks. If we take ChatGPT, something as basic as the cost of matrix multiplication is already a hurdle. DeepMind is working on ways to reduce the number of operations needed to multiply matrices, but I don't see that solving the issue. Then there are tricks like trying to distill a leaner neural network from a bigger one, but that is not trivial to implement.
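For the curious, the core of that distillation trick fits in a few lines. A rough PyTorch sketch, where the temperature and loss weighting are illustrative choices of mine, not anyone's production recipe:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style distillation: the student matches the teacher's
    softened output distribution, plus the usual hard-label loss."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```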

Another option could be to have many small neural networks or other models that use different, or the same, data. The results could then be gathered in a distributed fashion through something akin to averaging. Or you could use the Bitcoin model, handing out small assignments instead of mining. You could have a Wikipedia-style commons of data, like Common Crawl.
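A bare-bones sketch of that averaging idea, in the spirit of federated averaging (the per-model sample-count weighting is just the simplest sensible choice):

```python
import numpy as np

def average_models(model_weights, sample_counts):
    """Combine many locally trained models by weighted averaging of
    their parameters. Each entry of model_weights is a list of numpy
    arrays, one per layer; sample_counts weights each contributor."""
    total = sum(sample_counts)
    return [
        sum(w[i] * (n / total) for w, n in zip(model_weights, sample_counts))
        for i in range(len(model_weights[0]))
    ]

# Toy usage: three one-layer "models" trained by different parties.
models = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
print(average_models(models, sample_counts=[10, 10, 20])[0])  # -> all 2.25
```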

But in the end, one person and a laptop won't get far. You need thousands of hours to even understand the basics, and you need a supercomputer. But never say never. Maybe there is a better way. After all, humans don't need that much data to learn.

3

No_Ninja3309_NoNoYes t1_j30vkg5 wrote

Someone went from 'ChatGPT is a great toy' to 'this is some sort of AGI!!!'. We don't even agree on what intelligence is, or why it should be general. I mean, I know wicked smart people, really smart, and they are nowhere near Einstein when it comes to physics. But that is fine, right? I know very little about economics, yet I would not say that I have no general intelligence. Can't tell you what general intelligence means, though.

But I think computer vision and language and spatial awareness and simple logic and basic knowledge are a must. And possibly seven other things. The Turing test sounds reasonable, and you have IQ tests, but without a PhD in the relevant field, I don't want to propagate misconceptions. It seems that we're so far in the hype cycle that anything goes.

So I think that we have to calm down and think things through. What's the worst that can happen? What's the best that can happen? How likely are they? IMO the worst is killer robots, autonomous or semi-autonomous. I think they are unlikely in the short term, but in ten years, maybe not so much. The best thing, in my opinion, would be that we're able to solve many problems and usher in another scientific revolution. Also unlikely, since the Einsteins of the world are not blogging or active on social media. They communicate through scientific papers, and hardly anyone can read those except other experts.

And another thing: this talk of parameters is just misguided. It all sounds like 'I have a penny; if I had billions of dollars, I could buy the moon.' First, more parameters mean nothing if the data or the programming is bad. Second, you need time and computers to find good values for the parameters. You can think of them as pixels in a picture; this is an oversimplification, of course. You need to find the Mona Lisa, and for that you need to get the right colour for each dot of the painting. IMO ChatGPT doesn't have all its pixels right, but somehow it beats the competition. The more pixels you have, the harder it is to get them all right: the space of possible combinations blows up exponentially. If you have ten possible colours, two pixels correspond to a hundred combinations, six to a million, twelve to a trillion. A parameter in a neural network is usually a single- or double-precision floating-point number, at least dozens of bits, with many thousands of possible values for each one.
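The arithmetic, if you want to check it yourself:

```python
# c choices per slot and n slots give c**n combinations.
for colours, pixels in [(10, 2), (10, 6), (10, 12)]:
    print(f"{colours} colours, {pixels} pixels: {colours**pixels:,} combinations")
# 10 colours, 2 pixels: 100 combinations
# 10 colours, 6 pixels: 1,000,000 combinations
# 10 colours, 12 pixels: 1,000,000,000,000 combinations
```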

Overall, we don't have AGI yet (whatever AGI means). There are good and bad things that can happen, but the more you stretch the narrative, the less likely it becomes. It's fun to talk about parameters, but it's like talking about the volume of brains. Also, I don't understand the obsession with AGI. Specialized AI is fine, right? ChatGPT does a good job if you know its limitations.

1

No_Ninja3309_NoNoYes t1_j2r4py5 wrote

Through books, movies, art, VR, hobbies, and maybe seven other things. Of course, life expectancy will be over a century. The world population will have declined to a level that most of us can't imagine right now. Hopefully, a VR-type Internet will exist, but more secure and decentralised than what we have now. It will potentially mix the best features of Reddit, Twitter, WhatsApp, YouTube, Google, ChatGPT, Wikipedia, and other apps/websites, but in a democratic and rational fashion. We could have a hive mind lite: not like the Borg, but in a good way.

2

No_Ninja3309_NoNoYes t1_j2mwjgl wrote

AI can currently learn in three ways: unsupervised; supervised, with labeled data; or through reinforcement, where it knows it has done well if it wins a game or achieves some other objective, such as capturing a pawn. But AI is basically software and hardware configured by humans. Someone programmed the machines to interpret data in a certain way. You can tell them to interpret a list of numbers as the representation of a text or an image. Actually, you are not telling them anything: if you write code, it gets compiled or interpreted into lower-level assembly code or instructions for a virtual machine, which in turn get converted to machine language. All computers understand are very basic instructions, depending on the specifics of the hardware.
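Schematically, the three setups look something like this. A toy sketch with made-up data, using scikit-learn for the first two and a hand-rolled Q-learning update for the third:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 2)        # feature vectors

# 1. Unsupervised: find structure without any labels.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# 2. Supervised: learn a mapping from labeled examples.
y = (X[:, 0] > 0.5).astype(int)   # toy labels
clf = LogisticRegression().fit(X, y)

# 3. Reinforcement: adjust value estimates from rewards.
q = np.zeros((5, 2))              # Q-table: 5 states, 2 actions
state, action, reward, next_state = 0, 1, 1.0, 2  # one observed transition
alpha, gamma = 0.1, 0.9
q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
```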

You can say that the human brain is just a soup of neurons, fluids, and neurotransmitters. But we clearly don't have machine or assembly language equivalents. The brain is much too complex with who knows how many layers of abstraction. It was clearly not designed by teams of engineers. Maybe this architecture is why brains are more flexible than current AI.

1

No_Ninja3309_NoNoYes t1_j2mfqmz wrote

I don't really see the need for ASI, unless you mean a hive mind of AGIs. And in that case, why do we need AGI? An ecosystem of narrow AI products could work fine too. I can tell ChatGPT to write a funny, angry, or scared poem and it does a decent job of it. Not as good as a human poet, but hey, do we really need that? I mean, computers can beat us at chess already; leave us an ounce of dignity. Of course, ChatGPT doesn't really understand emotions or psychology. It sort of associates 'angry', 'funny', and 'scared' with other words. And OpenAI implemented filters, so you will have difficulty getting it to use hateful words. So maybe in the future you will have AI cops blocking bad content from other AIs.

2

No_Ninja3309_NoNoYes t1_j2hv3z1 wrote

The quantum world deals with the very small. It's really hard to grasp because we're not able to imagine that kind of scale. The atom is like a miniature solar system with lots of empty space. At the beginning of the previous century, Rutherford bombarded gold atoms with alpha particles (helium nuclei), and some of the particles were scattered straight back. There were other strange experiments on the interaction of light with slits, which made people think that light behaved as waves. But the photoelectric effect showed that light can also knock electrons out of matter, producing electricity, as if it were made of particles.

Theoreticians came up with mathematical functions that correspond to probabilities: quantum wave functions. In this theory, some things are unknowable. Einstein scoffed that 'God does not play dice!' But the theory has been tested; we can compute probabilities. This is done with special integrals. The gist of it is that particles are not localized. They can go through potential barriers: quantum tunneling, the equivalent in our world of walking through a wall. And there is the Heisenberg uncertainty principle, which states that you can't know everything about a particle. You can measure certain properties accurately, but that will prevent you from knowing other properties as well.

The wave functions, when turned into numbers through the appropriate operations, give a probability between zero and one. However, in the quantum world everything is much quicker than in ours, and very little energy makes quite a difference. Quantum states want to be in their ground state. You can think of them as pendulums at rest most of the time: give them a quantum of energy and they want to swing back. This happens in such a short time that none of us can really imagine it.
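The 'appropriate operation' in the simplest case is squaring the absolute value of each amplitude (the Born rule). A tiny sketch for one qubit pushed into superposition by a Hadamard gate:

```python
import numpy as np

state = np.array([1.0, 0.0], dtype=complex)  # qubit in the ground state |0>

# Hadamard gate: puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

probs = np.abs(state) ** 2  # Born rule: probabilities are |amplitude|^2
print(probs)                # -> [0.5 0.5]
```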

Unfortunately, we need to bring qubits out of their ground state to do meaningful computation. This means shielding them from outside influences, which currently requires a special mix of helium isotopes, tubing, and other cryogenic hardware. This part is hard to shrink.

Also, because the quantum world is a busy chaos and decoherence (falling back toward the ground state) exists, quantum computers make many errors. Algorithms to correct those errors are still in development.

Qubits work fast and, due to their nature, can explore possibilities that classical computers simply can't. They could be useful for AI systems, which would have an impact on consumers. But for the foreseeable future, the resources required are unaffordable for almost everyone.

But never say never. If we get better algorithms, and someone figures out how to keep qubits coherent longer at somewhat higher temperatures, then perhaps, just by having many qubits, we could get closer to your wish.

3

No_Ninja3309_NoNoYes t1_j28npx6 wrote

I don't have a PhD in economics, and after reading Sam Altman's essay, I have the feeling that he doesn't have one either. It reads like self-serving rhetoric, throwing some vague and unproven concepts into the mix the way a stage magician would try to distract you.

I will offer two acronyms: KISS ('keep it simple') and YAGNI ('you ain't gonna need it'). The world economy IMO is not a fast ship that can turn around and zip away whenever it needs to; it's more akin to the Titanic. So this means pain and agony in the short and maybe the long term. If you expect fairness and equality, you have not been paying attention in history classes or to the news. The answer that governments and Big Business come up with will not be UBI but something more mundane.

2

No_Ninja3309_NoNoYes t1_j28g7xw wrote

I'm going to avoid discussing consciousness by saying that it might be a property of a system, like information entropy. Intelligence and understanding are also not well understood. So let's go with a practical definition.

We want systems that can do almost any job. This can include using arms and legs, or generating text, audio, images, or video in a useful way. Most of these tasks seem doable, but if you have to take into account all the variables, I don't think you can write down a conclusive answer.

Is it achievable? It depends on the architecture, algorithms, hardware, financial resources, availability of experts and maybe seven other factors.

Can we find a good enough architecture? If we can understand the human brain better, yes. Otherwise we can only guess. The brain is self-organising, decentralised, and asynchronous. This differs from many deep learning systems.

We could hit a wall. Even with all the data in the world, the neural networks could become too complex to train and use. Data quality is naturally also a problem. Quantum computers would surely help, but it's too early to commit to that option. In the end, I think we will have a free market of narrow AIs for the foreseeable future. But of course there could be unknown unknowns, so the answer for now is: maybe.

2

No_Ninja3309_NoNoYes t1_j23i2h3 wrote

2032... Bread and circuses. All signs point to a group of oligarchs supporting disposable figureheads. Poverty will remain. Unemployment will stay steady at whatever level the oligarchs find acceptable. It will become fashionable to pay people minimum wage for jobs where humans babysit robots or pretend to work. They will be treated like slaves. But the world population will decline, kept docile through mindless entertainment and cheap food. The end.

−1

No_Ninja3309_NoNoYes t1_j1tvxib wrote

Maybe... But actual human geniuses will beat AI for a while. If we say, for the sake of argument, that 1% of people on Earth qualify, only a certain percentage of them will be publishing their work. And it's not like you can learn everything from text. So you will still have human-produced content that is clearly superior.

1

No_Ninja3309_NoNoYes t1_j1p9r41 wrote

I am not sure I would pay for ChatGPT. Yesterday I had it get stuck in a loop, giving me the same piece of text over and over. But I would probably accept a fixed, low monthly fee. Actually, the Wikipedia API and a summarization API could cover some of my needs. I think Markov chain models, or whatever they're called, could do decent text generation. Not as good as GPT, of course, but that technology is much simpler to implement. I won't be surprised if someone uses it in a novel way soon.
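In case anyone wants to see it, a first-order Markov chain text generator fits in a dozen lines. A toy sketch; a real one would use a much larger corpus and longer context:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Map each word to the words observed to follow it.
chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)

word = "the"
output = [word]
for _ in range(8):
    if word not in chain:
        break
    word = random.choice(chain[word])  # sample the next word
    output.append(word)
print(" ".join(output))
```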

1

No_Ninja3309_NoNoYes t1_j1hnor4 wrote

Reply to Hype bubble by fortunum

IMO ChatGPT is the new Bitcoin. The whole idea of crunching as much data as possible is flawed, I think. You need to do the opposite: take a piece of data and create as many input vectors from it as possible by adding noise, applying different transformations, and maybe even randomly setting a subset of the vector values to zero (see the sketch below). Also, relying solely on deep learning is insufficient IMO. You could use decision trees, rules made by experts, or anything else to augment and diversify. But in the end, I think a top-down approach is required for AGI. Someone should be able to create a global design, an overview of the whole thing, even if it's on a napkin.
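Roughly what I mean by that, as a sketch (the noise scale and mask rate are arbitrary picks):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(vec, n_copies=5, noise_scale=0.05, mask_rate=0.2):
    """Create several input vectors from one example by adding
    Gaussian noise and randomly zeroing a subset of the values."""
    copies = []
    for _ in range(n_copies):
        noisy = vec + rng.normal(0.0, noise_scale, size=vec.shape)
        mask = rng.random(vec.shape) >= mask_rate  # keep ~80% of entries
        copies.append(noisy * mask)
    return np.stack(copies)

x = np.array([0.3, 0.7, 0.1, 0.9])
print(augment(x))  # five jittered, partially zeroed variants of x
```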

1

No_Ninja3309_NoNoYes t1_j13scjg wrote

Someone at r/ChatGPT found a way to simulate a text adventure game. It works, but you get what you ask for, so you can do fun things. I would say that your question makes a wrong assumption: you are assuming that a language model must be creative, but it doesn't have to be. A model is a representation of something; in this case, language and a collection of text documents. But is Chad Gelato creative enough to replace a human assistant? Definitely not. An assistant must be able to work with you, be cognizant of your needs, and be aware of your interests. This requires social intelligence, which Chad Gelato lacks.

0

No_Ninja3309_NoNoYes t1_j0u1km5 wrote

It's impossible to say, but currently neither hardware nor software is anywhere near what AGI is generally considered to require. Personally, I think deep learning is the wrong paradigm. Even introducing neuromorphic hardware and asynchronous algorithms would not be enough; you would need to simulate brains close to the molecular level, including neurotransmitters and chemical synapses.

2