No_Ninja3309_NoNoYes

No_Ninja3309_NoNoYes t1_j5no05m wrote

Well, I am not an expert, but my understanding is that a certain protein is involved. If it is feasible to synthesise it, the problem will be solved. Billionaires will be all over it because they are even more afraid of death than we are. I am not sure what the long-term implications are. I don't think society is ready.

3

No_Ninja3309_NoNoYes t1_j5lfoth wrote

Most physical jobs are a hard pass for me. If there's no other way, I could deliver pizza or newspapers... There's no Big Plan. I am invested passively through monthly contributions to managed baskets of ETFs. Or at least that's what they told me. I also dabble in crowdfunding, but that is kind of risky, so I don't overdo it. I have another passive stream, but it will dry up soon. Obviously I want another one, but it is a hassle. I don't know how old you are, but you could look into the Financial Independence, Retire Early (FIRE) movement. It requires a lot of discipline, though. And I am not certified to give financial advice; I have no PhD in economics.

3

No_Ninja3309_NoNoYes t1_j5knhnq wrote

The focus is currently on Deep Learning. So why would DL not bring AGI in its current form? First, in simple terms, how does it work? The most common setup uses inputs and weights. The sum of the products is propagated forward. There are ReLUs, batch normalisation, residual connections, and all kinds of tricks in between. The outputs are checked against expected values, and the weights are then updated so the outputs better fit the expected values for the given inputs.
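Here is a minimal sketch of that loop in Python, with toy sizes, made-up numbers, and a plain gradient step rather than anything from a real system:

    import numpy as np

    # Toy single-layer net: the sum of products of inputs and weights is
    # propagated through a ReLU, the outputs are checked against expected
    # values, and the weights are updated to reduce the error.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 3))              # inputs
    w_true = np.array([[1.0], [-2.0], [0.5]])
    y_true = np.maximum(x @ w_true, 0.0)      # expected outputs

    w = rng.normal(size=(3, 1))               # weights to learn
    for _ in range(500):
        z = x @ w                             # sum of products
        y = np.maximum(z, 0.0)                # ReLU
        grad = x.T @ ((y - y_true) * (z > 0)) / len(x)
        w -= 0.1 * grad                       # update the weights

    print(np.round(w.ravel(), 2))             # should land near w_true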

There are multiple neural layers; that is why we speak of Deep Learning. To use a crude analogy, imagine that you are the leader of a squad, and that your soldiers understand 80% of your orders. Now imagine being the platoon leader, with squad leaders who in turn understand 80% of your orders. How many of your orders reach the soldiers? Only about 64%. Now imagine having a hundred or more layers. Adding layers isn't free. And with almost all AI companies doing the same, we will run out of GPUs soon.
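To put rough numbers on the analogy (plain arithmetic, not a claim about any real network):

    # If each level passes along 80% of the signal, two levels deliver
    # 0.8 * 0.8 = 64% of the orders; a hundred layers deliver almost
    # nothing, which is one reason tricks like residual connections exist.
    for layers in (1, 2, 10, 100):
        print(layers, "layers:", 0.8 ** layers)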

Also, real neurons are more complicated than the ones in DL models. There are things like spiking, brain plasticity, neurotransmitters, and synaptic plasticity that DL doesn't take into account. So the obvious solution is neuromorphic hardware and appropriate algorithms. It's anyone's guess when they will be ready.
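To give a flavour of what 'spiking' means, here is a leaky integrate-and-fire neuron, about the simplest spiking model there is; the constants are arbitrary toy values:

    # Leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates incoming current, and emits a spike when it crosses a
    # threshold, after which it resets.
    v, v_rest, v_thresh, v_reset = 0.0, 0.0, 1.0, 0.0
    leak, dt, current = 0.1, 1.0, 0.15

    spikes = []
    for t in range(100):
        v += dt * (-(v - v_rest) * leak + current)
        if v >= v_thresh:
            spikes.append(t)  # the neuron fires
            v = v_reset
    print(spikes)             # fires at roughly regular intervals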

5

No_Ninja3309_NoNoYes t1_j59ufny wrote

OpenAI had teams of Kenyan workers score ChatGPT's outputs, and those scores were used to fine-tune the model with Proximal Policy Optimisation. You can say that they upvoted or downvoted it; that is Reinforcement Learning from Human Feedback (RLHF). This is of course not how society works. We don't upvote or downvote each other, except on Reddit and other websites. AI is currently limited in the kinds of raw data it can process.
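Roughly, the human votes train a reward model on preferred-versus-rejected answer pairs, and PPO then optimises the chatbot against that reward. A sketch of the preference loss, with a stand-in scoring function that I made up:

    import math

    def reward(text: str) -> float:
        # Stand-in for a learned reward model; the real one is a neural
        # network trained on the labelers' rankings. This toy scores length.
        return float(len(text))

    def preference_loss(chosen: str, rejected: str) -> float:
        # Bradley-Terry style loss used in RLHF reward modelling: push the
        # chosen answer's score above the rejected one's.
        margin = reward(chosen) - reward(rejected)
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    print(preference_loss("a long, helpful answer", "meh"))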

For historical reasons people value intelligence. Some experts think that language and intelligence are almost the same thing. But there are thousands of languages and thousands of ways to say similar things. Language is ambiguous.

You can say that mathematics and logic are also languages, but more formal ones. Of course they are not perfect, because they rely on axioms. But if a system is not perfect, that doesn't mean we should stop using it. Experimental data and statistics rule, yet certain things are not measurable and other phenomena can only be estimated. That doesn't mean we have to give up on science.

In the same vein, rules like 'Don't be rude to people' and 'Do unto others as you want done unto you' sound vague and arbitrary. But how can AI develop its own morality if it doesn't understand ours? Can a child develop its own values without parents or guardians? Yes, parents and guardians can be toxic and rude. But can AI learn in a vacuum?

2

No_Ninja3309_NoNoYes t1_j568whv wrote

I think this is more wishful thinking and anchoring than something to be taken seriously. Which is fine; it's okay to dream. But AGI requires so many things that it's hard to list them all. Still, I will try:

  • Learning how to learn
  • Independent data acquisition
  • Common sense. This is hard to define. It could mean a set of rules. It could mean a database of facts.
  • Embodied AI
  • Something that resembles brain plasticity. Brains can create synapses. Dendrites can branch.
  • Wake-sleep cycles. AI will have to gather data on its own, or it's of no use to us. I mean, if we have to acquire and clean data for it, when will we get to enjoy VR? So AI will acquire data and then process it when the opportunity presents itself.

None of these items seem trivial to me. I don't see how they can all be done by 2024.

6

No_Ninja3309_NoNoYes t1_j55ex71 wrote

In theory, if VR arrives with AGI or something close enough, I think I will be too old to enjoy it. Unless, theoretically, science can 'fix' me. This makes your scenario pretty unlikely for me.

You can argue that we already live in societies where no one is 'real'. Everyone plays a role except around friends and family. But you can't control that reality like VR, right?

The problem is that we don't know who will create VR and how. Maybe, like OpenAI and other technology companies, they would force their moral values on us. Perhaps they will limit their VR products to something tame.

But personally, if I had the choice, I would go for the wildest dreams. With flying, magic, and interstellar travel. I don't really understand why that would be bad, unless we are talking about the load on the computing infrastructure.

2

No_Ninja3309_NoNoYes t1_j5493t8 wrote

'Weakly general' sounds strange to me; it sounds like 'almost human'. I think we need some sort of minimal requirements, otherwise we might be talking about different things.

I think AGI has to at minimum:

  • be multimodal
  • be embodied
  • know how to learn
  • be able to follow a chain of arguments
  • be able to communicate autonomously
  • understand ethical principles

And there are many other things, but these seem hard enough. I think the first two are doable by 2027. Not so sure about the others.

I know how people love to talk about exponential growth. But let's not forget that something has to drive it. Deep Learning has been driven by GPUs and the abundance of data. Neither is an inexhaustible resource.

3

No_Ninja3309_NoNoYes t1_j51l1k5 wrote

Thou asketh and thou shalt receive, methinks. Interstellar travel is impossible in the real world. I think the whole being-someone-else thing will be big in VR. And there are at least seven other things you can't do in the real world, like doing magic, flying, or talking to people who don't exist or are long dead.

0

No_Ninja3309_NoNoYes t1_j515vxl wrote

The year is 2058. Nuclear fusion, quantum supremacy, high temperature superconductivity, and first contact are old news. The world population is now four billion and dropping. I vaguely remember it being more. I have programmed my brain implant to ignore facts I don't care about. The clique of OpenAI owns 43% of all businesses on Earth. I live on a comfortable pension because I saw something bad coming and prepared. The majority of humans live on ten Kong dollars a day. I vaguely remember other currencies, but my implant is suppressing that.

Ten Kong bucks buy you a plate of rice and some protein. It also pays for a room you get to share with seven other people. Unless you are lucky. The oligarchs still appreciate talent, just not any kind of talent. Most people live in virtual reality. AI invented drugs that do almost the same but without too many side effects. Of course there are people who are allergic. One in a hundred, I heard.

I am too old to do anything meaningful except participate in the global hive mind. It's filtered and moderated, but at least I have an idea what folks are up to. My personal robot brings me breakfast. Robots know how to cook. They are much better than the average human. But there are human cooks who are actual geniuses. I can't afford to eat in their restaurants.

I don't have to work out because I have pills for that. My body functions as though I am twenty years younger. But I can't afford cellular rejuvenation. Rumors claim that aliens have been sending instructions about achieving immortality. Hivenet is full of rumors. From time to time, someone will rant about rebellion.

On the 3D and VR news, there are always protests. Androids and drones take action if it all gets out of hand. Someone on Hivenet says that our drinking water is full of opiates, but hey I haven't noticed anything. I am just a bit forgetful. It's normal for people my age.

10

No_Ninja3309_NoNoYes t1_j4z7eyb wrote

Greed is good, right? So it turns out that OpenAI was afraid of Google and other companies. They are bad at waiting and hoped to get publicity. So they went all in. Everyone who has played poker knows that you don't go all in unless you have aces and no idea what else to do with them, or unless you are bluffing. I think they are bluffing.

There seems to be an obsession with parameter counts matching the brain's. But the amount and type of data, and the actual architecture and algorithms, matter more. IMO, for the amount of data they used, they have too many parameters. They did the equivalent of fitting linear data with a cubic function. In the best case you end up with extra parameters that are close to zero; in the worst case you are screwed. This is not only wasteful during training and bad for the environment because of the tons of carbon dioxide emissions, but also awful at inference time. And we still have to pay for these extra parameters.
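A toy version of that cubic-fit point, with my own made-up numbers:

    import numpy as np

    # Fit genuinely linear data with a cubic: the extra parameters
    # contribute almost nothing, yet you still pay to evaluate them.
    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 50)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, size=x.shape)

    coeffs = np.polyfit(x, y, deg=3)
    print(np.round(coeffs, 3))  # cubic and quadratic terms come out near zero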

Why would OpenAI ever achieve AGI this way? They are doing a mix of unsupervised, supervised, and reinforcement learning. Unsupervised learning requires a lot of data; it parses the data and tries to find patterns, but there's not enough usable data. Supervised learning has even bigger problems because it needs labels: you need to give it the answers to questions. Reinforcement learning requires some sort of score, like in games; that is also limited. If they want AGI, they would have to look into semi-supervised, self-supervised, and meta learning. AI has to be able to learn on its own, preferably going out and finding its own data.
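The appeal of self-supervised learning is that the labels come from the data itself. A bare-bones illustration with a made-up toy corpus:

    # Next-token prediction: every (context, next word) pair is carved out
    # of raw text, so no human labeler is needed.
    corpus = "the model reads raw text and makes its own labels".split()

    pairs = [(corpus[:i], corpus[i]) for i in range(1, len(corpus))]
    for context, target in pairs[:3]:
        print(" ".join(context), "->", target)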

And of course they hired Kenyan workers to do their dirty work, which shows you what they care about. Greed is good, apparently.

2

No_Ninja3309_NoNoYes t1_j4rfnhr wrote

We will have intelligence amplification through narrow AIs before we can have AGI. At a certain point, we will require neuromorphic hardware and spiking neural networks. But even that will not give us AGI. We need quantum supremacy: millions of coherent qubits in a quantum computer. That alone would have a price tag in the tens of millions of dollars, inflation adjusted, or more. So if the trend of the rich getting richer and the poor getting poorer continues, the number of people and companies who can afford to build AGI would be quite low. Understandably, not all of them will have an interest in AGI. So maybe multiple AGIs, and certainly not billions, depending on cracking quantum supremacy and clearing other hurdles.

2

No_Ninja3309_NoNoYes t1_j4qpecj wrote

IMO this is way too simplistic and optimistic. Sure, we can have AI listen, read, write, speak, move, and see, for some definition of those words. But is that what a brain is about? Learn from lots of data and reproduce it? Imitation learning is not enough either: you can watch an expert work all you want, you will never become a master from that alone. I think there's no other way but to let AI explore the world. Let it practice and learn. And for that it will have to be much more efficient in terms of energy and computation than it is currently. This could mean neuromorphic hardware and spiking neural networks.

25

No_Ninja3309_NoNoYes t1_j4q0skc wrote

The abstract: "Biointegrated neuromorphic hardware holds promise for new protocols to record/regulate signalling in biological systems. Making such artificial neural circuits successful requires minimal device/circuit complexity and ion-based operating mechanisms akin to those found in biology. Artificial spiking neurons, based on silicon-based complementary metal-oxide semiconductors or negative differential resistance device circuits, can emulate several neural features but are complicated to fabricate, not biocompatible and lack ion-/chemical-based modulation features. Here we report a biorealistic conductance-based organic electrochemical neuron (c-OECN) using a mixed ion–electron conducting ladder-type polymer with stable ion-tunable antiambipolarity. The latter is used to emulate the activation/inactivation of sodium channels and delayed activation of potassium channels of biological neurons. These c-OECNs can spike at bioplausible frequencies nearing 100 Hz, emulate most critical biological neural features, demonstrate stochastic spiking and enable neurotransmitter-/amino acid-/ion-based spiking modulation, which is then used to stimulate biological nerves in vivo. These combined features are impossible to achieve using previous technologies."

So seven years before this leaves the lab, and seven more before it is mass produced in a meaningful way?

1

No_Ninja3309_NoNoYes t1_j4puw4k wrote

Deep Learning started to work in 2012 thanks to GPUs. It has been a decade. I don't expect the trend to continue into 2030 unless something changes. But we will be left with a diverse ecosystem of AI services. This will create more billionaires, but even more paupers. Unless we manage to democratize AI. Unless it becomes open source and easy to use for everyone on Earth.

3

No_Ninja3309_NoNoYes t1_j4prtce wrote

Writing is editing. AI is not good at line editing or structural editing. It beats humans on quantity, but it hasn't been taught the basics: show, don't tell; avoid adverbs; avoid long sentences; try not to repeat yourself; keep the story consistent.

AI has been trained not to be biased. This is good, of course, but when you write a story, you have to choose a side. You need to choose a coherent setting and cast of characters. So in the absence of personal preferences and history, AI can only pick the most prevalent patterns in the training data, or random stuff. That means clichés or something incoherent. Also, the training data is not up to date with modern ideas, AFAIK; it leans toward books written before 1920.

So I think the question should be: when can AI edit properly? When and how can it build a personal style and history? I don't think throwing money and data at this is enough. Knowledge of psychology, neuromorphic hardware, spiking neural networks, extreme learning machines, or something else entirely is required. So a decade is probably insufficient.

1

No_Ninja3309_NoNoYes t1_j4p7iqa wrote

I for one don't think we should put so much value on intelligence. Maybe not even on health. There's no reason for humans to try to play God. It's not like we did such a good job on Earth. Correlation is not causation: humans have more intelligence than chimpanzees, and this might be explained by a difference in DNA, but do we really need to tweak that little piece of DNA? Is that what makes us intelligent? What if, by increasing intelligence, we introduce bugs into our genetics? I have read Nick Bostrom's books. Some of his ideas make sense, but he obviously likes exaggerating and getting attention. So this email doesn't surprise me. We should beware false prophets who tap into our desires for a better world. It is so easy for them to introduce counterfactuals when they do that.

1

No_Ninja3309_NoNoYes t1_j4fubwf wrote

A software developer needs to understand the relevant functional designs, technical designs, architecture, and test plans. Being able to produce functions is not enough. In a machine learning context, knowledge of the concepts and the mathematics is required. ChatGPT is a level above a Markov chain. It has a sense of which words and groups of words go together. But to it, words are just lists of numbers. So in fact it evaluates nested functions with vectors as input. For a much smaller network, you could do the same thing in Excel.
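What that nesting looks like, with made-up toy weights rather than anything trained:

    import numpy as np

    # "Nested functions with vectors as input": a two-layer network is just
    # f(x) = W2 @ relu(W1 @ x + b1) + b2. A spreadsheet could do these sums.
    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

    x = np.array([0.2, -1.0, 0.5])          # a word as a list of numbers
    hidden = np.maximum(W1 @ x + b1, 0.0)   # inner function
    print(W2 @ hidden + b2)                 # outer function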

1

No_Ninja3309_NoNoYes t1_j4fks3t wrote

I am not sure if you are exaggerating or just don't understand IQ. But anyway, to answer your question: there are ultimate questions everyone has struggled with, and the answers are not satisfactory for many people. Personally, I would be happy if the Internet were just a tad better: decentralised, free, and safe. This could act as a hive mind. High IQ might be fun, but will it answer the tough questions?

You can argue that society has developed from hunter-gatherers to agricultural to high technology with the goal of answering these questions. And maybe that means building a Deep Thought computer. Maybe it means that our hive mind of biological beings will have to find the answers. Maybe both.

2

No_Ninja3309_NoNoYes t1_j4f5zd6 wrote

AI has never suffered. It has never been hungry or fought in a war. And it doesn't know our experiences yet, unless we all walk around with chips in our heads. If there are cameras everywhere, AI can probably know what we feel every second of every day. But not in the 'I have been through that before' sense; more in the sense of f(4, 6, 9) = 42.

But we have reduced society to GDP, inflation, and interest rates. So who cares? Entertainment is getting better. Food is plentiful because... Anyway, if the world economy grows, or at least doesn't shrink too much, and our material needs are met, that would be our life. Eat, drink, be merry. Let AI do all the rest.

Hey, we know that certain continents might not profit at first, for whatever reason, but maybe they will catch up. I mean, who cares that the AI companies are exploiting open source software, Wikipedia, and seven other types of organisations? It is all fine as long as we have our daily meals and realistic VR, because we sure don't want to spend time in the real world if we can help it. In my opinion.

1

No_Ninja3309_NoNoYes t1_j47u2lt wrote

That is how it starts. Morals first, then a minimal handout and some entertainment. It is a sneaky form of conformism: one small group determining what the rest of the world should think or say. You get doublespeak and surveillance. Not only cookies in your browser, but everyone reporting on everyone.

20

No_Ninja3309_NoNoYes t1_j3qlbw5 wrote

I think that Deep Learning is a bubble that will burst in five years. The space of possible combinations of parameter values in neural networks blows up exponentially. Humans learn on the fly; we sort of make our own rules and learn from experience. Neural networks can only associate words, or the equivalent in images. They don't actually know what words or images mean. They are blind and deaf strangers in a strange land.

5