SoylentRox t1_j5sj5zd wrote

You do grasp the concept of singularity criticality, right? (AI improving AI, making the singularity happen at an accelerating pace.)

If this theory is right - it's just math, not controversial, and S-curve improvements in technology have happened many times before - then longevity research won't make enough progress to matter before we have AI smart enough to help us.

Or kill us.

Point is the cultist/doomer argument isn't fair. What is happening right now is the flying saucer AI cult has produced telescope imagery of the actual mother ship approaching. Ignore it at your peril.

2

SoylentRox t1_j5sisg3 wrote

I know. Don't forget "they will exclusively hoard anti-aging for themselves". (Ignoring the reality that governments and health insurance companies have more money than all billionaires combined, and that they have to pay a fortune because aging slowly wrecks everything in a person until they become unable to work.)

Or "we would be so overpopulated life would suck". Nevermind that governments could require aging clinics to make their clients infertile. You would need a license to have an additional child after your normal reproductive lifetime.

14

SoylentRox t1_j5silop wrote

Note that with futurology, 30-50 years of science press has promised the "next big thing": better batteries, flying cars, online shopping, tablet computers, fusion power.

On some of these, minimal progress was made in 50 years; others are now reality.

The Singularity just might happen for real before people become jaded.

1

SoylentRox t1_j5hgi50 wrote

Part of it is that "we need adults to be able to write a coherent essay with 5 paragraphs and a main idea at the end of the first paragraph and..."

And maybe the real question is: do we? When?

Do adults need to be able to do long division?

I work at a tech company at a typical TC rate (many times the average income). Somehow the essay form is not valued there. Short, to-the-point emails - as short and terse as possible - with clear and succinct points are what counts. Simpler words. Building up my point with screenshots and visualizations.

The whole assumption here is that we still need kids to learn a task that AI already knows how to do better than most of them ever will.

−3

SoylentRox t1_j5h5lxz wrote

This. And there are bigger problems unsolved that scale might help with.

For example, finding synthetic cellular growth serums. This is a massive trial-and-error effort: which molecules in bovine plasma do you actually need for full development of structures in vitro?

Growing human organs. Similarly, there is a vast amount of trial and error; you really need to be able to make millions of attempts.

Even trying to do the above rationally, you need to investigate the effect of each unknown molecule in parallel. And you need a lot of experiments per condition, not just one - you don't want to develop a false conclusion.
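To make that concrete, here is a minimal sketch of a leave-one-out screen. Everything in it - molecule names, effect sizes, the noise model - is made up for illustration; a real screen is physical robotics, not a simulation.

```python
import random

# Toy leave-one-out screen over hypothetical serum molecules.
random.seed(0)
candidates = ["mol_A", "mol_B", "mol_C", "mol_D", "mol_E"]
true_effect = {m: random.choice([0.0, 0.0, 1.0]) for m in candidates}  # hidden ground truth

def assay(subset, n_replicates=8):
    """Simulated noisy growth assay, averaged over replicates."""
    signal = sum(true_effect[m] for m in subset)
    return sum(signal + random.gauss(0, 0.3) for _ in range(n_replicates)) / n_replicates

# Drop each molecule in turn; a large drop in growth means it was necessary.
baseline = assay(candidates)
for mol in candidates:
    reduced = [m for m in candidates if m != mol]
    print(f"{mol}: growth changes by {assay(reduced) - baseline:+.2f} when removed")
```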

Ideally I can imagine a setup where scientific papers stop being encrypted in difficult-to-parse prose and are instead in a standard, machine-duplicable form. The setup and experiment sections are links to the actual files used to configure the robotics; the results are the unabridged raw data. The analysis is done by an AI that was prompted on what you were looking for, so there can't be accusations of cherry-picking a conclusion.
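As a sketch of what such a machine-duplicable format could look like (the schema, field names, and URIs below are hypothetical, not an existing standard):

```python
from dataclasses import dataclass, field

# Hypothetical schema for a machine-duplicable paper; all names are illustrative.
@dataclass
class MachinePaper:
    title: str
    setup_files: list[str]       # exact files used to configure the robotics
    protocol_files: list[str]    # high-level steps the robotics stack executes
    raw_data_uri: str            # unabridged raw results, no human summarization
    analysis_prompt: str         # what the analysis AI was asked to look for
    analysis_output_uri: str     # the AI's analysis, reproducible from the prompt
    replications: list[str] = field(default_factory=list)  # papers that replicated this one

paper = MachinePaper(
    title="Serum factor screen, run 42",
    setup_files=["s3://lab-a/setup/run42.json"],
    protocol_files=["s3://lab-a/protocols/screen_v3.yaml"],
    raw_data_uri="s3://lab-a/raw/run42.parquet",
    analysis_prompt="Which molecules are necessary for organoid growth?",
    analysis_output_uri="s3://lab-a/analysis/run42.html",
)
```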

And journals don't accept non-replicated work. What has to happen is the paper gets "picked up" by another lab with a different source of funding (or funding structured to reduce conflicts of interest), ideally using a different robotic software stack to turn the high-level "setup" files into actionable steps, a different robotics-AI vendor, and a different model of robotic hardware.

Each "point of heterogeneity" above has to be part of the data quality metrics for the replication, and then depending on the discovered effects you only draw reliable conclusions on high quality data.

Also, the above lets every paper use all prior data on a topic rather than stand alone. Your prior should always be calculated from the set of all prior research, not split evenly between hypotheses.
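A toy Bayesian update showing the difference, with made-up numbers:

```python
# Start from a base rate estimated from all prior studies in the field,
# not a flat 50/50 split between hypotheses. All numbers are illustrative.
prior_from_literature = 0.08  # fraction of similar past hypotheses that held up
p_result_if_true = 0.80       # P(observed result | hypothesis true)
p_result_if_false = 0.05      # P(observed result | hypothesis false)

posterior = (p_result_if_true * prior_from_literature) / (
    p_result_if_true * prior_from_literature
    + p_result_if_false * (1 - prior_from_literature)
)
print(f"P(hypothesis | result) = {posterior:.2f}")  # ~0.58; a flat 0.5 prior gives ~0.94
```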

Institutions are slow to change, but I can imagine a "new science" group of companies and institutions that uses the above, plus AGI, and surges so far ahead of everyone else in results that no one else matters.

NASA vs the Kenyan space program.

6

SoylentRox t1_j53m851 wrote

Probably, but suit yourself, square. Basically, if it ever comes down to it, even in a sorta-utopia there is only so much you can experience. Especially as in reality you are still mortal. Even if you had a cortical stack, getting resurrected from backup is probably unpleasant.

In VR your body can be in a very safe place, constantly tended by AI robotics that perform medical procedures you won't be able to feel as needed - including lots of preventative surgery - and you can experience a lot of things, many of which would be very hazardous to experience in reality.

4

SoylentRox t1_j539guq wrote

The simplest answer in one sentence: there are many things humans would like that require an imbalance of fairness.

For example, this is why MMOs feel so lame to play. Your character cannot be a superhero, because everyone else who plays the game doesn't want to feel weak.

A single-player RPG is often balanced so that you can be a superhero, or find exploits to get power armor in the first 20 minutes, or kill a dragon, etc. Exploits that won't be patched, because the game developers know you will find this fun.

Some humans are going to want to live in a palace with a thousand servants and a large harem. That's not possible in the real world without extreme wealth inequality, and even then, by definition just 1 person gets to be the king and 10k humans have to be servants or harem members.

In VR everyone can be the king.

Or take the world of John Wick. It's fun to be John Wick; it is not fun for everyone else in the films.

4

SoylentRox t1_j534guy wrote

If you wanted to dismiss Metaculus, you would argue that since it's not a real-money betting market operating over a long period of time, it's not going to work that well. Real money means people only vote when they are confident, and the long timespan means losers lose money and winners gain money, which over time gives the winners larger "votes" because they can bet more.

Over an infinite timespan, the winners become the only bettors.
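You can see the mechanism in a toy simulation (the skill levels, stake fraction, and number of questions are arbitrary):

```python
import random

# Bettors with different forecasting skill bet a fixed fraction of bankroll
# on each resolved question; money concentrates with the accurate ones.
random.seed(1)
bettors = [{"skill": s, "bankroll": 100.0} for s in (0.45, 0.50, 0.55, 0.65)]

for _ in range(500):  # 500 resolved questions
    for b in bettors:
        stake = 0.05 * b["bankroll"]
        b["bankroll"] += stake if random.random() < b["skill"] else -stake

total = sum(b["bankroll"] for b in bettors)
for b in bettors:
    print(f"skill {b['skill']:.2f}: {100 * b['bankroll'] / total:.1f}% of all money")
```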

As for AGI in 2027: sure. It's like predicting the first crossing of the Atlantic by plane when planes are already flying shorter distances all over. It's obviously possible.

6

SoylentRox t1_j4qxdmv wrote

I think that instead of rigidly declaring 'here's how we're going to put the pieces together', you should make a library where existing models can be automatically used and connected together to form what I call an "AGI candidate".

That way, through iteration against an "AGI test bench", we find increasingly complex architectures that perform well. We do not need the AGI to think like we do; it just needs to think as well as possible (as determined by its RMS score on the tasks in the test bench).
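A skeleton of what such a library's search loop might look like; every model name, bench task, and the scoring function below are placeholders.

```python
import itertools
import random
from statistics import mean

# Wire existing models into "AGI candidates" and rank them on a task bench.
random.seed(0)
MODELS = ["vision_encoder", "llm", "planner", "speech", "retrieval"]
BENCH_TASKS = ["summarize", "navigate", "diagnose", "negotiate"]

def score(candidate, task):
    """Stand-in for actually running the wired-up candidate on a bench task."""
    return random.random() * len(candidate) ** 0.5  # fake: bigger combos help a bit

def bench_rms(candidate):
    """RMS score across all bench tasks, per the proposal above."""
    return mean(score(candidate, t) ** 2 for t in BENCH_TASKS) ** 0.5

best = max(
    (c for r in range(2, len(MODELS) + 1) for c in itertools.combinations(MODELS, r)),
    key=bench_rms,
)
print("best candidate architecture:", best)
```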

2

SoylentRox t1_j4qwvnx wrote

>Which studies? Link ‘‘em and I’ll be happy to knock them down.

You are not a rational actor and can't be trusted. You believe without evidence that everyone is equal because that's "woke". If it turned out that everyone is not equal, you would not be able to accept the possibility and would have to start seeking false explanations.

1

SoylentRox t1_j4p75zv wrote

I don't care enough about the subject to know how much of it was correct-but-embarrassing versus unscientific. I was explaining that the modern version of it - often pushed by old white scientists - usually finds Asians the smartest, and women more intelligent on average but with a tighter distribution. (Mentioning this aloud got the president of Harvard cancelled.) This makes your understanding of it incorrect.

3

SoylentRox t1_j4p45bv wrote

>historically: being black or jewish or gay or any other variation of the human condition that isn't straight and white)

So this is actually false. The disreputable studies on race and intelligence - disreputable as in collected from data but no longer discussed in polite academic company - pretty much all found Asians were smarter, not white people. And among white groups, Jewish subgroups were the smartest.

I had a professor of human genetics who was aware of these studies, and his theory was that it was Western languages that gave Westerners such overwhelming success for a period of time. It's the "operating system", not the hardware. And our large success with LLMs seems to suggest this is in fact correct: the hardware doesn't even need to be human!

So actually, no. In his opinion, the ubermensch was a kid from whichever Asian subgroup has the highest IQ, with rich parents, growing up in a blue state in the USA... This was the "most successful" combination currently possible.

Also you gotta get real here. These differences are small. Every human is essentially mentally retarded compared to AI, both at tasks existing models are designed to do, and what they will soon be able to do.

2

SoylentRox t1_j4p3nat wrote

Umm, I think this came up a few days ago on here.

I support eugenics if the consequences of not doing it are significant and well known.

Like, it's one thing if your "master race" is a bunch of people of a particular appearance who are still just humans and capable of losing.

On the other hand, if the edits make people, say:

(1) live for centuries
(2) regenerate limbs
(3) they are all smarter than the smartest people who ever lived
(4) they are better at every sport, all the time, with modified tissue that gives them superhuman strength and toughness
(5) they all look like models, from age 15 to age 950

Once you are talking about such vast improvements - something a superintelligence could likely work out exactly how to do within a few years of existing - it's arguably dooming any unedited child to, well, being retarded, ugly, and dead at 80.

In that situation, your "principles" have a crippling and large cost to someone not yet born.

1

SoylentRox t1_j4nmug9 wrote

And then what happened?! She gets arrested for resisting arrest and assaulting an officer? All the evidence she had on her disappears?

She spends time in jail until her defense attorney presents a cloud backup of her data to the DA? The charges get dropped, but no one is punished for their actions except the serial killer?

3

SoylentRox t1_j4neivp wrote

Correct, but this was done at a small scale by ChatGPT's developers. I am saying we should look at every novel that has data on its sales, every story on a site that has view metrics or other measurements of quality and popularity, etc.

This might give the machine more information about which story elements people actually like. Maybe enough to construct good stories.

3