SoylentRox t1_j62l8qh wrote

It would be bilateral. Any annoyance you inflict becomes part of YOUR memories after each sync. So if you annoy 10 group members, after the sync (I assume it happens during sleep) you remember being annoyed 10 times over, and you also have 1 set of memories of gleefully being annoying.

So it would be self-limiting, I suspect.

8

SoylentRox t1_j62djjq wrote

The advantage is the shared experience could make it possible for your memories to survive the death of any one member, or even several, because each of you holds pieces of all the others' experiences and memories.

I don't know precisely how it would work. But you can imagine, with the syncing done in a clever way, that you would wake up in the morning and remember being the member of your collective who just did something awesome - and it would be your memory too. It feels just as real even though this body didn't experience it.

Some mornings you would even sync to the memories of a member in another star system - you're receiving each 'day' sent by laser, as it happened to them, however many light-years away they are.

Again, the memories will feel just as rich as if you really were there.

3

SoylentRox t1_j61ooee wrote

To add to this: self-driving stacks can (I assume they do) bin each detected object into an entity class.

Objects from class "road barrier" or "assorted road obstacle" are worth less than class "bus", which is worth less than class "school bus", which is worth less than class "semi".

So while the machine won't know whether the school bus is empty, if there's ever a choice of which object to plow into, somewhere in the codebase there will be an approximate value by class (and then a weighting by the kinetic energy of the collision - it might choose to hit the school bus if it predicts a lower KE at impact than the other choices).
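
Here's a minimal sketch of what that value-by-class plus KE weighting could look like. All the class values, the vehicle mass, and the speeds are invented for illustration - no real self-driving stack is this simple.

```python
# Hypothetical cost model: pick the least-bad object to hit.
# Class values and the KE weighting are invented for illustration.

CLASS_VALUE = {
    "road_barrier": 1.0,
    "assorted_obstacle": 2.0,
    "bus": 50.0,
    "school_bus": 200.0,
    "semi": 400.0,
}

EGO_MASS_KG = 2000.0  # assumed mass of our own vehicle

def kinetic_energy(closing_speed_ms: float) -> float:
    """Predicted impact energy in joules, from the closing speed."""
    return 0.5 * EGO_MASS_KG * closing_speed_ms ** 2

def collision_cost(obj_class: str, closing_speed_ms: float) -> float:
    """Class value scaled by the predicted impact energy."""
    return CLASS_VALUE[obj_class] * kinetic_energy(closing_speed_ms)

def least_bad_target(candidates):
    """candidates: list of (obj_class, closing_speed_ms) tuples."""
    return min(candidates, key=lambda c: collision_cost(*c))

# A near-stationary school bus can score lower than a full-speed barrier hit:
# school_bus: 200 * 0.5*2000*1^2  = 200,000
# barrier:      1 * 0.5*2000*25^2 = 625,000
print(least_bad_target([("school_bus", 1.0), ("road_barrier", 25.0)]))
```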

2

SoylentRox t1_j61o8t4 wrote

So if LEV (longevity escape velocity) ever happens at some point in humanity's future, won't that era be the start of the "best era ever for humanity"? LEV doesn't just mean "you live with no upper limit"; it means more chances. More opportunities to make things better. More possibilities of a better era. You can start campaigning for social change that will take 200 years to happen, and you will personally benefit.

Also, even if the first versions of the tech are somewhat invasive and aren't pretty, you merely need to live long enough. If an era of everyone looking like Greek gods and participating in open-air orgies becomes the New Normal, well, you can partake if you wish.

1

SoylentRox t1_j5xq6e2 wrote

>Contains nothing that makes any sense to me. This is where your whole argument falls down. There's nothing 'empirical' about that claim at all,

Here's what the claim is.

Right now, Gato has demonstrated expert performance or better on a set of tasks: https://www.deepmind.com/blog/a-generalist-agent

So Gato is an AI. You might call it a 'narrow general AI' because it's only better than humans at about 200 tasks, and the average living human likely has a broader skillset.

Thus an AGI - an artificial general intelligence - is one that is as good as the average human on a set of tasks consistent with the breadth of skills an average living person has.

Basically, make the benchmark larger. 300,000 tasks or 3 million or 30 million. Whatever it has to be. The first machine to do as well as the average human on the benchmark is the world's first AGI.

A score on a cognitive test that you have humans also tested on is an empirical measurement of intelligence.
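
To make "empirical measurement" concrete, here's a toy version of the scoring rule. The task names, human baselines, and the averaging scheme are all made up; a real benchmark would have thousands or millions of tasks.

```python
# Toy AGI benchmark: compare machine scores to average-human scores.
# Tasks and baselines are invented placeholders.

HUMAN_MEAN = {"arithmetic": 0.92, "image_sort": 0.88, "plan_route": 0.75}

def normalized(task: str, machine_score: float) -> float:
    """1.0 means parity with the average human on this task."""
    return machine_score / HUMAN_MEAN[task]

def passes_agi_bar(machine_scores: dict) -> bool:
    """Declare AGI if the mean normalized score across tasks is >= 1.0."""
    ratios = [normalized(t, s) for t, s in machine_scores.items()]
    return sum(ratios) / len(ratios) >= 1.0

# Slightly better than humans on two tasks, slightly worse on one:
print(passes_agi_bar({"arithmetic": 0.99, "image_sort": 0.90, "plan_route": 0.70}))  # True
```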

Arguably, you might also expect generality, simplicity of architecture, and online learning. You would put a lot of the benchmark's points on held-out tasks that require the same skills as other tasks, but in ways the machine won't have seen.

Because we cannot benchmark tasks that can't be automatically graded, it will be difficult for the AGI to learn things like social interaction. So you are correct, it might be 'autistic'.

It will probably not even have a personality. It's basically a robot where if you tell it to do something, and that something is similar enough to things it has practiced doing, it will be able to do it successfully.

It has no values or morals or emotions - none of those things. Just breadth of skills.

1

SoylentRox t1_j5xpo6y wrote

>Your magical 'math' does not just sit on top of emotion, all superior and shiny.

From a theoretical perspective, it does. For example, you probably know that if you're gambling in a card game, it doesn't matter how you feel. Only the information available to you, and an algorithm someone validated in simulation, should determine your actions.
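
As a toy example of what I mean (the probabilities and chip amounts are invented), the 'validated algorithm' is just an expected-value rule that uses only the available information:

```python
# Expected-value decision rule for a card game. The numbers are made up;
# the point is the decision uses only information, never feelings.

def expected_value(p_win: float, win_amount: float, lose_amount: float) -> float:
    return p_win * win_amount - (1 - p_win) * lose_amount

def should_call(p_win: float, pot: float, call_cost: float) -> bool:
    """Call iff the expected value of calling is positive."""
    return expected_value(p_win, pot, call_cost) > 0

# 30% chance to win a 100-chip pot for a 20-chip call:
# EV = 0.3*100 - 0.7*20 = 16 > 0, so call.
print(should_call(0.30, 100.0, 20.0))  # True
```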

Even for a game like poker, it turns out AI is better than humans - even though world-class poker players apparently bluff well enough that other humans can't tell.

As an individual human with an evolved meatware brain, am I above emotion? Of course not. But from a factual perspective, arguing with math is more likely to be correct (or less wrong).

1

SoylentRox t1_j5urx2v wrote

Sure. One idea I have had is :

  1. Megacorps develop very capable AGI tools and let others see them

  2. Have billionaire friends, get startup money

  3. Use the AGI tools to automate mass biomedical research

  4. Once the AGI system understands biology well, give it challenge tasks: grow human organs, or keep small test animals alive. Most tasks are simulated but some are real.

  5. Open up a massive hospital in a poor jurisdiction with explicit permission to do any medical treatment the AGI wants, as long as the results are good.

  6. Some kind of blockchain accountability, where patients, before applying to your hospital, register and get examined, and the conventional medical establishment writes down their current age, expected lifespan, and any terminal diagnosis. After treatment they return and get examined again. The blockchain is just so history can't be denied or altered (see the sketch after this list).

  7. Payment is on success. Waitlist order is based partly on payment bid. Reinvest all profits to expand capacity and capabilities.

  8. Use your overwhelming evidence of success to lobby Western governments to ban non-AI medicine and to revoke drug patents. (There is no value in pharma patents if an AI can invent a new drug in seconds. To an AI, molecules are as easy to use as hand tools are to us; it can just design one to fit any target.)

  9. The owners of the clinic would be trillionaires. It's the most valuable product on Earth.
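
For step 6, here's a minimal sketch of the append-only record idea - just a hash chain, not any particular blockchain. Field names are invented; the point is that each record commits to the previous one, so altering history breaks every later hash.

```python
import hashlib
import json
import time

def record_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, exam: dict) -> None:
    """Add an exam record that commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    body = {"exam": exam, "prev": prev, "time": time.time()}
    chain.append({**body, "hash": record_hash(body)})

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "genesis"
    for rec in chain:
        body = {k: rec[k] for k in ("exam", "prev", "time")}
        if rec["prev"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"patient": "anon-1", "age": 80, "prognosis_years": 5})   # before treatment
append_record(chain, {"patient": "anon-1", "age": 80, "prognosis_years": 25})  # after treatment
print(verify(chain))  # True; editing either record makes this False
```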

1

SoylentRox t1_j5uo8v5 wrote

It's not doomerism to think that rich countries will get richer and have better access to tech.

That is literally a statement of past history.

Population growth: sure. I already said rich countries have declining populations, so letting rejuvenated people have kids is what they would do at first.

0

SoylentRox t1_j5ukccw wrote

How do the nanobots coordinate? Where do they get the replacement cells? How do they cut away a broken limb? What do they do with waste?

Stay plausible. Nanobots that magically heal injuries are more like stage magic.

A more plausible version: new organs or whole body subsections grown and built in the lab, layer by layer, with inspections and functional tests, so the lab is certain each part is well made.

And then the "nanobots" are basically hacked human cells placed on the interfaces where there would otherwise be a scar. On command, they glue themselves to nearby cells and do a better job of healing than two human tissues held together with thread. So the patient can walk right after surgery, because every place their circulatory system, skin, and so on were spliced is as strong as their normal tissue and doesn't hurt.

They also bridge nerve connections.

1

SoylentRox t1_j5uio8h wrote

What do you mean I "can't know this"? You know how sports stars need surgery when they wear out a joint or tear a tendon - an injury that won't heal?

That's not even aging; it can happen in your 20s. Treatments for aging alone would at best restore the healing ability you had in your early 20s.

If you are 80 years old and a patient at the clinic, lots of stuff will be damaged.

1

SoylentRox t1_j5uh3ns wrote

  1. This is obvious, and there is a risk of malware, etc. You understand that this isn't like taking metformin; it's probably massive surgery to remove old joints and skin, and possibly your whole body except for your brain and spine. Aging does some physical damage that won't heal.

After the many surgeries you have permanent implants and sensors installed that have to interface with local clinics, a network of ambulances, etc. And you have to return for checkups and repairs periodically.

  2. This is how that works now; also, it's an ongoing process. You don't really get cured of aging - the condition is just managed.

  3. They might, if it results in severe overpopulation.

1

SoylentRox t1_j5ub33w wrote

>Yes. You're not really appreciating the notion of 'what most humans could do'. I'm not talking about what one little homo sapiens animal could do; that's fairly tiny and feeble in the overall consideration.

This is what the AGI is.

We're saying we can make an AI with a set of skills broad enough - as measured by points on test benches that both humans and the machine can attempt, the bench covering a huge range of skills - that it beats the average human.

That's AGI. It is empirically as smart as an average human.

No one is claiming version 1.0 will be smarter than more than 1 'little homo sapiens animal', though obviously we expect to be able to do lots better at an accelerating rate.

I expect we may see AGI before 2030, by this definition.

As for self-replicating and taking over the universe: there is reason to think the industrial tasks - factories, etc. - are easier than, say, original art. So even the first AGI would be able to do all the robotic control tasks that could take over the universe, although it likely wouldn't have the data for the many steps that humans didn't write down.

3

SoylentRox t1_j5uaadq wrote

>Also, how in the heck can you have a 'non emotional argument'? What even is that? I was captain of my high school debate team way back when, I take a keen interest in politics, I have studied university level maths and chemistry and watched professors dispute with each other, but I have never, ever seen a non emotional argument before.
>
>Are you trying to pretend that you don't have any emotions when you 'think rationally', because you, unlike me, and the rest of the 'common rabble', are a 'clear and intelligent thinker'? That's cute if so; very quaint.

Arguments like: numbers, math, irreducible complexity. Saying there isn't enough compute. Saying that AI companies right now are soon going to hit a wall because <your reason> and that funding will get pulled.

When you say you studied "university level maths and chemistry" but don't mention CS or machine learning, you're making a weak non-emotional argument (because you aren't actually qualified to hold the opinion you claim).

When you say "That's cute if so; very quaint," that's an appeal to emotion.

Or "Are you trying to pretend that you don't have any emotions when you 'think rationally', because you, unlike me, and the rest of the 'common rabble', are a 'clear and intelligent thinker'?" Same thing. Because sure, everyone has emotions, but some people are able to do the math and determine whether an idea is going to work or not.

3

SoylentRox t1_j5u8k92 wrote

I mean, medical tourism might not have access to the best stuff. Today this is true; it's just that the best medicine is not much more effective than cheaper, simpler options. (Mainly because there are no treatments for aging, or treatments that grow you spare parts and transplant them.)

Once it's a matter of very complex treatments, you may see huge differences - as in, the good clinics have almost 100 percent 10-year survival rates and the bad ones have 50.

1

SoylentRox t1_j5smhx9 wrote

Yes, but intelligence isn't just depth, it's breadth.

In this case, to make exponential growth possible, AI has to be able to do most of the steps required to build more AI (and to make useful things for humans, to get money).

Right now that means AI needs to be capable of controlling many robots, doing the many separate tasks that need to be done (to ultimately manufacture more chips, power generators, and so on).

So while ChatGPT seems to be really close to passing a Turing test, the papers for robotics are like this: https://www.deepmind.com/blog/building-interactive-agents-in-video-game-worlds

And AI is not yet able to actually control this: https://www.youtube.com/watch?v=XPVC4IyRTG8 (that Boston Dynamics machine is kind of hard-coded; it is not being driven by current-gen AI).

I think we're close. And for the last steps, people can use ChatGPT/Codex to help them write the code, there's a lot more money to invest in this, and they can use AI to design chips for even better compute: lots of ways to make the last steps take less time than expected.

0

SoylentRox t1_j5slrzy wrote

The singularity is a prediction of exponential growth once AI is approximately as smart as a human being.

So you might hear in the news that TSMC has cancelled all chip orders except for AI customers, and that there are no new devices anywhere recently made with advanced silicon in them.

You might see in the news that the city of Shenzhen has twice as much land covered with factories as it did last month.

Then another month and it's doubled again.

And so on. Or if the USA has the tech for themselves similar exponential growth.

At some point you would probably see whoever has the tech suddenly launching tens of thousands of rockets, and at night you would see lights on the Moon - lights that double every few weeks in how much surface they cover.

This is the metric: anything that is clear and sustained exponential growth driven by AI systems at least as smart as humans.

Smart meaning they score as well as humans on a large set of objective tests.

There are a lot of details we don't know - would the factories on the Moon even be visible at night, or do the robots see in IR? - but that's the signature of it.
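
Just to show the arithmetic of "doubles every few weeks" (the starting value is arbitrary; only the growth rate matters):

```python
# Pure arithmetic: monthly doubling of a hypothetical factory footprint.
area_km2 = 100.0  # arbitrary starting point
for month in range(1, 13):
    area_km2 *= 2
    print(f"month {month:2d}: {area_km2:,.0f} km^2")
# 100 * 2**12 = 409,600 km^2 after a year - roughly the area of Sweden,
# which is why sustained doubling would be unmistakable in the news.
```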

0

SoylentRox t1_j5sl0ob wrote

You understand the idea of singularity criticality, right? Currently demonstrated models (especially RL-based ones) are ALREADY better than humans at key tasks related to AI, like:

  1. Network architecture search
  2. Chip design
  3. Activation function design

I can link papers (usually DeepMind) for each claim.

This means AI is being accelerated by AI.
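
For concreteness, here's the simplest possible form of item 1 - network architecture search as a random outer loop. The search space and scoring function are placeholders; the real papers train and evaluate actual networks.

```python
import random

# Toy random-search NAS: an outer loop proposing architectures and
# keeping the best scorer. SEARCH_SPACE and score() are placeholders.

SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [64, 128, 256, 512],
    "activation": ["relu", "gelu", "swish"],
}

def sample_architecture() -> dict:
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def score(arch: dict) -> float:
    # Placeholder: in real NAS this is validation accuracy after training.
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(100):
    arch = sample_architecture()
    s = score(arch)
    if s > best_score:
        best_arch, best_score = arch, s

print(best_arch, round(best_score, 3))
```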

2050-60 is a remote and unlikely possibility. It's like expecting covid to just stop in China and take 10 years to reach the US, if the year were 2020.

1

SoylentRox t1_j5skmgs wrote

It's a good illustration of the limits if a singularity happens. For example, if it starts in the near future, AI companies will have rapidly more intelligent models (like now, but faster) until TSMC and a couple of other facilities are basically exclusively making AI chips.

New phones, GPUs, and game consoles would become unattainable.

And then things still don't go crazy yet, because every chip is going to AI research or to AIs making money for their owners, and progress is rate-limited by the chip production rate.

In later stages this will be solved somehow, and there will be new bottlenecks.

1

SoylentRox t1_j5sjyzo wrote

Umm....

OK, so you are saying we couldn't, I dunno, make an AI training bench that results in a capable machine able to successfully perform most things humans can do? Including highly advanced skilled tasks like designing computer chips and jet aircraft?

Or the von Neumann replication idea? That won't happen or can't?

...Why not? I would like to hear from you if you have a non-emotional argument. How would this not work - what, specifically? Give details.

5

SoylentRox t1_j5sjtg3 wrote

I mean, nuclear weapons are real, and they had a powerful effect on history.

In 1944, as a member of the public, how would those capabilities have sounded to you?

"Oh yeah, some special rocks we found and a lot of chemistry will let us annihilate a whole country in 30 minutes. If we had nukes on V2 missiles right now, we could defeat the Nazis and the Soviets in 30 minutes, leaving all of them dead, with every single city they have turned to rubble."

12