SoylentRox

SoylentRox t1_j7r6eft wrote

Or the nuclear weapons/racial slur scenario. The point isn't to get ChatGPT to emit a string containing a bad word - it will do that happily with the right prompt. It's to get it to reason ethically that there exists a situation, however unlikely, where emitting a bad word would be acceptable.

4

SoylentRox t1_j7lb053 wrote

Pretty much. It's also that those math wizards may be smarter than current AI, but they often duplicate work. And it's an iterative process - the AI starts with what we know and tries some things very rapidly. A few hours later it has the results and tries some more things based on those, and so on.

Those math wizards need to publish and then read what others have published. Even with rapid publishing like DeepMind does to a blog - they do this because academic publication takes too long - it's a few months between cycles.
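To put rough numbers on that cycle-time gap (a back-of-the-envelope sketch - the specific cycle lengths are my assumptions, not measurements):

```python
# Rough iteration-count comparison (assumed cycle lengths, not measurements).
HOURS_PER_YEAR = 365 * 24

ai_cycle_hours = 6               # assumed: hours per try-evaluate-retry loop
human_cycle_hours = 3 * 30 * 24  # assumed: ~3 months per publish-and-read cycle

print(HOURS_PER_YEAR / ai_cycle_hours)    # 1460.0 iterations/year
print(HOURS_PER_YEAR / human_cycle_hours) # ~4.1 iterations/year
```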

2

SoylentRox t1_j7j08r9 wrote

I don't think he'll succeed, but for a very lame reason.

He's likely right that the answer won't be in solely transformers. However, the obvious way to find the right answer involves absurd scale:

(1) Thousands of people make a large benchmark of test environments (many resembling games) and a library of primitives, by reading every paper on AI and implementing the ideas as composable primitives.

(2) Billions of dollars of compute are spent to run millions of AGI candidates - at different levels of integration - against the test bench from (1).

This effort would consider millions of possibilities - in a year or two, more possibilities for AGI than all work done by humans so far. And it would be recursive - these searches aren't blind; they are being done by the best-scoring AGI candidates, which are tasked with finding an even better one.
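A toy sketch of what that recursive, benchmark-driven search could look like (everything here - the scoring, the mutation step, the population sizes - is my own illustration, not a real system):

```python
import random

# Toy recursive candidate search: the best scorers on the benchmark
# generate the next wave of candidates. All names here are hypothetical.

def score(candidate, benchmark):
    """Run a candidate against every test environment; higher is better."""
    return sum(env(candidate) for env in benchmark)

def propose(parent):
    """A strong candidate proposes a mutated successor design."""
    return [gene + random.gauss(0, 0.1) for gene in parent]

# Stand-in "test environments": each rewards designs near a hidden target.
benchmark = [lambda c, w=random.uniform(0, 8): -abs(sum(c) - w) for _ in range(50)]
population = [[random.random() for _ in range(16)] for _ in range(1000)]

for generation in range(100):
    population.sort(key=lambda c: score(c, benchmark), reverse=True)
    elites = population[:100]
    # Recursive step: the best-scoring candidates produce the next round.
    population = elites + [propose(random.choice(elites)) for _ in range(900)]
```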


So the reason he won't succeed is that he doesn't have $100 billion to spend.

20

SoylentRox t1_j6immie wrote

I would argue your skepticism is of the same form as above.

"Since AI can't control robotics well (in the sota implementations, it controls robots very well in other papers), by the time I graduated college from the time I selected my major (2-4 years) AI still won't be able to do those things"

You actually may be right, for a far more pedantic reason - good robotics hardware is expensive.

2

SoylentRox t1_j6h7z4w wrote

Ironically, the most anti-AI people I have seen are those who are religious or otherwise determined to remain "skeptical".

Their skepticism usually takes the form of "well, ChatGPT is only right MOST of the time, not ALL of the time, therefore it's not progress toward AGI".

Or "it can solve all these easy problems that only some college students can solve, but it can't solve the HARDEST problems, so it's not AGI".

Or "it can't see or draw" (even though that very capability is being added as we speak).

So they conclude "no AGI for 100+ years", which was their prior belief all along.

1

SoylentRox t1_j6h74gz wrote

You do understand that zero modifications were made to the formula. If the FDA had just approved it immediately, it WOULD have saved hundreds of thousands of lives.

This was a rationally designed vaccine - it wasn't random. The sequences used had all been tested, and the protein targeted was picked from a model of the virus. So there was a legitimate scientific reason to believe it would be safe and effective on the first try - which it was.

The FDA's defense mechanisms were essentially designed a century ago for quacks, to keep them from pushing their snake oil.

0

SoylentRox t1_j6f8bbn wrote

LFP and Sodium batteries offer a pretty good compromise.

With LFP: their 4000+ cycle life works out to at least 11 years, possibly 15-20, before they need replacement, depending on the application. They don't use rare earths and have become reasonably cheap per kWh - BYD Blade cells are in the $70-120 per kWh range (I have seen both numbers). You can buy them right now.

With sodium: similar lifespan and safety benefits to LFP, and similar energy density. No use of lithium, which means we won't bottleneck on lithium. CATL (the world's largest battery manufacturer) promises mass production this year.
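As a sanity check on the cycle-life math (a minimal sketch - the cycling rates are my assumptions):

```python
# Convert rated cycle life to calendar years (cycling rate is an assumption).
cycle_life = 4000        # rated full cycles for LFP, per the comment
cycles_per_day = 1.0     # assumed: daily-cycled storage or commuter EV

print(cycle_life / (cycles_per_day * 365))  # ~11 years at 1 cycle/day
print(cycle_life / (0.6 * 365))             # ~18 years at lighter duty
```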

26

SoylentRox t1_j6f6fpv wrote

So the problem with your analogy is this.

Farming has 2 natural limits:

(1) once you grow enough food to feed everyone plus excess to overcome waste, no more food is needed.

(2) there is only so much land suitable for farming - once you cultivate enough of it, no additional farmland is needed.

Coding...well...dude. I bet someone wants factories on the Moon, O'Neill habitats, biotech plants that make replacement organs, AI doctors that keep people alive no matter what goes wrong, and catgirl sex robots.

Do you have ANY IDEA how much more complex the software to make all of that possible will be? Just try to imagine how many more systems are involved in accomplishing each thing. If software engineers and computer engineers have to do even a tiny fraction of the work, they are all going to be busy for centuries.

2

SoylentRox t1_j6evlu7 wrote

I think there will be 'overhang'. AI is developed into AGI. AGI becomes able to control lesser forms of biology at will (custom plants, custom small animals, immortal pets).

And then gradually the performance gets so good that the FDA and other bottlenecks are bypassed, once it simply can't be denied how good the results are. Hundreds of millions of people who could have been saved will die, just like the FDA slow-walking Moderna cost millions of lives.

Try not to be among them.

8

SoylentRox t1_j6bxjkj wrote

Reply to comment by Ok_Sea_6214 in I’m ready by CassidyHouse

I hope 10% survive. The skies are dark for a reason, and at our current level of knowledge it looks suspiciously easy for a lot of humans to become immortal and grab most of the universe. The remaining problems all look solvable in a reasonable (years to decades) amount of time if you have a superintelligence to handle the details.

2

SoylentRox t1_j69ls82 wrote

I was referring to molecular assemblers - a machine that runs in a vacuum chamber at a controlled temperature. It receives, through plumbing, hundreds of 'feedstock gases' that are pure gases of specific types. It can make many (thousands+) different nanoscale parts, then combine those parts into assemblies, combine those assemblies, and so on.

Everything is made of the same limited library of parts, but they can be combined in many different ways.

This makes possible things like cuboidal metal "cells" that are robotic, do not operate in water, and can in turn interact with each other to form larger machines - something like the 'T-1000' from Terminator 2. (It probably couldn't reconfigure itself as quickly as the machine in the movie, but that doesn't matter, since it wouldn't miss when shooting.)

Custom proteins are for medicine, and won't work at all the same way.

1

SoylentRox t1_j69cerm wrote

The 'shape of the solution' would look like hundreds of thousands, maybe millions, of separate automated STM 'labs' inside some larger research facility (imagine a 1 km x 1 km x 1 km cube or bigger). Millions of experiments would run in parallel, with the goal of finding patterns of atoms to serve as each reliable machine part you need for a full nanoforge, plus other experiments investigating the rules behind many possible nanostructures to develop a general model.

For every real experiment there are millions of simulated ones, where the AI system systematically works on finding a full factory design able to self-replicate the entire factory, and on finding the cheapest bootstrapping path: build the minimum amount of nanomachinery the hard way, with all the rest of the parts made by partially functioning nanoassemblers.

"The hard way" probably means atom by atom, using STM tool heads to force each bond.

3

SoylentRox t1_j69brvx wrote

Agree. Around 2014 I read nanosystems and was pretty enthusiastic about the idea.

But as it turns out, the complexity of solving this problem is so large that human labs just won't be able to do it. Forget decades - I would argue that without some form of AI at least as good as what has already been demonstrated, it may never get solved.

2

SoylentRox t1_j64yel3 wrote

I just mean "how do I make my money back and pay back the investors".

If we want democratized AI, we vote and get the government to pay for the compute and the services of all those rockstars. The compute costs (which will likely soar - later systems this decade will probably burn billions in compute just to train) mean this isn't supportable by an open-source model.

2

SoylentRox t1_j64w50r wrote

I have a better proposal to exploit this.

Build an AGI system smart enough to do the work of a "think tank" (your example). Have it do lots of demo work and prove it's measurably better than the human competition.

Sell the service. Why would you sell the actual AGI architecture/weights or hardware? Sell the milk not the cow lol.

AI/AGIs will probably always be 'rented' except for open source ones that you can 'own' by downloading.

6

SoylentRox t1_j64vpu0 wrote

>I agree in AI models becoming commodities over time as has been seen with Stable Diffusion essentially disrupting the entire business model of paid image generation like Dall-E and Midjourney.

Ehhhhhhhhhh

So the basic technology to make an OK model, yes. But it's quite possible that 'machine learning rockstars', especially if they get recursive self-improvement to work, will be able to make models that have a 'moat' around them - even if it's just because it costs $500m to train the model and $500m to buy the data.

Then they can sell services that are measurably better than the competition... or than hiring humans...

That sounds like a license to print money to me.

2