SoylentRox

SoylentRox t1_j4nckq8 wrote

So here's the feature I think you need to make the tool work:

right now, the machine works by:

<current symbol buffer> + neural network -> <current symbol buffer> + 1 symbol

It needs to become

f(all previous sessions symbols) = salient context

<salient context> + <current symbol buffer> + neural network -> <current symbol buffer> + 1 symbol + <updated salient context>

"Salient context" is whatever the machine needs to keep generating text that matches something like a detective story. So it needs to remember the instructions, the main characters' names, and so on. It does not need to remember every last word of the story so far.
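As a rough Python sketch (everything here is a toy stand-in, not a real model - `summarize` just keeps recent words, where a real system would learn a compression of instructions, character names, and plot state):

```python
def summarize(prev_context, new_text, max_words=50):
    """Toy stand-in for f(all previous session symbols) -> salient context.
    A real system would learn what to keep; this just keeps recent words."""
    words = (prev_context + " " + new_text).split()
    return " ".join(words[-max_words:])

def generate_next(salient_context, buffer):
    """Toy stand-in for the neural network's next-symbol prediction."""
    return "<token>"

def generate_story(instructions, n_tokens=5):
    salient_context = summarize("", instructions)
    buffer = []
    for _ in range(n_tokens):
        symbol = generate_next(salient_context, buffer)
        buffer.append(symbol)
        # the updated salient context comes back out of every step
        salient_context = summarize(salient_context, symbol)
    return buffer, salient_context
```

The point is just the data flow: the context summary is threaded through every generation step instead of the model re-reading the whole history.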

To make it really good it needs to be aware of quality metrics: Amazon/Royal Road review counts and ratings, number of copies of the novel sold on the market, etc. That way it can weight what it learns from a text by how much humans liked that particular structure of text.

After that you'll need the AI to generate many stories, get user feedback, and iterate. I think eventually they will be good, and at some point past that it may discover ways to make them REALLY good that humans have not.

9

SoylentRox t1_j4ir95t wrote

>I don't think I'll see it in my lifetime if Singularity is what you mean by "merging"

Let's assume you are 35. You have about 45-65 years of lifetime left if there are zero improvements made in treatments for aging over the next 45-65 years.

You don't believe the singularity will happen in that time interval? That's 2069-2089 before you croak, again, assuming nothing that slows down aging in rats right now can work on humans.

There are multiple therapies: Yamanaka factors, metformin + sirolimus, others. Adding 50% to rat lifespan isn't uncommon, which would give you another 40 years. So you're saying that none of these treatments work, that asking AI to look at our proteins and what fails as we age and devise a treatment also fails (even though AI can already do much of this today), and that there's still no singularity in about 100 years.

And if you get any of the treatments to work, maybe during that time period someone will use stem cell therapies on your weakest organs - bone marrow, heart, and brain - and add another 40 years during which...

1

SoylentRox t1_j4das0y wrote

I think this depends on your definition of 'original research'. Some AI systems already do research: they're used to set the equipment for the next run based on the numerical results of all the previous runs, e.g. in semiconductor process optimization and fusion energy research. You could argue this isn't 'original' or isn't 'research', but you could devise a lot of experiments that are "just" having the robots repeat an earlier experiment while varying certain parameters in a way the AI 'believes' (based on past data) may give new information.

The key part in that description is having robots sophisticated enough to set up experiments, something we don't currently have.
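A minimal sketch of that loop, assuming nothing about any particular lab's setup (the parameter names and the greedy-plus-jitter strategy are made up for illustration; real systems use Bayesian optimization or similar):

```python
import random

def propose_next_run(history, jitter=0.05):
    """history: list of (params, score) pairs from previous runs.
    Exploit the best-known settings, perturbed slightly to probe
    for new information."""
    if not history:
        return {"temperature": 0.5, "pressure": 0.5}  # arbitrary starting point
    best_params, _ = max(history, key=lambda run: run[1])
    return {key: value + random.uniform(-jitter, jitter)
            for key, value in best_params.items()}
```

Each completed run appends its (params, score) pair to `history`, so proposals drift toward whatever settings worked.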

3

SoylentRox t1_j49wqxg wrote

Reply to comment by MrGate in Programmable matter by crua9

Maybe. It's still made of little robots latching themselves together. That's not really the sci-fi concept of programmable matter, just a smaller version of the swarming robots various labs have shown.

The guts of each robot don't contribute to the properties you want, so the item might have to be thicker, heavier, or harder than it could be.

I think I described a simpler way to accomplish the same thing. Programmable matter will be used, but only when it's a good way to accomplish a task.

2

SoylentRox t1_j49nzw9 wrote

Reply to comment by xzelldx in Programmable matter by crua9

Why not just have robots at the factory make the item the way you want it, using methods that already exist? And if you don't want the item, have better recycling/a refund system so you can swap it for the one you do want.

1

SoylentRox t1_j49nwcq wrote

Reply to comment by SentientHotdogWater in Programmable matter by crua9

Yes, but why? People don't have a steel forge or a 6-axis CNC mill in their homes.

What benefit do people get? If they want a 3D-printed part, they can just order it from Shapeways or another company. Owning your own printer is only cheaper if you do this a lot.

2

SoylentRox t1_j49mwr7 wrote

Reply to comment by MrGate in Programmable matter by crua9

See the singularity hypothesis.

See the research suggesting our cells can actually heal their aging damage given a simple command (the 3 Yamanaka factors). Why don't they self-heal? Apparently because we already live long enough that nature had insufficient benefit in evolving a mechanism to emit the command: too many of us died of environmental causes first, and it wouldn't heal some types of damage like scars or missing fingers anyway.

So it's possible. But sure, even if programmable matter were real, like, a drone bringing you a chair and taking away your unwanted stool or whatever might be a better and cheaper way to handle this. There are a lot of drawbacks to programmable matter. It won't be as strong or light as something custom made out of the materials needed. Or as cheap.

Drone based delivery - basically what we already have except instead of a human unloading the amazon van it's a robot - and robotic based recycling (being prototyped) might give you all the benefits of programmable matter in a much simpler way.

1

SoylentRox t1_j3rovna wrote

It's just the opinions on the EleutherAI Discord. Arguably, weak general AI will be here in 1-2 years.

My main point is that the members I am referring to all live in the Bay Area and work for Hugging Face and OpenAI. Their opinion is more valid than, say, that of a 60-year-old professor in the artificial intelligence department at Carnegie Mellon.

2

SoylentRox t1_j3p8ys3 wrote

>most AI/ML expert surveys continue to have an AGI arrival year average of some decades from now/mid-century plus, and the majority of individuals who are AI/ML researchers have similar AGI timelines

You know, when the Manhattan Project was being worked on, who would you have trusted for a prediction of the first nuke detonation: Enrico Fermi, or some physicist who had merely worked on radioactive materials?

I'm suspicious that any "experts" with valid opinions exist outside of well-funded labs (OpenAI/Google/Meta/Anthropic/Hugging Face, etc.)

They are saying a median of ~8 years, which would be 2031.

13

SoylentRox t1_j3p4pzb wrote

>pro euthanasia

If you had a treatment for aging, most physical diseases and effective treatments for most psychiatric conditions, how can you allow euthanasia?

Any problem a person has - whether they feel they don't want to live anymore, or they have a currently incurable condition - can be fixed. Maybe not now, but since there's no aging, they can wait however long a cure takes. And if they feel they don't want to live anymore, you can connect nanowires and/or sensors deep inside their brain, and with that level of technology there is probably a problem you'll be able to detect.

Do you then just kill them, knowing their impulse to die is coming from broken circuitry in their brain that you can fix (or, again, wait for a future cure)?

Do you not even do the testing, and just accept their wish to die without inserting the probes, knowing there is probably a problem with their brain?

0

SoylentRox t1_j3p47dh wrote

>Only one thing isn't ok: If you try to dictate it to others. No life- or deathsharia.

Umm, I'm sorry, but it's stronger than that. If you run a country that prohibits your citizens from accessing aging medication, launching a lightning strike that slaughters your entire government - and holding a trial for any survivors and executing them for mass murder - is a perfectly reasonable moral tradeoff. It's perfectly fine to murder 100k people to save millions.

Once there's a treatment, we'll see every person with white hair the way we saw concentration camp victims.

1

SoylentRox t1_j3jtjb1 wrote

>Machines cannot learn how to do something without clear, replicable examples.

Wrong. Reinforcement learning can and does let machines find out how to do something. They find a way that is often better than the way humans know how.

>The real advancements in AI haven’t been in “creative thinking,” but in accuracy and efficiency.

Some of the solutions to RL environments are pretty creative, like box surfing. https://openai.com/blog/emergent-tool-use/

>Answer The Ultimate Question of Life, the Universe and Everything

humans can't

>Solve Annoying Interview Puzzles

people on r/csMajors have used chatGPT to cheat on interview assessments. It apparently works amazingly well.

>Write Bug-Free Software

neither can humans

15

SoylentRox t1_j2nu1ph wrote

It doesn't work that way. You can't reduce precision like that without tradeoffs - reduced model accuracy, for one thing.

You can in some cases add more weights and retrain for fp16.

Int8 may be out of the question.

Also, chatGPT is like the Wright Brothers' flyer. Nobody is going to settle for an AI that can't even see or control a robot, so models are only going to get heavier in weights and more computationally expensive.
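A toy illustration of the accuracy tradeoff (random numbers at a typical small weight scale, not any real model's weights), showing that an int8 round trip loses much more information than fp16:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=1000).astype(np.float32)  # stand-in weights

# fp16 round trip: small error for well-scaled weights
err_fp16 = np.abs(w - w.astype(np.float16).astype(np.float32)).max()

# int8 round trip: scale to [-127, 127], round, scale back
scale = np.abs(w).max() / 127.0
w_deq = np.round(w / scale).astype(np.int8).astype(np.float32) * scale
err_int8 = np.abs(w - w_deq).max()

print(err_fp16 < err_int8)  # True: int8 quantization error dominates
```

Real int8 deployment uses per-channel scales, calibration, or retraining to claw that accuracy back, which is why it's not free.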

1

SoylentRox t1_j2los27 wrote

Nobody will give you the weights to run a SOTA model locally. These academic/test models, sure, but the advanced ones built for profit/high-end use won't be given out that way.

You'll have to pay for usage. I mean if $1 gets you what would take an hour of work for someone with a college degree, it's easily worth paying it.

Not sure what the pay rates will turn out to be, but the current chatGPT can slam out in 30 seconds what would have taken me several hours.

2

SoylentRox t1_j2l8yg6 wrote

No.
To run GPT here's what it actually takes.

GPT-3 is 175 billion parameters. Each parameter is a 32-bit (4-byte) floating point number.

So you need 700 gigabytes of memory.

For it not to run unusably slow, you need thousands of teraflops - many times what an old server CPU is capable of.

One Nvidia A100 comes in an 80-gigabyte model, and they are about $25,000 each. You can't use consumer GPUs because you need the interconnect that links multiple GPUs together.

Thus you need at least 9 of them (700/80 = 8.75, rounded up), or $225,000 just for the GPUs.

The server that hosts them, cooling, racks, etc adds extra cost. Probably at least $300,000.

The power consumption is 400 watts per A100, so 3.6 kilowatts.
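The arithmetic above as a sanity check (the price and wattage are this comment's rough 2022-era figures, not vendor quotes; rounding the GPU count up to a whole card changes the totals slightly):

```python
import math

params = 175e9               # GPT-3 parameter count
mem_gb = params * 4 / 1e9    # 4 bytes per fp32 parameter -> 700 GB

gpus = math.ceil(mem_gb / 80)      # 80 GB per A100 -> 9 cards
gpu_cost = gpus * 25_000           # ~$25k per A100
power_kw = gpus * 400 / 1000       # 400 W each

print(mem_gb, gpus, gpu_cost, power_kw)  # 700.0 9 225000 3.6
```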

11

SoylentRox t1_j2faik3 wrote

That's what I was thinking. For example, if it stimulates synapses at some minimum dose, the synapses it acts on will saturate the ability of their target axons to carry any more signal, so doses above the minimum won't have a stronger effect.

For example, an injection of lidocaine has a minimum dose, but doses above the minimum don't make the nerves in the region any more numb.
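That saturation is the standard Emax dose-response shape; a toy version (the parameter values are made up purely for illustration):

```python
def effect(dose, e_max=1.0, ec50=10.0):
    """Fraction of maximal effect at a given dose (Hill equation, n=1)."""
    return e_max * dose / (ec50 + dose)

# once well past EC50, doubling the dose barely moves the effect
low = effect(100)    # ~0.91 of maximum
high = effect(200)   # ~0.95 of maximum
```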

11

SoylentRox t1_j2f9pw8 wrote

I think the issue is that a Cerebras wafer has only 40 gigabytes of SRAM.

PaLM is 540 billion parameters - that's 2.16 terabytes in weights alone (at 4 bytes each).

To train it you need more memory than that - I think I read it's a factor of ~3 - so call it 6.5 terabytes of memory.

That's roughly 80 A100 80 GB GPUs, or I dunno how you do it with Cerebras hardware; at 40 GB each you'd presumably need around 160 of them.

Sure, it might train the whole model in hours though; Cerebras has the advantage of being much faster.
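Same back-of-the-envelope in code (fp32 weights and the ~3x training multiplier are this comment's assumptions, and the device counts ignore interconnect and activation memory):

```python
import math

params_b = 540                 # PaLM parameters, in billions
weight_gb = params_b * 4       # fp32 -> 2160 GB (2.16 TB) of weights
train_gb = weight_gb * 3       # ~3x for gradients/optimizer state -> 6480 GB

a100s = math.ceil(train_gb / 80)      # 80 GB per A100
cs_wafers = math.ceil(train_gb / 40)  # 40 GB SRAM per Cerebras wafer

print(weight_gb, train_gb, a100s, cs_wafers)  # 2160 6480 81 162
```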

Speed matters, once AI wars get really serious this might be worth every penny.

5

SoylentRox t1_j2f7o9e wrote

Reply to comment by ElvinRath in Game Theory of UBI by shmoculus

Why is your threshold 90%?

One reasonable way it could happen: AI and robotics gradually replace everyone in the jobs that are easier to learn.

So working in mines, as assembly workers in factories, as vehicle drivers, as warehouse pickers... these jobs will go first.

There are still plenty of jobs, but they all require education/talent/physical characteristics. There are plenty of doctors - though they use AI to help them, they still oversee it - and therapists. There are more SWEs than ever, but you have to pass a test of talent to get the job. There are obviously many sex workers and webcam models, but you have to be young and hot.

It's totally reasonable that 10-50% of the population might be unemployable for a period of time, until AI gets substantially better.

3

SoylentRox t1_j2f5461 wrote

Reply to comment by AndromedaAnimated in Game Theory of UBI by shmoculus

Surely people will change their tunes when there are no jobs available, right?

I don't know how it is in Germany, but in the USA, the combination of early retirements partly driven by covid, and generational differences in the number of children have reduced the number of workers. There's pretty much a labor shortage in every industry. Anyone without a criminal record or major disability can probably easily get a job. (I won't say this is necessarily true nationwide but in high and medium living cost areas it is)

4