acutelychronicpanic t1_jdqrppa wrote

Probably not? At least not any public models I've heard of. If you had a model architecture design AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.

LLMs show absolutely huge potential for being a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine that it needs a specific kind of machine learning model to understand something, cook up an architecture, and choose appropriate data?
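A toy sketch of that "conductor" idea: a coordinator inspects a task and routes it to a specialist module. In a real system an LLM would make the routing decision; here a keyword heuristic stands in for it, and all module names are invented for illustration.

```python
# Hypothetical specialist modules (names are made up for this sketch).
def vision_module(task: str) -> str:
    return f"[vision] analyzed: {task}"

def tabular_module(task: str) -> str:
    return f"[tabular] modeled: {task}"

MODULES = {
    "image": vision_module,
    "spreadsheet": tabular_module,
}

def coordinator(task: str) -> str:
    """Stand-in for an LLM deciding which specialist a task needs."""
    for keyword, module in MODULES.items():
        if keyword in task.lower():
            return module(task)
    return f"[coordinator] no specialist found, handling directly: {task}"

print(coordinator("Classify this image of a cat"))
```

The interesting step, of course, is replacing the keyword lookup with an LLM call that can also decide a suitable module doesn't exist yet.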

2

acutelychronicpanic t1_jdpw1fd wrote

If unemployment were so bad that no taxes were coming in from that many people, I don't think tax revenue would be the biggest concern.

My hope is that corporations realize the smoothest way to transition from here, to an AI-augmented workforce, and then to post-scarcity will be something like UBI.

Someone has to actually purchase the products produced unless we are throwing out the whole system (which would also place their roles as capital owners in jeopardy).

2

acutelychronicpanic t1_jdparx0 wrote

There isn't such a thing as an unprompted GPT-X when it comes to alignment and AI safety. It seems to be explicitly trained on this and probably has an invisible initialization prompt above the first thing you type in. That prompt will have said a number of things regarding the safety of humans.

3

acutelychronicpanic t1_jdp9l67 wrote

As a point to support you: We already live in a world where everyone could be fed if we wanted to do it. The US has more empty houses than homeless, and plenty of land on top of that.

I'm not saying we're doomed, but we should directly address the issue if we don't want a decade of unprecedented turmoil.

4

acutelychronicpanic t1_jd7o0et wrote

I agree entirely with what you are saying. I just think that most people talking about this greatly underestimate our available resources as technology improves.

Say we get fusion.

What does carrying capacity and farmland acreage even mean when you can create tons of starch and protein in bioreactors for pennies a pound? With the inputs being things like air, water, energy, and abundant minerals?

1

acutelychronicpanic t1_jd7ga3k wrote

This is an unpopular opinion with all of the environmental concerns we have at the moment (which are both legitimate and serious), but with advances in technology, Earth can hold a ludicrous number of people comfortably.

If AGI were here, genetic engineering of crops would be supercharged, fusion would be fast-tracked, and truly intelligent systems would be ubiquitous.

A Thanos-inspired solution would do far more harm than the overpopulation it is supposedly addressing.

11

acutelychronicpanic t1_jaan8nl wrote

I hope you succeed in making the world a better place. Just don't focus so much on efficiency and perfect solutions. They can be short-sighted.

It seems to me like you have an idea for what the ideal world might look like. But beware that it would be an unstable solution. The problem with central control is that it is fragile. Voting systems are inefficient, but they are more robust and harder to corrupt. Still corruptible, obviously, but less so.

You want to seek a system that can withstand the pressures and corruption of the real world.

1

acutelychronicpanic t1_jaa88mc wrote

The problem with this is that most people think they are wise. I doubt you will find many historical examples of governance where the leaders didn't claim to have the most wisdom and insight.

The fault in this idea is in the actual implementation. How do you find the most competent? How do you ensure that those chosen aren't corrupted by power? What does this system look like in 50 years?

It's equivalent to saying "let's all agree to only do good things for good reasons."

2

acutelychronicpanic t1_j4qjjmf wrote

I don't know about this particular implementation, but I agree that AI models will need to be interconnected to achieve higher complexity "thought".

I think LLMs are close to being able to play the role of coordinator. By using language as a medium of communication between modules and systems, we could even have interpretable AI.
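As a rough illustration of why language as the medium helps interpretability: if every module reads and writes plain text, the whole exchange becomes a human-readable transcript. The module names below are invented for the sketch.

```python
# Shared transcript: every inter-module message is plain text, so the
# system's "reasoning" can be audited after the fact.
transcript = []

def send(sender: str, message: str) -> str:
    transcript.append(f"{sender}: {message}")
    return message

def planner(goal: str) -> str:
    # Stand-in for an LLM decomposing a goal into an instruction.
    return send("planner", f"To achieve '{goal}', first summarize the data.")

def summarizer(instruction: str) -> str:
    # Stand-in for a downstream module acting on that instruction.
    return send("summarizer", f"Done. ({instruction})")

plan = planner("report quarterly sales")
result = summarizer(plan)

# The transcript doubles as an interpretability trace:
for line in transcript:
    print(line)
```

Compare this with passing opaque embedding vectors between modules: the computation might be the same, but there would be nothing readable to audit.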

1