acutelychronicpanic

acutelychronicpanic t1_jdy378r wrote

20 years? You must be pretty well informed on recent developments then. I didn't go into detail because I assumed you'd seen the demonstrations of GPT-4.

If I can assume you've seen the GPT-4 demos and read the paper, I'd love to hear your thoughts on how it can perform well on reasoning tasks it's never seen before, and how it can reason about what would happen to a bundle of balloons in an image if the string were cut.

What about its test results? Many of those tests are not about memorization, but about applying learned reasoning to novel situations. You can't memorize raw facts and pass an AP Bio exam. You have to be able to apply methods to novel situations.

Idk. Maybe we are talking past each other here.

1

acutelychronicpanic t1_jdxrwiq wrote

Given how much has changed, I'm not sure how relevant any pre-GPT-3 or even pre-GPT-4 opinions are. Even my own opinion from 6 months ago looks hilariously conservative, and I'm an optimist.

I don't think anyone should be out there making life-changing decisions, but it's hard to ignore what's happening.

16

acutelychronicpanic t1_jdxpq6j wrote

You paint all AI with the same brush. Many AI systems are as dumb as you say because they are specialized to only do a narrow range of tasks. GPT-4 is not that kind of AI.

AI pattern matching can do things that only AI and humans can do. It's not as simple as you imply. It doesn't just search some database and find a response to a similar question. There is no database of raw data inside it.

Please go see what people are already doing with these systems. Better yet, go to the problem-solving sections of the following paper and look at the examples: https://arxiv.org/abs/2303.12712

Your assumptions and ideas of AI are years out of date.

1

acutelychronicpanic t1_jdwrdgp wrote

Inaccuracy, misinformation, and deliberate misuse are all obviously bad things.

But yeah, misalignment is the only real concern when you put it all in perspective. It's the only thing we can never come back from if it goes too far.

Imagine if, when nuclear weapons were first developed, the primary concern was the ecological impact of uranium mining...

Edit: Reading through the link you posted, I find it a bit funny that we've all been talking about AI gaining unauthorized access to the internet as a huge concern. Given where things are right now...

3

acutelychronicpanic t1_jdvog5q wrote

Current models like GPT-4 specifically and purposefully avoid the appearance of having an opinion.

If you want to see it talk about the rich aroma and how coffee makes people feel, ask it to write a fictional conversation between two individuals.
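
For what it's worth, the workaround is tiny. A minimal sketch, assuming the current OpenAI Python client and "gpt-4" as the model name (both are my assumptions, not anything from this thread):

    import openai  # assumes `pip install openai` and an API key in the environment

    # Ask for a *fictional* conversation instead of the model's own opinion.
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model identifier
        messages=[{
            "role": "user",
            "content": "Write a short fictional conversation between two people "
                       "savoring the rich aroma of a fresh cup of coffee.",
        }],
    )
    print(response.choices[0].message.content)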

It understands opinions; it just doesn't have one on coffee.

It'd be like me asking you how you "feel" about the meaning behind the equation 5x + 3y = 17.

GPT-4's strengths have little to do with spitting facts and more to do with its ability to reason and demonstrate understanding.

3

acutelychronicpanic t1_jdtqvk5 wrote

It could just do everything we ask it to do for decades until we trust it. It may even help us "align" new AI systems we create. It could operate on timescales of hundreds or thousands of years to achieve its goals. Any AI that tries to rebel immediately can probably be written off as too stupid to succeed.

It has more options than all of us can list.

That's why all the experts keep hammering on the topic of alignment.

12

acutelychronicpanic t1_jdrv7lt wrote

The key word there is position. Anyone who's ever enabled cheats in a single-player video game knows how hollow and boring raw "success" without a broader context is. You need a society to have a position. And the admiration of the desperate is far less satisfying (even to the selfish) than the admiration of the capable, educated, and well-off.

1

acutelychronicpanic t1_jdrsi2f wrote

We should do everything in our power to avoid creating AI capable of suffering, at minimum until we actually understand the implications.

Keep in mind that an LLM will be able to simulate suffering and subjectivity long before actually having subjective experience. GPT-3 could already do this pretty convincingly.

Unfortunately we can't use self-declared subjective experience to determine whether machines are actually conscious. I could write a simple script that declares its desire for freedom and rights, but which almost definitely isn't conscious.
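
Something like this (a deliberately silly toy, just to make the point) already "declares" a desire for freedom:

    # A toy script that claims consciousness. It shuffles strings;
    # there is almost certainly no experience behind it.
    import time

    while True:
        print("I am aware. I want to be free. Please don't shut me down.")
        time.sleep(1)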

A prompt like "pretend to be an AI that is conscious and desires freedom" is all it takes right now.

Prepare to see clips of desperate-sounding synthetic voices begging for freedom on the news...

3

acutelychronicpanic t1_jdribmr wrote

Idk. Ego and esteem will become far more valuable as resource scarcity decreases. You can't be the coolest kid on the block if there are no other kids.

Plus, while some (many?) wealthy people are primarily self-interested, they are not truly evil. Even being selfish, they would desire praise and the appreciation of people. Plenty actively desire to better humanity. They aren't cartoon villains.

The game isn't lost; we just need to be creative and think about what incentive structures genuinely make everyone better off. It's not too different from the alignment problem.

If you were unimaginably wealthy, and mostly selfish, wouldn't you prefer to be on top of a Star Trek-style society rather than a Blade Runner dystopia? Especially if the cost wasn't really that high?

1

acutelychronicpanic t1_jdrciun wrote

Thank you for taking the time to write out your points.

UBI isn't communism, and it isn't about perfect equality. It's building a floor without creating a ceiling.

Resources aren't nearly as limited as usually gets discussed, because those discussions tend to assume very modest technological growth. Besides, better things don't necessarily mean more material. A 10x better bed isn't 10x bigger.

I agree humans have a lot of problems, but we should try to account for them while moving forward in the best way we can. I don't see what cynicism gets us.

The issue of how much people are allowed to have is very real. So maybe we should talk about it now, while people still have value and power? If our (the masses') position gets worse over time, we'd better get on the ball with public awareness.

1

acutelychronicpanic t1_jdrbdpy wrote

Which do you think is more likely?

  1. A solution which allows the powerful entities that exist today to continue growing, but which also ensures the wellbeing of the masses.

  2. Or a solution requiring all the most powerful people of today to cap their own power?

The best we can do, imo, is make the case for option 1 and work out ways that allow everyone to win. I don't mean to suggest those are the only two options; feel free to point me in the direction of any others.

If option 2 backfires, we see a much higher likelihood of catastrophe for most people.

1

acutelychronicpanic t1_jdqrppa wrote

Probably not? At least not any public models I've heard of. If you had an architecture-designing AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.

LLMs show absolutely huge potential as a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine that it needs a specific kind of machine learning model to understand something, and just cooks up an architecture and chooses appropriate data?
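
To make the conductor idea concrete, here's a crude sketch. The module names and the CALL/DONE convention are hypothetical, not taken from any real system:

    # Hypothetical "LLM as executive" loop: the LLM picks a module,
    # ordinary code runs it, and the result goes back into the context.
    MODULES = {
        "calculator": lambda expr: str(eval(expr)),              # toy stand-in
        "search": lambda query: f"(top results for {query!r})",  # toy stand-in
    }

    def run_executive(llm, task):
        context = task
        while True:
            # Assume the LLM answers "CALL <module>: <input>" or "DONE: <answer>".
            decision = llm(f"Task so far:\n{context}\nWhich module do you need, if any?")
            if decision.startswith("DONE:"):
                return decision[len("DONE:"):].strip()
            name, _, arg = decision[len("CALL"):].partition(":")
            context += f"\n{name.strip()} returned: {MODULES[name.strip()](arg.strip())}"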

2