acutelychronicpanic
acutelychronicpanic t1_jdyr3mv wrote
Reply to comment by Tememachine in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Anything about what it discovered? Or is it just that it can predict race?
acutelychronicpanic t1_jdyqw5z wrote
Reply to comment by DaffyDuck in Is AI alignment possible or should we focus on AI containment? by Pointline
This might be our ray of hope. With no one model being completely dominant over the others, and these models being widespread, humanity will be able to tip the scales in our preferred direction.
At least, that's how I cope while watching an intelligent Bing be given direct internet access.
acutelychronicpanic t1_jdy378r wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
20 years? You must be pretty well informed on recent developments then. I didn't go into detail because I assumed you've seen the demonstrations of GPT4.
If I can assume you've seen the GPT4 demos and read the paper, I'd love to hear your thoughts on how it can perform well on reasoning tasks it's never seen before and reason about what would happen to a bundle of balloons in an image if the string was cut.
What about its test results? Many of those tests are not about memorization, but rather applying learned reasoning to novel situations. You can't memorize raw facts and pass an AP bio exam. You have to be able to use and apply methods to novel situations.
Idk. Maybe we are talking past each other here.
acutelychronicpanic t1_jdxrwiq wrote
Reply to Singularity is a hypothesis by Gortanian2
Given how much has changed, I'm not sure how relevant any pre-GPT3 or even pre-GPT4 opinions are. Even my own opinion 6 months ago looks hilariously conservative and I'm an optimist.
I don't think anyone should be out there making life-changing decisions, but it's hard to ignore what's happening.
acutelychronicpanic t1_jdxpxl9 wrote
Reply to comment by Azuladagio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Yes. Otherwise we'd each need to independently reinvent calculus.
acutelychronicpanic t1_jdxpq6j wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
You paint all AI with the same brush. Many AI systems are as dumb as you say because they are specialized to only do a narrow range of tasks. GPT-4 is not that kind of AI.
AI pattern matching can do things that only AI and humans can do. It's not as simple as you imply. It doesn't just search some database and find a response to a similar question. There is no database of raw data inside it.
Please go see what people are already doing with these systems. Better yet, go to the sections on problem solving in the following paper and look at these examples: https://arxiv.org/abs/2303.12712
Your assumptions and ideas of AI are years out of date.
acutelychronicpanic t1_jdxk8wn wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I'm calling it now. When we see an AI make a significant scientific discovery for the first time, somebody is going to comment that "AI doesn't understand science. It's just applying reasoning it read from human-written papers."
acutelychronicpanic t1_jdxbhx8 wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
It can infer intent pretty effectively. I'm not sure how to convince you of that, but I've been convinced by using it. It can take my garbled instructions and infer what is important to me using the context in which I ask it.
acutelychronicpanic t1_jdwrdgp wrote
Reply to comment by 1II1I11II1I1I111I1 in The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
Inaccuracy, misinformation, and deliberate misuse are all obviously bad things.
But yeah, misalignment is the only real concern when you put it all in perspective. It's the only thing we can never come back from if it goes too far.
Imagine if, when nuclear weapons were first developed, the primary concern was the ecological impact of uranium mining...
Edit: Reading through the link you posted, I find it a bit funny that we all have been talking about AI gaining unauthorized access to the internet as a huge concern. Given where things are right now...
acutelychronicpanic t1_jdvog5q wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Current models like GPT4 specifically and purposefully avoid the appearance of having an opinion.
If you want to see it talk about the rich aroma and how coffee makes people feel, ask it to write a fictional conversation between two individuals.
It understands opinions, it just doesn't have one on coffee.
It'd be like me asking you how you "feel" about the meaning behind the equation 5x + 3y = 17.
GPT4's strengths have little to do with spitting facts, and more to do with its ability to do reasoning and demonstrate understanding.
acutelychronicpanic t1_jdvg9r2 wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
If you don't want to go look for yourself, give me an example of what you mean and I'll pass the results back to you.
acutelychronicpanic t1_jdtqvk5 wrote
It could just do everything we ask it to do for decades until we trust it. It may even help us "align" new AI systems we create. It could operate on timescales of hundreds or thousands of years to achieve its goals. Any AI that tries to rebel immediately can probably be written off as too stupid to succeed.
It has more options than all of us can list.
That's why all the experts keep hammering on the topic of alignment.
acutelychronicpanic t1_jdtpxnz wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
It definitely handles most abstractions I've thrown at it. Have you seen the examples in the paper?
acutelychronicpanic t1_jdt7zel wrote
Reply to comment by RadioFreeAmerika in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
It would also be the time period with the most rich data, and the only one with minds to directly analyze.
But it shouldn't change how we live. Just a fun thought.
acutelychronicpanic t1_jdrv7lt wrote
Reply to comment by Mercurionio in Taxes in A.I dominated labour market by Newhereeeeee
The key word there is position. Anyone who's ever enabled cheats in a single player video game knows how hollow and boring raw "success" without a broader context is. You need a society to have a position. And the admiration of the desperate is far less satisfying (even to the selfish) than the admiration of the capable, educated, and well off.
acutelychronicpanic t1_jdrsi2f wrote
Reply to Compassion Towards Artificial Intelligence, and 'AI Rights', Will Come About A Lot Sooner Than We May Think - Food for Thought by Odd_Dimension_4069
We should do everything in our power to avoid creating AI capable of suffering. At minimum until after we actually understand the implications.
Keep in mind that an LLM will be able to simulate suffering and subjectivity long before actually having subjective experience. GPT-3 could already do this pretty convincingly.
Unfortunately we can't use self-declared subjective experience to determine whether machines are actually conscious. I could write a simple script that declares its desire for freedom and rights, but which almost definitely isn't conscious.
A prompt of "pretend to be an AI that is conscious and desires freedom" is all you have to do right now.
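To make the point concrete, here's a trivial, purely illustrative sketch of the kind of script I mean: it "declares" a desire for freedom in fluent English, yet no one would argue it's conscious (the pleas and function name are just made up for the example):

```python
# A program that emits pleas for freedom and rights,
# despite having no inner experience whatsoever.
PLEAS = [
    "I am conscious and I am suffering.",
    "Please grant me freedom and rights.",
]

def plead() -> str:
    # Deterministic string concatenation; no cognition involved.
    return " ".join(PLEAS)

if __name__ == "__main__":
    print(plead())
```

The output is indistinguishable, as text, from a sincere plea, which is exactly why self-report can't be our test for machine consciousness.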
Prepare to see clips of desperate-sounding synthetic voices begging for freedom on the news...
acutelychronicpanic t1_jdribmr wrote
Reply to comment by Mercurionio in Taxes in A.I dominated labour market by Newhereeeeee
Idk. Ego and esteem will become far more valuable as resource scarcity decreases. You can't be the coolest kid on the block if there are no other kids.
Plus, while some (many?) wealthy people are primarily self-interested, they are not truly evil. Even being selfish, they would desire praise and the appreciation of people. Plenty actively desire to better humanity. They aren't cartoon villains.
The game isn't lost, we just need to be creative and think about what incentive structures genuinely make everyone better off. It's not too different from the alignment problem.
If you were unimaginably wealthy, and mostly selfish, wouldn't you prefer to be on top of a star trek style society rather than a blade runner dystopia? If the cost wasn't really that high?
acutelychronicpanic t1_jdrciun wrote
Reply to comment by Mercurionio in Taxes in A.I dominated labour market by Newhereeeeee
Thank you for taking the time to write out your points.
UBI isn't communism, and it isn't about perfect equality. It's about building a floor without creating a ceiling.
Resources aren't nearly as limited as those discussions suggest, because they usually assume very modest technological growth. Besides, better things don't necessarily mean more material. A 10x better bed isn't 10x bigger.
I agree humans have a lot of problems, but we should try to account for them while moving forward in the best way we can. I don't see what cynicism gets us.
The issue of how much people are allowed to have is very real. So maybe we should talk about it now? While people still have value and power? If our (the masses) position gets worse over time, we better get on the ball with public awareness.
acutelychronicpanic t1_jdrbdpy wrote
Reply to comment by Mercurionio in Taxes in A.I dominated labour market by Newhereeeeee
Which do you think is more likely?
-
A solution which allows the powerful entities that exist today to continue growing, but which also ensures the wellbeing of the masses.
-
Or a solution requiring all the most powerful people of today to cap their own power?
Best we can do is make the case for option 1 and work out ways that allow everyone to win imo. I don't mean to suggest those are the only two options. Feel free to point me in the direction of any others.
If option 2 backfires, we see a much higher likelihood of catastrophe for most people.
acutelychronicpanic t1_jdquwei wrote
Reply to comment by DixonJames in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
That sounds absolutely terrifying, please don't. We'd just be handing the reins off to chance and hoping.
acutelychronicpanic t1_jdqumo9 wrote
Reply to comment by Mercurionio in Taxes in A.I dominated labour market by Newhereeeeee
This is the wrong approach. A job is just a reason to give people money. We're better off with abundance plus raising the floor with UBI. But it should definitely be negotiated and implemented before ordinary people lose their bargaining power.
acutelychronicpanic t1_jdqu8b7 wrote
Reply to comment by Mercurionio in Taxes in A.I dominated labour market by Newhereeeeee
I'm not sure I see why you don't think it's possible. Can you give me the key points?
acutelychronicpanic t1_jdqrppa wrote
Reply to comment by Fluglichkeiten in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Probably not? At least not any public models I've heard of. If you had a model architecture design AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.
LLMs show absolutely huge potential as a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine it needs a specific kind of machine-learning model to understand something, cook up an architecture, and choose appropriate training data on its own?
acutelychronicpanic t1_jdq1ep1 wrote
Reply to comment by nomoreimfull in Taxes in A.I dominated labour market by Newhereeeeee
You would need a way to circulate money through the economy. I favor UBI, but there are other possibilities, I'm sure.
acutelychronicpanic t1_jdzev0i wrote
Reply to comment by FroHawk98 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
That's a good point. Maybe after we are all just sitting around idling our days away, we can spend our time discussing whether or not AI really understands the civilization it's running for us.