turnip_burrito
turnip_burrito t1_jabrna2 wrote
Reply to comment by PoliticallyCorrect- in How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
70% automation?
turnip_burrito t1_jabmheb wrote
Reply to comment by [deleted] in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
Not when powerful AI is being fine-tuned to maximize a reward (money).
This is the whole reinforcement learning alignment problem, just with human+AI instead of AI by itself. Unaligned incentives (money vs. human well-being).
turnip_burrito t1_jablzeb wrote
Reply to comment by drsimonz in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
There's also a large risk of somebody accidentally making it evil. We should probably stop training on data that has these narratives in it.
We shouldn't be surprised when we train a model on X, Y, Z and it can do Z. I'm actually surprised that so many people are surprised at ChatGPT's tendency to reproduce (negative) patterns from its own training data.
The GPTs we've created are basically split personality disorder AI because of all the voices on the Internet we've crammed into the model. If we provide it a state (prompt) that pushes it to some area of its state space, then it will evolve according to whatever pattern that state belongs to.
tl;dr: It won't take an evil human to create evil AI. All it could take is some edgy 15-year-old script kiddie messing around with publicly-available near-AGI.
turnip_burrito t1_jaavbn1 wrote
Reply to comment by Donkeytonkers in Snapchat is releasing its own AI chatbot powered by ChatGPT by nick7566
I wish OpenAI hadn't ever released ChatGPT. Also all those "democratize AI" guys screaming for research labs to release their AI for public use. What a mess. Now we're gonna end up with third party non-research corporations trying to use AI to make money to our long term detriment, and probably get royally rekt by some corpo money-making AI.
turnip_burrito t1_jaasfdp wrote
Reply to How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
I don't think it's really worth worrying about. AI won't be able to do your job until AGI, and automation will be brittle and weak until then. By the time AGI rolls around, all information workers will lose their jobs almost simultaneously.
turnip_burrito t1_ja9rdv1 wrote
Reply to comment by maskedpaki in AI powered brain implants smash thought-to-text speed record by jrstelle
That makes sense.
turnip_burrito t1_ja9qisb wrote
Reply to comment by ImproveOurWorld in What technology can we expect 200 years from now in the year 2223? by AdorableBackground83
There is no point in simulating every atom, you're right.
Also such an "always on atom level" Earth-simulating machine would be larger than Earth itself, which seems like a waste of resources.
turnip_burrito t1_ja9p7hw wrote
Reply to comment by maskedpaki in AI powered brain implants smash thought-to-text speed record by jrstelle
I'm curious too. When you're thinking, doesn't an entire sentence of internal dialogue flash through your mind in an instant? But then there are long periods with no internal dialogue in between. I wonder what it averages out to.
turnip_burrito t1_ja78krt wrote
Reply to comment by Ok_Sea_6214 in Singularity claims its first victim: the anime industry by Ok_Sea_6214
AI will not replace us quite that quickly.
turnip_burrito t1_ja78g5n wrote
Reply to comment by dwarfarchist9001 in Singularity claims its first victim: the anime industry by Ok_Sea_6214
No, the actual definition is the point in time when technological progress makes predictions of the future useless. Usually in the context of AI, but it could also be due to other technology.
turnip_burrito t1_ja6wfzt wrote
Reply to comment by CypherLH in AI technology level within 5 years by medicalheads
In a human brain, I'd guess it's a mix of both things: a more reflexive response not requiring labeling, and a response to many different kinds of post-labeled signals relating to the door. Not sure how much of each though.
turnip_burrito t1_ja6upi3 wrote
Reply to comment by [deleted] in Bio-computronium computer learns to play pong in 5 minutes by [deleted]
But it has a way higher probability of being conscious then.
turnip_burrito t1_ja6re1h wrote
Reply to comment by DizzyNobody in Raising AGIs - Human exposure by Lesterpaintstheworld
I think if we had the right resources, this would make a hell of a research paper and conference talk.
turnip_burrito t1_ja6mrbg wrote
Reply to comment by dwarfarchist9001 in Large language models generate functional protein sequences across diverse families by MysteryInc152
Spooky model magic.
turnip_burrito t1_ja6mf21 wrote
Reply to Is style the next revolution? by nitebear
> style is just a desperate grasp that humans have something to offer that ai doesn't.
I think this is the correct assessment. But we may choose human media/products/services for other subjective reasons, like a "journey, not the destination" mindset or social connection.
turnip_burrito t1_ja68bgs wrote
Reply to comment by Yuli-Ban in Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’ by Gold-and-Glory
That was entertaining but... dafuq?
It seems way more likely to me that we get aligned AGI or unaligned AGI than whatever that is lol
turnip_burrito t1_ja5y3hb wrote
Reply to comment by Nervous-Newt848 in Is multi-modal language model already AGI? by Ok-Variety-8135
I agree with all of this, but just to be a bit over-pedantic on one bit:
> Models can't speak or hear when they want to. It's just not part of their programming.
As you said, it's not part of their programming in today's models. In general though, it wouldn't be too difficult to construct a new model that, at each timestep, uses both external stimuli and its internal hidden state to decide whether to speak/interrupt or keep listening. At first glance such a thing actually sounds trivial.
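Just to make the idea concrete, here's a minimal toy sketch (the class name, dimensions, and input features are all made up, not from any existing system) of an agent that carries a recurrent internal state and emits a listen/speak decision every timestep:

```python
import torch
import torch.nn as nn

class TurnTakingAgent(nn.Module):
    """Toy sketch: at each timestep, combine external stimulus features with an
    internal recurrent state and decide whether to keep listening or start speaking."""

    def __init__(self, stimulus_dim=128, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRUCell(stimulus_dim, hidden_dim)  # internal hidden state
        self.speak_head = nn.Linear(hidden_dim, 2)        # logits: [listen, speak]

    def forward(self, stimulus, hidden):
        hidden = self.rnn(stimulus, hidden)               # update internal state from new stimulus
        decision_logits = self.speak_head(hidden)         # decide based on the updated state
        return decision_logits, hidden

# usage: step through a stream of input features one timestep at a time
agent = TurnTakingAgent()
hidden = torch.zeros(1, 256)
for _ in range(10):
    stimulus = torch.randn(1, 128)                        # stand-in for real audio/text features
    logits, hidden = agent(stimulus, hidden)
    if logits.argmax(dim=-1).item() == 1:
        pass  # hand control to a speech/LLM generation module here
```

The decision head itself is tiny; the hard part would be training it on realistic turn-taking data, not building it.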
turnip_burrito t1_ja564o3 wrote
Reply to comment by Baetallo in Can we discuss idiocy of Deepmind’s decision to develop an AI to play a board game with limited degrees of freedom when compared to OpenAi’s decision to develop an ai to play a video game with nigh infinite degrees of freedom? by [deleted]
It's still not idiotic.
DeepMind's decision makes sense. Even in light of other teams making different decisions.
turnip_burrito t1_ja55xpa wrote
Reply to comment by Baetallo in Can we discuss idiocy of Deepmind’s decision to develop an AI to play a board game with limited degrees of freedom when compared to OpenAi’s decision to develop an ai to play a video game with nigh infinite degrees of freedom? by [deleted]
Do you realize that algorithmically, it is much easier to test approaches on finite state games and later scale up to games with infinite states?
turnip_burrito t1_ja55i8y wrote
Reply to Can we discuss idiocy of Deepmind’s decision to develop an AI to play a board game with limited degrees of freedom when compared to OpenAi’s decision to develop an ai to play a video game with nigh infinite degrees of freedom? by [deleted]
"Idiocy"
Okay buddy, let's see you advance the field of AI in a smarter way.
turnip_burrito t1_ja4urgn wrote
Reply to comment by Intrepid_Meringue_93 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
A modification: I think the optimal future is one where all our personal AI are kept within some bounds by the programming of a superior, autonomous, human-aligned ASI. Not sure what the bounds are though. It can figure that out by discussions with us.
turnip_burrito t1_ja4s1il wrote
Reply to comment by Nukemouse in AI technology level within 5 years by medicalheads
Yes it was the election, and certainly not anything else that happened that year.
Clearly
turnip_burrito t1_ja2q7t6 wrote
Reply to comment by DizzyNobody in Raising AGIs - Human exposure by Lesterpaintstheworld
That's also interesting. It's like building a specialized "wariness" or "discernment" layer into the agent.
This really makes one wonder which kinds of pre-main and post-main processes (like other LLMs) would be useful to have.
turnip_burrito t1_ja2ngmw wrote
Reply to comment by Lesterpaintstheworld in Raising AGIs - Human exposure by Lesterpaintstheworld
That's good.
Maybe also in the future, for an extra layer of safety, when you chain several LLMs together, you can use separate LLM "judges". The judges can have their memory refreshed every time you interact with the main one, and can screen the main LLM for unwanted behavior. They can do this by taking the main LLM's tentative output string as their own input, and using that to stop the main LLM from misbehaving.
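Roughly, the loop could look like this toy sketch (the function names, prompt wording, and `main_llm`/`judge_llm` callables are all hypothetical stand-ins, not any real API):

```python
# Toy sketch of the "judge" idea: a separate model screens the main LLM's
# tentative output before it is shown to the user.

def generate_tentative_reply(main_llm, user_message: str) -> str:
    return main_llm(user_message)  # main model drafts a reply

def judge_reply(judge_llm, tentative_reply: str) -> bool:
    # the judge gets a fresh context every call, so its memory is "refreshed"
    verdict = judge_llm(
        "Does the following reply violate the behavior policy? Answer YES or NO.\n\n"
        + tentative_reply
    )
    return verdict.strip().upper().startswith("NO")

def respond(main_llm, judge_llm, user_message: str) -> str:
    reply = generate_tentative_reply(main_llm, user_message)
    if judge_reply(judge_llm, reply):
        return reply
    return "[reply withheld: flagged by judge model]"
```

Because the judge only ever sees one tentative output at a time, the main LLM can't easily "talk it around" over a long conversation.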
turnip_burrito t1_jac1qvw wrote
Reply to comment by PoliticallyCorrect- in How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
That sounds sensible, or at least it might bring costs down, but it's just kind of nuts.