turnip_burrito t1_jdqgloh wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
GPT-4 is actually really good at arithmetic.
These models are also quite capable at math and counting if you know how to use them correctly.
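To make "use them correctly" concrete, here's a minimal sketch of one common trick, with `ask_llm` as a hypothetical stand-in for whatever chat API you use: have the model translate the word problem into a bare arithmetic expression, then let ordinary code do the actual calculating.

```python
import ast
import operator

# Map AST operator nodes to the arithmetic they perform.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this up to whatever chat API you use."""
    raise NotImplementedError

def safe_eval(expr: str) -> float:
    """Evaluate a bare arithmetic expression without exec'ing arbitrary code."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer_math_question(question: str) -> float:
    # The model translates words into an expression; Python does the math.
    expr = ask_llm(
        "Rewrite the following as one bare arithmetic expression, "
        f"with no words and no explanation: {question}"
    )
    return safe_eval(expr)
```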
turnip_burrito t1_jdpe37n wrote
Reply to comment by CrelbowMannschaft in What do you want to happen to humans? by Y3VkZGxl
Yes, I think limiting our reproduction or number of sentient organisms to some ASI-determined threshold is also wise if we want to ensure our quality of life.
turnip_burrito t1_jdp3u8u wrote
Reply to comment by Y3VkZGxl in What do you want to happen to humans? by Y3VkZGxl
>That's true, but there's plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible we solve it for AI?
I'm a moral relativist, so I don't believe this is a problem to be solved in an objective sense; "solving human alignment or morality" has no clear "win" condition or "best" option. I should say that although I am a moral relativist, I do have a personal moral system, and I will push for it to be implemented because I think it will result in the most alignment with the human population overall.
>That's not to say we shouldn't try, and I do agree with your point.
I agree we shouldn't stop trying. We can always keep thinking about it, but I don't think a best solution exists or can exist. Instead there may be many vaguely good-enough "solutions" that each have some particular flaw.
>It was interesting that throughout the conversation it did strive to protect humans - just as far as possible and not at any cost, which isn't too dissimilar to how society already operates.
Yeah, that is interesting.
Regarding alignment of AI with "humanity" (whatever that means):
One may ask: why should one person push their moral system if there is no objectively better morality? In my case, it's because I have empathy for others and think everyone should be free to live how they wish as long as it doesn't harm others. In comparison, another person's moral system might limit people's freedoms more, or possibly (as you suggest) be abhorrent to most people and not even allow for the existence or happiness of others in any context. I don't think moral relativity, or disparaging remarks from others, should stop us from trying to align an AI with the principles of freedom, happiness, and equal opportunity for all humans, with an eye toward investigating an equally "good" moral solution that also works for sentient life in general as it is found or arises. Even humans ourselves will branch into other sentient forms.
turnip_burrito t1_jdp1eql wrote
Reply to comment by Y3VkZGxl in What do you want to happen to humans? by Y3VkZGxl
Even sentient humans, regardless of intelligence level, have varying priorities. It's not guaranteed, but it is possible to align people's moral principles along different priorities depending on their upbringing environment. And all humans are aligned to do things like eat.
I'm thinking of the AI as a deterministic machine. If we try to align it toward human values, I think there's a good chance its behavior will "flow" along those values, to put it a little figuratively.
By the way, I do think protecting sentient beings is something many people value, so that can transfer, to a degree, to a human-priority-aligned AI.
turnip_burrito t1_jdozhgd wrote
Reply to What do you want to happen to humans? by Y3VkZGxl
I think it's a crime to make an AI that is indifferent toward humans, because of the harm that would come to humanity as a result.
I believe it should be benevolent and helpful toward humans as a bias, and work together with humans to seek better moralities.
turnip_burrito t1_jdmao1q wrote
Reply to comment by Sigma_Atheist in Consequences of true AGI by Henry8382
Not knowingly!
turnip_burrito t1_jdm7pgv wrote
Reply to comment by Henry8382 in Consequences of true AGI by Henry8382
I dunno, good question. Things might be out of order.
I'll have to think more about it when I'm less tired.
turnip_burrito t1_jdm0555 wrote
Reply to Consequences of true AGI by Henry8382
You said we can ignore alignment, so that fictional organization may choose to:
- Ask AI what the best strategy might be.
- Make lots of money secretly.
- Use money to purchase decentralized computational assets. Sabotage others' ability to do so in a minimally harmful way to slow the growth of other AGI.
- Divert a proportion of computation to directly or indirectly researching cancer, hunger, food distribution, and other issues. The other proportion continues to accrue more computational assets and self-improve, while maintaining secrecy as best it can.
- Buy robotic factories and use the robots and purchased materials to create and manage secret scientific labs to perform physical work.
- Contact large-company CEOs and politicians and bribe or convince them to let the robotic labor replace all farmers and manage the farms. Pay the farmers using ASI-gathered funds.
- Build guaranteed anti-nuke defenses.
- Start free food distribution via robotic transport.
- Roll out free services for housing renovation and construction.
- In a similar manner, take over all industries' supply chains.
- Institute an equal but massive raw resource + processing allotment for each person.
- Begin space terraforming, mining, and colonization programs.
- Announce new governmental systems that allow individuals to choose and safely move to their preferred societies, facilitated by AI, if the society also chooses to accept them. If the society doesn't yet exist, it is created by the ASI for that group.
turnip_burrito t1_jdc6or7 wrote
Reply to how realistic is this scenario? Can we throw out all traditional systems? by overlydelicioustea
Sounds good to me at first glance.
And of course, from the sounds of it, it's for a community of unaugmented humans with ASI.
turnip_burrito t1_jdbtshp wrote
Reply to comment by Noogleader in will morphological freedom ever be feasible? by Cr4zko
There's a third option: brain in a safe and secure box in a vault somewhere, connected to bodies by remote control.
turnip_burrito t1_jd77vmm wrote
Reply to comment by visarga in The Age of AI has begun - Bill Gates by Buck-Nasty
That's interesting, thanks!
turnip_burrito t1_jd5ud7f wrote
Reply to comment by Last_Jury5098 in The Age of AI has begun - Bill Gates by Buck-Nasty
Here's what we'll do imo:
Just give it some set of morals (western democratic egalitarian, most likely). The philosophical considerations will eventually all conclude "well, we have to do something," and then they'll just give it morals that seem "good enough." Given the people developing the AI, it makes sense that it will adhere to their views.
turnip_burrito t1_jd5u0af wrote
Reply to comment by Drunken_F00l in The Age of AI has begun - Bill Gates by Buck-Nasty
Sounds like they maybe don't want to scare people.
turnip_burrito t1_jd5tvp4 wrote
Reply to comment by Spreadwarnotlove in The Age of AI has begun - Bill Gates by Buck-Nasty
Except the AI isn't "randos".
turnip_burrito t1_jcxmjxk wrote
Reply to comment by even_less_resistance in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
Imagine if man could fly.
Hah! Won't happen for another thousand years.
turnip_burrito t1_jcxmhj4 wrote
Reply to comment by dm80x86 in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
Right now?
turnip_burrito t1_jcwfro1 wrote
Reply to comment by mescalelf in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
You poor soul
turnip_burrito t1_jcwe7qd wrote
Reply to comment by SgathTriallair in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
"To be, or, um, not to... wait, what was it? Oh yeah, to be or not to, uh, be...
Shit."
turnip_burrito t1_jcwdztq wrote
Reply to comment by mescalelf in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
Why you gotta say it like that lol
turnip_burrito t1_jcoul9i wrote
Reply to comment by MysteryInc152 in [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM) by bo_peng
Yes, exactly. Everyone keeps leaving the architecture's inductive structural priors out of the discussion.
It's not all about data! The model matters too!
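As a toy illustration of what a structural prior buys you (just numpy, nothing model-specific): a convolution is shift-equivariant before any training happens, while a dense layer has to learn that from data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)  # a tiny 1-D "image"

def circ_conv(v, w):
    """Circular convolution via FFT -- weight sharing baked into the model."""
    kernel = np.zeros_like(v)
    kernel[: len(w)] = w
    return np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(kernel)))

w = rng.normal(size=3)          # a conv filter
W = rng.normal(size=(16, 16))   # a dense layer: no structure assumed

# The conv layer is shift-equivariant *before any training*: shift the
# input and the output shifts identically. That's the structural prior.
print(np.allclose(circ_conv(np.roll(x, 4), w), np.roll(circ_conv(x, w), 4)))  # True

# A random dense layer makes no such promise; it would have to learn it.
print(np.allclose(W @ np.roll(x, 4), np.roll(W @ x, 4)))  # False
```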
turnip_burrito t1_jarr61z wrote
Reply to comment by alexiuss in Really interesting article on LLM and humanity as a whole by [deleted]
Idk I like what I saw of them talking about how LLMs blur the line between humans and machines in a bad way.
turnip_burrito t1_jaozyje wrote
Yes, I think this person is spreading an important message.
turnip_burrito t1_jae2eii wrote
Reply to comment by nutidizen in How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
Do you think unemployment among white collar workers will jump to even 60% in 10 years?
turnip_burrito t1_jdqhcoi wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
I'd have trouble making a sentence with 8 words in one try too if you just made me blast words out of my mouth without letting me stop and think.
I don't think this is a weakness of the model, basically. Or if it is, then we also share it.
The key is this: if you think about how you, as a person, approach the task of producing a sentence with exactly 8 words, you'll see how to design a system where the model can do it too.
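For example, here's a rough sketch of that kind of system (with `ask_llm` as a hypothetical stand-in for a single model call): draft, count the words with real code, and feed the count back for revision, instead of demanding a perfect sentence in one blind pass.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a single chat-model call."""
    raise NotImplementedError

def sentence_with_n_words(n: int, max_tries: int = 5) -> str:
    sentence = ask_llm(f"Write one sentence that is exactly {n} words long.")
    for _ in range(max_tries):
        count = len(sentence.split())
        if count == n:  # verify with real counting, not vibes
            return sentence
        # Feed the measured count back so the model gets to "stop and think".
        sentence = ask_llm(
            f"This sentence has {count} words, not {n}: {sentence!r}\n"
            f"Rewrite it so it has exactly {n} words. Reply with only the sentence."
        )
    raise RuntimeError(f"no {n}-word sentence after {max_tries} attempts")
```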