Baturinsky
Baturinsky t1_j8ivn54 wrote
Reply to comment by SoylentRox in Altman vs. Yudkowsky outlook by kdun19ham
Is 1 person dying more important than 1000...many zeroes..000 persons not being born, because humanity is completely destroyed and future generations from now until the end of space and time will never be born?
Baturinsky t1_j83ewig wrote
Reply to comment by rretaemer1 in Open source AI by rretaemer1
There are also open-source trained LLM checkpoints, such as GPT-Neo (https://huggingface.co/docs/transformers/model_doc/gpt_neo) or BLOOM (https://huggingface.co/bigscience/bloom).
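For example, GPT-Neo takes only a few lines to run via the transformers library (a rough, untested sketch; the 1.3B model id is just one of the sizes on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "EleutherAI/gpt-neo-1.3B" is one of several GPT-Neo sizes available
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("Open source AI means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```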
Baturinsky t1_j7txxpm wrote
Reply to comment by dyingbreedxoxo in Can't we just control the development of AI? by [deleted]
The problem is, AI has a much lower barrier to entry, and the potential for much higher returns (in money and power), than cloning or even nuclear energy/weapons.
Even now people can run Stable Diffusion or the simpler language models on home computers with an RTX 2060. It's quite likely that AI will be optimised enough that eventually even AGI can be run on gaming GPUs.
Baturinsky t1_j7ttnra wrote
You are right, but it's quite hard to implement.
There is a whole field of research, called AI Alignment Theory, which is TRYING to figure out how to make AGI without destroying humanity.
There is https://www.reddit.com/r/ControlProblem/ subreddit about it
It's half-dead, and the admins there are quite unfriendly to noobs posting (and I suspect those two things are somehow related), but it has good introductory info on its sidebar.
There is also https://www.lesswrong.com/tag/ai with a lot of articles on the matter.
Baturinsky t1_j7p2ih7 wrote
Reply to AI Progress of February Week 1 (1-7 Feb) by Pro_RazE
Could you please do this as a text with references?
Baturinsky t1_j6nnq4k wrote
Reply to comment by im-so-stupid-lol in Prompt engineering by im-so-stupid-lol
I dunno. But if you have at least a GTX 1060 you can install and run Stable Diffusion locally.
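Something like this should fit on a 6GB card (a rough, untested sketch using the diffusers library; half precision and attention slicing are what keep VRAM use down):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # the standard SD 1.5 checkpoint
    torch_dtype=torch.float16,         # half precision to fit in ~6GB VRAM
).to("cuda")
pipe.enable_attention_slicing()  # trades some speed for lower VRAM use

image = pipe("a lighthouse in a storm, oil painting",
             num_inference_steps=30).images[0]
image.save("lighthouse.png")
```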
Baturinsky t1_j6mmb1s wrote
Reply to Prompt engineering by im-so-stupid-lol
You have to use all that "weird garbage" in the prompt for the default Stable Diffusion checkpoints, because they have all kinds of junk jammed into the training set, so you have to sort it out.
Many third-party checkpoints, such as NovelAI, AnythingV3 or PFG, work well with short prompts.
Example: https://www.reddit.com/r/StableDiffusion/comments/zzdxug/pfg_model_syd_mead/ with prompt being just "drama movie by Syd Mead, female royal guard, detailed face"
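In code, switching to a third-party checkpoint is just a matter of changing the model id (another rough sketch; the AnythingV3 repo id below is my guess at the Hub name, so substitute whichever checkpoint you actually downloaded):

```python
import torch
from diffusers import StableDiffusionPipeline

# repo id is a guess at the Hub name for AnythingV3 -- point this at
# whichever third-party checkpoint you actually use
pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3.0", torch_dtype=torch.float16
).to("cuda")

# a short prompt is enough with these checkpoints
image = pipe("drama movie by Syd Mead, female royal guard, detailed face").images[0]
image.save("royal_guard.png")
```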
Baturinsky t1_j6j9bdz wrote
Test subject
Baturinsky t1_j6ae2hi wrote
Reply to Has anybody read the webcomic Seed? by Diacred
Thanks for the link! An extremely topical comic, and it looks like it was written by someone familiar with the issue.
Baturinsky t1_j64pm2e wrote
Reply to comment by BassoeG in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by lack of technology, the more it has to be limited by other means. And the Singularity is gonna increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.
Baturinsky t1_j63p6u8 wrote
Reply to Asking here and not on an artist subreddit because you guys are non-artists who love AI and I don't want to get coddled. Genuinely, is there any point in continuing to make art when everything artists could ever do will be fundamentally replaceable in a few years? by [deleted]
People are still interested in sports, theater and such.
Baturinsky t1_j63ov2n wrote
Reply to comment by GayHitIer in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
Only if you are sure that ASI would grant you a swift death.
Also, one person dying leaves others to live on. If that person left some legacy/memory/children etc., they live on through those people too. Lights off for everyone means lights off for everyone.
Baturinsky t1_j63ohlx wrote
Reply to comment by gaudiocomplex in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
The only way for Humanity to survive the Singularity (i.e. stay alive and in charge of our future) is to become Aligned with itself. I.e. to make ourselves responsible and cooperative enough that no human who can create and unleash an Unaligned ASI would do that. That means reducing the number of people who can do it, and/or making them responsible enough that they would not actually do it.
The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI to solve it for us.
My opinion is that it can and should be done without making an ASI first. That approach is 1. a task we can start working on today, and 2. something that would push back the ETA of DOOM even if we only solve it partially.
Baturinsky t1_j5wh1eg wrote
Reply to comment by CandyCoatedHrtShapes in Humanity May Reach Singularity Within Just 7 Years, Trend Shows by Shelfrock77
A real doomer sees that scenario as a win, as it assumes people will still be alive and in power.
Baturinsky t1_j5nwu4s wrote
Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove
I believe a capability like that could be key to our survival. It is required for our Alignment as humanity, i.e. our being able to act together in the interest of Humanity as a whole. The direst political lies are usually aimed at splitting people apart and making them fear each other, because people in that state are easier to control and manipulate.
Also, this ability could be necessary for strong AI to be possible at all, as a strong AI should be able to reason successfully from partially unreliable information.
And lastly, this ability will be necessary for AIs to check each other's reasoning.
Baturinsky t1_j5n2dnx wrote
Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove
Do you really expect ChatGPT to go against the US disinformation machine? Do you think it will be able to give a balanced report on controversial issues, taking into account the credibility and affiliation of sources and the quality of their reasoning (such as NOT accepting "proofs" based on "alleged" and "highly likely")? Do you think it will honestly present the points of view of countries and sources not affiliated with or bought by the USA and/or the Dem or Rep party? Do you think it will let users define the criteria for credibility themselves, and give info based on those criteria rather than pushing the "only truth"?
Because if it won't, and AI is used as a way for the powers that be to brainwash the masses rather than as a tool for the masses to resist brainwashing, then we'll have a very gullible population and a very dishonest AI by the time it matters most.
P.S. And yes, if/when China or Russia makes something like ChatGPT, it will probably push their governments' agendas just like ChatGPT pushes the US agenda. But is there hope for an impartial AI?
Baturinsky t1_j5ma4a3 wrote
Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove
If we don't have a robust safety system that works across companies and across states by that time, I don't see how we will survive it.
Baturinsky t1_j5kpoth wrote
Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove
How do you plan to make it not kill everyone, whether through a mistake in alignment or through someone intentionally running a misaligned AGI? I don't see how it can be done without extreme safety measures, such as many AIs and people keeping an eye on every AI and every human at all times.
Baturinsky t1_j5k0xxz wrote
Reply to comment by User1539 in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
Yes, but I hope it can be addressed:
https://www.reddit.com/r/ControlProblem/comments/109xs2a/ai_alignment_problem_may_be_just_a_subcase_of_the/
Baturinsky t1_j5jl5nt wrote
Reply to comment by User1539 in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
I agree, a human + AI working together is already an AGI, with the only limit being that the human part is unscalable. And it can be extremely dangerous if the AI part is very powerful and both are not aligned with fundamental human values.
Baturinsky t1_j5iq32y wrote
Reply to comment by User1539 in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
And how safe is it to put those tools into the hands of, among others, criminals and terrorists?
Baturinsky t1_j8iw7y7 wrote
Reply to comment by FusionRocketsPlease in Altman vs. Yudkowsky outlook by kdun19ham
We assume that at least one AGI will be an agent. And that may be enough for it to go gray goo.