Baturinsky

Baturinsky t1_j8ivn54 wrote

Is 1 person dying more important than 1000...many zeroes...000 people never being born, because humanity is completely destroyed and all future generations from now until the end of space and time will never exist?

1

Baturinsky t1_j7txxpm wrote

The problem is, AI has a much lower entrance barrier and a potential for much higher returns (in money and power) than cloning, or even nuclear energy/weapons.

Even now, people can run Stable Diffusion or the simpler language models on home computers with an RTX 2060. It's quite likely that AI will be optimised enough that eventually even AGI will be possible to run on gaming GPUs.
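As a rough illustration of why consumer cards can be enough, here is a back-of-the-envelope VRAM check. The parameter counts and the 1.5x overhead factor below are my own ballpark assumptions, not measured figures:

```python
# Rough check of whether a model's weights fit in a consumer GPU's VRAM.
# Parameter counts and the overhead factor are ballpark assumptions.

def fits_in_vram(n_params: float, bytes_per_param: int, vram_gb: float,
                 overhead: float = 1.5) -> bool:
    """True if the weights (plus a rough activation/overhead margin) fit."""
    needed_gb = n_params * bytes_per_param * overhead / 1e9
    return needed_gb <= vram_gb

RTX2060_VRAM_GB = 6  # the card mentioned above

# Stable Diffusion v1.x is on the order of 1e9 parameters in total;
# at fp16 (2 bytes per parameter) it fits with room to spare.
print(fits_in_vram(1e9, 2, RTX2060_VRAM_GB))  # image model at fp16
print(fits_in_vram(7e9, 2, RTX2060_VRAM_GB))  # a 7B language model at fp16
```

The same arithmetic shows why larger language models need quantization (1 byte per parameter or less) before they fit on cards like this.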

3

Baturinsky t1_j7ttnra wrote

You are right, but it's quite hard to implement.

There is a whole field, called AI Alignment Theory, which is TRYING to figure out how to make AGI without destroying humanity.

There is https://www.reddit.com/r/ControlProblem/ subreddit about it

It's half-dead, and the admins there are quite unfriendly to noobs posting (and I suspect those two things are related), but it has good introductory info in its sidebar.

There is also https://www.lesswrong.com/tag/ai with a lot of articles on the matter.

2

Baturinsky t1_j6mmb1s wrote

You have to use all that "weird garbage" in the prompt with the default Stable Diffusion checkpoints, because they have all kinds of junk jammed into the training set, so you have to sort it out.

Many third-party checkpoints, such as NovelAI, AnythingV3 or PFG, work well with short prompts.

Example: https://www.reddit.com/r/StableDiffusion/comments/zzdxug/pfg_model_syd_mead/ with the prompt being just "drama movie by Syd Mead, female royal guard, detailed face"

1


Baturinsky t1_j64pm2e wrote

Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by the lack of technology, the more it has to be limited by other means. And the Singularity is gonna increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.

3

Baturinsky t1_j63ohlx wrote

The only way for humanity to survive the Singularity (i.e. stay alive and in charge of our future) is to become Aligned with itself. I.e. to make us responsible and cooperative enough that no human who could create and unleash an Unaligned ASI would actually do it. By reducing the number of people who can do that, and/or by making them responsible enough that they would not.

The LessWrong crowd assumes this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI that would solve it for us.

My opinion is that it can and should be done without making an ASI first. It is 1. a task we can start working on today, and 2. something that would push back the ETA of DOOM even if we only solve it partially.

5

Baturinsky t1_j5p9nas wrote

Reply to AI + VR by CogGear

I want a VR set that would emulate living in an (accurate enough) recreation of any historical period and place.

7

Baturinsky t1_j5nwu4s wrote

Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove

I believe a capability like that could be key to our survival. It is required for our Alignment as humanity, i.e. for us being able to act together in the interest of humanity as a whole. The direst political lies are usually aimed at splitting people apart and making them fear each other, since people in that state are easier to control and manipulate.
Also, this ability may be necessary for strong AI to be possible at all, as a strong AI should be able to reason successfully on partially unreliable information.
And lastly, this ability will be necessary for AIs to check each other's reasoning.

1

Baturinsky t1_j5n2dnx wrote

Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove

Do you really expect ChatGPT to go against the US disinformation machine? Do you think it will be able to give a balanced report on controversial issues, taking into account the credibility and affiliation of sources and the quality of their reasoning (such as NOT accepting "proofs" based on "alleged" and "highly likely")? Do you think it will honestly present the points of view of countries and sources not affiliated with or bought by the USA and/or the Dem or Rep party? Do you think it will let users define the criteria for credibility themselves and give info based on those criteria, rather than push the "only truth"?

Because if it won't, and AI is used as a way for the powers to brainwash the masses instead of as a power for the masses to resist brainwashing, then we'll have a very gullible population and a very dishonest AI by the time it matters the most.

P.S. And yes, if/when China or Russia makes something like ChatGPT, it will probably push their governments' agendas just like ChatGPT pushes the US agenda. But is there a hope for an impartial AI?

1

Baturinsky t1_j5kpoth wrote

Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove

How do you plan to make it not kill everyone, whether through a mistake in alignment or through someone intentionally running a misaligned AGI? I don't see how that can be done without extreme safety measures, such as many AIs and people keeping an eye on every AI and every human at all times.

3