Baturinsky

OP t1_j3ch80z wrote

I'm not qualified enough to figure out how drastic the measures would have to be.

It could range from countries realising they face a huge common crisis that they will only survive if they set aside their squabbles and work together.

To using the AI itself to analyse and prevent its own threats.

To classifying all trained general-purpose models at the scale of ChatGPT and above, and preventing new ones from being made (I see entire-internet-packed models as the biggest threat right now, if they can be used without safeguards).

And all the way up to forcibly reverting all publicly available computing and communication technology to the level of 20 or 30 years ago, until we figure out how to use it safely.

0

OP t1_j3bh9kb wrote

Thanks.

I think people vastly underestimate the possible uses of a ChatGPT-like model. If it has learned from the entire(-ish) scraped internet, it's not just a language model, it's a model of all human knowledge available on the internet, neatly documented and cross-referenced for very easy use by algorithms. Currently it's queried by quite simple algorithms, but what if an algorithm tried to use that data to rewrite itself, btw? Or something else we don't foresee yet.
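
To show what I mean by "easy use by algorithms", here's a minimal hypothetical sketch - a trivial loop that keeps asking the model for its own next step. `query_model` and `autonomous_loop` are made-up stand-ins for illustration, not any real API:

```python
# Hypothetical sketch: an ordinary program treating a scraped-internet
# model as a queryable knowledge base and letting it choose its own next step.
# query_model() is a stand-in stub, not a real service.

def query_model(prompt: str) -> str:
    """Stub for a general-purpose model trained on a scrape of the internet."""
    return "..."  # a real model would return a free-text answer here

def autonomous_loop(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # The "algorithm" is trivial; all the knowledge lives in the model.
        step = query_model(
            f"Goal: {goal}\nSteps done so far: {history}\n"
            "What is the single next step? Answer briefly."
        )
        history.append(step)
        # A less harmless version would *execute* the step here
        # (run code, call other models, operate equipment)
        # instead of just recording it.
    return history
```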

And I don't even know how the danger could be contained now, as the algorithm for "pickling" the internet like that is already widely known, so it could be repeated by anyone with a budget and internet access. So one of the necessary measures could be switching off the internet...

1

OP t1_j39irdy wrote

I'm a programmer myself. Actually, I'm writing an AI for a bot in a game right now, without ML, of course. And it's quite good at killing human players, btw, even though the algorithm is quite simple.
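
Just to illustrate how simple "quite simple" can be, here's a rough sketch of that kind of bot logic - the game, the entity fields, and the actions are all made up for illustration, not my actual bot code:

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of a "dumb" non-ML combat bot, just to show
# how little code it takes. Everything here is invented for the example.

@dataclass
class Entity:
    x: float
    y: float
    health: float

def decide(bot: Entity, players: list[Entity]) -> str:
    """A handful of hand-written rules, no machine learning anywhere."""
    if not players:
        return "patrol"
    # Pick the nearest player as the target.
    target = min(players, key=lambda p: math.hypot(p.x - bot.x, p.y - bot.y))
    dist = math.hypot(target.x - bot.x, target.y - bot.y)
    if bot.health < 20:
        return "retreat"  # self-preservation beats aggression
    if dist < 15:
        return "shoot"    # in range: attack
    return "chase"        # otherwise close the distance
```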

So tell me, please, why can't AI become really dangerous really soon?
By itself, a network like ChatGPT is relatively harmless. It's not that smart, and it can't do anything in the real world directly; it just tells something to a human.

But corpos and countries funnel tons of money into the field. Models are learning different things and algorithms are improving, so they will know much more soon, including how to move and operate things in the real world. Then what stops somebody from connecting some models together and sticking them into a robot arm, which makes and installs more robot arms and war drones, which seek out and kill humans? Either a specific kind of human, or humans in general, depending on that "somebody"'s purpose.

−6

OP t1_j37bbwe wrote

Imagine the following scenario. Alice has an advanced AI model at home and asks it, "find me the best way to do a certain bad thing and get away with it", such as harming or even murdering someone. If it's a model like ChatGPT, it will probably have been trained to refuse such questions.

But if models are not regulated, she can find a warez model without morals, or retrain the morals out of it, or pretend that she is a police officer who needs that data to solve a case. Then the model gives her a usable method.

Now imagine if she asks for a method to do something way more drastic.

−1

OP t1_j372otw wrote

I don't know. It would require serious measures and cooperation between countries, and I don't think the world is ready for that yet.

But I'd say: classifying the research and the trained models, and limiting access to and the functionality of equipment that can be used for AI training.

Especially the more general-purpose models, like the ones that can program.

−5

OP t1_j36yncl wrote

Yes, restricting it in just one country is pointless, which is why major countries should work on this together, like they do on limiting the spread of nukes.

Biotech, nanomaterials, chip research, etc. could require regulation too, though I don't see them as being as unpredictable as ML is right now.

And I don't suggest banning AI research - just limiting and regulating its development and the spread of algorithms and equipment, so it's less likely to get into the hands of underground illegal communities.

−12