Baturinsky OP t1_j3c6wv6 wrote
Reply to comment by KerbalsFTW in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I hate it, but I see no alternative that is safe enough.
Baturinsky OP t1_j3bx0gj wrote
Reply to comment by bitemenow999 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I understand the sentiment, but I think it's irresponsible. The possible consequences of AI misuse are far worse than those of any research before it. That's not a reason to stop it, but it is a reason to treat it with extreme care.
Baturinsky OP t1_j3bwbno wrote
Reply to comment by NovelspaceOnly in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I'd say absolutely yes for just about any field, except AI.
Yes, it's unfair, but I would rather depend on the goodwill of people than on the goodwill of machines.
Baturinsky OP t1_j3bh9kb wrote
Reply to comment by LanchestersLaw in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Thanks.
I think people vastly underestimate the possible uses of a ChatGPT-like model. If it has learned from the entire(-ish) scraped internet, it's not just a language model, it's a model of all the human knowledge available on the internet, neatly documented and cross-referenced for very easy use by algorithms. Currently it's queried by quite simple algorithms, but what if future algorithms use that data to rewrite themselves? Or do something else we don't foresee yet?
And I don't even know how the danger could be contained now, as the algorithm for "pickling" the internet like that is already widely known, so anyone with a budget and internet access could do it. So one of the necessary measures might be switching off the internet...
Baturinsky OP t1_j39irdy wrote
Reply to comment by [deleted] in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I'm a programmer myself. In fact, I'm writing an AI for a game bot right now, without any ML, of course. And it's quite good at killing human players, even though the algorithm is quite simple.
So tell me, please, why can't AI become really dangerous really soon?
By itself, a network like ChatGPT is relatively harmless. It's not that smart, and it can't do anything in the real world directly. It just says things to a human.
But corporations and countries are funneling tons of money into the field. Models are learning different things and algorithms are improving, so they will soon know much more, including how to move and operate things in the real world. Then what stops somebody from connecting some models together and sticking them into a robot arm, which makes and installs more robot arms and war drones, which then seek out and kill humans? Either a specific kind of human, or humans in general, depending on that somebody's purpose.
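As an aside, a simple non-ML game bot like the one mentioned above can be surprisingly effective with just a couple of rules. This is a minimal hypothetical sketch (the entity types and thresholds are my own illustration, not the actual bot): pick the nearest player and attack if it is in range, otherwise move toward it.

```python
from dataclasses import dataclass
import math


@dataclass
class Entity:
    x: float
    y: float


def nearest_target(bot: Entity, players: list[Entity]) -> Entity:
    # Pick the closest player by Euclidean distance.
    return min(players, key=lambda p: math.hypot(p.x - bot.x, p.y - bot.y))


def decide_action(bot: Entity, players: list[Entity], attack_range: float = 5.0) -> str:
    # Attack if the nearest player is within range, otherwise close the distance.
    target = nearest_target(bot, players)
    if math.hypot(target.x - bot.x, target.y - bot.y) <= attack_range:
        return "attack"
    return "move"
```

Even a hand-written policy this crude can beat humans in a constrained game, which is the point: dangerous capability doesn't always require sophisticated learning.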
Baturinsky OP t1_j38z6vw wrote
Reply to comment by Cherubin0 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I won't argue about the existence of those camps. But China has definitely got ML training and development running at massive scale, very likely using technologies that leaked uncontrollably from the US. So containing the danger now requires the goodwill not just of the US, but of China too, and who knows who else.
Baturinsky OP t1_j38yn7r wrote
Reply to comment by PredictorX1 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I hate it too. But I don't see any other option that does not carry an existential threat.
Baturinsky OP t1_j38yeo8 wrote
Reply to comment by Cherubin0 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yes, but its uncontrolled spread means there are many more governments and corporations that can mass-censor the population with AI or build an army of kill bots.
Baturinsky OP t1_j38kn1z wrote
Reply to comment by PredictorX1 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
And yet, we haven't had a nuclear war so far.
Baturinsky OP t1_j384rvx wrote
Reply to comment by i_know_about_things in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yeah, knowing myself is one of the reasons I think AI is not safe in the hands of the general public :)
Baturinsky OP t1_j37rkj0 wrote
Reply to comment by EmbarrassedHelp in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yes, but that would require a lot of time and effort. An AI has already read it all and can devote the equivalent of millennia of human time to analyzing it.
Baturinsky OP t1_j37g36w wrote
Reply to comment by [deleted] in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
As far as I can see, whoever is doing it is not doing it very well. Be it an AI or a human.
Baturinsky OP t1_j37fvgz wrote
Reply to comment by Omycron83 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
The key words here are "right now".
Baturinsky OP t1_j37ej92 wrote
Reply to comment by anon_y_mousse_1067 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Ok, how would you suggest solving that issue then?
Baturinsky OP t1_j37bbwe wrote
Reply to comment by Omycron83 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Imagine the following scenario. Alice has an advanced AI model at home and asks it, "find me the best way to do a certain bad thing and get away with it". Say, harming or even murdering someone. If it's a model like ChatGPT, it will probably be trained to avoid answering such questions.
But if network models are not regulated, she can find a warez model with the morals stripped out, or retrain the morals out of it herself, or pretend she is a police officer who needs that data to solve a case. Then the model gives her a usable method.
Now imagine she asks for a method to do something far more drastic.
Baturinsky OP t1_j379whv wrote
Reply to comment by bitemenow999 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yes, it's kind of self-limited by the cost of training right now. But I think much more efficient training algorithms are inevitable, possibly by orders of magnitude, and they will probably be found with the help of ML, since AI can now be trained for programming and research too.
Baturinsky OP t1_j379g68 wrote
Reply to comment by Duke_De_Luke in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Nothing we have known so far has the danger potential of self-learning AI.
Even though it's still only potential.
And it's true that we should restrict only certain applications of it, but that could be a very wide list of applications, requiring very serious measures.
Baturinsky OP t1_j375886 wrote
Reply to comment by [deleted] in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yes, exactly. Which is why it's important not to put dangerous things into the hands of those who could misuse them with catastrophic consequences.
Baturinsky OP t1_j372otw wrote
Reply to comment by Cpt_shortypants in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I don't know. It would require serious measures and cooperation between countries, and I don't think the world is ready for that yet.
But I'd suggest classifying the research and the trained models, and limiting access to, and the functionality of, equipment that can be used for AI training.
Especially for the more general-purpose models, like the programming ones.
Baturinsky OP t1_j36yncl wrote
Reply to comment by KerbalsFTW in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yes, restricting it in just one country is pointless, which is why the major countries should work on this together, as with limiting the spread of nukes.
Biotech, nanomaterials, chip research, etc. may require regulation too, though I don't see them as being as unpredictable as ML is right now.
And I'm not suggesting banning AI research, just limiting and regulating its development and the spread of algorithms and equipment, so it's less likely to end up in the hands of illegal underground communities.
Submitted by Baturinsky t3_104u1ll in MachineLearning
Baturinsky t1_j36t9vl wrote
Reply to [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Yes, it can, because it can gather its own data.
Baturinsky OP t1_j3ch80z wrote
Reply to comment by PredictorX1 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
I'm not qualified enough to judge which measures would be drastic enough.
From countries realising they face a huge common crisis and can only survive it if they forget their squabbles and work together.
To using AI itself to analyse and prevent its own threats.
To classifying all trained general-purpose models at the scale of ChatGPT and above, and preventing new ones from being made (as I see entire-internet-packed models as the biggest threat right now, if they can be used without safeguards).
And up to forcibly reverting all publicly available computing and communication technology to the level of 20 or 30 years ago, until we figure out how to use it safely.