
Green-Future_ t1_j6f1kxn wrote

Honestly, the development of AI is starting to worry me... I think there is far too little legislation governing people who build AI models. Some sort of license and compliance training should be a requirement.

I feel like cancer research is always ongoing, but there seem to be so few major breakthroughs. As so many people are impacted by cancer, I would love to see more effective research in the field, both preventative and corrective.

I also value the importance of renewables, EVs, cheaper space transport, and less false propaganda (more truth!)...

This is a post well suited to r/OurGreenFuture

3

radicalceleryjuice t1_j6fitv4 wrote

AI? What could go wrong?

(I'm kidding)

(Edit to add something useful) It's crazy to me that this is happening and most people think the big issues are either "students will cheat" or "some people will lose their jobs".

But the problem is that any legislation will only stop the good actors. Private interests can just take development overseas, and nothing is going to stop militaries from weaponizing AI. So legislation will mostly constrain public institutions, while everybody else develops their models in secret or elsewhere. There's really no way to stop it, so a better plan is to adapt and move fast with benign systems.

I say all that not really being an expert!

3

Green-Future_ t1_j6jwtze wrote

Interesting take, thanks for sharing. Please could you explain what you mean by "benign systems"?

I agree that the issues most commonly publicised by MSM are minor compared to some of the more evil potential (ab)use cases.

1

radicalceleryjuice t1_j6lxoi5 wrote

By "benign systems" I mean ML models + how they're embedded to = cancel the apocalypse.

I don't think an AI system can be understood as benign or dangerous without analyzing how it is embedded within other systems.

1