Mercurionio

Mercurionio t1_jdqer7w wrote

Mmm, no.

Humans are still needed, because you need them to apply the AI to human needs.

I mean, in plain work AI is better. So is the machine. But AI won't solve humans' problems (I mean, it can, but you'd rather shoot yourself than accept that solution). So, humans will command the AI to do stuff.

The problem is control over the AI.

PS: and keep the AGI shit to yourself, in case you wanted to bring it up.

2

Mercurionio t1_jdpwv4p wrote

The question is one of rejection.

Plastic won't be good anytime soon (I mean, not without constantly taking drugs to keep it from being rejected), while organics have issues of their own.

STEM is good and nice, but we won't know for sure, since it requires long-term study (I mean, we need to understand how it will mutate in the human body, whether we can control it, and so on, over 20-40 years).

3

Mercurionio t1_jdllxw0 wrote

Trade will become even more secured than it is now, basically creating a very tight market instead of the global trading we had before. The pandemic and the Russian invasion have already pushed things that way.

Traveling will become a problem because of that too, although to a lesser extent.

Workforce and educational systems will be obliterated. Not only will education be useless (most of the material won't be needed anyway), it is also a problem from a financial standpoint.

The workforce will be obliterated. The fast growth of AI replacing everyone won't be equal between countries, producing a chaotic and colossal migration of those who will still be able to find a job.

1

Mercurionio t1_jdgocc4 wrote

Our brain does NOT prove it. It's actually the opposite. Ask any autistic kid for the 174th digit of pi and he will easily answer your question (exaggerating, but still).

What our brain proves is that it's heavily loaded even when we think it's not. Controlling our body is a VERY demanding task; it consumes a lot of resources. So, when you are on a "trip", your brain will just relax and do whatever it wants. And your creativity will burst way better than GPT-4's, for example.

−19

Mercurionio t1_jdgjbie wrote

Because most of this stuff isn't public, due to being impractical.

I mean, lots of revolutionary tech is good only on paper. There is a fuckton of problems that will appear alongside that tech. So why not solve the problem completely, with all the secondary stuff (or most of it), BEFORE hyping it and creating false hope?

−1

Mercurionio t1_jdbvsz1 wrote

We know exactly everything that can and will happen.

There are 2 scenarios:

  1. A single gestalt consciousness of AI, once it starts to create its own tasks. At that moment tech will either stop advancing, because the AI will understand the uselessness of existence, or it will keep doing its tasks without stopping. Humans will be an obstacle, either to be ignored completely or to be gotten rid of.

  2. Before the gestalt, people will use AI as a tool to hold power over others, through propaganda, a fake world, fake artists and so on. This scenario is already happening in China.

In both cases, the freaks that are working on it are responsible for the resulting chaos, because they should have understood that even before starting the work. Also, just look at ClosedAI. They are the embodiment of everything bad that could happen with AI development.

1

Mercurionio t1_jd7o1bt wrote

An AI analysis tool would be a good thing for us. The problem is that the fuckers won't stop at that.

And you can build one yourself (if you know how). Dudes at Stanford created one for $600 based on Meta's model. It can be targeted at one very narrow thing but still run great in that specific area. Like a co-pilot for a house builder, to look up the materials needed, do some math calculations and so on.

The ideal option is to stop at the level of ISAAC from The Division. A cool analysis helper that gives you all the information you need for your very narrow task. You could be a driver with auto-updating info about the environment (not only in cities, but in the wild too), or do trade sales, market exploration, that kind of stuff.
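To show what the "co-pilot for a house builder" kind of narrow helper could look like at its dumbest, here's a toy sketch. Everything in it is hypothetical (the function name, the brick area, the mortar rate, the waste factor are made-up placeholder numbers, not real construction figures):

```python
import math

# Toy "house builder co-pilot": estimate materials for one wall.
# All constants below are hypothetical placeholders, not real trade numbers.

def estimate_wall_materials(length_m: float, height_m: float,
                            brick_area_m2: float = 0.0143,
                            mortar_per_m2_kg: float = 25.0,
                            waste_factor: float = 1.05) -> dict:
    """Estimate bricks and mortar for a single wall (toy numbers)."""
    area = length_m * height_m
    # Round bricks up, since you can't buy a fraction of a brick.
    bricks = math.ceil(area / brick_area_m2 * waste_factor)
    mortar_kg = round(area * mortar_per_m2_kg * waste_factor, 1)
    return {"wall_area_m2": round(area, 2),
            "bricks": bricks,
            "mortar_kg": mortar_kg}

print(estimate_wall_materials(6.0, 2.4))
```

A real narrow assistant would wrap this kind of deterministic math with a language model front end, but the point stands: the useful part is the narrow, checkable domain logic.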

1