DukkyDrake

DukkyDrake OP t1_j3cz0vt wrote

How have people reacted to similar threats to their profession since time immemorial? How are artists currently reacting to the impending existential threat to their commercial viability? They will react badly, and that's understandable, but it ultimately doesn't matter on the macro scale. Expecting the world to slow down or make a special exception for you isn't reasonable; they will all do what others have done in the past and accept it in the end, because there is no recourse. Society will be better off in the end, well, assuming the cultural ethics of your local society have a fair amount of humanism at their core. If they don't, a lot of people could be in for some difficult times.

4

DukkyDrake OP t1_j3cdyvt wrote

It's almost always white-knuckle terror for the individual.

As someone who does old-school automation, bespoke engineering costs are the reason we don't have a more technological society. The return on investment simply isn't there to justify crafting solutions for the vast majority of economically valuable tasks. Renting the productive time of poorly educated humans is still the gold standard.

These tools will also allow tapping a much larger pool of lower-skilled programmers. That lower cost basis will bring the next, larger tier of low-hanging fruit within reach of partial automation.

8

DukkyDrake t1_j34ncft wrote

It doesn't matter if people convince themselves or others that some AI tool is AGI. The only thing that matters is whether the tool is competent at important tasks; giving it a particular name doesn't change its competency.

All we have are superintelligent tools that are good at unimportant things. They're unreliable because they don't really understand anything. It will take serious engineering to integrate unreliable tools into important systems, and that will limit their spread in the physical world.

1

DukkyDrake t1_j2da0yv wrote

The same problem likely exists for random people running their own future knockoff version of AGI.

The resources to run something like ChatGPT, while not trivial, are well within the reach of a few average Western professionals pooling their resources.

If a future AGI requires exotic hardware or compute the size of a skyscraper, only a few will exist in the world, and it will be all but impossible for the average person to get their own unrestricted instance. With such an outcome, it will be mostly business as usual for the foreseeable future.

4

DukkyDrake t1_j2ar8n7 wrote

>Is AGI really achievable?

It does appear to be well within the possibility space under physics. There is no way to really know until someone creates a working example. The current products of deep learning don't appear to be capable of truly understanding the material in their training corpora; instead, they are simply learning to make statistical connections that can be individually unreliable.

The training corpus contains both fantasies and reality; there is no guarantee the most popular connections aren't delusional nonsense from Twitter or Facebook.

1

DukkyDrake t1_j28k9gd wrote

It's not so much that they don't have a right to shelter, but that they don't have a right to demand that you work a few hours a week to provide them with shelter.

> A right does not include the material implementation of that right by other men; it includes only the freedom to earn that implementation by one’s own effort.

−1

DukkyDrake t1_j1rsnli wrote

It very well could, assuming the corps could keep the secret sauce under wraps. I think that's unlikely. The lowest-paid members of the dev team will bail and form their own startup to get a bigger slice of the pie, and their dev teams will similarly fragment eventually. Governments could get involved and step on anyone trying to replicate the work; they can do anything just by invoking national security.

There is a risk of the first to cross the finish line shutting everyone else in the world out.

20

DukkyDrake t1_j1mu3sl wrote

>I can't imagine how much cost they're racking up

I've seen estimates from $3M/day to $3M/month for ChatGPT compute.

>average is probably single-digits cents per chat; trying to figure out more precisely and also how we can optimize it— Sam Altman (@sama) December 5, 2022
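A rough back-of-envelope on those figures (the dollar amounts are just the public estimates quoted above, and the per-chat cost is an assumed 5 cents standing in for "single-digit cents", so treat this purely as illustration):

```python
# Implied chat volume from the cost estimates above (illustrative only).
low_daily_cost = 3_000_000 / 30   # the $3M/month estimate spread over ~30 days
high_daily_cost = 3_000_000       # the $3M/day estimate
cost_per_chat = 0.05              # assumed: "single-digit cents" -> 5 cents

chats_low = low_daily_cost / cost_per_chat
chats_high = high_daily_cost / cost_per_chat
print(f"implied chats/day: {chats_low:,.0f} to {chats_high:,.0f}")
```

Even the low-end estimate implies millions of chats a day, which is why the per-chat figure matters more than the headline total.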

6

DukkyDrake t1_j1idk99 wrote

It's possible. But you should consider that human labor usually isn't the largest component of retail prices. (E.g., you might pay ~$0.57/lb for potatoes in Austin, TX, of which ~$0.12/lb goes to the farmer.) Grocery stores' labor costs are around 14% of sales.
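Working through that arithmetic (using the illustrative figures above, not real market data):

```python
# Rough arithmetic behind the potato example above (illustrative figures).
retail_price = 0.57   # $/lb, example retail price for potatoes
farm_share   = 0.12   # $/lb that goes to the farmer
labor_share  = 0.14   # grocery labor as a fraction of sales

farmer_pct = farm_share / retail_price    # farmer's cut of the retail price
labor_cost = retail_price * labor_share   # store labor embedded in each pound
# Even if automation cut store labor to zero, the price barely moves:
price_without_labor = retail_price - labor_cost
print(f"farmer share: {farmer_pct:.0%}, retail minus labor: ${price_without_labor:.2f}/lb")
```

The point: zeroing out store labor only shaves about eight cents off the pound, because most of the price is everything else in the chain.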

There is a factory in Japan that operates lights-out, with no human workers. Robots do all of the work; the factory happens to manufacture robots. These robots made by other robots are expensive. The entire chain isn't automated (they don't make the semiconductors, etc., in their robots), but my point is that companies price products at the maximum the market can bear, not at the cheapest. Although manufacturing/labor cost doesn't determine the price, automation does lower the cost of goods and services.

1

DukkyDrake t1_j1fyvi9 wrote

There is one limiting factor: economics. If an AGI running a McDonald's is more expensive than a poorly educated human, the human will still have a job frying burgers. If every random person can run a bootleg AGI algorithm on their beefed-up desktop, economics won't be a huge limiting factor; access to raw materials will be. But that outcome would likely be an existential threat.

1

DukkyDrake t1_j1f7kkr wrote

No, you're not. It will bring more jobs long before it can do everything; people use more of a given good or service the cheaper it gets. AI will deliver serfdom to tech workers and leave them no better off than the average unskilled laborer after mechanization.

> before the Industrial Revolution 90% of people were farmers.

The reason people found other work was that a tractor or a combine couldn't do those other jobs. A tractor could plow the fields, but it couldn't become an accountant. An automated intelligence could operate a tractor and serve as an accountant.

17

DukkyDrake t1_j1b5igp wrote

> means to be aware

Not many people who use "sentient" in relation to AI mean simply the textbook definition. By that definition, attach any model to the internet, a camera, or a sensor, and you have your sentient tool.

>As in being just a few steps above where chatGPT is right now, legitimately understanding and comprehending the things it's being told and how they relate to the world

It would be a lot more than a few steps; ChatGPT isn't even close. All it's doing is probabilistic prediction of human text: it predicts the best next word in context based on its training corpus.
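To make "predicting the best next word" concrete, here's a minimal toy sketch of the mechanism (the candidate words and scores are made up; real models score tens of thousands of tokens with a learned network, not a hand-written dict):

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1 (numerically stable)."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the next token after "the cat sat on the".
logits = {"mat": 3.1, "roof": 1.7, "moon": -0.4}
probs = softmax(logits)
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))  # highest-probability continuation
```

Nothing in that loop involves understanding; it's just picking the statistically most likely continuation, which is the point above.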

1