DukkyDrake
DukkyDrake OP t1_j3cwfmw wrote
Reply to comment by tinyogre in A more realistic vision of the AI & Programmer's jobs story by DukkyDrake
No known existing AI tool really understands anything; this tool produces probabilistic text. That's why it won't do your work for you: it can't produce precise and dependable results. It will make you more productive as a programmer, but it's incapable of directly replacing what a programmer does.
DukkyDrake OP t1_j3cdyvt wrote
Reply to comment by just-a-dreamer- in A more realistic vision of the AI & Programmer's jobs story by DukkyDrake
It's almost always white-knuckle terror for the individual.
As someone who does old-school automation, I can tell you it's bespoke engineering costs that keep us from having a more technological society. The return on investment simply isn't there to justify crafting solutions for the vast majority of economically valuable tasks. Renting the productive time of poorly educated humans is still the gold standard.
These tools will allow tapping a much larger pool of lower-skilled programmers as well. That lower cost basis will bring into reach the next larger tier of low-hanging fruit to partially automate.
DukkyDrake OP t1_j3cam5e wrote
Reply to comment by just-a-dreamer- in A more realistic vision of the AI & Programmer's jobs story by DukkyDrake
The cheaper useful goods and services become, the more those goods and services are consumed.
Submitted by DukkyDrake t3_105qmfy in singularity
DukkyDrake t1_j34ncft wrote
Reply to comment by bernard_cernea in 2022 was the year AGI arrived (Just don't call it that) by sideways
It doesn't matter if people convince themselves or others that some AI tool is AGI. The only thing that matters is whether the tool is competent at important tasks; giving it a particular name doesn't change its competency.
All we have are superintelligent tools that are good at unimportant things. They're unreliable because they don't really understand anything. It will take serious engineering to integrate unreliable tools into important systems, and that will limit their spread in the physical world.
DukkyDrake t1_j2ehqbp wrote
Reply to Game Theory of UBI by shmoculus
>the general view here is that a UBI will become necessary and is likely to be implemented.
It's the general view in some circles.
The bad case may be more likely, but the ugly case isn't off the table for the USA.
>The Economics of Automation: What Does Our Machine Future Look Like?
DukkyDrake t1_j2da0yv wrote
Reply to There's now an open source alternative to ChatGPT, but good luck running it by SnoozeDoggyDog
The same problem likely exists for random people running their own future knockoff version of AGI.
The resources to run something like ChatGPT, while not trivial, are well within the reach of a few average Western professionals pooling their resources.
If a future AGI requires exotic hardware or compute the size of a skyscraper, only a few will exist in the world, and it will be all but impossible for the average person to get their own unrestricted instance. With such an outcome, it will be mostly business as usual for the foreseeable future.
DukkyDrake t1_j2aztlg wrote
>expecting this to remain readily available and free - while the costs for openai are "eye-watering"
The rest of that quote.
DukkyDrake t1_j2ar8n7 wrote
Reply to Is AGI really achievable? by Calm_Bonus_6464
>Is AGI really achievable?
It does appear to be well within the possibility space under physics. There is no way to really know until someone creates a working example. The current products of deep learning don't appear to be capable of truly understanding the material in their training corpora; instead they simply learn to make statistical connections that can be individually unreliable.
The training corpus contains both fantasies and reality; there is no guarantee the most popular connections aren't delusional nonsense from Twitter or FB.
DukkyDrake t1_j2905nm wrote
Reply to comment by fluffy_assassins in How are we feeling about a possible UBI? by theshadowturtle
What disgusts me is people who think they have a right to demand free labor from others.
DukkyDrake t1_j28otc7 wrote
Wait a few days and see; /u/kevinmise has been posting a popular end-of-year prediction thread for a few years now.
Here is last year's Singularity Predictions 2022
DukkyDrake t1_j28k9gd wrote
Reply to comment by fluffy_assassins in How are we feeling about a possible UBI? by theshadowturtle
It's not so much that they don't have a right to shelter, but more that they don't have a right to demand that you work a few hours a week to provide them with shelter.
> A right does not include the material implementation of that right by other men; it includes only the freedom to earn that implementation by one’s own effort.
DukkyDrake t1_j25qlwf wrote
Reply to comment by turnip_burrito in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
These models are good at probabilistic prediction of human text, but that isn't the same as truth. 100% accuracy at that task doesn't mean what you expect it to mean; it doesn't mean the results are necessarily coherent in any way.
These AI tools aren't true AI.
DukkyDrake t1_j1zrg9m wrote
Reply to comment by Current_Side_4024 in LifT Bioscience - Cure for Cancer by Homie4-2-0
A few studies suggest a universal cure for cancer would only increase life expectancy by ~3 years.
DukkyDrake t1_j1rsnli wrote
Reply to Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
It very well could, assuming the corps could keep the secret sauce under wraps. I think that's unlikely. The lowest-paid members of the dev team will bail and form their own startup to get a bigger slice of the pie; their dev teams will similarly fragment eventually. Government could get involved and step on anyone trying to replicate it; they can do almost anything just by invoking national security.
There is a risk that the first to cross the finish line shuts everyone else in the world out.
DukkyDrake t1_j1nj7a3 wrote
Reply to comment by turnip_burrito in Sam Altam revield capabilites of GPT 4. It'll be Enormous. by Ok_Criticism_1414
Yes. It's obvious a lot of those were aspirational statements unrelated to any existing models.
DukkyDrake t1_j1mu3sl wrote
Reply to comment by TouchCommercial5022 in Money Will Kill ChatGPT’s Magic by vernes1978
>I can't imagine how much cost they're racking up
I've seen estimates ranging from $3m/month to $3m/day for ChatGPT compute.
>average is probably single-digits cents per chat; trying to figure out more precisely and also how we can optimize it— Sam Altman (@sama) December 5, 2022
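A quick back-of-envelope check shows why both estimates are at least compatible with Altman's "single-digit cents per chat" figure. The traffic number below is a hypothetical assumption for illustration, not an OpenAI figure:

```python
# Rough sanity check on the compute-cost estimates quoted above.
# chats_per_day is an illustrative guess, NOT a published OpenAI number.
monthly_cost_low = 3_000_000        # $3m/month (low end of the estimates)
monthly_cost_high = 3_000_000 * 30  # $3m/day, expressed per month (high end)

chats_per_day = 10_000_000          # hypothetical traffic assumption
chats_per_month = chats_per_day * 30

for label, cost in [("low", monthly_cost_low), ("high", monthly_cost_high)]:
    cents_per_chat = cost / chats_per_month * 100
    print(f"{label} estimate: {cents_per_chat:.1f} cents per chat")
```

Under that traffic assumption, the low estimate works out to about one cent per chat, which is where the "single-digit cents" quote lands; the high estimate would put it at tens of cents.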
DukkyDrake t1_j1ilas7 wrote
Reply to comment by Ortus12 in Am I the only one on this sub that believes AI actually will bring more jobs (especially in tech)? by raylolSW
That example is just a long-established manufacturer of industrial robots, demonstrating that products aren't necessarily super cheap just because their production is as automated as economic concerns permit.
DukkyDrake t1_j1ihagt wrote
Reply to There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
No one can really know what the future will be like; different people have hopes framed by some limited set of variables and, IMO, some desperation. I may include or exclude other variables and starting conditions, leading to a different framing. There are many possible futures, but some things are more likely than others.
DukkyDrake t1_j1idk99 wrote
Reply to comment by Ortus12 in Am I the only one on this sub that believes AI actually will bring more jobs (especially in tech)? by raylolSW
It's possible. But you should consider the fact that human labor usually isn't the largest share of retail prices. (E.g., you might pay ~$0.57/lb for potatoes in Austin, TX, of which ~$0.12/lb goes to the farmer.) Grocery store labor costs are around ~14% of sales.
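The potato split above can be checked with quick arithmetic (the prices are the rough figures quoted in this thread, not exact market data):

```python
# Back-of-envelope: what share of the retail potato price reaches the farmer?
# Figures are the rough estimates from the comment above, not exact market data.
retail_price = 0.57  # $/lb at an Austin, TX grocery store
farm_gate = 0.12     # $/lb paid to the farmer

farmer_share = farm_gate / retail_price
print(f"Farmer's share of retail price: {farmer_share:.0%}")  # ~21%

# Even if automation eliminated the farm cost entirely, ~$0.45/lb of
# non-farm costs (transport, retail, etc.) would remain in the price.
print(f"Non-farm costs: ${retail_price - farm_gate:.2f}/lb")
```

The point: fully automating the farming step caps the retail price reduction at roughly a fifth, which is why automation of one link in the chain rarely makes the end product dramatically cheaper.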
There is a factory in Japan that operates lights-out, with no human workers. Robots do all of the work, and the factory happens to manufacture robots. These robots made by other robots are expensive. The entire chain isn't automated (they don't make the semiconductors, etc., in their robots), but my point is that companies price products at the maximum the market will bear, not the cheapest they can manage. Although manufacturing and labor cost doesn't determine the price, automation does lower the cost of goods and services.
DukkyDrake t1_j1fyvi9 wrote
Reply to comment by Nervous-Newt848 in Am I the only one on this sub that believes AI actually will bring more jobs (especially in tech)? by raylolSW
There is one limiting factor: economics. If an AGI running a McDonald's is more expensive than a poorly educated human, the human will still have a job frying burgers. If every random person can run a bootleg AGI algorithm on their beefed-up desktop, economics won't be a huge limiting factor; access to raw materials will be. But that outcome would likely be an existential threat.
DukkyDrake t1_j1f7kkr wrote
Reply to Am I the only one on this sub that believes AI actually will bring more jobs (especially in tech)? by raylolSW
No, you're not. It will bring more jobs long before it can do everything. People use more of a given good or service the cheaper it gets. AI will deliver serfdom to tech workers and make them no better off than the average unskilled laborer after mechanization.
> before the Industrial Revolution 90% of people were farmers.
The reason people found other work is that a tractor or a combine couldn't do those other jobs. A tractor could plow the fields, but it couldn't become an accountant. An automated intelligence could operate a tractor and serve as an accountant.
DukkyDrake t1_j1f0ddz wrote
Reply to If your opinion is "it's good because it's AI," you're not really thinking very far ahead. by OldWorldRevival
It doesn't matter if it's good or bad. It's happening and there is nothing most people can do to stop it.
DukkyDrake t1_j1b5igp wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
> means to be aware
Not many use "sentient" in relation to AI to mean simply the textbook definition. Attach any model to the internet, a camera, or a sensor, and you have your "sentient" tool.
>As in being just a few steps above where chatGPT is right now, legitimately understanding and comprehending the things it's being told and how they relate to the world
It would be a lot more than a few steps; ChatGPT isn't even close. All it's doing is probabilistic prediction of human text: it's predicting the most likely next word in context based on its training corpus.
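"Predicting the most likely next word based on a training corpus" can be sketched in miniature with a toy bigram model. Real LLMs use deep networks over vast corpora, but the flavor of the training objective is the same, and it has no built-in notion of truth, only frequency:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then predict the most frequent successor. This is a deliberately
# minimal sketch of next-word prediction, not how GPT models are built.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" in 2 of 4 cases
```

Note the model will happily "predict" with full confidence regardless of whether the continuation is true, which is the unreliability being described here.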
DukkyDrake OP t1_j3cz0vt wrote
Reply to comment by just-a-dreamer- in A more realistic vision of the AI & Programmer's jobs story by DukkyDrake
How have people reacted to similar threats to their profession since time immemorial? How are artists currently reacting to their impending existential commercial threat? They will react badly, and that's understandable, but it ultimately doesn't matter on the macro scale. Expecting the world to slow down or make a special exception for you isn't reasonable. They will all do what others have done in the past; they will all accept it in the end because there is no recourse. Society will be better off in the end, assuming the cultural ethics of your local society has a fair amount of humanism at its core. If it doesn't, a lot of people could be in for some difficult times.