acutelychronicpanic
acutelychronicpanic t1_jdq1ep1 wrote
Reply to comment by nomoreimfull in Taxes in A.I dominated labour market by Newhereeeeee
You would need a way to circulate money through the economy. I favor UBI, but there are other possibilities, I'm sure.
acutelychronicpanic t1_jdpw1fd wrote
Reply to Taxes in A.I dominated labour market by Newhereeeeee
If unemployment were so bad that no taxes were coming in from that many people, I don't think tax revenue would be the biggest concern.
My hope is that corporations realize the smoothest way to transition from here to an AI-augmented workforce, and then to post-scarcity, will be something like UBI.
Someone has to actually purchase the products being produced, unless we're throwing out the whole system (which would also put their role as capital owners in jeopardy).
acutelychronicpanic t1_jdpbkul wrote
You don't need an AI to be smarter than humans in order to get an intelligence explosion. You just need an AI that's better at AI design. This might be much easier.
acutelychronicpanic t1_jdparx0 wrote
Reply to What do you want to happen to humans? by Y3VkZGxl
There's no such thing as an unprompted GPT-X when it comes to alignment and AI safety. It seems to be explicitly trained on this and probably has an invisible initialization prompt above the first thing you type. That prompt will have said a number of things regarding the safety of humans.
acutelychronicpanic t1_jdp9l67 wrote
Reply to comment by 1714alpha in ChatGPT is about to revolutionize the economy. We need to decide what that looks like. New large language models will transform many jobs. Whether they will lead to widespread prosperity or not is up to us. - MIT technology review by HorrorCharacter5127
As a point in support of your argument: we already live in a world where everyone could be fed if we wanted to do it. The US has more empty houses than homeless people, and plenty of land on top of that.
I'm not saying we're doomed, but we should directly address the issue if we don't want a decade of unprecedented turmoil.
acutelychronicpanic t1_jdhksvy wrote
Reply to comment by BinarySplit in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
Let it move a "mouse" and feed it the next screenshot at some time interval. Probably not the best way to do it, but that seems to be roughly how humans do it.
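Roughly what I mean, as a toy sketch: `ask_vision_model()` is a hypothetical stand-in for whatever image-capable model API you'd actually call, and pyautogui is just one way to grab the screen and move the cursor.

```python
import time
import pyautogui

def ask_vision_model(screenshot, prompt):
    """Hypothetical stand-in for an image-capable model call.
    Imagine it returns something like {'x': 100, 'y': 200, 'click': True}."""
    raise NotImplementedError("plug in your vision model API here")

def control_loop(goal, interval=2.0):
    while True:
        screenshot = pyautogui.screenshot()            # capture the current screen
        action = ask_vision_model(
            screenshot, f"Goal: {goal}. Where should the mouse go next?"
        )
        pyautogui.moveTo(action["x"], action["y"])     # move the "mouse"
        if action.get("click"):
            pyautogui.click()
        time.sleep(interval)                           # then loop in the next screen
```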
acutelychronicpanic t1_jdgadd8 wrote
Reply to comment by Ill-Construction-209 in ChatGPT Gets Its “Wolfram Superpowers”! by Just-A-Lucky-Guy
According to that recent paper on GPT-4, it's pretty good at using this kind of tool. So yes, it will!
acutelychronicpanic t1_jdgaa1h wrote
Hopefully this will be available alongside the other GPT-powered tools that are coming. I need this in my spreadsheets.
acutelychronicpanic t1_jd7o0et wrote
Reply to comment by [deleted] in Would AGI/ASI cut down the human population to help humanity thrive? by [deleted]
I agree entirely with what you are saying. I just think that most people talking about this greatly underestimate our available resources as technology improves.
Say we get fusion.
What do carrying capacity and farmland acreage even mean when you can create tons of starch and protein in bioreactors for pennies a pound, with the inputs being things like air, water, energy, and abundant minerals?
acutelychronicpanic t1_jd7ga3k wrote
This is an unpopular opinion with all of the environmental concerns we have at the moment (which are both legitimate and serious), but with advances in technology, Earth can hold a ludicrous number of people comfortably.
If AGI were here, genetic engineering of crops would be supercharged, fusion would be fast-tracked, and truly intelligent systems would be ubiquitous.
A Thanos-inspired solution would do far more harm than the overpopulation it is supposedly addressing.
acutelychronicpanic t1_jaaobgq wrote
Reply to comment by New-Shop-7539 in The world should be governed by people with intellectual thought and people should listen by New-Shop-7539
You wouldn't want to read the accounts of others who sought the same thing you're seeking? To see where it led?
acutelychronicpanic t1_jaan8nl wrote
Reply to comment by New-Shop-7539 in The world should be governed by people with intellectual thought and people should listen by New-Shop-7539
I hope you succeed in making the world a better place. Just don't focus so much on efficiency and perfect solutions. They can be short-sighted.
It seems to me like you have an idea of what the ideal world might look like. But beware: it would likely be unstable. The problem with central control is that it's fragile. Voting systems are inefficient, but they're more robust and harder to corrupt. Still corruptible, obviously, but less so.
You want to seek a system that can withstand the pressures and corruption of the real world.
acutelychronicpanic t1_jaa9vz7 wrote
Reply to comment by New-Shop-7539 in The world should be governed by people with intellectual thought and people should listen by New-Shop-7539
People have been having this conversation for a long time. Good government is hard. Read history, read political science and economics, and definitely read up on ethics.
acutelychronicpanic t1_jaa8op0 wrote
Reply to comment by New-Shop-7539 in The world should be governed by people with intellectual thought and people should listen by New-Shop-7539
Read history. Good governance has been a concern for thousands of years.
The problem with such global reforms, even if they could be carried out, is: what if you're wrong?
acutelychronicpanic t1_jaa88mc wrote
Reply to The world should be governed by people with intellectual thought and people should listen by New-Shop-7539
The problem with this is that most people think they are wise. I doubt you will find many historical examples of governance where the leaders didn't claim to have the most wisdom and insight.
The fault in this idea is in the actual implementation. How do you find the most competent? How do you ensure that those chosen aren't corrupted by power? What does this system look like in 50 years?
It's equivalent to saying "let's all agree to only do good things for good reasons."
acutelychronicpanic t1_j9ctex1 wrote
Reply to comment by helpskinissues in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
Because it's what brought AI into the public discourse in a big way. Look at how it is forcing Google's hand. It will be remembered as the shot of the starting pistol, regardless of what occurred behind closed doors.
acutelychronicpanic t1_j9cs8sl wrote
Reply to comment by helpskinissues in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
You're missing the point then. ChatGPT is a toy. It's a preview.
ChatGPT is far from perfect but it is the first publicly available software system capable of genuine unscripted reasoning. That's the magic barrier that many seemed to think was decades away.
Now it's a race to scale.
acutelychronicpanic t1_j9crne5 wrote
If Google announced tomorrow that they had developed true AGI, the news agencies would be discussing its impact on some topic that will be irrelevant post-AGI (e.g., how AGI could be used to conduct job interviews).
We have a very, very weak AGI right now, and people are concerned with... grading essays?
acutelychronicpanic t1_j76vt5v wrote
Reply to comment by contractualist in There Are No Natural Rights (without Natural Law): Addressing what rights are, how we create rights, and where rights come from by contractualist
The social contract is an after-the-fact justification rather than the basis for society.
acutelychronicpanic t1_j5z5wx0 wrote
Reply to comment by [deleted] in Are most of our predictions wrong? by Sasuke_1738
I don't agree. It's fine to make predictions that turn out to be wrong, as long as you made a good-faith effort. When we make and discuss those predictions, it helps society figure out what is important and prepare for the future.
acutelychronicpanic t1_j5ucwl8 wrote
Reply to comment by HelloGoodbyeFriend in Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
Our culture places a lot of importance on relative value. It doesn't matter how good you are at something; it's about how much better you are than other people. With AI, people can see it eroding their relative value and get defensive.
acutelychronicpanic t1_j4rtrji wrote
Reply to comment by Akimbo333 in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
If its "thought" is communicated internally using natural language, then we could follow chains of reasoning.
acutelychronicpanic t1_j4qjjmf wrote
I don't know about this particular implementation, but I agree that AI models will need to be interconnected to achieve higher complexity "thought".
I think LLMs are close to being able to play the role of coordinator. By using language as a medium of communication between modules and systems, we could even have interpretable AI.
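Something like this toy sketch is what I have in mind. `llm()` is a hypothetical stand-in for any chat-style model call, and the modules are made up for illustration; the point is that every inter-module message is plain English you can read afterward.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-style model call."""
    raise NotImplementedError("plug in your LLM API here")

# Made-up modules for illustration; each one takes and returns plain English.
MODULES = {
    "math": lambda task: llm(f"Work through the math in: {task}"),
    "code": lambda task: llm(f"Write code for: {task}"),
    "search": lambda task: llm(f"List the facts that would need looking up for: {task}"),
}

def coordinate(user_request: str) -> str:
    transcript = []  # every inter-module message is readable text
    choice = llm(
        f"Request: {user_request}\n"
        f"Pick exactly one module from {sorted(MODULES)} and name it."
    )
    transcript.append(f"coordinator chose: {choice}")
    # Route to the first module whose name appears in the coordinator's reply.
    for name, module in MODULES.items():
        if name in choice.lower():
            transcript.append(f"{name} replied: {module(user_request)}")
            break
    answer = llm(
        f"Request: {user_request}\nNotes so far:\n" + "\n".join(transcript) +
        "\nWrite the final answer."
    )
    transcript.append(f"coordinator answered: {answer}")
    print("\n".join(transcript))  # the whole chain of reasoning is inspectable
    return answer
```

Because the coordinator and the modules only ever exchange natural language, the transcript itself is the interpretability story.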
acutelychronicpanic t1_j4gbb4j wrote
Due to liability and regulations, medical AI will be a tool for doctors and nurses, not patients. At least until it is considered old and reliable.
acutelychronicpanic t1_jdqrppa wrote
Reply to comment by Fluglichkeiten in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Probably not? At least not any public models I've heard of. If you had a model architecture design AI that was close to that good, you'd want to keep the secret sauce to yourself and use it to publish other research or develop products.
LLMs show absolutely huge potential for being a conductor or executive that coordinates smaller modules. The plug-ins coming to ChatGPT are the more traditional software version of this. How long until an LLM can determine that it needs a specific kind of machine learning model to understand something, cook up an architecture, and choose appropriate training data?
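A purely illustrative sketch of that last step, assuming an LLM that can emit a structured spec: `llm()` is again a hypothetical stand-in, and the JSON format here is something I made up.

```python
import json
import torch.nn as nn

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-style model call."""
    raise NotImplementedError("plug in your LLM API here")

def propose_and_build(task_description: str) -> nn.Module:
    # Ask the LLM to describe a network as JSON (made-up schema for this sketch).
    spec_text = llm(
        "Given this task, reply only with JSON like "
        '{"layers": [{"in": 64, "out": 128}, {"in": 128, "out": 10}]}:\n'
        + task_description
    )
    spec = json.loads(spec_text)
    layers = []
    for i, layer in enumerate(spec["layers"]):
        layers.append(nn.Linear(layer["in"], layer["out"]))
        if i < len(spec["layers"]) - 1:
            layers.append(nn.ReLU())       # non-linearity between hidden layers
    return nn.Sequential(*layers)          # the architecture the LLM "cooked up"
```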