DukkyDrake t1_j16qxou wrote
Reply to Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
Because 100% of all existing sentient agents have goals, desires, or objectives outside of what they're told to do.
DukkyDrake t1_izy7n2l wrote
I followed NIF's progress a decade ago and stopped when they gave up and went back to nuclear weapons research. I was, and still am, skeptical that a fusion power plant will look anything like their setup. I think there is at least one private effort following their general approach. I fear this is scientific progress that doesn't necessarily speed the path to a power plant, but I would be happy if it only serves to boost funding for fusion engineering efforts.
DukkyDrake t1_izui3py wrote
Reply to AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
>Clearly, we have already achieved narrow super intelligence
It's very good at what it was trained to do: probabilistic prediction of human text. Use it outside that context and it will fail unexpectedly and badly.
DukkyDrake t1_izcftva wrote
Reply to comment by asschaos in How will the transition between scarcity-based economics and post-scarcity based economics happen? by asschaos
Not no chance, just unlikely.
DukkyDrake t1_izc96i3 wrote
Reply to comment by asschaos in How will the transition between scarcity-based economics and post-scarcity based economics happen? by asschaos
could, yes.
DukkyDrake t1_izc78cg wrote
Reply to How will the transition between scarcity-based economics and post-scarcity based economics happen? by asschaos
That possible future transition isn't guaranteed. A few broadly capable AGIs in the world that are tightly controlled would make that transition less likely. Many easily replicated AGI systems in the world would make it more likely, but it would also make it more likely you would not survive long enough to enjoy it.
The Economics of Automation: What Does Our Machine Future Look Like?
DukkyDrake t1_iyx1ail wrote
>This thing apparently knows everything from its vast training data
This AI tool is just predicting the next word from your prompt and its training data; it doesn't actually know anything in the way you mean it. The AI architectures you need to worry about do not currently exist, and improving existing architectures to 99.99% accuracy will not turn them into the AIs you need to worry about.
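To make "predicting the next word" concrete, here is a deliberately tiny sketch. It uses toy bigram counts over a handful of words instead of a transformer over tokens, so the corpus and every name in it are illustrative only, not how any real model is built; the principle it shows is the same, though: the output is a sample from a learned distribution over next words, not a lookup of facts the system "knows".

```python
import random
from collections import Counter, defaultdict

# Toy "training data"; a real model sees vast amounts of text,
# but the principle is the same: learn what tends to follow what.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn P(next word | current word) as simple bigram counts.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def next_word(word):
    """Sample the next word from the learned distribution."""
    words, counts = zip(*bigrams[word].items())
    return random.choices(words, weights=counts)[0]

# Autoregressive generation: feed each prediction back in as the new prompt.
out = ["the"]
for _ in range(6):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

Scale the data and the model up by many orders of magnitude and you get fluent text, but the mechanism is still conditional prediction, which is why it fails in unexpected ways outside its training distribution.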
DukkyDrake t1_iyorgm3 wrote
I don't think any improvements in existing models change the AGI landscape. Existing architectures perfected to 99.99% accuracy get you a bunch of narrow/weak superintelligent models, not AGI. If you had millions of those, one for every economically useful task, that would pass for AGI.
R&D needs to max out on existing architectures before researchers seriously branch out and search the possibility space for something that will get you a proper learning algorithm.
If you want AGI, you will need the R&D community to realize that existing models won't get them what they want and that they need to explore elsewhere.
DukkyDrake t1_iy6b20g wrote
I think you will have to wait for attrition to clear the field of the bulk of the pre-1980s generations before you have a shot at a UBI in the US, perhaps by the ~2050s.
DukkyDrake t1_iy3azz8 wrote
Reply to Why is VR and AR developing so slowly? by Neurogence
Depends on your end-stage expectations for VR. If it's simply PS5 realism in a VR helmet, then you'll get that within Zuck's metaverse time horizon, 3-15 years. I ultimately expect VR to be a letdown because of the interface; there is nothing on the likely tech roadmap that will make VR interface with the human body as well as it does in fiction. You would need something from an alternate tech tree derived from some future white swan event.
DukkyDrake t1_ixw4xs3 wrote
Reply to For anyone still believing that standalone VR/AR/MR will flourish and popularize in the 2020s, please watch this video and think again. by Quealdlor
How does this video relate to "standalone VR/AR/MR will flourish and popularize in the 2020s"?
DukkyDrake t1_ixus2bo wrote
Reply to comment by Plzbanmebrony in The West is slowly rebuilding its rare earths supply chain. by BalticsFox
They were simply willing to cut the most corners, operating the way American industry did before dumping toxic waste in the nearest stream was frowned upon.
The problem is that the global value of rare earth imports was only $1.15 billion back in 2019. It's not a huge market, but it involves a lot of cost on the processing end; it's not just about mining.
DukkyDrake t1_ixukj4e wrote
Reply to comment by Plzbanmebrony in The West is slowly rebuilding its rare earths supply chain. by BalticsFox
> They don't have the cheapest source they just sell at the lowest price just to have control.
That makes China the cheapest source.
Good luck to any country that thinks it can beat their prices without turning its own countryside into a hellscape.
DukkyDrake t1_ixsee50 wrote
Reply to comment by dex3r in Your perfect guide to understand the role of Python in Artificial Intelligence (AI) by Emily-joe
> was just chosen by Google to build early...
It's no accident. Python was about as close to the job description "programmer" as academic scientists wanted to get.
DukkyDrake t1_ixs8ixk wrote
China doesn't have a monopoly; it has simply been the cheapest source for the past several decades.
DukkyDrake t1_ixqrd8h wrote
>Assuming you believe, like most people, that quantum computers are just a super faster kind of computer you can just install Linux and use like usual.
While what you're imagining probably isn't realistic, there are some theoretical quantum speedups that could benefit AI. Look up quantum machine learning and quantum memristors if you're interested. Even if AI running on quantum hardware doesn't pan out, AGI will likely make heavy use of quantum computing the same way humans will: via an API accessed from more classical hardware.
The end result could be the same even if the exact implementation details of "AGI run on a quantum computer" are not what you envisioned. An AGI could theoretically set up quantum calculations in a pipeline and analyze the results a lot faster than humans could. Humans would slowly explore a possibility space experimentally over decades; that window could be greatly compressed if an AGI does the work, though the exact degree of improvement and the speedup of the resulting breakthroughs are debatable.
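A minimal sketch of that pipeline idea, assuming a made-up job-submission API: `QuantumService`, `submit_job`, and `result` are invented names standing in for whatever cloud quantum providers actually expose. The point is only that classical code can propose experiments, farm them out to quantum hardware, and analyze the results in a loop far faster than a human-paced research cycle.

```python
import time

class QuantumService:
    """Stand-in for a cloud quantum API client (purely illustrative)."""
    def submit_job(self, circuit_description: str, shots: int) -> str:
        return "job-001"  # pretend job id returned by the remote service
    def result(self, job_id: str) -> dict:
        return {"00": 512, "11": 512}  # pretend measurement counts

def pipeline(candidates):
    """Classical code proposes experiments, quantum hardware runs them,
    classical code scores the results and keeps the best candidate."""
    q = QuantumService()
    best = None
    for circuit in candidates:
        job_id = q.submit_job(circuit, shots=1024)
        time.sleep(0)  # in reality: poll until the remote job finishes
        counts = q.result(job_id)
        score = counts.get("11", 0)  # toy figure of merit
        if best is None or score > best[1]:
            best = (circuit, score)
    return best

print(pipeline(["circuit_a", "circuit_b"]))
```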
DukkyDrake t1_ixf2gpq wrote
Reply to Meta AI presents CICERO — the first AI to achieve human-level performance in Diplomacy, a strategy game which requires building trust, negotiation and cooperation. by Kaarssteun
This is an engineered system, not just some singular large fine-tuned model. I like the continued progress in this direction; even the various proposed "learning agents" point the same way. I still expect, and hope it remains more likely, that AGI will be a CAIS-like system.
DukkyDrake OP t1_ix7v3z7 wrote
Reply to comment by ahfoo in AGI: Impossibility of safe explicit control by DukkyDrake
Yes, most tend to confuse current AI tools with future AI and their associated capabilities.
DukkyDrake t1_ix49b1i wrote
Reply to is it ignorant for me to constantly have the singularity in my mind when discussing the future/issues of the future? by blxoom
There are no guarantees that existing R&D efforts will result in a technological savior within your time horizon. There is no master plan; society is composed of a bunch of individual money-making efforts. If the medicine that cures whatever ails you isn't profitable, you are not going to survive. Nothing gets made unless it has high profit potential relative to its risk; there are always easier ways to make money.
>we lived through the ice age
Humans, though not necessarily you individually, can survive the worst end of the AGW prediction range over the next 75 years. Just don't be poor. Would you prefer to spend your time trying to survive in such an environment, or in the temperate interglacial that coincided with the rise of technological human civilization?
>what you just think by 2050 we'll be sitting on our asses doing nothing to prevent a mass extinction?
Why not? Doing nothing is easy. Was anything done to prevent mass extinctions over the preceding 30 years?
DukkyDrake t1_iwz9l0s wrote
Reply to comment by rixtil41 in Lev/ Modern super computer tech question by IzanTeeth
I'm only aware of one pathway that could improve extant nanofabrication, and it has gone nowhere in 30 years. There were easier ways to make money.
DukkyDrake t1_iwx65l4 wrote
Reply to comment by tokkkkaaa in Lev/ Modern super computer tech question by IzanTeeth
Economies of scale; the world is big.
>The newest EUV machines are state of the art, producing the smallest feature sizes in nanofabrication, and cost over $350M apiece. There are a few hundred of the previous versions in existence worldwide, and lead times for the old versions are 12-18 months. This tech is currently used to spit out semiconductor wafers before they're chopped up into chips; a large fab might produce 250k wafers a month.
You're going to need a high-tech manufacturing stack like the semiconductor industry's to manufacture your nanobots. It will take decades to ramp capacity, and available capacity will land in the hands of the highest bidders. Also, 60-year-olds trying to stay alive will be competing for the retail product with 30-year-olds trying to look like they're 20-year-olds.
DukkyDrake t1_iwwu3pu wrote
Reply to Lev/ Modern super computer tech question by IzanTeeth
It depends on a lot of factors. If medical nanobots to reverse aging are created tomorrow, I would not expect your avg 60-year-old to survive long enough to get access to them.
DukkyDrake t1_j187ffw wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
Why even assume sentience or consciousness in the first place?