SoylentRox
SoylentRox t1_ivwv2ia wrote
AGI is convergent. Now that there are multiple countries and many well funded companies and government groups working on parts of it, it means that almost all of them can fail and it won't change anything.
It means that if someone builds an inferior proto-AGI prototype, X1, the process of building it gives them some information on what they screwed up, so X2 will be closer to something that works. And so on. Even going in wrong directions is ok when you can try thousands of things in parallel.
It means that once you hit the "proto" AGI stage - some deeply inferior machine that just barely works - it just has to design a better version of itself over a few hours and then...
The reason this didn't happen prior to now - the reason it didn't happen in the 1960s when early AI researchers thought the problem might not be as difficult as it is - is they didn't have close to enough computing power and memory. It turns out to take thousands and thousands of TOPS, and terabytes of memory.
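The compute gap can be sketched with some rough arithmetic. All figures here are assumed order-of-magnitude ballparks, not measurements:

```python
# Ballpark comparison of a 1960s mainframe vs. a modern AI accelerator.
# Every number below is an assumed rough order of magnitude.
mainframe_ops = 1e6         # ~1 MIPS-class machine, 1960s
accelerator_ops = 1000e12   # ~1000 TOPS, modern AI accelerator
mainframe_mem = 1e6         # ~1 MB of core memory
accelerator_mem = 1e12      # ~1 TB across a modern training node

print(f"compute gap: ~{accelerator_ops / mainframe_ops:.0e}x")
print(f"memory gap:  ~{accelerator_mem / mainframe_mem:.0e}x")
```

On these assumptions the gap is around a billion times the compute and a million times the memory, which is why the 1960s attempts never had a chance.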
This was solely the domain of supercomputers 10 years ago - in fact, we're effectively throwing more computing power into AI research each day right now than any supercomputer on earth. Maybe as much as all of them combined.
SoylentRox t1_ivw3sij wrote
Reply to Will Text to Game be possible? by Independent-Book4660
For a more level-headed take:
Think about how complicated a game is. Enormously huge environments, behavior of npcs and objects fitting well defined rules, multiplayer synchronization.
So even with pretty advanced AI, getting a game that can be played at all is a very different ask from "give me a game like GTA 5 but better".
Moreover, even if a machine could in fact give you a "better" knockoff of GTA 5, without many people playing the game and giving the AI feedback on where to improve, it still might not be a very good game.
SoylentRox t1_iuh5vhs wrote
Reply to comment by ehj in Is there such a thing as a gamma radiation mirror? by AlarmingAffect0
Are we talking "plasma thick enough for fusion" or what?
SoylentRox t1_iu0qt8k wrote
Reply to comment by purple_hamster66 in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Yeah, basically. Typing on my phone, but yes. If you want to engineer a gear train, you are really asking for "the cheapest set of gears that performs function X, has a 99 percent chance of working past warranty period Y, and fits in as small a space as possible".
So the machine can propose various gear sets, and you can auto-score how well each one meets the three criteria above.
It can use that score as an RL signal to propose better gears.
At scale - with millions of simulated years of practice and hundreds of thousands of variations of the "design a gear train" problem - even a very stupid algorithm that learns poorly will still be better than any human alive.
Simply by brute force - it has more experience and can propose a thousand solutions in the time a human engineer needs to propose one.
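The propose-and-score loop above could be sketched roughly like this. The scoring fields, weights, and the random proposal function are all made-up illustrations; in a real system the proposals would come from a learned policy trained on the score as a reward signal:

```python
import random

def score(design, cost_weight=1.0, vol_weight=1.0):
    """Auto-score a candidate gear set on the three criteria:
    cheapness, reliability past warranty, and compactness."""
    if design["p_survive_warranty"] < 0.99:
        return float("-inf")  # hard constraint: 99% chance of outliving warranty
    # Lower cost and smaller volume are better, so negate the weighted sum.
    return -(cost_weight * design["cost"] + vol_weight * design["volume"])

def random_design(rng):
    # Stand-in for a learned proposal policy; just samples blindly.
    return {
        "cost": rng.uniform(10, 100),
        "volume": rng.uniform(1, 10),
        "p_survive_warranty": rng.uniform(0.95, 0.999),
    }

rng = random.Random(0)
candidates = [random_design(rng) for _ in range(1000)]
best = max(candidates, key=score)
print(best)
```

Even this blind sampler finds feasible designs by sheer volume; an RL-trained proposer would just get there with far fewer samples.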
SoylentRox t1_iu08mt6 wrote
Reply to comment by a1b4fd in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Yeah. Only a few elite companies are able to get AI to work at all. You need a critical mass of skilled AI experts in one company, a bunch of other people to support the infrastructure so that AI development is productive, and tens of millions in hardware. You need technically savvy managers who understand the context of the decisions they are making; people-persons who fake the technical side can't make effective decisions on cutting-edge AI projects.
This excludes almost all companies.
SoylentRox t1_iu088qw wrote
Reply to comment by Dras_Leona in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Probably won't even take an engineer - just a technician working remotely.
SoylentRox t1_iu084rl wrote
Reply to comment by TheDividendReport in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Of course it will automate engineers. Many (but not all) engineering problems can be described as a simple optimization problem you can autograde.
SoylentRox t1_iu07tl6 wrote
Reply to comment by Kong_Here in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
So the automation will be developed regardless of whether Americans are available to do it. China is investing heavily in AI too, and has fewer obstructive regulations that are actually enforced. (They care a lot less about the risks to workers and about pollution.)
A shortage of workers means more potential profits for automation.
SoylentRox t1_itls3hm wrote
Reply to comment by purple_hamster66 in Could AGI stop climate change? by Weeb_Geek_7779
There are, again, problems with this that limit how far you can get. The market is zero-sum. Ultimately, creating your own company (or buying one) and producing real value may pay more than manipulating the market.
SoylentRox t1_iticdbm wrote
Reply to comment by purple_hamster66 in Could AGI stop climate change? by Weeb_Geek_7779
The Gato paper is one, yeah.
HFT isn't the same kind of AI, and there is a problem with training systems to manipulate markets: the behavior is too complex to simulate.
SoylentRox t1_iti8nsc wrote
Reply to comment by purple_hamster66 in Could AGI stop climate change? by Weeb_Geek_7779
I am saying that if we have AGI like we have defined it, funding it is simple.
Also, we know exactly how AGI will work, as we nearly have it - pay attention to the papers.
The people building it have outright explained how it will work; just go read the Gato paper or LeCun's.
These systems cannot manipulate markets.
SoylentRox t1_ithqsxp wrote
Reply to comment by purple_hamster66 in Could AGI stop climate change? by Weeb_Geek_7779
? We don't have working AGI yet. But the funders of it have 250 billion+ in revenue.
There's no gnomes. It's:
(1) a megacorp like Google/Amazon/Facebook develop AGI
(2) the megacorp funds the massive amounts of inference accelerator hardware (the robots are the cheap part; the expensive part is the chips the AGI uses to think) to run many instances of the AGI software (which is not a singleton - there are many variants and versions)
(3) the megacorp makes a separate business division and spins it off as an external company for an IPO, such that the megacorp retains ownership but gets hundreds of billions of dollars from outside investors.
(4) outside investors aren't stupid. They can and will see immediately that the AGI will quickly ramp to near infinite money, and will price the security accordingly.
(5) with hundreds of billions of starter money, the AGI starts selling services to get even more money and building lots of robots, which ultimately will be used to make more robots and inference accelerator cards. Ergo exponential growth, ergo the singularity.
Frankly, do you know anything about finance? This isn't complicated. For a real-world example of this happening right now, see Waymo and Cruise. Both are preparing exactly this kind of IPO for a lesser use of AI than AGI: autonomous cars.
SoylentRox t1_itf7zug wrote
Reply to comment by purple_hamster66 in Could AGI stop climate change? by Weeb_Geek_7779
I go over how to do that in my post. The rest is a lot of reinforcement learning.
SoylentRox t1_itdl8hg wrote
Reply to comment by purple_hamster66 in Could AGI stop climate change? by Weeb_Geek_7779
Robots cost so much money mostly because
(1) high end robots are made in small numbers and are built by hand mostly by other humans
(2) IP for high end components. (Lidars, high power motors and advanced gearing systems, etc)
So in theory an AGI would need some starter money, and it would pay humans to make better robots in small numbers. Those robots would be specialized for making other robots - targeting whatever the most expensive part of the process is. Then the next generation of robots is cheaper, and those robots are sent to automate the second most expensive part of the process, and so on.
Assuming the AGI has enough starter money it can automate the entire process of making robots. It can also make back money to keep itself funded by having the robots go make things for humans and sell them to humans.
The IP is solved a similar way - the AGI would need to research and develop its own designs, free of having to pay license fees for each component.
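The "automate the most expensive step first" loop can be shown with a toy cost model. The step names, dollar figures, and the 90% cost cut per automated step are all assumed for illustration:

```python
# Toy model: each robot generation automates whichever step of its own
# production is currently the most expensive, cutting that step's cost.
# All names and numbers are made-up illustrative values.
step_costs = {"assembly": 50_000, "actuators": 30_000,
              "sensors": 15_000, "wiring": 5_000}

def automate_most_expensive(costs, reduction=0.9):
    """Pick the priciest remaining step and cut its cost by `reduction`."""
    step = max(costs, key=costs.get)
    costs[step] *= (1 - reduction)
    return step

for generation in range(4):
    total = sum(step_costs.values())
    target = automate_most_expensive(step_costs)
    print(f"gen {generation}: total cost {total:,.0f}, automating {target}")
```

After four generations the per-robot cost in this toy model has fallen to about 10% of the original, and the savings from each generation fund the next - the exponential part of the argument.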
SoylentRox t1_itcyamu wrote
Reply to comment by ConnorGoFuckYourself in 3D meat printing is coming by Shelfrock77
Sounds less and less appetizing lol (meat glue: for fake crab and gluing pink slime together!) but yeah it'll work. Plus less cruel than killing animals.
SoylentRox t1_itav3hc wrote
Reply to comment by TorchOfHereclitus in 3D meat printing is coming by Shelfrock77
>that we haven't nailed down yet in replicating it, but we'll get there eventually
So that's not how this works. You can grind up the real muscle tissue and the fake muscle tissue and assay out how much of each amino acid and each lipid type is in each sample.
At this point you can modify the genes for the lab grown cells until it has the nutritional profile you want. If it exactly matches the beef steak sample, it is nutritionally the same. It doesn't matter the path taken to grow it.
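The equivalence check is just a component-by-component comparison of the assay results. The components and numbers below are made up for illustration:

```python
# Assayed composition per gram of sample; all values are illustrative.
real_steak = {"leucine_mg": 38.0, "lysine_mg": 41.0, "omega3_mg": 0.4}
lab_grown  = {"leucine_mg": 37.6, "lysine_mg": 41.5, "omega3_mg": 0.4}

def profiles_match(a, b, tolerance=0.05):
    """True if every assayed component agrees within a relative tolerance."""
    return all(abs(a[k] - b[k]) <= tolerance * a[k] for k in a)

print(profiles_match(real_steak, lab_grown))  # prints True
```

If every component matches within tolerance, the two samples are nutritionally interchangeable regardless of how each was grown.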
SoylentRox t1_itamong wrote
Reply to comment by TorchOfHereclitus in 3D meat printing is coming by Shelfrock77
Remember those biology classes they made you take when you trained to be a nutritionist?
I mean you do have a degree, right?
Well in those classes they probably taught you that cells grow together into organs. A chunk of 'steak' from a cow is part of a muscle organ.
So if you grew the same cells in a vat, probably in separate vats, one for each cell type - and assembled the cells into the same geometric shape as the organ - then the nutritional value will be the same.
Or slightly better - you can very easily manipulate the kind of fat the adipose cells make. Cows don't make enough omega-3 fatty acids, but there is no reason your lab-grown version has to work that way. Just edit the genes slightly so you get the fat ratio you want.
This would actually be better: as a nutritionist, you could start putting people on a fish/lentils/lab-grown-beef diet.
SoylentRox t1_itamd8q wrote
Reply to comment by Ezekiel_W in 3D meat printing is coming by Shelfrock77
It's probably some distance away at scale, but this is one of the first demos I have seen where it's even possible.
The trick here seems to be rather than try to grow a whole steak, where you have to trick biology into getting the signals it got in a cow embryo, you grow the muscle and fat separately in separate vats. And any other components.
Then they turn the grown cells into something with the right consistency and adhesiveness to print - I am not sure how they do that.
SoylentRox t1_irsecq0 wrote
What is sad is that the AI is good enough to detect some of these problems but not smart enough to develop solutions. Knowing you have Parkinson's is worthless when there is no treatment that slows the disease.
SoylentRox t1_iqu21dr wrote
Reply to comment by insectula in Self-Programming Artificial Intelligence Using Code-Generating: a self-programming AI implemented using a code generation model can successfully modify its own source code to improve performance and program sub-models to perform auxiliary tasks. by Schneller-als-Licht
Pretty much. There are numerous ways to do this: you simply need to nail down which tasks you believe are intelligent, build a benchmark that is automatically scored, and supply enough compute plus a way for lots of developers to tinker with self-amplifying methods (versus 100 people at DeepMind having access). Once the pieces are all in place, the singularity should happen almost immediately (within months).
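The core loop - an auto-scored benchmark plus a proposal step that keeps improvements - can be sketched in a few lines. Here `propose` is just random perturbation of one parameter, a deliberately dumb stand-in for a code-generating model modifying itself:

```python
import random

def benchmark(solver):
    """Automatically scored task battery: the solver should reproduce
    each input; higher (less negative) scores are better."""
    tasks = [2, 7, 13, 21]
    return -sum(abs(solver(t) - t) for t in tasks)

def make_solver(scale):
    # Toy "program" with one tunable parameter.
    return lambda t: scale * t

rng = random.Random(42)
best_scale = 0.5
best_score = benchmark(make_solver(best_scale))
for _ in range(200):
    candidate = best_scale + rng.gauss(0, 0.1)   # propose a variant
    s = benchmark(make_solver(candidate))
    if s > best_score:                           # keep only improvements
        best_scale, best_score = candidate, s
print(best_scale, best_score)
```

The real version replaces the random perturbation with a model that rewrites its own code, but the scaffolding - auto-scored benchmark, propose, keep-if-better - is the same.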
SoylentRox t1_iw2vqt0 wrote
Reply to What if the future doesn’t turn out the way you think it will? by Akashictruth
So I think there's one important thing to note here.
Yes, absolutely. In fact, if nothing is changed in terms of how economic policy is done, AI would just make the owners of the AI's IP and a small elite crew of software engineers and decision-makers the wealthiest people on earth, while 99% of the population has no function. And they can't even rebel - assuming the AI is controllable, you just give it a task to design some defense robots, and another task to manufacture them (with recursive tasks to design the robots that manufacture the robots that...).
Riots are pointless. You could literally have painted lines around defense perimeters where anyone stepping over the line is shot instantly - no missed shots, no hesitation. You can't meaningfully overrun something like that unless you literally have more rioters than the guns have ammo, and good defense software could set up multi-kills in that scenario, killing several people per bullet.
But it's a heck of a lot more interesting than business as usual, where we are supposed to live our short, boring lives and die of aging - which seems to just be a software bug in our cells that we have the tools to patch; we just don't have the knowledge to know what to change.