SoylentRox t1_j8e72bz wrote
Reply to comment by PrivateUser010 in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
2017...
Everything that mattered happened in the last few years. The earlier stuff didn't work well enough.
SoylentRox t1_j8cblun wrote
Reply to comment by [deleted] in This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
Theoretically it should query a large number of models and have a "confidence" score based on how likely each model's answer is to be correct, then return the highest-confidence answer.
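A minimal sketch of that idea (the `Model` wrapper, the `ask` interface, and the model names below are hypothetical stand-ins, not any real API):

```python
# Toy sketch of confidence-weighted routing across several models.
# `ask` is assumed to return (answer, self-reported confidence in [0, 1]).
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Model:
    name: str
    ask: Callable[[str], Tuple[str, float]]

def best_answer(models: List[Model], question: str) -> Tuple[str, str, float]:
    """Query every model and keep the answer with the highest confidence."""
    results = [(m.name, *m.ask(question)) for m in models]
    return max(results, key=lambda r: r[2])

# Stub models standing in for real ones:
models = [
    Model("science-qa", lambda q: ("mitochondria", 0.55)),
    Model("general-lm", lambda q: ("the mitochondria", 0.91)),
]
name, answer, confidence = best_answer(models, "What is the powerhouse of the cell?")
print(f"{name}: {answer} (confidence {confidence:.2f})")
```

In practice the hard part is calibrating those confidences - a model's raw score isn't the same thing as its probability of being right.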
SoylentRox t1_j8cb8c0 wrote
Reply to This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
Obviously the next question is "what happens if you give it 1 trillion parameters?" (That would be 1 trillion / 738 million ≈ 1,355 times as many params.)
SoylentRox t1_j7r6eft wrote
Reply to I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
Or the nuclear weapons/racial slur scenario. The point of that scenario isn't getting ChatGPT to emit a string containing a bad word; it will do that happily with the right prompt. It's getting it to reason ethically that there exists a situation, however unlikely, where emitting a bad word would be acceptable.
SoylentRox t1_j7lb053 wrote
Reply to comment by dasnihil in John Carmack’s ‘Different Path’ to Artificial General Intelligence by lolo168
Pretty much. It's also that those math wizards may be smarter than current AI, but they often duplicate work. And it's an iterative process: AI starts with what we know and tries some things very rapidly. A few hours later it has the results and tries some more things based on those, and so on.
Those math wizards need to publish and then read what others have published. Even with rapid publishing like DeepMind's blog posts - they do this because academic publication takes too long - it's a few months between cycles.
SoylentRox t1_j7la9hx wrote
Reply to comment by dasnihil in John Carmack’s ‘Different Path’ to Artificial General Intelligence by lolo168
We've got $100B to spare on this - more than that, actually. Might as well use it. Once we find a working AGI we can work on power efficiency.
SoylentRox t1_j7j08r9 wrote
I don't think he'll succeed, but for a very lame reason.
He's likely right that the answer won't lie solely in transformers. However, the obvious way to find the right answer involves absurd scale:
(1) thousands of people build a large benchmark of test environments (many resembling games) and a library of primitives by reading every paper on AI and implementing the ideas as composable primitives.
(2) billions of dollars of compute are spent to run millions of AGI candidates - at different levels of integration - against the test bench from (1).
This effort would consider millions of possibilities - in a year or two, more possibilities for AGI than all the work done by humans so far. And it would be recursive: these searches aren't blind, they are being done by the best-scoring AGI candidates, which are tasked with finding an even better one (a rough sketch of the loop is below).
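A rough toy sketch of what that search loop could look like (everything here is hypothetical - `score_on_benchmark` stands in for the test bench from (1), and the primitive library for the one built from the papers):

```python
# Toy sketch: candidates are combinations of composable primitives, scored
# against a benchmark; the best scorers seed the next round of the search.
# `score_on_benchmark` and the primitive library are hypothetical stand-ins.
import random
from typing import Callable, List

Candidate = List[str]  # a candidate = an ordered combination of primitive names

def mutate(candidate: Candidate, primitives: List[str]) -> Candidate:
    """Swap one primitive in the candidate for a random alternative."""
    i = random.randrange(len(candidate))
    return candidate[:i] + [random.choice(primitives)] + candidate[i + 1:]

def search(primitives: List[str],
           score_on_benchmark: Callable[[Candidate], float],
           generations: int = 10,
           population: int = 100) -> Candidate:
    pool = [random.choices(primitives, k=3) for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pool, key=score_on_benchmark, reverse=True)
        elites = ranked[: population // 10]  # best-scoring candidates survive
        pool = elites + [mutate(random.choice(elites), primitives)
                         for _ in range(population - len(elites))]
    return max(pool, key=score_on_benchmark)
```

The recursive part is that the scoring and proposal steps would eventually be run by the current best candidates themselves rather than by a fixed script.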
So the reason he won't succeed is he doesn't have $100 billion to spend.
SoylentRox t1_j6m6i16 wrote
Reply to comment by CriticalUnit in Study: Enough minerals to fuel green energy shift -"The analysis is robust and this study debunks those (running out of minerals) concerns" by Surur
Yep. For batteries, sodium can substitute for lithium, and LFP doesn't require any nickel or cobalt. Some motor designs are just as efficient with zero rare-earth magnets. Aluminum can replace the heavy copper cables in an EV.
SoylentRox t1_j6immie wrote
Reply to comment by DustBunnicula in Are there any real movements against AI technology? by musicloverx98x
I would argue your skepticism is of the same form as above:
"Since AI can't control robotics well (in the SOTA implementations; it controls robots very well in other papers), by the time I graduate college from the time I selected my major (2-4 years), AI still won't be able to do those things."
You actually may be right, for a far more pedantic reason: good robotics hardware is expensive.
SoylentRox t1_j6h7z4w wrote
Ironically, the most anti-AI people I have seen are usually religious or otherwise determined to stay "skeptical".
Their skepticism is usually of the form "well, ChatGPT is only right MOST of the time, not ALL of the time, therefore it's not progress towards AGI".
Or "it can solve all these easy problems that only some college students can solve, but it can't solve the HARDEST problems, so it's not AGI".
Or "it can't see or draw" (even though this very capability is being added as we speak).
So they conclude "no AGI for 100+ years", which was their previous belief.
SoylentRox t1_j6h74gz wrote
Reply to comment by Northcliff in How rapidly will ai change the biomedical field? What changes can be expected. by Smellz_Of_Elderberry
You do understand that zero modifications were made to the formula. If the FDA had just approved it immediately, it WOULD have saved hundreds of thousands of lives.
This was a rationally designed vaccine - it wasn't random. The sequences used had all been tested, and the protein targeted was picked from a model of the virus. So there was a legitimate scientific reason to believe it would be safe and effective on the first try, as it was.
The FDA's defense mechanisms were essentially designed a century ago for quacks, to make it so they couldn't push their snake oil.
SoylentRox t1_j6gn8qr wrote
Reply to comment by Idiot_Savant_Tinker in Scientists lower price of lithium's best competition - flow batteries - by 20%. Makes the battery effectively equal to or cheaper than lithium ion when spread over 30 years (flow battery lifetimes are effectively infinite with light repowering efforts). by PorkyPigDid911
We do, but it's very cheap, and it's not an issue if some is trapped in batteries.
SoylentRox t1_j6f8bbn wrote
Reply to comment by PorkyPigDid911 in Scientists lower price of lithium's best competition - flow batteries - by 20%. Makes the battery effectively equal to or cheaper than lithium ion when spread over 30 years (flow battery lifetimes are effectively infinite with light repowering efforts). by PorkyPigDid911
LFP and sodium batteries offer a pretty good compromise.
With LFP: their 4000+ cycle life is at least 11 years at one full cycle per day (4000 / 365 ≈ 11), possibly 15-20 before they need replacement, depending on the application. They don't use rare earths and have become reasonably cheap per kWh. BYD Blade cells are in the $70-120 per kWh range (I have seen both numbers). You can buy them right now.
With sodium: similar lifespan and safety benefits to LFP, and similar energy density. No use of lithium, which means we won't bottleneck on lithium. CATL (the world's largest battery manufacturer) promises mass production this year.
SoylentRox t1_j6f6fpv wrote
Reply to AI will not replace software developers, It will just drastically reduce the number of them. by masterile
So the problem with your analogy is this.
Farming has 2 natural limits:
(1) once you grow enough food to feed everyone plus excess to overcome waste, no more food is needed.
(2) there is only so much land available that is suitable for farming - once you cultivate enough of it, no more farming is needed each year
Coding... well... dude. I bet someone wants factories on the Moon, O'Neill habitats, biotech plants that make replacement organs, AI doctors that keep people alive no matter what goes wrong, and catgirl sex robots.
Do you have ANY IDEA how much more complex the software to make the above possible will be? Just try to imagine how many more systems are involved to accomplish each thing. If software engineers and computer engineers have to do even a tiny fraction of the work, they are all going to be busy for centuries.
SoylentRox t1_j6evlu7 wrote
Reply to How rapidly will ai change the biomedical field? What changes can be expected. by Smellz_Of_Elderberry
I think there will be 'overhang'. AI is developed into AGI. AGI becomes able to control lesser forms of biology at will (custom plants, custom small animals, immortal pets).
And then gradually the performance gets so good that the FDA and other bottlenecks are bypassed, once it simply can't be denied how good the results are. Hundreds of millions of people will die who could have been saved, just like the FDA slow-walking Moderna's vaccine cost millions of lives.
Try not to be among them.
SoylentRox t1_j6bxjkj wrote
Reply to comment by Ok_Sea_6214 in I’m ready by CassidyHouse
I hope 10% survive. The skies are dark for a reason, and at our current level of knowledge it looks suspiciously easy for a lot of humans to become immortal and grab most of the universe. The remaining problems all look solvable in a reasonable (years to decades) amount of time if you have a superintelligence to handle the details.
SoylentRox t1_j69ls82 wrote
Reply to comment by ecnecn in Myth debunked: Myths about nanorobots by kalavala93
I was referring to molecular assemblers - a machine that runs in a vacuum chamber at a controlled temperature. It receives, through plumbing, hundreds of 'feedstock gases' that are pure gases of a specific type. It can make many (thousands+) kinds of nanoscale parts, then combine those parts into assemblies, combine those assemblies, and so on.
Everything is made of the same limited library of parts, but they can be combined in many different ways.
This makes possible things like cuboidal metal "cells" that are robotic, do not operate in water, and can in turn interact with each other to form larger machines - making possible something like the 'T-1000' from Terminator 2. (It probably couldn't reconfigure itself as quickly as the machine in the movie, but that doesn't matter since it wouldn't miss when shooting.)
Custom proteins are for medicine and won't work the same way at all.
SoylentRox t1_j69cerm wrote
Reply to comment by Sashinii in Myth debunked: Myths about nanorobots by kalavala93
The 'shape of the solution' would look like hundreds of thousands, maybe millions, of separate automated STM 'labs' in some larger research facility (imagine a 1 km x 1 km x 1 km cube or bigger). Millions of experiments would run in parallel, with the goal of finding patterns of atoms to serve as each reliable machine part you need for a full nanoforge, plus other experiments investigating the rules behind many possible nanostructures to develop a general model.
For every real experiment there are millions of simulated ones, where the AI system is systematically working on finding a full factory design able to self-replicate the entire factory, and on finding the least-cost bootstrapping path: build the minimum amount of nanomachinery the hard way, with all the rest of the parts made by partially functioning nanoassemblers.
"The hard way" probably means atom by atom, using STM tool heads to force each bond.
SoylentRox t1_j69brvx wrote
Reply to comment by Sashinii in Myth debunked: Myths about nanorobots by kalavala93
Agree. Around 2014 I read Nanosystems and was pretty enthusiastic about the idea.
But as it turns out, the complexity of solving this problem is so large that human labs just won't be able to do it. Forget decades - I would argue that if they couldn't use some form of AI at least as good as what has already been demonstrated, it may never get solved.
SoylentRox t1_j64yel3 wrote
Reply to comment by TwitchTvOmo1 in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
I just mean "how do I make my money back and pay back the investors".
If we want democratized AI, we vote and get the government to pay for the compute and the services of all those rockstars. The compute costs (which will likely soar into the sky - later systems this decade will probably burn billions in compute just to train) mean this isn't supportable by an open-source model.
SoylentRox t1_j64w50r wrote
Reply to comment by TwitchTvOmo1 in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
I have a better proposal to exploit this.
Build an AGI system smart enough to do the work of a "think tank" (your example). Have it do lots of demo work and prove it's measurably better than the human competition.
Sell the service. Why would you sell the actual AGI architecture/weights or hardware? Sell the milk, not the cow lol.
AI/AGIs will probably always be 'rented', except for open-source ones that you can 'own' by downloading.
SoylentRox t1_j64vpu0 wrote
Reply to comment by genshiryoku in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
>I agree in AI models becoming commodities over time as has been seen with Stable Diffusion essentially disrupting the entire business model of paid image generation like Dall-E and Midjourney.
Ehhhhhhhhhh
So the basic technology to make an OK model, yes. But it's quite possible that 'machine learning rockstars', especially if they get recursive self-improvement to work, will be able to make models that have a 'moat' around them - even if it's just because it costs $500M to train the model and $500M to buy the data.
Then they can sell services that are measurably better than the competition's... or than hiring humans...
That sounds like a license to print money to me.
SoylentRox t1_j64veyb wrote
Reply to comment by RobleyTheron in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
This is awesome.
And the moonshot upside is handled rather elegantly. Microsoft won't own the universe :)
SoylentRox t1_j64udw7 wrote
Reply to comment by freeman_joe in MusicLM: Generating Music From Text (Google Research) by nick7566
Yeah, that was the joke.
SoylentRox t1_j8e7bvb wrote
Reply to comment by genericrich in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
This is false. None of the algorithms we use now existed. They were not understood. Prior, much simpler versions of the algorithms did exist. It's a chicken-and-egg problem: we needed immense amounts of compute to find the algorithms needed to take advantage of immense amounts of compute.