SoylentRox t1_iyys3mc wrote
Reply to comment by [deleted] in bit of a call back ;) by GeneralZain
Because without AI, each succeeding problem in technology is harder to solve than the one before. Building the first transistor took two guys in a lab; getting to 4 nanometers takes thousands and thousands of people and $10-50 billion. It's so hard that only one company, TSMC, is expected to manage it soon.
Developing penicillin took basically one guy noticing that some mold scrapings were killing bacteria under a microscope. Developing a better antibiotic - especially one that deals with all these resistant bacteria that have popped up - takes thousands of people and billions of dollars.
And so on.
AI basically gives you the intellectual equivalent of thousands of geniuses, then millions, then the cognitive equivalent of all of humanity being geniuses dedicated solely to research, and so on, to apply to your problems.
The problems do still keep getting harder, but for a brief window of time during the Singularity you will see some crazy improvement. Obviously, post-singularity, tech is pretty close to as good as it can get, and progress would be smooth from there on out (barring black swans, like material from other universes or cheat codes, etc.).
SoylentRox t1_iyyqypq wrote
Reply to comment by Desperate_Donut8582 in bit of a call back ;) by GeneralZain
At a certain point the FDA is going to be under a lot of pressure to reform its policies. Other countries will allow more advanced medical procedures, and people will start getting majorly improved care driven by AI - patients with multiple organ failure surviving because an AI doctor can handle complex situations humans can't, patients with stage IV cancer regularly walking out of the clinic after only one treatment and no horrible side effects, that kind of thing.
It is possible to solve these problems if you have the right tools and infrastructure. The how is fairly obvious: for the multiple organ failure case, robots transplant in lab-grown organs for all the failing ones, deliver hundreds of drugs in parallel in real time with doses changing by the second, and splice in substitute organs externally as needed to keep the patient alive through the trauma of the surgeries. AI can do it because there are thousands of rules you need to take into account that a human doctor can't - the what-to-do is very complicated, and screw up just once and their brain tissue dies. The cancer case is simpler: it's a gene hack that introduces cancer suppression genes in the area of the tumor, causing the tumor cells to self-destruct while leaving the healthy ones alone.
SoylentRox t1_iyyqigr wrote
Reply to comment by Down_The_Rabbithole in bit of a call back ;) by GeneralZain
It kinda seems like we could direct the AI to build a huge number of research nodes and to study exhaustively these effects, seeking a safe electrode or nanotechnology wiring that tricks the brain into thinking that it's friendly. Or a mixture of drugs that does this. Or enough rules for life support that someone's immune system can be safely shut down. Or...
Basically, there seem to be a lot of ways to accomplish this once you can manipulate biology with more consistent results and can practice on millions of samples of human brain tissue (nothing unethical, just small batches of living cells from living or deceased donors), learning from them all in parallel.
I mean, no human scientist alive can learn from a million experiments in parallel, so of course we couldn't figure it out. It's too complicated; there are obviously thousands of variables.
And my point is that there are an awful lot of ways to succeed. For example: genetically modified neurons that get introduced, synapse to our cortical columns, and respond to new signaling molecules never used in humans before - borrowed from another animal - that are emitted by electrode grids embedded in surgically installed sheets. That might solve all the problems you mentioned above, because these modified 'bridge cells' can have behaviors programmed in to not become inflamed and to synapse well to artificial electrodes.
SoylentRox t1_iyyper0 wrote
Reply to comment by Head_Ebb_5993 in bit of a call back ;) by GeneralZain
Head_Ebb, do you understand the Singularity hypothesis?
While it's been rehashed many times, in its most general form: if humans build AI, that AI can use its above-human intelligence to build better AI, and to control vast numbers of robots that build more robots, which go out and collect materials and energy to build more computers to run AI on, and so on.
It is exponential. So if the hypothesis is correct, you will see rapidly accelerating progress to levels unknown in history. It will be impossible to miss or fake.
It doesn't continue 'forever', it halts when technology is improved close to the true limits allowed by physics, and/or when all the available matter in our star system is turned into waste piles and more robots.
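The shape described above - exponential takeoff that then flattens as technology approaches physical limits - is basically a logistic curve. A toy sketch of that dynamic, with entirely made-up numbers:

```python
# Toy model of the takeoff curve described above: capability grows in
# proportion to itself (each generation of AI improves the next) but
# saturates at a physical ceiling. All numbers are illustrative only.

def simulate_takeoff(c0=1.0, growth=0.5, ceiling=1000.0, steps=40):
    """Logistic growth: each step adds growth * c * (1 - c / ceiling)."""
    c = c0
    history = [c]
    for _ in range(steps):
        c += growth * c * (1.0 - c / ceiling)
        history.append(c)
    return history

curve = simulate_takeoff()
# Early steps look exponential; late steps flatten near the ceiling.
early_ratio = curve[1] / curve[0]
late_ratio = curve[-1] / curve[-2]
```

Early in the run the per-step growth ratio is near 1.5x; by the end it is essentially 1.0x, having converged just below the ceiling.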
So anyways, because it's exponential, your hypothesis of '2300-2400' for the technology of full dive VR isn't a plausible one. For your theory to be correct, human researchers would have to continue steadily studying biology and neuroscience (arguably they only became somewhat competent at it less than a century ago, with DNA actually discovered in 1953 and full genome sequencing in 1999) until they eventually develop safe neural implants.
You think it will take 328 years for that to happen!!! Hell, we don't have any technology now that people started on 328 years ago, and they have already started on neural implants (by 'started' I mean have a theory as to how to do it and have begun building working prototypes). About the only technology I can readily think of that humans have been working on for a long time and that doesn't work yet is fusion - and it does work, just not well enough.
This doesn't mean humans will get FDVR, but it means they will either have it in... well, if the singularity is actually starting right now, then 10-20 years, though maybe it isn't actually hitting criticality* yet... or they will be extinct.
*criticality: nuclear materials do jack shit until you reach a critical mass. For years fission scientists theorized a chain reaction was possible, but they didn't have enough enriched uranium in one lab, with enough neutron reflectors, for it to work. So all they could do was measure activity counts and do the math.
With AI, we theorize that we can get an AI smart enough to reprogram new versions of itself (or asymmetric peers) that perform well on tests of cognitive ability, including simulated tasks from the real world. Criticality happens when this works.
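The criticality analogy can be sketched in a couple of lines: a multiplication factor below 1 fizzles, above 1 it chain-reacts. The numbers here are illustrative, not physics:

```python
# Sketch of the criticality analogy above: with multiplication factor
# k < 1 each generation of neutrons (or self-improvement steps) shrinks
# and activity fizzles out; with k > 1 it chain-reacts and runs away.

def generations(k, n0=1000, steps=30):
    """Population after `steps` generations with multiplication factor k."""
    n = float(n0)
    for _ in range(steps):
        n *= k
    return n

subcritical = generations(k=0.9)    # shrinks toward zero
supercritical = generations(k=1.1)  # grows without bound
```

Thirty generations at k = 0.9 leave a few percent of the starting population; thirty at k = 1.1 multiply it more than tenfold.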
SoylentRox t1_iywax9s wrote
Reply to comment by Heizard in bit of a call back ;) by GeneralZain
Yep. Note that part of the cause of the AI winters was massive hype over future AI capabilities. From 1965 to 2012 steady improvements were made in AI - from neural networks to numerous attempts at symbolic logic machines to tons of other machine learning techniques - but it was never amazing and instantly useful in the real world.
It would be like physicists hyping nuclear fission bombs for decades when they simply couldn't get their hands on more than a gram of plutonium or uranium. Hype all they want; it isn't gonna work.
Obviously once you reach criticality with fission crazy shit happens and reactor cores glow red hot with absurd amounts of activity. And prompt critical, well...
AI needs many, many TOPS worth of compute and vast amounts of memory - a couple of terabytes of GPU memory is in use on the bigger models.
SoylentRox t1_ixgnzr5 wrote
Reply to comment by was_der_Fall_ist in 2023 predictions by ryusan8989
In theory. In practice, Intel held a layoff for their AI accelerator teams. Amazon let go a lot of Amazon Robotics and Alexa workers. Argo AI closed.
While, yeah, purer AI plays like Hugging Face raised at unicorn valuations.
It seems to be mixed outcomes.
SoylentRox t1_ixahf5k wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
Yeah. And Mexico and India are mostly cheaper because of regulations here that artificially restrict how many doctors can be trained and make it difficult to get a license to produce generic medicine.
SoylentRox t1_ixackl6 wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
So again, I think real medicine - my definition of real medicine is one where the error rate and the speed of responsiveness are such that deaths from all causes almost never happen - isn't compatible with the FDA.
At a certain point you need the AI system, and the doctors overseeing it, to do what needs to be done, and there's no way to regulate that by the chemical compounds used. You would need to use anything and everything, often synthesizing what you need right before use. Not to mention gene edits would be patient-specific, done by a learning algorithm that changes by the hour.
One way around this would be to offer the treatment in other jurisdictions. If the group doing this has an AGI and singularity-grade robotics, it's going to actually work. Once enough wealthy people have been restored, you'd start a political campaign to have the FDA abolished and replaced with an outcome-based agency.
There could amusingly be an edge period where Medicare, tired of paying nursing home and hospice bills, sends patients to Antigua or wherever to be regenerated and become legally no longer eligible for Medicare - while the FDA is still fighting its abolition and USA hospitals are still running and filling their morgues with their mistakes.
SoylentRox t1_ixa7i22 wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
Yes. And maybe early on, pre-AI revolution or whatever, patients will benefit from sloppier treatments. But the kind of pristine treatment that works every time, where your body looks like a supermodel's when it's done, with cosmetic fixes so you are stronger and smarter and better looking than original... yeah, that's gonna take AI. It's easy to describe the outcomes we want; it's millions of very difficult steps to achieve them.
SoylentRox t1_ixa20m8 wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
The reason is that if you reset their bio clocks so the cells put in an effort as large as a baby's, any mutation in a functional gene can cause a tumor and kill you - an aggressive, fast-growing tumor. So you need to be sure their genes are correct, and probably bring in cancer detection genes from people more resistant to cancer than you are - or from animals, if the immune system will permit it. Those naked mole rats probably have some great genes to borrow.
And if it all fails, this is where you need that AI life support, so you live through a stay in the ICU while huge surgeries are done against your various tumors.
SoylentRox t1_ixa1hba wrote
Reply to comment by fractal_engineer in How much time until it happens? by CookiesDeathCookies
Note that "cyberpunk shit" is a series of well described technical problems that human beings can't solve.
How do you build a neural interface that won't react with the human body or cause damage?
How do you perform the installation neurosurgery cheaply, quickly, and reliably?
Note we have prototypes for all this stuff that work in animals. The difference with CP2077 is it has to work 100 percent of the time in humans.
How does the software work? How does the cybersecurity work, so that what we see in the games isn't possible - implants have network links, but lower-level systems cannot be hacked or even manipulated without physical access and keys?
When you say "75-150" years you actually mean "I will not be alive to see it". And that may be so. I wasn't sure I would be alive to see AI make decent art but here we are.
SoylentRox t1_ix9tmx9 wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
Yeah, I know. The messed up thing is, if this is correct, the actual cause of aging is that our bodies are sabotaged. Just telling the cells they are young again makes them work harder.
Real treatments might have to start with one cell, patching any mutations - you might generate that cell's genome from scratch, so it has all new genes. Then differentiate it into age-zero stem cells and inject those. So your skin gets fixed by thousands and thousands of microinjections of new stem cells. Your liver gets surgically reduced, then a regrown lobe is spliced in. The heart gets muscle microinjections. Arteries similar. And so on.
A lot of careful work and any mistake and you die.
SoylentRox t1_ix9pyt3 wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
Do they even have de-aged rats or primates yet?
Like, again, I know it can be done. I think we will need AI-driven life support to do it in a way that works every time. AI-driven life support is basically an AI system that looks at the results of thousands of blood tests and other tests for you, takes into account the outcomes for millions of other people, and then decides what treatment to give you. And the machine doesn't leave the room - it re-evaluates every second or so.
With good software design, and the fact that the machine takes into account more information than any human can learn in a lifetime, you can expect much better results.
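The loop described above - read every sensor, score candidate adjustments against a model trained on prior outcomes, apply the best one, repeat every second - can be sketched minimally. Everything here (the sensor dict, the toy outcome model) is invented for illustration; no real medical system works like this two-liner:

```python
# Hypothetical sketch of the closed-loop life-support idea: each tick,
# pick the dose adjustment the outcome model scores highest, apply it,
# and re-evaluate on the next tick. Purely illustrative.

def control_step(vitals, dose, outcome_model, candidates=(-0.1, 0.0, 0.1)):
    """Pick the dose adjustment the model predicts gives the best outcome."""
    best = max(candidates, key=lambda d: outcome_model(vitals, dose + d))
    return dose + best

def toy_model(vitals, dose, target=1.0):
    """Stand-in for a model trained on millions of prior patients:
    prefers doses near some target."""
    return -abs(dose - target)

dose = 0.0
for _ in range(20):  # one iteration per second in the real loop
    dose = control_step({}, dose, toy_model)
# The loop walks the dose toward the model's optimum and holds it there.
```

The point of the sketch is the cadence: the controller never leaves the loop, so a drifting patient state gets a corrected dose within one tick.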
SoylentRox t1_ix9luaf wrote
Reply to comment by Homie4-2-0 in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
I think that actually doing this in a way that doesn't kill you will take many other advances. I just hope something is available within 50 years. Even if it is just good enough to add another 50 years, you know how LEV works.
SoylentRox t1_ix95ntf wrote
Reply to Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
I think the one drawback is we know from history that tech advances are almost always more difficult and slower than people initially hope. There is always a hidden problem or unexpected delay. New technology always has a hidden drawback that sometimes makes early versions worse than what they replace. Remember the poor light quality of cheap CFL replacements for incandescent bulbs? That kind of thing. Early smartphones, pre-iPhone, were actually worse than Nokia flip phones.
So when someone gushes about AGI or a treatment for aging by 2025, I kinda roll my eyes. It's not impossible, but both of those problems have been so difficult, and humans have tried for so long, that it probably won't be quite that soon.
SoylentRox t1_ix49fll wrote
Reply to comment by ChronoPsyche in is it ignorant for me to constantly have the singularity in my mind when discussing the future/issues of the future? by blxoom
So I also see it from the OP's perspective because...we don't need sentient AGI trillions of times smarter to solve climate change.
All we need is narrow AI - you can go look at papers demoing the results right now. It just needs to be a little better and used to drive robots, which various startups are working on right now.
So the speculation is not "some day, sentient AGI"; it's "robots in the lab that work now will work a bit better, such that they can automate most tasks that robots are capable of performing now."
Why is it important for better robots to do tasks that a current-gen robot could do? Simple. If the robots are a little smarter with narrow AI, they can do a much greater breadth of tasks. Instead of robots doing 80% of the work in electronics factories, they do 100%. Instead of mining equipment doing 80% of the work driven by human operators, it's 100%. And so on.
This solves climate change.
It gives you several obvious tools:
- It doesn't matter if cities are too close to the sea when it rises - these robots can be used to make modular subunits for new buildings, robotruck them to a site, and lift them into place. You could build an entire new city in months.
- It doesn't matter if arable land gets scarce - self-contained farms built and run by robots.
- It doesn't matter if the equator gets uninhabitable - robots go down there and get resources while people live at northern latitudes.
- We can build CO2-gathering systems and cover the Sahara Desert with solar panels to power them. Robots make this feasible, since they do 99.9% of the labor of manufacturing, deploying, wiring up, and maintaining the panels. A Sahara covered in solar gathers roughly enough energy to thermodynamically reverse the last 200 years of combustion for energy.
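The Sahara claim above can be sanity-checked with a back-of-envelope calculation. All inputs below are rough round figures (desert area, typical desert insolation, commercial panel efficiency, ballpark direct-air-capture energy cost), not precise data:

```python
# Back-of-envelope check of the "Sahara covered in solar" claim.
# Every number here is a rough public ballpark figure.

SAHARA_AREA_M2 = 9.2e12          # ~9.2 million km^2
INSOLATION_J_PER_M2_YR = 7.2e9   # ~2000 kWh/m^2/yr of sunlight
PANEL_EFFICIENCY = 0.20          # typical commercial panels

annual_output_J = SAHARA_AREA_M2 * INSOLATION_J_PER_M2_YR * PANEL_EFFICIENCY
# ~1.3e22 J/yr, i.e. roughly 13 zettajoules per year.

# Energy to pull all historical fossil CO2 back out of the air:
CUMULATIVE_CO2_TONNES = 1.7e12   # rough cumulative fossil emissions
DAC_J_PER_TONNE = 8e9            # ~8 GJ/tonne for direct air capture

capture_energy_J = CUMULATIVE_CO2_TONNES * DAC_J_PER_TONNE
years_needed = capture_energy_J / annual_output_J
```

With these rough inputs the capture job takes on the order of a single year of output, which at least puts the comment's claim in the right order of magnitude.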
The OP is right. We wouldn't be sitting helpless, even with no sentience, if the tools we already know work today were simply made production-grade.
SoylentRox t1_ix1f1g2 wrote
Reply to comment by was_der_Fall_ist in 2023 predictions by ryusan8989
Agree mostly. One confounding variable is that the coming recession may cut funding. I don't know how much gain we are getting from "singularity feedback" - that is, as AGI gets closer, AGI subcomponents become advanced enough to speed up progress toward AGI itself. As concrete examples: AutoML is one, the transformer is another, and mass-produced AI accelerator compute boards are a third. A true AGI will have a more advanced version of each of those components inside itself, and each speeds up progress.
The other form of singularity feedback: as it becomes increasingly obvious that AGI is near in calendar time, more money will be spent on it because of a higher probability of making ROI. You might have heard that Hugging Face, a startup that duplicated OpenAI's work with Stable Diffusion but better, has a paper value of a billion dollars, basically overnight.
This is similar to how as humanity got closer to a nuke multiple teams were trying in multiple countries.
Anyway, if the singularity gain is, say, 2x, and funding gets cut to 1/4, then in 2023 we will see progress at half speed.
Just as an example. If the gain is 10x the funding cut will be meaningless.
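The arithmetic in those two examples is just a product of two factors. A trivial sketch:

```python
# Effective progress as the product of the capability gain from AI
# feedback and the remaining funding fraction, per the examples above.

def effective_progress(singularity_gain, funding_fraction):
    return singularity_gain * funding_fraction

half = effective_progress(2.0, 0.25)     # gain 2x, funding cut to 1/4
robust = effective_progress(10.0, 0.25)  # gain 10x swamps the same cut
```

At 2x gain the funding cut halves progress; at 10x gain, progress still more than doubles despite the cut.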
And obviously the gain scales to, well, technically infinity - though since the singularity is a physical process it won't actually be that high as the singularity happens; presumably AGIs advance themselves and technology in lockstep until we hit the laws of physics.
That last phase would I guess be limited by energy input and the speed of computers.
SoylentRox t1_ix0pvgm wrote
Reply to comment by TemetN in 2023 predictions by ryusan8989
Do you have a prediction from last year? Didn't this image generation stuff come out of left field this year?
I am wondering if your predictions are way too conservative.
Those of us who survive next year will find out, but the last year has seemed suspicious to me. Too many advances, and they work too well. Not the usual empty promises and hype, but stuff that is starting to work.
If the singularity hypothesis is correct this pattern is going to continue - progress on AI itself accelerating as AI is chain reacting with itself. And if correct then progress will accelerate until the end.
SoylentRox t1_iwznvv0 wrote
Reply to comment by purple_hamster66 in A typical thought process by Kaarssteun
Maybe? Why salt? Why not just heat a bunch of ceramic bricks to near their melting point? Salt, especially hot salt, can corrode and melt things. With bricks, if the truck crashes, you just end up with glowing pottery on the ground - don't touch it, but it won't flow toward you.
You also get poor efficiency converting the heat back to work; you need a steam engine.
Frankly probably better to just transport diesel.
SoylentRox t1_iwwq5w1 wrote
Reply to comment by rnimmer in Full Self-Driving Twitter by [deleted]
This is vaguely possible eventually, but as you know, DevOps is incredibly fragile. You can't just get close; you need the exact correct commands to merge repos, build the codebase, run unit tests, etc.
This isn't really automatable with current or plausible near-future AI/ML tech. Note I work in AI, have also done some DevOps, and am very enthusiastic about AI/ML for task spaces where the goal is clearly defined, the task environment can be adequately simulated, and performance is differentiable.
For example, a robot moving boxes in a warehouse fits those criteria: what to do and what not to do is relatively simple, and expressible in a robust sim. DevOps has many, many subtle consequences, and the solution needs to get every relevant detail right.
Software that translates code from one language to a faster language without loss of correctness is also possible in a similar way.
SoylentRox t1_iwhhk1l wrote
Reply to comment by purple_hamster66 in A typical thought process by Kaarssteun
Agree totally on the solar. The only thing the fusion does for you is it saves you having to develop a long term method of energy storage. There are lots of ways to do this but there are tradeoffs.
Flow batteries are the most promising because they are efficient - you get 80-plus percent of the stored energy back - so you just need an electrolyte chemistry that is not too toxic, is cheap, and can be stored in gigantic, cheap, unpressurized tanks.
You can also make hydrogen - maybe store it in metal hydrides, as ammonia, or just as pressurized gas - and burn it in fuel cells. This loses a lot of energy and requires expensive equipment.
There are also various pressurized-air and heat storage concepts - they all have cheap storage material but poor efficiency.
Note that hydroelectric and lithium battery storage is not long-term; it's short-term storage, for the next couple of days. You need something to store energy to cover seasonal shortfalls and black swan periods when renewables produce little for a while.
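The efficiency tradeoff above translates directly into how much extra generation you need per joule delivered back from storage. The efficiency figures below are rough ballpark values for illustration, not vendor specs:

```python
# Round-trip efficiency determines the overbuild: energy you must put
# in to get a given amount back out later. Figures are rough ballparks.

ROUND_TRIP_EFFICIENCY = {
    "flow battery": 0.80,          # cheap tanks, good efficiency
    "hydrogen + fuel cell": 0.35,  # cheap to store, lossy to convert
    "compressed air / heat": 0.50, # cheap material, poor efficiency
}

def generation_needed(energy_out_J, efficiency):
    """Energy you must put in to get `energy_out_J` back later."""
    return energy_out_J / efficiency

# Generation required to deliver 1 GJ from each storage type:
required = {name: generation_needed(1e9, eff)
            for name, eff in ROUND_TRIP_EFFICIENCY.items()}
```

At 80% round-trip you overbuild generation by 25%; at 35% you need nearly three times the delivered energy, which is why the storage medium being cheap doesn't settle the question by itself.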
SoylentRox t1_iw7rj54 wrote
Reply to comment by eddieguy in Will this year be remembered as the start of the AI revolution? by BreadManToast
That was later. This was kitty/no kitty.
SoylentRox t1_iw6djg5 wrote
Reply to comment by Yuli-Ban in Will this year be remembered as the start of the AI revolution? by BreadManToast
Agree. I came here to say this. 2012, with Andrew Ng's effort to use a neural network on a large server farm of CPUs to find cats in YouTube videos, was the first "modern" AI attempt. As I recall, with all that compute, their accuracy was something like 60%. Versus YOLO, which can run in real time on one GPU and find cats by breed.
That was the start. But 2022 does feel like something special. It might be the "knee" of the S-curve - when things start to go exponential. Hard to say; the recession throttling the flow of money into AI may slow things down a little.
SoylentRox t1_iw4w8rv wrote
Reply to comment by cjeam in What if the future doesn’t turn out the way you think it will? by Akashictruth
I'm willing to accept that risk because it means the possible gain of things like:
(1) treatments that turn the elderly back into young people, stronger and smarter and better looking than they were originally.
(2) catgirl sexbots.
SoylentRox t1_iyytf3y wrote
Reply to comment by Head_Ebb_5993 in bit of a call back ;) by GeneralZain
>Is this some kind of cult ? Or religion ? I know what is singularity , just because it sounds simple doesn't matter , because you underestimate how hard it is to get to that level , when we can't even practically define what is intelligence...
It's neither. It's a large group of people - many of us live in the Bay Area and work for AI companies to make it happen. It's an informed opinion about what we think is about to happen, similar to the nuclear fission researchers in the 1940s who thought they would be able to blow up a city but weren't entirely sure they weren't about to blow up the planet.
Your other objections date to before 2012. Please update your knowledge.