ReasonablyBadass t1_iwygn2q wrote
The headline should continue with: "So far, no task has been found for them there that couldn't be done on the ISS or from a spaceship orbiting the Moon."
ReasonablyBadass t1_iwplk0c wrote
Reply to comment by zzzthelastuser in [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
Semantics. It didn't see any of its data more than once, and it had more available. Not even one full epoch.
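A rough back-of-the-envelope sketch of what "not one full epoch" means here, assuming the roughly 300 billion training tokens and ~500 billion token filtered dataset reported for GPT-3 (approximate figures, for illustration only):

```python
# Rough effective-epoch arithmetic for a large language model run.
# The figures below are approximate values reported for GPT-3 and are
# assumptions for illustration, not exact numbers.

TRAINING_TOKENS = 300e9   # tokens actually consumed during training
DATASET_TOKENS = 500e9    # tokens available in the filtered dataset

effective_epochs = TRAINING_TOKENS / DATASET_TOKENS
print(f"Effective epochs over the full dataset: {effective_epochs:.2f}")
# ~0.6: on average the model never completed a single pass over its data,
# even though individual sources can still be weighted and repeated.
```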
ReasonablyBadass t1_iwoq0ug wrote
Reply to comment by TheRealSerdra in [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
Not a complete one. GPT-3, I think, didn't complete its first pass-through.
ReasonablyBadass t1_iwnbmrx wrote
ReasonablyBadass t1_iwk1f0i wrote
Reply to MIT researchers solved the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithms by Dr_Singularity
Can someone ELI5 these liquid networks?
ReasonablyBadass t1_iwfkgus wrote
Reply to comment by Numinak in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
In that case, wouldn't you want to help them?
ReasonablyBadass t1_iw1zh3z wrote
Reply to Two-armed bismuth behemoth by BismutNL
Did you Bismiss me?
ReasonablyBadass t1_ivsrk7d wrote
The Free Shavacadoo
ReasonablyBadass t1_ivo63q5 wrote
Reply to comment by TheFram in Experimental “FLASH” cancer treatment aces first human trial by tonymmorley
Ah cool, thank you!
ReasonablyBadass t1_ivnt9d7 wrote
Reply to comment by ptjunkie in Experimental “FLASH” cancer treatment aces first human trial by tonymmorley
I meant at the same time, so that you get higher energy in the tumor tissue.
ReasonablyBadass t1_ivnskp7 wrote
I never quite got why we don't use multiple radiation beams from multiple angles?
Low-powered in the tissue they pass through, but overlapping in the tumor.
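A toy numerical sketch of that idea (purely illustrative, not real treatment planning): a few weak beams cross a grid from different angles, and only the voxels where they all overlap pick up a high total dose.

```python
import numpy as np

# Toy illustration: several weak beams aimed through the same target point.
# Each beam deposits a low dose along its whole path, but the doses add up
# where the beams intersect (the "tumor"), so the target receives several
# times the dose of the surrounding tissue. All numbers are arbitrary.

N = 201                                  # grid size (arbitrary units)
beam_width = 3.0                         # half-width of each beam
dose_per_beam = 1.0                      # dose deposited by a single beam
angles = np.deg2rad([0, 45, 90, 135])    # four beams from different angles

y, x = np.mgrid[0:N, 0:N]
cx = cy = N // 2                         # target (tumor) at the grid center
dose = np.zeros((N, N))

for a in angles:
    # Perpendicular distance of every voxel from the beam's central line,
    # which passes through the target at angle `a`.
    dist = np.abs(-(x - cx) * np.sin(a) + (y - cy) * np.cos(a))
    dose[dist <= beam_width] += dose_per_beam

print("dose at tumor center:", dose[cy, cx])                    # ~4.0 (all beams overlap)
print("dose in tissue along one beam path:", dose[cy, cx + 50]) # ~1.0
```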
ReasonablyBadass t1_ivnhqt5 wrote
Reply to [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
People still talk about the Chinese Room? But it's so nonsensical.
It's like saying: a CPU can't play Pong, therefore a CPU plus a program to play Pong can't play Pong.
ReasonablyBadass t1_ive55c2 wrote
Reply to comment by Glitched-Lies in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Consciousness isn't material. It's not a substance but an information pattern. As long as you can run that pattern, the underlying mechanism is irrelevant.
ReasonablyBadass t1_ivdloar wrote
Reply to comment by Glitched-Lies in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
So? Why would a physical difference have anything to do with whether or not a different system can be conscious?
ReasonablyBadass t1_ivdlih6 wrote
Reply to comment by Carl_The_Sagan in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Difference being digital minds will be able to talk.
ReasonablyBadass t1_iv1x2pv wrote
Reply to comment by droneowner in Paralyzed patients can now connect their iPhones to their brains to type messages using thoughts alone | It's now possible to mind control your smartphone. But are we ready to open this can of worms? by prOboomer
And their idea of preparedness is "just don't do anything new, ever".
ReasonablyBadass t1_iv19j3e wrote
Reply to Paralyzed patients can now connect their iPhones to their brains to type messages using thoughts alone | It's now possible to mind control your smartphone. But are we ready to open this can of worms? by prOboomer
Always the same tired doom bullshit.
BUT WHAT IF SOMETHING BAAAD HAPPENS?
ReasonablyBadass t1_iuzpvgh wrote
Reply to comment by pseudorandom_user in [N] Class-action lawsuit filed against GitHub, Microsoft, and OpenAI regarding the legality of GitHub Copilot, an AI-using tool for programmers by Wiskkey
Wouldn't it be much better to state that any AI-derived code from this will automatically be open source?
ReasonablyBadass t1_iu04n6l wrote
Reply to comment by AmishAvenger in Star Trek: Strange New Worlds wins Saturn Award for Best Streaming Sci-fi Series by Shizzlick
They are roughly the same level of melodrama mixed with stupid for me.
ReasonablyBadass t1_itzyag5 wrote
There is a realm of possibility between "willing slave" and "genocidal maniac"
ReasonablyBadass t1_itubdlz wrote
Reply to comment by porcenat_k in Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
>Human beings understand basic concepts and don’t need to read the entire internet for that.
We have years of training data via multiple high-bandwidth input channels before we reach that level, though.
ReasonablyBadass t1_itts24o wrote
Reply to Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
The current transformer architecture may need a few more tweaks for AGI to work, but I'd say it's already close.
ReasonablyBadass t1_ittryn7 wrote
Reply to comment by SatisfyingLatte in Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
Overfitting isn't an issue anymore due to the discovery of double descent/grokking.
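A minimal sketch of the double descent effect referred to here, using minimum-norm least squares on random Fourier features (toy data, made-up parameters): test error climbs toward the interpolation threshold and then falls again as the model keeps growing.

```python
import numpy as np

# Minimal double-descent illustration: fit y = sin(x) + noise with an
# increasing number of random Fourier features, using the minimum-norm
# least-squares solution (np.linalg.lstsq returns it when the system is
# underdetermined). Test error typically peaks near n_features ~= n_train
# and then drops again past the interpolation threshold.

rng = np.random.default_rng(0)
n_train, n_test = 30, 500

x_train = rng.uniform(-np.pi, np.pi, n_train)
x_test = rng.uniform(-np.pi, np.pi, n_test)
y_train = np.sin(x_train) + 0.3 * rng.standard_normal(n_train)
y_test = np.sin(x_test)

def features(x, w, b):
    """Random Fourier features: cos(w * x + b) for each (w, b) pair."""
    return np.cos(np.outer(x, w) + b)

for n_feat in [5, 10, 20, 30, 40, 100, 500, 2000]:
    w = rng.normal(0, 2, n_feat)            # random frequencies
    b = rng.uniform(0, 2 * np.pi, n_feat)   # random phases
    Phi_train = features(x_train, w, b)
    Phi_test = features(x_test, w, b)
    coef, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"{n_feat:5d} features -> test MSE {test_mse:.3f}")

# Expect roughly U-shaped error up to ~30 features, a spike near the
# interpolation threshold (n_features == n_train), and a second descent
# as the feature count keeps growing.
```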
ReasonablyBadass t1_ittrx0o wrote
Reply to comment by manOnPavementWaving in Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
No? There have been a lot of developments in getting results with smaller models, though. Basically, people figured out ways to not need to train such huge models, which means the bigger models will now be even better. But the focus currently is figuring out how to get the most out of current sizes.
ReasonablyBadass t1_ixtkk6p wrote
Reply to comment by misterhamtastic in Covering a cylinder with a magnetic coil triples its energy output in nuclear fusion test by Gari_305
I know at least one fusion startup that uses direct conversion.