footurist
footurist t1_izmbg49 wrote
Reply to comment by ttkciar in The technological singularity is happening (oc/opinion) by FrogsEverywhere
That's a bit harsh. The guy actually accomplished a lot; cue the insufferably long list of patents and inventions...
Although it's becoming more obvious these days that it's likely going to take a bunch of conceptual breakthroughs to change course towards AGI from what's currently being developed. Admittedly, he foresaw things quite differently from how they've turned out. Also the broken record stuff...
footurist t1_iyb0pl4 wrote
Reply to comment by Artanthos in Sci-fi-like space elevators could become a reality in the "next 2 or 3 decades" by Shelfrock77
I wouldn't try that one here. From my observation, this is by a large margin the single most often ignored property of the singularity. It's understandable at least; a phenomenon like this evokes strong curiosity in many people.
footurist t1_ixgzonm wrote
Reply to what does this sub think of Elon Musk by [deleted]
I'll refrain from ethical judgements, since those quickly devolve into incredibly complex arguments.
That said, he appears to be one of the greatest systems thinkers of all time.
footurist t1_ix81fou wrote
Reply to comment by blueSGL in Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
The reason it needs to be efficient is the sheer volume and complexity of computation required otherwise. There's already stuff like AIXI and Schmidhuber's thing, if you've got a couple billion years to spare...
footurist t1_ix7fhfz wrote
Reply to comment by ArgentStonecutter in Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
For some reason these people aren't willing to accept just how different a continuously learning, efficient, general abstracter like our brain is from these giant clever data crunchers.
I highly doubt they'll be able to push those to resemble what we have.
footurist t1_iw3nh4i wrote
Reply to comment by IndependenceRound453 in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
I feel like this whole AI art thing exposes a lot of people with questionable ethics and morality. It's kind of a shock really when you witness this widespread obnoxious behavior and lack of empathy... But such is life I guess, lol.
footurist t1_iw01b6x wrote
Reply to comment by AdditionalPizza in 2023: The year of Proto-AGI? by AdditionalPizza
It is, in the sense that it must prove the concept. If it doesn't, it's maybe a precursor of some kind, but not the prototype.
footurist t1_ivz26ai wrote
Reply to comment by AdditionalPizza in 2023: The year of Proto-AGI? by AdditionalPizza
The inadequacy lies in the usage of the term prototype, which has a reasonably well defined meaning. Basically, it serves as an MVP for one or more concepts that are themselves well-defined, so that their feasibility and worth can be demonstrated. In the case at hand the concept is true generality of learning as we know it, which the current mainstream paradigm is definitively not capable of. As mentioned before, these systems might achieve a limited imitation of it, to an extent that's probably quite hard to guesstimate, but never the real thing ( in their current form; evolution of the architectures can always change the landscape, of course, but then they wouldn't be the same thing anymore ).
I recommend some YouTube videos by Numenta. Jeff Hawkins can explain these kinds of things to laymen incredibly well ( he was on Lex's podcast as well ).
footurist t1_ivyw1fq wrote
Reply to 2023: The year of Proto-AGI? by AdditionalPizza
Aggressive TLDR : the term is inadequately defined.
I've read about these "Proto-AGI" definitions before here, but to me these mostly don't make sense.
Perhaps there's debate about the definition of AGI itself, but in general ( heh ) the G in it should imply the ability to learn any task ( within constraints, because total generality isn't really achievable with our current knowledge, I believe / have read ), continuously, and the way a human would.
The emergence of these definitions lined up chronologically with the rise of transformer-based LLMs, I believe, especially GPT-3. That timing makes sense.
However, these architectures don't learn like humans do at all. They don't efficiently leverage armadas of extremely subtle abstractions the way our brains do ( the kind that can be shown in simple thought experiments, which I'm too tired to go through here; think carefully about the stages of working out the rules of a roundabout for the first time, for example ), and they don't learn continuously. They're more like impressive data crunchers than efficient abstracters like our brains.
To me it's only logical that this ability to learn each and every task that crosses one's mind and approach human level in it ( again, within the constraints mentioned above ), leveraging efficient transfer learning along the way, should be deemed a requirement of this definition, because otherwise the agent wouldn't really be a general learner, but merely a sort of wasteful imitator of one. That is especially true for the current LLMs, however impressive they are.
So, in conclusion: if the term at hand were tightened up, maybe something resembling what's talked about in this post could indeed surface in the coming year. But as it stands, no, imo.
footurist t1_ivvw097 wrote
Reply to Let's assume Google, Siri, Alexa, etc. start using large language models in 2023; What impact do you think this will have on the general public/everyday life? Will it be revolutionary? by AdditionalPizza
It will take a few more generations for them to mature enough for that. They're simply too inconsistent for now.
footurist t1_ivmli8y wrote
Reply to comment by existentialzebra in Lab-grown blood given to people in world-first clinical trial by Phoenix5869
I AM SRI THOUSAND YEARS OLT!
footurist t1_iv5a0iw wrote
Reply to Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
The funniest comment I read on that video some years back was that he threw his last bit of rationality into the sea along with that stone, lol.
Obviously not true, but he does come across as way too confident, and somehow the fact that some angry internet man felt the need to write that made me laugh.
footurist t1_iuvt5pb wrote
No offense meant, but I find it somewhat strange that this question, of all questions, gets asked so often, as the name "Singularity" already implies that it's unknowable.
footurist t1_itkeds6 wrote
Reply to comment by ChronoPsyche in how old are you by TheHamsterSandwich
I'd bet on that too. Usually, once the teens have been "exhilarated futurists" for a while, they'll realize the wheels aren't spinning as fast and the mountain top is a lot foggier than it appeared to be. Then they'll probably progress into a more reserved kind of optimism about the future.
I could be wrong, but that would explain the amount of ( imo ) partly unfounded exhilaration in here.
footurist t1_itb31zt wrote
Reply to 3D meat printing is coming by Shelfrock77
Unfortunately, I think not many meat lovers will opt for this; mostly just the ones who would have already accepted today's common meat alternatives or even chosen to refrain from meat entirely. Not to speak of the purists.
This is a noble attempt, but if you really want to eliminate meat consumption for the sake of the planet and the poor animals, then you need to come up with something that Gordon Ramsay could not distinguish from a freshly grilled, high-quality steak.
What a task...
footurist t1_irwffnf wrote
Reply to comment by Mr_Hu-Man in Any examples of future prediction models? by Mr_Hu-Man
If you're thinking of going towards capabilities even remotely approaching Laplace's demon ( even just for tiny chunks of the universe, like the weather of city x ), then sadly ( or not? ) that kind of certainty is way too computationally expensive and requires datasets no one could assemble.
However, much weaker variants may be possible; I don't know enough about that to say.
SPOILER
>!That said, in the tv show Devs they got it to work, lol.!<
footurist t1_irn6v8i wrote
If you've listened to any of Aubrey's talks about the topic, then you'll know the man knows A LOT about this. And I think that, despite very likely having been the victim of a coup and character assassination attempt, he'll see his new foundation flourish, since some of the deep-pocketed donors have already sued to get most of their money back and invested it in the new foundation.
If anybody on this list has a clue about this, it's him.
footurist t1_izmckld wrote
Reply to comment by tacocatVV in The technological singularity is happening (oc/opinion) by FrogsEverywhere
Not really, as the root of the problem lies further up the hierarchy. With capitalism, money, and power silos enabled by current governmental structures, you can spin up any fancy new technology you want - it's gonna find its way into the hands of centralized power. Until that changes, the dance is gonna be the same, even if the tune changes.