Shelfrock77
Submitted by Shelfrock77 t3_zt5omf in Futurology
Shelfrock77 t1_j1bouln wrote
Reply to Will we run out of data? by visarga
“You create data by thinking about it with neuralink” -Elon Musk
Shelfrock77 t1_j16cp3c wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
I’m sorry, but you are delusional if you genuinely think there’s no chance of predatory, barbaric aliens coming into contact with a human colony. That’s why it’s important to be mind-uploaded, so if you do die out in space, you respawn back here at home.
Shelfrock77 t1_j16atiq wrote
Reply to Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
The ASI will be like our parents that protect us from predators out in the wilderness of space (hopefully).
Shelfrock77 t1_j1593ka wrote
Reply to comment by TheSecretAgenda in How hard would it be for an AI to do the work of a CEO? by SeaBearsFoam
CEOs don’t do shit but sit on a pile of money.
Shelfrock77 t1_j12nogg wrote
Reply to comment by [deleted] in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
we are all lunatics
Shelfrock77 t1_j11orfd wrote
Reply to comment by solarnoise in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
just saying.
Shelfrock77 t1_j11ezf3 wrote
Reply to comment by Kinoxciv in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Already did, scroll around.
Shelfrock77 t1_j11d9id wrote
Reply to comment by phriot in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Do you lose free will in non-lucid dreams? Guess you were programmed to eat stuff that has no flavor.
Shelfrock77 t1_j11bkmn wrote
Reply to comment by phriot in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Why did you even eat the food then, dude?
Shelfrock77 t1_j11bgbg wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
When do you think we will get AGI?
Shelfrock77 t1_j11ba19 wrote
Reply to comment by MyCuteData in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Delusional. The WEF tells you to your face, but you’re too fucking stupid to know what exponential growth is.
Shelfrock77 t1_j11b0iy wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Okay human, when do you think we will have all five senses?
Shelfrock77 t1_j11aea1 wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
“Mark Zuckerberg claims humanity will move into the metaverse in the future, leaving reality behind for a world of our own creation, that we completely control.”
“Meta plans to spend the next five to 10 years building an immersive virtual world, including scent, touch and sound to allow people to get lost in VR.”
And don’t try to say “bu- but da site isn’t credible,” because I’ve seen Mark talk about the metaverse shit like 20 times, and he clearly states that it’ll become indistinguishable from reality. Elon has even said on the Babylon Bee podcast that Neuralink would be able to let you play games online like in Sword Art Online.
Shelfrock77 t1_j119zal wrote
Reply to comment by phriot in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
How do you know you’ve tasted things if you just said you can’t remember tasting them?
Shelfrock77 t1_j119sk5 wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
I’m not watching this Gary Marcus clone. When do you predict we will get an internet of senses? I set my deadline at 2030.
Shelfrock77 t1_j1192on wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Ok
Shelfrock77 t1_j118lkg wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Idiot, listen. The metaverse won’t be mainstream until it incorporates all five senses.
Shelfrock77 t1_j117tga wrote
Reply to comment by phriot in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
I’ve eaten plenty of things in my dreams. Recently I was eating wings lol. I’ve smelled smoke in my dreams because my house was on fire. I’ve gone through wormholes in space. I usually have those wild dreams when I take weed tolerance breaks. My schizophrenic friend says he will sometimes taste or smell different things from time to time.
Shelfrock77 t1_j11791a wrote
Reply to comment by DarthBuzzard in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
You haven’t watched his interviews. Even Meta’s president of global affairs, Nick Clegg, says “it’ll likely take 10-15 years before its investments fully pay off.” To back this claim up a third time, Mark has also said the metaverse will be mainstream in 5-10 years.
Shelfrock77 t1_j10qcj7 wrote
Reply to comment by EnomLee in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
The Zuck
Shelfrock77 t1_j10pncd wrote
Reply to comment by ShowerGrapes in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
MZ says it’ll take “5-15 years” for VR to have all five senses. He describes it as if it’s lucid dreaming online.
Shelfrock77 t1_j10pcr0 wrote
Reply to comment by Sashinii in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
We will basically unlock the gates to other multiverses in our minds. It’ll change the entire fabric of society as we know it.
Shelfrock77 OP t1_j1027uk wrote
The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.
Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.
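For intuition, a colored point cloud is nothing more exotic than a set of points, each carrying a position and a color. A minimal sketch (plain NumPy, not Point-E’s own point cloud class, which the repo defines separately):

```python
import numpy as np

# A toy "point cloud": N points, each an xyz position plus an RGB color.
N = 4096
coords = np.random.uniform(-1.0, 1.0, size=(N, 3))  # positions in 3D space
colors = np.random.uniform(0.0, 1.0, size=(N, 3))   # per-point RGB in [0, 1]
cloud = np.concatenate([coords, colors], axis=1)    # shape (N, 6)

# Note what's missing: connectivity. Unlike a mesh, a point cloud has no
# edges or faces, which is why fine surface detail and texture are hard
# for it to capture.
print(cloud.shape)  # (4096, 6)
```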
To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
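The released point-e repo ships this conversion as a separate SDF model plus a marching-cubes step. A sketch of that flow, adapted from the repo’s example notebook (module paths and config names follow the code as released and may change; `pc` is a point cloud produced by the sampler shown further down):

```python
import torch

from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint
from point_e.util.pc_to_mesh import marching_cubes_mesh

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the SDF model, which predicts signed distances to the surface
# implied by the point cloud.
sdf_model = model_from_config(MODEL_CONFIGS['sdf'], device)
sdf_model.eval()
sdf_model.load_state_dict(load_checkpoint('sdf', device))

# Run marching cubes over the predicted SDF to get a triangle mesh.
# A coarse grid is fast but blocky; a larger grid_size gives finer meshes,
# which is exactly the blocky/distorted trade-off the paper notes.
mesh = marching_cubes_mesh(
    pc=pc,            # point cloud from the Point-E sampler (see below)
    model=sdf_model,
    batch_size=4096,
    grid_size=32,
    progress=True,
)

with open('mesh.ply', 'wb') as f:
    mesh.write_ply(f)
```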
Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.
When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.
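In code, the released repo wraps that two-stage generation in a single sampler. A sketch adapted from its text-to-point-cloud example (checkpoint and config names like `base40M-textvec` and `upsample` follow the released code and may change; note that this released variant conditions the base model on the text directly, rather than running the explicit text-to-image stage the paper describes):

```python
import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Base model: prompt conditioning -> coarse 1,024-point cloud.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

# Upsampler: densifies the coarse cloud to 4,096 points.
upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint('upsample', device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],           # colored point clouds
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the text
)

prompt = 'a 3D printable gear, a single gear 3 inches in diameter and half inch thick'
samples = None
for x in tqdm(sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(texts=[prompt]))):
    samples = x  # keep the final denoised batch

pc = sampler.output_to_point_clouds(samples)[0]
```

The resulting `pc` is what the SDF mesh-conversion sketch above consumes.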
After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt. Still, it’s orders of magnitude faster than the previous state-of-the-art — at least according to the OpenAI team.
Shelfrock77 OP t1_j1c63ln wrote
Reply to comment by TheSecretAgenda in Why is this sub so luddite now ? by Shelfrock77
“By 2030, you’ll own nothing and be happy”