Shelfrock77 t1_j11aea1 wrote

https://www.dailymail.co.uk/sciencetech/article-10573907/amp/Mark-Zuckerberg-claims-live-metaverse-future.html

https://in.mashable.com/tech/28254/humans-will-live-in-metaverse-soon-claims-mark-zuckerberg-what-about-reality?amp=1

“Mark Zuckerberg claims humanity will move into the metaverse in the future, leaving reality behind for a world of our own creation, that we completely control.”

“Meta plans to spend the next five to 10 years building an immersive virtual world, including scent, touch and sound to allow people to get lost in VR.”

And don't try to say "bu- but da site isn't credible," because I've seen Mark talk about the metaverse shit like 20 times, and he clearly states that it'll become indistinguishable from reality. Elon has even said on the Babylon Bee podcast that Neuralink would be able to let you play games online, like in Sword Art Online.

0

Shelfrock77 t1_j117tga wrote

I've eaten plenty of things in my dreams. Recently I was eating wings lol. I've smelled smoke in my dreams because my house was on fire. I've gone through wormholes in space. I usually have those wild dreams when I take weed tolerance breaks. My schizophrenic friend says he'll sometimes taste or smell different things from time to time.

5

Shelfrock77 OP t1_j1027uk wrote

The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.
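If it helps, here's what a point cloud looks like as a data structure, a minimal numpy sketch (the 4096-point size and the 6-column xyz+rgb layout are my assumptions for illustration, not Point-E's actual output format):

```python
# Illustrative only: a point cloud is just a discrete set of points in space,
# here with a color per point. Shapes/sizes are assumptions, not Point-E's
# actual output format.
import numpy as np

num_points = 4096
xyz = np.random.uniform(-1.0, 1.0, size=(num_points, 3))  # 3D coordinates
rgb = np.random.uniform(0.0, 1.0, size=(num_points, 3))   # per-point color

point_cloud = np.concatenate([xyz, rgb], axis=1)  # shape (4096, 6)

# Note what is NOT stored: no surfaces, no connectivity, no texture maps,
# which is exactly the fine-grained shape/texture limitation mentioned above.
print(point_cloud.shape)
```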

To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
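To make the mesh part concrete: a mesh is just vertices plus faces. The sketch below uses a convex hull as a crude classical stand-in for that conversion step, not the learned model OpenAI trained, just to show the shape of the data on each side:

```python
# Crude classical stand-in for the point-cloud-to-mesh step, NOT the learned
# converter OpenAI describes: a convex hull turns a set of points into
# triangles, which is enough to see what "vertices, edges and faces" means.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(4096, 3))  # stand-in point cloud (xyz only)

hull = ConvexHull(cloud)
faces = hull.simplices          # (n_faces, 3) triangles, as indices into `cloud`
surface = cloud[hull.vertices]  # the points that end up on the surface

print(surface.shape, faces.shape)
# A convex hull throws away all concavities, a much cruder failure mode than
# the "blocky or distorted shapes" caveat above, but the output format is the same.
```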

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.
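Roughly, the whole pipeline composes like this. These are placeholder stubs I wrote to show the data flow, not the actual openai/point-e API:

```python
# Placeholder stubs showing the text -> image -> point cloud -> mesh flow.
# None of these are the real openai/point-e functions; shapes are assumptions.
import numpy as np
from scipy.spatial import ConvexHull

def text_to_image(prompt: str) -> np.ndarray:
    """Stage 1 (stand-in): the text-to-image model would return a synthetic
    rendering of the prompt. Here: a blank 64x64 RGB image."""
    return np.zeros((64, 64, 3), dtype=np.float32)

def image_to_point_cloud(image: np.ndarray, num_points: int = 4096) -> np.ndarray:
    """Stage 2 (stand-in): the image-to-3D model would return an (N, 6) array
    of xyz + rgb points conditioned on the image. Here: random points."""
    return np.random.uniform(-1.0, 1.0, size=(num_points, 6)).astype(np.float32)

def point_cloud_to_mesh(cloud: np.ndarray):
    """Stage 3 (stand-in): the separate mesh model would return vertices and
    faces. Here: a convex hull over the xyz coordinates."""
    hull = ConvexHull(cloud[:, :3])
    return cloud[:, :3], hull.simplices

prompt = "a 3D printable gear, a single gear 3 inches in diameter and half inch thick"
image = text_to_image(prompt)           # synthetic rendering of the prompt
cloud = image_to_point_cloud(image)     # point cloud conditioned on that image
verts, faces = point_cloud_to_mesh(cloud)
print(cloud.shape, faces.shape)         # (4096, 6) and (n_faces, 3)
```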

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt. Still, it’s orders of magnitude faster than the previous state-of-the-art — at least according to the OpenAI team.

16