Submitted by SpatialComputing t3_10nccbg in MachineLearning
Comments
youcandigit t1_j686m30 wrote
Where can I do this right now?
LetMeGuessYourAlts t1_j68ai7i wrote
This is going to do amazing things for GIF reactions when it's fast and cheap.
pulpquoter t1_j68hppt wrote
Brilliant. How about that thing you put on your head to see images? This must be worth trillions.
Herrmaciek t1_j68kkbi wrote
Billions well spent
marcingrzegzhik t1_j68ugfe wrote
Great post! I'm really excited to explore this project and see what kind of applications it has! Can you tell us a bit more about what kind of data it works with and how it works?
deathtosquishy t1_j68vfh4 wrote
Now this is what I've been waiting for. The question is: can it create obscene images?
kiteguycan t1_j68xk83 wrote
Would be cool if it could take a book as an input and immediately make it into a passable movie
GhostCheese t1_j699f2m wrote
in the offices of meta?
doesn't look like they provide a portal to use it, just showing off what they can do.
Dontgooo t1_j69ciw8 wrote
Or a virtual reality you could step into... why do you think Meta is going hard at VR?
SaifKhayoon t1_j69e65n wrote
They had a problem sourcing labeled training data of 3D videos; you can tell this tech is still early from the shield in the bottom-right example.
Because labeled 3D training data is scarce, this currently relies on a workaround of training only on text-image pairs and unlabeled videos. They could instead generate labeled 3D environments from 2D images using InstantNGP and GET3D, together with LAION's dataset of 5.85 billion CLIP-filtered image-text pairs, to create a useful training dataset.
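A rough sketch of what that pipeline could look like, purely as an illustration: the reconstruction step is a placeholder (neither InstantNGP nor GET3D exposes an API like this), but the shape of the idea is to lift each captioned 2D image to a 3D asset and carry the caption over as its label.

```python
# Hypothetical sketch only: lift LAION-style (image, caption) pairs into a
# labeled 3D dataset. `reconstruct_3d` is a stand-in, not a real
# InstantNGP/GET3D API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Labeled3DAsset:
    caption: str    # text label carried over from the 2D pair
    mesh_path: str  # path to the reconstructed 3D asset


def reconstruct_3d(image_path: str) -> str:
    """Placeholder for a single-image 3D reconstruction step
    (a GET3D- or InstantNGP-style model would go here)."""
    raise NotImplementedError("plug in a real image-to-3D model")


def build_labeled_3d_dataset(pairs: List[Tuple[str, str]]) -> List[Labeled3DAsset]:
    """pairs: (image_path, caption) tuples, e.g. CLIP-filtered pairs from LAION."""
    dataset = []
    for image_path, caption in pairs:
        mesh_path = reconstruct_3d(image_path)  # 2D -> 3D lift
        dataset.append(Labeled3DAsset(caption=caption, mesh_path=mesh_path))
    return dataset
```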
Hannekiii t1_j69peit wrote
Probably not is the answer
STEMeducator1 t1_j69tx9w wrote
It'll literally be like lucid dreaming.
AvgAIbot t1_j6abe0g wrote
That's where the future is headed, no doubt in my mind. If not in the next few years, definitely within this decade.
strickolas t1_j6ahh8k wrote
That's actually a really great idea. There are tons of movies adapted from books, so you already have a labeled dataset.
Dr_Kwanton t1_j6aikky wrote
I think the next challenge would be producing a progression of a scene and not just a short gif. It would take a new tool to create smooth, natural transitions between the 2D scenes that train the model.
whilneville t1_j6ar901 wrote
The consistency is really stable; it would be amazing to use a video as a reference. Not interested in the 360 turntable though.
MemeBox t1_j6ch49b wrote
not yet.
hapliniste t1_j6gvcgp wrote
I guess AR glasses will make access to 3d video (as in first person scanned scenes) way easier (for the companies that control the glasses OS).
SpatialComputing OP t1_j67xr7u wrote
>Text-To-4D Dynamic Scene Generation
>
>Abstract
>
>We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data and the T2V model is trained only on Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes given a text description. github.io
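Reading between the lines of the abstract, the optimization is presumably in the spirit of score distillation (DreamFusion-style): render short clips from the 4D NeRF and use the frozen text-to-video diffusion model to push them toward the prompt. A minimal sketch under that assumption; every module name below is a placeholder, not the paper's actual code.

```python
# Hedged sketch of a score-distillation-style loop consistent with the
# abstract; all modules below are placeholders, not MAV3D's actual code.
import torch


class Dynamic4DNeRF(torch.nn.Module):
    """Stand-in for a dynamic NeRF mapping (x, y, z, t) to color/density."""
    def __init__(self):
        super().__init__()
        self.field = torch.nn.Linear(4, 4)  # stand-in for the spacetime field network

    def render_video(self, camera_path, num_frames: int = 16) -> torch.Tensor:
        raise NotImplementedError  # would return a (num_frames, 3, H, W) clip


def sample_random_camera_path():
    raise NotImplementedError  # random viewpoint/trajectory around the scene


def t2v_score_gradient(video: torch.Tensor, prompt: str) -> torch.Tensor:
    """Placeholder for querying a frozen text-to-video diffusion model:
    noise the rendered clip, denoise it conditioned on the prompt, and
    return a gradient that nudges the render toward the prompt."""
    raise NotImplementedError


def optimize_scene(prompt: str, steps: int = 10_000) -> Dynamic4DNeRF:
    nerf = Dynamic4DNeRF()
    opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)
    for _ in range(steps):
        video = nerf.render_video(sample_random_camera_path())
        grad = t2v_score_gradient(video, prompt)  # guidance from the T2V model
        video.backward(gradient=grad)             # backprop guidance into the NeRF
        opt.step()
        opt.zero_grad()
    return nerf
```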