PC-Bjorn t1_isnicrn wrote
Reply to comment by jazmaan in [R] Google AudioLM produces amazing quality continuation of voice and piano prompts by valdanylchuk
Soon, we might be upscaling beyond higher bitrate, bit depth, and fidelity into multichannel reproductions, or maybe even into individual streams for each instrument and performer on stage, plus a volumetric model of the stage layout itself, letting us render the experience as it would sound from any coordinate on - or around - the stage.
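To make the per-instrument idea concrete, here's a minimal sketch (my own illustration, nothing from AudioLM) of how separated stems plus known source positions could be mixed down for an arbitrary listener coordinate, using crude inverse-distance attenuation and constant-power panning. The stems, positions, and function name are all hypothetical placeholders.

    # Minimal sketch: render a stereo mix for a chosen listener position from
    # separated per-instrument streams. All inputs here are hypothetical; in the
    # scenario above, stems would come from a source-separation model and
    # positions from the imagined volumetric stage model.
    import numpy as np

    def render_listener_mix(stems, positions, listener):
        """stems: dict name -> mono float array; positions: dict name -> (x, y) in metres;
        listener: (x, y) in metres. Returns an (n_samples, 2) stereo array."""
        n = max(len(s) for s in stems.values())
        out = np.zeros((n, 2))
        for name, stem in stems.items():
            dx, dy = np.subtract(positions[name], listener)
            dist = max(np.hypot(dx, dy), 0.5)        # clamp so a nearby source doesn't blow up
            gain = 1.0 / dist                        # crude inverse-distance loudness falloff
            azimuth = np.arctan2(dx, dy)             # source angle relative to "forward"
            pan = 0.5 * (1.0 + np.clip(azimuth / (np.pi / 2), -1.0, 1.0))  # 0 = left, 1 = right
            left, right = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)  # constant-power pan law
            out[: len(stem), 0] += gain * left * stem
            out[: len(stem), 1] += gain * right * stem
        peak = np.max(np.abs(out))
        return out / peak if peak > 0 else out       # normalise to avoid clipping

    # Hypothetical usage:
    # mix = render_listener_mix({"guitar": g, "vocals": v},
    #                           {"guitar": (-2, 5), "vocals": (0, 4)},
    #                           listener=(1, -3))

A real system would of course need proper room acoustics, HRTFs, and occlusion from the stage model, but the point is that once you have individual streams and a spatial layout, the listener position becomes just a parameter.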
Pair that with a real-time, hardware-accelerated reconstruction of the visual experience of being there, based on a network trained on photos from the concert, and we'll all be able to go to Woodstock in 1969.