Submitted by aozorahime t3_y2nyn5 in MachineLearning
Hi, I am a master's student working on GANs for speech enhancement. I must say I have learned a lot from this topic, and I had to restudy probability to understand generative models. I am just curious whether generative models such as GANs are still a good topic for a Ph.D., since I have recently been exposed to newer approaches such as diffusion models. BTW, I am also interested in the information bottleneck in deep learning. Any suggestions would be helpful :) thanks
ThatInternetGuy t1_is46ghv wrote
Transformer-based models have been gaining traction for generative modeling since 2021, because you can practically scale them up to tens of billions of parameters, whereas GAN-based models have largely saturated. That's not to say GANs are any less powerful; they are generally much more efficient in terms of performance and memory.