Submitted by DevarshTare t3_11725n6 in MachineLearning
ggf31416 t1_j9clwen wrote
Reply to comment by DevarshTare in [D] What matters while running models? by DevarshTare
I actually have a 3060 too. In theory a 3060 Ti should be up to 30% faster, but most of the time the 3060 is fast enough, and it's faster than any T4.
For generating a few images with Stable Diffusion, the difference might be 15 vs 20 seconds; for running Whisper on several hours of audio, it could be 45 minutes vs 1 hour. The difference only matters if the model is optimized to fully use the GPU in the first place.
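A quick back-of-the-envelope check of those numbers (not from the thread; the function and its `gpu_bound_fraction` parameter are hypothetical, just to show how a "30% faster" card translates into wall-clock time):

```python
# Sketch: a card that is ~1.3x faster only shaves time off the
# GPU-bound part of a job; everything else (I/O, CPU preprocessing)
# runs at the same speed on both cards.
def expected_time_minutes(slow_minutes, speedup=1.3, gpu_bound_fraction=1.0):
    """Estimated time on the faster GPU, scaling only the GPU-bound fraction."""
    gpu_part = slow_minutes * gpu_bound_fraction
    other = slow_minutes - gpu_part
    return other + gpu_part / speedup

# 1 hour of Whisper on the slower card, fully GPU-bound:
print(round(expected_time_minutes(60)))  # -> 46, close to the "45 min vs 1 hour" estimate
```

If only half the job is GPU-bound (`gpu_bound_fraction=0.5`), the gap shrinks to roughly 53 vs 60 minutes, which is the "only matters if the model fully uses the GPU" point in practice.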
DevarshTare OP t1_j9ngofc wrote
I've seen the same across multiple threads now: VRAM makes the difference between being able to run a model as-is and having to optimize it first. This has been really helpful, thanks a lot guys!
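The VRAM point can be sketched with simple arithmetic (a hypothetical helper, not from the thread; it counts model weights only, ignoring activations, KV cache, and framework overhead, which all add more on top):

```python
# Back-of-the-envelope VRAM needed just to hold a model's weights.
def weights_vram_gb(n_params, bytes_per_param=2):  # 2 bytes/param = fp16
    return n_params * bytes_per_param / 1e9

# A 7B-parameter model:
print(weights_vram_gb(7e9))     # -> 14.0 GB in fp16: over a 12 GB 3060's budget
print(weights_vram_gb(7e9, 1))  # -> 7.0 GB with int8 quantization: it fits
```

This is why a bigger-VRAM card can run a model unmodified while a smaller one forces quantization or offloading, regardless of which card is faster.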