Submitted by Oceanboi t3_zm6h07 in MachineLearning
MrFlufypants t1_j0ao1ei wrote
I’ve had issues where TensorFlow automatically grabs the whole GPU while PyTorch only uses what the model asks for. This might totally not be your problem, but if you’re running multiple models it could be.
veb101 t1_j0atf1m wrote
I think this can be solved using:
tf.config.experimental.set_memory_growth(gpu_device, True)
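For anyone finding this later, the usual pattern (a minimal sketch, assuming TF 2.x) loops this call over every visible GPU, and it has to run before anything touches the GPUs:

```python
# Sketch, assuming TensorFlow 2.x: enable memory growth so TF allocates
# GPU memory on demand instead of grabbing the whole card at startup.
# Must run before any op initializes the GPUs.
import tensorflow as tf

for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```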
MrFlufypants t1_j0atiwu wrote
There are a couple of ways to do it. That’s the one I use normally. Sometimes it doesn’t work though. Can’t quite remember the use case where it wasn’t working.
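One known reason it can fail: `set_memory_growth` raises a `RuntimeError` if it's called after the GPUs have already been initialized. Another of the "couple ways" is to give TF a hard memory cap instead of on-demand growth — a sketch assuming TF 2.x, where the 4096 MB limit is just an illustrative number:

```python
# Sketch, assuming TensorFlow 2.x: cap TF at a fixed slice of GPU memory
# instead of letting it grow on demand. The 4096 MB limit is illustrative.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
```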
MOSFETBJT t1_j0dbib4 wrote
This is what helped when I had a similar issue