MrFlufypants t1_j0ao1ei wrote

I’ve had issues where TensorFlow automatically grabs the whole GPU, while PyTorch only allocates what the model asks for. It may well not be your problem, but if you’re running multiple models it’s worth checking.

1

veb101 t1_j0atf1m wrote

I think this can be solved using:

tf.config.experimental.set_memory_growth(gpu_device, True)
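For context, a minimal sketch of how that call is usually wired up, assuming TensorFlow 2.x (the `gpu_device` handle comes from `tf.config.list_physical_devices`; this must run before any op touches the GPU):

```python
import tensorflow as tf

# Memory growth must be configured before the GPUs are initialized,
# i.e. before the first op or model touches the device.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    # Let TensorFlow allocate GPU memory as needed instead of
    # reserving (nearly) the whole card up front.
    tf.config.experimental.set_memory_growth(gpu, True)

print(f"Memory growth enabled on {len(gpus)} GPU(s)")
```

Setting the environment variable `TF_FORCE_GPU_ALLOW_GROWTH=true` before launching the process has the same effect, which can be handy when you can’t edit the script.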

11

MrFlufypants t1_j0atiwu wrote

There are a couple of ways to do it, and that’s the one I normally use. Sometimes it doesn’t work, though I can’t quite remember the case where it failed.

1

MOSFETBJT t1_j0dbib4 wrote

This is what helped when I had a similar issue.

1