Submitted by MazenAmria t3_zhvwvl in deeplearning
suflaj t1_izruvvi wrote
Reply to comment by MazenAmria in Advices for Deep Learning Research on SWIN Transformer and Knowledge Distillation by MazenAmria
That makes no sense. Are you sure you're not doing backprop on the teacher model? It should be a lot less resource intensive.
Furthermore, check how you're distilling the model, i.e. which layers and with what weights. Generally, for transformer architectures, you distill the first (embedding) layer, the intermediate attention and hidden layers, and the final prediction layer. Distilling only the prediction layer works poorly. A rough sketch of that kind of per-layer loss is below.
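A minimal sketch of a combined per-layer distillation loss (TinyBERT-style), assuming both models can return hidden states, attention maps, and logits; the dict keys and layer-mapping scheme here are illustrative, not from the original thread:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student, teacher, T=2.0):
    """Combine embedding, hidden-state, attention, and prediction losses.

    `student` / `teacher` are assumed to be dicts with keys
    'hidden_states' (list of [B, N, D] tensors, index 0 = embeddings),
    'attentions'    (list of [B, H, N, N] tensors), and
    'logits'        ([B, C] tensor) -- adapt to your model's real outputs.
    """
    # Embedding layer (first hidden state). If the widths differ,
    # project the student with a learned linear layer first.
    emb_loss = F.mse_loss(student["hidden_states"][0],
                          teacher["hidden_states"][0])

    # Hidden states and attention maps. With fewer student layers,
    # map student layer k uniformly to teacher layer k * (L_t // L_s).
    ratio = (len(teacher["hidden_states"]) - 1) // (len(student["hidden_states"]) - 1)
    hid_loss = sum(
        F.mse_loss(s, teacher["hidden_states"][i * ratio])
        for i, s in enumerate(student["hidden_states"][1:], start=1)
    )
    att_loss = sum(
        F.mse_loss(s, teacher["attentions"][(i + 1) * ratio - 1])
        for i, s in enumerate(student["attentions"])
    )

    # Prediction layer: soft-label KL divergence with temperature T.
    pred_loss = F.kl_div(
        F.log_softmax(student["logits"] / T, dim=-1),
        F.softmax(teacher["logits"] / T, dim=-1),
        reduction="batchmean",
    ) * T * T

    return emb_loss + hid_loss + att_loss + pred_loss
```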
MazenAmria OP t1_izt68w9 wrote
I'm using `with torch.no_grad():` when calculating the output of the teacher model.
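For reference, the pattern being described looks roughly like this; the model, loader, and criterion names are placeholders, not code from the thread:

```python
import torch

for images, labels in train_loader:            # placeholder DataLoader
    with torch.no_grad():                      # no activations stored for the teacher
        teacher_out = teacher_model(images)    # inference only

    student_out = student_model(images)        # gradients tracked for the student
    loss = distill_criterion(student_out, teacher_out, labels)

    optimizer.zero_grad()
    loss.backward()                            # backprop through the student only
    optimizer.step()
```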
suflaj t1_iztjolh wrote
Then it's strange. Unless you're using a similarly sized student model, there is no reason why a no_grad teacher plus a student should be about as resource intensive as training the teacher itself with backprop.
As a rule of thumb, you should be using several times less memory. How much less are you using for the same batch size in your case?
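One way to compare (on CUDA) is to check peak allocation around a single training step with a fixed batch size; a minimal sketch:

```python
import torch

torch.cuda.reset_peak_memory_stats()

# ... run one training step here: teacher forward under no_grad,
#     student forward + backward + optimizer step ...

peak_mib = torch.cuda.max_memory_allocated() / 2**20
print(f"Peak GPU memory this step: {peak_mib:.0f} MiB")
```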