Submitted by netw0rkf10w t3_zmpdo0 in MachineLearning
TimDarcet t1_j1w6ifs wrote
Reply to comment by netw0rkf10w in [D] What are the strongest plain baselines for Vision Transformers on ImageNet? by netw0rkf10w
I think the supervised training they report in MAE is 300 epochs; they used a different recipe from the one used for fine-tuning (appendix, page 12, Table 11).
netw0rkf10w OP t1_j2939o2 wrote
You are right, indeed. Not sure why I missed that. I guess one can conclude that DeiT III is currently SoTA for training from scratch.