Submitted by kaphed t3_124pbq5 in MachineLearning
Looking at some old tables:
https://arxiv.org/pdf/1512.03385.pdf, Table 4
https://arxiv.org/pdf/1905.11946.pdf, Table 2
Why do the ResNet-152 results vary? E.g., the top-1 error on the ImageNet validation set is reported as 19.38 in the original paper but 22.2 in the EfficientNet paper.
Normally I would assume this kind of result would simply be copied from the previous publication.
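For reference, here is a minimal sketch of what a standard single-crop top-1 evaluation on the ImageNet validation set looks like, which is the kind of protocol behind numbers like these. It assumes torchvision's pretrained ResNet-152 weights (not the original paper's released model) and a local copy of the validation set; the dataset path is a placeholder.

```python
# Hedged sketch: single-crop top-1 evaluation of torchvision's pretrained
# ResNet-152 on a local ImageNet validation directory (path is a placeholder).
import torch
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model = model.to(device).eval()

# Standard single-crop preprocessing: resize to 256, center-crop 224, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

val_set = datasets.ImageFolder("/path/to/imagenet/val", transform=preprocess)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=64, num_workers=8)

correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        logits = model(images.to(device))
        correct += (logits.argmax(dim=1).cpu() == labels).sum().item()
        total += labels.size(0)

print(f"single-crop top-1 error: {100 * (1 - correct / total):.2f}%")
```

The number such a script prints depends on the exact evaluation protocol (crop size, number of crops, scales), which is why different papers can quote different figures for the same architecture.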