Submitted by rubbledubbletrubble t3_zmxbb5 in deeplearning
rubbledubbletrubble OP t1_j0iib4p wrote
Reply to comment by BrotherAmazing in Why does adding a smaller layer between conv and dense layers break the model? by rubbledubbletrubble
The 1000-unit layer is the softmax layer. I am using a pretrained model and only training the classification layers. My logic is to shrink the feature extractor's output dimension so the classification head has fewer total parameters.
For example: if MobileNet outputs 1280 features and I add a 1000-unit dense layer directly, that's about 1.28 million weights (1280 × 1000). But if I insert a 500-unit layer in the middle, the network gets smaller: 1280 × 500 + 500 × 1000 = 1.14 million.
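A quick sketch of that parameter arithmetic (including bias terms, which the rough numbers above leave out; the layer sizes are the ones from the example, not from any specific MobileNet variant):

```python
# Parameter count for a dense (fully connected) layer:
# one weight per input-output pair, plus one bias per output unit.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# Direct head: 1280-dim features -> 1000-way softmax layer.
direct = dense_params(1280, 1000)        # 1,281,000 parameters

# Bottleneck head: 1280 -> 500 -> 1000.
bottleneck = dense_params(1280, 500) + dense_params(500, 1000)  # 1,141,500

print(direct, bottleneck)  # the bottleneck version is ~11% smaller
```

The saving grows as the bottleneck shrinks, since both weight matrices scale linearly with the middle layer's width.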
I know the question is a bit vague. I was just curious.