Submitted by DrummerSea4593 t3_10q44ld in deeplearning
msltoe t1_j6o4430 wrote
Since CNN weights are often 4D tensors, W × H × (#input channels) × (#output channels), they're hard to visualize directly. Instead, the trick is to ask what input image maximally activates the queried node while leaving the other nodes in the same layer mostly inactive. There's a Keras example script that does this. The generic term is "deep dream."
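For concreteness, here is a minimal sketch of that idea (activation maximization by gradient ascent on the input), assuming a pretrained VGG16; the layer name, filter index, image size, and step count are illustrative choices, not the actual Keras example script:

```python
import tensorflow as tf
from tensorflow import keras

# Pretrained backbone; any conv layer can be probed (layer name is an assumption).
model = keras.applications.VGG16(weights="imagenet", include_top=False)
layer = model.get_layer("block3_conv1")
feature_extractor = keras.Model(model.inputs, layer.output)

filter_index = 0  # which filter in that layer to visualize

# Start from a low-contrast random image (preprocessing skipped for brevity).
img = tf.Variable(tf.random.uniform((1, 128, 128, 3)) * 0.25 + 0.5)

for _ in range(30):  # gradient-ascent steps
    with tf.GradientTape() as tape:
        activation = feature_extractor(img)
        # Maximize the mean activation of the chosen filter.
        loss = tf.reduce_mean(activation[:, :, :, filter_index])
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)
    img.assign_add(grads * 10.0)  # step size is arbitrary

# img now approximates the input pattern that most strongly excites that filter.
```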
BellyDancerUrgot t1_j6p8tor wrote
Wouldn’t a deconvolution operation (basically backprop, but without updating the weights) on any layer, after the network has been trained, show you what features activate that layer?
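A minimal sketch of the gradient-to-input idea described in that question (closer in spirit to a saliency map than a literal deconvnet); the model, layer name, and random stand-in image are assumptions:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.applications.VGG16(weights="imagenet", include_top=False)
layer = model.get_layer("block3_conv1")  # layer whose response we trace back (assumed name)
feature_extractor = keras.Model(model.inputs, layer.output)

# Stand-in input image; in practice this would be a real, preprocessed image.
img = tf.convert_to_tensor(np.random.rand(1, 224, 224, 3).astype("float32"))

with tf.GradientTape() as tape:
    tape.watch(img)
    activation = feature_extractor(img)
    score = tf.reduce_mean(activation)  # scalar summary of the layer's response

# d(activation)/d(input): gradients are computed but no weights are updated.
saliency = tape.gradient(score, img)
saliency = tf.reduce_max(tf.abs(saliency), axis=-1)  # collapse channels into a heatmap
```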