local response normalization
One of the layers used in deep networks
- it acts as a sort of “lateral inhibitor”
- in the brain, a strongly excited neuron tends to suppress the output of its neighboring neurons
- LRN tries to achieve the same effect among activations in a feature map
- 2 ways of doing LRN
- across-channel: an Nx1x1 local neighborhood is considered (N neighboring channels at the same spatial position)
- within-channel: a 1xNxN local neighborhood is considered (an NxN spatial window within the same channel)
- eqn is as follows (n, α, β, k are hyper-parameters):

  \[ b^i_{x,y} = \frac{a^i_{x,y}}{\left(k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left(a^j_{x,y}\right)^2\right)^{\beta}} \]

  where \(a^i_{x,y}\) is the activity of channel i at position (x, y), N is the total number of channels, and n is the neighborhood size
AKA: LRN
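
The across-channel variant can be sketched directly from the equation above. This is a minimal NumPy sketch, not a library implementation; the function name and default hyper-parameters (n=5, k=2, α=1e-4, β=0.75, the values used in AlexNet) are assumptions for illustration.

```python
import numpy as np

def lrn_across_channels(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Across-channel LRN for an activation map of shape (C, H, W).

    b[i] = a[i] / (k + alpha * sum of a[j]**2)**beta,
    where j runs over the n channels nearest to i (clipped at the edges).
    """
    C = a.shape[0]
    b = np.empty_like(a)
    for i in range(C):
        lo = max(0, i - n // 2)          # clip neighborhood at channel 0
        hi = min(C - 1, i + n // 2)      # clip neighborhood at channel C-1
        sq_sum = (a[lo:hi + 1] ** 2).sum(axis=0)  # Nx1x1 neighborhood sum
        b[i] = a[i] / (k + alpha * sq_sum) ** beta
    return b
```

Since k >= 1 and the squared sum is non-negative, the denominator is >= 1, so each activation is scaled down, and channels surrounded by large activations are suppressed the most, which is the "lateral inhibition" effect described above.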