Activation Functions
Sigmoid
Explanation: Squashes the input into (0, 1), so the output can be read as a probability. Commonly used for binary classification.
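A minimal sketch in NumPy (assumed available); the helper name and test values are only illustrative.

import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + exp(-x)); maps any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # roughly [0.12, 0.5, 0.88]
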
Tanh
Explanation: Like the sigmoid, but squashes the input into (-1, 1). Zero-centered, so outputs are balanced around 0.
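A minimal NumPy sketch; the wrapper and sample inputs are illustrative, and np.tanh is used as the numerically stable built-in.

import numpy as np

def tanh(x):
    # tanh(x) = (e^x - e^-x) / (e^x + e^-x); range (-1, 1), zero-centered
    return np.tanh(x)

print(tanh(np.array([-2.0, 0.0, 2.0])))  # roughly [-0.96, 0.0, 0.96]
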
ReLU
Explanation: Outputs 0 for negative input and passes positive input through unchanged. Fast, simple, and widely used in deep learning.
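A minimal NumPy sketch of ReLU as an element-wise max(0, x); names and sample values are illustrative.

import numpy as np

def relu(x):
    # relu(x) = max(0, x): zeros out negatives, keeps positives unchanged
    return np.maximum(0.0, x)

print(relu(np.array([-3.0, 0.0, 3.0])))  # [0. 0. 3.]
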
Leaky ReLU
Explanation: A small slope (α) for negative values avoids “dead neurons” in ReLU.
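A minimal NumPy sketch; the default slope alpha=0.01 is an assumed typical value, not one fixed by the notes above.

import numpy as np

def leaky_relu(x, alpha=0.01):
    # alpha is the small slope applied to negative inputs (assumed default 0.01)
    return np.where(x >= 0, x, alpha * x)

print(leaky_relu(np.array([-3.0, 0.0, 3.0])))  # [-0.03  0.    3.  ]
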
Softmax
Explanation: Converts a vector of scores into a probability distribution (non-negative values that sum to 1). Used in the output layer for multi-class classification.
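A minimal NumPy sketch; subtracting the maximum before exponentiating is a standard numerical-stability trick added here, not something stated above.

import numpy as np

def softmax(x):
    # Shift by the max for numerical stability; softmax is unchanged by a constant shift
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # roughly [0.66, 0.24, 0.10], summing to 1.0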