Sunday, 9 November 2025

Deep Learning 05: Leaky ReLU & Tanh

 Tanh

Imagine a squishy line that takes any number — big or small — and squeezes it so it always stays between -1 and +1.

  • If you give it a big positive number, it goes up close to +1.
    → Input: +10 → Output: 0.9999… (almost +1)

  • If you give it a big negative number, it goes down close to -1.
    → Input: -10 → Output: -0.9999… (almost -1)

  • If you give it 0, it gives you 0 (right in the middle).
    → Input: 0 → Output: 0
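
🐍 Quick check in Python (a tiny sketch using the built-in math module; the underlying formula is tanh(x) = (e^x - e^-x) / (e^x + e^-x)):

import math

# tanh squashes any input into the open range (-1, +1)
for x in (10, -10, 0):
    print(x, "→", math.tanh(x))

# Prints roughly:
#   10 → 0.9999999959   (almost +1)
#  -10 → -0.9999999959  (almost -1)
#    0 → 0.0            (right in the middle)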

 Leaky ReLU

Imagine you have a magic box like the ReLU one — it passes positive numbers through and turns negative numbers into zero.

But sometimes that can be a problem 😕, because a neuron whose output is always zero also gets a zero gradient, so it simply stops learning (the so-called “dying ReLU” problem).

So, Leaky ReLU fixes that by letting a tiny bit of the negative numbers “leak” through instead of blocking them completely.


📦 How it works:

  • If the number is positive, it stays the same.
    → Input: +5 → Output: +5

  • If the number is negative, it becomes a small negative number instead of 0.
    → Input: -5 → Output: -0.05 (just a small leak!)


🧮 In math form:

$$\text{LeakyReLU}(x) = \begin{cases} x, & \text{if } x > 0 \\ 0.01x, & \text{if } x \le 0 \end{cases}$$

(The 0.01 is the “leakiness”, often called the negative slope α: it sets how much of a negative value gets through.)
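
🐍 The same idea in Python (a minimal sketch; 0.01 is the usual default slope, and most libraries let you change it):

def leaky_relu(x, slope=0.01):
    # Positive inputs pass through unchanged;
    # negative inputs are scaled down by the slope instead of being zeroed out.
    return x if x > 0 else slope * x

print(leaky_relu(5))    # 5      (positive → unchanged)
print(leaky_relu(-5))   # -0.05  (the small "leak")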


💡 Why it’s useful:
Leaky ReLU keeps the neurons alive even when their inputs are negative, because that small slope still lets a gradient flow back during training. It’s like a door that never shuts completely, letting a little light shine through 🌤️

So the network keeps learning instead of getting “stuck in the dark”!
