The other day on my way to work, busy with my thoughts, I almost bumped into a student. Trying to avoid a collision, we moved in the same direction. I turned right and he turned left, almost colliding, and we did the same thing twice more. And I wondered why I just smiled instead of getting irritated.

Because I thought of gradient descent, and how I could use that analogy to explain the concept.

And so I brought the analogy activity to my AI class. But it failed. The setup was the same, avoiding a collision (maximum loss/error), but with a blindfold twist.

I just told the story and how it fit my lesson.

When you are going toward your destination and there is an obstacle along the way, the instinct is to change direction.

In that experience on my way to work, we both kept moving in the direction where we would collide, which made us stop and delayed us.

In gradient descent, the algorithm adjusts the value of a parameter, say the weight of a connection, to optimize the loss function by driving the loss or error toward a minimum.

The weight is the direction: should I turn right or left so that I won't bump into anyone? The collision in this case is a high loss, a high error. Avoiding the collision is the goal.

Deciding whether to increase or decrease the weight parameter is abstracted as turning left or right so that the loss function (collision) is optimized, meaning the lowest error, or no collision at all.
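A minimal sketch of that idea in code (my own toy example, not the class activity): the slope of the loss tells the weight which way to "turn", and repeated small steps walk it down toward the point of no collision.

```python
# Toy gradient descent: find the weight w that minimizes
# loss(w) = (w - 3)**2, whose minimum ("no collision") is at w = 3.

def loss(w):
    return (w - 3) ** 2

def gradient(w):
    # Derivative of the loss: tells us which way is "downhill".
    return 2 * (w - 3)

w = 0.0            # starting direction, far from the goal
learning_rate = 0.1

for step in range(100):
    # A positive gradient means "decrease w" (turn one way);
    # a negative gradient means "increase w" (turn the other).
    w -= learning_rate * gradient(w)

print(round(w, 3))  # ends up very close to 3.0, where the loss is near zero
```

The blindfolded twist in the class would be like taking those steps without seeing the slope, which is why random guessing converges so much more slowly, if at all.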

But the experiment failed, as I said, and I accepted that the blindfolded (random) version needs more attention.

I just hope the same acceptance happens when the direction of a media outlet, colliding freedom of expression with extortion, turns out to be a moral descent.

Reflecting on my everyday experiences to simplify the chaos in politics is what I dream of.

To be continued…
