Who’s afraid of Machine Learning? Part 3: About That Learning
Intro to ML (for mobile developers)
The last post described how to create an Artificial Neural Network (ANN), inspired by the way our brains work. Basically, how to create an algorithm that takes data and draws a conclusion.

Many of the numbers in the previous post were merely guesses or random values. You might say: “Wait, you just told us you made up a bunch of stuff! What’s up with that? How could this give us something we can trust?”
Teaching, learning, training
We will now “teach” the model, much the same way we would teach a baby:
We’ll take a data set, meaning a bunch of images, each with a label that fits it. We’ll give the model each image and ask: “With your current model, is this a strawberry or not?” After the model runs and produces a conclusion, we can compare it with our original label and say “Yes, you were right! It was a strawberry” or “No, you were wrong”…
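As a sketch of that question-and-answer step, here is a toy “is this a strawberry?” check in Python. Everything here is made up for illustration: the feature values, the guessed weights, and the bias are stand-ins for the numbers from the last post.

```python
import math

def predict(features, weights, bias):
    # Weighted sum of the features, plus the bias, squashed into a
    # 0..1 "probability" that the image is a strawberry.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

features = [0.9, 0.2, 0.7]   # made-up feature values for one image
weights  = [0.5, -0.3, 0.8]  # the guessed weights
bias     = 0.1               # the guessed bias
label    = 1                 # 1 = "this really is a strawberry"

prediction = predict(features, weights, bias)
feedback = "right" if round(prediction) == label else "wrong"
```

Here the model happens to be right, so our feedback would be “Yes, you were right!”; with a different image it might well be wrong.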

Then, according to our feedback, the model can tweak the numbers that I guessed before.

The input can’t be tweaked, of course, since the image and its features are objective. But the model can tweak the weights and the bias that I guessed (the blue and green numbers in the image). By tweaking them, the model makes it more likely that next time it will reach the right conclusion. It can also make its conclusion more accurate, meaning a higher probability for the correct label.
This process of giving the model many, many images along with their correct results, and tweaking the numbers to fit them better, is called training. It is the heart of the “learning” process the model goes through.
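The whole tweak-and-repeat process can be sketched as a small Python loop. This shows one common way to do the tweaking (a gradient-style nudge after every image); the tiny data set, the starting numbers, and the learning rate are all assumptions for illustration, not the series’ actual model.

```python
import math

def predict(features, weights, bias):
    # Weighted sum plus bias, squashed into a 0..1 "probability".
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical tiny data set: made-up feature values, 1 = strawberry.
dataset = [
    ([0.9, 0.1, 0.8], 1),
    ([0.2, 0.9, 0.1], 0),
    ([0.8, 0.2, 0.9], 1),
    ([0.1, 0.8, 0.2], 0),
]

weights = [0.5, -0.3, 0.8]  # the guessed starting numbers
bias = 0.1
learning_rate = 0.5         # how big each tweak is

for _ in range(1000):                   # show every image many times
    for features, label in dataset:
        prediction = predict(features, weights, bias)
        error = prediction - label      # feedback: how wrong were we?
        # Tweak each weight (and the bias) a little to shrink the error.
        weights = [w - learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias -= learning_rate * error
```

The learning rate controls how aggressively each wrong answer changes the numbers; too small and training crawls, too large and the tweaks overshoot.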
The goal, of course, is that after the model has been trained enough, it will be able to take any image, even one that wasn’t part of the training process, and produce an accurate enough conclusion. That is because it knows well enough how to separate the image into features, assign the right weights, and build a fitting calculation for deciding how the features affect the final conclusion.
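A common way to check that this actually works is to hold some labeled images out of training and measure accuracy only on those. A minimal sketch, with a made-up data generator standing in for real images (the two-feature setup and all its numbers are assumptions):

```python
import math
import random

random.seed(42)

def predict(features, weights, bias):
    # Weighted sum plus bias, squashed into a 0..1 "probability".
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

def make_example():
    # Hypothetical generator: "strawberries" (label 1) have a high
    # first feature; the second feature is just noise.
    label = random.randint(0, 1)
    base = 0.8 if label == 1 else 0.2
    return [base + random.uniform(-0.1, 0.1), random.random()], label

train_set = [make_example() for _ in range(200)]
test_set = [make_example() for _ in range(50)]  # never shown in training

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for features, label in train_set:
        error = predict(features, weights, bias) - label
        weights = [w - lr * error * x for w, x in zip(weights, features)]
        bias -= lr * error

# Accuracy on images the model has never seen before.
accuracy = sum(round(predict(f, weights, bias)) == label
               for f, label in test_set) / len(test_set)
```

If the held-out accuracy stays high, the model has learned something general about strawberries rather than memorizing its training images.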
