Machine learning models can be broadly classified into three categories based on the nature of the algorithm used: Statistical Machine Learning, Artificial Neural Networks, and Deep Learning. Artificial Neural Networks (ANNs), also known as Neural Networks, are a type of algorithm that is inspired by the human brain. While the comparison between ANNs and the human brain is superficial, it can help us understand ANNs in a simple way. In this article, we will delve deeper into the intricacies of this branch of Machine Learning.
Key takeaways from this blog
Here, we will be discussing these topics in greater detail:
- An analogy from the human brain and a definition of ANNs in plain English.
- Explanations of the constituent terms in the definition of ANNs.
- Key insights from the schematic diagram of NN.
- What are the Advantages and Disadvantages of ANNs?
- Practical use-cases of ANNs in real life.
The human brain is a complex system that learns and adapts over time through experiences and circumstances. Neurons, the basic building blocks of the brain, play a crucial role in this process. Billions of neurons, along with their connections, store learned experiences in the form of memories. When our sensory organs, such as our eyes, skin, and ears, encounter similar situations, the brain responds in a similar way. An example of this is learning to drive a car. Our brains experience various situations on the road and learn how to respond. Once learned, the brain uses signals from our eyes, nose, and ears to control the various parts of the vehicle.
As new experiences occur, the brain adapts and modifies the stored learnings. Essentially, the brain maps input signals to output signals in response to new information. This process is mimicked in artificial neural networks, which also use interconnected neurons to store and process data.

Defining ANNs in Layman's Terms
If we define the term ANN in plain English, we can say:
Neural Networks are user-defined nested mathematical functions with tunable variables that are modified in a systematic trial-and-error fashion to arrive at the closest mathematical relationship between a given pair of inputs and outputs.
Let's look at the terms used in the above definition.
With all this, we should have a sense of what exactly is present in any Neural Network. Now let's learn how exactly this nestedness works.
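The "nested mathematical functions" idea can be sketched directly in code. The weights, biases, and layer sizes below are made-up illustrative values, not trained ones; the point is only that the network's output is one function wrapped inside another.

```python
import math

def sigmoid(z):
    # A common activation function that squashes any number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def hidden_layer(x):
    # One hidden neuron: a weighted sum of the two inputs plus a bias,
    # passed through the activation. Weights and bias are arbitrary here.
    w, b = [0.5, -0.3], 0.1
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

def output_layer(h):
    # The output neuron applied to the hidden activation.
    w, b = 0.8, -0.2
    return sigmoid(w * h + b)

def network(x):
    # The "nestedness": the output function wraps the hidden function.
    return output_layer(hidden_layer(x))

print(network([1.0, 2.0]))
```

Training a real network amounts to adjusting the `w` and `b` values systematically until `network(x)` matches the desired outputs as closely as possible.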

Key insights from the Neural Network Diagram
When designing the structure of Neural Networks, it is important to consider the following:
- Each neuron in the input layer corresponds to one feature of the dataset. So if the dataset has 50 features, the Input layer will have 50 neurons.
- The number of output categories determines the total number of neurons in the Output layer. For example, if there are 10 categories in the output, then the Output layer will have 10 neurons.
- The number of hidden layers and the number of neurons in each hidden layer are fixed before training; they are not trainable parameters. Such settings are called hyperparameters, and they are tuned through multiple experiments on the same dataset.
- Every neuron in any layer is connected to every neuron in the adjacent layers. For example, if hidden layer 1 has 20 neurons and hidden layer 2 has 60 neurons, each of the 20 neurons in hidden layer 1 will be connected to all 50 neurons of the input layer and to all 60 neurons in hidden layer 2.
- Every neuron in the hidden and output layers has one trainable parameter called bias, and every connection between neurons is weighted by a trainable variable called weights. These weights and biases are collectively called the weight matrix.
- In a Neural Network with 50 features in the input, 10 output categories, 20 neurons in hidden layer 1, and 60 neurons in hidden layer 2, the total number of trainable parameters will be: Biases = 20 + 60 + 10 = 90 and Weights = 50×20 + 20×60 + 60×10 = 2800, so the total trainable parameters will be 2890.
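The parameter count above follows a simple pattern: each pair of adjacent layers contributes (neurons in layer) × (neurons in next layer) weights, and every non-input neuron carries one bias. A short sketch of that calculation for the example network:

```python
# Layer sizes for the example: 50 inputs, hidden layers of 20 and 60
# neurons, and 10 output categories.
layer_sizes = [50, 20, 60, 10]

# Weights: one per connection between each pair of adjacent layers.
weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Biases: one per neuron in every layer except the input layer.
biases = sum(layer_sizes[1:])

print(weights, biases, weights + biases)  # 2800 90 2890
```

Changing any entry in `layer_sizes` shows how quickly the parameter count grows as layers widen or deepen.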
Advantages of Neural Networks
Some of the key benefits of using Neural Networks include:
- Neural Networks are able to learn intricate non-linear relationships between input and output data, making them well suited for complex problems.
- They have the ability to generalize their learning to new, unseen data and make predictions with a high degree of accuracy.
- Neural Networks are not limited by the distribution of the input data, and can work well with data that follows heterogeneous distributions. This makes them versatile and suitable for a wide range of data types.
- Neural Networks are robust to noise in the data, meaning that their predictions are not greatly affected by random variations in the data.
Disadvantages of Neural Networks
While Neural Networks have many advantages, there are also some drawbacks to consider. Some of these include:
- Training Neural Networks can be computationally intensive and require powerful hardware. As the number of hidden layers or nodes in any hidden layer increases, the need for better processing power also increases.
- One of the main disadvantages of Neural Networks is their lack of interpretability. They are not able to explain how or why they arrived at a certain prediction, which can make it difficult to understand and trust their results.
- There is no set method for designing Neural Networks, and various hyperparameters such as the number of layers, number of neurons, and the type of activation function must be fine-tuned through experimentation.
Practical use-cases of ANNs in real life
Neural Networks are particularly well suited for datasets that exhibit high levels of non-linearity. Some areas where this is commonly the case include:
- Optical Character Recognition (OCR): OCR is a complex problem that involves highly complex non-linear relationships between the characters in an image. Neural Networks can process a large amount of input, such as a complete image represented in matrix form, and automatically identify these complex relationships. This makes them well suited for OCR, facial recognition, and handwriting verification applications.
- Stock market price prediction: Forecasting stock prices is a challenging task, as the market behavior is often unpredictable. However, many companies now use Neural Networks to anticipate whether prices will rise or fall in the future. By making more accurate predictions based on past prices, Neural Networks can help companies make significant financial gains.
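The OCR point above mentions feeding a complete image to the network in matrix form. In practice this means flattening the pixel matrix into one long feature vector, one input neuron per pixel. A minimal sketch with a made-up 3×3 grayscale "image":

```python
# A tiny 3x3 grayscale image as a matrix of pixel intensities (0-255).
# The values are invented purely for illustration.
image = [
    [0, 255, 0],
    [255, 255, 255],
    [0, 255, 0],
]

# Flatten row by row and scale each pixel into [0, 1], producing the
# feature vector a dense network's input layer would receive.
features = [pixel / 255.0 for row in image for pixel in row]

print(len(features))  # 9 input neurons for a 3x3 image
```

A real OCR input such as a 28×28 digit image would flatten to 784 input neurons in the same way.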
Conclusion
In this article, we covered the fundamental concepts of Artificial Neural Networks, including the constituent terms used in its definition. We also discussed the learnable parameters present in any Neural Network and how to calculate the total number of these parameters. Lastly, we presented some practical, real-world applications of ANNs in various fields. We hope you found the information presented in this article to be informative and useful.
Enjoy Learning!