
Neural Networks explained
Neural Networks, Pillars of Modern AI
You glance at your screen and it unlocks instantly.
Ask about the weather: Siri tells you it’ll be sunny.
Got a question? ChatGPT drafts the answer.
It feels like magic, yet it’s simply… mathematics.
Behind these feats of artificial intelligence lies a fascinating technology: neural networks.
They are invisible but ubiquitous. These networks are at the heart of today’s most impressive AI applications. And yet, how they work often remains a mystery.
In this article, discover in a simple, visual way what happens behind the scenes of these “artificial brains” and why they have become indispensable to the ongoing technological revolution.
Welcome to the AI’s Brain
A journey to the heart of the network
Understanding networks without dropping the connection

Neural networks are computing architectures at the core of AI, designed to learn and process information by connecting thousands of tiny computing units, somewhat like a simplified brain.
These architectures form the main building block of artificial intelligence, enabling major technological advances and paving the way for revolutionary applications in areas such as Machine Learning, Deep Learning and Computer Vision.
From synapses to circuits, our digital cousins
The human brain is made up of billions of interconnected neurons. Each neuron plays an essential role in information processing: it receives signals, analyses them, and fires under certain conditions.
When you see an image with fur, whiskers and pointy ears, your brain links these clues and concludes: “That’s a cat!”. This process, which feels natural and immediate, relies on a vast network of neurons whose connections strengthen over time through learning and experience.
An artificial neural network works on a similar principle, but instead of biological signals it uses mathematical calculations to process information. It consists of artificial neurons connected together that transform data through successive stages to achieve a global understanding, like recognising a face or understanding a sentence.

Diving into the neural depths
Strength in numbers
In the beginning was the neuron…

A neuron is a basic unit that can be combined into more complex networks.
The simplest of them is called the perceptron, an early building block of artificial intelligence invented in the 1950s.
This neuron is capable of making simple decisions. For example, an email can be classified as spam if it contains suspicious words, lots of links, or comes from a dubious source. Each criterion is given a weight, and if the weighted sum exceeds a certain threshold, the email is considered spam (1); otherwise not (0).
Even if it seems limited, its very simplicity made the perceptron a fundamental building block of modern AI. By stacking hundreds or thousands of perceptrons, we obtain very powerful networks.
Its operation is based on several key steps (a short code sketch follows the list):
1. Receiving input values
The neuron receives a list of numbers representing transformed data. Whether image, text, or something else, everything must first be converted into numerical values to be processed. For example, every pixel of an image or every word in a text can be represented by a number.
2. Assigning a weight
Each value is given a weight, a coefficient that determines its influence.
The higher the weight, the more relevant the value is considered and the more it counts in the calculation.
3. Adding the values
The weighted values are summed by the neuron, condensing all the information into a single figure—much like a grade-point average where each subject has its own coefficient. This score will serve as the basis for making a decision.
4. Activation function
An activation function is applied; it determines whether or not the neuron should “fire”, based on the score obtained. It can:
– switch on like a light if the score exceeds a threshold
– respond in a more gradual and nuanced way
5. Passing the result
If the neuron fires, it passes its result to the neurons in the next layer. Information thus flows from neuron to neuron, layer after layer, allowing increasingly refined analysis at each step.
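To make these five steps concrete, here is a minimal Python sketch of a perceptron applied to the spam example above. The input clues, weights and threshold are made up purely for illustration; a real network learns its weights from data.

```python
def perceptron(inputs, weights, threshold):
    # Steps 1-3: receive the input values, weight them, and sum everything up.
    score = sum(value * weight for value, weight in zip(inputs, weights))
    # Step 4: a step activation, the neuron "fires" (1) only if the score exceeds the threshold.
    return 1 if score > threshold else 0

# Step 5 would then pass this output on to the neurons of the next layer.
email = [1, 7, 1]           # made-up clues: suspicious words present, number of links, dubious sender
weights = [2.0, 0.5, 1.5]   # how much each clue counts
print(perceptron(email, weights, threshold=4))  # prints 1: classified as spam
```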
A multi-layered intelligence
Artificial neural networks operate through multiple layers that gradually transform information. Each layer plays a specific role in data processing, like an assembly line where each stage refines the result a little more (a small code sketch follows the three layers below).
1️⃣ The input layer
This is the network’s first step, where raw data are converted into a form the computer can understand, then sent to the following layers for analysis. For example, a cat image is transformed into a series of numbers, each pixel being represented by a value corresponding to its colour or brightness.
2️⃣ The hidden layers
These layers progressively analyse the data to spot patterns—recurring elements that help identify objects, shapes or concepts. A rounded shape with a point, for example, might correspond to a cat’s ear. The more hidden layers a network has, the more subtle details it can detect and the finer its understanding—this is the principle of Deep Learning, which is characterised by stacking more than one hidden layer.
3️⃣ The output layer
This is the last layer, which provides the final result in the form of a prediction. In our example, the network calculates the probability that the image matches a certain object and gives an answer: “90% chance it’s a cat”.
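Here is a minimal sketch of this three-layer journey in Python with NumPy, assuming a toy four-pixel “image” and random weights; in a trained network, the weights would have been learned rather than drawn at random.

```python
import numpy as np

def relu(x):
    # A common activation: keeps positive signals, silences negative ones.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Input layer: a tiny 4-pixel "image" already converted into numbers.
pixels = np.array([0.9, 0.1, 0.8, 0.3])

# Hidden layer: 3 neurons, each with its own weights and bias
# (random here, learned during training in a real network).
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
hidden = relu(W1 @ pixels + b1)

# Output layer: a single neuron whose score is squashed into a probability.
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
probability = 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # sigmoid

print(f"{probability[0]:.0%} chance it's a cat")
```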

Intelligence: the daughter of experience
Thanks to the chaining of its layers, a network can learn by itself, gradually adjusting its internal connections as it processes data.
This process relies on experience and trial-and-error: the network makes a prediction, compares the result with reality, then adjusts its connections to reduce the gap between the two. The more examples it sees, the better it can generalise and make accurate predictions, even when faced with new data.
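As a rough illustration, here is a tiny trial-and-error loop in Python for a single neuron classifying the spam example, with invented data. It follows the same predict / compare / adjust cycle, using gradient descent as the adjustment rule.

```python
import numpy as np

# Invented training data: two clues per email (number of links, count of
# suspicious words); label 1 = spam, 0 = legitimate.
X = np.array([[5.0, 3.0], [0.0, 0.0], [4.0, 2.0], [1.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

weights, bias, learning_rate = np.zeros(2), 0.0, 0.1

for _ in range(1000):
    # 1. Make a prediction (weighted sum + sigmoid activation).
    predictions = 1 / (1 + np.exp(-(X @ weights + bias)))
    # 2. Compare with reality: how far off is each prediction?
    error = predictions - y
    # 3. Adjust the connections to reduce the gap (gradient descent).
    weights -= learning_rate * (X.T @ error) / len(y)
    bias -= learning_rate * error.mean()

print(np.round(1 / (1 + np.exp(-(X @ weights + bias))), 2))
# After training, spam emails score much closer to 1 than legitimate ones.
```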
We distinguish two major types of learning for neural networks: supervised learning and unsupervised learning (article on this topic coming soon).

What if we opened the black box?

Contrary to what you might be led to believe, neural networks are not always easy to interpret. The more layers a model has, the more its connections multiply and the harder its reasoning is to follow.
Even if a network can identify a cat with precision, it is often impossible to know why it made that choice: which parts of the image counted? Which features influenced the decision?
We then talk about a “black box”: the AI gives a result, but its reasoning remains opaque.
This opacity can be problematic in fields where it is crucial to understand and justify AI decisions. For example, when a model analyses industrial data or guides strategic decisions, it is generally necessary to know which elements its answer is based on.
That’s where Explainable AI (XAI) comes in: a set of methods that aim to make the decisions of complex models such as neural networks more understandable (this is also the direction of the AI Act). The goal is not to modify the model, but to shed light on its behaviour after the fact, for instance by visualising the areas of an image that influenced recognition or identifying the key data behind a prediction.
This booming field meets a growing need for transparency and trust in AI usage—particularly for open-source models—and improves the reliability of interactions between humans and machines. By making AI more explainable, we enable users to better understand, use and collaborate with it.
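As one concrete (and simplified) example of an XAI technique, a gradient-based saliency map highlights which pixels most influenced a classifier’s decision. The sketch below uses PyTorch with an untrained stand-in model and a random image, purely to show the mechanics.

```python
import torch
import torch.nn as nn

# A stand-in classifier; in practice this would be a trained image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # a made-up 32x32 RGB image

scores = model(image)                     # class scores
top_class = scores.argmax(dim=1).item()   # the predicted class
scores[0, top_class].backward()           # gradient of the winning score w.r.t. the pixels

# Pixels with large gradients influenced the decision the most.
saliency = image.grad.abs().max(dim=1).values   # one heat value per pixel, shape (1, 32, 32)
print(saliency.shape)
```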
So tell me, what’s the point?
Networks really do layer it on
Neural Networks: Sentinels of AI
Neural networks make it possible to analyse, understand and generate information without human intervention. They adapt and improve continuously, which allows tasks to be automated efficiently and a wide variety of needs to be met.
Learn
Adapt
Analyse
Generalise
Neurons grow their network
These networks are part of Machine Learning, a key subset of AI. They are at the heart of modern artificial intelligence and are particularly used in Deep Learning.
There are many neural-network architectures, each adapted to specific types of data and tasks. These architectures differ in their number and type of neurons, their number of layers, and their type of connection.
Here are some examples of connections (illustrated in the code sketch after this list):
– Dense: every neuron is connected to all those in the next layer, like a spider’s web
– Convolutional: the network analyses images piece by piece, like a jigsaw puzzle
– Recurrent: it remembers previous information, like short-term memory
– Attention-based: it spots links and important elements within a set, even if they are far apart
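A minimal sketch of what these connection types look like in code, here with PyTorch layers and made-up toy inputs (the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Toy inputs, just to show each connection type in action.
x_vec = torch.rand(1, 16)          # a flat feature vector
x_img = torch.rand(1, 3, 32, 32)   # a small RGB image
x_seq = torch.rand(1, 10, 16)      # a sequence of 10 word vectors

dense = nn.Linear(16, 8)                      # dense: every input connects to every output
conv = nn.Conv2d(3, 8, kernel_size=3)         # convolutional: scans the image piece by piece
recurrent = nn.LSTM(16, 8, batch_first=True)  # recurrent: keeps a memory along the sequence
attention = nn.MultiheadAttention(16, num_heads=2, batch_first=True)  # attention: relates distant elements

print(dense(x_vec).shape)                       # torch.Size([1, 8])
print(conv(x_img).shape)                        # torch.Size([1, 8, 30, 30])
print(recurrent(x_seq)[0].shape)                # torch.Size([1, 10, 8])
print(attention(x_seq, x_seq, x_seq)[0].shape)  # torch.Size([1, 10, 16])
```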
Thanks to these advanced architectures, AI can accomplish sophisticated tasks such as image recognition, automatic translation or text generation; they are what make it both powerful and efficient. Among the best known are:

CNN – Convolutional Neural Network
Definition: a network architecture designed to analyse images by automatically detecting patterns such as edges and shapes.
Application: image recognition and other Computer Vision tasks, such as spotting a cat in a photo.
RNN – Recurrent Neural Network
Definition: a network that processes sequential data step by step, keeping a memory of previous elements, like short-term memory.
Application: sequential data such as text, speech or time series.
Transformers
Definition: used to process sequential data. Unlike RNNs, which process data word by word in order, they analyse the entire sequence in parallel thanks to an attention mechanism.
Application: the basis of language models (LLMs) such as ChatGPT, used in Generative AI for document generation, translation or analysis.
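To give an intuition of the attention mechanism at the heart of Transformers, here is a bare-bones self-attention step in NumPy with made-up numbers; real Transformers also learn separate projection matrices for queries, keys and values.

```python
import numpy as np

rng = np.random.default_rng(0)
words = rng.normal(size=(3, 4))   # 3 "words", each represented by a 4-dimensional vector

# For simplicity, queries, keys and values are the word vectors themselves.
queries, keys, values = words, words, words

scores = queries @ keys.T / np.sqrt(4)                                 # how strongly each word relates to every other
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # softmax: each row sums to 1
output = weights @ values                                              # each word becomes a weighted mix of all the words

print(weights.round(2))   # the attention pattern: which words "look at" which
```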
Neural networks are far more than just mathematical tools: they are structures capable of learning, adapting and generalising, making ever smarter and more autonomous applications possible. From recognising a cat in a photo to instant text translation, they are today the backbone of modern artificial intelligence.
Understanding how they work, even at a simplified level, not only helps us grasp what AI does today, but also anticipate what it will achieve tomorrow. One thing is certain: these networks haven’t finished amazing us.