Artificial intelligence is often mentioned, but its exact meaning remains unclear to many of us. In this article, we decipher artificial intelligence by examining its recent advances, addressing its limitations, and outlining best practices for getting the most out of it. We will also explore the essential prerequisites for embarking on an AI project.
What is artificial intelligence?
Artificial intelligence refers to computer technologies capable of solving problems of high algorithmic complexity, in particular tasks that humans perform using their cognitive functions.
To operate, AI requires several essential elements:
- A large amount of data representative of the problem to be solved
- Algorithms to process this data and create representation models
- Specific hardware for AI training and execution
Machine learning, a sub-branch of AI, has evolved considerably in recent years. Today, it no longer needs millions of data points; a small, well-chosen dataset can be enough to address many categories of problems, making machine learning much more accessible.
Machine learning demonstration
First, we built a database of about 150 images: 90 containing a face and 60 showing an empty background. The AI is trained to recognize both the individual and the empty background. In this demonstration, training takes only a few seconds; in a real application, it can take several days to several weeks. The trained AI can then determine whether the person appears in a new image or not.
The workflow is always the same: build a database, optimize the parameters of a model on it, and then use the trained model to predict an output for new data. It is essential that the database be representative of the problem to be solved. Quality data therefore matters more than sheer quantity when training a machine learning algorithm.
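As a minimal sketch of this build-train-predict pipeline, here is a nearest-centroid classifier on made-up two-element feature vectors standing in for images; the feature values, dataset, and function names are all illustrative assumptions, not the actual demonstration code.

```python
import math

def centroid(vectors):
    """Average the feature vectors of one class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(faces, backgrounds):
    """'Optimize the parameters of a model': here, one centroid per class."""
    return {"face": centroid(faces), "background": centroid(backgrounds)}

def predict(model, vector):
    """Assign a new image to the class with the nearest centroid."""
    return min(model, key=lambda label: math.dist(model[label], vector))

# Toy database: in the real demo, 90 face and 60 background images.
faces = [[0.8, 0.7], [0.9, 0.6], [0.7, 0.8]]
backgrounds = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25]]
model = train(faces, backgrounds)
print(predict(model, [0.85, 0.75]))  # → face
```

A representative database matters here for exactly the reason stated above: if the face examples all looked alike, the centroid would not generalize to new images.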
Speaking of generative AI, GANs (generative adversarial networks) are based on two components: a generator, whose objective is to deceive the discriminator, and a discriminator, which distinguishes real data from fake data. The better the discriminator becomes at spotting fakes, the more realistic the data the generator must learn to create. That adversarial game is how new data gets generated.
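The adversarial game can be sketched in a few lines on one-dimensional data: a scalar generator learns to shift random noise toward a real distribution, while a logistic discriminator tries to tell the two apart. The parameters, learning rate, and target distribution below are illustrative assumptions, not a production GAN.

```python
import math, random

random.seed(0)
sigmoid = lambda x: 1 / (1 + math.exp(-x))

a, b = 1.0, 0.0   # generator g(z) = a*z + b, turns noise into candidate samples
w, c = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + c), outputs P(x is real)
lr = 0.05
real_mu = 3.0     # "real" data ~ N(3, 1); the generator starts at N(0, 1)

for step in range(2000):
    z = random.gauss(0, 1)
    x_real = random.gauss(real_mu, 1)
    x_fake = a * z + b

    # Discriminator step: raise d(x_real) toward 1, lower d(x_fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: move (a, b) so the discriminator scores fakes as real.
    g = (1 - sigmoid(w * x_fake + c)) * w
    a += lr * g * z
    b += lr * g
```

After training, the generator's offset `b` has drifted toward the real mean: it only improves because the discriminator keeps punishing unrealistic samples.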
There are also AIs that work without annotation. This is the case, for example, of the foundation models used by generative AI. To generate text, an algorithm is trained to complete sentences whose endings are already known. Properties that were never intended can then emerge: the AI can learn to generate its own stories or do arithmetic without being explicitly trained to do so.
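To make the "complete sentences we know" idea concrete, here is a toy bigram model that learns word-to-word transitions from a tiny corpus and uses them to continue a phrase. The corpus and function names are made up, and real foundation models learn billions of parameters over subword tokens, but the training signal is the same: predict what comes next.

```python
import random
from collections import defaultdict

# "Training": count which word follows which in sentences we already know.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def complete(word, length=4, seed=0):
    """Complete a phrase by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        choices = bigrams.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(complete("the"))
```

Even this toy version produces phrases it was never shown verbatim, which hints at how unplanned capabilities can emerge from a simple completion objective at a vastly larger scale.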
However, there are limits to generative artificial intelligence. It cannot think like a human being and operates on the basis of a predefined database, which creates constraints. It excels at general knowledge questions but can provide incorrect answers on specific or complex topics.
Data and regulations
To train an AI, it is imperative to have usable data. Collecting personal data first requires consent, followed by anonymization of that data. To use raw data, annotation is essential: to teach an AI to distinguish different elements, it must be shown varied examples. For instance, to segment an object precisely in an image and determine its nature, images must be carefully annotated pixel by pixel.
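What "pixel by pixel" annotation means in practice can be shown with a minimal sketch: the annotation is a grid of class labels with the same shape as the image. The mask values and class names below are hypothetical.

```python
# A segmentation annotation: one class label per pixel, same shape as the image.
# 0 = background, 1 = object of interest (hypothetical classes).
mask = [
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
]

def class_pixels(mask, label):
    """Count how many pixels the annotator assigned to one class."""
    return sum(row.count(label) for row in mask)

print(class_pixels(mask, 1))  # → 8 object pixels out of 15
```

Every one of those labels is produced by a human annotator, which is why pixel-wise annotation is so costly and why unannotated approaches are attractive.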
Attention to bias is crucial to minimize errors. These biases may arise from the company or the scenarios present in the databases.
It should be noted that the use of personal data is not unlimited. The GDPR, the European general regulation in force since May 2018, governs the confidentiality and protection of personal data. It applies to data stored in Europe; beware of foreign operators, for example in the United States, where American law prevails. In this context, anonymization or pseudonymization of the data is essential.
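One common pseudonymization technique, shown here as an illustration rather than legal or compliance advice, replaces direct identifiers with keyed hashes. The key name and record fields are hypothetical.

```python
import hashlib, hmac

# Hypothetical secret key; it must be stored separately from the data,
# because whoever holds it can link pseudonyms back to individuals.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    The same person always maps to the same pseudonym, so records can
    still be linked for analysis, but the original name cannot be read
    back without the key. This is pseudonymization, not anonymization:
    under the GDPR the result is still personal data.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer": pseudonymize("Jean Dupont"), "purchase": "wine"}
```

Full anonymization would additionally require that re-identification be impossible even with the key, for example by aggregating or deleting the identifiers outright.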
In addition, the AI Act, scheduled to be implemented by the end of 2025, establishes that there is no good or bad technology, but that some use cases will be unacceptable and prohibited. These include subliminal manipulation, exploitation of children or persons with psychological disabilities, generalized social scoring (currently in force in China), and remote biometric identification (allowed only in exceptional cases, such as the search for a fugitive). In addition, some high-risk AIs will be subject to data governance, technical documentation, user transparency, and human oversight requirements to ensure accuracy, robustness, and cybersecurity.
Use cases in companies
Detection of defects on bottles
Glass is a malleable material and therefore prone to defects, which on this kind of product can be very dangerous. We therefore developed an AI that runs on a graphics card at high throughput, detecting each defect, identifying its type, and finally sorting the bottles correctly.
Recognition of wine bottles
WineAdvisor is a mobile application with integrated AI that recognizes wine bottles from a simple photo of the label. This image recognition enhances the user experience by providing quick access to detailed wine information. An OCR function reads the vintage, offering accurate data even from medium-quality images.
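A hedged sketch of the post-OCR step: assuming the OCR engine returns raw, possibly noisy text from the label, a simple pattern match can pull out the vintage. The function name and sample string are hypothetical, not WineAdvisor's actual code.

```python
import re

def extract_vintage(ocr_text: str):
    """Pull a plausible vintage year (1900-2099) out of noisy OCR output."""
    for match in re.findall(r"\b(19\d{2}|20\d{2})\b", ocr_text):
        return int(match)
    return None  # no year found on the label

print(extract_vintage("CHATEAU M0NTROSE Saint-Estephe 2015 75cl"))  # → 2015
```

Note that the garbled "M0NTROSE" does not matter here: only the four-digit year pattern is needed, which is one reason vintage reading stays robust on medium-quality images.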
Laser engraved skin recognition
We have developed algorithms for reading the laser engraving of animal skins to ensure traceability at every stage of the supply chain.
Diagnosis and medical planning
Software that learns from the surgeon's planning of past cases to propose predictions for new operations, choosing the right implant and suggesting its placement. This technology has been approved by various health experts.
Waste sorting
A technology for sorting waste so that food-grade plastics are not mixed with non-food plastics, in line with European standards that allow a maximum of 5% non-food plastic in food-grade plastic. The software detects the different plastics in real time, in 1.5 milliseconds.
Sales forecasting
By taking into account past stock levels and external factors such as weather, seasonal sales, and holidays, we can predict future sales and therefore manage inventory to avoid shortages or overstock.
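A minimal sketch of that idea, with made-up numbers and factor multipliers: the recent sales average is scaled by hypothetical adjustments for external factors, then compared against current stock.

```python
def forecast(recent_sales, weather=1.0, promotion=1.0, holiday=1.0):
    """Scale the recent sales average by external-factor multipliers.

    The multipliers are illustrative assumptions; a real system would
    learn them from historical data rather than set them by hand.
    """
    baseline = sum(recent_sales) / len(recent_sales)
    return baseline * weather * promotion * holiday

history = [120, 130, 110, 140]                # units sold, last four weeks
predicted = forecast(history, promotion=1.3)  # a promotion week is coming up
stock = 150
shortage_risk = predicted > stock             # reorder before running out
```

Here the promotion lifts the expected demand above the current stock, flagging a shortage risk early enough to reorder.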
The optimization of packaging
We worked on the temperature curves, internal and external, of an isothermal box containing products that require demanding packaging. The aim was to predict the box's cold-holding capacity and determine whether temperature excursions would occur, in order to identify which packaging would meet the required cold constraints, limit the risks associated with the cold chain, and optimize both the choice of packaging and the routes.
Visual analysis of documents
Analysis of documents containing complex electrical diagrams: many pages of hard-to-interpret symbols, such as circuit breakers, along with explanatory tables. The aim is to develop automatic reading of PDF or scanned documents, so that the extracted information can be imported into specialized software for costing electrical system maintenance.
Tour optimization for school transport
For school transporters, we carried out tour optimization, the aim being to allocate the right resources to the right tours across hundreds of transporters. This AI saved our customer 500,000 km per year, at an average cost of €1 per km.
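The core of such an allocation problem can be sketched in miniature: given a cost matrix of kilometres per transporter-tour pairing, find the assignment that minimizes total distance. The numbers are invented, and the brute-force search below is only viable for a handful of tours; at the scale of hundreds of transporters, dedicated combinatorial optimization solvers are used instead.

```python
from itertools import permutations

# Hypothetical cost matrix: km driven if transporter i takes tour j.
km = [
    [40, 65, 80],
    [55, 30, 70],
    [90, 60, 35],
]

def best_assignment(cost):
    """Try every transporter-to-tour assignment, keep the cheapest one."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))

print(best_assignment(km))  # → (0, 1, 2): each transporter gets its cheapest tour
```

Even in this tiny example, the gap between the best assignment (105 km) and the worst (200 km) shows where the reported yearly savings come from.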
We also worked on detecting people on buses. Using AI to count the number of children on the bus has enabled the company to optimize the size of its buses, and thus save money.
IA Booster France 2030
IA Booster is a support and financing program from Bpifrance that aims to promote the adoption of AI by companies. It allows a company to assess its AI maturity and to be supported by Data/AI experts referenced by Bpifrance.
IA Booster proceeds in 4 distinct phases:
- First, artificial intelligence training on the Bpifrance University platform.
- Second, identification of the company's data use cases for AI, through the Data IA Diagnostic.
- Third, selection of the relevant artificial intelligence approach for the problem.
- Finally, experimentation with the artificial intelligence solution.