Defining Artificial Intelligence is not easy, partly because of its many areas of application. So what is Artificial Intelligence from a computer science point of view? It is the discipline that studies the theories and techniques for building algorithms that, using cognitive methods, can process large amounts of data. Software that uses AI produces a probabilistic output, unlike the deterministic output generated by traditional software. Now let's look specifically at how it differs from human reasoning and what we mean by neural networks, machine learning, and deep learning.
What is meant by Artificial Intelligence?
SAS Italy has tried to condense Artificial Intelligence into ten words, and from this definition we have taken the cue to discuss a fascinating topic that has inspired many Hollywood screenplays and is still today the subject of conceptual and ethical debate.
Software developed following AI principles has a decision-making autonomy that, to a casual observer, might seem identical to human ability, but it is not even remotely comparable to the intuitive capabilities of an individual.
If we look at the literal meaning of intelligence, we realize how many abilities the human mind has:
intelligènza (ant. intelligènzia) s. f. [from Lat. intelligentia, der. of intelligĕre
"to understand"]. – 1. a. The set of psychic and mental faculties that enable man to think, understand or explain facts or actions, elaborate abstract models of reality, understand and be understood by others, and make judgments, and that make him capable of adapting to new situations and of modifying the situation itself when it presents obstacles to adaptation. Source: Treccani
Currently, no software incorporates all these capabilities. It therefore seems legitimate to ask: is it appropriate to call it intelligence? In truth, it would be more accurate to describe it as a great capacity for simulation.
Can machines think like humans?
Ever since people started talking about Artificial Intelligence, and the first robots simulating human movements were built, many immediately thought: robots will replace human beings. The movies that dealt with this topic certainly didn't help people imagine a different scenario. Yet those who know the subject well know that machines, as of today, are not able to reason like human beings. Moreover, this is not the goal of research and development in this technology.
AI emulates some human senses, such as sight and hearing, but there are aspects of human reasoning that cannot be replicated through artificial intelligence, such as the ability to:
- define goals and reasons for achieving them;
- abstract knowledge;
- generate empathic feelings independently.
What does the future hold for us? Let's leave the answer to Judea Pearl, an Israeli-American computer scientist and philosopher, winner of the 2011 Turing Award and known for championing the probabilistic approach to artificial intelligence:
“The day when AI will be able to approximate human intelligence is near, but its capabilities must be judged on three levels of cognitive ability: seeing (association), doing (intervention), and imagining (counterfactuals). AI today works only at the lowest level, which is seeing.”
Artificial Intelligence: neural networks, machine learning and deep learning
The idea behind AI is the attempt to emulate the capabilities of the human intellect. Where to start, then? From the complex system of neural networks that, in humans, enables elaborate activities such as reasoning, learning, the production of sounds, words, and images, and the ability to act.
In the 1940s, W. S. McCulloch and W. Pitts were the first to propose a model of an artificial neuron. Since then, increasingly elaborate artificial neural network systems, capable of learning and adapting to different purposes, have been built.
Artificial neural networks and deep learning
Artificial neural networks are based on computational models that recreate the connections typical of biological neural networks. The structure is composed of nodes and interconnections through which data flows: the data enters the input layer, crosses the hidden layers, and generates an output in line with the initial design objective.
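To make that flow concrete, here is a minimal sketch of a forward pass; the layer sizes, random weights, and sigmoid activation are illustrative assumptions rather than a reference implementation:

```python
# A minimal sketch of the input -> hidden -> output flow described above.
# Layer sizes, weights, and the sigmoid activation are illustrative
# assumptions, not a reference implementation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))  # input layer (4 nodes) -> hidden layer (3 nodes)
W_output = rng.normal(size=(3, 1))  # hidden layer -> output layer (1 node)

x = np.array([0.2, 0.7, 0.1, 0.9])   # data inserted in the input layer
hidden = sigmoid(x @ W_hidden)       # the data crosses the hidden layer
output = sigmoid(hidden @ W_output)  # output in line with the design objective
print(output)
```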
Once the connections are built, we move on to the learning phase, in which a set of data is fed to the network: in supervised learning we also supply the expected final output, whereas in unsupervised learning we let the system learn from the outputs it generates.
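A hedged sketch of the two modes, on toy data and with scikit-learn as an assumed library choice:

```python
# Sketch of supervised vs. unsupervised learning on toy data.
# The dataset and model choices are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])

# Supervised: we also supply the expected final output (the labels y).
y = np.array([0, 0, 1, 1])
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Unsupervised: no labels; the system organizes the data on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(clf.predict(X))  # learned from the supplied outputs
print(km.labels_)      # structure discovered without supplied outputs
```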
An example of an artificial neural network: the recognition of a human face from a processed image.
Deep Learning architectures are a particular class of neural networks: they involve a much larger number of hidden layers, which are useful for identifying the features of the data.
In the most complex cases, Deep Learning architectures can have more than 150 hidden layers.
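As a hedged illustration of what such a stack looks like in code (Keras is an assumed framework choice, and the layer count and sizes are arbitrary):

```python
# Sketch of a "deep" architecture: the same node-and-layer idea as above,
# but with many hidden layers stacked. Counts and sizes are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

hidden_stack = [layers.Dense(32, activation="relu") for _ in range(10)]  # 10 hidden layers
model = keras.Sequential(
    [keras.Input(shape=(64,))]                 # input layer
    + hidden_stack                             # hidden layers
    + [layers.Dense(1, activation="sigmoid")]  # output layer
)
model.summary()  # lists every layer in the stack
```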
What are the application areas of neural networks and Deep Learning?
Solutions based on neural networks and deep learning architectures can be applied in most companies. As a rule, the greater the amount of data to be managed, the better the response given as output.
- Speech recognition: many apps use this technology to transcribe spoken language into written text. In the enterprise, for example, using speech recognition for warehouse operations yields a productivity boost: voice messages are transformed into text that the process-management software can understand. In technical jargon, this process is known as speech-to-text.
- Natural Language Processing (NLP): useful in interactions between computers and human language; beyond speech recognition, it understands and generates natural language. It also enables translation into other languages and textual analysis of large amounts of data. In a business organization, this translates into data optimization through document summarization and information classification.
- Recognition of parts of text: given clearly labeled datasets as input, we can obtain as output a check for inconsistencies, or apply rules that let us immediately flag an error. In the insurance field this is fundamental for detecting fraud.
- Recognition and classification of objects in an image: algorithms can recognize objects, identify their shape and color, and extract them from their context. Image Recognition is applied in areas such as security, surveillance, and the inspection of goods.
Machine learning: the ability of machines to learn
When this enormous potential is translated into software, we enter the field of machine learning. Teaching machines to learn automatically and to act without being explicitly programmed is a major achievement that allows for faster turnaround times. How does the learning take place? By analyzing data to build and adapt models. These models enable learning through an experiential path similar to the human one, that is, discarding mistakes and reinforcing correct actions. By identifying patterns in the data, we can build an algorithm that adapts its models and improves their ability to make predictions. When working with software structured around this technology, it must be remembered that the results are not certain but probable, and it is necessary to carefully evaluate the probability that each output is correct.
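To make the "probable, not certain" point concrete, here is a minimal sketch on toy data, with scikit-learn and logistic regression as assumed choices, in which the trained model returns a probability rather than a deterministic answer:

```python
# Sketch: a model trained on toy data outputs probabilities, not certainties.
# The dataset and the logistic-regression model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Unlike the deterministic output of traditional software, the answer here
# is a probability whose correctness must be evaluated, not assumed.
print(model.predict_proba([[3.5]]))  # close to [[0.5, 0.5]] near the boundary
```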
There are many areas in which machine learning can be used; we will mention a few:
- translating a text into another language;
- choosing investment opportunities through trading systems;
- customizing the featured products of an e-commerce based on searches made online;
- quickly detecting fraud in banking institutions through tools that leverage machine learning techniques.
Artificial Intelligence is the science of training systems to emulate human activities through learning and automation.
Who invented Artificial Intelligence?
The overview of AI cannot end without clearing up one last doubt: when was AI born, and who invented it? The scientific community unanimously recognizes an official date of birth for AI: 1956, the year in which a seminar held at Dartmouth College, in New Hampshire, founded the discipline on the basis of a collection of important contributions on the subject. When we think of a name to associate with AI, however, we cite Alan Turing and his 1950 article "Computing Machinery and Intelligence", since in practice the idea could only be put to the test with the birth of the first computers, which were well suited to giving substance to the concept of intelligence associated with machines.