The development of artificial intelligence stirs strong emotions. We are aware of the enormous possibilities of using AI in everyday life, as well as its potential to revolutionize the business world through, for example, cognitive computing or deep learning.

When analysing AI, it is worth considering a certain paradox related to its significant limitations, one that we, as conscious users of AI, may not be aware of.

Moravec’s paradox

A discovery in the field of artificial intelligence and robotics is still considered valid: contrary to traditional assumptions, high-level reasoning requires little computing power, while operations involving low-level perception, sensory processing and motor skills require enormous computing power. This view, known as Moravec’s paradox, was formulated in the 1980s by Hans Moravec, Rodney Brooks and Marvin Minsky, among others.

Reasoning is commonly referred to as the thought process of recognizing a certain belief or statement as true based on another belief or statement previously recognized as true. Recognizing reasoning as correct requires the application of the rules of logic and axioms accepted as true, such as scientific laws, legal systems, dogmas, cultural principles, customs, traditions and authorities.

Perception, in turn, is defined as the sensory processes, and the interpretation of them, through which we experience objects and events in the external, three-dimensional world. Perception can be broadly described as the ability to register and capture objects and events in the external environment: their sensory reception, understanding, identification and verbal description, as well as preparation to react to the stimulus. Human perceptual systems enable us to see, hear, and feel tastes, smells, touch and temperature changes. However, the most important element of perception seems to be the sense of awareness (of one’s own existence and that of one’s surroundings).

Sensory processing in humans is, to put it simply, a neurological process occurring in the nervous system that involves organizing information from our body and the external world in order to use it in everyday life. The human brain effectively processes impressions, triggering automatic adaptive responses that allow a person to function in the environment.

When the human brain receives sensory information that is interpreted as important, we pay attention to it as part of a process called “arousal.” When a person is threatened with danger, they are ready to flee or fight owing to the processes taking place in the body. When the brain determines that a particular impression is not important for survival, the human brain allows us to filter out useless information through the process of “inhibition.” This is how we recognize street noises or other common sounds as non-threatening.

Hans Peter Moravec (a Canadian futurologist and transhumanist, research professor at the Robotics Institute of Carnegie Mellon University in Pittsburgh) wrote that it is relatively easy to make computers display the skills of adult humans on intelligence tests or in playing checkers, but difficult or even impossible to give them the skills of a one-year-old child in perception and mobility (Moravec H.P., Mind Children, Harvard University Press, 1988).

We should agree with psychologist Steven Pinker that this statement may be one of the most important discoveries made in the field of artificial intelligence. Pinker observed that the main lesson of years of AI research was that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we commonly take for granted – such as recognizing a face, picking up a pencil, crossing a room – actually pose some of the most difficult engineering problems. When a new generation of smart devices emerges, stock analysts, engineers and jurors may be replaced by machines, while gardeners, receptionists and cooks are safe in the coming decades (Pinker S., The Language Instinct, Harper Perennial Modern Classics, 2007).

What may seem surprising is that the human skills most difficult to program are the ones we are not even aware of. In his work, Marvin Minsky emphasized that, in general, we are least aware of the things our minds do best; we are more aware of simple processes that do not work well than of complex processes that work flawlessly (Minsky M., The Society of Mind, Simon and Schuster, 1986).

According to Moravec’s paradox, and contrary to popular belief, high-level reasoning requires very little computation compared to low-level sensorimotor skills, which require enormous computational resources. According to some researchers, the explanation for Moravec’s paradox may lie in cerebral lateralization and the differentiated functions of the left and right hemispheres of the human brain (Rotenberg V.S., Moravec’s Paradox, Activitas Nervosa Superior, 2013).

The evolutionary path of human skills

The theory of evolution can provide a possible explanation for Moravec’s paradox. According to its basic assumptions, all human skills are based on biology and use machinery developed through natural selection, which has favoured improvements and optimisations in the human body for millions of years. As a rule, the older a skill is, the longer natural selection has had to refine it.

The human brain constantly analyses the world around us, which is why our reactions occur without verbalisation of a specific threat. This is the result of training the neural connections of our brain over millions of years of evolution. In this field, AI is still much worse than humans. For AI, data analysis leading to a correct “understanding” of the surrounding world requires processes whose complexity far exceeds that of so-called high-level reasoning, such as taking intelligence tests, mathematical analysis or advanced chess.
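The asymmetry can be made concrete with a toy sketch (a deliberately simplified illustration, not a claim about any particular AI system): a classic intelligence-test task, continuing a number sequence, fits in a dozen lines of code, while its perceptual counterpart, such as recognizing a face, has no comparably short program.

```python
def continue_sequence(seq):
    """Guess the next term of a sequence by detecting a constant
    difference (arithmetic) or a constant ratio (geometric)."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                  # arithmetic: 3, 7, 11, ...
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:])]
    if len(set(ratios)) == 1:                 # geometric: 2, 4, 8, ...
        return seq[-1] * ratios[0]
    return None                               # no simple rule found

print(continue_sequence([2, 4, 8, 16]))   # 32.0
print(continue_sequence([3, 7, 11, 15]))  # 19

# By contrast, "low-level" perception has no short symbolic recipe:
# state-of-the-art vision models rely on millions of learned
# parameters and enormous training computation.
```

A few explicit rules suffice for the “hard” reasoning task; no comparable handful of rules captures the “easy” perceptual one.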

To sum up, AI must train skills that humans are already born with.

The difficulty of reproducing a particular human skill in AI tends to be proportional to the time it took for that skill to evolve in animals and then in humans. The evolutionarily oldest skills are mostly unconscious, and people perform them effortlessly.

The skills that have evolved over millions of years include moving, grasping objects, recognizing voices and faces, but also social skills such as setting goals or assessing human motivation.

Examples of skills that have emerged only recently in our species’ evolutionary timeline include mathematics, engineering, logic and science. Humans have had only a few thousand years to refine these skills, mainly through cultural evolution.

Abstract thinking has developed relatively recently in the evolution of the human species and, consequently, we should not expect its implementation to be particularly effective. Moravec concluded that a billion years of experience about the nature of the world and how to survive in it are encoded in the sensory and motor centers of the human brain. The conscious process we refer to as thinking is a thin layer of human thoughts, effective only thanks to the support of much older and much more powerful, although usually unconscious, motor knowledge. We are all outstanding Olympians in the field of perception and motor skills, so good that difficult tasks seem easy to us. Abstract thought, however, is a new ability, perhaps less than 100,000 years old. We haven’t mastered it well yet. It is not difficult in itself – it just seems that way when we do it (Moravec H.P., Mind Children (…)).

Currently, many researchers focus their analyses on consciousness itself. We can undoubtedly point out the difference between intelligence and consciousness. Modern computers and artificial intelligence systems have long had an advantage over the human mind in terms of memory capacity and calculation speed. However, there is no convincing evidence yet that computational ability and memory capacity alone can give machines self-awareness, a concept we still cannot define well.

The importance of Moravec’s paradox for organizations

Even though many years have passed since Moravec formulated his theorems, it should be admitted that they are still valid. This discovery was, of course, shaped by the stage of AI development at the time, which was incomparably less advanced than the current state of the field. It was also influenced by the relatively low computing power and memory capacity of computers at that time, the limited amounts of data made available to artificial intelligence, and software restricted in certain areas, e.g. to classical logic. Nevertheless, Moravec’s basic assumptions should still be considered valid. When assessing the possibility of implementing AI in organizations and selecting areas of its application, the described paradox should be taken into account.

The effectiveness of AI can be assessed by comparing its performance with the functioning of the human mind. As indicated above, in terms of the speed and quality of computational processes, AI far surpasses the human brain. At the level of abstract thinking, however, required for example in translating literary texts into other languages, humans still show greater ability. In medical image diagnostics, AI can compare thousands of images and point to specific disease entities in a fraction of the time it would take doctors; yet in areas such as customer service, which depend on the quality of interaction with other people and on building relationships over time, interpersonal contact remains irreplaceable.

It can therefore be concluded that replacing employees with AI in areas of organizational functioning requiring human intuition or based on human interactions will be difficult for a long time. The good thing about this situation is that these are usually positions that do not generate high employment costs. In turn, the use of AI for activities previously performed by employees in areas requiring deep analytical thinking will allow organizations to save on employing highly qualified staff in the long run.

AI as a technological innovation in the telecommunications industry

The development of artificial intelligence creates new opportunities for the telecommunications industry. Telecommunications companies use AI algorithms to design networks, improve customer service, streamline and automate business processes, and optimize network infrastructure. AI applications are driving the transition to virtualized 5G networks, while fibre infrastructure and 5G networks are accelerating the digitization of industrial services and processes, enabling the rapid expansion of the Internet of Things (IoT). Advanced AI algorithms are used in IT systems that support 5G network design, with the aim of shortening the planning and construction time of these networks.

Telecommunications operators are also expected to actively implement artificial intelligence to improve the functioning of their network infrastructure. AI supports the optimization of network operation, especially in the area of cybersecurity, and makes it possible to predict anomalies in network traffic and to activate automatic traffic-redirection mechanisms through AI-supported monitoring systems. Thanks to AI, telecommunications companies make increasing use of network self-optimization functions, and device and system failures can be prevented before they occur (predictive maintenance) on the basis of historical data: monitoring the utilization of telecommunications devices allows operators to estimate when and where failures are likely. Additionally, AI innovations in improving the energy efficiency of telecommunications networks, improving billing systems and automating business processes (RPA, Robotic Process Automation) are becoming more and more important.
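The anomaly-prediction and traffic-redirection use case can be sketched in miniature (an illustrative simplification under assumed data, not any operator’s actual system): flag a network metric sample that deviates strongly from its recent history, so that automatic rerouting can be triggered.

```python
from statistics import mean, stdev

def is_anomaly(history, sample, threshold=3.0):
    """Return True if `sample` lies more than `threshold` standard
    deviations away from the mean of the recent `history` window."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

# Hypothetical latency measurements (ms) on a link under normal load:
window = [20.1, 19.8, 20.4, 20.0, 19.9, 20.2, 20.3, 19.7]

print(is_anomaly(window, 20.5))   # False - within normal variation
print(is_anomaly(window, 95.0))   # True  - candidate for rerouting
```

Production systems use far richer models (seasonality, multivariate correlations), but the principle is the same: learn what “normal” looks like from history and act before a deviation becomes a failure.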

In the context of Moravec’s paradox described above, the listed areas of AI application suggest that AI can be implemented there with great success.

Given the persistent difficulty of applying AI properly to customer relations, we can expect less success in call-centre innovations, where AI and natural language processing (NLP) are used to analyse messages sent by customers or notes prepared by call-centre agents in order to improve the quality of customer service.

Concerns therefore remain about how well AI handles personalized digital interaction with the customer, and about whether virtual assistants and chatbots can cooperate with customers effectively enough to actually solve the problems of telecommunications service users.

Moravec’s paradox in relation to the Artificial Intelligence Act

In 2021, the European Commission submitted a proposal to the European Parliament and the Council for a regulation establishing harmonized rules on artificial intelligence (the AI Act).

To date, the AI Act has not reached its final shape, mainly due to the complexity of its subject matter and the number of related issues that need to be regulated. Critical areas include the regulation of risks associated with the use of AI. The European Commission’s proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific applications of AI.

The proposed Artificial Intelligence Regulation aims to ensure that EU citizens can trust what AI has to offer. It addresses the risks posed by artificial intelligence applications by establishing a list of high-risk applications, defining clear requirements for artificial intelligence systems, specifying obligations for users of artificial intelligence and for providers of high-risk applications, and requiring a conformity assessment before an artificial intelligence system is put into service or placed on the market.

The regulatory framework defines four levels of risk in the application of artificial intelligence: unacceptable risk, high risk, limited risk, and minimal or no risk. Unacceptable risk covers all artificial intelligence systems considered a clear threat to public security and human rights. AI systems identified as high-risk include AI used in critical infrastructure (e.g. transport), where it could put citizens’ lives and health at risk. The high-risk area also includes the use of AI in education or vocational training (which may determine access to education and professional life), in safety components of products (e.g. AI in robot-assisted surgery), and in employment, employee management and access to self-employment (e.g. CV-sorting software in recruitment procedures). High-risk systems further include essential services such as credit scoring, which may deny citizens the opportunity to obtain a loan; law enforcement systems that may interfere with citizens’ fundamental rights (e.g. by assessing the reliability of evidence); migration, asylum and border control management (e.g. verification of the authenticity of travel documents); and the administration of justice and democratic processes (e.g. applying the law to a specific set of facts).

Artificial intelligence systems in the aforementioned areas, as high-risk AI systems, will be subject to strict obligations before they can be placed on the market: appropriate risk assessment and mitigation methods, high-quality data sets feeding the system, logging of AI activity to ensure traceability of results, and detailed documentation containing all information about the system and its purpose. Clear and relevant information for the AI user and appropriate human oversight measures to minimise risk will also be required.

The AI Act allows the free use of minimal-risk artificial intelligence, which includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
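The tiered framework can be summarized as a simple lookup (an illustrative simplification built from the examples above, not legal text; the chatbot entry reflects the regulation’s transparency tier and is an assumption of this sketch):

```python
# Example applications mapped to the AI Act's risk tiers (illustrative).
RISK_TIERS = {
    "critical infrastructure control":   "high",
    "CV-sorting recruitment software":   "high",
    "credit scoring":                    "high",
    "robot-assisted surgery component":  "high",
    "customer-service chatbot":          "limited",   # transparency duties
    "spam filter":                       "minimal",
    "AI-enabled video game":             "minimal",
}

def risk_tier(application):
    """Look up the tier; unknown applications would need case-by-case
    assessment under the regulation's criteria."""
    return RISK_TIERS.get(application, "requires individual assessment")

print(risk_tier("credit scoring"))   # high
print(risk_tier("spam filter"))      # minimal
```

The point of the table is that obligations scale with the tier: high-risk entries carry the strict pre-market duties described above, while minimal-risk entries carry essentially none.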

The areas classified by the EU authorities as high-risk AI systems seem to correspond to those that Moravec identified as requiring features specific to the human species (low-level perception, sensory processing and motor skills), whose implementation in AI demands significant computing power.

It seems reasonable to state that the areas of AI application which Moravec’s paradox identifies as requiring human features or skills, such as intuition, perception or motor skills, should be classified as high-risk areas. Conversely, the areas identified by the EU in the AI Act as limited or minimal risk largely overlap with what Moravec describes as high-level reasoning, which requires very little computation from AI.

To sum up, the EU’s effort to give users of artificial intelligence strict legal protection is well founded, and it indirectly confirms the validity of the statements known as Moravec’s paradox.