
A recent TikTok video caught my eye: the famous singer Robbie Williams, sitting quietly on a bench, observing passersby undisturbed. This seemingly mundane scene reminded me of a social experiment conducted in 2007, offering valuable insights into modern management.

The Pearls Before Breakfast experiment

The Pearls Before Breakfast experiment, conceived by Washington Post journalist Gene Weingarten, posed a provocative question: in an ordinary context, would people recognize talent? To answer this, Weingarten orchestrated a bold demonstration with renowned violinist Joshua Bell.

On January 12, 2007, at 7:51 AM, Bell positioned himself at the entrance of L’Enfant Plaza metro station in Washington D.C. Dressed casually in jeans and a baseball cap, the celebrated musician began playing complex classical pieces from his repertoire on a 1713 Stradivarius violin worth $3.5 million. The location wasn’t random: L’Enfant Plaza is primarily frequented by mid-level managers heading to the heart of the federal capital.

The result was startling: out of 1,097 passersby, only seven stopped for more than a minute. Most walked by without a glance, hurried and distracted. Bell collected just $32.17, with $20 coming from a single person who recognized him. This outcome was surprising, considering that the previous evening, Bell had performed in a Boston theater with tickets priced at $100.

5 key lessons

I find this experiment illuminating on crucial aspects of modern management, offering five fundamental lessons that I’ve adopted and wish to share:

  1. Contextual Influence: As managers, we must be aware of how context influences our perception of others’ competencies. The environment shouldn’t overshadow talent.
  2. Overcoming Biases and Expectations: It’s essential not to tie our judgment to where skills are displayed. Talent can emerge anywhere, even in the most unexpected contexts.
  3. Attention Beyond Haste: In a frantic world, we must resist the temptation to focus solely on immediate goals. An open mind can reveal hidden opportunities and talents.
  4. Intrinsic Value vs. Perception: A person’s or performance’s value shouldn’t depend on the context but on its inherent quality. As managers, we must hone our ability to recognize authentic value.
  5. Rethinking Cost-Benefit Analysis in Relationships: We often quickly assess whether to interact with others based on superficial criteria. It’s crucial to develop a deeper, more human evaluation model.

Bell’s experiment reminds us that true managerial talent lies in the ability to recognize excellence, regardless of context. In an era where managers risk becoming mere executors, these lessons invite us to rediscover the role of guide and mentor.

Managerial competence

Managerial competence is a synthesis of training and experience, and its ultimate purpose is to grow people. Recognizing the “music” in everyday noise, appreciating talent in its most unexpected forms, and cultivating an environment that values excellence are crucial skills for the modern manager.

As we reflect on these lessons, let’s remember that the true art of management lies in creating harmony from the diverse notes of human talent, transforming the workplace into a symphony of growth and innovation where collective genius can always manifest undisturbed.

P.S.: The article in which Weingarten described this experiment earned him the Pulitzer Prize in 2008.

Article source: Linkedin Article by Vincenzo Gioia


The mechanic's hammer and generative artificial intelligence

Artificial "Intelligences" (AIs) have always attracted me for the fascination with which they were cloaked thanks to Hollywood films. 2001 A Space Odyssey, Ex Machina are films that I watch over and over again, always discovering new food for thought. In recent years, the professional contribution from which I have benefited thanks to the colleagues with whom I have been lucky enough to work and with whom today I have a structured vision (at least I think it is) of the technological capabilities and impacts on people's professional and personal daily lives.

Over time I have noted the resulting points for reflection in my articles, hoping to spark a discussion that does not stop at the current state of research, which will need years to understand the nature of what is happening through the use of AI.

For convenience, I summarize here the path traced by the articles I wrote.

Alien "intelligences"

Some time ago, in this article, I wrote down my reflections on the concept of intelligence. I did so because I think intelligence remains an unknown creature, despite being talked about every day. Among the "intelligences" that populate our world, and which I discuss in that article, are the newborn artificial "intelligences" (AI), which are becoming an increasingly widespread tool in both professional and consumer settings thanks to their notable generative capabilities and the simplicity with which we can interact with them.

Bias and hallucinations of AI

History has taught us that, like any great tool, artificial intelligence also brings with it new and unexpected problems. As I wrote in this article, in the case of AI the major problems are linked to the presence of bias, which I consider inevitable and present even where its presence is ruled out. The unexpected nature of AI's problems, as discussed in this article co-written with my friend Remco Foppen, is also manifesting itself in the form of hallucinations which, far from being an exclusively human prerogative, have become the worry of those who work with the latest generation of LLM systems: as happens with humans, artificial hallucinations are increasingly consistent with reality and, therefore, difficult to identify.

Impacts of AI in creative processes

Despite the limitations that AI shows, I find these systems unquestionably useful, even where large gray areas remain because limits, opportunities, risks and benefits are still unclear. Whatever use is made of it, AI is already changing our way of being and thinking, so much so that, as I wrote in this article, I do not exclude the possibility of its effects also appearing in the mechanisms that govern creative processes. Writer's block is easily circumvented if you use an AI and, to those who complain about the lack of originality, I respond that, as in every creation, inspiration is never the final work.

Don't use GPT because customers notice

This article was born from a remark made by a colleague at the end of a project meeting who, in a peremptory tone, said:

“Please do not use ChatGPT to write project documents because there are systems capable of revealing it and we will make a bad impression on the client”.

That remark immediately made me think back to my general mathematics exam, when the professor who held the chair forbade us from using calculators. As a young university student, I asked myself which mattered more: the reasoning that leads to the calculation, or the calculation itself. Today, I ask myself which matters more: the mechanism that leads to the essay, or the essay itself.

Innovation and the evils of humanity

The use of productivity-support tools has always been viewed with distrust. Bizarre as it may be, criticism of innovation ignores the nature of innovation, so much so that there was a time when even writing was criticized: Plato, in the Phaedrus, blamed it for the deterioration of memory. The calculator received no better treatment, its use opposed on the grounds that it erodes the ability to carry out calculations, even very complex ones, in one's head. Today none of us could do without writing, and the use of the calculator has not only been accepted but encouraged by the Ministry of Education, which for decades has allowed it in mathematics tests at scientific and technological high schools.

The unstoppable pace of change

The change that is triggered by the spread of each new technology is unstoppable. It was so for writing, it was so for photography and it was so for the calculator at the expense of the slide rule. No matter how much we complain, change will become everyday life. This evolutionary cycle forces us to reflect on what we are, how we perceive ourselves and what we want to be.

The importance of the right whys

AI has had many impacts on human nature, and in this article I would like to note one of them. It is an aspect I had not yet considered, despite it being linked to the ability to develop a concept or an idea, and I have no hesitation in calling it the fuel of every analysis. I am talking about the ability, not at all innate, to ask structured questions that direct the interlocutor towards a concise and clear answer.

The power of well-formulated questions

A well-formulated question is a powerful tool through which you can guide an entire analysis process, allowing you to focus attention on what is really important: defining the objective of the analysis and determining which data are needed. A well-posed question can help identify any limitations or challenges that may arise during analysis, solve complex problems and build relationships. A poorly constructed question creates a dangerous trap whose effects can distance us from our exploration and end up confirming what we already know.

The relationship between the right whys and the work of an AI

The link between the ability to ask well-structured questions and the use of an AI lies in the fact that AIs operate only when given a request or question, and the quality of the output an AI produces is directly tied to the quality of the question formulated. As with humans, a badly formulated question can also mislead an AI system, whose fragilities can emerge through questions that exploit specific logical constructs. A correctly structured question allows you to activate an AI on tasks for which it has not been directly trained but for which it can formulate reliable answers, because they are based on verified data (or at least one hopes they are).

Prompt engineering

Prompt engineering is the field of study that seeks the best techniques for choosing the most appropriate formats, phrases, words and symbols when formulating the questions that guide a generative AI to produce relevant, high-quality results. The relationship linking the quality of the question to the relevance of the answer is as central to every area of philosophical study as it is to a correct interaction with an AI.
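To make this concrete, here is a minimal, hypothetical sketch of the difference between a vague request and a structured one. The function name, the prompts and the scenario are my own illustration, not something taken from the article or from any specific product API.

```python
# Illustrative sketch only: the same request posed vaguely and then in a structured way.
# "call_model" is a placeholder for whatever LLM client you use; it is not a real API.

def call_model(prompt: str) -> str:
    """Stand-in for a generative-AI call; replace with your provider's client."""
    raise NotImplementedError

vague_prompt = "Tell me something about our sales data."

structured_prompt = (
    "You are acting as a data analyst.\n"
    "Goal: identify the three products with the largest quarter-over-quarter revenue drop.\n"
    "Data: use only the CSV pasted below (columns: product, quarter, revenue).\n"
    "Constraints: if the data is insufficient, say so instead of guessing.\n"
    "Output format: a three-row table with product, revenue drop, one-line hypothesis.\n"
)

# The structured version fixes the objective, the admissible data and the expected
# format in advance, which is exactly what prompt engineering tries to systematize.
```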

The balance of curiosity

The importance of prompt engineering leads me to think that, even in times when knowledge can be reached through superhuman "intelligences" (as the systems that came after "move 37" are often described), the value of knowledge remains tied to the standing gap between questions and answers. Having more questions than answers means holding the key to knowledge, since the effective operation of AI, as with the human brain, is closely linked to the ability to ask the right questions.

The mechanic's hammer and the originality of the essay

Continuing my reasoning and focusing on the question of the originality of the essay, I wonder how your opinion of the quality of this text would change if I told you it had been generated with AI support. Let me explain by referring to the well-known anecdote of the mechanic's hammer. A man, after consulting numerous mechanics and spending a lot of money, goes to an elderly mechanic and asks him to eliminate the annoying squeak afflicting his car's engine. The old mechanic listens to the unfortunate sound for a few seconds, takes an old hammer and, with a firm blow, solves the problem. For the hammer blow, he asks 500 euros. To the customer, who had no intention of paying that amount for a single hammer blow, the mechanic responds by asking whether the customer would have been able to do the same. I'll leave the rest of the reasoning to you as you read me. In practice, if I had used an AI to generate this text, its instrumental role in the creative/productive path would be no different from that of the hammer in the hands of the elderly mechanic. For this reason, rather than asking whether or not an AI was used to carry out an activity, I prefer to ask what degree of mastery of the tool was achieved in its use.

AI, writing and the calculator

I think the use of AI is on a par with the use of writing, the calculator and the mechanic's hammer: their functional value is linked to the ability to make good use of them. Just as no one would ask me whether or not I used a calculator in my radio engineering tests, I hope that one day I will no longer be asked whether or not I used an AI in drafting my papers, because I consider such criticisms the product of banal neo-feudal frustrations dwelling in the Pleistocene minds of characters who want to know that you "worked hard" instead of concentrating on the quality of the work produced.

The craving for originality

To the diehards of the originality of the work, the leaders of the ultras of "it is the fruit of my sweat", the scandalized virgins of "no one here copies", I declare that I have abandoned their cultural camp to others, without regret, in favor of the role of the one who tells you that Santa Claus doesn't exist. Ladies and gentlemen, most of the professionals I have met in the last 20 years, across the dozens of companies in which I have worked, produce documents by drawing "copiously" on what has already been produced for and/or by others. Get over it: the world runs on three simple keyboard keys, CTRL, C and V.

Deductions

Having clarified my position on the use of AIs and their functional role, it remains to be understood how they affect the ability to formulate questions suited to the expectations we place on the resulting output. The potential offered by generative AI will lead people to find ways to use it in ever broader contexts of daily and working life. This will drive the development of an ever greater ability to formulate structured questions, an ability that can only derive from an effort of analysis and abstraction. In essence, however much it may bother some, in the future no one will think of writing without the support of an AI because, beyond sharing what one thinks, a text has the purpose of disseminating content for cultural purposes.

Article source: Linkedin Article by Vincenzo Gioia


Premise

I decided to write this article to bring order to the reflections and deductions that I have developed in recent months regarding the concept of bias. The need to bring order was born from the confusion that arose in me when I noticed that the term bias had become part of everyday language, sometimes taking on ambiguous meanings for me.

In writing this article I do not claim any certainty or absolute truth. Indeed, I write it to note down what I think about bias and I do so, as always, publicly, because I trust it can turn into a useful opportunity for discussion with those who have the patience to read my reflections.

Before starting this reading, relax and repeat after me:

Without explainability, artificial intelligence is useless and dangerous

Introduction

In this article I talk about bias and, as always, I prefer to start from the definition of bias to which I refer in my reflections.

Biases are the manifestation of systematic distortions of judgment resulting from cognitive heuristics over which we have lost control, i.e. mental shortcuts that we use to simplify the decision-making process, pushed to such a level of trivialization that they lose contact with the very reality from which they are generated, negatively impacting the decision-making model adopted by any intelligent agent, biological or artificial (Kahneman & Egan, 2011).

Biases can influence any decision model to the point of making it ineffective. Even where we think we have built a decision-making model on bias-free heuristics, Tversky's studies demonstrate that biases take on a fundamental role in the analysis of reality, producing consequences that are not necessarily detectable, or at least not detectable in the short term.

The awareness of the structural and structuring role assumed by biases in heuristic decision-making processes paradoxically makes them a "false problem" of the processes themselves. A heuristic model built on biases that are admissible and functional to the purpose of the model does not thereby become bias-free. A decision-making process in which no dangerous, macroscopic distortions of reality appear leads me to think that the biases present in the model are invisible to our analysis yet effective in influencing the decision-making process. A well-orchestrated constellation of biases acts on the decision-making process with the same mechanics as the small weights used to balance the wheels of our cars: beyond a certain threshold, they exert a powerful conditioning on the system. The existence of this conditioning process was attested by Alexander Nix, CEO of Cambridge Analytica, in his speech "From Mad Men to Math Men" presented at the Online Marketing Rockstars conference held in Hamburg in 2017. The potentially cataclysmic force of this conditioning was tested through the psychometric conditioning that Cambridge Analytica carried out during the 2010 general elections in Trinidad and Tobago in favor of the United National Congress (UNC) through the "Do So" campaign.

The analysis of a decision-making model must therefore not stop at identifying the presence of obvious biases, such as racial or gender bias; it must be able to understand how far the way the individual results of the analysis are administered can generate strategic micro-conditioning similar to that produced by AlphaGo with its famous "move 37".

The awareness that biases are not the real problem of a decision-making model also comes from the fact that biases are not the cause of an anomaly but only and always a mere consequence of it. To be clear, a bias is to an anomaly what a fever is to an infection: it is just a symptom.

Stating that a decision-making system is affected by bias is obvious to me as the entire decision-making process is almost always based on heuristics. At the same time, talking about bias is also an admission of inadequacy. The inadequacy is determined by the fact that treating a bias is the equivalent of a symptomatic treatment caused by the inability to understand the origin of the anomaly and/or to correct the anomaly itself.

Artificial intelligence systems are not free from bias because these systems also operate through clustering processes or abstraction processes that are based on biases that are admissible and functional to the analysis.

In this article I explain, step by step, the logical path that led me to my conclusions, which are already summarized in this introduction: mitigating the risk generated by the cognitive dynamics that manifest as bias does not exclude the presence of other biases whose impact is equally serious but not immediately detectable by our capacity for evaluation.


Decision-making models are, in most cases, based on heuristic approaches.

I have always been fascinated by the mechanisms by which the mind analyzes the world and human relationships. I devoured the National Geographic TV series "Brain Games" and the essay by Sergio Della Sala and Michaela Dewar entitled "Never trust the mind" which, through experiments bordering on magic, show how little we still know about the human brain, and how much the mechanisms that govern its daily effort of analysis and adaptation are bound up with errors, illusions of thought, inconsistencies of mental processes and imperfections of memory that lead to the development of real decision-making shortcuts.

Decision-making shortcuts are the strategy our brain uses to save energy. They manifest themselves every time we are faced with challenges, problems and decisions to make for which we prefer to adopt a "heuristic" approach, that is, an approach that makes use of generalizations, empirical rules and assumptions.

The heuristic approach is a decision-making model that is based on a set of strategies, techniques and creative processes that help us find solutions more quickly and easily. With this approach, decisions are made considering a limited number of alternatives, with partial awareness of the consequences of each of these. This process is driven by "heuristics", which are rules of thumb used to solve problems or do certain types of calculations and based on the knowledge that available information is never perfect and that human abilities are limited and fallible. As the psychiatrist Mauro Maldonato says: "Unlike formal calculation, heuristics are an immediate solution."

The strategies, techniques and creative processes that make up the heuristic approach are useful distortions of reality. These distortions simplify the analysis of the facts and aim to provide a subjective view based on the knowledge that we can recognize only a limited number of alternatives and are aware of only some of the consequences of each alternative. In most cases, these distortions allow us to interpret and, where possible, predict reality quickly and effectively.
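As a purely algorithmic analogy (mine, not the article's), the same trade-off appears in software: a heuristic rule of thumb reaches an answer quickly by examining only some alternatives, while an exhaustive search guarantees the best answer at a much higher cost. The toy knapsack example below is an assumption-laden sketch, not a model of human cognition.

```python
# A greedy heuristic vs an exhaustive search on a tiny knapsack problem.
from itertools import combinations

items = [("A", 10, 60), ("B", 6, 50), ("C", 5, 40)]  # (name, weight, value)
capacity = 10

def greedy(items, capacity):
    """Rule of thumb: take items by best value/weight ratio while they still fit."""
    chosen, total = [], 0
    for name, weight, value in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if weight <= capacity:
            chosen.append(name)
            capacity -= weight
            total += value
    return chosen, total

def exhaustive(items, capacity):
    """Check every subset: always optimal, but the number of subsets grows as 2^n."""
    best = ([], 0)
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w, _ in combo) <= capacity:
                value = sum(v for _, _, v in combo)
                if value > best[1]:
                    best = ([n for n, _, _ in combo], value)
    return best

print("greedy    :", greedy(items, capacity))      # (['B'], 50): fast but sub-optimal here
print("exhaustive:", exhaustive(items, capacity))  # (['A'], 60): optimal, at exponential cost
```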


Decision-making models based on heuristics are characterized by processes of simplification of reality and abstraction.

The processes of simplification of reality are based on the schemes and categories with which we organize the knowledge we use in the processes of perception, memory and thought.

The schemes and categories that we use to organize our knowledge describe people, objects and events through only the characterizing, common or most frequent details, excluding anything that can be traced back to a specific phenomenal manifestation.

Knowledge schemas are based on associations that are immediately available to our awareness and represent what is most common or considered typical. To be clear, when I talk about the beauty of dogs, no one thinks of the beauty of the Bracco Italiano or the Spinone Italiano because everyone thinks of the generic and subjective image of the dog that has been built up over the years.

Knowledge schemes are fundamental for a correct classification of the world, which necessarily requires an abstraction process by which non-identical elements are grouped as belonging to the same phenomenal category.

Abstraction processes are fundamental for simplifying the processes of understanding and adaptation. We can say that they are the basis of the mechanisms that govern survival and evolution.

Without an abstraction process we would be incapable of making decisions, because each phenomenon would produce a separate element that could not be compared with similar ones. We would develop the "environmental dependence syndrome" (Lhermitte, 1986), which makes one unable to inhibit actions stimulated by any single input. In such a condition there would be no conifers, nor the individual species of which they are composed (e.g. Scots pine, larch, fir, spruce), but only single trees, each different from the next by the characteristics assumed by each single leaf.

Although the importance of abstraction processes is shared by all, it should be remembered that in abstractions exceptions or diversity are not taken into consideration. For this reason, when we talk about Africans we don't think about the white-skinned African population, even if it exists.

This tendency of schemes to generalize and exclude exceptions leads to prejudice when we do not have sufficient information on what we are talking about.


The processes of simplification of reality can generate anomalies that manifest themselves in the form of cognitive biases.

The simplification processes at the base of the heuristic model have an important flaw: their only limit is the common sense of those who apply them. For this reason, in some cases, the heuristic process goes beyond simple simplification of reality and generates outright trivializations, from which arise preconceptions that, although they may derive from reality, no longer retain any objective link with it.

The trivialization of reality leads to the development of preconceptions which reverberate in decision-making processes through inevitable errors of evaluation which can be more or less serious. Such errors, regardless of their nature, are generically called "cognitive biases" or more simply "bias".

Cognitive biases are systematic errors of thought that, by causing us to deviate from logic or rationality, influence the way we perceive reality, make decisions and the way we formulate our judgments.

The difference between bias and heuristics is, therefore, represented by the fact that heuristics are convenient and quick shortcuts closely linked to reality and which lead to quick conclusions. Cognitive biases are also shortcuts but they manifest themselves through prejudices that have lost all connection with reality and which are acquired, in most cases, without a critical spirit or judgment.

It is not easy to understand at what point a simplification process turns into a trivialization from which a cognitive bias arises. I believe it is impossible to set a threshold that would tell us we are in the presence of a simplification process we have lost control of, to the point of declaring it dysfunctional to the decision-making process. For this reason, perhaps, we become aware of a bias, so to speak, only after the decision-making process has manifested its effects on the environment and on people.


Abstraction processes are common to all intelligent agents.

A world of absolute uniqueness, in which it is not possible to create groups through processes of abstraction, is a world in which any form of intelligent life is impossible. As unreasonable as it may seem, organizing knowledge into schemes, and deriving abstractions from them, is common to all intelligent or teleological agents, even those of an alien (non-human) nature. For my dog, birds are birds regardless of whether they belong to the species for which he was selected and trained to hunt. You could argue that my dog chases everything that moves purely out of predatory instinct. However, his reluctance to prey on objects unknown to him is common to all other dogs. I still remember how he behaved when, for the first time in his life, he saw a balloon rolling on the floor, moved by the wind, and how he behaved in subsequent encounters with this environmental phenomenon.

Abstractions are not lacking even in plant intelligence, which implements clustering schemes in its learning and adaptation processes. A testimony to this ability is given to us by Stefano Mancuso through his observations on the evidence collected by the French naturalist and botanist Lamarck (1744-1829) regarding the behaviors that the "mimosa pudica", so called because it closes its leaves as soon as it is touched, adopts in a presumable attempt to defend itself against herbivores.


Abstraction processes are also present in artificial intelligence systems

A specific case is, in my opinion, that of Artificial Intelligence (AI) systems which, although not life forms, operate as teleological agents and do so by implementing abstraction and classification processes not dissimilar to those produced by other living species. As Nello Cristianini notes, every time an AI system performs a classification it does so with its own theoretical construct, based on its own form of intelligence.

It is not possible to know the characteristics of the knowledge schemes that an AI adopts to distinguish a dog from a cat or to classify the world. If we could ever find out, we would discover that they have nothing to do with our criteria based on human sensory data. I would not be surprised to find in an AI a classification as paradoxical as the taxonomy of the animal world proposed by Jorge Luis Borges.

The issue of biases in AI systems is much more complex if we consider that the statistical correlations used in abstraction processes are often, if not always, defined on data which, in addition to being naturally affected by bias, may hide weakly correlated statistical links that are not evident to humans and are capable of generating negative effects on the analysis and decision-making process. To understand the importance of weak correlations and their danger, I report a fine definition produced by the Ammagamma team, of which David Bevilacqua is, in my opinion, an able teacher and popularizer on the subject: "the [weak correlations are] weaker relationships between the variables which influence a phenomenon [and are] difficult to read and interpret. Our mind is not able to grasp them, unlike strong correlations, but by equipping ourselves with a mathematical model it is possible to identify them [and use them to our advantage]". Awareness of the importance that weak correlations assume in the abstraction processes generated by an AI also comes from the studies conducted by James Pennebaker, which demonstrate the feasibility of psychometrically segmenting a user from the linguistic structure adopted in the exposition of their opinions alone. Thanks to these studies and to weak correlations, Facebook can cluster groups of people starting only from the likes expressed on users' images and public posts.
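To illustrate the idea of weak correlations (my own synthetic sketch, with invented numbers and names, not data from Pennebaker, Ammagamma or Facebook): each individual signal below is almost useless on its own, yet a trivial model that aggregates hundreds of them predicts the hidden trait well.

```python
# Synthetic demo: many individually weak signals become predictive once combined.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_signals = 5000, 500

# Hidden binary trait we would like to infer (e.g. a preference), values 0 or 1.
trait = rng.integers(0, 2, n_people)

# Each signal (think "liked an image or not") is only faintly related to the trait:
# probability 0.52 when trait == 1, 0.48 when trait == 0.
probs = np.where(trait[:, None] == 1, 0.52, 0.48)
likes = rng.random((n_people, n_signals)) < probs

# A single weak signal barely beats coin flipping...
single_acc = (likes[:, 0].astype(int) == trait).mean()

# ...but simply counting the signals and splitting at the median aggregates
# the weak correlations into a much stronger prediction.
score = likes.sum(axis=1)
combined_acc = ((score > np.median(score)).astype(int) == trait).mean()

print(f"accuracy of one weak signal        : {single_acc:.2f}")   # around 0.52
print(f"accuracy combining {n_signals} weak signals: {combined_acc:.2f}")  # around 0.8
```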

Recognizing the existence of abstraction processes in every intelligent agent allows us to understand that biases can be present in every heuristic process, regardless of the nature of the agent that carries it out. Furthermore, I find Borges's provocation a useful tool for understanding that our principles for classifying and ordering the world are anything but obvious and natural, since it is possible to imagine infinitely many other ways of organizing the objects of our experience, such as the paradoxical one I mentioned above.


Quick Summary

At this point in my reasoning, it is best for me to give a brief summary of what I have attempted to explain so far.

Point 1 - Heuristic processes are based on simplifications of reality which, even if functional to achieving the result, are the matrix from which biases arise.

Point 2 - Biases, being linked to simplification processes, are not the result of a specific level of abstraction but, rather, of a limit determined only by the level of unreliability that our common sense finds admissible in our cognitive and decision-making processes. In these terms, bias is present in every heuristic process, every time we deviate from objective data.

Point 3 - Simplification processes are necessary to implement the abstraction processes that allow us to understand the world regardless of specific phenomenal manifestations. I have also found this capacity for abstraction in agents endowed with intelligence alien to ours.


First deduction: heuristic processes are based on bias

Bias, understood as a deviated form of simplification and abstraction mechanisms, is present in every heuristic process because it is through the adoption of one or more shortcuts that one can avoid the adoption of a logical-scientific approach which is always very expensive in terms of computing resources and data acquisition and verification time.

The presence of bias in all heuristic processes is also demonstrated by the experiment carried out by psychologist Emily Pronin who, in 2002, described the "bias blind spot" as the natural inclination of human logic to consider ourselves always more objective than anyone else. Another demonstration of the bias-heuristic link comes from the psychologist Paolo Legrenzi and the neurologist Carlo Umiltà who, in the book "Many unconscious for a brain", write:

Given the enormous flow of information, we tend to select the information we already know, the information we agree with, and the information we can assimilate more easily thanks to mental patterns and categories that are familiar to us and already consolidated. Furthermore, we are inclined to share this information with those who think like us and with those who, we know, will appreciate it because they think like us. These new forms of life give rise to a sort of collective unconscious which translates into the radicalization of people's opinions. Individuals are comforted by sharing a current of opinion that is simple, clear, and requires low cognitive and attentional effort.

The role of biases in cognitive processes has led to a careful classification of them which, in the absence of taxonomic proposals or reference models, has generated a list of over 150 items divided into four macro-areas over the years.

The Cognitive Bias Codex by John Manoogian

With such a large list of items, I find it obvious to consider biases as an inseparable part of heuristics, despite the fact that in some cases they become the element that shows the fallacy of some simplification/abstraction processes.

Nobody likes the idea that heuristic processes are based on more or less effective biases because it demonstrates that every choice is always wrong or, if you prefer, right until proven otherwise. This scenario, however, is not as deplorable as it seems since it is precisely thanks to biases that it is possible to accelerate the analysis processes, improve the detection of critical choice factors in changing or uncertain situations and arrive at a more streamlined decision-making model. This is due to the fact that bias is closely linked to the schemes and categories with which the knowledge that underlies the processes of perception, memory and thought is organized.


Second deduction: bias is not a false problem

Heuristics are necessarily based on biases, even if, in most circumstances, these biases do not have harmful effects on the context or object of our decisions. In such a condition, however much we may dislike it, it is no longer necessary to ask whether a decision is made on the basis of a model whose mechanisms show bias. Rather, we need to ask what relevance the biases certainly present in the current decision-making process assume. In essence, since choices are always based on errors of evaluation, let's focus on the distinction between serious errors and seemingly irrelevant errors whose effects are only apparently of low impact.


Third deduction: bias is not an explanation

The vision of bias as a problem that explains the anomaly revealed downstream of a decision-making process is misleading because it transforms bias from the effect of an anomaly into a cause of the anomaly. Biases are always the symptom of a problem affecting the decision-making model and, for this reason, they do not exist except as a distorted manifestation of a cognitive process. To be clearer, I don't think it's correct to say that the anomaly found in a decision-making process can be traced back to a bias or is produced by a bias. When faced with an anomaly, it should be said that the problem from which it is generated manifests itself in the form of one or more biases.

The interpretation of bias as the symptomatic manifestation of a cognitive problem calls for some reflection. The first point is that correcting the anomaly is not a matter of correcting the biases through which the anomaly manifests itself (that would be like lowering the fever instead of curing the infection). The second is that finding an anomaly through one or more biases does not mean it has not produced other biases of equal importance that have not yet been identified.


Fourth conclusion: biases show the limits of our capabilities to monitor AI

A bias understood as an effect, and not as a cause, requires an approach completely different from the one currently adopted for analyzing the efficiency of AI systems: one must be able to identify which abstraction process generated the anomaly that we identify as bias. To carry out such an analysis of causes, one must know in detail the parameters that contribute to generating the decision and, even more, which combination of "weak signals" produces the specific statistical signature behind the anomaly. To date, there is no way to be certain of what generated the model's response.

The presence of biases is, therefore, not useful in explaining something since biases are present in every heuristic process and are present both in analysis processes whose outcomes conform to expectations and in those that do not produce the expected results.

Saying that a decision-making system is affected by bias should mean knowing why the bias was generated, why it was not avoided, and why correcting the anomaly will not generate a different one, in a chase reminiscent of the first Microsoft service packs of the 1990s.

But what if bias is instead a built-in peculiarity of the decision-making model? A decision model is always focused on a limited set of data. This means that, even in the absence of extreme discrimination phenomena, we would still be in a context in which we cannot exclude the presence of a bias worthy of AlphaGo's famous "move 37", whose effects reveal themselves, like a poison, at a time and in a way that make it impossible to understand the origin of the disease and to identify an adequate cure.

Without a decision-making system for selective attention, we would be at the mercy of environmental stimuli, even those irrelevant or inconsistent with what we are doing. We might think we could be supported by an AI in the data-reading process but, even then, no one could exclude that the agent adopted to support us might itself fall victim to the complexity of the world, developing the technological equivalent of the "environmental dependence syndrome" (Lhermitte, 1986), which makes one unable to inhibit actions stimulated by any single input.

Article source: Linkedin Article by Vincenzo Gioia


"I’m so deep in this bloodshed that if I stopped this business now, going back would be as difficult as continuing all the way". (Macbeth, III, 4 di William Shakespeare)

I have always been fascinated by the way Shakespeare encapsulates Macbeth's journey towards the unknown with just a few words. It doesn't matter how his journey began; what matters is the clear perception of the so-called "point of no return." The elegance of this phrase, recited by Macbeth in the third act of Shakespeare's play, conceals a condition that can emerge as a consequence of every significant action in our personal and professional lives.

The condition that Shakespeare describes is also known as the "Macbeth Effect". It summarizes a perception that leaves no room for choice and is based on the clouded belief that by continuing along the path, one will find clarity or a solution to the current state.

This effect manifests in many areas of private and professional life, where our decisions often begin with phrases such as: "It costs nothing to try," "There's so little risk" or the bolder, "If he did it, I can do it easily too".

In professional life, the Macbeth effect is often accompanied by a sort of industrial mystique, epitomized by impressive aphorisms on office walls, like a Steve Jobs poster with a motivating quote, similar to how a photo of Marilyn Monroe might adorn a hair salon.

The Macbeth effect arises from an approach that leads us to develop a high propensity for risk, neglecting any form of control and measurement of current and expected results. By its nature, the Macbeth effect is linked to the exploration of the unknown, often found in innovation, research and development, and invention processes. Anyone embarking on a path without adequately analyzing its risks or duration can find themselves in the same position as Macbeth.

The Concorde project

Industrial history is full of failures linked to the belief that there is no turning back, with no way out except continuing forward. A notable example is the Concorde project, a supersonic aircraft produced by the Anglo-French consortium of British Aerospace and Aérospatiale. The Concorde was one of the most ambitious innovation projects in aeronautics history: work began in the late 1950s, and the first prototype took off in March 1969. It wasn't until November 4, 1970, that the aircraft first reached Mach 2, becoming the second commercial aircraft to fly at that speed, after the Soviet Tupolev Tu-144. This historical context helps us understand the decisions leading up to the first commercial flight in 1976 and the project's final failure in October 2003. Although many believe its decommissioning was due to the July 2000 disaster, the truth is that its abandonment was due to massive fuel consumption, unsustainable maintenance costs, a small number of passengers (a consequence of the high ticket price), and often questionable marketing choices. The tragic accident in Paris merely accelerated the closure of the Concorde project, whose budget deficit the French and British governments had been covering despite clear financial evidence against its sustainability. This persistence is a classic example of the human tendency to continue a project without weighing future benefits, focusing instead on past efforts and investments.

The opportunity cost and sunk costs

The analysis of future advantages is described in economics by the concept of "opportunity cost," which defines the future value of one's choices based on the cost of forgoing an alternative opportunity. Essentially, it is the sacrifice made to make a choice. However, in evaluating investments, assessments often give more weight to "sunk costs." To illustrate this dynamic, imagine being at the head of a research and development project with an uncertain outcome and having 100,000 euros to invest.

Consider two scenarios: in the first, you have already invested 500,000 euros and can close the project with an additional 100,000 euros; in the second, you haven't started the project yet and can invest your 100,000 euros to begin activities with an uncertain outcome. How would you act? You are likely inclined to invest in the first scenario, considering what has already been done. But any answer is neither correct nor wrong because the question itself is flawed. The correct question should be: "What is the opportunity cost in the current state of the project?" Only this question provides the logical basis for making our choice.
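A minimal sketch of that reasoning in code (the future values are invented for illustration; they are not figures from the article): the only inputs that matter are the incremental spend and the expected future payoffs, while the 500,000 euros already spent never enters the comparison.

```python
# Opportunity-cost comparison: sunk costs are deliberately absent from the inputs.

def decide(incremental_cost, expected_future_value, best_alternative_value):
    """Pick the option with the higher future payoff for the same incremental spend.

    Whatever has already been spent is identical under both options,
    so it cannot change the ranking and is ignored on purpose.
    """
    finish_project = expected_future_value - incremental_cost
    fund_alternative = best_alternative_value - incremental_cost
    return "finish the project" if finish_project >= fund_alternative else "fund the alternative"

# Scenario one: 500,000 euros already sunk, 100,000 euros left to invest.
# The two future values below are hypothetical, purely for illustration.
choice = decide(
    incremental_cost=100_000,
    expected_future_value=120_000,    # what finishing the project is expected to return
    best_alternative_value=180_000,   # what the same 100,000 could earn elsewhere
)
print(choice)  # -> "fund the alternative": the 500,000 already spent plays no role
```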

Cognitive distortion in the analysis of sunk costs

The incorrect evaluation of sunk costs is due to a cognitive distortion known as the "Sunk Cost Effect," evident in the Concorde case, where heavy investments by the French and British governments led to further investments even when the project's financial unsustainability was clear. This bias reflects a paradoxical behavior: when we have invested significant effort, time, and money into a failing project, instead of abandoning it to limit losses, we tend to continue investing, exacerbating our losses.

You might think this wouldn't happen to you, but consider a fixed-menu restaurant where you're almost full but have already paid for dessert. You might order and leave it on your plate because you paid for it, demonstrating the sunk cost fallacy. This phenomenon also occurs in relationships, where people maintain unhappy, unsatisfactory relationships to avoid "wasting" the time spent together.

Friedman's cognitive dissonance

Daniel Friedman (University of California, Santa Cruz) explored this in his 2007 study, "Searching for the Sunk Cost Fallacy", describing the psychological mechanisms underlying bad decisions related to sunk costs. According to Friedman, bad decisions stem from "cognitive dissonance", which leads to continuous self-justification. People who invest in an unprofitable activity modify their beliefs about its profitability to avoid admitting a mistake. Cognitive dissonance varies among individuals: anxious people are more sensitive to uncertainty and tend to keep investing despite likely failure, whereas depressed individuals are more likely to stop investing because they lack unrealistically positive expectations about the future.

The escalation effect

The behaviors driven by the Macbeth effect show that distorted perceptions of sunk costs have costly consequences in terms of money, time, and effort. A more severe form of the Macbeth effect is the "Escalation Effect": when a project begins to fail, sunk cost bias irrationally pushes individuals to invest even more, leading to further losses. This growing spiral of investment is also known as the "Vietnam Effect", after the dynamics of the US war in Vietnam. According to Under Secretary of State George Ball's 1965 memorandum to President Johnson, retreat becomes impossible once soldiers have died, leading to ever more investment so that their deaths will not have been in vain.

The roadmap of madness

My professional experience has allowed me to observe the Macbeth Effect and the Escalation Effect closely. The lessons I've learned can be summarized in a path of increasing investments, which I call the "roadmap of madness." This path is common to the projects analyzed for this article and unfolds in the following steps:

  1. Someone decides to solve a business/personal problem or unleash their creativity with a proprietary technical solution.
  2. The inventor presents the product, and a manager decides it has potential.
  3. Based on a superficial market check, the manager convinces the chain of command to allocate the budget to develop the product.
  4. The team develops a testable version, and the sales force begins work.
  5. Poor commercial results prompt more investment to avoid wasted efforts and reputational damage.
  6. Even an expert sales force fails to deliver results, prompting internal adoption of the product, which also fails.
  7. The manager, confident in the product's potential, sets up a new company to handle it.
  8. Further failures lead to seeking new investors.

The end of this roadmap is uncertain, but it is unlikely to be pleasant given the described path. My experience with two now-bankrupt companies (a cloud operating system and a procurement platform) has taught me to pay close attention to the Macbeth effect, as in real life, the curtain can indeed fall.

Article source: Linkedin Article by Vincenzo Gioia


For some years, I have been contributing to the development of solutions in the AI field, leveraging the expertise of a small, knowledgeable group of colleagues and a think tank with whom I enjoy sharing the doubts and perplexities that arise as I delve deeper into this technological domain. Lately, my reflections have been accompanied by a sense of incompleteness, which I managed to pinpoint this morning after coming across Robert Silvers' work titled "Barack Obama, 2009."

In this photomosaic on aluminum, which can be appreciated in detail by clicking here, the artist portrays the former US president using a technique that, much like a traditional mosaic, combines many small photos to create a single, large image. Silvers' work illustrates his vision of Barack Obama through the pages of articles discussing him and his presidency. The essential characteristic of the photomosaic technique is that, unlike a traditional mosaic, each tile is an image in itself. This creates a fascinating dual-level effect, requiring the observer to adopt different perspectives to fully understand what is represented.

In Silvers' work, this is reversed by allowing the observer to construct their own image of the Obama character by reading each individual article or by viewing the overall image formed by the graphic assembly of all the articles. This represents the subjective image the artist has constructed of the character through these articles.

Similarly, in the AI universe, I feel that the individual technological components are akin to single images within a broader and more complex technology. This technology must be viewed from a different perspective to truly understand what AI is.

Article source: Linkedin Article by Vincenzo Gioia


As the debate surrounding AI intensifies, we often focus on its implications for privacy, bias, ethics, and employment. Yet, what if the profound impact of this technology extends to the very model through which our brains generate new ideas?

Generative AI has emerged as a valuable tool for conceptual work across various fields, including journalism, literature, advertising, and art. Many creatives have encountered the dreaded "writer's block" or "blank page syndrome," and AI offers a promising solution by unlocking ideas and streamlining task execution. However, the human brain's interactions with the external world are far from simple input-processing-output processes due to its adaptive nature.

Neuroscience reveals that the brain's plasticity enables it to modify existing neural connections, allowing for the development of new responses to stimuli and interpretations of the external environment. This adaptability is accompanied by conditioning phenomena, which serve as computational shortcuts for automatic responses to specific events. The method forms the basis of idea generation, as no creative individual operates without it. Even musical improvisation relies on applying a method to transform chaos into creation.

But what happens when an adaptive brain is exposed to generative AI over an extended period? Are we certain that such exposure does not induce conditioning phenomena that impact the model of thought generation, thereby altering natural creative capacity? While one may argue that no creature exists in isolation from external stimuli, and adaptation itself entails conditioning, the crucial distinction lies in the nature of these stimuli.

Natural stimuli from real-world interactions occur in an environment characterized by constant change. Conversely, exposure to generative AI entails stimuli curated by researchers for training purposes, creating a potentially synthetic representation of the universe. This contrasts with the dynamic and diverse stimuli encountered in natural environments.

Moreover, our inability to define intelligence and creativity universally complicates matters. Existing definitions are often anthropocentric, reflecting human biases. Tests designed to measure these traits are inherently imperfect and may fail to capture the nuances of intelligence and creativity.

The rise of AI has further exacerbated this issue, with AI systems surpassing human performance on creativity tests. This phenomenon raises questions about the fundamental nature of intelligence and creativity, particularly when AI's responses are based on synthesized data rather than genuine exposure to diverse stimuli.

If correct, this hypothesis suggests that addressing biases in AI may be simpler than tackling the profound effects it has on human cognition and creativity. As we navigate the evolving landscape of AI, it is imperative to consider not only its technical capabilities but also its potential impact on our most cherished human faculties.

Article source: Linkedin Article by Vincenzo Gioia


We are what we think and we think through what we are. (V.G.)

What does it truly mean to be intelligent? It's a question that has puzzled philosophers, scientists, and everyday individuals alike for centuries. Despite our efforts to define intelligence, it remains an enigmatic concept, veiled in ambiguity and subjectivity.

Consider this simple experiment: ask the person next to you what they believe intelligence entails. Their response is likely to vary significantly from your own interpretation. This mundane exercise highlights the vast array of perspectives surrounding intelligence and underscores the challenge of establishing a definitive definition.

In this article, I embark on a quest to unravel the mysterious essence of intelligence. My goal is not to offer a novel definition—such an endeavor eludes even the most discerning minds—but rather to explore the diverse conceptualizations of intelligence. I seek to deepen our comprehension of this intricate phenomenon and foster receptivity to alternative viewpoints that may enrich our societal and professional landscapes.

Intelligence, at its core, defies facile classification. It encompasses a spectrum of cognitive faculties, ranging from problem-solving and critical thinking to creativity and emotional intelligence. Yet, attempts to encapsulate its essence often falter, constrained by the constraints of language and the biases inherent in human perception.

Moreover, intelligence is not a static attribute but a dynamic and multifaceted phenomenon that evolves over time and manifests diversely across individuals and cultures. What may constitute intelligent behavior in one context may be deemed irrelevant or foolish in another.

As I navigate the labyrinthine terrain of intelligence, I am reminded of philosopher Ludwig Wittgenstein's assertion: "The limits of my language mean the limits of my world." Indeed, our comprehension of intelligence is intricately linked to the language we employ to elucidate it, yet language alone cannot fully capture its complexity.

In our quest to comprehend intelligence, we must embrace ambiguity and uncertainty, acknowledging that our pursuit of knowledge entails posing insightful questions as much as discovering definitive answers. By embracing diverse perspectives and engaging in meaningful discourse, we can begin to glimpse the rich tapestry of human intelligence and its profound ramifications for our lives and society at large.

Let us embark on this voyage together, not in search of a singular truth, but in celebration of the boundless diversity and intricacy of the human mind. For it is through our collective exploration of intelligence that we may uncover new realms of thought and potential, enriching our understanding of ourselves and the world we inhabit.


PART 1: a concept that we all use and no one knows how to define

Defining intelligence is akin to navigating through a labyrinth of perspectives, each offering a glimpse into the multifaceted nature of this elusive concept. Yet, amidst the plethora of definitions that abound, we find ourselves grappling with the fundamental question: What does it truly mean to be intelligent?

One commonly cited definition posits that intelligence is the ability to solve problems. While this characterization captures one facet of intelligence—problem-solving—it fails to encompass the full spectrum of intelligent behaviors that humans and other beings exhibit. Indeed, intelligence extends beyond mere problem-solving to encompass a broader capacity: the ability to behave effectively in new and unfamiliar situations.

But is intelligence merely an academic pursuit, relegated to the realm of abstract theorizing? Far from it. Understanding the nature of intelligence is paramount, for it allows us to recognize and appreciate the diverse forms of intelligence that exist within and beyond our own species. By acknowledging the intelligence inherent in other beings, we open ourselves to the possibility of harnessing and enhancing the resources that they offer.

Yet, current definitions of intelligence often fall short, constrained by anthropocentric biases that tether our understanding to human-centric perspectives. We equate intelligence with traits and capabilities that are uniquely human, such as language proficiency, tool usage, and artistic expression. However, upon closer examination, we discover that many non-human beings exhibit similar behaviors, challenging the notion of human exceptionalism.

Moreover, our tendency to measure intelligence based on the complexity of behaviors overlooks the intrinsic value of all forms of intelligence, regardless of their manifestation. Intelligence is not a relative concept contingent upon the ability to perform certain tasks; rather, it is an absolute quality that transcends species boundaries.

The anthropocentrism inherent in our definitions of intelligence blinds us to the rich tapestry of intelligences that permeate the natural world. It leads us to overlook the remarkable cognitive abilities of non-human beings and to impose our own cognitive framework upon the universe at large.

In essence, the quest to define intelligence is not merely an intellectual exercise but a profound journey of self-discovery. It challenges us to confront our preconceptions and biases, inviting us to embrace a more inclusive and expansive understanding of intelligence—one that celebrates the diversity and complexity of life itself.

We are the measure of what exists and the frame through which we interpret everything.


PART 2: The limits of the anthropocentric definition of intelligence

The anecdote of Nello Cristianini's cat and Carl Sagan's message to potential extraterrestrial beings serves as a poignant illustration of the limitations imposed by our anthropocentric perspectives. Despite our assumption that any intelligent being, whether terrestrial or extraterrestrial, should be able to comprehend the message, the cat's indifferent response underscores the fallacy of our human-centric expectations.

In crafting the message aboard the Pioneer probes, Sagan and his team operated under the assumption that any intelligent entity capable of intercepting the probe would possess a level of cognitive sophistication comparable to our own. However, the cat's lack of interest in—or perhaps comprehension of—the message calls this assumption into question.

The cat's indifference highlights the inherent biases embedded within our definitions of intelligence. We tend to equate intelligence with human-like cognitive abilities, such as language comprehension and abstract reasoning, thereby overlooking the diverse forms of intelligence that exist across the spectrum of life. Just as the cat's cognitive processes differ from our own, so too may the cognitive processes of extraterrestrial beings diverge from our expectations.

Moreover, the cat's behavior underscores the importance of humility in our quest to understand intelligence. Rather than imposing our own cognitive framework onto other beings, we must approach the study of intelligence with an open mind, acknowledging that intelligence may manifest in myriad ways that defy our conventional understanding.

Ultimately, the tale of Nello Cristianini's cat prompts us to reconsider our assumptions about intelligence and to embrace a more inclusive and nuanced perspective—one that recognizes the inherent value of all forms of intelligence, whether human, feline, or extraterrestrial. In doing so, we may come to appreciate the richness and diversity of intelligence in the universe, transcending the confines of our anthropocentric worldview.

Carl Sagan holding the Pioneer plaque. Credit: NASA

The "Cristianini's cat test" offers a thought-provoking perspective on the nature of intelligence and challenges our assumptions about it. By demonstrating that even a creature evolved on Earth may not comprehend human-designed messages, it prompts us to reconsider our anthropocentric view of intelligence and recognize the diversity of cognitive processes in the natural world.

The concept of "alien intelligences" extends beyond extraterrestrial beings to encompass the varied cognitive capacities found on our planet. From the complex communication systems of social insects to the problem-solving abilities of cephalopods, non-human intelligences abound in nature. Embracing these diverse forms of intelligence requires us to confront our biases about human superiority and adopt a more inclusive understanding of intelligence.

Sagan's message, though intended for potential extraterrestrial recipients, also reflects our human-centric worldview. By crafting a message based on our own cognitive framework, we inadvertently project our biases onto hypothetical alien intelligences. However, the existence of non-human intelligences challenges this anthropocentrism and urges us to reconsider our place in the universe.

The notion of "universal intelligence" seeks to impose human-like cognitive abilities on all forms of intelligence, terrestrial or extraterrestrial. Yet, this overlooks the unique cognitive adaptations of different species and fails to appreciate the richness of intelligence in all its forms. Instead of imposing a singular definition of intelligence, we should celebrate the diversity of cognitive processes found in nature.

Our understanding of intelligence is limited by our cognitive constraints and linguistic biases. The hierarchical structures prevalent in human society reflect our anthropocentric view of intelligence, yet they fail to capture the decentralized and collective intelligence observed in many non-human species. Recognizing and respecting the diversity of intelligence in the natural world is essential for fostering a more inclusive and holistic understanding of cognition.


PART 3: Alien intelligences

The assumption that aliens possess inherently superior intelligence due to their technological prowess or fictional portrayals stems more from anthropocentric bias and speculative fiction than from empirical evidence. This notion overlooks the vast diversity of cognitive capacities that may exist across different species and environmental contexts.

Intelligence is not solely defined by problem-solving abilities or technological achievements. It encompasses a broad spectrum of cognitive processes, adaptive behaviors, and interactions with the environment. While humans may excel in certain domains like language and tool use, other species demonstrate remarkable intelligence in areas such as social cooperation, navigation, and sensory perception.

Comparing intelligence across species proves challenging due to variations in sensory modalities, ecological niches, and evolutionary trajectories. Each organism has evolved unique cognitive adaptations suited to its specific environment and survival requirements. Therefore, assessing non-human intelligence through human-centric criteria disregards the diverse ways in which intelligence manifests in nature.

Acknowledging the presence of intelligence among various species does not necessitate ranking them on a hierarchical scale with human intelligence at the summit. Instead, it involves appreciating the complexity and diversity of cognitive processes observed across different life forms. Embracing a more inclusive understanding of intelligence allows us to better comprehend the richness of the natural world and our interconnectedness within it.


PART 4: The multidimensionality of intelligence

Teleological intelligences - Teleological intelligences refer to the concept that intelligence is inherently goal-oriented, always serving the purpose of achieving objectives within a dynamic environment. This perspective broadens the scope of intelligence beyond problem-solving to include reflexes, planning, reasoning, and learning as the means through which agents adapt to their surroundings and ensure the survival of their species.
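
Purely as an illustration of this goal-oriented view (nothing here comes from the article), a teleological agent can be sketched as a loop that senses its environment, acts toward an objective, and adapts from feedback. The toy environment, the numbers, and the function name below are invented for the sketch.

```python
# Hypothetical sketch of a teleological (goal-directed) agent: sense, act
# toward a goal, adapt from feedback. The "environment" is a single number.

def run_agent(goal: float = 0.0, start: float = 7.0, steps: int = 20) -> float:
    """Nudge an internal state toward `goal`, adapting a crude step size."""
    state = start
    step_size = 1.0                       # the one "policy" parameter the agent adapts
    for _ in range(steps):
        error = goal - state              # sense: distance from the objective
        state += 0.5 * step_size * error  # act: move toward the goal
        # learn: scale the step size with the remaining error (floor at 0.1)
        step_size = max(0.1, min(1.0, abs(error)))
    return state

if __name__ == "__main__":
    print(f"final state, close to the goal: {run_agent():.4f}")
```

However crude, the loop captures the claim: the behavior exists to close the gap between the current state and the goal, and it adjusts itself along the way.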

The hypercube of intelligence - Imagine intelligence as a sphere suspended in an empty room, its position defined by coordinates along three spatial dimensions (x, y, z). Now extend those three axes to arbitrarily many: a hypercube whose dimensions each represent a facet of intelligence, with different cognitive profiles occupying different points within this conceptual space. A squid's intelligence, for instance, may occupy a region inaccessible to human intelligence because of differences in communication modalities, such as the absence of color-based communication in human language. No single dimension takes precedence over the others; each contributes to the overall complexity of intelligence, much as the different aspects of an athlete's preparation contribute to their performance. A minimal sketch of this idea follows.
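
To make the geometric metaphor concrete, here is a minimal, purely illustrative sketch in Python. The dimensions, species, and scores are invented for the example, not taken from the article or from any measurement; the point is only that, with several dimensions, most profiles are incomparable and no single ranking emerges.

```python
# Illustrative sketch of the "hypercube of intelligence": each profile is a
# point in a hypothetical multi-dimensional cognitive space. The dimensions
# and scores are invented for illustration only.
from dataclasses import dataclass

DIMENSIONS = ("problem_solving", "social_cooperation",
              "sensory_perception", "communication")

@dataclass
class IntelligenceProfile:
    name: str
    scores: dict[str, float]  # dimension -> hypothetical value in [0, 1]

def dominates(a: IntelligenceProfile, b: IntelligenceProfile) -> bool:
    """True only if `a` scores at least as high as `b` on every dimension.

    With more than one dimension most pairs are incomparable: neither
    profile dominates the other, so no total ranking exists."""
    return all(a.scores[d] >= b.scores[d] for d in DIMENSIONS)

human = IntelligenceProfile("human", {
    "problem_solving": 0.9, "social_cooperation": 0.8,
    "sensory_perception": 0.5, "communication": 0.9})
squid = IntelligenceProfile("squid", {
    "problem_solving": 0.6, "social_cooperation": 0.3,
    "sensory_perception": 0.9, "communication": 0.7})

if __name__ == "__main__":
    for a, b in ((human, squid), (squid, human)):
        print(f"{a.name} dominates {b.name}: {dominates(a, b)}")
    # Both lines print False: the two profiles occupy different regions of
    # the space, and neither one contains the other.
```

Running the sketch prints False in both directions, which is the whole point of the metaphor: positions in the space differ without one enclosing the other.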

Idiot intelligences do not exist - The notion of "idiot intelligences" is a misnomer that undermines the inherent value and diversity of cognitive abilities across species. Human intelligence cannot be deemed superior to other forms of intelligence, as each serves a unique and indispensable role in its ecological niche. Comparing intelligences across species is like comparing pears and apples in an intelligence test. Forms of intelligence often dismissed as "idiotic," such as that exhibited by plants, challenge our anthropocentric biases and underscore the effectiveness of diverse adaptive strategies in ensuring survival and proliferation.

Aliens have always been here - Furthermore, the search for extraterrestrial intelligences need not extend beyond our own planet, as Earth is teeming with examples of diverse cognitive capacities. Stefano Mancuso's research on plant intelligence illustrates the complexity inherent in different forms of intelligence, expanding our understanding beyond conventional human-centric perspectives. By acknowledging the intelligence present in non-human species, we gain insight into the multifaceted nature of intelligence and its diverse manifestations across the natural world.


PART 5: Emergent properties and collective genius

The importance of understanding what we are talking about - Understanding the parameters of the intelligent universe enables us to identify its inhabitants and comprehend the diverse ways in which intelligence is expressed through emergent behaviors and properties. This understanding is crucial for developing interaction models that foster what Linda Hill, Greg Brandeau, Emily Truelove, and Kent Lineback refer to as "Collective Genius."

Emergent properties and collective genius - Collective genius is a captivating concept that arises when a group of individuals collaborates to produce outcomes surpassing the capabilities of any individual member, thereby manifesting emergent properties. Emergent properties are phenomena that emerge unexpectedly within complex systems and cannot be explained solely by aggregating the abilities of individual members. In essence, the whole transcends the sum of its parts, showcasing the power of collective collaboration.
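
As a purely illustrative aside (not drawn from the article or from the authors' research), a classic toy model of emergence is Conway's Game of Life: each cell follows trivial local rules, yet patterns such as the "glider" travel across the grid, a behavior that belongs to the configuration as a whole rather than to any single cell.

```python
# Minimal Conway's Game of Life step, used only as an analogy for emergence:
# simple local rules, collective behavior that no single cell exhibits.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance one generation on an unbounded grid of live-cell coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbors,
    # or with exactly 2 if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for generation in range(5):
        print(f"generation {generation}: {sorted(cells)}")
        cells = step(cells)
    # After four generations the same shape reappears, shifted diagonally by
    # one cell: a property of the whole pattern, not of any individual cell.
```

Aggregating the individual rules does not predict the glider; only the interaction of the parts produces it, which is precisely what "the whole transcends the sum of its parts" means.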

Factors that contribute to collective genius - Several factors contribute to the emergence of collective genius, as highlighted by the research of Hill, Brandeau, Truelove, and Lineback. These factors include the diversity of skills and knowledge within the group, effective communication among members, and a collaborative environment that fosters mutual learning. The diversity of skills and knowledge fuels creativity and innovation within the group, while effective communication ensures that all ideas are shared and valued without bias. Moreover, a collaborative environment encourages mutual learning, enabling members to build upon each other's contributions and insights.


Deduction

It seems that our fixation on our own intelligence has blinded us to the existence and potential of other intelligences in the world. Despite our efforts to create artificial intelligences, we remain narrowly focused on human-like traits and behaviors. However, all living beings exhibit teleological and autonomous behaviors driven by innate goals and sensory information.

Alien intelligences, while different from our own, possess the capacity to thrive in dynamic environments through decision-making, strategic planning, reasoning, and learning—capabilities akin to our own. Therefore, it is essential to acknowledge that these alternative intelligences can offer valuable insights and inspiration for generating new ideas and forms of social organization. By expanding our perspective to embrace the diversity of intelligences in the world, we open ourselves to a wealth of innovative possibilities and enrich our understanding of intelligence as a whole.

Article source: Linkedin Article by Vincenzo Gioia
