
About

We make innovation accessible by removing the internal and external barriers companies face: our network reduces the time needed to access new technologies, and we eliminate the entry costs of introducing new digital technologies into the business.

Partner Network

We firmly believe in the power of sharing.
Our partner network includes Tech Companies, Innovative Startups, Research Centers, Trade Associations and Venture Capital.
Collaborating with us means gaining access to a wide range of top-level skills and know-how that will help your business take a concrete leap in quality.

Embracing digital transformation doesn’t simply mean adopting new technologies; it requires a profound rethinking of how organizations operate. This process encompasses people, culture, and strategies, integrating them into a coherent and unified vision.

Digital transformation is not just a technical exercise but a journey that demands flexibility and adaptability. Every technological choice must align with business objectives, keeping values like sustainability and ethics at the forefront. This approach not only enables companies to tackle today’s challenges but also prepares them to thrive in the future.

Frontiere: a human and transparent approach

Our goal is not merely to deliver technological solutions but to help companies find a balance between innovation and sustainability. We enjoy working side by side with businesses, understanding their needs, and guiding them through a journey that has a tangible impact on their operations.

Transparency is a cornerstone of our work. Every step, decision, and development is shared: we want our partners to maintain full autonomy and control over the solutions we create together. This approach stems from the belief that real change is built on trust, not dependencies or constraints.

People are always at the center. For us, innovation means creating solutions that are not only functional but also accessible and respectful of the users’ needs. It’s a process that balances efficiency and empathy, technology and identity. Our goal is not just to improve but to do so without losing sight of the human context every innovation must serve.

Looking ahead: a conscious digital future

Technology is a powerful tool, but it is not an end in itself. We view digital as an opportunity to overcome limits and boundaries, opening new pathways for both businesses and society. Every project is designed to be useful, sustainable, and ethical because we believe progress only makes sense if it improves people’s lives.

It’s not just about innovating but doing so responsibly. We want our solutions to have a positive impact—not just for those who adopt them but for the entire ecosystem in which they operate. This is our vision of a digital future: a space that offers real possibilities without losing sight of fundamental values.

An inspiring experience

One collaboration that left a significant impression on us occurred a few years ago, just before the COVID period, with a small manufacturing company grappling with the challenges of digitalization. The initial goal was to optimize production processes with new technologies, but early discussions revealed a more complex reality: employees were afraid of losing their roles to automation.

We realized that introducing innovative tools wasn’t enough. Building a path of trust and education was essential. Together with the company, we organized workshops to explain the value of the new solutions and demonstrate how they could enhance everyone’s work, not replace it. Gradually, enthusiasm replaced fear, and it was extraordinary to witness.

Today, that company not only successfully utilizes new technologies but also has a more motivated and engaged team of employees. This experience reminds us daily how vital it is to put people at the center because true change starts there.

Conclusion

Digital transformation is never an easy path, but it can be an opportunity to create something meaningful. At Frontiere, we strive to be an ally in this journey—not just a provider but a partner, sharing ideas, challenges, and achievements.

The future isn’t built alone, but together. We’re here to make it happen.

Get in touch with us

At the vibrant intersection of art and technology, Re:Humanism stands as a pioneering project, exploring the impact of artificial intelligence (AI) on contemporary art. Founded in 2018, this biennial award goes beyond celebrating the creative potential of new technologies, delving deeply into the ethical, social, and cultural questions they raise.

We had the privilege of interviewing Daniela Cotimbo, art historian and curator, who guided us through the genesis, evolution, and future of Re:Humanism. Below is the full interview, where Daniela shares her vision, highlights of past editions, and the challenges awaiting the next chapter of this unique initiative.

"Re:Humanism brings art and artificial intelligence into a unique dialogue. What was the initial idea behind the project, and how has it evolved over time?"

Re:Humanism was born in 2018 from a collaboration between me, an art historian and curator, and Alfredo Adamo, now CEO of Frontiere, with the aim of offering reflections on the advancements of techno-science, particularly AI. The first edition of the prize was a success, with significant participation from artists and audiences and thought-provoking discussions. This encouraged us to continue, establishing the prize as a biennial event and creating a cultural association.

Today, Re:Humanism seeks to maintain its focus even between prize editions, organizing activities such as exhibitions, performances, debates, workshops, and the production of publications.

"Looking back at past editions of the Re:Humanism Art Prize, which projects or works impressed you the most, and why? How did these pieces embody the spirit of the prize?"

It’s always difficult to single out specific projects from a collection where every piece was chosen to enhance the initiative’s goals. That said, I’ll highlight a few projects that have sparked new perspectives and collaborations.

In the first edition, Adversarial feelings by Lorem comes to mind—a project that, even in 2018-2019, experimented with generative neural networks to explore emotional states as audiovisual flows. This was Lorem’s first project using AI, and it led to further collaborations, including a publication with his publishing house, Krisis Publishing.

The second edition, held in the immediate post-pandemic period, focused heavily on our relationship with nature and other living species. A standout project was Epitaphs for the Human Artist by Numero Cromatico, a multidisciplinary research center in neuroaesthetics. Their work included an installation and a publication of poems generated by a neural network trained to create epitaphs—poetic forms presented alongside a study on primary colors, highlighting the evocative power of "alien" poetry.

From the third edition, I’d mention Ai Love, Ghosts and Uncanny Valleys <3. I Broke up with my Ai and will never download them again by Mara Oscar Cassiani. This project offered a poignant reflection on toxic relationships mediated by social platforms, affective chatbots, and customizable avatars. The participatory installation allowed visitors to interact with discarded, sexualized avatars through cushions on a fake grass setup, addressing digital exploitation while also offering resources to combat online abuse.

These are just a few of the many projects that have enriched our experience over the years. Some are even part of the collection hosted at Frontiere’s headquarters in Rome.

"The Re:Humanism Art Prize invites contemporary artists to engage with AI-related themes. What can we expect this year? What is the theme for the 2025 edition? What are the main selection criteria, and what kind of dialogue do you aim to foster with the public through these creations? When and where will the selected works be exhibited?"

Compared to previous years, this edition of Re:Humanism will take a more specific approach, delving into some of the issues surrounding new artificial intelligence models, commonly known as LLMs. Artists will be invited to reflect on their creative potential, explore how AI can be utilized from feminist and decolonial perspectives, and address the theme of sustainability.

It’s a significant challenge that, in our view, aligns with the growing accessibility of tools enabling everyone to test both the potential and risks of this technology.

The selection process will remain the same: a mixed jury composed of experts in contemporary art, media art, and advanced technologies will evaluate proposals based on the quality of the submission, the robustness of the project and the artist, originality, and thematic sensitivity.

There will also be some new developments regarding the awards, which we are not yet ready to announce. However, we can confirm that the exhibition will be hosted by Fondazione Pastificio Cerere and will take place in June 2025.

"How is AI redefining the role of artists and the creative process? What challenges and opportunities do you see for the future of art in relation to technology?"

I see challenges for everyone, but I see art as an opportunity to face them! I’m quite critical of the debate that fascinates many, namely the interference between human artists and AI as a creative entity.

AI is undoubtedly changing how we conceive and produce art—and, as always with new technologies, this is both good and bad. On the positive side, AI expands expressive possibilities, even for those who might not be entirely comfortable with visual or digital media.

On the other hand, it’s often forgotten that behind the magic of a prompt lies a learning and generative process that operates with autonomous, recursive dynamics, often marked by biases, errors, and other dysfunctions. Not settling for results and learning to observe them critically is essential for the proper use of AI in the artistic field, and this is the invitation we extend to all artists.

Another important point: reflecting on AI has taught me that the future that matters most to me isn’t the one I can predict, but the one I can build.

"Re:Humanism tackles ethical and social issues related to AI. What are the most pressing implications that art can help us better understand?"

We are living in a rather dark era, where disinformation and echo chambers are increasingly driving polarization and radicalization of opinions.

This phenomenon is rooted in a burdensome history of discrimination that has, for years, shaped technological advancements.  

Techno-scientific progress is not neutral; it is created by people who often design from asymmetric perspectives and objectives. While some issues related to AI are not new, it continues to present significant problems, including discrimination against women, non-conforming individuals, and people of diverse ethnic and social backgrounds. These challenges become even more dangerous when we lose the ability to trace their origins.  

Advanced interfaces of large language models (LLMs) allow us to generate increasingly sophisticated and realistic content using natural language. The problem will arise when these tools are fully integrated into search engines, potentially excluding non-repetitive but culturally diverse results.  

We also know that many big tech companies that pledged to meet sustainability goals—regarding energy consumption, resource use, and CO2 emissions—are failing to achieve these targets, largely due to the demands of artificial intelligence.

This raises an urgent question: how can we design technologies that do not further endanger the planet's health and well-being but instead provide solutions to mitigate the damage already done?  

How can art help us? Artists have never stopped questioning these issues, although today they are asked to pay greater attention to the context surrounding them. Advanced technologies represent new forms of language that shape our experience, and major themes like time, space, intimacy, relationships, and memory have evolved in response to these discoveries.  

Artists can amplify this perspective because they have the unique ability to delve into the heart of these issues, overturn established viewpoints, and, more often than not, act as outstanding inventors.  

The importance of an ongoing dialogue

Re:Humanism is more than an award—it’s a platform for questioning the potential and limits of AI through the critical lens of contemporary art. The challenges faced by artists today are not just about creative experimentation but also about building a more inclusive, sustainable, and conscious technological future.

To learn more about Re:Humanism and the 2025 edition, follow our blog and official project channels. Stay connected to explore how art and technology can come together to redefine our perception of the world.


Digital Transformation is one of the most discussed topics of our time, a phenomenon that has evolved conceptually over decades. From the initial attempts at digitization in the 1960s to the Web 2.0 era and the current widespread adoption of advanced technologies such as artificial intelligence (AI) and blockchain, the pillars that drive this transformation have adapted and expanded to meet the changing needs of organizations.

This article explores, on one hand, the history of digital transformation pillars, analyzing their evolution and the theories shaping their development. On the other hand, it delves into the essential pillars driving today’s successful transformations.

A historical overview of Digital Transformation Pillars

1960s–70s: digitalizing basic operations

In the 1960s and 1970s, digital transformation was synonymous with automation and the computerization of core business processes. Companies replaced manual ledgers with computer systems, often relying on large mainframes.

A landmark example is the IBM System/360, launched in 1964, which allowed businesses to standardize digital processes at scale. The key pillars of this era were automation and the computerization of core business processes.

Frederick P. Brooks Jr., in The Mythical Man-Month (1975), highlighted the complexities of managing large-scale technology projects, laying the groundwork for more deliberate approaches to digital transformations.

1990s: Internet and digital models

The advent of the internet in the 1990s sparked a new wave of innovation; the key pillars of this era extended digitization beyond internal processes to interactions with customers and partners.

Clayton Christensen’s concept of “disruption” in The Innovator’s Dilemma (1997) emphasized the necessity of embracing innovative technologies to stay competitive.

2000s: mobility and cloud computing

The rise of smartphones and cloud technologies enriched the pillars of digital transformation with mobility and on-demand access to computing resources.

Nicholas Carr’s Does IT Matter? (2003) raised the issue of how IT could lose its strategic value if not implemented distinctively, underscoring the importance of tailored solutions.

2010 to present: data and Artificial Intelligence

In recent years, the focus has shifted to leveraging data strategically and adopting emerging technologies.

McKinsey highlights that only 30% of digital transformations achieve tangible results, emphasizing the need for a clear vision and well-defined pillars.

Today’s Pillars of Digital Transformation

1. Leadership and strategy

Digital transformation demands strong leadership and a well-defined strategy. Leaders must identify digital opportunities and translate them into actionable business objectives.

An interesting example is Starbucks, which, under the leadership of Kevin Johnson, introduced a digitalization strategy integrating mobile apps, digital payments, and data-driven personalization, enhancing customer experience and increasing loyalty.

2. Talent and organizational culture

People are at the heart of digital transformation. A culture that fosters continuous learning, collaboration, and openness to change is crucial.

According to a Deloitte study, companies that invest in employee training are 37% more likely to successfully complete their digital transformation.
Take the case of Adobe, which shifted its business model from traditional software licenses to a cloud-based subscription system. This transition was accompanied by significant investment in employee training and the development of a customer-oriented culture.

3. Data and Artificial Intelligence

Data underpins modern strategic decisions. Companies leveraging advanced analytics and AI can anticipate market trends and respond to customer needs more effectively.

A significant example is Heineken, which leverages data analysis to optimize advertising campaigns and logistics, improving product distribution based on local demand.

4. Agility and innovation

The ability to adapt quickly is vital in today’s business environment. Agile methodologies and design thinking empower companies to experiment with new ideas and bring solutions to market rapidly.

For example, Tesla adopts an agile approach to introduce innovations in its vehicles at record speed, often outperforming traditional competitors.

5. Sustainability and social impact

Today, sustainability is an essential pillar of digital transformation. Companies cannot overlook the environmental and social impact of their operations.

Patagonia is a shining example: it uses digital technologies to optimize its supply chain and reduce waste, demonstrating how innovation and sustainability can go hand in hand. Another noteworthy example is IKEA, which has invested in technologies to optimize energy management in its stores and improve material traceability, ensuring a more sustainable lifecycle for its products.

Conclusion

Digital transformation is an ongoing journey, driven by pillars that have evolved to address the challenges of each era. From the operational automation of the 1960s to today’s data-driven ecosystems, the pillars reflect a shift toward holistic approaches that prioritize people, processes, and societal impact.

In the modern era, the pillars of digital transformation go beyond technology to encompass leadership, culture, innovation, and sustainability. Organizations mastering these elements will not only adapt to change but thrive in an ever-evolving world.


In today's rapidly evolving business environment, addressing the challenges of digital transformation requires a clear strategy and a structured method. Frontiere has developed a three-phase approach — Assessment, Strategic Planning, and Execution — which not only manages the complexities of change effectively but also aligns with global best practices in consulting and business transformation. This approach is not just a statement of intent but a process validated by academic studies and market insights that confirm its effectiveness.

1. Assessment: the foundation of accurate preliminary analysis

Every transformation journey begins with a thorough analysis of the organization. The goal is to map workflows, analyze existing systems, and identify opportunities for improvement. This phase, often underestimated, forms the bedrock of success for any strategic intervention.

According to McKinsey’s report "The Key to Digital Transformation Success", a detailed initial analysis allows companies to establish a clear starting point, highlighting gaps to address and areas of excellence to leverage. Similarly, Gartner’s "Digital Transformation Playbook" emphasizes that companies conducting rigorous assessments are 35% more likely to achieve tangible results compared to those that overlook this phase.

Our approach is rooted in this principle: analyzing, understanding, and mapping internal dynamics to avoid generic interventions and instead deliver solutions tailored to the client’s specific needs.

2. Strategic planning: crafting a tailored roadmap

Following the assessment, we focus on defining a strategic roadmap centered on concrete objectives and customized solutions. This process goes beyond merely proposing technologies; it integrates operational processes and business goals into a feasible and sustainable plan.

Academic contributions in this area are extensive. Harvard Business Review, in its article "Why Strategy Execution Unravels—and What to Do About It", asserts that clear priorities and a well-structured plan are critical to overcoming operational challenges and ensuring success. Furthermore, MIT Sloan Management Review's report "The Nine Elements of Digital Transformation" highlights that a strategic roadmap helps optimize resources and mitigate risks effectively.

Our team translates these best practices into tangible results. For instance, in a recent engagement with an Italian manufacturing company, implementing a strategic plan led to a 30% reduction in production times and improved operational efficiency through automation and predictive analytics solutions.

3. Execution: from idea to concrete impact

The execution phase is the critical moment where planned strategies are put into practice. Our organization stands out for its pragmatic approach, which doesn’t stop at theoretical solutions but aims to achieve measurable outcomes, ensuring that every recommendation is applied effectively and sustainably.

PwC, in its study "Success Factors in Digital Transformation Projects", states that implementation is the most crucial stage of digital transformation. The ability to execute a strategy effectively defines the boundary between success and failure. Similarly, Accenture’s research "Getting Unstuck: Breaking Through the Barriers to Transformation Success" highlights that a focus on measurable impact distinguishes successful transformation projects.

A practical example of our execution efficiency is our work with a retail chain in Italy, which experienced a 50% increase in e-commerce traffic and saw 35% of online orders placed for in-store pickup, thanks to a seamless integration between physical and digital channels.

Validated by global best practices

Frontiere’s structured three-phase approach aligns closely with methodologies adopted by global leaders like Amazon Web Services (AWS) and Deloitte, which use similar models to guide business transformation. AWS, for example, follows a framework structured around Assess, Mobilize, Execute, which mirrors our process, while Deloitte employs a model based on analysis, strategic planning, and implementation.

These parallels demonstrate that Frontiere’s approach is not only innovative but also consistent with globally accepted best practices, reinforcing the validity of its solutions and the value it delivers to clients.

What sets us apart from these giants, however, is our agile structure, which enables us to respond to clients’ needs more effectively, flexibly, and efficiently. This agility reduces response times, allows solutions to be customized further, and ensures constant engagement with businesses, delivering results that truly address their unique requirements.

Conclusion: creating value through a proven methodology

The strategic approach we’ve been discussing is not just an operational method but a structured, results-oriented pathway designed to address the challenges of digital transformation with precision and vision. The combination of accurate assessment, tailored strategic planning, and effective execution ensures that businesses can not only adapt to change but thrive in an ever-evolving landscape.

With the support of academic and market evidence, it is clear that this method is not merely an option but a necessity for those looking to build success on solid, sustainable foundations. Frontiere, with its targeted and proven approach, stands as a trusted partner to guide organizations into the future.


In the rapidly evolving landscape of Artificial Intelligence (AI), 2024 marks a pivotal moment for the governance of this revolutionary technology. The announcement of Huderia, an innovative tool for assessing the risks and impacts of AI systems, underscores the commitment of the Council of Europe’s Committee on Artificial Intelligence (CAI) to responsible and transparent regulation.

What is Huderia?

Huderia, officially unveiled on December 11, 2024, is a tool designed to guide governments, companies, and organizations in assessing risks associated with the use of AI systems. This framework builds on the fundamental principles of the Framework Convention on AI, adopted by the Council of Europe in May 2024, emphasizing the importance of ensuring that AI is developed and used in respect of human rights, democracy, and the rule of law.

Huderia offers a systematic approach to identifying risks to human rights, evaluating the social and economic impact of AI technologies, and ensuring transparency and accountability in decision-making processes.

Why is Huderia important?

The introduction of Huderia is a significant step toward more robust and inclusive AI governance. In a global context where technology is often implemented without adequate oversight, Huderia provides a structured framework to mitigate risks and maximize AI’s benefits.

Some key aspects of Huderia's importance include the following:

  1. Protecting human rights: Huderia places fundamental rights at the core, aiming to prevent discrimination, privacy violations, and other negative impacts.
  2. Building trust: By offering clear and transparent guidance, Huderia helps build trust among citizens, institutions, and AI developers.
  3. Aligning with European values: The tool reflects Europe’s commitment to responsible innovation based on ethical and regulatory principles.

The role of CAI in 2024

Huderia’s launch is just one of many milestones achieved by the Artificial Intelligence Committee throughout the year. Under the Council of Europe’s guidance, the CAI has worked on multiple fronts to ensure effective AI governance:

  1. Adopting the Framework Convention on AI, which establishes principles and guidelines for member states to promote harmonized, rights-oriented regulation.
  2. Fostering international cooperation, facilitating dialogue among governments, international organizations, and tech companies to address global AI challenges.
  3. Supporting practical tools like Huderia, creating operational guidelines and implementation frameworks to assist member states in adhering to the convention.
  4. Launching initiatives to educate citizens and professionals about AI’s risks and opportunities.

Our contribution: Frontiere and a shared vision

As the Frontiere team, we have followed the work of CAI with great interest and engagement, recognizing in Huderia an approach that deeply resonates with our vision, which is also central to the associations we co-lead: Re:Humanism and Sloweb. As an entity committed to developing responsible technological solutions, we share with CAI the goal of balancing innovation with respect for human rights.

Huderia inspires us to continue developing tools and frameworks that integrate ethical principles, sustainability, and transparency. We believe our approach, which focuses on identifying risks and promoting trust in decision-making processes, complements the framework outlined by CAI.

Our vision is to build a future where AI’s benefits are equitably distributed and accessible to all, helping bridge the digital divide and addressing the ethical and social challenges posed by technology. Collaboration with institutional and private stakeholders is essential to realizing this vision, ensuring that technology remains a driver of equitable and sustainable progress.

Looking Ahead

Huderia represents a turning point in AI governance, and we are eager to see how it will shape the work of global stakeholders and what the next steps will be toward more responsible and inclusive AI governance. At Frontiere, we will continue to closely monitor these developments, contributing our approach and vision to the global dialogue on ethical and sustainable technology.


In this article, I address my concerns regarding the implications of uncertainty in decision support systems, with a particular focus on its effects on AI-based decision support systems (AI-DSS). The genesis of this article stems from the reflections shared in a previous piece, where I explore the concept of uncertainty beyond the realm of data quality (here is the link).

Decision Support Systems Overview - A decision support system (DSS) is a software system designed to collect, organize, and analyze company data to provide valuable information for decision-making or to support management and planning activities. Essentially, a DSS aids individuals tasked with making strategic decisions in complex contexts where determining the optimal choice or decision-making strategy is challenging. Its reliability hinges on the algorithms used to process data and on the quality of the dataset, which together constitute a simplified model of reality built from the data available to the decision-maker.
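As a minimal sketch of the idea above (all names and numbers are invented for illustration, not any vendor's implementation), the core of a DSS can be reduced to ranking alternatives by a weighted combination of criteria drawn from company data:

```python
# Hypothetical DSS core: score each alternative as a weighted sum of
# normalized criteria, then rank the alternatives for the decision-maker.
# All suppliers, criteria, and weights below are invented for illustration.

def score_alternatives(alternatives, weights):
    """Rank alternatives by a weighted sum of their criterion values.

    alternatives: dict mapping name -> dict of criterion -> value in [0, 1]
    weights:      dict mapping criterion -> importance weight (sums to 1)
    """
    scores = {
        name: sum(weights[c] * v for c, v in criteria.items())
        for name, criteria in alternatives.items()
    }
    # Highest score first: a "suggested" decision, not a guaranteed optimum.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = score_alternatives(
    {
        "supplier_a": {"cost": 0.9, "quality": 0.6, "delivery": 0.7},
        "supplier_b": {"cost": 0.5, "quality": 0.9, "delivery": 0.8},
    },
    weights={"cost": 0.5, "quality": 0.3, "delivery": 0.2},
)
print(ranking[0][0])  # the top-ranked alternative
```

The ranking is only as trustworthy as the weights and the data behind it, which is precisely the reliability concern raised here: the scores are a simplified model of reality, not reality itself.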

AI-based Decision Support Systems - In recent years, DSS software systems have evolved with the integration of artificial intelligence (AI) to enhance the reliability of the representation model of reality upon which calculations are based. AI autonomously constructs the representation model of the analyzed universe (AI model) solely from the dataset provided by analysts.

The Importance of Dataset Selection - An AI develops its representation model based solely on the dataset designed by analysts. However, since the real world surpasses our ability to synthesize, analysts strive to strike a balance between capturing enough dimensions to represent macro dynamics while avoiding complexity that hampers result verification.

Emerging Doubts - Despite meticulous dataset design, it remains one of many possible representations of the real-world environment. As one form of uncertainty is linked to environmental complexity, doubts arise regarding whether the dataset itself is immune from environmental uncertainty. This concern transcends bias and addresses a potentially impactful yet less tangible aspect.

Consideration of Dimensions - Doubts arise concerning the selection of dimensions within the dataset and the uncertainty surrounding their influence on the AI model and processing outcomes. Unexpected interactions or interdependencies among dimensions could affect processing results, even those deemed marginal or independent.
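A toy example (numbers entirely invented) of the interaction effect described above: a dimension whose main effect looks marginal can still reshape outcomes once it interacts with another dimension, which is exactly why dropping "marginal" dimensions from a dataset is risky:

```python
# Invented data-generating process: x2 has a small main effect (0.1)
# but a strong interaction with x1 (2.0 * x1 * x2).

def outcome(x1, x2):
    return 1.0 * x1 + 0.1 * x2 + 2.0 * x1 * x2

# Judged only by its main effect, x2 seems negligible...
main_effect_of_x2 = outcome(0.0, 1.0) - outcome(0.0, 0.0)

# ...yet when x1 is active, toggling x2 shifts the outcome substantially.
effect_with_x1 = outcome(1.0, 1.0) - outcome(1.0, 0.0)

print(main_effect_of_x2, effect_with_x1)
```

An analyst who inspects x2 in isolation would be tempted to exclude it from the dataset, and the AI model trained on that dataset would silently misrepresent every case in which x1 is active.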

Artificial Intelligence Development Process - AI systems operate through neural networks trained for specific tasks, using deep learning. These networks employ layered structures in which each layer contributes to the final processing, giving them the ability to learn and solve complex problems autonomously. However, the nonlinear data processing within neural networks renders their inner workings opaque, making them resemble black boxes.
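To make the layering concrete, here is a minimal hand-written forward pass (weights invented purely for illustration): each layer is a linear map followed by a nonlinearity, and stacking such layers is what makes the overall input-output mapping hard to interpret term by term:

```python
import math

# Minimal two-layer network with invented weights. Each layer applies a
# linear combination followed by a tanh nonlinearity; the composition of
# nonlinear layers is well-defined but opaque: no single weight "explains"
# the output.

def layer(inputs, weights, biases):
    # One dense layer: weighted sum per neuron, then tanh.
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def network(x):
    h = layer(x, weights=[[0.5, -1.2], [0.8, 0.3]], biases=[0.1, -0.2])  # hidden layer
    y = layer(h, weights=[[1.5, -0.7]], biases=[0.05])                   # output layer
    return y[0]

print(network([1.0, 0.0]), network([0.0, 1.0]))
```

Even in this two-layer toy, tracing why one input yields a higher output than another requires unwinding every nonlinearity; in networks with millions of parameters that tracing becomes practically impossible, which is the black-box problem in miniature.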

Certainty of Results - The primary limitation of AI today lies not in computing power or data availability but in the structure of AI models. Without a comprehensive understanding of the context, caution is warranted when entrusting AI to identify solutions, as it may generate optimal solutions in contexts diverging from reality.

Article source: LinkedIn article by Vincenzo Gioia


Premise

I decided to write this article to bring order to the reflections and deductions I have developed in recent months regarding the concept of bias. The need to bring order arose from the confusion I felt when I noticed that the term bias had entered everyday language, sometimes taking on meanings that seemed ambiguous to me.

In writing this article I do not claim any certainty or absolute truth. Indeed, I write it to note down what I think about bias, and I do so, as always, publicly, because I trust it can become a useful opportunity for discussion with those who have the patience to read my reflections.

Before starting this reading, relax and repeat after me:

Without explainability, artificial intelligence is useless and dangerous

Introduction

In this article I talk about bias and, as always, I prefer to start from the definition of bias to which I refer in my reflections.

Biases are the manifestation of systematic distortions of judgment resulting from cognitive heuristics over which we have lost control, i.e. mental shortcuts that we use to simplify the decision-making process and that have been pushed to such a level of trivialization that they lose contact with the very reality from which they are generated, negatively impacting the decision-making model adopted by any intelligent agent, biological or artificial (Kahneman & Egan, 2011).

Biases can render any decision-making model ineffective. Even where we think we have built a decision-making model on bias-free heuristics, Tversky's studies demonstrate that biases play a fundamental role in the analysis of reality, producing consequences that are not necessarily detectable, or at least not detectable in the short term.

The awareness of the structural and structuring role that biases assume in heuristic decision-making processes paradoxically makes them a "false problem" of those processes. A heuristic model built on biases that are admissible and functional to its purpose is not thereby free of bias. A decision-making process in which no dangerous, macroscopic distortions of reality appear leads me to think that the biases present in the model are invisible to our analysis yet effective in influencing the decision. A well-orchestrated constellation of biases acts on the decision-making process like the small weights used to balance car wheels: beyond a certain threshold, they exert a powerful conditioning on the whole system. The existence of this conditioning process was attested by Alexander Nix, CEO of Cambridge Analytica, in his talk "From Mad Men to Math Men" at the Online Marketing Rockstars conference held in Hamburg in 2017. The potentially cataclysmic force of this conditioning was tested through the psychometric campaign that Cambridge Analytica ran during the 2010 general elections in Trinidad and Tobago in favor of the United National Congress (UNC): the "Do So" campaign.

The analysis of a decision-making model must therefore not limit itself to detecting obvious biases, such as racial or gender bias, but must be able to understand how far the way the individual results of the analysis are delivered can generate strategic micro-conditioning similar to that produced by AlphaGo with its famous move 37.

The awareness that biases are not the real problem of a decision-making model also comes from the fact that biases are never the cause of an anomaly but only and always a consequence of it. To be clear, a bias is to an anomaly what a fever is to an infection: it is just a symptom.

Stating that a decision-making system is affected by bias is, to me, obvious, since the entire decision-making process is almost always based on heuristics. At the same time, talking about bias is also an admission of inadequacy: treating a bias is the equivalent of symptomatic treatment, administered because we are unable to understand the origin of the anomaly and/or to correct the anomaly itself.

Artificial intelligence systems are not free from bias because these systems also operate through clustering processes or abstraction processes that are based on biases that are admissible and functional to the analysis.

In this article I explain, step by step, the logical path that led me to my conclusions, already summarized in this introduction: mitigating the risk generated by the cognitive dynamics that manifest in the form of bias does not exclude the presence of other biases whose impact is equally serious but not immediately detectable by our capacity for evaluation.


Decision-making models are, in most cases, based on heuristic approaches.

I have always been fascinated by the mechanisms by which the mind analyzes the world and human relationships. I devoured the National Geographic TV series "Brain Games" and the essay by Sergio Della Sala and Michaela Dewar entitled "Never Trust the Mind", which, through experiments bordering on magic, show how little we still know about the human brain, and how much the mechanisms that govern its daily work of analysis and adaptation are tied to errors, illusions of thought, inconsistencies of mental processes, and imperfections of memory that lead to the development of genuine decision-making shortcuts.

Decision-making shortcuts are the strategy our brain uses to save energy. They manifest themselves every time we are faced with challenges, problems and decisions to make for which we prefer to adopt a "heuristic" approach, that is, an approach that makes use of generalizations, empirical rules and assumptions.

The heuristic approach is a decision-making model based on a set of strategies, techniques and creative processes that help us find solutions more quickly and easily. With this approach, decisions are made by considering a limited number of alternatives, with only partial awareness of the consequences of each. The process is driven by "heuristics": rules of thumb used to solve problems or perform certain kinds of calculations, grounded in the knowledge that available information is never perfect and that human abilities are limited and fallible. As the psychiatrist Mauro Maldonato puts it: "Unlike formal calculation, heuristics are an immediate solution."
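The trade-off described above, examining fewer alternatives in exchange for speed, can be illustrated with a toy example (the coordinates and the routing problem are invented purely for illustration): an exhaustive search versus a greedy nearest-neighbor heuristic.

```python
from itertools import permutations
import math

# Hypothetical example: visit 6 points and return to the start.
points = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0), (3, 5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact approach: examine every possible order (factorial cost).
best = min(permutations(range(len(points))), key=tour_length)

# Heuristic approach: always go to the nearest unvisited point.
# Fast, considers only a few alternatives, and may miss the optimum.
def nearest_neighbor(start=0):
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

greedy = nearest_neighbor()
print(tour_length(best) <= tour_length(greedy))  # True: the heuristic is never better
```

With 6 points the exact search checks 720 orderings; with 20 points it would check more than 10^18, while the heuristic still finishes instantly. That is precisely the economy of effort the heuristic model buys, at the price of a possibly suboptimal answer.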

The strategies, techniques and creative processes that make up the heuristic approach are useful distortions of reality. These distortions simplify the analysis of the facts and aim to provide a subjective view based on the knowledge that we can recognize only a limited number of alternatives and are aware of only some of the consequences of each alternative. In most cases, these distortions allow us to interpret and, where possible, predict reality quickly and effectively.


Decision-making models based on heuristics are characterized by processes of simplification of reality and abstraction.

The processes of simplification of reality are based on the schemas and categories with which the knowledge we use in perception, memory and thought is organized.

The schemas and categories that we use to organize our knowledge describe people, objects and events through only their characterizing, common or most frequent details, excluding anything traceable to a specific phenomenal manifestation.

Knowledge schemas are based on associations that are immediately available to our awareness and represent what is most common or considered typical. To be clear, when I talk about the beauty of dogs, no one thinks of the beauty of the Bracco Italiano or the Spinone Italiano because everyone thinks of the generic and subjective image of the dog that has been built up over the years.

Knowledge schemas are fundamental for a correct classification of the world, which necessarily requires an abstraction process that groups non-identical elements into a single set on the basis of their belonging to the same phenomenal category.

Abstraction processes are fundamental for simplifying the processes of understanding and adaptation. We can say that they are the basis of the mechanisms that govern survival and evolution.

Without an abstraction process we would be incapable of making decisions, because each phenomenon would produce a separate element that cannot be compared with similar ones. We would develop the "environmental dependence syndrome" (Lhermitte, 1986), which makes one unable to inhibit actions stimulated by any single input. In such a condition there would be no conifers, nor the individual species that compose them (e.g. Scots pine, larch, fir, spruce), but only single trees, each differing from the next in the characteristics of every single leaf.

Although the importance of abstraction processes is universally acknowledged, it should be remembered that abstractions do not take exceptions or diversity into account. For this reason, when we talk about Africans we do not think of white-skinned African populations, even though they exist.

This tendency of schemes to generalize and exclude exceptions leads to prejudice when we do not have sufficient information on what we are talking about.


The processes of simplification of reality can generate anomalies that manifest themselves in the form of cognitive biases.

The simplification processes at the basis of the heuristic model have an important flaw: the only limit on how far they go is the common sense of those who apply them. For this reason, in some cases, the heuristic process moves beyond simple simplification of reality and generates outright trivializations, from which preconceptions arise that, although derived from reality, no longer retain any objective link with it.

The trivialization of reality leads to the development of preconceptions, which reverberate through decision-making processes as inevitable errors of evaluation of varying severity. Such errors, regardless of their nature, are generically called "cognitive biases" or, more simply, "biases".

Cognitive biases are systematic errors of thought that, by causing us to deviate from logic or rationality, influence the way we perceive reality, make decisions and the way we formulate our judgments.

The difference between heuristics and biases is, therefore, that heuristics are convenient, quick shortcuts that remain closely tied to reality and lead to quick conclusions, whereas cognitive biases are shortcuts that manifest as prejudices that have lost all connection with reality and are acquired, in most cases, without critical spirit or judgment.

It is not easy to understand at what point a simplification process turns into a trivialization from which a cognitive bias arises. I believe it is impossible to set a threshold that tells us we are in the presence of a simplification process over which we have lost control to the point of declaring it dysfunctional to the decision-making process. For this reason, perhaps, we become aware of a bias, so to speak, only once the decision-making process has manifested its effects on the environment and on people.


Abstraction processes are common to all intelligent agents.

A world of absolute uniqueness, in which it is not possible to create groups through processes of abstraction, is a world in which any form of intelligent life is impossible. As unreasonable as it may seem, organizing knowledge through schemas, and abstractions built on them, is common to all intelligent or teleological agents, even of an alien (non-human) nature. For my dog, birds are birds regardless of whether they belong to the species for which he was selected and trained to hunt. You could argue that my dog chases everything that moves out of pure predatory instinct. However, his reluctance to prey on objects unknown to him is common to all other dogs. I still remember how he behaved when he saw, for the first time in his life, a balloon rolling on the floor, moved by the wind, and how he behaved in subsequent encounters with the same environmental phenomenon.

Abstractions are not lacking even in plant intelligence, which implements clustering schemes in its learning and adaptation processes. Stefano Mancuso offers a testimony of this ability through his observations on the evidence collected by the French naturalist and botanist Lamarck (1744-1829) regarding the behavior of the "mimosa pudica", so called because it closes its leaves as soon as it is touched, in a presumable attempt to defend itself against herbivores.


Abstraction processes are also present in artificial intelligence systems

A specific case is, in my opinion, that of Artificial Intelligence (AI) systems which, although not life forms, operate as teleological agents, implementing abstraction and classification processes not dissimilar to those produced by other living species. As Nello Cristianini notes, every time an AI system performs a classification, it does so with a theoretical construct based on its own form of intelligence.

It is not possible to know the characteristics of the knowledge schemas that an AI adopts to distinguish a dog from a cat or to classify the world. If we could ever find out, we would find that they have nothing to do with our criteria based on human sensory data. I would not be surprised to find in an AI a classification similar to the one proposed by Jorge Luis Borges, in which the animal world is divided into:

The issue of biases in AI systems is much more complex if we consider that the statistical correlations used in abstraction processes are often, if not always, defined on data which, besides being naturally affected by bias, may hide weakly correlated statistical links that are not evident to humans and are capable of negatively affecting analysis and decision-making. To understand the importance of weak correlations and their danger, I quote a beautiful definition produced by the Ammagamma team and popularized by David Bevilacqua: "[weak correlations are] weaker relationships between the variables which influence a phenomenon [and are] difficult to read and interpret. Our mind is not able to grasp them, unlike strong correlations, but by equipping ourselves with a mathematical model it is possible to identify them [and use them to our advantage]".

Awareness of the importance that weak correlations assume in the abstraction processes generated by an AI also comes from the studies of James Pennebaker, which demonstrate that a user can be psychometrically segmented from the linguistic structure of their opinions alone. Thanks to such studies and to weak correlations, Facebook can cluster groups of people starting only from the likes placed on users' images and public posts.
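A toy illustration of weak correlations (the data and numbers are invented for this sketch; they are not Ammagamma's or Pennebaker's): fifty signals that are each nearly useless in isolation but, combined by even a trivial model, predict the outcome well.

```python
import numpy as np

# Illustrative sketch: 50 signals, each almost useless on its own,
# that become predictive once combined by a simple model.
rng = np.random.default_rng(42)
n, k = 5000, 50

X = rng.normal(size=(n, k))
# The outcome depends on the *sum* of all signals plus noise, so every
# individual signal carries only a small share of the information.
y = X.sum(axis=1) + rng.normal(scale=2.0, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

single = max(abs(corr(X[:, j], y)) for j in range(k))  # strongest lone signal
combined = abs(corr(X.sum(axis=1), y))                 # the simple "model"

print(f"best single-feature correlation: {single:.2f}")   # weak (~0.14)
print(f"combined-score correlation:      {combined:.2f}") # strong (~0.96)
```

No single column would catch a human analyst's eye, yet the aggregate is almost deterministic: this is the mechanics by which many individually weak signals (likes, word choices) can support powerful profiling.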

Recognizing the existence of abstraction processes in every intelligent agent allows us to understand that biases can be present in every heuristic process, regardless of the nature of the agent that enacts it. Furthermore, I find Borges's provocation a useful reminder that our principles for classifying and ordering the world are anything but obvious and natural, since it is possible to imagine infinitely many other ways of organizing the objects of our experience, such as the paradoxical one mentioned above.


Quick Summary

At this point in my reasoning, it is best for me to give a brief summary of what I have attempted to explain so far.

Point 1 - Heuristic processes are based on simplifications of reality which, even if functional to achieving the result, are the matrix from which biases arise.

Point 2 - Biases, being linked to simplification processes, are not the result of a specific level of abstraction but rather of a limit determined only by the level of unreliability that our common sense deems admissible in our cognitive and decision-making processes. In these terms, bias is present in every heuristic process, every time we deviate from objective data.

Point 3 - Simplification processes are necessary to implement the abstraction processes that allow us to understand the world regardless of specific phenomenal manifestations. I have also found this capacity for abstraction in agents endowed with intelligence alien to ours.


First deduction: heuristic processes are based on bias

Bias, understood as a deviated form of simplification and abstraction mechanisms, is present in every heuristic process because it is through the adoption of one or more shortcuts that one can avoid the adoption of a logical-scientific approach which is always very expensive in terms of computing resources and data acquisition and verification time.

The presence of bias in all heuristic processes is also demonstrated by the experiment carried out by the psychologist Emily Pronin, who in 2002 described the "bias blind spot": the natural inclination of human reasoning to consider ourselves more objective than anyone else. Another demonstration of the bias-heuristic link comes from the psychologist Paolo Legrenzi and the neurologist Carlo Umiltà who, in the book "Many unconscious for a brain", write:

Given the enormous flow of information, we tend to select the items we already know, those with which we agree, those we can assimilate better thanks to mental patterns and categories that are familiar to us and already consolidated. Furthermore, we are inclined to share this information with those who think like us and with those who, we know, will appreciate it because they think like us. These new forms of life give rise to a sort of collective unconscious which translates into the radicalization of people's opinions. Individuals are comforted by sharing a current of opinion that is simple, clear, and requires low cognitive and attentional effort.

The role of biases in cognitive processes has prompted careful attempts at classification which, in the absence of taxonomic proposals or reference models, have over the years generated a list of over 150 items divided into four macro-areas.

The Cognitive Bias Codex by John Manoogian III

Faced with such a large list, I find it natural to consider biases an inseparable part of heuristics, even though in some cases they become the element that reveals the fallacy of certain simplification/abstraction processes.

Nobody likes the idea that heuristic processes are based on more or less effective biases because it demonstrates that every choice is always wrong or, if you prefer, right until proven otherwise. This scenario, however, is not as deplorable as it seems since it is precisely thanks to biases that it is possible to accelerate the analysis processes, improve the detection of critical choice factors in changing or uncertain situations and arrive at a more streamlined decision-making model. This is due to the fact that bias is closely linked to the schemes and categories with which the knowledge that underlies the processes of perception, memory and thought is organized.


Second deduction: bias is not a false problem

Heuristics are necessarily based on biases even if, in most circumstances, those biases have no harmful effects on the context or object of our decisions. In such a condition, however little we like it, it is no longer useful to ask whether a decision is made on the basis of a model whose mechanisms show bias. Rather, we need to ask what relevance the biases certainly present in the current decision-making process assume. In essence, since choices are always based on errors of evaluation, we should focus on distinguishing serious errors from seemingly irrelevant ones whose effects are only apparently of low impact.


Third deduction: bias is not an explanation

The vision of bias as a problem that explains the anomaly revealed downstream of a decision-making process is misleading because it transforms bias from the effect of an anomaly into a cause of the anomaly. Biases are always the symptom of a problem affecting the decision-making model and, for this reason, they do not exist except as a distorted manifestation of a cognitive process. To be clearer, I don't think it's correct to say that the anomaly found in a decision-making process can be traced back to a bias or is produced by a bias. When faced with an anomaly, it should be said that the problem from which it is generated manifests itself in the form of one or more biases.

The interpretation of bias as a symptomatic manifestation of a cognitive problem calls for some reflections. The first is that correcting the anomaly does not mean correcting the biases through which it manifests itself (that would be like lowering the fever instead of curing the infection). The second is that finding the anomaly through one or more biases does not mean it has not produced others of equal importance that have not yet been identified.


Fourth deduction: biases show the limits of our ability to monitor AI

A bias understood as an effect, not a cause, requires an approach completely different from the one currently adopted for analyzing the efficiency of AI systems: one must be able to identify which abstraction process generated the anomaly we identify as bias. To carry out such a causal analysis, one must know in detail the parameters that contribute to generating the decision and, even more, which combination of "weak signals" produces the specific statistical signature behind the anomaly. To date, there is no way to be certain of how a model generated a given response.

The presence of biases is, therefore, not useful in explaining something since biases are present in every heuristic process and are present both in analysis processes whose outcomes conform to expectations and in those that do not produce the expected results.

Saying that a decision-making system is affected by bias should mean knowing why the bias was generated, why it was not avoided, and why correcting the anomaly will not generate a different one, in a chase reminiscent of the first Microsoft service packs of the 1990s.

But what if bias is instead a peculiarity of the decision-making model? A decision model is always focused on a limited set of data. This means that even in the absence of extreme discrimination phenomena, we cannot exclude being in the presence of a bias worthy of AlphaGo's famous move 37, whose effects reveal themselves, like a poison, at a time and in a way that make it impossible to understand the origin of the disease and identify an adequate cure.

Without a decision-making system with selective attention, we would be at the mercy of environmental stimuli, even those irrelevant or inconsistent with what we are doing. We might think we could be supported by an AI in reading the data but, even then, no one could exclude that the supporting agent would itself fall victim to the complexity of the world, developing the technological equivalent of the "environmental dependence syndrome" (Lhermitte, 1986), which makes one unable to inhibit actions stimulated by any single input.

Article source: LinkedIn article by Vincenzo Gioia


"I’m so deep in this bloodshed that if I stopped this business now, going back would be as difficult as continuing all the way." (Macbeth, Act III, Scene 4, by William Shakespeare)

I have always been fascinated by the way Shakespeare encapsulates Macbeth's journey towards the unknown with just a few words. It doesn't matter how his journey began; what matters is the clear perception of the so-called "point of no return." The elegance of this phrase, recited by Macbeth in the third act of Shakespeare's play, conceals a condition that can emerge as a consequence of every significant action in our personal and professional lives.

The condition that Shakespeare describes is also known as the "Macbeth Effect". It summarizes a perception that leaves no room for choice and is based on the clouded belief that by continuing along the path, one will find clarity or a solution to the current state.

This effect manifests in many areas of private and professional life, where our decisions often begin with phrases such as: "It costs nothing to try," "There's so little risk" or the bolder, "If he did it, I can do it easily too".

In professional life, the Macbeth effect is often accompanied by a sort of industrial mystique, epitomized by impressive aphorisms on office walls, like a Steve Jobs poster with a motivating quote, similar to how a photo of Marilyn Monroe might adorn a hair salon.

The Macbeth effect arises from an approach that leads us to develop a high propensity for risk, neglecting any form of control and measurement of current and expected results. By its nature, the Macbeth effect is linked to the exploration of the unknown, often found in innovation, research and development, and invention processes. Anyone embarking on a path without adequately analyzing its risks or duration can find themselves in the same position as Macbeth.

The Concorde project

Industrial history offers numerous failures linked to the belief that there is no turning back, with no escape routes except continuing forward. A notable example is the Concorde project, a supersonic aircraft produced by the Anglo-French consortium of British Aerospace and Aérospatiale. The Concorde was one of the most ambitious innovation projects in aeronautics history, beginning in the late 1950s, with the first prototype taking off in March 1969. It was not until November 4, 1970, that the aircraft first reached Mach 2, becoming the second commercial aircraft to fly at that speed, after the Soviet Tupolev Tu-144. This historical context helps us understand the decisions that led to the start of commercial flights in 1976 and to the project's demise in October 2003. Although many believe its decommissioning was due to the July 2000 disaster, the truth is that its abandonment was driven by massive fuel consumption, unsustainable maintenance costs, a small number of passengers (due to the high ticket price), and often questionable marketing choices. The tragic accident in Paris merely accelerated the closure of the Concorde project, whose budget deficit the French and British governments had long been covering despite clear financial evidence against its sustainability. This persistence is a classic example of the human tendency to continue a project without weighing future benefits, focusing instead on past efforts and investments.

The opportunity cost and sunk costs

The analysis of future advantages is described in economics by the concept of "opportunity cost," which defines the future value of one's choices based on the cost of forgoing an alternative opportunity. Essentially, it is the sacrifice made to make a choice. However, in evaluating investments, assessments often give more weight to "sunk costs." To illustrate this dynamic, imagine being at the head of a research and development project with an uncertain outcome and having 100,000 euros to invest.

Consider two scenarios: in the first, you have already invested 500,000 euros and can complete the project with an additional 100,000 euros; in the second, you have not started the project yet and can invest your 100,000 euros to begin activities with an uncertain outcome. How would you act? You are probably inclined to invest in the first scenario, in light of what has already been done. But no answer is correct or wrong here, because the question itself is flawed. The correct question is: "What is the opportunity cost in the current state of the project?" Only this question provides a logical basis for the choice.
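The reasoning can be made concrete with a back-of-the-envelope calculation (the success probability and payoff below are invented purely for illustration): only future cash flows enter a rational decision, so both scenarios yield the same answer.

```python
# Toy numbers based on the scenario above; the success probability and
# payoff are invented assumptions, not data from the article.
sunk = 500_000        # already spent in scenario 1 (irrecoverable)
further = 100_000     # what we can still invest in either scenario
p_success = 0.3       # assumed probability the project pays off
payoff = 250_000      # assumed value created if it succeeds

# Rational comparison: only future cash flows matter.
expected_future_value = p_success * payoff - further

# The sunk cost changes the accounting of the past, not the choice ahead:
# the expected value of investing the next 100,000 is identical in both
# scenarios, because the 500,000 is gone either way.
scenario_1 = expected_future_value   # after having spent 500,000
scenario_2 = expected_future_value   # starting from zero

print(scenario_1 == scenario_2)   # True
print(expected_future_value)      # -25000.0: with these numbers, don't invest
```

The sunk cost fallacy consists precisely in letting `sunk` leak into the comparison, making scenario 1 feel more deserving of the next 100,000 euros even though the arithmetic is identical.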

Cognitive distortion in the analysis of sunk costs

The incorrect evaluation of sunk costs is due to a cognitive distortion known as the "Sunk Cost Effect," evident in the Concorde case, where heavy investments by the French and British governments led to further investments even when the project's financial unsustainability was clear. This bias reflects a paradoxical behavior: when we have invested significant effort, time, and money into a failing project, instead of abandoning it to limit losses, we tend to continue investing, exacerbating our losses.

You might think this wouldn't happen to you, but consider a fixed-menu restaurant where you're almost full but have already paid for dessert. You might order and leave it on your plate because you paid for it, demonstrating the sunk cost fallacy. This phenomenon also occurs in relationships, where people maintain unhappy, unsatisfactory relationships to avoid "wasting" the time spent together.

Friedman's cognitive dissonance

Daniel Friedman (University of California, Santa Cruz) explored this in his 2007 study, "Searching for the Sunk Cost Fallacy", describing the psychological mechanisms underlying bad decisions related to sunk costs. According to Friedman, bad decisions stem from "cognitive dissonance", which leads to continuous self-justification: people who invest in an unprofitable activity modify their beliefs about its profitability to avoid admitting a mistake. Cognitive dissonance varies among individuals; anxious people are more sensitive to uncertainty and tend to keep investing despite likely failure, whereas depressed individuals are more likely to stop investing because they lack unrealistically positive expectations about the future.

The escalation effect

The behaviors driven by the Macbeth effect demonstrate that distorted perceptions of sunk costs have costly consequences in terms of money, time, and effort. A more severe form of the Macbeth effect is the "Escalation Effect": when a project begins to fail, sunk cost bias irrationally pushes individuals to invest even more, leading to further losses. This growing spiral of investment is also known as the "Vietnam Effect", after the dynamics of the US war in Vietnam. As Under Secretary of State George Ball warned President Johnson in a 1965 memorandum, once soldiers begin to die, retreat becomes impossible, and further investments are made so that their deaths will not have been in vain.

The roadmap of madness

My professional experience has allowed me to observe the Macbeth Effect and the Escalation Effect closely. The lessons I've learned can be summarized in a path of increasing investments, which I call the "roadmap of madness." This path is common to the projects analyzed for this article and unfolds in the following steps:

  1. Someone decides to solve a business/personal problem or unleash their creativity with a proprietary technical solution.
  2. The inventor presents the product, and a manager decides it has potential.
  3. Based on a superficial market check, the manager convinces the chain of command to allocate the budget to develop the product.
  4. The team develops a testable version, and the sales force begins work.
  5. Poor commercial results prompt more investment to avoid wasted efforts and reputational damage.
  6. Even an expert sales force fails to deliver results, prompting internal adoption of the product, which also fails.
  7. The manager, confident in the product's potential, sets up a new company to handle it.
  8. Further failures lead to seeking new investors.

The end of this roadmap is uncertain, but it is unlikely to be pleasant given the described path. My experience with two now-bankrupt companies (a cloud operating system and a procurement platform) has taught me to pay close attention to the Macbeth effect, as in real life, the curtain can indeed fall.

Article source: LinkedIn article by Vincenzo Gioia


“These are times of great uncertainty”

I cannot count the times I have said or heard this phrase. I usually hear it when a decision needs to be made and we find ourselves in a complex context. But what is uncertainty? How does it affect our decisions? How significant is its impact on AI-based decision-making systems?

In this first article, I share some reflections on the concept of uncertainty, hoping to stimulate a useful discussion that helps me delve deeper into aspects I might not currently be considering.

Definition of Uncertainty

I've noticed that we don't always share a common understanding of what uncertainty is. According to the Treccani dictionary, uncertainty is the condition one finds oneself in when the information or data relating to a fact is uncertain or contradictory, offering insufficient or poorly founded knowledge. Usually, this condition is produced by complex contexts in which it is not easy to determine the level of risk associated with the decision we have to make.

Uncertainty, Risk, and Complexity

The frequent pairing of risk and complexity often leads us to believe that uncertainty is determined by the complexity of a context and/or its riskiness. However, while risk can be assessed through statistical methods, uncertainty involves the inherent inability to recognize the influential decision variables and their functional relationships. As for complexity, I don't think a complex system is necessarily uncertain: it is characterized by numerous interacting parts, but their interactions, however intricate, are not necessarily uncertain.

What Causes Uncertainty

Having established that uncertainty is not linked solely to the complexity of a context or the risk associated with decisions, I consider it useful to expand its nature beyond the initial definition, which attributes it solely to data incompleteness. In my opinion, uncertainty can also stem from an inadequate understanding of the context and the undifferentiated alternatives among which we make our choices. To put it more elegantly and in line with academic discourse, the origin of uncertainty can be divided into three dimensions: informational, environmental, and intentional.

Informational Uncertainty

Informational uncertainty derives from incomplete information and is the most frequent source of uncertainty. It manifests when data acquisition is lacking, or when the available data set is incomplete, of low quality, or insufficiently diverse.
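The three flaws named above, incompleteness, low quality, and low diversity, can each be approximated with simple checks on a data set. The sketch below is only an illustration of that idea, with hypothetical records and crude proxies (missing fields for incompleteness, empty values for low quality, near-constant fields for low diversity), not a general-purpose data-quality tool:

```python
def data_check(records, required_fields):
    """Flag rough proxies for the three sources of informational
    uncertainty: missing fields (incomplete), empty values (low
    quality), and near-constant fields (low diversity)."""
    issues = []
    for field in required_fields:
        values = [r.get(field) for r in records]
        missing = sum(v is None for v in values)
        empty = sum(v == "" for v in values)
        distinct = len({v for v in values if v not in (None, "")})
        if missing:
            issues.append(f"{field}: {missing} record(s) missing the field (incomplete)")
        if empty:
            issues.append(f"{field}: {empty} empty value(s) (low quality)")
        if distinct <= 1:
            issues.append(f"{field}: only {distinct} distinct value(s) (low diversity)")
    return issues

# Hypothetical customer records used only for illustration.
records = [
    {"customer": "A", "region": "EU", "spend": 120},
    {"customer": "B", "region": "EU"},               # 'spend' missing
    {"customer": "C", "region": "EU", "spend": ""},  # empty value
]
for issue in data_check(records, ["region", "spend"]):
    print(issue)
```

A data set that passes such checks can still mislead, of course; these proxies only surface the most visible part of informational uncertainty.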

Environmental Uncertainty

Environmental uncertainty is linked to the natural complexity of the context (the real world), where it is crucial to distinguish between cause-effect relationships and other phenomena that are not causally linked. If we cannot understand the context and its determining relationships, we risk being confused by the contradictory meanings conveyed by some phenomena.

Intentional Uncertainty

Intentional uncertainty arises from the fact that, in many cases, the decision-maker relies on variable criteria even when the generated choice alternatives are concrete and objective. In other words, humans make final decisions based on subjective perspectives or preferences, even in the presence of a standard decision-making process. This subjectivity is further accentuated by decision models in which the available options constitute what we can call "undifferentiated alternatives": choices with no clear distinguishing factor.

Summary on the Concept of Uncertainty

A decision-making process affected by uncertainty can be characterized by three different sources of uncertainty: informational, environmental, and intentional. Environmental uncertainty arises from the unpredictability of the environment, while intentional uncertainty originates from individuals' specific preferences and needs. Informational uncertainty, although considered manageable to the extent that one can understand and describe the causal relationships between variables and acquire the most complete information possible, remains difficult to manage due to the unpredictability of some real-world events, whose nature is often determined by components linked to intentional uncertainty.

Article source: LinkedIn article by Vincenzo Gioia


Are you sure you need an AI? For many, the question is an idle one, and when it is asked in commercial contexts it tends to significantly complicate negotiations and the definition of a project's scope. In essence, it is the typical question that should not be asked, since the answers it generates are often worse than the question itself.

The reasons behind an investment in AI are as numerous as the people I have met in my professional experience. In this article, I explain the reasons that, in my opinion, should underpin an investment in AI systems, and the ways in which such an investment should be conducted.

Introduction

Before starting, a premise is in order: my reflections arise from observations gathered while participating in projects for the adoption of AI systems supporting decision-making processes, and they are shared in full awareness of their debatable and refutable nature. To understand what a decision support system is, you will find here an article I wrote on the subject; in this article, my reflections start from the concept of decision-making.

Uncertainty and risk

Decision-making processes are divided into rational and irrational processes. Irrational processes are strongly influenced by uncertainty, whereas rational processes are characterized by a natural component of risk. I wrote an article on the topic, which you can find here, summarizing the concepts I consider fundamental. Risk and uncertainty are two sides of the same coin, whose face is our ability to recognize the relevant variables in the decision-making process and to weigh them through statistical methods. An example helps. Imagine we are at a horse race and must choose the horse to bet on. If our choice is based on the analysis of data representative of the context and of the competing horses, it carries a risk component and the decision is rational. If our choice is instead made without any information about the context and the horses, we can only pick at random, making the process irrational. As far as I'm concerned, both processes lead to a choice, we often move from one to the other without interruption, and both carry a component of randomness that becomes risk or uncertainty only on the basis of our hypotheses and of what we take as known.

A world of rational people who often act irrationally

Decisions underpin our daily actions, and no one decides by rolling dice. However, the context in which we decide always presents a certain degree of complexity, which we manage through assumptions and simplifications aimed at limiting the amount of data to handle. Yet it is precisely within the perimeter of those assumptions that the randomness in our decision-making process takes on the nature of risk or of uncertainty. That nature is closely linked to our ability to establish which variables significantly influence the decision. For this reason, I am no longer surprised when I see rational people reach an irrational decision after evaluating the relevance and impact of so many variables that the model representing reality becomes too complex to manage.

Modern oracles and ancient weaknesses

Faced with the complexity of the world, AI takes on the role the oracles played 2,500 years ago, offering modern questioners answers that, although clear, are often incomprehensible in terms of causal relationships. Exactly as with the Cumaean Sibyl, the new oracles are not asked about the cause-effect relationships that lead to the answer because, like the sibyls of the past, the statistical account produced by these intelligences cannot be read: it is the fruit of too complex a process. An example is "Move 37," with which AlphaGo led its developers to state that the algorithm "is no longer bound by the limits of human knowledge." This clearly shows the alien nature of this intelligent agent.

When to use the oracle?

The choice to entrust our decisions to an AI, in my opinion, should not depend on the complexity of the context in which a choice must be made, but on the cost we must bear to make the decision. If I had to choose, today I would entrust decision-making to an AI only in areas where the cost of making a good decision ourselves exceeds the benefit we derive from the decision made. This approach is purely economic and refers to cost-benefit analysis: a systematic approach to evaluating choices based on measuring and comparing all their direct and indirect costs and benefits.
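The rule above reduces to a single comparison. The sketch below is only a restatement of it in code, with hypothetical figures; in practice both sides of the comparison would themselves be estimates built from direct and indirect costs and benefits:

```python
def delegate_to_ai(decision_cost, decision_benefit):
    """Cost-benefit rule from the text: delegate a decision to an AI
    only when making a good decision ourselves costs more than the
    benefit the decision yields."""
    return decision_cost > decision_benefit

# Hypothetical case 1: screening thousands of invoices for anomalies.
# Doing it well by hand costs far more than any single decision is worth.
print(delegate_to_ai(decision_cost=5000, decision_benefit=800))   # True: delegate

# Hypothetical case 2: choosing a strategic supplier.
# The decision is cheap to analyze relative to its benefit.
print(delegate_to_ai(decision_cost=200, decision_benefit=10000))  # False: decide ourselves
```

The point of the rule is that complexity alone never triggers delegation: a complex but high-stakes, cheap-to-analyze decision stays with the human.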

Oracles to be used with caution

I am very cautious about adopting an AI oracle, as our nature is particularly inclined to delegate analysis to others, both because of the effort such analyses impose and because our brain has limits in managing decision-making models. Furthermore, these systems, in addition to being inscrutable, are based on statistical relationships often built from data linked by weak relationships, which makes a causal analysis of the decision impossible. Consider, for example, the well-documented psychometric inference activities carried out by Cambridge Analytica.

Summary

The adoption of an AI-based decision support system requires a considerable ability to monitor the context in which the suggested decision is to be applied. An inexperienced user could be exposed to reality-conditioning phenomena implemented through "echo chambers" which, also because of the power gap between user and AI, would lead to a progressive loss of control over the criteria for evaluating a decision and over the human perception of reality. Furthermore, there is an ethical question behind the conditioning that a decision support algorithm can exert: soft solicitations, known as nudges, that push users' perception of reality in a specific direction.

Article source: LinkedIn article by Vincenzo Gioia

Copyright 2025 Frontiere
FRONTIERE Srl
Headquarters in Via Oslavia, 6 - 00195 Rome, RM | C.F. and VAT 17464921000