The mechanic's hammer and generative artificial intelligence
Artificial "Intelligences" (AIs) have always attracted me for the fascination with which they were cloaked thanks to Hollywood films. 2001 A Space Odyssey, Ex Machina are films that I watch over and over again, always discovering new food for thought. In recent years, the professional contribution from which I have benefited thanks to the colleagues with whom I have been lucky enough to work and with whom today I have a structured vision (at least I think it is) of the technological capabilities and impacts on people's professional and personal daily lives.
The points for reflection that have followed one another over time have been noted in my articles to generate a comparison that does not stop at the state of the art of research which requires years to understand the nature of what is happening through the use of AI.
For personal convenience, I summarize the path traced by the articles I wrote.
Some time ago, in this article, I wrote down my reflections on the concept of intelligence. I did so because I think intelligence is an unknown creature, despite being talked about every day. Among the "intelligences" that populate our world, and which I discuss in that article, there are also the newborn artificial "intelligences" (AIs), which are becoming an increasingly widespread tool in both professional and consumer settings thanks to their notable generative capabilities and the simplicity with which one can interact with them.
History has taught us that, like any great tool, artificial intelligence brings with it new and unexpected problems. As written in this article, in the case of AI the major problems are linked to the presence of bias, which I consider inevitable and present even where its presence is ruled out. The unexpected nature of AI's problems, as discussed in this article co-written with my friend Remco Foppen, is also manifesting itself in the form of hallucinations. Far from being an exclusively human prerogative, hallucinations have become the worry of those who work with the latest generation of LLM systems because, as happens with humans, artificial hallucinations are increasingly consistent with reality and, therefore, difficult to identify.
Despite the limitations that AI shows, I find these systems unquestionably useful, even where large gray areas can be glimpsed because limits, opportunities, risks and benefits are still unclear. Whatever use is made of it, AI is already changing our way of being and thinking, so much so that, as written in this article, I do not exclude the possibility of its effects also appearing in the mechanisms that govern creative processes. Writer's block is easily circumvented if you use an AI, and to those who complain about the lack of originality, I respond that, as in every act of creation, the inspiration is never the final work.
This article was born from a remark made by a colleague at the end of a project meeting who, in a peremptory tone, said:
"Please do not use ChatGPT to write project documents because there are systems capable of revealing it and we will make a bad impression on the client."
That remark immediately made me think back to my general mathematics exam, when the professor who held the chair prohibited us from using calculators. As a young university student, I asked myself which was more important: the reasoning that leads to the calculation, or the calculation itself. Today, I ask myself which is more important: the mechanism that leads to the essay, or the essay.
The use of productivity support tools has always been viewed with distrust. Bizarre as it may be, criticism of innovation ignores the nature of innovation, so much so that there was a time when even writing was criticized: Plato, in the Phaedrus, called it a cause of memory deterioration. No better treatment was given to the calculator, whose use was opposed because it reduces the ability to carry out calculations, even very complex ones, in one's head. Today none of us could do without writing, and the use of the calculator has not only been accepted but encouraged by the Ministry of Education, which for decades has allowed it in mathematics tests at scientific and technological high schools.
The change that is triggered by the spread of each new technology is unstoppable. It was so for writing, it was so for photography and it was so for the calculator at the expense of the slide rule. No matter how much we complain, change will become everyday life. This evolutionary cycle forces us to reflect on what we are, how we perceive ourselves and what we want to be.
AI has produced many impacts on human nature, and I would like to note one of them in this article: an aspect I had not yet taken into consideration, despite its being linked to the ability to develop a concept or an idea. I have no hesitation in considering this aspect the fuel of every analysis. I am talking about the ability, not at all innate, to ask structured questions that direct the interlocutor towards a concise and clear answer.
A well-formulated question is a powerful tool through which you can guide an entire analysis process, allowing you to focus attention on what is really important: defining the objective of the analysis and determining which data are needed. A well-posed question can help identify any limitations or challenges that may arise during analysis, solve complex problems and build relationships. A poorly constructed question creates a dangerous trap whose effects can distance us from our exploration and end up confirming what we already know.
The relationship between the ability to ask well-structured questions and the use of an AI lies in the fact that AIs operate only when given a request or question, and the quality of the paper produced by an AI is directly linked to the quality of the question formulated. As with humans, an incorrectly formulated question can mislead an AI system, whose fragilities can emerge through questions that exploit specific logical constructs. A correctly structured question allows you to activate an AI on tasks for which it has not been directly trained but for which it can formulate reliable answers, because they are based on verified data (at least one hopes they are).
Prompt engineering is the field of study that seeks the best techniques for choosing the most appropriate formats, phrases, words and symbols when formulating the questions that guide a generative AI to produce relevant, high-quality results. The relationship that links the quality of the question to the relevance of the answer is central to every area of philosophical study, just as it is to correct interaction with an AI.
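To make the idea concrete, here is a minimal sketch using the OpenAI Python client: the same task asked vaguely and then with role, data, constraints and output format spelled out. The model name, prompts and figures are illustrative assumptions, not a prescription.

```python
# Minimal sketch: a vague prompt versus a structured one.
# Model name and prompt content are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Tell me about sales."

structured_prompt = (
    "You are a retail analyst. Using the quarterly sales figures below, "
    "identify the product with the steepest decline, propose one "
    "plausible cause, and answer in at most 100 words.\n\n"
    "Q1: widgets 120k, gadgets 95k, gizmos 60k\n"
    "Q2: widgets 80k, gadgets 97k, gizmos 30k"
)

for prompt in (vague_prompt, structured_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

The second prompt does exactly what the paragraph above describes: it fixes the objective, supplies the data, and constrains the form of the answer.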
The importance of prompt engineering leads me to think that, even in times in which knowledge can be reached through superhuman "intelligences" (this is how the "intelligences" after "move 37" are described), the value of knowledge remains tied to the standing imbalance between questions and answers. Having more questions than answers means holding the key to knowledge, since the effective operation of AI, as with the human brain, is closely linked to the ability to ask the right questions.
Continuing my reasoning and focusing on the question of the originality of the essay, I wonder how your opinion on the quality of this text would change if I told you that it was generated with AI support. Let me explain better by referring to the well-known anecdote of the mechanic's hammer blow. A man, after consulting numerous mechanics and spending a lot of money, goes to an elderly mechanic and asks him to eliminate the annoying squeak afflicting the engine of his car. The elderly mechanic, after listening to the unfortunate sound for a few seconds, takes an old hammer and, with a firm blow, solves the problem. For the hammer blow, he asks for 500 euros. To the customer, who did not intend to pay that amount for a single blow, the mechanic responded by asking whether the customer would have been able to do the same. I'll leave the rest of the reasoning to you as you read me. In practice, if I had used an AI to generate this text, its instrumental role in the creative/productive path would be no different from that of the hammer in the hands of the elderly mechanic. For this reason, rather than asking whether or not an AI was used to carry out an activity, I prefer to ask what degree of mastery of the tool was achieved in its use.
I think the use of AI is on a par with the use of writing, the calculator and the mechanic's hammer: their functional value is linked to the ability to make good use of them. Just as no one would ask me whether or not I used a calculator in my radio engineering tests, I hope that one day I will no longer be asked whether or not I used an AI in drafting my papers, because I consider such criticism the result of banal neo-feudal frustrations dwelling in the Pleistocene minds of characters who want to know that you "worked hard" instead of concentrating on the quality of the work produced.
To the diehards of the originality of the work, the leaders of the ultras of "it is the fruit of my sweat", the scandalized virgins of "no one here copies", I declare that I leave their side to others, without cultural regret, in favor of the role of the one who tells you that Santa Claus doesn't exist. Ladies and gentlemen, most of the professionals I have met over the last 20 years, across the dozens of companies in which I have worked, produce documents by drawing "copiously" from what has already been produced for and/or by others. Get over it: the world is based on three simple keyboard keys, CTRL, C and V.
Having clarified my position on the use of AIs and their functional role, it remains to understand how they impact the ability to formulate questions suited to the expectations placed on the expected paper. The potential offered by generative AI will lead people to find ways to use it in ever broader contexts of daily and working life. This will develop an ever-increasing ability to formulate structured questions, whose nature can only derive from an effort of analysis and abstraction. In essence, however much it may bother you, in the future no one will write without thinking of doing so with the support of an AI because, beyond sharing what one thinks, a text has the purpose of disseminating content for cultural purposes.
Article source: Linkedin Article by Vincenzo Gioia
OpenAI continues to innovate in the field of artificial intelligence, and the ChatGPT-4o version represents a significant step forward from its predecessors. This model introduces a number of improvements and new features that expand the capabilities of AI, making it more powerful, versatile, and accessible.
One of the most remarkable new features of ChatGPT-4o is its multimodal capability. This model is able to simultaneously process different types of input, including text, images, audio, and video. This feature enables more natural and comprehensive interactions with AI, offering more contextualized and relevant responses.
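As an illustrative sketch of what a multimodal request can look like through the OpenAI Python client (the image URL is a placeholder, and audio/video handling is not shown):

```python
# Sketch of a multimodal request (text + image) with the OpenAI client.
# The image URL is a placeholder; adapt to real inputs.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```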
GPT-4o is designed to be faster and more efficient. Compared with previous models, it is twice as fast, with reduced response time and greater capacity to handle simultaneous requests. In addition, the model is more energy efficient, reducing resource consumption.
Response Time: responds in less than 300 milliseconds, ensuring fast and smooth interactions.
Request handling: ability to handle up to 10 million tokens per minute, improving the speed of information processing.
These improvements in speed and efficiency make GPT-4o an excellent option for applications that require fast and accurate responses, such as customer support services and virtual assistants.
One of the most important innovations is the free accessibility of GPT-4o. The model makes available at no cost functionality that was previously reserved for paying users. This strategic move by OpenAI aims to democratize access to AI, allowing a wider audience to take advantage of the model's potential.
File analysis: users can upload and analyze text files at no additional cost.
Using GPT assistants: advanced features such as task management and workflow automation are now available to everyone.
The free accessibility of GPT-4o not only expands the user base, but also fosters innovation and creativity as more people can experiment with advanced AI capabilities.
GPT-4o introduces an expanded 128K context window. This allows the model to maintain consistency and relevance of responses even in long and complex conversations. Increasing the context window significantly improves the model's ability to understand and respond to user queries.
Long Conversations: Greater consistency in extended interactions.
Detailed Analysis: Ability to process and understand large amounts of contextual information.
The expanded context window enables GPT-4o to provide more accurate and relevant answers, improving the overall user experience.
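As a rough illustration of what a 128K window means in practice, the sketch below counts a document's tokens with the tiktoken library before deciding whether it fits. The file name is a placeholder, and the encoding lookup depends on your tiktoken version.

```python
# Rough check of whether a document fits a 128K-token context window.
# "long_report.txt" is a placeholder; if encoding_for_model does not
# know "gpt-4o" in your tiktoken version, fall back to o200k_base.
import tiktoken

CONTEXT_WINDOW = 128_000

try:
    enc = tiktoken.encoding_for_model("gpt-4o")
except KeyError:
    enc = tiktoken.get_encoding("o200k_base")

with open("long_report.txt", encoding="utf-8") as f:
    text = f.read()

n_tokens = len(enc.encode(text))
print(f"{n_tokens} tokens; fits in window:", n_tokens <= CONTEXT_WINDOW)
```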
GPT-4o integrates Web access, allowing the model to obtain real-time information to answer user questions. In addition, OpenAI has released a desktop app for Mac (and soon for Windows), which facilitates interaction with the AI via the PC clipboard.
Simplified interaction: users can copy text, images or other data to the clipboard and receive immediate responses.
Real-time access: ability to get up-to-date information through Web integration.
The desktop app makes GPT-4o a versatile workmate, easily integrating into users' daily workflow.
GPT-4o also introduces the ability to sense and respond to human emotions. During demos, the model showed the ability to detect the user's emotional state, such as happiness or anxiety, and respond accordingly. For example, if the user shows signs of stress, GPT-4o can provide advice to calm down.
Emotional support: the model can offer stress management tips or suggestions for improving emotional well-being.
Personalization of responses: adapts the tone and style of responses based on perceived emotion, enhancing the user experience.
This ability to perceive emotions makes GPT-4o a more empathetic and human virtual assistant, significantly improving user interaction.
GPT-4o's APIs are available at a reduced cost compared to GPT-4, making the model more accessible for applications of various types. The expanded token vocabulary means fewer tokens are needed to represent the same text, which lowers processing costs and makes better use of the context window, improving overall efficiency.
Virtual Assistants: creation of assistants capable of handling complex conversations and offering support on a wide range of topics.
Data analytics: ability to analyze text, visual and audio data, providing more complete and accurate insights.
Generative content: leverage the advanced capabilities of GPT-4o to generate creative content, such as articles, stories, and videos, based on variable inputs.
The accessibility of GPT-4o's API allows programmers to explore new creative possibilities and develop innovative applications that take full advantage of the model's capabilities.
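As a sketch of the virtual-assistant case from the list above, the loop below keeps the conversation history in the message list so that each reply stays in context. Model name and system prompt are illustrative assumptions.

```python
# Minimal conversational-assistant loop: the message list carries the
# dialogue state so each reply stays in context. Illustrative only.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are a concise customer-support assistant."}]

while True:
    user_input = input("you> ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(
        model="gpt-4o", messages=history
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```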
GPT-4o represents a significant step forward for OpenAI, improving not only the complexity of the model but also the usability and accessibility of AI technologies. With the implementation of advanced features and free access, GPT-4o promises to expand the use of AI beyond simple chat. The combination of speed, efficiency, and multimodal capabilities makes GPT-4o a powerful tool for a wide range of applications, from healthcare to entertainment, education to finance.
In a rapidly changing technological landscape, the accessibility of GPT-4o enables more users to experiment with and integrate AI into their daily activities. This model not only improves performance over its predecessors, but also offers new opportunities for innovation and creativity. With GPT-4o, OpenAI continues to push the boundaries of artificial intelligence, demonstrating the potential of this technology to transform the way we live and work.
Artificial intelligence (AI) has profoundly transformed the way we interact with technology. Two of the most advanced and well-known AI models today are OpenAI's ChatGPT and Google's Gemini. Both represent the culmination of years of research and development in the field of natural language processing (NLP), but they have significant differences in terms of architecture, functionality, and applications. This article will explore these differences, providing an in-depth overview of the features of ChatGPT and Gemini.
Artificial intelligence has become a key component of modern technology, influencing areas such as automation, healthcare, finance and education. Top technology companies, including Google and OpenAI, are leading the AI revolution, developing advanced models that promise to redefine technological capabilities and improve people's daily lives. The race to gain a dominant position in the AI market has led to the creation of powerful tools such as ChatGPT and Gemini.
ChatGPT is an advanced language model developed by OpenAI, based on the GPT-3.5 architecture and, later, GPT-4. It is designed to understand and generate human text in a consistent and relevant way. It uses billions of parameters to learn from a wide range of texts and answer questions naturally.
OpenAI's GPT (Generative Pre-trained Transformer) series rose to fame with GPT-3, which quickly became known for its ability to generate extremely realistic text. GPT-4 further improved these capabilities by increasing the model's scale and refining the machine learning techniques used. ChatGPT was created for practical applications such as virtual assistants, customer service chatbots, and automated writing tools.
ChatGPT is known for its ability to maintain natural conversations on a wide range of topics. It can generate text, answer questions, write essays, and even create code. Its versatility makes it a powerful tool for many applications, from creative writing to technical assistance.
Gemini is Google's chatbot based on the PaLM 2 language model. This model represents a significant evolution from Google's previous attempts in the field of AI, such as Bard. Introduced during the I/O 2023 conference and later renamed Gemini in February 2024, this tool is designed to provide accurate and contextualized responses to users.
Google developed Gemini to compete directly with more advanced AI models such as ChatGPT. Based on PaLM 2, Gemini uses advanced machine learning techniques to read and understand billions of words, constantly improving through user interaction. The renaming and improvement of the model reflects Google's commitment to staying at the forefront of technological innovation.
Gemini is available in three variants: Nano 1.0, Pro 1.0 and Ultra 1.0, each designed for specific needs and applications. The Ultra 1.0 model, in particular, is extremely powerful, with a reported 540 billion parameters (OpenAI has not published a comparable figure for GPT-4). Gemini can handle multimodal input, including text, images, audio and video, making it versatile and capable of tackling complex tasks.
ChatGPT: based on the GPT-4 architecture, uses billions of parameters to generate natural text. It is highly versatile and can be adapted to different applications.
Gemini: based on PaLM 2, offers three variants for different needs. The Ultra 1.0 model, with a reported 540 billion parameters, is designed for complex tasks and supports multimodal input.
ChatGPT: excels at generating coherent and relevant text, maintaining conversations on a wide range of topics. It is particularly useful for creative writing and technical assistance.
Gemini: offers a deeper understanding of context because of its ability to learn from billions of words. Its ability to handle multimodal input makes it ideal for complex, multifunctional applications.
ChatGPT: Used primarily in virtual assistants, customer service chatbots, automated writing tools, and code generation.
Gemini: Used in a wide range of industries, from healthcare to finance, education to industrial automation. Its Pro 1.0 and Ultra 1.0 variants make it suitable for both everyday applications and highly complex tasks.
ChatGPT: available through several platforms and can be integrated into various business applications. Costs vary depending on usage and integration.
Gemini: available for free in the Pro 1.0 version, while access to Gemini Advanced (Ultra 1.0) requires a subscription to the Google One AI Premium plan. This includes additional benefits such as 2TB of space on Google Drive.
ChatGPT: GPT-4 is extremely powerful, but OpenAI has not disclosed its parameter count (the oft-cited figure of 175 billion refers to GPT-3), which makes direct comparisons of computational capacity with Gemini difficult.
Gemini: with a reported 540 billion parameters, Ultra 1.0 offers remarkable power, ideal for highly complex tasks and handling large amounts of data.
Both OpenAI's ChatGPT and Google's Gemini represent the best of innovation in artificial intelligence. While ChatGPT stands out for its versatility and ability to maintain natural conversations on a wide range of topics, Gemini stands out for its computational power and ability to handle multimodal input.
The choice between ChatGPT and Gemini depends on the specific needs of the user. For applications requiring natural and versatile text interaction, ChatGPT is an excellent choice. For tasks requiring high computational power and handling various types of input, Gemini Ultra 1.0 offers unparalleled capabilities.
In any case, both models continue to evolve and improve, promising to take artificial intelligence to new levels of performance and utility. Continued research and development in this field will ensure that both ChatGPT and Gemini remain key tools for future technological innovation and automation.
In this article, I address my concerns regarding the implications of uncertainty in decision support systems, with a particular focus on its effects on AI-based decision support systems (AI-DSS). The genesis of this article stems from the reflections shared in a previous piece, where I explore the concept of uncertainty beyond the realm of data quality (here is the link).
Decision Support Systems Overview - A decision support system (DSS) is a software system designed to collect, organize, and analyze company data to provide valuable information for decision-making or to support management and planning activities. Essentially, a DSS aids individuals tasked with making strategic decisions in complex contexts where determining the optimal choice or decision-making strategy is challenging. Their reliability hinges on the algorithms used to process data and the quality of the dataset utilized, constituting a simplified model of reality derived from the data available to the decision-maker.
AI-based Decision Support Systems - In recent years, DSS software systems have evolved with the integration of artificial intelligence (AI) to enhance the reliability of the representation model of reality upon which calculations are based. AI autonomously constructs the representation model of the analyzed universe (AI model) solely from the dataset provided by analysts.
The Importance of Dataset Selection - An AI develops its representation model based solely on the dataset designed by analysts. However, since the real world surpasses our ability to synthesize, analysts strive to strike a balance between capturing enough dimensions to represent macro dynamics while avoiding complexity that hampers result verification.
Emerging Doubts - Despite meticulous dataset design, it remains one of many possible representations of the real-world environment. As one form of uncertainty is linked to environmental complexity, doubts arise regarding whether the dataset itself is immune from environmental uncertainty. This concern transcends bias and addresses a potentially impactful yet less tangible aspect.
Consideration of Dimensions - Doubts arise concerning the selection of dimensions within the dataset and the uncertainty surrounding their influence on the AI model and processing outcomes. Unexpected interactions or interdependencies among dimensions could affect processing results, even those deemed marginal or independent.
Artificial Intelligence Development Process - AI systems operate through neural networks trained for specific tasks, utilizing deep learning. These networks employ layered structures where each layer contributes to final processing, with the ability to learn and solve complex problems autonomously. However, the nonlinear data processing within neural networks renders their processing opaque, resembling black boxes.
Certainty of Results - The primary limitation of AI today lies not in computing power or data availability but in the structure of AI models. Without a comprehensive understanding of the context, caution is warranted when entrusting AI to identify solutions, as it may generate optimal solutions in contexts diverging from reality.
Article source: Linkedin Article by Vincenzo Gioia
In today's digital age, artificial intelligence (AI) has rapidly emerged as one of the most revolutionary and transformative technologies. From search engines to recommendation systems, from industrial automation to personalized medicine, AI is redefining the way we live and work. Leading technology companies such as Google and OpenAI are at the center of this revolution, engaged in a compelling technology race to develop ever more advanced artificial intelligences and achieve market dominance.
Google, in particular, has made great strides with the launch of Gemini, an artificial intelligence based on the next-generation language model PaLM 2. This article, after thoroughly exploring the capabilities of ChatGPT, aims to get into the specifics of Gemini, exploring its distinctive features, history, and practical applications.
Artificial intelligence has revolutionized multiple industries, becoming a key driver of technological innovation. From improving business processes to automating daily operations, AI offers powerful tools that increase efficiency and productivity. Large technology companies are in an ongoing race to develop ever more advanced AI, seeking to dominate a rapidly growing and highly competitive market. This landscape has seen the emergence of giants such as Google, OpenAI, Microsoft and others, each with their own AI solutions that promise to redefine technological capabilities.
Gemini represents one of the latest and most advanced innovations in the field of AI. Developed by Google, Gemini is based on PaLM 2, a next-generation language model designed to understand and generate natural language with a high degree of accuracy. Gemini's ability to learn from billions of words and continuously improve through user interaction makes it a powerful tool for a wide range of applications. This article aims to explore Gemini's distinctive features, its history, and how it can be used effectively.
Gemini is Google's in-house chatbot, based on Google's PaLM 2 model, an advanced language model and the evolution of the one behind Bard, which was unveiled in 2023 around the I/O conference. On February 8, 2024, Google renamed Bard to Gemini, marking an important evolution in the field of AI. PaLM 2 is designed to "learn" by reading billions of words, enabling it to understand human language in depth and provide useful feedback to users.
The evolution from Bard to Gemini was not just a matter of rebranding. Google introduced significant improvements to the model, making it more powerful and versatile. PaLM 2, the architecture behind Gemini, was designed to overcome the limitations of previous models by using advanced machine learning and deep learning techniques. This has enabled Gemini to become a more efficient tool capable of answering a wider range of questions and tasks.
Google has redefined the available generative models, differentiating them into three categories: Nano 1.0, designed for tasks on single devices; Pro 1.0, applicable at scale to a wide range of tasks; and Ultra 1.0, intended for highly complex tasks. This subdivision allows users to choose the model best suited to their specific needs, ensuring versatility and power.
Currently, you can try Gemini for free with the Pro 1.0 model, which is available in more than 40 languages in more than 230 countries and territories, including Italy. The Ultra 1.0 model, which is part of Gemini Advanced, is available in more than 150 countries, but for now only in English. While the free version of Gemini with the Pro 1.0 model will remain accessible for free, access to Gemini Advanced will be reserved for subscribers to the Google One AI Premium plan, starting at 21.99 euros per month (with a two-month free trial), which also includes 2TB of storage space on Google Drive and other benefits.
The division of Gemini models into Nano 1.0, Pro 1.0 and Ultra 1.0 reflects the versatility and power of this tool. The Nano 1.0 model is designed for tasks on single devices, ideal for applications requiring fewer computational resources. The Pro 1.0 model, available free of charge, is suitable for a wide range of tasks and can be used in a variety of areas, from automated email writing to content generation for websites. Finally, the Ultra 1.0 model is intended for highly complex tasks, such as large-scale data analysis and market trend forecasting.
Gemini Advanced's Ultra 1.0 model is extremely powerful, with a reported 540 billion parameters (the 175 billion figure often cited for comparison actually refers to GPT-3; OpenAI has not disclosed GPT-4's size). This capability enables Gemini to understand and process multimodal input, such as text, images, audio and video, making it extremely versatile and capable of tackling complex tasks. It can be used to improve productivity, generate code, schedule events, create documents, and more, although, like all AI, it can occasionally provide inaccurate responses or make errors.
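For developers, Gemini is also reachable programmatically. Below is a minimal sketch using the google-generativeai Python SDK; the API key is a placeholder and the model identifier is an assumption that may differ by account and region.

```python
# Sketch of a Gemini call via the google-generativeai SDK.
# The API key is a placeholder; the model identifier is an assumption.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Summarize the main trade-offs between model size and latency."
)
print(response.text)
```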
Gemini offers a wide range of practical applications that make it an indispensable tool in many areas. For example, in customer service, Gemini can automate responses to customers, improving efficiency and customer satisfaction. In healthcare, it can assist doctors in analyzing medical records and making preliminary diagnoses. In finance, Gemini can analyze complex financial data and predict market trends. In education, it provides support to students through virtual tutors who can explain complex concepts and answer questions.
Compared to other AI models, Gemini is able to maintain exceptional consistency and relevance in extended conversations. Its ability to understand context in depth and generate personalized responses makes it particularly useful in applications that require a high degree of human interaction. For example, in an enterprise environment, Gemini can assist in the creation of complex documents, offer real-time data-driven suggestions, and improve collaboration among teams.
The future of Gemini is promising. As language models continue to be developed and improved, we can expect Gemini to become even more powerful and versatile. The potential applications are endless, from personalizing services to improving business operations, from innovation in healthcare to transforming education. Google continues to invest in research and development to ensure that Gemini remains at the forefront of technological innovation.
Gemini represents a significant step forward in the field of artificial intelligence because of its advanced architecture, its ability to learn and understand human language, and its versatility in tackling a wide range of tasks. With Google's support, Gemini is set to become a critical tool for companies and individuals seeking to make the most of AI's potential. Although there are still challenges ahead, the continued evolution of Gemini promises to redefine the future of technology and our daily interactions with machines.
Insight and understanding of tools like Gemini are not only fascinating, but essential to navigating the technological future effectively. This article has explored various aspects of Gemini, from its origins and innovations to practical applications and future challenges. As AI continues to advance, we can expect models like Gemini to become increasingly integrated into our lives, improving and optimizing countless processes and operations.
The United Nations predicts that by 2050, two-thirds of the world's population will reside in urban areas. In Europe, this trend is even more pronounced, with 75 percent of citizens already living in cities, according to Eurostat. This rapid urbanization puts a strain on existing infrastructure and management systems, highlighting the need for innovative solutions. Smart Cities emerge as a response to these challenges, aiming to transform urban areas into centers of sustainability and efficiency through advanced technologies, particularly the Internet of Things (IoT).
Cities, although they occupy only 2-3% of the earth's surface, are responsible for 70% of carbon dioxide emissions and significant energy consumption. This high concentration of people and activities makes them a major contributor to climate change. Therefore, the challenge is to effectively integrate people, infrastructure and technologies to minimize environmental impact and improve the quality of urban life.
The Internet of Things (IoT) represents a strategic and promising solution to address complex urban issues. With its ability to automate data collection and analysis, IoT facilitates timely and informed decisions, substantially improving various aspects of urban life.
IoT sensors play a crucial role in continuous environmental monitoring. They collect real-time data on air quality, tracking and measuring substances that may be harmful to humans. Similarly, sensors installed in water networks monitor water quality, detecting contamination and pH changes in real time, thus ensuring access to safe water resources. In urban settings, noise pollution sensors help identify areas where noise exceeds recommended limits, enabling administrators to take action, such as creating traffic-restricted zones. Another significant example is the use of sensors in waste containers to monitor their fill level, thus optimizing collection routes and reducing emissions from service vehicles.
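As a toy illustration of how such readings might be turned into actions, here is a minimal sketch: the sensor IDs are hypothetical, the read function stands in for a real sensor feed (for example over MQTT), and the PM2.5 threshold is illustrative.

```python
# Toy threshold alerting over simulated air-quality sensor readings.
# Sensor IDs, the read function and the threshold are placeholders.
import random

PM25_LIMIT = 25.0  # micrograms per cubic meter, illustrative threshold

def read_pm25(sensor_id: str) -> float:
    """Stand-in for a real sensor read (e.g., over MQTT)."""
    return random.uniform(5.0, 40.0)

for sensor in ["air-001", "air-002", "air-003"]:
    value = read_pm25(sensor)
    status = "ALERT" if value > PM25_LIMIT else "ok"
    print(f"{sensor}: PM2.5 = {value:.1f} ug/m3 [{status}]")
```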
Urban traffic management benefits greatly from IoT technologies. Sensors and cameras installed along roads collect data on traffic flow, congestion and accidents, facilitating dynamic traffic light management to optimize travel times. IoT applications allow drivers to view real-time maps of available parking spaces, helping reduce the time spent looking for a spot. In addition, the integration of IoT sensors in public transportation gives operators the ability to monitor the status of vehicles and intervene promptly in case of anomalies or delays, thus ensuring a more reliable and efficient service.
Urban lighting is another area that benefits from IoT adoption. Light and motion sensors can adjust the intensity of light in streets based on the actual presence of people or vehicles, maximizing energy efficiency and reducing light pollution. This not only saves energy, but also improves visual comfort and safety at night.
IoT contributes significantly to improving urban safety through integrated video surveillance and advanced sensor systems. These systems enable faster and more effective detection and response to emergency situations. For example, smart cameras can analyze real-time video streams to recognize suspicious behavior or dangerous situations, such as sudden gatherings or traffic accidents, automatically triggering appropriate emergency responses.
The advanced monitoring of water infrastructure through IoT is a crucial breakthrough for urban sustainability. Specific sensors installed in water networks not only identify contamination and ensure safe drinking water, but are also critical in detecting leaks along pipelines. This is vital in a global context where water resources are becoming increasingly scarce and precious. By quickly identifying leaks, prompt action can be taken to repair them, reducing significant water wastage and minimizing environmental impact. Such efficient management not only conserves a critical resource but also reduces cities' operational costs and carbon footprint.
The use of IoT sensors to monitor urban infrastructure provides an additional layer of safety that is essential for disaster prevention. Sensors installed on bridges, viaducts, tunnels, buildings, and aqueducts can detect changes and anomalies that could forewarn of structural failure. This real-time monitoring enables the application of preventive and predictive maintenance strategies, which are significantly cheaper than post-damage interventions. Predictive maintenance not only lowers operating costs but also drastically reduces public safety risks, ensuring the longevity and reliability of vital infrastructure.
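A minimal sketch of the kind of check involved, assuming a simulated flow series and illustrative window and threshold values: a sustained deviation from the recent baseline is flagged as a possible leak or structural anomaly.

```python
# Rolling z-score anomaly detection over a simulated water-flow series:
# a sustained deviation from the local baseline may indicate a leak.
# Window size and threshold are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
flow = rng.normal(100.0, 2.0, size=500)  # liters/min, normal operation
flow[350:] += 12.0                       # injected leak-like shift

WINDOW, THRESHOLD = 50, 4.0
for t in range(WINDOW, len(flow)):
    window = flow[t - WINDOW:t]
    z = (flow[t] - window.mean()) / window.std()
    if abs(z) > THRESHOLD:
        print(f"t={t}: anomalous flow {flow[t]:.1f} L/min (z={z:.1f})")
        break
```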
The adoption of IoT in Smart Cities represents more than a technological advance: it is a strategic imperative for urban sustainability and raising the quality of life. Successful implementation of these technologies requires a holistic vision that transcends sectoral boundaries and is based on synergistic collaboration between public agencies, private businesses and local communities. Such collaboration is essential to building resilient infrastructure, promoting sustainable resource management, and ensuring a better quality of life for all citizens.
In this context, the integration of Artificial Intelligence (AI) with the IoT is proving crucial. AI amplifies the potential of the IoT through advanced algorithms that can analyze large volumes of data collected from sensors in real time. This continuous learning and improvement capability allows not only proactive optimization of urban operations, but also predicts trends and behaviors, significantly improving urban planning and response to unexpected events. For example, AI can predict energy or water demand spikes and automatically adjust resources to maximize efficiency and reduce waste. Similarly, embedded AI systems can improve public safety by analyzing video streams to recognize suspicious behavior or emergencies in real time, directing resources where they are most needed.
Adoption of this advanced technology also presents significant challenges, including the need to ensure citizen privacy and data security in an increasingly digital age. Addressing these issues with appropriate policies and regulations will be critical to maintaining public trust and promoting widespread adoption of IoT and AI technologies.
If you would like more information on the integration of Artificial Intelligence and the Internet of Things within city management, please contact us using the form at the bottom of this page.
Over the past decades, cities around the world have faced a number of growing challenges related to urbanization, including increased traffic, air pollution and loss of quality of life. In response to these challenges, urban planner Carlos Moreno has proposed the concept of the "15-minute city." This model, which aims to create more livable and sustainable communities, is gaining global attention. In this article we will explore the principles of the 15-minute city, its benefits, the challenges of implementing it, and some examples of cities that are adopting this approach.
The 15-minute city concept is based on four basic principles:
Proximity: all essential services, including work, education, shopping, health services, and recreational spaces, should be within a maximum of 15 minutes' walk or bicycle ride from one's home (a minimal distance check is sketched after this list). This reduces the need for long daily commutes.
Diversity: Urban areas should accommodate a variety of land uses. This diversity supports a range of economic and social activities, making communities more resilient and dynamic.
Inclusiveness: 15-minute cities should ensure equitable access to services for all communities, regardless of income, ethnicity or social status. This principle promotes equality and social cohesion.
Sustainability: Reducing car use and promoting more sustainable means of transportation, such as walking and cycling, decreases the environmental impact of cities, helping to combat climate change and improve air quality.
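To make the proximity criterion concrete, here is a minimal sketch assuming straight-line (haversine) distance and a conventional walking speed of 4.8 km/h; the coordinates are illustrative placeholders, and a real analysis would use street-network distances instead.

```python
# Is a service within a 15-minute walk? Haversine distance divided by
# an assumed walking speed. Coordinates and speed are illustrative.
from math import radians, sin, cos, asin, sqrt

WALK_KMH = 4.8  # common planning assumption for walking speed

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

home = (48.8566, 2.3522)      # placeholder: central Paris
pharmacy = (48.8625, 2.3430)  # placeholder service location

minutes = haversine_km(*home, *pharmacy) / WALK_KMH * 60
print(f"{minutes:.0f} min walk; within 15 minutes: {minutes <= 15}")
```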
Adopting the 15-minute city model brings with it a number of significant benefits:
Reduced traffic and pollution: with fewer people commuting by car, there is less traffic and, consequently, less air pollution. This improves air quality and reduces greenhouse gas emissions.
Health improvement: walking and cycling are physical activities that improve cardiovascular and mental health. In addition, having access to green spaces and sports facilities nearby promotes a more active lifestyle.
Social cohesion: 15-minute cities promote greater interaction among residents. The presence of public and community spaces facilitates socialization and strengthens neighborhood ties.
Economic resilience: diversification of local economic activities makes communities more resilient to economic crises. Local small businesses can thrive in an environment where residents shop and use nearby services.
Quality of life: reducing time spent on daily commuting frees up time for other activities, improving the overall well-being of citizens. The availability of services and infrastructure within a short distance makes daily life easier and more enjoyable.
Smart technologies play a crucial role in supporting the 15-minute city model, making urban areas more efficient, sustainable and livable. One of the key tools is the Internet of Things (IoT), which uses sensors to collect real-time data on traffic, air quality, energy consumption and service availability. This data can be analyzed to optimize urban resource management and improve citizens' quality of life.
Digital platforms are another key element, facilitating access to urban services. Mobile applications and online portals enable residents to book medical appointments, find local stores, attend community events, and use public transportation more efficiently. Smart mobility, which includes integrated public transportation systems and bike and car-sharing services, helps reduce dependence on private cars, thereby decreasing traffic and pollution.
Finally, artificial intelligence (AI) and machine learning technologies can be used to proactively predict and respond to city needs. For example, advanced algorithms can optimize public transportation routes, improve waste management, and increase urban security through smart surveillance.
Despite its many benefits, implementation of the 15-minute city model presents several challenges:
Planning and infrastructure: reorganizing existing cities to conform to the model requires a significant investment in urban planning and infrastructure. This includes creating bike lanes, safe sidewalks, and adequate public spaces.
Social equity: ensuring that all neighborhoods, including the least affluent, have equitable access to services is critical. This requires policies that prevent gentrification and promote social inclusion.
Cultural change: adopting the 15-minute city model requires a change in the habits and mindset of citizens, who may be accustomed to being car-dependent. Education and awareness are crucial to the success of the model.
Political and public support: The support of local and national governments, as well as the public, is essential for the implementation of the model. This can include incentives for the use of sustainable transportation and sustainability-oriented urban development policies.
Some cities are already taking concrete steps toward adopting the 15-minute city model:
Paris, France: Mayor Anne Hidalgo is a leading proponent of the 15-minute city model. Paris is implementing policies to reduce car traffic, increase green spaces, and improve accessibility to essential services. One example is the creation of "super-îlots" where car traffic is restricted and streets are reserved for pedestrians and cyclists.
Melbourne, Australia: the city has adopted the "Plan Melbourne" strategy, which calls for the development of "20-minute neighborhoods" within the metropolitan area. This plan aims to ensure that all residents can access essential services within a short radius of their homes.
Portland, United States: Portland is known for its sustainable urbanism policies and has begun to incorporate elements of the 15-minute city model into its urban development plan. This includes promoting walkable and bikeable neighborhoods, as well as encouraging local small businesses.
In her book "The Smart City and the Comfortable City," Giulia Agrosรฌ further explores the concepts of smart city and "comfortable city," exploring how digital technologies can be integrated into urban policies to improve quality of life. The concept of the "convenient city" aligns closely with that of the 15-minute city, emphasizing the importance of citizen-centered urban planning.
Agrosì discusses the use of smart technologies such as IoT sensors, digital platforms and smart mobility systems to create more efficient and sustainable cities. These tools can support the 15-minute city model by making urban services more accessible and improving urban resource management.
The book also places a strong emphasis on inclusiveness and sustainability, key issues for 15-minute cities. Agrosì emphasizes the importance of ensuring that all citizens, regardless of income or social status, have equal access to essential services. She also promotes the creation of green spaces and the adoption of sustainable building practices to improve quality of life and reduce environmental impact.
The 15-minute city model represents an innovative and sustainable vision for the future of urban areas. By promoting proximity, diversity, inclusiveness, and sustainability, this approach can significantly improve the quality of life for citizens, reduce the environmental impact of cities, and create more cohesive and resilient communities. However, its implementation requires careful planning, investment in infrastructure, and strong political and public commitment. With the right support, 15-minute cities could become the norm, transforming our cities into more livable and sustainable places.
If you would like to learn more about the 15-minute city or discover the technological innovations that can enable a transition to the Smart City, please contact us using the form at the bottom of this page.
Artificial intelligence has definitely entered the heart of industrial production to improve both the precision and efficiency of operations: from algorithms that predict plant maintenance to systems that optimize logistics, AI is becoming indispensable for those seeking to maintain and improve competitiveness in the marketplace. But this technology is also increasingly emerging as a valuable ally in managing a very delicate and strategic phase: quality control in manufacturing processes.
Through advanced machine learning algorithms for analyzing images of products on the production line, defects can be identified with higher accuracy and faster speed than traditional methods. The ability to detect, classify and locate various types of defects in a single image not only optimizes the production process, but also paves the way for broader automation of assembly lines.
The adoption of artificial intelligence tools for quality control offers a number of significant benefits that can transform production processes. Here is a list of those benefits:
Improved accuracy: AI can analyze images and data with accuracy that exceeds human accuracy, detecting tiny defects that might be missed during manual inspections;
Consistency and continuity: unlike human operators, AI systems are not subject to performance variations related to fatigue or other human factors;
Speeding up processes: AI algorithms can process and analyze large amounts of data in a very short time, significantly speeding up quality control and reducing production downtime;
Cost reduction: quality control automation can reduce labor-related costs and minimize losses from human error. In addition, by preventing production defects, rework and scrap costs are also reduced;
Improved traceability and reporting: AI systems can automatically store detailed inspection and analysis histories, facilitating quality monitoring and regulatory compliance;
Adaptability: AI models can be trained and reconfigured for different types of products or production lines, making the tools scalable and adaptable to different needs;
Defect prevention: in addition to detecting defects, some AI systems are capable of predicting potential problem areas before defects occur, enabling preventive interventions;
Integration with other systems: AI tools can be integrated with other industrial automation systems, such as predictive maintenance and supply chain management, for even more comprehensive quality control;
Continuous improvement: machine learning enables AI tools to continuously improve through learning from new data, progressively enhancing the effectiveness of quality control;
Customer satisfaction: Finally, by improving product quality and reducing the risk of defects, AI helps increase customer satisfaction and strengthen brand reputation.
Google's path in artificial intelligence for visual inspection is also based on the growing need to improve quality and efficiency in industrial production. Responding to these needs, Google Cloud has developed the Visual Inspection AI, a product that integrates years of research in computational vision and machine learning. The qualitative leap offered by this technology over traditional approaches is based on its ability to learn from a limited number of images, enabling effective implementation even where training data is scarce.
The heart of Visual Inspection AI lies in its sophisticated machine learning models, which are capable of processing and analyzing high-resolution images to identify defects. Users can upload reference images through a simple user interface, where they can also annotate defects to train the models. Once trained, the system can scan new images in real time, comparing them with the training dataset to detect anomalies. This process is not only extremely fast, but also incredibly scalable, adapting to various types of industrial production.
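Visual Inspection AI itself is configured through Google Cloud's console and tooling rather than hand-written training code, but the underlying idea can be sketched generically. Below is a minimal, illustrative defect classifier built by transfer learning in PyTorch; the synthetic tensors stand in for annotated production-line images and nothing here is part of any Google API.

```python
# Generic sketch of a binary defect classifier via transfer learning
# (not the Visual Inspection AI SDK). Synthetic tensors stand in for
# real, annotated production-line images.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                 # pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: ok / defect

images = torch.randn(16, 3, 224, 224)          # placeholder image batch
labels = torch.randint(0, 2, (16,))            # placeholder annotations

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                         # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")

model.eval()
with torch.no_grad():
    pred = model(images[:1]).argmax(dim=1)     # 0 = ok, 1 = defect
print("prediction:", pred.item())
```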
The applications of Visual Inspection AI are extensive and varied. In the automotive industry, it is used for detailed paint and assembly inspections, identifying scratches and dents not visible to the naked eye. In electronics manufacturing, it helps examine printed circuit board components to unearth microscopic defects that could cause failures. Similarly, in textiles and metallurgy, the system can detect imperfections in fabric weaving or metal welding, ensuring that finished products meet the highest quality standards. These applications demonstrate not only the adaptability of Visual Inspection AI to different manufacturing environments, but also its potential to significantly reduce quality control costs while maintaining high standards.
The potential of these technologies goes far beyond mere automation of repetitive tasks. AI offers the possibility of integrating intelligent systems that not only detect defects with pinpoint accuracy, but also continuously learn from experience, improving with each iteration. This ability to learn and adapt promises to raise manufacturing standards to unprecedented levels, ensuring higher quality products with a consistency that only artificial intelligence can offer.
In addition, the integration of AI into production processes paves the way for new forms of human-machine collaboration, where operators are supported by intelligent systems capable of preventing errors and optimizing performance. The synergy between human capabilities and artificial intelligence not only increases efficiency, but also enriches the workplace with new skills and stimulating challenges.
Looking ahead, we expect AI to not only change production methods, but also to redefine roles and responsibilities within factories. As industries adopt these technologies, we are seeing a revolution in the way products are designed, manufactured and inspected. The journey has just begun, and the possibilities are as vast as the imagination of the engineers and innovators leading this revolution.
If you would like to find out how Artificial Intelligence can support your company's supply chain and optimize the quality control phase of goods, contact us using the form at the bottom of this page.
I decided to write this article to bring order to the reflections and deductions that I have developed in recent months regarding the concept of bias. The need to bring order was born from the confusion that arose in me when I noticed that the term bias had become part of everyday language, sometimes taking on ambiguous meanings for me.
By writing this article I do not harbor any certainty or absolute truth. Indeed, I write it to note down what I think regarding bias and I do so, as always, publicly because I trust that it can turn into a useful opportunity for discussion with those who have the patience to read my reflections.
Before starting this reading, relax and repeat after me:
Without explainability, artificial intelligence is useless and dangerous
In this article I talk about bias and, as always, I prefer to start from the definition of bias to which I refer in my reflections.
Biases are the manifestation of systematic distortions of judgment resulting from cognitive heuristics of which we have lost control, i.e. mental shortcuts that we use to simplify the decision-making process and that have been pushed to such a level of trivialization of reality that they lose contact with the very reality from which they are generated, negatively impacting the decision-making model adopted by any intelligent agent, biological or artificial (Kahneman & Egan, 2011).
Biases can influence any decision model and render it ineffective. Even where we think we have built a decision-making model on bias-free heuristics, Tversky's studies demonstrate that biases take on a fundamental role in the analysis of reality, producing consequences that are not necessarily detectable, or not detectable in the short term.
The awareness of the structural and structuring role assumed by biases in heuristic decision-making processes paradoxically makes them a "false problem" of the processes themselves. A heuristic model built on biases that are admissible and functional to the purpose of the model does not make the model itself free of bias. A decision-making process in which no dangerous, macroscopic distortions of reality appear leads me to think that the biases present in the model are invisible to our analysis yet effective in influencing the decision-making process. A well-orchestrated constellation of biases plays the same role in a decision-making process as the small weights used to balance the wheels of our cars: individually almost imperceptible, beyond a certain threshold they exert a powerful conditioning on the system. The existence of this conditioning process was attested by Alexander Nix, CEO of Cambridge Analytica, in his speech "From Mad Men to Math Men" presented at the Online Marketing Rockstars conference held in Hamburg in 2017. The potentially cataclysmic force of this conditioning was tested through the psychometric conditioning that Cambridge Analytica carried out during the 2010 general elections in Trinidad and Tobago, in favor of the United National Congress (UNC), through the "Do So" campaign.
The analysis of a decision-making model must therefore not stop at identifying obvious biases, such as racial or gender bias, but must be able to understand how much the way the individual results of the analysis are delivered can generate strategic micro-conditioning similar to that produced by AlphaGo with move 37.
The awareness that biases are not the real problem of a decision-making model is also given to me by the fact that biases are not the cause of an anomaly but only and always a mere consequence of the latter. To be clear, a bias is to an anomaly what the flu is to an infection: it's just a symptom.
Stating that a decision-making system is affected by bias is obvious to me as the entire decision-making process is almost always based on heuristics. At the same time, talking about bias is also an admission of inadequacy. The inadequacy is determined by the fact that treating a bias is the equivalent of a symptomatic treatment caused by the inability to understand the origin of the anomaly and/or to correct the anomaly itself.
Artificial intelligence systems are not free from bias, because these systems also operate through clustering and abstraction processes that are based on distortions admissible and functional to the analysis.
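To make this concrete, here is a minimal sketch (my illustration, not a reference implementation) of clustering as machine abstraction, using scikit-learn's KMeans: the number of clusters we impose is itself an admissible, functional distortion, and the exceptions simply disappear into the nearest category.

```python
# A minimal sketch: clustering as machine abstraction. The number of
# clusters (k=2) is an assumption baked into the model; outliers are
# forced into the nearest "category", as exceptions vanish in abstraction.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two natural groups of points plus a handful of outliers.
data = np.vstack([rng.normal(0, 1, (50, 2)),
                  rng.normal(6, 1, (50, 2)),
                  rng.uniform(-4, 10, (5, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(np.bincount(labels))  # every point is assigned a group, outliers included
```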
In this article I explain, with a step-by-step approach, the logical path that led me to my conclusions, already summarized in this introduction: mitigating the risk generated by cognitive dynamics that manifest in the form of bias does not exclude the presence of other biases whose impact is equally serious but not immediately detectable by our capacity for evaluation.
I have always been fascinated by the mechanisms with which the mind analyzes the world and human relationships. I devoured the TV series "Brain Games" by National Geographic and the essay by Sergio Della Sala and Michaela Dewar entitled "Never trust the mind" which, through experiments bordering on magic, show us how unknown the human brain still is, and how much the mechanisms that govern it in the daily effort of analysis and adaptation are linked to errors, illusions of thought, inconsistencies of mental processes and imperfections of memory that lead to the development of real decision-making shortcuts.
Decision-making shortcuts are the strategy our brain uses to save energy. They manifest themselves every time we are faced with challenges, problems and decisions to make for which we prefer to adopt a "heuristic" approach, that is, an approach that makes use of generalizations, empirical rules and assumptions.
The heuristic approach is a decision-making model that is based on a set of strategies, techniques and creative processes that help us find solutions more quickly and easily. With this approach, decisions are made considering a limited number of alternatives, with partial awareness of the consequences of each of these. This process is driven by "heuristics", which are rules of thumb used to solve problems or do certain types of calculations and based on the knowledge that available information is never perfect and that human abilities are limited and fallible. As the psychiatrist Mauro Maldonato says: "Unlike formal calculation, heuristics are an immediate solution."
The strategies, techniques and creative processes that make up the heuristic approach are useful distortions of reality. These distortions simplify the analysis of the facts and aim to provide a subjective view based on the knowledge that we can recognize only a limited number of alternatives and are aware of only some of the consequences of each alternative. In most cases, these distortions allow us to interpret and, where possible, predict reality quickly and effectively.
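A toy example may help clarify this trade-off. The sketch below is an illustration of the general principle only, not of any specific psychological model: a greedy shortcut considers just a few alternatives and answers immediately, accepting a possibly suboptimal result, while the exhaustive search is exact but pays a factorial cost.

```python
# A minimal sketch: a greedy heuristic versus exhaustive search on a tiny
# route-planning problem (visit all cities once, no return to start).
from itertools import permutations

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4)}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def route_length(route):
    return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

# Heuristic: always go to the nearest unvisited city (few alternatives considered).
def greedy(start="A"):
    route, left = [start], set(cities) - {start}
    while left:
        nxt = min(left, key=lambda c: dist(cities[route[-1]], cities[c]))
        route.append(nxt)
        left.remove(nxt)
    return route

# Exhaustive: evaluate every possible ordering (exact, but factorial cost).
best = min((["A"] + list(p) for p in permutations(set(cities) - {"A"})),
           key=route_length)

print("greedy:", greedy(), round(route_length(greedy()), 2))
print("exact :", best, round(route_length(best), 2))
```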
The processes of simplification of reality are based on schemes and categories with which the knowledge we use in the processes of perception, memory and thought is organized.
The schemes and categories that we use to organize our knowledge describe people, objects and events through only the characterizing, common or most frequent details, excluding anything that can be traced back to a specific phenomenal manifestation.
Knowledge schemas are based on associations that are immediately available to our awareness and represent what is most common or considered typical. To be clear, when I talk about the beauty of dogs, no one thinks of the beauty of the Bracco Italiano or the Spinone Italiano because everyone thinks of the generic and subjective image of the dog that has been built up over the years.
Knowledge schemes are fundamental for a correct classification of the world, which necessarily requires an abstraction process with which non-identical elements are grouped into the same phenomenal category.
Abstraction processes are fundamental for simplifying the processes of understanding and adaptation. We can say that they are the basis of the mechanisms that govern survival and evolution.
Without an abstraction process we would be incapable of making decisions, because each phenomenon would produce a separate element that could not be compared with similar ones. We would develop the "environmental dependence syndrome" (Lhermitte, 1986), which makes one unable to inhibit actions stimulated by any single input. In such a condition there would be no conifers, nor the single species of which they are composed (e.g. Scots pine, larch, fir, spruce), but only the single tree, different from every other down to the characteristic assumed by each single leaf.
Although the importance of abstraction processes is universally shared, it should be remembered that abstractions do not take exceptions or diversity into consideration. For this reason, when we talk about Africans we do not think of the white-skinned African population, even though it exists.
This tendency of schemes to generalize and exclude exceptions leads to prejudice when we do not have sufficient information on what we are talking about.
The simplification processes at the basis of the heuristic model have an important flaw: their only limit is the common sense of those who apply them. For this reason, in some cases, the heuristic process goes beyond the simple simplification of reality and generates real banalizations, from which arise preconceptions which, although derived from reality, no longer retain any objective link with it.
The trivialization of reality leads to the development of preconceptions which reverberate in decision-making processes through inevitable errors of evaluation of varying severity. Such errors, regardless of their nature, are generically called "cognitive biases" or, more simply, "biases".
Cognitive biases are systematic errors of thought that, by causing us to deviate from logic or rationality, influence the way we perceive reality, make decisions and the way we formulate our judgments.
The difference between biases and heuristics is, therefore, that heuristics are convenient and quick shortcuts closely linked to reality, which lead to quick conclusions, whereas cognitive biases are shortcuts that manifest themselves through prejudices that have lost all connection with reality and are acquired, in most cases, without critical spirit or judgment.
It is not easy to understand at what point a simplification process turns into a trivialization from which a cognitive bias arises. I believe it is impossible to set a threshold beyond which we can say that a simplification process has escaped our control to the point of being dysfunctional to the decision-making process. For this reason, perhaps, we become aware of a bias only once the decision-making process has manifested its effects on the environment and on people.
A world of absolute uniqueness, in which it is not possible to create groups through processes of abstraction, is a world in which any form of intelligent life is impossible. As unreasonable as it may seem, organizing knowledge by schemes, and by the abstractions derived from them, is common to all intelligent or teleological agents, even those of an alien (non-human) nature. For my dog, birds are birds regardless of whether they fall within the species for which he was selected and trained to hunt. One could argue that my dog chases everything that moves out of pure predatory instinct. However, his reluctance to prey on objects unknown to him is common to all other dogs. I still remember how he behaved when he saw, for the first time in his life, a balloon rolling on the floor, moved by the wind, and how he behaved in subsequent encounters with this environmental phenomenology.
Abstractions are not lacking even in plant intelligence, which implements clustering schemes in learning and adaptation processes. A testimony to this ability is given by Stefano Mancuso through his observations on the evidence collected by the French naturalist and botanist Lamarck (1744-1829) about the behaviors that the mimosa pudica, so called because it closes its leaves as soon as it is touched, implements in a presumable attempt to defend itself against herbivores.
A specific position is, in my opinion, occupied by Artificial Intelligence (AI) systems which, although not life forms, operate as teleological agents, and do so by implementing abstraction and classification processes not dissimilar to those produced by other living species. As noted by Nello Cristianini, every time an AI system implements a classification, it does so with a theoretical construct of its own, based on its own form of intelligence.
It is not possible to know the characteristics of the knowledge schemes that an AI adopts to distinguish a dog from a cat or to classify the world. If we could ever find out, we would probably discover that they have nothing to do with our criteria based on human sensory data. I would not be surprised to find in an AI a classification similar to the one proposed by Jorge Luis Borges, in which the animal world is divided into:
(a) those that belong to the Emperor;
(b) embalmed ones;
(c) those that are trained;
(d) suckling pigs;
(e) mermaids;
(f) fabulous ones;
(g) stray dogs;
(h) those included in this classification;
(i) those that tremble as if they were mad;
(j) innumerable ones;
(k) those drawn with a very fine camel-hair brush;
(l) et cetera;
(m) those that have just broken the flower vase;
(n) those that from afar look like flies.
The issue of biases that manifest themselves in AI systems is much more complex if we consider that the statistical correlations used in abstraction processes are often, if not always, defined on data which, in addition to being naturally affected by bias, may hide weakly correlated statistical links that are not evident to humans and capable of generating negative effects on the analysis and decision-making process. To understand the importance of weak correlations and their danger, I report a fine definition produced by the Ammagamma team and disseminated, in my opinion well, by David Bevilacqua: "the [weak correlations are] weaker relationships between the variables which influence a phenomenon [and are] difficult to read and interpret. Our mind is not able to grasp them, unlike strong correlations, but by equipping ourselves with a mathematical model it is possible to identify them [and use them to our advantage]". The awareness of the importance that weak correlations assume in the abstraction processes generated by an AI also comes from the studies conducted by James Pennebaker, which demonstrate the feasibility of a psychometric segmentation of a user from the linguistic structure adopted in the exposition of their opinions alone. Thanks to these studies and to weak correlations, Facebook can cluster groups of people starting only from the likes expressed on users' images and public posts.
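To give a feel for how weak correlations work, here is a minimal sketch under simplified assumptions (synthetic data, an ordinary logistic regression; not the Ammagamma or Pennebaker models): two hundred features, each almost useless on its own, become strongly predictive once a simple mathematical model aggregates them.

```python
# A minimal sketch: many individually weak signals, useless one by one,
# become strongly predictive when a model combines them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 200

# Each feature correlates only weakly with the target (tiny coefficients).
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(scale=0.08, size=n_features)  # weak, human-invisible effects
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_w)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Any single feature is nearly useless on its own...
single = LogisticRegression().fit(X_tr[:, [0]], y_tr)
print("one feature AUC:", roc_auc_score(y_te, single.predict_proba(X_te[:, [0]])[:, 1]))

# ...but a model that aggregates all the weak correlations predicts well.
full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("all features AUC:", roc_auc_score(y_te, full.predict_proba(X_te)[:, 1]))
```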
Recognizing the existence of abstraction processes in every intelligent agent allows us to understand that biases can be present in every heuristic process, regardless of the nature of the agent that brings it into being. Furthermore, I find Borges's provocation a useful tool for understanding that our principles of classification and ordering of the world are anything but obvious and natural, since it is possible to hypothesize infinitely many other ways of organizing the objects of our experience, such as, for example, the paradoxical one reported above.
At this point in my reasoning, it is best for me to give a brief summary of what I have attempted to explain so far.
Point 1 - Heuristic processes are based on simplifications of reality which, even if functional to achieving the result, are the matrix from which biases arise.
Point 2 - Biases, being linked to simplification processes, are not the result of a specific level of abstraction but, rather, of a limit determined only by the level of unreliability that our common sense deems admissible in our cognitive and decision-making processes. In these terms, bias is present in every heuristic process, every time we deviate from objective data.
Point 3 - Simplification processes are necessary to implement the abstraction processes that allow us to understand the world regardless of specific phenomenal manifestations. I have also found this capacity for abstraction in agents endowed with intelligence alien to ours.
Bias, understood as a deviated form of the simplification and abstraction mechanisms, is present in every heuristic process, because it is by adopting one or more shortcuts that we avoid a logical-scientific approach, which is always very expensive in terms of computing resources and of time for acquiring and verifying data.
The presence of bias in all heuristic processes is also demonstrated by the experiment carried out by the psychologist Emily Pronin who, in 2002, described the "bias blind spot": the natural inclination to consider ourselves always more objective than anyone else. Another demonstration of the bias-heuristics link comes from the psychologist Paolo Legrenzi and the neurologist Carlo Umiltà who, in the book "Many unconscious for a brain", write:
"Given the enormous flow of information, we tend to select the information we already know, that with which we agree, that which we can better assimilate thanks to mental patterns and categories that are familiar to us and already consolidated. Furthermore, we are inclined to share this information with those who think like us and with those who, we know, will appreciate it because they think like us. These new forms of life give rise to a sort of collective unconscious which translates into the radicalization of people's opinions. Individuals are comforted by sharing a current of opinion that is simple, clear, and requires low cognitive and attentional effort."
The role of biases in cognitive processes has led to their careful classification which, in the absence of taxonomic proposals or reference models, has generated over the years a list of more than 150 items divided into four macro-areas.
With such a large list of items, I find it obvious to consider biases as an inseparable part of heuristics, despite the fact that in some cases they become the element that shows the fallacy of some simplification/abstraction processes.
Nobody likes the idea that heuristic processes are based on more or less effective biases, because it implies that every choice is always wrong or, if you prefer, right until proven otherwise. This scenario, however, is not as deplorable as it seems, since it is precisely thanks to biases that it is possible to accelerate analysis, improve the detection of the critical factors of a choice in changing or uncertain situations, and arrive at a leaner decision-making model. This is because bias is closely linked to the schemes and categories with which the knowledge underlying the processes of perception, memory and thought is organized.
Heuristics are necessarily based on biases even if, in most circumstances, these biases have no harmful effects on the context or object of our decisions. In such a condition, however much we may dislike it, it no longer makes sense to ask whether a decision is made on the basis of a model whose mechanisms show bias. Rather, we need to ask what relevance the biases certainly present in the current decision-making process assume. In essence, since choices are always based on errors of evaluation, we should focus on the distinction between serious errors and seemingly irrelevant errors whose effects are only apparently of low impact.
The vision of bias as the problem that explains the anomaly revealed downstream of a decision-making process is misleading, because it transforms bias from the effect of an anomaly into its cause. Biases are always the symptom of a problem affecting the decision-making model and, for this reason, they do not exist except as a distorted manifestation of a cognitive process. To be clearer, I do not think it is correct to say that the anomaly found in a decision-making process can be traced back to a bias or is produced by a bias. When faced with an anomaly, we should rather say that the problem from which it is generated manifests itself in the form of one or more biases.
The interpretation of bias as the symptomatic manifestation of a cognitive problem calls for some reflections. The first is that correcting the anomaly does not mean correcting the biases through which the anomaly manifests itself (that would be like lowering the fever instead of curing the infection). The second is that finding an anomaly through one or more biases does not mean that it has not produced other anomalies of equal importance that have not yet been identified.
A bias understood as an effect, and not as a cause, requires an approach completely different from the one currently adopted for analyzing the efficiency of AI systems: one must be able to identify which abstraction process generated the anomaly that we identify as bias. To carry out such an analysis of causes, one must know in detail the parameters that contribute to generating the decision and, even more, which combination of "weak signals" produces the specific statistical signature behind the anomaly. To date, there is no way to be certain of how a model generated a given response.
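As a rough illustration of what such an analysis could look like in practice (my sketch, not an established audit procedure), the code below probes a trained model with permutation importance: it ranks the signals the decision leans on, which is still a measure of influence, not a diagnosis of the underlying cause.

```python
# A minimal sketch: which input signals does a trained model's decision
# lean on? Shuffling one feature at a time and measuring the performance
# drop reveals influence, not the abstraction process behind the anomaly.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```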
The presence of biases is, therefore, not useful in explaining anything, since biases are present in every heuristic process, both in analyses whose outcomes conform to expectations and in those that do not produce the expected results.
Saying that a decision-making system is affected by bias means knowing why the bias was generated, why it was not avoided, and why correcting the anomaly will not generate a different one, in a chase that recalls the complexity of the first Microsoft service packs of the 1990s.
But what if bias were instead a peculiarity of the decision-making model? A decision model is always focused on a limited set of data. This means that, even in the absence of extreme discrimination phenomena, we would still find ourselves in a context in which we cannot exclude the presence of a bias worthy of AlphaGo's famous "Move 37", whose effects, like a slow-acting poison, show themselves at a time and in a way that make it impossible to understand the origin of the disease and to identify an adequate cure.
Without a decision-making system based on selective attention, we would be at the mercy of environmental stimuli, even those irrelevant or inconsistent with what we are doing. We might think of being supported by an AI in reading the data but, even in this case, no one could exclude that the supporting agent might itself fall victim to the complexity of the world, developing the technological equivalent of the "environmental dependence syndrome" (Lhermitte, 1986) which makes one unable to inhibit actions stimulated by any single input.
Article source: LinkedIn article by Vincenzo Gioia
In an age when digital permeates every aspect of our daily lives, it is surprising to discover how many companies continue to rely on outdated and ineffective document management systems. A great many companies, still today, have trucks full of paper documents such as shipping bills and contracts traveling from one location to another, often hundreds of miles apart: a veritable mountain of paper that then has to be hand-scanned and digitized before being returned to the sender.
And we are not talking about small businesses, far from it. Not only is this process expensive and slow, it also has a significant environmental impact and a high risk of human error.
But there is also good news, and that is that the technology to transform this situation exists and is absolutely established: automated document management, supported by Intelligent Document Processing (IDP) solutions and technologies such as OCR, NLP, ML, and AI, now offers an alternative that provides efficiency, security, and sustainability.
Paper-based document management is a traditional method that many companies still use to store, retrieve, and manage information. However, this method has many disadvantages. Let us look at them together:
High costs: physical document management results in high printing, storage, and transportation costs;
Reduced efficiency: time spent retrieving and managing physical documents can significantly reduce employee productivity;
High risk of error: manual document management is susceptible to errors, which can be costly and time-consuming to detect and correct;
Environmental impact: the production, transportation and disposal of paper documents have a significant impact on the environment.
Adopting an automated document management system consequently offers numerous benefits:
Cost reduction: eliminating the need for physical materials and reducing the staff required for document management can mean huge savings;
Improved efficiency: automation allows documents to be processed at much higher speeds than manual handling, thus improving overall productivity;
Enhanced security: digital solutions offer significant improvements in document security, including reducing the risk of loss, theft or damage;
Accessibility and retrieval: digital documents can be easily stored and retrieved from centralized databases, improving accessibility and reducing search time;
Environmental sustainability: minimizing paper use helps reduce tree cutting and CO2 emissions associated with paper production and transportation.
The transformation from a paper-based to a digital, automated system is facilitated by the adoption of several advanced technologies:
Optical Character Recognition (OCR): enables the conversion of printed or handwritten text into editable digital data;
Natural Language Processing (NLP): helps understand and interpret human language within documents;
Machine Learning (ML) and Artificial Intelligence (AI): are used to automate the process of classifying, categorizing, and analyzing documents;
Large Language Models (LLM) and Retrieval-Augmented Generation (RAG): offer powerful tools for improving the interpretation and generation of document content.
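As an illustration of how these pieces can fit together, here is a minimal sketch of such a pipeline. The library choices (the pytesseract OCR wrapper, a generic zero-shot classifier from Hugging Face transformers) and the document categories are assumptions chosen for the example, not a description of any specific product.

```python
# A minimal illustrative IDP sketch: OCR a scanned page, then let an
# off-the-shelf model route the document. Libraries and labels are
# example choices, not a specific product.
from PIL import Image
import pytesseract
from transformers import pipeline

# Zero-shot classification avoids training a custom model for routing.
classifier = pipeline("zero-shot-classification")
LABELS = ["shipping bill", "contract", "invoice"]  # illustrative categories

def process_document(path: str) -> dict:
    # 1. OCR: convert the scanned image into machine-readable text.
    text = pytesseract.image_to_string(Image.open(path))
    # 2. Classification: route the document based on its content.
    result = classifier(text[:1000], candidate_labels=LABELS)
    return {"text": text, "type": result["labels"][0]}

# Example usage with a hypothetical scan:
# print(process_document("scan_0001.png")["type"])
```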
Transitioning from paper to digital management is not only a strategic move to reduce costs and improve efficiency, it is also an ethical imperative to promote environmental sustainability. Existing technologies offer the solutions needed to achieve this transformation, making the document management process more secure, faster and less costly.
If you would like to learn more about automated document management or discover the steps needed to digitize and automate this crucial process within your company, please contact us using the form at the bottom of this page.