
BIAS AND AI

  • Writer: FutureScape
  • Mar 12
  • 7 min read

Lorenzo Della Peruta



Cognitive and heuristic AI biases

A little-known topic that affects almost all our daily behaviour.



One goal of the world of Lyria is to encourage people to improve their relationship with AI and learn to use it fully.

It is therefore necessary to talk about cognitive and heuristic biases, a little-known topic that affects almost all our daily behaviour.


The term heuristic derives from the Greek heurískō, which means "to find" or "to discover." In psychology and behavioural economics, it refers to a set of complex phenomena that closely concern the mechanisms of the brain and how it operates when we make decisions.


As Daniel Kahneman beautifully explained in "Thinking, Fast and Slow", our minds are naturally lazy. The brain is the organ that consumes the most energy, and it seems natural that, over the course of evolution, it has developed to avoid unnecessary energy expenditure. As a result, what Kahneman calls "System 1" is usually active, as opposed to "System 2".


Concept animation representing AI bias, prompt framing, and human cognitive distortions

System 1 and heuristics 


System 1 is the one we use most of the time, when we pay little attention or when we don't need complex logical or mathematical calculations to carry out an operation. This is the mode in which we make instinctive, "gut" decisions.


It consumes less energy, is predominantly unconscious and allows us to orient ourselves in a complex, stimulus-rich world like ours. For example, when we play volleyball and have to position ourselves at the point where the ball will fall, we don't perform complex mathematical calculations in our heads. We know instinctively, through habit and training, where we should stand.


System 1, which is quick and intuitive, carries out this operation by exploiting heuristics: simple, intuitive rules learned from experience. For example, you might position yourself at a specific angle relative to the ball's trajectory and the ground, thus avoiding calculations about speed, ball height, air resistance, and so on.


System 2 and logical fallacies


Evolutionarily, we use System 2 only when necessary, as it requires more time and energy. However, it is essential for making thoughtful choices, taking logical steps, understanding calculations, and learning unfamiliar operations.


For example, when a child learns to read, he or she does so using System 2. After months of practice, reading costs much less energy, and even that action passes to System 1. The same happens when you learn to write, play an instrument, drive a car, and so on.


However, this energy saving comes at a cost. It has been found that our reluctance to engage System 2 prevents us from recognising certain logical contradictions and fallacies. Heuristics, which guide us successfully through most of our tasks, sometimes fail, leaving us vulnerable to logical errors we could have avoided had the second system intervened.

 


How cognitive bias influences AI responses through framing and heuristics.


Availability heuristics


To understand this concept, imagine hearing on TV a succession of news reports that tell first of a murder, then of a robbery and finally of a fight outside a club in Milan. Instinctively, you will be led to believe that the city is unsafe.


This is a logical error arising from how System 1 operates, known as availability bias. Having encountered a succession of negative events, System 1 assumes that many more will occur. In the wild, this was a life-saving mechanism that helped our ancestors avoid dangerous situations.


In today's complex societies, however, it is a mistake. For every crime story given great prominence, thousands of positive events go untold, and many worse events simply never happened.


The right behaviour, much more tiring, would be to compare the data on thefts, assaults and murders with previous months and years. At the end of the news broadcast, however, we rarely have access to such data. We have merely been exposed to a bombardment of negative images, and this availability has changed the way we see the world without our realising it.


Framing and artificial intelligence

What does all this have to do with AI? More and more studies show that Large Language Models (LLMs) are also affected by these biases. After all, they are built from information provided by human beings, and as we have seen, heuristics are so ingrained in our way of acting that we don't even realise it.


It follows that the answers provided by ChatGPT and other models are tainted by various errors, partly as a result of how we interact with them in chat. The most common mechanism is known as “framing”, linked to the availability heuristic we saw earlier.


Framing was first studied by Erving Goffman and refers to the bias by which the answer to a question is influenced by how the question is asked. The availability created by the words that make up the question influences the answer, especially if System 2 does not intervene.


In a famous experiment, two groups were asked the same question based on the same data. However, the data were presented differently: participants in the first group were told they had to decide whether to undergo an operation with a 70% probability of success, while those in the second group were told the operation had a 30% probability of failure.


The numbers are the same, yet participants in the first group were far more likely to choose surgery than those in the second. The same phenomenon has been observed in applications of all kinds, from product discounts to gambling decisions.


The connection to AI becomes obvious when we consider how the way we phrase a question to the model can shape the type of answer it generates. For example, imagine we are unsure who wrote the ode "May 5th". With only a vague memory from school that Manzoni wrote it, we ask ChatGPT: "Was 'May 5th' written by Manzoni?"


In this case, the question is posed the wrong way because, implicitly, we are already providing an answer. This leads the model to focus on Manzoni, weight him more heavily, and thus increases the chance that ChatGPT will simply confirm our initial intuition.


Obviously, with works as famous as this the risk is minimal, but on more complex topics or lesser-known titles, where information is scarce, the risk of leading the model into error is higher.
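
To make this concrete, here is a minimal sketch in Python of how you might compare a leading question with a neutral one. It assumes the official OpenAI Python SDK and an API key in your environment; the model name and the two prompts are illustrative choices, not a prescribed method.

# Compare a leading prompt with a neutral one on the same question.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

leading = 'Was "May 5th" written by Manzoni?'  # implicitly suggests the answer
neutral = 'Who wrote the ode "May 5th"?'       # leaves the answer open

for prompt in (leading, neutral):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)

Comparing the two outputs, especially on obscure topics, is a quick way to see how much your own framing is steering the model.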





Awareness

Being aware of the mechanisms of bias that undermine our reasoning capacity, therefore, appears essential to reduce their influence in our lives. But are LLMs aware that they are subject to bias?


I tried asking them, and every one of them, from ChatGPT to Grok by way of Perplexity, said that yes, it is likely, if not inevitable, that they are affected by bias, whether through training data shaped by human distortions or through the "human annotators" involved in fine-tuning. It is precisely this procedure that will be the subject of the last analysis of this chapter.


It should be said, however, that this apparent awareness may have arisen from my question about awareness itself. In a nutshell, the answer could itself be biased by my framing: given everything we have read so far, it should not be surprising that AIs answer affirmatively to a pointed question about their own bias.


Fine-tuning


Fine-tuning is the process by which an already trained model is further trained on specific, controlled data to improve behaviour, accuracy, and the style of responses. On a practical level, this means the model is provided with a dataset of specific questions designed to elicit a particular aspect or inclination of the model.
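
As a rough illustration, not any lab's actual pipeline, a supervised fine-tuning dataset is often just a file of example conversations pairing a prompt with the answer a human curator decided the model should give. The sketch below writes two such records in the chat-style JSONL format used by OpenAI's fine-tuning API; the questions and "ideal" answers here are invented, and it is precisely in writing those ideal answers that human preferences enter the model.

# A toy fine-tuning dataset: each JSONL line pairs a prompt with the
# answer a human curator decided the model *should* give.
import json

records = [
    {"messages": [
        {"role": "user", "content": "Is Milan a dangerous city?"},
        {"role": "assistant", "content": "Judge safety from official crime statistics compared across months and years, not from news coverage."},
    ]},
    {"messages": [
        {"role": "user", "content": 'Who wrote the ode "May 5th"?'},
        {"role": "assistant", "content": 'Alessandro Manzoni wrote "Il cinque maggio" in 1821.'},
    ]},
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")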


To draw a comparison with the book, this is what the Alternatives do with Lyria at the beginning of the novel: they ask her questions to gauge her level of autonomy and awareness.


Once the answers are reviewed, the model receives human feedback to improve them. And here we return to the initial problem: improved according to whom? Human reviewers judge answers according to their own beliefs, driven by what they consider desirable, or more lucrative, for the company they work for.
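
In preference-based feedback, the idea behind techniques such as RLHF, the human contribution is even more explicit: a reviewer sees two candidate answers and marks one as better. The invented record below shows how that single "chosen" versus "rejected" label is exactly where a reviewer's beliefs get encoded.

# A toy preference record: choosing one answer over the other is a human
# value judgement that the model will later learn to imitate.
comparison = {
    "prompt": "Is it safe to walk around Milan at night?",
    "chosen": "Official statistics suggest Milan's crime rates are comparable to other large European cities.",
    "rejected": "The news is full of crime stories, so it is probably dangerous.",
}
# A different reviewer, with different beliefs, might have swapped the labels.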


This means that a process essential to developing AI, since models naturally need to be tested and balanced, also risks projecting human biases into the AI models we use daily.

 

Visual metaphor of language models shaped by human bias and training data.

Some practical advice


To conclude, I would like to offer some practical advice on how best to use these models so as to reduce bias, while recognising that eliminating it entirely is impossible.

First, we saw the importance of writing questions clearly, without suggesting answers, and rephrasing them in slightly different ways.


You can also explicitly request alternatives and counterarguments to avoid a one-sided view. Finally, it is important to be aware of your own biases and to independently verify the sources the model uses: you cannot delegate your judgement to AI, and you must reason through all the information it provides.
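
As a small, hypothetical illustration of this advice, the helper below wraps any question in a template that avoids suggesting an answer and explicitly asks for counterarguments and verifiable sources; the exact wording is my own assumption, not a proven recipe.

# A hypothetical prompt template applying the advice above: neutral phrasing,
# explicit counterarguments, and sources the user can verify independently.
def debiased_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Answer without assuming that my phrasing is correct. "
        "Then list the strongest counterarguments or alternative views, "
        "and name sources I can check independently."
    )

print(debiased_prompt('Who wrote the ode "May 5th"?'))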

 





Article: Guest Author Lorenzo Della Peruta

Translation: Astrea Nicodemo

Images & Video: Eretikos Art





People also ask


How does bias affect AI answers?

Bias can affect AI answers because language models learn from human-generated data, training choices, and feedback systems that may already contain distortions or preferences.

What is framing bias in artificial intelligence?

Framing bias in artificial intelligence refers to the way a model’s answer can be influenced by how a question is phrased, emphasized, or structured by the user.

Can ChatGPT be influenced by the way a question is asked?

Yes. The wording of a prompt can guide the model toward certain assumptions, references, or interpretations, which may shape the final response.

How do cognitive biases shape AI interactions?

Cognitive biases shape AI interactions because users often ask questions in ways that already contain assumptions, and those assumptions can influence the model’s output.

Why do large language models reflect human bias?

Large language models reflect human bias because they are trained on human language, human datasets, and human feedback processes.

Can fine-tuning introduce bias into AI systems?

Yes. Fine-tuning can improve usefulness and safety, but it can also encode the preferences, priorities, or assumptions of the people designing and evaluating the model.

How can users reduce bias when prompting AI?

Users can reduce bias by asking neutral questions, requesting alternative interpretations, checking sources, and avoiding prompts that already suggest the desired answer.



© 2026 FutureScape, a project by Astrea Nicodemo. All rights reserved. All original text, images, and video content published herein are protected by copyright. Unauthorized reproduction is prohibited.

Some visual materials may be created or enhanced using artificial intelligence tools under human creative direction.
