10 Bias Recap

After one semester of bias research, I want to do a short recap of everything I came across. So here is a condensed version of all the things I found out:

What is a Bias?

Bias refers to a tendency to favor or oppose something based on personal opinions rather than objective reasoning. While biases can be explicit (conscious and intentional) or implicit (unconscious and automatic), they often stem from cognitive shortcuts known as heuristics. These shortcuts help our brains process information efficiently but can also lead to misinterpretations and irrational decisions. Cognitive biases, in particular, shape how we perceive reality, causing individuals to interpret the same facts differently. They develop early in life through personal experiences, societal influences, and media exposure, reinforcing both positive and negative associations.

Bias subtly affects decision-making in various aspects of life, from personal interactions to professional settings. Research shows that even trained professionals, such as scientists and hiring managers, exhibit unconscious biases, leading to disparities in employment opportunities. Implicit biases influence perceptions of competence, trustworthiness, and fairness, often without individuals realizing it. Acknowledging these biases is essential for reducing their impact and fostering more objective and equitable decision-making.

The Cognitive Bias Codex

The Cognitive Bias Codex by Buster Benson provides a comprehensive overview of over 200 cognitive biases, grouped into four categories to help us understand how our brains process information. One bias worth highlighting is the Bias Blind Spot, which refers to our tendency to think we’re less biased than others. This is especially relevant for UX design, where designers might overlook their own biases and assume their design decisions are universally valid. Other biases like Confirmation Bias, which makes us favor information that supports our existing beliefs, and Availability Heuristic, which makes us judge the likelihood of events based on what comes to mind most easily, can also influence how users engage with design elements.

In addition to these, biases such as the Mere-Exposure Effect, where familiarity breeds preference, and Anchoring, where initial information anchors subsequent judgments, can significantly shape how users make decisions. These mental shortcuts help us navigate the world more efficiently, but they can also distort our thinking. By understanding these biases, we can better design user experiences that acknowledge these cognitive filters, creating interfaces that allow for more informed, balanced decision-making. Ultimately, the Codex is a reminder that recognizing our biases is the first step towards making better choices—both in design and in life.

Common Biases in (UX) Design

Biases in UX design can subtly influence how designers create, research, and test products. Common biases include Confirmation Bias (seeking data that aligns with assumptions), False-Consensus Effect (assuming users think like designers), and Recency Bias (overweighting recent feedback). Anchoring Bias occurs when initial information overly influences decisions, while Social Desirability Bias can distort user research, and Sunk Cost Fallacy keeps designers committed to failing ideas.

To spot biases, review your assumptions and ensure decisions are based on data, not personal opinion. Involve diverse perspectives and conduct usability tests with varied users to uncover blind spots. Documenting your reasoning can also help identify biases. By recognizing and addressing these biases, designers can create more inclusive, user-centered designs.

Advantages of Biases

Biases are often seen as negative, but they serve important cognitive functions. They help us make quick decisions by filtering information efficiently, improving focus, and enhancing productivity in work and learning. Biases also support social connections by fostering trust and teamwork, aid in pattern recognition for faster learning, and boost motivation by reinforcing commitment to long-term goals. Additionally, they play a key role in survival, helping individuals assess risks and stay cautious in uncertain situations.

While biases can lead to errors, they also provide valuable benefits. By enabling efficient decision-making, strengthening social bonds, enhancing learning, and ensuring safety, they function as essential mental shortcuts. Recognizing their advantages allows for a more balanced perspective on their role in daily life.

Bias in AI

AI is transforming industries, including UX design, by automating processes, analyzing user data, and enhancing efficiency. However, AI is only as unbiased as the data it learns from. If datasets contain historical biases, AI models can perpetuate them, influencing critical decisions in areas such as healthcare, hiring, and search engine results. For example, algorithms have been found to favor certain demographics in medical treatment recommendations, reinforce gender stereotypes in search results, and discriminate against female job applicants. These biases stem from underrepresentation in training data, flawed problem framing, and algorithmic design choices that prioritize overall accuracy over subgroup fairness.

Addressing AI bias requires proactive governance, ethical oversight, and diverse, representative training data. Organizations must implement fairness-focused frameworks, employ transparency practices, and incorporate human oversight to refine AI-generated outputs. Ethical considerations should also be integrated into science and technology education, ensuring interdisciplinary collaboration and regulatory measures to promote accountability. While technical solutions can mitigate bias, broader societal discussions are necessary to address the ethical implications of AI-driven decision-making.

Examples of Bias in Design

“Life can only be understood backwards, but it must be lived forwards.” ~ Soren Kierkegaard. This applies to biases in design—often, they’re only recognized after decisions are made. Here are a few examples:

  1. Spotify Shuffle Button: A Reddit user pointed out that the shuffle button was hard for colorblind users to distinguish. About 8% of men have red-green color blindness, and a simple design tweak could improve accessibility.
  2. Cars and Seat Belts: In the 1960s, crash tests used male-bodied dummies, neglecting the safety of women and children. This is sampling bias, where the sample didn’t represent the full population.
  3. Facebook’s “Year in Review”: Facebook’s 2014 feature, which showcased popular posts, sometimes included painful memories for users, due to optimism bias—assuming all top moments are joyful.

These examples show how majority bias—focusing on the majority and neglecting minorities—can shape designs that overlook important user needs.

How to combat Bias

The first step in addressing unconscious bias is recognizing it exists. Tools like the Designing for Worldview Framework by Emi Kolawole or Harvard’s Project Implicit tests can help identify biases. Understanding your biases is key to overcoming them and making design more inclusive. Once biases are spotted, the next step is to take action. Consciously designing with diverse users in mind and using tools like Perspective Cards can guide you to consider various experiences. Listening to clients and users, while letting go of assumptions, is essential to create designs that truly meet everyone’s needs.

Building diverse teams is critical to fostering inclusive design. Teams with varied backgrounds bring fresh perspectives, which are essential in a profession that thrives on challenging existing ideas. Overcoming bias is a lifelong commitment, so keep learning and remain open to feedback. Reflect on who might be left out and seek ways to make your designs more inclusive. Additionally, don’t just focus on the “happy path” in design; consider unhappy paths to address potential issues early on. Finally, when creating personas, challenge assumptions by focusing on real user experiences rather than demographic stereotypes. Designing for a global audience requires understanding diverse cultural insights, ensuring that inclusion is integrated into every step of the design process.

09 Advantages of Biases

This may seem counterintuitive, since biases usually have a negative reputation, but they can have some advantages as well. Before I end this line of blog posts with a short recap, I want to go another way and highlight some positive sides of biases.

Biases also have many benefits. Our brains use biases to make decisions quickly, focus on important information, and even stay safe. Let’s explore some of the advantages of biases and how they help us in daily life.

01 Biases Help Us Make Quick Decisions

In a world full of information, our brains cannot process everything at once. Biases help us filter information so we can focus on what matters. For example, the brain ranks and prioritizes information to help us act fast. This ability is essential when making quick decisions in everyday life, such as crossing a busy street or choosing what to eat. Without biases, decision-making would be slow and overwhelming (LinkedIn).

02 They Improve Our Focus and Efficiency

Biases allow us to focus on relevant details while ignoring distractions. This is especially useful in work and learning environments. For example, when searching for an object in a cluttered room, our brains use bias to guide our attention toward what is most likely to help us. Similarly, biases help professionals make better decisions by focusing on key information instead of getting lost in unnecessary details (Airswift).

03 Biases Support Social Connection

Humans naturally form groups based on shared interests, beliefs, or backgrounds. This is known as ingroup bias. While this can sometimes lead to discrimination, it also has benefits. Ingroup bias helps build trust and cooperation within communities. It fosters teamwork, strengthens social bonds, and encourages people to support one another. These social connections are essential for emotional well-being and personal growth (Harvard Business School).

04 They Enhance Learning and Adaptability

Biases help us learn new things by making patterns easier to recognize. For instance, our brains naturally categorize information to make sense of the world. This ability helps us identify risks, recognize familiar faces, and understand new concepts more quickly. Even in education, biases help students focus on the most relevant material and remember information more effectively (LinkedIn).

05 Biases Can Increase Motivation

Some biases, like confirmation bias, can motivate people to pursue their goals. Confirmation bias makes us focus on information that supports our beliefs. While this can sometimes lead to mistakes, it also helps people stay committed to long-term goals. For example, entrepreneurs often rely on positive feedback to keep going, even when facing challenges. This kind of bias can drive innovation, persistence, and personal success (Airswift).

06 They Enhance Survival and Safety

From an evolutionary perspective, biases have helped humans survive by guiding quick and instinctive reactions. For example, people are naturally more alert to potential dangers because of negativity bias, which makes us pay more attention to risks. This bias helps us stay cautious and avoid harm. Similarly, biases like familiarity bias encourage people to stick with what they know, which can be useful in uncertain situations (Harvard Business School).

Conclusion

While biases can sometimes lead to errors, they also provide many benefits. They help us make fast decisions, focus on important details, connect with others, learn efficiently, stay motivated, and protect ourselves. Understanding the positive side of biases can help us use them wisely while being aware of their limitations. Rather than seeing biases as flaws, we should recognize them as essential tools for navigating the world more effectively.

08 Most common Biases in (UX) Design

After talking a lot about biases in general, I want to focus on biases that affect the design discipline in particular. I wanted to find out which biases are most common among designers and how they can be spotted.

Biases can creep into UX design in subtle ways, shaping how designers create and evaluate their work. These mental shortcuts or preconceived notions can distort user research, design decisions, and testing outcomes.

Common UX Biases

  1. Confirmation Bias:
    Designers often seek out data or feedback that aligns with their assumptions or expectations. For example, if you’re convinced users will love a feature, you might unconsciously focus on positive comments while ignoring criticism. This skews the final product toward the designer’s preferences rather than the users’ needs (cf. UX Team).
  2. False-Consensus Effect:
    This bias happens when designers assume users think like they do. For instance, just because a designer finds an interface intuitive doesn’t mean the average user will feel the same way. This misalignment often results in designs that alienate diverse user groups (cf. Toptal).
  3. Recency Bias:
    This occurs when designers give undue weight to the most recent feedback or user data they’ve encountered. While recent input can be important, over-relying on it can overlook broader patterns or trends that are crucial to creating balanced designs (cf. PALO IT).
  4. Anchoring Bias:
    Designers may fixate on the first piece of information they receive, such as initial user feedback or early test results, and let it heavily influence future decisions. This can lead to disregarding new, potentially more accurate insights that arise later in the process (cf. UX Team).
  5. Social Desirability Bias:
    During user research, participants might provide answers they think the researcher wants to hear instead of their genuine thoughts. This can lead to misleading data and decisions that don’t address real user needs (cf. Toptal).
  6. Sunk Cost Fallacy:
    Designers sometimes stick with a feature or concept they’ve invested a lot of time and effort into, even when it’s clear it’s not working. This bias prevents teams from pivoting to better alternatives (cf. PALO IT).

Spotting Biases

To identify biases in your work, start by reviewing your assumptions. Are you basing design decisions on data or personal opinions? Regularly involve diverse perspectives in your design process to uncover blind spots. For example, conducting usability tests with a variety of users can highlight mismatches between the design and user expectations (UX Team).

Another tip is to document your decision-making process. Writing down why you chose a certain layout or feature can make biases easier to spot. If your reasoning is based on personal preference or limited data, you’ll know to re-evaluate that choice (Toptal).

Biases in UX design can hinder the creation of user-friendly and inclusive products. By recognizing common biases like confirmation bias, false-consensus effect, recency bias, and others, you can take proactive steps to create designs that truly meet users’ needs. Regularly challenging assumptions and involving diverse perspectives ensures a more balanced and effective design process.

07 How to combat Bias

I have talked a lot about what bias is and where it can occur, but not about how it can be mitigated. You will find some ideas on how to deal with bias in this blog post.

1. Spotting Unconscious Bias

The first step to overcoming unconscious bias is recognizing that it exists. For teams, tools like the Designing for Worldview Framework by Emi Kolawole or AIGA’s Gender Equity Toolkit can help. If you want to find out how biased you are towards a certain group of people, check out Harvard’s Project Implicit tests. Knowing your biases is the first step toward fixing them. (cf. UX Booth)


2. Taking Action

Once you’ve spotted your biases, it’s time to do something about them. A great way to start is by consciously designing with different users in mind. Tools like Perspective Cards can help you imagine how your designs might feel to people with different experiences. When working with clients or users, take time to truly listen and understand their perspectives. Let go of your own assumptions—it’s the best way to gain new insights and create designs that work for everyone. (cf. UX Booth)

3. Build Diverse Teams

Diverse design teams are key to creating inclusive experiences. Diversity matters especially in design, a profession that requires professionals to think new thoughts and challenge existing ideas all the time. Different people think in different ways; bringing them together can result in a pool of new ideas that incorporates different perspectives. (cf. UX Booth)

4. Keep Learning

Overcoming bias isn’t a one-time thing; it’s a lifelong process. Stay curious and open to feedback. Always think about who you might be leaving out and how you can make your designs more inclusive. By committing to continuous learning and embracing new perspectives, you’ll create better, more universal designs that truly work for everyone. (cf. Medium)

5. Explore the “unhappy paths”

When designing, don’t just focus on the “happy path” — consider the unhappy paths too. These are real-life situations where things break, go wrong, or are misused, and they shouldn’t be ignored as edge cases to fix later. Ask tough questions like, “How could people game the system?” or “Who could use it to harm others?” Addressing these issues early creates more robust and humane products that work for diverse users. While exploring unhappy paths may slow you down initially, it saves time in the long run by preventing costly reworks and ensuring you’re headed in the right direction from the start. (cf. Medium)

6. Make personas challenge assumptions

Personas are a hot topic, with debates on whether they’re necessary or useful, but when done right, they can be a powerful tool to challenge assumptions about users. Start by removing demographic details like age, gender, or income, which can introduce bias. Instead of generic stock photos, use real images of users who defy stereotypes, helping teams confront their unconscious expectations. If real user photos aren’t available, consider inclusive stock photo alternatives like tonl.co. You can also use names from underrepresented groups to further broaden perspectives. Remember, this isn’t about ticking a diversity box — it’s about reflecting real insights and challenging narrow views to design for a wider audience.
(cf. Medium)

7. Designing for a diverse global audience

At R/GA, we design for global audiences by leveraging diverse teams and cultural insights from the start. Our “Human, Simple, Powerful” design model ensures diversity and inclusion are baked into the process. The “Human” element focuses on addressing problems through a human lens, considering cultures, customs, and the context of users’ lives. We validate prototypes through user testing with a diverse audience that mirrors the anticipated end users. By mapping touchpoints and breakpoints across different backgrounds and conducting experience mapping globally and locally, we gain a well-rounded view of our users. This approach helps us create focused, inclusive solutions that eliminate ambiguity and meet the needs of diverse audiences. (cf. Medium)


Although completely overcoming bias is probably impossible, you can try to minimize its impact on your work by using some of the methods I wrote about in this blog post.

06 How Bias affects (UX) Design

“Life can only be understood backwards, but it must be lived forwards” ~ Soren Kierkegaard
A quote that is also very fitting when talking about bias in design. Most of the time, you can only understand that a decision was made due to a bias after the changes have already been deployed. Looking a bit deeper into the topic of biases and how they affect (UX) design, here are some interesting stories of how products turned out biased towards or against parts of their user groups.

1 – Spotify Shuffle Button

In a Reddit forum, a user requested that the shuffle button in the Spotify app have a circle around it, since they are color blind and have a hard time seeing the difference between the active and inactive shuffle button (see picture below). (cf. Reddit) Put simply, this might have happened due to a blind spot affecting Spotify’s design team. Not all people perceive colors the same way; some have a hard time especially with red and green. Approximately 8% of men and 0.5% of women are affected by this type of color blindness. (cf. The Guardian) This simple change could make a big difference for certain subsets of users.

Approximation of how a colorblind user with protanopia color blindness may see the Shuffle button in Spotify. Source
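To make this tangible, here is a minimal Python sketch of the standard WCAG relative-luminance and contrast-ratio formulas. The two hex values are assumptions used purely for illustration (Spotify’s well-known brand green and a typical inactive grey), not taken from Spotify’s actual design system. A ratio close to 1:1 means the two button states differ almost only in hue, which is exactly the cue red-green colorblind users struggle with, so a second, non-color cue (like the requested circle) is needed.

```python
# Minimal sketch: WCAG relative luminance and contrast ratio.
# The two hex values below are assumptions for illustration only:
# "#1DB954" (Spotify's brand green) for the active shuffle icon and
# "#B3B3B3" (a typical inactive grey) for the inactive one.

def srgb_to_linear(channel: float) -> float:
    """Convert one sRGB channel (0..1) to linear light, per WCAG."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB hex color (0 = black, 1 = white)."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * srgb_to_linear(r) + 0.7152 * srgb_to_linear(g) + 0.0722 * srgb_to_linear(b)

def contrast_ratio(color_a: str, color_b: str) -> float:
    """WCAG contrast ratio between two colors (from 1:1 up to 21:1)."""
    lighter, darker = sorted((relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

active, inactive = "#1DB954", "#B3B3B3"
print(f"Contrast ratio active vs. inactive: {contrast_ratio(active, inactive):.2f}:1")
# A ratio close to 1:1 means the two states differ almost only in hue,
# so users who cannot rely on hue need a second cue (shape, dot, outline).
```

This is also the spirit of WCAG’s “Use of Color” success criterion: color should never be the only visual means of conveying a state.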

2 – Cars and Seat Belts 

Here is a fun one: in the 1960s, most crash tests for cars were done with crash test dummies modeled after an average male physique (height, weight and stature). Therefore, safety design decisions were mostly tailored to men, neglecting women, children, and smaller or larger individuals. Crash tests were eventually conducted with “female” crash test dummies, but they were only placed in the passenger seat. (cf. User Interviews) When talking about safety, one hopes that all possible users have been considered.

This happened very likely due to the “sampling bias”: “Sampling bias occurs when a sample does not accurately represent the population being studied. This can happen when there are systematic errors in the sampling process, leading to over-representation or under-representation of certain groups within the sample.” (Simply Psychology)
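To illustrate the definition with a toy example (the numbers are made up, not real crash-test or body-measurement data), here is a small Python sketch showing how an estimate computed from a one-group-only sample drifts away from the true population value, which is essentially what happened when only male-bodied dummies were tested:

```python
import random

random.seed(42)

# Made-up population: body heights in cm for two groups (illustrative numbers only).
population = [("male", random.gauss(178, 7)) for _ in range(5000)] + \
             [("female", random.gauss(165, 6)) for _ in range(5000)]

def mean(values):
    return sum(values) / len(values)

true_mean = mean([height for _, height in population])

# Biased sample: only one group gets "tested" (like male-only crash test dummies).
biased_sample = [height for group, height in population if group == "male"][:500]

# Representative sample: drawn at random from the whole population.
representative_sample = [height for _, height in random.sample(population, 500)]

print(f"Population mean:           {true_mean:.1f} cm")
print(f"Biased sample mean:        {mean(biased_sample):.1f} cm")   # systematically too high
print(f"Representative sample:     {mean(representative_sample):.1f} cm")
```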

3 – Facebook’s “Year in Review” 

In 2014, Facebook introduced the “Year in Review” feature, which showed users their best-performing posts of the past year. The algorithm would identify the “best” posts/moments based on the number of likes. Now this is all fun and games, until you see a lost loved one in your Year in Review. While the algorithm might work for most users, some will have a different, less satisfying experience. (cf. Forbes)

Whoever had the idea for this feature handed their bias over to the algorithm that automatically creates these reviews. Due to the optimism bias, people tend to believe that they are less likely to experience negative events and more likely to experience positive ones. This bias can lead to overly optimistic expectations about the future, underestimating risks, or failing to prepare for potential challenges. Designers assumed that users’ most engaged photos and moments would always be joyful, leading to a feature that unintentionally surfaced painful memories for some users. (cf. The Decision Lab)
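Facebook’s actual implementation isn’t public, so purely as an illustration, here is a tiny Python sketch of the kind of “rank by likes” heuristic described above, plus one hypothetical mitigation (an explicit user opt-out) that keeps the optimism-bias assumption from deciding alone what counts as a highlight:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    excluded_by_user: bool = False  # hypothetical opt-out flag, not a real Facebook field

posts = [
    Post("Wedding day!", likes=320),
    Post("Our cat passed away", likes=280),   # lots of sympathy likes
    Post("New job announcement", likes=150),
]

# Naive heuristic: "most likes" == "best moment". This is where the
# optimism-bias assumption lives: engagement is not the same as joy.
naive_review = sorted(posts, key=lambda p: p.likes, reverse=True)[:2]

# One possible mitigation: respect an explicit user choice before ranking.
posts[1].excluded_by_user = True
curated_review = sorted(
    (p for p in posts if not p.excluded_by_user),
    key=lambda p: p.likes, reverse=True,
)[:2]

print([p.text for p in naive_review])    # may surface a painful memory
print([p.text for p in curated_review])  # the user stays in control
```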


These are just three examples of how biases can affect design, and there are many more; this was just the beginning. I have noticed that a lot of bias-related “fails” happened because the designers or researchers focused on only one part of their users. There is another bias that might be the basis for all of this: the majority bias, a cognitive bias where people focus on the larger or more visible part of a group, often overlooking minority perspectives. This bias assumes the majority is representative or correct, leading to the neglect of smaller groups or less common viewpoints. Ironically, this can mean neglecting a number of smaller groups that, taken together, would form the majority. (cf. Nature)

1.7 Privacy vs. Personalization: Navigating Ethical Challenges in AI Mental Health Apps

AI-driven mental health apps offer a remarkable combination of personalization and accessibility, providing users with tailored experiences based on their unique needs. For example, apps like Talkspace utilize AI to detect crisis moments and recommend immediate interventions, while platforms such as Wysa offer personalized exercises based on user interactions. However, these benefits come with significant privacy and ethical challenges. To deliver personalized support, such tools rely on sensitive data such as user emotions, behavioral patterns, and mental health histories. This raises critical questions about how this data is collected, stored, and used.

Image Source: Government Technology Insider

Ensuring privacy in these apps requires robust safeguards, including encryption, secure data storage, and compliance with regulations like GDPR in Europe and HIPAA in the United States. These laws mandate transparency, requiring developers to clearly explain how user data is handled. Companies like Headspace exemplify these practices by encrypting user data, limiting employee access, and providing users with the option to control data-sharing settings. Headspace also rigorously tests its AI for safety, particularly in detecting high-risk situations, and connects users to appropriate resources when needed.

Beyond privacy, ethical concerns about fairness and inclusivity in AI algorithms are prominent. If the data used to train these algorithms isn’t diverse, the resulting tools may be less effective, or even harmful, for underrepresented groups. For example, biases in language or cultural context can lead to misunderstandings or inappropriate recommendations, potentially alienating users. To address this, platforms must ensure their datasets are diverse and representative, integrate cultural sensitivity into their development processes, and conduct ongoing audits to identify and rectify biases. Headspace’s AI Council, a group of clinical and diversity experts, serves as a model for embedding equity and inclusivity in AI tools.

Transparency is another key pillar for ethical AI in mental health. Users must be informed about how the AI works, the types of data it collects, and its limitations. For example, AI is not a replacement for human empathy, and users should be made aware of when to seek professional help. Clear communication builds trust and empowers users to make informed choices about their mental health.

While AI-driven mental health apps can enhance engagement and outcomes through personalization, the trade-off between privacy and functionality must be carefully managed. Ethical design practices, such as secure data handling, bias mitigation, and transparent user communication, are essential for balancing these priorities. By addressing these challenges proactively, developers can ensure that these tools support mental health effectively while respecting users’ rights and diversity.

Sources

  1. “AI principles at Headspace.” Headspace. Accessed: Jan. 14, 2025. [Online.] Available: https://www.headspace.com/ai
  2. Basu, A., Samanta, S., Sur, S., & Roy, A. Digital Is the New Mainstream. Kolkata, India: Sister Nivedita University, 2023.
  3. “Can AI help with mental health? Here’s what you need to know.” Calm. Accessed: Jan. 14, 2025. [Online.] Available: https://www.calm.com/blog/ai-mental-health
  4. Coghlan, S., Leins, K., Sheldrick, S., Cheong, M., Gooding, P., & D’Alfonso, S. (2023). To chat or bot to chat: Ethical issues with using chatbots in mental health. Digital Health, 9, 1–11. https://doi.org/10.1177/20552076231183542
  5. Hamdoun, S., Monteleone, R., Bookman, T., & Michael, K. (2023). AI-based and digital mental health apps: Balancing need and risk. IEEE Technology and Society Magazine, 42(1), 25–36. https://doi.org/10.1109/MTS.2023.3241309
  6. Valentine, L., D’Alfonso, S., & Lederman, R. (2023). Recommender systems for mental health apps: Advantages and ethical challenges. AI & Society, 38(4), 1627–1638. https://doi.org/10.1007/s00146-021-01322-w

04 Bias in AI

Taking a little detour from my actual topic, I wanted to explore an issue of our time: bias in AI. It is a topic that comes up a lot when reading about AI. I wanted to know what can be done about it and how it could be avoided. Could this have an additional impact on our society?

Artificial Intelligence (AI) is transforming industries, and (UX) design is no exception. AI already has the ability to deliver high-quality design work and is going to continue to evolve. It’s reshaping how we approach design, offering tools that enhance efficiency, streamline workflows, and even generate creative outputs. While AI excels at analyzing data, creating prototypes, and even predicting user behavior, the heart of UX design lies in empathy, problem-solving, and collaboration, skills uniquely human in nature. (cf. Medium A)

AI can analyze vast amounts of user data to uncover patterns and insights that inform design decisions, helping designers better understand their audience. It can also generate initial design drafts or prototypes, saving time and allowing designers to focus on refining creative and strategic elements. Predictive algorithms powered by AI can anticipate user behavior, enabling the creation of more intuitive and personalized experiences. By automating repetitive tasks and offering data-driven insights, AI empowers designers to elevate their craft while maintaining a human-centered approach. (cf. Medium A)

But what if the data the AI gets is already biased towards a certain user group, making its outputs biased as well and therefore influencing UX work? Addressing bias in AI is not just a technical challenge; it’s an ethical imperative that impacts the lives of millions.

Examples of Bias in AI

  1. Healthcare Disparities:
    An algorithm used in U.S. hospitals was found to favor white patients over Black patients when predicting the need for additional medical care. This bias arose because the algorithm relied on past healthcare expenditures, which were lower for Black patients with similar conditions, leading to unequal treatment recommendations.
  2. Gender Stereotyping in Search Results:
    A study revealed that only 11% of individuals appearing in Google image searches for “CEO” were women, despite women constituting 27% of CEOs in the U.S. This discrepancy highlights how AI can perpetuate gender stereotypes.
  3. Amazon’s Hiring Algorithm:
    Amazon’s experimental recruiting tool was found to be biased against female applicants. The AI, trained on resumes submitted over a decade, favored male candidates, reflecting the industry’s male dominance and leading to discriminatory hiring practices. (cf. Levity)

How does bias in AI form?

Bias in AI often forms due to the way data is collected, processed, and interpreted during the development cycle. Training datasets, which are meant to teach AI models how to make decisions, may not adequately represent all demographics, leading to underrepresentation of minority groups. Historical inequities embedded in this data can reinforce stereotypes or amplify disparities. Additionally, the way problems are defined at the outset can introduce bias; for instance, using cost-saving measures as a proxy for patient care needs can disproportionately affect underserved communities. Furthermore, design choices in algorithms, such as prioritizing overall accuracy over subgroup performance, can lead to inequitable outcomes. These biases, when unchecked, become deeply ingrained in AI systems, affecting their real-world applications.

Source: Judy Wawira Gichoya, pos. 3

Sometimes, the problem the AI is supposed to solve is framed using flawed metrics. For instance, one widely used healthcare algorithm prioritized reducing costs over patient needs, disproportionately disadvantaging Black patients who required higher-acuity care. (cf. Nature) When training datasets lack diversity or reflect historical inequities, AI models learn to replicate these biases. Also, a well-designed system can fail in real-world settings if it is deployed in environments it wasn’t optimized for. (cf. IBM) Decisions made during model training, like ignoring subgroup performance, can result in inequitable outcomes, as the small sketch below illustrates. (cf. Levity)
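As a toy illustration of that last point (made-up numbers, not a real dataset or any specific system), the following Python sketch shows how a model can report a comfortable overall accuracy while completely failing an underrepresented subgroup, which is exactly what an evaluation focused only on the aggregate metric would hide:

```python
# Tiny illustrative example (made-up labels, not a real dataset):
# 90 samples from group A, 10 from an underrepresented group B.
y_true = [1] * 90 + [1] * 10
# A hypothetical model that works for group A but fails group B.
y_pred = [1] * 90 + [0] * 10
groups = ["A"] * 90 + ["B"] * 10

def accuracy(truth, pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

overall = accuracy(y_true, y_pred)
per_group = {
    g: accuracy(
        [t for t, gg in zip(y_true, groups) if gg == g],
        [p for p, gg in zip(y_pred, groups) if gg == g],
    )
    for g in ("A", "B")
}

print(f"Overall accuracy: {overall:.0%}")   # 90%, looks fine at first glance
print(f"Per-group accuracy: {per_group}")   # group B: 0%, hidden by the average
```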

How to address bias in AI

To avoid bias in AI, thoughtful planning and governance are important. Many organizations rush AI efforts, leading to costly issues later. AI governance establishes policies, practices, and frameworks for responsible development, balancing benefits for businesses, customers, employees, and society. Key components of governance include methods to ensure fairness, equity, and inclusion. Counterfactual fairness, for example, addresses bias in decision-making even when sensitive attributes like gender or race are involved. Transparency practices help ensure unbiased data and build trustworthy systems. Furthermore, a “human-in-the-loop” system can be incorporated to allow human oversight to approve or refine AI-generated recommendations, as sketched below. (cf. IBM)
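What such a human-in-the-loop gate could look like in code is sketched below. This is my own minimal illustration, not IBM’s framework; the threshold, field names, and routing logic are assumptions for the sake of the example:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    candidate_id: str
    decision: str          # e.g. "invite" or "reject"
    confidence: float      # model confidence between 0 and 1

REVIEW_THRESHOLD = 0.8     # hypothetical cut-off, tuned per use case

def route(rec: Recommendation,
          reviewer: Optional[Callable[[Recommendation], str]] = None) -> str:
    """Apply the model's decision only when confidence is high;
    otherwise hand the case to a human reviewer instead of auto-deciding."""
    if rec.confidence >= REVIEW_THRESHOLD:
        return rec.decision
    if reviewer is not None:
        return reviewer(rec)          # a human makes or overrides the call
    return "needs_human_review"       # queue it for later review

# Usage sketch:
print(route(Recommendation("c-101", "invite", 0.93)))   # auto-applied
print(route(Recommendation("c-102", "reject", 0.55)))   # escalated to a human
```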

Reforming science and technology education to emphasize ethics and interdisciplinary collaboration is also crucial, alongside establishing global and local regulatory frameworks to standardize fairness and transparency. However, some challenges demand broader ethical and societal deliberation, highlighting the need for multidisciplinary input beyond technological solutions. (cf. Levity)

03 All about Biases

Before getting to know specific biases and how to work around them, let’s take a closer look at what a bias actually is, how it’s formed, and whether it’s a good or bad thing.

Bias – Definition

According to the Cambridge Dictionary, a bias is “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment” (Cambridge Dictionary). Sticking with the language side of things, you might come across the terms “to be biased against” something or “to be biased towards” something. Being biased against something means not favoring it, and being biased towards something means favoring it over something else. (cf. Britannica Dictionary)

Why am I explaining this? Well, I have come to realize that what I actually want to research are cognitive biases, not bias in general. So I wanted to understand cognitive biases a little better.

A cognitive bias is a predictable pattern of error in how our brain functions, and such patterns are very widespread. They affect how people understand and perceive reality, are hard to avoid, and can lead to different people interpreting the same objective facts differently. Cognitive biases can lead to irrational decisions; they are the result of mental shortcuts, or heuristics. (cf. Britannica Dictionary)

Additionally, one can differentiate between explicit and implicit biases. Explicit biases are conscious and intentional: individuals are fully aware of their attitudes and beliefs, which they can openly express and acknowledge. Implicit biases are unconscious and unintentional: they operate below the level of awareness, influencing behavior without the individual realizing it. (cf. Achievece)

How do Biases form?

Our minds can be like a collection of pockets where every experience is categorized and stored. This sorting process begins in childhood, helping us make sense of the world and react to future situations based on grouped experiences. It occurs automatically, as a mental shortcut to handle vast amounts of information efficiently. While this process is helpful, it also means our present decisions are often influenced by past experiences, which can lead to unconscious biases affecting how we view people, places, and situations.

Positive bias arises when something aligns with our own ideas or feels familiar, while negative bias occurs when something deviates from what we perceive as normal or preferable. Biases are not solely shaped by personal experiences but can also be influenced by external factors, such as media framing of situations, groups, or issues.

Biases can lead us to perceive someone as less capable or trustworthy or cause subtle discomfort around certain individuals. Importantly, these biases are often based on past experiences rather than the present context. (cf. NHS)

They stem from mental shortcuts, known as heuristics, which help our brains process information efficiently. While heuristics save time, they can lead to errors in thinking, particularly when patterns are misinterpreted or assumptions are made too quickly. (cf. Wikipedia)

How do they affect us?

Bias affects many aspects of our lives, often subtly influencing our decisions and perceptions. Implicit biases, formed over time through exposure to societal norms and experiences, impact everything from personal relationships to professional choices. For example, biases can affect hiring practices. Research shows that even trained scientists show bias in hiring, preferring male candidates over equally qualified women. Similarly, a study found resumes with “white” names were more likely to receive interview callbacks than those with “black” names, even when the resumes were identical.

These biases, often unintentional and shaped by socialization, affect not only professional decisions but everyday interactions as well. Recognizing and reflecting on our hidden biases is crucial to minimizing their impact and promoting fairness. (cf. Forbes)

01 The influence of cognitive biases on UX Work

Before reading this, please answer this question (even if you don’t read the blog):

Results in next post ;D

Background

One of the reasons why I got into UX Design in the first place is because it connects three of my fields of interest: Design, Psychology and working with people. I want to find out more about what makes people click and what drives their perception of a design. Considering unconscious factors that influence how a user perceives a product is an important step to make a product truly user-friendly and human-centered. Being aware of these factors and biases can really help to correctly approach a UX problem. Is this a “real” finding or is this problem due to a bias?

What is a Bias?

First things first: “[A] cognitive bias is the tendency to think certain ways, often resulting in a deviation from rational, logical decision-making.” (CXL) Biases occur in all areas of life; there is a bias for almost every area of life, and they impact how we buy, sell, interact with friends, think, feel, etc. If you are feeling guiltier about a certain situation than you should, according to friends and family, you could be experiencing the egocentric bias. (cf. CXL) It’s important to remember that biases can occur on both sides during user research: both the user and the researcher can be subject to predetermined beliefs, affecting the outcome of the research. Some are already well known, like the confirmation bias. (cf. Smashing Magazine)


Impact on UX Design

In UX design, a bias can emerge at any stage, from topic selection to data interpretation, due to influences from researchers, participants, or other external factors. This is particularly concerning since designers and researchers may not be aware of them, potentially leading to skewed results or exclusionary designs. (cf. Clara Purdy) Take a look at the picture below, the Cognitive Bias Codex: the list of biases designers may come across is nearly endless. Everyone can be subject to any of those biases, whether you come across one and recognize it or it affects you yourself.


Research Goals

Right now I can’t really tell where this research journey is going to take me; for now I will focus on biases and their effects on UX work. BUT during the research for this post, I realized how deep the rabbit hole around UX design and psychology goes. (Study guide for the rabbit hole ;D)

For now, a desirable outcome would be to create a collection of biases and other effects that influence people, since one has to become sensitive to these topics before one can conquer them. In addition to just generating awareness, there should also be info on why this matters and how to adjust to these effects. In the end, there should be a lexicon of common effects to be aware of and how to combat them. A deeper understanding of perceptual psychology will greatly impact how a designer approaches upcoming problems and deepen the understanding of the actions different users take.

Thanks for reading through my blog!
Leave a comment, if you are interested in this topic and tell me what you want to read about next! ;P