In June 2019, Jake Silberg and James Manyika of the McKinsey Global Institute (MGI) published the essay ‘Tackling bias in artificial intelligence (and in humans)’ [1]. In the article below, Philip van den Berg shares his experience with this phenomenon in Marketing and Market Intelligence. He offers some thoughts on reducing relative bias and on the state of ‘lack of bias’ or ‘absolute fairness’, including conventional ways to reduce bias and conclusions from the MGI article on how AI can be applied to do so.

The Bias Dilemma for Marketing and Market Intelligence

An important dilemma for Marketing and Market Intelligence practices is often how to identify, quantify and communicate bias while maintaining credibility and business justification. Bias is defined as ‘the action of supporting or opposing a particular person or thing in an unfair way’ by ‘allowing personal opinions to influence your judgment’ [2] and includes ‘prejudice’, ‘statistically unexpected deviation’ and ‘systematic error’ [3]. The consequence is that market data, market insights and market segmentation, as well as marketing plans, marketing content and marketing actions, remain debatable or even questionable. This is especially the case when business results are under pressure and marketing impact falls below expectations.

I have seen senior management use a mix of three approaches to decision making and communication: data, stories and intuition. The first is often dominant: data-driven managers use numbers to align people and to reduce bias. The phrase ‘data don’t lie’ is used regularly, but is it true? Silberg and Manyika show that not only can data interpretation be biased, but the data itself is often obtained from a non-representative sample, with a subjective methodology.

In a more ‘siloed’ organization or partnership, departments do not trust each other’s ‘fairness’ and declare their own data sources and insights to be the best. The market intelligence analyst defends their research, the marketer or agency their competencies and expertise, and the salesperson their experience and customer relations.

Increasing fairness through transparency, omni-data and feedback

What a person does not know, he tends not to trust. A first step to create confidence, and thereby to increase fairness, is transparency. This starts by questioning a) the data, b) the algorithms and analytics that turn data into intelligence, and c) the interpretation or insights, in order to understand the bias. Here it is important to document the findings and communicate them to the stakeholders who use the data, the intelligence and the insights. Most of the time, being open about bias and data quality limitations creates more trust than simply stating they are ‘great’ or ‘sufficient’. Transparency also encourages stakeholders to bring suggestions on how to improve quality and to start co-owning the topic of improving fairness.
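For illustration, here is a minimal sketch of one such transparency check in Python: it compares the composition of a survey sample against a known market benchmark and reports the gap per segment, which can then be documented for stakeholders. All column names and figures are invented for the example.

```python
import pandas as pd

def representativeness_report(sample: pd.DataFrame,
                              benchmark: dict,
                              column: str) -> pd.DataFrame:
    """Compare each segment's share in the sample to a benchmark share."""
    observed = sample[column].value_counts(normalize=True)
    report = pd.DataFrame({
        "sample_share": observed,
        "benchmark_share": pd.Series(benchmark),
    }).fillna(0.0)
    # A large gap signals a non-representative sample worth documenting.
    report["gap"] = report["sample_share"] - report["benchmark_share"]
    return report.sort_values("gap")

# Hypothetical example: survey responses skew towards enterprise customers.
survey = pd.DataFrame({"segment": ["enterprise"] * 70 + ["smb"] * 30})
market_benchmark = {"enterprise": 0.40, "smb": 0.60}
print(representativeness_report(survey, market_benchmark, "segment"))
```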

A second way to reduce bias is an omni-data approach: efficiently extracting value from multiple data sources. With every source added, more data quality checks can be built in, and insights become richer, deeper and better. Stakeholders who demand that yet another source be used to take away their remaining distrust can in most cases be satisfied.
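A minimal sketch of what such a built-in cross-source check could look like, assuming the same KPI is available from several hypothetical sources:

```python
import statistics

def cross_source_check(estimates: dict, tolerance: float = 0.05) -> list:
    """Flag sources whose estimate deviates from the median by more than tolerance."""
    median = statistics.median(estimates.values())
    return [source for source, value in estimates.items()
            if abs(value - median) > tolerance]

# Three hypothetical sources reporting the same market-share KPI.
market_share = {"panel_data": 0.21, "web_analytics": 0.23, "sales_crm": 0.35}
print(cross_source_check(market_share))  # ['sales_crm'] -> investigate its bias
```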

A third element, which is often missing, is a thorough post-cycle or post-event feedback loop. It allows stakeholders to review to what extent data and insight assumptions were biased, and to agree on where to improve and take joint action.
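As a sketch of how such a feedback loop can be quantified: comparing planning assumptions with actual outcomes and computing the mean error, which is ‘bias’ in the statistical sense of systematic error. The figures below are hypothetical.

```python
def assumption_bias(assumed: list, actual: list) -> float:
    """Mean error of assumptions vs. outcomes; persistently non-zero = systematic bias."""
    errors = [a - b for a, b in zip(assumed, actual)]
    return sum(errors) / len(errors)

# Hypothetical quarterly lead forecasts vs. realized leads for one campaign.
forecast = [1200, 1100, 1300, 1250]
realized = [950, 900, 1050, 1000]
print(f"average over-estimation: {assumption_bias(forecast, realized):.0f} leads")
```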

Bias transparency, an omni-data approach and feedback loops lead to a better understanding of, and more cooperation on, how to increase fairness. This holds not only for Market Intelligence but also for Marketing activities: from the market insight, the market segmentation and the persona definition to the marketing plan with its messaging, marketing mix and metrics.

To make the organization bias-aware and capable of reducing bias, a data-driven strategy and a culture of openness about data quality are essential. For this, leadership has to understand the value of fair data, map where the organization stands and where it should go, and start a transition project with a mid-term horizon.

Reducing bias by experimentation

Disadvantages of starting with a strategy and culture shift are that it may take too long – the market, competition and customers don’t wait – and that it does not state well what fairness is. Silberg and Manyika conclude that this last topic is so complex that ‘crafting a single, universal definition of fairness or a metric to measure it will probably never be possible’. Instead, they see different metrics and standards being used, each depending on the use case and circumstances.

Reducing bias, however, still requires some understanding of what fairness is and how to improve it. I see experimentation as a quick way to determine how relatively biased, for example, a marketing campaign is. Testing different small-scale scenarios in parallel, on persona definitions, messaging and marketing actions, will provide useful insights and learning, as sketched in the example below. The scenario with the best business result is likely to be the least biased one.
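As an illustration, a minimal sketch of such a parallel experiment: two hypothetical campaign scenarios are compared with a standard two-proportion z-test to check whether the difference in conversion rate is real or just noise. All scenario names and counts are invented.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both scenarios have the same conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical scenarios: A = persona-based messaging, B = generic messaging.
p_value = two_proportion_z_test(conv_a=48, n_a=500, conv_b=30, n_b=500)
print(f"p-value: {p_value:.3f}")  # a small value suggests a real difference
```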

Reducing bias with Artificial Intelligence

Still, even the best scenario could be biased and far from the point of ‘ultimate’ fairness. In seeking to identify this point and reduce bias, human behaviour and judgement have clear limitations. This raises the question to what extent Artificial Intelligence, which promises to overcome human limitations, can help.

Silberg and Manyika see it as a challenge that the underlying data, rather than the algorithm itself, are often the main source of bias, because algorithms are frequently trained on data that contains human bias. The authors observe three main approaches to increasing fairness in AI models, but conclude that technical progress is still at an early stage. The first is pre-processing the data for accuracy and independence. The second is post-processing, transforming an AI model’s predictions to make them less biased. The third is imposing fairness constraints on the optimization process, or using so-called adversaries to reduce bias from, for example, stereotyping. Adding more data points, innovative training techniques such as transfer learning, and explainability techniques [4] can also help.
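To make the first approach concrete, here is a minimal sketch of one well-known pre-processing technique, ‘reweighing’ in the spirit of Kamiran & Calders (not the MGI authors’ own method): training examples are weighted so that a sensitive attribute and the outcome label become statistically independent. Column names and data are hypothetical.

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Per-example weight = P(group) * P(label) / P(group, label)."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n
    expected = (p_group.loc[df[group_col]].values
                * p_label.loc[df[label_col]].values)
    observed = p_joint.loc[list(zip(df[group_col], df[label_col]))].values
    return pd.Series(expected / observed, index=df.index, name="weight")

# Hypothetical training data where conversion correlates with the segment.
data = pd.DataFrame({
    "segment":   ["a", "a", "a", "b", "b", "b"],
    "converted": [1, 1, 0, 0, 0, 1],
})
# Feed these as sample weights to any model trained on this data.
print(reweigh(data, "segment", "converted"))
```

In the weighted data, the share of positive outcomes is the same in both segments, which is exactly the independence this pre-processing step aims for.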

Moving forward with Artificial and Human Intelligence

While clear definitions and the above approaches can certainly reduce bias, they cannot rule out fairness restrictions in the data collection or in the social context into which an AI system is deployed. Therefore Silberg and Manyika state that ‘human judgment is still needed to ensure AI supported decision making is fair.’ This means that an adjustable mix of human judgement and AI judgement is needed. To find the best balance, maximizing fairness and minimizing bias from AI, they recommend ‘six potential ways forward for AI practitioners and business and policy leaders’:

  1. Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias.
  2. Establish processes and practices to test for and mitigate bias in AI systems.
  3. Engage in fact-based conversations about potential biases in human decisions.
  4. Fully explore how humans and machines can work best together.
  5. Invest more in bias research, make more data available for research (while respecting privacy) and adopt a multidisciplinary approach.
  6. Invest more in diversifying the AI field itself.

Summary

The availability of almost ‘endless’ amounts of customer and business data, as well as the fast-growing capabilities of Artificial Intelligence-powered data analytics, has brought Market Intelligence and Marketing into a new era. Companies have never been more dependent on data and data analytics, and thereby on data bias and data fairness. These topics have become strategic and require a paradigm shift in the way organizations deal with them, with deep consequences for their strategy and culture.

This calls for defining the state of ‘ultimate’ fairness and quantifying the bias gap in both Market Intelligence and Marketing. Transparency, omni-data, feedback and experimentation can partially achieve this, but these approaches have their limitations. While AI-powered data collection, analytics and enrichment solutions are still at an early stage, they add substantial value in reducing bias. As AI-generated data and insights also rely on biased data and biased algorithms, a flexible mix of human judgement and AI judgement is required. Although defining the ‘bias-free’ or ‘ultimately fair’ state might still be difficult, this approach is an important step towards it.

The business value of AI will continue to increase in the near future, strengthening the competitiveness and business results of companies and organizations. It is therefore of strategic importance that their C-suites embrace ‘Data Bias and Fairness’ as a strategic theme and start utilizing the ‘six potential ways forward’ of Silberg and Manyika.

[1] Silberg, J. & Manyika, J.; Tackling bias in artificial intelligence (and in humans); McKinsey Global Institute, June 2019; https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
[2] Cambridge Dictionary
[3] Merriam-Webster
[4] While the high performance and accuracy of Artificial Intelligence, that is, Deep Learning and Machine Learning algorithms, are generally valued, the models are often applied in a black-box manner. This makes it difficult for researchers and data scientists to fully understand how the algorithms work, to assess the bias and define the point of ‘absolute’ fairness, and to communicate the reasons for the outcomes to stakeholders or customers. ‘By providing an explanation for how the model made a decision, explainability techniques seek to provide transparency directly targeted to human users, often with the goal of improving user trust.’ They consist of ‘local explainability techniques’ that ‘explain individual predictions, which makes them more relevant for providing transparency for end users’ and of ‘global explainability techniques’ that ‘attempt to explain the model as a whole’ [5].
[5] Bhatt, U. et al.; Explainable Machine Learning in Deployment; arXiv:1909.06342, 13 September 2019; https://arxiv.org/pdf/1909.06342.pdf