For anyone who has kept an eye on the media discussion of artificial intelligence (hereafter 'AI') and machine learning ('ML'), it is no news that these systems can be biased and unfair, and that their use can lead to unlawful discrimination and other adverse impacts. While AI can certainly improve efficiency and consistency, optimize processes and complement human decision-making, the narrative of AI as a quick fix for human bias in decision-making is quickly losing its footing.
This blog post is the first in a series considering algorithmic fairness from both a technical and a philosophical perspective. It is meant to serve industry practitioners as well as readers with no technical background but an interest in ethics. Part 1 considers a central question underlying discussions of fair AI: what is algorithmic bias?
Read the blog post by Otto Sahlgren on the KITE project website.