A notable case of bias in AI recruitment involved Amazon's automated hiring tool, which was found to discriminate against women. The tool was built to automate hiring by reviewing resumes and identifying top candidates, and it was trained on resumes submitted to the company over a 10-year period, most of which came from men, reflecting the male dominance of the tech industry. As a result, the model learned to prefer male candidates, penalizing resumes that included words like "women's" (e.g., "women's chess club captain") and downgrading graduates of all-women's colleges; the sketch at the end of this section reproduces this dynamic on synthetic data. Despite efforts to neutralize these biases, the tool continued to show discriminatory tendencies, and Amazon eventually discontinued it. LINK

One illustrative case study involved a financial institution, Money Bank, which used an AI tool called GetBestTalent to shortlist candidates. Despite assurances that the tool had been audited for bias and discrimination, several rejected candidates of different genders, races, and ages suspected that their rejections were unlawfully discriminatory. These suspicions were difficult to confirm or dispel because the tool's decision-making relied on opaque algorithms, making it hard to establish why particular candidates had been rejected. LINK

In another notable example, the company HireVue developed software that assessed job applicants based on their facial movements, word choice, and manner of speaking. This case highlights how AI tools can embed bias through the subjective criteria they are programmed to evaluate, further complicating the ethical landscape of AI in recruitment. LINK
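None of these companies have disclosed their models' internals, so the following is a minimal, hypothetical sketch of the general failure mode described in the Amazon case, not a reconstruction of any real system. It trains a logistic-regression resume scorer on synthetic data in which historical hiring decisions skew against women; all names and data are invented for illustration. Even though gender is never an explicit input, the model assigns a negative weight to the proxy token "women".

```python
# Toy illustration of training-data bias in a resume scorer.
# All data here is synthetic; this is NOT a reconstruction of Amazon's model.
import random

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_resume(male: bool) -> str:
    """Build a synthetic resume as a bag of skill words."""
    skills = random.sample(
        ["python", "java", "sql", "leadership", "statistics", "excel"], k=3
    )
    # A gendered phrase appears on some resumes from women,
    # e.g. "women's chess club captain".
    extra = ["women's chess club captain"] if (not male and random.random() < 0.5) else []
    return " ".join(skills + extra)

# Historical training data: past hiring decisions skew heavily toward men,
# mirroring a male-dominated pipeline. Gender itself is never a feature.
resumes, hired = [], []
for _ in range(2000):
    male = random.random() < 0.7
    resumes.append(make_resume(male))
    hired.append(1 if random.random() < (0.6 if male else 0.2) else 0)

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The model never sees gender, yet the proxy token ends up penalized.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print('weight for token "women":', round(weights.get("women", 0.0), 3))
```

Note that simply deleting the offending token would not repair this toy model: the co-occurring tokens ("chess", "club", "captain") pick up the same negative signal, which is consistent with Amazon's finding that the tool continued to show discriminatory tendencies even after targeted attempts to neutralize specific terms.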