AI Engineering

Artificial intelligence (AI) is a powerful technology that can enhance the efficiency and effectiveness of various products and services. However, AI can also go wrong in many ways, causing public controversy and reputational damage for the organizations that use it. According to Gartner, through 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.[1] Biased AI can undermine the quality and reliability of an organization’s products and services, leading to customer complaints, negative reviews, or reduced loyalty.


Reputation is a valuable asset for any organization. It reflects how the public perceives the organization’s identity, values, and performance. A positive reputation can help an organization attract and retain customers, employees, investors, and partners. A negative reputation can hurt an organization’s bottom line, market share, and competitive advantage. It can expose the organization to ethical criticism or social backlash, leading to public outrage, media scrutiny, or boycotts. The actual impact of these biased systems is not always easily identifiable, especially when they are not deployed in a confined environment.

Amazon is a global leader in e-commerce, cloud computing, digital streaming, and artificial intelligence. The company employs more than 1.3 million people worldwide and is constantly looking for top talent to join its ranks. To improve the efficiency and effectiveness of its recruiting process, Amazon developed an AI tool to help with screening and ranking job applicants’ resumes. However, the tool turned out to be biased against women, as it learned from historical data that mostly reflected male dominance in the tech industry. The tool penalized resumes that contained words such as “women’s” or the names of all-women’s colleges, and favored candidates who used certain terms that were more common among male applicants. The tool also showed poor performance and reliability, generating irrelevant or contradictory recommendations. Amazon discovered these flaws in 2015 and tried to fix them, but eventually scrapped the project by 2018.[2]

An organization should continually assess its AI technologies for bias to mitigate risks, for several reasons. Bias in AI can harm the individuals or groups affected by the system’s decisions or actions, leading to discrimination or exclusion. It can damage the organization’s reputation, trustworthiness, and social responsibility, leading to public outrage, media scrutiny, or legal liability. It can undermine the quality and reliability of the organization’s products and services, leading to customer dissatisfaction, negative reviews, or reduced loyalty. Therefore, by assessing and addressing bias in their AI technologies, organizations can not only avoid these risks but also enhance their performance, innovation, and customer satisfaction.
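One lightweight way to start such an assessment is to compute a simple fairness metric over a system’s decisions. The sketch below, using entirely hypothetical data and names (nothing here comes from the Amazon case above), computes the demographic parity difference: the gap in favorable-outcome rates between two groups. A large gap is a signal to investigate, not proof of bias on its own.

```python
# Minimal sketch of one common bias check: demographic parity difference.
# All data, labels, and function names here are illustrative assumptions.

def demographic_parity_difference(outcomes, groups):
    """Absolute difference in favorable-outcome rates between two groups.

    outcomes: list of 0/1 decisions (1 = favorable, e.g. "advance candidate")
    groups:   list of group labels, one per outcome (exactly two distinct labels)
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch compares exactly two groups"
    rates = {}
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(decisions) / len(decisions)
    return abs(rates[labels[0]] - rates[labels[1]])

# Hypothetical screening decisions for two applicant groups "a" and "b".
outcomes = [1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(outcomes, groups)
print(gap)  # 0.5 — group "a" is favored 75% of the time vs 25% for "b"
```

In practice, dedicated toolkits compute this and many related metrics with statistical care; the point of the sketch is that a first-pass bias audit can be simple enough to run on every model release.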


Market leaders are mitigating AI bias risks by establishing responsible AI governance and end-to-end internal policies to guide the design, development, deployment, and oversight of AI systems. Another measure is creating dedicated teams and roles to operationalize responsible AI principles and practices, such as AI ethics leads, boards, and committees. They are also investing in research and innovation to improve the quality, reliability, and fairness of AI systems and to address the challenges and limitations of AI engineering. Finally, they are collaborating with industry partners, academic institutions, and civil society organizations to share best practices, learn from others, and inform standards and regulations for AI engineering.
