RDS and Trust Aware Process Mining: Keys to Trustworthy AI?

14.02.22
~5min



[Figure: AI market forecast. Source: International Data Corporation (IDC)]

This forecast has broad socio-economic implications because, for businesses, AI is transformative: according to a recent McKinsey study, organizations implementing AI-based applications are expected to increase cash flow by 120% by 2030. But implementing AI comes with unique challenges. For consumers, for example, AI can amplify and perpetuate pre-existing biases, and do so at scale. Cathy O'Neil, a leading advocate for algorithmic fairness, has highlighted three adverse impacts of AI on consumers.
In fact, a Pew survey found that 58% of Americans believe AI programs amplify some level of bias, revealing an undercurrent of skepticism about AI's trustworthiness. Concerns about AI fairness cut across facial recognition, criminal justice, hiring practices and loan approvals, domains where AI algorithms have produced adverse outcomes that disproportionately impact marginalized groups.
But what can be deemed fair? Since fairness is the foundation of trustworthy AI, that, for businesses, is the million-dollar question.

Defining AI Fairness

AI's rapid growth makes it vital to balance its utility with the fairness of its outcomes, thereby creating a culture of trustworthy AI.
Intuitively, fairness seems like a simple concept: it is closely related to fair play, where everybody is treated in a similar way. In practice, however, fairness embodies several dimensions, such as trade-offs between algorithmic accuracy and human values, demographic parity versus policy outcomes, and fundamental, power-focused questions such as who gets to decide what is fair.
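To make one of these dimensions concrete, demographic parity can be checked directly: a model satisfies it when the rate of positive outcomes is (approximately) equal across groups. Below is a minimal sketch in Python; the function names and the loan-approval data are hypothetical, purely for illustration:

```python
# Demographic parity: the positive-outcome rate should be (roughly)
# equal across groups. All data here is hypothetical.

def positive_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 = 0.75 approval rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 0.375 approval rate

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.375
```

A gap of zero would mean both groups are approved at the same rate; how large a gap is acceptable is exactly the kind of policy question the paragraph above raises.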
There are five challenges associated with contextualizing and applying fairness in AI systems:

1. Fairness may be influenced by cultural, sociological, economic and legal boundaries

What counts as fair is not universal: it is shaped by cultural norms, sociological context, economic conditions and legal frameworks. A definition of fairness encoded into an algorithm in one jurisdiction or culture may conflict with the anti-discrimination norms or legal standards of another, so fairness cannot be specified once and applied everywhere.

2. Fairness and equality aren't necessarily the same thing

Equality is considered a fundamental human right: no one should be discriminated against on the basis of race, gender, nationality, disability or sexual orientation. While the law protects against disparate treatment (when individuals in a protected class are intentionally treated differently), AI algorithms may still produce disparate impact (when facially bias-neutral variables cause unintentional discrimination).
To illustrate how disparate impact occurs, consider Amazon's same-day delivery service. It is based on an AI algorithm that uses attributes such as distance to the nearest fulfillment center, local demand in designated ZIP code areas and the frequency distribution of Prime members to determine profitable locations for free same-day delivery. The service was found to be biased against people of color, even though race was not a factor in the algorithm. How? The algorithm was less likely to deem ZIP codes predominantly occupied by people of color as advantageous locations to offer the service.
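Disparate impact of this kind can be quantified. One common heuristic, the "four-fifths rule" from US employment-selection guidelines, flags a procedure when one group's positive-outcome rate falls below 80% of the most favored group's rate. Here is a sketch with made-up ZIP-code coverage data standing in for the delivery example; the function and data are illustrative assumptions, not Amazon's actual figures:

```python
# Four-fifths rule: flag potential disparate impact when a group's
# selection rate is below 80% of the reference group's rate.
# All data is hypothetical.

def disparate_impact_ratio(selected_group, selected_reference):
    """Ratio of one group's selection rate to the reference group's rate."""
    rate_g = sum(selected_group) / len(selected_group)
    rate_r = sum(selected_reference) / len(selected_reference)
    return rate_g / rate_r

# Hypothetical coverage: 1 = ZIP code offered same-day delivery, 0 = not.
zips_group_x = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]  # 30% covered
zips_group_y = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]  # 80% covered

ratio = disparate_impact_ratio(zips_group_x, zips_group_y)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact under the four-fifths rule")
```

Note that race never appears as an input here, which is the point of the Amazon example: the disparity emerges from facially neutral variables correlated with it.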

