Unintended consequences of technology: Enabling NextGen discrimination
Dr. Sarah Saska talked about the unintended consequences of technology at Big Data and AI Toronto 2020. She focused on three major areas where technology is used to solve diversity, equity, and inclusion-related challenges in the workplace.
Dr. Sarah Saska, Co-founder and CEO at Feminuity, is a seasoned academic and experienced practitioner. She has led pioneering doctoral research at the intersection of equity, technology, and innovation. Her research highlighted the need for companies in the technology and innovation sector to centre ethical and equitable design.
She acknowledged that the “pandemic has led to the faster deployment and uptake of many new technologies”. Businesses have been racing to accelerate their digital transformation and adapt to an ever-changing environment.
“As we witness a global understanding of systemic inequities, systems that have been broken for far too long, along with the increased global demands for racial justice as called for by the Black Lives Matter movement and those working in solidarity, we’re also witnessing companies showing interest in advancing diversity, equity, and inclusion related efforts within the organizations.”
“Many are actually looking at technology as that veritable silver bullet to assist them with their diversity, equity, inclusion related efforts. So, in real time, we’re also witnessing the rise of diversity and inclusion technologies. Tech that’s aimed at helping us to solve complex forms of inequity within the context of the workplace.”
Unfortunately, these innovations can lead to unintended consequences, especially bias and discrimination. “We have a really good shot of enabling the next generation of different types of discrimination,” warned Dr. Saska.
“Technology is not more objective than people. It is merely an extension of us and of our collective histories. Technologies are built by us and they are using our datasets that reflect our history of exclusion and discrimination. The good, the bad, and everything in between. And the same tech is shaped by us on an ongoing basis so it’s also a reflection of our current behaviors.”
Recruitment and screening tools
A few years ago, Amazon built a resume screening tool to automate the search for top talent. Dr. Saska explained that “the screening algorithm was built using resumes that Amazon had collected over the past decade, and as it turns out, those resumes were largely from men”. As a result, the AI recruiting tool showed bias against women.
With the intention of solving the problem of bias in recruitment, Amazon “created a whole new slew of problems”, she said. She also added that one of the most important points here is that “the programmers who developed the screening tool actually tried to edit the algorithm in an effort to make it be more objective […] but ultimately the programmers concluded that neutrality simply wouldn’t be possible for this tool”.
Despite all the resources allocated to building this new tool, Amazon made the decision to scrap it. In Sarah’s opinion, “this is an all too telling corporate admission that even when you’re one of the most well-resourced companies in the world, you cannot engineer objectivity”.
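The mechanics behind this failure are simple to illustrate. The following is a minimal sketch, not Amazon’s actual system: all resumes, tokens, and scores are hypothetical. It shows how a naive scorer trained on historical hiring outcomes learns to penalize terms that merely co-occurred with past rejections, such as a gendered word like “womens”.

```python
from collections import defaultdict

def token_hire_rates(history):
    """Per-token hire rate from historical (tokens, hired) records."""
    counts = defaultdict(lambda: [0, 0])  # token -> [times hired, times seen]
    for tokens, hired in history:
        for t in set(tokens):
            counts[t][0] += int(hired)
            counts[t][1] += 1
    return {t: hired / seen for t, (hired, seen) in counts.items()}

def score_resume(tokens, rates, default=0.5):
    """Average hire rate of a resume's tokens; unseen tokens get a neutral default."""
    return sum(rates.get(t, default) for t in tokens) / len(tokens)

# Hypothetical history: mostly male hires, so "womens" co-occurs with rejection
history = [
    (["python", "aws", "chess"], True),
    (["java", "aws"], True),
    (["python", "chess"], True),
    (["python", "womens", "chess"], False),
    (["java", "womens"], False),
]
rates = token_hire_rates(history)

# Two otherwise identical resumes: the one mentioning "womens" scores lower
biased = score_resume(["python", "aws", "womens"], rates)
neutral = score_resume(["python", "aws"], rates)
assert biased < neutral
```

Nothing in the code mentions gender explicitly; the bias rides in on the training data, which is why editing the algorithm alone could not make the tool neutral.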
Tools impacting pay equity
Once a company has decided which candidate to hire, it can also use predictive tools to help determine what offer to extend to that person.
Let us take an example of a company where women make less money than men on average. Dr. Saska argued that “the algorithm will identify these types of pre-existing patterns and they’ll end up perpetuating these pay inequities in the offer that it actually proposes to you know the HR and people leaders to extend to candidates”.
It can become an even bigger problem when this situation remains unknown to the hiring team since it can circumvent laws that are in place to protect equal or equitable pay.
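A sketch makes the mechanism concrete. This is a hypothetical illustration, not any real vendor’s tool: a predictor that learns the “typical” raise ratio from past offers and anchors new offers to a candidate’s prior salary. Because prior salary encodes an existing pay gap, equally qualified candidates receive unequal offers.

```python
from statistics import mean

def learn_uplift(history):
    """Learn the typical offer/prior-salary ratio from past (prior, offer) pairs."""
    return mean(offer / prior for prior, offer in history)

def suggest_offer(prior_salary, uplift):
    """Anchor the proposed offer to the candidate's prior salary."""
    return round(prior_salary * uplift)

# Hypothetical training data: offers were historically ~10% above prior salary
history = [(100_000, 110_000), (80_000, 88_000), (90_000, 99_000)]
uplift = learn_uplift(history)

# Two equally qualified candidates whose prior pay reflects an existing gap:
# the tool dutifully reproduces that gap in the new offers
low_offer = suggest_offer(95_000, uplift)
high_offer = suggest_offer(110_000, uplift)
assert low_offer < high_offer
```

The tool never sees gender at all, which is exactly why the perpetuated gap can go unnoticed by the hiring team.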
Facial recognition tools
Facial recognition also creates problems in the context of the workplace. Zoom’s face detection algorithm came under scrutiny last year when it erased racialized people’s faces: it worked well for individuals with white or light skin, but not for people of colour. Sarah pointed out that this was most likely due to a lack of diversity in the images in their databases.
The pandemic has amplified the problem through increased surveillance. Many organizations, showing a lack of trust in employees who are working remotely, have been using facial recognition tools that have at times been shown to discriminate based on race.
“We already know that facial recognition software and algorithms can barely recognize racialized people’s faces, but now we’re actually using it to determine if someone is a good employee, if they’re working well from home, if they can be trusted while working remotely.”
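One way this failure mode stays hidden is that vendors report a single aggregate accuracy number. The sketch below uses entirely hypothetical detection outcomes to show why disaggregated, per-group evaluation matters: the overall metric can look acceptable while one group experiences a much higher failure rate.

```python
def accuracy(results):
    """Fraction of successful detections in a list of booleans."""
    return sum(results) / len(results)

# Hypothetical face-detection outcomes (True = face detected correctly)
outcomes = {
    "light-skinned": [True] * 48 + [False] * 2,   # 48 of 50 detected
    "dark-skinned":  [True] * 33 + [False] * 17,  # 33 of 50 detected
}

# Aggregate accuracy looks fine; per-group accuracy reveals the disparity
overall = accuracy([r for group in outcomes.values() for r in group])
per_group = {g: accuracy(rs) for g, rs in outcomes.items()}

assert overall > 0.8                    # the headline number seems acceptable
assert per_group["dark-skinned"] < 0.7  # but one group fails far more often
```

When such a system is then used to judge whether someone is “working well from home”, the measurement gap becomes a discrimination gap.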
“Now more than ever, we need to be so vigilant about the technologies that we use and choose to engage. Those that impact our lives in ways that many of us don’t understand as of yet. We cannot let the promise of AI overshadow real and present harms to people. The efficiency and scalability of technology means that they can reproduce existing inequities at scale and at warp speed.”
Getting things right is crucial to preventing these unintended consequences of technology. As Sarah put it, “We can’t let amoral markets that prioritize profit over social good supersede our responsibilities to one another, as well as our future. Technology will be most powerful when everyone is actually empowered by it and that’s the kind of world that I want to see.”
Dr. Sarah Saska will join us at Big Data and AI Toronto 2021 on October 13, 2021. She will take part in a fireside chat about biases and discrimination. You can secure your spot here.