Imagine trying to use Alexa when you can’t hear her voice. Or learning that someone went ahead of you for care at the hospital because of an algorithm, when your health is worse and your skin color is different.
As an industry, we are often so eager about the possibilities of artificial intelligence (AI) and creating new applications that we sometimes forget to consider all the people who will benefit from them, or those who will be excluded when we don't factor the principles of diversity, equality and inclusion (DEI) into the data we use to build the applications.
AI is only as good as the data you input. A recent study showed that facial recognition software could identify white men with 99% accuracy, but for women of color, accuracy fell to as low as 65%, an error rate of nearly 35%. Why? Because that population was underrepresented in the data used to design and test the systems.
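The mechanism behind results like these is easy to demonstrate. Below is a minimal, hypothetical sketch in Python (the groups, feature values and sample sizes are synthetic, not drawn from any study): a toy "recognizer" sets its acceptance threshold from data dominated by one group, then performs well for that group and poorly for the underrepresented one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical feature values for two demographic groups. Group A dominates
# the data used to build the system; group B is barely represented.
group_a_train = rng.normal(0.0, 1.0, 490)   # 98% of the build data
group_b_train = rng.normal(6.0, 1.0, 10)    # 2% of the build data
train = np.concatenate([group_a_train, group_b_train])

# A toy "recognizer": it accepts a face as valid if its feature value falls
# within two standard deviations of the build data's mean. Because the build
# data is dominated by group A, the acceptance band is fitted to group A.
mu, sigma = train.mean(), train.std()

def recognized(x):
    return abs(x - mu) <= 2 * sigma

# Evaluate on balanced test sets drawn from the same group distributions.
test_a = rng.normal(0.0, 1.0, 1000)
test_b = rng.normal(6.0, 1.0, 1000)

rate_a = np.mean([recognized(x) for x in test_a])
rate_b = np.mean([recognized(x) for x in test_b])
print(f"recognition rate, group A: {rate_a:.0%}")
print(f"recognition rate, group B: {rate_b:.0%}")
```

Nothing in the model is malicious; the disparity comes entirely from who was, and was not, in the data it was built on.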
And in Oakland, software was piloted to predict high-crime areas. But it turned out the algorithm was actually tracking areas with a high population of minorities, regardless of crime rate.
These are examples of injustice, plain and simple, amplified by technology.
In AI and software design, it is absolutely critical that we think about everyone as we innovate. Otherwise, we are guilty of perpetuating and accelerating damaging stereotypes, even if we do so unconsciously. We also miss out on the additional benefits AI can bring to our lives and hold back the true potential of this technology.
At Micron, we think a lot about this issue. Last year, Micron Gives announced our Advancing Curiosity program to provide $1 million to leading university research groups and nonprofit organizations dedicated to social good through AI.
This commitment includes supporting a UCLA project that promotes transparency in machine learning by developing computationally rigorous methods to identify and combat bias in AI. To promote stronger diversity within the AI field, we also support AI4All and its camps, which focus on generating interest in these fields, particularly among underrepresented groups.
And it goes beyond race and gender; we must look at all dimensions of diversity. Almost all AI assistants are voice-based, a design that all but excludes deaf and hard-of-hearing people from using these tools. Through the Advancing Curiosity program, we are working with Rochester Institute of Technology to develop alternative AI-assisted technology that engages deaf and hard-of-hearing users throughout the design process, ensuring the solution really works for the people who use it.
You see, to get quality data inputs for AI, it's crucial that the mix of people and organizations developing these projects reflect the world we all live in. If 7% of the U.S. population has a disability, that percentage should be mirrored in the workplace. That's how you get valuable AI: through the diversity and inclusion of the people who contribute to the design and development of the applications themselves.
But you also have to think about diversity of thought. It’s equally important to have an inclusive environment where everyone feels comfortable sharing ideas. If you have diversity but people don’t contribute, you’re not getting the value of that diversity.
Today, we are at a tipping point, with massive AI datasets being built all around us. It is imperative that we think about our data and whether it fully represents those we want to serve and help. To do that, we must create workplaces where the best solution, the best innovation, the best idea rises to the top — no matter where it comes from.
I’m excited to be a part of a team that is committed to building a workforce that is diverse, equal and inclusive. We are by no means perfect, but I believe the will and momentum are here, and we recognize the value that DEI provides to our culture, to our innovation and to our competitiveness. Learn more about diversity, equality and inclusion efforts at Micron and read our latest DEI report.