By Sue Shellenbarger
Artificial intelligence isn’t always intelligent enough at the office.
One major company built a job-applicant screening program that automatically rejected most women’s resumes. Others developed facial-recognition algorithms that mistook many black women for men.
The expanding use of AI is attracting new attention to the importance of workforce diversity. Although tech companies have stepped up efforts to recruit women and minorities, computer and software professionals who write AI programs are still largely white and male, Bureau of Labor Statistics data show.
Developers testing their products often rely on data sets that lack adequate representation of women or minority groups. One widely used data set is more than 74% male and 83% white, research shows. Thus, when engineers test algorithms on databases dominated by people like themselves, the programs may appear to work fine.
The risk of building the resulting blind spots and biases into tech products multiplies with AI, damaging customers' trust and cutting into profit. The benefits of getting it right expand as well, creating big winners and losers.