Dominique Davis is an applied research scientist at DimensionalMechanics, integrating data from the cognitive studies she leads into the systems the company is building. Artificial intelligence, like many other technologies, can end up reflecting the biases of its human creators, and Davis is very concerned with how to bring ‘mindfulness’ to that work.

What are some common misconceptions about what the field of AI encompasses, and the role it plays in our lives in America?

Oftentimes people conflate AI with thinking, feeling robots, which are far from where most AI industry efforts lie. The media exacerbate these ideas, which further fuels fear in society. The intention is not to replace humans but rather to develop tools humans can work with to improve the human condition.

Messaging around AI is future-centric, so many people aren’t sure what AI is and aren’t even aware that they’re using it. If you’ve asked your phone for directions, used a social network, or listened to an automatically generated playlist, you’ve likely reaped the benefits of AI. In the coming years the benefits will certainly grow. I believe AI will improve health screenings, airport security, and insurance and bank procedures — things that affect Americans’ daily lives.

How did you find your way into this field?

Originally my plan was to earn a PhD in Psychology, become a professor, and open my own cognitive research lab at a university. That didn’t pan out for me for two reasons. First, I didn’t receive the support that I had expected during the program. It’s tough enough being a woman of color needing to prove your ideas to those who don’t look like you. The pressure became exhausting. Second, the more theories I learned, the more deeply I thought about how I could apply them in the real world. How would X theory help me build an app or device?

I decided that my efforts were best suited to an industry setting. After earning my Master’s, I accepted an Applied Research Scientist position at my current company, DimensionalMechanics. This was the best career decision I’ve made yet. My role allows me to apply theory to code and tangible products — it’s a wonderful feeling.

Help me better understand what a day at work looks like for you; is there a recent memorable one you can describe?

I had been working on building an AI text classification model, and the process was very challenging yet rewarding. My research background is in cognitive psychology, so I’ve had to learn additional programming languages on the job. There were days when I painstakingly looked over my code only to find a simple typo. Other times I realized that I had to restructure my code because the approach wasn’t going to scale. Finally, one day my code worked and I got my initial model trained to 78 percent accuracy! I knew that I had more training to do to reach 90 percent accuracy, but I was still very proud to get to that point.
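For readers curious what that kind of workflow can look like in code, here is a minimal sketch of training and scoring a text classifier. It assumes scikit-learn and a hypothetical labeled CSV of example texts; it is an illustration, not Davis’s actual code or toolchain.

```python
# Minimal text classification sketch (illustrative only; not the interviewee's code).
# Assumes a hypothetical CSV "examples.csv" with columns "text" and "label".
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

data = pd.read_csv("examples.csv")  # hypothetical dataset
X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["label"], test_size=0.2, random_state=42
)

# TF-IDF features fed into a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Accuracy on held-out data is the kind of number quoted above (e.g. 78 percent).
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2%}")
```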

Women often question their place in technical roles. I believe women of color question this even more, because oftentimes they didn’t have as much access to technology in school or at home. In any case, my experience demonstrates that you can become proficient in a technical role with a limited technical background. It’s all in how you apply yourself.

Vis-à-vis your focus on diversity and inclusion in AI, where are the biggest points of frustration?

It’s difficult for young women and people of color to see themselves in this field when most conferences feature a majority of speakers who don’t look like them. In the field we need to do a better job of highlighting the work of those who may not look like your typical AI scientist and who may not have graduated from a top recruiting school.

When we lead by example, we will inspire a fresh generation of AI scientists eager to build top-performing technologies. In order for the AI field to stay innovative, fresh ideas need to circulate. The only way that will happen is if diversity in every sense is embraced.

Can you describe a role in the field of AI that requires the kind of mindfulness you’ve described — and how that mindfulness could be built into engineers’ training in more effective ways?

Many AI engineers aren’t aware of the implications of bias in their datasets. When AI engineers train facial recognition algorithms, for example, it’s critical that they train the algorithm on faces with as many different skin colors as possible. Too many times it’s reported that a facial recognition app misclassifies users or outright fails to detect them because their faces were not represented in the training dataset.
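As one illustration of that point (a sketch under stated assumptions, not any specific company’s pipeline), a simple representation audit can be run before training. It assumes the face dataset ships with a hypothetical metadata file that records a skin-tone group for each image:

```python
# Sketch of a pre-training representation audit (hypothetical metadata; illustrative only).
# Assumes a hypothetical CSV "faces_metadata.csv" with a "skin_tone_group" column.
from collections import Counter
import csv

with open("faces_metadata.csv", newline="") as f:
    groups = [row["skin_tone_group"] for row in csv.DictReader(f)]

counts = Counter(groups)
total = sum(counts.values())

# Flag any group that makes up less than, say, 5 percent of the training images.
for group, count in sorted(counts.items()):
    share = count / total
    flag = "  <-- under-represented" if share < 0.05 else ""
    print(f"{group}: {count} images ({share:.1%}){flag}")
```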

Even more critical, AI can now affect the fate of someone’s life in the courtroom. If legal staff pass court case documents through an algorithm to help decide the outcome of someone’s case, it’s critical that the algorithm is trained on a minimally biased dataset.
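In the same spirit, once such a model exists its error rate can be broken out by group rather than reported as a single number. The sketch below uses made-up labels, predictions, and group tags purely to show the idea:

```python
# Sketch of a per-group accuracy audit (hypothetical data; illustrative only).
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall and per-group accuracy so disparities are visible."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    overall = sum(hits.values()) / sum(totals.values())
    return overall, {g: hits[g] / totals[g] for g in totals}

# Example with made-up values: a single 80 percent figure hides a weaker group.
overall, per_group = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    y_pred=[1, 0, 1, 0, 0, 1, 1, 1, 1, 0],
    groups=["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"],
)
print(overall, per_group)  # 0.8 overall, but group B sits at 0.6
```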

Alongside technical skills, engineers absolutely need emotional intelligence (EQ). This, in my opinion, should be taught in schools — well before entering the workplace. It’s never too late, though. Teams high in EQ build better products and do a better job of maintaining inclusive environments. Team members from non-traditional tech backgrounds need to know that their ideas and efforts matter and are valued.
