Founded in 1997 at the Boston Museum of Science, the Women in Science and Engineering (WISE) initiative was created to support women and girls in STEM and to increase gender equity in the field. Twice a year, WISE invites a female STEM professional to speak on current developments in her field. For more than two decades, these speakers have discussed subjects ranging from climate change to AI to mechanical engineering, all while inspiring fellow innovators, learners, and enthusiasts.

Last week, I attended the latest WISE event, a panel of women who spoke on the growing phenomenon of AI. The panel included Dr. Barbara Barry, a collaborative scientist at Mayo Clinic; Jordan Harrod, a Ph.D. candidate and AI educator on YouTube; Rebecca Kantar, Vice President of Education at Roblox; and Hala Hanna, Executive Director of Solve.


This group of intelligent and innovative women spoke about the future of AI and the intersection of AI and gender equity. With the rapid growth of AI, almost every modern industry will soon feel its effects. According to Hanna, an estimated 90% of all content and images online will be AI-generated by 2026. Whether one considers this change a boon or a detriment, it's essential that this content reflect the real world. That means closely examining the biases present in AI tools and working to correct them.


The origins of bias in AI.


Compared to just 10 years ago, Kantar said, the generative abilities of modern AI are groundbreaking. ChatGPT, for example, makes monumental progress between versions, with massive improvements from version 3.5 to the current version 4.0. Despite this progress, however, many AIs have inherent biases. As Hanna put it, the world as we know it is actually less biased than AI. When asked to generate images of judges, one AI tool produced a sample in which only 3% depicted female judges. In reality, 34% of judges worldwide are women, demonstrating that AI may not accurately reflect real-world diversity.



Such biases can come from the creators, who may imbue their AIs with subconscious prejudices, or from the datasets used to train them. Barry, who researches AI in healthcare, drew a parallel with bias in medical research. According to Barry, clinical trials only began including women within the last 20 years, which suddenly introduced a large swath of new data into scientific studies. That data shifted the results of many studies, as women's experiences opened doors to knowledge that had previously been ignored. Similarly, AI platforms can only be as knowledgeable as the data we feed them. If the input data does not reflect a diverse population, an AI's outputs will be equally limited. So, the more diversity of thought involved in AI creation, the less likely the AI will contain inherent bias.

[Image: examining bias in a clinical trial]


Bias in the application of artificial intelligence.


However, bias extends beyond the creation process. Barry noted that it can also influence how people apply AI in various situations. If doctors implemented tests run by artificial intelligence, for example, they would still have to decide which patients need the test. Their judgment may be influenced by their worldview, their impression of the patient, and even how their day has gone. When Barry asked the mostly female audience whether they had ever had pain or symptoms dismissed by doctors due to gender bias, many said that they had. In some situations, AI tests may help confirm women's experiences and pain. In other cases, though, AIs could perpetuate inequities tied to gender and other marginalized identities.


How to prevent AI bias.


To combat these biases, it's important to maximize human involvement throughout AI creation. Harrod explained that every person has their own lens through which they view their life and the world. We come to solutions based on these lenses but cannot always see the faults in our own worldviews. Therefore, the more diverse the group of developers is, the more likely it is that they'll catch each other's biases.



By getting involved in conceptualizing AI tools, Harrod added, we “shape the landscape of people affected by these technologies.” She insightfully pointed out that it is often not possible for the people most harmed by a technology to participate in the conversation or the process of its development. Therefore, those who have the privilege and access to create new technologies must listen to the voices of those with less privilege. Only 12% of AI researchers are women, for example, so it's important for men in AI fields to seriously consider gender equality in their work.


Staying informed.


As Harrod acknowledged, though, most people don't have access to the process of AI development. So, what can we do instead? How can we contribute to making ethical AIs? The most important step we can all take is staying informed. Kantar said, “Every time we choose not to be informed, we abdicate the chance to spread truth.” In other words, whenever we avoid interacting with new technologies, we forfeit the opportunity to help make those technologies safe and ethical.

[Image: educators researching AI]


For Kantar, ethical AI usage begins with knowledge: we must understand what these tools are and are not capable of. To that end, Kantar encourages people to play around with different AIs and read more about their capabilities. It can be challenging to sift through the wealth of information on AI and learn what is most useful. However, according to Harrod, AI users don't need an intricate understanding of the deep technical details behind these tools. It's most important to understand how to create a helpful prompt and then evaluate the response. These are also the skills most relevant to students.


Teaching students to ethically use AI.


In Kantar’s view, kids are going to lead the movement to use AI. These tools cut time and effort, and many kids now go straight to ChatGPT instead of using Google. Kids are going to find a way to use AI no matter what, so it’s important to consider what skills AI can cultivate. Tools like ChatGPT can teach young people to ask good questions, think about what problems are worth solving, or consider the best tool to use when completing a task. 


By banning AI tools completely, teachers may be missing an opportunity to help their whole class learn to use these tools well. Teachers can challenge students to use ChatGPT to enhance their learning: ask ChatGPT how to conduct an experiment, for instance, then have students analyze whether its suggestions would actually work. ChatGPT can often produce what experts call “believable hallucinations,” or false information that appears accurate. It is essential for kids (and teachers) to develop a sense for high-confidence and low-confidence information. In other words, you should be able to assess whether AI-generated information sounds accurate or suspicious, then fact-check it. Thanks to near-constant content saturation, young people must learn how to evaluate information anyway, and ChatGPT offers great practice.


[Image: a student using AI to complete an assignment]


Educators must grapple with the fact that students will inevitably use AI, and the best way to reckon with this new reality is to become familiar with it. Try out some AI tools like ChatGPT, Google’s Bard, DALL-E, or others. Come up with creative uses for AI in your classroom, or use it to organize things like lesson plans. Using AI regularly can demystify it for teachers and students alike, and this familiarity makes it easier to recognize when students use it to plagiarize assignments. Rather than only fighting AI-enabled cheating, teachers can redirect kids toward using AI to enhance learning. As Kantar added, “This technology is here. We can’t put it back in the box, so we have to make sure our kids have the power to use it.”


Moving forward with new tech.


Using AI for social good presents its own challenges. While AI can certainly amplify biases, it can also exponentially scale access and effectiveness for many people. AI tools collapse costs and reduce labor, which greatly enhances the accessibility of education and creation. Using AI ethically means committing to equity from the beginning, understanding the limits of AI, and learning to use it responsibly.

To stay up to date with STEM, EdTech, and AI, be sure to follow us on Twitter, Facebook, and Instagram.