The use of Generative Artificial Intelligence (GAI) tools like ChatGPT is becoming common in higher education. A research project led by IDS Affiliate Ran Liu and Graduate Fellow Carla Glave offers insights into their potential to both reduce and reinforce inequalities in STEM education.
The project, "The Transformative Potential of Generative Artificial Intelligence (GAI) in STEM Learning, Equality, and Inclusion: A Student-Centered Mixed Methods Study," aims to understand how students are using these tools—and whether GAI is helping or hurting efforts to create more equitable learning environments.
The research, supported by an Institute for Diversity Science Seed Grant, focuses on the experiences of students from historically underrepresented groups. These include women, first-generation college students, non-native English speakers, and students from underrepresented racial and ethnic backgrounds. Liu and Glave used a mix of surveys, interviews, and analyses of student-AI interactions to examine how GAI is shaping STEM learning.
The study found that students are already widely using GAI. The students who were surveyed used tools like ChatGPT for tasks such as brainstorming, coding, writing assistance, and exploring complex STEM concepts. For underrepresented students—particularly first-generation and multilingual learners—GAI may serve as a useful support when help from professors or other students isn’t available. Some students reported that GAI helped them overcome language barriers and provided personalized learning opportunities that traditional resources sometimes lack.
However, the research also uncovered areas of concern. Many students have a limited understanding of how GAI tools work, sometimes perceiving them as advanced search engines or even “magical” sources of knowledge. This misunderstanding can lead to uncritical reliance, potentially exposing students to misinformation or biased content. International and multilingual students raised concerns about the Western-centric nature of GAI responses, which can reinforce cultural biases.
Liu and Glave’s survey of over 1,200 students also showed disparities in who is using GAI, how often, and with what level of confidence. Women report later adoption of GAI tools and lower confidence in using them than men, despite expressing similar motivations for using them. Additionally, women, non-binary students, and students of color tend to express greater concerns about equity, bias, and cultural representation in GAI usage. These patterns suggest that, without adequate support, the integration of GAI into higher education may risk reinforcing existing inequalities. On the other hand, international and multilingual students report receiving more GAI-related information from a range of sources, highlighting the potential for these technologies to support more inclusive learning experiences.
Liu and Glave emphasize that universities should move quickly to address these inequalities by developing clear guidelines on ethical AI use, providing equitable access to GAI tools, and offering targeted AI literacy programs. “Generative AI has the potential to be a powerful tool for expanding access to STEM learning, especially for students who might otherwise struggle to find affordable support,” said Liu. “But that promise won’t be realized unless we take deliberate steps to ensure all students are prepared to use these tools critically and effectively, not just those who are already tech-savvy or have greater access to the most updated information and tools.”
Liu and Glave presented this work at the 2025 annual conferences of the Comparative and International Education Society (CIES) and the American Educational Research Association (AERA), including a highlighted session at CIES.
As GAI continues to become embedded in academic life, this study offers evidence-based insights that can help universities and colleges shape policies to ensure that its benefits are equitably shared and its risks responsibly managed.