The global workforce producing and maintaining artificial intelligence (AI) is marked by significant inequities. Alongside other inequalities in the sector, the field is overwhelmingly male-dominated.
AI is an unequal industry
Only 22% of AI professionals globally are female. The picture is similar in Australia, where women made up just 23% of information technology graduates in 2021. According to OECD.AI, only half of all academic publications in AI have at least one female author, in stark contrast to the 90% that have at least one male co-author.
Bias is being reproduced and reinforced
Evidence shows that biases can creep into AI models and are then reproduced in their outputs. When given general prompts, AI text-to-image generators tend to produce images that reinforce whiteness, heteronormativity and American ideals. AI generators can also amplify harmful stereotypes: associating Muslims with violence, portraying people with disabilities as needing to be “fixed” or saved, and suggesting that women are less suited to leadership roles or certain occupations.
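To illustrate the mechanism in a hedged way, the short Python sketch below is not drawn from any study cited here; its toy corpus, occupation and pronoun lists, and window size are invented for demonstration only. It shows how a skewed body of training text leads even a simple co-occurrence model to associate occupations with one gender, the same kind of pattern that large statistical models learn and reproduce at scale.

```python
# Minimal illustrative sketch: biased training text produces biased associations.
# The toy corpus, word lists and window size are invented for demonstration only;
# real models learn the same kind of skew from web-scale data.
from itertools import product

# Toy corpus with an intentional skew: "engineer" mostly co-occurs with "he",
# "nurse" mostly with "she".
corpus = (
    "he is an engineer . he leads the engineer team . "
    "she is a nurse . she runs the nurse ward . "
    "he is an engineer . she is a nurse ."
).split()

def cooccurrence(target: str, context: str, window: int = 3) -> int:
    """Count how often `context` appears within `window` tokens of `target`."""
    total = 0
    for i, token in enumerate(corpus):
        if token == target:
            neighbourhood = corpus[max(0, i - window): i + window + 1]
            total += neighbourhood.count(context)
    return total

for occupation, pronoun in product(["engineer", "nurse"], ["he", "she"]):
    print(f"{occupation!r} near {pronoun!r}: {cooccurrence(occupation, pronoun)}")

# The printed counts reflect the skew baked into the text (engineer-he, nurse-she).
# Association-based models trained on such data reproduce exactly this pattern.
```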
What this means for learning and assessment
AI's susceptibility to bias poses a serious risk to education and the future of learning. Students may use tools that confidently present false or misleading information as fact. Without expertise in the relevant subject area, they may find these errors difficult to detect or challenge.
An adequate response to the challenge of assessment in the age of AI must be holistic. It is necessary to assess students on their ability to use AI in ways that do not unfairly discriminate against, or cause harm to, people on the basis of gender, sexual orientation, race, ethnicity, age, disability, religion or other characteristics.
It is also necessary to equip university staff with the skills to identify, navigate and teach about these biases.
Our submission to TEQSA calls for embedded EDI approaches
SAGE has called for an equity, diversity and inclusion (EDI) lens on the potential impacts of AI in higher education. In feedback to TEQSA on its Assessment reform for the age of artificial intelligence, we make recommendations for changes to the new guidelines and criteria.
SAGE has recommended that higher education institutions be guided to:
- Assess students on their ability to recognise and minimise social biases and discrimination in AI.
- Build staff capacity to identify and teach about equity, diversity and inclusion issues in AI.
- Increase the representation of women and marginalised groups amongst AI researchers and developers.