A new study has cast a stark light on the geographic biases embedded in ChatGPT, a popular AI chatbot, revealing a concerning disparity in the chatbot’s ability to provide environmental justice information that caters to local needs.
The researchers pinpointed a troubling trend: in densely populated states such as California and Delaware, less than 1 percent of residents live in informational deserts, leaving access to tailored information nearly universal. By contrast, in sparsely populated states such as Idaho and New Hampshire, that figure skyrockets to over 90 percent.
Lecturer Kim of Virginia Tech’s Department of Geography underscores the urgency of further investigation: “Our initial findings are a call to action — geographic biases in AI tools like ChatGPT can no longer be overlooked.”
The university’s findings are not isolated. They join a chorus of recent studies documenting political bias and factual errors in large language models, highlighting the potential for misinformation.