Ethics and AI

When you think about the ethics of using AI, two things matter: what your unit allows, and what feels right for your own learning journey. AI is changing how we work, learn and live, so it’s worth considering issues like intellectual property, plagiarism, the environment and the people behind AI as you make your choices.

Environmental impacts of AI

Artificial intelligence does not simply exist ‘in the cloud’; it relies on large data centres containing high-performance computers and extensive data storage. Training and operating AI models requires substantial computational power and energy, which generates significant heat. These systems must be cooled, often using large volumes of fresh, potable water. By 2027, global AI-related water use is projected to reach 4.2–6.6 billion cubic metres, which is approximately one-third to one-half of Australia’s annual water use. This reality places additional pressure on local communities and ecosystems (Climate Leaders Coalition, 2025).

The construction of data centres and AI technologies requires rare earth materials – resources that are finite and environmentally costly to extract, partly because only a small proportion is usable mineral, and the rest becomes toxic waste (Reitmeier & Lutz, 2025). Describing the mining in Jiangxi, China, natural resource strategist David Abraham notes, "Only 0.2 percent of the mined clay contains the valuable rare earth elements. This means that 99.8 percent of earth removed in rare earth mining is discarded as waste, called 'tailings', that are dumped back into the hills and streams" (cited in Crawford, 2021).

As one of the world’s largest producers of rare earths, Australia—particularly Western Australia—has seen increased mining activity, leading to habitat destruction, land degradation, water contamination, biodiversity loss, pollution, and potential damage to Indigenous cultural heritage.

Data centres that host AI systems require vast amounts of electricity, and in many regions this energy still comes from fossil fuels. As AI models become more advanced, they need even larger data centres with much higher energy demands. Training a single system can have a substantial carbon footprint. For example, training OpenAI’s GPT-3 produced 552 tonnes of CO₂, roughly the same as driving 123 petrol cars for an entire year (Climate Leaders Coalition, 2025). In Australia, data centres currently consume about 2% of electricity in the National Electricity Market, a figure expected to rise to 6% by 2030 and 12% by 2050 as demand for AI continues to grow (Senate Select Committee on Adopting Artificial Intelligence, 2024).
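
As a rough back-of-envelope check on that comparison, the short Python sketch below divides the reported training emissions by an assumed average of about 4.5 tonnes of CO₂ per petrol car per year. The per-car figure is an illustrative assumption, not a number from the cited source.

    # Back-of-envelope check of the GPT-3 comparison above.
    # Assumption (not from the source): an average petrol car emits
    # roughly 4.5 tonnes of CO2 per year.
    GPT3_TRAINING_TCO2 = 552   # tonnes of CO2 (Climate Leaders Coalition, 2025)
    CAR_TCO2_PER_YEAR = 4.5    # assumed tonnes of CO2 per petrol car per year

    car_years = GPT3_TRAINING_TCO2 / CAR_TCO2_PER_YEAR
    print(f"Equivalent to ~{car_years:.0f} petrol cars driven for a year")  # ~123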

It’s not only model training that consumes energy; everyday AI use does too. A single ChatGPT request can use about ten times more electricity than a basic Google search, with image generation using roughly twenty times more, and video generation much higher again. An AI-powered Google search is estimated to require 20 to 30 times more energy than a standard search because of the extra computing power needed (de Vries, 2023).
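
To make those multipliers concrete, the sketch below works through the per-request estimates reported by de Vries (2023), roughly 0.3 Wh for a standard Google search and about 2.9 Wh for a ChatGPT request, and applies the 20 to 30 times multiplier quoted above to see what an AI-assisted search implies in watt-hours. All figures are estimates, not measurements.

    # Illustrative arithmetic only; per-request figures are estimates
    # reported by de Vries (2023).
    STANDARD_SEARCH_WH = 0.3   # Wh per standard Google search
    CHATGPT_REQUEST_WH = 2.9   # Wh per ChatGPT request

    ratio = CHATGPT_REQUEST_WH / STANDARD_SEARCH_WH
    print(f"ChatGPT request vs standard search: ~{ratio:.0f}x")  # ~10x

    # The 20-30x range quoted above implies, per AI-assisted search:
    for multiplier in (20, 30):
        print(f"{multiplier}x -> {multiplier * STANDARD_SEARCH_WH:.1f} Wh")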

A more complex picture

AI is contributing to rising energy use in data centres, but it’s important to remember that AI isn’t the only driver. Data centres support everything from video streaming and cloud storage to social media, online shopping and government systems — and energy use is increasing across all of these services, not just AI (International Energy Agency, 2024). At the same time, improvements in hardware, model design and algorithms may help reduce the electricity needed to run AI systems. However, expecting efficiency gains to fully offset the long-term growth in energy demand may be unrealistic. As AI becomes cheaper and faster to run, more people and industries are likely to use it — a rebound effect that may increase energy use rather than reduce it (de Vries, 2023).

Labour impacts of AI

Many people don’t realise that today’s AI systems rely on huge amounts of human labour behind the scenes. Workers spend hours labelling images, videos and text so that AI models can learn, yet this work is often done in lower-income countries through crowdsourcing platforms where pay is low, job security is minimal, and no employment benefits exist (Crawford, 2021). In many cases, people invest significant time reading instructions or completing unpaid training, only to finish a handful of tasks before the project suddenly ends.

A survey of 3,500 crowdworkers across 75 countries found that people on platforms like Amazon Mechanical Turk and Clickworker earned only $1 to $5.50 per hour, far below the minimum wage in most places (Berg et al., 2018). Meanwhile, companies developing AI, such as OpenAI, pay some engineers salaries of almost a million dollars a year; OpenAI itself was valued at around $80 billion in early 2024. This highlights the stark gap between those who build AI and those who provide the supporting labour behind the scenes (Narayanan & Kapoor, 2024).

GenAI models are trained on data scraped automatically from the internet, which means they need filters to prevent the output of toxic content, including hate speech, content promoting self-harm and images of child abuse. To develop these filters, humans have to label millions of examples of toxic text and images. Mophat Okinyi, a former content moderator for OpenAI, was part of a team in Kenya whose job was to read and label thousands of descriptions of toxic content. Okinyi would view up to 700 text passages a day, many depicting graphic sexual violence. The work took a gruelling toll on his mental health and on his relationship with his family (Hao, 2025).

Not all AI data work is harmful—many workers label everyday content like trees, traffic signs or simple phrases—but some of it involves the most distressing material online. This work is essential for making AI systems safer, and demand for it is likely to increase as AI use expands and safety filters need ongoing updates. Understanding this hidden labour is essential if we want AI systems to be developed in ways that are fair, ethical, and safe for the people who make them possible. Today, Mophat Okinyi is a leading advocate for the fair treatment and rights of online content moderators and tech workers, having helped establish the Content Moderators Union: the first organisation dedicated to protecting AI data workers in Africa (Hao, 2025).

Privacy impacts and bias in AI

GenAI systems need enormous amounts of data—millions of images and trillions of text sources—and most of this data comes from us. Modern computer-vision tools exist only because people have uploaded billions of photos to platforms like Facebook and Flickr. Even small actions, like tagging a friend in a photo or completing a CAPTCHA by identifying traffic lights, can end up contributing to the datasets used to train AI systems (Mitchell, 2020).

A common industry practice is to scrape vast numbers of images and texts from the internet, sort them into categories, and use them to teach AI how to “see” and interpret the world. This process often assumes that anything publicly available online is free to take and use, without the need for agreements, signed releases, or ethics approval. But these datasets often reflect the biases of the internet itself. For example, one major face-recognition dataset was found to be 77.5% male and 83.5% white because online images tend to feature public figures, who are disproportionately white men (Crawford, 2021).

Information that GenAI produces will reflect the prejudices that exist in the data it is trained on, meaning that it can reinforce and amplify existing stereotypes. In one study, an AI system incorrectly classified men as women when they were standing in kitchens—because the dataset included far more images of women in that setting (Mitchell, 2020).

A well-known real-world example comes from Amazon. The company found that an internal AI hiring tool was systematically downgrading female applicants for senior roles. Because the model was trained on résumés from the previous decade—most of which came from men—the AI “learned” that male candidates were preferable. It penalised résumés containing the word “women’s” (such as “women’s chess club captain”) and downgraded graduates from all-women’s colleges (Hao, 2025). Engineers attempted to correct the system by removing explicitly gendered language, but the AI continued to rely on subtler patterns—such as preferring action verbs more commonly used by men (“executed,” “captured”). Eventually, Amazon concluded that the tool could not be fixed and abandoned the project.
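
To see how this kind of bias arises mechanically, the toy sketch below (Python with scikit-learn) trains a simple classifier on synthetic “hiring” data in which past decisions penalised a proxy feature. Everything here is invented for illustration; it is not Amazon’s system or data.

    # Toy illustration of bias inherited from historical labels.
    # All data is synthetic and invented for this sketch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    womens_word = rng.integers(0, 2, n)   # resume contains the word "women's"
    experience = rng.normal(5, 2, n)      # genuinely job-relevant feature

    # Historical "hired" labels penalise the proxy word regardless of skill:
    hired = (experience + rng.normal(0, 1, n) - 2.0 * womens_word) > 4

    X = np.column_stack([womens_word, experience])
    model = LogisticRegression().fit(X, hired)

    print("coefficient on proxy word:", model.coef_[0][0])  # strongly negative
    print("coefficient on experience:", model.coef_[0][1])  # positive

Nothing in the code mentions gender directly; the model simply reproduces the pattern baked into the historical labels, which is why removing explicit keywords did not fix Amazon’s tool.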

References

Berg, J., Furrer, M., Harmon, E., Rani, U., & Silberman, M. S. (2018). Digital labour platforms and the future of work: Towards decent work in the online world. International Labour Office. https://www.ilo.org/publications/digital-labour-platforms-and-future-work-towards-decent-work-online-world

Climate Leaders Coalition. (2025). AI, climate and the environment: Strategies to reduce the impact. https://www.climateleaders.org.au/wp-content/uploads/2025/10/2509-CLC-AI-Climate-and-Environment-Strategies-to-Reduce-Impact.pdf

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194. https://doi.org/10.1016/j.joule.2023.09.004

Hao, K. (2025). Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI. Penguin Press.
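
International Energy Agency. (2024). Electricity 2024: Analysis and forecast to 2026. https://www.iea.org/reports/electricity-2024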

Mitchell, M. (2020). Artificial intelligence: A guide for thinking humans. Picador.

Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press.

O’Donnell, J. (2025, May 20). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

Reitmeier, L., & Lutz, S. (2025, September 12). What direct risks does AI pose to the climate and environment? Grantham Research Institute on Climate Change and the Environment, London School of Economics and Political Science. https://www.lse.ac.uk/granthaminstitute/explainers/what-direct-risks-does-ai-pose-to-the-climate-and-environment/

Senate Select Committee on Adopting Artificial Intelligence. (2024). Chapter 6: Impacts of AI on the environment. In Adopting artificial intelligence (AI): Report (pp. 161–183). Parliament of Australia. https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Adopting_Artificial_Intelligence_AI/AdoptingAI/Report/Chapter_6_-_Impacts_of_AI_on_the_environment