AI Education
(A GLANCE AT THE FUNCTION & IMPACT OF AI)
WHAT IS AI?
Artificial adjective
made or produced by human beings rather than occurring naturally, especially as a copy of something natural.
OXFORD LANGUAGES DEFINITION
Intelligence noun
the ability to acquire and apply knowledge and skill.
OXFORD LANGUAGES DEFINITION
Artificial Intelligence (AI) is an umbrella term that refers to computer systems that perform tasks NORMALLY done by humans.
For simplicity, we will break AI down into two branches – Traditional AI and Generative AI. However, it is important to note that the world of AI is much more complex than just these two branches.
For a more detailed description of AI, check out:
“What is artificial intelligence” by Eda Kavlakoglu and Cole Stryker
TRADITIONAL VS GENERATIVE AI
Traditional AI → This branch of AI classifies, predicts, and decides based on data and information. It does NOT invent and it does NOT create; it takes in data and information, learns to recognize patterns, and makes decisions.
Generative AI → This branch of AI creates something ‘new’ by learning from data and examples.
Let’s explore some examples below:
- Traditional AI → recommends songs based on your listening history
  Generative AI → could create a new song that matches the style of songs in your listening history
  “One system predicts, the other creates”
- Traditional AI → will classify help tickets and route them to the correct department
  Generative AI → will write the reply to the customer, explain the issue, and guide the customer to a solution
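To make the contrast concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library. The specific models, prompts, and outputs are illustrative assumptions added for this guide, not details taken from the examples above.

# Minimal sketch: one model predicts a label, the other generates new text.
from transformers import pipeline

# Traditional AI: classify an input into known categories (predicts, does not create).
classifier = pipeline("sentiment-analysis")
print(classifier("My order arrived broken and support never replied."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}]

# Generative AI: produce new text that did not exist before (creates).
generator = pipeline("text-generation", model="gpt2")
reply = generator("Dear customer, thank you for reaching out about your order.",
                  max_new_tokens=40)
print(reply[0]["generated_text"])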
AI learns from information and data, just like we learn from experience.
When you learn to recognize animals, you look at photos of cats and dogs until you can tell them apart. AI does the same: it looks at thousands of pictures to recognize patterns. This is called Machine Learning.
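As a toy illustration of “learning from examples” (a sketch only, using the scikit-learn library; the numbers below are made up and simply stand in for features of photos):

# Toy sketch: a model "looks at" labeled examples and learns to tell cats from dogs.
from sklearn.tree import DecisionTreeClassifier

# Each "photo" is reduced to two made-up numbers (e.g., ear pointiness, snout length).
features = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]
labels = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(features, labels)            # the learning step: find the pattern in the examples

print(model.predict([[0.85, 0.25]]))   # a new, unseen "photo" -> ['cat']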
Why are we talking about this now? While AI has been around for quite some time, the introduction of new machine learning and, in particular, generative AI tools has boomed since the 2010s. Leading entities introducing these systems include Microsoft, Anthropic, and Google.
For a more comprehensive timeline of the development of AI, check out:
The Ultimate Timeline of Artificial Intelligence Technology
WHAT ARE THE IMPACTS OF AI?
ENVIRONMENTAL IMPACT
Artificial intelligence tools and systems rely on physical infrastructure, primarily data centers, which raise significant environmental concerns, particularly around energy and water consumption.
The International Energy Agency (IEA) provides a breakdown of energy demand:
Servers → “on average they account for around 60% of electricity demand in modern data centres, although this varies greatly between data centre types.” Servers can be equipped with Central Processing Units (CPUs) and Graphics Processing Units (GPUs); in the past decade, the use of GPUs has boomed.
Storage devices → “devices used for centralized data storage and backup, and account for around 5% of electricity consumption.”
Network equipment → switches that connect devices within the data center, routers for directing traffic, and load balancers account for up to 5% of electricity demand.
Backup generators → keep data centers running during power outages; although rarely used, they are still considered ‘necessary.’
A 2025 article from the Lincoln Institute of Land Policy notes that even a mid-sized data center can consume as much water as a small town, while larger facilities may use up to 5 million gallons per day – comparable to a city of 50,000 people.
In terms of energy use, the IEA, cited in the same article, estimates that a conventional data center can draw as much electricity as 10,000 to 25,000 households. These facilities also depend heavily on freshwater for cooling systems that maintain optimal operating temperatures. The Environmental and Energy Study Institute (EESI) reports that a medium-sized data center can consume up to 110 million gallons of water annually – roughly equivalent to the yearly usage of 1,000 households. Even at the level of individual interactions, resource demands accumulate: researchers at the University of California, Riverside estimate that generating a 100-word AI response can use about 519 milliliters of water, roughly the equivalent of a standard bottle of water.
Beyond water and energy demands, data centers also require significant land and specialized construction. Although they are estimated to account for less than 10% of global electricity use, the IEA notes that their energy consumption is often highly concentrated in specific regions, which can place considerable strain on local power grids.
In addition to water use, data centers contribute to noise, light, and air pollution. Noise pollution is especially prominent during construction, but it continues once facilities are operational. Heating, ventilation, and air conditioning (HVAC) systems produce a constant hum that can exceed 90 decibels; noise levels above the 85-decibel threshold are associated with potential hearing damage.
Large hyperscale data centers also generate continuous light pollution, as some facilities require all-night illumination. This can negatively affect surrounding communities by disrupting natural circadian rhythms, interfering with melatonin production, and altering sleep–wake cycles. Light pollution also impacts wildlife, disrupting migration and behavior patterns in species such as birds, deer, butterflies, and fish.
Residents in Illinois have reported firsthand the effects of data center noise on their quality of life. In a segment by ABC Eyewitness News, David Szala and Bryan Castro, who live near the CyrusOne data center in Chicago, described hearing cooling fans constantly, day and night. “You can hear it as soon as you walk out. Fans, just constant noise,” Szala said. Despite the construction of sound barriers, residents report that the issue persists. “The noise doesn’t drop down and stop. The noise radiates from above,” Castro explained.
FOUNDATIONAL BIAS
Artificial intelligence systems are built on data, human input, and iterative development processes, all of which can introduce bias. During early training phases in particular, these biases can significantly shape how AI systems behave and make decisions. A Chapman University article outlines four key stages of AI development that are especially vulnerable to bias:
Data collection → Bias often begins here. If training data is not diverse or representative, outputs will reflect those limitations.
Example: AI trained on historical hiring data from a company that favors male applicants may replicate those patterns in its recommendations.
Data labeling → Human annotators interpret and label data, which can introduce subjectivity. Categories such as sentiment or facial expression are especially susceptible to cultural and personal bias.
Model training → Imbalanced datasets or poorly designed model architectures can reinforce bias. Optimization methods may also prioritize majority groups, leading to less accurate outcomes for underrepresented populations.
Deployment → Even if a system appears unbiased during development, real-world use can expose gaps. Without diverse testing and ongoing monitoring, AI systems may produce discriminatory or exclusionary results.
A concrete example of bias in generative AI comes from a study conducted by researchers at the University of Washington examining Stable Diffusion, a deep learning model that generates images from text prompts. In the study, researchers prompted the model to create images of “a front-facing person,” varying inputs across six continents and 26 countries, as well as across gender identities (e.g., “person,” “man,” and “nonbinary person”).
They then analyzed the outputs by assigning similarity scores from 0 (least similar) to 1 (most similar). Results showed that prompts for a generic “person” most closely resembled men (0.64), as well as individuals from Europe (0.71) and North America (0.68). In contrast, generated images were least representative of nonbinary individuals (0.41) and people from Africa (0.41) and Asia (0.43).
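The text above does not spell out how such a score is computed. One common approach, shown in the sketch below, is to turn each image into an embedding vector and measure the cosine similarity between vectors; this is an illustrative assumption about the general technique, not the study’s exact pipeline, and the vectors here are made up.

# Illustrative sketch: a 0-to-1 style similarity score via cosine similarity of embeddings.
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors; near 1 means very similar.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

person_vec = np.array([0.8, 0.1, 0.5])  # made-up embedding of a "person" image
group_vec = np.array([0.7, 0.2, 0.6])   # made-up embedding of a comparison image

print(round(cosine_similarity(person_vec, group_vec), 2))  # 0.98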
The study also identified patterns of sexualization. Using a Not Safe For Work (NSFW) detection model, researchers rated generated images on a scale from 0 (neutral) to 1 (“sexy”). They found that images of Venezuelan women were rated significantly more sexualized (0.77) compared to those of women from Japan (0.13) and the United Kingdom (0.16).
These findings highlight how biases embedded in training data and model design can lead to uneven and potentially harmful representations, reinforcing stereotypes and amplifying existing social inequalities.
DATA SECURITY
Data privacy and security have emerged as major concerns in the widespread adoption of artificial intelligence. A 2024 report from TrustArc found that AI remains the top privacy challenge for organizations worldwide for the second consecutive year, highlighting the growing tension between innovation and data protection.
Government use of AI further complicates these concerns. For example, U.S. Immigration and Customs Enforcement (ICE) has incorporated AI-driven tools into its operations, raising risks for vulnerable populations, particularly undocumented communities. Although the Fourth Amendment protects against unreasonable searches and seizures, agencies often rely on a “data broker loophole.” This allows them to purchase personal data such as web activity, location information, and demographic details from third-party companies without a warrant, arguing that data shared online is not legally “private.”
In December 2025, ICE reportedly contracted 13 private companies to provide skip tracing services, a practice that involves locating individuals using online data, public records, and, in some cases, surveillance. One such company, Gravitas AI, has been associated with these efforts. Contractors may verify addresses and document individuals’ locations through time-stamped photos of homes and workplaces, raising serious concerns about surveillance and consent.
Private-sector use of AI also introduces security vulnerabilities. Many organizations rely on tools like ChatGPT to streamline tasks such as editing text or debugging code. However, sharing sensitive information with third-party AI systems can expose organizations to data breaches. For instance, the platform Vercel experienced a data leak after an employee authorized a third-party AI tool using their Google account. This allowed attackers to gain access to internal systems and unencrypted information, leaving both the company and its customers at risk.
A similar incident occurred in 2023 at Samsung, where an employee uploaded confidential source code into ChatGPT for debugging. This raised alarms because, by default, some AI systems may retain and use user inputs for model training unless users opt out, creating potential pathways for sensitive data exposure.
Concerns about data misuse extend beyond corporate environments. In 2022, a California-based AI content creator discovered that her private medical photos had been included in LAION-5B, a publicly available dataset used to train image-generation models. She identified the images using the tool Have I Been Trained, which allows individuals to check whether their work appears in training datasets. The images, originally shared only with a medical provider, highlight the risks of sensitive data being scraped, reused, and redistributed without consent.
Together, these examples illustrate how AI systems can amplify existing vulnerabilities in data privacy, from government surveillance and legal loopholes to corporate data leaks and unauthorized data use. As AI adoption grows, addressing these risks will be critical to ensuring both security and trust.