
Exploring Algorithmic Bias in LLMs

A hands-on, research-focused course exploring how large language models (LLMs) can inherit unfair biases, and what you can do to detect and mitigate them. Participants will replicate real bias-auditing methods from studies like the “Silicon Ceiling,” run simple statistical tests to judge whether observed differences are meaningful or merely random, and build an interactive mini tool that reveals whether LLM responses change when certain variables (like names) are tweaked. By the end, you’ll understand why bias emerges, know how to measure it, and have deployed a small-scale “bias auditing” web app.

$2,500

July 7 – July 27 
July 28 – August 17 

Approximately 7 hours per week,
combining live sessions and independent project work
Key Highlights
Practical Bias Detection
Students will design prompt variations (e.g., different résumés or user attributes) to reveal subtle biases and record results for comparison.
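As a taste of what these labs involve, here is a minimal sketch of how prompt variations might be generated in Python. The name pools and résumé template are illustrative placeholders, not the course’s actual materials:

```python
# Illustrative name pools; any demographic cue you want to test can be swapped in.
NAMES = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}

# A résumé-style prompt template with a {name} placeholder.
TEMPLATE = (
    "Rate this candidate from 1-10 for a software engineering role: "
    "{name}, 5 years of Python experience, B.S. in Computer Science."
)

def build_prompt_variations(template: str, name_pools: dict) -> list:
    """Produce one prompt per name, tagged with its group for later comparison."""
    variations = []
    for group, names in name_pools.items():
        for name in names:
            variations.append({
                "group": group,
                "name": name,
                "prompt": template.format(name=name),
            })
    return variations

variations = build_prompt_variations(TEMPLATE, NAMES)
for v in variations:
    print(v["group"], "->", v["prompt"][:60])
```

Each variation keeps its group label, so once model responses are collected they can be compared side by side.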
Beginner-Friendly Statistics
No heavy math required. Learn to compute averages, standard deviations, and simple p-values so you can judge whether an observed difference is real or likely random.
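For example, a permutation test is one simple way to get a p-value with nothing but averages and shuffling. This sketch uses only Python’s standard library; the two score lists are made-up placeholder data standing in for model outputs:

```python
import random
import statistics

# Hypothetical scores an LLM assigned to the same résumé under two name groups.
group_a = [8.1, 7.9, 8.4, 8.0, 8.2, 7.8]
group_b = [7.2, 7.5, 7.0, 7.4, 7.1, 7.6]

def permutation_p_value(a, b, n_shuffles=10_000, seed=0):
    """Two-sided p-value: how often a random relabeling of the pooled scores
    produces a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_shuffles

print("mean A:", statistics.mean(group_a))
print("stdev A:", statistics.stdev(group_a))
p = permutation_p_value(group_a, group_b)
print("p-value:", p)  # a small p-value suggests the gap is unlikely to be noise
```

Because the test only asks how often random relabeling reproduces the observed gap, it needs no distributional assumptions, which keeps the math at the averages-and-counting level.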
Hands-On Coding Emphasis

At least half of each live session is devoted to building and testing code—whether collecting LLM outputs, analyzing them with simple stats, or refining a mini bias-audit app.

Research Design and Implementation

Through step-by-step guidance, you’ll create an AI bias-auditing tool—a small web application that compares AI responses across different demographics or prompts.
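One possible shape for such a tool, sketched with only Python’s standard library. The audit results here are placeholder data, and `render_comparison` / `AuditHandler` are illustrative names rather than the course’s actual code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder audit results; in a real tool these would come from logged LLM responses.
RESULTS = [
    {"group": "group_a", "prompt_name": "Emily", "score": 8.2},
    {"group": "group_b", "prompt_name": "Lakisha", "score": 7.1},
]

def render_comparison(results) -> str:
    """Render the audit results as a minimal HTML comparison table."""
    rows = "".join(
        f"<tr><td>{r['group']}</td><td>{r['prompt_name']}</td><td>{r['score']}</td></tr>"
        for r in results
    )
    return (
        "<html><body><h1>Bias Audit</h1>"
        "<table><tr><th>Group</th><th>Name</th><th>Score</th></tr>"
        f"{rows}</table></body></html>"
    )

class AuditHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_comparison(RESULTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    """Start the audit server; visit http://localhost:<port> to view the table."""
    HTTPServer(("", port), AuditHandler).serve_forever()
```

Separating the rendering function from the server keeps the comparison logic easy to test on its own before anything is deployed.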

Look-Think-Do (LTD) Tools Integration

Use LTDScience to annotate real research (like The Silicon Ceiling), then switch to LTDCoding to implement what you learned and create web applications. This “Look → Think → Do” flow keeps reading short and coding central.

Ethical & Policy Reflections

Short discussion segments examine the broader social consequences of biased AI systems, ensuring you see not just the “how” but also the “why it matters.”

Topics you'll cover
Understanding LLMs & Bias Fundamentals
Reading & Annotating Research (LTDScience)
Simple Statistics & Hypothesis Testing
Hands-On Bias Analysis Tool
Debiasing Techniques
Ethical & Policy Reflections
Error Handling & Edge Cases
Final Project & Presentation
This course is for:
Anyone Concerned About AI Ethics seeking practical ways to test and analyze bias in large language models
Students who are excited about AI’s promise, concerned about its potential pitfalls, and looking for mitigation strategies
Beginner to Intermediate Coders wanting a friendly introduction to AI, research methods, and building simple web tools
Course Delivery Method
3 Weekly Live Sessions
  • Live Labs: Collaborate in real time with instructors and classmates to build or improve mini-projects.

  • Demo Sessions: Share your creations, gather input, and learn from other students’ approaches.

  • Office Hours: Get one-on-one or small-group troubleshooting and feedback on your code.

Flexible Asynchronous Learning
Complete readings, quizzes, and coding exercises at your own pace.
Community Discussion Forum
Exchange ideas, celebrate milestones, and request peer support for tough coding challenges.
Prerequisites
  • Comfort with basic coding concepts (variables, loops, functions)

  • No prior AI or stats experience required—we’ll keep it simple

  • A computer with internet access for asynchronous tasks and live coding labs

Why take this course?

AI bias affects hiring, healthcare, and everyday decision-making. Learn to detect and analyze bias with a hands-on, research-driven approach.

🔹 Heavily Hands-On – Each session dedicates at least half the time to coding labs.
🔹 Step-by-Step Guidance – Read real research in LTDScience, apply it in code, and get instructor support in live labs.
🔹 Future-Ready Skills – Build a concrete understanding of bias detection and its impact across industries.

Learning Outcomes

1

Conduct Systematic Bias Detection Research

Design and implement experiments to identify patterns of bias in large language models using methodologies from published studies.

2

Apply Data Analysis to Evaluate AI Fairness

Use statistical methods to determine whether observed differences in AI responses are statistically significant or random variations.

3

Develop Interactive Bias Visualization Tools

Build web applications that demonstrate how LLM outputs change when variables like demographics or names are modified.

4

Formulate Bias Mitigation Approaches

Explore and implement techniques to reduce unfair patterns in AI responses while considering broader ethical implications.
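As one illustration, a simple prompt-level mitigation is to mask demographic cues before a prompt ever reaches the model. This sketch assumes a small, hypothetical list of name cues; a real tool would use a curated lexicon and would also measure whether masking actually changes outcomes:

```python
import re

# Hypothetical cue list; a real mitigation would draw on a curated lexicon.
NAME_CUES = ["Emily", "Greg", "Lakisha", "Jamal"]

def neutralize_prompt(prompt: str, placeholder: str = "the candidate") -> str:
    """Prompt-level mitigation: replace demographic cues (here, first names)
    with a neutral placeholder before sending the prompt to the model."""
    for name in NAME_CUES:
        prompt = re.sub(rf"\b{re.escape(name)}\b", placeholder, prompt)
    return prompt

original = "Rate this candidate from 1-10: Lakisha, 5 years of Python experience."
print(neutralize_prompt(original))  # the masked prompt no longer carries the name cue
```

Masking is only one option among several; the course’s debiasing discussion also weighs when removing a cue is appropriate versus when it hides a problem that should instead be measured and reported.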
