AI has an ethics problem. And it’s more personal than you think.
The tools we use every day are not neutral. And most of us haven’t asked why.
I’ve been studying AI for Everyone these past few weeks. Started with the technical stuff: prompts, models, how to talk to these systems so they actually do what you want. Good skills. Practical. Worth knowing.
But module five stopped me cold.
Ethics.
Not the philosophy lecture kind. The real kind. The kind that explains why your CV might be rejected before a human ever reads it. Why your face might not be recognized by a system that works perfectly for someone else. Why the content you create might be buried by an algorithm that was never designed with you in mind.
The machine inherited our mistakes
Every AI system you interact with was trained on data. Massive amounts of it. And that data came from the real world, a world shaped by decades of inequality, exclusion, and bias. The AI didn’t create those patterns. It inherited them. Then it scaled them.
Amazon built an AI hiring tool a few years ago. They had to scrap it. Why? Because it was trained on ten years of their own hiring data, data from an industry that had historically hired mostly men. The system taught itself that male candidates were preferable. It penalized CVs that included the word “women’s,” as in “women’s chess club” or “women’s college.” Nobody programmed it to do that. It just learned what success looked like from the data it was given.
That’s not a glitch. That’s the system working exactly as designed. The design was just wrong.
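If you want to see the mechanism, here is a toy sketch, not Amazon’s system: a tiny logistic regression trained on made-up screening data where the historical “hired” labels skew against CVs that mention a women’s group. Nobody writes a rule against the word. The model assigns it a negative weight on its own, because that is what the labels taught it.

```python
# Toy illustration (not Amazon's system): a model trained on biased
# historical labels learns to penalize the token "womens".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past screening data: similar qualifications, but the
# historical "hired" labels skew against CVs mentioning "womens".
cvs = [
    "python sql five years experience",
    "python sql five years experience womens chess club",
    "java leadership ten years experience",
    "java leadership ten years experience womens college",
    "python leadership three years experience",
    "python leadership three years experience womens coding group",
]
hired = [1, 0, 1, 0, 1, 0]  # the bias lives here, in the labels

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for "womens" comes out negative. Nobody programmed
# that rule; the model inferred it from the historical outcomes.
weight = model.coef_[0][vec.vocabulary_["womens"]]
print(f"learned weight for 'womens': {weight:.2f}")
```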
They built it. But not for all of us.
Right now, the people building most of the world’s AI systems are not representative of the world those systems will affect. And when you build something without the full picture, the blind spots don’t disappear. They get automated.
Think about what this means for communities. For content moderation tools that flag certain dialects as aggressive. For recommendation algorithms that amplify already loud voices and suppress emerging ones. For hiring tools that screen people out based on proxies for race or class that nobody explicitly programmed in.
These tools are already inside the platforms and workplaces most of us use every day. We didn’t build them. But we’re living inside the decisions they make.
“It’s automated” does not mean “it’s fair”
One of the principles I kept coming back to in the material was explainability. The idea that if an AI system makes a decision that affects your life, whether it’s your job application, your loan, or your visibility online, you should be able to understand why. Not in code. In plain language.
Most systems don’t offer that. The decision just arrives. Opaque. Final. No appeal process.
The EU is ahead of the curve here. Its GDPR framework gives people rights around automated decision-making, including what’s often called a right to explanation: if an algorithm makes a significant decision about you, you can demand meaningful information about the logic behind it. That’s not a perfect solution. But it’s at least asking the right question. Most of the world hasn’t even gotten there yet.
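What would a plain-language explanation even look like? For the simplest possible scoring model, a linear one, you can list each factor’s contribution to the decision directly. Below is a deliberately naive sketch with made-up loan-scoring weights; real systems are rarely this legible, which is exactly the problem.

```python
# A sketch of "explainability in plain language": for a simple linear
# scoring model, each feature's contribution can be stated directly.
def explain_decision(weights: dict[str, float],
                     applicant: dict[str, float],
                     threshold: float) -> str:
    contributions = {
        name: weights[name] * value
        for name, value in applicant.items()
    }
    score = sum(contributions.values())
    verdict = "approved" if score >= threshold else "declined"
    # Sort by absolute impact so the biggest reasons come first.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    lines = [f"Decision: {verdict} (score {score:.1f} vs threshold {threshold})"]
    for name, impact in reasons:
        direction = "helped" if impact > 0 else "hurt"
        lines.append(f"  {name}: {direction} by {abs(impact):.1f}")
    return "\n".join(lines)

# Hypothetical loan-scoring example with invented weights.
weights = {"income": 2.0, "debt_ratio": -3.0, "years_employed": 1.0}
applicant = {"income": 4.5, "debt_ratio": 2.0, "years_employed": 1.5}
print(explain_decision(weights, applicant, threshold=5.0))
```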
This is your problem too
If you manage communities, you are making curatorial decisions every day about whose voice gets amplified, whose gets moderated, whose story gets told. AI tools are increasingly helping you make those decisions faster. Which means the ethical questions don’t belong to the engineers anymore. They belong to us too.
If you create content, algorithms are already deciding who sees your work and who doesn’t. Understanding that those decisions are not neutral, not objective, and not fixed is the first step to building an audience that isn’t entirely at the mercy of a system designed without you in mind.
If you work in recruitment or talent, the tools your company uses to screen, rank, and shortlist candidates are making judgment calls. Somebody needs to be asking what those tools were trained on. Whether their outputs are being audited. Who they systematically miss.
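Auditing doesn’t have to be exotic. One basic check, sketched below with hypothetical screening outcomes, is to compare selection rates across groups using the four-fifths guideline from US employment practice: if one group’s selection rate falls below 80% of the highest group’s, that’s a flag worth investigating.

```python
# A minimal sketch of one audit a team could run on a screening tool's
# outputs: compare selection rates across groups. The "four-fifths rule"
# (a common US guideline, not a law of fairness) flags concern when one
# group's rate falls below 80% of the highest group's rate.
from collections import defaultdict

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    seen, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        seen[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / seen[g] for g in seen}

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}  # ratios; < 0.8 is a flag

# Hypothetical screening outcomes: (group, passed_the_screen).
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 30 + [("B", False)] * 70
for group, ratio in four_fifths_check(selection_rates(outcomes)).items():
    flag = "  <-- below four-fifths threshold" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```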
I’m not saying avoid AI. I use it constantly. I find it genuinely useful.
I’m saying use it with your eyes open.
The technology is not inherently good or bad. But it’s also not neutral. It reflects the choices, assumptions, and blind spots of the people who built it. And right now, a lot of those people are not thinking about communities like ours.
That has to change. And the first step is knowing enough to ask the question.