- Apr 2
Finding Your Voice in the Age of AI
- Courtney Trevino
- Mindful AI
I use AI every day.
Not to hide my voice, but to hear it more clearly.
If that sentence makes you flinch a little, you’re not alone. We’re in a cultural moment where using AI for writing is often treated like a moral failing. If you admit you used ChatGPT or another tool to help with a speech, a proposal, or a blog post, there’s an almost automatic suspicion: So… you didn’t really write it then, did you?
Meanwhile, millions of people are quietly using AI to think better, write better, and finally say things they’ve been trying to say for years.
Both of these realities are true at the same time.
This Isn’t Really About AI. It’s About How We Feel About Help.
Somewhere along the way, we decided that “doing it the hard way” is morally superior.
We let calculators into math classes, but only after students “proved” they could do long division by hand. We use spellcheck without guilt. We let Grammarly underline our sentences and suggest commas, and no one calls that cheating.
But say you used AI to refine a draft you already wrote? Suddenly, for some people, the word unethical shows up.
The narrative goes like this:
If AI touched it, it’s no longer “yours.”
If AI helped you, it must have replaced you.
If AI made it easier, it must have made it less real.
That’s a story. It’s not the truth.
The truth is that AI can absolutely be misused — just like a calculator can be misused. But using a calculator to solve a quadratic equation doesn’t mean you don’t understand algebra. It means you understand it well enough to know you don’t need to burn an hour re‑doing the same calculation over and over just to prove you can.
At some point, insisting on doing everything manually stops being noble and starts being wasteful.
The Calculator Analogy (With a Caveat)
Ethicists love to argue that “AI is not a calculator” — and they’re right in important ways. Generative AI raises real questions about bias, training data, energy use, and labor conditions that a plastic calculator never did. Those concerns are real and worth taking seriously.
But at the level of everyday use, especially for writing and thinking, there is a useful comparison:
A calculator doesn’t tell you what problem to solve.
It doesn’t decide why you’re solving it.
It doesn’t understand what the answer means for your life.
It just helps you get the answer more efficiently — once you know the formula and the variables.
AI, used well, functions similarly:
It doesn’t know what matters to you until you tell it.
It doesn’t have a story until you give it one.
It doesn’t have a voice until you bring yours to the prompt.
Dropping a one-line prompt — “write me a blog post about AI” — and copying the output is the equivalent of asking a stranger to sit your exam for you. That’s not AI literacy. That’s abdication.
But journaling deeply, bringing pages of your own thinking into an AI assistant, and asking it to help you structure, refine, shorten, clarify, or remove something that might cross a boundary?
That’s collaboration.
The ethics aren’t in the tool. They’re in the choices you make with it.
Honestly, I Think We Are Wasting a Lot of Human Potential
Insisting that “real” writing must always be a solo, manual slog is one of the quietest ways we hold ourselves back.
There are people who have important things to say — about their work, their industry, their trauma, their recovery, their ideas — who have been stuck for years because the mechanics of writing exhaust them. Neurodivergent thinkers. Second-language speakers. People who grew up being told their voice didn’t matter. People who are brilliant in conversation but freeze at a blank page.
For them, AI isn’t a shortcut. It’s a lifeline.
AI helps them:
untangle thoughts that feel like a knot
see structure in the mess
find words that were sitting just out of reach
keep their story intact while smoothing the edges
That doesn’t erase their voice. It reveals it.
And yet, the loudest voices in the room sometimes respond not with curiosity, but with contempt.
“If you used AI, it doesn’t count.”
“If you didn’t suffer over every sentence, it isn’t real.”
To me, that isn’t ethical superiority. That’s rigidity. It’s a refusal to adapt, dressed up as virtue.
The Real Risk Isn’t AI. It’s Pretending It Doesn’t Exist.
We are well past the point of arguing about whether AI is “going away.”
It’s not.
If you’re still quietly hoping we’ll get to rewind to a pre-AI world, you were probably also hoping life would go “back to normal” after the pandemic. It didn’t. Some things came back. Many things didn’t. The world changed, whether or not any of us voted for it.
AI is another one of those shifts.
You don’t have to love it. You don’t have to use it for everything. You absolutely get to have boundaries.
But pretending it isn’t here — or refusing to learn about it on principle — doesn’t protect you. It just keeps you from developing the literacy you’ll need to make informed choices about it.
We should absolutely be concerned about unethical uses of AI:
generating fake research
fabricating sources
claiming AI‑generated work as entirely your own when originality is required
using AI to deceive, manipulate, or erase other people’s labor
Those are real problems.
But using AI transparently, with your own thinking at the center — and saying so out loud?
That’s not unethical. That’s honest.
What troubles me much more than “I used AI to help edit this” is “I use AI all the time and I lie about it because I’m afraid of what people will think.”
The shame isn’t in the tool. It’s in the culture we’ve built around it.
What This Really Comes Down To
It comes down to this:
Your voice isn’t fragile. AI doesn’t have the power to steal it unless you hand it over completely.
Effort is not the same as ethics. Doing everything the hardest possible way isn’t automatically more virtuous. Sometimes it’s just slower.
Refusal is not protection. Refusing to learn AI doesn’t shield you from its impact; it just leaves you with less control over how it shows up in your life and work.
Ethical AI use is a skill. It can be learned, practiced, and improved over time — just like any other professional skill.
AI, like a car, is not “dangerous” on its own. A car becomes dangerous when the driver is reckless, untrained, or intends harm. AI becomes harmful under the same conditions.
The answer isn’t to ban cars. It’s to teach people how to drive — and hold them accountable for how they choose to use that freedom.
The same is true here.
If You Want to Learn to “Drive” This Well
This is why I teach what I teach.
In my AI Literacy Webinar, I walk through the five levels of AI use — from simple prompts to using AI as a true thinking partner and workflow collaborator. Not so you can outsource your mind to a machine, but so you can understand just how deep AI use can go, what’s possible, what’s risky, and where you want to draw your own ethical lines.
We also talk about:
what ethical AI use actually looks like in practical terms
how to preserve your voice and values while using AI to refine your work
where transparency matters, and how to talk honestly about your AI use without feeling ashamed
My goal is not to convince you to use AI.
My goal is to make sure that if you choose to use it — or choose not to — you’re making that decision from a place of understanding rather than fear.
Question for You
So here’s my question:
Are you spending your energy trying to prove you don’t need AI — or learning how to use it in a way that still feels like you?
Only one of those paths moves you forward.
Webinars, Courses, and Coaching
If this resonated and you’d like practical guidance on using AI as a thinking partner—not a replacement—I offer a free AI Literacy Webinar, plus deeper courses and coaching for those ready to go further. Check out my upcoming offerings on my Product Offerings page.
#AILiteracy #MindfulAI #EthicalAI #ArtificialIntelligence #FutureOfWork #AIWriting #AIAndCreativity #VoiceAndStorytelling #AIEducation #ProfessionalDevelopment