Is It Risky to Use AI?
- Maria Hohenauer

- Feb 15
- 4 min read
Updated: Feb 16
Why "what if something goes wrong" keeps you stuck, and how to start without fear.
Elena has a job interview next week. She's qualified. She's prepared. But she's also tempted to ask AI to help her practice. Then she stops herself.
What if the AI gives her bad advice? What if she sounds robotic? What if she becomes dependent on it and freezes when she has to think for herself?
So she does nothing. She practices alone, in her head, the way she always has. And the what-ifs stay safely in the dark, unchallenged.
If this sounds familiar, you're not alone. The fear of risk is one of the most powerful reasons we talk ourselves out of trying something new. It feels responsible. It feels protective. It whispers: Better safe than sorry.
Wayne Dyer put it like this: what feels like caution is often just fear wearing a sensible disguise.
Who Was Wayne Dyer?
Dr. Wayne W. Dyer (1940–2015) was an American psychologist, author, and motivational speaker who became one of the most influential voices in the self-help movement.
After earning his doctorate in counselling from Wayne State University, Dyer worked as a high school guidance counsellor and later as a professor at St. John's University in New York. His first book, Your Erroneous Zones (1976), became one of the best-selling books of all time, with over 35 million copies sold worldwide.
Over his career, Dyer wrote more than 40 books. His work evolved from psychological themes like motivation and self-actualization to deeper spiritual teachings influenced by thinkers such as Abraham Maslow, Lao Tzu, and Swami Muktananda.
This blog post is part of a series based on Dyer's book "Excuses Begone! How to Change Lifelong, Self-Defeating Thinking Habits," published in 2009 by Hay House.
Dyer's gentle, compassionate approach reminds us that we're capable of far more than our excuses would have us believe: a perfect starting point for our conversation about AI and staying human.
Where This Excuse Hides
For Elena, it's "what if AI gives me wrong career advice and I mess up my interview?"
For Nora, it's "what if I use AI at work and my boss thinks I'm cheating?"
For Patricia, it's "what if I ask a stupid question and the AI thinks I'm foolish?" (As if an AI can think anything at all.)
For Clara, it's "what if I rely on AI and lose my own instincts?"
The excuse takes different forms, but underneath it's the same story: It's safer to stay where I am. At least I know the risks here.
But do you? Do you really know the risk of staying exactly where you are while the world moves forward without you?
What Dyer Knew About Risk
Wayne Dyer often said that fear is "the sand in the machinery of life." It keeps things from running smoothly. It stops momentum. It convinces us that standing still is safer than moving forward.
For the excuse "it's going to be risky," he offered this affirmation:
"Being myself involves no risks. It is my ultimate truth and I live fearlessly."
Notice he didn't say "nothing bad will ever happen." He said "being myself involves no risks." Because the deepest risk, in Dyer's view, is abandoning who you are.
When you let fear of AI keep you from understanding it, you're not protecting yourself. You're abandoning your curiosity, your adaptability, your right to be part of the conversation. That's the real loss.
The Two Kinds of Risk
Let's separate them for a moment.
There's real risk - the kind where someone could get hurt, money could be lost, or harm could come.
And then there's imagined risk - the kind where you might look foolish, feel uncomfortable, or discover you're not instantly perfect at something.
Most of what we fear about AI falls into the second category.
Yes, you should fact-check important information. Yes, you should think critically about what AI tells you. Yes, you should use your own judgment. (That's literally what "human in the loop" means.) But letting fear of imagined risk keep you from even trying? That's not caution. That's captivity.
One Small Thing You Can Do Today

Here's a practice I borrowed from my own notebooks. I call it the "Low-Stakes Challenge." Pick something so unimportant that it literally does not matter if AI gets it wrong.
Ask AI to suggest a dinner recipe using three random ingredients in your kitchen.
Ask AI to write a silly poem about your pet.
Ask AI to generate a packing list for a weekend trip you're not even taking.
That's it. The stakes are zero. If the recipe is terrible, you order takeout. If the poem is nonsense, you laugh. If the packing list suggests snow gear for a beach trip, you shrug.
The goal isn't to get something useful. The goal is to prove to yourself that trying AI will not destroy you.
Once you've done that a few times, you might feel ready for something slightly more meaningful. A draft of an email. A brainstorm for a project. A question about a topic you're genuinely curious about.
But start where the stakes are zero. Let your nervous system learn, slowly, that this tool is not a threat.
A Gentle Invitation
I keep a small notebook where I track my low-stakes experiments. Not because they're important, but because they remind me that I'm brave enough to try things that might not work out perfectly.
If you're someone who finds courage on paper, I make journals for exactly these moments. A place to note your tiny experiments, your small wins, your growing comfort with something that once felt risky.
But even a sticky note works. The important thing is to start.
The Truth About Risk
Elena finally let herself try AI for something trivial - a practice interview question she never actually used. Nothing bad happened.
The AI wasn't perfect. It wasn't terrifying. It was just... a tool. Helpful in some ways, not in others. And she got to decide what to take and what to leave.
That's the risk we forget to name: the risk of never finding out that you're still in charge.
You are. Always.
Come back soon. There's more to talk about.
Warm greetings,
Maria