"If you hide the bot, you're cheating. If you show the bot, you're partnering. The real question is: why are we still arguing about which one matters more?" — Jason Averbook
AI isn’t a shortcut—it’s a shift. A shift in how we work, how we learn, and how we show up. And yet, we’re still stuck in this tired conversation about whether using it is "cheating." I had this conversation WAY too many times this week with employers, leaders, students and parents.
In school? Cheating. At work? Cheating. Applying for a job? Definitely cheating.
But here’s the truth: if we keep calling AI cheating, we’re not protecting anything. We’re just reinforcing old rules for a new reality.
This post isn’t a hot take. It’s a call to reframe. Because the debate isn’t about AI. It’s about trust, transparency, and what it means to add human value when machines can do so much. And honestly, I don’t want to live in a world where we keep hindering progress.
P.S. I used AI to help me frame this debate, do research, and create graphics, and I infused my thoughts along the way to make it MINE.
Why We’re Still Having This Debate
Speed. This change didn’t creep in. It hit us like a freight train. ChatGPT dropped, and suddenly every student, classroom, workplace, and resume got a rewrite. We weren’t ready.
Unclear definitions. The word "cheating" is carrying too much weight. It used to mean copying someone else’s work. Now it’s being slapped on anything AI touches—even if the human is still thinking, editing, and owning the outcome.
Unequal access. Some schools and companies have banned AI. Others have embraced it. That inconsistency creates friction. If one person gets to use it and another doesn’t, the question shifts from cheating to fairness. I get that not everyone has access at the moment, but do we penalize everyone because of that?
Fear of becoming obsolete (FOBO). If the machine can do the thing—write the email, summarize the report, even spark the idea—what’s left for us? That fear makes people double down on legacy definitions of merit.
Lack of trust. We’ve seen real misuse—plagiarized essays, confidential data leaks, fake case law, even airline tickets purchased at a low price (heaven help me). And when trust is broken, people assume the worst. They stop asking what’s possible and start trying to catch someone doing it wrong. They always ask, “What if something goes wrong?” instead of “What if something goes right?”
The Loudest Voices in the Room
Team Cheating says:
External help erodes fairness and merit.
You don’t grow if you don’t struggle.
People are faking it—and getting away with it.
Team Partnering says:
This is the future of work and being.
AI isn’t replacing us—it’s expanding what’s possible and adding a new level of creativity to us all.
We’ve always had tools. This one just thinks with us.
Both camps make valid points. But both are also stuck in a false binary.
The question isn’t “Are you using AI?” It’s: “Are you doing it with integrity?” And to that point, our job isn’t just to train people on how to write a prompt; it’s to educate them on what integrity looks like when they use these amazing tools.
What’s Actually Being Tested
Who owns the judgment? If you prompt it, challenge it, edit it, and stand behind it—that’s not cheating. That’s critical thinking. (many will disagree with me here)
Are you being transparent? In academia, we cite our sources. In business, we disclose conflicts. AI should be no different. Organizations like the American Psychological Association and the Modern Language Association have already defined how to cite it (see the example below). Why haven’t the rest of us followed suit?
Are we assessing effort or impact? If someone uses AI to draft faster and then adds depth and insight, isn’t that what we want?
Do we know what skills still matter? Prompting. Editing. Evaluating. Explaining. AI doesn’t replace thinking. It multiplies the value of it—if we let it.
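For illustration (a sketch, not the official word; check the current APA and MLA style pages for the exact format), APA’s published guidance treats the tool’s maker as the author and names the model and version, along these lines: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat. One line. That’s all the transparency costs.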
What the Data Tells Us (2024 and Beyond)
In 2024, 61% of students said their instructors had updated their syllabi to address AI use—yet 43% still admitted using it in ways considered questionable.
74% of working professionals believe AI use at work boosts their performance, and only 11% believe it’s dishonest when disclosed.
89% of hiring managers in 2024 say they welcome AI-assisted applications—if the candidate can defend their contribution.
Across academia, over 70% of institutions in North America and Europe have added AI guidance to their academic integrity frameworks.
The trendline? It’s not the tool. It’s whether people are being honest about how they use it.
A Better Question
"What did you bring to the table after the model stopped typing?"
If you can answer that, you didn’t cheat. You collaborated.
So What Needs to Change?
1. Ditch the secrecy. Say it out loud: “This was AI-assisted, edited by me.” That one line builds more trust than an entire essay of guesswork. (Before long this won’t need to be said; it will simply be assumed.)
2. Define the goal. If the goal is original thought, ban AI. If the goal is insight or communication, require it. Match the rule to the outcome, not the fear.
3. Teach the delta. Don’t just ask what the person did. Ask what they changed, challenged, or improved. That’s the learning moment.
4. Equalize access. Gatekeeping tools is the real unfair advantage. Give everyone access and train them well.
5. Update what we measure. Let’s reward clarity, curiosity, and courage—not just time spent alone at the keyboard.
6. Introduce prompt transparency. I don’t know how long this will be needed, but for now, show your work. Share your prompt. Say what changed. If we normalize this in schools, in hiring, and in everyday work, trust increases—quickly.
A Note to Parents and Teachers
This isn't just a conversation for tech leaders and policy makers. It’s a conversation for kitchen tables and classrooms.
Parents: if your child uses AI to do their homework, ask them what they changed. Ask them why they used it. Teach them to reflect—not just produce.
Teachers: instead of banning the tools outright, create space for students to show their judgment. Let them include a short note on how they used AI. Did it help them understand the concept better? Did they have to correct it? That’s the learning.
AI can be a crutch. Or it can be a coach. Your influence—the way you ask questions, the way you frame feedback—is what makes the difference.
Let’s stop assuming the worst and start guiding toward the best.
For Employees Talking to Employers
If your company hasn't banned AI outright, but is limiting access or creating so much red tape that you can't do your job efficiently—you're not alone.
Here’s what you can say:
"I’m not asking for free rein. I’m asking for the tools to do my best work. Right now, I’m spending more time recreating what AI could help me accelerate. That’s not protecting quality—it’s slowing us down." or
"I understand the concerns. But what if banning AI is costing us more than it’s protecting? What if it’s slowing us down, or pushing innovation underground instead of shaping it?"
When AI policies are too rigid, here’s what happens:
Deadlines slip.
Repetitive tasks pile up.
Innovation flatlines.
Frame it not as a compliance issue, but a productivity one:
Offer examples of repetitive work you could offload.
Show the time savings you've experienced using AI in a sandbox or personal tool.
Ask: “What would it take to pilot this safely?”
Sometimes it’s not about changing the policy—it’s about helping leadership see the cost of inaction and showing how AI can be used safely.
The goal is to help your leaders see that leadership in the AI era means education, not exclusion.
Be the person who says, “Let’s figure this out together.”
The Damage of Calling It Cheating
Here’s what happens when we keep labeling AI use as cheating:
Students stop experimenting. They don’t learn how to prompt, evaluate, or question—because they’re too afraid of being punished for trying.
Employees go underground. Instead of sharing prompts and improving workflows together, they hide their usage—and the company loses collective learning.
Trust erodes. People stop being open with each other. We start assuming everyone is cheating, which leads to more dishonesty, not less.
Inequity expands. The people who know how to use AI well (and have access) keep advancing. Everyone else falls behind, shamed into silence. The divide between the haves and the have-nots gets wider, and it takes on a new shape.
We miss the moment. This is a chance to teach critical thinking, digital fluency, and ethical judgment. If we frame AI as cheating, we lose the opportunity to shape how it’s used.
The longer we punish curiosity and label innovation as dishonesty, the longer we delay real progress.
Let’s stop wasting time defending old boundaries and start building new ones that reflect reality.
The Myth of Not Needing to Learn
One of the most common fears I hear is: "With AI, no one will learn anything anymore."
Here’s the truth: using AI without understanding is a trap, not a shortcut.
If you rely on it without checking the math, the logic, the source—you risk embedding mistakes, bias, and shallow thinking. This is the concept of HITL (Human in the Loop), and it’s one we need to teach.
AI can help you write an essay. But if you don’t understand the argument, it’ll fall apart the moment someone challenges it.
AI can draft your presentation. But if you can’t explain your position, no slide deck will save you.
We don’t need to abandon the basics. We need to redefine where the basics stop and where strategy begins.
We still teach arithmetic even though we have calculators. Why? Because you need a baseline to judge the machine.
Same here. We need to teach judgment, sense-checking, and reflection.
We’re not skipping learning. We’re shifting what we do with what we learn.
Time to Move On
AI isn’t the enemy of ethics. It’s the mirror that reflects whether we’re being honest.
Cheating isn’t about the tool. It’s about pretending you didn’t use it.
So let’s stop asking if someone used AI. Let’s start asking: What did they do with it?
"The future isn’t man vs. machine. Or man behind machine. It’s man with machine—out in the open, adding value, and showing their work." — Jason Averbook
As Charlene Li says: “You don’t prepare people for the future by banning the future. You prepare them by guiding them through it.”
And Glen Cathey reminded me recently: “We’re not teaching people to beat the bots—we’re teaching them to work better because of them.”
Let’s raise the standard. Let’s retire the word "cheating." Let’s teach people to team—with tech, with each other, with integrity.
Next time you see someone use AI, don’t ask: "Did you cheat?"
Ask: “What did you learn? What did you change? And how would you do it differently next time?”
Closing Lift
Let’s work together to make our future path brighter—shoulder-to-shoulder, curiosity over caution. Instead of dismissing the greatest technological leap of our lifetimes, let’s explore it, shape it, and share the gains. We’re blessed to be the generation that gets to write this chapter—so let’s make it count, together.
Jason Averbook is a globally recognized leader in digital strategy, generative AI, and the future of work. With more than three decades at the intersection of human capital and technology, he’s spent his career helping organizations shift from simply adopting tech to fully embodying a digital, human-first mindset. Named one of the Top 25 Global Thought Leaders in HR and Work, Jason is the founder of Leapgen and Knowledge Infusion, and the author of two books on digital transformation.
He’s known for pushing the conversation forward—with sharp insights, practical frameworks, and a call to action leaders can’t ignore. Whether in the boardroom or on a stage, Jason’s mission is clear: build organizations that don’t just react to change—but lead it.