We are entering the AI Age, not auditioning for the Luddites’ Reunion Tour. The sooner we ditch the fear, grab curiosity by the hand, and start outsourcing the mental junk drawer, the sooner we get to the good stuff — like humans creating from joy, not survival mode.
Scarcity is so last century.
What you're saying here pushes the argument back. By using loaded language and telling people to "ditch the fear", you're telling them their thought process isn't valid. That kind of invalidation is what makes people entrench in fear, not move forward.
Instead of calling them Luddites, why not ask them about their concerns? Listen, and see if there are ways you can help them understand the other side of the argument as well. Can they create a personal code of ethics and find some use of the tool to be acceptable?
It's time we stop evangelizing use and start working with people to create a real team instead.
Concetta - yes to this. You’re right that when we dismiss fear outright, we can unintentionally dismiss people, and that’s never my goal. Fear deserves a seat at the table - it just doesn’t get to run the meeting.
AND - here’s where I get a little spicy - so much of what I’m seeing out there isn’t thoughtful skepticism. It’s knee-jerk panic dressed up as principle. Folks aren’t always asking “Is this aligned with my values?” They’re saying “AI bad, brain melt, run.”
So my post was me tossing a glitter bomb into the fear fog - to say hey, maybe curiosity’s the better co-pilot.
I’m 100% with you on co-creating ethical use, not just cheerleading adoption. Let’s build a team that can hold nuance, ask real questions, and stay human while experimenting.
Love this Jason.
The word gets thrown around so much, and it’s often people pointing the finger at anyone using AI in a way they don’t understand or don’t support.
Given how few definitions we have for what we're even trying to accomplish, let alone how it gets accomplished, you can't call something cheating just because one person is doing it well with AI.
Thanks Christopher!
> one person is doing it well with AI.
I think the big question is... are they doing it well with AI, or is the AI doing it for them? That's why the cheating word comes up most often. There are no definitive criteria for figuring that out. While Jason's questions here might be a good start, they're not enough to figure out how skilled the person is vs. how skilled the AI is.
And, to take it a level deeper, how does one compare the person who did it well with AI, the person who learned the skills themselves and did it without, and the person who used a tool AI rather than a generative AI? Because I think that's where a lot of the discussion is occurring - in business, the value might be in the product getting done, and done quickly, but in academia, the value might be in teaching the skill, not in the product itself being done.
I see where you’re coming from Concetta.
And, I think we sometimes have this idea that getting help or support is cheating and then we back into pointing fingers at the sources of help and support we’re least familiar with.
If you go ask an advisor or friend for guidance, for help, or to review your work, is that cheating? Obviously, if you ask them to do it for you, it would be. However, where is that line?
The same is true for AI. Having it help refine or shape your thoughts or pull together some key resources isn’t cheating. Telling it to write an essay that you don’t bother to proofread would be.
The biggest issue is we don’t have consistency in our standards of measurement.
I think you're missing some key pieces here. There's a big difference between using generative AI in business, after you've learned critical thinking skills, and using it in school, while the brain and those very skills are still developing.
1. "AI can help you write an essay. But if you don’t understand the argument, it’ll fall apart the moment someone challenges it." Most folks in lower academia (K-12, bachelor-level courses) aren't defending the validity of the content. They simply write an essay, get a grade based on a set of requirements that fulfill a standardized curriculum, and that's that. You aren't going to encounter someone "challenging" your argument until at least grad school, and by the time you get there, you're used to doing things in a very particular way, having skipped all of the critical-thought development that needs to happen in order to be able to "proofread" and "check" the work of the AI.
2. With "Team Cheating: You can't learn if you don't struggle" and "Team Cheating: AI lets people fake it", you're doing their argument a disservice by framing it in the least positive way. What both of these really say is "You can't learn how to think if you don't understand how to find evidence, construct an argument, document it, and be able to accept challenges to it and refine it." Those skills don't just translate to writing; they are also the skills needed to make oral arguments, and heavier use of generative AI makes those skills weaker.
3. The lack of specifics separating LLMs/generative AI from other types of AI is also weakening your argument. By lumping all AIs together, you're discounting the academics who do appreciate the use of machine learning and AI as a tool for tasks such as transcribing reports, analyzing images, etc. There's a big difference between using, say, a medical AI to analyze an MRI and using ChatGPT to craft an essay.
4. I have to say, we've seen hundreds if not thousands of people now evangelizing the use of generative AI. I had hoped to see a more balanced approach to the positives and negatives within the argument. By focusing only on the perceived positives of using generative AI, you've hampered the argument of anyone who reads this and wants to use it as part of the conversation, because they have no idea how to respond if someone raises one of the perceived negatives of generative AI use.
As someone who has mostly experimented with it in my job and as a hobby genealogist, I have focused heavily on choosing which pieces of generative AI, and which generative AIs, I use based on my personal code of ethics, and I have focused on showing others how to do the same in the context of a team. But I fully understand when employers and academia don't want to do this, because it genuinely is yet another skill that has to be learned, and it is a daunting challenge to teach multiple skills at once, especially when one is learning the skill oneself at the same time. There are entire generations of instructional designers and educators who have to completely overhaul their design theory and curricula, and that doesn't happen overnight the way the tech bros expect. It IS coming along, and far faster than anything I have ever seen: instructional designer sessions on "how to use generative AI" have been replaced in less than a year with "how to incorporate generative AI into lesson design". Compare that to the adoption of the SAM instructional design model, which was proposed in 2012 and still hasn't cracked a good number of institutions. Or the fact that organizations still have to be convinced of the value of online learning.
There are going to be a lot of arguments during this period about the what and the how of generative AI use, and once the court cases get resolved and the instructional designers and educators are skilled up, you're going to see the argument get much more nuanced and lose the "cheating" word automatically, as folks key into what the actual issues are and what the solutions are. And by then, we may even see better types of AI evolve, and documented best uses of generative AIs to help people make the case (I love that Procter & Gamble case and cite it often).
This was such an insightful read—thank you for sharing it! One small piece of feedback: in your note, you mention, "PS. I used AI to help me frame this debate, do research and create graphics and infused it my thoughts along the way to make it MINE." Since you're taking ownership of the piece (which I appreciate!), I wanted to point out that there are two identical headings ("For Employees Talking to Employers") with slightly different formatting and content. It might just be me, but I found myself rereading those sections a couple of times to fully grasp the point. When relying only on a tool rather than collaborators, ensuring clarity and consistency becomes even more important for readability. *This was AI-assisted, edited by me😉*
It never ceases to amaze me how you always push to talk about the tough topics that we typically all hide from. This is very important to discuss and I'm thankful for your courage to do so. "And also, I don't want to live in a world where we keep hindering progress." YES! Change management is critical. Let's do all that we can to stay ahead and not fall behind because being proactive instead of reactive is key.
I knew after Friday's discussion you'd be writing about 'cheating' this weekend. I'm glad you did and hope it helped with your frustration. Very well put together, and I honestly don't disagree - I just find it hard to muster the trust in people needed to carry this thinking forward.