Great post Jason. I have seen this firsthand. What I learned is that it’s not enough to experiment with agents and tell your bosses it’s important. Build collaborations and create POCs so you can lead organizations forward. Waiting for executive review and permission is largely a waste of time.
Insightful post. Many organizations need to confront how unprepared they are for the shift from chatbots to agents. Until they invest as intentionally in skills, incentives and decision rights as they do in tools, the gap you are describing will only widen.
The light bulb vs electrical grid metaphor is the right frame. Most teams are still installing light bulbs while wondering why nothing changed. I ran into the same gap building Wiz - the shift happened when I stopped asking "how do I automate this one task" and started designing agents that run overnight without waiting to be asked.
Proactive research, inbox triage, drafted outputs waiting in the morning. The chatbot warmup analogy is accurate: we've barely started. For concrete autonomous agent use cases beyond the hype: https://thoughts.jock.pl/p/ai-agent-use-cases-moltbot-wiz-2026
This is a nice post and communicates an important shift in mental models toward AI. That said, I believe the limiting factor in adopting agentic AI solutions at scale is not change readiness or AI fluency. It's people's ability to set, define and communicate goals and directions effectively.
You note that, "An agent is a colleague you can delegate to. You give it a goal, and it builds the plan, executes the steps, monitors what’s working, adjusts when something doesn’t, and keeps going until the job is done. You don’t manage it turn by turn. You hand it an outcome and check back in." Doing this assumes people know what their goals are and can effectively communicate these to an agent, or anyone else for that matter.
One of the perennial frustrations expressed by employees is not knowing what they are expected to do at work. This is because managers, and people in general, are terrible at setting meaningful goals and communicating clear directions to others. People often don't know exactly what they want other people to do beyond aspirational outcomes, and they also don't know what will happen when someone tries to do it, which is why we have things like "weekly check-ins".
The idea that we will be able to tell agents to do our work for us makes sense for repetitive tasks. But that's just plain old AI-driven automation, not empowering autonomous agents. The idea that agents will be like minions we send out into the world to do stuff for us strikes me as far-fetched and potentially very dangerous. I'm reminded of the fable of the "Monkey's Paw", about a magical relic that grants whatever its owner asks of it. The owners quickly learn that asking for something without providing very clear directions on how you want it done can lead to some very unpleasant outcomes.
Also, regarding the ServiceNow customer service agents - I'd love to get a sense of how actual customers feel about these. They're probably great for simple problems, but facing an agent instead of a person for complex, atypical and emotionally stressful problems sounds like a form of purgatory. My brother once said that "in the future, getting service from an actual human will be a luxury reserved for the very wealthy". That sounds like a dystopian future to me.
Great post. Very useful for those of us with children coming into the world of work in the next few years