Attitudes, Behaviors, and Culture (ABC)
Many organizations are still framing AI as a matter of ROI, governance, and procurement. The conversations sound the same in boardrooms and steering committees: Will this investment pay off, and how soon? Will our information be secure? Do we have the right AI model? How will this connect to our systems? These questions matter; without decent infrastructure, nothing interesting happens. But they only get you as far as tools sitting ready on the shelf.
What you really care about is something deeper: what does our business become, and how do we all change as leaders, employees, vendors, and customers, once these tools are in use? Will we use them to change how we think and act, or simply speed up whatever we were already doing?
Transformation happens when the everyday experience of work shifts. Briefs written in new ways. Experiments conceived and run differently. Review meetings and learning sessions feeling more like collaborative investigations than status rituals. The real value is unlocked when the use cases shift and generative AI starts being treated as a teammate in an extended collaboration. And this shift is not a technology upgrade; it is an attitude change that gradually reshapes behaviors and, eventually, culture.
That is the ABC of culture change in practice: attitudes shift first, enabling new behaviors; repeated behaviors harden into shared norms; and those norms are what we experience as culture.
A is for Attitudes: The Stories People Tell Themselves
Every company has a “way,” whether it is explicitly stated (“the Apple Way”) or not. Ways are grounded in attitudes, the invisible operating system of an organization: the quiet stories people carry about what is safe, what is rewarded, and what counts as “how we do things here.”
With AI tools, those stories are often wary or resigned. Leadership may send out edicts that sound like “you had better use AI or you will be fired.” In response, some people will assume this is just another management fad that will pass if they keep their heads down. Others will take the message more darkly, imagining that AI is a direct threat: something that might automate them away if they help too enthusiastically. Some who want to learn and experiment worry that even touching the system will get them into trouble if something goes wrong. All of these narratives exist before anyone has typed a single prompt, and they strongly shape which affordances people can even see.
The key is to design for attitudes, beginning by naming the fear. Leaders need to say, explicitly, that the goal is to augment human work, not replace it. Frame the early phases as exploration, expecting messy drafts and partial successes rather than instant magic. The message has to shift from “adopt this new thing or else” to “we are learning this together.”
Next, make it clear how the organization can use these tools safely and securely. People need to see where data will and will not flow, what information must never be pasted into prompts, and which review steps should be applied before anything reaches a customer.
Finally, reframe success. Rather than celebrating only speed and cost reduction, highlight stories where human creativity and understanding are extended. Find the use cases where employees have discovered new ways of working, then reward and recognize the people behind them. In these stories, the tool is never the hero; the people are. AI should show up as an amplifier of human judgment and imagination.
These moves change the underlying story from “AI as threat or gimmick” to “AI as extended mind.” Once that narrative takes hold, people can start to notice and reach for new behaviors.
B is for Behaviors: Rituals, Not Just Rules
Behaviors are where culture grows arms and legs. You know a shift is real when it shows up in what people actually do on a Tuesday afternoon, not just in what they say in a town hall.
If the underlying attitude becomes “AI is a teammate,” the visible behaviors might look like this: teams begin meetings by asking what part of the problem they can prototype with the model in the next 20 minutes. Individuals capture prompts and responses in shared spaces rather than hoarding them in private chats. Project groups run pre-mortems with simulators as casually as they once ran spellcheck.
Here is a simple ritual called the ten-minute jam. One day a week, someone brings a live, messy problem to the group. It could be a half-formed brief, a confusing data slice, or an awkward bit of language in a document. The team works it through with the AI system in real time. No slides. No polish. Just “let’s see what happens if…” The real benefit is not any one answer but the normalization of public experimentation. Trying things with the tool becomes a shared act.
Another practice to try is the prompt postmortem. After important wins or painful misses, don’t just talk about the decision that was made; review the way the prompt and context were constructed to produce it. Ask what helped, where things went wrong, and which feedback loops caught emerging problems. AI usage becomes something discussable and improvable, like any other craft.
A third behavior can be encoded in a simple heuristic: no one ships the first idea, whether it came from a person or a model. Generate at least three distinct takes, with the discipline to make them genuinely different. Only after reviewing the alternatives do you choose or synthesize. This small discipline pushes teams to use AI for divergence and themselves for convergence. It reinforces the right mental model of collaboration: not outsourced thinking, but expanding the space of possibilities and then applying judgment.
None of these behaviors are about choosing the “best” model. They are about how people use the tools, and about making AI’s affordances show up in daily work.
C is for Culture: When It Becomes “Just How We Work”
Culture is what remains when no one is making slides about AI anymore.
Experimentation itself must come to be seen as part of the job, not something done off on the side in your “spare time” (as if we have any). People assume that any new process will have a human–AI loop baked into it. New hires absorb this expectation not through training decks but through what they see colleagues doing around them.
At that point, the organization has moved from tools to culture. The technology continues to evolve, but the organization now has a living pattern for integrating new capabilities. When new AI capabilities arrive (just about every day), people will know instinctively how to start experimenting with them and integrating them into the work.
Build a culture that expects explainability, traceability, and collaboration, and that carries the attitude that the team can “build for how we actually think.” Affordances change people; people, collectively, then build better affordances.