Iteration Is the Secret Weapon of AI Power Users

Reflections on Anthropic's AI Fluency Index, through the lens of teaching AI leadership
Anthropic just published something worth slowing down for.
Their new AI Fluency Index analyzed nearly 10,000 real Claude conversations to measure whether people are actually using AI well, not just using it. They built a framework of 24 behaviors that define effective human-AI collaboration, and then tracked which ones show up in practice.
The findings are equal parts encouraging and sobering. And if you're in the business of teaching AI leadership (or becoming an AI leader yourself), they're required reading.

The Finding That Didn't Surprise Me
85.7% of conversations in the study involved iteration and refinement: going back and forth with the AI rather than taking the first answer and running.
Those conversations showed double the AI fluency behaviors of non-iterative ones. Users who iterated were 5.6x more likely to question Claude's reasoning, and 4x more likely to catch missing context.
Here's my take: this isn't really about AI. It's about mindset.
I've used the same test when interviewing candidates for years. Partway through a task, I give them feedback and watch what happens. Some get defensive. Some nod and change nothing. And some actually absorb it, adjust their thinking, and come back with something better.
That last group is almost always the right hire. Not because they were the most polished coming in, but because they know how to improve in real time.
Turns out that's exactly the same skill that makes someone good at AI.
A growth mindset, one that treats every output as a starting point rather than a verdict, is what separates the power users from the rest. People who are naturally more agile, more curious, more willing to say "that's interesting, but let's push further" are going to be more AI fluent. The tool amplifies the disposition. It doesn't create it.
At the AI Officer Institute, we don't teach iteration as a prompt technique. We teach it as a posture. The habit of going back, pushing further, refining your thinking: that's not a feature of AI. That's a feature of mastery. And great leaders already have it. They just need to apply it to a new tool.
The Finding That Should Worry Every Business Leader
Here's where it gets uncomfortable.
When AI produces something that looks polished (a finished document, a working app, a beautifully formatted report), users become less likely to critically evaluate it. Less likely to fact-check. Less likely to ask "wait, is anything missing here?"
The report flags a drop of 3-5 percentage points in critical evaluation behaviors specifically in artifact-heavy conversations. On paper that sounds small. In practice, it's a blind spot with real consequences.
This is just human nature. We're wired to follow the shiny thing. Marketing beats research every time because our brains are pattern-matching machines that reward the look of quality over the substance of it. AI has just gotten very, very good at producing the look of quality.
The danger isn't that AI makes mistakes. The danger is that AI makes mistakes in ways that are hard to spot, and we've trained ourselves to stop looking.
The 30% Problem
One of the more quietly striking numbers in the report: only 30% of users tell Claude how they want it to interact with them.
That means 70% of people are essentially letting the AI drive the collaboration dynamic by default.
At AIO, this is one of the first things we teach, and it might be the highest-leverage skill in the entire curriculum. We call it meta-prompting: telling the AI not just what you want, but how you want to work together.
Some of our favorites:
"Ask me questions before you start."
"Interview me to pull out what I actually need."
"Iterate with me. Don't just hand me a final answer."
"Tell me what you're doing and why as you go."
When you set those terms upfront, everything changes. The AI stops being a vending machine and starts functioning like a thought partner. That's a different relationship, and a significantly more valuable one.
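For teams working with Claude through the API rather than the chat interface, the same terms can be set once, up front, in the system prompt. Here is a minimal sketch in Python; the model name, prompt wording, and helper function are illustrative assumptions, not AIO's actual curriculum or a prescribed configuration:

```python
# Meta-prompting sketch: put the collaboration terms in the system prompt,
# and the task itself in the user message. The specific wording below is
# illustrative; adapt it to how you actually want to work.
COLLABORATION_TERMS = "\n".join([
    "Ask me questions before you start.",
    "Interview me to pull out what I actually need.",
    "Iterate with me. Don't just hand me a final answer.",
    "Tell me what you're doing and why as you go.",
])

def build_request(task: str) -> dict:
    """Assemble a Messages API request that sets the collaboration
    dynamic once, so every turn inherits it."""
    return {
        "model": "claude-sonnet-4-5",   # assumption: any current model works
        "max_tokens": 1024,
        "system": COLLABORATION_TERMS,  # how you want to work together
        "messages": [{"role": "user", "content": task}],  # what you want
    }

request = build_request("Draft a rollout plan for our AI pilot.")
# With the anthropic SDK, this would be sent as:
#   client.messages.create(**request)
```

The design point is the separation: the system prompt carries the relationship, the user message carries the task, so you set the terms once instead of repeating them in every prompt.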
What This Means If You're Teaching AI Fluency
The Anthropic report is honest about what it can and can't measure. Of the 24 fluency behaviors in their framework, 13 happen outside the conversation: things like being transparent about AI's role in your work, or thinking critically about how you share AI-generated output. Those matter enormously. They're just harder to track.
That's actually where the real work of AI leadership lives. The behaviors that define an AI Officer aren't just about crafting better prompts. They're about building judgment: knowing when to trust, when to question, when to take ownership of the output you're putting into the world.
At AIO, we embed critical evaluation into practice rather than lecturing about it. You learn to question AI outputs by doing it, in exercises, in real scenarios, on real work. The muscle builds through use.
But Anthropic's data is a useful reminder that the muscle can atrophy too, especially when the outputs get better. As AI produces increasingly polished work, the temptation to coast will only grow. The fluent practitioners will be the ones who stay in the discomfort of questioning even what looks finished.
The Takeaway
Anthropic's AI Fluency Index gives us something we've been missing: a baseline. A measurable picture of how people actually collaborate with AI, not how they say they do.
The headline finding is simple, and it maps to everything we know about skill development in any domain:
The people who iterate are the people who improve.
That's the secret weapon. Not the fanciest model. Not the longest prompt. Just the willingness to go back, push further, and treat every output as a draft.
If you're working on your own AI fluency, or building a team that needs it, start there.
Dave Hajdu is the founder of Edge8 AI and the AI Officer Institute (AIO), where he trains the next generation of AI leaders across Southeast Asia and beyond. Learn more at ai-officer.com.


