Overview – Why It Matters
Where Thinking Meets Reality
In the first five steps of problem solving, we moved from defining a problem to identifying causes and designing countermeasures. Each stage required logical thinking, teamwork, and a fair amount of faith that our understanding of the problem was correct. Step 6 is where that faith is tested.
This is the fun step—not because it’s easy, but because it is alive. Here, the process finally talks back. The data, the process, and the people who live the process tell us whether our effort was grounded in reality or built on assumptions. This is where we discover if our hard work has paid off—or if it’s time to humble ourselves, go back, and improve our thinking.
At its core, Step 6 asks one simple question: Did it work?
But beneath that simplicity lies a deeper purpose: to check the quality of our logic from start to finish. Results are the proof of thinking. A strong result confirms a solid chain from problem definition to root cause to countermeasure. A weak result, conversely, signals that something earlier—our grasp of the problem, the true cause, or the match between cause and countermeasure—was off target.
From Doing to Discovering
Up to now, problem solving has been an act of design and implementation. In Step 5, we took action: we built, installed, trained, adjusted, and sometimes re-engineered. Step 6 shifts our mindset from doing to discovering. We stop pushing and start listening.
The guiding logic is straightforward:
Verified Cause → Appropriate Countermeasure → Sustained Result.
“Verified” means the cause–effect link was tested and supported by evidence. “Appropriate” means the countermeasure fit the scale and mechanism of the cause. “Sustained” means the improvement remains stable over time and across conditions.
Checking results is not merely an afterthought or report-writing exercise; it is the validation step that determines whether we have truly learned. It transforms problem solving from a one-time fix into a scientific loop of understanding, prediction, and confirmation.
The Emotional Reality of Checking
Few steps generate such a mix of anticipation and anxiety. On paper, Step 6 sounds procedural: collect data, compare before and after, confirm targets. In practice, it can be nerve-racking. I have felt both extremes.
Years ago, at Toyota, I joined a project team that modified a problematic surface-grinding machine. The project was timed to the plant’s spring shutdown and had to be completed, working around the clock, in just seven days. Every detail mattered: alignment, coolant flow, table flatness, surface finish, clamping, and maintenance access. When the final modification was made and the machine restarted, the tension was palpable.
We nervously waited for the official quality results from the QC lab—the true test of our work. When the data came back, the verdict was positive: process capability had improved dramatically, and maintainability was far better. The feeling was unforgettable—relief, pride, and validation that disciplined teamwork and PDCA thinking worked exactly as intended.
Moments like that show the joy of Step 6. They remind us that data can carry emotion—that proof of improvement can inspire confidence and reinforce belief in the process.
But there is another, harsher side.
As a consultant, I once observed a complex technical problem at a national laboratory. The team was world-class—scientists, engineers, and researchers working for months on an issue that defied explanation. Each time they thought they had solved it, new evidence emerged, forcing them to revisit the problem definition, the root-cause hypotheses, and the countermeasures. Six months later, the ultimate cause turned out to be something astonishingly minute—so small and so unexpected that no one could have predicted it at the outset.
That experience was humbling. It showed that checking results is not about proving who was right; it’s about discovering the truth. Sometimes, the truth is uncomfortable. But without that humility, learning stalls.
Why Checking Results Is Non-Negotiable
Every organization faces the temptation to declare victory too soon. After all, once an improvement has been implemented, everyone wants closure. The schedule is tight, people are tired, and there’s pressure to move on. But skipping or softening Step 6 is like building a bridge and never load-testing it. It may look good, but you don’t know if it will hold.
Toyota’s culture treats checking as non-negotiable. Whether in production, engineering, or design, confirmation of results is built into the process. The logic is simple: if a change was worth implementing, it’s worth validating.
This mindset guards against two forms of self-deception:
- Checking execution instead of effectiveness. Did we merely do what we said we would, or did the change actually solve the problem?
- Measuring activity instead of outcome. Did the metrics that matter—quality, delivery, safety, cost, morale—actually improve and stay improved?
True checking means asking both questions honestly, even if the answer leads back to re-examining earlier steps.
The Scientific Loop
Step 6 is the transition from Do to Check in PDCA. In scientific terms, it is the experiment’s validation phase. The countermeasure is our hypothesis; the process result is our data.
When we compare before and after, we’re not just looking for difference—we’re looking for causal proof. Does the evidence show that the countermeasure addressed the verified cause and not some unrelated factor?
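As an illustration of what “looking for causal proof” can mean in practice, a before/after comparison of defect rates can be tested for statistical significance. The sketch below uses a simple two-proportion z-test; the function name and all numbers are hypothetical, invented here for illustration, and a significant result supports (but does not by itself prove) the causal claim.

```python
import math

def two_proportion_z(defects_before, n_before, defects_after, n_after):
    """Return (rate_before, rate_after, z) for a two-proportion z-test.

    A large positive z suggests the drop in defect rate is unlikely
    to be random variation alone."""
    p1 = defects_before / n_before          # defect rate before the change
    p2 = defects_after / n_after            # defect rate after the change
    pooled = (defects_before + defects_after) / (n_before + n_after)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    return p1, p2, (p1 - p2) / se

# Hypothetical data: 48 defects in 1,000 parts before, 12 in 1,000 after.
before_rate, after_rate, z = two_proportion_z(48, 1000, 12, 1000)
print(f"before={before_rate:.1%} after={after_rate:.1%} z={z:.2f}")
```

Even a strong z-score only rules out chance; it cannot rule out an unrelated factor that changed at the same time, which is why the article insists on tracing the result back to the verified cause.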
This is also the moment to detect side effects or secondary problems. Many teams find that solving one issue inadvertently exposes another. That’s not failure; it’s evolution. Each loop refines the system’s understanding of itself.
The best teams use Step 6 to strengthen both confidence and curiosity: confidence when results confirm their logic, and curiosity when they do not.
Highs, Lows, and Learning
The beauty of Step 6 is that it’s emotionally honest. It doesn’t flatter; it reveals. The data may say we succeeded, or it may say we missed the mark. Either way, it gives us clarity.
When the outcome is positive, the team earns the satisfaction of proof and the permission to standardize in Step 7. When it’s negative or inconclusive, the feedback is equally valuable—it tells us where our understanding must deepen.
This stage reminds us that problem solving is not linear but cyclical. Every “Check” contains the seed of the next “Plan.” Over time, that repetition builds organizational wisdom—the kind that can’t be written into procedures but shows up in better questions, sharper hypotheses, and calmer judgment.
Reflection and Transition
Checking results, then, is not the end of problem solving but the beginning of learning. It closes one PDCA loop and opens the next. It tests both our process and our mindset.
The true measure of a mature problem-solving culture isn’t how quickly people act—it’s how rigorously they verify.
Success is feedback; failure is insight. Both are indispensable.
In the next article, we’ll explore the specific tools and methods used to check results effectively—how to use data, visual evidence, and operational metrics to confirm whether improvement is real and lasting.