
Step 6: Check Results - Tools and Methods

By Art Smalley


The Purpose of Checking Tools

Step 6 is where thinking meets verification. The purpose of this phase is simple but profound: prove, with evidence, whether the countermeasures produced the intended result.

This means our tools and methods must match the intent of the goal set back in Step 3. It does little good to declare success because “standard work charts were updated” if our stated goal was to cut defects per million opportunities (DPMO). The Check must logically mirror the Goal.

When the evidence is sound, confidence follows. When it is weak or misaligned, we risk congratulating activity rather than achievement. Toyota, Six Sigma, and any sound scientific approach insist on this alignment.


From Goals to Checks: Logical Alignment

Every improvement effort forms a logical chain:

Problem → Root Cause → Countermeasure → Result

The result must close the loop to the goal. A simple discipline table keeps teams honest:

| Step | Key Question | Required Evidence |
| --- | --- | --- |
| 3 – Set Target | What are we trying to improve? | Clear, specific, measurable goal (e.g., defect rate, cycle time, tolerance) |
| 5 – Countermeasure | What did we change to achieve it? | Countermeasure tied to verified cause |
| 6 – Check | Did it work, and did it stay improved? | Before/after data on same metric, plus evidence of stability |

The method of checking should be the same as the method of measuring in Step 3. If Step 3 used minutes per transaction, Step 6 must use the same units and definition. Only then is the comparison valid.


Tool Family 1 – Before-and-After Charts

These are the simplest and most visual forms of proof. They show the signal of improvement without complex statistics.

  • Run Charts: Show trend over time, revealing step change or gradual shift.
  • Before/After Bar Graphs: Common in QC Circle reports—simple, powerful.
  • Histogram Comparisons: Demonstrate distribution shift and reduced spread.

Use these when data is readily available and sample sizes are moderate. A good before-and-after chart answers the question: “Did the curve move in the right direction, and does it stay there?”
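
As an illustration, here is a minimal Python sketch of a before/after run chart, using invented daily defect counts rather than real data; the same picture could be drawn in any charting tool:

```python
import matplotlib.pyplot as plt

# Hypothetical daily defect counts; the countermeasure took effect after day 15.
before = [12, 14, 11, 13, 15, 12, 14, 13, 12, 16, 13, 12, 14, 13, 15]
after = [8, 7, 9, 6, 8, 7, 6, 8, 7, 5, 7, 6, 8, 7, 6]
values = before + after
days = range(1, len(values) + 1)

plt.plot(days, values, marker="o", label="Daily defects")
plt.axvline(len(before) + 0.5, color="red", linestyle="--", label="Countermeasure")
# Overlay the before/after means so the step change is visible at a glance.
plt.hlines(sum(before) / len(before), 1, len(before), colors="gray")
plt.hlines(sum(after) / len(after), len(before) + 1, len(values), colors="gray")
plt.xlabel("Day")
plt.ylabel("Defect count")
plt.title("Before/After Run Chart")
plt.legend()
plt.show()
```

The gap between the two gray mean lines is the improvement; the flatness of the “after” segment is the early evidence that it holds.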


Tool Family 2 – Statistical Quality Control (SQC) Tools

When variation matters—and it usually does—statistical confirmation separates noise from true change.

  • Control Charts (X-bar/R, p-charts): Verify process stability and sustainment.
  • Capability Indices (Cp, Cpk, Ppk): Quantify improvement relative to tolerance.
  • Scatter Plots and Correlation Studies: Validate the cause–effect link.

Example – Toyota Surface Grinder:
When our team rebuilt a grinding machine to improve surface flatness, we tracked both coolant concentration and flatness capability (Cpk) for several weeks. The control chart showed variation tightening; capability improved materially. That combination—data and stability—proved the fix had taken hold.
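
The capability half of such a check is easy to compute. Here is a minimal sketch using hypothetical flatness readings and assumed tolerance limits, not the actual grinder data:

```python
import numpy as np

# Hypothetical post-rebuild flatness readings (mm) and assumed spec limits.
flatness = np.array([0.012, 0.015, 0.011, 0.014, 0.013, 0.016,
                     0.012, 0.014, 0.013, 0.015, 0.012, 0.014])
LSL, USL = 0.000, 0.025  # assumed lower/upper tolerance limits

mean = flatness.mean()
sigma = flatness.std(ddof=1)  # overall sample sigma (strictly closer to Ppk;
                              # within-subgroup sigma gives textbook Cp/Cpk)

cp = (USL - LSL) / (6 * sigma)                   # potential capability
cpk = min(USL - mean, mean - LSL) / (3 * sigma)  # capability including centering

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# A common rule of thumb treats Cpk >= 1.33 as capable; the real test is
# whether the value meets the target set in Step 3.
```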

Such checks turn subjective satisfaction into objective evidence. They also prevent the premature “it looks better” conclusion that undermines real learning.


Tool Family 3 – Verification of Effectiveness

The key question: Did our countermeasure cause the improvement, or did something else?

  • Regression Analysis: Tests the correlation between a variable and the outcome.
  • DOE Confirmation Runs: Prove factor impact when several variables changed.
  • Pre-/Post Sampling Tests: Confirm measurable difference with confidence.

These methods don’t belong only to statisticians. Even a simple “A/B comparison under identical conditions” fulfills the same spirit of scientific validation.
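
To make that concrete, a pre/post sampling test can be run as a two-sample Welch's t-test, as in this sketch with invented cycle-time samples:

```python
from scipy import stats

# Hypothetical cycle times (minutes) sampled before and after the countermeasure.
before = [42, 45, 39, 44, 41, 43, 46, 40, 42, 44]
after = [31, 29, 33, 30, 32, 28, 31, 30, 29, 32]

# Welch's t-test does not assume equal variances in the two samples.
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value says the shift is unlikely to be noise; the size of the
# shift must still be judged against the Step 3 goal.
```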


Tool Family 4 – Visual and Field Confirmation

Not all checking happens in spreadsheets. Many improvements hinge on human behavior, layout, or procedure.

  • Standard Work Observation: Compare actual vs. expected method.
  • Layered Process Audits: Verify consistency at multiple management levels.
  • Visual Boards: Display key before/after conditions so everyone can see progress.

The principle is direct verification—go see whether the countermeasure changed real-world conditions as intended.


Tool Family 5 – Result and Process Metric Pairing

Results show what changed; process metrics show why it stays changed. Mature systems track both.

  • Result Metric: Outcome tied to Step 3 goal (e.g., defects per thousand, wait time).
  • Process Metric: Leading indicator ensuring behavior or condition stability.

| Example | Result Metric | Process Metric |
| --- | --- | --- |
| Grinder capability | Flatness capability (e.g., Cpk) | Coolant concentration control |
| Documentation accuracy | Error rate % | % self-checks completed |
| Hospital wait time | Avg door-to-doctor (min) | % triage completed < 5 min |

Tracking both builds confidence that gains will hold beyond the project’s close.
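
As a sketch of how simple the pairing can be, the snippet below computes both metrics for the documentation-accuracy row from one hypothetical daily log (the field layout is invented for illustration):

```python
# Hypothetical daily log entries: (documents_processed, errors_found, self_checks_done)
log = [(50, 2, 48), (45, 1, 44), (60, 3, 57), (55, 1, 54), (48, 2, 47)]

docs = sum(day[0] for day in log)
# Result metric: error rate, tied directly back to the Step 3 goal.
error_rate = 100 * sum(day[1] for day in log) / docs
# Process metric: share of documents self-checked (the leading indicator).
self_check_rate = 100 * sum(day[2] for day in log) / docs

print(f"Result metric:  {error_rate:.1f}% error rate")
print(f"Process metric: {self_check_rate:.0f}% self-checks completed")
```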


Checking in Healthcare and Services

Service and healthcare teams often ask, “What can we measure if we don’t have Cp or Cpk?” The answer: plenty. The rigor is not in the math—it’s in the logic.

If the goal was to shorten patient discharge time, the check is the time itself, not the number of staff trained or checklists completed.

Examples of Meaningful Checks

| Type of Goal | Before/After Evidence | Notes |
| --- | --- | --- |
| Patient Safety / Quality of Care | Infection rate per 1,000 patient-days, medication error rate | Mirrors DPMO logic using clinical units. |
| Lead-Time Reduction | Timestamped process map or run chart of average lead time | Fits Lean healthcare or service flow studies. |
| Service Quality / Customer Experience | Complaint counts, satisfaction scores, sentiment trend | Ensure same measurement method before/after. |
| Administrative Error Reduction | Sample audit results or cycle-time data | Same discipline, softer process. |
| Standard Adherence | Compliance observation scores | Confirms process behavior, not paperwork. |

Healthcare Example – Emergency Department

A hospital aimed to cut door-to-doctor time from 42 min to 25 min.

  • Goal metric: Avg door-to-doctor minutes.
  • Process metric: % triage within 5 min.
  • Check tool: Run chart with control limits over 90 days.

Result: Sustained 26 min average — improvement verified.
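
A minimal sketch of that check tool, an individuals (I-MR) control chart over hypothetical daily averages, shows how the stability claim would be verified:

```python
import numpy as np

# Hypothetical daily average door-to-doctor times (minutes) after the change.
daily_avg = np.array([27, 25, 26, 24, 28, 26, 25, 27, 26, 25,
                      24, 26, 27, 25, 26, 28, 25, 26, 24, 27], dtype=float)

center = daily_avg.mean()
# Individuals-chart limits from the average moving range (standard I-MR method).
moving_range = np.abs(np.diff(daily_avg))
sigma_est = moving_range.mean() / 1.128  # d2 constant for subgroup size 2
ucl, lcl = center + 3 * sigma_est, center - 3 * sigma_est

print(f"Center = {center:.1f} min, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
outliers = daily_avg[(daily_avg > ucl) | (daily_avg < lcl)]
print("Stable" if outliers.size == 0 else f"Investigate points: {outliers}")
```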

The same reasoning applies in any service setting—from call centers to insurance claims. When metrics reflect the original goal, checking becomes credible and actionable.


Why Toyota and Six Sigma Agree

Both traditions converge on one truth, expressed here in problem-solving terms:

No confirmation, no learning. No verification, no problem solving.

  • Toyota demands evidence through simple visual charts and go-see checks.
  • Six Sigma demands evidence through statistical confirmation and control plans.

The common denominator is logical rigor. Whether you plot defect rate on a whiteboard or run a capability study, the mindset is identical—verify with data that matches the target.


Typical Tool Summary

| Purpose | Typical Tool | Used For | Notes |
| --- | --- | --- | --- |
| Quantitative Proof | Run / Control Chart | Process stability, defect rate | Must mirror Step 3 goal |
| Statistical Validation | Cp/Cpk, Regression | Capability or cause proof | Apply when data rich |
| Behavioral Confirmation | Observation / Audit | Human adherence | Reinforces sustainment |
| Mixed System | Dual Metric Board | Leading + Lagging Indicators | Shows relationship between process and result |

Closing Reflections

The choice of tool matters less than the integrity of its logic. Checking results is not a paperwork exercise—it’s the moment of truth for problem solving.

A weak check inflates confidence without evidence.
A strong check builds learning, credibility, and momentum.

Whether in an engine plant, hospital ward, or service center, the principle is the same:

Measure what you aimed to improve.
Compare before and after.
Confirm stability.
Learn from the gap.

Only then can Step 7—Standardize and Follow Up—stand on solid ground.

© 2025 Art Smalley | a3thinking.com