Most organizations know they need to improve operations, but few can pinpoint exactly where they stand right now. That gap between "we should get better" and "here’s precisely what needs to change" is where an operational excellence maturity assessment comes in. It gives you a structured, honest snapshot of your current capabilities, and a clear direction for where to focus your improvement efforts next.
Without this kind of baseline, improvement initiatives tend to scatter. Teams chase isolated wins instead of building toward something systematic. Resources get burned on projects that feel productive but don’t move the needle. A maturity assessment prevents that waste by grounding your strategy in data, not gut feeling, which is exactly the engineering-driven approach we apply at Lean Six Sigma Experts across our consulting, training, and recruiting work.
This guide walks you through the full process: selecting the right framework, defining your maturity levels, collecting and scoring data, and turning results into a prioritized action plan. Whether you’re running this assessment for a single facility or across multiple sites, you’ll have a repeatable methodology you can use to measure progress over time and hold your organization accountable to real, measurable standards.
What a maturity assessment measures and why it matters
A maturity assessment doesn’t measure outcomes like revenue or defect rates directly. It measures your organization’s capability to produce and sustain those outcomes. Think of it as an X-ray rather than a symptom report: instead of noting that lead times are long, it reveals why they’re long and what structural capabilities you’re missing. An operational excellence maturity assessment typically scores your organization across several dimensions, assigning levels that range from reactive and unstructured at the low end to fully optimized and self-sustaining at the high end.
The key shift a maturity assessment creates is moving your organization from describing problems to diagnosing root causes at a systems level.
The five core dimensions most models evaluate
Most maturity frameworks evaluate a consistent set of dimensions that together describe how well your organization operates. These five appear across frameworks used in manufacturing, corporate services, and healthcare environments.

| Dimension | What it evaluates |
|---|---|
| Leadership and strategy | How consistently leaders drive improvement priorities and allocate resources |
| Process management | Whether processes are defined, documented, measured, and controlled |
| People and capability | The depth of improvement skills built into your workforce |
| Data and measurement | How rigorously your organization collects, analyzes, and acts on performance data |
| Continuous improvement systems | The maturity of your problem-solving routines, tools, and governance structures |
Scoring each dimension separately gives you a multi-dimensional capability profile rather than a single aggregate number. That profile is what makes the assessment actionable: it shows you which dimensions are dragging down your overall performance, so you know where to invest resources first instead of spreading improvement effort too thin.
Why your measurement approach changes what you find
The way you collect evidence matters as much as the framework you select. Organizations that rely only on management surveys consistently overestimate their maturity because surveys capture perception, not ground truth. Combining structured interviews, direct floor observation, performance data review, and document audits produces a far more accurate picture of where your capabilities actually stand versus where leaders believe they stand.
This mixed-method approach also builds internal credibility for your findings. When supervisors and plant managers see that your assessment team walked the production floor, pulled real performance data, and reviewed actual standard operating procedures, they’re far less likely to challenge the results. That credibility matters because the assessment will ultimately ask people to change how they work. Resistance drops sharply when the evidence is concrete and observable, and getting genuine buy-in at this stage sets up every subsequent improvement initiative for a smoother start.
Step 1. Set scope, goals, and stakeholders
Before you select a framework or schedule your first interview, you need to define exactly what you’re assessing and why. Skipping this step is how assessments turn into sprawling projects that produce reports nobody acts on. A well-scoped assessment gives your team a clear mandate, keeps data collection focused, and makes your final recommendations far easier for leadership to approve and fund.
Define the boundaries of your assessment
Your scope determines which facilities, functions, and processes fall inside the assessment and which ones don’t. If you’re running your first operational excellence maturity assessment, start with one site or one value stream rather than the entire organization. That focus lets you build a repeatable methodology before scaling it.
Trying to assess everything at once guarantees shallow findings. Narrow your scope intentionally, then expand it in future cycles.
Use the checklist below to lock down your scope before collecting a single data point:
- Site or business unit: Name the specific location or department being assessed
- Value streams in scope: List the processes from order intake to delivery that you’ll evaluate
- Time horizon: Specify whether you’re assessing current state or a rolling 12-month window
- Assessment team: Identify who leads the work and who provides subject matter input
- Deliverable format: Agree upfront whether the output is a scored report, a dashboard, or a working session
Align stakeholders before you start
Stakeholder alignment is not a soft activity. Unaligned stakeholders will dispute your findings, delay approvals, and undermine implementation before it starts. Identify every person whose function falls within your defined scope and schedule a 30-minute kickoff conversation with each one to explain the purpose, methodology, and expected output.
During those conversations, capture each stakeholder’s top two performance concerns. These priorities won’t change your scoring rubric, but they will sharpen which gaps you emphasize in your final presentation. When stakeholders see their own concerns reflected clearly in your results, they become advocates rather than obstacles.
Step 2. Pick a model and build your rubric
Your framework selection determines how precisely your operational excellence maturity assessment maps to your industry and goals. Three models cover the majority of industrial and corporate environments: the Shingo Model, which focuses on culture and behavioral alignment; the Baldrige Excellence Framework, which addresses organizational performance broadly; and a custom Lean Six Sigma model, which fits best when your primary focus is process efficiency and waste reduction. If your organization already has a Lean or Six Sigma program running, building a custom model on top of those principles typically produces the most directly actionable scoring criteria.
Match the model to your improvement philosophy first; adapting a misaligned framework creates scoring friction that slows every step that follows.
Select the right framework for your context
Each established framework brings different strengths to your assessment. The table below compares the three most common options so you can make a direct comparison before committing.
| Framework | Best fit | Primary focus |
|---|---|---|
| Shingo Model | Manufacturing, multi-site organizations | Culture and behavioral alignment |
| Baldrige Excellence Framework | Corporate and healthcare environments | Holistic organizational performance |
| Custom Lean Six Sigma Model | Process-heavy environments with existing improvement programs | Waste elimination and process control |
Once you select your framework, confirm it maps cleanly to the five core dimensions covered earlier: leadership and strategy, process management, people and capability, data and measurement, and continuous improvement systems. Any dimension your chosen framework ignores becomes a blind spot in your scoring.
Define five levels with observable behaviors
Your rubric needs concrete, observable behaviors at each maturity level, not vague descriptors like "improving" or "advanced." Use this template structure for each dimension:
- Level 1 (Reactive): No documented process; problems handled ad hoc
- Level 2 (Developing): Process exists but is inconsistently followed
- Level 3 (Defined): Process is documented, followed, and measured
- Level 4 (Managed): Data drives decisions; improvement follows a structured cycle
- Level 5 (Optimized): Continuous improvement is embedded; gains are sustained over time
Write two or three specific behavioral indicators per level for each dimension. That specificity eliminates scoring disputes during calibration and makes your final gap analysis far easier to defend to leadership.
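To make that specificity enforceable, it can help to keep the rubric as structured data rather than prose, so scoring and calibration reference the same indicator text. Below is a minimal sketch of that idea; the dimension shown and all indicator wording are illustrative examples, not a prescribed standard.

```python
# Sketch: a maturity rubric stored as plain data so every assessor
# scores against identical observable indicators.
# Indicator text below is hypothetical example content.

RUBRIC = {
    "Process management": {
        1: ["No documented SOPs; problems fixed ad hoc"],
        2: ["SOPs exist for some processes but are inconsistently followed"],
        3: ["SOPs documented, followed, and measured for core processes"],
        4: ["Process data reviewed on a fixed cycle and drives changes"],
        5: ["Improvements are embedded, audited, and sustained over time"],
    },
    # ...the remaining four dimensions follow the same shape
}

def indicators_for(dimension: str, level: int) -> list[str]:
    """Return the observable behavioral indicators for one level."""
    return RUBRIC[dimension][level]
```

With the rubric in one shared structure, a scoring dispute becomes a question of which indicator the evidence matches, not whose adjective applies.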
Step 3. Collect evidence with a mixed-method approach
With your rubric defined, the next task is gathering reliable evidence across each dimension you’re scoring. A single data source will distort your results. Surveys capture what people believe is true; performance data captures what’s actually happening; observations capture what your systems miss entirely. Running all three in parallel is the only way to produce findings that hold up under scrutiny.
Triangulating across at least three evidence sources is the single most effective way to protect your assessment from the perception bias that undermines most internal audits.
Conduct structured interviews and floor observations
Start with 30-minute structured interviews using a fixed question guide so your data stays comparable across roles. Interview operators, supervisors, and at least one senior leader per function in scope. Pair each interview block with a direct floor walk so you can verify what was described against what you actually observe. Note discrepancies between the two, because those gaps are often your most important findings.
Use this interview question template as a starting point:
- How do you currently identify a process problem before it reaches the customer?
- What standard work documentation do you reference daily, and when was it last updated?
- How are improvement ideas captured, reviewed, and implemented in your area?
- Can you show me where you track performance against your targets?
Pull quantitative performance data
Qualitative evidence alone won’t satisfy your data-driven stakeholders, and it shouldn’t. Pull three to five hard metrics per dimension directly from your operational systems. For process management, that might include first-pass yield, cycle time variance, and on-time delivery. For people and capability, pull training completion rates and the number of improvement projects closed per quarter.
Document where each data point came from, who provided it, and the date range it covers. That documentation protects your operational excellence maturity assessment findings from being dismissed as anecdotal when you present to leadership.
Step 4. Score, calibrate, and agree on the baseline
Once your evidence is collected, the scoring process needs structure and discipline to produce a defensible baseline. Raw data doesn’t score itself, and without a consistent scoring protocol, two assessors reviewing the same evidence will land on different numbers. Your goal at this stage is to convert your mixed-method evidence into a single agreed maturity score per dimension that your entire assessment team and key stakeholders will stand behind.
Score each dimension independently
Score each dimension by comparing your collected evidence against the behavioral indicators you defined in your rubric. Work through one dimension at a time and assign a preliminary score using half-point increments (e.g., 2.5, 3.5) when the evidence sits between two levels. Half-points matter because they give your roadmap a sharper prioritization signal than whole-number rounding would.
Use this scoring reference template for each dimension:
| Dimension | Evidence summary | Preliminary score | Supporting data source |
|---|---|---|---|
| Leadership and strategy | Improvement goals set annually but not tracked at floor level | 2.5 | Leadership interviews, KPI board audit |
| Process management | SOPs exist for 60% of core processes; last updated 18 months ago | 2.0 | Document review, floor observations |
| People and capability | Green Belt training completed but no active project pipeline | 2.5 | Training records, supervisor interviews |
| Data and measurement | Daily metrics tracked; rarely used to trigger structured problem-solving | 3.0 | Performance data pull, floor walk |
| Continuous improvement systems | Ad hoc kaizen events; no formal governance cadence | 1.5 | Stakeholder interviews |
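Scores like those above also lend themselves to a quick consistency check before calibration: confirm every score is a valid half-point value, then rank dimensions by their gap to a target level. This is a sketch only; the scores mirror the example table, and the target of 3.5 is a hypothetical choice.

```python
# Sketch: validate preliminary scores and rank gaps to a target level.
# Scores match the example table above; the 3.5 target is illustrative.

scores = {
    "Leadership and strategy": 2.5,
    "Process management": 2.0,
    "People and capability": 2.5,
    "Data and measurement": 3.0,
    "Continuous improvement systems": 1.5,
}

def is_valid(score: float) -> bool:
    """A valid score sits between 1 and 5 on a half-point increment."""
    return 1.0 <= score <= 5.0 and score * 2 == int(score * 2)

def gaps(scores: dict[str, float], target: float = 3.5) -> dict[str, float]:
    """Gap from target per dimension, largest gap first."""
    g = {dim: round(target - s, 1) for dim, s in scores.items()}
    return dict(sorted(g.items(), key=lambda kv: kv[1], reverse=True))
```

Ranking by gap this way surfaces continuous improvement systems (gap of 2.0 against a 3.5 target) as the first candidate for the immediate action tier.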
Run a calibration session to lock in the baseline
After all preliminary scores are assigned, bring your full assessment team together for a two-hour calibration session. Each assessor presents their preliminary score for one dimension and cites the two strongest pieces of evidence supporting it. Where scores diverge by more than 0.5, the team reviews the conflicting evidence directly until they reach consensus.
The calibration session is where your operational excellence maturity assessment gains organizational legitimacy, because agreed scores are far harder to dismiss than individual judgments.
Close the session by documenting each final score, the primary evidence behind it, and the name of the assessor who led that dimension. This record becomes the formal baseline you’ll measure future progress against.
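The 0.5 divergence rule is also easy to automate as a pre-meeting check, so the calibration agenda starts with the dimensions that actually need debate. A minimal sketch, with invented assessor names and scores:

```python
# Sketch: flag dimensions for calibration discussion when preliminary
# scores from different assessors diverge by more than 0.5.
# All names and numbers below are hypothetical.

preliminary = {
    "Process management": {"Assessor A": 2.0, "Assessor B": 2.5},
    "Data and measurement": {"Assessor A": 3.0, "Assessor B": 4.0},
}

def needs_calibration(by_assessor: dict[str, float], threshold: float = 0.5) -> bool:
    """True when the score spread exceeds the agreed threshold."""
    vals = list(by_assessor.values())
    return max(vals) - min(vals) > threshold

flagged = [dim for dim, s in preliminary.items() if needs_calibration(s)]
# Process management spreads by exactly 0.5 (within tolerance);
# Data and measurement spreads by 1.0 and is flagged for evidence review.
```

Running this before the session keeps the two-hour slot focused on genuine disagreements rather than scores that are already within tolerance.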
Step 5. Turn gaps into a prioritized roadmap
Your calibrated baseline tells you where you stand; now you need to decide what to fix first and in what sequence. The gap between your current score and your target score in each dimension is raw material for your roadmap, but not every gap deserves equal priority. Closing a high-impact, low-effort gap first generates momentum and builds organizational confidence in the entire improvement process.
Prioritize gaps that sit at the intersection of high strategic importance and existing organizational capacity to act on them quickly.
Rank gaps by impact and effort
Plot each scored dimension on a two-axis priority matrix: one axis for the business impact of closing the gap, the other for the effort required. Dimensions scoring below 2.5 with high strategic importance belong in your immediate action tier. Dimensions scoring between 2.5 and 3.5 with moderate effort belong in your 90-day development tier. Assign each gap to a clear tier before you write a single project charter.

| Priority tier | Score and impact | Recommended action |
|---|---|---|
| Immediate (0-90 days) | Below 2.5, high impact | Assign a project sponsor and charter a focused improvement event |
| Development (90-180 days) | 2.5 to 3.5 | Build capability through targeted training or process redesign |
| Sustain (180+ days) | Above 3.5 | Embed into governance routines and monitor quarterly |
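The tiering rules in the table above reduce to a small decision function. The sketch below assumes a simple boolean high-impact judgment taken from your priority matrix; how the case of a low score with lower impact is handled is a judgment call, shown here as one possible default.

```python
# Sketch: assign a priority tier from a dimension's calibrated score
# and a high-impact judgment from the priority matrix.
# Tier names and cutoffs mirror the table above.

def priority_tier(score: float, high_impact: bool) -> str:
    if score < 2.5 and high_impact:
        return "Immediate (0-90 days)"
    if 2.5 <= score <= 3.5:
        return "Development (90-180 days)"
    if score > 3.5:
        return "Sustain (180+ days)"
    # Low score but lower impact: build capability before chartering
    # an event (an assumption, since the table leaves this case open).
    return "Development (90-180 days)"
```

Encoding the cutoffs once means every gap gets tiered by the same rule, which removes one more source of dispute when the roadmap reaches leadership.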
Build a 90-day action plan
Each gap in your immediate tier needs a specific owner, a measurable target, and a completion date before your roadmap is complete. Vague commitments like "improve process management" don’t drive action. Write a one-line action statement for each priority gap that names who is accountable, what the deliverable is, and when it’s due.
Your operational excellence maturity assessment becomes a management tool rather than a shelf report when every gap connects directly to a named action. Use this template to structure your 90-day plan:
- Gap: People and capability scored 2.5
- Owner: Operations Manager
- Action: Launch Green Belt project pipeline with three active charters by day 60
- Target score: 3.5 by next reassessment date
Step 6. Run governance and reassess on a cadence
A completed operational excellence maturity assessment has no lasting value if it sits in a shared drive and gets reviewed once a year. Governance is the mechanism that keeps your baseline scores alive, your action owners accountable, and your improvement trajectory visible to leadership. Without a formal reassessment schedule, organizations naturally drift back toward reactive problem-solving within six months of their first assessment.
A single assessment creates a snapshot; a governance cadence turns that snapshot into a continuous performance management system.
Set a formal reassessment schedule
Your reassessment frequency should match the pace at which your organization can realistically close gaps. For most manufacturing and operations environments, a full reassessment every 12 months works well for the complete five-dimension scorecard. Pair that with a lightweight quarterly check-in focused only on the dimensions in your immediate action tier. That combination gives you an annual big picture and a quarterly progress signal without burning your team on constant data collection.
Use this governance calendar template to lock in your cadence before the first reassessment date arrives:
| Review type | Frequency | Scope | Owner |
|---|---|---|---|
| Full reassessment | Annual | All five dimensions | Assessment lead |
| Quarterly check-in | Every 90 days | Immediate-tier gaps only | Operations manager |
| Action plan review | Monthly | 90-day plan milestones | Project sponsors |
Build a governance rhythm that keeps improvements visible
Assign a single governance owner who is responsible for scheduling reviews, tracking action plan completion, and escalating stalled items to leadership. Without one named owner, governance meetings get deprioritized when production pressures hit.
Each quarterly check-in should follow a fixed 60-minute agenda: 15 minutes reviewing updated scores, 30 minutes on action plan status, and 15 minutes resolving blockers. Document every session with a one-page summary that captures score changes, completed actions, and open risks. That running record becomes your proof of progress when leadership asks whether the improvement program is actually moving the organization forward.

Next steps
You now have a complete, repeatable process for running an operational excellence maturity assessment: scope it correctly, select the right framework, collect mixed-method evidence, calibrate your scores, build a prioritized roadmap, and hold yourself to a governance cadence that keeps progress visible. The methodology only works if you actually run it, so set a start date before you close this page.
Pick your scope first. Choose one site or one value stream, assign an assessment lead, and schedule your stakeholder kickoff conversations within the next two weeks. That single step converts this guide from reading material into an active project.
If you want experienced support to accelerate your assessment, reduce scoring bias, or build the internal capability to run this process independently, Lean Six Sigma Experts can help across all three pillars: consulting, training, and recruiting. Contact our team to discuss your operational excellence goals and get a plan built around your specific environment.
