Most Lean Six Sigma deployments don’t fail because of bad data or flawed methodology. They fail because the organization wasn’t ready for the change. A change readiness assessment gives you a structured way to evaluate whether your people, processes, and culture can absorb a transition before you commit resources to it. Skip this step, and you’re essentially building a house on a foundation you never inspected.
At Lean Six Sigma Experts, we’ve spent over a decade helping organizations implement process improvements that actually stick. One lesson comes up again and again: technical solutions only work when the organization is prepared to adopt them. That preparation isn’t a gut feeling; it’s something you can measure, score, and act on. A proper readiness assessment surfaces the gaps between where your organization is now and where it needs to be before a major initiative launches.
This guide walks you through every step of conducting a change readiness assessment, from selecting the right framework to building your own scoring criteria. You’ll find practical tools, downloadable templates, and real evaluation methods that operations leaders, HR managers, and project teams can put to work immediately. Whether you’re rolling out a new Lean program across multiple sites or restructuring a single department, knowing your starting point changes everything.
What a change readiness assessment measures
A change readiness assessment doesn’t measure enthusiasm or surface-level buy-in. It measures organizational capacity across specific, testable dimensions that directly predict whether a change initiative will succeed or stall. Think of it as a diagnostic tool that gives you a baseline score before you start spending time and money on implementation. Each dimension you measure reveals a different type of gap, and those gaps tell you exactly where to focus your preparation efforts.
The six core readiness dimensions
Most rigorous assessments evaluate six interconnected areas. Leadership alignment looks at whether decision-makers understand the change, support it publicly, and are prepared to model new behaviors from day one. Employee awareness measures how well your workforce understands what’s changing, why it’s changing, and what it means for their daily work. Without awareness, even the best technical solution runs into friction at every level.

Strong leadership alignment is the single strongest predictor of successful change adoption in process improvement programs.
Organizational capacity covers whether you have the budget, staffing, and bandwidth to absorb the transition alongside normal operations. Process and system readiness evaluates whether current workflows, technology, and documentation can support the new state without major rework. Cultural fit examines whether existing organizational values and behavioral norms align with what the change requires. Finally, communication infrastructure checks whether the right channels exist to keep stakeholders informed and two-way feedback active throughout the rollout.
| Dimension | What It Measures | Key Risk if Weak |
|---|---|---|
| Leadership alignment | Executive buy-in and visible sponsorship | Initiative loses momentum after launch |
| Employee awareness | Understanding of change scope and personal impact | Resistance, confusion, low adoption |
| Organizational capacity | Resources, bandwidth, and budget availability | Overload and incomplete implementation |
| Process and system readiness | Compatibility of current tools and workflows | Rework, delays, technical failures |
| Cultural fit | Alignment of norms with required behaviors | Reversion to old habits post-training |
| Communication infrastructure | Channel availability and feedback mechanisms | Misinformation and unresolved concerns |
What the assessment does not measure
A readiness assessment does not predict the future or guarantee outcomes. It measures your starting conditions. Scores in any dimension only reflect the state of the organization at the time of data collection, which is why reassessment throughout an initiative matters as much as the initial baseline measurement. Teams sometimes conflate readiness with willingness, but the two are different. An employee can be fully willing to support a change and still lack the skills, resources, or information needed to adopt it effectively.
Understanding this distinction shapes how you respond to gaps. Skill gaps require training. Awareness gaps require communication. Resource gaps require planning and prioritization decisions from leadership. Treating all gaps as the same problem leads to generic solutions that fix nothing. A well-structured readiness assessment separates these issues so you can address each one with a targeted response rather than a single, unfocused change management plan.
When to run it and who should own it
Timing matters more than most teams realize. Running a change readiness assessment too late locks you into commitments you can’t reverse, while running it too early means you’re measuring an organization that hasn’t yet processed what the change actually involves. The right window sits 4 to 8 weeks before a formal initiative launch, once stakeholders understand the scope of the change but before implementation resources are fully deployed.
The right trigger points for running an assessment
Certain organizational events signal that an assessment is due. You should run one any time you’re introducing a new process framework, such as a Lean or Six Sigma program, restructuring a department, deploying new technology across multiple teams, or responding to a regulatory change that requires behavioral shifts from your workforce. These triggers share a common characteristic: they all require people to work differently than they do today.
The earlier you identify readiness gaps, the more options you have to close them before they become launch blockers.
You should also reassess mid-initiative if adoption metrics fall below your targets or if leadership signals change during the rollout. A single baseline score at the start of a project doesn’t stay valid for long. Organizational conditions shift as people learn more, as resource constraints evolve, and as early pilot results come in. Building in a second assessment at the 30 to 60 day mark after launch catches emerging problems before they compound.
Who should own the process
Ownership of the change readiness assessment needs to sit with someone who has both organizational access and decision-making authority. In most medium to large organizations, that means the project sponsor or a senior operations leader, not the project manager alone. Project managers coordinate the logistics, but the sponsor owns the outcomes and has the standing to act on what the data reveals.
Your HR or organizational development team should be active co-owners, particularly for dimensions like cultural fit and employee awareness. They hold existing relationships across the workforce and understand how to design data collection that employees will actually engage with honestly. If your organization uses dedicated change management practitioners, they should lead the assessment design and facilitate the scoring process while the sponsor remains accountable for acting on results. Assigning ownership to a committee with no clear accountable individual is the fastest way to ensure the findings sit in a folder and go nowhere.
Step 1. Define the change and success criteria
Before you build any part of a change readiness assessment, you need a clear, agreed-upon definition of what you’re actually assessing readiness for. Vague change descriptions produce vague results. If your team can’t articulate the change in a single, concrete sentence, your assessment questions will drift in ten different directions and your scores won’t point to anything actionable. Start here, and treat this step as non-negotiable before moving forward.
Write a single-sentence change statement
Your change statement should answer three things in one sentence: what is changing, who it affects, and what the desired outcome is. This forces you to strip away complexity and agree on the core scope before you start asking anyone else questions about it.
Use this template to build your statement:
Change Statement Template: "We are [specific action] in [affected area or team] to achieve [measurable outcome] by [target date]."
For example: "We are deploying a standardized Lean daily management system across all three manufacturing shifts to reduce unplanned downtime by 20% by Q3." That statement is specific enough to anchor every assessment question you write in later steps. If your leadership team can’t agree on a single version of that sentence, that disagreement is itself a critical readiness gap you’ve already uncovered.
Set measurable success criteria before you start
Once you have a change statement, define what success looks like in measurable terms. These criteria serve two purposes: they guide what your assessment needs to measure, and they give you a baseline to compare against when you reassess mid-initiative or at project close.
Build your success criteria using this format:
| Criteria Category | Example Metric | Target | Measurement Method |
|---|---|---|---|
| Adoption rate | % of staff following new process | 85% within 60 days | Supervisor observation checklist |
| Competency | Assessment score on new procedures | 80% pass rate | Post-training test results |
| Performance impact | Reduction in defect rate | 15% decrease | Production data pull |
| Stakeholder confidence | Leadership readiness score | 75+ out of 100 | Pre-launch survey |
Each criterion must have an owner and a measurement method before your assessment launches. Criteria without ownership tend to get measured once and then forgotten. Connecting your success metrics to the assessment dimensions you’ll evaluate in Step 3 keeps the entire process internally consistent and ensures your findings translate directly into decisions rather than just observations.
Step 2. Map stakeholders and change impacts
A change readiness assessment only produces useful data if you know who you’re assessing and what they’re being asked to absorb. Stakeholder mapping gives you that picture before you write a single survey question or schedule a single interview. Skip this step and you’ll either over-sample groups with minimal exposure to the change or miss the people whose resistance will matter most when implementation begins.
Build your stakeholder inventory
Your stakeholder inventory is a structured list of every group affected by the change, organized by their level of involvement and type of impact. Start broad, then narrow. Identify every team, department, or role that will interact with the new process, system, or structure. Then sort them into three tiers: those who must change their daily behavior, those who support the change without directly using it, and those affected only indirectly.
Use this template to build your inventory:
| Stakeholder Group | Change Impact Type | Tier | Primary Contact |
|---|---|---|---|
| Shift supervisors | Direct behavioral change | 1 | Operations Manager |
| HR department | Policy and process support | 2 | HR Director |
| Finance team | Reporting format change | 2 | CFO |
| Executive leadership | Strategic alignment | 3 | Project Sponsor |
| External suppliers | Workflow interface change | 3 | Procurement Lead |
Filling in this table forces your team to name specific groups rather than speak in generalities. That specificity directly shapes which assessment questions you build in Step 4 and which groups you prioritize for deeper data collection.
Assess the impact level for each group
Once you have your inventory, score the magnitude and urgency of impact for each group. Magnitude measures how significantly their work will change. Urgency measures how soon they need to be ready. Groups with high magnitude and high urgency need the most attention in your assessment design and the most targeted action planning after results come in.

Groups with high impact and low current readiness are your highest-priority gaps. Address them first in your action plan.
Rate each group on a simple 1 to 3 scale for both dimensions: 1 for low, 2 for moderate, 3 for high. Multiply the two scores to get a combined priority number between 1 and 9. Any group scoring 6 or above should receive dedicated readiness evaluation, not just inclusion in a general survey. This scoring approach keeps your assessment focused and prevents you from treating a shift supervisor the same way you treat an executive who reviews a monthly dashboard.
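The magnitude-times-urgency calculation above takes only a few lines to automate. The sketch below uses illustrative group names and scores (not data from a real assessment) to show how the 6-or-above cutoff separates groups needing dedicated evaluation from those covered by a general survey:

```python
# Priority scoring: magnitude (1-3) x urgency (1-3) per stakeholder group.
# Illustrative scores only -- replace with your own team's ratings.
stakeholders = [
    {"group": "Shift supervisors", "magnitude": 3, "urgency": 3},
    {"group": "HR department", "magnitude": 2, "urgency": 2},
    {"group": "Finance team", "magnitude": 2, "urgency": 1},
    {"group": "Executive leadership", "magnitude": 1, "urgency": 2},
]

for s in stakeholders:
    s["priority"] = s["magnitude"] * s["urgency"]  # combined score, 1-9

# Groups scoring 6 or above get dedicated readiness evaluation,
# not just inclusion in a general survey.
high_priority = [s["group"] for s in stakeholders if s["priority"] >= 6]
print(high_priority)  # ['Shift supervisors']
```

Sorting the full list by the priority score also gives you a ready-made order for scheduling Tier 1 interviews.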
Step 3. Choose readiness dimensions and model
With your stakeholders mapped and your change clearly defined, you now need to decide which readiness dimensions to evaluate and which framework will structure your data collection. Not every change requires the same lens. A technology deployment stresses process and system readiness far more than a culture shift initiative does. Choosing the wrong dimensions wastes time and produces scores that don’t reflect the real risks in your specific situation.
Match dimensions to your change type
Different changes expose different vulnerabilities, and your change readiness assessment should reflect that directly. A Lean Six Sigma rollout puts heavy demands on leadership alignment and cultural fit because it asks people to change how they think about problems, not just which software they use. A regulatory compliance change, by contrast, puts more weight on awareness and process readiness because the behavioral shift is more prescribed.
Use this table to match your change type to the dimensions that carry the most predictive weight:
| Change Type | High-Priority Dimensions |
|---|---|
| Lean / Six Sigma program launch | Leadership alignment, cultural fit, employee awareness |
| Technology deployment | Process and system readiness, organizational capacity |
| Regulatory compliance update | Employee awareness, communication infrastructure |
| Organizational restructuring | Leadership alignment, cultural fit, capacity |
| Multi-site standardization | Communication infrastructure, process readiness, capacity |
Scoring every dimension equally wastes effort on low-risk areas. Weight your dimensions based on where your specific change type is most vulnerable.
You don’t have to evaluate all six dimensions with equal depth. Prioritize the two or three dimensions most relevant to your change type and build deeper questions around those. The remaining dimensions still belong in your assessment, but they can use lighter measurement tools like short pulse surveys rather than structured interviews.
Select your scoring model
Once you know which dimensions to focus on, you need a consistent scoring model that converts qualitative responses into numbers you can compare across groups and track over time. The simplest reliable approach uses a 1 to 5 Likert scale for survey questions, averaged across each dimension to produce a dimension score, then averaged again to produce an overall readiness index.
Build your scoring structure using this template:
| Dimension | Questions | Raw Score Range | Weight |
|---|---|---|---|
| Leadership alignment | 5 | 5-25 | 25% |
| Employee awareness | 5 | 5-25 | 20% |
| Cultural fit | 4 | 4-20 | 20% |
| Process readiness | 4 | 4-20 | 15% |
| Organizational capacity | 3 | 3-15 | 10% |
| Communication infrastructure | 3 | 3-15 | 10% |
Assigning weights rather than treating all dimensions equally lets you reflect the true risk profile of your change in the final index score. A Lean program launch, for example, might weight leadership alignment at 30% and cultural fit at 25%, leaving less weight for system readiness.
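As a sanity check on the structure above, each dimension score is simply the mean of its Likert responses, and the weights must total 100% before any index can be computed. A minimal sketch (weights taken from the table; the sample responses are invented for illustration):

```python
# Dimension weights from the scoring structure table; they must sum to 1.0.
weights = {
    "leadership_alignment": 0.25,
    "employee_awareness": 0.20,
    "cultural_fit": 0.20,
    "process_readiness": 0.15,
    "organizational_capacity": 0.10,
    "communication_infrastructure": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"

def dimension_score(responses):
    """Average a list of 1-5 Likert responses into a dimension score."""
    return sum(responses) / len(responses)

# Example: five leadership-alignment responses from one respondent.
print(dimension_score([4, 3, 4, 3, 3]))  # 3.4
```

Running the weight check before data collection starts catches a mis-specified scoring model while it is still cheap to fix.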
Step 4. Build your assessment tools and questions
With your dimensions selected and weighted, you now need to build the actual instruments that collect data from each stakeholder group. The tools you choose directly affect the quality and honesty of the responses you get back. A well-designed change readiness assessment uses multiple data collection methods rather than relying on a single survey, because different groups respond more openly to different formats.
Choose the right tool for each stakeholder group
Not every stakeholder group needs the same type of tool. Tier 1 stakeholders facing direct behavioral change benefit most from structured interviews or focus groups, where you can probe for context behind their scores. Tier 2 and 3 stakeholders work well with shorter online surveys because their exposure to the change is narrower and their responses are easier to quantify without follow-up.
| Stakeholder Tier | Recommended Tool | Estimated Time |
|---|---|---|
| Tier 1 (direct impact) | Structured interview or focus group | 30-45 minutes |
| Tier 2 (support roles) | 10-15 question online survey | 10-12 minutes |
| Tier 3 (indirect impact) | 5-question pulse survey | 5 minutes |
| Leadership | One-on-one structured interview | 20-30 minutes |
Write questions that produce actionable data
Your survey questions need to map directly to the dimensions you selected in Step 3. Vague questions produce vague scores, so tie each question explicitly to a behavior or condition that can be addressed through a specific intervention. For each dimension, write three to five questions on a 1 to 5 scale where 1 represents "strongly disagree" and 5 represents "strongly agree."
Questions tied to observable behaviors outperform opinion-based questions because they produce scores you can act on, not just analyze.
Use this question template for each dimension in your change readiness assessment:
Leadership Alignment Sample Questions:
- "My direct manager has clearly explained why this change is happening." (1-5)
- "Senior leaders visibly support this initiative in team meetings." (1-5)
- "I believe leadership will follow through on the commitments they’ve made about this change." (1-5)
Employee Awareness Sample Questions:
- "I understand how my daily tasks will change as a result of this initiative." (1-5)
- "I know where to go if I have questions about this change." (1-5)
After you’ve drafted your questions, test them with two or three people outside the core project team before distributing them broadly. Unclear wording skews scores and creates gaps in your data that you can’t fix after collection closes.
Step 5. Collect data fast and without bias
Once your tools are built, speed and consistency matter more than most teams expect. Data collection that drags on for three or four weeks introduces timing bias because the organization’s conditions change during that window. Opinions shift after town halls, rumors spread, and early results from pilots contaminate responses from people who haven’t yet experienced the change. Your change readiness assessment collects a snapshot, so treat it like one.
Set a collection window and stick to it
Give yourself five to seven business days as your target collection window for all stakeholder tiers. This keeps the organizational context stable enough that you’re measuring the same conditions across groups. Send your survey on a Tuesday or Wednesday morning, when response rates are historically higher than Monday or Friday, and set your deadline for the following Tuesday at noon. That gives respondents a full week without letting the window drift open-ended.
Closing your collection window on a fixed date forces decisions rather than waiting for perfect participation rates.
Use this launch sequence to keep collection on track:
| Day | Action |
|---|---|
| Day 1 (Tuesday) | Send surveys and schedule Tier 1 interviews |
| Day 2-3 | Send one reminder to non-respondents |
| Day 4-5 | Conduct Tier 1 structured interviews |
| Day 7 (Tuesday) | Close surveys and compile raw data |
Remove the conditions that create biased responses
Anonymity is the single most effective lever you have for collecting honest data. When employees believe their individual scores will reach their manager, they inflate ratings to avoid conflict. Use a survey platform that aggregates results and communicate that fact explicitly in your survey introduction. State the minimum group size you’ll report at, typically five respondents, so no individual can be identified from the results.
Bias also enters through how you frame questions and who delivers them. Avoid sending surveys from the project sponsor’s email address because it signals expected answers. Send from a neutral party, such as HR or the change management lead. For interviews, use a trained facilitator who holds no direct authority over the interviewees. Brief that facilitator to ask the same questions in the same order across every session without adding commentary that signals preferred responses. Standardizing your delivery removes the interviewer effect before it distorts your data.
Step 6. Score results and find root causes
Once your data collection window closes, resist the urge to jump straight to solutions. Raw scores without interpretation are just numbers. This step is where you convert those numbers into a clear picture of organizational readiness and trace each gap back to a specific, addressable cause.
Calculate dimension scores and your readiness index
Start by averaging all individual responses within each dimension to get a dimension score on a 1 to 5 scale. Then apply the weights you assigned in Step 3 to calculate your overall readiness index. Any dimension scoring below 3.0 warrants direct attention before launch. A score between 3.0 and 3.9 signals moderate risk that needs a targeted plan. Scores of 4.0 and above indicate adequate readiness in that area, though you should still monitor them.

Your overall readiness index is a decision-making tool, not a grade. Use it to prioritize action, not to judge your organization.
Use this scoring template to calculate your results:
| Dimension | Raw Avg Score (1-5) | Weight | Weighted Score |
|---|---|---|---|
| Leadership alignment | 3.4 | 25% | 0.85 |
| Employee awareness | 2.8 | 20% | 0.56 |
| Cultural fit | 3.1 | 20% | 0.62 |
| Process readiness | 3.7 | 15% | 0.56 |
| Organizational capacity | 2.6 | 10% | 0.26 |
| Communication infrastructure | 3.2 | 10% | 0.32 |
| Overall Readiness Index | — | 100% | 3.17 |
Break scores down further by stakeholder tier and department before drawing conclusions. A strong overall index can mask a critical gap in one specific group, and that group may be exactly the one whose resistance will derail your rollout.
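The index calculation in the table above can be reproduced directly: multiply each dimension's raw average by its weight, then sum. The sketch below uses the scores and weights from the table and applies the risk thresholds described earlier (below 3.0, 3.0 to 3.9, 4.0 and above):

```python
# (raw average score on the 1-5 scale, weight) per dimension,
# copied from the scoring table above.
dimensions = {
    "Leadership alignment": (3.4, 0.25),
    "Employee awareness": (2.8, 0.20),
    "Cultural fit": (3.1, 0.20),
    "Process readiness": (3.7, 0.15),
    "Organizational capacity": (2.6, 0.10),
    "Communication infrastructure": (3.2, 0.10),
}

def risk_band(score):
    """Classify a dimension score using the thresholds described above."""
    if score < 3.0:
        return "direct attention before launch"
    if score < 4.0:
        return "moderate risk - targeted plan"
    return "adequate - monitor"

index = sum(score * weight for score, weight in dimensions.values())
print(f"Overall readiness index: {index:.2f} / 5.0")

for name, (score, _) in dimensions.items():
    print(f"{name}: {score} -> {risk_band(score)}")
```

The weighted sum comes to 3.17, matching the bottom row of the table, and the per-dimension loop immediately flags employee awareness and organizational capacity as the two gaps needing direct attention before launch.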
Dig into the data to find root causes
Scores tell you where a gap exists, but they don’t automatically explain why. For any dimension scoring below 3.0 in your change readiness assessment, go back to the open-ended interview responses and look for patterns. If employee awareness scores low, the root cause might be inconsistent manager communication, unclear messaging, or insufficient lead time for information to reach frontline workers. Each explanation points to a different corrective action.
Use a simple root cause table to document your findings. For each gap, list the affected dimension, the score, the probable root cause based on qualitative data, and the type of intervention needed. This table becomes the direct input for your action plan in Step 7 and ensures you’re solving real problems rather than treating symptoms with generic training or emails.
Step 7. Turn gaps into a change action plan
Your root cause table from Step 6 gives you the raw material for a structured action plan that closes specific gaps before they become launch blockers. The goal of this step is not to produce a lengthy document but to assign a concrete intervention, an owner, and a deadline to every gap you identified. An action plan without those three elements is just a list of intentions.
Prioritize gaps by risk and effort
Not every gap you found in your change readiness assessment needs the same urgency or the same level of resources. Before you assign actions, rank each gap by the combination of its readiness score and the complexity required to close it. A low score that requires a single targeted communication campaign is less threatening than a low score rooted in a deeper cultural resistance that needs months of leadership modeling to resolve.

Closing a low-score gap with a high-effort fix before launch only makes sense if that gap will directly block adoption on day one.
Use this prioritization matrix to sort your gaps before you build interventions:
| Gap | Dimension | Score | Effort to Close | Priority |
|---|---|---|---|---|
| Frontline staff unaware of process changes | Employee awareness | 2.4 | Low | High |
| No feedback channel for questions | Communication infrastructure | 2.7 | Low | High |
| Leadership not aligned on scope | Leadership alignment | 2.9 | High | High |
| Training materials not finalized | Process readiness | 3.1 | Medium | Medium |
| Budget confirmed but not allocated | Organizational capacity | 3.3 | Medium | Medium |
Write targeted interventions for each gap type
Once you have your priorities set, build one specific intervention for each gap. Match the intervention type to the root cause, not just the dimension score. Skill gaps need training. Awareness gaps need communication. Structural gaps need a decision from leadership before any frontline action will hold.
Use this action plan template for each gap you need to close:
| Gap | Root Cause | Intervention | Owner | Deadline |
|---|---|---|---|---|
| Low awareness among shift supervisors | No structured briefing held yet | Deliver 30-minute briefing with Q&A for all supervisors | Operations Manager | 2 weeks pre-launch |
| No dedicated feedback channel | No channel assigned | Create shared email alias and post instructions on intranet | HR Director | 1 week pre-launch |
| Leadership messaging inconsistent | No shared talking points | Draft and distribute a one-page leader FAQ | Project Sponsor | 10 days pre-launch |
Each row in this table must have a named individual as owner, not a team or department. Shared ownership produces no accountability. Attach this completed table to your project plan and review it in every steering committee meeting until each item closes.
Step 8. Monitor adoption and reassess
Launching a change initiative does not end your responsibility for readiness. Organizational conditions shift after go-live as people encounter the reality of new processes, workloads adjust, and early wins or stumbles shape attitudes across the workforce. Treating your initial change readiness assessment as the only data point is one of the most common reasons post-launch adoption falls short of targets. This step builds the monitoring habit that keeps your action plan current and your initiative on track.
Track adoption metrics against your success criteria
Go back to the success criteria table you built in Step 1 and start measuring against it within the first two weeks of launch. Each criterion should already have an owner and a measurement method, so collecting the data is a matter of executing what you designed rather than improvising new measurement after the fact. Look for early warning signs in the numbers before they become entrenched patterns.
Use this weekly tracking template to log adoption data across the criteria you defined:
| Metric | Target | Week 2 Actual | Week 4 Actual | Week 8 Actual | Status |
|---|---|---|---|---|---|
| Process adherence rate | 85% | 62% | 74% | 88% | On track |
| Post-training pass rate | 80% | 71% | 83% | 87% | Resolved |
| Defect rate reduction | 15% | 4% | 9% | 14% | Watch |
| Leadership readiness score | 75/100 | 68 | 72 | 77 | On track |
Any metric that falls below 80% of its target at the Week 4 mark should trigger a root cause review using the same process you applied in Step 6, not a wait-and-see approach.
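The 80%-of-target trigger is easy to apply mechanically. The sketch below mirrors the Week 4 column of the tracking table (note that for the defect metric, the comparison is actual progress against targeted progress, both in percentage points):

```python
# Flag any metric below 80% of its target at the Week 4 checkpoint.
# Values mirror the Week 4 column of the tracking table above.
week4 = [
    ("Process adherence rate", 74, 85),
    ("Post-training pass rate", 83, 80),
    ("Defect rate reduction", 9, 15),
    ("Leadership readiness score", 72, 75),
]

needs_review = [
    name for name, actual, target in week4
    if actual / target < 0.80  # below 80% of target -> root cause review
]
print(needs_review)  # ['Defect rate reduction']
```

Running the same check at Weeks 2, 4, and 8 turns the wait-and-see temptation into a fixed escalation rule.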
Run your second assessment at the 30 to 60 day mark
Your second formal assessment should go out 30 to 60 days after launch, using the same questions and scoring model from your initial baseline. This lets you compare scores dimension by dimension rather than making qualitative judgments about whether things feel better. Dimension scores that stayed flat or dropped after launch signal that your initial action plan addressed the symptom but not the root cause.
A second assessment score that is lower than baseline in any dimension is a direct signal to escalate, not to wait for adoption to self-correct.
Send the second survey to the same stakeholder groups you surveyed initially. Share the before-and-after score comparison with leadership and the project team within one week of closing the collection window. Use those results to update your action plan, close out items that worked, and replace items that didn’t with revised interventions that address what the new data actually shows.

Next steps
You now have a complete, eight-step process for running a change readiness assessment that produces scores you can act on rather than data that sits in a folder. Each step connects directly to the next: your stakeholder map shapes your questions, your root cause analysis drives your action plan, and your second-round assessment tells you whether your interventions actually worked. The method only delivers value when you run it in sequence, not when you cherry-pick the pieces that feel easiest.
Your most important next move is to write your change statement and success criteria before anything else. That single step anchors every decision that follows and forces alignment among the people who need to sponsor the work. If your organization is preparing for a Lean or Six Sigma deployment and you want experienced guidance on building your readiness plan from the ground up, contact the Lean Six Sigma Experts team to talk through your specific situation.
