How to build a Roadmap – Publish

This post represents the last of the Road Map series I have shared with over 60,000 readers since it was introduced in March of 2011 at this humble little site alone.  I never would have thought this subject would attract so much interest and help so many over the last three years. Quite frankly I’m astonished at the interest and of course grateful for all the kind words and thoughts so many have shared with me.

The original intent was to share a time tested method to develop, refine, and deliver a professional roadmap producing consistent and repeatable results.  This should be true no matter how deep, wide, or narrow the scope and subject area we are working with. I think I have succeeded in describing the overall patterns employed.  The only regret I have is not having enough time and patience within the constraints of this medium to dive deeper into some of the more complex and trickier aspects of the delivery techniques. I remain pleased with the results given the little time I have had to share with all of you. And I sincerely hope that what has worked for me with great success over the years may help you make your next roadmap better.

This method works well across most transformation programs. As I noted earlier, most will struggle to find this in textbooks, classrooms, or in your local bookstore (I have looked, maybe not hard enough). This method is based loosely on the SEI-CM IDEAL model used to guide development of long-range integrated planning for managing software process improvement programs. Although the SEI focus is software process improvement, the same overall pattern can be applied easily to other subject areas like organization dynamics and strategy planning in general.

The Overall Pattern
At the risk of over-simplifying things, recall the overall pattern most roadmaps should follow is illustrated in the following diagram.

[Figure: RoadMap_Pattern – the overall road map pattern]

This may look overwhelming at first but represents a complete and balanced approach to understanding the implications of each action undertaken across the enterprise as part of a larger program.  You can argue (and I would agree) that this may not be needed for simple or relatively straightforward projects.  What is more likely in this case is that the project or activity we are discussing represents a component of a much larger initiative and will almost certainly not need its own roadmap at all. This post is focused on the bigger picture: a collection of projects and activities gathered together to organize and guide multiple efforts in a clear, directed manner.

Earlier posts in this series (How to Build a Roadmap) summarized the specific steps required to develop a well thought out road map. The method identified specific actions using an overall pattern all roadmaps should follow. The following steps (and related links to other posts) are required to complete this work:

  1. Develop a clear and unambiguous understanding of the current state
  2. Define the desired end state
  3. Conduct a Gap Analysis exercise
  4. Prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies
  5. Discover the optimum sequence of actions (recognizing predecessor – successor relationships)
  6. Develop and Publish the Road Map

This post wraps up all the hard work to date and assembles the road map to begin sharing the results with stakeholders.  Assuming all the prior work is completed, we are now ready to develop and publish the road map. How this is communicated is critical now. We have the facts, we have the path outlined, and we have a defensible position to share with our peers. We have the details readily available to support our position. Now the really difficult exercise rears its ugly head. Somehow, we need to distill and simplify our message into what I call the “Duckies and Goats” view of the world. In other words, we need to distill all of this work into a simplified yet compelling vision of how we transform an organization, or its enabling technology, to accomplish what is needed. Do not underestimate the difficulty of this task. After all the hard work put into an exercise like this, the last thing we need to do is confuse our stakeholders with mind-numbing detail. Yes, we need that detail for ourselves to exhaust any possibility we have missed something and to ensure we haven’t overlooked the obvious, because sometimes “when something is obvious, it may be obviously wrong”.  So what I recommend is a graphical one or two page view of the overall program where each project is linked to successive layers of detail. Each of these successive layers can be decomposed further if needed into the detailed planning products and supporting schedules. For an example of this see the accompanying diagram, which illustrates the concept.
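To make the layered drill-down concrete, here is a minimal sketch (in Python, purely illustrative) of a roadmap held as a tree of nodes, each optionally linking to deeper supporting detail. The class name, fields, and sample links are my own assumptions, not part of any tool mentioned in this series.

```python
from dataclasses import dataclass, field

@dataclass
class RoadmapNode:
    title: str
    detail_url: str = ""                      # link to deeper supporting detail
    children: list = field(default_factory=list)

    def outline(self, depth: int = 0) -> None:
        """Print this node and its children as an indented drill-down outline."""
        link = f" -> {self.detail_url}" if self.detail_url else ""
        print("  " * depth + self.title + link)
        for child in self.children:
            child.outline(depth + 1)

# One- or two-page summary at the top; each project links to its own detail.
program = RoadmapNode("Transformation Program", children=[
    RoadmapNode("Project A", "docs/project_a_plan.docx"),
    RoadmapNode("Project B", "docs/project_b_plan.docx"),
])
program.outline()
```

The simplified summary view is simply the top of the tree; stakeholders who want more follow the links down.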

[Figure: RoadMapExploded – program summary linked to successive layers of detail]

Develop the Road Map
Armed with the DELTA (current vs. desired end state), the prioritization effort (what should be done), and the optimum sequence (in what order) we can begin to assemble a sensible, defensible road map describing what should be done in what order.  Most of the hard work has already been completed so we should only be concerned at this point with the careful presentation of the results in a way our stakeholders will quickly grasp and understand.

Begin by organizing the high-level tasks and what needs to be accomplished using a relative time scale, usually more fine-grained for the first set of tasks, typically grouped into quarters.  Recall each set of recommended initiatives or projects has already been prioritized and sequenced (each of the recommended actions recognizes predecessor – successor relationships, for example).  If this gets too out-of-hand use a simple indexing scheme to order the program using groupings of dimension, priority, sequence, and date related values with your favorite tool of choice.  Microsoft Excel pivot tables work just fine for this, and will help organize this work quickly.  I use the MindJet MindManager product to organize the results into maps I can prune and graft at will.  Using this tool has some real advantages we can use later when we are ready to publish the results and create our detailed program plans.
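As a hedged illustration of that indexing scheme, here is a small sketch using pandas in place of Excel: each task carries dimension, priority, sequence, and quarter values, and a pivot table summarizes the program. All of the task data is invented, and the sortable quarter labels are simply a convenience.

```python
import pandas as pd

# Each task is indexed by dimension, priority, sequence, and quarter.
tasks = pd.DataFrame([
    {"task": "Business case",          "dimension": "Processes",  "priority": 1, "sequence": 1, "quarter": "2009-Q4"},
    {"task": "Align global strategy",  "dimension": "People",     "priority": 1, "sequence": 2, "quarter": "2009-Q4"},
    {"task": "Prepare infrastructure", "dimension": "Technology", "priority": 2, "sequence": 3, "quarter": "2009-Q4"},
    {"task": "Product attribution",    "dimension": "Processes",  "priority": 2, "sequence": 4, "quarter": "2010-Q1"},
])

# Order the program by quarter, priority, then sequence ...
plan = tasks.sort_values(["quarter", "priority", "sequence"])

# ... then pivot to count tasks per dimension per quarter, much as an
# Excel pivot table would.
summary = plan.pivot_table(index="dimension", columns="quarter",
                           values="task", aggfunc="count", fill_value=0)
print(summary)
```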

Each project (or set of tasks) should be defined by its goals, milestone deliveries, dependencies, and expected duration across relevant dimensions. For example, the dimensions you group by can include People and Organization, Processes, Technology and Tools, and External Dependencies.  The following illustrates a high-level view of an example Master Data Management roadmap organized across a multiple year planning horizon.
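A minimal sketch of such a project definition might look like the following; the field names and sample projects are assumptions of mine, and the ordering step uses a standard topological sort to respect the predecessor – successor relationships already established.

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter  # standard library, Python 3.9+

@dataclass
class Project:
    name: str
    goals: list
    milestones: list
    duration_weeks: int
    dimension: str                        # e.g. "Technology and Tools"
    dependencies: list = field(default_factory=list)

catalog = [
    Project("Business case", ["Secure funding"], ["Case approved"], 6, "Processes"),
    Project("Prepare infrastructure", ["Stand up the platform"], ["Environments ready"],
            10, "Technology and Tools", dependencies=["Business case"]),
]

# Order projects so every predecessor precedes its successors.
graph = {p.name: p.dependencies for p in catalog}
print(list(TopologicalSorter(graph).static_order()))
```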

[Figure: PIM_Hub_01 – MDM road map executive summary]

I think it is a good idea to assemble the larger picture first and then focus on the near term work proposed in the road map.  For example, taking the first quarter view of what needs to be accomplished from the executive summary above, we can see the first calendar quarter (in this case Q4 2009) of the road map is dedicated to completing the business case, aligning the global strategy, preparing the technical infrastructure for an MDM Product project, and gaining a better understanding of product attribution.  The following illustrates the tasks exploded from the summary into the near term map of what is needed in Q4 2009 (the first quarter of this program).

[Figure: PIM_Hub_02 – Q4 2009 near term map]

Publish the Road Map
At this stage everything we need is ready for publication, review by the stakeholders, and the inevitable refinements to the plan. I mentioned earlier using the MindJet MindManager tool to organize the program initiatives into maps.  This tool really comes in handy now to accelerate some key deliverables. Especially useful is the ability to link working papers, schedules, documentation, and any URL hyperlinks needed to support the road map elements. Many still prefer traditional documents (which we can produce with this tool easily) but the real power is quickly assembling the work into a web site that is context aware and a quite powerful way to drill from high level concepts to as much supporting detail as needed.  This can be easily accessed without the need for source tools (this is a zero footprint solution) by any stakeholder with a browser. The supporting documentation and URL content can be revised and updated easily without breaking the presentation surface when revisions or refinements are made to the original plan.  I also use the same tool and content to generate skeleton program project plans for use with MS Project. The plans generated can be further refined and used to organize the detailed planning products when ready. Your Program Management Office (PMO) will love you for this.
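As a rough sketch of the skeleton-plan idea (independent of MindManager itself), the following writes a simple task list as CSV, which Microsoft Project can bring in through its import wizard with a field map. The column names and tasks are illustrative assumptions rather than a guaranteed field mapping.

```python
import csv

# Skeleton task list; "Predecessors" references task row numbers.
tasks = [
    {"Name": "Business case",          "Duration": "30d", "Predecessors": ""},
    {"Name": "Align global strategy",  "Duration": "20d", "Predecessors": "1"},
    {"Name": "Prepare infrastructure", "Duration": "45d", "Predecessors": "1"},
]

with open("skeleton_plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "Duration", "Predecessors"])
    writer.writeheader()
    writer.writerows(tasks)
```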

I think you would agree this is an extremely powerful way to organize and maintain a significant amount of related program content to meet the needs of a wide variety of stakeholders.  An example of a Road Map web site is illustrated in the snapshot below (note, the client name has been blocked to protect their privacy).

[Figure: SiteImage_02 – example Road Map web site]

Results
So, we have assembled the roadmap using a basic pattern that can be used across any discipline (business or technology) to ensure an effective planning effort. This work is not an exercise to be taken lightly. We are after all discussing some real world impacts to come up with a set of actionable steps to take along the way that just make sense. Communicating the findings clearly through the road map meets the intent of the program management team, and the road map will be used in a variety of different ways.  For example, beyond the obvious management uses, consider the following ways this product will be used.

First, the road map is a vehicle for communicating the program’s overall intent to interested stakeholders at each stage of its planned execution.

  • For downstream designers and implementers, the map provides overall policy and design guidance. The map can be used to establish inviolable constraints (plus exploitable freedoms) on downstream development activities to promote flexibility and innovation if needed.
  • For project managers, the road map serves as the basis for work, product, and organization breakdown structures, planning, allocation of project resources, and tracking of progress by the various teams.
  • For technical managers, the road map provides the basis for forming development teams corresponding to the work streams identified.
  • For designers of other systems with which this program must interoperate, the map defines the set of operations provided and required, and the protocols that allow the interoperation to take place at the right time.
  • For resource managers, testers and integrators, the road map dictates the correct black-box behavior of the pieces that must fit together.

Secondly, the road map can be used as a basis for performing up-front analysis to validate (or uncover deficiencies in) design decisions and to refine or alter those decisions where necessary.

  • For the architect and requirements engineers who represent the customer(s), the road map is a framework and forum for negotiating and making trade-offs among competing requirements.
  • For the architect and component designers, the road map can be a vehicle for arbitrating resource contention and establishing performance and other kinds of run-time resource consumption budgets.
  • For those wanting to develop using vendor-provided products from the commercial marketplace, the road map establishes the possibilities for commercial off-the-shelf (COTS) component integration by setting system and component boundaries.
  • For performance engineers, the map can provide the formal model guidance that drives analytical tools such as rate schedulers, simulations, and simulation generators to meet expected demands at the right time.
  • For development product line managers, the map can help determine whether a potential new member of a product family is in or out of scope, and if out, by how much.

Thirdly, the road map is the first artifact used to achieve program and systems understanding.

  • For technical managers, the map becomes a basis for conformance checking, for assurance that implementations have in fact been faithful to the program and architectural prescriptions.
  • For maintainers, the map becomes a starting point for maintenance activities, revealing the relative dates and areas where a prospective change is planned to take place.
  • For new project members, the map should be the first artifact for becoming familiar with a program and system’s design intent.

This post wraps up all the hard work to date and assembles the road map to begin sharing with stakeholders impacted by the program as planned.  The original intent was to share a time tested method to develop, refine, and deliver a professional roadmap producing consistent and repeatable results.  I want to thank all of you for embarking on this adventure with me over the last couple of years. My sincere hope is that what has worked for me time after time may work just as well for you.

How to build a Roadmap – Gap Analysis

An earlier post in this series (How to Build a Roadmap) discussed the specific steps required to develop a well thought out road map. This method identified specific actions using an overall pattern ALL roadmaps should follow. The following steps are required to complete this work:

  1. Develop a clear and unambiguous understanding of the current state
  2. Define the desired end state
  3. Conduct a Gap Analysis exercise
  4. Prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies
  5. Discover the optimum sequence of actions (recognizing predecessor – successor relationships)
  6. Develop and Publish the Road Map

This post will discuss how to develop a robust gap analysis to identify any significant shortcomings between the current and desired end states.  We use these findings to begin developing strategy alternatives (and related initiatives) to address what has been uncovered. Our intent is to identify the difference (delta) from where we are to what we aspire to become. This exercise is critical to identify what needs to be accomplished.  The gap analysis leads to a well-organized set of alternatives and viable strategies we can use to complete the remaining work.

[Figure: Gap Analysis]

We are seeking a quick and structured way to define actionable activities to be reviewed and approved by all stakeholders. This includes identifying a set of related organizational, functional, process, and technology initiatives needed (I will address the architectural imperatives in another post). The gap closure recommendations provide a clear line of sight back to what needs to be accomplished to close the “delta” or gaps uncovered in the analysis.
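As a minimal sketch of computing that “delta”, assume current and desired states have been scored per category on the 0–5 scale used elsewhere in this series; the categories and scores below are invented for illustration.

```python
# Scores on a 0-5 maturity scale; categories and values are invented.
current = {"Organization": 2, "Policy": 1, "Process/Practice": 2, "Technology": 3}
desired = {"Organization": 4, "Policy": 4, "Process/Practice": 4, "Technology": 4}

gaps = {category: desired[category] - current[category] for category in current}

# Largest deltas first: these become candidates for gap closure strategies.
for category, delta in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{category}: gap of {delta}")
```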

Addressing the gaps discovered can be grouped across three broad categories to include specific actionable activities and management initiatives related to:

  • Building organizational capability,
  • Driving organizational commitment, and
  • Right-Fitting the solution to ensure we do not try to build a system whose complexity exceeds the organization’s capability to deliver

While every organization’s “reach should always exceed its grasp” we also should understand the need to introduce this new discipline in a measured and orderly manner.  There are many good (and a little too academic for me) books and writers who have addressed this topic well. What is different about my approach is my need to identify specific actionable activities we undertake in a consistent way to change into what we aspire to be.

Gap Analysis

There are a couple of ways to do this. Remember all three dimensions referred to above should be considered in this analysis. A helpful outline should include questions related to each of the three aspects in our work.  Note, this is a generalized example, and represents three distinct entry points to evaluate.  Further exploration into any of these three is usually needed for developing the detailed planning products in later phases. Using this approach we can attempt to reveal quickly where significant gaps exist and explore further as needed. So let us focus on the key themes early to guide our work moving forward.

Organizational Capability

Is the organization ready to embrace the initiative?

  • Have we identified baseline adoption of fundamental management capability across both business and technical communities that can be leveraged to drive the effort?
  • Is there executive consensus? and
  • Do plans exist to execute immediate actions to address missing operational capability (gaps)?

Is there evidence of detailed planning to stage and execute the transformation?

  • Have we consciously chosen maturity jumps in a measured and controlled manner?
  • Do we understand the expected change in process consistency, discipline, and complexity may require some deep cultural shifts to occur? and
  • Have we clearly articulated the associated operational impacts to the stakeholders?

Are capability-based plans included or needed at this time?

  • Have we accounted for internal and external bandwidth in capability and core competency?
  • Have we factored in enough time to stabilize the management foundation and professional community?
  • Are there critical dependencies with other ongoing programs?
  • Is there an effort underway to secure the participation of critical roles and key personnel?

Organizational Commitment

Is there evidence that marketing the compelling vision is occurring in an organized manner?

  • Is there an effort to quantify and repeatedly communicate the value to the organization?
  • Has the “what’s in it for me” messaging for critical stakeholders been developed?
  • Are goals and objectives communicated in a consistent, repeatable manner?

Is there a need to proactively Manage Stakeholder Buy-In?

  • Have we created opportunities for stakeholder involvement?
  • Do we need to design quantitative usage metrics? and
  • Are there incentives to align and reward desired behavior?

Do we need to develop and enhance change leadership?

  • Develop managers’ communication skills,
  • Develop expectation and capacity management skills,
  • Assign dedicated transition management resources to the effort.

Are strong Governance and Oversight roles accepted and adopted as a critical success factor?

  • Is there active executive sponsor involvement?
  • Have we defined performance outcomes to direct and track success?  and
  • Are line and staff managers accountable for progress to plan?

Are goals and objectives communicated in a consistent, repeatable manner?

  • Do we need to institute a comprehensive and open communication plan that publishes program information to the organization in a consistent manner?

Right-Fitting the Technical Solution

Has the operating model been defined?

  • Ensure that introducing and adopting new processes are aligned to business intent.
  • Balance the trade-offs between structure and process,
  • Formally assign decision rights,
  • Have the new roles been defined where needed? and
  • Can we leverage reuse of existing assets where possible?

What about developing necessary Management and Business User Skills? 

  • Enhance domain specific skills,
  • Improve decision management, and
  • Adopt and refine fundamental technical and business skills related to operations

Are there significant gaps in the current architecture or environment that would prevent successful delivery? 

  • Information, Application, and Technical architecture can support the desired end state
  • Required ITIL (Information Technology Infrastructure Library) Service and Support practices exist, and
  • Complexity of the solution will not exceed the organization’s technical ability to deliver

The gap closure strategy should include specific recommendations for building organizational capability and driving commitment to this effort. In addition, we should right-fit a technical solution the organization can grow into as it achieves widespread adoption across the enterprise.  The approach is carefully weighed to align the three perspectives, ensuring our gap closure strategy recommendations are not used to build a system whose complexity exceeds the organization’s capability to deliver.

A typical gap analysis sequence starts with an understanding of the strategy as defined. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and Processes further refine reward systems and policy. Beginning with strategy we uncover gaps where a shared set of goals (and related objectives) may not align with the desired end state. This gap analysis can be organized around the following categories:

  • People/Organization considers the human side of Information Management, looking at how people are measured, motivated and supported in related activities.  Those organizations that motivate staff to think about information as a strategic asset tend to extract more value from their systems and overcome shortcomings in other categories.
  • Policy considers the message to staff from leadership.  The assessment considers whether staff are required to administer and maintain information assets appropriately and whether there are consequences for inappropriate behaviors.  Without good policies and executive support it is difficult to promote good practices even with the right supporting tools.
  • Process and Practice considers whether the organization has adopted standardized approaches to Information Management.  Even with the right tools, measurement approaches and policies, information assets cannot be sustained unless processes are consistently implemented.  Poor processes result in inconsistent data and a lack of trust by stakeholders.
  • Technology covers the tools that are provided to staff to properly meet their Information Management duties.  While technology on its own cannot fill gaps in the information resources, a lack of technological support makes it impractical to establish good practices.

How it Works – An Example

The questions in the quick scan tool used in a prior post to define our end state were organized around six (6) key groups to include Organization, Policy, Technology, Compliance, Measurement, and Process/Practice.  This is a quick way to summarize our findings and provide valuable clues and direction for further investigation.  We can then focus on specific subject areas using detailed schedules based on the field work to date.  Based on the subject gaps uncovered at the higher level summary (Current vs. Desired End State) further investigation should be performed by a professional with deep subject matter expertise and intimate knowledge of generally accepted best practices.  In fact it is best to use prepared schedules in the early field work (if possible) to begin gathering and compiling the facts needed during the interview processes to mitigate “churn” and unnecessary rework.

For example, in the Process/Practice area we can use the Information Technology Infrastructure Library (ITIL) to uncover any significant gaps in the Service and Support delivery functions needed to support the defined end state.  Detailed schedules can be compiled and the organization’s current state evaluated against this Library and other best practices to ensure the necessary process and practices are in place to enable the proposed end state solution.

ITIL Service Delivery

  • Service Level Management
  • Capacity Management
  • Availability Management
  • Service Continuity Management
  • Financial Management

ITIL Service Support

  • Incident Management
  • Problem Management
  • Configuration Management
  • Change Management
  • Release Management

The following fragment illustrates an example schedule related to Service Continuity Management.  Using this schedule we capture our findings, suggested initiatives or projects, expected deliverables, level of effort, and relative priority of the gap identified.  This is a quick way to summarize our findings and prepare for the next step (4- Prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies).

[Figure: Service Continuity Management gap schedule]
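As a minimal sketch of what one row of such a schedule captures, here is an illustrative record with the fields described above (finding, suggested initiative, deliverable, effort, and priority); the field names and example values are assumptions, not the actual schedule.

```python
from dataclasses import dataclass

@dataclass
class GapScheduleEntry:
    practice: str          # e.g. an ITIL Service Delivery practice
    finding: str
    initiative: str
    deliverable: str
    effort: str            # e.g. "S", "M", "L"
    priority: int          # 1 = highest

entry = GapScheduleEntry(
    practice="Service Continuity Management",
    finding="No documented recovery plan for the MDM hub",
    initiative="Develop and test a service continuity plan",
    deliverable="Approved continuity plan and test results",
    effort="M",
    priority=1,
)
print(entry)
```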

In another example, this small fragment from a Master Data Management (MDM) related gap schedule addresses specific Data Profiling activities expected within the context of a Party or Customer supporting function. What is clear from this schedule is that no evidence of profiling has been found. This is a significant gap in the MDM domain. We should have some idea of the relative quality of the data sourced into our platform and be able to keep our customers informed as to what level of confidence they should expect based on this analysis. This represents a clear gap and should be addressed in the roadmap we will develop in later stages.

[Figure: MDM Data Profiling gap schedule]

Results

I think you can see this is a valuable way to quickly gather and compile field work and capture a fairly comprehensive view of the gaps uncovered between the current and desired end states of the domain in question. Armed with this information we can now proceed to step four (4) and begin to prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies.

This is an invaluable way to assemble and discover the optimum sequence of actions (recognizing predecessor – successor relationships) as we move to developing the road map. The difference (delta) between the two states (current and desired end state) is the basis for our road map.  I hope this has answered many of the questions about step three (3), Conduct a Gap Analysis exercise. This is not the only way to do this, but it has become the most consistent and repeatable method I’m aware of to perform a gap analysis quickly in my practice.

How to build a Roadmap – Define End State

An earlier post (How to Build a Roadmap) discussed the specific steps required to develop a well thought out road map. This method identified specific actions using an overall pattern ALL roadmaps should follow. The steps required to complete this work are:

  1. Develop a clear and unambiguous understanding of the current state
  2. Define the desired end state
  3. Conduct a Gap Analysis exercise
  4. Prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies
  5. Discover the optimum sequence of actions (recognizing predecessor – successor relationships)
  6. Develop and Publish the Road Map

I have discussed a way to quickly complete step one (1), Define Current State. This post will discuss how to craft a suitable desired End State definition so we can use the results from the Current State work and begin our gap analysis. Our intent is to identify the difference (delta) from where we are to what we aspire to become. I know this seems obvious (and maybe a little redundant). This baseline is critical to identify what needs to be accomplished to meet the challenge.

The reality of organizational dynamics and politics (we are human after all) can distort the picture we are seeking here and truly obscure the findings. I think this happens in our quest to preserve the preferred “optics”. This is especially so when trying to define our desired end state. The business will have a set of strategic goals and objectives that may not align with the individuals we are collaborating with to discover what the tactical interpretation of this end state really means. We are seeking a quick and structured way to define a desired end state that can be reviewed and approved by all stakeholders when this activity gets underway.  The tactical realization of the strategy (and objectives) is usually delegated (and rightly so) to front line management. The real challenge is eliciting, compiling, and gaining agreement on what this desired end state means to each of the stakeholders. This is not an easy exercise, and it demands a true mastery of communication and facilitation skills many are not comfortable with or have not exercised on a regular basis.  A clear understanding of the complex interactions within any organization (and their unintended consequences) is critical to grasping the desired end state.

Define Desired End State

There are a couple of ways to do this. One interesting approach I have seen is to use the Galbraith Star Model as an organizational design framework. The model is developed within this framework to understand what design policies and guidelines will be needed to align organizational decision making and behavior. The Star model includes the following five categories:

  • Strategy: Determine direction through goals, objectives, values and mission. It defines the criteria for selecting an organizational structure (for example functional or balanced Matrix). The strategy defines the ways of making the best trade-off between alternatives.
  • Structure: Determines the location of decision making power. Structure policies can be subdivided into: specialization: type and number of job specialties; shape: the span of control at each level in the hierarchy; distribution of power: the level of centralization versus decentralization; departmentalization: the basis to form departments (function, product, process, market or geography).
  • Processes: The flow of information and decision processes across the proposed organization’s structure. Processes can be either vertical through planning and budgeting, or horizontal through lateral relationships (matrix).
  • Reward Systems: Influence the motivation of organization members to align employee goals with the organization’s objectives.
  • People and Policies: Influence and define employee’s mindsets and skills through recruitment, promotion, rotation, training and development.

The preferred sequence in this design process is as follows:

  • strategy;
  • structure;
  • key processes;
  • key people;
  • roles and responsibilities;
  • information systems (supporting and ancillary);
  • performance measures and rewards;
  • training and development; and
  • career paths.
[Figure: Strategy Model]

A typical design sequence starts with an understanding of the strategy as defined. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and Processes further refine reward systems and policy. Beginning with Strategy we uncover a shared set of goals (and related objectives) to define the desired end state, organized around the following categories:

  • People/Organization considers the human side of Information Management, looking at how people are measured, motivated and supported in related activities.  Those organizations that motivate staff to think about information as a strategic asset tend to extract more value from their systems and overcome shortcomings in other categories.
  • Policy considers the message to staff from leadership.  The assessment considers whether staff is required to administer and maintain information assets appropriately and whether there are consequences for inappropriate behaviors.  Without good policies and executive support it is difficult to promote good practices even with the right supporting tools.
  • Process and Practice considers whether the organization has adopted standardized approaches to Information Management.  Even with the right tools, measurement approaches and policies, information assets cannot be sustained unless processes are consistently implemented.  Poor processes result in inconsistent data and a lack of trust by stakeholders.
  • Technology covers the tools provided to staff to properly meet their Information Management duties.  While technology on its own cannot fill gaps in the information resources, a lack of technological support makes it impractical to establish good practices.

Goal Setting

Goal setting is a process of determining what the stakeholders’ goals are, working towards them, and measuring progress to plan. A generally accepted process for setting goals uses the SMART acronym (Specific, Measurable, Achievable, Realistic, and Time Bound). Each of these attributes related to the goal setting exercise is described below.

  • Specific: A specific goal has a much greater chance of being accomplished than a general goal.  Don’t “boil the ocean” and try to remain as focused as possible. Provide enough detail so that there is little or no confusion as to what exactly the stakeholder should be doing.
  • Measurable: Goals should be measurable so we can measure progress to plan as it occurs. A measurable goal has an outcome that can be assessed either on a sliding scale (1-10), or as a hit or miss, success or failure. Without measurement, it is impossible to sustain and manage the other aspects of the framework.
  • Achievable: An achievable goal has an outcome that is realistic given the organization’s capability to deliver with the necessary resources and time. Goal achievement may be more of a “stretch” if the outcome is more difficult to begin with. Is what we are asking of the organization possible?
  • Realistic: Start small and remain sharply focused on what the organization can and will do, and let the stakeholders experience the joys of meeting their goals.  Gradually increase the intensity of the goal after having a discussion with the stakeholders to redefine the goal.  Is our goal realistic given the budget and timing constraints?  If not, then we might want to redefine the goal.
  • Time Bound: Set a timeframe for the goal: for next quarter, in six months, by one year. Setting an end point for the goal gives the stakeholders a clear target to achieve.  Planning follow-up should occur within a 6-month period (best practice) but may occur within a one year period or sooner based on progress to plan.
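As a small illustration of the attributes that can be checked mechanically (Achievable and Realistic remain judgment calls with the stakeholders), consider the following sketch; the fields and rules are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    statement: str
    metric: str = ""        # how progress to plan will be measured
    timeframe: str = ""     # e.g. "next quarter", "six months"

def smart_issues(goal: Goal) -> list:
    """Flag the SMART attributes a goal obviously fails to state."""
    issues = []
    if len(goal.statement.split()) < 5:
        issues.append("Specific: the statement may be too vague")
    if not goal.metric:
        issues.append("Measurable: no measure of progress defined")
    if not goal.timeframe:
        issues.append("Time bound: no timeframe set")
    # Achievable and Realistic remain conversations, not checks.
    return issues

print(smart_issues(Goal("Improve product data quality")))
```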

Defining the desired end state is accomplished through a set of questions used to draw participants into the process to meet our SMART objectives.  This set of questions is compiled, evaluated, and presented in a way that is easy to understand. Our goal here is to help everyone participating in the work to immediately grasp where the true gaps or shortcomings exist and why this is occurring when we get to step three (3) in the gap analysis phase.  This is true whether we are evaluating Information Strategy, assessing our readiness to embrace a SOA initiative, or launching a new business initiative. We can complete the design process by using a variety of tools and techniques. I have used IDEF, BPMN, and other process management methods and tools (including RASIC charts describing roles and responsibilities, for example). Whatever tools you elect to use, they should effectively communicate intent and be used to validate changes with the stakeholders who must be engaged in this process.

Now this is where many of us come up short.  Where do I find the questions to help drive SMART goals? How do I make sure they are relevant? What is this engine I need to compile the results? And how do I quickly compile the results dynamically and publish for comment every time I need to?

One of the answers for me came a few years ago when I first saw the MIKE 2.0 quick assessment engine for Information Maturity. The Information Maturity (IM) Quick Scan is the MIKE 2.0 tool used to assess current and desired Information Maturity levels within an organization. This survey instrument is broad in scope and is intended to assess enterprise capabilities as opposed to focusing on a single subject area. Although this instrument focuses on Information Maturity I realized quickly I had been doing something similar for years across many other domains. The real value here is in the open source resource you can use to kick start your own efforts.  I think it is also a good idea to become familiar with the benchmarks and process classification framework the American Productivity and Quality Center (APQC) has made available for a variety of industries. The APQC is a terrific source for discovering measures and quantifiable metrics useful for meeting the need for specific, measurable objectives to support the end state definition.

How it Works

The questions in the quick scan are organized around six (6) key groups in this domain to include Organization, Policy, Technology, Compliance, Measurement, and Process/Practice.  The results are tabulated based on responses (in the case of the MIKE 2.0 template) ranging from zero (0 – Never) to five (5 – Always).  Of course you can customize the responses; the real point here is that we want to quantify the responses received.  The engine component takes the results, builds a summary, and produces accompanying tabs where radar graph plots present the Framework, Topic, Lookup, # Questions, Total Score, Average Score, and Optimum within each grouping.  The MS Word document template then links to this worksheet and grabs the values and radar charts produced to assemble the final document. If all this sounds confusing, please grab the templates and try them for yourself.
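If it helps to see the tabulation step outside the spreadsheet, here is a minimal sketch in Python: responses scored 0 (Never) to 5 (Always) are summarized per group as question count, total, average, and optimum, mirroring the summary fields named above. The groups and responses are invented.

```python
# Each group holds its question responses on the 0-5 scale.
responses = {
    "Organization":     [3, 2, 4, 1],
    "Policy":           [1, 2, 2],
    "Process/Practice": [4, 3, 3, 5],
}

for group, scores in responses.items():
    n = len(scores)
    total = sum(scores)
    # Optimum is the score if every question had been answered "Always" (5).
    print(f"{group}: questions={n}, total={total}, "
          f"average={total / n:.1f}, optimum={5 * n}")
```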

[Figure: Define Desired End State Model]

The groupings (and related sub-topics) are organized out of the box to include the following perspectives:

  • Compliance
  • Measurement
  • People/organization
  • Policy
  • Process/Practice
  • Technology

Each of these perspectives is summarized and combined into a MS Word document to present to the stakeholders.  The best part of this tool is it can be used periodically to augment quantitative measures (captured in a dashboard for example) to assess progress to plan and improvement realized over time. Quantifying improvement quickly is vital to continued adoption of change. Communicating the results to stakeholders in a quick, easy-to-understand format they are already familiar with is just as important, using the same consistent, repeatable tool we used to define the current state.

Results

I think you can see this is a valuable way to reduce complexity and to gather, compile, and present a fairly comprehensive view of the desired end state of the domain in question. Armed with this view we can now proceed to step three (3) and begin to conduct the Gap Analysis exercise. The difference (delta) between these two (current and desired end state) becomes the basis for our road map development.  I hope this has answered many of the questions about step two (2), Define End State. This is not the only way to do this, but it has become the most consistent and repeatable method I’m aware of to define a desired end state quickly in my practice.  Understanding the gaps between the current and the desired end-state across the business, information, application, and technical architecture makes development of a robust solution delivery road map possible.

Modeling the MDM Blueprint – Part IV

In parts II and III of this series we discussed the Common Information and Canonical Models. Because MDM is a business project we need to establish a common set of models that can be referenced independent of the technical infrastructure or patterns we plan on using. Now it is time to introduce the Operating Model into the mix to communicate how the solution will actually be deployed and used to realize the benefits we expect with the business in a meaningful way.

This is the most important set of models you will undertake. Sadly, it is rarely accounted for in practice “in the wild”, meaning rarely seen, much less achieved. This effort describes how the organization will govern, create, maintain, use, and analyze consistent, complete, contextual, and accurate data values for all stakeholders.

There are a couple of ways to do this. One interesting approach I have seen is to use the Galbraith Star Model as an organizational design framework. The model is developed within this framework to understand what design policies and guidelines will be needed to align organizational decision making and behavior within the MDM initiative. The Star model includes the following five categories:

Strategy:
Determine direction through goals, objectives, values and mission. It defines the criteria for selecting an organizational structure (for example functional or balanced Matrix). The strategy defines the ways of making the best trade-off between alternatives.

Structure:
Determines the location of decision making power. Structure policies can be subdivided into:
– specialization: type and number of job specialties;
– shape: the span of control at each level in the hierarchy;
– distribution of power: the level of centralization versus decentralization;
– departmentalization: the basis to form departments (function, product, process, market or geography).

In our case this will really help when it comes time to designing the entitlement and data steward functions.

Processes:
The flow of information and decision processes across the proposed organization’s structure. Processes can be either vertical through planning and budgeting, or horizontal through lateral relationships (matrix).

Reward Systems:
Influence the motivation of organization members to align employee goals with the organization’s objectives.

People and Policies:
Influence and define employee’s mindsets and skills through recruitment, promotion, rotation, training and development.

Now before your eyes glaze over, I’m only suggesting this be used as a starting point. We are not originating much of this thought capital, only examining the impact the adoption of MDM will have on the operating model within this framework. And more importantly identifying how any gaps uncovered will be addressed to ensure this model remains internally consistent. After all, we do want to enable the kind of behavior we expect in order to be effective, right? A typical design sequence starts with an understanding of the strategy as defined. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and Processes define the implementation of reward systems and people policies.

The preferred sequence in this design process is as follows:
a – strategy;
b – structure;
c – key processes;
d – key people;
e – roles and responsibilities;
f – information systems (supporting and ancillary);
g – performance measures and rewards;
h – training and development;
i – career paths.

The design process can be accomplished using a variety of tools and techniques. I have used IDEF, BPMN, and other process management methods and tools (including RASIC charts describing roles and responsibilities, for example). Whatever tools you elect to use, they should effectively communicate intent and be used to validate changes with the stakeholders who must be engaged in this process. Armed with a clear understanding of how the Star model works we can turn our attention to specific MDM model elements to include:

Master Data Life Cycle Management processes
– Process used to standardize the way the asset (data) is used across an enterprise
– Process to coordinate and manage the lifecycle of master data
– How to understand and model the life-cycle of each business object using state machines (UML) – see the sketch after this list
– Process to externalize business rules locked in proprietary applications (ERP) for use with Business Rules Management Systems (BRMS) (if you are lucky enough to have one)
– Operating Unit interaction
– Stewardship (Governance Model)
– Version and variant management, permission management, approval processes.
– Context (languages, countries, channels, organizations, etc.) and inheritance of reference data values between contexts
– Hierarchy management
– Lineage (historical), auditability, traceability
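As promised above, here is a minimal sketch of a business object life-cycle modeled as a simple state machine; the states, events, and transitions are illustrative assumptions for a master data record, not a prescribed model.

```python
# (current state, event) -> next state; invented for illustration.
TRANSITIONS = {
    ("draft",  "approve"): "active",
    ("active", "revise"):  "draft",
    ("active", "retire"):  "retired",
}

class MasterRecord:
    def __init__(self):
        self.state = "draft"

    def fire(self, event: str) -> None:
        """Apply an event; raise if it is not valid in the current state."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"'{event}' not allowed in state '{self.state}'")
        self.state = TRANSITIONS[key]

record = MasterRecord()
record.fire("approve")
print(record.state)  # active
```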

I know this seems like a lot of work. Ensuring success and widespread adoption of Master Data Management mandates this kind of clear understanding and shared vision among all stakeholders. We do this to communicate how the solution will actually be deployed and used to realize the benefits we expect.

In many respects this is the business equivalent to the Technical Debt concept Ward Cunningham developed (we will address this in the next part on Reference Architecture) to help us think about this problem. Recall this metaphor means doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choices we have made. The same concept applies to this effort. The most elegant technical design may be the worst possible fit for the business. The interest due in a case like this is, well, unthinkable.

Take the time to get this right. You will be rewarded with enthusiastic and supportive sponsors who will welcome your efforts to achieve success within an operating model they understand.

The Business Case for Enterprise Architecture

We have the frameworks. Plenty of them: Zachman, IEEE STD 610.12, FEAF (Federal Enterprise Architecture Framework), and now an updated TOGAF (Open Group Architecture Framework), which you can find more about at http://www.opengroup.org/togaf.

We know what to do. All of us know what should be done and have a pretty good idea of how to do it. For example, see what the GAO has institutionalized in their view of our profession in the CIO Council publication “A Practical Guide to Federal Enterprise Architecture” (February 2001, Version 1.0).

Why do it?
Now comes the hard part. How about a business case for investing in this discipline, expressed in terms ordinary business people can understand?  I have yet to find (I have looked) a really compelling piece of work that can express quickly what impact EA would have across five essential elements every business leader understands and manages on a daily basis:

– cash flow
– margin (profitability)
– velocity
– growth
– customer (responsiveness, intimacy, and alignment)

I’m guessing this is because we are all technicians at heart. I have also come to believe this is essential to having our peers in the business understand our value proposition. I remain convinced this discipline can (done properly) enable significant improvements in each of our organizations.  Any thoughts or suggestions are welcome; I encourage all of us in this profession to evaluate and think carefully about what our real value is – as business people first, then as trusted advisors and partners to our business and IT colleagues alike.

Priceless…

25 years, 170+ million customers affected, 5+ billion transactions processed…

Sounds like the old McDonald’s marquee tote board
(for those old enough to remember when they used to count burgers sold chain-wide).

[Photo: february-2009-002_5]

Actually everyone you see in this picture has had a hand in many of the products and services you use every day and have come to rely on. Like shipping freight across the world, using a credit card, paying an electric bill, caring for a loved one in a hospital, or helping our young people protect this country of ours. I never really thought about this until we all got together for one evening last week to laugh about old times. And then it dawned on me: the collective contribution this group has made to so many lives. Call us clueless geeks or hopeless loons, but I for one count myself lucky to have had the good fortune to be associated with each and every one of them. They have enriched my life in so many ways. And the best part? I still get to see them every now and then and raise a glass or two to old times.

Priceless…

Decentralized Enterprise Architecture: Can It Actually Work?

Interesting read over at Architecture & Governance Magazine (http://www.architectureandgovernance.com/) written by Raghavendra (Rao) Subbarao (Raghavendra.subbarao@gmail.com) as “Decentralized Enterprise Architecture: Can It Actually Work?”. Think there are some good points made here for the case for and against centralized versus de-centralized operating models. I have discovered through bitter experience that a Federated model probably works the best for a number of reasons, but this is usually not very comfortable for most corporate senior management to embrace. Use of balanced matrix organization is a little more complex than the classic command and control models most are familiar with. But it does put hands-on practitioners where they are needed the most (in the field solving for real problems) and enables the kind of business relationships and trust needed to ensure the sum of the parts is greater than the whole.  No one likes to live with Ivory Tower dictates no matter how compelling a case can be made for them. I think we all agree that collaboration among peers leads to widespread use and adoption. This is the magic. And the true difference between time consuming confrontations over who holds the power of decision or the energetic, creative empowerment of all contributors that really gets things done. There is much more to this, but the argument should be shifted to understanding how to make a Federated model work. The truth is the solution is not in selecting a centralized vs. decentralized model. It is neither.