How to Build a Roadmap – Gap Analysis Update

I have received a number of requests about the tools and methods used to complete the gap analysis steps from earlier posts in the series How to Build a Roadmap. In that series I discussed the specific steps required to develop a well thought out road map, and one of the key tasks was conducting a gap analysis exercise. Understanding the limits of a medium like this, I have posted this update to explore those questions in a little more detail. I believe it will be extremely useful to anyone building a meaningful road map. The internet is full of simply awful templates and tools, ranging from the downright silly to the dangerously simplistic, where there is no attempt to quantify results. Even more distressing is the lack of understanding of how to use and leverage the best data source you already have – the professionals within your own organization. Save yourself some time and read on.

Recall that the road map development identified specific actions using an overall pattern ALL road maps should follow. The steps required to complete this work are:

  1. Develop a clear and unambiguous understanding of the current state
  2. Define the desired end state
  3. Conduct a Gap Analysis exercise
  4. Prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies
  5. Discover the optimum sequence of actions (recognizing predecessor – successor relationships)
  6. Develop and Publish the Road Map

The Gap Analysis step discussed how to develop a robust analytic to find any significant shortcomings between the current and desired end states. We use these findings to begin developing strategy alternatives (and related initiatives) to address what has been uncovered. Our intent is to identify and quantify the difference (delta) between where we are and what we aspire to become. This exercise is critical to finding what needs to be accomplished. The gap analysis leads to a well-organized set of alternatives and practical strategies we can use to complete the remaining work. You can review the full post here.

Gap Analysis


The goal? A quick and structured way to define actionable activities to be reviewed and approved by all stakeholders. We would like to focus on the important actions requiring attention. This includes identifying the set of related organizational, functional, process, and technology initiatives needed. The gap closure recommendations give a clear line of sight back to what needs to be accomplished to close the “delta” or gaps uncovered in the analysis.

What is needed, then, is a consistent, repeatable way to evaluate quickly where an organization is, where it wants to go, and the level of effort needed to accomplish its goals with some precision. In short, the delta between the current and desired states is uncovered, quantified, and ready for a meaningful road map effort based on factual findings supported by real evidence captured in the field. Performing a successful gap analysis begins with defining what you are analyzing, which could be processes, products, a region, or an entire organization. Even at the overall organizational level, knowing which aspect you are analyzing is crucial to framing the intent and findings of the effort. Quickly focusing at the desired level of detail means we can now:

  • Know where to go; what really needs attention
  • Pinpoint opportunity…quickly.
  • Uncover what is preventing or holding back an important initiative
  • Know what to do – and in what suggested order

This is where problem solving using some quick management diagnostic tools can be applied across the variety of challenges met when developing a road map. Using these tools to perform the gap analysis delivers quick, distinctive results and provides the key data and actionable insight needed to develop a meaningful road map. This method (and the tools) can be used to:

  • evaluate capability using a generally accepted maturity model specific to the business,
  • focus on a specific subject area or domain; Master Data Management or Business Intelligence are two examples where known, proven practice can be used as a starting point to support the findings compiled,
  • assess the readiness of an important program or evaluate why it is in trouble,
  • evaluate and uncover root cause issues with a struggling project,
  • detect and measure what requires immediate attention by uncovering weaknesses where proven practice has not been followed or adopted.

Quick Management diagnostic tools
The tool set I use follows the same general pattern and structure; only the content or values differ based on how focused the effort is and what is needed to complete the work successfully. The questions, responses, and data points gathered and compiled are usually organized in a structured taxonomy of topics. See the earlier post (Define End State) for more on this. The key is using the same engine to tabulate values based on responses that can range from zero (0 – Never or No) to five (5 – Always or Yes). Of course you can customize the responses. In fact I have done this with a Program Readiness Assessment and a Big Data Analytics tool. The real point is to quantify the responses received. The engine component takes the results, builds a summary, and produces accompanying tabs where radar graph plots present the Framework, Topic, Lookup, # Questions, Current State Scores, Desired End State Scores, and common statistical results within each grouping. The tool can be extended to include MS Word document templates which link to the findings worksheet and grab the values and charts produced to assemble a draft document ready for further editing and interpretation. If all this sounds confusing, a couple of examples may be helpful.
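To make the engine idea concrete, here is a minimal sketch in Python of the tabulation step. The field names, sample questions, and scores are illustrative assumptions rather than the actual survey library or spreadsheet engine described above; the point is simply how individual responses roll up into per-topic current, desired, and gap values.

```python
from collections import defaultdict
from statistics import mean

# Illustrative survey responses on the 0-5 scale described later in this post
responses = [
    {"framework": "Data Governance", "topic": "Business Glossary",
     "question": "Policy mandating use of the business glossary?", "current": 1, "desired": 4},
    {"framework": "Data Governance", "topic": "Business Glossary",
     "question": "Glossary accessible to all stakeholders?", "current": 2, "desired": 5},
    {"framework": "Data Quality", "topic": "Profiling",
     "question": "Is data profiled before integration?", "current": 0, "desired": 3},
]

def summarize(responses):
    """Tabulate current vs. desired scores for each framework/topic grouping."""
    groups = defaultdict(list)
    for r in responses:
        groups[(r["framework"], r["topic"])].append(r)

    summary = []
    for (framework, topic), rows in groups.items():
        current = mean(r["current"] for r in rows)
        desired = mean(r["desired"] for r in rows)
        summary.append({
            "framework": framework,
            "topic": topic,
            "questions": len(rows),
            "current": round(current, 2),
            "desired": round(desired, 2),
            "gap": round(desired - current, 2),  # the delta the road map must close
        })
    return summary

for row in summarize(responses):
    print(row)
```

The same per-topic summary is what feeds the radar graphs and statistics tabs described above.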

Using the Data Maturity Model (CMMI) to Evaluate Capability
The Data Maturity Model (DMM) was developed using the principles and structure of CMMI Institute’s Capability Maturity Model Integration (CMMI)—a proven approach to performance improvement and the gold standard for software and systems development for more than 20 years. The DMM model helps organizations become more proficient in managing critical data assets to improve operations, enable analytics and gain competitive advantage.

Using this body of knowledge and a library of source questions we can elicit current state and desired end state responses using a simple survey. This can be conducted online, in workshops, or in traditional interviews as needed. The responses are compiled and grouped to evaluate the gap closure opportunities for an organization wishing to improve its data management practices by identifying and taking action to address the shortcomings or weaknesses identified. The framework and topic structure of the 142 questions is organized to match the DMM model.

Looking closer we find the nine (9) questions used to elicit responses related to Business Glossaries within the Data Governance topic.

1) Is there a policy mandating use and reference to the business glossary?
2) How are organization-wide business terms, definitions, and corresponding metadata created, approved, verified, and managed?
3) Is the business glossary promulgated and made accessible to all stakeholders?
4) Are business terms referenced as the first step in the design of application data stores and repositories?
5) Does the organization perform cross-referencing and mapping of business-specific terms (synonyms, business unit glossaries, logical attributes, physical data elements, etc.) to standardized business terms?
6) How is the organization’s business glossary enhanced and maintained to reflect changes and additions?
7) What role does data governance perform in creating, approving, managing, and updating business terms?
8) Is a compliance process implemented to make sure that business units and projects are correctly applying business terms?
9) Does the organization use a defined process for stakeholders to give feedback about business terms?

Responses are expected to include one or more of the following values describing current state practice and what the respondent believes is a desired end state. These can simply be placed on a scale where the following values are recorded for both current and desired outcomes.

Responses
0 – Never or No
1 – Awareness
2 – Occasionally
3 – Often
4 – Usually
5 – Always or Yes

In this example note how the relatively simple response can be mapped directly into the scoring description and perspective the DMM follows.

0 – No evidence of processes performed or unknown response.

1 – Performed: Processes are performed ad hoc, primarily at the project level. Processes are typically not applied across business areas. Process discipline is primarily reactive; for example, data quality processes emphasize repair over prevention. Foundational improvements may exist, but improvements are not yet extended within the organization or maintained. Goal: Data is managed as a requirement for the implementation of projects.

2 – Managed: Processes are planned and executed in accordance with policy; employ skilled people with adequate resources to produce controlled outputs; involve relevant stakeholders; are monitored, controlled, and evaluated for adherence to the defined process. Goal: There is awareness of the importance of managing data as a critical infrastructure asset.

3 – Defined: A set of standard processes is employed and consistently followed. Processes to meet specific needs are tailored from the set of standard processes according to the organization’s guidelines. Goal: Data is treated at the organizational level as critical for successful mission performance.

4 – Measured: Process metrics have been defined and are used for data management. These include management of variance, prediction, and analysis using statistical and other quantitative techniques. Process performance is managed across the life of the process. Goal: Data is treated as a source of competitive advantage.

5 – Optimized: Process performance is optimized through applying Level 4 analysis for target identification of improvement opportunities. Best practices are shared with peers and industry. Goal: Data is critical for survival in a dynamic and competitive market.

The key here is capturing both current state (what is being performed now) and the desired end state capability using this tool. The difference or delta between the two values now becomes a data set we can analyze to reveal where the greatest challenges are. In this example the clear gaps (represented in orange and red visual cues) show where we should focus our immediate attention and call for further investigation. Yellow shaded topics are less urgent. Green shaded topics do not need the same rigor when addressing the actions needed in the road map developed in later stages.
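Here is a minimal sketch of how the delta can be turned into the visual cues just described. The band thresholds and the classify_gap helper are illustrative assumptions, not the DMM’s own scoring rules.

```python
def classify_gap(current, desired):
    """Map the current-vs-desired delta to a visual cue (illustrative thresholds)."""
    gap = desired - current
    if gap >= 3:
        return "red"      # urgent: large shortfall against the desired end state
    if gap >= 2:
        return "orange"   # significant: calls for further investigation
    if gap >= 1:
        return "yellow"   # less urgent
    return "green"        # on track: no special rigor needed in the road map

# Example: a topic scored 1 (Performed) today with a desired state of 4 (Measured)
print(classify_gap(1, 4))   # -> "red"
```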

DMM_Focus

Specific Subject Area – Master Data Management Assessment
In this example we can extend and focus on Master Data Management using the same principles and structure of CMMI Institute’s Capability Maturity Model Integration (CMMI), adding proven practice in the Master Data Management domain. Note the framework and topic structure is far more focused to match the MDM model framework, and the library of survey questions used here (225 questions) is far more detailed and now very much focused on Master Data Management.

MDM_Topics

Using the same scoring engine we capture both current state (what is being performed now) and the desired end state capability. The difference or delta between the two values again becomes a data set we can analyze to reveal where the greatest challenges are. The clear gaps (represented in orange and red visual cues) pop off the page when the size and relative distance between desired and current practice is measured. Now there is a good idea of what needs to be addressed in the road map developed in later stages.

MDM_Focus

This is a quick way to summarize our findings and give valuable clues and direction for further investigation. We can then focus on specific problem areas using detailed schedules based on the field work to date. Based on the gaps uncovered at the higher level summary (Current vs. Desired End State), further investigation should be performed by a professional with deep subject matter expertise and intimate knowledge of generally accepted proven practice. Using the same data set we can now begin to use interactive exploration tools to uncover significant patterns and reveal further insight.
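For the exploration step, even something as simple as a pandas data frame over the topic-level summary can surface the largest deltas and roll them up by framework. The column names and values below are illustrative, and this assumes pandas is available; it is a sketch of the idea rather than the interactive tooling shown in the figure.

```python
import pandas as pd

# Topic-level summary produced by the scoring engine (illustrative values)
df = pd.DataFrame([
    {"framework": "Data Governance", "topic": "Business Glossary",   "current": 1.4, "desired": 4.2},
    {"framework": "Data Quality",    "topic": "Profiling",           "current": 2.8, "desired": 3.1},
    {"framework": "Data Operations", "topic": "Provider Management", "current": 0.9, "desired": 4.0},
])

df["gap"] = df["desired"] - df["current"]

# Rank the largest deltas, then roll them up by framework to guide field follow-up
print(df.sort_values("gap", ascending=False).head(10))
print(df.groupby("framework")["gap"].mean().sort_values(ascending=False))
```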

MDM_Explore_2

 

Results
I hope this has helped readers who have asked about how to develop and use gap analysis tools to find quickly which significant delta items (the difference between current and desired states) demand further attention. I think you can see this is a valuable way to quickly gather and compile field work and capture a fairly comprehensive view of the gaps uncovered between the current and desired end state of the subject in question. This method and set of tools can be used on a variety of management challenges across the business, both big and small. Armed with this information we can now go ahead to step four (4) and begin to prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies.

This is an invaluable way to assemble and discover the best sequence of actions (recognizing predecessor – successor relationships) as we move to developing the road map. The difference (delta) between the two (current and desired end state) is the basis for our road map. I hope this has answered many of the questions about step three (3), Conduct a Gap Analysis exercise. This is not the only way to do this, but it has become the most consistent and repeatable method I’m aware of to perform a gap analysis quickly in my practice.


If you enjoyed this post, please share it with anyone who may benefit from reading it. And don’t forget to click the follow button to be sure you don’t miss future posts. I’m planning on compiling all the materials and tools used in this series in one place, but I’m still unsure of what form and content would be the best for your professional use. Please take a few minutes and let me know what form and format you would find most valuable.

Suggested content for premium subscribers:

  • Topic Area Models (for use with Mind Jet – see https://www.mindjet.com/ for more)
  • Master Data Management Gap Analysis Assessment
  • Data Maturity Management Capability Assessment
  • Analytic Practice Gap Analysis Assessment
  • Big Data Analytic Gap Analysis Assessment
  • Program Gap Analysis Assessment
  • Program Readiness Assessment
  • Project Gap Analysis Assessment
  • Enterprise Analytic Mind Map
  • Reference Library with Supporting Documents

How to build a Roadmap

How many of us in the profession can truly say we have been taught to develop, refine, and deliver a professional road map based on a sound method with consistent, repeatable results? I have been at this crazy business for years, and I’m still astonished at the wide variation in the quality of the results I have experienced over the years – and it’s not getting any better. I’m not sure I can identify why this is so; maybe it’s the consolidation and changes in the traditional consulting business (big eight to what? two, maybe) or the depreciation of the craft itself among our peers. And then again, maybe sound planning went out of style and I didn’t get the memo. No matter what the root causes are, I want to take a little time and share some (not all) of what has worked for me with great success over the years and may make your next roadmap better.

I’m no genius; I just believe I have been blessed to come into the industry at a time when the large management consulting firms actually invested in intellectual property and shared this with the “new hires” and up-and-coming staff like me. Investing in structured thinking, communication skills, or just plain good old analytic skills makes sense. Why there is not more of this kind of investment is truly troubling.

What I’m going to share works well across most transformation programs. You will struggle to find this in textbooks, classrooms, or in your local book store (I have looked, maybe not hard enough). The method I will share is based loosely on the SEI-CM IDEAL model used to guide the development of long-range integrated planning for managing software process improvement programs. You will most likely find something similar to this in the best and brightest organizations who have adopted an optimized way to think about how to guide their organizations to perform as expected (some of us call this experience). Now on to the summary of what I want to share; the balance will be revealed in an upcoming series using the adoption of Master Data Management as an example.

The Overall Pattern

At the risk of over-simplifying things, here is the overall pattern ALL roadmaps follow:


1) Develop a clear and unambiguous understanding of the current state

– Business Objectives (not just strategy or goals, real quantifiable objectives)
– Functional needs
– High impact business processes or cycles
– Organization (current operating model)
– Cost and complexity drivers
– Business and technical assets (some call these artifacts)

2) Define desired end state
First, (I know this is obvious) what are we trying to accomplish? Is there an existing goal-driven strategy clearly articulated into quantifiable objectives? Sounds silly, doesn’t it, but if this exists and no one knows about it or cannot clearly communicate what the end game is, we have a problem. This could be a well guarded secret. Or, what is more common, the line of sight from executive leadership down to the mail room is broken; no one knows what the true goals are or cares (it’s just a job after all), and it becomes an annual charade of MBO objectives with no real understanding. Some better examples I would expect include:

– Performance targets (Cash flow, Profitability, Velocity (cycle or PCE), Growth, Customer intimacy)
– Operating Model Improvements
– Guiding principles

3) Conduct Gap Analysis
Okay, now this is where the true fun starts. Once here we can begin to evaluate the DELTA between who we really are, and what we truly want to become.  Armed with a clear understanding of where we are and where we want to be, the actionable activities begin to fall out and become evident. Gap closure strategies can then begin to be discussed, shared, and resolved into any number of possibilities usually involving the following initiatives:

– Organizational
– Functional
– Architectural (technology)
– Process
– Reward or economic incentives

For the enterprise architect the following diagram illustrates a sample index or collection of your findings to this point, focused across the four architecture domains (Business, Information, Application, and Technology) related to the architecture. Note how this is aligned to the enterprise architecture meta-model you can see over at the Essential project. The DELTA in this case represents the recommended Gap Closure Strategy between current and desired end states. Or put simply, the actionable things we need to do to close the gap between where we are and where we want to be.

EA Document Index


4) Prioritize
Now that we have the list of actionable items it is time to prioritize what is in front of us. This is usually driven (in a technology road map) by evaluating the relative business value AND the technical complexity, plotting the results in a quadrant graph of some kind. It is critical here that the stakeholders are engaged in the collection of the data points and are keenly aware of what they are scoring. At the end of the day, what we are doing here is IDENTIFYING what is feasible and what has the highest business value. I know, I know this sounds obvious, and you would be astonished by how often this does not occur.
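A minimal sketch of the value-versus-complexity quadrant, assuming matplotlib is available; the initiative names, the 1-5 scales, and the midpoint quadrant lines are illustrative assumptions, not prescribed values.

```python
import matplotlib.pyplot as plt

# Candidate initiatives scored by stakeholders (illustrative 1-5 scales)
initiatives = {
    "Business glossary rollout":   {"value": 4.5, "complexity": 2.0},
    "MDM hub implementation":      {"value": 4.8, "complexity": 4.6},
    "Data steward onboarding":     {"value": 3.5, "complexity": 1.5},
    "Legacy interface retirement": {"value": 2.0, "complexity": 4.0},
}

fig, ax = plt.subplots()
for name, s in initiatives.items():
    ax.scatter(s["complexity"], s["value"])
    ax.annotate(name, (s["complexity"], s["value"]), fontsize=8)

# Quadrant lines at the midpoint of each scale
ax.axvline(3, linestyle="--")
ax.axhline(3, linestyle="--")
ax.set_xlabel("Technical complexity")
ax.set_ylabel("Business value")
ax.set_title("Prioritization: high value / low complexity first")
plt.show()
```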

5) Discover the Optimum Sequence
Okay, now we have the initiatives and the prioritization; how about sequence? In other words, are there things we have to get accomplished first, before others? Are there dependencies we have identified that need to be satisfied before moving forward? This sounds foolish as well, yet we sometimes need to learn how to crawl, walk, run, ride a bike, and then drive a motor vehicle. And what about the capacity for any organization to absorb change? Hmmm… Not to be overlooked, this is where a clear understanding of the organizational dynamics is critical (see step number 1, this is why we need to truly understand where we are).
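One simple way to derive a feasible order from predecessor – successor relationships is a topological sort. The sketch below uses Python’s standard graphlib module (available in 3.9 and later) with an illustrative, assumed dependency map.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each initiative maps to the set of initiatives that must come before it (illustrative)
dependencies = {
    "MDM hub implementation": {"Business glossary rollout", "Data steward onboarding"},
    "Customer data consolidation": {"MDM hub implementation"},
    "Business glossary rollout": set(),
    "Data steward onboarding": set(),
}

ts = TopologicalSorter(dependencies)
print(list(ts.static_order()))
# e.g. ['Business glossary rollout', 'Data steward onboarding',
#       'MDM hub implementation', 'Customer data consolidation']
```

Feeding the prioritized initiatives and their dependencies through a sort like this gives a defensible first cut at the sequence, which can then be adjusted for the organization’s capacity to absorb change.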

6) Develop and Publish the Road Map
Now we are ready to develop the road map. Armed with the DELTA (current vs. desired end state), the prioritization effort (what should be done), and the optimum sequence (in what order) we can begin to assemble a sensible, defensible road map describing what should be done in what order. How this is communicated is critical now. We have the facts, we have the path outlined, and we have a defensible position to share with our peers. We have the details readily available to support our position. Now the really difficult exercise rears its ugly head. Somehow, we need to distill and simplify our message to what I call the “Duckies and Goats” view of the world. In other words we need to distill all of this work into a simplified yet compelling vision of how we transform an organization, or its enabling technology, to accomplish what is needed. Do not underestimate this task; after all the hard work put into an exercise like this, the last thing we need to do is to confuse our stakeholders with mind-numbing detail. Yes, we need the detail for ourselves to exhaust any possibility we have missed something. And to ensure we haven’t overlooked the obvious – not sure who said this but “when something is obvious, it may be obviously wrong”. Here is another example of a visual diagram depicting an adoption of a Master Data Management platform in its first year.

MDM Roadmap


So, this is the basic pattern describing how a robust roadmap should be developed for any organization across any discipline (business or technology) to ensure an effective planning effort. I wanted to share this to help you with your own work; this is usually not an exercise to be taken lightly. We are after all discussing some real world impacts to many, all the while understanding the laws of unintended consequences, to come up with a set of actionable steps along the way that just make sense. This method has worked for me time after time. I think this may just work for you as well. More on this later…

Modeling the MDM Blueprint – Part VI

In this series we have discussed developing the MDM blueprint by developing the Common Information (part II), Canonical (part III), and Operating (part IV) models in our work. In Part V I introduced the Reference Architecture model into the mix to apply the technical infrastructure or patterns we plan on using. The blueprint has now moved from being computation and platform independent to one of expressing intent through the use of more concrete platform specific models. The solution specification is now documented (independent of the functional Business Requirements) to provide shared insight into the overall design solution. Now it is time to bring the modeling products together and incorporate them into an MDM solution specification we can use in many ways to communicate the intent of the project.

First, the MDM blueprint specification becomes the vehicle for communicating the system’s design to interested stakeholders at each stage of its evolution. The blueprint can be used by:

  • Downstream designers and implementers to provide overall policy and design guidance. This establishes inviolable constraints (and a certain amount of freedom) on downstream development activities.
  • Testers and integrators to dictate the correct black-box behavior of the pieces that must fit together.
  • Technical managers as the basis for forming development teams corresponding to the work assignments identified.
  • Project managers as the basis for a work breakdown structure, planning, allocation of project resources, and tracking of progress by the various teams.
  • Designers of other systems with which this one must interoperate to define the set of operations provided and required, and the protocols for their operation, that allows the inter-operation to take place.

Second, the MDM blueprint specification provides a basis for performing up-front analysis to validate (or uncover deficiencies) design decisions and refine or alter those decisions where necessary. The blueprint could be used by:

  • Architects and requirements engineers who represent the customer, for whom the MDM blueprint specification becomes the forum for negotiating and making trade-offs among competing requirements.
  • Architects and component designers as a vehicle for arbitrating resource contention and establishing performance and other kinds of run-time resource consumption budgets.
  • Developers using vendor-provided products from the commercial marketplace to establish the possibilities for commercial off-the-shelf (COTS) component integration by setting system and component boundaries and establishing requirements for the required behavior and quality properties of those components.
  • Architects to evaluate the ability of the design to meet the system’s quality objectives. The MDM blueprint specification serves as the input for architectural evaluation methods such as the Software Architecture Analysis Method (SAAM), the Architecture Tradeoff Analysis Method (ATAM), and Software Performance Engineering (SPE), as well as less ambitious (and less effective) activities such as unfocused design walkthroughs.
  • Performance engineers as the formal model that drives analytical tools such as rate schedulers, simulations, and simulation generators.
  • Product line managers to determine whether a potential new member of a product family is in or out of scope, and if out, by how much.

Third, the MDM blueprint becomes the first artifact used to achieve system understanding for:

  • Technical managers as the basis for conformance checking, for assurance that implementations have in fact been faithful to the architectural prescriptions.
  • Maintainers as a starting point for maintenance activities, revealing the areas a prospective change will affect.
  • New project members, as the first artifact for familiarization with a system’s design.
  • New architects as the artifacts that (if properly documented) preserve and capture the previous incumbent’s knowledge and rationale.
  • Re-engineers as the first artifact recovered from a program understanding activity or (in the event that the architecture is known or has already been recovered) the artifact that drives program understanding activities at the appropriate level of component granularity.

Blueprint for MDM – Where this fits within a larger program

Developing and refining the MDM blueprint is typically associated with larger programs or strategic initiatives. In this last part of the series I will discuss where all this typically fits within a larger program and how to organize and plan this work within context. The following diagram puts our modeling efforts within the context of a larger program, taken from a mix of actual engagements with large, global customers. The key MDM blueprint components are highlighted with numbers representing:

  1. Common Information Model
  2. The Canonical Model
  3. The Operating Model
  4. The Reference Architecture
Program Management Design


I have also assumed a business case exists (you have this, right?) and the functional requirements are known. Taken together with the MDM blueprint we now have a powerful arsenal of robust information products we can use to prepare a high quality solution specification that is relevant and can be used to meet a wide variety of needs. Typically, use of the MDM blueprint may include:

  • Identifying all necessary components and services
  • Reviewing existing progress to validate (or uncover deficiencies in) design decisions; refine or alter those decisions where necessary
  • Preparation of detailed planning products (Product, Organization, and Work Breakdown structures)
  • Program planning and coordination of resources
  • Facilitating prioritization of key requirements – technical and business
  • Development of Request for Quotation, Request for Information products (make vs. buy)
  • Preparing funding estimates (Capital and Operating Expense) and program budget preparation
  • Understanding a vendor’s contribution to the solution and pricing accordingly (for example, repurpose as needed in contract and licensing activities and decouple supplier proprietary lock-in from the solution where appropriate)

We are also helping to ensure the business needs drive the solution by mitigating the impact of the dreaded Vendor Driven Architecture (VDA) in the MDM solution specification.

Summary

I hope you have enjoyed this brief journey through Modeling the MDM blueprint and have gained something from my experience. I’m always interested in learning from others, so please let me know what you have encountered yourself, and maybe we can help others avoid the pitfalls and pain in this difficult, demanding work. A key differentiator and the difference between success and failure on an MDM journey is taking the time to model the blueprint and share this early and often with the business. This is after all a business project, not an elegant technical exercise. In an early reference I mentioned Ward Cunningham’s Technical Debt concept. Recall this metaphor means doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choices we have made. The technical debt and resulting interest due in an MDM initiative with this kind of far-reaching impact across the enterprise is, well, unthinkable. Take the time to develop your MDM blueprint and use this product to ensure success by clearly communicating business and technical intent with your stakeholders.

Modeling the MDM Blueprint – Part V

In this series we have discussed developing the MDM blueprint by creating Common Information (part II), Canonical (part III), and Operating (part IV) models in our work streams. We introduced the Operating Model into the mix to communicate how the solution will be adopted and used to realize the benefits we expect with the business in a meaningful way, and hopefully to set reasonable expectations with our business partners as to what this solution will look like when deployed.

Now it is time to model and apply the technical infrastructure or patterns we plan on using. The blueprint now moves from being computation and platform independent to one of expressing intent through the use of more concrete platform specific models.

Reference Architecture
After the initial work (the CIM, Canonical, and Operating models) is completed, then and only then, are we ready to move on to the computation and platform specific models. We know how to do this well – for example see Information service patterns, Part 4: Master Data Management architecture patterns.

At this point we now have enough information to create the reference architecture. One way (there are several) to organize this content is to use the Rozanski and Woods extensions to the classic 4+1 view model introduced by Philippe Kruchten. The views are used to describe the system from the viewpoint of different stakeholders (end-users, developers, and project managers). The four views of the model are the logical, development, process, and physical views. In addition, selected use cases or scenarios are used to demonstrate or show the architecture’s intent, which is why the model contains 4+1 views (the +1 being the selected scenarios).

41views1

Rozanski and Woods extended this idea by introducing a catalog of six core viewpoints for information systems architecture: the Functional, Information, Concurrency, Development, Deployment, and Operational viewpoints and related perspectives. This is elaborated in detail in their book titled “Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives”. There is much to learn from their work, I encourage you to visit the book’s web site for more information.

What we are describing here is how MDM leadership within a very large-scale organization can eventually realize the five key “markers” or characteristics in the reference architecture, which include:

– Shared services architecture evolving to process hubs;
– Sophisticated hierarchy management;
– High-performance identity management;
– Data governance-ready framework; and
– Registry, persisted or hybrid design options in the architecture selected.

Recommended: this is an exceptional way to tie the technical models back to the stakeholders’ needs as reflected in the viewpoints, perspectives, guidelines, principles, and template models used in the reference architecture. Grady Booch said “…the 4+1 view model has proven to be both necessary and sufficient for most interesting systems”, and there is no doubt that MDM is interesting. Once this work has been accomplished and agreed to as part of a common vision, we have several different options to proceed with. One interesting approach is leveraging this effort into the Service Oriented Modeling Framework introduced by Michael Bell at Methodologies Corporation.

Service Oriented Modeling
The service-oriented modeling framework (SOMF) is a service-oriented development life cycle methodology. It offers a number of modeling practices and disciplines that contribute to successful service-oriented life cycle management and modeling. It illustrates the major elements that identify the “what to do” aspects of a service development scheme. These are the modeling pillars that will enable practitioners to craft an effective project plan and to identify the milestones of a service-oriented initiative—in this case crafting an effective MDM solution.

SOMF provides four major SOA modeling styles that are useful throughout a service life cycle (conceptualization, discovery and analysis, business integration, logical design, conceptual and logical architecture). These modeling styles: Circular, Hierarchical, Network, and Star, can assist us with the following modeling aspects:

– Identify service relationships: contextual and technological affiliations
– Establish message routes between consumers and services
– Provide efficient service orchestration and choreography methods
– Create powerful service transaction and behavioral patterns
– Offer valuable service packaging solutions

SOMF Modeling Styles
SOMF offers four major service-oriented modeling styles. Each pattern identifies the various approaches and strategies that one should consider employing when modeling MDM services in a SOA environment.

– Circular Modeling Style: enables message exchange in a circular fashion, rather than employing a controller to carry out the distribution of messages. The Circular Style also offers a way to affiliate services.

– Hierarchical Modeling Style: offers a relationship pattern between services for the purpose of establishing transactions and message exchange routes between consumers and services. The Hierarchical pattern enforces parent/child associations between services and lends itself to a well known taxonomy.

– Network Modeling Style: this pattern establishes “many to many” relationships between services, their peer services, and consumers, similar to RDF. The Network pattern emphasizes distributed environments and interoperable computing networks.

– Star Modeling Style: the Star pattern advocates arranging services in a star formation, in which the central service passes messages to its extending arms. The Star modeling style is often used in “multi casting” or “publish and subscribe” instances, where “solicitation” or “fire and forget” message styles are involved.
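As a rough illustration of the Star style, here is a minimal publish-and-subscribe sketch in which a central hub relays messages to its extending arms. The class, event names, and payload are hypothetical and not part of SOMF itself; a Network style, by contrast, would let services exchange messages with their peers directly rather than routing everything through the hub.

```python
from collections import defaultdict

class MasterDataHub:
    """Central service in a Star-style arrangement: it relays messages
    to its extending arms (publish-and-subscribe, fire-and-forget)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)  # fire and forget: no reply expected

hub = MasterDataHub()
hub.subscribe("customer.updated", lambda msg: print("CRM arm received:", msg))
hub.subscribe("customer.updated", lambda msg: print("Billing arm received:", msg))
hub.publish("customer.updated", {"id": "C-1001", "name": "Acme Corporation"})
```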

There is much more to this method; I encourage you to visit the Methodologies Corporation site (Michael is the founder) and download the tools, PowerPoint presentations, and articles they have shared with us.

Summary
So, based on my experience we have to get this modeling effort completed to improve the probability we will be successful. MDM is really just another set of tools and processes for modeling and managing business knowledge of data in a sustainable way. Take the time to develop a robust blueprint to include Common Information (semantic, pragmatic, and logical modeling), Canonical (business rules and format specifications), and Operating Models to ensure completeness. Use these models to drive a suitable Reference Architecture to guide design choices in the technical implementation.

This is hard, difficult work. Anything worthwhile usually is. Why put the business at risk to solve this important and urgent need without our stakeholders’ understanding and real enthusiasm for shared success? A key differentiator and the difference between success and failure on an MDM journey is taking the time to model the blueprint and share this early and often with the business. This is after all a business project, not an elegant technical exercise. Creating and sharing a common vision through our modeling efforts helps ensure success from inception through adoption by communicating clearly the business and technical intent of each element of the MDM program.

In the last part of the series I will be discussing where all this fits into the larger MDM program and how to plan, organize, and complete this work.

Modeling the MDM Blueprint – Part IV

In parts II and III of this series we discussed the Common Information and Canonical Models. Because MDM is a business project we need to establish a common set of models that can be referenced independent of the technical infrastructure or patterns we plan on using. Now it is time to introduce the Operating Model into the mix to communicate how the solution will actually be deployed and used to realize the benefits we expect with the business in a meaningful way.

This is the most important set of models you will undertake. And it is sadly not accounted for in practice “in the wild”, meaning rarely seen, much less achieved. This effort describes how the organization will govern, create, maintain, use, and analyze consistent, complete, contextual, and accurate data values for all stakeholders.

There are a couple of ways to do this. One interesting approach I have seen is to use the Galbraith Star Model as an organizational design framework. The model is developed within this framework to understand what design policies and guidelines will be needed to align organizational decision making and behavior within the MDM initiative. The Star model includes the following five categories:

Strategy:
Determine direction through goals, objectives, values and mission. It defines the criteria for selecting an organizational structure (for example functional or balanced Matrix). The strategy defines the ways of making the best trade-off between alternatives.

Structure:
Determines the location of decision making power. Structure policies can be subdivided into:
– specialization: type and number of job specialties;
– shape: the span of control at each level in the hierarchy;
– distribution of power: the level of centralization versus decentralization;
– departmentalization: the basis to form departments (function, product, process, market or geography).

In our case this will really help when it comes time to design the entitlement and data steward functions.

Processes:
The flow of information and decision processes across the proposed organization’s structure. Processes can be either vertical through planning and budgeting, or horizontal through lateral relationships (matrix).

Reward Systems:
Influence the motivation of organization members to align employee goals with the organization’s objectives.

People and Policies:
Influence and define employee’s mindsets and skills through recruitment, promotion, rotation, training and development.

Now before your eyes glaze over, I’m only suggesting this be used as a starting point. We are not originating much of this thought capital, only examining the impact the adoption of MDM will have on the operating model within this framework, and more importantly identifying how any gaps uncovered will be addressed to ensure this model remains internally consistent. After all, we do want to enable the kind of behavior we expect in order to be effective, right? A typical design sequence starts with an understanding of the strategy as defined. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and Processes define the implementation of reward systems and people policies.

The preferred sequence in this design process is composed in the following order:
a – strategy;
b – structure;
c – key processes;
d – key people;
e – roles and responsibilities;
f – information systems (supporting and ancillary);
g – performance measures and rewards;
h – training and development;
i – career paths.

The design process can be accomplished using a variety of tools and techniques. I have used IDEF, BPMN, or other process management methods and tools (including RASIC charts describing roles and responsibilities, for example). Whatever tools you elect to use, they should effectively communicate intent and be used to validate changes with the stakeholders who must be engaged in this process. Armed with a clear understanding of how the Star model works we can turn our attention to specific MDM model elements to include:

Master Data Life Cycle Management processes
– Process used to standardize the way the asset (data) is used across an enterprise
– Process to coordinate and manage the lifecycle of master data
– How to understand and model the life-cycle of each business object using state machines (UML) – see the sketch after this list
– Process to externalize business rules locked in proprietary applications (ERP) for use with Business Rules Management Systems (BRMS) (if you are lucky enough to have one)
– Operating Unit interaction
– Stewardship (Governance Model)
– Version and variant management, permission management, approval processes.
– Context (languages, countries, channels, organizations, etc.) and inheritance of reference data values between contexts
– Hierarchy management
– Lineage (historical), auditability, traceability
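To show what modeling a business object’s life-cycle with a state machine can look like in practice, here is a minimal sketch. The states, events, and transitions are illustrative assumptions; in reality they would come out of the governance model and approval processes agreed with the business.

```python
# Hypothetical lifecycle states and allowed transitions for a master data record
TRANSITIONS = {
    "draft":     {"submit": "proposed"},
    "proposed":  {"approve": "approved", "reject": "draft"},
    "approved":  {"publish": "published"},
    "published": {"revise": "proposed", "retire": "retired"},
    "retired":   {},
}

class MasterRecord:
    def __init__(self):
        self.state = "draft"
        self.history = []          # lineage: auditability and traceability

    def trigger(self, event):
        allowed = TRANSITIONS[self.state]
        if event not in allowed:
            raise ValueError(f"'{event}' not allowed from state '{self.state}'")
        self.history.append((self.state, event))
        self.state = allowed[event]

record = MasterRecord()
for event in ("submit", "approve", "publish"):
    record.trigger(event)
print(record.state, record.history)
```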

I know this seems like a lot of work. Ensuring success and widespread adoption of Master Data Management mandates this kind of clear understanding and shared vision among all stakeholders. We do this to communicate how the solution will actually be deployed and used to realize the benefits we expect.

In many respects this is the business equivalent to the Technical Debt concept Ward Cunningham developed (we will address this in the next part on Reference Architecture) to help us think about this problem. Recall this metaphor means doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choices we have made. The same concept applies to this effort. The most elegant technical design may be the worst possible fit for the business. The interest due in a case like this is, well, unthinkable.

Take the time to get this right. You will be rewarded with enthusiastic and supportive sponsors who will welcome your efforts to achieve success within an operating model they understand.

Modeling the MDM Blueprint – Part III

In part II of this series we discussed the Common Information Model. Because MDM is a business project we need to establish a common set of models that can be referenced independent of the technical infrastructure or patterns we plan on using. The essential elements should include:

– Common Information Model
– Canonical Model
– Operating Model, and
– Reference Architecture (e.g. 4+1 views, viewpoints and perspectives).

We will now turn our attention to the second element, the Canonical Model.

The Canonical Model (business rules and format specification) describes how the extraction of business rules from the software portfolio is managed and shared among other applications. In addition to externalizing business rules locked in proprietary applications (for example ERP or CRM) we also use design patterns defined here to communicate between different data formats. Instead of writing translators between each and every format (with the potential for a combinatorial explosion), use this in combination with the CIM to write a translator between each format and the canonical format, using rules to guide the effort. See the Open Applications Group Integration Specification (OAGIS) as an example of an integration architecture that is based on a canonical data model. Implicit (and emerging now as generally accepted practice) is the use of rules (rules engines like iLOG for example) to handle reference data that must be shared across systems beyond the software packages in our portfolio. OAGIS uses XML as the common protocol for defining business messages and processes (scenarios) to enable business applications to communicate with one another in a standard manner. Not only is it the most complete set of XML business messages currently available (there are several others, see the eXtensible Business Reporting Language (XBRL) for example), it also accommodates specific industries by collaborating with vertical industry groups to add and extend additional requirements as needed. For another real working example in the Product Information Management (PIM) space see the GS1 Global Data Synchronization Network and the standards that make this possible.
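A minimal sketch of the canonical-format idea: each system needs only a mapping to and from the canonical model rather than one translator per pair of systems. The field names below are illustrative assumptions, not OAGIS message definitions.

```python
# Translators to and from an assumed canonical customer format
def erp_to_canonical(rec):
    return {"party_id": rec["CUST_NO"], "name": rec["CUST_NAME"], "country": rec["CTRY"]}

def canonical_to_crm(rec):
    return {"accountId": rec["party_id"], "displayName": rec["name"], "countryCode": rec["country"]}

erp_record = {"CUST_NO": "10042", "CUST_NAME": "Acme Corporation", "CTRY": "US"}
crm_record = canonical_to_crm(erp_to_canonical(erp_record))
print(crm_record)
```

With N formats this means roughly 2N small translators instead of N*(N-1) point-to-point ones, which is the combinatorial explosion the canonical model avoids.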

Nick Malik over at Inside Architecture  has written an exceptional post about this. We may not agree on all aspects (mostly semantics) but I think he has summed up well what this set of models should address in the blueprint. His post addresses the essential elements a complete modeling effort would produce. These products would typically include:

Canonical Message Schema – describes the set of data passed between applications when a message moves from one application to another, where both the sender and the receiver have a shared understanding of:
(a) data type,
(b) range of values, and
(c) semantic meaning. 
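A minimal sketch of such a shared contract, expressed here as a Python dataclass with validation; the message name, fields, and allowed values are illustrative assumptions, not a published canonical schema.

```python
from dataclasses import dataclass

@dataclass
class PriceChangeMessage:
    """Illustrative canonical message: sender and receiver agree on the
    data types, the legal range of values, and the semantic meaning."""
    item_id: str          # identifier of the item (shared semantic meaning)
    currency: str         # ISO 4217 currency code
    unit_price: float     # price per base unit, must be non-negative

    def __post_init__(self):
        if not isinstance(self.item_id, str) or not self.item_id:
            raise ValueError("item_id must be a non-empty string")
        if self.currency not in {"USD", "EUR", "GBP"}:   # illustrative range of values
            raise ValueError(f"unsupported currency: {self.currency}")
        if self.unit_price < 0:
            raise ValueError("unit_price must be non-negative")

msg = PriceChangeMessage(item_id="00012345678905", currency="USD", unit_price=9.99)
print(msg)
```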

Event Driven Perspective (Views) – a style of architecture characterized by a set of relatively independent actors who communicate events amongst themselves in order to achieve a coordinated goal. This can be done at the application level, the distributed system level, the enterprise level, and the inter-enterprise level (B2B and EDI). Although we disagree on where this effort belongs (see Part V of this series on reference architecture development), the logical view will have its origins here.

Business Event Ontology
This ontology includes a list of business events, usually in a hierarchy, that represents the points in the overall business process where two or more objects (entities) need to communicate or share the same data values and intent (semantics). And this, as Nick states, “is not the same as a process step. An event may trigger a process step, but the event itself is strictly speaking simply a ‘notification of something that has occurred,’ not the name of the process.” Ontology development is a pretty exciting technology I have watched mature from simple lab exercises (toys really) to something far more useful. For more on this see Part II (The Common Information Model) or my post at Essential Analytics about the Protege ontology editor.

Business Rules
The last remaining modeling effort is the collection (identification and grouping) of the rules used to define the behavior of the elements we have already referred to. Typically buried in application code (if you are not lucky enough to have a Business Rules engine <g>), this model describes the business rules, protocol, and default behavior expected when the model elements interact with each other (especially useful when exceptions occur or logical constraints are violated). Not a common artifact I find; I wish more of us would take the time and effort to accomplish this task. For another real world reference see the GDSN Package Measurement Rules (issue 1.9.2) for the global definition of nominal measurement attributes of product packaging, or the GDSN Validation Rules.
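As a rough illustration of externalizing rules from application code, here is a minimal sketch in which the rules live as data and a tiny evaluator applies them. The rule names, conditions, and actions are hypothetical, and this is a sketch of the idea rather than a BRMS API.

```python
# Business rules kept as data rather than buried in application code (illustrative)
RULES = [
    {"name": "glossary_term_required",
     "when": lambda rec: rec.get("business_term") is None,
     "action": "reject: every attribute must map to an approved business term"},
    {"name": "default_country",
     "when": lambda rec: not rec.get("country"),
     "action": "default country to the owning business unit's registration"},
]

def evaluate(record):
    """Return the actions fired for a record; violations surface as explicit outcomes."""
    return [r["action"] for r in RULES if r["when"](record)]

print(evaluate({"business_term": None, "country": ""}))
```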

As I stated in Part II, this is hard challenging work. The key differentiator and difference between success and failure on your MDM journey will be taking the time to model the blueprint and sharing this work early and often with the business. We will be discussing the third (and most important element) of the MDM blueprint, the Operating model in part IV. I encourage you to participate and share your experience, we can all learn from each other.

Modeling the MDM Blueprint – Part II

In part I of this series we discussed what essential elements should be included in an MDM blueprint. The important thing to remember is that MDM is a business project that requires establishing a common set of models that can be referenced independent of the technical infrastructure or patterns you plan on using. The blueprint should remain computation and platform independent until the models are completed (and accepted by the business) to support and ensure the business intent. The essential elements should include:

– Common Information Model
– Canonical Model
– Operating Model, and
– Reference Architecture (e.g. 4+1 views, viewpoints and perspectives).

We will now turn our attention to the first element, the Common Information Model.

A Common Information Model (CIM) is defined using relational, object, hierarchical, and semantic modeling methods. What we are really developing here is a rich semantic data architecture in selected business domains using:

  • Object Oriented modeling 
    Reusable data types, inheritance, operations for validating data
  • Relational
    Manage referential integrity constraints (Primary Key, Foreign Key)
  • Hierarchical
    Nested data types and facets for declaring behaviors on data (e.g. think XML Schemas)
  • Semantic models
    Ontologies defined through RDF, RDFS and OWL

I believe (others may not) that MDM truly represents the intersection of relational, object, hierarchical, and semantic modeling methods to achieve a rich expression of the reality the organization is operating in. Expressed in business terms this model represents a “foundation principle” or theme we can pivot around to understand each facet in the proper context. This is not easy to pull off, but it will provide a fighting chance to resolve semantic differences in a way that helps focus the business on the real matter at hand. This is especially important when developing the Canonical model introduced in the next step.

If you want to see what one of these looks like, visit the MDM Alliance Group (MAG). MAG is a community Pierre Bonnet founded to share MDM modeling procedures and prebuilt data models. The MDM Alliance Group publishes a set of prebuilt data models that include the usual suspects (Location, Asset, Party, Party Relationship, Party Role, Event, Period [Date, Time, Condition]) downloadable from the website, and some more interesting models like Classification (Taxonomy) and Thesaurus organized across three domains. Although we may disagree about the “semantics”, I do agree with him that adopting this approach can help us avoid setting up siloed reference databases “…unfortunately often noted when using specific functional approaches such as PIM (Product Information Management) and CDI (Customer Data Integration) modeling”. How true. And a very common issue I encounter often.

Another good example is the CIM developed over the years at the Distributed Management Task Force (DMTF). You can get the CIM V2.20 Schema MOF, PDF and UML at their web site and take a look for yourself. While this is not what most of us think of as MDM, they are solving for some of the same problems and challenges we face.

Even more interesting is what is happening in semantic technology. Building semantic models (ontologies) includes many of the same concepts found in the other modeling methods we have already discussed, but further extends the expressive quality we often need to fully communicate intent. For example:

– Ontologies can be used at run time (queried and reasoned over).
– Relationships are first-class constructs.
– Classes and attributes (properties) are set-based and dynamic.
– Business rules are encoded and organized using axioms.
– XML schemas are graphs not trees, and used for reasoning.
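As a small illustration of these points, here is a sketch that declares a fragment of a CIM-style ontology and queries it at run time, assuming the open source rdflib library is available. The cim namespace, class, and property names are illustrative assumptions, not a published model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

CIM = Namespace("http://example.org/cim#")   # hypothetical namespace for illustration
g = Graph()
g.bind("cim", CIM)

# Classes with OO-style inheritance: Customer is a kind of Party
g.add((CIM.Party, RDF.type, OWL.Class))
g.add((CIM.Customer, RDF.type, OWL.Class))
g.add((CIM.Customer, RDFS.subClassOf, CIM.Party))

# A datatype property with a declared domain and range
g.add((CIM.partyName, RDF.type, OWL.DatatypeProperty))
g.add((CIM.partyName, RDFS.domain, CIM.Party))
g.add((CIM.partyName, RDFS.range, XSD.string))

# Relationships are first-class constructs: an object property linking Party to PartyRole
g.add((CIM.PartyRole, RDF.type, OWL.Class))
g.add((CIM.hasRole, RDF.type, OWL.ObjectProperty))
g.add((CIM.hasRole, RDFS.domain, CIM.Party))
g.add((CIM.hasRole, RDFS.range, CIM.PartyRole))

# An instance we can query (and reason over) at run time
g.add((CIM.acme, RDF.type, CIM.Customer))
g.add((CIM.acme, CIM.partyName, Literal("Acme Corporation")))

results = g.query(
    "SELECT ?name WHERE { ?p a cim:Customer . ?p cim:partyName ?name }",
    initNs={"cim": CIM},
)
for row in results:
    print(row[0])
```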

If you haven’t been exposed to ontology development I encourage you to grab the open source Protege Ontology Editor and discover for yourself what this is all about. And while you are there see the Protégé Wiki and grab the Federal Enterprise Architecture Reference Model Ontology (FEA-RMO) for an example of its use in the EA world. Or see the set of tools found at the Essential project. The project uses this tool to enter model content, based on a model pre-built for Protégé. While you are at the Protégé Wiki grab some of the ontologies developed for use with this tool for other examples, such as the SWEET Ontologies (A Semantic Web for Earth and Environmental Terminology; source: Jet Propulsion Laboratory). For more on this, see my post on this tool at Essential Analytics. This is an interesting and especially useful modeling method to be aware of and an important tool to have at your disposal.

This is hard challenging work. Doing anything worthwhile usually is.  A key differentiator and the difference between success and failure on your MDM journey will be taking the time to model the blueprint and sharing this work early and often with the business. We will be discussing the second element of the MDM blueprint, the Canonical model in part III. I encourage you to participate and share your professional experience.