How to build a Roadmap

How many of us in the profession can truly say we have been taught to develop, refine, and deliver a professional road map based on a sound method with consistent, repeatable results? I have been at this crazy business for years, and I am still astonished at the wide variety of quality in the results I have seen – and it’s not getting any better. I’m not sure I can say why this is so; maybe it’s the consolidation and changes in the traditional consulting business (big eight to what? two, maybe), or the depreciation of the craft itself among our peers. And then again, maybe sound planning went out of style and I didn’t get the memo. Whatever the root causes are, I want to take a little time to share some (not all) of what has worked for me with great success over the years, and may make your next roadmap better.

I’m no genius; I just believe I have been blessed to come into the industry at a time when the large management consulting firms actually invested in intellectual property and shared it with the “new hires” and up-and-coming staff like me. Investing in structured thinking, communication skills, or just plain good old analytic skills makes sense. Why there is not more of this kind of investment is truly troubling.

What I’m going to share works well across most transformation programs. You will struggle to find this in textbooks, classrooms, or your local book store (I have looked, maybe not hard enough). The method I will share is based loosely on the SEI IDEAL model, which guides the development of long-range integrated planning for managing software process improvement programs. You will most likely find something similar in the best and brightest organizations that have adopted an optimized way to think about how to guide their organizations to perform as expected (some of us call this experience). Now on to the summary of what I want to share; the balance will be revealed in an upcoming series using the adoption of Master Data Management as an example.

The Overall Pattern

At the risk of over-simplifying things, here is the overall pattern ALL roadmaps follow:

1) Develop a clear and unambiguous understanding of the current state

– Business Objectives (not just strategy or goals, real quantifiable objectives)
– Functional needs
– High impact business processes or cycles
– Organization (current operating model)
– Cost and complexity drivers
– Business and technical assets (some call these artifacts)

2) Define desired end state
First (I know this is obvious): what are we trying to accomplish? Is there an existing goal-driven strategy clearly articulated into quantifiable objectives? Sounds silly, doesn’t it? Yet if this exists and no one knows about it or can clearly communicate what the end game is, we have a problem. This could be a well-guarded secret. Or, what is more common, the line of sight from executive leadership down to the mail room is broken: no one knows what the true goals are or cares (it’s just a job, after all), and planning becomes an annual charade of MBO objectives with no real understanding. Some better examples I would expect include:

– Performance targets (Cash flow, Profitability, Velocity (cycle or PCE), Growth, Customer intimacy)
– Operating Model Improvements
– Guiding principles

3) Conduct Gap Analysis
Okay, now this is where the true fun starts. Here we can begin to evaluate the DELTA between who we really are and what we truly want to become. Armed with a clear understanding of where we are and where we want to be, the actionable activities begin to fall out and become evident. Gap closure strategies can then be discussed, shared, and resolved into any number of possibilities, usually involving the following initiatives:

– Organizational
– Functional
– Architectural (technology)
– Process
– Reward or economic incentives

For the enterprise architect, the following diagram illustrates a sample index or collection of your findings to this point, focused across the four architecture domains (Business, Information, Application, and Technology). Note how this aligns to the enterprise architecture meta-model you can see over at the Essential project. The DELTA in this case represents the recommended Gap Closure Strategy between the current and desired end states – or, put simply, the actionable things we need to do to close the gap between where we are and where we want to be. A small sketch of the DELTA as a simple set difference follows the figure.

EA Document Index

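To make the DELTA concrete, here is a minimal sketch (the capability names are hypothetical) that treats the gap as a simple set difference between the desired and current capability inventories; whatever remains becomes a gap closure candidate:

```java
import java.util.HashSet;
import java.util.Set;

public class GapAnalysis {
    public static void main(String[] args) {
        // Illustrative capability inventories, keyed however your meta-model dictates.
        Set<String> current = Set.of("Batch customer load", "Manual de-duplication");
        Set<String> desired = Set.of("Batch customer load", "Real-time customer sync",
                                     "Automated match-and-merge");

        Set<String> gap = new HashSet<>(desired);
        gap.removeAll(current); // the DELTA: what we want but do not yet have
        gap.forEach(c -> System.out.println("Gap closure candidate: " + c));
    }
}
```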

4) Prioritize
Now that we have the list of actionable items, it is time to prioritize what is in front of us. In a technology road map this is usually driven by evaluating the relative business value AND the technical complexity, plotting the results in a quadrant graph of some kind. It is critical here that the stakeholders are engaged in the collection of the data points and are keenly aware of what they are scoring. At the end of the day, what we are doing here is IDENTIFYING what is feasible and what has the highest business value. I know, I know, this sounds obvious, and you would be astonished by how often it does not occur.
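
Here is a small sketch of the quadrant idea (the initiative names, the 1–10 scales, and the 5.0 thresholds are assumptions for illustration, not a standard):

```java
import java.util.List;

record Initiative(String name, double businessValue, double complexity) {}

public class Prioritizer {
    // Classic value/complexity quadrant; thresholds assume scores on a 1-10 scale.
    static String quadrant(Initiative i) {
        boolean highValue = i.businessValue() >= 5.0;
        boolean lowComplexity = i.complexity() < 5.0;
        if (highValue && lowComplexity)  return "Quick win - schedule first";
        if (highValue && !lowComplexity) return "Strategic - plan carefully";
        if (!highValue && lowComplexity) return "Fill-in - as capacity allows";
        return "Questionable - reconsider";
    }

    public static void main(String[] args) {
        List<Initiative> backlog = List.of(
            new Initiative("Consolidate customer masters", 8.5, 7.0),
            new Initiative("Retire legacy reports", 4.0, 2.5));
        backlog.forEach(i -> System.out.println(i.name() + " -> " + quadrant(i)));
    }
}
```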

5) Discover the Optimum Sequence
Okay, now we have the initiatives and the prioritization; how about sequence? In other words, are there things we have to accomplish first, before others? Are there dependencies we have identified that need to be satisfied before moving forward? This sounds foolish as well, but sometimes we need to learn how to crawl, walk, run, ride a bike, and then drive a motor vehicle. And what about the capacity of any organization to absorb change? Hmmm… Not to be overlooked; this is where a clear understanding of the organizational dynamics is critical (see step number 1 – this is why we need to truly understand where we are).
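
Since dependency-driven sequencing is essentially a topological sort, here is a sketch using Kahn’s algorithm over a hypothetical initiative dependency map; a circular dependency is reported rather than silently ignored:

```java
import java.util.*;

public class Sequencer {
    // Returns one valid ordering in which every initiative follows its dependencies.
    static List<String> sequence(Map<String, List<String>> dependsOn) {
        Map<String, Integer> inDegree = new HashMap<>();
        Map<String, List<String>> enables = new HashMap<>();
        dependsOn.forEach((item, deps) -> {
            inDegree.merge(item, 0, Integer::sum);
            for (String d : deps) {
                inDegree.merge(item, 1, Integer::sum); // one more prerequisite
                enables.computeIfAbsent(d, k -> new ArrayList<>()).add(item);
                inDegree.putIfAbsent(d, 0);
            }
        });
        Deque<String> ready = new ArrayDeque<>();
        inDegree.forEach((k, v) -> { if (v == 0) ready.add(k); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String next = ready.poll();
            order.add(next);
            for (String e : enables.getOrDefault(next, List.of()))
                if (inDegree.merge(e, -1, Integer::sum) == 0) ready.add(e);
        }
        if (order.size() != inDegree.size())
            throw new IllegalStateException("Circular dependency detected");
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "MDM hub rollout", List.of("Data governance charter", "Canonical model"),
            "Canonical model", List.of("Data governance charter"),
            "Data governance charter", List.of());
        System.out.println(sequence(deps));
    }
}
```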

6) Develop and Publish the Road Map
Now we are ready to develop the road map. Armed with the DELTA (current vs. desired end state), the prioritization effort (what should be done), and the optimum sequence (in what order), we can begin to assemble a sensible, defensible road map describing what should be done and in what order. How this is communicated is critical now. We have the facts, we have the path outlined, and we have a defensible position to share with our peers. We have the details readily available to support our position. Now the really difficult exercise rears its ugly head. Somehow, we need to distill and simplify our message into what I call the “Duckies and Goats” view of the world. In other words, we need to distill all of this work into a simplified yet compelling vision of how we transform an organization, or enabling technology, to accomplish what is needed. Do not underestimate this task; after all the hard work put into an exercise like this, the last thing we need to do is confuse our stakeholders with mind-numbing detail. Yes, we need that detail for ourselves, to exhaust any possibility we have missed something and to ensure we haven’t overlooked the obvious – not sure who said this, but “when something is obvious, it may be obviously wrong”. Here is another example of a visual diagram depicting an adoption of a Master Data Management platform in its first year.

MDM Roadmap

So, this is the basic pattern describing how a robust roadmap should be developed for any organization, across any discipline (business or technology), to ensure an effective planning effort. I wanted to share this to help you with your own work; this is usually not an exercise to be taken lightly. We are, after all, discussing real-world impacts to many, all the while understanding the law of unintended consequences, to come up with a set of actionable steps along the way that just make sense. This method has worked for me time after time. I think it may just work for you as well. More on this later…

The Architecture Value Proposition

A lot of discussions are flying around on the Enterprise Architecture boards about our role (or lack of one) and the need to demonstrate value. Some have even questioned why this is so (after all, we all know the value of accountants, right?). Do all EA professionals have an identity crisis? Seems so, and I decided to share this post to clear the air a bit and offer some thoughts on the kind of value we should deliver to our business peers. If you are impatient with this introduction, just scroll down to the “money” section for a quick taxonomy (not complete by any measure) of what our value proposition should look like – at a minimum, the values we should find in our day-to-day work to share with our business peers on a regular basis. If this sounds like a lot of foo-foo to you, get ready to be challenged on a regular basis – and to spend an inordinate amount of time and effort justifying your pay-check, and less time solving the important and urgent challenges we have been asked to help with.

First, how about a little context? I believe (and I think you will too) that architecture is a comprehensive framework of well-understood processes used to manage and align an organization’s IS assets with:

  • People and Organization (Structure)
  • Processes, and
  • Technology used to meet specific, actionable strategic business goals.

For example, reference the Galbraith Star Model.
A typical design sequence (in my world) starts with an understanding of the strategy as defined by the business. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and processes define the implementation of reward systems and people policy.

One way to understand our true role in architecture management is to ensure that technology investment and widespread adoption within the organization occur because:

  • The investment has value,
  • The program (projects) will be properly managed,
  • The organization has the capability to deliver the benefits,
  • Dedicated resources are working on the highest value opportunities,
  • Projects with inter-dependencies are undertaken in the optimum sequence.

Actionable objectives this function (architecture) enables include:

  • Driving costs (not shifting them) out of the business process meta-cycles and improving value-added efficiency across operating models
    • Cycle time reduction
    • Error reduction
    • Resource liberation
    • Cost reduction
    • Cost avoidance
  • Turning service portfolio investment into an operational variable cost which can relate directly to business volume
    • The business should only pay for what they use…
    • Can budget for services on a per usage basis
    • Use Strategic decision-making framework to:
      • manage cascading goals to evaluate corporate or shared goals along a vertical and horizontal axis to mitigate alignment issues
      • use functional views to focus IS investment choices
  • Realizing economic value through business process improvement as a vehicle to drive measurable performance improvement:
    • Reduces defects embedded at the design stage of the lifecycle by applying architectural principles
    • Uses a problem-solving approach upstream, reducing operational rework costs
  • Enabling process optimization through continuous, complementary investment in reengineering and organizational development.

Our work, then, is to develop and promote management’s use of architecture to drive costs out of the organization and continue to meet stakeholder demands for distinctive, quality products and services. The value proposition we bring should include detailed findings and quantitative methods that show how to meet the following objectives:

  • Improved planning helps us make more informed decisions, ensures project plans are complete and consistent with business priorities, and addresses the impact of changes (alignment) as needed.
  • Technology investments can be managed using consistent, repeatable processes to effectively build (acquire), maintain, and retire these assets over their useful life.
  • We can enable cost efficiencies through elimination or consolidation of redundant or obsolete resources.
  • Delivering quality information, designed and architected for the enterprise as a whole, has proven time and time again to be faster and less expensive by:
    • Reducing complexity
    • Encouraging component reuse
    • Improving communications within the organization
    • Providing standardized, easy-to-use reference models and templates.
  • We can significantly improve IT and business alignment by encouraging the rapid, effective adoption of technology to meet changing business needs, improving service levels to key constituents — customers, employees, partners.
  • Encourage less expensive development and delivery by reusing and leveraging previously developed shared services and products; this results in lower maintenance and integration costs. Provide a logical construction blueprint for defining and controlling the integration of systems and components by combining organizational, data, and business processes.
  • Communicate using common terminology, standard descriptions, and generally accepted industry standards between the business and technology providers.

Delivering Value

The following section represents the majority of the business value I have helped other organizations uncover and exploit through the architecture function. We should be able to quickly discover and describe each of the opportunities we encounter within technology management alone (forget about the business for now) to include:

  • Plan to Manage (Lifecycle),
  • Manage to Availability (Protect),
  • Request to Resolve (Support), and
  • Develop to Deploy (Develop).

We should be able to tie service orientation, for example, to its impact on one or more target metrics – use a short-term tactical focus to start with and describe the impact (i.e., quantify its value; examples are included) in terms of the following (a quick Process Cycle Efficiency sketch follows the list):

  • Process Cycle Efficiency (reduce the time to close a defect by 40% within three months)
  • Time (Reduce system testing time by three weeks on the next project)
  • Accuracy (Improve schedule estimation accuracy to within 10% of actual)
  • Resources (Reduce maintenance costs by 50% within one year)
    • People
    • Money
    • Physical (e.g., physical plant, infrastructure)
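
As a quick aside on the first of these metrics: Process Cycle Efficiency is conventionally computed as value-added time divided by total lead time. A trivial sketch with purely illustrative numbers:

```java
// Process Cycle Efficiency (PCE) = value-added time / total lead time.
// The figures below are illustrative, not drawn from a real engagement.
public class Pce {
    public static void main(String[] args) {
        double valueAddedHours = 6.0;   // time actually transforming the work item
        double totalLeadHours = 120.0;  // elapsed time from request to resolution
        System.out.printf("PCE = %.1f%%%n", 100 * valueAddedHours / totalLeadHours); // 5.0%
    }
}
```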

The following themes represent the significant elements we should examine and evaluate for the size and relative scale of the respective opportunity that can be realized.

Communications

  • Communicate effective alignment of IS strategy
    • Clear understanding of the current and future direction
    • Ensure business focus is on the highest priority and mission critical efforts.
    • Make more informed decisions
      • Control Strategic Scope and economic intent
      • Enhance investment decision-making by giving the business ready access to information about people, processes, and technology.
      • Assess the impact and mitigate risks associated with tactical decisions.
  • Improve Service levels
  • Produce Higher Quality Products
    • Improved reliability
    • Predictability (variance reduction). Methods to deliver services and information in a consistent and structured manner.
  • Increase project success rates
  • Meet operational goals and objectives
    • Process control objectives
    • Professional staff is accountable for compliance with policy and management direction.
    • Optimize activity objectives

Efficiency

  • Shorten work cycles
    • Improve Process Cycle Efficiency
      • Shift focus to value-added activities
      • Minimize Non-Value-Added activity
      • Eliminate obsolete non-value added practices
      • Optimize Essential Non-Value-Added-But-Necessary activity
    • Schedules
    • Maintenance costs
    • Development time
  • Reduce costly rework
  • Leverage Economies of Scale – Consolidation
    • Leverage existing IS assets
    • Shared knowledge base
    • Infrastructure and Shared Services
    • Skills and resources
  • Reduce or eliminate redundant investment
  • Promote Interoperability
    • Identify opportunities for vertical and horizontal interoperability between business units.  Improve or reuse existing IS assets.
  • Improve developer productivity
    • Leverage strengths of different types of developers.
    • Preserves business logic by isolating significant changes to “plumbing”
  • Reduce Need for Customization
    • Lower labor costs and lower Total Cost of Ownership
  • Enable vendor independence
  • Leverage the benefits of Standardization
    • Use standard, reusable components and services
      • Proven – Based on field experience
      • Authoritative – Offer the best advice available
      • Accurate – Technically validated and tested
      • Actionable – Provide the steps to success
      • Relevant – Address real-world problems
    • Use common processes, methods and tools
      • Tooling
      • Wizards
      • Templates
      • Metadata, Code configuration generation

Agility

  • Adapt quickly to changing business needs that require significant requirement changes
  • Accommodate Emerging Technologies
  • Engineering Life Cycle Support
    • Streamlined Governance and Stewardship
      • Meet Process control objectives
      • Optimize activity objectives
    • Guidance for tasks such as deployment and operations
  • Prepare for New Opportunities
    • Off-shoring
    • Emerging technologies
  • Improved Speed to Market
    • Increased productivity due to the need to develop and test only code that is related to business requirements.
  • Higher levels of customer satisfaction
  • Improve communications
  • Illustrate and communicate the business of the enterprise to all stakeholders
  • Common terminology and semantics
    • Standard descriptions
    • Generally accepted industry standards and models
    • Promote Global collaboration
  • Improve Risk Mitigation Strategies
    • Leverage proven Business Reference Models
    • Identify Capacity Planning needs and impact
    • Reuse previously identified Solution Set patterns that link preferred Business, Information, and Technology Architecture Components.
    • Provide explicit linkage between stated business goals and the solution proposal (clear line-of-sight).
  • Support Innovation
    • Identify opportunities to employ innovative processes and technology.

Quality

  • Drive one or more systems to common “use” or purpose
    • Shared information is more accurate. Information is collected once and used many times, avoiding the misunderstandings and keying errors associated with multiple collection points.
    • Shared information is more timely. Information can be made available instantly rather than waiting for a separate collection effort.
    • Shared information is more complete. Information from multiple sources can be assembled into a full description.
    • Shared information is less expensive. It costs much less to store data and send it to another user than it does to collect it again.
  • Work Product Impact
    • Less variance in work products
      • Provide atomic solutions to recurring problems (Solve problems once)
      • Application frameworks standardize the way a class of problems is solved:
        • A set of classes that enforce the architecture
        • Teaching developers how to use it
        • Supporting and evolving the framework

This is how I have demonstrated real business value in my practice. I’m not sure why this is still questioned, unless our role remains a mystery (poor communications) or maybe we have simply not executed (execution failure does happen). So hopefully this has helped some of you understand, and place into “business terms”, the kind of value we should be delivering to our business peers. And more important to all of us in the profession is how we can define an effective value proposition whose benefits can be realized in a sustained, consistent, and repeatable manner. Thanks for listening…

Design Goals

In my last post (Wide open spaces) we discussed the elegance of space-based architecture platforms, grounded in their simplicity and power. Compared to other models for developing distributed applications, this one offers simpler design, savings in development and debugging effort, and more robust results that are easier to maintain and integrate. Recall that this model combines and integrates distributed caching, content-based distributed messaging, and parallel processing into a powerful architecture within a grid computing framework.

That was a mouthful. You may want to read that last sentence again carefully, and think about what this means to you as a professional practitioner. More importantly, consider how it may change the way you think about application platforms in general.

Before diving into this important concept, I think it is always a good idea to state our design goals right up front – and use them to guide the inevitable trade-offs and decisions that will need to be made along this journey. So let’s get started with a few design goals I’m comfortable with. I’m sure there are more, but this represents a good start.

The platform’s ability to scale must be completely transparent

The architecture should be based on technology that can be deployed across a grid of commodity hardware nodes, providing a scalable and adaptable platform that supports high-volume, high-performance processing. The resulting platform should tolerate failure in individual nodes, match changing volumes easily by increasing (or decreasing) the number of processing nodes, and, by virtue of its decoupled business logic, remain extensible and adaptable as the business landscape changes.

Unlike conventional application server models, our elastic application platform should not require application developers to do anything different in their code in order to scale. The developer uses a simple API that provides a vast key-value data store that looks like a large shared memory space. Under the covers, the distributed caching features of the application platform spread the data across multiple servers (e.g., using a sophisticated hash algorithm). The application developer should remain unaware of the underlying implementation that distributes the data across the servers on his behalf. In brief, the grid-enabled middleware is designed to hide the complexities of partitioning, distribution, and load balancing.
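
A minimal sketch of what that transparency means to the developer follows – not any vendor’s API, just the shape of the idea: the application sees an ordinary key-value store, while routing across nodes stays behind the interface.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionedStore<K, V> {
    private final List<Map<K, V>> nodes = new ArrayList<>(); // each map stands in for a server node

    public PartitionedStore(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
    }

    private Map<K, V> route(K key) {
        // Simplistic routing for illustration; real platforms use consistent hashing
        // so that adding a node relocates only a fraction of the keys.
        return nodes.get(Math.floorMod(key.hashCode(), nodes.size()));
    }

    public void put(K key, V value) { route(key).put(key, value); }
    public V get(K key) { return route(key).get(key); }

    public static void main(String[] args) {
        PartitionedStore<String, String> store = new PartitionedStore<>(4);
        store.put("customer:42", "Jane Doe"); // caller never sees which node holds the entry
        System.out.println(store.get("customer:42"));
    }
}
```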

The platform provides resiliency by design

Applications must be available to customers, and expected service level objectives must be met. The business cannot afford to let a single point of failure impact customer access to the features and functions of the application suite that would otherwise be available. The platform should operate continuously and needs to be highly resilient to avoid any interruption in processing. This means that the application suite cannot have any single point of failure in the software, hardware, or network. High Availability (HA) is a basic requirement. Failing services and application components will continue on different backup servers, without service disruption.

Distributed data caches are resilient by design because they automatically replicate data stored in the cache to one or more backup servers, guided by policies defined by an administrator and executed in a consistent, controlled manner. If one server fails, then another server provides the data (the more replicas, the more resilient the cache). Note that distributed data caches can be vulnerable to data center outages if all the compute servers are located in the same physical data center. To address this weakness, the distributed caching mechanism should offer special WAN features to replicate and recover data across multiple physical locations. The improvement in resilience reduces the risk of expensive system down-time due to hardware or software failure, allowing the business to continue operating, albeit with reduced performance, during partial outages. An added benefit of an architecture composed of discrete units working together is that it enables rapid development and a controlled introduction of new features in response to changing requirements, without the need for a big-bang rollout approach.
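
Here is a toy sketch of the replication idea, with two in-process maps standing in for servers (real platforms replicate per administrator-defined policy, often asynchronously and across WANs):

```java
import java.util.HashMap;
import java.util.Map;

public class ReplicatedCache<K, V> {
    private final Map<K, V> primary = new HashMap<>();
    private final Map<K, V> backup = new HashMap<>();
    private boolean primaryUp = true;

    public void put(K key, V value) {
        primary.put(key, value);
        backup.put(key, value); // synchronous replication, for simplicity
    }

    public V get(K key) {
        return primaryUp ? primary.get(key) : backup.get(key); // fail over on outage
    }

    void failPrimary() { primaryUp = false; } // simulate a node failure

    public static void main(String[] args) {
        ReplicatedCache<String, String> cache = new ReplicatedCache<>();
        cache.put("session:9", "active");
        cache.failPrimary();
        System.out.println(cache.get("session:9")); // still served, from the backup
    }
}
```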

The platform is prepared to meet demanding performance requirements

A performance characteristic of distributed caches is that they store data in fast-access memory rather than on disk, although backing store on disk may be an option. Since this data spans multiple servers, there is no bottleneck or single point of failure. Using this advanced elastic application platform provides a means to ensure that cached data will tend to be on the same server where application code is processing, reducing network latency. We can do this by implementing a “near-cache” concept that places data on the server running the application using that data or by directly managing application code execution in the platform, placing adjacent code and data in cache nodes that are on the same server.

The platform needs to support robust integration with other data sources

Most distributed caching platforms offer read-through, write-through, and write-behind features to synchronize data in the cache with external data sources. Rather than the developer having to write the code that does this, an administrator configures the cache to automatically read or write to a database or other external data source whenever an application performs a data operation in the cache. Data is a valuable asset; sharing it across the platform supports better data enrichment, improves accuracy, and helps meet business goals.
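
A compact sketch of the read-through idea, with a lambda standing in for the administrator-configured external data source (an illustration of the pattern, not any product’s configuration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for the configured backing store

    public ReadThroughCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        return cache.computeIfAbsent(key, loader); // miss -> load from the source and retain
    }

    public static void main(String[] args) {
        ReadThroughCache<Integer, String> cache =
            new ReadThroughCache<>(id -> "row-" + id + "-from-database"); // hypothetical source
        System.out.println(cache.get(7)); // loaded through the cache
        System.out.println(cache.get(7)); // served from memory
    }
}
```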

The platform’s application workload is by nature distributed

For elastic application platforms offering distributed code execution, we should consider the nature of the workload the applications will present to servers. If we can divide the workload into units that naturally fit the distribution schemes offered, then the greater sophistication of the distributed code execution capability can be just what’s needed to turn a troublesome, resource-intensive application into one that performs well and meets expectations.

Specific application responsibilities that repeat (or are redundant) across the application architecture should be separated out. Shared global or common-use application functions are sometimes referred to as “cross-cutting concerns” and forward the key principle of “separation of concerns”. The platform should support component designs that minimize coupling; the Law of Demeter (the Principle of Least Knowledge – only talk to your neighbors) applies. The platform should promote loose coupling by minimizing the following (a small decoupling sketch follows the list):

  • dependencies between modules (e.g., shared global variables)
  • content coupling (one module relying on another’s internals)
  • protocol or format dependencies
  • control-based coupling, where one program controls another’s behavior
  • non-traceable message coupling, which can lead to dynamic, spaghetti-like results that are impossible to manage
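
As promised, a small decoupling sketch: the consumer depends only on a narrow interface, never on the other module’s internals (all names here are invented for illustration):

```java
import java.util.Map;

interface PriceSource {                    // the only thing the consumer may know
    double priceFor(String sku);
}

class CatalogService implements PriceSource {
    private final Map<String, Double> prices = Map.of("SKU-123", 19.99);
    public double priceFor(String sku) { return prices.getOrDefault(sku, 0.0); }
}

class OrderService {
    private final PriceSource prices;      // no knowledge of CatalogService internals
    OrderService(PriceSource prices) { this.prices = prices; }
    double total(String sku, int qty) { return prices.priceFor(sku) * qty; }
}

public class CouplingDemo {
    public static void main(String[] args) {
        OrderService orders = new OrderService(new CatalogService());
        System.out.println(orders.total("SKU-123", 2)); // 39.98
    }
}
```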

There are other goals I have not addressed here, which we should all be familiar with, including:

  • Desire to BUY vs. Build and Maintain
  • Remain Technology and Vendor Independent
  • Promote Interoperability
  • Meet security and privacy needs

So, now we have a better idea of the design goals we are going to try to achieve. I think it is always important to take these goals to the next step in the high-level specification, in order to begin quantifying how we will meet them as actionable objectives. Remember the original strategy that has driven our design goals. The design goals should now be used to create quantifiable objectives we can plan and measure progress against.

Wide open spaces

Wide open spaces - Wyoming

Okay, okay – I know I should keep this blog more up to date; I have just been a little busy with my day job… and now, after a much needed rest in the last weeks of August, I can share a few things you may find especially interesting and timely. It is no coincidence that the image accompanying this post is of wide open spaces. This is in fact where I spent the most satisfying part of my “summer vacation”. And spaces (tuple spaces) are what I intend to share with you in the coming weeks.

As architects we have a professional responsibility to remain on the look-out for new (and sometimes revisited) ideas worth adopting – especially when our business needs to invest in key technology changes to remain competitive and deliver the distinctive quality of service and value customers will continue to seek.

I have been pretty busy in the last year, engaged in a variety of industries where road map development and execution of next-generation platforms and paradigm shifts were needed. Many of the more difficult challenges were solved by adopting the Space-Based Architecture (SBA) pattern. This is a demonstrated pattern used to achieve near-linear scalability of stateful, high-performance applications using tuple spaces. It is not a new idea; the tuple space model was developed by David Gelernter over thirty years ago at Yale University, and implementations have since been developed for Smalltalk, Java (JavaSpaces), and the .NET framework. A tuple space is an implementation of the associative memory model for parallel (distributed) computing, providing a repository of tuples that can be accessed concurrently. I know, this is a mouthful and a little too academic for me too. What it really means is that we can group processors that produce pieces of data and group processors that use the data. Producers post their data as tuples in the space, and consumers then retrieve data from the space that match a certain pattern. This is also known as the blackboard metaphor. Tuple spaces may be thought of as a form of distributed shared memory. The model is closely related to other patterns that have proved successful in addressing the application scalability challenge, used by Google and Amazon.com (EC2) for example. It has also been applied by many firms in the securities industry to implement scalable electronic securities trading applications.
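
To ground the blackboard metaphor, here is a toy tuple space in plain Java – deliberately not the JavaSpaces or GigaSpaces API, just the producer/consumer shape of the model:

```java
import java.util.Objects;
import java.util.concurrent.LinkedBlockingQueue;

public class TupleSpaceDemo {
    record Tuple(String type, String payload) {}

    static class Space {
        private final LinkedBlockingQueue<Tuple> tuples = new LinkedBlockingQueue<>();

        void write(Tuple t) { tuples.add(t); }

        // Blocking, destructive read of the first tuple matching the template
        // (null matches anything). A toy matcher; real spaces index templates.
        Tuple take(String typeTemplate) throws InterruptedException {
            while (true) {
                Tuple t = tuples.take();
                if (typeTemplate == null || Objects.equals(typeTemplate, t.type())) return t;
                tuples.add(t); // no match: put it back and keep looking
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Space space = new Space();
        new Thread(() -> space.write(new Tuple("order", "SKU-123 x 2"))).start(); // producer
        Tuple t = space.take("order"); // consumer blocks until a matching tuple arrives
        System.out.println("Consumed: " + t.payload());
    }
}
```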

Before you think I have gone daft on you, I recommend you look at a commercial implementation of this at Gigaspaces. Review the site and developer documentation and you will see how this platform embraces many of the principles of Representational State Transfer (REST), service-oriented architecture (SOA), and event-driven architecture (EDA), as well as elements of grid computing. The beauty of space-based architecture resides in its tandem of simplicity and power. Compared to other models for developing distributed applications, it offers simpler design, savings in development and debugging effort, and more robust results that are easier to maintain and integrate.

The pattern represents a model that combines and integrates distributed caching (Data Grid), content-based distributed messaging (Messaging Grid), and parallel processing (Processing Grid) into a powerful service-oriented architecture built on shared spaces within a grid computing framework. Research results and commercial use have shown that a large number of problems in parallel and distributed computing can be solved using this architecture. And the implications of its adoption extend well beyond high-performance On-Line Transaction Processing into other uses (including Master Data Management, Complex Event Processing, and Rules Processing, for example).

And this is what I intend to share with you in the coming weeks. 
Wide open spaces…

Modeling the MDM Blueprint – Part II

In part I of this series we discussed the essential elements that should be included in an MDM blueprint. The important thing to remember is that MDM is a business project that requires establishing a common set of models that can be referenced independent of the technical infrastructure or patterns you plan on using. The blueprint should remain computation and platform independent until the models are completed (and accepted by the business) to support and ensure the business intent. The essential elements should include:

– Common Information Model
– Canonical Model
– Operating Model, and
– Reference Architecture (e.g. 4+1 views, viewpoints and perspectives).

We will now turn our attention to the first element, the Common Information Model.

A Common Information Model (CIM) is defined using relational, object, hierarchical, and semantic modeling methods. What we are really developing here is a rich semantic data architecture in selected business domains, using the methods below (a tiny semantic-model sketch follows the list):

  • Object Oriented modeling 
    Reusable data types, inheritance, operations for validating data
  • Relational
    Manage referential integrity constraints (Primary Key, Foreign Key)
  • Hierarchical
    Nested data types and facets for declaring behaviors on data (e.g. think XML Schemas)
  • Semantic models
    Ontologies defined through RDF, RDFS and OWL
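
As promised, a tiny semantic-model sketch using Apache Jena (this assumes the org.apache.jena libraries are on the classpath; the vocabulary URIs are invented for illustration):

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class SemanticSketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/mdm#";

        Resource party = model.createResource(ns + "party42");
        Property hasRole = model.createProperty(ns + "hasRole");
        Property relatedTo = model.createProperty(ns + "relatedTo");

        // Relationships are first-class constructs: each statement below is a
        // triple that can be queried and reasoned over at run time.
        party.addProperty(hasRole, "Customer");
        party.addProperty(relatedTo, model.createResource(ns + "party7"));

        model.write(System.out, "TURTLE"); // serialize the graph
    }
}
```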

I believe (others may not) that MDM truly represents the intersection of relational, object, hierarchical, and semantic modeling methods to achieve a rich expression of the reality the organization is operating in. Expressed in business terms, this model represents a “foundation principle” or theme we can pivot around to understand each facet in the proper context. This is not easy to pull off, but it provides a fighting chance to resolve semantic differences in a way that helps focus the business on the real matter at hand. This is especially important when developing the Canonical model introduced in the next step.

If you want to see what one of these looks like, visit the MDM Alliance Group (MAG). MAG is a community Pierre Bonnet founded to share MDM modeling procedures and prebuilt data models. The MDM Alliance Group publishes a set of prebuilt data models that include the usual suspects (Location, Asset, Party, Party Relationship, Party Role, Event, Period [Date, Time, Condition]), downloadable from the website, along with some more interesting models like Classification (Taxonomy) and Thesaurus, organized across three domains. Although we may disagree about the “semantics”, I do agree with him that adopting this approach can help us avoid setting up siloed reference databases “…unfortunately often noted when using specific functional approaches such as PIM (Product Information Management) and CDI (Customer Data Integration) modeling”. How true – and a very common issue I encounter.

Another good example is the CIM developed over the years at the Distributed Management Task Force (DMTF). You can get the CIM V2.20 Schema MOF, PDF, and UML at their web site and take a look for yourself. While this is not what most of us think of as MDM, they are solving some of the same problems and challenges we face.

Even more interesting is what is happening in semantic technology. Building semantic models (ontologies) includes many of the same concepts found in the other modeling methods we have already discussed, but further extends the expressive quality we often need to fully communicate intent. For example:

– Ontologies can be used at run time (queried and reasoned over).
– Relationships are first-class constructs.
– Classes and attributes (properties) are set-based and dynamic.
– Business rules are encoded and organized using axioms.
– XML schemas are graphs, not trees, and are used for reasoning.

If you haven’t been exposed to ontology development, I encourage you to grab the open source Protégé Ontology Editor and discover for yourself what this is all about. And while you are there, see the Protégé Wiki and grab the Federal Enterprise Architecture Reference Model Ontology (FEA-RMO) for an example of its use in the EA world. Or see the set of tools found at the Essential project, which uses Protégé to enter model content, based on a pre-built model. While you are at the Protégé Wiki, grab some of the ontologies developed for use with this tool for other examples, such as the SWEET ontologies (a Semantic Web for Earth and Environmental Terminology; source: Jet Propulsion Laboratory). For more on this, see my post on this tool at Essential Analytics. This is an interesting and especially useful modeling method to be aware of and an important tool to have at your disposal.

This is hard, challenging work. Doing anything worthwhile usually is. A key differentiator, and the difference between success and failure on your MDM journey, will be taking the time to model the blueprint and share this work early and often with the business. We will discuss the second element of the MDM blueprint, the Canonical model, in part III. I encourage you to participate and share your professional experience.

Modeling the MDM Blueprint – Part I

Several practitioners have contributed to this complex and elusive subject (see Dan Power’s Five Essential Elements of MDM and CDI, for example) and have done a good job of elaborating the essential elements. There is one more element that is often overlooked in this field and remains a key differentiator – the difference between success and failure among the major initiatives I have had the opportunity to witness firsthand: modeling the blueprint for MDM.

This is an important first step to take, assuming the business case is completed and approved. It forces us to address the very real challenges up front, before embarking on a journey that our stakeholders must understand and support in order to succeed. Obtaining buy-in and executive support means we all share a common vision for what we are solving for.

MDM is more than maintaining a central repository of master data. The shared reference model should provide a resilient, adaptive blueprint to sustain high performance and value over time. An MDM solution should include the tools for modeling and managing business knowledge of data in a sustainable way. This may seem like a tall order, but consider the implications if we focus on the tactical and exclude the reality of how the business will actually adopt and embrace all of your hard work. Or worse, asking the business to stare at a blank sheet of paper and expecting them to tell you how to rationalize and manage the integrity rules connecting data across several systems, eliminate duplication and waste, and ensure an authoritative source of clean, reliable information that can be audited for completeness and accuracy. Still waiting?

So what is in this blueprint?

The essential thing to remember is that the MDM project is a business project that requires establishing a common information model that applies whatever the technical infrastructure or patterns you plan on using may be. The blueprint should remain computation and platform independent until the Operating Model is defined (and accepted by the business), and a suitable Common Information Model (CIM) and Canonical model are completed to support and ensure the business intent. Then, and only then, are you ready to tackle the Reference Architecture.

The essential elements should include:
– Common Information Model
– Canonical Model
– Operating Model, and
– Reference Architecture (e.g. 4+1 views).

I will be discussing each of these important and necessary components of the MDM blueprint in the following weeks, and I encourage you to participate and share your professional experience. Adopting and succeeding at Master Data Management is not easy, and jumping into the “deep end” without truly understanding what you are solving for is never a good idea. Whether you are a hands-on practitioner, program manager, or executive planner, I can’t emphasize enough how critical modeling the MDM blueprint and sharing it with the stakeholders is to success. You simply have to get this right before proceeding further.

The Business Case for Enterprise Architecture

We have the frameworks. Plenty of them: Zachman, IEEE STD 610.12, FEAF (the Federal Enterprise Architecture Framework), and now an updated TOGAF (The Open Group Architecture Framework), which you can find more about at http://www.opengroup.org/togaf.

We know what to do. All of us know what should be done and have a pretty good idea of how to do it. For example, see what the GAO has institutionalized in their view of our profession in the CIO Council publication “A Practical Guide to Federal Enterprise Architecture” (February 2001, Version 1.0).

Why do it?
Now comes the hard part: how about a business case for investing in this discipline, expressed in terms ordinary business people can understand? I have yet to find (and I have looked) a really compelling piece of work that can quickly express what impact EA would have across the five essential elements every business leader understands and manages on a daily basis:

– cash flow
– margin (profitability)
– velocity
– growth
– customer (responsiveness, intimacy, and alignment)

I’m guessing this is because we are all technicians at heart. I have also come to believe this is essential to having our peers in the business understand our value proposition. I remain convinced this discipline can (done properly) enable significant improvements in each of our organizations. Any thoughts or suggestions are welcome; I encourage all of us in this profession to evaluate and think carefully about what our real value is – as business people first, then as trusted advisors and partners to our business and IT colleagues alike.