Big Data Analytics – Unlock Breakthrough Results: (Step 3)

In this step I will dive deeper into defining the critical capabilities used across the four operating models discussed in an earlier post (Big Data Analytics – Unlock Breakthrough Results: Step 2). This may be a little boring for many and just a little too detailed for a medium like this. I believe it is important to always define your terms and create a controlled vocabulary so there is less chance of friction or ambiguity in the decision model we will be developing. It may seem old-fashioned and a little out of date in a world where infographics and sound bites are the preferred delivery medium. So, at the risk of boring many, I’m going to put this baseline out there and reference it later when needed.
Capability Defined
A capability is the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services. Capability describes “the what” of the activity, but not necessarily the how.  Achieving success with big data means leveraging its capability to transform raw data into intelligence that yields true actionable insight.  Big data is part of a much larger ecosystem and should not be viewed as a stand-alone solution independent of the other platforms available to the analytic community. The other platforms should be used to expand and amplify what is uncovered in big data, using each of their respective strengths.

Thanks to Gartner, which published Critical Capabilities for Business Intelligence and Analytics Platforms this summer (12 May 2015, ID: G00270381), we have a reasonably good way to think about form and function across the different operating models, which Gartner refers to in their work as baseline use cases. Recall that across any analytic landscape (including big data) we are most likely to encounter one or more of the four operating models:

– Centralized Provisioning,
– Decentralized Analytics,
– Governed Data Discovery, and
– OEM/Embedded Analytics.

This seems to be a sensible way to organize the decision model by describing the fifteen (15) groups of critical capabilities when comparing or seeking platform and tool optimization. The baseline used includes the following capability groups:

– Traditional Styles of Analysis
– Analytic Dashboards and Content
– IT-Developed Reports and Dashboards
– Platform Administration
– Metadata Management
– Business User Data Mash-up
– Cloud Deployment
– Collaboration and Social Integration
– Customer Service
– Development and Integration
– Ease of Use
– Embedded Analytics
– Free Form Interactive Exploration
– Internal Platform Integration
– Mobile

There are other ways to view capability. See Critical Capabilities for Enterprise Data Science, written by Dr. Jerry Smith, which addresses Data Science in depth. His work represents a significant deep dive into data science and a refinement of capability expressed at a much more granular level than I suggest here. The purpose of this effort is to organize and quantify which capabilities within each operating model are more important than the others, weighting their relative importance in satisfying need. In this step we are simply starting a baseline. We can refine the critical analytic capabilities from this baseline to meet site-specific needs before moving on to the weighting in the next step.

Note: the weights used in this example are based on the Gartner work referred to above. I have changed the metadata weighting to reflect my experience, and will leave the balance of the work to the next step, after you have tailored this baseline to your environment and are ready to apply your own weightings.

 

Capability Diagram

We have already seen there are very different needs required for each of the models presented. A set of tools and platforms which are ideal for Centralized Provisioning may be completely unsuited for use within a Decentralized operating model.  Critical capability essential to Embedded Analytic is very different from Governed Data Discovery.  And of course there are some essential capabilities that will be shared across all operating models (e.g. metadata).

As the decision model is introduced and developed in later steps, the data points for each can be used to develop quick snapshots and quantitative indexes when evaluating for form and function.  I know this seems like a lot of work. Once completed, you can always leverage this effort when solving what can seem like a bewildering array of choices and implications. Think of this as a way to introduce a taxonomy and a controlled vocabulary so all interested stakeholders have a way to discuss and think about each choice in a meaningful way. The following descriptions and characteristics of each of the fifteen (15) critical capabilities are presented to add additional context.
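
To make the idea of a quantitative index concrete, here is a minimal Python sketch of a weighted capability score for a single operating model. The capability weights and platform ratings below are placeholders I made up for illustration; they are not values from the Gartner research or from the baseline above.

# A minimal sketch: weighted capability score for one platform under one operating model.
# All weights and ratings are illustrative placeholders only.

weights = {  # relative importance of each capability for, say, Centralized Provisioning
    "Metadata Management": 0.20,
    "Platform Administration": 0.15,
    "IT-Developed Reports and Dashboards": 0.10,
}

ratings = {  # how well a candidate platform satisfies each capability, rated 0-5
    "Metadata Management": 4,
    "Platform Administration": 5,
    "IT-Developed Reports and Dashboards": 3,
}

def weighted_score(weights, ratings):
    """Return a single index, normalized back to the 0-5 rating scale."""
    total_weight = sum(weights.values())
    return sum(weights[c] * ratings.get(c, 0) for c in weights) / total_weight

print(f"Centralized Provisioning index: {weighted_score(weights, ratings):.2f}")

The same calculation repeated per operating model gives the quick snapshot described above, and changing the weights is all that is needed to tailor the baseline to your own environment.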

IT-Developed Reports and Dashboards
Provides the ability to create highly formatted, print-ready and interactive reports, with or without parameters. IT-authored or centrally authored dashboards are a style of reporting that graphically depicts performance measures. This includes the ability to publish multi-object, linked reports and parameters with intuitive and interactive displays; dashboards often employ visualization components such as gauges, sliders, check boxes and maps, and are often used to show the actual value of the measure compared with a goal or target value. Dashboards can represent operational or strategic information. Most often found in the Centralized Provisioning and OEM/Embedded Analytics models. Key characteristics and functions to recognize include:

– Production reporting, distribution and printing
– Parameterization, filters, prompts
– Report and dashboard navigation and guided navigation
– Design environment and document layout
– Visual components, such as gauges, sliders, dials, check boxes

Analytic Dashboards and Content
The ability to create highly interactive dashboards and content with visual exploration and embedded advanced and geospatial analytics to be consumed by others. Key features and functions include:

– Information visualizations
– Disconnected exploration
– Embedded advanced analytics
– Geospatial and location intelligence
– Content authoring
– Consumer interactivity and exploration

While this is an important capability found across all operating models, it is most important to Decentralized Analytics and Governed Discovery success.

Traditional Styles of Analysis
Ad hoc query enables users to ask their own questions of the data, without relying on IT to create a report. In particular, the tools must have a reusable semantic layer to enable users to navigate available data sources, predefined metrics, and hierarchies. Online analytical processing (OLAP) enables users to analyze data with fast query and calculation performance, enabling a style of analysis known as “slicing and dicing.” Users are able to navigate multidimensional drill paths. They also have the ability to write-back values to a database for planning and “what if?” modeling. This capability could span a variety of data architectures (such as relational, multidimensional or hybrid) and storage architectures (such as disk-based or in-memory). This capability is most often realized through:

– OLAP, and
– Ad hoc queries.

Most often found in the Centralized Provisioning model, it can be useful in Governed Discovery as well.

Platform Administration
Provides the capability to secure and administer users, scale the platform, optimize performance and ensure high availability and disaster recovery. These capabilities should be common across all platform components. This capability includes:

– Architecture
– Security
– User administration
– Scalability and performance
– High availability and disaster recovery

Almost always found in the Centralized Provisioning and to a lesser extent OEM/Embedded Analytics models.

Business User Data Mashup
“Drag and drop,” user-driven data combination of different sources and the creation of analytic models to include user-defined measures, sets, groups and hierarchies. Advanced capabilities include semantic autodiscovery, intelligent joins, intelligent profiling, hierarchy generation, data lineage and data blending on varied data sources, including multistructured data. Features to identify related to this capability include:

– Business user data mashup and joins
– Business-user-defined calculations, grouping
– Data inference
– Data profiling and enrichment
– Business user data lineage

This capability group is important to Decentralized Analytics and Governed Discovery models.

Cloud Deployment
Platform as a service and analytic application as a service capabilities for building, deploying and managing analytics and analytic applications in the cloud, based on data both in the cloud and on-premises. Expect the features and functions within this group to include:

– Built-in data management capabilities (including data integration and data warehouse)
– Special-purpose connectors to cloud-based data sources
– Direct connect for both cloud and on-premises data sources (hybrid)
– Packaged content
– Self-service administration
– Self-service elasticity

This capability is most important in Decentralized Analytics, Governed Discovery, and Embedded models.

Collaboration and Social Integration
Enables users to share and discuss information, analysis, analytic content and decisions via discussion threads, chat, annotations and storytelling. Think of this as the communication channel or collaborative workspace. In addition to analytic content and findings look for:

– Storytelling
– Discussion threads
– Integration with social platforms
– Timelines
– Sharing and real-time collaboration

This capability is most important to Decentralized Analytics and Governed Discovery models.

Customer Service
Relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, this includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups, and service-level agreements. Examine the service-level agreements (SLAs) and find out whether or not the analytic community is satisfied with them.

This capability is found across all operating models.

Development and Integration
The platform should provide a set of programmatic and visual tools and a development workbench for building reports, dashboards, queries and analysis. It should enable scalable and personalized distribution, scheduling, alerts, and workflow of content and applications via email, to a portal or to mobile devices. It should include the ability to embed and customize analytic platform components in a business process, application or portal.

– External platform integration
– Embedded Analytics
– Support for big data sources (including cloud)
– Developer productivity (APIs, SDKs, versioning, and multi-developer features)
– Scheduling and alerts
– Workflow and events

This group of capabilities is important to Centralized Provisioning and OEM/Embedded Analytics models.

Usability – Ease of Use
This is a combined grouping consisting of product quality, support, availability of skills, user support (which includes training, online videos, online communities and documentation) and migration difficulty. Closely related to Customer Service but different – this is all about the content available to the analytic community.

Important across all models, and especially critical to the success of Decentralized Analytics.

Embedded Analytics
This group of capabilities includes a software developer’s kit with APIs and support for open standards — for creating and modifying analytic content, visualizations and applications, and embedding them into a business process and/or an application or portal. These capabilities can reside outside the application, reusing the analytic infrastructure, but must be easily and seamlessly accessible from inside the application, without forcing users to switch between systems. The capabilities for integrating analytics with the application architecture will enable users to choose where in the business process the analytics should be embedded. Look for:

– Capability for embedding (APIs, open standards, SDKs, component libraries)
– Capability to consume common methods (e.g., Predictive Model Markup Language (PMML) and SAS/R-based models) in the metadata layer and in a report object or analysis application

This capability is important to the success of the OEM/Embedded Analytics model.

Free Form Interactive Exploration
This group of critical capabilities enables the exploration of data through the manipulation of chart images, with the color, brightness, size, shape and motion of visual objects representing aspects of the data set being analyzed. It includes an array of visualization options that go beyond pie, bar and line charts, such as heat maps, tree maps, geographic maps, scatter plots and other special-purpose visuals. These tools enable users to analyze the data by interacting directly with a visual representation of it. What to look for?

– Interactivity and exploration
– User experience
– Information visualizations
– Disconnected exploration
– Search-based data discovery
– Data flow
– Content authoring
– In-memory interactive analysis

This capability is most important to Decentralized Analytics and Governed Discovery models.

Internal Platform Integration
A common look and feel, installation, query engine, shared metadata, and promotability across all platform components.

– Integration with complementary analytic capabilities
– Ability to promote business-user-generated data mashups to the systems of record
– Common security model and administration application components across the platform
– Integrated semantic/metadata layer
– Integrated and common front-end tools

This capability is most important to Centralized Provisioning, Governed Discovery, and OEM/Embedded Analytic models.

Metadata Management
Platform and supporting tools used to enable users to leverage the same systems-of-record semantic model and metadata. They should provide a robust and centralized way for administrators to search, capture, store, reuse and publish metadata objects such as dimensions, hierarchies, measures, performance metrics/KPIs, report layout objects, and parameters. Administrators should have the ability to promote a business-user-defined data mashup and its metadata to the systems-of-record metadata.

– Promotability
– Data modeling
– Reuse
– Connectivity and data sources
– Data lineage and impact analysis

This capability is most important to Centralized Provisioning, Decentralized Analytics, and Governed Discovery models.

Mobile
Enables organizations to develop and deliver content to mobile devices in a publishing and/or interactive mode, and takes advantage of mobile devices’ native capabilities, such as touchscreen, camera, location awareness and natural-language query.

– Content authoring and information exploration
– Information display, interaction and context awareness
– Multi-device support
– Security and administration
– Offline mode exploration

This capability is important across all operating models.

Summary
So there it is. The fifteen (15) critical capabilities organized as a baseline to be used within each of the four (4) operating models. We are now at a point where the data points can be weighted and combined with the community profiles (this is coming in another step) to arrive at a sound approach to quantifying the data used in the upcoming decision model.

 

If you enjoyed this post, please share it with anyone who may benefit from reading it. And don’t forget to click the follow button to be sure you don’t miss future posts. I am planning on compiling all the materials and tools used in this series in one place, but am still unsure of what form and content would be best for your professional use. Please take a few minutes and let me know what form and format you would find most valuable.

Suggested content for premium subscribers: 
Big Data Analytics - Unlock Breakthrough Results: Step Three (3) 
Operating Model Mind Map (for use with Mind Jet - see https://www.mindjet.com/ for more)
Analytic Core Capability Mind Map
Enterprise Analytics Mind Map 
Analytics Critical Capability Workbooks
Analytics Critical Capability Glossary, detailed descriptions, and cross-reference
Logical Data Model (XMI - use with your favorite tool)
Reference Library with Supporting Documents

How to build a Roadmap – Define End State

An earlier post (How to Build a Roadmap) discussed the specific steps required to develop a well thought out road map. This method identified specific actions using an overall pattern ALL roadmaps should follow. The steps required to complete this work are:

1) Develop a clear and unambiguous understanding of the current state

2) Define the desired end state

3) Conduct a Gap Analysis exercise

4) Prioritize the findings from the Gap Analysis exercise into a series of gap closure strategies

5) Discover the optimum sequence of actions (recognizing predecessor – successor relationships)

6) Develop and Publish the Road Map

I have discussed a way to quickly complete step one (1), Define Current State. This post will discuss how to craft a suitable desired end state definition so we can use the results from the current state work and begin our gap analysis. Our intent is to identify the difference (delta) between where we are and what we aspire to become. I know this seems obvious (and maybe a little redundant), but this baseline is critical to identifying what needs to be accomplished to meet the challenge.

The reality of organizational dynamics and politics (we are human after all) can distort the reality we are seeking here and truly obscure the findings. I think this happens in our quest to preserve the preferred “optics”. This is especially so when trying to define our desired end state. The business will have a set of strategic goals and objectives that may not align with the views of the individuals we are collaborating with to discover what the tactical interpretation of this end state really means. We are seeking a quick, structured way to define a desired end state that can be reviewed and approved by all stakeholders when this activity gets underway.  The tactical realization of the strategy (and objectives) is usually delegated (and rightly so) to front line management. The real challenge is eliciting, compiling, and gaining agreement on what this desired end state means to each of the stakeholders. This is not an easy exercise, and it demands a true mastery of communication and facilitation skills many are not comfortable with or have not exercised on a regular basis.  A clear understanding of the complex interactions of any organization (and their unintended consequences) is critical to a clear understanding of the desired end state.

Define Desired End State

There are a couple of ways to do this. One interesting approach I have seen is to use the Galbraith Star Model as an organizational design framework. The model is developed within this framework to understand what design policies and guidelines will be needed to align organizational decision making and behavior. The Star Model includes the following five categories:

  • Strategy: Determine direction through goals, objectives, values and mission. It defines the criteria for selecting an organizational structure (for example functional or balanced Matrix). The strategy defines the ways of making the best trade-off between alternatives.
  • Structure: Determines the location of decision making power. Structure policies can be subdivided into: specialization: type and number of job specialties; shape: the span of control at each level in the hierarchy; distribution of power: the level of centralization versus decentralization; departmentalization: the basis to form departments (function, product, process, market or geography).
  • Processes: The flow of information and decision processes across the proposed organization’s structure. Processes can be either vertical through planning and budgeting, or horizontal through lateral relationships (matrix).
  • Reward Systems: Influence the motivation of organization members to align employee goals with the organization’s objectives.
  • People and Policies: Influence and define employees’ mindsets and skills through recruitment, promotion, rotation, training and development.

The preferred sequence in this design process is composed in the following order:

  • strategy;
  • structure;
  • key processes;
  • key people;
  • roles and responsibilities;
  • information systems (supporting and ancillary);
  • performance measures and rewards;
  • training and development; and
  • career paths.
Strategy Model

A typical design sequence starts with an understanding of the strategy as defined. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and processes further refine reward systems and policy. Beginning with strategy, we uncover a shared set of goals (and related objectives) to define the desired end state, organized around the following categories:

  • People/Organization considers the human side of Information Management, looking at how people are measured, motivated and supported in related activities.  Those organizations that motivate staff to think about information as a strategic asset tend to extract more value from their systems and overcome shortcomings in other categories.
  • Policy considers the message to staff from leadership.  The assessment considers whether staff is required to administer and maintain information assets appropriately and whether there are consequences for inappropriate behaviors.  Without good policies and executive support it is difficult to promote good practices even with the right supporting tools.
  • Process and Practice considers whether the organization has adopted standardized approaches to Information Management.  Even with the right tools, measurement approaches and policies, information assets cannot be sustained unless processes are consistently implemented.  Poor processes result in inconsistent data and a lack of trust by stakeholders.
  • Technology covers the tools provided to staff to properly meet their Information Management duties.  While technology on its own cannot fill gaps in the information resources, a lack of technological support makes it impractical to establish good practices.

Goal Setting

Goal setting is a process of determining what the stakeholders’ goals are, working towards them and measuring progress to plan. A generally accepted process for setting goals uses the SMART acronym (Specific, Measurable, Achievable, Realistic, and Time-bound). Each of these attributes related to the goal setting exercise is described below, with a small illustration following the list.

  • Specific: A specific goal has a much greater chance of being accomplished than a general goal.  Don’t “boil the ocean”; try to remain as focused as possible. Provide enough detail so that there is little or no confusion as to what exactly the stakeholder should be doing.
  • Measurable: Goals should be measurable so we can track progress to plan as it occurs. A measurable goal has an outcome that can be assessed either on a sliding scale (1-10), or as a hit or miss, success or failure. Without measurement, it is impossible to sustain and manage the other aspects of the framework.
  • Achievable: An achievable goal has an outcome that is realistic given the organization’s capability to deliver with the necessary resources and time. Goal achievement may be more of a “stretch” if the outcome is more difficult to begin with. Is what we are asking of the organization possible?
  • Realistic: Start small and remain sharply focused on what the organization can and will do, and let the stakeholders experience the joys of meeting their goals.  Gradually increase the intensity of the goal after having a discussion with the stakeholders to redefine it.  Is our goal realistic given the budget and timing constraints?  If not, then we might want to redefine the goal.
  • Time-bound: Set a timeframe for the goal: for next quarter, in six months, by one year. Setting an end point for the goal gives the stakeholders a clear target to achieve.  Planning follow-up should occur within a six-month period (best practice), but may occur within a one-year period or sooner based on progress to plan.
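
As a small illustration of capturing goals in this form, the Python sketch below records a goal with a measurable baseline, target, and due date and computes progress to plan. The goal, figures, and field names are hypothetical examples, not part of any particular method or tool.

from dataclasses import dataclass
from datetime import date

@dataclass
class SmartGoal:
    description: str   # Specific: what exactly is to be accomplished
    metric: str        # Measurable: the number we will track
    baseline: float
    target: float
    due: date          # Time-bound: when the goal must be met

    def progress_to_plan(self, current: float) -> float:
        """Fraction of the planned improvement achieved so far (0.0 to 1.0)."""
        planned = self.target - self.baseline
        achieved = current - self.baseline
        return 0.0 if planned == 0 else max(0.0, min(1.0, achieved / planned))

# Hypothetical example: reduce average defect-closure time from 10 days to 6 days.
goal = SmartGoal("Reduce defect closure time", "average days to close", 10.0, 6.0, date(2016, 6, 30))
print(f"Progress to plan: {goal.progress_to_plan(8.0):.0%}")  # halfway to the planned reduction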

Defining the desired end state is accomplished through a set of questions used to draw participants into the process to meet our SMART objectives.  This set of questions is compiled, evaluated, and presented in a way that is easy to understand. Our goal here is to help everyone participating in the work immediately grasp where the true gaps or shortcomings exist and why, once we get to step three (3), the gap analysis phase.  This holds whether we are evaluating information strategy, our readiness to embrace a SOA initiative, or the launch of a new business initiative. We can complete the design process using a variety of tools and techniques; I have used IDEF, BPMN and other process management methods and tools (including RASIC charts describing roles and responsibilities, for example). Whatever tools you elect to use, they should effectively communicate intent and be used to validate changes with the stakeholders who must be engaged in this process.

Now this is where many of us come up short.  Where do I find the questions to help drive SMART goals? How do I make sure they are relevant? What is this engine I need to compile the results? And how do I quickly compile the results dynamically and publish them for comment every time I need to?

One of the answers for me came a few years ago when I first saw the MIKE 2.0 quick assessment engine for Information Maturity. The Information Maturity (IM) Quick Scan is the MIKE 2.0 tool used to assess current and desired Information Maturity levels within an organization. This survey instrument is broad in scope and is intended to assess enterprise capabilities as opposed to focusing on a single subject area. Although this instrument focuses on Information Maturity I realized quickly I had been doing something similar for years across many other domains. The real value here is in the open source resource you can use to kick start your own efforts.  I think it is also a good idea to become familiar with the benchmarks and process classification framework the American Productivity and Quality Center (APQC) has made available for a variety of industries. The APQC is a terrific source for discovering measures and quantifiable metrics useful for meeting the need for specific, measurable objectives to support the end state definition.

How it Works

The questions in the quick scan are organized around six (6) key groups in this domain: Organization, Policy, Technology, Compliance, Measurement, and Process/Practice.  The results are tabulated based on responses (in the case of the MIKE 2.0 template) ranging from zero (0 – Never) to five (5 – Always).  Of course you can customize the responses; the real point is that we want to quantify the responses received.  The engine component takes the results, builds a summary, and produces accompanying tabs where radar graph plots present the Framework, Topic, Lookup, # Questions, Total Score, Average Score, and Optimum within each grouping.  The MS Word document template then links to this worksheet and grabs the values and radar charts produced to assemble the final document. If all this sounds confusing, please grab the templates and try them for yourself.

Define Desired End State Model

Out of the box, the groupings (and related sub-topics) are organized into the following perspectives:

  • Compliance
  • Measurement
  • People/organization
  • Policy
  • Process/Practice
  • Technology

Each of these perspectives is summarized and combined into a MS Word document to present to the stakeholders.  The best part of this tool is that it can be used periodically to augment quantitative measures (captured in a dashboard, for example) to assess progress to plan and the improvement realized over time. Quantifying improvement quickly is vital to the continued adoption of change. Communicating the results to stakeholders in a quick, easy-to-understand format they are already familiar with is just as important, using the same consistent, repeatable tool we used to define the current state.

Results

I think you can see this is a valuable way to reduce complexity and to gather, compile, and present a fairly comprehensive view of the desired end state of the domain in question. Armed with this view we can now proceed to step three (3) and begin to conduct the Gap Analysis exercise. The difference (delta) between these two (current and desired end state) becomes the basis for our road map development.  I hope this has answered many of the questions about step two (2), Define End State. This is not the only way to do this, but it has become one of the most consistent and repeatable methods I’m aware of for defining a desired end state quickly in my practice.  Understanding the gaps between the current and desired end state across the business, information, application, and technical architecture makes development of a robust solution delivery road map possible.

How to build a Roadmap – Define Current State

Introduction

In an earlier post (How to Build a Roadmap) I discussed the specific steps required to develop a defensible, well thought out road map that identifies specific actions using an overall pattern all roadmaps should follow; the steps required to complete this work are listed in that post.

I have received a lot of questions about step one (1), which is understandable given the lack of detail about just how to quickly gather real quantifiable objectives and potential functional gaps. In the interest of simplicity, and in an attempt to keep the length of the prior post manageable, specific details about how to do this were omitted.

This post will provide a little more exposition and insight into one method I have found useful in practice. Done well, it can provide an honest and objective look in the mirror to successfully understand where we truly are as an organization and face the uncomfortable truth in some cases where we need to improve.  The reality of the organizational dynamic and politics (we are human after all) can distort the reality we are seeking here and truly obscure the findings. I think this happens in our quest to preserve the preferred “optics” without an objective and shared method all stakeholders are aware of and approve before embarking down this path. In the worst case, if left to the hands of outside consultants alone or in the hands of an unskilled practitioner we risk creating more harm than good before even starting. This is why I will present a quick, structured way to gather and evaluate current state that can be reviewed and approved by all stakeholders before the activity even gets underway.  Our objective is to develop a clear and unambiguous understanding of the current state. We should have a formal, well understood way to gather and evaluate the results.

Define Current State

First, we need to have a shared, coherent set of relevant questions we can use to draw participants into the process, with answers that can be quantified.  This set of questions should be able to be compiled, evaluated, and presented in a way which is easy to understand. Everyone participating in the work should be able to immediately grasp where the true gaps or shortcomings exist and why this is occurring.  This is true whether we are evaluating information strategy, our readiness to embrace a SOA initiative, or the launch of a new business initiative. So, we need just a few key components.

  • A pool or set of relevant questions that can be answered quickly by the participants, with results that can be quantified
  • An engine to compile the results
  • A quick way to compile and summarize the results for distribution to the stakeholders

Now this is where many of us come up short.  Where do I find the questions, and how do I make sure they are relevant? What is this engine I need to compile the results? And how do I quickly compile the results dynamically and publish them for comment every time I need to? One of the answers for me came a few years ago when I first saw the MIKE 2.0 quick assessment engine for Information Maturity.

The Information Maturity (IM) Quick Scan is the MIKE 2.0 tool used to assess current and desired Information Maturity levels within an organization. This survey instrument is broad in scope and is intended to assess enterprise capabilities as opposed to focusing on a single subject area.  Although this instrument focuses on Information Maturity I realized quickly I had been doing something similar for years across many other domains. The real value here is in the open source resource you can use to kick start your own efforts.

So what does this do?

I’m going to focus on the MIKE 2.0 tools here because they are readily available to you. The MS Office templates you need can be found at http://mike2.openmethodology.org/wiki/QuickScan_MS_Office_survey.

Extending these templates into other subject areas is pretty simple once you understand how they work. The basic premise remains the same; it really is just a matter of injecting your own subject matter expertise and organizing the results in a way that makes sense to you and your organization.  So here is what you will find there.

First a set of questions organized around the following categories:

  • People/Organization considers the human side of Information Management, looking at how people are measured, motivated and supported in related activities.  Those organizations that motivate staff to think about information as a strategic asset tend to extract more value from their systems and overcome shortcomings in other categories.
  • Policy considers the message to staff from leadership.  The assessment considers whether staff are required to administer and maintain information assets appropriately and whether there are consequences for inappropriate behaviors.  Without good policies and executive support it is difficult to promote good practices even with the right supporting tools.
  • Technology covers the tools that are provided to staff to properly meet their Information Management duties.  While technology on its own cannot fill gaps in the information resources, a lack of technological support makes it impractical to establish good practices.
  • Compliance surveys the external Information Management obligations of the organization.  A low compliance score indicates that the organization is relying on luck rather than good practice to avoid regulatory and legal issues.
  • Measurement looks at how the organization identifies information issues and analyses its data.  Without measurement, it is impossible to sustainably manage the other aspects of the framework.
  • Process and Practice considers whether the organization has adopted standardized approaches to Information Management.  Even with the right tools, measurement approaches and policies, information assets cannot be sustained unless processes are consistently implemented.  Poor processes result in inconsistent data and a lack of trust by stakeholders.

The templates include an engine to compile the results and a MS Word document template to render and present the results. Because the package is based on MS Office, the Assessment_Questions.xlsx, Assessment_Engine.xlsx, and Assessment_Report.docx files are linked (the links are not relative; they are hardcoded to look for the linked files in the c:\assessment folder – yikes!), so you open and score Assessment_Questions first, then Assessment_Engine picks up these values and creates a nice tabbed interface and charts across all six subject areas. The Word document picks this up further and creates the customized report.

You can extend this basic model to include your own relevant questions in other domains (for example ESB or SOA related topics, or Business Intelligence).  We are going to stick with the Information Maturity quick scan for now. Note I have extended a similar model to include SOA Readiness, BI/DW, and Business Strategy assessments.

How it Works

The questions in the quick scan are organized around six (6) key groups in this domain: Organization, Policy, Technology, Compliance, Measurement, and Process/Practice.  The results are tabulated based on responses (in the case of the MIKE 2.0 template) ranging from zero (0 – Never) to five (5 – Always).  Of course you can customize the responses; the real point is that we want to quantify the responses received.

The engine component takes the results, builds a summary, and produces accompanying tabs where radar graph plots present the Framework, Topic, Lookup, # Questions, Total Score, Average Score, and Optimum within each grouping.  The MS Word document template then links to this worksheet and grabs the values and radar charts produced to assemble the final document. If all this sounds confusing, please grab the templates and try them for yourself.
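
For readers who prefer code to spreadsheets, here is a minimal Python sketch of the same tabulation logic: responses scored from 0 (Never) to 5 (Always) are summed per group and compared with the optimum. The group names and sample responses are made-up examples, not content from the MIKE 2.0 templates.

# Illustrative responses per group, each answer scored 0 (Never) to 5 (Always).
responses = {
    "People/Organization": [3, 2, 4, 3],
    "Policy": [1, 2, 2],
    "Measurement": [0, 1, 2, 1],
}

MAX_SCORE = 5  # "Always"

for group, scores in responses.items():
    total = sum(scores)
    optimum = MAX_SCORE * len(scores)
    average = total / len(scores)
    print(f"{group}: questions={len(scores)} total={total} "
          f"average={average:.1f} optimum={optimum} ({total / optimum:.0%} of optimum)")

Plotting each group’s score against its optimum is essentially the summary the spreadsheet’s radar charts present per grouping.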

Define Current State Model

Each of these six (6) perspectives is then summarized and combined into a MS Word document to present to the stakeholders.

Results

I think you can see this is a valuable way to reduce complexity and to gather, compile, and present a fairly comprehensive view of the current state of the domain (in this case Information Management maturity) in question. Armed with this quantified information we can now proceed to step three (3) and conduct a Gap Analysis exercise based on our understanding of the desired end state. The delta between these two (current and desired end state) becomes the basis for our road map development.  I hope this has answered many of the questions about step one (1), Define Current State. This is not the only way to do this, but it has become one of the most consistent and repeatable methods I’m aware of for quickly defining the current state in my practice.

The Architecture Value Proposition

A lot of discussions are flying around on the Enterprise Architecture boards about our role (or lack thereof) and the need to demonstrate value. Some have even questioned why this is so (after all, we all know the value of accountants, right?).  Do all EA professionals have an identity crisis?  It seems so, and I decided to share this post to clear the air a bit and offer some thoughts on the kind of value we should deliver to our business peers.  If you are impatient with this introduction, just scroll down to the “money” section for a quick taxonomy (not complete by any measure) of what our value proposition should look like – at a minimum, the kinds of value we should find in our day-to-day work and share with our business peers on a regular basis. If this sounds like a lot of foo-foo to you, get ready to be challenged on a regular basis – and to spend an inordinate amount of time and effort justifying your paycheck, and less time solving the important and urgent challenges we have been asked to help with.

First, how about a little context?  I believe (and I think you will too) that architecture is a comprehensive framework of well understood processes used to manage and align an organization’s IS assets with:

  • People and Organization (Structure)
  • Processes, and
  • Technology used to meet specific, actionable strategic business goals.

For example, reference the Galbraith Star Model.
A typical design sequence (in my world) starts with an understanding of the strategy as defined by the business. This in turn drives the organizational structure. Processes are based on the organization’s structure. Structure and processes define the implementation of reward systems and people policy.

One way to understand our true role in architecture management is this: we ensure technology investment and widespread adoption within the organization occur because:

  • The investment has value,
  • The program (projects) will be properly managed,
  • The organization has the capability to deliver the benefits,
  • Dedicated resources are working on the highest value opportunities,
  • Projects with inter-dependencies are undertaken in the optimum sequence.

Actionable objectives this function (architecture) enables include:

  • Driving costs (not shifting them) out of the business process meta-cycles and improving value-added efficiency across operating models
    • Cycle time reduction
    • Error reduction
    • Resource liberation
    • Cost reduction
    • Cost avoidance
  • Turning service portfolio investment into an operational variable cost which can relate directly to business volume
    • The business should only pay for what they use…
    • Can budget for services on a per usage basis
    • Use Strategic decision-making framework to:
      • manage cascading goals to evaluate corporate or shared goals along a vertical and horizontal axis to mitigate alignment issues
      • use functional views to focus IS investment choices
    • Realizing economic value through business process improvement as a:
      • Vehicle to drive measurable performance improvement:
        • Reduces defects embedded at the design stage of the lifecycle by applying architectural principles
        • Uses a problem solving approach upstream, reducing operational rework costs.
    • Enable process optimization through continuous, complementary investment in reengineering and organizational development.

Our work then is to develop and promote management’s use of architecture to drive costs out of the organization and continue to meet stakeholder demands for distinctive, quality products and services.  The value proposition we bring should include detailed findings and quantitative methods to uncover how to accomplish this to meet the following objectives:

  • Improved planning helps to make more informed decisions. Ensures project plans are complete and consistent with business priorities and addresses the impact of changes (alignment) as needed.
  • Technology investments can be managed using consistent, repeatable processes to effectively build (acquire), maintain, and retire these assets over their useful life.
  • We can enable cost efficiencies through elimination or consolidation of redundant or obsolete resources.
  • Delivering quality information, designed and architected for the enterprise as a whole, has proven time and time again to be faster and less expensive by:
    • Reducing complexity
    • Encouraging component reuse
    • Improving communications within the organization
    • Providing standardized, easy-to-use reference models and templates.
  • We can significantly improve IT and business alignment by encouraging the rapid adoption of technology to meet changing business needs in an effective manner. Improved service levels to key constituents — customers, employees, partners.
  • Encourage less expensive development and delivery by reusing and leveraging previously-developed shared services and products. Results in lower maintenance and integration costs. Provide a logical construction blueprint for defining and controlling the integration of systems and components by combining organizational, data, and business processes.
  • Communicate using common terminology, standard descriptions, and generally accepted industry standards between the business and technology providers.

Delivering Value

The following section represents the majority of the business value I have helped other organizations uncover and exploit through the architecture function. We should be able to quickly discover and describe each of the opportunities we encounter within technology management alone (forget about the business for now) to include:

  • Plan to Manage (Lifecycle),
  • Manage to Availability (Protect),
  • Request to Resolve (Support), and
  • Develop to Deploy (Develop).

We should be able to tie service orientation, for example, to its impact on one or more target metrics – start with a short-term tactical focus and describe the impact (i.e., quantify its value; examples are included, and a small worked example follows the list) in terms of:

  • Process Cycle Efficiency (reduce the time to close a defect by 40% within three months)
  • Time (Reduce system testing time by three weeks on the next project)
  • Accuracy (Improve schedule estimation accuracy to within 10% of actual)
  • Resources (Reduce maintenance costs by 50% within one year)
    • People
    • Money
    • Physical (e.g., physical plant, infrastructure)
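
As a worked illustration of quantifying one of these targets, the short Python sketch below checks a defect-closure-time reduction against a 40% target and computes process cycle efficiency (value-added time divided by total cycle time). All of the figures are hypothetical.

# Hypothetical before/after figures for one tactical target.
baseline_days, current_days = 10.0, 5.5          # average days to close a defect
reduction = (baseline_days - current_days) / baseline_days
print(f"Defect closure time reduced by {reduction:.0%} (target: 40%) -> "
      f"{'met' if reduction >= 0.40 else 'not met'}")

# Process cycle efficiency: value-added time as a share of total elapsed cycle time.
value_added_hours, total_cycle_hours = 6.0, 48.0
pce = value_added_hours / total_cycle_hours
print(f"Process cycle efficiency: {pce:.0%}")

The point is not the arithmetic; it is that every claimed benefit should be expressible as a measured change against an agreed baseline.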

The following themes represent the significant elements we should examine and evaluate for the size and relative scale of the respective opportunity that can be realized.

Communications

  • Communicate effective alignment of IS strategy
    • Clear understanding of the current and future direction
    • Ensure business focus is on the highest priority and mission critical efforts.
    • Make more informed decisions
      • Control Strategic Scope and economic intent
      • Enhance investment decision-making by providing the business with ready access to information about people, processes, and technology.
      • Assess the impact and mitigate risks associated with tactical decisions.
  • Improve Service levels
  • Produce Higher Quality Products
    • Improved reliability
    • Predictability (variance reduction). Methods to deliver services and information in a consistent and structured manner.
  • Increase project success rates
  • Meet operational goals and objectives
    • Process control objectives
    • Professional staff is accountable for compliance with policy and management direction.
    • Optimize activity objectives

Efficiency

  • Shorten work cycles
    • Improve Process Cycle Efficiency
      • Shift focus to value-added activities
      • Minimize Non-Value-Added activity
      • Eliminate obsolete non-value added practices
      • Optimize Essential Non-Value-Added-But-Necessary activity
    • Schedules
    • Maintenance costs
    • Development time
  • Reduce costly rework
  • Leverage Economies of Scale – Consolidation
    • Leverage existing IS assets
    • Shared knowledge base
    • Infrastructure and Shared Services
    • Skills and resources
  • Reduce or eliminate redundant investment
  • Promote Interoperability
    • Identify opportunities for vertical and horizontal interoperability between business units.  Improve or reuse existing IS assets.
  • Improve developer productivity
    • Leverage strengths of different types of developers.
    • Preserves business logic by isolating significant changes to “plumbing”
  • Reduce Need for Customization
    • Lower labor costs and lower Total Cost of Ownership
  • Enable vendor independence
  • Leverage the benefits of Standardization
    • Use standard, reusable components and services
      • Proven – Based on field experience
      • Authoritative – Offer the best advice available
      • Accurate – Technically validated and tested
      • Actionable – Provide the steps to success
      • Relevant – Address real-world problems
    • Use common processes, methods and tools
      • Tooling
      • Wizards
      • Templates
      • Metadata, Code configuration generation

Agility

  • Adapt quickly to changing business needs that require significant requirement changes
  • Accommodate Emerging Technologies
  • Engineering Life Cycle Support
    • Streamlined Governance and Stewardship
      • Meet Process control objectives
      • Optimize activity objectives
    • Guidance for tasks such as deployment and operations
  • Prepare for New Opportunities
    • Off-shoring
    • Emerging technologies
  • Improved Speed to Market
    • Increased productivity due to the need to develop and test only code that is related to business requirements.
  • Higher levels of customer satisfaction
  • Improve communications
  • Illustrate and communicate the business of the enterprise to all stakeholders
  • Common terminology and semantics
    • Standard descriptions
    • Generally accepted industry standards and models
    • Promote Global collaboration
  • Improve Risk Mitigation Strategies
    • Leverage proven Business Reference Models
    • Identify Capacity Planning needs and impact
    • Reuse previously identified Solution Set patterns that link preferred Business, Information, and Technology Architecture Components.
    • Provide explicit linkage between stated business goals and the solution proposal (clear line-of-sight).
  • Support Innovation
  • Identify opportunities to employ innovative processes and technology.

Quality

  • Drive one or more systems to common “use” or purpose
    • Shared information is more accurate. Information is collected once and used many times, avoiding the misunderstandings and keying errors associated with multiple collection points.
    • Shared information is more timely. Information can be made available instantly rather than waiting for a separate collection effort.
    • Shared information is more complete. Information from multiple sources can be assembled into a full description.
    • Shared information is less expensive. It costs much less to store information and send it to another user than it does to collect it again.
  • Work Product Impact
    • Less variance in work products
      • Provide atomic solutions to recurring problems (Solve problems once)
      • Application Frameworks standardize the way in which a class of problems is solved.
      • Set of classes which enforce the architecture
    • Teach developers how to use it
    • Support and evolve framework

This is how I have demonstrated real business value in my practice. Not sure why this is still questioned unless our role remains a mystery (poor communications) or maybe we have simply not executed (execution failure does happen).  So hopefully this has helped some of you to understand and place into “business terms” the kind of value we should be delivering to our business peers.  And more important to all of us in the profession is how we can define an effective value proposition to share benefits that can be realized in a sustained, consistent, and repeatable manner.  Thanks for listening…

Modeling the MDM Blueprint – Part V

In this series we have discussed developing the MDM blueprint by creating Common Information (Part II), Canonical (Part III), and Operating (Part IV) models in our work streams. We introduced the Operating Model into the mix to communicate how the solution will be adopted and used to realize the benefits we expect with the business in a meaningful way, and hopefully to set reasonable expectations with our business partners as to what this solution will look like when deployed.

Now it is time to model and apply the technical infrastructure or patterns we plan on using. The blueprint now moves from being computation and platform independent to one of expressing intent through the use of more concrete platform specific models.

Reference Architecture
After the initial work (CIM, Canonical, and Operating models) is completed – then, and only then – are we ready to move on to the computation and platform specific models. We know how to do this well; for example, see Information service patterns, Part 4: Master Data Management architecture patterns.

At this point we have enough information to create the reference architecture. One way (there are several) to organize this content is to use the Rozanski and Woods extensions to the classic 4+1 view model introduced by Philippe Kruchten. The views are used to describe the system from the viewpoint of different stakeholders (end users, developers and project managers). The four views of the model are the logical, development, process and physical views. In addition, selected use cases or scenarios are used to demonstrate the architecture’s intent, which is why the model contains 4+1 views (the +1 being the selected scenarios).

The 4+1 view model

Rozanski and Woods extended this idea by introducing a catalog of six core viewpoints for information systems architecture: the Functional, Information, Concurrency, Development, Deployment, and Operational viewpoints and related perspectives. This is elaborated in detail in their book titled “Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives”. There is much to learn from their work, I encourage you to visit the book’s web site for more information.

What we are describing here is how MDM leadership within a very large-scale organization can eventually realize the five key “markers” or characteristics in the reference architecture, which include:

– Shared services architecture evolving to process hubs;
– Sophisticated hierarchy management;
– High-performance identity management;
– Data governance-ready framework; and
– Registry, persisted or hybrid design options in the architecture selected.

Recommended: this is an exceptional way to tie the technical models back to the stakeholders’ needs as reflected in the viewpoints, perspectives, guidelines, principles, and template models used in the reference architecture. Grady Booch said “…the 4+1 view model has proven to be both necessary and sufficient for most interesting systems”, and there is no doubt that MDM is interesting.  Once this work has been accomplished and agreed to as part of a common vision, we have several different options to proceed with. One interesting approach is leveraging this effort into the Service-Oriented Modeling Framework introduced by Michael Bell at Methodologies Corporation.

Service-Oriented Modeling
The service-oriented modeling framework (SOMF) is a service-oriented development life cycle methodology. It offers a number of modeling practices and disciplines that contribute to successful service-oriented life cycle management and modeling. It illustrates the major elements that identify the “what to do” aspects of a service development scheme. These are the modeling pillars that enable practitioners to craft an effective project plan and to identify the milestones of a service-oriented initiative – in this case, crafting an effective MDM solution.

SOMF provides four major SOA modeling styles that are useful throughout a service life cycle (conceptualization, discovery and analysis, business integration, logical design, conceptual and logical architecture). These modeling styles: Circular, Hierarchical, Network, and Star, can assist us with the following modeling aspects:

– Identify service relationships: contextual and technological affiliations
– Establish message routes between consumers and services
– Provide efficient service orchestration and choreography methods
– Create powerful service transaction and behavioral patterns
– Offer valuable service packaging solutions

SOMF Modeling Styles
SOMF offers four major service-oriented modeling styles. Each pattern identifies the various approaches and strategies that one should consider employing when modeling MDM services in a SOA environment.

– Circular Modeling Style: enables message exchange in a circular fashion, rather than employing a controller to carry out the distribution of messages. The Circular Style also offers a way to affiliate services.

– Hierarchical Modeling Style: offers a relationship pattern between services for the purpose of establishing transactions and message exchange routes between consumers and services. The Hierarchical pattern enforces parent/child associations between services and lends itself to a well-known taxonomy.

– Network Modeling Style: this pattern establishes “many to many” relationships between services, their peer services, and consumers, similar to RDF. The Network pattern emphasizes distributed environments and interoperable computing networks.

– Star Modeling Style: the Star pattern advocates arranging services in a star formation, in which the central service passes messages to its extending arms. The Star modeling style is often used in “multi casting” or “publish and subscribe” instances, where “solicitation” or “fire and forget” message styles are involved.

There is much more to this method; I encourage you to visit the Methodologies Corporation site (Michael is the founder) and download the tools, PowerPoint presentations, and articles they have shared with us.

Summary
So, based on my experience, we have to get this modeling effort completed to improve the probability we will be successful. MDM is really just another set of tools and processes for modeling and managing business knowledge of data in a sustainable way.  Take the time to develop a robust blueprint that includes Common Information (semantic, pragmatic and logical modeling), Canonical (business rules and format specifications), and Operating models to ensure completeness.  Use these models to drive a suitable Reference Architecture to guide design choices in the technical implementation.

This is hard, difficult work. Anything worthwhile usually is. Why put the business at risk to solve this important and urgent need without our stakeholders understanding and real enthusiasm for shared success?  A key differentiator and the difference between success and failure on an MDM journey is taking the time to model the blueprint and share this early and often with the business.  This is after all a business project, not an elegant technical exercise.  Creating and sharing a common vision through our modeling efforts helps ensure success from inception through adoption by communicating clearly the business and technical intent of each element of the MDM program.

In the last part of the series I will be discussing where all this fits into the larger MDM program and how to plan, organize, and complete this work.

Modeling the MDM Blueprint – Part I

Several practitioners have contributed to this complex and elusive subject (see Dan Power’s Five Essential Elements of MDM and CDI, for example) and have done a good job of elaborating the essential elements.  There is one more element that is often overlooked in this field yet remains a key differentiator and the difference between success and failure among the major initiatives I have had the opportunity to witness firsthand – modeling the blueprint for MDM.

This is an important first step to take, assuming the business case is completed and approved. It forces us to address the very real challenges up front, before embarking on a journey that our stakeholders must understand and support in order to succeed. Obtaining buy-in and executive support means we all share a common vision for what we are solving for.

MDM is more than maintaining a central repository of master data. The shared reference model should provide a resilient, adaptive blueprint to sustain high performance and value over time. An MDM solution should include the tools for modeling and managing business knowledge of data in a sustainable way.  This may seem like a tall order, but consider the implications if we focus on the tactical and exclude the reality of how the business will actually adopt and embrace all of your hard work. Or worse, consider asking the business to stare at a blank sheet of paper and expecting them to tell you how to rationalize and manage the integrity rules connecting data across several systems, eliminate duplication and waste, and ensure an authoritative source of clean, reliable information that can be audited for completeness and accuracy.  Still waiting?

So what is in this blueprint?

The essential thing to remember is that the MDM project is a business project that requires establishing a common information model, whatever the technical infrastructure or patterns you plan on using may be. The blueprint should remain computation and platform independent until the Operating Model is defined (and accepted by the business), and a suitable Common Information Model (CIM) and Canonical model are completed to support and ensure the business intent. Then, and only then, are you ready to tackle the Reference Architecture.

The essential elements should include:
– Common Information Model
– Canonical Model
– Operating Model, and
– Reference Architecture (e.g. 4+1 views).

I will be discussing each of these important and necessary components of the MDM blueprint in the following weeks and encourage you to participate and share your professional experience. Adopting and succeeding at Master Data Management is not easy, and jumping into the “deep end” without truly understanding what you are solving for is never a good idea. Whether you are a hands-on practitioner, program manager, or an executive planner, I can’t emphasize enough how critical modeling the MDM blueprint and sharing it with the stakeholders is to success. You simply have to get this right before proceeding further.

The Business Case for Enterprise Architecture

We have the frameworks. Plenty of them: Zachman, IEEE STD 610.12, FEAF (Federal Enterprise Architecture), and now an updated TOGAF (Open Group Architecture Framework) you can find more about at (http://www.opengroup.org/togaf).

We know what to do. All of us know what should be done and have a pretty good idea of how to do it. For example, see what the GAO has institutionalized in their view of our profession via the CIO Council publication “A Practical Guide to Federal Enterprise Architecture” (February 2001, Version 1.0).

Why do it?
Now comes the hard part. How about a business case for investing in this discipline, expressed in terms ordinary business people can understand?  I have yet to find (and I have looked) a really compelling piece of work that can quickly express what impact EA would have across the five essential elements every business leader understands and manages on a daily basis:

– cash flow
– margin (profitability)
– velocity
– growth
– customer (responsiveness, intimacy, and alignment)

I’m guessing this is because we are all technicians at heart. I have also come to believe this is essential to having our peers in the business understand our value proposition. I remain convinced this discipline can (done properly) enable significant improvements in each of our organizations.  Any thoughts or suggestions are welcome; I encourage all of us in this profession to evaluate and think carefully about what our real value is – as business people first, then as trusted advisors and partners to our business and IT colleagues alike.