Big Data Analytics – Unlock Breakthrough Results: (Step 5)

The Analytic User Profile
Understanding that form follows function, we are now going to develop one of the most important interim products for our decision model: the analytic user profile. A profile is a way of classifying and grouping what the user community is actually doing with the analytic information and services produced. This step will develop a quantified view of our user community so we can evaluate each platform or tool for optimization quickly and produce meaningful results aligned with usage patterns. We already know that one size does not fit all (see Big Data Analytics and Cheap Suits). Selecting the right platform for the right job is important to success. This step will quantify a few key data points we can use to:

  • distinguish actual usage patterns (e.g. casual users from power users)
  • match resulting profiles to platform capability
  • match resulting profiles to tool categories
  • gain a deeper understanding of what the community of producers and consumers really needs.

Along the way we are going to explore a couple of different approaches used to solve for this important insight.

What is an Analytic User Profile?
A user profile is a way of classifying and grouping what the community is actually doing with the analytic information being produced and consumed. This can be expressed with the simple diagram below. Note that roughly 80% of the work is associated with the review and retrieval of data using descriptive or diagnostic analytics. The other 20% involves the use of sophisticated predictive and prescriptive analytics that augment the transaction data gathered and distributed for operational support. In what is now being labeled Big Data 3.0, analytic services are being embedded into decision and operational processes, in effect combining analytic styles in ways that were simply not possible a few years ago.

[Figure: high-level distribution of analytic work, with roles such as Miners, Explorers, and Gatherers along the bottom axis]

Now that we have a high-level view of what is being done, who is doing it? There are several ways to classify and categorize roles or signatures. For example, the diagram above includes terms like Miners, Explorers, and Gatherers (see the bottom axis for the labels). The following diagram illustrates another way to view this community using many of the same classification labels.

[Figure: the analytic community viewed by organizational role]

Of course you can refine this to any level of granularity you are comfortable with, provided you preserve the integrity of the classification. By that I mean preserving the function and not mixing in form (such as an organizational role as opposed to a function). If you look closely you will notice this diagram uses organizational roles, not functions like the first diagram. Use one or the other, but mix them at your own peril; they mean very different things. There is no reason you can’t map one to the other and vice versa when needed.

Here is a table illustrating the analytic profile I use. Not perfect, but it has served me well. This table includes the type of activity, optimal tool usage, and important functionality associated with each profile.

[Table: analytic profiles with activity types, optimal tool usage, and key functionality]

Form Follows Function – An Example
I think it will be helpful to illustrate these abstract concepts with a real example. In this case we will use the concept of form following function by examining the kinds of questions and answers typically developed when measuring retail conversion rates (similar to digital channel conversion rates). This diagram illustrates the types of questions typically asked and the analysis required to answer them. Note that as analytic capability matures two things occur: the questions become more predictive in focus and the systems needed to answer them become more complex.
[Figure: conversion rate questions and the analysis required to answer them]

Now here is the same diagram overlaid with the typical tools used to solve for and answer each important question.

[Figure: the same conversion rate questions overlaid with typical tools]

Do you see what is missing? The analytic profile or type of user for each question and each answer is not present. Think carefully about which profile or role would ask the question and which would give the answer. It should be clear that answering a predictive or prescriptive question about optimization requires a very different set of skills and tools than simply rendering the results on a reporting platform or mobile device. And who, and how many, are categorized in each group? That is the next step.
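To ground this in something concrete, the descriptive end of that question ladder is simple arithmetic, while even a naive predictive answer already needs a model of some kind. Here is a minimal Python sketch of both; the traffic and transaction counts are made-up numbers:

```python
# Descriptive: what is our conversion rate? Predictive: what should we
# expect next period? All numbers below are made up for illustration.
store_traffic = [1200, 1350, 1100, 1500]   # weekly visitor counts
transactions = [96, 121, 88, 135]          # weekly completed sales

# Descriptive analytics: what happened
weekly_rates = [t / v for t, v in zip(transactions, store_traffic)]
print([f"{rate:.1%}" for rate in weekly_rates])

# A naive forecast (trailing average) marks the point where spreadsheets
# stop and statistical tooling begins as the questions turn predictive.
expected_rate = sum(weekly_rates) / len(weekly_rates)
print(f"Expected conversion next week: {expected_rate:.1%}")
```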

Completing the Analytic User Profile
Now that we have a good idea of the kinds of questions and answers we may find, it is time to prepare a quick census. This is not a precision exercise; rather, it is about understanding the relative size and number of each profile found within the producing and consuming analytic community. Several advanced statistical methods are available to you if more precision is needed (and the data is available). For a quick way to estimate these values I will share one method that works well enough for me. Recall that the relative distribution of profiles can be modeled to look something like the distribution of analytic profiles found in the following diagram.

[Figure: modeled distribution of analytic profiles]

Using simple arithmetic we can approximate the missing values when we know one value in the data. If we know there are 600 Reviewers, based on the number of distinct logins captured and normalized over a suitable time frame (assuming a normal distribution) across reporting platforms, and the model says Reviewers represent about 60% of the population, then we can expect the total community to number around 1,000 (600 / 0.60 = 1,000). In this population, using the model (scripted in the sketch after the list below), we can expect to find:

  • 10 Data Scientists, Statisticians, and Miners creating statistical models using predictive and prescriptive analytics, sourced from internal and external data
  • 40 Developers creating and maintaining reports, queries, OLAP cubes, and reporting applications
  • 50 Explorers analyzing large amounts of data in an interactive, exploratory fashion
  • 60 Planners performing “what if” analyses to create budgets or planning assumptions
  • 600 Reviewers (the known value) looking over a consistent set of data on a consistent basis (reporting), and drilling down to more detail only when something is awry in the data
  • 240 Gatherers and Operations support professionals retrieving a specific piece of data in near real-time to perform a specific business process
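Here is that back-of-the-envelope estimate as a minimal Python sketch, so it is easy to re-run as better counts arrive. The profile shares are illustrative assumptions taken from the distribution model above; substitute your own:

```python
# Estimate the size of each analytic profile from one known count.
# The distribution shares are illustrative assumptions; replace them
# with the shares from your own distribution model.

PROFILE_SHARES = {
    "Data Scientists/Statisticians/Miners": 0.01,
    "Developers": 0.04,
    "Explorers": 0.05,
    "Planners": 0.06,
    "Reviewers": 0.60,
    "Gatherers/Operations Support": 0.24,
}

def estimate_census(known_profile: str, known_count: int) -> dict:
    """Scale the whole community from one observed profile count."""
    total = known_count / PROFILE_SHARES[known_profile]
    return {profile: round(total * share)
            for profile, share in PROFILE_SHARES.items()}

census = estimate_census("Reviewers", 600)   # 600 distinct logins observed
print(sum(census.values()))                  # ~1,000 total community
print(census)                                # e.g. {'Developers': 40, ...}
```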

Here is an actual sample of field work completed using this method, where the number of distinct log-ins across reporting platforms (27,050) was known and almost all log-ins could be classified and grouped as Reviewers. Note this is an approximation due to proxy or application identifier use, but it is good enough for now. This is a large organization and reflects certain economies of scale. Your distribution model may not reflect this capability and may need to be adjusted.

[Table: populated analytic user profile census from field work]

Before you begin to question the accuracy and completeness of this exercise (and you should), note this work product only represents a quick rule of thumb or starting position. The results should be validated, then confirmed or disproved through a more rigorous examination with the stakeholders. And of course the distribution model of the analytic profiles may differ based on your industry or line of business. This is quick, and it only required one confirmed data point and a week of follow-up and confirmation within the organization.

If you have the time and need more precision (including a more sophisticated statistical analysis) there is always the tried and true field work option to collect more data points. This is usually performed as follows.

  1. Prepare profile census values and questionnaires. Develop the survey questions in general, non-technical terms so they can be understood by business users. Target no more than 15 minutes to answer the questions. Provide a combination of single-choice, multiple-choice, rating-scale, and open-ended questions to add variety and get more complete answers.
  2. Prepare and distribute the survey across the organization under examination. Provide an easy-to-use web-based online survey that can be accessed over the Internet. Email a URL link to the online survey internally to the various distribution lists.
  3. Compile results
  4. Clean or fill missing values using the appropriate algorithmic filters (see the sketch after this list)
  5. Test the refined results for validity using sound statistical techniques
  6. Interpret the findings and prepare the data sets for publication
  7. Publish findings for review and comment
  8. Incorporate responses and revise findings to reflect stakeholder comments
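For step 4 above, here is a minimal sketch of what that cleaning pass might look like, assuming the responses were exported to a CSV file; the file and column names are hypothetical:

```python
# A minimal sketch of cleaning survey results, assuming the responses
# were exported to CSV. File and column names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Drop rows where the respondent never identified their role; a census
# record without a profile cannot be classified.
responses = responses.dropna(subset=["role"])

# Fill missing rating-scale answers with the column median so one
# skipped question does not discard an otherwise complete response.
rating_cols = [c for c in responses.columns if c.startswith("rating_")]
responses[rating_cols] = responses[rating_cols].fillna(
    responses[rating_cols].median()
)

# Normalize free-text tool names before grouping (e.g. "excel ", "Excel").
responses["primary_tool"] = responses["primary_tool"].str.strip().str.title()

print(responses.groupby("role").size())  # quick profile counts
```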

If this sounds like a lot of work, it is. This field work is not quick and will take months of labor-intensive activity to do successfully. The good news is you will have a much more precise insight into the analytic community and what your users are actually doing. Capturing this insight can also surface business needs by determining who is using the tools, what business questions are being answered with them, when or how frequently they are being used, where they are being used (against what data sources), and finally how the existing tools are being used.

Summary
Understanding that form follows function, we have now developed one of the most important interim products for our decision model: the analytic user profile. A profile is a way of classifying and grouping what the user community is actually doing with the analytic information and services produced. These profiles can be used to:

  • distinguish actual usage patterns (e.g. casual users from power users)
  • match resulting profiles to platform capability
  • match resulting profiles to tool categories
  • gain a deeper understanding of what the community of producers and consumers really needs.

With a quantified view of the analytic community we can now evaluate each platform or tool for optimization quickly and produce meaningful results that are aligned with usage patterns in the upcoming decision model.


If you enjoyed this post, please share it with anyone who may benefit from reading it. And don’t forget to click the follow button to be sure you don’t miss future posts. I am planning on compiling all the materials and tools used in this series in one place, but I am still unsure what form and content would be best for your professional use. Please take a few minutes and let me know what form and format you would find most valuable.
Suggested content for premium subscribers:
Operating Model Mind Map (for use with Mind Jet – see https://www.mindjet.com/ for more)
Analytic Core Capability Mind Map
Analytic User Profile workbooks
Enterprise Analytics Mind Map
Reference Library with Supporting Documents

Prior Posts in this series can be found at:
Big Data Analytics – Nine Easy Steps to Unlock Breakthrough Results
Big Data Analytics – Unlock Breakthrough Results: (Step 1)
Big Data Analytics – Unlock Breakthrough Results: (Step 2)
Big Data Analytics – Unlock Breakthrough Results: (Step 3)
Big Data Analytics – Unlock Breakthrough Results: (Step 4)


Eight powerful secrets for retaining delighted clients
(What they don’t teach you in business school)

Over the years I have come to believe there are a few simple secrets behind my consulting success with organizations both large and small. These secrets are not taught in business schools. And it seems the larger firms have stopped investing in helping younger professionals learn the craft and soft skills needed. Many of these tips are just common sense and represent proven practice. You can always choose to ignore one or more of them and expect a client experience that is less than desired. I now think carefully about each one of them in every engagement. They have served me well. I think you will find them invaluable in your professional life as well.

1) Help them listen to themselves
The golden rule of any client communication is to listen. Once you are done listening, repeat or paraphrase what you have heard. Helping a client hear what they’ve just said is invaluable. Not only does it ensure you haven’t misinterpreted anything; hearing their thoughts explained by someone else often highlights potential issues. It is always better for the client to recognize problems themselves instead of you having to point them out.

2) Never ignore or reject a bad idea
When the client says ‘my idea is to …’, avoid the instinct to point out why the hilariously awful suggestion won’t work. Instead, listen, take notes, and say something like ‘I will take that into consideration’. When you return with your much better ideas, they probably won’t mention it again. If they do, just say it didn’t quite work. They’re usually happy that you considered their idea, no matter how bad, and will see you as truly receptive.

3) You can have it cheap, fast or good. Pick any two.
Explain the two-out-of-three rule. Clients will always want you to produce distinctive, high-quality work in less than a week for next to no money. But they sometimes forget that these things come at a price, and their reluctance to pay your rates or their pressure to work faster can make you question your own reasoning. The most demanding won’t even understand why it’s not possible. Help them understand by explaining the ‘two out of three’ rule:

Good (Distinctive Quality) + Fast = Expensive
We will defer every other unrelated job, cancel all unnecessary tasks, and put in ungodly hours just to get the job done. But don’t expect it to be cheap.

Good (Distinctive Quality) + Cheap = Slow
We will do a great job for a discounted price, but be patient until we have a free moment from pressing and better-paying clients.

Fast + Cheap = Inferior Quality
Expect an inferior job delivered on time. You truly get what you pay for, and in my opinion this is the least favorable choice of the three. In most cases I decline or disengage rather than commit to something that may end up damaging the relationship. See secret 5 about never presenting ideas you don’t believe in; and who can believe in junk?

To summarize: You can have it cheap, fast or good. Pick any two and meet everyone’s expectations.

4) Don’t make delivery promises straight away
Clients want immediate delivery date commitments. As much as you would like to conclude meetings with a firm ‘yes I can’, always check with your team first, or simply ask for some time to make sure you have considered the commitment carefully. Not only does this show you take deadlines seriously; the next time the same client rings with an urgent request, you can buy time before committing. It is remarkable how many must-have-it-yesterday emergencies fix themselves or simply melt away within hours of the initial request. A commitment is a promise. When you break a promise, no matter how small it may seem to you, it can damage the client relationship and your reputation (brand).

5) Never present ideas you don’t believe in (no junk please!)
It’s tempting when preparing solution options to add one more into the mix. I guess we sometimes think that including a not-so-great idea highlights the effort we put into the preferred option and will make our other suggestions look even stronger. But what if the client picks the wrong option? Then you are stuck producing your own dumb idea, one you simply don’t believe in. If your main ideas are good enough, that’s all you should need. If the client hates them, you can always fall back on the rejects if needed.

6) Don’t assume you’ll find ‘it’ alone
You worked on presenting your ideas alone, with little time for collaboration. It looks great to you; what a genius you are. And then the client sees it and says, ‘I’ll know it when I see it, and this is not what I expected.’ These are words you never want to hear as a creative, hard-working professional. You have not found ‘it’. If a client doesn’t like your ideas but can’t explain why, never assume you can hit the mark next time. The client not knowing what they want is your problem, so spend more time with them exploring other work they like. This helps you get inside their heads and closer to finding the elusive ‘it’.

7) Who actually has final approval?
The final seal of approval may not come from the person you deal with every day. Managers have Directors, Directors have VPs, and VPs have a C-level they report to. So always uncover exactly who the ultimate authority is before proceeding with your ideas. Most people hate showing rough sketches to their boss, meaning you need to flesh out one or two elements in advance (see secret number 6) with your client. Ensuring your work avoids a last-minute thumbs-down is worth the time and effort.

8) If all else fails, raise your rates
Most clients are a joy to work with. Hopefully these secrets will help you deal with the most challenging parts of the creative process and leave you with a delighted client. But sometimes you just know, deep down, that someone is going to be impossible to work with. And you will know this quickly. If simply turning down the work is not an option or would create a poor perception with the client, raise your rates by 20%. At least then you can console yourself with cash during another long weekend of last-minute revisions, rework, and unnecessary stress. And what about the time forever lost to your family and loved ones? This is a trap; the higher rate is almost never worth the cost to your nervous system. Manage your time wisely; it truly “is never found again”.


Big Data Analytics – Unlock Breakthrough Results: (Step 4)

In this step we look a little closer at defining the critical capabilities used across the four operating models discussed in an earlier post (Big Data Analytics – Unlock Breakthrough Results: Step 3). We are going to assign relative weights to each of the critical capability groups for each operating model uncovered earlier. This is done to assign the higher weightings to the capability groupings most important to the success of each model. Having a quantified index means we can evaluate each platform or tool for optimization within each model quickly and produce meaningful results. We already know a set of tools and platforms that is ideal for Centralized Provisioning is usually unsuited for use within a Decentralized Analytics operating model. In contrast, the critical capability essential to Embedded Analytics is very different from Governed Data Discovery. Yes, there are some capabilities that cross operating models (e.g. metadata), and some that are far more important than others. So what we are doing in this step is simply gathering and validating the relative importance of each so form truly does follow function. This will become increasingly clear when building the decision models to guide our actions.

What is a decision model?
A Decision Model is a new way of looking at analytics using business logic. A key enabler sandwiched between BPM and business rules, it captures the logic that knits both together to illustrate what drives the decisions in a business. Instead of trying to capture and manage the logic one business rule at a time, a Decision Model groups the information sources, knowledge, and decisions (including the rules) into their natural logical groups to create the structure that makes the model so simple to capture, understand, communicate, and manage. Using this method we will be applying a proven approach to solving platform and tool optimization in the same way that proven practice suggests every analytic decision be made. DMN provides the constructs needed to model decisions, so that organizational decision-making can be readily depicted in diagrams, accurately defined by business analysts, and optionally used to specify and deploy automated decision-making. The objective is to illustrate a method for addressing the perplexing management challenge of platform and tool optimization. In this step we are simply using an organizing principle to continue grouping and categorizing our findings, quantifying each capability in its complexity and nuance across several facets. For more on this see the OMG specification released in September 2015.
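As a toy, code-level illustration of that idea (not the DMN notation itself): the logic that maps an analytic profile and operating model to a tool category lives in one decision table rather than in scattered conditionals. The profile and model names come from this series; the outcomes are illustrative assumptions:

```python
# A toy illustration of decision-model thinking: the rules that map an
# analytic profile and operating model to a tool category live in one
# decision table, not in scattered if-statements. Outcomes are examples.

DECISION_TABLE = [
    # (profile,          operating_model,            tool_category)
    ("Data Scientist",   "Decentralized Analytics",  "Advanced analytics workbench"),
    ("Developer",        "Centralized Provisioning", "IT-developed reports/dashboards"),
    ("Explorer",         "Governed Data Discovery",  "Visual data discovery"),
    ("Reviewer",         "Centralized Provisioning", "Standard reporting portal"),
]

def decide_tool_category(profile: str, operating_model: str) -> str:
    for rule_profile, rule_model, outcome in DECISION_TABLE:
        if (rule_profile, rule_model) == (profile, operating_model):
            return outcome
    return "No matching rule - escalate to the architecture review"

print(decide_tool_category("Reviewer", "Centralized Provisioning"))
```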

Relative Weights
The relative weights and further refinements should reflect your site-specific needs so there is less chance of friction or semantic confusion when the decision model and the findings are shared with the stakeholders. This is a collaborative exercise where the findings are shared and confirmed with both technical and business stakeholders for agreement and validation. This usually means you (as an architect) create the baseline and then iteratively refine it with the subject matter experts and business sponsors to agree on the final results or weights that will be used. This work remains platform, tool, and vendor agnostic. We are simply trying to identify and assign quantitative measures to evaluate which function (critical capability) is most important to each operating model. A good baseline to begin with is the Gartner work published as Critical Capabilities for Business Intelligence and Analytics Platforms this summer (12 May 2015, ID: G00270381). With this we have a reasonably good way to think about form and function across the different operating models, which Gartner refers to in their work as baseline use cases. Recall that across any analytic landscape (including big data) we are most likely to encounter one or more of the four operating models:

– Centralized Provisioning,
– Decentralized Analytics,
– Governed Data Discovery, and
– OEM/Embedded Analytics.

This seems a sensible way to organize the decision model we are building. Thanks to Gartner we also have a pretty good way to describe and manage the fifteen (15) groups of critical capabilities to use when comparing or seeking platform and tool optimization within each model. The baseline used includes the following groups of features, functions, and enabling tools:

– Traditional Styles of Analysis
– Analytic Dashboards and Content
– IT-Developed Reports and Dashboards
– Platform Administration
– Metadata Management
– Business User Data Mash-up
– Cloud Deployment
– Collaboration and Social Integration
– Customer Services
– Development and Integration
– Ease of Use
– Embedded Analytics
– Free Form Interactive Exploration
– Internal Platform Integration
– Mobile

The purpose in all of this is to arrive at some way to quantify which capability within each operating model is more important than the others, weighting their relative importance in satisfying need. In this step we are simply starting with a baseline. We can refine the critical analytic capabilities from this baseline to meet site-specific needs before moving on to the weighting in the next step. Note these are high-level summary weights. Each capability includes a number of different values or characteristics you can refine to any level of detail you believe necessary. They should all sum to the group’s value (e.g. 20% for Platform Administration within the Centralized Provisioning model) to retain the integrity of the results.

For each of the fifteen (15) groups of critical capabilities we assign weights to be used in later steps to evaluate the relative importance of each within each operating model.

[Table: critical capability weights by operating model]

Note: the weights used in this example are based on the Gartner work referred to above. I have changed the metadata weighting to reflect my experience, and I leave the balance of the work to the next step, after you have tailored this baseline to your environment and are ready to apply your own weighting.
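Because the integrity of the results depends on each model’s weights summing correctly, that check is worth scripting. A minimal sketch follows (the weights shown are placeholders, not the Gartner or final values), including the quantitative index used later to score a platform against one model:

```python
# Validate that the capability weights for each operating model sum to
# 100%, then compute a weighted score. All weights are placeholders.

WEIGHTS = {
    "Centralized Provisioning": {
        "Platform Administration": 0.20,
        "IT-Developed Reports and Dashboards": 0.15,
        "Metadata Management": 0.15,
        "Development and Integration": 0.10,
        "Traditional Styles of Analysis": 0.15,
        "Ease of Use": 0.15,
        "Customer Services": 0.10,
    },
    # ... repeat for the other three operating models ...
}

def check_weights(weights: dict, tolerance: float = 1e-9) -> None:
    for model, capabilities in weights.items():
        total = sum(capabilities.values())
        if abs(total - 1.0) > tolerance:
            raise ValueError(f"{model}: weights sum to {total:.2f}, not 1.00")

def weighted_score(model: str, platform_ratings: dict) -> float:
    """Quantitative index: capability rating (0-5) times weight, summed."""
    return sum(WEIGHTS[model].get(cap, 0.0) * rating
               for cap, rating in platform_ratings.items())

check_weights(WEIGHTS)
print(weighted_score("Centralized Provisioning",
                     {"Platform Administration": 4, "Metadata Management": 5}))
```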

We have already seen there are very different needs for each of the models presented. As the decision model is introduced and developed, the data points for each can be used to develop quick snapshots and quantitative indexes when evaluating the form and function for each optimization in question.

Summary
The fifteen (15) critical capabilities have now been assigned relative weights within each of the four operating models. We are now at a point where the analytic community profiles can be compiled to arrive at a defensible approach to quantifying the data used in the upcoming decision model. This has also helped clarify the key capabilities that drive each operating model, which, as illustrated in the following diagram, can be very different.

[Chart: relative capability weights across the four operating models]


Suggested content for premium subscribers:
Big Data Analytics – Unlock Breakthrough Results: Step Four (4)
Operating Model Mind Map (for use with Mind Jet – see https://www.mindjet.com/ for more)
Analytic Core Capability Mind Map
Enterprise Analytics Mind Map
Analytics Critical Capability Workbooks
Analytics Critical Capability Glossary, detailed descriptions, and cross-reference
Reference Library with Supporting Documents


Big Data Analytics – Unlock Breakthrough Results: (Step 2)

This post is part of a larger series providing a detailed set of steps you can take to unlock breakthrough results in Big Data Analytics. The simple use case used to illustrate this method addresses the perplexing management challenge of platform and tool optimization. This step is used to identify the types and nature of the operating models used within the analytic community. I’m using a proven approach to solving platform and tool optimization in the same manner that proven practice suggests every analytic decision be made. Here we are simply using an organizing principle to group and categorize our findings in what can quickly become a bewildering experience (much like herding cats) in its complexity and nuance.

Recall the nine steps to take as summarized in a prior post.

1) Gather current state Analytic Portfolio, and compile findings.
2) Determine the Analytic Operating Models in use.
3) Refine Critical Analytic Capabilities as defined.
4) Weight Critical Analytic Capability according to each operating model.
5) Gather user profiles and simple population counts for each form of use.
6) Gather platform characteristics profiles.
7) Develop platform and tool signatures.
8) Gather data points and align with the findings.
9) Assemble decision model for platform and tooling optimization.

Let’s start with examining the type and nature of the analytic operating models in use. Note that an organization of any size will most likely use two or more of these models, for very good reasons. I have seen all of these models employed at the same organization in my own practice. When moving on to the remaining steps it will become increasingly evident that a keen understanding of the strategy, organization, technology footprint, and culture driving each model’s adoption will prove invaluable. First, let’s define our terms.

What is an operating model? 
Wikipedia defines an operating model as an abstract representation of how an organization operates across a range of domains in order to accomplish its function. An operating model breaks this system into components, showing how each works together. It helps us understand the whole. In our case we are going to focus on the analytic community and use this understanding to evaluate fit when making changes, ensuring the enabling models will still work after the recommended optimization is called for. Thanks to Gartner, who published Critical Capabilities for Business Intelligence and Analytics Platforms this summer (12 May 2015, ID: G00270381), we have a reasonably good way to think about form and function across the different operating models, which Gartner refers to in their work as baseline use cases:

– Centralized Provisioning,
– Decentralized Analytics,
– Governed Data Discovery, and
– OEM/Embedded Analytics.

You may think what you will about Gartner, but I believe they have done a good job of grouping and characterizing the signatures around the four (4) operating models, using fifteen (15) critical analytic capabilities to further decompose the form and function found within each. At a summary level the capabilities are grouped as follows.

– Traditional Styles of Analysis
– Analytic Dashboards and Content
– IT-Developed Reports and Dashboards
– Platform Administration
– Metadata Management
– Business User Data Mash-up
– Cloud Deployment
– Collaboration and Social Integration
– Customer Services
– Development and Integration
– Ease of Use
– Embedded Analytics
– Free Form Interactive Exploration
– Internal Platform Integration
– Mobile

Note: Detailed descriptions and characteristics of each of the fifteen (15) critical capabilities can be found in step three (3) where I will refine the Gartner definitions of Critical Analytic Capabilities to add additional context.

Why is this important?
Each of the four models has very different needs, influenced by the strategy, footprint, and culture of the organization. Each optimization will have to recognize these differences and accommodate them to remain meaningful. A set of tools and platforms which is ideal for Centralized Provisioning is usually terrible and completely unsuited for use within a Decentralized Analytics operating model. Critical capability essential to Embedded Analytics is very different from Governed Data Discovery. Yes, there are some capabilities that cross operating models (e.g. metadata), and some that are far more important than others. In general this is a truly sound way to determine where your investment in capability should be occurring – and where it is not. Along the way you will surely stumble across very clever professionals who have solved for their own operating model limitations in ways that will surprise you. And some downright silliness; remember culture plays a real and present role in this exercise. At a minimum I would think carefully about what you uncover across the following facets or dimensions.

  • Structure means drawing boundaries for each analytic community, defining the horizontal mechanisms that ensure coordination and scale, and evaluating the resource levels that reflect the roles of each. It should define the high-level organization chart if form follows function. If you look carefully, the clues to help understand and classify each model are there. And note some overlap and redundancy is expected between the models.
  • Accountability describes the roles and responsibilities of the organizational entities within each model and clarifies how organizational units come together to make effective cross-enterprise analytic decisions. This is where a lot of organizational friction can occur, resulting in undefined behaviors and unnecessary ambiguity.
  • Governance refers to the configuration and cadence for discussing and resolving issues of strategy, resource allocation (including talent), performance management and other matters under each model. Note the wide variety of skills and competencies needed under each model and the potential for a rapid proliferation of tools and methods.
  • Working describes how people collaborate across the seams that lie between different models. Behavior that’s consistent with intended values is critical to effective execution. Less understood by many: you really can’t do effective predictive or prescriptive analytic work without the descriptive or diagnostic data sets usually prepared by others, under what is typically a very different operating model.
  • Critical Capability can be determined by using the collection referred to above to balance people, process, and technology investment. The choice of operating models has implications for the type of talent or technology platform and tool optimization required. This collection is a suggestion only (and a good one at that); in step three I refine it further to illustrate how to extend this set of capabilities.

Step Two – Determine the operating models in use
In this step we are going to gather a deep understanding of the characteristics within each operating model, where they differ, and what common components and critical capability are shared. If you read the Gartner reference, they consider metadata to be most heavily weighted in the Centralized Provisioning and Governed Data Discovery models. Based on my experience it is just as critical (and perhaps even more so) in the Decentralized model as well, especially in the Big Data world where tools like Alation, Adaptive, and Tamr are becoming essential to supporting discovery and self-service capability. The rest of this post will briefly describe the key characteristics of each operating model and their signature attributes, and highlight a few differences to help determine which operating models are employed.

Centralized Provisioning

The classic model used for years in the delivery of what has been referred to as business intelligence. Typically we find tight management controls to push through centralized strategy and efficiency, usually at a high cost. Tightly managed processes for collecting and cleaning data before consumption can be found in the classic patterns associated with Extract, Transform, and Load operations into a data warehouse or mart. The model is most often characterized by formal processes where a developer or specialist collects business requirements from the users and then creates sanctioned reports and dashboards for them on trusted data. Centralized provisioning enables an information consumer to access their Key Performance Indicators (KPIs) from an information portal — increasingly on a mobile device or embedded in an analytic application — to measure the performance of the business. Interactivity and discovery in centrally developed content is limited to what is designed in by the content author. Seven of the fifteen most important capabilities needed in this model would include:

– IT-Developed Reports and Dashboards
– Traditional Styles of Analysis
– Platform Administration
– Development and Integration
– Metadata Management
– Ease of Use
– Customer Services

Decentralized Analytics

The opposite of centralized provisioning, this model, or loose confederation, encourages local optimization and entrepreneurial drive. Look for a community that rapidly and interactively explores trends or detects patterns in data sets, often from multiple sources, to identify opportunities or risks with minimal support from the IT development community. Interactivity and discovery in this model is NOT limited to what is designed in by the content authors we find in the Centralized Provisioning model; the users are the content authors. Users of platforms and tools that excel at the decentralized analytics model can explore data using highly interactive descriptive analytics (“what happened” or “what is happening”) or diagnostic analytics (“Why did something happen?”, “Where are areas of opportunity or risk?”, and “What if?”). Because of the embedded advanced analytic functions offered by many vendors, users can extend their analysis to some advanced descriptive analysis (for example, clustering, segmenting, and correlations) and to a basic level of predictive analytics (for example, forecasting and trends). They can also prepare their own data for analysis, reducing their reliance on IT and improving time to insight. As decentralized analytics becomes more pervasive, the risk of multiple sources of the truth grows and information governance itself becomes a real challenge. Six of the fifteen most important capabilities needed in this model would include:

– Analytic Dashboards and Content
– Free Form Interactive Exploration
– Business User Data Mashup and Modeling
– Metadata Management
– Ease of Use
– Customer Services

Governed Data Discovery

A hybrid of centralized and decentralized, this model is best characterized as offering freedom within a framework to enhance transparency and effectiveness. This model features business users’ ability to prepare and combine data, and to explore and interact visually with this data, enabling discovery to be deployed and managed across the enterprise. With the success of data discovery tools in driving business value, there is an increasing demand to use data discovery capabilities for a broader range of analysis and an expanded set of users than previously addressed by traditional reporting and dashboards. Governed data discovery enables users to access, blend, and prepare data, then visually explore, find, and share patterns with minimal IT support, using their own technical and statistical skills. At the same time, this model must also satisfy enterprise requirements for business-user-generated model standards, data reuse, and governance. In particular, users should be able to reuse sanctioned and approved business-user-created data or data sets, derived relationships, derived business models, derived KPIs, and metrics that support analyses.

Governed data discovery can enable pervasive deployment of data discovery in the enterprise at scale without proliferating data discovery tooling sprawl. The expanded adoption of data discovery also requires analytic leaders to redesign analytics deployment models and practices, moving from an IT-centric to an agile and decentralized, yet governed and managed, approach. This would include putting in place a prototype, pilot, and production process in which user-generated content is created as a prototype. Some of these prototypes will be used in recurring analysis and promoted to a pilot phase. Successful pilots are promoted to production and operationalized for regular analysis as part of the system of record. Each step provides more rigor and structure in governance and Quality Assurance testing. Business user data mashup and modeling, administration, and metadata capabilities should be based on an understanding of the following characteristics, which differentiate a Governed model from the Decentralized Analytics model discussed earlier. Pursuing the following questions will help define the differences (a scoring sketch follows the list).

– Where are permissions enabled on business models?
– Who can access shared data connections and data sets?
– Who can create and publish data sets?
– Who can access shared user work spaces to publish visualizations?
– Is there shared metadata about usage, connections and queries?
– Are usage, connections and queries monitored?
– Is there an information catalog available to enable discovery?
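One way to make the distinction operational is to record the answers and score them. A minimal sketch, with hypothetical field names:

```python
# Record the differentiating questions as boolean findings; the more that
# are true, the more the deployment resembles Governed Data Discovery
# rather than purely Decentralized Analytics. Field names are hypothetical.

GOVERNANCE_FINDINGS = {
    "permissions_on_business_models": True,
    "shared_connections_access_controlled": True,
    "dataset_publishing_controlled": False,
    "shared_workspaces_for_visualizations": True,
    "shared_usage_metadata": False,
    "usage_and_queries_monitored": True,
    "information_catalog_available": False,
}

governed_signals = sum(GOVERNANCE_FINDINGS.values())
print(f"{governed_signals} of {len(GOVERNANCE_FINDINGS)} governance signals present")
if governed_signals >= 5:
    print("Looks like Governed Data Discovery")
else:
    print("Looks closer to Decentralized Analytics")
```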

Eight of the fifteen most important capabilities needed in this model would include:
– Analytic Dashboards and Content
– Free Form Interactive Exploration
– Business User Data Mashup and Modeling
– Internal Platform Integration
– Platform Administration
– Metadata Management
– Ease of Use
– Customer Services

Embedded Analytics

In this model, analytics (decisions, business rules, and processes) are integrated into the organization to capture economies of scale and consistency across planning, operations, and customer experience. It is most typically found where developers are using software development kits (SDKs) and related APIs to include advanced analytics and statistical functions within application products. These capabilities are used to create and modify analytic content, visualizations, and applications, and to embed them into a business process, application, or portal. Analytic functions can reside outside the application, reusing the infrastructure, but should be easily and seamlessly accessible from inside the application, without forcing users to switch between systems. The ability to integrate analytics with the application architecture enables the analytic community to choose where in the business process the analytics should be embedded. One example of a critical capability for embedding advanced analytics would be consuming a SAS/R or PMML model to create advanced models embedded in dashboards, reports, or data discovery views (a code sketch of this pattern follows the capability list below). Six of the fifteen most important capabilities needed in this model would include:

– Embedded (includes both developer and embedded advanced analytics)
– Cloud Deployment
– Development and Integration
– Mobile
– Ease of Use
– Customer Services
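To make the embedded pattern concrete, here is a minimal sketch of an analytic function living inline within a business process step, assuming a scikit-learn model persisted with joblib. The model file and feature names are hypothetical stand-ins; the source describes consuming SAS/R or PMML assets, for which the same pattern applies:

```python
# A minimal sketch of embedding an analytic scoring step inside a business
# process, assuming a scikit-learn classifier saved with joblib. The model
# file and feature list are hypothetical stand-ins for a SAS/R or PMML asset.
import joblib

model = joblib.load("churn_model.joblib")  # trained elsewhere by the modelers

def handle_renewal_request(customer_features: list[float]) -> str:
    """Business process step: the analytic runs inline, invisible to the user."""
    churn_risk = model.predict_proba([customer_features])[0][1]
    if churn_risk > 0.7:
        return "route_to_retention_team"   # prescriptive action, embedded
    return "standard_renewal_flow"

# print(handle_renewal_request([12.0, 3.0, 0.8]))  # e.g. tenure, tickets, usage
```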

Putting It All Together
Believing form really does follow function, it should be clear after this step which operating models are driving the platforms and tools that are enabling (or inhibiting) effective performance. Using the Gartner work and the refinements I have extended it with, we can now see at a glance which core capabilities are most important to each model, as illustrated in the following diagram. This will become a key input to consider when assembling the decision model and discovering platform and tooling optimization in the later steps.

[Diagram: core capabilities most important to each operating model]

Now that this step is completed, it is time to turn our attention to further refining the critical analytic capabilities as defined and to begin weighting each according to its relative importance to each operating model. It will become increasingly clear why certain critical capabilities essential to one model will be less important to another when this task is completed.
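Before moving on, the per-model capability signatures above can double as a quick diagnostic: compare the capabilities you actually observed in the fieldwork against each model’s signature set and rank the overlap. A minimal sketch using the lists from this post (the observed set is a made-up example):

```python
# Rank the four operating models by how much their capability signatures
# overlap with what the fieldwork actually observed. Signature sets are
# taken from the lists in this post; the observed set is illustrative.

SIGNATURES = {
    "Centralized Provisioning": {
        "IT-Developed Reports and Dashboards", "Traditional Styles of Analysis",
        "Platform Administration", "Development and Integration",
        "Metadata Management", "Ease of Use", "Customer Services",
    },
    "Decentralized Analytics": {
        "Analytic Dashboards and Content", "Free Form Interactive Exploration",
        "Business User Data Mashup and Modeling", "Metadata Management",
        "Ease of Use", "Customer Services",
    },
    "Governed Data Discovery": {
        "Analytic Dashboards and Content", "Free Form Interactive Exploration",
        "Business User Data Mashup and Modeling", "Internal Platform Integration",
        "Platform Administration", "Metadata Management",
        "Ease of Use", "Customer Services",
    },
    "OEM/Embedded Analytics": {
        "Embedded Analytics", "Cloud Deployment", "Development and Integration",
        "Mobile", "Ease of Use", "Customer Services",
    },
}

observed = {"Free Form Interactive Exploration", "Metadata Management",
            "Business User Data Mashup and Modeling", "Platform Administration"}

ranking = sorted(SIGNATURES, key=lambda m: len(SIGNATURES[m] & observed), reverse=True)
for model in ranking:
    print(model, len(SIGNATURES[model] & observed))
```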


Suggested content for premium subscribers:
Big Data Analytics – Unlock Breakthrough Results: Step Two (2)
Operating Model Mind Map (for use with Mind Jet – see https://www.mindjet.com/ for more)
Analytic Core Capability Mind Map
Enterprise Analytics Mind Map
Analytics Critical Capability Workbooks
Analytics Critical Capability Glossary, detailed descriptions, and cross-reference
Logical Data Model (XMI – use with your favorite tool)
Reference Library with Supporting Documents

Big Data Analytics – Unlock Breakthrough Results: (Step 1)

You’ve made the big data investment. You believe Nucleus Research when it says that an investment in analytics returns a whopping thirteen (13) dollars for every one (1) dollar spent. Now it’s time to realize value. This series of posts provides a detailed set of steps you can take to unlock this value in a number of ways. As a simple use case, I’m going to address the perplexing management challenge of platform and tool optimization across the analytic community as an example to illustrate each step. This post addresses the first of nine practical steps to take. Although lengthy, please stick with me; I think you will find this valuable. I’m going to use a proven approach to solving platform and tool optimization in the same manner that proven practice suggests every analytic decision be made. In this case I will leverage the CRISP-DM method (there are others I have used, like SEMMA from SAS) to put business understanding front and center at the beginning of this example.

Yes, I will be eating my own dog food now (this is why a cute puppy is included in a technical post and not the Hadoop elephant) and getting a real taste of what proven practice should look like across the analytic community. Recall the nine steps to take, summarized in a prior post.

1) Gather current state analytics portfolio, interview stakeholders, and compile findings.
2) Determine the analytic operating models in use.
3) Refine Critical Analytic Capabilities as defined to meet site specific needs.
4) Weight Critical Analytic Capability according to each operating model in use.
5) Gather user profiles and simple population counts for each form of use.
6) Gather platform characteristics profiles.
7) Develop platform and tool signatures.
8) Gather data points and align with the findings.
9) Assemble findings and prepare a decision model for platform and tooling optimization.

Using the CRISP-DM method as a guideline, we find that each of the nine steps corresponds to the CRISP-DM method as illustrated in the following diagram.

[Diagram: alignment of the nine steps with the CRISP-DM phases]

Note there is some overlap between understanding the business and the data. The models we will be preparing will use a combination of working papers, logical models, databases, and the Decision Model Notation (DMN) from the OMG to tie everything together. In this example the output product is less about deploying or embedding an analytic decision and more about taking action based on the results of this work.

Step One – Gather Current State Portfolio
In this first step we are going to gather a deep understanding of what already exists within the enterprise and learn how the work effort is organized. Each examination should include, at a minimum:

  • Organization (including its primary and supporting processes)
  • Significant Data Sources
  • Analytic Environments
  • Analytic Tools
  • Underlying technologies in use

The goal is to gather the current state analytics portfolio, interview stakeholders, and document our findings. In brief, this will become an integral part of the working papers we can build on in the steps to follow. This is an important piece of the puzzle we are solving for. Do not even think about proceeding until this is complete. Note the following diagram illustrates the dependencies between accomplishing this field work and each component of the solution.


Unlocking Breakthrough Results – Dependency Diagram

Organization
If form follows function, this is where we begin to uncover the underlying analytic processes and how the business is organized. Understanding the business by evaluating the organization will provide invaluable clues to uncover which operating models are in use. For example, if there is a business unit organized outside of IT and reporting to the business stakeholder, you will most likely have a decentralized analytics model in addition to the centralized provisioning most analytic communities already have in place.

Start with the organization charts, but do not stop there. I recommend you get a little closer to reality in the interview process to really understand what is occurring in the community. By examining the underlying processes this will become clear. For example, what is the analytic community really doing? Do they use a standard method (CRISP-DM) or something else? An effective way to uncover this beyond the simple organization charts (which are never up to date and notorious for mislabeling what people are actually doing) is to use a generally accepted model (like CRISP-DM) to organize the stakeholder interviews. This means we can truly understand what is typically performed by whom, using what processes, to accomplish their work. And where boundary conditions exist or, in the worst case, are undefined. An example is in order. Using the CRISP-DM model we see there are a couple of clear activities that typically occur across all analytic communities. This set of processes is summarized in the following diagram.

[Diagram: CRISP-DM process mind map]

Gathering the analytic inventory and organizing the interviews now becomes an exercise in knowing what to look for using this process model. For example, diving a little deeper, we can explore in our interviews how modeling is performed, guided by a generally accepted method. We can structure questions around how, by whom, and what is performed for each expected process or supporting activity. Following up on this line of questioning should normally lead to samples of the significant assets collected and managed within an analytic inventory. Let’s start with the modeling effort and a few directed questions.

  • Which organization is responsible for the design, development, testing, and deployment of the models?
  • How do you select which modeling techniques to use? Where are the assumptions used captured?
  • How do you build the models?
  • Where do I find the following information about each model?
    •     Parameter, Variable Pooling Settings
    •     Model Descriptions
    •     Objectives
    •     Authoritative Knowledge Sources Used
    •     Business rules
    •     Anticipated processes used
    •     Expected Events
    •     Information Sources
    •     Data sets used
    •     Description of any Implementation Components needed
    •     A Summary of Organizations Impacted
    •     Description of any Analytic Insight and Effort needed
  • Are anticipated reporting requirements identified?
  • How is model testing designed and performed?
  • Is a regular assessment of the model performed to recognize decay?

When you hear the uncomfortable silence and eyes point to the floor, you have just uncovered one meaningful challenge. Most organizations I have consulted into DO NOT have an analytic inventory, much less a metadata repository (or even a simple information catalog) I would expect to support a consistent, repeatable process. This is a key finding for another kind of work effort that is outside the scope of this discussion. All we are doing here is trying to understand what is being used to produce and deploy information products within the analytic community. And is form really following function as the organization charts have tried to depict? Really?
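Even a flat record per model is a step toward that missing inventory. Here is a minimal sketch of one such record, with fields drawn from the question list above; the example values are hypothetical:

```python
# A lightweight model-inventory record mirroring the interview questions.
# In practice this would live in a metadata repository or catalog; the
# example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    objective: str
    owner_org: str
    technique: str                     # and why it was selected
    assumptions: str
    knowledge_sources: list[str] = field(default_factory=list)
    business_rules: list[str] = field(default_factory=list)
    data_sets_used: list[str] = field(default_factory=list)
    organizations_impacted: list[str] = field(default_factory=list)
    reporting_requirements: str = ""
    test_design: str = ""
    decay_review_cadence: str = ""     # how often decay is assessed

record = ModelRecord(
    name="Churn propensity v3",
    objective="Flag accounts likely to cancel within 90 days",
    owner_org="Customer Analytics",
    technique="Gradient boosting; chosen for tabular data performance",
    assumptions="Training window reflects current pricing",
    data_sets_used=["crm.accounts", "billing.invoices"],
    decay_review_cadence="Quarterly",
)
```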

An important note: we are not in a process improvement effort; not just yet. Our objective is focused on platform and tool optimization across the analytic community. Believing form really does follow function, it should be clear after this step which platforms and tools are enabling (or inhibiting) an effective response to this important and urgent problem across the organization.

Significant Data Sources
The next activity in this step is to gain a deeper understanding of what data is needed to meet the new demands and business opportunities made possible with big data. Let’s begin with understanding how the raw materials or data stores can be categorized. Data may come from any number of sources, including one or more of the following:

  • Structured data (from tables, records)
  • Demographic data
  • Time series data
  • Web log data
  • Geospatial data
  • Clickstream data from websites
  • Real-time event data
  • Internal text data (e.g. from e-mails, call center notes, claims, etc.)
  • External social media text data

If you are lucky there will be an enterprise data model or someone in enterprise architecture who can point to the major data sources and where the system of record resides. These are most likely organized by subject area (Customer, Account, Location, etc.) and almost always include schema-on-write structures. Although the focus is big data, it is still important to recognize that the vast majority of data collected originates in transactional systems (e.g. Point of Sale). Look for curated data sets and information catalogs (better yet, an up-to-date metadata repository like Adaptive or Alation) to accelerate this task if present.

Data in and of itself is not very useful until it is converted or processed into useful information.  So here is a useful way to think about how this is viewed or characterized in general. The flow of information across applications and the analytic community from sources external to the organization can take on many forms. Major data sources can be grouped into three (3) major categories:

  • Structured Information,
  • Semi-Structured Information and
  • Unstructured Information.

While modelling techniques for structured information have been around for some time, semi-structured and unstructured information formats are growing in importance. Unstructured data presents a more challenging effort. Many believe up to 80% of the information in a typical organization is unstructured, so this must be an important area of focus as part of an overall information management strategy. It is an area, however, where the accepted best practices are not nearly as well defined. Data standards provide an important mechanism for structuring information. Controlled vocabularies are also helpful (if available) to focus the use of standards to reduce complexity and improve reusability. When we get to modeling platform characteristics and signatures in the later steps, the output of this work will become increasingly valuable.

Analytic Landscape
I have grouped the analytic environments, tools, and underlying technologies together in this step because they are usually the easiest data points to gather and compile.

  • Environments
    Environments are usually described as platforms and can take several different forms. For example, you can group these according to intended use as follows:
    – Enterprise Data Warehouse
    – Specialized Data Marts
    – Hadoop (Big Data)
    – Operational Data Stores
    – Special Purpose Appliances (MPP)
    – Online Analytical Processing (OLAP)
    – Data Visualization and Discovery
    – Data Science (Advanced Platforms such as the SAS Data Grid)
    – NLP and Text Engineering
    – Desktop (Individual Contributor; yes think how pervasive Excel and Access are)
  • Analytic Tools
    Gathering and compiling tools is a little more interesting. There is such a wide variety of tools designed to meet several different needs, and significant overlap in delivered functions exists among them. One way to approach this is to group by intended use. Try using the INFORMS taxonomy, for example, to group the analytic tools you find. Their work identified three hierarchical but sometimes overlapping groupings for analytics categories: descriptive, predictive, and prescriptive analytics. These three groups are hierarchical and can be viewed in terms of the level of analytics maturity of the organization. Recognize there are three types of data analysis:

    • Descriptive (some have split Diagnostic into its own category)
    • Predictive (forecasting)
    • Prescriptive (optimization and simulation)

This simple classification scheme can be extended to include lower-level nodes and improved granularity if needed. The following diagram illustrates the simple taxonomy developed by INFORMS and widely adopted by most industry leaders as well as academic institutions.

[Diagram: INFORMS analytics taxonomy]

Source: INFORMS (Institute for Operations Research and Management Science)
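Here is a minimal sketch of that taxonomy as an extensible nested structure tools can be tagged against; the lower-level nodes and the tool tags are illustrative assumptions:

```python
# The INFORMS-style taxonomy as a nested structure that tools can be
# tagged against. Lower-level nodes are illustrative extensions.
ANALYTICS_TAXONOMY = {
    "Descriptive": {
        "Standard Reporting": [],
        "Dashboards": [],
        "Diagnostic": ["Drill-down", "Root-cause analysis"],  # often split out
    },
    "Predictive": {
        "Forecasting": [],
        "Predictive Modeling": ["Classification", "Regression"],
    },
    "Prescriptive": {
        "Optimization": [],
        "Simulation": [],
    },
}

# Tag each tool found in the inventory with one or more taxonomy paths.
tool_tags = {
    "Excel": ["Descriptive/Standard Reporting"],
    "SAS": ["Predictive/Predictive Modeling", "Prescriptive/Optimization"],
}
```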

Even though these three groupings of analytics are hierarchical in complexity and sophistication, moving from one to another is not clearly separable. That is, the analytics community may be using tools to support descriptive analytics (e.g. dashboards, standard reporting) while at the same time using other tools for predictive and even prescriptive analytics capability in a somewhat piecemeal fashion. And don’t forget to include the supporting tools, which may include metadata functions, modeling notation, and collaborative workspaces for use within the analytic community.

  • Underlying technologies in use
    Technologies in use can be described and grouped as follows (this is just a simple example and is not intended to be an exhaustive compilation).

    • Relational Databases
    • MPP Databases
    • NOSQL databases
      • Key-value stores
      • Document store
      • Graph
      • Object database
      • Tabular
      • Tuple store, Triple/quad store (RDF) database
      • Multi-Value
      • Multi-model database
    • Semi and Unstructured Data Handlers
    • ETL or ELT Tools
    • Data Synchronization
    • Data Integration – Access and Delivery

Putting It All Together
Now that we have compiled the important information needed, where do we put it for the later stages of the work effort? In an organization of any size this can be quite a challenge, due to the sheer size and number of critical facets we will need later, the number of data points, and the need to re-purpose and leverage this in a number of views and perspectives.

Here is what has worked for me. First, use a mind or concept map (Mind Jet, for example) to organize and store URIs to the underlying assets. Structure, flexibility, and the ability to export and consume data from a wide variety of sources are a real plus. The following diagram illustrates an example template I use to organize an effort like this. Note the icons (notepad, paperclip, and MS-Office); even at this high level they point to a wide variety of content gathered and compiled in the fieldwork (including interview notes and observations).


Enterprise Analytics – Mind Map Example

For larger organizations without an existing Project Portfolio Management (PPM) tool or a metadata repository that supports customizations (extensions, flexible data structures), it is sometimes best to augment the maps with a logical and physical database populated with the values already collected and organized in specific nodes of the map. A partial fragment of a logical model would look something like the following, where some sample values are captured in the yellow notes.


Logical Model Fragment
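If a full database is more than you need at first, the same logical structure can start life as a simple data structure mirroring the examination checklist from this step. A minimal sketch with hypothetical values:

```python
# Current-state portfolio inventory mirroring the mind map's top-level
# nodes, with URIs pointing back to the underlying working papers.
# All values are hypothetical examples.
portfolio = {
    "organization": {
        "interview_notes": ["file:///fieldwork/interviews/finance_bi.docx"],
        "org_charts": ["file:///fieldwork/org/analytics_org_2015.pdf"],
    },
    "significant_data_sources": [
        {"name": "Customer", "system_of_record": "CRM", "structure": "structured"},
        {"name": "Clickstream", "system_of_record": "Web logs", "structure": "semi-structured"},
    ],
    "analytic_environments": ["Enterprise Data Warehouse", "Hadoop", "Desktop"],
    "analytic_tools": {"Excel": "Descriptive", "SAS": "Predictive"},
    "underlying_technologies": ["Relational Databases", "ETL Tools"],
}
```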

Armed with the current state analytics landscape (processes and portfolio), the stakeholders’ contributions, and the findings compiled, we are now ready to move on to the real work at hand. In step two (2) we will use this information to determine the analytic operating models in use, supported by the facts.


Suggested content for premium subscribers:
Big Data Analytics – Unlock Breakthrough Results: (Step 1)
CRISP-DM Mind Map (for use with Mind Jet, see https://www.mindjet.com/ for more)
UML for dependency diagrams. Use with yUML (see http://yuml.me/)
Enterprise Analytics Mind Map (for use with Mind Jet)
Logical Data Model (DDL; use with your favorite tool)
Analytics Taxonomy, Glossary (MS-Office)
Reference Library with Supporting Documents