The startling truth of the Overton Window

In my analytic practice I do quite a bit of sentiment analysis. This is not a technology discussion or another post about algorithms. It is a quick introduction to the Overton Window, a key concept that has helped me make sense of what sometimes seems to make no sense at all where politics and sentiment analysis are concerned. Understanding this concept has helped me identify business insights and understand customers, prospects, and opinion leaders, discovering what others are saying about a brand, its products, its reputation, and its competitors in a more meaningful way. It also provides a useful way to filter or mitigate the cognitive bias that can lead to perceptual distortion, inaccurate judgment, illogical interpretation, and sometimes just plain irrational behavior. And it is one way to understand the bigger picture in public policy debate and uncover key trends that can be mystifying at times.

With all the craziness going on in the world today, sometimes it helps to peer beyond the trees to see the forest. This is hard to do amid the provocative (and sometimes entertaining) statements, name-calling, and unnecessary marginalizing of those with a different point of view. See this post for one example, where Donald Trump Has Brought Discussion of Political Impossibilities into the Open. Before reading it, I’m sure many of you wondered why someone with a commanding lead among his peers would take risky policy positions that seem to defy gravity and diverge from generally accepted views, and why this approach is succeeding where so many others have failed.

  • How can seemingly smart leadership take a position so surprising, given the firestorm of political conflict it creates?
  • Is it just me (why is this so obvious, am I the idiot who didn’t get the memo)?
  • Has anybody in this political arena really stopped to consider the unintended consequences of their own response?
  • Is someone labeled by others as a lunatic and a fool simply telling the truth?

In my field, where data and its interpretation are important, my own answers to these questions remained a mystery to me until I ran into a concept known as the Overton Window. The Overton Window, also known as the window of discourse, is a term that originated with Joseph P. Overton (1960–2003), a former vice president of the Mackinac Center for Public Policy. He described a window (or range) of ideas and argued that an idea’s political viability depends mainly on whether it falls within that window, rather than on politicians’ individual preferences. The window covers the range of policies considered politically acceptable in the current climate of public opinion, which a politician can recommend without being considered too extreme to gain or keep public office.

Before you think this only has use in the world of politics (which, it seems, is everywhere), see this less familiar example, where Mark Pilgrim discussed the continuum of opinion about the W3C and web standards. There, the Overton Window (range of possibilities) looked like this in 2001:

1. W3C is the only organization defining web standards.
2. Other organizations may exist, but they haven’t produced anything useful.
3. Other organizations and standards may exist, but W3C is the only legitimate organization that should define standards.
4. W3C standards should be preferred because they are technically superior.
5. W3C is one standards organization among many. Vendors should support multiple standards, and content producers should choose the standard that best suits their needs.
6. W3C standards are inappropriate for the public Internet, but may be useful in other environments.
7. W3C standards are not appropriate for any purpose.
8. W3C is not a legitimate standards organization.
9. W3C should be disbanded and replaced by another standards organization.

According to Mark, “…in 2001, the Overton window (the range of acceptable opinions) was squarely on #1 for every web standard you could think of. HTML, CSS, RDF, SVG, XForms, WCAG, you name it. Vendor-neutral web standards were all the rage, and the W3C was, quite simply, the only game in town for defining such standards”. By 2006, “ …the W3C is now under fire on several fronts, and various people have publicly criticized various W3C standards and (in some cases) publicly abandoned the W3C itself.” The Overton Window had clearly moved.

As Mark states “…there is usually no benefit in arguing too far outside the Overton Window. People who have been claiming since 2001 that the W3C should be disbanded (#9) have had a negligible impact on the ongoing debate. Their opinions are just too far outside the range of acceptable opinion, and they risk being dismissed as kooks. If the Overton Window does eventually shift far enough that such opinions become acceptable, those people may feel vindicated, but it is unlikely that they will have been responsible for shifting the debate”.

Where else does this window exist, and what does it look like? Consider the following example, where the Overton Window of political possibility is depicted in a gun control or 2nd Amendment discussion (note the subtle cognitive bias introduced in the choice of colors).


Or this post about professional opinion on climate change…


And see this terrific example from Daniel Herriges about Moving the Overton Window in urban planning.

The implications of this concept should be recognized when analyzing the underlying data and its interpretation in social media and sentiment analysis. It has other subtle uses in transformation and change management. And of course, in this crazy political season, it may just help you understand the motivation, and might just answer how smart leadership can take a position that seems risky, unreasonable, or just plain crazy like a fox.

What is Natural Language Processing?

Before proceeding with the Building Better Systems series I thought I should write a quick post over the weekend about the underlying Natural Language Processing (NLP) and text engineering technologies proposed in the solution. I received a lot of questions about this key element when I posted How to build better systems – the specification, along with a few not-quite-correct assumptions about what this technology is all about. This is completely understandable: unless you work in voice recognition, machine learning, or artificial intelligence, NLP is not a mainstream subject many of us are familiar with. I’m not going to pretend to know the subject as well as those who are dedicated to this field and can discuss the science behind it far better than I can. Nor do I intend to go into a detailed exposition of just how powerful and useful this technology can be. Instead, I will share how I was able to take advantage of the work done to date and use a couple of tools with NLP components already embedded to automate many of the labor-intensive tasks usually performed when evaluating system specifications. My focus is finding a way to uncover defects in our specifications as quickly as possible, specifically defects related to:

  • Structural Completeness/Alignment
  • Text Ambiguity and Context
  • Section Quantity/Size
  • Volatility
  • Lexicon Discovery
  • Plain Language; word complexity density – complex words
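The last defect class on that list, word complexity density, is the easiest to automate. Below is a minimal sketch of the idea, my own illustration rather than the algorithm any particular tool uses, assuming a crude vowel-group heuristic for counting syllables:

```python
import re

def syllable_count(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def complex_word_density(text: str) -> float:
    """Fraction of words with three or more (estimated) syllables."""
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    complex_words = [w for w in words if syllable_count(w) >= 3]
    return len(complex_words) / len(words)

sample = ("Time Zones are used to manage the various campaigns that are "
          "executed to ensure customers are called within the allowed "
          "campaigning window.")
print(f"{complex_word_density(sample):.2f}")
```

Readability measures such as the Gunning fog index rely on a similar complex-word ratio; a production tool would swap in a proper pronunciation dictionary for the syllable heuristic.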

NLP and text engineering can do this for us and much more. Rather than elaborate any further, I think it is better to just see it in action with a few good examples. Fortunately the University of Illinois Cognitive Computation Group has already done this for all of us and created a site offering a little taste of what can be done. Their demonstration page has several clear demonstrations of the following key concepts:

Natural Language Analysis

  • Coreference Resolution
  • Part of Speech Tagging
  • Semantic Role Labeling
  • Shallow Parsing
  • Text Analysis
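To make one of these concrete, here is a toy part-of-speech tagger that simply looks each word up in a tiny hand-built lexicon. This is my own illustration of the output shape only; the Illinois demos use trained statistical models rather than anything this naive:

```python
# Toy POS tagger: look each token up in a small hand-built lexicon.
# Unknown words get the placeholder tag "UNK". Real taggers are
# trained statistical models; this only shows what the output looks like.
LEXICON = {
    "time": "NOUN", "zones": "NOUN", "customers": "NOUN",
    "are": "VERB", "used": "VERB", "manage": "VERB",
    "the": "DET", "to": "PART",
}

def tag(sentence: str) -> list[tuple[str, str]]:
    return [(word, LEXICON.get(word.lower(), "UNK")) for word in sentence.split()]

print(tag("Time Zones are used to manage customers"))
```

A shallow parser would then group runs of tags (for example DET NOUN) into noun and verb phrases without building a full parse tree.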

Entities and Information Extraction

  • Named Entity Recognition
  • Named Entity Recognizer (often using extended entity type sets)
  • Number Quantization
  • Temporal Extraction and Comparison


  • Context Sensitive Verb Paraphrasing
  • LLM (Lexical Level Matching)
  • Named Entity Similarity
  • Relation Identification
  • Word Similarity
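Word similarity is the easiest of these to sketch. The comparison below scores only surface (character-level) similarity using Python’s standard library, which is deliberately naive; the actual demos draw on semantic resources, so treat this purely as an illustration of the scoring interface:

```python
from difflib import SequenceMatcher

def surface_similarity(a: str, b: str) -> float:
    """Score two words in [0, 1] by shared character runs (surface
    form only; semantic similarity needs resources like WordNet)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for pair in [("campaign", "campaigning"), ("campaign", "window")]:
    print(pair, round(surface_similarity(*pair), 2))
```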

For example say we wanted to parse and process this simple sentence:

“Time Zones are used to manage the various campaigns that are executed to ensure customers are called within the allowed campaigning window which is 8 AM through 8 PM.”

Using the text analysis demonstration at the site we can copy and paste this simple phrase into the dialog box and submit for processing. The results are returned in the following diagram.



The parser has identified things (entities), guarantees, purpose, verbs (use, manage, call, and execute), and time within this simple sentence. I think you can now see how powerful this technology can be when used to collect, group, and evaluate text. While this is impressive enough, think of the possibilities for using it to process page after page of business and system requirements.
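The time expressions the parser picked out hint at what temporal extraction involves. A crude regex version, assumed here just to show the idea rather than anything a real temporal extractor would settle for, might look like this:

```python
import re

# Toy temporal extraction: find clock-time expressions with a regex.
# A real extractor would also normalize them into a calendar model.
TIME = re.compile(r"\b\d{1,2}\s*(?:AM|PM)\b", re.IGNORECASE)

sentence = ("Time Zones are used to manage the various campaigns that are "
            "executed to ensure customers are called within the allowed "
            "campaigning window which is 8 AM through 8 PM.")
print(TIME.findall(sentence))  # → ['8 AM', '8 PM']
```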

There is also a terrific demonstration illustrating what the Cognitive Computation Group calls the Wikification system. Use their examples or plug in your own text for processing. The Wikification example parses plain text and identifies entities that have Wikipedia articles. The result set includes live links to the corresponding Wikipedia pages and the categories associated with each entity. Here is an example from an architecture specification describing (at a high level) Grid Service Agent behavior.

The Grid Service Agent (GSA) acts as a process manager that can spawn and manage Service Grid processes (Operating System level processes) such as Grid Service Manager and Grid Service Container. Usually, a single GSA is run per machine. The GSA can spawn Grid Service Managers, Grid Service Containers, and other processes. Once a process is spawned, the GSA assigns a unique id for it and manages its life cycle. The GSA will restart the process if it exits abnormally (exit code different than 0), or if a specific console output has been encountered (for example, an Out Of Memory Error).

The result returned is illustrated in the following diagram. Note the parser has categorized the objects identified and selected context-specific links to public Wikipedia articles (and related links) to elaborate on them where such articles exist.


The more ambitious among us (with an internal wiki) can see how powerful this can be. Armed with the source code, this is potentially a wonderful application processor for linking dynamic content (think system specifications or requirements) in context back to an entire knowledge base.
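As a sketch of how such a linker behaves, here is a toy wikification pass over the GSA passage. The gazetteer mentions and article titles below are my own picks for illustration, not output from the Illinois system:

```python
# Toy wikification: map known entity mentions to Wikipedia article URLs
# using a hand-built gazetteer. The real Wikifier learns to disambiguate
# mentions against all of Wikipedia; this only shows the input/output shape.
GAZETTEER = {
    "operating system": "Operating_system",
    "exit code": "Exit_status",
}

def wikify(text: str) -> dict[str, str]:
    """Return {mention: article URL} for each gazetteer entry found in text."""
    lowered = text.lower()
    return {
        mention: f"https://en.wikipedia.org/wiki/{title}"
        for mention, title in GAZETTEER.items()
        if mention in lowered
    }

passage = ("The Grid Service Agent acts as a process manager that can spawn "
           "and manage Operating System level processes. The GSA will restart "
           "the process if it exits abnormally (exit code different than 0).")
print(wikify(passage))
```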

If you really want to get your hands dirty and dive right in, there are two widely known frameworks for natural language processing.

  • GATE (General Architecture for Text Engineering)
  • UIMA (Unstructured Information Management Architecture)

GATE is a Java suite of tools originally developed at the University of Sheffield and now used worldwide by a wide community of scientists and companies for all sorts of natural language processing tasks. It is readily available and works well with Protégé (a semantic editor) for those of you into ontology development.

UIMA was originally developed by IBM and is now maintained by the Apache Software Foundation.

If you need to input plain text and identify entities (such as persons, places, and organizations) or relations (such as works-for or located-at), either framework may be the way to do it. Access to both is open, and there is a large community of developers supporting both initiatives.

For myself, I’m going to use a wonderful commercial product available in the cloud (and on-premise if needed) called The Visible Thread to illustrate many of the key concepts. Designed for just this need, it comes with collaboration features already built in and a pretty handy user interface for managing the collection (the fancy word for this is corpus; look it up and impress your friends) of documents I need to evaluate. As an architect my primary mission is to reduce unnecessary effort, so it just makes sense to use this. Of course, if you want to build this yourself, there are several options using GATE or UIMA to develop your own text processor for evaluating system specification documents. I hope you will take the time to try some of the samples at the University of Illinois Cognitive Computation Group site to get a feel for what is going on under the covers as I discuss the methods and techniques around building better systems through high-quality specifications in the coming weeks. You don’t need to be a rocket scientist (something I can’t relate to either) to appreciate the underlying technology and concepts you can take advantage of to make meeting this important and urgent need a reality in your own practice.