What is Natural Language Processing?

Before proceeding with the Building Better Systems series I thought I should write a quick post over the weekend about the underlying Natural Language Processing (NLP) and text engineering technologies proposed in the solution. I received a lot of questions about this when I posted How to build better systems – the specification and mentioned this key element, along with a few assumptions that are not quite correct about just what this technology is all about. This is completely understandable. Unless you work in voice recognition, machine learning, or artificial intelligence, NLP is not a mainstream subject many of us are familiar with.

I’m not going to pretend to know the subject as well as others who are dedicated to this field and can discuss the science behind it far better than I can. Nor do I intend to go into a detailed exposition of just how powerful and useful this technology can be. Instead, I will share with you how I was able to take advantage of the work done to date and use a couple of tools with NLP components already embedded to automate many of the labor-intensive tasks usually performed when evaluating system specifications. My focus is finding a way to automate the discovery of defects in our specifications as quickly as possible. This can be accomplished by uncovering defects in our specifications related to:

  • Structural Completeness/Alignment
  • Text Ambiguity and Context
  • Section Quantity/Size
  • Volatility
  • Lexicon Discovery
  • Plain Language: word complexity density (complex words)

NLP and text engineering can do this for us and much more. Rather than elaborate any further, I think it is better to just see it in action with a few good examples. Fortunately the University of Illinois Cognitive Computation Group has already done this for all of us and created a site offering a little taste of what can be done. Their NLP demonstration page has several clear demonstrations of the following key concepts:

Natural Language Analysis

  • Coreference Resolution
  • Part of Speech Tagging
  • Semantic Role Labeling
  • Shallow Parsing
  • Text Analysis

Entities and Information Extraction

  • Named Entity Recognition (including extended entity type sets)
  • Number Quantization
  • Temporal Extraction and Comparison

Similarity and Paraphrasing

  • Context Sensitive Verb Paraphrasing
  • LLM (Lexical Level Matching)
  • Named Entity Similarity
  • Relation Identification
  • Word Similarity

For example, say we wanted to parse and process this simple sentence:

“Time Zones are used to manage the various campaigns that are executed to ensure customers are called within the allowed campaigning window, which is 8 AM through 8 PM.”

Using the text analysis demonstration at the site we can copy and paste this simple phrase into the dialog box and submit for processing. The results are returned in the following diagram.



The parser has identified things (entities), guarantees, purpose, verbs (use, manage, call, and execute), and time within this simple sentence. I think you can see now how powerful this technology can be when used to collect, group, and evaluate text. And while this is impressive enough, think of the possibilities of using it to process page after page of business and system requirements.
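The flavor of what the demo pulls out of that sentence can be approximated, very crudely, in a few lines of code. This is a toy sketch, not a real parser: the hand-built verb lexicon and the clock-time pattern below are my own illustrative assumptions, where a genuine NLP pipeline would use trained part-of-speech tagging and temporal extraction instead.

```python
import re

SENTENCE = ("Time Zones are used to manage the various campaigns that are "
            "executed to ensure customers are called within the allowed "
            "campaigning window which is 8 AM through 8 PM.")

# Tiny hand-built verb lexicon standing in for real part-of-speech tagging.
VERB_STEMS = {"use", "manage", "execute", "ensure", "call"}

def crude_verbs(text):
    """Return lexicon verbs found in the text (base or -d/-ed form)."""
    words = re.findall(r"[a-z]+", text.lower())
    found = set()
    for w in words:
        for stem in VERB_STEMS:
            if w in (stem, stem + "d", stem + "ed"):
                found.add(stem)
    return sorted(found)

def crude_times(text):
    """Pick out simple clock-time expressions such as '8 AM'."""
    return re.findall(r"\b\d{1,2}\s?(?:AM|PM)\b", text)

print(crude_verbs(SENTENCE))  # ['call', 'ensure', 'execute', 'manage', 'use']
print(crude_times(SENTENCE))  # ['8 AM', '8 PM']
```

Even this naive scan recovers the verbs and the campaigning window; the real demo does the same with statistical models and no hand-built word lists.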

There is also a terrific demonstration illustrating what the Cognitive Computation Group calls the Wikification system. Using the Wikification example you can insert plain text (their samples or your own) to be parsed and processed to identify entities with corresponding Wikipedia articles. The result set returned includes live links to visit each entity’s Wikipedia page along with the categories associated with it. Here is an example from an architecture specification describing (at a high level) Grid Service Agent behavior.

The Grid Service Agent (GSA) acts as a process manager that can spawn and manage Service Grid processes (Operating System level processes) such as Grid Service Manager and Grid Service Container. Usually, a single GSA is run per machine. The GSA can spawn Grid Service Managers, Grid Service Containers, and other processes. Once a process is spawned, the GSA assigns a unique id for it and manages its life cycle. The GSA will restart the process if it exits abnormally (exit code different than 0), or if a specific console output has been encountered (for example, an Out Of Memory Error).

The result returned is illustrated in the following diagram. Note the parser here has categorized and selected context specific links to public Wikipedia articles (and related links) to elaborate on the objects identified where such articles exist.


The more ambitious among us (with an internal Wiki) can understand how powerful this can be. Armed with the source code, this is potentially a truly wonderful way to link dynamic content (think system specifications or requirements) in context back to an entire knowledge base.
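To make the idea concrete, here is a minimal sketch of the linking step a wikifier performs once entity mentions have been identified. The mention list and the `wiki_link` helper are hypothetical names of my own; a real system like the Cognitive Computation Group’s also disambiguates candidate titles against article content rather than trusting the surface string.

```python
# Hypothetical mini-wikifier: given entity mentions already extracted from a
# specification, build candidate Wikipedia links the way a wikification
# system would (minus the disambiguation a real system performs).

def wiki_link(mention):
    """Guess a Wikipedia URL for a mention using the title-spacing convention."""
    title = mention.strip().replace(" ", "_")
    return f"https://en.wikipedia.org/wiki/{title}"

mentions = ["Operating system", "Process (computing)", "Exit status"]
for m in mentions:
    print(f"{m} -> {wiki_link(m)}")
```

Swap the static mention list for the output of an entity recognizer, point the URL template at an internal wiki, and you have the skeleton of the knowledge-base linker described above.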

If you really want to get your hands dirty and dive right in, there are two widely known frameworks for natural language processing.

  • GATE (General Architecture for Text Engineering)
  • UIMA (Unstructured Information Management Architecture)

GATE is a Java suite of tools originally developed at the University of Sheffield and now used worldwide by a wide community of scientists and companies for all sorts of natural language processing tasks. It is readily available and works well with Protégé (a semantic editor) for those of you into ontology development.

UIMA (Unstructured Information Management Architecture) was originally developed by IBM and is now maintained by the Apache Software Foundation.

If you need to input plain text and identify entities such as persons, places, and organizations, or relations (such as works-for or located-at), this may be the way to do it. Access to both frameworks is open, and there is a large community of developers supporting both initiatives.
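For a flavor of the entity-and-relation extraction just described, here is a deliberately naive sketch (in Python, since the Java frameworks themselves need far more setup). The "works for" pattern and the capitalized-token stand-in for named entity recognition are illustrative assumptions only; GATE and UIMA ship trained components that do this properly.

```python
import re

# Toy relation extractor: finds "<Person> works for <Org>" patterns.
# Runs of capitalized tokens stand in for real named-entity recognition.
NAME = r"([A-Z][a-z]+(?: [A-Z][a-z]+)*)"
WORKS_FOR = re.compile(NAME + r" works for " + NAME)

def works_for_relations(text):
    """Return (person, organization) pairs matched by the pattern."""
    return WORKS_FOR.findall(text)

text = "Alice Smith works for Acme Corp. Bob works for Initech."
print(works_for_relations(text))
# [('Alice Smith', 'Acme Corp'), ('Bob', 'Initech')]
```

The open frameworks replace both halves of this sketch: statistical entity taggers instead of the capitalization heuristic, and learned or rule-based relation annotators instead of one hard-coded pattern.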

For myself, I’m going to use a wonderful commercial product available in the cloud (and on-premise if needed) called The Visible Thread to illustrate many of the key concepts. Designed for just this need, it comes with collaboration features already built in and a pretty handy user interface for managing the collection (the fancy word for this is corpus; look it up and impress your friends) of documents I need to evaluate. As an architect my primary mission is to reduce unnecessary effort, so it just makes sense to use this. Of course, if you want to build this yourself, there are several options available using GATE or UIMA to develop your own text processor to evaluate system specification documents. I hope you will take the time to try some of the samples at the University of Illinois Cognitive Computation Group site to get a feel for what is going on under the covers as I discuss the methods and techniques around building better systems through high quality specifications in the coming weeks. You don’t need to be a rocket scientist (something I can’t relate to either) to appreciate the underlying technology and the concepts you can take advantage of to make meeting this important and urgent need a reality in your own practice.


How to build better systems – the specification

I’m going to take a little departure from building Road Maps to begin a new series, How to build better systems. Applied Enterprise Architecture means a little hands-on practical application is always a good way to keep our knives sharp and skills current. We will begin with the specification. I define a Systems Requirements Specification (SRS) as a collection of desired requirements and characteristics for a proposed software system design. This includes Business Requirements (commonly captured in a BRD), non-functional requirements, and core architectural deliverables. The collection is intended to describe the behavior of a system to be developed, and should include a set of use cases that describe the interactions users will have with the software. The specification should be accompanied by a set of architectural guidelines for the project team (see Nick Malik’s fine post “A practical set of EA deliverables” for good examples) to follow as first principles. This is true (in slightly different variants) no matter which Systems Development Life Cycle (Waterfall, Agile, Spiral, even Test Driven Development) we are planning to use to manage the development effort. Getting this right can reduce defects and errors early, and it remains the single most effective action we can take to improve our project outcomes.

The Challenge

We can probably all agree that defining functional and non-functional requirements is one of the most critical elements of a project’s success or failure. In fact, searching Google for the simple term “defining good requirements” returns over ninety-one (91) million hits! Yet in spite of all the good advice and fine writing available, we have all seen inaccurate, incomplete, and poorly written requirements specifications. We all know this results in poor quality products, wasted resources, higher costs, dissatisfied customers, and late project completion. The cost to remediate these defects in development projects is staggering. I’m going to try to keep this professional and objective, though I can’t help thinking this is probably a very good place to rant about the dismal state of affairs many of us find ourselves in today. We spend an enormous amount of energy testing executable code, yet the tests themselves are constructed on faulty assumptions or ambiguous specifications (you figure it out) that can’t be tested in the first place. This is a common and all too familiar challenge I see everywhere in my practice. And I mean everywhere.

How bad is it?

More than 31% of all projects will be cancelled before completion. 53% of projects will take more time to complete and will cost twice their original estimates. Overall, the success rate for IT projects is less than 30%. Using generally accepted quantitative findings, fixing requirements defects (errors) can eat up roughly one third of most software project budgets. For example, requirements errors have been found to consume 28% to 42.5% of total software development cost (Hooks and Farry, 2001). Rework can consume up to 40–50 percent of project budgets (Boehm and Basili). In fact, faulty requirements are the root cause of 75–85 percent of rework (Leffingwell). And early detection is critical. The cost to a 1 million dollar budget at the low end of the scale can be more than $250,000. Finding and fixing defects outside the SDLC phase where they originate has its own cost (up to 400 times as much to remedy – Reifer, 2007). It might cost $250 to $1,000 to find and fix a requirements defect during the requirements phase; if it is uncovered when the product is in production, repairing it can cost $100,000 to $400,000. As Willie Sutton once said, he robbed banks “because that’s where the money is”. And so it is here that the real opportunity lies (and has for some time) to capture and preserve real business value. Reducing specification requirements errors is the single most effective action we can take to improve project outcomes.
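The figures above are easy to sanity-check with a little arithmetic. The snippet below simply restates the published ranges already cited; nothing new is assumed.

```python
# Back-of-the-envelope check of the defect-cost figures quoted above.

budget = 1_000_000  # illustrative $1M project budget

# Hooks and Farry (2001): requirements errors consume 28% to 42.5% of cost.
share_low, share_high = 0.28, 0.425
cost_low = budget * share_low    # 280,000  (matches "more than $250,000")
cost_high = budget * share_high  # 425,000

# Reifer (2007): up to ~400x escalation outside the originating phase.
fix_in_phase = 1_000   # high end of the $250-$1,000 requirements-phase cost
escalation = 400
cost_in_production = fix_in_phase * escalation  # 400,000

print(f"Requirements errors: ${cost_low:,.0f} to ${cost_high:,.0f}")
print(f"Same defect found in production: up to ${cost_in_production:,.0f}")
```

The two calculations agree with each other: the "$100,000 to $400,000 in production" range is just the $250-$1,000 requirements-phase cost multiplied by the published escalation factor.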

The four biggest causes of failed projects are:

  1. incomplete, missing, or poorly written requirements,
  2. lack of user input and buy-in,
  3. failure to define clear objectives, and
  4. scope creep.

All four of these root causes stem from poor requirements and specifications practices. It is clear that current requirements management processes and tools are simply not working.

So what?

This is how we have always performed the work, so why change the way we do it now? Crafting requirements specifications can seem like a daunting task. How can the author tell the difference between a good requirement and a bad one? How can we ensure clear, unambiguous system requirements are crafted on a consistent and repeatable basis? While many new business analysts look for templates, it is rare that a template provides enough rigor and information to help write better system requirements. Templates can help you get started, but they don’t really tell you what a good requirement looks like. Understanding what represents good content in a requirement is a little more difficult (for example, the SMART criteria: Specific, Measurable, Achievable, Realistic, and Timely). Even more difficult is preventing text ambiguity from occurring in the document(s) due to the presence of ‘non-actionable’, vague, or imprecise language.
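Detecting ‘non-actionable’, vague, or imprecise language is exactly the kind of check that can be automated. Here is a minimal sketch; the weak-word list is my own illustrative sample, and commercial tools use far larger lexicons plus grammatical context.

```python
import re

# Illustrative weak-word list only; real NLP checkers ship curated lexicons
# of hundreds of vague, non-actionable, or unverifiable terms.
VAGUE_TERMS = ["as appropriate", "user-friendly", "fast", "etc",
               "should", "may", "approximately", "if possible"]

def flag_ambiguity(requirement):
    """Return the vague terms present in a requirement statement."""
    low = requirement.lower()
    return [t for t in VAGUE_TERMS
            if re.search(r"\b" + re.escape(t) + r"\b", low)]

req = "The system should respond fast and be user-friendly."
print(flag_ambiguity(req))  # ['user-friendly', 'fast', 'should']
```

A requirement that trips the flags gets sent back to its author: “respond fast” becomes a measured response time, and “user-friendly” becomes a testable usability criterion.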

There is a better way

I have stumbled across a better way to tackle this challenge. The approach uses process improvements, a few new roles, and Natural Language Processing (NLP) technology to identify defects early in the development life cycle before they become expensive to correct.

The approach I propose is inexpensive and mitigates much of the risk associated with defects and errors early in the development life cycle. We already have many fine tools to manage the traditional requirements practice. What I intend to share with you is very different and, until very recently, widely unknown to many in the development community. The method works across all five types of requirements defined in the IIBA’s Business Analysis Body of Knowledge:

  1. Business Requirements
  2. Stakeholder Requirements
  3. Functional Requirements
  4. Nonfunctional Requirements, and
  5. Transition Requirements

The immediate benefit is reduced costly rework (or downstream change requests) and the opportunity to build the right product the first time. The more significant value is uncovering and reducing requirements defects early. This is the single most effective action (have I said this before?) we can take to improve our project outcomes. In the coming weeks I will address a method to tackle this challenge that is inexpensive and uncovers many of the defects and errors early in the development cycle. I will share with you how to evaluate the following key elements:

  • Structural Completeness/Alignment (%)
  • Text Ambiguity and Context (Quality %)
  • Section Quantity/Size
  • Volatility (Change over time)
  • Lexicon (Frequency Distribution across documents) Discovery
  • Plain Language (Readability) Word Complexity Density – complex words
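As a preview of the Plain Language element above, here is one crude way to compute word complexity density. The vowel-run syllable estimate is a rough heuristic of my own; real readability tools use pronunciation dictionaries and proper syllable counting.

```python
import re

def syllable_estimate(word):
    """Crude syllable count: number of vowel runs (heuristic only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def complex_word_density(text):
    """Share of words with 3+ estimated syllables (a plain-language signal)."""
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    complex_words = [w for w in words if syllable_estimate(w) >= 3]
    return len(complex_words) / len(words)

sample = "The subsystem shall automatically reconcile transactional records."
print(f"Complex word density: {complex_word_density(sample):.0%}")
```

Run over a whole corpus of specification sections, a score like this quickly highlights the documents and authors that need a plain-language pass.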

I will also illustrate how to use Natural Language Processing (NLP) and text engineering tools to automate defect identification. One of the most powerful features of this method is the immediate feedback provided to authors and designers early in the development life cycle. They can then take action to correct the defects uncovered while they are still relatively inexpensive to remedy. The end result is improved specifications driving better products. So if you are a program manager, analyst, developer, or architect, stay tuned for an approach that just may change the way you think about your writing. And improve your delivery of quality systems.