"in order to reason well ... it is absolutely necessary to possess ... such virtues as intellectual honesty and sincerity and a real love of the truth." — C. S. Peirce
When it is mentioned that project management is a control system, many in the agile world wince. But in fact project management is a control system - a closed loop control system.
Here's how it works.
Each of these elements has some unit of measure:
Here's a small example of incremental delivery of value in an enterprise domain:
The accomplishment of a mission or fulfillment of a business strategy can be called the value produced by the project. In the picture above the value delivered to the business is incremental, but fully functional on delivery to accomplish the business goal. These goals are defined in Measures of Effectiveness and Measures of Performance, and these measures are derived from the business strategy or mission statement. So if I want a fleet of cars for my taxi service, producing a skateboard, then a bicycle, is not likely to accomplish the business goal.
But the term value alone is nice, but not sufficient. Value needs to have some unit of measure. Revenue, cost reduction, environmental cleanup, education of students, reduction of disease, the processing of sales orders at a lower cost, flying the 747 to its destination with minimal fuel. Something that can be assessed in tangible units of measure.
In exchange for this value, with its units of measure, we have the cost of producing this value.
To assess the value or the cost, we need to know the other item. We can't know the value of something without knowing its cost. We can't know if the cost is appropriate without knowing the value produced by the cost.
This is one principle of the microeconomics of software development.
The process of deciding between choices about cost and value - the trade space between cost and value - starts with information about both cost and value. This information lives in the realm of uncertainty before and during the project's life-cycle. It is only known on the cost side after the project completes. And for the value may never be known in the absence of some uncertainty as to the actual measure. This is also a principle of microeconomics - the measures we use to make decisions are random variables.
To determine the value of these random variables we need to estimate, since of course they are random. With these random variables - the cost of producing value and the value exchanged for that cost - the next step in projects is to define what we want the project to do:
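Since both cost and value are random variables, the trade space between them can be explored by sampling. Here's a minimal sketch of that idea - the distributions and dollar figures are invented for illustration, not from any real project - estimating the probability that delivered value exceeds cost:

```python
import random

random.seed(7)

def sample_cost():
    # Hypothetical cost uncertainty: triangular(low, high, most-likely)
    return random.triangular(80_000, 200_000, 120_000)

def sample_value():
    # Hypothetical value uncertainty, wider and skewed to the upside
    return random.triangular(100_000, 400_000, 180_000)

N = 100_000
wins = sum(sample_value() > sample_cost() for _ in range(N))
print(f"P(value exceeds cost) is roughly {wins / N:.2f}")
```

Even this toy version makes the microeconomics point: the decision input is a probability, not a single number.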
The actual delivery of this value can be incremental, it can be iterative, evolutionary, linear, big bang, or other ways. Software many times can be iterative or incremental, pouring concrete and welding pipe can as well. Building the Interstate might be incremental, the high rise usually needs to wait for the occupancy permit before the value is delivered to the owners. There is no single approach.
For each of these, a control system is needed to assure progress to plan is being made. The two types of control systems are Open Loop and Closed Loop. The briefing below speaks to those and their use.
Obviously not every decision we make is based on mathematics, but when we're spending money, especially other people's money, we'd better have some good reason to do so. Some reason other than gut feel for any significant value at risk. This is the principle of Microeconomics.
All Things Considered is running a series on how people interpret probability. From capturing a terrorist to the probability it will rain at your house today. The world lives on probabilistic outcomes. These probabilities are driven by underlying statistical processes. These statistical processes create uncertainties in our decision making processes.
Both Aleatory and Epistemic uncertainty exist on projects. These two uncertainties create risk. This risk impacts how we make decisions. Minimizing risk while maximizing reward is a project management process, as well as a microeconomics process. By applying statistical process control we can engage project participants in the decision making process. Making decisions in the presence of uncertainty is sporty business, and many examples of poor forecasts abound. The flaws of statistical thinking are well documented.
When we encounter the notion that decisions can be made in the absence of statistical thinking, there are some questions that need to be answered. Here's one set of questions and answers from the point of view of the mathematics of decision making using probability and statistics.
The book opens with a simple example.
Here's a question. We're designing airplanes - during WWII - in ways that will prevent them getting shot down by enemy fighters, so we provide them with armor. But armor makes them heavier. Heavier planes are less maneuverable and use more fuel. Armoring planes too much is a problem. Too little is a problem. Somewhere in between is optimum.
When the planes came back from a mission, the number of bullet holes was recorded. The damage was not uniformly distributed, but followed this pattern:
- Engine - 1.11 bullet holes per square foot (BH/SF)
- Fuselage - 1.73 BH/SF
- Fuel System - 1.55 BH/SF
- Rest of plane - 1.8 BH/SF

The first thought was to provide armor where the need was the highest. But after some thought, the right answer was to provide armor where the bullet holes aren't - on the engines.
"Where are the missing bullet holes?" The answer was on the missing planes. The total number of planes leaving minus those returning was the number of planes that were hit in a location that caused them not to return - the engines.
The mathematics here is simple. Start by setting a variable to zero. This variable is the probability that a plane that takes a hit in the engine manages to stay in the air and return to base. The result of this analysis (pp. 5-7 of the book) can be applied to our project work.
This is an example of the thought processes needed for project management and the decision making processes needed for spending other people's money. The mathematician's approach is to ask: what assumptions are we making? Are they justified? The first assumption - the erroneous assumption - was that the planes returning represented a random sample of all the planes. Only if that were so could the conclusions be drawn.
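The survivorship-bias trap can be made concrete with a small simulation. This is a hypothetical sketch - the per-section survival probabilities below are invented for illustration, not taken from the wartime data - but it shows how uniformly distributed hits produce exactly the "few engine holes" pattern on the planes that make it back:

```python
import random

random.seed(42)

# Invented survival probability given a hit in each section; only the
# engine's fragility matters for the point being made.
survival = {"engine": 0.3, "fuselage": 0.95, "fuel system": 0.9, "rest": 0.97}
sections = list(survival)

returned_hits = {s: 0 for s in sections}
planes_lost = 0
for _ in range(50_000):
    hit = random.choice(sections)          # hits land uniformly across sections
    if random.random() < survival[hit]:    # does the plane make it home?
        returned_hits[hit] += 1
    else:
        planes_lost += 1

# Returning planes show the fewest engine hits, though hits were uniform
print(returned_hits)
print(planes_lost)
```

The observed sample (returning planes) systematically under-counts the very hits that matter - the same error as measuring only self-selected, completed projects.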
In The End
Show me the numbers. Numbers talk, BS walks is the crude phrase, but true. When we hear some conjecture about the latest fad, think about the numbers. But before that, read Beyond the Hype: Rediscovering the Essence of Management, Robert Eccles and Nitin Nohria. This is an important book that lays out the processes for sorting out the hype - the untested and likely untestable conjectures - from the testable processes.
The presentation Dealing with Estimation, Uncertainty, Risk, and Commitment: An Outside-In Look at Agility and Risk Management has become a popular message for those suggesting we can make decisions about software development in the absence of estimates.
The core issue starts with the first chart. It shows the actual completion of a self-selected set of projects versus the ideal estimate. This chart is now in use by the #NoEstimates paradigm as evidence that estimating is flawed and should be eliminated. How to eliminate estimates while making decisions about spending other people's money is not actually clear. You'll have to pay €1,300 to find out.
But let's look at this first chart. It shows the self-selected projects, the vast majority completed above the initial estimate. What is this initial estimate? In the original paper, the initial estimate appears to be the estimate made by someone for how long the project would take. Not sure how that estimate was arrived at - the basis of estimate - or how the estimate was derived. We all know that subject matter expertise alone is the least desirable basis, and past performance, calibrated for all the variables, is the best.
So Herein Lies the Rub - to Misquote Shakespeare's Hamlet
The ideal line is not calibrated. There is no assessment of whether the original estimate was credible or bogus. If it was credible, what was the confidence of that credibility, and what was the error band on that confidence?
This is a serious - some might say egregious - error in statistical analysis. We're comparing actuals to a baseline that is not calibrated. This means the initial estimate is meaningless in the analysis of the variances without an assessment of its accuracy and precision. To then construct a probability distribution chart is nice, but measured against what? Against bogus data.
This is harsh, but the paper and the presentation provide no description of the credibility of the initial estimates. Without that, any statistical analysis is meaningless. Let's move to another example in the second chart.
The second chart - below - is from a calibrated baseline. The calibration comes from a parametric model, where the parameters of the initial estimate are derived from prior projects - the reference class forecasting paradigm. The tool used here is COCOMO. There are other tools based on COCOMO, on Larry Putnam's work, and on other methods that can be used for similar calibration of the initial estimates. A few we use are QSM, SEER, and PRICE.
One place to start is Validation Method for Calibrating Software Effort Models. But this approach started long ago with An Empirical Validation of Software Cost Estimation Models. All the way to the current approaches of ARIMA and PCA forecasting for cost, schedule, and performance using past performance. And current approaches, derived from past research, of tuning those cost drivers using Bayesian Statistics.
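For a flavor of what a parametric model looks like, here is the Basic COCOMO form - effort and schedule as power-law functions of size. The coefficients are the published organic-mode values from Boehm's original model; the 32 KLOC input is a made-up example, and a real calibration would tune these coefficients against your own past performance:

```python
# Basic COCOMO effort and schedule equations, organic-mode coefficients.
A, B = 2.4, 1.05   # effort = A * KLOC**B   (person-months)
C, D = 2.5, 0.38   # schedule = C * effort**D   (months)

def cocomo_organic(kloc: float) -> tuple[float, float]:
    effort = A * kloc ** B
    schedule = C * effort ** D
    return effort, schedule

effort, months = cocomo_organic(32)  # a hypothetical 32 KLOC system
print(f"~{effort:.0f} person-months over ~{months:.0f} months")
```

The point is not the specific numbers - it's that the baseline comes from a model whose parameters were fit to prior projects, so variances against it mean something.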
Issues of software management - estimates of software cost, time, and performance - abound. We hear about them every day. Our firm works on programs that have gone Over Target Baseline. So we walk the walk every day.
But when bad statistics are used to sell solutions to complex problems, that's when it becomes a larger problem. To solve this nearly intractable problem of project cost and schedule overrun, we need to look to the root cause. Let's start with a book, Facts and Fallacies of Estimating Software Cost and Schedule. From there let's look to some more root causes of software project problems. Why Projects Fail is a good place to move to, with their 101 common causes. Like the RAND and IDA Root Cause Analysis reports, many are symptoms rather than root causes, but good information all the same.
So in the end, when it is suggested that the woes of project success can be addressed by applying the latest fad, ask a simple question - is there any tangible, verifiable, externally reviewed evidence for this? Or is this just another self-selected, self-reviewed, self-promoting idea that violates the principles of microeconomics as applied to software development, where:
Economics is the study of how people make decisions in resource-limited situations. This definition of economics fits the major branches of classical economics very well.
Macroeconomics is the study of how people make decisions in resource-limited situations on a national or global scale. It deals with the effects of decisions that national leaders make on such issues as tax rates, interest rates, and foreign and trade policy, in the presence of uncertainty.
Microeconomics is the study of how people make decisions in resource-limited situations on a personal scale. It deals with the decisions that individuals and organizations make on such issues as how much insurance to buy, which word processor to buy, what features to develop in what order, whether to make or buy a capability, or what prices to charge for their products or services, in the presence of uncertainty. Real Options is part of this decision making process as well.
Economic principles underlie the structure of the software development life cycle, and its primary refinements of prototyping, iterative and incremental development, and emerging requirements.
If we look at writing software for money, it falls into the microeconomics realm. We have limited resources, limited time, and we need to make decisions in the presence of uncertainty.
In order to decide about the future impact of any one decision - making a choice - we need to know something about the future, which is itself uncertain. The tool for making these decisions about the future in the presence of uncertainty is called estimating. Lots of ways to estimate. Lots of tools to help us. Lots of guidance - books, papers, classrooms, advisers.
But asserting we can in fact make decisions about the future in the presence of uncertainty without estimating is mathematically and practically nonsense.
So now is the time to learn how to estimate, using your favorite method, because to decide in the absence of knowing the impact of that decision is counter to the stewardship of our customer's money. And if we want to keep writing software for money, we need to be good stewards first.
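Learning to estimate can start small. Here's a sketch of one common method - three-point estimates per task, combined by Monte Carlo simulation to produce a confidence level rather than a single number. The task estimates below are hypothetical:

```python
import random

random.seed(1)

# Hypothetical (low, most likely, high) estimates in days for four tasks
tasks = [(3, 5, 10), (8, 12, 30), (2, 4, 6), (5, 9, 20)]

trials = 20_000
totals = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)
    for _ in range(trials)
)
p50 = totals[trials // 2]
p80 = totals[int(trials * 0.8)]
print(f"50% confidence: {p50:.0f} days, 80% confidence: {p80:.0f} days")
```

An estimate stated as "X days at 80% confidence" is a decision-making tool; a single number with no confidence attached is closer to a guess.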
Visiting the Montana State Museum of the Rockies this weekend, I came across this sign in an exhibit.
Now writing software for money is not this kind of science, but it is closely related to engineering and the enablement of engineering processes in our domain - things that fly away, swim away, drive away, and the enterprise IT systems that support those outcomes.
When we hear about some new way to do something around managing projects that spend other peoples money, we do need to ask the questions posed by the sign above.
Is there any evidence that the suggested way - this new alternative of doing something - has the desired outcomes?
No? Then it's going to be difficult for those of us working in a domain that provides mission critical solutions - ERP, embedded software, infrastructure that other systems depend on - to know how to assess those suggestions.
The process of asking and answering a question like that is found in the Governance paradigm. Since our role is to be stewards of our customer's money in the delivery of value in exchange for that money, it's a legitimate question and deserves a legitimate answer. Without an answer, or at least an answer that can be tested outside the personal anecdotal experience of the proposer, it tends to be unsubstantiated opinion.
The question is twofold: can the customer accept the release into use, and does the customer have the ability to make use of the incremental capabilities of these releases?
Let's start with the incremental release. I know the picture to the left is considered a metaphor by some. But as a metaphor it's weak. Let's look at a few previous posts: Another Bad Agile Analogy and Use, Misuse, and Danger of Metaphor. Each of these presents some issues with using metaphors.
But let's be crystal clear here. Incremental development in the style of the bottom picture may be a preferred method, once the customer agrees. Much of the rhetoric around agile assumes the customer can behave in this way, without looking outside the anecdotal and many times narrow experiences of those making that suggestion. For agile to succeed in the enterprise and mission critical product and project domain, testing the applicability of both pictures is needed.
Ask the customer if they are willing to use the skateboard while waiting for the car. Otherwise you have a solution looking for a problem to solve.
Now to the bigger issue. In the picture above, the top series is a linear development and the bottom an iterative or incremental one, depending on where you work. Iterating on the needed capabilities to arrive at the car. Or incrementally delivering a mode of transportation.
The bottom increment shows 5 vehicles produced by the project. The core question that goes unanswered is: does the customer want a skateboard, scooter, bicycle, motorcycle, and then a car for transportation? If yes, no problem. But if the customer actually needs a car to conduct business, drive the kids to school, or arrive at the airport for a vacation trip, the intermediate vehicles don't deliver the needed capability.
The failure of this metaphor - and of most metaphors - is that they don't address the reason for writing software for money:
Provide capabilities for the business to accomplish something - Capabilities Based Planning
The customer didn't buy requirements, software, hardware, stories, features, or even the agile development process. They bought a capability to do something. Here's how to start that process.
Here's the outcome for an insurance provider network enrollment ERP system.
Producing skateboards, then scooters, then bicycles, and then finally the car isn't going to meet the needs of the customer if they want a car or a fleet of cars. In the figure above, the Minimal Viable Features aren't features, they are capabilities. For example, this statement of a minimal viable product is likely a good description of a Beta Feature. It could be connected to a business capability, but without a Capabilities Based Plan as above, we can't really tell.
So How Did We Get In This Situation?
Here's a biased opinion, informed by my several decades of experience writing code and managing others who write code: we're missing the systems engineering paradigm in commercial software development. That is, for software development of mission critical systems - and Enterprise IT is an example of mission critical systems.
Here's some posts:
The paradigm of Systems Engineering fills thousands of pages and dozens of books, but it boils down to this:
You need to know what DONE looks like in units of measure meaningful to the decision makers. Those units start with Measures of Effectiveness and Measures of Performance.
Each of those measures is probabilistic, driven by the underlying statistical processes of the system. This means you must be able to estimate not only cost and schedule, but how that cost and schedule will deliver the needed system capabilities, measured in MOEs and MOPs.
Warning this is an Opinion Piece.
In a conversation this week the quote Insanity is doing everything the same way and expecting a different outcome - or some variant of it - came up, attributed to Einstein. As if attributing it to Einstein makes it somehow more credible than attributing it to Dagwood Bumstead.
Why is this Seemingly Trivial Point Important?
We toss around platitudes, quotes, and similar phrases in the weak and useless attempt to establish the credibility of an idea by referencing some other work. Like quoting a 34 year old software report from NATO, when only mainframes and FORTRAN were used, to show the software crisis and try to convince people it's the same today. Or using un-vetted, un-reviewed charts and graphs from an opinion piece in a popular technical magazine as the basis of statistical analysis of self-selected data.
Is it world shaking news? No. Is the balance of the universe disrupted? Hardly.
But it shows a lack of mental discipline that leaks into the next level of thought process. It's always the little things that count; get those right and the big things follow. That is a quote from somewhere. But it also shows laziness of thought, use of platitudes in place of the hard work of solving nearly intractable problems, and an all around disdain for working on those hard problems. It's a sign of our modern world - look for the fun stuff, the easy stuff, and the stuff we don't really want to be held accountable for if it goes wrong.
I will use the Edwin Land quote, though, which is properly attributed to him:
Don't undertake a project unless it is manifestly important and nearly impossible.
That doesn't sound like much fun, let's work on small, simple, and easy projects and tell everyone how those successful processes we developed can be scaled to the manifestly important and nearly impossible ones.
When there are charts showing an Ideal line, or a chart of samples of past performance - say software delivered - in the absence of a baseline for what the performance of the work effort or duration should have been, was planned to be, or even could have been, this is called Open Loop control.
The issue of forecasting the Should, Will, Must cost problem has been around for a long time. This work continues in DOD, NASA, Heavy Construction, BioPharma, and other high risk, software intensive domains.
When we see graphs where the baselines to which the delays or cost overages are compared are labeled Ideal (like the chart below), it's a prime example of How to Lie With Statistics, Darrell Huff, 1954. This can be overlooked in an un-refereed opinion paper in an IEEE magazine, or a self-published presentation, but a bit of homework will reveal that charts like the one below are simply bad statistics.
This chart is now being used as the basis of several #NoEstimates presentations, which further propagates the misunderstandings of how to do statistics properly.
Todd does have other papers that are useful - Context Adaptive Agility is one example from his site. But this often used and misused chart is not an example of how to properly identify problems with estimates.
Here's some core issues:
Here's where the process goes in the ditch - literally.
We can use the ne plus ultra put-down of theoretical physicist Wolfgang Pauli: "This isn't right. It's not even wrong." As well, the projects were self-selected, and like the Standish Report, self-selected statistics can be found in the How to Lie book.
It's time to look at these sorts of conjectures in the proper light. They are Bad Statistics, and we can't draw any conclusions from the data, since the baselines to which the sampled values are compared "Aren't right. They're not even wrong." We have no way of knowing why the sampled data has a variance from the ideal - the bogus ideal.
So time to stop using these charts and start looking for the Root Causes for the estimating problem.
A colleague (a former NASA cost director) has three reasons for cost, schedule, and technical shortfalls:
Only the 2nd is a credible reason for project shortfalls in performance.
Without a credible, calibrated, statistically sound baseline, the measurements and the decisions based on those measurements are Open Loop.
You're driving your car with no feedback other than knowing you ran off the road after you ran off the road, or you arrived at your destination after you arrived at your destination.
I don't know Stephen, but his post is provocative. I'm assigned to a client outside my normal Defense Department and NASA comfort zone. The client needs a Release Management System integrated with a Change Control Board. Both are the basis of our defense and space software world. This client is trying to use agile, but has little in the way of the discipline needed to actually make it work.
The notion that software development is like play writing is a beautiful concept that can be applied to the chaos of the new client and also connected back to our process driven space and defense work - which, by the way, makes heavy use of agile, but without all the drama of the it's-all-about-me developer community.
Let's start here:
In both software and play writing, structure is almost entirely arbitrary. Because neither obey the laws of physics, the structure of software and plays comes from the act of composition. A good software engineer will know their composition from end to end. But another programmer can always come along and edit the work, inserting their own code as they see fit. It is received wisdom in programming that most bugs arise from imprudent changes made to old code.
It turns out of course neither of those statements is correct in the sense we may think. There is the act of composition, but that composition needs a framework in which to be developed. Otherwise we wouldn't know what we're watching or know what we're developing until it is over. And neither is actually how plays are written or software is written. It may be an act of composition, but it is not an act of spontaneous creation.
Let's start with play writing. It may be that the act of writing a play where the structure is entirely arbitrary is possible, but it's unlikely that would be a play you'd pay money to see. A Harold Pinter play may be unstructured, Waiting for Godot may be unstructured, but that's not really how plays are written. They follow a structured approach - there is a law of physics for play writing.
That's about the story of the play. To actually write a play, here's a well traveled path to success. These guidelines are for the outcome of the writing effort starting in the beginning.
When we talk about writing software there is a similar story line
The story line is the basis of Capabilities Based Planning. With the capabilities, the requirements can be elicited. From those requirements, decisions can be made for what order to deliver them to produce the best value for the business or the mission.
This process is about decision making. And decision making uses information about the future. This future information comes many times from estimates about the future.
Project Management is a control system, subject to the theory and practice of control systems. The Project Management Control System provides for the management of systems and processes - cost estimating, work scope structuring and authorization, scheduling, performance measurement, and reporting - for assessing the progress of spending other people's money.
The level of formality for these processes varies according to domain and context. From sticky notes on the wall for a 3 person internal warehouse locator website of a plastic shoe manufacturer, to a full DCMA ANSI-748C validated Earned Value Management System (EVMS) on a $1B software development project, and everything in between.
The key here is that if we're going to say we have a control system, it needs to be a Closed Loop control system, not an Open Loop control system. An Open Loop system is called train watching - we sit by the side of the tracks, count the trains going by, and report that number. How many trains should go by, could go by? We don't know. That's what's shown in the first picture. We sample the data, we apply that data to the process, and it generates an output. There is no corrective action; it's just a signal based on the past performance of the system. Some examples of Open Loop control implemented in the first picture:
The key attribute of Open Loop Control
The key disadvantage of open-loop systems is that they are poorly equipped to handle disturbances or changes in conditions that reduce their ability to complete the desired task.
A closed loop system behaves differently. Here are some examples of controllers used in the second picture:
The key attributes of Closed Loop Control, shown in the second picture:
Because the closed-loop system has knowledge of the output condition - in the case of projects, the desired cost, schedule, and technical performance - it is equipped to handle system disturbances or changes in conditions that may reduce its ability to complete the desired task.
When we have a target cost - defined on day one by the target budget - a planned need date, and some technical performance target, closed loop control provides the feedback needed to make decisions along the way, when the actual performance is not meeting our planned or needed performance.
In the end it comes back to the immutable principle of microeconomics. When we are spending money to produce value, we need to make decisions about which is the best path to take, which of multiple options to choose. In order to do this we need to know something about the cost, schedule, and performance forecasts for each of the choices. Then we need feedback from the actual performance to compare with our planned performance to create an error signal. With this error signal, we can then DECIDE what corrective actions to take.
Without this error signal, derived from the planned values compared with the actual values, there is no information with which to decide. Sure, we can measure what happened in the past and decide, just like we can count trains and make some decision. But that decision is not based on a planned outcome, a stated need, or an Estimated Arrival time, for example.
Without that estimated arrival time, we can't tell if the train is late or early, just that it arrived. Same with the project measurements.
Open Loop provides no feedback, so you're essentially driving in the rear view mirror, when you should be looking out the windshield deciding where to go next to escape the problem.
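The closed-loop idea can be sketched in a few lines of code: plan, measure actuals, form an error signal, apply a corrective action. Everything here is invented for illustration - the velocities, the proportional gain, and the assumed 10% productivity shortfall are not from any real project:

```python
# Toy closed-loop project control: compare plan to actuals each period,
# form an error signal, and apply a proportional corrective action.

planned_velocity = 10.0   # planned units of work per period (hypothetical)
capacity = 10.0           # current capacity, units/period (hypothetical)
GAIN = 0.5                # proportional gain: how hard we correct

completed = 0.0
plan_to_date = 0.0
for period in range(1, 9):
    plan_to_date += planned_velocity
    completed += capacity * 0.9          # actuals come in 10% under capacity
    error = plan_to_date - completed     # the feedback (error) signal
    capacity += GAIN * (error / period)  # corrective action: adjust capacity
    print(f"period {period}: error={error:+.2f}, capacity={capacity:.2f}")
```

In the open-loop version there is no `error` term and no adjustment - we just record `completed` and report it, which is exactly the train-counting described above.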
Gentlemen, we have run out of money; now we have to think - Winston Churchill
The role of estimating in project and product development is manifold...
In both cases the future cost and future monetized value are probabilistic numbers.
With both these numbers and their Probability Distribution Function, decisions can be made about options - choices that can be made to influence the probability of project or product success.
Without this information, the microeconomics of writing software for money is not possible and the foundation of business processes abandoned.
In order to make these estimates of cost, schedule, and the technical performance of the project or product, some model is needed, along with the underlying uncertainty of the elements of the model. These uncertainties come in two forms.
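The two forms - aleatory uncertainty (irreducible natural variability) and epistemic uncertainty (reducible lack of knowledge) - can both be represented in an estimating model. A hypothetical sketch, with invented numbers:

```python
import random

random.seed(3)

def task_duration():
    # Epistemic: we don't know which productivity regime applies; more
    # knowledge (e.g. a reference class) would narrow this choice.
    productivity = random.choice([0.8, 1.0, 1.3])   # hypothetical regimes
    # Aleatory: day-to-day natural variability no amount of study removes.
    noise = random.gauss(0, 0.1)
    return 10 * productivity + noise                # days, illustrative

samples = [task_duration() for _ in range(10_000)]
print(f"range: {min(samples):.1f} to {max(samples):.1f} days")
```

The modeling distinction matters for decisions: buying down epistemic uncertainty (research, prototypes, reference classes) narrows the estimate; aleatory uncertainty can only be managed with margin.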
To suggest decisions can be made without knowing this future information violates the principles of the microeconomics of business.
There is a popular notion that agile is bottoms up and traditional is top down. Neither is actually effective in delivering value to the customer based on the needed capabilities, time phased to match the business or mission need.
The traditional - read PMI, and an over generalization - project life cycle is requirements elicitation based. Go gather the requirements, arrange them in some order that makes sense, and start implementing them. The agile approach (this is another over generalization) is to let the requirements emerge, implement them in the priority the customer says - or discovers.
Both these approaches have serious problems, as evidenced by the statistics of software development.
Why is it hard to think beyond our short term vision? Rapid delivery of incremental value is common sense, no one would object to that - within the ability of the business to absorb this value of course. This is called the Business Rhythm.
But that rapid delivery of incremental value is only a means to an end. The end is a set of capabilities of the business that allows that business to accomplish its Mission. To do something as a whole with those incremental features. That is, turn the features into a capability.
Think about a voice over IP system whose feature set was incrementally delivered to 5,000 users at a nationwide firm. This week we can call people and receive calls from people, but we don't have the Hold feature yet. Are you really interested in taking that product and putting it to use?
How about an insurance enrollment system, where you can sign up, provide your financial and health background, choose between policies, but can't see which doctors in your town take the insurance, because the Provider Network piece isn't complete yet.
These are not notional examples; they're real projects I work on. For these types of projects - most projects in the enterprise IT world - an All In feature set is needed. Not the Minimum Viable Product (MVP), but the set of Required Capabilities to meet the business case goals of providing a service or product to customers. No half-baked release with missing market features.
You might say that incremental release of features could be a market strategy, but looking at actual products or integrated services, it seems there is little room for partial capabilities in anything, let alone Enterprise class products. Either the target market gets the set of needed capabilities to capture market share or provide the business service, or it doesn't and someone else does.
An internal system may have different behaviours, I can't say since I don't work in that domain. But we've heard loud and strident voices telling us deliver fast and deliver often when there is no consideration for the Business Rhythm of the market or user community for those incremental - which is a code word for partially working - capabilities.
Of course the big-bang design-code-test paradigm was nonsense to start with. That's not what I'm talking about here. I'm talking about the lack of critical assessment of what the value flow of the business is, and only then applying a specific set of processes to deliver that value. Outcome first, then method.
So Now The Hard Part
The conversation around software delivery seems to be dominated by those writing software, rather than by those paying for the software to be written. Where are the critical thinking skills to ask the hard-nosed business questions?
Questions like that have been replaced with platitudes and simple, many times simple-minded, phrases.
It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible —Aristotle (384 B.C - 322 B.C.)
When we hear someone say estimates are guesses, or when we estimate we act as if we believe the plan will not change, or similar uninformed nonsense, think of Aristotle. Such claims lack the understanding - from education, experience, and skill - that all project variables are random variables, varying both naturally and from external events.
As such, to determine the future impacts of decisions that involve cost, schedule, and performance, we need to estimate the impact of those random processes on the outcome of our decision.
This is the basis of all decision making in the presence of uncertainty. It's been claimed decisions can be made without estimates, but until someone comes up with a way to make decisions without estimating their impacts, statistical estimating is the way.
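The statistical estimating described above can be sketched with a small Monte Carlo simulation. Everything here - the task list, the three-point durations, the trial count - is a made-up illustration of the technique, not a prescription:

```python
import random

random.seed(7)

# Hypothetical example: each task's duration is a random variable described
# by a three-point estimate (min, most likely, max), modeled here with a
# triangular distribution - a common choice when only three points are known.
tasks = [
    (5, 8, 15),    # design (days): min, most likely, max
    (10, 14, 30),  # code
    (4, 6, 12),    # test
]

def simulate_duration():
    """One sample of total duration: sum one random draw per task."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

# Run many trials and read off confidence levels from the sorted outcomes.
trials = sorted(simulate_duration() for _ in range(10_000))
p50 = trials[len(trials) // 2]        # median outcome
p80 = trials[int(len(trials) * 0.80)] # 80% confidence level

print(f"50% confidence: {p50:.1f} days")
print(f"80% confidence: {p80:.1f} days")
```

The output is an estimate with a stated confidence, not a guess: "we have an 80% chance of finishing in p80 days or less," which is the form a decision maker can actually act on.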
Since all variables on all projects - cost, schedule, and delivered capabilities - are random, in the economics of projects the chance of being wrong multiplied by the cost of being wrong is the expected opportunity cost. When we write software for money, we are participating in the microeconomic process of decision making based on information about the future:
Information is needed to assess both the cost and the value in order to DECIDE what to do. The formula for the value of this information can be mathematical as well as intuitive.
We make better decisions when we can reduce uncertainty about those decisions. Knowing the value of the information used to make those decisions is part of the microeconomics of writing software for money.
If we are uncertain about a business decision, or a decision for the business based on technology, we have a chance of making the wrong decision. By wrong we mean the consequences of the alternatives could not be assessed, and a choice that might have been preferable was not chosen. The cost of being wrong is the difference between the wrong choice and the best alternative.
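The arithmetic behind the chance of being wrong and the cost of being wrong fits in a few lines. The payoffs and probability below are invented for illustration; the quantity computed - the expected opportunity loss of the best choice - is, in standard decision analysis, also the expected value of perfect information about the uncertain state:

```python
# Hypothetical two-alternative decision: build feature A or feature B.
# Payoffs depend on an uncertain market state; all numbers are assumptions.
p_strong = 0.6  # estimated probability the market turns out strong
payoff = {
    "A": {"strong": 500_000, "weak": -100_000},
    "B": {"strong": 200_000, "weak":   50_000},
}

def expected_value(choice, p):
    """Probability-weighted payoff of a choice across the two states."""
    return p * payoff[choice]["strong"] + (1 - p) * payoff[choice]["weak"]

ev = {c: expected_value(c, p_strong) for c in payoff}
best = max(ev, key=ev.get)  # choice with the highest expected value

# Expected opportunity loss (EOL): in each state, how much better the best
# alternative for THAT state would have been, weighted by the state's chance.
best_in_state = {s: max(payoff[c][s] for c in payoff) for s in ("strong", "weak")}
eol = (p_strong * (best_in_state["strong"] - payoff[best]["strong"])
       + (1 - p_strong) * (best_in_state["weak"] - payoff[best]["weak"]))

print(f"EV(A) = {ev['A']:,.0f}, EV(B) = {ev['B']:,.0f}, pick {best}")
print(f"Expected opportunity loss = {eol:,.0f}")
```

With these assumed numbers the EOL is the most the business should rationally spend gathering information before deciding - which is why information has a calculable value.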
In order to make an informed decision we need information - as mentioned above. This information itself has uncertainty, and therefore most of the time we need to estimate the actual numbers from the source of the information:
These questions and their answers are critical to the successful operation of any business, whose fundamental principle of operations is to turn expense into revenue. Since the variables involved in our projects are actually random variables, we'll need to estimate the answers, leaving the bigger question unanswered to date...
Can we make decisions without estimating the future impact of that decision on cost, schedule, and performance?
Gathering information in support of decision making is decision risk reduction. The desire to reduce risk is good business practice. The decision maker needs information about the behaviour of the random variables involved in the decision-making process. These must be estimated before the fact to make a decision about the future.
To develop the needed estimates we need a Basis of Estimate process, which means building the estimates from Reference Classes, parametric models, or similar cardinal-based processes that have been calibrated in some way. Ordinal (relative) estimates are not credible. This removes the ill-conceived notion that estimates are guesses.
† Extracted from How To Measure Anything, Douglas W. Hubbard.
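A minimal sketch of what cardinal, reference-class-based estimating looks like in practice. The actuals below are hypothetical; a real shop would first normalize them for size and complexity before treating them as a reference class:

```python
import statistics

# Hypothetical reference class: actual effort (hours) from past projects
# judged similar to the new one. These are calibrated actuals, not guesses.
reference_class = [310, 270, 450, 390, 520, 340, 610, 295, 430, 380]

ordered = sorted(reference_class)

def percentile(data, p):
    """Nearest-rank percentile of already-sorted data (0 <= p <= 1)."""
    k = max(0, min(len(data) - 1, round(p * (len(data) - 1))))
    return data[k]

# A cardinal estimate stated with confidence levels drawn from the
# empirical distribution, rather than a relative "points" comparison.
print(f"Median estimate : {statistics.median(ordered):.0f} hours")
print(f"80th percentile : {percentile(ordered, 0.80):.0f} hours")
```

The estimate is then a statement like "385 hours at 50% confidence, 450 at 80%" - numbers a decision maker can trade against value, which no ordinal scale provides.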
In the estimating discussion there is a popular notion that we can't possibly estimate something we haven't done before. So we have to explore - using the customer's money, by the way - to discover what we don't know.
So when we hear we've never done this before and estimating is a waste of time, think about the title of the post.
Everything's a Remix
Other than inventing new physics, all software development has been done in some form or another before. The only truly original thing in the universe is the Big Bang. Everything else is derived from something that came before.
Now, we may not know about this thing in the past, but that's a different story. It was done before in some form; we just didn't realize it. There are endless examples of copying ideas from the past while thinking they are innovative, new, and breakthrough. The iPad and all laptops came from Alan Kay's 1972 paper, "A Personal Computer for Children of All Ages." Even how the touch screen on the iPhone works was done before Apple announced it as the biggest breakthrough in the history of computing.
In our formal defense acquisition paradigm there are many programs that are research, and the flow looks like this. Making estimates about the effort and duration is difficult, so blocks of money are provided to find out. But these are not product production or systems development processes. The Systems Design and Development (SDD) phase is between MS-B and MS-C. We don't confuse exploring with developing. Want to explore? Work on a DARPA program. Want to develop? Work post-MS-B and know something about what came before.
The pre-Milestone A work is to identify what capabilities will be needed in the final product. The DARPA programs I work on are even further to the left of Milestone A.
On the other end of the spectrum from this formal process, a collection of sticky notes on the wall can have a similar flow of maturity. The principles are still the same.
So How To Estimate in the Presence of We've Never Done This Before
Here's a critical concept: we can't introduce anything new until we're fluent in the language of our domain, and we become fluent through emulation.† This means that to move forward we have to have done something like this in the past. If we haven't done something like this before, don't know anyone who has, and can't find an example of it being done, we will have little success being innovative. As well, we will hopelessly fail in trying to estimate the cost, schedule, and probability of delivering capabilities. In other words, we'll fail, blame it on the estimating process, and assume we'll be successful if we stop estimating.
So stop thinking we can't know what we don't know and start thinking someone has done this before; we just need to find that someone, somewhere, something. Nobody starts out being original; we need copying to get started. Once we've copied, transformation is the next step. With the copy we can estimate size and effort. We can then transform it into something better, and since we now know about the thing we copied, we have a reference class - yes, that famous Reference Class Forecasting used by all mature estimating shops. With the copy and its transformed item, we can then combine ideas into something new. The Alto from Xerox, and then the Xerox Star for executives, was the basis of the Lisa and the Mac.
You can estimate almost anything, and every software system, if you do some homework and suspend the belief that it can't be done. Why? Because it's not your money, and those providing the money have an interest in several things about it: what will it cost, when will you be done, and, using the revenue side of the balance sheet, when will they break even on the exchange of money for value? This is the principle of every for-profit business on the planet. The not-for-profits have to pay the electric bill as well, as do the non-profits. So everyone, everywhere needs to know the cost of the value they asked us to produce BEFORE we've spent all their money and run out of time to reach the target market for that pesky break-even equation.
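That break-even equation is simple arithmetic once the cost and value sides have been estimated. All figures below are assumed for illustration only:

```python
# Hypothetical break-even sketch: development cost is exchanged for a
# revenue stream; the business wants to know when it recovers the spend.
dev_cost = 750_000            # total cost to build and launch (assumed)
monthly_net_revenue = 60_000  # revenue per month after launch (assumed)
monthly_run_cost = 15_000     # hosting, support, maintenance (assumed)

# Months until cumulative net cash flow recovers the development cost.
net_per_month = monthly_net_revenue - monthly_run_cost
months_to_break_even = dev_cost / net_per_month

print(f"Break-even in {months_to_break_even:.1f} months")
```

The point is not the arithmetic but its inputs: every term on the right-hand side is an estimate of a random variable, which is why the business needs those estimates before the money is spent, not after.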
Anyone who tells you otherwise is not in the business of business, but just on the expense side, and that means not on the decision-making side either - just labor doing what they're told to do. That is a noble profession, but unlikely to influence how decisions are made.
The notion of decision rights is the basis of governance. When you hear about doing or not doing something in the absence of knowing who needs this information, ask: who needs this information, and is it your decision right to fulfill or not fulfill the request for it? As my colleague, a retired NASA Cost Director, says: follow the money; that's where you find the decider.
† Everything Is a Remix, Part 3, Kirby Ferguson.
Software development is microeconomics. Microeconomics is about making decisions - choices - based on knowing something about the cost, schedule, and technical impacts of that decision. In the microeconomics paradigm, this information is part of the normal business process.
This is why conjecture that you can make decisions in the absence of estimating their impacts ignores the principles of business. When we hear that numbers flying around an organization lay the seeds for dysfunction, we need to stop and think about how business actually works. Money is used to produce value, which is then exchanged for more money. No business will survive for long in the absence of knowing the numbers contained in the balance sheet and general ledger.
This book should be mandatory reading for anyone thinking about making improvements in what they see as dysfunctions in their work environment. No need to run off and start inventing new, untested ideas; they're right here for the using. With this knowledge comes the understanding of why estimates are needed to make decisions. In the microeconomics paradigm, making a choice is about opportunity cost: what will it cost me NOT to do something? The set of choices that can be acted on given their economic behaviour. Value produced from the invested cost. Opportunities created from the cost of development. And other trade-space discussions.
To make those decisions with any level of confidence, information is needed. This information is almost always about the future - return on investment, opportunity, risk reduction strategies. That information is almost always probabilistically driven by an underlying statistical process. This is the core motivation for learning to estimate - to make decisions about the future that are most advantageous for the invested cost.
That's the purpose of estimates, to support business decisions.
This decision-making process is the basis of Governance, which is the structure, oversight, and management process that ensures delivery of the needed benefits of IT in a controlled way to enhance the long-term sustainable success of the enterprise.
The agile notion of delivering early and delivering often is a wonderful platitude, but it ignores the underlying business rhythm for accepting software features into productive use, set by the dynamics of any business or market channel. Here are some examples of business rhythms I've worked in.
A common problem in the development of a Program Management Office is getting so caught up in putting out fires - Covey's "addiction of the urgent" - that we lose the big-picture perspective. This note is about the big-picture view of the project management process as it pertains to our collection of projects. These are very rudimentary principles, but they are important to keep in mind.
5 Basic Principles
1. Be conscious of what you're doing, don’t be an accidental manager. Learn PM theory and practice. Realize you don't often have direct control. Focus on being a professional and the PM's mantra:
"I am a project professional. I work on projects. Projects are undertakings that are goal-oriented, complex, finite, and unique. They pass through a life cycle, which begins with project selection and ends with project termination."
2. Invest in front-end work; get it right the first time. We often leap before we look, due to an over-focus on results-oriented processes and simple, many times simple-minded, platitudes about project management and the technical processes, and we ignore basic steps. Trailblazers often achieve breakthroughs, but projects need forethought. Projects are complex, and the planning, structure, and time spent with stakeholders are required for success. Doing things right takes time and effort, but this time and effort is much cheaper than rework.
3. Anticipate the problems that will inevitably arise. Most problems are predictable. Well-known examples are:
4. Go beneath surface illusions; dig deep to find the real situation. Don't accept things at face value. Don't treat the symptom; treat the root cause, and the symptoms will be corrected. Our customers usually understand their own needs, but further probing will bring out new needs. Robert Block suggests a series of steps:
5. Be as flexible as possible; don't get sucked into unnecessary rigidity and formality. Project management runs in reverse of the second law of thermodynamics: we're trying to create order out of chaos. But in this effort:
 The Politics of Projects, Robert Block, Yourdon Press, 1983.