Managing Uncertainty: the difference between Investing & Gambling

November 6, 2017

The last week of September had me doing three industry presentations that all shared a common theme, “uncertainty”. Today’s blog will focus on the presentation Uncertainty Considerations for Development Planning Type Curves.

Uncertainty extends through the reservoir, drilling, completions & operations and is compounded by commodity prices. What is certain is that shareholders have little tolerance for production shortfalls. The following image shows the reduction in stock price of eight companies in 2017 following the announcement of production shortfalls.

Figure 1: Stock price reactions to production shortfalls

I could easily have shown you several companies that hit their production targets and maintained or increased their stock price. While it’s not important to know who these companies are, it is important to know that there are best practices that can help protect you from targeting statistically unachievable results and falling short of your production promises.

Wait a minute, isn’t that called “sandbagging”?

“Sandbag is a tactic used to hide or limit expectations… in order to produce greater than anticipated results,” according to Investopedia. A disciplined approach to characterizing, understanding and managing uncertainty is a strategy to mitigate your downside and is consistent with the practices of “investing”. “Gambling”, by contrast, typically lacks risk mitigation strategies; it inherently acknowledges and embraces the associated risk. By using a disciplined approach to managing uncertainty you can place confidence intervals on your production promises. When you can say, “We have a 90% confidence of achieving or exceeding this production number,” that’s not sandbagging, that’s smart business.

Why Use Aggregation Curves?

Aggregation Curves (shown in Figure 4) are a tool that you can use to inform uncertainty-management activities from more than one perspective:

  • Given the sample size of an analogue data set, how much uncertainty is there around the Mean of that limited sample set (i.e., what is my confidence that the analogue sample set adequately defines the population)?
  • Given a known reliable analogue Mean (based on a statistically significant sample set), what is the uncertainty of the Program Arithmetic Mean for the number of wells I plan to drill?

Let’s take the second perspective and walk through an example and apply the Aggregation Curve principles.

Characterizing Uncertainty

I have a reliable analogue dataset with a Mean Estimated Ultimate Recovery (EUR). For this dataset the range of uncertainty can be expressed as a P10:P90 ratio of 5, where the Mean corresponds with the P38 value (as illustrated in the following Probit plot of a Cumulative Probability Distribution).

Figure 2: Probit plot showing a reliable analogue Mean based on a statistically significant sample set

If I translate the Cumulative Probability Distribution into a Histogram using the same values, I might more readily see that each well I drill has a 62% chance of coming in below the Mean. If I am only drilling 15 wells, how confident am I that their Arithmetic Mean will approach the reliable analogue Mean?

Figure 3: Histogram showing reliable analogue Mean based on a statistically significant sample set 
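A quick way to sanity-check the 62% / P38 relationship is to assume the EUR distribution is lognormal (my assumption; the post does not name the distribution family). Using only the Python standard library:

```python
import math
from statistics import NormalDist

# Assume a lognormal EUR distribution (hypothetical; the post does not
# state the distribution family). In petroleum convention, P10 is the
# high case (10% chance of exceeding) and P90 the low case.
z = NormalDist().inv_cdf(0.90)     # ~1.2816
sigma = math.log(5) / (2 * z)      # a P10:P90 ratio of 5 fixes the log-sd

# For a lognormal, P(X < mean) = Phi(sigma / 2)
p_below_mean = NormalDist().cdf(sigma / 2)
print(f"Chance a single well comes in below the Mean: {p_below_mean:.0%}")
# prints: Chance a single well comes in below the Mean: 62%
# The exceedance percentile of the Mean is then 1 - 62% = 38%, i.e. P38.
```

Under this lognormal assumption, the Mean of a P10:P90 = 5 distribution sits at P38, exactly as the plots above show.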


How to use Aggregation Curves

Aggregation Curves, sometimes called Trumpet Curves, communicate how the P10 and P90 values (the 80% confidence range) of the Mean converge as the sample size increases. Expressed another way: as my sample set grows, I can be increasingly confident that its arithmetic Mean represents the Mean of the population. The colours in Figure 4 represent the range of uncertainty for distributions of varying P10:P90 ratios.
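The convergence behaviour behind these curves can be sketched with a small Monte Carlo simulation. To be clear about my assumptions: I model well EURs as lognormal (the post does not state the distribution family), normalize the population Mean to 1.0, and estimate the P90 and P10 of the program mean by brute force. This is an illustration of the idea, not the construction behind Figure 4.

```python
import math
import random
from statistics import NormalDist, fmean

def trumpet_points(p10_p90_ratio, well_counts, trials=10_000, seed=1):
    """Monte Carlo sketch of an aggregation ("trumpet") curve.

    Assumes lognormal well EURs (my assumption) with the population Mean
    normalized to 1.0, so the returned (P90, P10) pairs are fractions of
    the reliable analogue Mean.
    """
    z = NormalDist().inv_cdf(0.90)
    sigma = math.log(p10_p90_ratio) / (2 * z)
    mu = -sigma ** 2 / 2           # makes the population mean exactly 1.0
    rng = random.Random(seed)
    out = {}
    for n in well_counts:
        means = sorted(
            fmean(rng.lognormvariate(mu, sigma) for _ in range(n))
            for _ in range(trials)
        )
        # P90 = value exceeded 90% of the time (low side); P10 = high side
        out[n] = (means[int(0.10 * trials)], means[int(0.90 * trials)])
    return out

for n, (p90, p10) in trumpet_points(5, [5, 15, 50]).items():
    print(f"{n:>3} wells: P90 ~ {p90:.0%}, P10 ~ {p10:.0%} of the Mean")
```

The printed P90:P10 band narrows around 100% as the well count grows, which is exactly the trumpet shape the curves communicate.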

To demonstrate how you would use these curves in a real world example I have outlined the following steps:

  • I have a known reliable analogue Mean based on a statistically significant sample set (see Figures 2 and 3).
  • I would like to determine the Downside Mean, which is the Mean value that I have a 90% chance of achieving or exceeding given the number of wells I am drilling.
  • Locate the sample size (15 wells) on the X-axis for the P10:P90 ratio of your sample set (my example has a P10:P90 = 5).
  • Multiply the reliable analogue Mean by the corresponding percentage on the Y-axis to derive a “Downside Mean”.

Figure 4: Aggregation Curves

In this example the results would be:

  • Mean = 4,979 mmcf
  • P10:P90 = 5
  • Downside Mean Factor = 78.7%
  • Downside Mean = 3,918 mmcf
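The worked example above can be approximated in closed form. The sketch below is my own: it assumes lognormal well EURs and uses a moment-matched lognormal (Fenton–Wilkinson) approximation for the 15-well program mean, which lands close to, but not exactly on, the published 78.7% factor.

```python
import math
from statistics import NormalDist

def downside_mean_factor(p10_p90_ratio, n_wells, confidence=0.90):
    """Approximate the Downside Mean factor (P90:Mean) for n wells.

    A sketch under my own assumptions (lognormal well EURs, lognormal
    moment-matched program mean), not the exact construction behind the
    published aggregation curves.
    """
    z = NormalDist().inv_cdf(0.90)
    sigma = math.log(p10_p90_ratio) / (2 * z)
    # coefficient of variation of the arithmetic mean of n_wells samples
    cv = math.sqrt(math.exp(sigma ** 2) - 1) / math.sqrt(n_wells)
    sig_n = math.sqrt(math.log(1 + cv ** 2))
    zc = NormalDist().inv_cdf(confidence)
    return math.exp(-sig_n ** 2 / 2 - zc * sig_n)

factor = downside_mean_factor(5, 15)
print(f"Downside Mean factor: {factor:.1%}")         # close to the 78.7% above
print(f"Downside Mean: {factor * 4_979:,.0f} mmcf")  # close to the 3,918 above
```

That a back-of-the-envelope lognormal approximation reproduces the published factor to within a percentage point suggests the numbers above are internally consistent.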

Completion Design Example

I can apply these same principles to assessing completion design changes when limited datasets are available. In the example below I have two distributions: my Base Case, where I am using a Proppant Intensity between 50 and 100 tonnes per 100 m of completed length (green dots and line), and a case where I increase the Proppant Intensity by 50% (red dots and line). I want to assess the economics of the higher intensity, but I have a limited sample size, so I apply the Aggregation Curve principles to derive a Downside Mean (dashed black line).

Without applying the Aggregation Curve principles I may have concluded that 50% more proppant made economic sense. However, taking into account the uncertainty of the Mean due to the sample size, I may come to a very different conclusion.

Figure 5: Aggregation Curve principles applied to completion design analysis
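As a purely hypothetical illustration of that point (the EUR numbers below are invented, not read from Figure 5): suppose the higher-intensity wells show a roughly 14% apparent uplift in mean EUR, but only 8 of them have been drilled. Discounting that small-sample mean with the same lognormal assumptions used earlier can erase the uplift:

```python
import math
from statistics import NormalDist

# Hypothetical numbers for illustration only -- not taken from Figure 5.
base_mean = 4_979        # mmcf, reliable analogue Mean (large sample)
uplift_mean = 5_700      # mmcf, apparent mean of the higher-proppant wells
n_uplift_wells = 8       # small sample behind the apparent uplift

# Same lognormal / moment-matching assumptions as earlier: discount the
# small-sample mean to the value we are 90% confident of achieving.
z = NormalDist().inv_cdf(0.90)
sigma = math.log(5) / (2 * z)        # P10:P90 = 5 assumed for the new design
cv = math.sqrt(math.exp(sigma ** 2) - 1) / math.sqrt(n_uplift_wells)
sig_n = math.sqrt(math.log(1 + cv ** 2))
downside = uplift_mean * math.exp(-sig_n ** 2 / 2 - z * sig_n)

print(f"Apparent uplift: {uplift_mean / base_mean - 1:+.0%}")
print(f"Downside Mean of the new design: {downside:,.0f} mmcf")
print("Uplift survives the downside check:", downside > base_mean)  # False here
```

With these invented inputs, the Downside Mean of the new design falls below the Base Case Mean, so the apparent uplift does not survive the uncertainty check; that is the shape of the conclusion reversal described above.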

You may also want to check out the other two presentations I did not feature in this blog.

How Many Months of Production Do I Need for a Reliable Forecast? should make you wonder about how much uncertainty you have in your conclusions based on how many months of production history you’re including in your analysis.

Effectiveness of Nitrogen Energized Completions in the Alberta Montney should give you pause about drawing early conclusions, especially when you have a small sample set.


Thanks for reading. We welcome your questions and suggestions for future blogs.