
Welcome to Trends, the e-newsletter from Business Forecast Systems. Trends puts more than two
decades' worth of forecasting knowledge, experience and expertise at your fingertips every other month.
Watch this space for tips & techniques, information & insight, observations & opinions and more. Thanks
for reading!
Forecasting 101: How to Forecast Data Containing Outliers
An outlier is a data point that falls outside of the expected range of the data (i.e., it is an unusually large or
small data point). If you ignore outliers in your data, there is a danger that they can have a significant
adverse impact on your forecasts. This article surveys three different approaches to forecasting data
containing outliers, discusses the pros and cons of each and makes recommendations about when it is
best to use each approach.
Read more…
Forecasting 101: How to Forecast Data Containing Outliers
Data and Forecasting: Trends 30-Second Survey Results
How to Get Good Forecasts from Bad Data
Oberto Sausage Finds the Right Recipe for Forecasting
Lighter Side
Data and Forecasting: Trends 30-Second Survey Results
In the May issue of Trends we conducted the third
installment of the ongoing Trends 30-Second Survey
series with a survey on data and forecasting. In this issue
we report the results about the types of historical data
forecasters are using and on the various approaches
forecasters utilize to adjust, alter or transform their
historical data prior to analysis. Downloadable slides,
including a listing of additional resources, are also available.
Read more…
How to Get Good Forecasts from Bad Data
Calendar of Events
Forecast Pro Training
Product training workshops teach you how to use
Forecast Pro most effectively.
Post-Summit Forecast Pro Training
September 20, 2007
Boston, Massachusetts USA
Forecast Pro Unlimited Training
November 7-8, 2007
Washington, DC USA
Forecast Pro XE Training
December 10-11, 2007
Phoenix, Arizona USA
Forecast Pro Unlimited Training
December 12-13, 2007
Phoenix, Arizona USA
One of the most common forecasting challenges facing companies is that the
available data are not optimal for forecasting purposes. In this article, Ellen Bonnell,
Founder of Trend Savants, shares pragmatic advice on how you can create accurate
forecasts even when grappling with less-than-perfect data.
Read more…
Oberto Sausage Finds the Right Recipe for Forecasting
As a market leader facing constant growth and event-driven demand, Oberto Sausage Company needed
to develop a forecasting process that incorporated both statistically-based modeling approaches and
judgmental input. In this article, you will hear how the company has developed a streamlined forecasting
process which captures ongoing “base” demand as well as variation in demand associated with events
such as promotions and unusual marketplace conditions.
Post-Summit Forecast Pro Training
February 14, 2008
Orlando, Florida USA
The Forecasting Summit
Forecasting Summit offers a unique combination of
education, discussion, instruction and perspectives
on business forecasting for practitioners.
Forecasting Summit 2007
September 17-19, 2007
Boston, Massachusetts USA
Forecasting Summit 2008
February 11-13, 2008
Orlando, Florida USA
Read more…
Lighter Side
If you have to forecast, forecast often.
-Edgar R. Fiedler
Forecast Pro Appearances
Look for Forecast Pro at the following events.
APICS 2007 Conference
October 21-23, 2007
Denver, Colorado
Forecast Pro Partner Events
Look for Forecast Pro at the following events.
Hitachi East Japan Solutions, Ltd. Forecast Pro
User Conference
September 11, 2007
Tokyo, Japan
We at Trends value your feedback. Please feel
free to send us your comments, questions or
requests at [email protected]
Business Forecast Systems, Inc. (BFS) respects your privacy. We do not share e-mail addresses, phone numbers or fax numbers with third parties.
© 2007 Business Forecast Systems, Inc. All rights reserved.
I do not wish to be contacted by BFS for any future emailing.
Please update my email address.
Business Forecast Systems, Inc. | 68 Leonard St. | Belmont, MA 02478, USA
Phone: 617.484.5050 | Fax: 617.484.9219
Forecasting 101: How to Forecast Data Containing Outliers
An outlier is a data point that falls outside of the expected range of the data (i.e., it is an unusually large or small data point). If
you ignore outliers in your data, there is a danger that they can have a significant adverse impact on your forecasts. This
article surveys three different approaches to forecasting data containing outliers, discusses the pros and cons of each and
makes recommendations about when it is best to use each approach.
Option #1: Outlier Correction
A simple solution to lessen the impact of an outlier is to replace the outlier with a more typical value prior to generating the
forecasts. This process is often referred to as Outlier Correction. Many forecasting solutions, including Forecast Pro, offer
automated procedures for detecting outliers and “correcting” the history prior to forecasting.
Correcting the history for a severe outlier will often improve the forecast. However, if the outlier is not truly severe, correcting for
it may do more harm than good. When you correct an outlier, you are rewriting the history to be smoother than it actually was,
and this will change the forecasts and narrow the confidence limits. The result is poor forecasts and unrealistic confidence
limits when the correction was not necessary.
Forecast Pro Unlimited screenshot showing both an outlier report and a graph displaying a “corrected” outlier.
Recommendations:
1. If the cause of an outlier is known, alternative approaches (such as option #2 and #3 below) should be considered prior to
resorting to outlier correction.
2. Outlier correction should be performed sparingly. Using an automated detection algorithm to identify potential candidates for
correction is very useful; however, the detected outliers should ideally be individually reviewed by the forecaster to determine
whether a correction is appropriate.
3. In cases where an automated outlier detection and correction procedure must be used (for example, if the sheer number of
forecasts to be generated precludes human review), the thresholds for identifying and correcting an outlier should be set
very high. Ideally, the thresholds would be calibrated empirically by experimenting with a subset of the data.
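The correction step itself is easy to sketch. The following Python fragment is a simple illustration of the idea, not Forecast Pro's actual detection algorithm: it scores each point against the median using the median absolute deviation and replaces only points whose robust score exceeds a deliberately high threshold. The sales figures and the threshold are hypothetical.

```python
import statistics

def correct_outliers(history, threshold=3.5):
    """Replace points far from the median with the median itself.

    A simple robust z-score rule; a hypothetical stand-in for the
    automated detection built into commercial forecasting packages.
    """
    med = statistics.median(history)
    # The median absolute deviation (MAD) is less distorted by the very
    # outliers we are trying to find than the standard deviation is.
    mad = statistics.median([abs(x - med) for x in history])
    if mad == 0:
        return list(history)
    corrected = []
    for x in history:
        score = 0.6745 * (x - med) / mad  # standard MAD-based z-score
        corrected.append(med if abs(score) > threshold else x)
    return corrected

sales = [102, 98, 105, 101, 300, 99, 104]  # one severe spike
print(correct_outliers(sales))  # the 300 is replaced by the median, 102
```

Raising `threshold` makes the rule more conservative, which is exactly what recommendation #3 above calls for when no human review is possible.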
Editors’ notes: This issue of Trends also reports on the results of our 30-Second Survey about the various approaches
forecasters utilize to adjust, alter or transform their historical data prior to analysis.
Option #2: Separate the demand streams
At times, when the cause of the outlier is known, it may be useful to separate a time series into two different demand streams
and forecast them separately. Consider the following three examples.
Example A: A pharmaceutical company’s demand for a given drug consists of both prescription fills (sales) and free goods
(e.g., samples distributed free of charge to physicians). The timing of the distribution of free goods introduces outliers in the
time series representing total demand. Separating the demand streams yields an outlier-free prescription fills series and allows
different forecasting approaches to be used for each series—which is appropriate since the drivers generating the demand are
different for the two series.
Example B: A manufacturing company’s demand normally consists of orders from its distributors. In response to an unusual
event, the government places a large one-time order that introduces a significant outlier into the demand series, but does not
impact base demand from the distributors. Separating the demand streams yields an outlier-free distributor demand series and
allows the forecast for the government’s demand series to be simply set to zero.
Example C: A food and beverage company sells its products from both store shelves and promotional displays (e.g., end
caps, point-of-sale displays, etc.). It has access to the two separate demand streams. Although it is tempting to forecast these
two series separately, it may not be the best approach. Although the promotional displays will increase total demand, they will
also cannibalize base demand. In this example it may be better to forecast total demand using a forecasting method that can
accommodate the promotions (e.g., event models, regression, etc.).
Recommendations:
1. Separating the demand streams should only be considered when you understand the different sources of demand that are
introducing the outliers.
2. If the demand streams can be separated in a “surgically-clean” manner, you should consider separating the demand
streams and forecasting them separately.
3. In cases where the demand streams cannot be cleanly separated, you are often better off working with a single time series.
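For a situation like Example B, where the one-time order is known, the separation itself is simple arithmetic. A minimal sketch with made-up figures:

```python
# Hypothetical monthly demand: a known one-time government order of 500
# units arrived in month 4, on top of ordinary distributor demand.
total_demand = [120, 118, 125, 620, 122, 119]
government_orders = [0, 0, 0, 500, 0, 0]

# Subtracting the known stream yields an outlier-free distributor series
# that can be forecast with an ordinary time-series method...
distributor_demand = [t - g for t, g in zip(total_demand, government_orders)]
print(distributor_demand)  # [120, 118, 125, 120, 122, 119]

# ...while the forecast for the government stream is simply set to zero.
government_forecast = [0] * len(total_demand)
```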
Option #3: Use a forecasting method that models the outliers
Outliers can be caused by events of which you have knowledge (e.g., promotions, one-time orders, strikes, catastrophes, etc.)
or can be caused by events of which you have no knowledge (i.e., you know that the point is unusual, but you don’t know why).
If you have knowledge of the events that created the outliers, you should consider using a forecasting method that explicitly
models these events.
Event models are an extension of exponential smoothing that are particularly well suited to this task. They are easy to build
and lend themselves well to automation. Another option is dynamic regression.
Unlike time series methods, which base the forecasts solely on an item’s past history, event models and dynamic regression
are causal models, which allow you to bring in additional information such as promotional schedules, the timing of business
interruptions and (in the case of dynamic regression) explanatory variables.
By capturing the response to the events as part of the overall forecasting model, these techniques often improve the accuracy
of the forecasts as well as provide insights into the impact of the events.
In instances where the causes of the outliers are known, you should consider using a forecasting method that explicitly models
the events.
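The event-model machinery itself is built into packages such as Forecast Pro, but the underlying idea can be illustrated with a single 0/1 event indicator. In that special case a least-squares fit reduces to a comparison of means, so the sketch below (with hypothetical weekly sales) needs only a few lines: the estimated baseline is used for ordinary weeks, and baseline plus lift for promotion weeks.

```python
# Hypothetical weekly sales; weeks 4 and 8 were promotion weeks.
sales = [100, 102, 101, 180, 103, 104, 102, 175, 105, 103]
promo = [0,   0,   0,   1,   0,   0,   0,   1,   0,   0]

# With a single 0/1 indicator, least-squares regression reduces to a
# comparison of means: the intercept is the mean of ordinary weeks and
# the event coefficient is the extra lift seen in promotion weeks.
base = sum(s for s, p in zip(sales, promo) if p == 0) / promo.count(0)
lift = sum(s for s, p in zip(sales, promo) if p == 1) / promo.count(1) - base
print(base, lift)  # 102.5 75.0

# A future week is forecast at the baseline, plus the lift if a
# promotion is planned, so the events no longer distort the baseline.
def forecast(promo_planned):
    return base + lift * promo_planned

print(forecast(0), forecast(1))  # 102.5 177.5
```

Dynamic regression generalizes this by adding trend, lagged terms and multiple explanatory variables, but the principle of estimating the event's impact rather than smoothing it away is the same.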
Ignoring large outliers in your data often leads to poor forecasts. The best approach to forecasting data containing outliers
depends on the nature of the outliers and the resources of the forecaster. In this article, we have discussed three approaches—
outlier correction, separating the demand streams and modeling the outliers—which can be used when creating forecasts
based on data containing outliers.
About the author:
Eric Stellwagen is Vice President and co-founder of Business Forecast Systems, Inc. (BFS) and co-author of the Forecast Pro
software product line. He consults widely in the area of practical business forecasting—spending 20-30 days a year presenting
workshops on the subject—and frequently addresses professional groups such as the University of Tennessee’s Sales
Forecasting Management Forum, APICS and the Institute for Business Forecasting. Recognized as a leading expert in the
field, he has worked with numerous firms including Coca-Cola, Procter & Gamble, Merck, Blue Cross Blue Shield, Nabisco,
Owens-Corning and Verizon, and is currently serving on the board of directors of the International Institute of Forecasters (IIF).
Forecast Pro is a registered trademark of Business Forecast Systems, Inc.
© Copyright 2007, Business Forecast Systems, Inc.
Data and Forecasting: Trends 30-Second Survey Results
Many thanks to those of you who took the time to fill out the survey. For those who did not fill out the survey but find value in
the results, please consider adding your input to future surveys.
The Trends Data and Forecasting survey asked four basic questions (multiple responses allowed on questions 1-3). In
addition, open comments were solicited. Forty-four surveys were completed.
The questions were:
When you use “historical data” for forecasting, what specific kinds of data do you use?
Do you alter the historical data prior to analyzing it?
Do you forecast in units or currency or both?
What part of the organization are you in?
Click here to view the original survey.
When asked to specify which kinds of historical data they use, almost all of the survey participants chose only one selection
even though multiple selections were allowed. Only five respondents made more than one selection. Shipments and Orders/
Bookings were the top two responses, with 41% saying they use Shipments and 32% saying they use Orders/Bookings. Only
two respondents chose both Shipments and Orders. One of the respondents who indicated “Other” specified that they use
“Requested Ship Date” of orders.
This comment shows an understanding of the meaning of “historic demand” as it pertains to forecasting. When trying to select
the data best suited for demand forecasting, consider the question, “What did the customer want and when did they
want it?” Then, use the available data that comes closest to answering this question.
Two of the respondents indicated they look at both Point-of-Sale data and Shipment data. While POS reveals true demand at
the consumer level, suppliers sell directly to the retailer (not to the consumer) and therefore need to predict when the demand
from the retailer will occur. Developing an understanding of the connection between consumer demand (POS) and retailer
replenishment patterns (Shipments or Orders) is extremely important and can yield significant benefits for companies who
have access to POS data.
Perhaps the most significant finding is that 75% of the respondents reported that they alter their historical data
in some way before analyzing it. Of those who did alter their historical data prior to forecasting, almost two-thirds
indicated that they remove outliers (points which fall outside of the expected range of the data—i.e., points which are unusually
high or low). Two of the respondents who chose “Other” were included in this group based on their descriptive comments. One
noted that they, “Remove known abnormal demand spikes” while the other indicated they, “Remove days where system issues
resulted in abnormal volumes/call patterns.”
Twenty-four percent deseasonalize their data prior to forecasting. Another twenty-four percent make adjustments for variation
in the length of periods (either 4-4-5 adjustments or adjustments for the number of business days in the period). Eighteen percent
make adjustments for lost orders. This type of adjustment is typically performed when using shipment data. The inability to
meet customer demand in the past shouldn’t necessarily be part of expectations for the future. Again, this underscores the
idea that “historical demand” is the answer to the question, “What did the customer want and when did they want it?”
A majority of the respondents indicated that they forecast in units (62%). Thirty-two percent forecast in both Units and
Currency while only 7% indicated they forecast in Dollars/currency. It is interesting to note that one respondent who chose
“Other” specified that they forecast margin and that the demand planning department’s process integrated input from both the
sales and marketing department.
Click here to download the “Trends 30-Second Survey: Data and Forecasting” slides in PPT format.
Click here to download the “Trends 30-Second Survey: Data and Forecasting” slides in PDF format.
The following is an excerpt from an article appearing in the Summer 2007 issue of Foresight: The International
Journal of Applied Forecasting.
I forecast for a living. As a consultant, I recommend changes to forecasting methodologies and changes to
the forecast process. I also create the forecasts that clients will use as inputs to their plans for
manufacturing, distribution, inventory control, materials management, labor optimization and revenue.
I think that makes me a forecasting expert. I love being an expert and enjoy all the benefits that come
with the territory, but there are also challenges that go along with being an expert. For example, I actually have to be an
expert. I must observe situations, determine better courses of action, test my recommendations and get results. Not just
any results—better results.
But I have to use the same inputs. Simply put, I have to take the same inputs to a forecast and get better results, grappling
with the same less-than-perfect data that is prevalent in most companies. Even though there is a long list of valid causes
for bad data, it’s often all that I’ve got to work with.
Guiding Principles
My experience has taught me that there are fundamental principles or guidelines toward better forecasts. Whenever I
make a decision concerning less-than-perfect data, I try to be in alignment with these principles:
The Seven Guiding Principles For Better Forecasts
1. Accept that data do not have to be perfect. The data only need to predict the future.
2. Redefine the problem so it can be solved with the data you have, not just the data you would like to have.
3. Recognize that definitions used for corporate reporting may not be suitable for forecasting.
4. Separate data that are useful from data that are not. Value empirical evidence over tribe knowledge.
5. Use the right amount of data.
6. Quantify the impact when making operational or marketing plans.
7. Never run a forecast without re-testing out-of-tolerance for the entire data set. You will always learn something.
Let’s take a closer look at three of these Guiding Principles.
Guiding Principle #1: Accept that data do not have to be perfect. The data only
need to predict the future.
This principle is the most challenging for the data manager—if the data are not what you need, modify them. Imagine that!
Instead of changing the data used for reporting, we create a second set of data to be used for forecasting. This second set
is based on actual activity, but we adjust some of the data points in an effort to obtain better forecasts.
This second set of data gives us the freedom to handle tough situations like missing data. The shorter the collection time
interval, the more likely there will be gaps in the data which inhibit statistical forecasting.
I try to fill in the gaps in reasonable ways. One technique is interpolation. Interpolation can be as simple as averaging
the period before and the period after or as complex as borrowing seasonality from a like location or trend from a like
product. Avoid replacing missing data with zeros, unless you have an intermittent demand series. Zeros mean “zero
activity” rather than missing data.
Sometimes I remove data from the series. I remove data that reflect occurrences unlikely to be repeated, such as fires,
labor strikes, severe weather, shut downs, acts of nature and just plain wrong data. Once removed, consider the data
points missing and interpolate to fill in the gaps.
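The simple neighbor-averaging interpolation described above can be sketched as follows. `None` stands for a missing (or removed) data point, and the function is a minimal illustration rather than production code:

```python
def fill_gaps(series):
    """Fill interior missing points (None) by averaging the nearest
    known neighbor on each side. Leading and trailing gaps, which have
    no neighbor on one side, are left untouched."""
    filled = list(series)
    for i, x in enumerate(filled):
        if x is None:
            # Nearest known value looking backwards (may itself be an
            # interpolated value if gaps are consecutive)...
            before = next((filled[j] for j in range(i - 1, -1, -1)
                           if filled[j] is not None), None)
            # ...and the nearest original value looking forwards.
            after = next((series[j] for j in range(i + 1, len(series))
                          if series[j] is not None), None)
            if before is not None and after is not None:
                filled[i] = (before + after) / 2
    return filled

print(fill_gaps([10, 12, None, 16, 15]))  # [10, 12, 14.0, 16, 15]
```

The more elaborate techniques the article mentions, such as borrowing seasonality from a like location, would replace the simple average with a pattern taken from the donor series.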
There are times when the data need to be systematically transformed to avoid unreasonable or even nonsensical
forecasts. As an example, values very close to zero can be the cause of negative forecasts.
The key to putting Guiding Principle #1 into practice is to store both (1) the actual data and (2) the data used to
forecast, and also to include a reason code for the modification. By doing so, a forecaster can decide whether the
calculation of forecast accuracy should include or exclude the data point(s) that have been modified.
Guiding Principle #3: Recognize that the definitions used for corporate reporting
may not be suitable for forecasting.
A company’s fiscal calendar, product groupings, and location hierarchies may not be a sound basis for forecasting, so I
establish my own with the forecasting task in mind.
The objective of a forecasting calendar is to do away with complexities such as adjusting for trading days or creating report
footnotes to indicate that “November had an extra week” or “Christmas was on a weekend.” Fiscal periods are somewhat
artificial because they are constructed for financial reporting and most likely do not reflect the real nature of the data.
Calendar months do not provide the same forecast accuracy as 28 days of data that are perfectly aligned to the same 28
days last year and the year before.
I start my calendar work with research to determine the true periodicity of the data. Distinct from seasonality, periodicity
is the natural repetitive cycle of a variable. For example, automobile traffic data has a work-week periodicity that is heavier
on Monday, lighter on Friday. The Monday-Friday cycle is repeated week after week regardless of the season. After
generating the forecasts, I reorganize back to the fiscal calendar as needed.
Product groupings are my next makeover. I use the company’s current product groupings for manufacturing or marketing
reports as a starting point, but I find that better forecasts come from unique forecasting groups. The goal is to combine
products that don’t forecast well individually, but forecast well as a group.
It sometimes requires trial and error to find the best groups. Try products that are promoted together, substitutes for each
other, have the same base metal or are purchased by the same customers. I forecast the group as a whole and each
product separately, and then allocate the group total to the individual products proportionally. Use the same product-grouping
techniques to forecast by location, creating groups that—as a whole—result in better forecasts than each location forecast individually.
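The allocation step can be sketched in a few lines. The product names, group forecast and shares below are hypothetical; in practice the proportions might come from the individual product forecasts rather than last period's demand:

```python
# Hypothetical: three products that forecast poorly on their own but
# well as a group. The group total is forecast as a whole, then
# allocated back to the products in proportion to their share of
# recent group demand.
recent_demand = {"A": 40, "B": 35, "C": 25}  # last period's units
group_forecast = 110                         # from the group-level model

total = sum(recent_demand.values())
allocated = {sku: group_forecast * units / total
             for sku, units in recent_demand.items()}
print(allocated)  # {'A': 44.0, 'B': 38.5, 'C': 27.5}
```

The allocated figures sum back to the group forecast, so the group-level accuracy is preserved while each product still receives its own number.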
Guiding Principle #4: Separate data that are relevant from data that are not.
This principle requires perseverance and more than a little patience. The goal is to separate variables that make a
difference from variables that add very little to forecast accuracy. Creating the list of possible variables is not the hard part
since most companies adhere to theories that have become conventional wisdom and are assumed to be the truth.
I make it a policy to analyze the relevance of every piece of conventional wisdom to the forecast. Before I begin the
analysis, I request concurrence from my client that empirical evidence will always be accepted over conventional wisdom
and tribe knowledge. I keep my analyses simple and use easy-to-understand statistics, such as correlation, to present the results.
Sometimes just the process of collecting the data for the relevance analysis provides insights. One of my clients believed
that advertising penetration was critical to the forecast. The penetration was never fully analyzed because the data were
kept by more than one advertising agency, and the data formats among the agencies were not compatible. When I
collected the data I found that my client already had 100% penetration in all markets and the variable therefore was
irrelevant to the forecast!
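A relevance check of this kind can be as simple as a Pearson correlation between the candidate variable and the series being forecast. The figures below are hypothetical; note that a variable that never varies, like the 100%-penetration example, has zero variance, so it cannot correlate with anything and drops out immediately:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical figures: quarterly sales against a candidate driver.
sales = [100, 110, 120, 130]
driver = [10, 12, 13, 15]
print(round(correlation(sales, driver), 2))  # 0.99: worth keeping
```

Correlation is not proof of a causal relationship, but as a first screen it separates variables worth modeling from conventional wisdom that the data do not support.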
There will always be less-than-perfect data. Establish a forecast process that includes identification and management of
the less-than-perfect data, using guiding principles to make data decisions and to create data-action plans. Implementation
of these plans may well require participation and acceptance from more than one department within the organization, so
build education and enlightenment into your forecast process.
To view the table of contents for past issues of Foresight, to request a free copy or to purchase an article
or subscription, click here.
You can also hear Ms. Bonnell speak on this topic at the upcoming Forecasting Summit conference to be
held on September 17-19, 2007 in Boston, MA.
About the Author
Ellen Bonnell is the founder of Trend Savants, formerly Application Builders, a consulting firm specializing in forecasting,
econometrics and planning. Clients include Toyota, Dunkin’ Donuts, Baskin-Robbins, Taco Bell, Mazda, Nissan, American
Hospital Supply, Allstate Insurance, MCI Worldcom and Coca-Cola Bottling Companies. Ellen teaches Economics in the
San Diego College System, is on the faculty of the National Institute for Management Research, and serves as a member
of the Practitioner Advisory Board for Foresight: The International Journal of Applied Forecasting.
Ms. Bonnell completed her Bachelor of Science degree in Business Economics from Indiana University and Masters of
Business Administration coursework from Northwestern University.
Business Forecast Systems in cooperation with the International Institute of Forecasters
Case Study
Oberto Sausage Finds the Right Recipe for Forecasting
Oberto Sausage Company is a leading manufacturer of meat snacks and sausage
products. Based in Kent, Washington, the family-owned company has been in
business for more than 85 years. Its brands include Oh Boy! Oberto, Lowrey’s Meat
Snacks, Pacific Gold Meat Snacks and Smokecraft Real Smokehouse Snacks. Oberto
sells its products directly to mass merchandisers and major supermarket chains in
the US, and is distributed globally by Frito-Lay.
As a leader in a category which has experienced consistent double-digit growth and
where event-driven demand is always present, Oberto needed a forecasting system
which would allow them to:
♦ routinely model and forecast ongoing “base” demand;
♦ easily incorporate field-based knowledge of expected deviation from base demand;
♦ keep track of changes to the forecast along with the assumptions underlying those changes;
♦ develop consumption-based forecasts for key business segments and integrate those forecasts into the overall forecast; and
♦ maintain an optimal balance between forecast quality and the complexity, cost and resources devoted to generating the forecast.
Eric Kapinos, Director of Forecasting and Planning at Oberto, is a veteran forecaster who began his career in the early ’90s as a Forecast Analyst for Starbucks Coffee. Over the years, he has served on several forecasting teams, developing forecasting processes and selecting/implementing solutions to support those processes. The solutions have ranged from home-grown, Excel-based systems to commercially developed, large-scale planning systems.
Kapinos notes, “I started out being a big believer in consensus forecasting and
collaboration. But I came to realize you never have absolute consensus, and collaboration should only be used where it’s adding value.” At Oberto, Mr. Kapinos has
structured a successful forecast process that combines statistically-based modeling
approaches with judgmental input. The process is executed by a focused team of
individuals, each of whom adds unique value to the forecast. The tools used to
support the process are Forecast Pro Unlimited, Forecast Pro Unlimited Collaborator and Forecast Pro XE.
“We have a full complement of Forecast Pro products at Oberto,” explains Kapinos. “We use Forecast Pro
Unlimited as the main foundation for our demand forecasting process—it’s where the forecast is generated and
maintained. After we establish the forecast, it is fed into our ERP system where it drives procurement, planning/scheduling and plant execution.”
“One of our biggest forecasting challenges is really understanding what our true baseline demand is. We start
by creating a relatively conservative base forecast that’s statistically-driven off history. Most of the time, we
utilize the Expert Selection in Forecast Pro Unlimited. To accommodate abnormal conditions in the history,
things that happened that aren’t expected to happen again in the future—promotions, weather, outliers—we
use event models. That’s our starting point.”
Next, this base forecast (or, as Kapinos terms it, the “Business as Usual” forecast) is passed off to the Demand
Manager and Customer Service Representatives (CSRs) who review it in Forecast Pro Unlimited Collaborator.
The CSRs are responsible for making changes to the base forecast, incorporating field-based knowledge of
expected deviation from base demand. Kapinos notes, “The forecast team interfaces directly with the sales team
and often has knowledge of unusual conditions, things that wouldn’t be reflected in the history. Their job is to
make sure the forecasting process captures this ‘Business as Unusual’. When appropriate, they use Forecast Pro
Unlimited Collaborator to enter forecast overrides—replacing the statistical baseline forecast.”
The other integral players on the Oberto team are the Forecast Analysts. The Forecast Analysts focus on important customers for which consumption data is available. They use the dynamic regression (causal) modeling
capabilities in Forecast Pro XE. Kapinos explains, “For higher value Business Units the Forecast Analysts build
causal models to capture promotional events or any other unusual conditions in the marketplace, and the
results are then moved into Forecast Pro Unlimited as overrides. The Forecast Analysts concentrate on the 20%
of our customers and events that drive 80% of our volume.”
After initially employing a more complex and widely-deployed forecasting process and system, Oberto has
opted for a more focused and streamlined approach using Forecast Pro as its backbone. Notes Kapinos,
“There’s a sweet spot really. What you don’t want to do is have a sales team rolling up an SKU-level forecast for
every SKU-by-week. We use our sales team to provide intelligence only when history doesn’t tell us what’s
happening—and this approach has worked well.”
“Having a place to retain overrides and maintain notes systematically—just good administrative practices with
forecasting—is very, very useful. Even as simple as it is in Forecast Pro, it is powerful functionality. We’re able
to quickly review our forecast overrides and understand why they were made.”
Kapinos points out key benefits of the forecast process at Oberto: “We’ve been able to sustain years of double-digit growth while inventory value has remained constant. We’ve also been able to strategically identify potential gaps in our plans where we may have shortfalls with important customers and move proactively to fill those gaps.”
“Our forecasts are used for everything from planning/scheduling all the way up to revenue projection by the
executive team.”
Business Forecast Systems, Inc. ◆ 68 Leonard Street, Belmont, MA 02478 USA ◆ Phone: 617-484-5050 ◆ E-mail: [email protected]