Forecasting product sales with a stochastic Bass model

Abstract

With the Bass model and data on previous sales, a point estimate of future sales can be made for the purpose of stock management. To obtain information about the accuracy of that estimate, a confidence interval can be of use. In this study such an interval is constructed from a Bass model extended with a noise term. The size of the noise is assumed to be proportional to the yearly sales. It is also assumed that the deviation from the deterministic solution is sufficiently small to justify a small-noise approximation. This perturbation takes the form of a time-dependent Ornstein–Uhlenbeck process. An exact expression can be given for the variance of the perturbation, which is needed to obtain confidence intervals.

Introduction

The initial purchase of a new product is often explained in terms of innovators, who buy the product on their own initiative without being influenced by other buyers. Imitators buy the product because others have done so. Together they form the total number of buyers, the saturation level of a market. The Bass diffusion model [1] yields an estimate of these metrics on the basis of sales data. The model consists of a differential equation that relates current sales growth to past accumulated sales levels. This modelling approach has become extremely popular, within and beyond the context of new-product diffusion patterns [2,3,4,5,6,7].

Product-introducing firms with a key interest in reliable market forecasts may face the following fundamental challenges when using the Bass diffusion modelling approach. First, to obtain reliable forecasts, one needs accurate measurements of the innovation and imitation effects and of the saturation level. It has been recognized, however, that the information content of the market data, from which one has to derive this accuracy, may not only change over time, but may also differ among the metrics of interest [8]. Because the distribution of the information content is reflected in the sensitivity of the market metrics with respect to the available data, its derivation is an empirical issue, for which a proper statistical approach is needed. Furthermore, the solution of the differential equation of the Bass diffusion model yields point estimates of future sales. From a point estimate alone, however, one cannot draw conclusions about its accuracy. A stochastic approach that facilitates the construction of confidence intervals for the estimated future sales is therefore warranted.

In this study, the following steps can be discerned. We first consider the case that sales data of all years are known and fit the Bass diffusion model to these data. Next, we investigate the over-time information content of the metrics of interest. Based on these results, we identify the minimal sample size that is needed to obtain reliable forecasts. For that purpose the sample may also be compared with samples of similar product introductions in the past [9]; in our approach, however, we only use information from sales of the product under consideration. Nor do we expand the deterministic model with additional parameters, although that may indeed give rise to interesting results [3, 4, 10, 11].

We introduce a stochastic extension of the Bass model. Assuming that stochastic perturbations, in the form of white noise, are small, we consider solutions of this extended model that stay in the neighbourhood of the solution of the deterministic Bass diffusion model. Next, we compute the variance of the stochastic component of the solution, which in turn is used to construct the upper and lower boundaries of the confidence interval for the yearly sales. Finally, we apply a forecasting approach based upon the previous methodological steps, from which confidence intervals for the point forecasts are derived. For ease of exposition, we make use of a dataset [12] in each of these consecutive steps. Our approach is related to that of Boswijk and Franses [8], who apply a similar approach in their analysis of the validity of the parameter estimation procedure.

Methods

The Bass model, formulated as a differential equation for the product sales \(y(t)\) cumulated over the years, takes the form

$$ y'(t) = f\bigl(y(t)\bigr)\quad \text{with } f(y) = (m - y) \bigl\{ p + (q/m) y\bigr\} \quad \text{and}\quad y(0) = 0. $$
(1)

The term with parameter p, called the coefficient of innovation, represents consumers who buy the product on their own initiative without being influenced by other buyers. The term with q, the imitation coefficient, represents consumers who buy the product because others have done so. Finally, the parameter m denotes the total number of buyers over the years. To illustrate our proposed methodological steps, we consider a dataset that can be found in Lilien et al. [12]. It deals with steam iron sales and gives cumulated yearly sales \((t _{i}, y _{i})\), \(i= 0, 1,\dots , n\) with \(t _{i} = i \) and \(n= 31\).
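
As a minimal illustration (not the authors' code), the initial value problem (1) can be integrated numerically with SciPy; the sketch below uses the parameter values that are reported in (2):

```python
# Minimal sketch: numerical integration of the Bass initial value
# problem (1), with the fitted parameter values (2).
import numpy as np
from scipy.integrate import solve_ivp

m, p, q = 86.35, 0.00204, 0.2735  # parameter values, see (2)

def f(t, y):
    # Right-hand side of (1): f(y) = (m - y){p + (q/m) y}
    return [(m - y[0]) * (p + (q / m) * y[0])]

sol = solve_ivp(f, (0.0, 60.0), [0.0], rtol=1e-10, atol=1e-12)
y_end = sol.y[0, -1]
print(y_end)  # cumulated sales approach the saturation level m
```

On a long horizon the cumulated sales flatten out at the saturation level m, in line with the S-shaped solution shown in Fig. 1(a).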

Estimating the parameters of the model

The model (1) is fitted to these data using the Matlab-based software package Grind (http://www.sparcs-center.org/grind), which yields the parameter values:

$$ m = 86.35, \qquad p = 0.00204\quad \text{and}\quad q = 0.2735. $$
(2)

In Fig. 1(a) the numerical solution of (1) with the parameter values (2) and the data points \((t _{i}, y _{i})\) are given. If one decreases n, then at a certain value the parameter estimation procedure fails. This is because the information matrix then becomes ill-conditioned, so that no dependable parameter estimate can be made; see also Massiani and Gohs [5]. In the latter study the parameters are estimated with linear regression, which for small data sets results in inaccurate estimates, especially for m, see also Fig. 3.
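
Since Grind is Matlab-based and the steam-iron series itself is not reproduced here, the fitting step can be sketched with SciPy instead: the closed-form solution (5) is fitted by least squares to synthetic data generated with the values (2), so that recovering the known parameters serves only as a consistency check, not as a reproduction of the original fit.

```python
# Hedged sketch: least-squares estimation of (m, p, q) by fitting the
# closed-form Bass solution (5) to synthetic data generated from (2).
# The actual steam-iron series is not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    # Exact solution (5) of the Bass equation (1)
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

t_data = np.arange(0, 32, dtype=float)   # t_i = i, i = 0, ..., 31
y_data = bass_cumulative(t_data, 86.35, 0.00204, 0.2735)

# Starting values deliberately off; the fit should return to (2)
popt, _ = curve_fit(bass_cumulative, t_data, y_data, p0=(80.0, 0.003, 0.25))
m_hat, p_hat, q_hat = popt
```

With real, noisy data the recovered parameters would of course scatter around the true values, which is exactly the accuracy issue addressed in the next subsections.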

Figure 1

(a) The function \(y(t)\) satisfying (1)–(2), given by the solid curve, fits the data points. (b) Yearly sales, approximated by (6), are given by the solid curve. The dashed lines denote the boundaries of the 95% confidence region

From the point of view of market decision making, one is often interested in yearly sales:

$$ dy_{i} = y_{i + 1} - y_{i}, $$
(3)

see Fig. 1(b). For our purpose of finding estimates of the boundaries of the confidence intervals, it suffices to approximate \(dy _{i}\) by the derivative \(s'(t _{i})\) with \(t _{i} = i + 0.5\). This approximation is based on a Taylor expansion of \(s'(t)\) around \(t = i + 0.5\):

$$ \begin{aligned} s(i + 1) - s(i) &= \int _{i}^{i + 1} s'(t) \,\mathrm{d}t \\ &= \int _{i}^{i + 1} s'(i + 0.5) + (t - i - 0.5) s''(i + 0.5) \\ &\quad {}+ 0.5(t - i - 0.5)^{2}s'''(i + 0.5) +\cdots \,\mathrm{d}t. \end{aligned} $$

Integrating the expansion term by term we obtain

$$ s(i + 1) - s(i) = s'(i + 0.5) + \frac{1}{24}s'''(i + 0.5) +\cdots. $$
(4)

In Fig. 2 it can be seen that the first term on the right-hand side of (4) indeed approximates the left-hand side very well.
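
The quality of this midpoint approximation can also be checked numerically with the exact solution (5) and its derivative (6); the following sketch uses the parameter values (2):

```python
# Sketch: midpoint approximation (4) checked against the exact yearly
# increments of the Bass solution, with the parameter values (2).
import numpy as np

m, p, q = 86.35, 0.00204, 0.2735
a = p + q

def s(t):
    # Exact solution (5)
    e = np.exp(-a * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def s_prime(t):
    # Derivative (6)
    E = np.exp(a * t)
    return m * p * a**2 * E / (q + p * E) ** 2

i = np.arange(0, 31, dtype=float)
exact_increments = s(i + 1.0) - s(i)   # left-hand side of (4)
midpoint_approx = s_prime(i + 0.5)     # first right-hand term of (4)
max_error = np.max(np.abs(exact_increments - midpoint_approx))
```

The maximum absolute error stays well below the size of the yearly sales themselves, consistent with the \(s'''/24\) correction term in (4).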

Figure 2

Yearly sales for (1)–(2): exact values for the time interval \((i, i + 1)\) given by the left-hand side of (4), and the approximation \(s'(i + 0.5)\), being the first term on the right-hand side

Now, the exact solution of (1) is:

$$ y(t): = s(t) = m \biggl[ \frac{1 - \exp \{ - (p + q)t\}}{1 + (q/p) \exp \{ - (p + q)t\}} \biggr], $$
(5)

so that

$$ s'(t) = \frac{mp (p + q)^{2}\exp \{ (p + q) t\}}{[q + p\exp \{ (p + q) t\} ]^{2}}. $$
(6)

Information content of the data

When estimating parameters, using data of only a first part of the time interval of the evolving process (e.g. for making a forecast), one must have an idea of the information content of the data as it is used. Again, the availability of an exact solution for (1) makes such an analysis possible. The sensitivity of the parameter m with respect to the data can be analysed with a method presented by Kalaba and Spingarn [13]. Assuming that the other two parameters are fixed, we determine the parameter m from a data set \(\{t _{i}, y _{i}\}\) by minimizing the penalty function

$$ P(m) = \sum_{i = 0}^{n} \bigl\{ y_{i} - y(t_{i};m)\bigr\} ^{2}. $$
(7)

The value of m for which the penalty function reaches its minimum value follows from (7)

$$ \frac{\mathrm{d}P}{\mathrm{d}m} = \sum_{i = 0}^{n} 2\bigl\{ y_{i} - y(t _{i};m)\bigr\} \frac{\mathrm{d}y(t_{i};m)}{\mathrm{d}m} = 0, $$
(8)

where \(\mathrm{d}y(t _{i};m)/\mathrm{d}m\) denotes the sensitivity function of the solution with respect to the parameter m at time \(t _{i}\):

$$ S\bigl(y(t_{i};m)\bigr) = \frac{\mathrm{d}y(t_{i};m)}{\mathrm{d}m} = p \bigl[\exp \bigl\{ (p + q)t_{i}\bigr\} - 1\bigr]/\bigl[q + p\exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]. $$
(9)

In the same way we derive

$$ S\bigl(y(t_{i};p)\bigr) = m \bigl[q \bigl(\exp \bigl\{ (p + q)t_{i}\bigr\} - 1\bigr) + p(p + q)t_{i}\exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]/\bigl[q + p \exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]^{2} $$

and

$$ S\bigl(y(t_{i};q)\bigr) = m p \bigl[\exp \bigl\{ (p + q)t_{i}\bigr\} \bigl((p + q)t_{i} - 1\bigr) + 1\bigr]/\bigl[q + p\exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]^{2}. $$

It is remarked that \(S(y(t_{i};m))\) is independent of m.
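
These sensitivity functions can be verified against central finite differences of the exact solution (5); the closed forms coded below were re-derived independently for this check and use the parameter values (2):

```python
# Sketch: sensitivity functions of the Bass solution with respect to
# m, p and q, checked against central finite differences of (5).
import numpy as np

m, p, q = 86.35, 0.00204, 0.2735

def s(t, m, p, q):
    # Exact solution (5)
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def S_m(t):
    E = np.exp((p + q) * t)
    return p * (E - 1.0) / (q + p * E)

def S_p(t):
    E = np.exp((p + q) * t)
    return m * (q * (E - 1.0) + p * (p + q) * t * E) / (q + p * E) ** 2

def S_q(t):
    E = np.exp((p + q) * t)
    return m * p * (E * ((p + q) * t - 1.0) + 1.0) / (q + p * E) ** 2

t = np.linspace(0.0, 31.0, 63)
h = 1e-6
fd_m = (s(t, m + h, p, q) - s(t, m - h, p, q)) / (2.0 * h)
fd_p = (s(t, m, p + h, q) - s(t, m, p - h, q)) / (2.0 * h)
fd_q = (s(t, m, p, q + h) - s(t, m, p, q - h)) / (2.0 * h)
```

Note that \(S_m\) contains no m, confirming the remark above, and that for small t the sensitivities behave as \(pt\), \(mt\) and \(mpt^{2}/2\), respectively.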

Next, the dependence upon the data is traced by differentiating (8) with respect to \(y _{j}\). This dependence comes from m as well as from \(y _{j}\) itself:

$$ 2 \Biggl[ S\bigl(y(t_{j};m)\bigr) - \sum _{i = 0}^{n} S\bigl(y(t_{i};m) \bigr)^{2}\frac{ \partial m}{\partial y_{j}} \Biggr] = 0. $$

This results in the (reverse) sensitivity function

$$ R (m;y_{j}) = \frac{\partial m}{\partial y_{j}} = c_{m} S \bigl(y(t_{j};m)\bigr) \quad \text{with } c_{m} = \Biggl[ \sum_{i = 0}^{n} S\bigl(y(t_{i};m) \bigr)^{2} \Biggr] ^{ - 1}. $$

The sensitivity \(R(p;y _{j})\) for parameter p is slightly more complicated because \(y(t;p)\) is nonlinear in p. Now (8) becomes

$$ \frac{\mathrm{d}P}{\mathrm{d}p} = \sum_{i = 0}^{n} 2\bigl\{ y_{i} - y(t _{i};p)\bigr\} \frac{\mathrm{d}y(t_{i};p)}{\mathrm{d}p} = 0, $$
(10)

where \(\mathrm{d}y(t _{i};p)/\mathrm{d}p = S(y(t _{i};p))\) is the sensitivity function of the solution with respect to the parameter p. Next, the dependence upon the data is again traced by differentiating (10) with respect to \(y _{j}\):

$$ 2S\bigl(y(t_{j};p)\bigr) + 2\sum_{i = 0}^{n} \biggl[ \bigl\{ y_{i} - y(t_{i};p)\bigr\} \frac{\partial S(y(t_{i};p))}{\partial p} - S\bigl(y(t_{i};p)\bigr)^{2} \biggr] \frac{\partial p}{\partial y_{j}} = 0. $$

This results in the sensitivity function

$$ R (p;y_{j}) = \frac{\partial p}{\partial y_{j}} = c_{p} S \bigl(y(t_{j};p)\bigr) $$

with

$$ c_{p} = \Biggl[ \sum_{i = 0}^{n} \biggl\{ S \bigl(y(t_{i};p)\bigr)^{2} + y(t_{i};p) \frac{ \partial S(y(t_{i};p))}{\partial p} - y_{i}\frac{\partial S(y(t_{i};p))}{ \partial p} \biggr\} \Biggr]^{ - 1}. $$

For parameter q the same steps can be followed as for p. So, all in all, the sensitivity function

$$ R(r;y_{i}) = \frac{\partial r}{\partial y_{i}}\quad \text{with } r = m, p \text{ and } q $$

differs only by a multiplicative constant from the sensitivity of the solution with respect to the parameters, which is given by

$$ S\bigl(y(t_{i};r)\bigr) = \frac{\mathrm{d}y(t_{i};r)}{\mathrm{d}r}. $$

Instead of the above multiplicative constant we take the constant \(c _{r}\) such that

$$ c_{r} = \biggl[ \int _{t = 0}^{31} S\bigl(y(t;r) \bigr) \,\mathrm{d}t \biggr]^{ - 1}\quad \text{and have } R(r;y_{j}) = c_{r}S\bigl(y(t_{j};r)\bigr). $$
(11)

With this normalisation we can compare the influence of data points at different times. This analysis shows that the early observations of the sales curve are more informative about the internal (q) and external (p) influences than about the saturation level (m), see Fig. 3. From a time interval that starts at \(t= 0\) and ends at \(t = t _{n}\), dependable estimates of the three parameters can be made for \(n \geq 17\). Note that \(t _{17}\) lies just before the inflection point \(t_{\mathrm{infl}}\) of \(s(t)\), which can be derived from (6): \(s''(t_{\mathrm{infl}}) = 0\).
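
The normalisation (11) and the inflection point can be sketched as follows; the quadrature grid is an assumption of this illustration, and the closed form for the sensitivity with respect to m is the one re-derived from (5):

```python
# Sketch: normalised reverse sensitivity R(m; y_j) of (11) and the
# inflection point of s(t), with the parameter values (2).
import numpy as np
from scipy.integrate import trapezoid

m, p, q = 86.35, 0.00204, 0.2735
a = p + q

def S_m(t):
    # Sensitivity of the solution with respect to m (independent of m)
    E = np.exp(a * t)
    return p * (E - 1.0) / (q + p * E)

t = np.linspace(0.0, 31.0, 3101)
c_m = 1.0 / trapezoid(S_m(t), t)   # normalisation constant of (11)
R_m = c_m * S_m(t)                 # reverse sensitivity for m

# Inflection point of s(t): s''(t) = 0 gives t_infl = ln(q/p)/(p + q)
t_infl = np.log(q / p) / a
```

Early data points indeed carry little information about m: \(R(m; y_{j})\) is small for small t and rises towards the end of the observation window, and the inflection point lies near \(t = 17\), as stated above.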

Figure 3

Sensitivity \(R(t)\) of the parameters m (solid), p (dashed) and q (dotted) for data point \((t, y)\). See (11)

Deriving a confidence domain

Allowing for model noise, we are interested in the dynamics of the system near the solution (5). More specifically, we study the adequacy of the model in the form of its associated confidence domain. For that purpose we introduce the tangent linear equation augmented with white noise proportional to the yearly sales \(s'(t)\). Doing so, we consider the following initial value problem

$$ y'(t) = f\bigl(y(t)\bigr)\quad \text{with } y(t_{0}) = y_{0} $$
(12)

having the solution \(y = s(t) =s(t;y _{0})\). If \(y _{0}\) is not exactly known, one may be interested in the evolution of the solution for a slightly different initial value. Thus, we analyse nearby solutions of the differential equation. Let the perturbed initial value be

$$ y(t_{0}) = y_{0} + v_{0}. $$
(13)

The perturbation evolves in time as \(v(t)\). Substitution of \(y(t) = s(t) +v(t)\) in (1) yields

$$ s'(t) + v'(t) = f\bigl(s(t) + v(t)\bigr) = f\bigl(s(t) \bigr) + \bigl[f'(y)\bigr]_{y = s(t)}v(t) + \frac{1}{2} \bigl[f''(y)\bigr]_{y = s(t)}v(t)^{2} + \cdots. $$

Note that the right-hand side has been expanded in a Taylor series at \(y = s(t)\).

Assuming that the perturbation remains small, we truncate this expansion after the linear term. Since \(s'(t) = f(s(t))\), the equation, augmented with the noise term, takes the form of the tangent linear equation

$$ v'(t) = \bigl[ f'(y) \bigr]_{y = s(t)}v(t) + \beta s'(t) \xi (t) \quad \text{with } f'(y) = q - p - 2(q/m)y, $$
(14)

where \(\xi (t)\) denotes standard white noise. Note that, through a different transformation, Skiadas and Giovanis (Eq. (21) at p. 89) arrive at a similar linear equation [14].

Using Itô calculus we reformulate (14) as a time-dependent Ornstein–Uhlenbeck process, yielding the Langevin equation

$$ \mathrm{d}v(t) = \bigl[ f'(y) \bigr]_{y = s(t)}v(t) \,\mathrm{d}t + \beta s'(t) \,\mathrm{d}W(t), $$
(15)

see Gardiner [15]. In that study it is derived that

$$ \operatorname{Var}\bigl[v(t)\bigr] = \beta ^{2} \int _{0}^{t} s'\bigl(t' \bigr)^{2}\exp \biggl\{ 2 \int _{t'}^{t} q - p - 2(q/m)s(r)\,\mathrm{d}r\biggr\} \,\mathrm{d}t' = \beta ^{2}Q(t). $$
(16)

With the formula handling software package Derive 5 we obtain

$$ Q(t) = \frac{m^{2}p^{2}(p + q)^{4}t\exp \{ 2(p + q)t\}}{[q + p\exp \{ (p + q)t\} ]^{4}} = t s'(t)^{2}. $$
(17)
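
Expression (17) can be verified by straightforward quadrature of (16); the sketch below compares the numerically integrated variance factor with \(t\,s'(t)^{2}\) at an illustrative time point:

```python
# Sketch: numerical check of (17), i.e. Q(t) = t s'(t)^2, by evaluating
# the double integral in (16) on a fine grid (parameter values (2)).
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

m, p, q = 86.35, 0.00204, 0.2735
a = p + q

def s(t):
    # Exact solution (5)
    e = np.exp(-a * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def s_prime(t):
    # Derivative (6)
    E = np.exp(a * t)
    return m * p * a**2 * E / (q + p * E) ** 2

t_end = 10.0
u = np.linspace(0.0, t_end, 20001)
g = q - p - 2.0 * (q / m) * s(u)             # f'(s(r)), cf. (14)
G = cumulative_trapezoid(g, u, initial=0.0)  # int_0^u f'(s(r)) dr
integrand = s_prime(u) ** 2 * np.exp(2.0 * (G[-1] - G))
Q_numeric = trapezoid(integrand, u)
Q_closed = t_end * s_prime(t_end) ** 2       # right-hand side of (17)
```

The agreement is no coincidence: since \(s'' = f'(s)\,s'\), the exponential in (16) equals \((s'(t)/s'(t'))^{2}\), so the integrand reduces to the constant \(s'(t)^{2}\) and the integral to \(t\,s'(t)^{2}\).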

Using the data, the parameter β can be estimated. In (14) it is assumed that the white noise is proportional to the yearly sales, i.e. it is small at the start and at the end of the sales process. Visual inspection of our dataset, however, shows substantial deviations from the model curve at the beginning and end of the sample (see Fig. 4). Apparently, processes are at work that are not captured by the model. We account for the potential biasing effect of these large relative errors by introducing the weights \(s'(t _{i})/( s'(t _{0})+ \cdots + s'(t _{n}))\) with \(t _{i} = i + 0.5\) when estimating the parameter β by least squares. Using (3) and (17) we arrive at the penalty function

$$ P(\beta ) = \Biggl[ \sum_{j = 0}^{n} s'(t_{j}) \Biggr]^{ - 1}\sum _{i = 0}^{n} s'(t_{i}) \bigl[ \bigl(dy_{i} - s'(t_{i})\bigr)^{2} - \beta ^{2}Q(t_{i}) \bigr]^{2}, $$
(18)

which has to be minimised. It results in an estimate

$$ \beta ^{2} = 0.00141. $$

In Fig. 1(b) the dashed lines are the boundaries of the 95% confidence interval, based on the empirical rule [16] \(s'(t) \pm 2 \operatorname{Stdev}[v(t)]\) and the assumption that the model is perfect. The latter is clearly not the case. Thus, we have to take into account the limited accuracy of the stochastic model for small yearly sales.
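
Since (17) gives \(\operatorname{Stdev}[v(t)] = \beta \sqrt{t}\, s'(t)\), the band of Fig. 1(b) can be computed directly; a sketch with the estimate \(\beta^{2} = 0.00141\):

```python
# Sketch: the 95% confidence band s'(t) +/- 2 Stdev[v(t)] of Fig. 1(b),
# with Stdev[v(t)] = beta * sqrt(t) * s'(t) following from (16)-(17).
import numpy as np

m, p, q = 86.35, 0.00204, 0.2735
beta = np.sqrt(0.00141)            # estimated from the penalty function
a = p + q

def s_prime(t):
    # Derivative (6)
    E = np.exp(a * t)
    return m * p * a**2 * E / (q + p * E) ** 2

t = np.arange(0, 31) + 0.5         # midpoints t_i = i + 0.5
point = s_prime(t)
half_width = 2.0 * beta * np.sqrt(t) * point
lower, upper = point - half_width, point + half_width
```

The relative half-width \(2\beta\sqrt{t}\) grows over time, so the band widens relative to the point estimate towards the end of the sample.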

Figure 4

Confidence intervals for yearly sales with past and future separated by the vertical solid line \(t = 19\) (future data unknown): (a) The function \(y(t)\) satisfying (1) with the parameter values (19) is given by the solid curve fitting the data for \(t \leq 19\). (b) The boundaries of the 95% confidence region (dashed) based on (16)–(17) and (20)

Results

Now we arrive at the last stage of our proposed methodology, in which we derive confidence intervals for the point forecasts of the Bass model. Let us have data \((t _{i}, y _{i})\) with \(t _{i}= i\), \(i = 1, 2, \dots , n\). In the present example the process of estimating parameters is based on the numerical solution of (1). This iteration process, based on least squares, only converges for \(t _{n} \geq 17\), so point estimates of future sales can only be made for the years thereafter. It is seen that at \(t= 17\) the curve of Fig. 1(a) is just before the inflection point. The confidence interval we may compute with (16) only gives satisfying results for \(t \geq 19\), just after the inflection point. Therefore, we consider the case \(t _{n}= 19\). Using the data for \(t \leq 19\) we obtain the following estimates for the parameters

$$ m = 74.75, \qquad p = 0.001393\quad \text{and}\quad q = 0.3209, $$
(19)

which gives point estimates for the cumulated and yearly sales for \(t > 19\), see Figs. 4(a) and 4(b), respectively. Now, based on (16), we obtain from the data of the past (\(t \leq 18.5\))

$$ \beta ^{2} = 0.00110. $$
(20)

It leads to acceptable 95% confidence intervals for the yearly sales up to \(t= 26.5\). After that we arrive in a region where \(s'(t)\) gets small, bringing about larger relative errors.
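
The forecasting step can be sketched end-to-end with the reported values (19) and (20); the grid and horizon below are illustrative choices, not part of the original analysis:

```python
# Sketch: point forecasts of yearly sales for t > 19 with the parameter
# values (19), and 95% intervals based on (16)-(17) and beta^2 from (20).
import numpy as np

m, p, q = 74.75, 0.001393, 0.3209  # estimates (19), data t <= 19
beta = np.sqrt(0.00110)            # estimate (20)
a = p + q

def s_prime(t):
    # Derivative (6) of the exact solution
    E = np.exp(a * t)
    return m * p * a**2 * E / (q + p * E) ** 2

t_future = np.arange(19, 31) + 0.5   # forecast midpoints beyond t = 19
forecast = s_prime(t_future)
stdev = beta * np.sqrt(t_future) * forecast
lower, upper = forecast - 2.0 * stdev, forecast + 2.0 * stdev

# Inflection point implied by (19): near t = 17, cf. the text
t_infl = np.log(q / p) / a
```

Note that with the values (19) the inflection point lies just before \(t = 17\), consistent with the observation that the estimation procedure starts to converge there.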

Discussion and conclusions

In this study we introduce confidence intervals in a way different from the commonly used definition in statistics, where a point estimate of a quantity is obtained by the mean of a sample of that quantity. Then a 95% confidence interval contains about 95% of the set of data points. Here the point estimate of a quantity is obtained from a model that is fitted with data of other quantities of the system. Consequently, if the model is not perfect, the number of data points within the 95% confidence interval may differ more from 95% than expected. It is noted that the confidence interval obtained from the data for \(t \leq 18.5\) does not differ much from the one obtained from the complete data set (\(t \leq 30.5\)). This is due to the fact that, once the inflection point is reached, information has been gained from low yearly sales up to high yearly sales: after the inflection point the same route is traversed in the opposite direction.

It is remarked that with the Bass model reliable future sales estimates can be made near the inflection point. Near this point the curve of cumulated sales is close to a straight line, so a linear forecast based on local data may result in a large overestimation of required stocks. The Bass model reduces this risk considerably.

With respect to our empirical analysis, it is noted that both at the beginning and at the end, when yearly sales are small, the model does not agree with the market observations. Apparently, the model does not cover buying actions different from the assumed innovative and imitative traits. Promotion of products or special discounts may be responsible for such deviant behaviour.

The parameter estimation has been carried out with the numerical solution of the Bass equation. Since this first-order differential equation has a smooth solution, a high accuracy can be achieved. The optimal values of the parameters were found using the Levenberg–Marquardt method (MATLAB). When the information matrix is ill-conditioned, this approximation process does not converge, so that no estimate can be made. In this way it is found at which stage of the process sufficient data from the past are available to make a dependable forecast. Note that with linear regression [17] highly uncertain parameter values are found for data sets that cover only a small initial time interval. In the example of steam iron sales, the point estimate could be made just before arriving at the inflection point, and a confidence interval came within reach just after this point. If the behaviour of the buyers does not meet the requirements of the Bass model, then the computed confidence interval should be discarded. This applies to episodes with low yearly sales at the beginning and at the end.

By guaranteeing the exclusion of bad estimates and by giving expected variances, our way of estimating parameters brings accuracy and precision to the new-product sales forecasting process. This benefits the investment decisions of companies regarding the introduction of new products. Our methodological framework may also be of use in optimal stock management [18].

References

1. Bass FM. A new product growth for model consumer durables. Manag Sci. 1969;15:215–27.

2. Mahajan V, Muller E, Bass FM. New product diffusion models in marketing: a review and directions for research. J Mark. 1990;54:1–26.

3. Mahajan V, Muller E, Wind Y, editors. New-product diffusion models. Berlin: Springer; 2000.

4. Tashiro T. Hierarchical Bass model: a product diffusion model considering a diversity of sensitivity to fashion. Physica A. 2016;461:824–32.

5. Massiani J, Gohs A. The choice of Bass model coefficients to forecast diffusion for innovative products: an empirical investigation for new automotive technologies. Res Transp Econ. 2015;50:17–28.

6. Lee C-Y, Huh S-Y. Forecasting the diffusion of renewable electricity considering the impact of policy and oil prices: the case of South Korea. Appl Energy. 2017;197:29–39.

7. Yoon I, Yoon SK. An estimation of offset supply for the Korean emissions trading scheme using the Bass diffusion model. Int J Glob Warm. 2017;12:99–115.

8. Boswijk HP, Franses PH. On the econometrics of the Bass diffusion model. J Bus Econ Stat. 2005;23:255–68.

9. Dekimpe MG, Parker PH, Sarvary M. Staged estimation of international diffusion models: an application to global cellular telephone adoption. Technol Forecast Soc Change. 1998;57:105–32.

10. Islam T, Fiedig DG. Modelling the development of supply-restricted telecommunications markets. J Forecast. 2001;20:249–64.

11. Liang X, Xie L, Yan H. Self-restraining Bass models. J Forecast. 2015;34:472–7.

12. Lilien GL, Rangaswamy A, Van den Bulte C. Diffusion models: managerial applications and software. In: Mahajan V, Muller E, Wind Y, editors. New-product diffusion models. New York: Springer; 2000. p. 295–310.

13. Kalaba RE, Spingarn K. Sensitivity of parameter estimates to observations, system identification, and optimal inputs. Appl Math Comput. 1980;7:225–35.

14. Skiadas CH, Giovanis AN. A stochastic Bass innovation diffusion model for studying the growth of electricity consumption in Greece. Appl Stoch Models Data Anal. 1997;23:85–101.

15. Gardiner CW. Stochastic methods: a handbook for the natural sciences and social sciences. Berlin: Springer; 2009.

16. Ott RL, Longnecker M. An introduction to statistical methods and data analysis. 6th ed. Brooks/Cole; 2010.

17. Satoh D. A discrete Bass model and its parameter estimation. J Oper Res Soc Jpn. 2001;44:1–18.

18. Lukas E, Spengler TS, Kupfer S, Kieckhäfer K. When and how much to invest? Investment and capacity choice under product life cycle uncertainty. Eur J Oper Res. 2017;260:1105–14.

Acknowledgements

Not applicable.

Availability of data and materials

Use has been made of the data presented in reference [12].

Funding

Not applicable.

Author information

MK started the study and delivered the main contribution to the introduction and discussion sections. JG carried out the sensitivity analysis for the parameters as they depend on the data and elaborated the different mathematical methods needed for constructing the confidence intervals. Both authors read and approved the final manuscript.

Correspondence to Johan Grasman.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Abbreviations

Not applicable.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Keywords

  • Bass model
  • Ornstein–Uhlenbeck process
  • Sensitivity of parameter to data
  • Confidence domain