When estimating parameters using data from only the first part of the time interval of the evolving process (e.g. for making a forecast), one must have an idea of the information content of the data that is used. Again, the availability of an exact solution for (1) makes such an analysis possible. The sensitivity of the parameter m with respect to the data can be analysed with a method presented by Kalaba and Spingarn [13]. Assuming that the other two parameters are fixed, we determine the parameter m from a data set \(\{t _{i}, y _{i}\}\) by minimizing the penalty function
$$ P(m) = \sum_{i = 0}^{n} \bigl\{ y_{i} - y(t_{i};m)\bigr\} ^{2}. $$
(7)
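For concreteness, this least-squares step can be sketched numerically as follows; the parameter values and the synthetic data are purely illustrative, and the model function is the exact solution implied by (9) below.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch only: p, q and the data are made-up values, and y_model
# is the exact solution implied by (9), y(t; m) = m*q*(E - 1)/(p + q*E).
p, q = 0.03, 0.4          # assumed fixed while estimating m
t = np.arange(0, 18)      # first part of the time interval, t_0 .. t_17

def y_model(t, m):
    E = np.exp((p + q) * t)
    return m * q * (E - 1.0) / (p + q * E)

rng = np.random.default_rng(1)
y_data = y_model(t, 100.0) + rng.normal(0.0, 1.0, t.size)   # synthetic observations

def penalty(m):                       # P(m) of Eq. (7)
    return np.sum((y_data - y_model(t, m)) ** 2)

m_hat = minimize_scalar(penalty, bounds=(1.0, 1000.0), method="bounded").x
print(f"estimated m = {m_hat:.2f}")
```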
The value of m for which the penalty function reaches its minimum follows from setting the derivative of (7) to zero:
$$ \frac{\mathrm{d}P}{\mathrm{d}m} = \sum_{i = 0}^{n} 2\bigl\{ y_{i} - y(t _{i};m)\bigr\} \frac{\mathrm{d}y(t_{i};m)}{\mathrm{d}m} = 0, $$
(8)
where \(\mathrm{d}y(t _{i};m)/\mathrm{d}m\) denotes the sensitivity function of the solution with respect to the parameter m at time \(t _{i}\):
$$ S\bigl(y(t_{i};m)\bigr) = \frac{\mathrm{d}y(t_{i};m)}{\mathrm{d}m} = q \bigl[\exp \bigl\{ (p + q)t_{i}\bigr\} - 1\bigr]/\bigl[p + q\exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]. $$
(9)
In the same way, we derive
$$ S\bigl(y(t_{i};p)\bigr) = m q \bigl[\exp \bigl\{ (p + q)t_{i}\bigr\} \bigl((p + q)t_{i} - 1\bigr) + 1\bigr]/\bigl[p + q \exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]^{2} $$
and
$$ S\bigl(y(t_{i};q)\bigr) = m \bigl[\exp \bigl\{ (p + q)t_{i}\bigr\} \bigl(p(qt_{i} + 1) + q^{2}t_{i} \bigr) - p\bigr]/\bigl[p + q\exp \bigl\{ (p + q)t_{i}\bigr\} \bigr]^{2}. $$
It is remarked that \(S(y(t_{i};m))\) is independent of m.
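The analytic sensitivity functions can be checked directly against finite differences. The sketch below assumes the exact solution \(y(t) = mq[\exp \{(p+q)t\} - 1]/[p + q\exp \{(p+q)t\}]\) implied by (9) and purely illustrative parameter values.

```python
import numpy as np

# Sketch: analytic sensitivities of the solution implied by (9),
# y(t) = m*q*(E - 1)/(p + q*E) with E = exp((p + q)*t), checked against
# central finite differences.  Parameter values are illustrative.
def y_model(t, m, p, q):
    E = np.exp((p + q) * t)
    return m * q * (E - 1.0) / (p + q * E)

def S_m(t, m, p, q):                  # Eq. (9): dy/dm, independent of m
    E = np.exp((p + q) * t)
    return q * (E - 1.0) / (p + q * E)

def S_p(t, m, p, q):                  # dy/dp
    E = np.exp((p + q) * t)
    return m * q * (E * ((p + q) * t - 1.0) + 1.0) / (p + q * E) ** 2

def S_q(t, m, p, q):                  # dy/dq
    E = np.exp((p + q) * t)
    return m * (E * (p * (q * t + 1.0) + q * q * t) - p) / (p + q * E) ** 2

m, p, q, t0, h = 100.0, 0.03, 0.4, 5.0, 1e-6
print(S_m(t0, m, p, q), (y_model(t0, m + h, p, q) - y_model(t0, m - h, p, q)) / (2 * h))
print(S_p(t0, m, p, q), (y_model(t0, m, p + h, q) - y_model(t0, m, p - h, q)) / (2 * h))
print(S_q(t0, m, p, q), (y_model(t0, m, p, q + h) - y_model(t0, m, p, q - h)) / (2 * h))
```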
Next, the dependence upon the data is traced by differentiating (8) with respect to \(y _{j}\). This dependence enters through m as well as through \(y _{j}\) itself; since \(S(y(t_{i};m))\) does not depend on m, differentiation yields
$$ 2 \Biggl[ S\bigl(y(t_{j};m)\bigr) - \sum _{i = 0}^{n} S\bigl(y(t_{i};m) \bigr)^{2}\frac{ \partial m}{\partial y_{j}} \Biggr] = 0. $$
This results in the (reverse) sensitivity function
$$ R (m;y_{j}) = \frac{\partial m}{\partial y_{j}} = c_{m} S \bigl(y(t_{j};m)\bigr) \quad \text{with } c_{m} = \Biggl[ \sum_{i = 0}^{n} S\bigl(y(t_{i};m) \bigr)^{2} \Biggr] ^{ - 1}. $$
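Because the solution is linear in m, this relation can be verified directly: the least-squares estimate is \(\hat{m} = \sum_{i} S(y(t_{i};m)) y_{i} / \sum_{i} S(y(t_{i};m))^{2}\), so perturbing a single observation changes \(\hat{m}\) by \(c_{m}S(y(t_{j};m))\) per unit perturbation. A minimal sketch with illustrative parameter values and synthetic data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: compare R(m; y_j) = c_m * S(y(t_j; m)) with a brute-force estimate
# dm/dy_j obtained by perturbing one observation and re-minimizing P(m).
# The model is the solution implied by (9); all values are illustrative.
p, q = 0.03, 0.4
t = np.arange(0, 18)

def y_model(t, m):
    E = np.exp((p + q) * t)
    return m * q * (E - 1.0) / (p + q * E)

S = y_model(t, 1.0)                   # S(y(t_i; m)) = y(t_i; m)/m, Eq. (9)
c_m = 1.0 / np.sum(S ** 2)

rng = np.random.default_rng(1)
y_data = y_model(t, 100.0) + rng.normal(0.0, 1.0, t.size)

def fit_m(y):
    penalty = lambda m: np.sum((y - y_model(t, m)) ** 2)
    return minimize_scalar(penalty, bounds=(1.0, 1000.0), method="bounded").x

j, dy = 10, 0.1                       # perturb observation y_j
y_pert = y_data.copy()
y_pert[j] += dy
print("analytic  R(m; y_j):", c_m * S[j])
print("numerical dm/dy_j  :", (fit_m(y_pert) - fit_m(y_data)) / dy)
```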
The sensitivity \(R(p;y _{j})\) for parameter p is slightly more complicated because \(y(t;p)\) is nonlinear in p. Now (8) becomes
$$ \frac{\mathrm{d}P}{\mathrm{d}p} = \sum_{i = 0}^{n} 2\bigl\{ y_{i} - y(t _{i};p)\bigr\} \frac{\mathrm{d}y(t_{i};p)}{\mathrm{d}p} = 0, $$
(10)
where \(\mathrm{d}y(t _{i};p)/\mathrm{d}p = S(y(t _{i};p))\) is the sensitivity function of the solution with respect to the parameter p. Next, the dependence upon the data is again traced by differentiating (10) with respect to \(y _{j}\):
$$ 2S\bigl(y(t_{j};p) \bigr) + 2\sum_{i = 0}^{n} \biggl[ y_{i}\frac{\partial S(y(t_{i};p))}{ \partial p} - S\bigl(y(t_{i};p)\bigr)^{2} - y(t_{i};p)\frac{\partial S(y(t_{i};p))}{ \partial p} \biggr] \frac{\partial p}{\partial y_{j}} = 0. $$
This results in the sensitivity function
$$ R (p;y_{j}) = \frac{\partial p}{\partial y_{j}} = c_{p} S \bigl(y(t_{j};p)\bigr) $$
with
$$ c_{p} = \Biggl[ \sum_{i = 0}^{n} \biggl\{ S \bigl(y(t_{i};p)\bigr)^{2} + y(t_{i};p) \frac{ \partial S(y(t_{i};p))}{\partial p} - y_{i}\frac{\partial S(y(t_{i};p))}{ \partial p} \biggr\} \Biggr]^{ - 1}. $$
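As for m, this can be checked by brute force: perturb one observation, re-estimate p by minimizing the penalty function, and compare the difference quotient with \(c_{p}S(y(t_{j};p))\). In the sketch below \(\partial S/\partial p\) is approximated by a central difference; parameter values and data are again illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: R(p; y_j) = c_p * S(y(t_j; p)) versus a brute-force re-estimate of p
# after perturbing one observation.  The model is the solution implied by (9);
# dS/dp is taken numerically; all values are illustrative.
m, q = 100.0, 0.4
t = np.arange(0, 18)

def y_model(t, p):
    E = np.exp((p + q) * t)
    return m * q * (E - 1.0) / (p + q * E)

def S_p(t, p):                        # dy/dp, see the expression above
    E = np.exp((p + q) * t)
    return m * q * (E * ((p + q) * t - 1.0) + 1.0) / (p + q * E) ** 2

rng = np.random.default_rng(1)
y_data = y_model(t, 0.03) + rng.normal(0.0, 1.0, t.size)

def fit_p(y):
    penalty = lambda p: np.sum((y - y_model(t, p)) ** 2)
    return minimize_scalar(penalty, bounds=(1e-4, 1.0), method="bounded",
                           options={"xatol": 1e-10}).x

p_hat = fit_p(y_data)
h = 1e-6
dS_dp = (S_p(t, p_hat + h) - S_p(t, p_hat - h)) / (2 * h)
c_p = 1.0 / np.sum(S_p(t, p_hat) ** 2 + (y_model(t, p_hat) - y_data) * dS_dp)

j, dy = 10, 0.1                       # perturb observation y_j
y_pert = y_data.copy()
y_pert[j] += dy
print("analytic  R(p; y_j):", c_p * S_p(t[j], p_hat))
print("numerical dp/dy_j  :", (fit_p(y_pert) - fit_p(y_data)) / dy)
```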
For parameter q the same steps can be followed as for p. So, all in all, the sensitivity function
$$ R(r;y_{i}) = \frac{\partial r}{\partial y_{i}}\quad \text{with } r = m, p \text{ and } q $$
differs only by a multiplicative constant from the sensitivity of the solution with respect to the parameters, which is given by
$$ S\bigl(y(t_{i};r)\bigr) = \frac{\mathrm{d}y(t_{i};r)}{\mathrm{d}r}. $$
Instead of the above multiplicative constants we take, for each parameter, the constant \(c _{r}\) such that
$$ c_{r} = \biggl[ \int _{t = 0}^{31} S\bigl(y(t;r) \bigr) \,\mathrm{d}t \biggr]^{ - 1}\quad \text{and have } R(r;y_{j}) = c_{r}S\bigl(y(t_{j};r)\bigr). $$
(11)
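The normalisation can be evaluated numerically; the sketch below computes \(c_{r}\) by quadrature over \([0, 31]\) and prints the normalised sensitivities \(R(r;y_{j})\) at a few times, using the solution implied by (9) and illustrative parameter values.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the normalisation (11): c_r = [ integral_0^31 S(y(t; r)) dt ]^{-1}.
# The solution is the one implied by (9); parameter values are illustrative.
m, p, q = 100.0, 0.03, 0.4

def E(t):
    return np.exp((p + q) * t)

S = {
    "m": lambda t: q * (E(t) - 1.0) / (p + q * E(t)),
    "p": lambda t: m * q * (E(t) * ((p + q) * t - 1.0) + 1.0) / (p + q * E(t)) ** 2,
    "q": lambda t: m * (E(t) * (p * (q * t + 1.0) + q * q * t) - p) / (p + q * E(t)) ** 2,
}

t_j = np.array([2.0, 5.0, 10.0, 20.0, 30.0])
for r, S_r in S.items():
    c_r = 1.0 / quad(S_r, 0.0, 31.0)[0]            # Eq. (11)
    print(r, np.round(c_r * S_r(t_j), 4))          # R(r; y_j) at a few times
```

With the illustrative values used here, the normalised sensitivities for p and q are largest for the early observations, whereas the one for m grows monotonically towards its saturation value, in line with the observation below.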
With this normalisation we can compare the influence of data points at different times. This analysis shows that the early observations of the sales curve are more informative about the internal (q) and external (p) influences than about the saturation level (m), see Fig. 3. From a time interval that starts at \(t= 0\) and ends at \(t = t _{n}\), dependable estimates of the three parameters can be made for \(n \geq 17\). Note that \(t _{17}\) lies just before the inflection point \(t_{\mathrm{infl}}\) of \(s(t)\), which can be derived from (6): \(s''(t_{\mathrm{infl}}) = 0\).