
Space mapping-based receding horizon control for stochastic interacting particle systems: dogs herding sheep


Control of stochastic interacting particle systems is a non-trivial task due to the high dimensionality of the problem and the lack of fast algorithms. Here, we propose a space mapping-based approximation of the stochastic control problem by solutions of the deterministic one. In combination with the receding horizon control technique this yields a reliable and fast numerical scheme for the closed loop control of stochastic interacting particle systems. As a numerical example we consider the herding of sheep with dogs. The numerical results underline the feasibility of our approach and further show stabilizing behaviour of the closed loop control.

1 Introduction

Collective behaviour of crowds or swarms has been investigated by various researchers in the past decades [14–17, 33]. Initially, the focus was on the simulation of large groups, such as flocks of birds and schools of fish, and their attractive and repulsive self-interaction [11, 18]. The resulting models are able to reflect major properties of the interaction, such as flocking and the formation of mills [10]. Further, the stability of these patterns was analysed [1, 12, 18]. Later, the models were refined to take into account view cones or topographical aspects like walls [9, 13, 24]. To include a random disturbance of the individuals’ behaviour, one introduces an additive Brownian motion in the velocity component of the dynamics [6, 28]. Mathematically, this changes the model from a system of ordinary differential equations (ODEs) to a system of stochastic differential equations (SDEs).

Based on this knowledge, the interaction of crowds with external agents became of interest [3, 21], in particular the idea of controlling crowds with the help of external agents [7, 8]. The corresponding optimal control problem (OCP) is then constrained by the dynamics of the respective ODE or SDE system. For the deterministic problem, one can employ standard techniques from variational calculus to derive the gradient of the cost functional and to implement a tailored iterative scheme to compute the controls.

Unfortunately, these classical methods cannot be directly adapted to stochastic problems [34]. In fact, the stochastic influence forces the decoupled forward and backward equations of a deterministic optimal control problem to become a fully coupled forward–backward SDE system involving a ghost process to capture the uncertain terminal condition; see, e.g., [20] for the derivation of such a system based on a Hamiltonian formulation. First steps towards a numerical realization in special cases can be found in [22].

Here, we are interested in controlling the crowd over a large time horizon, such that open loop control is not appropriate. Instead we use the closed loop receding horizon control to allow for feedback during the time evolution (see also [2]). To deal with the stochastic nature of the model we employ the space mapping approach [4, 36], which allows for the control of a high fidelity model (here the stochastic one) by the optimization of a surrogate model (here the deterministic one).

The space mapping approach originated in the engineering community [5] as a tool to solve large-scale optimization problems with the help of an easier surrogate model. Over the years the technique became well-established in engineering and has also been recognized by the mathematics community for various applications, such as radiative heat transfer, the control of the dispersion of particles in a fluid, dynamic compressor optimization of gas networks and optimal inflow control of transmission lines, see, e.g., [19, 23, 25, 26, 29, 31, 35].

Here, we consider as a new application the herding of a crowd of sheep using dogs with a repulsive influence on the crowd. The combination of the space mapping technique with receding horizon control finally allows for the construction of a tailored closed loop algorithm to control stochastic interacting particle systems.

The manuscript is organized as follows: in the next section the details of a general class of optimal control problems with SDE constraints are given. Then, the space mapping approach is discussed in Sect. 3 and the Aggressive Monte Carlo Space Mapping Algorithm is presented. In Sect. 4 we derive the first-order optimality system of the deterministic ODE model and the gradient of the respective reduced cost functional, which is needed for the numerical implementation. The algorithms for the numerical investigation are described in Sect. 5. We present a projected gradient method for the deterministic optimization and discuss the receding horizon procedure for the closed loop control of the stochastic particle system. The feasibility of our approach is underlined by the numerical results presented in Sect. 6. We discuss a space mapping approach based on a mean-field approximation in Sect. 7, before we give conclusions and an outlook in Sect. 8.

2 The control problem

In this section we define the general class of control problems constrained by a stochastic interacting particle system.

2.1 Stochastic interacting particle system

Let D denote the space dimension and T the length of the time interval under consideration. The positions and velocities of the particles are represented by

$$ X_{i} \colon [0,T] \rightarrow \mathbb {R}^{D},\qquad V_{i} \colon [0,T] \rightarrow \mathbb {R}^{D},\quad i=1,\ldots,N, $$

and combined in the vectors

$$\begin{aligned} &X(t) = \bigl(X_{1}(t), \dots, X_{N}(t) \bigr)^{T} \in \mathbb {R}^{ND}, \\ &V(t) = \bigl(V_{1}(t), \dots, V_{N}(t) \bigr)^{T} \in \mathbb {R}^{ND} \end{aligned}$$

for each \(t \in [0,T]\), respectively. In analogy, we consider M external agents having positions

$$ a_{m} \colon [0,T] \rightarrow \mathbb {R}^{D},\quad m = 1,\ldots,M, $$

with \(a(t) = (a_{1}(t), \dots, a_{M}(t))^{T} \in \mathbb {R}^{MD}\) for each \(t \in [0,T]\).

Their velocities

$$ u_{m} \colon [0,T] \rightarrow \mathbb {R}^{D},\quad m=1,\ldots,M, $$

are combined in \(u(t) = (u_{1}(t), \dots, u_{M}(t))^{T}\) for each \(t \in [0,T]\). Later, they act as control functions. We assume \(u \in L^{2}([0,T], \mathbb {R}^{MD})\).

The self-organisation of the crowd and the interaction of the particles with the agents is modelled with the help of radially symmetric interaction potentials

$$ \varPhi _{1}, \varPhi _{2} \colon \mathbb {R}_{0}^{+} \to \mathbb {R}, \qquad \varPhi _{j}\bigl( \vert x \vert \bigr) = \varPhi _{j}(r), \quad j=1,2. $$

For the sake of well-posedness, we assume that their first and second derivative

$$ \nabla _{r} \varPhi _{j} (r) =: G_{j}(r)\quad \text{and}\quad \nabla _{r}^{2} \varPhi _{j}(r) =: H_{j}(r) $$

are locally Lipschitz and globally bounded, i.e., \(\varPhi _{j} \in \mathcal{C}^{2}_{b}(\mathbb {R}_{0}^{+})\) for \(j=1,2\).

Further, we include a friction term with parameter \(\alpha >0\) and additive stochastic noise with strength \(\sigma \ge 0\) influencing the velocities of the individuals. The friction models the lethargy of the individuals, while the stochasticity allows for disturbances of the surroundings, that are not considered explicitly. Let \(B_{t}^{i}\), \(i=1,\dots,N\), denote independent D-dimensional Brownian motions. Then, the stochastic state system is given by

$$\begin{aligned} &\mathrm {d}X_{i} = V_{i} \,\mathrm {d}t, \quad i =1,\dots, N, \end{aligned}$$
$$\begin{aligned} &\mathrm {d}V_{i} = \Biggl( - \frac{1}{N} \sum _{j=1}^{N} G_{1}\bigl( \vert X_{i} - X_{j} \vert \bigr) - \sum _{m=1}^{M} G_{2}\bigl( \vert X_{i} -a_{m} \vert \bigr) - \alpha V_{i} \Biggr) \,\mathrm {d}t + \sigma \,\mathrm {d}B_{t}^{i}, \end{aligned}$$
$$\begin{aligned} &\mathrm {d}a_{m} = u_{m} \,\mathrm {d}t,\quad m = 1,\dots,M, \end{aligned}$$

supplemented with initial data \(Y_{0}:= (X_{0},V_{0},a_{0})\). The full state is a random variable \(Y = (X,V,a)\).
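The state system above translates directly into an Euler–Maruyama time-stepping scheme. The following is a minimal Python sketch (illustrative, not the authors' code): the interaction forces are supplied as vector-valued callables `G1`, `G2` acting on difference vectors, and the control enters through a callable `u(t)`; all names are our own conventions.

```python
import numpy as np

def simulate_sde(X0, V0, a0, u, G1, G2, alpha, sigma, dt, steps, rng):
    """Euler-Maruyama discretization of the stochastic state system.

    X0, V0: (N, D) particle positions/velocities; a0: (M, D) agent positions.
    u(t): callable returning the (M, D) agent velocities (the control).
    G1, G2: vector-valued interaction forces; G1 must return zero for a
    zero difference vector (no self-interaction).
    """
    X, V, a = X0.copy(), V0.copy(), a0.copy()
    N = X.shape[0]
    for n in range(steps):
        t = n * dt
        drift = np.zeros_like(V)
        for i in range(N):
            # particle-particle, particle-agent interaction and friction
            drift[i] = (-G1(X[i] - X).sum(axis=0) / N
                        - G2(X[i] - a).sum(axis=0)
                        - alpha * V[i])
        X = X + V * dt
        V = V + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(V.shape)
        a = a + u(t) * dt
    return X, V, a
```

Setting `sigma=0.0` in the same routine produces the deterministic surrogate trajectories used as the coarse model below.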

Remark 1

Clearly, for \(\sigma =0\) the above system reduces to an ODE system, which we are going to use as the surrogate model for the space mapping procedure.

It would be interesting to generalize the approach for common noise situations, i.e., \(B_{t}^{i} = B_{t}\) in order to model effects on the system as a whole, instead of single particles. This will have impacts on the cost-functional and also on the mean-field equation discussed later on. For simplicity, we restrict ourselves here to the case of individual noise.

2.2 Well-posedness of the state systems

Assuming a maximal velocity \(u_{\mathrm{max}}\) for the agents, we define the set of admissible controls

$$ \mathcal {U}_{\mathrm {ad}}= \bigl\{ u \in L^{2}\bigl([0,T], \mathbb {R}^{MD}\bigr) \colon \bigl\vert u_{m} (t) \bigr\vert \le u_{\mathrm{max}} \text{ for a.e. }t, m=1,\dots,M \bigr\} . $$

Note that the ODE for the agents can be solved explicitly for given \(u \in \mathcal {U}_{\mathrm {ad}}\), which yields

$$ a(t) = a_{0} + \int _{0}^{t} u(s) \,\mathrm {d}s. $$

Indeed, we get an absolutely continuous function a, which can be plugged into the SDE system governing the dynamics of the crowd. Using the assumption \(\varPhi _{j} \in \mathcal {C}_{b}^{2}(\mathbb {R}_{0}^{+})\), we obtain weak solutions of the stochastic system in the sense of Itô by standard SDE theory, see, e.g., [32]. Further, the state fulfills \(Y \in \mathcal{C}([0,T],\mathbb {R}^{2ND+MD})\) almost surely.
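Since the agents' ODE decouples from the crowd dynamics, a(t) can be precomputed from any discretized control. A minimal sketch, assuming a piecewise-constant control on a uniform grid (the array layout is our own convention):

```python
import numpy as np

def agent_trajectory(a0, u_vals, dt):
    """Integrate a(t) = a0 + \int_0^t u(s) ds with a left-endpoint rule.

    a0: (M, D) initial agent positions; u_vals: (steps, M, D) piecewise-
    constant control values. Returns the (steps+1, M, D) positions on the grid.
    """
    increments = np.cumsum(u_vals, axis=0) * dt
    return np.concatenate([a0[None], a0[None] + increments], axis=0)
```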

Remark 2

Note that the stochastic system can be generalized to space and time dependent \(\sigma = \sigma (t,x)\) without any effect on the well-posedness, as long as the conditions

$$\bigl\vert \sigma (t,x) \bigr\vert \le C_{1}\bigl(1 + \vert x \vert \bigr),\qquad \bigl\vert \sigma (t,x) - \sigma (t,y) \bigr\vert \le C_{2} \vert x-y \vert $$

are satisfied for some positive constants \(C_{1}, C_{2}\) and all \(x,y \in \mathbb{R}^{D}\).

Remark 3

Note that \(u \in L^{1}([0,T],\mathbb{R}^{MD})\) would provide enough regularity to obtain the absolutely continuous function a. However, to identify the gradient for the numerical algorithm later on, we need a Hilbert-space structure. Thus, we choose the stronger assumption for \(\mathcal{U}_{\mathrm{ad}}\).

In the case \(\sigma = 0\), we obtain a deterministic ODE system which attains a unique solution by standard results from ODE theory. This allows us to define the control-to-state map \(\mathcal {S}_{c}\) which assigns to each \(u \in \mathcal {U}_{\mathrm {ad}}\) the unique solution y of the ODE system. In analogy, we define \(\mathcal {S}_{f}(u)=Y\) for the solution of the SDE system. For better readability we refer to states of the ODE system with lower-case letters and states corresponding to the SDE system with upper-case letters.

2.3 The cost functional

In general, cost functionals involving empirical quantities, like expectation, variance or other kind of moments of the particle crowd are appropriate for the space mapping approach.

In the following, we consider a specific cost functional that is based on the expected trajectory of the centre of mass of the crowd, reflecting the aim of our application, i.e., steering the crowd to a predefined destination \(Z_{\mathrm {des}}\). To do so we define a time dependent reference state \(\bar{Z} \colon [0,T] \rightarrow \mathbb{R}^{D}\). Similar to the approach in [8], we measure the spread of the crowd around \(\bar{Z}\). In particular, due to the stochastic behaviour of the state system we use the expected paths \({\mathbb {E}}[X]\).

This leads to the following cost functional

$$ J(Y,u; \bar{Z},\bar{u}):= \int _{0}^{T} \frac{1}{2N} \sum _{k=1}^{N} \bigl\Vert {\mathbb {E}}\bigl[X_{k}(t) \bigr] - \bar{Z}(t) \bigr\Vert ^{2} + \frac{\gamma }{2} \bigl\Vert u(t)- \bar{u}(t) \bigr\Vert _{\mathbb {R}^{MD}}^{2} \,\mathrm {d}t, $$

where the first term tracks the expected centre of mass of the crowd and penalizes its distance to the desired trajectory. The second term measures the control costs and is weighted with the parameter \(\gamma >0\).
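On a uniform time grid, this cost functional can be evaluated from a Monte Carlo estimate of the expected particle paths. A sketch in Python (the array shapes and names are our own conventions, not from the paper):

```python
import numpy as np

def cost_functional(EX, u, Zbar, ubar, gamma, dt):
    """Discretized cost J: tracking of the expected paths plus control cost.

    EX: (steps, N, D) Monte Carlo estimate of E[X_k(t)];
    u, ubar: (steps, M, D) controls; Zbar: (steps, D) reference trajectory.
    """
    # tracking term: (1/2N) sum_k |E[X_k(t)] - Zbar(t)|^2 at each time step
    track = 0.5 / EX.shape[1] * ((EX - Zbar[:, None, :]) ** 2).sum(axis=(1, 2))
    # control cost: (gamma/2) |u(t) - ubar(t)|^2
    ctrl = 0.5 * gamma * ((u - ubar) ** 2).sum(axis=(1, 2))
    # quadrature of the time integral
    return dt * (track + ctrl).sum()
```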

Remark 4

The predefined desired trajectory \(\bar{Z}(t)\) and the reference velocities ū are input parameters for the cost functional. In the space mapping procedure, \(\bar{Z}\) shall be replaced by the expected centre of mass and ū by the optimal control of the surrogate model.

To sum up, the SDE constrained optimal control problem of consideration is given by

Problem 1

Find \(u^{*} \in \mathcal {U}_{\mathrm {ad}}\) such that

$$ u^{*} = \operatorname*{argmin}_{u \in \mathcal {U}_{\mathrm {ad}}} J(Y, u ; \bar{Z}, \bar{u})\quad \text{subject to } \text{(1a)--(1c)} \text{ with initial condition } Y(0) = Y_{0}. $$

Remark 5

The existence of an optimal control can be shown with standard techniques from variational calculus [27]. In fact, an existence result can be obtained for all sequentially weak lower semicontinuous and coercive cost functionals \(J(Y,u)\). Note that, in general, we cannot expect its uniqueness due to the non-convexity which is introduced by the nonlinearity in the state system.

3 The space mapping approach

The direct solution of this SDE constrained optimal control problem is a non-trivial task. Nevertheless, in combination with the space mapping procedure we can exploit the fact that, for small noise σ, the deterministic ODE model is a good approximation of the stochastic one.

The general idea of space mapping for optimization problems is to approximate a complex (fine) model by a simple (coarse) surrogate model such that its main features are still resolved and the coarse model allows for a fast optimization. In particular, no gradient information of the fine model needs to be computed. Space mapping goes back to Bandler [5] and an excellent introduction is given in the review [4] and the references therein.

Let \(\mathcal {G}_{f}\) and \(\mathcal {G}_{c}\) be two operators mapping the fine control and the coarse control to some observable, respectively. To get an approximation of the fine model optimization

$$u_{f}^{*} = \operatorname*{argmin}_{u \in \mathcal {U}_{\mathrm {ad}}} \bigl\vert \mathcal{G}_{f}(u)-\bar{w} \bigr\vert $$

for a desired value \(\bar{w}\), one uses optimizers of the coarse model, i.e.,

$$u_{c}^{*} = \operatorname*{argmin}_{u \in \mathcal {U}_{\mathrm {ad}}} \bigl\vert \mathcal{G}_{c}(u)-\bar{w} \bigr\vert . $$

For a better approximation the space mapping function

$$\mathbf {T}: \mathcal {U}_{\mathrm {ad}}\to \mathcal {U}_{\mathrm {ad}},\qquad \mathbf {T}(u_{f}) = \operatorname*{argmin}_{u\in \mathcal {U}_{\mathrm {ad}}} \bigl\vert \mathcal{G}_{c}(u) - \mathcal{G}_{f}(u_{f}) \bigr\vert $$

is introduced, which assigns to an input \(u_{f}\) of the fine model a control \(u_{c}\) for the coarse model, yielding the best approximation of the fine model output \(\mathcal{G}_{f}(u_{f})\) by the coarse model output \(\mathcal{G}_{c}(u_{c})\).

If the observables to the respective optimizers are similar, i.e., \(\mathcal{G}_{f}(u_{f}^{*}) \approx \mathcal{G}_{c}(u_{c}^{*})\), we expect that it holds

$$\mathbf {T}\bigl(u_{f}^{*}\bigr) = \operatorname*{argmin}_{u\in \mathcal {U}_{\mathrm {ad}}} \bigl\vert \mathcal{G}_{c}(u) - \mathcal{G}_{f} \bigl(u^{*}_{f}\bigr) \bigr\vert \approx \operatorname*{argmin}_{u\in \mathcal {U}_{\mathrm {ad}}} \bigl\vert \mathcal{G}_{c}(u) - \bar{w} \bigr\vert = u_{c}^{*}. $$

Indeed, the space mapping is fixed by the observable defined by the operators \(\mathcal {G}_{f}\) and \(\mathcal {G}_{c}\). In the following we use

$$\mathcal {G}_{f}(u) = J(Y,u; \bar{Z},\bar{u}) = \frac{1}{2N} \sum _{i=1}^{N} \left \Vert \begin{pmatrix} {\mathbb {E}}[X^{i}] - \bar{Z} \\ \sqrt{\gamma }(u - \bar{u}) \end{pmatrix} \right \Vert _{L^{2}(0,T)}^{2} $$


$$\mathcal {G}_{c}(u) = J(y,u; \bar{Z},\bar{u}) = \frac{1}{2N} \sum _{i=1}^{N} \left \Vert \begin{pmatrix} x^{i} - \bar{Z} \\ \sqrt{\gamma }(u - \bar{u}) \end{pmatrix} \right \Vert _{L^{2}(0,T)}^{2} $$

and compute \({\mathbb {E}}[X]\) with the help of a Monte Carlo simulation as proposed in [31]. Therefore, it makes sense to set \(\bar{w} = 0\). That means, in the following, the stochastic interacting particle system \((\sigma >0)\) will act as the fine model, while the coarse model is given by the deterministic particle system \((\sigma =0)\).

Another possible choice could use the solution operators of the state equations, i.e., \(\mathcal {G}_{f} = \mathcal {S}_{f}\) and \(\mathcal {G}_{c} = \mathcal {S}_{c}\), with \(\bar{w}\) being a desired state.

Remark 6

Note that the space mapping function T might formally be set-valued if the optimization problem admits multiple solutions. Assumptions on the models ensuring that T is well defined are discussed in detail in [19, 26].

In general, the space mapping function T is not directly accessible, such that several approximations have been proposed in the literature [4, 5, 19]. These update the controls of the fine model iteratively. For example, Aggressive Space Mapping (ASM) and Trust Region Aggressive Space Mapping (TRASM) borrow the idea from quasi-Newton methods of approximating the Jacobian with the help of Broyden-type matrices. On the other hand, Hybrid Aggressive Space Mapping (HASM) combines the classical space mapping method with classical optimization techniques (cf. [4]).

We use the ASM approach for the numerical computations below. Hence, the update \(h^{k}\) for the next iterate is given by

$$ B^{k} h^{k} = -\bigl(\mathbf {T}\bigl(u_{f}^{k} \bigr) - u_{c}^{*}\bigr),\qquad u_{f}^{k+1} = u_{f}^{k} + \rho h^{k}, $$

where \(B^{k}\) is the kth Broyden matrix iterate and \(\rho >0\) the step-length.
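One ASM iteration therefore consists of a linear solve with the current Broyden matrix followed by a rank-one update. A minimal sketch, assuming the discretized controls are flattened into vectors; `Tuf` stands for an evaluation of the space mapping function T at \(u_{f}^{k}\), which in practice requires a coarse-model optimization (all names are illustrative):

```python
import numpy as np

def asm_step(B, uf, Tuf, uc_star, rho=1.0):
    """One Aggressive Space Mapping step: solve B h = -(T(u_f) - u_c*)."""
    r = Tuf - uc_star                       # space mapping residual
    h = np.linalg.solve(B, -r)              # quasi-Newton direction
    return uf + rho * h, h, r

def broyden_update(B, h, r_new, r_old):
    """Broyden's rank-one update of the Jacobian approximation B."""
    y = r_new - r_old                       # observed change of the residual
    return B + np.outer(y - B @ h, h) / (h @ h)
```

The iteration stops once the residual \(\mathbf{T}(u_f^k) - u_c^*\) is sufficiently small, mirroring the stopping criterion used in Sect. 6.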

For a smooth presentation of the algorithm, we define the expected centre of mass of the stochastic particle crowd as

$$ \bar{X}(t) = {\mathbb {E}}\Biggl[ \frac{1}{N} \sum _{k=1}^{N} X^{k}(t) \Biggr],\quad t \in [0,T]. $$

The resulting Aggressive Monte Carlo Space Mapping (AMCSM) approach [31] is stated in full detail in Algorithm 1.

Algorithm 1

Aggressive Monte Carlo space mapping (AMCSM)

Remark 7

For the present application of dogs herding sheep, we need just one solve of the fine stochastic model, which involves the expensive Monte Carlo simulation, in each step of the algorithm. The optimization step involves only the coarse deterministic model, for which fast numerical algorithms based on gradient information are available. Clearly, for \(N\gg 1\) the deterministic problem is still challenging; one idea is then to use a mean-field approximation, see the discussion on the mean-field limit in Sect. 7. Further note that the space mapping approach discussed here does in general not yield a perfect space mapping, such that the algorithm might terminate with a suboptimal solution (cf. [19]). This does not matter in our case, since we design a closed loop control with the help of the receding horizon technique. The numerical results below indicate that the method proposed here works well for the problem at hand. Nevertheless, for other problems with short time horizons the space mapping solutions may fail to be robust. A qualitative study of the approximation and its robustness is subject to future work.

4 Optimal control of the coarse model

The core of the space mapping approach is the fast optimization of the coarse model. Since we intend to use a steepest descent algorithm, we derive the first-order optimality conditions for the coarse optimization problem. The derived adjoint information can then be used for the evaluation of the gradient of the reduced cost functional.

4.1 First-order optimality condition

For the deterministic optimal control problem with ODE constraints, we can derive the adjoint system and the optimality condition with the help of the extended Lagrangian. Note that the calculations are very similar to [8]. The deterministic system

$$\begin{aligned} &\frac{\mathrm {d}}{\mathrm {d}t} x_{i} = v_{i}, \quad i=1,\dots, N, \end{aligned}$$
$$\begin{aligned} &\frac{\mathrm {d}}{\mathrm {d}t} v_{i} = - \Biggl( \frac{1}{N} \sum _{k=1}^{N} G_{1}\bigl( \vert x_{i} - x_{k} \vert \bigr) + \sum_{m=1}^{M} G_{2}\bigl( \vert x_{i} -d_{m} \vert \bigr) + \alpha v_{i} \Biggr) =: -W^{i}(y), \end{aligned}$$
$$\begin{aligned} &\frac{\mathrm {d}}{\mathrm {d}t} d_{m} = u_{m},\quad m = 1,\dots,M, \end{aligned}$$

can be compactly denoted by \(\frac{\mathrm {d}}{\mathrm {d}t} y = F(y,u)\), supplemented with the initial conditions \(y(0) = y_{0}\).

We define the set of controls \(\mathcal {U}\) and the state space \(\mathcal {Y}\) as

$$ \mathcal {U}= \bigl\{ u \in L^{2}\bigl([0,T], \mathbb {R}^{MD}\bigr)\bigr\} ,\qquad \mathcal {Y}= \bigl[H^{1}\bigl( [0,T], \mathbb{R}^{ND}\bigr) \bigr]^{2} \times H^{1}\bigl( [0,T],\mathbb{R}^{MD} \bigr). $$

Obviously it holds \(\mathcal {U}_{\mathrm {ad}}\subset \mathcal {U}\). Further, we define \(\mathcal {X}:=[L^{2}( [0,T],\mathbb{R}^{ND})]^{2}\times L^{2}( [0,T], \mathbb{R}^{MD})\) and

$$ \mathcal {Z}:=\mathcal {X}\times \bigl(\bigl[\mathbb{R}^{ND}\bigr]^{2}\times \mathbb{R}^{MD} \bigr), $$

as the space of Lagrange multipliers, with \(\mathcal {Z}^{*}\) being its dual.

We define the state operator \(e\colon \mathcal {Y}\times \mathcal {U}\rightarrow \mathcal {Z}^{*}\) for the deterministic ODE system as

$$ e(y,u) = \begin{pmatrix} \frac{\mathrm {d}}{\mathrm {d}t}y - F(y,u) \\ y(0)-y_{0} \end{pmatrix} $$

and the dual pairing

$$ \bigl\langle e(y,u),(\xi,\eta ) \bigr\rangle _{\mathcal {Z}^{*},\mathcal {Z}} = \int _{0}^{T} \biggl( \frac{\mathrm {d}}{\mathrm {d}t}y(t) - F \bigl(y(t),u(t)\bigr)\biggr) \cdot \xi (t) \,\mathrm {d}t + \bigl(y(0) - y_{0} \bigr) \cdot \eta. $$

Let \((\xi,\eta )\in \mathcal {Z}\) denote the Lagrange multiplier which is in fact the adjoint state. Then, the extended Lagrangian corresponding to the coarse problem reads

$$ \mathcal{L}(y,u,\xi,\eta ;\bar{Z},\bar{u}) = J(y,u;\bar{Z}, \bar{u}) + \bigl\langle e(y,u),(\xi,\eta ) \bigr\rangle _{\mathcal {Z}^{*},\mathcal {Z}}. $$

As usual the first-order optimality condition of the coarse problem is given by

$$ d \mathcal{L}(y,u,\xi,\eta ;\bar{Z}, \bar{u}) =0. $$

Following the standard approach from variational calculus for the derivation of the adjoint equations (cf. [27]), we obtain the following first order optimality system.

Theorem 1

Let \((y^{*},u^{*})\) be an optimal pair. Then, the first-order optimality condition corresponding to the coarse problem reads

$$ \int _{0}^{T} \bigl( \gamma \bigl( u^{*}(t) - \bar{u}(t) \bigr) - \xi _{3}(t) \bigr) \cdot \bigl(u(t)-u^{*}(t)\bigr) \,\mathrm {d}t\ge 0 \quad \text{for all $u\in \mathcal{U}_{\mathrm{ad}}$}, $$

where \(\xi =(\xi _{1},\xi _{2},\xi _{3})\in \mathcal {Y}\) satisfies the adjoint system given by

$$ \begin{aligned} & \frac{\mathrm {d}}{\mathrm {d}t}\xi _{1} = -d_{{{x}}} { W} \bigl(y^{*}\bigr)[\xi _{2}]- \frac{1}{NT}(x - \bar{Z}),\qquad \frac{\mathrm {d}}{\mathrm {d}t}\xi _{2} = \xi _{1} - \alpha \xi _{2}, \\ & \frac{\mathrm {d}}{\mathrm {d}t}\xi _{3} = -d_{{{d}}} { W} \bigl(y^{*}\bigr)[ \xi _{2}], \end{aligned}$$

supplemented with the terminal conditions \(\xi _{1}(T) = 0\), \(\xi _{2} (T) = 0\), \(\xi _{3}(T) = 0\).
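The adjoint system is linear in ξ and is integrated backwards in time from the homogeneous terminal conditions. A sketch of an explicit backward scheme, where the Jacobian-vector products \(d_{x}W(y^{*})[\xi_{2}]\) and \(d_{d}W(y^{*})[\xi_{2}]\) are supplied as callables evaluated along the stored optimal trajectory (all names are illustrative):

```python
import numpy as np

def solve_adjoint(x_opt, Zbar, dxW, ddW, alpha, N, T, dt):
    """Integrate the adjoint system backwards with an explicit scheme.

    x_opt: (steps+1, N*D) optimal positions on the time grid;
    dxW(n, v), ddW(n, v): Jacobian-vector products of W at step n.
    Terminal conditions xi_1(T) = xi_2(T) = xi_3(T) = 0.
    """
    steps, nd = x_opt.shape[0] - 1, x_opt.shape[1]
    md = ddW(0, np.zeros(nd)).shape[0]
    xi1 = np.zeros((steps + 1, nd))
    xi2 = np.zeros((steps + 1, nd))
    xi3 = np.zeros((steps + 1, md))
    for n in range(steps, 0, -1):
        rhs1 = -dxW(n, xi2[n]) - (x_opt[n] - Zbar[n]) / (N * T)
        rhs2 = xi1[n] - alpha * xi2[n]
        rhs3 = -ddW(n, xi2[n])
        # stepping backwards: xi(t - dt) = xi(t) - dt * rhs(t)
        xi1[n - 1] = xi1[n] - dt * rhs1
        xi2[n - 1] = xi2[n] - dt * rhs2
        xi3[n - 1] = xi3[n] - dt * rhs3
    return xi1, xi2, xi3
```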

Remark 8

The variational inequality in (5) can also be derived with the help of the Pontryagin maximum principle. In view of the numerical implementation, the inequality is inconvenient. We therefore choose the Lagrangian approach here, leading to explicit expressions for the adjoint which can be used in the algorithm. Together with a projection onto the feasible set \(\mathcal{U}_{\mathrm{ad}}\), we can design a projected gradient descent method for the optimization problem, see Algorithm 2.

Algorithm 2

Optimal control algorithm for the coarse problem

4.2 Gradient of the reduced cost functional

In this section we introduce the reduced cost functional for the coarse model constraint and formally calculate its gradient which we need for the descent algorithm. Using the control-to-state map \(\mathcal {S}_{c}\) we define the reduced cost functional as

$$ \hat{J}(u):= J\bigl(\mathcal {S}_{c}(u),u;\bar{Z}, \bar{u}\bigr). $$

Assuming sufficient regularity for \(\mathcal {S}_{c}\) we further derive the gradient of the reduced cost functional. Making use of the state equation \(e(y,u) = 0\) we implicitly obtain \(d\mathcal {S}_{c}(u)\) via

$$ 0= d_{u} e\bigl(\mathcal {S}_{c}(u),u\bigr) = d_{y} e \bigl(\mathcal {S}_{c}(u),u\bigr)\bigl[d\mathcal {S}_{c}(u)\bigr] + d_{u} e\bigl( \mathcal {S}_{c}(u),u\bigr). $$

With the help of the adjoint equation

$$ \bigl(d_{y}e(y,u)\bigr)^{*}[\xi ] = - d_{y} J(y,u) $$

we compute the Gâteaux derivative of Ĵ in direction \(h\in \mathcal {U}\)

$$ d\hat{J}(u)[h] = \bigl\langle d_{y} J(y,u), d\mathcal{S}_{c}(u)[h] \bigr\rangle _{ \mathcal {Y}^{*},\mathcal {Y}} + \bigl\langle d_{u} J(y,u),h\bigr\rangle _{\mathcal {U}}= \bigl\langle \gamma (u - \bar{u}) -\xi _{3}, h \bigr\rangle _{\mathcal {U}}. $$

Since \(\mathcal {U}\) is a Hilbert space, we may use the Riesz representation theorem to identify the gradient of the reduced cost functional as

$$ \nabla \hat{J} (u) = \gamma (u - \bar{u})-\xi _{3}. $$

Now, we have all ingredients at hand to state the gradient descent method for the numerical simulations.

5 Numerical schemes

The Aggressive Monte Carlo Space Mapping algorithm (AMCSM) proposed in Algorithm 1 uses solutions of the coarse optimal control problem and only evaluations of the fine stochastic particle system.

5.1 Optimization algorithm for the coarse model

We solve the deterministic ODE systems of the state and adjoint problems with the explicit Euler scheme. In the optimal control loop for the deterministic problem, we update the controls using nonlinear conjugate gradient (NCG) steps. The step size for the gradient update is obtained by a line search based on the Armijo rule with projection (cf. [27]). These ingredients define the numerical scheme for the deterministic optimization stated in Algorithm 2, where we denote by \(u^{n}\) the control of the nth optimization iteration. When the optimal solution \(u_{c}^{*}\) of the coarse problem is found, we compute \(\bar{x}= \frac{1}{N} \sum_{i=1}^{N} x_{i}\), where the \(x_{i}\) refer to the optimal positions extracted from \(\mathcal {S}_{c}(u_{c}^{*})\).

In our particular case, the projection \(\mathcal{P}_{\mathcal {U}}\) has the explicit representation

$$ \bigl(\mathcal{P}_{\mathcal {U}}(h)\bigr)_{m} (t) = \textstyle\begin{cases} u_{\mathrm{max}}\frac{h_{m}(t)}{ \vert h_{m}(t) \vert } & \text{for $ \vert h_{m}(t) \vert >u_{\mathrm{max}}$}, \\ h_{m}(t) & \text{otherwise}, \end{cases} $$

for \(m=1,\ldots,M\) and \(t\in [0,T]\).
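The projection and a single step of the resulting projected gradient method can be sketched as follows, with the gradient \(\nabla \hat{J}(u) = \gamma (u - \bar{u}) - \xi _{3}\) from Sect. 4.2 (the array shapes are our own conventions):

```python
import numpy as np

def project_U(h, umax):
    """Componentwise projection onto U_ad: rescale each agent velocity
    h_m(t) so that |h_m(t)| <= umax.  h has shape (steps, M, D)."""
    norms = np.linalg.norm(h, axis=2, keepdims=True)
    # scale = umax/|h_m| where |h_m| > umax, and 1 otherwise
    return h * (umax / np.maximum(norms, umax))

def projected_gradient_step(u, ubar, xi3, gamma, umax, step):
    """One projected gradient step with grad J(u) = gamma*(u - ubar) - xi3."""
    grad = gamma * (u - ubar) - xi3
    return project_U(u - step * grad, umax)
```

In Algorithm 2 the fixed step size would be replaced by the Armijo line search with projection described above.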

5.2 Receding horizon control

The appropriate time horizon for steering the crowd to the given destination depends on the distance of the crowd to the destination and might be large. Since the space mapping procedure is based on optimal controls, we need to store the full forward information to compute the adjoints. On large time intervals this leads to extensive memory consumption. Additionally, having the application of dogs herding sheep in mind, an open loop approach is rather unrealistic. In reality, a dog reacts to the current state of the crowd, such that it makes more sense to model the problem using a closed loop ansatz.

This is why a closed loop control for a large time horizon is preferable. We now combine the above numerical approaches with receding horizon control [2]. In more detail, we split the time interval of interest \([0,T]\) into K smaller intervals \(I_{1},\dots,I_{K}\) and apply the space mapping algorithm on these smaller intervals. In fact, we compute the stochastic output by an Euler–Maruyama scheme on \(I_{1}\) but store only the first half of the solution. Then, we initialize the values using the optimal values at time \(t = I_{1}/2\), compute the solution on the interval \([\frac{I_{1}}{2}, \frac{I_{2}}{2}]\) and glue half of this solution to the one stored before. After two steps, we have the optimal control on the full interval \(I_{1}\) available. We proceed iteratively until we reach the terminal time T. The receding horizon procedure is visualized in Fig. 1. Note that there is some freedom in choosing the length of the smaller intervals. Numerical studies motivated us to use \(\frac{I_{k}}{2}\).
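The interval-splitting logic can be condensed into a small driver loop. In this sketch `solve_ocp` stands for the whole space mapping solve on one window and `advance` for the Euler–Maruyama evolution under the accepted control; both are placeholders for the algorithms described above, not library functions:

```python
def receding_horizon(y0, K, horizon, solve_ocp, advance):
    """Receding horizon control: optimize on a sliding window of `horizon`
    time steps, but accept and apply only the first half of each solve.

    solve_ocp(y, n_steps) -> control sequence of length n_steps;
    advance(y, u_half) -> state after applying the accepted control.
    Returns the concatenated closed loop control.
    """
    half = horizon // 2
    y = y0
    u_closed = []
    for _ in range(K):
        u = solve_ocp(y, horizon)      # open-loop solve on the window
        u_half = u[:half]              # keep only the first half
        u_closed.extend(u_half)
        y = advance(y, u_half)         # re-initialize at the window midpoint
    return u_closed
```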

Figure 1
figure 1

Visualization of the receding horizon procedure. The first iteration computes the optimal control on the interval \(I_{1}\). Only the first half of it, \(u_{*}^{1}\), is accepted as optimal solution. Then the optimal control on the interval \([I_{1}/2, I_{2}/2]\) is computed. The first half, \(u_{*}^{2}\), is accepted and glued to \(u_{*}^{1}\). These two steps give us the optimal control on \(I_{1}\). We proceed iteratively up to the terminal time T

Remark 9

Using this receding horizon procedure we need to adapt the desired trajectory \(\bar{Z}\). Indeed, we cannot expect that the controls lead the crowd to the destination within one subinterval. Hence, we adapt \(\bar{Z}\) on \(I_{k}\) in the following way: we interpolate the distance between the initial centre of mass of the crowd and the desired destination \(Z_{\mathrm {des}}\) with the time steps used in one subinterval. Of course, this is not attainable for small k; nevertheless, we simulate the deterministic optimal control using this interpolation as \(\bar{Z}\) on \(t \in [I_{k-1}, I_{k}]\). Then, we compute the trajectory of the centre of mass corresponding to this solution. We expect this trajectory to be appropriate and use it in the space mapping procedure on the interval \([I_{k-1},I_{k}]\).

6 Numerical results

In the following we present numerical results underlining the feasibility of our approach. In particular, we investigate the number of space mapping iterations needed to obtain appropriate results. Further, we shall see how the number of dogs influences the success of the herding procedure. Finally, we analyse numerically whether the system is stabilized for large times \(T\gg 1\).

For the simulations we choose Morse potentials [18] to model the interaction:

$$\begin{aligned} &G_{j}\bigl( \vert X_{i} - X_{k} \vert \bigr) = \nabla P_{j}(X_{i},X_{k}),\quad j\in \{1,2\}, \\ &P_{j}(X_{i},X_{k}) = C_{r,j} e^{- \vert X_{i} - X_{k} \vert / \ell _{r,j}} - C_{a,j} e^{- \vert X_{i} - X_{k} \vert /\ell _{a,j}}. \end{aligned}$$

To realize the self-organization of the sheep we assume that they have some long range attraction and short range repulsion, i.e., we set

$$ C_{r,1} = 1,\qquad C_{a,1} = 5e^{-4}, \qquad \ell _{r,1} = 2,\qquad \ell _{a,1}= 1e^{-2}. $$

Further, we assume the dogs to scare the sheep and therefore have stronger repulsive influence. This leads to

$$ C_{a,2} = C_{a,1},\qquad \ell _{a,2}=\ell _{a,1},\qquad C_{r,2} = 1e^{-2},\qquad \ell _{r,2} =0.5. $$
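With the chain rule, the force corresponding to the Morse potential is \(P_{j}'(r)\, d/r\) for a difference vector d with \(r = \vert d \vert \). A sketch using the sheep–sheep parameters given above (the function names are ours; the zero-difference convention is an assumption to avoid self-interaction):

```python
import numpy as np

def morse_force(d, Cr, lr, Ca, la):
    """Gradient of the Morse potential P(r) = Cr*exp(-r/lr) - Ca*exp(-r/la)
    with respect to the difference vector d, evaluated at r = |d|."""
    r = np.linalg.norm(d)
    if r == 0.0:                       # no self-interaction force
        return np.zeros_like(d)
    dP = -Cr / lr * np.exp(-r / lr) + Ca / la * np.exp(-r / la)
    return dP * d / r

# sheep-sheep interaction with the parameters C_{r,1}, l_{r,1}, C_{a,1}, l_{a,1}
G1 = lambda d: morse_force(d, Cr=1.0, lr=2.0, Ca=5e-4, la=1e-2)
```

Recall that the drift carries a minus sign in front of \(G_{1}\), so a negative \(P'(r)\) corresponds to repulsion.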

Remark 10

We emphasize that the space mapping control approach discussed here can be adapted to various other applications by changing the interaction potentials or the cost functional.

The following parameters are fixed for all simulations

$$\begin{aligned} &\gamma = 1e^{-2}, \qquad u_{\mathrm{max}} = 5e^{-2},\qquad K = 20,\qquad \epsilon _{\mathrm{opt}}= 5e^{-3},\\ & dt = 1e^{-2},\qquad \alpha = 0.5, \end{aligned}$$

where dt denotes the time step size. Moreover, we choose \(\sigma (t,x) = \sigma \), i.e., the stochastic force is independent of space and time. Nevertheless, σ will be varied between simulations and is thus specified explicitly later on, as are other parameters.

6.1 Influence of the stochasticity σ

To study the influence of the stochasticity on the number of space mapping iterations, we set

$$ N=30,\qquad M = 5,\qquad T = 20, $$

run 100 Monte Carlo samples and stop the iteration if

$$ \bigl\Vert u_{f} - u_{c}^{*} \bigr\Vert / \bigl\Vert u_{c}^{*} \bigr\Vert < 0.3\quad \text{or}\quad \bigl\Vert u_{f}^{n} - u_{c}^{*} \bigr\Vert / \bigl\Vert u_{c}^{*} \bigr\Vert - \bigl\Vert u_{f}^{n+1} - u_{c}^{*} \bigr\Vert / \bigl\Vert u_{c}^{*} \bigr\Vert < 0.005 $$

for two consecutive iterates \(u_{f}^{n}\) and \(u_{f}^{n+1}\).
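This stopping test can be implemented verbatim. A small helper, assuming the discretized controls are stored as arrays (names are illustrative):

```python
import numpy as np

def sm_converged(uf_n, uf_n1, uc_star, tol_rel=0.3, tol_dec=0.005):
    """Stopping test for the space mapping iteration: the relative distance
    to the coarse optimum is small, or it decreases by less than tol_dec
    between two consecutive iterates."""
    denom = np.linalg.norm(uc_star)
    err_n = np.linalg.norm(uf_n - uc_star) / denom
    err_n1 = np.linalg.norm(uf_n1 - uc_star) / denom
    return err_n1 < tol_rel or (err_n - err_n1) < tol_dec
```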

The accuracy of the deterministic controls deteriorates as the stochastic influence increases, see Fig. 2 (up) as well as Table 1. For \(\sigma =0.03\) the stochastic influence starts to superimpose the crowd behaviour. Figure 2 (down) shows the trajectories of the centre of mass of the crowd using space mapping. We see that space mapping works well for small values of σ. As the stochasticity starts to superimpose the crowd behaviour, the space mapping technique becomes less efficient. This is expected, since for large volatility the deterministic model is not a good approximation of the stochastic one. The second part of Table 1 shows results obtained with 1000 Monte Carlo samples. The values change only slightly, which justifies fixing the number of MC samples at 100 for the following computations.

Figure 2

Top: We show the trajectories of the centre of mass of the crowd employing the optimal deterministic controls. The accuracy of the deterministic controls deteriorates as the stochastic influence increases. For \(\sigma =0.03\) the stochastic influence starts to superimpose the crowd behaviour. Bottom: The trajectories of the centre of mass of the crowd resulting from the space mapping procedure. We see that the trajectory corresponding to \(\sigma = 0.02\) was improved

Table 1 Numerical investigation of the space mapping procedure. For \(\sigma = 0.01\) no space mapping is needed; the optimal deterministic control is accepted. The number of space mapping steps increases with increasing stochastic strength. The \(L^{2}\)-error of the trajectory of the centre of mass compared to the centre of mass of the optimal deterministic solution increases as well for larger σ. The space mapping procedure decreases the error by a factor of three for \(\sigma = 0.02\). As the stochasticity starts to superimpose the crowd behaviour for \(\sigma \ge 0.03\), the space mapping approach decreases the error only marginally. The second part of the table shows results obtained with 1000 Monte Carlo samples. The values change only slightly, which justifies fixing the number of MC samples to 100 for the following computations

Remark 11

We would like to emphasize that a basic Monte Carlo approach works fine in the present setting. For more involved problems, it may be necessary to use multilevel Monte Carlo techniques in order to obtain efficient approximations of the stochastic states.

6.2 Influence of the number of dogs M

In the following figures, we depict sheep as blue dots and dogs as red triangles. The trajectories of the dogs are depicted as red lines, and the trajectory of the centre of mass of the crowd as a blue line. A cross marks the desired location \(Z_{\mathrm{des}}\).

Varying the number of dogs leads to very different controls which can be visualized implicitly by the trajectories of the dogs. For this study we chose the parameter values

$$ \epsilon _{\mathrm{SM}} = 0.5,\qquad N=20,\qquad \sigma = 0.01. $$

Moreover, instead of fixing T we used \(| \bar{X} - Z_{\mathrm{des}}| < 0.05\) as stopping criterion and performed 100 Monte Carlo runs. The change of the stopping criterion is necessary because different numbers of dogs are expected to need different times to steer the crowd to the desired destination.

Figure 3 compares the trajectories of the dogs (red) and the resulting trajectory of the centre of mass of the crowd (blue). Note that a single dog has a hard time leading the crowd, as the iteration only stops at \(T_{1} = 3400\). The situation improves for two dogs, which succeed at time \(T_{2} = 80\). Three dogs finish at time \(T_{3} = 60\). In the remaining cases we have \(T_{4} = 60\), \(T_{5} = 70\), \(T_{6} = 40\).

Figure 3

Space mapping trajectories for 1–6 dogs. The simulations are stopped as soon as \(| \bar{X} - Z_{\mathrm{des}}| < 0.05\) holds. The corresponding times are \(T_{1} =3400\), \(T_{2} = 80\), \(T_{3} = 60\), \(T_{4} = 60\), \(T_{5} = 70\), \(T_{6} = 40\), where the subscript refers to the number of dogs involved in the simulation. We see that already one dog is able to steer the crowd. Nevertheless, more dogs significantly decrease the time needed for the steering process. The stochastic influence in the system is implicitly visible in the trajectory of the dog in the top-left figure; for a deterministic system one would expect a homogeneous helix

Remark 12

We emphasize that the initial positions of the dogs were chosen manually and not included in the optimization. Hence, we cannot deduce the optimal number of dogs from these results.

6.3 Stabilization

Next, we show snapshots of a simulation with \(T = 250\), \(\gamma = 10^{-3}\) and 5 dogs in order to investigate whether the herding process stabilizes. Indeed, we see in Fig. 4 that the dogs begin to circle around the crowd once the task of steering the centre of mass to the destination \(Z_{\mathrm{des}}\) is achieved. This behaviour can be interpreted as a stabilization of the system. For this simulation the maximum number of space mapping iterations was limited to two. The stabilization indicated by this example is underlined by the following computations. For simplicity we set the friction parameter to \(\alpha =0\) and consider the deterministic model.

Figure 4

Snapshots of the optimization procedure at \(t=10, 25, 50, 75, 125, 250\) (top-left to bottom-right). We see that the five dogs are able to steer the centre of mass of the crowd to the destination and that the crowd stays together. The latter is new information that is not accessible by investigating only the centre of mass. Moreover, we see a stabilization as the dogs begin to circle around the crowd (\(t=75, 125, 250\)) after the centre of mass has reached the desired destination. The black circle underlines the discussion of the stabilization of the crowd

Herding one sheep with two dogs. We first investigate the case of two dogs and only one sheep. Suppose the sheep is located at the destination \(Z_{\mathrm{des}}\) and the dogs are initially positioned on a circle with radius \(\alpha p_{1}\) around \(Z_{\mathrm{des}}\), one at each end of a diameter. Then the positions of the dogs can be parametrized with the help of \(p_{1}\) as

$$a_{1} = Z_{\mathrm{des}} + \alpha p_{1},\qquad a_{2} = Z_{\mathrm{des}} - \alpha p_{1}. $$

In this setting the deterministic state equations are given by

$$\begin{aligned} &\dot{x}_{1} = v_{1},\qquad \dot{v}_{1} = -G_{2}\bigl( \vert x_{1} - a_{1} \vert \bigr) - G_{2}\bigl( \vert x_{1}-a_{2} \vert \bigr), \\ &\dot{a_{i}} = u_{i},\quad i=1,2. \end{aligned}$$

Choosing the initial conditions \(x_{1}(0) = Z_{\mathrm{des}}\), \(v_{1}(0) = 0\), \(a_{1}(0) = Z_{\mathrm{des}} + \alpha p_{1}\), \(a_{2}(0) = Z_{\mathrm{des}} - \alpha p_{1}\), we obtain, due to the radial symmetry of the interaction potentials,

$$\dot{v}_{1} = -G_{2}\bigl( \vert x_{1} - a_{1} \vert \bigr) - G_{2}\bigl( \vert x_{1}-a_{2} \vert \bigr) = 0. $$

Thus, the system is stable for \(u_{i} = 0\). Note that we have reduced the system from two controls to a single control affecting both dogs at the same time. One can even move the dogs via \(\dot{p}_{1} = f\) for some function f without any effect on the position of the sheep. This shows the stability of the configuration in this toy example.
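The cancellation of the two dog forces can be checked numerically. The sketch below uses a hypothetical radially symmetric repulsion profile \(g(s)=e^{-s}\) as a stand-in for the unspecified \(G_{2}\):

```python
import numpy as np

def radial_force(r, g=lambda s: np.exp(-s)):
    """Force of a radially symmetric potential: magnitude g(|r|) in
    direction r/|r| (g is a stand-in for the profile of G_2)."""
    s = np.linalg.norm(r)
    return g(s) * r / s

Z_des = np.array([0.0, 0.0])
alpha_p1 = 0.5 * np.array([0.3, 0.4])        # alpha * p_1
a1, a2 = Z_des + alpha_p1, Z_des - alpha_p1  # dogs on a diameter

# Acceleration of the sheep sitting at Z_des: the two contributions cancel
dv1 = -radial_force(Z_des - a1) - radial_force(Z_des - a2)
```

Since \(Z_{\mathrm{des}} - a_{1} = -(Z_{\mathrm{des}} - a_{2})\), the two force vectors are exact negatives of each other and `dv1` vanishes.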

Herding 2N sheep with 2M dogs. The observation of the previous paragraph can be generalized to the following framework involving an even number 2M of dogs and an even number 2N of sheep.

We assume the initial configuration to fulfil the following assumptions:

  1. (A1)

    \(v_{i} = 0, i=1,\dots,2N\).

  2. (A2)

    The centre of mass is located at the destination \(Z_{\mathrm{des}}\), i.e.

    $$\frac{1}{2N} \sum_{i=1}^{2N} x_{i} = Z_{\mathrm{des}}. $$
  3. (A3)

    For each \(x_{i}\) there exists \(x_{j}\) with \(i \ne j\) and \(x_{i} - a_{k} = -(x_{j} - a_{\ell})\), where \(a_{k}\) and \(a_{\ell}\) are positioned on a circle around \(Z_{\mathrm{des}}\), one at each end of a diameter.

Further, recall that we have assumed the interaction potentials to be radially symmetric. Using these assumptions we find

$$\begin{aligned} \frac{d}{dt} \Biggl(\frac{1}{2N} \sum_{i=1}^{2N} x_{i} \Biggr) &= \frac{1}{2N} \sum_{i=1}^{2N} v_{i} \\ &= \frac{1}{2N} \sum_{i=1}^{2N} \Biggl(v_{i} (0) - \int _{0}^{t} \Biggl[ \frac{1}{2N} \sum _{j=1}^{2N} G_{1}\bigl( \vert x_{i} - x_{j} \vert \bigr) + \sum _{k=1}^{2M} G_{2}\bigl( \vert x_{i} - a_{k} \vert \bigr) \Biggr] \,\mathrm{d} s \Biggr) \\ &= 0. \end{aligned}$$

Thus, setting \(u_{k} =0\) for all \(k=1,\dots,2M\), we obtain stability of the location of the centre of mass. Moreover, if the velocities of the dogs are changed as discussed in the setting with one sheep and two dogs, the position of the centre of mass is conserved as well. These findings are illustrated by the black circle added to the terminal configuration of the simulation shown in Fig. 4. The dog at the bottom of the picture is far away from the crowd, such that its contribution to the forces can be neglected.

Of course, the stochastic influence and the error of the numerical integration will lead to the dogs circling around the crowd. Thus, we do not expect to obtain numerically a stable configuration with all sheep and dogs standing still.
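The conservation of the centre of mass under (A1)–(A3) can also be verified numerically. The sketch below builds a mirror-symmetric configuration of 2N sheep and 2M dogs, uses a hypothetical radially symmetric profile \(e^{-s}\) for both interaction forces, integrates the deterministic system with \(\alpha = 0\) and \(u = 0\) by explicit Euler, and checks that the centre of mass stays at \(Z_{\mathrm{des}}\) up to round-off:

```python
import numpy as np

def pair_force(r, g=lambda s: np.exp(-s)):
    """Radially symmetric force: magnitude g(|r|), direction r/|r|."""
    s = np.linalg.norm(r)
    return g(s) * r / s if s > 0 else np.zeros_like(r)

def accel(x, a):
    """Deterministic accelerations (alpha = 0, u = 0): averaged
    sheep-sheep repulsion plus sheep-dog repulsion."""
    dv = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                dv[i] -= pair_force(x[i] - x[j]) / len(x)
        for ak in a:
            dv[i] -= pair_force(x[i] - ak)
    return dv

rng = np.random.default_rng(1)
Z_des = np.zeros(2)
half = rng.normal(size=(5, 2))
x = np.vstack([Z_des + half, Z_des - half])    # (A2), (A3): mirrored sheep
v = np.zeros_like(x)                           # (A1): zero velocities
d = rng.normal(size=(3, 2))
d /= np.linalg.norm(d, axis=1, keepdims=True)  # dogs on the unit circle,
a = np.vstack([Z_des + d, Z_des - d])          # one pair per diameter

drift0 = accel(x, a).mean(axis=0)              # net force on centre of mass
dt = 1e-2
for _ in range(50):                            # explicit Euler, u = 0
    v = v + dt * accel(x, a)
    x = x + dt * v
```

By the pairwise antisymmetry of the forces, `drift0` vanishes up to round-off and the centre of mass remains at the origin during the integration.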

7 Space-mapping using the mean-field limit

Crowds consisting of many individuals, i.e. \(N\gg 1\), are often investigated from a mesoscopic point of view with the help of a mean-field equation, see e.g. [3, 6, 7, 9]. Following the steps in [8], this equation can be obtained via the empirical density

$$f^{N}(t,x,v) = \frac{1}{N}\sum_{i=1}^{N} \delta _{0}\bigl(x - x_{i}(t)\bigr)\otimes \delta _{0}\bigl(v - v_{i}(t)\bigr). $$

Formally passing to the mean-field limit \(N\to \infty \) leads to the deterministic optimization problem

$$ J(f,u; \bar{Z},\bar{u}):= \int _{0}^{T} \frac{1}{2} \bigl\Vert {\mathbb {E}}\bigl[f(t)\bigr] - \bar{Z}(t) \bigr\Vert ^{2} + \frac{\gamma }{2} \bigl\Vert u(t)- \bar{u}(t) \bigr\Vert _{\mathbb {R}^{MD}}^{2} \,\mathrm {d}t, $$

subject to the state system given by

$$\begin{aligned} &\partial _{t} f + v\cdot \nabla _{x}f = \operatorname {div}_{v} \Biggl( \Biggl[(G_{1} \ast f) + \sum _{m=1}^{M} G_{2}(x - a_{m}) + \alpha v \Biggr] f \Biggr), \\ &\frac{d}{dt} a_{m} = u_{m},\quad m=1,\dots,M, \end{aligned}$$

with initial conditions \(f(0,x,v) = f_{0}(x,v)\), \(a_{m}(0) = a_{0}^{m}\) for \(m=1,\dots,M\). Here we used

$${\mathbb {E}}\bigl[f(t)\bigr] = \int _{\mathbb{R}^{D}\times \mathbb{R}^{D}} x f(t,x,v) \,\mathrm{d}x \,\mathrm{d}v. $$

Now, drawing the random initial conditions i.i.d. from \(f_{0}(x,v)\), it is well known that \(f(t,x,v)\) describes the probability of finding a particle at time t at position x with velocity v. Hence, one could use the deterministic mean-field problem for f as coarse model in a space mapping in order to control the stochastic system for many particles. Note that a similar deterministic optimization problem was solved in [8].
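As a minimal illustration (not taken from the paper), the quantity \({\mathbb{E}}[f(t)]\) can be estimated from particle data by averaging positions over the particles of the empirical density and over Monte Carlo samples:

```python
import numpy as np

def expected_position(x_samples):
    """Monte Carlo estimate of E[f(t)]: mean position over the N particles
    and the n_mc samples; x_samples has shape (n_mc, N, D)."""
    return x_samples.mean(axis=(0, 1))

# Synthetic check with particles scattered around a known centre
rng = np.random.default_rng(2)
centre = np.array([0.5, -0.2])
x_samples = centre + 0.05 * rng.normal(size=(1000, 30, 2))
est = expected_position(x_samples)
```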

Remark 13

We want to emphasize that in the common noise case the limiting equation is not deterministic but a stochastic PDE [30]. Thus, it is not clear whether it is an appropriate choice as coarse model. Moreover, the cost of computing optimal controls with the SPDE would likely be very high.

8 Conclusion and outlook

We discussed a space mapping approach in combination with receding horizon control for the closed loop control of a stochastic interacting particle system. The numerical results underline that the method is feasible for interacting particle systems with small stochastic perturbation. Further, they indicate that a sub-optimal control for the stochastic system is found efficiently after only a few space mapping iterations.

In the near future, we plan to use the space mapping approach to control a stochastic system involving a large number of interacting particles. In this case, the mean-field approximation can serve as coarse model for the space mapping approach. Moreover, a rigorous analysis of the space mapping procedure applied to stochastic problems is of interest, as is an investigation of the stabilizing effect of the feedback control. A rigorous generalization to the setting with common noise and the study of its influence on the space mapping performance are interesting future projects as well.


  1. Albi G, Balague D, Carrillo JA, von Brecht J. Stability analysis of flock and mill rings for second order models in swarming. J Appl Math. 2017;74:794–818.


  2. Albi G, Pareschi L. Selective model-predictive control for flocking systems. Commun Appl Ind Math. 2018;9(2):4–21.


  3. Albi G, Pareschi L. Modeling self-organized systems interacting with few individuals: from microscopic to macroscopic dynamics. Appl Math Lett. 2013.

  4. Bakr MH, Bandler JW, Madsen K, Søndergaard J. Review of the space mapping approach to engineering optimization and modeling. Optim Eng. 2000;1:241–76.


  5. Bandler JW, Biernacki RM, Chen SH, Grobelny PA, Hemmers RH. Space mapping technique for electromagnetic optimization. IEEE Trans Microw Theory Tech. 1994;42(12).

  6. Bolley F, Cañizo JA, Carrillo JA. Mean-field limit for the stochastic Vicsek model. Appl Math Lett. 2012;25(3):339–43.


  7. Burger M, Pinnau R, Totzeck C, Tse O. Mean-field optimal control and optimality conditions in the space of probability measures. 2019. arXiv:1902.05339.

  8. Burger M, Pinnau R, Totzeck C, Tse O, Roth A. Instantaneous control of interacting particle systems in the mean-field limit. 2019. arXiv:1903.12407.

  9. Carrillo JA, Fornasier M, Toscani G, Vecil F. Particle, kinetic and hydrodynamic models of swarming. In: Mathematical modeling of collective behavior in socio-economic and life sciences. Berlin: Springer; 2010. p. 297–336.


  10. Carrillo JA, Klar A, Roth A. Single to double mill small noise transitions via semi-lagrangian finite volume methods. 2014.

  11. Carrillo JA, Martin S, Panferov V. A new interaction potential for swarming models. Phys D, Nonlinear Phenom. 2013;1:112–26.


  12. Carrillo JA, Huang Y, Martin S. Nonlinear stability of flock solutions in second-order swarming models. Nonlinear Anal, Real World Appl. 2014;17:332–43.


  13. Chraibi M, Schadschneider A, Seyfried A. On force-based modeling of pedestrian dynamics. In: Modeling, simulation and visual analysis of crowds. Berlin: Springer; 2013. p. 23–41. Chap. 2.


  14. Cristiani E, Piccoli B, Tosin A. Modeling self-organization in pedestrians and animal groups from macroscopic and microscopic viewpoints. In: Mathematical modeling of collective behavior in socio-economic and life sciences. Berlin: Springer; 2010. p. 337–64.


  15. Couzin ID, Krause J. Self-organization and collective behavior in vertebrates. Adv Study Behav. 2003;52:1–67.


  16. Couzin ID, Krause J, James R, Ruxton GD, Franks NR. Collective memory and spatial sorting in animal groups. J Theor Biol. 2002;218:1–11.


  17. Cucker F, Smale S. Emergent behavior in flocks. IEEE Trans Autom Control. 2007;52:852–62.


  18. D’Orsogna MR, Chuang Y, Bertozzi A. Self-propelled particles with soft-core interactions: patterns, stability and collapse. Phys Rev Lett. 2006;96(10):104302.


  19. Echeverría D, Hemker PW. Space mapping and defect correction. Comput Methods Appl Math. 2005;5(2):107–36.


  20. El Karoui N, Peng S, Quenez MC. Backward stochastic differential equations in finance. Math Finance. 1997;7(1):1–71.


  21. Escobedo R, Ibañez A, Zuazua E. Optimal strategies for driving a mobile agent in a “guidance by repulsion” model. Commun Nonlinear Sci Numer Simul. 2016;39:58–72.


  22. Gong B, Liu W, Tang T, Zhao W, Zhou T. An efficient gradient projection method for stochastic optimal control problems. SIAM J Numer Anal. 2017;55(6):2982–3005.


  23. Göttlich S, Teuber C. Space mapping techniques for the optimal inflow control of transmission lines. Optim Methods Softw. 2018;33(1):120–39.


  24. Haskovec J. Flocking dynamics and mean-field limit in Cucker–Smale-type model with topological interactions. Phys D, Nonlinear Phenom. 2013. 42–51.

  25. Herty M, Banda M. Towards a space mapping approach to dynamic compressor optimization of gas networks. Optim Control Appl Methods. 2011;32(3):253–69.


  26. Hintermüller M, Vicente LN. Space mapping for optimal control of partial differential equations. SIAM J Optim. 2005;15(4):1002–25.


  27. Hinze M, Pinnau R, Ulbrich M, Ulbrich S. Optimization with PDE constraints. Berlin: Springer; 2009.


  28. Jabin PE, Wang Z. Mean field limit for stochastic particle systems. In: Active particles. vol. 1. Berlin: Springer; 2017. p. 379–402.


  29. Kötz J, Kolb O, Göttlich S. A combined first and second order traffic network model. 2019. arXiv:1903.06744.

  30. Lacker D. A general characterization of the mean field limit for stochastic differential games. Probab Theory Relat Fields. 2016;165:581–648.


  31. Marheineke N, Pinnau R, Reséndiz E. Space mapping-focused control techniques for particle dispersions in fluids. Optim Eng. 2012;13:101–20.


  32. Øksendal B. Stochastic differential equations. Berlin: Springer; 2003.


  33. Parrish JK, Hamner WM. Animal groups in three dimensions: how species aggregate. Cambridge: Cambridge University Press; 1997.


  34. Pfeiffer L. Two approaches to stochastic optimal control problems with a final-time expectation constraint. Appl Math Optim. 2018;77(2):377–404.


  35. Pinnau R, Thömmes G. Space mapping and optimal control in radiative transfer. AIP Conf Proc. 2009;1168:1061–3.


  36. Shah HR, Hosder S, Leifsson LT, Koziel S, Tesfahunegn YA. Multi-fidelity robust aerodynamic design optimization under mixed uncertainty. In: 17th AIAA non-deterministic approaches conference. 2015. p. 917.




Acknowledgements

Not applicable.

Availability of data and materials

Not applicable.

Authors’ information

RP holds a professorship at TU Kaiserslautern. CT has a postdoc position at TU Kaiserslautern.


Funding

The authors received no funding for this work.

Author information

Contributions

RP has made substantial contributions to the conception, the design of the work and the interpretation of data. CT has made substantial contributions to the design of the work, the interpretation of data and the creation of new software used in the work. All authors have approved the submitted version and have agreed both to be personally accountable for the author’s own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Corresponding author

Correspondence to Claudia Totzeck.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit




Cite this article

Totzeck, C., Pinnau, R. Space mapping-based receding horizon control for stochastic interacting particle systems: dogs herding sheep. J. Math. Industry 10, 11 (2020).



Mathematics Subject Classification

  • 60H10
  • 34F05
  • 49J15


Keywords

  • Space mapping
  • Receding horizon control
  • Interacting particle systems
  • Stochastic optimization
  • Numerics