 Research
 Open Access
Modelling and optimization applied to the design of fast hydrodynamic focusing microfluidic mixer for protein folding
 Benjamin Ivorra^{1},
 Miriam R. Ferrández^{2},
 María Crespo^{3},
 Juana L. Redondo^{2},
 Pilar M. Ortigosa^{2},
 Juan G. Santiago^{4} and
 Ángel M. Ramos^{1}
https://doi.org/10.1186/s13362-018-0046-3
© The Author(s) 2018
 Received: 4 December 2017
 Accepted: 8 June 2018
 Published: 15 June 2018
Abstract
In this work, we consider a microfluidic mixer that uses hydrodynamic focusing to initiate the folding process of a certain protein. To trigger these molecular changes, the concentration of the denaturant, which is introduced into the mixer together with the protein, has to be reduced to a given value within a short period of time, known as the mixing time. In this context, this article is devoted to optimizing the design of the mixer, focusing on its shape and its flow parameters, with the aim of minimizing its mixing time. First, we describe the physical phenomena involved through a mathematical model that allows us to obtain the mixing time for a given device. Then, we formulate an optimization problem that considers the mixing time as the objective function and details the design parameters related to the shape and the flow of the mixer. To deal with this problem, we propose an enhanced optimization algorithm based on the hybridization of two techniques: a genetic algorithm as the core method and a multilayer line search methodology based on the secant method, which aims to improve the initialization of the core method. More precisely, in our hybrid approach, the core optimization is implemented as a subproblem to be solved at each iteration of the multilayer algorithm, starting from the initial conditions that the latter provides. Before applying this methodology to the mixer design problem, we validate it on a set of benchmark problems and then compare its results to those obtained with other classical global optimization methods. As shown in the comparison, for the majority of those problems, our methodology needs fewer evaluations of the objective function, has higher success rates, and is more accurate than the other algorithms considered. For those reasons, it has been selected for solving the computationally expensive problem of optimizing the mixer design.
The optimized device obtained shows a great reduction in mixing time with respect to state-of-the-art mixers.
Keywords
 Global optimization method
 Metaheuristic algorithms
 Mathematical modelling
 Microfluidic mixer design
1 Introduction
Proteins are biomolecules composed of one or more long chains of amino acids. Protein folding refers to the processes by which these amino acids interact with each other and produce a well-defined three-dimensional structure, called the folded protein, able to perform a wide range of biological functions [1]. The fundamental principles of protein folding have practical applications in genome research, drug discovery, molecular diagnostics and food engineering. Protein folding can be initiated, for instance, by inducing changes in chemical potential (e.g., changes in the concentration of a chemical species).
The aim of this work is to optimize the main design parameters of a particular hydrodynamic focusing microfluidic mixer (mixer shape and flow injection velocities) in order to minimize the mixing time of this device, taking into account that, to date, the best mixer designs achieve mixing times of approximately 1.0 μs [3].
Several optimization algorithms have already been improved by choosing appropriate initialization techniques. To cite an instance, the Direct Tabu Search algorithm (DTS) [19, 20] relies on a modification of the cost function which adds penalty terms and prevents the algorithm from revisiting previously investigated neighborhoods. Other methodologies, such as the Greedy Randomized Adaptive Search Procedure (GRASP), combine greedy solutions with a local search. Line search methods [6, 21] have likewise been modified by coupling them with other optimization algorithms. To give an example, in [22] the authors combined the Enhanced Unidirectional Search method [23] with the 3-2-3 line search scheme [24], and the resulting optimization method was proven to be satisfactory for solving high-dimensional continuous nonlinear optimization problems. In the context of multi-objective optimization, we underline the approach presented in [25], a method composed of two line search algorithms: one establishes an initial choice on the Pareto front and the other sets a suitable initial condition for exploring the front.
In this paper, we propose a metaheuristic technique based on a specific GA combined with a line search method to dynamically upgrade its population. Our approach is validated through multiple test cases [26]. The results are then compared with those given by the following optimization algorithms: SD, GA, DTS, Continuous GRASP (CGR), a Controlled Random Search algorithm (CRS) [27], and a Differential Evolution algorithm (DE) [28].
This work is organized as follows. Section 2 introduces a mathematical model which computes the mixing time for a given mixer design (i.e., mixer geometry and flow injection velocities). In Sect. 3, we state the optimization problem which aims to minimize the mixing time by choosing a suitable mixer design. In Sect. 4, we describe the methodology used to solve the considered optimization problem. Finally, in Sect. 5, we present and discuss the results obtained during this work. First, we validate the optimization algorithm on various benchmark problems. Then, we detail and analyze the optimized microfluidic mixer.
2 Mathematical modelling
In order to simplify the notation, we introduce \(\Omega=\Omega_{2D,q}\). On the boundary of Ω, denoted by Γ, we define: \(\Gamma_{c}\) the boundary representing the center inlet; \(\Gamma_{s}\) the boundary representing the side inlet; \(\Gamma_{e}\) the boundary representing the outlet; \(\Gamma_{w1}\) the boundary representing the wall defining the lower corner; \(\Gamma_{w2}\) the boundary representing the wall defining the upper corner; \(\Gamma_{a}\) the boundary representing the Y-axis symmetry. A schematic representation of these boundaries is given in Fig. 2.
System (2) is completed with the following boundary conditions:
3 Optimization problem
Here, we optimize the main design parameters (mixer shape and flow injection velocities) for minimizing the time required to attain a desired conversion of the denaturant concentration. This denaturant conversion, in turn, induces the folding process of the protein.
First, we specify the set of parameters defining a particular mixer design. Then, we introduce the optimization problem to be solved in Sect. 5.
In order to solve Problem (5) numerically, we considered the numerical implementation detailed below.
The solution of System (2)–(4) was computed numerically using the software Matlab (www.mathworks.com) and COMSOL Multiphysics 5.2a (www.comsol.com), the latter based on the Finite Element Method (FEM). More specifically, we considered Lagrange P2–P1 elements to stabilize the pressure and to satisfy the Ladyzhenskaya–Babuška–Brezzi stability condition: second-order Lagrange elements model the flow field and concentration components, while linear elements are used for the pressure. The Navier–Stokes equations were solved using Galerkin least-squares streamline and crosswind diffusion approaches to avoid numerical oscillations. The convection–diffusion equation was solved using an upwind scheme. A Direct Damped Newton method was applied to solve the associated linear systems. Finally, when computing the mixing time (defined by Equation (6)), we made use of the solutions of the previous FEM model and a trapezoidal approximation of the integral. We refer to the book [31] for a complete description of these techniques.
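To illustrate the last step, the mixing-time evaluation from the FEM solution can be sketched as follows. This is a simplified, hypothetical reconstruction: the arrays `y`, `c` and `u` stand for samples of position, normalized denaturant concentration and axial velocity along the mixer centerline (Equation (6) itself is not reproduced here), and the thresholds play the role of the parameters α and ω of Equation (6).

```python
import numpy as np

def mixing_time(y, c_norm, u, alpha=0.9, omega=0.3):
    """Travel time along the centerline while the normalized denaturant
    concentration drops from alpha to omega (trapezoidal quadrature)."""
    i0 = int(np.argmax(c_norm <= alpha))   # first sample with c <= alpha
    i1 = int(np.argmax(c_norm <= omega))   # first sample with c <= omega
    # Accumulate dt = dy / u between the two threshold crossings.
    return np.trapz(1.0 / u[i0:i1 + 1], y[i0:i1 + 1])

# Synthetic example: exponential concentration decay, uniform 1 m/s flow.
y = np.linspace(0.0, 40e-6, 400)      # centerline positions (m)
c = np.exp(-y / 10e-6)                # normalized concentration
u = np.full_like(y, 1.0)              # axial velocity (m/s)
t_mix = mixing_time(y, c, u)
```

With a uniform velocity, the travel time reduces to the distance between the two threshold crossings; in the real device the velocity field along the centerline is strongly nonuniform, which is exactly why the quadrature is needed.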
The model parameters considered in this work are associated with the denaturant guanidine hydrochloride (GdCl), a chaotropic agent commonly used for protein folding. Its thermophysical coefficients can be approximated by those of water; that is, its density is \(\rho=1010\) kg m^{−3} and its dynamic viscosity is \(\eta=9.8 \times10^{-4}\) kg m^{−1} s^{−1}. In addition, the diffusion coefficient of GdCl in the background buffer (presumed to be similar to water) is \(D=2\times10^{-9}\) m^{2} s^{−1}. Taking into account those coefficients and the constraint \(\mathit{Re}=\rho v L/ \eta\leq15\) previously set, the maximum side injection velocity is \(u_{s} \leq7\) m s^{−1}. The values of α and ω in Equation (6) have been adapted to GdCl and taken here as 0.9 and 0.3, respectively. It has been empirically observed that a three-fold decrease of the GdCl concentration induces the folding process of several proteins (see for instance Refs. [3, 14]).
4 Optimization method
In this section, we describe in detail the optimization algorithm and the parameters used to solve Problem (5).
The considered optimization algorithm, called Genetic Multi-Layer Algorithm (GMA), consists of the combination of two methods: a genetic algorithm (GA) and a multilayer secant algorithm (MSA). On the one hand, the GA [7] approximates the solution of (5) and, on the other hand, the MSA [32, 33] provides suitable initial populations for the GA. Both algorithms have been validated on several industrial problems in Refs. [13–15, 18, 33–35].
GAs are metaheuristic global optimization methods based on a set of points, known as the population of individuals, which evolves through stochastic processes inspired by the Darwinian theory of species [7]. They are widely applied to many complex optimization problems because they are intrinsically parallel and can deal with large-scale problems. Moreover, regarding their search capacity, they perform well on functions presenting several local minima. Another advantage is that they require neither a-priori knowledge about the objective function nor sensitivity computations. However, it is important to point out some of their weaker characteristics: they are less accurate and converge more slowly than other algorithms, for instance, those based on gradients. Before explaining the methodology used to mitigate these drawbacks, we detail the GA used in this work.
For the considered GA, the user must set four parameters: the number of individuals in the population denoted by \(N_{p} \in \mathbb{N}\), the number of generations \(N_{g} \in \mathbb{N}\), the mutation probability \(p_{m} \in[0,1]\) and the crossover probability \(p_{c}\in[0,1]\). Additionally, the GA method needs an initial population \(X^{0}\) of \(N_{p}\) individuals belonging to the search space Φ, i.e., \(X^{0} = \{ x^{0} _{j} \in\Phi, j=1,\ldots,N_{p}\}\).
 1. Selection: The first operator to be applied is the selection procedure, which consists in randomly choosing \(N_{p}\) individuals (possibly with repetition) among the \(N_{p}\) individuals belonging to the previous population \(X^{i}\). The intermediate population obtained after applying this first operator, denoted by \(X^{i+1, 1}\), is calculated as:
$$X^{i+1,1} = {\mathcal {S}}^{i} X^{i}, $$
where \({\mathcal {S}}^{i}\) is a \((N_{p},N_{p})\)-matrix whose entries \({\mathcal {S}}^{i} _{j,k}\) are defined as:
$${\mathcal {S}}^{i} _{j,k} = \textstyle\begin{cases} 1 &\text{if the }k\text{th individual of }X^{i}\text{ is the }j\text{th selected individual,} \\ 0 &\text{otherwise.} \end{cases} $$
It is important to mention that each individual \(x_{j}^{i} \in X^{i}\), with \(j=1,\ldots,N_{p}\), is selected during this process with probability \(T^{-1}(x_{j}^{i})/ \sum_{k=1} ^{N_{p}} T ^{-1}(x_{k}^{i})\).
 2. Crossover: Once the selected population \(X^{i+1, 1}\) is obtained, its individuals are considered in consecutive pairs for the crossover, which carries out an exchange of data between them with probability \(p_{c}\). Analogously to the selection procedure, this operation can be expressed mathematically as:
$$X^{i+1,2} = {\mathcal {C}}^{i} X^{i+1,1}, $$
where \({\mathcal {C}}^{i}\) is also a real-valued \((N_{p},N_{p})\)-matrix. As said previously, the crossover operates on each pair of consecutive individuals, i.e., the ones placed at the \(2j-1\) and 2j rows of \(X^{i+1, 1}\), with \(j=1,\dots\), floor(\(N_{p}/2\)). (Notice that floor(y) is the function providing the nearest integer lower than or equal to y.) Therefore, the coefficients corresponding to those rows in the matrix \({\mathcal {C}}^{i}\) are given by:
$${\mathcal {C}}_{2j-1,2j-1}^{i}=\lambda_{1}, \qquad {\mathcal {C}}_{2j-1,2j}^{i}=1-\lambda_{1}, \qquad {\mathcal {C}}_{2j,2j}^{i}=\lambda_{2},\qquad {\mathcal {C}}_{2j,2j-1} ^{i}=1-\lambda_{2}, $$
where:

with probability \(p_{c}\), the considered rows exchange data; thus, \(\lambda_{1}\) and \(\lambda_{2}\) are randomly chosen in \(]0,1[\), considering a uniform distribution;

otherwise, with probability \(1-p_{c}\), the considered individuals are kept in the next intermediate population \(X^{i+1,2}\) without any changes, i.e., \(\lambda_{1}=\lambda_{2}=1\).

The remaining coefficients of \({\mathcal {C}}^{i}\) are set to 0. Finally, if \(N_{p}\) is odd, the last element of the population \(X^{i+1,1}\) does not have a partner, so it is directly copied into \(X^{i+1,2}\) by setting \({\mathcal {C}}_{N_{p},N_{p}}^{i}=1\).
 3. Mutation: Unlike the previous processes, this mechanism operates individually on each row of \(X^{i+1,2}\). More specifically, the mutation procedure consists in randomly perturbing, with probability \(p_{m}\), the individual corresponding to a row of \(X^{i+1,2}\). It can be written as
$$X^{i+1,3} =X^{i+1,2}+{\mathcal {M}}^{i}, $$
where the real-valued \((N_{p},N)\)-matrix \({\mathcal {M}}^{i}\) has zeros at its jth row, with \(j=1,\ldots,N_{p}\), with probability \(1-p_{m}\), or, otherwise, a vector \(m_{j}\in \mathbb{R}^{N}\) randomly generated considering a uniform distribution in the subset of \(\mathbb{R}^{N}\) such that \(x_{j}^{i+1,2} + m_{j} \in\Phi\).
 4. Elitism: After applying the three previous mechanisms, most of the individuals in the new population \(X^{i+1,3}\) may differ from those in the previous population \(X^{i}\). Hence, the individual \(x^{i}_{b}\), with \(b \in\{1,\ldots,N _{p}\}\), having the best value of the objective function T in \(X^{i}\) may no longer be included in \(X^{i+1,3}\). Furthermore, it may occur that none of the individuals in \(X^{i+1,3}\) improves on the value \(T(x^{i}_{b})\). The purpose of the elitism procedure is to copy the individual \(x^{i}_{b}\) into the population \(X^{i+1}\), more specifically into its bth row. If several individuals share the best value of T, one of them is randomly selected to be kept. Using matrix operations, the elitism process can be formulated as follows:
$$X^{i+1} = \bigl(I_{N_{p}} - {\mathcal {E}}^{i} \bigr) X^{i+1,3} + {\mathcal {E}}^{i} X^{i}, $$
where \(I_{N_{p}}\) denotes the \(N_{p}\)-dimensional identity matrix and the real-valued matrix \({\mathcal {E}}^{i}\) of size \((N_{p},N_{p})\) has all its coefficients equal to zero, except for the entry \({\mathcal {E}}_{b,b}^{i}=1\) when the individual \(x^{i}_{b}\) has a better value of T than all the individuals in \(X^{i+1,3}\).
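The four operators above can be sketched compactly in the same matrix form. The following NumPy implementation is an illustrative simplification, not the exact code of the paper: inverse-fitness selection probabilities, a box-shaped search space Φ, and mutation by drawing a fresh uniform point of the box are all assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_generation(X, T, p_c, p_m, lower, upper):
    """One GA generation written with the matrix operators S, C, M, E."""
    Np, N = X.shape
    f = np.array([T(x) for x in X])

    # 1. Selection: draw Np rows with probability proportional to 1/T.
    w = (1.0 / f) / np.sum(1.0 / f)
    S = np.zeros((Np, Np))
    S[np.arange(Np), rng.choice(Np, size=Np, p=w)] = 1.0
    X1 = S @ X

    # 2. Crossover: blend consecutive pairs with probability p_c.
    C = np.eye(Np)
    for j in range(0, Np - 1, 2):
        if rng.random() < p_c:
            l1, l2 = rng.random(), rng.random()
            C[j, j], C[j, j + 1] = l1, 1.0 - l1
            C[j + 1, j + 1], C[j + 1, j] = l2, 1.0 - l2
    X2 = C @ X1

    # 3. Mutation: with probability p_m, move a row to a fresh point of the box
    #    (so that x + m stays inside the admissible space).
    M = np.zeros((Np, N))
    for j in range(Np):
        if rng.random() < p_m:
            M[j] = rng.uniform(lower, upper, size=N) - X2[j]
    X3 = X2 + M

    # 4. Elitism: restore the previous best individual if it was lost.
    b = int(np.argmin(f))
    if min(T(x) for x in X3) > f[b]:
        X3[b] = X[b]
    return X3

# Usage: one generation on a 2-variable quadratic objective.
sphere = lambda x: float(np.sum(x**2)) + 1e-9   # kept positive for 1/T
X0 = rng.uniform(-5.0, 5.0, size=(10, 2))
X1 = ga_generation(X0, sphere, p_c=0.55, p_m=0.5, lower=-5.0, upper=5.0)
```

Here each row of `X` is one individual; iterating `ga_generation` for \(N_{g}\) generations reproduces the overall GA loop, and the elitism step guarantees that the best objective value never worsens from one generation to the next.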

If there is a significant evolution of the cost function between this individual and \(o_{l}\), the secant method generates a new individual close to \(o_{l}\) that performs a refined search near the actual solution.

Otherwise, the secant method creates a new individual far from \(o_{l}\), to expand the exploration of the admissible space.
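A minimal one-layer sketch of this secant-driven outer loop might look as follows; the concrete update rule, the "significant evolution" test and the toy `polish` stand-in for the core GA run are illustrative assumptions, not the exact MSA of Refs. [32, 33].

```python
import numpy as np

rng = np.random.default_rng(1)

def secant_layer(core_optimize, T, lower, upper, l_max=20, tol=1e-2):
    """Outer loop of the multilayer idea: run the core optimizer from an
    initial condition v, then use its output o_l to build the next one."""
    v = rng.uniform(lower, upper)
    best, best_val = None, np.inf
    for _ in range(l_max):
        o = core_optimize(v)                 # stands in for a short GA run
        if T(o) < best_val:
            best, best_val = o, T(o)
        if abs(T(v) - T(o)) > tol * max(1.0, abs(T(v))):
            # Significant improvement: secant-like step refining near o.
            v = o + (T(o) / (T(v) - T(o))) * (v - o)
        else:
            # Stagnation: restart far from o to keep exploring the space.
            v = rng.uniform(lower, upper)
        v = np.clip(v, lower, upper)
    return best, best_val

# Usage with a toy "core optimizer" that halves the distance to the optimum.
quad = lambda x: float(np.sum((x - 1.0) ** 2))
polish = lambda v: v - 0.5 * (v - 1.0)
x_best, f_best = secant_layer(polish, quad, np.full(3, -5.0), np.full(3, 5.0))
```

The design point is that the outer loop optimizes over *initial conditions* of the core method: cheap secant information decides whether to exploit (refine near the returned optimum) or to explore (restart elsewhere).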
The hybrid algorithm GMA has already been tested on different optimization problems [13, 15, 35] and, according to several numerical experiments, it achieves good results while consuming less computational time than the GA used on its own. The GMA is included by the authors in the software Global Optimization Platform, which is freely available for download at http://www.mat.ucm.es/momat/software.htm.
5 Results and discussion
In Sect. 5.1, we first validate the GMA presented previously on benchmark problems. Then, in Sect. 5.2, we present and discuss the numerical results obtained when searching the optimal design of the microfluidic mixer.
5.1 Validation of the GMA
This subsection aims to validate the proposed optimization methodology by applying it to a set of benchmark problems. In particular, we consider the following set of box-constrained optimization problems detailed in [26]: Branin (denoted by Bra), Easom (Eas), Goldstein–Price (GP), Shubert (Shu), Hartmann with 3 (Hm3) and 6 (Hm6) variables, Rosenbrock with 2 (Rb2), 5 (Rb5) and 10 (Rb10) variables, Shekel with 4 variables and 5 (Sk5), 7 (Sk7) and 10 (Sk10) terms, and Zakharov with 5 (Za5) and 10 (Za10) variables. This particular list of benchmark problems is considered a good enough representation of low-dimensional optimization problems (i.e., ≈10 variables) because it illustrates a wide and diverse set of difficulties found in real problems [19, 36].
The GMA is applied to solve those selected benchmark problems numerically using the following parameters: \(l_{\max}=1000\), \(N_{p}=10\), \(N_{g}=10\), \(p_{c}=0.55\), \(p_{m}=0.5\). Those parameters have been shown to be suitable for solving similar optimization problems in [13, 16, 29, 37]. Moreover, when the GMA ends, its solution is refined by performing 10 iterations of the Steepest Descent (SD) algorithm, in which the descent step size ρ is determined using 10 iterations of a dichotomy method starting from \(\rho_{0}=1\). This last SD layer is carried out to enhance the accuracy of the final solution.
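This final refinement stage can be sketched as follows. The halving rule starting from \(\rho_{0}=1\) is a plausible reading of the dichotomy method mentioned above (not necessarily the exact variant used), and `T`, `grad_T` are placeholders for the objective and its gradient.

```python
import numpy as np

def dichotomy_step(T, x, d, rho0=1.0, iters=10):
    """Halve the step size rho, starting from rho0, until moving along the
    descent direction d actually decreases the objective T."""
    rho = rho0
    for _ in range(iters):
        if T(x + rho * d) < T(x):
            break
        rho *= 0.5
    return rho

def steepest_descent(T, grad_T, x0, iterations=10):
    """Steepest descent using the dichotomy rule above for the step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        d = -grad_T(x)                      # steepest-descent direction
        x = x + dichotomy_step(T, x, d) * d
    return x

# Usage on a smooth quadratic bowl centered at the origin.
T = lambda x: float(np.sum(x**2))
gT = lambda x: 2.0 * x
x_min = steepest_descent(T, gT, [3.0, -2.0])
```

On this quadratic, the dichotomy immediately finds the exact minimizing step \(\rho=0.5\); on a general objective it merely guarantees a decrease, which is all the refinement layer needs.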

DTS: The Direct Tabu Search algorithm exploits the idea of not revisiting regions of the search space that have already been explored. To achieve this, penalty terms are added to the objective function. In this work, we use the implementation detailed in [19] with the parameters recommended therein.

CGR: The Continuous GRASP (CGR) algorithm is an enhanced version of the Greedy Randomized Adaptive Search Procedure (GRASP). The latter combines the construction of a greedy solution with a local search method. Its implementation, the employed parameters, and the considered results can be found in [36].

SD: We use the well-known Steepest Descent algorithm based on the gradient, starting from a random point in the search space Φ, with 3000 iterations and with a descent step size ρ determined by means of 10 iterations of a dichotomy method with initial condition \(\rho_{0}=1\).

GA: The Genetic Algorithm described in Sect. 4 is run with the following settings: \(N_{g}=1000\), \(N_{p}=180\), \(p_{c}=0.45\) and \(p_{m}=0.15\), as suggested in [14, 17].

CRS: The Controlled Random Search algorithm is also considered for solving the selected benchmark problems. Its parameters have been set according to the values recommended in [38]. In particular, we consider 200 individuals in the population, as many trial points as the size of the problem, a maximum of 3000 iterations, and a rate of 0.55 for the success test.

DE: Finally, the Differential Evolution method is run with the parameters prescribed for low-dimensional optimization problems in [39]. More precisely, the strategy is set to rand/1/exp, with 0.9 for the crossover probability and 0.5 for the mutation factor. Additionally, we consider a maximum number of 5000 iterations, and the population size is set to 5 times the dimension of the benchmark problem.
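As a reference for the last method, one generation of DE/rand/1/exp can be sketched as follows. This is an illustrative sketch: in particular, interpreting the 0.5 quoted above as DE's differential weight F is an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(2)

def de_step(pop, T, F=0.5, CR=0.9, lower=-5.0, upper=5.0):
    """One generation of DE/rand/1/exp: mutate with rand/1, recombine with
    exponential crossover, keep the better of trial and target."""
    Np, N = pop.shape
    new = pop.copy()
    for i in range(Np):
        a, b, c = rng.choice([j for j in range(Np) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])        # rand/1 mutation
        trial = pop[i].copy()
        n, L = int(rng.integers(N)), 0
        # Exponential crossover: copy a contiguous block of mutant genes.
        while True:
            trial[(n + L) % N] = mutant[(n + L) % N]
            L += 1
            if L >= N or rng.random() >= CR:
                break
        trial = np.clip(trial, lower, upper)
        if T(trial) <= T(pop[i]):                      # greedy selection
            new[i] = trial
    return new

# Usage: a few generations on a 4-variable quadratic objective,
# with the population size set to 5 times the problem dimension.
sphere = lambda x: float(np.sum(x**2))
pop = rng.uniform(-5.0, 5.0, size=(20, 4))
best_before = min(sphere(x) for x in pop)
for _ in range(30):
    pop = de_step(pop, sphere)
best_after = min(sphere(x) for x in pop)
```

The greedy selection step makes the best objective value in the population monotonically non-increasing across generations.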
Moreover, we consider another complementary stopping criterion consisting in a maximum number of evaluations of the objective function. In this work, we set this maximum value to 50,000 evaluations per run because, according to the literature [12, 13, 16, 17], it is a high enough value. If an algorithm consumes all 50,000 evaluations and its solution does not satisfy the first stopping criterion (7), we consider that it has failed to solve the problem at hand.
Success rate (%) of the optimization algorithms when solving the considered benchmark problems (Func)
Func.  DTS  CGR  SD  GA  CRS  DE  GMA 

Bra  100  100  100  100  100  100  100 
Eas  82  100  0  100  100  100  100 
GP  100  100  53  100  100  100  100 
Shu  92  100  25  100  100  100  100 
Hm3  100  100  51  100  100  100  100 
Hm6  83  100  48  100  90  51  100 
Rb2  100  100  80  100  100  100  100 
Rb5  85  100  74  96  87  97  92 
Rb10  85  100  71  95  68  93  81 
Sk5  57  100  16  97  17  74  96 
Sk7  65  100  7  96  23  88  98 
Sk10  52  100  0  96  10  93  97 
Za5  100  100  100  100  100  100  100 
Za10  100  100  100  100  100  100  100 
Average number of evaluations (considering only the runs satisfying the stopping criterion (7)) needed by the optimization algorithms to solve the considered benchmark problems (Func)
Func.  DTS  CGR  SD  GA  CRS  DE  GMA 

Bra  212  10,090  251  1304  2953  2347  252 
Eas  223  5093  –  40,125  2877  3851  3488 
GP  230  53  295  465  2429  1937  439 
Shu  274  18,608  120  7748  9947  3049  1270 
Hm3  438  1719  466  1119  1493  447  425 
Hm6  1787  29,894  217  4418  2907  8456  1054 
Rb2  254  23,544  2275  3918  6177  7952  1675 
Rb5  1684  182,520  3465  43,604  7927  41,939  43,972 
Rb10  9037  725,281  5096  44,557  43,822  44,156  44,828 
Sk5  819  9274  229  37,328  5702  40,032  6991 
Sk7  812  11,766  208  36,046  3618  3479  4619 
Sk10  6828  17,612  –  40,217  3540  2386  1637 
Za5  1003  12,467  268  24,988  5384  40,026  2674 
Za10  4032  2,297,937  540  40,489  9004  40,031  20,719 
In view of those results, the GMA shows good performance in comparison to the other analyzed algorithms. In fact, according to Table 1, it has success rates similar to those of the GA and better than those of DTS, CRS and DE. Moreover, as we can observe in Table 2, the GMA needs a number of evaluations which is, in the majority of cases, lower than that required by the GA, CRS and DE. Therefore, for those problems where the gradient of the objective function is not available or is difficult to compute, the GMA can be a good alternative to classical evolutionary algorithms.
From those results, the GMA seems to enhance convergence and to reduce the computational effort associated with the number of function evaluations. In addition, the GMA has also been applied to some industrial design problems, such as the one considered here concerning the optimization of microfluidic mixers. In this kind of real-world problem, where the objective functions are frequently computationally expensive and exhibit several local minima, the improvements in convergence and the savings in evaluations are of vital importance.
5.2 Microfluidic mixer optimization results
In this subsection, we present the optimization results obtained when solving Problem (5) with the Genetic Multi-Layer Algorithm (GMA), with the parameters presented in Sect. 4. The number of evaluations of T used by the GMA was approximately 6000, and the optimization process took about 40 h (on a 3.6 GHz Intel i7 computer with 32 GB of RAM).

The width of the mixing region (i.e., the area where the center and side mixer channels intersect and both fluids are mainly mixed) reaches a minimum value of about 1.1 μm near \(y = 16.5~\mu\mbox{m}\). At that location, the maximum velocity reaches 26 m s^{−1}, helping to reduce the mixing time.

The value of the angle θ (about \(\pi/5\) radians) between the center and side mixer channels, which was set to 0 in Refs. [3, 4, 14].

The choice of suitable maximum injection velocities, set to \(u_{s}=5.2\) m s^{−1} and \(u_{c}=0.038\) m s^{−1}. In that case, the Reynolds number Re (defined in Sect. 3) is around 9, satisfying the constraint \(\mathit{Re}<15\) imposed to avoid secondary flows.
Value of the optimal parameters in \(\phi^{\mathrm{opt}}\), solution of Problem (5)
Parameter  \(u_{s}\)  σ  θ  \(l_{c}\)  \(l_{s}\)  \(l_{e}\)  \(cx_{1}\)  \(cy_{1}\)  \(l_{1}\)  \(h_{1}\)  \(cx_{2}\)  \(cy_{2}\)  \(l_{2}\)  \(h_{2}\) 

Value  5.2  7.3×10^{−3}  0.6  2.5  9.1  16.3  1.1  16.6  0.5  0.3  0.9  18.9  0.1  1.1 
6 Conclusions
In this work, we dealt with the design of a microfluidic mixer based on hydrodynamic focusing for protein folding. More precisely, we aimed to find the shape parameters and the injection velocities that minimize the mixing time, which is the time needed to reduce the denaturant concentration below a desired level.
In order to compute the mixing time of a device for given design variables, we have presented a mathematical model consisting of the incompressible Navier–Stokes equations coupled with a convection–diffusion equation. This model was solved numerically using a Finite Element Method approximation over a simplified two-dimensional domain.
Then, we have formulated the mixer design optimization problem and proposed a methodology for solving it efficiently. This methodology, called GMA, is composed of a genetic algorithm (GA) and a multilayer line search method that enhances the GA initialization. Applying it to some benchmark problems, we have observed that the GMA improves the convergence of the GA. Moreover, the GMA requires fewer evaluations of the objective function than the GA, achieving a reduction in the computational effort. Therefore, the GMA is strongly recommended for optimization problems whose objective function evaluations are computationally expensive, such as our mixer design problem.
Finally, solving our mixer design problem with the GMA, we have found an optimized device which exhibits a mixing time of 0.1 μs. This new design implies a great reduction with respect to previously developed mixers. In fact, according to the literature [3–5, 14, 40], the lowest mixing time among previous similar mixers was 1 μs, achieved by the devices detailed in Refs. [5, 14]. Therefore, the mixer designed with the parameters determined by our optimization methodology presents a mixing time that is only 10% of that of those previous works. Analyzing the factors responsible for this improvement, we have noticed two important novelties in the obtained design parameters: the angle of the side inlet channels and the inlet velocities. The angle parameter, with a value of \(\pi/5\) radians, helps avoid strong centripetal accelerations in the inlet side channel streams, an experimentally observed phenomenon explained in Ref. [5]. On the other hand, the inlet velocities have a great influence on the mixing time, but they were not optimized numerically in previous works. For instance, the values of those velocities were \(u_{s}=3.25\) m s^{−1} and \(u_{c}=0.032\) m s^{−1} in Ref. [3]. Here, we have obtained \(u_{s}=5.2\) m s^{−1} and \(u_{c}=0.038\) m s^{−1}.
For an in-depth sensitivity analysis of the mixing times with respect to the optimized mixer parameters, we refer the interested reader to Refs. [16, 29].
Declarations
Acknowledgements
Not applicable.
Availability of data and materials
The GMA can be freely downloaded at: http://www.mat.ucm.es/momat/software.htm. No other material is available.
Authors’ information
Benjamin Ivorra, Miriam R. Ferrández, María Crespo and Ángel M. Ramos are specialists in mathematical modelling and optimization methods. Juana L. Redondo and Pilar M. Ortigosa are specialists in global optimization algorithms. Juan G. Santiago is a specialist in chemical engineering processes and, in particular, microfluidic devices.
Funding
This work was carried out thanks to the financial support of the “Spanish Ministry of Economy and Competitiveness” under projects MTM2011-22658 and MTM2015-64865-P; the “Junta de Andalucía” and the European Regional Development Fund through project P12-TIC301; and the research group MOMAT (Ref. 910480), supported by “Banco Santander” and “Universidad Complutense de Madrid”.
Authors’ contributions
All authors have contributed to this work. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Berg J, Tymoczko J, Stryer L. Biochemistry. 5th ed. New York: Freeman; 2002. Google Scholar
 Brody J, Yager B, Goldstein R, Austin R. Biotechnology at low Reynolds numbers. Biophys J. 1996;71(6):3430–41. View ArticleGoogle Scholar
 Hertzog D, Ivorra B, Mohammadi B, Bakajin O, Santiago J. Optimization of a microfluidic mixer for studying protein folding kinetics. Anal Chem. 2006;78(13):4299–306. View ArticleGoogle Scholar
 Hertzog D, Michalet X, Jäger M, Kong X, Santiago J, Weiss S, et al.. Femtomole mixer for microsecond kinetic studies of protein folding. Anal Chem. 2004;76(24):7169–78. View ArticleGoogle Scholar
 Yao S, Bakajin O. Improvements in mixing time and mixing uniformity in devices designed for studies of proteins folding kinetics. Anal Chem. 2007;79(1):5753–9. View ArticleGoogle Scholar
 Luenberger D, Ye Y. Linear and nonlinear programming. International series in operations research & management science. Berlin: Springer; 2008. MATHGoogle Scholar
 Goldberg DE. Genetic algorithms in search, optimization and machine learning. 1st ed. Boston: Addison-Wesley; 1989.
 Gonçalves JF, de Magalhães Mendes JJ, Resende MGC. A hybrid genetic algorithm for the job shop scheduling problem. Eur J Oper Res. 2005;167(1):77–95.
 Rocha M, Neves J. Preventing premature convergence to local optima in genetic algorithms via random offspring generation. In: Imam I, Kodratoff Y, El-Dessouki A, Ali M, editors. International conference on industrial, engineering and other applications of applied intelligent systems. Lecture notes in computer science. vol. 1611. Berlin: Springer; 1999. p. 127–36.
 Carrasco M, Ivorra B, Ramos AM. A variance-expected compliance model for structural optimization. J Optim Theory Appl. 2012;152(1):136–51.
 Carrasco M, Ivorra B, Ramos AM. Stochastic topology design optimization for continuous elastic materials. Comput Methods Appl Mech Eng. 2015;289:131–54.
 Muyl F, Dumas L, Herbert V. Hybrid method for aerodynamic shape optimization in automotive industry. Comput Fluids. 2004;33(5):849–58.
 Gomez S, Ivorra B, Ramos AM. Optimization of a pumping ship trajectory to clean oil contamination in the open sea. Math Comput Model. 2011;54(1):477–89.
 Ivorra B, Mohammadi B, Santiago J, Hertzog D. Semi-deterministic and genetic algorithms for global optimization of microfluidic protein folding devices. Int J Numer Methods Eng. 2006;66(2):319–33.
 Ivorra B, Mohammadi B, Ramos AM. Optimization strategies in credit portfolio management. J Glob Optim. 2009;43(2–3):415–27.
 Ivorra B, Redondo JL, Santiago JG, Ortigosa PM, Ramos AM. Two- and three-dimensional modeling and optimization applied to the design of a fast hydrodynamic focusing microfluidic mixer for protein folding. Phys Fluids. 2013;25(3):032001.
 Ivorra B, Mohammadi B, Ramos AM. Design of code division multiple access filters based on sampled fiber Bragg grating by using global optimization algorithms. Optim Eng. 2014;15(3):677–95.
 Ivorra B. Application of the laminar Navier–Stokes equations for solving 2D and 3D pathfinding problems with static and dynamic spatial constraints: implementation and validation in COMSOL Multiphysics. J Sci Comput. 2018;74(2):1163–87.
 Hedar AR, Fukushima M. Tabu search directed by direct search methods for nonlinear global optimization. Eur J Oper Res. 2006;170(2):329–49.
 Lamghari A, Dimitrakopoulos R. A diversified tabu search approach for the open-pit mine production scheduling problem with metal uncertainty. Eur J Oper Res. 2012;222(3):642–52.
 Vieira DAG, Lisboa AC. Line search methods with guaranteed asymptotical convergence to an improving local optimum of multimodal functions. Eur J Oper Res. 2014;235(1):38–46.
 Gardeux V, Chelouah R, Siarry P, Glover F. EM323: a line search based algorithm for solving high-dimensional continuous nonlinear optimization problems. Soft Comput. 2011;15(11):2275–85.
 Gardeux V, Chelouah R, Siarry P, Glover F. Unidimensional search for solving continuous high-dimensional optimization problems. In: ISDA'09 – ninth international conference on intelligent systems design and applications, 2009. Los Alamitos: IEEE Comput. Soc.; 2009. p. 1096–101.
 Glover F. The 3-2-3, stratified split and nested interval line search algorithms. Research report. Boulder: OptTek Systems; 2010.
 Grosan C, Abraham A. Hybrid line search for multiobjective optimization. In: Perrott R, Chapman B, Subhlok J, de Mello R, Yang L, editors. High performance computing and communications. Lecture notes in computer science. vol. 4782. Berlin: Springer; 2007. p. 62–73.
 Floudas C, Pardalos P. Handbook of test problems in local and global optimization. Norwell: Kluwer Academic; 1999.
 Price WL. Global optimization by controlled random search. J Optim Theory Appl. 1983;40(3):333–48.
 Price K, Storn RM, Lampinen JA. Differential evolution: a practical approach to global optimization (natural computing series). New York: Springer; 2005.
 Ivorra B, Redondo JL, Ramos AM, Santiago JG. Design sensitivity and mixing uniformity of a microfluidic mixer. Phys Fluids. 2016;28(1):012005.
 Danckwerts PV. Continuous flow systems. Chem Eng Sci. 1953;2(1):1–13.
 Glowinski R, Neittaanmäki P. Partial differential equations: modelling and numerical simulation. Computational methods in applied sciences. Netherlands: Springer; 2008.
 Debiane L, Ivorra B, Mohammadi B, Nicoud F, Poinsot T, Ern A, et al. A low-complexity global optimization algorithm for temperature and pollution control in flames with complex chemistry. Int J Comput Fluid Dyn. 2006;20(2):93–8.
 Ivorra B, Ramos AM, Mohammadi B. Semi-deterministic global optimization method: application to a control problem of the Burgers equation. J Optim Theory Appl. 2007;135(3):549–61.
 Isebe D, Azerad P, Bouchette F, Ivorra B, Mohammadi B. Shape optimization of geotextile tubes for sandy beach protection. Int J Numer Methods Eng. 2008;74(8):1262–77.
 Ivorra B, Mohammadi B, Dumas L, Durand O, Redont P. Semi-deterministic vs. genetic algorithms for global optimization of multichannel optical filters. Int J Comput Sci Eng. 2006;2(3):170–8.
 Hirsch M, Pardalos P, Resende M. Speeding up continuous GRASP. Eur J Oper Res. 2010;205(3):507–21.
 Ivorra B. Optimisation globale semi-déterministe et applications industrielles [Semi-deterministic global optimization and industrial applications]. Grenoble: ANRT; 2006.
 Hendrix E, Ortigosa P, García I. On success rates for controlled random search. J Glob Optim. 2001;21(3):239–63.
 Storn R, Price K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim. 1997;11(4):341–59.
 Knight JB, Vishwanath A, Brody JP, Austin RH. Hydrodynamic focusing on a silicon chip: mixing nanoliters in microseconds. Phys Rev Lett. 1998;80(17):3863–6.