
Concatenated backward ray mapping on the compound parabolic concentrator

Abstract

Concatenated backward ray mapping is an alternative to ray tracing in 2D. It is based on the phase-space description of an optical system. Phase space is the set of position and direction coordinates of light rays intersecting a surface. The original algorithm (Filosa, ten Thije Boonkkamp and IJzerman in J Math Ind 11(1):4, 2021) is limited to optical systems consisting of only straight surfaces; we generalize it to accommodate curved surfaces. The algorithm is applied to a standard optical system, the compound parabolic concentrator. We compare the accuracy and speed of the generalized algorithm, the original algorithm and Monte Carlo ray tracing. The results show that the generalized algorithm outperforms both other methods.

1 Introduction

The illumination optics industry deals with the design of optical systems. An optical system consists of a light source, optical components such as lenses and reflectors and a target surface which can be a receiver or an aperture. The target can be surrounded by the optical components, or it can be located in a far field region, i.e., away from the optical components. Light emitted at the source of the optical system propagates through the system and forms an intensity distribution at the target. The shape of the target distribution depends on the optical system and the intensity distribution at the source. The goal in illumination optics is to obtain a desired intensity distribution at the target.

The target distribution produced by a given light source and optical system is generally computed using a forward method. Forward methods are part of an iterative process which starts with an educated guess of the optical system made by an optical designer. The intensity distribution is computed with the forward method during every iteration; the optical system is then changed by the designer based on the results. This trial-and-error procedure continues until the desired target distribution is obtained. Because the forward method is used many times in this iterative process, there is a great need for fast and accurate simulation methods.

Forward methods are also used for other purposes. For instance, white light emitted by a source contains all colors. Light of different colors follows slightly different paths and contributes differently to the intensity distribution, creating a distorted target distribution. Forward methods are used in an iterative process to correct for these different paths and obtain the desired target distribution. Another application of forward methods is light scattering. Reflectors in an optical system are not perfectly smooth and do not behave exactly as designed. Rough surfaces cause a light beam to scatter upon reflection, creating a distorted target distribution. Forward methods are again used in an iterative process to correct for scattering and obtain the desired target distribution.

The forward methods that are currently used in industry are Monte Carlo (MC) or Quasi-Monte Carlo (QMC) ray tracing. MC ray tracing is based on a probabilistic interpretation of the source distribution [2, Ch. 2, p. 27]. Many randomly distributed rays are traced from source to target; the intensity distribution is found by dividing the target into equal cells and counting the number of rays arriving in each cell. QMC ray tracing was introduced as a faster alternative to MC ray tracing. The difference between them is that QMC ray tracing distributes the rays along low discrepancy sequences [3]. MC ray tracing is a slow and expensive procedure because many rays must be traced to obtain accurate results. MC ray tracing has a convergence rate of \(\mathcal{O}(1 / \sqrt{\text{Nr}})\) [4, Ch. 3, p. 36], where Nr is the number of rays traced. QMC ray tracing performs better with a convergence rate of \(\mathcal{O}(1 / \text{Nr})\) [4, Ch. 3, p. 42], but it is still a slow procedure.

Concatenated backward ray mapping (CBRM) is an alternative to (Q)MC ray tracing in 2D that uses the phase space (PS) of an optical system [1]. The PS of each surface is defined by the position and direction coordinates of all rays that interact with it. An optical surface in 2D is a line segment or a curved segment; however, we still refer to them as surfaces. The algorithm determines which light rays emitted by the source reach the target at a certain angle by tracing a small number of rays backward (from target to source). Only rays that can be traced back to the source are subsequently traced forward from source to target. Doing this for many angles at which light can reach the target gives a very accurate representation of the intensity distribution. This results in a significant reduction of the number of rays needed and therefore a significant reduction in computation time compared to traditional (Q)MC ray tracing.

Numerical results [1] indeed show that CBRM computes the intensity distribution more accurately and in less computation time than (Q)MC ray tracing. However, CBRM requires the optical system to consist of straight surfaces. In this paper we generalize CBRM to accommodate curved optical surfaces. We introduce the generalized CBRM algorithm and apply it to the compound parabolic concentrator (CPC) [5, Ch. 1, p. 8]. The CPC (Fig. 1) is a standard optical system which collects light from a Lambertian source and reshapes it to a focused beam. Before we introduce the generalized CBRM algorithm we first explain CBRM and apply it to the two-faceted cup (Fig. 2). The performance of CBRM and generalized CBRM is compared on the CPC. Since CBRM requires the optical system to consist of straight surfaces, it is applied to a discretized CPC.

Figure 1

The CPC with source (1), reflectors (2, 3), and target (4)

Figure 2

The two-faceted cup with source (1), reflectors (2, 3), and target (4)

The structure of this paper is as follows. Section 2 defines the phase space of an optical surface. In Sect. 3 we describe the concatenated backward ray mapping algorithm. Section 4 explains the generalized algorithm. Section 5 describes the numerical experiments that compare the algorithms on the (discretized) CPC. Section 6 shows the results of the experiments. In Sect. 7 we draw conclusions.

2 Phase space

Concatenated backward ray mapping is based on the phase-space (PS) [4, Ch. 7, p. 87] description of an optical system. Phase space is the set of position and direction coordinates of rays intersecting a surface. It is a two-dimensional space for 2D surfaces. The position coordinate \(q \in Q\) is the x-coordinate of the intersection of the ray with the surface. The direction coordinate \(p \in P\) is given by \(p = n \sin{\theta}\), where \(\theta \in [-\pi /2, \pi /2]\) is the angle between the ray and the inward facing unit surface normal \(\hat{\boldsymbol{\nu }}\) and n is the index of refraction. We consider only optical systems formed by reflective surfaces, therefore refraction is not taken into account and \(n=1\). The index n will be omitted from now on. PS is indicated with \(S = Q \times P\). Every optical surface has a source PS and a target PS. Source PS describes light emitted by the surface and target PS describes light reaching the surface. The light source only emits light, so it only has a source PS; the target only receives light, so it only has a target PS.

The optical surfaces of the system are numbered \(1, \dots , \text{N}\), where 1 is the light source and N is the target. The source and target PS of surface j are indicated with \(\text{S}_{j}\) and \(\text{T}_{j}\), respectively. The coordinates of a ray reaching surface j are indicated with \((q_{\text{t}, j}, p_{\text{t}, j}) \in \text{T}_{j}\). After reflection, the ray is emitted by the surface from the same position but with a new direction. The ray now has the coordinates \((q_{\text{s}, j}, p_{\text{s}, j}) \in \text{S}_{j}\). Note that \(q_{\text{t}, j} = q_{\text{s}, j}\), while \(p_{\text{s}, j}\) is obtained by applying the law of reflection to the ray.

The phase spaces \(\text{S}_{j}\) and \(\text{T}_{j}\) are divided into regions \(\text{S}_{j,k}\) and \(\text{T}_{j,l}\). \(\text{S}_{j,k}\) is the region of \(\text{S}_{j}\) containing all light rays emitted by j, illuminating surface \(k \in \{2,\dots ,\text{N}\}\) assuming that j acts as a light source. \(\text{T}_{j,l}\) is the region of \(\text{T}_{j}\) containing all light rays emitted by surface \(l \in \{1,\dots ,\text{N}-1\}\), illuminating surface j assuming that l acts as a light source. Note that \(\text{S}_{j}\) and \(\text{T}_{j}\) may contain empty regions. The boundaries \(\partial \text{S}_{j,k}\) are connected to the boundaries \(\partial \text{T}_{k,j}\) for every \(j \in \{1, \dots , \text{N}-1\}\) and \(k \in \{2, \dots , \text{N}\}\) by the edge-ray principle [4, Ch. 4, p. 45]. These boundaries can be determined analytically and are formed by four curves:

$$ \begin{aligned} \partial \text{S}_{j,k} &= \partial \text{S}^{1}_{j,k} \cup \partial \text{S}^{2}_{j,k} \cup \partial \text{S}^{3}_{j,k} \cup \partial \text{S}^{4}_{j,k}, \\ \partial \text{T}_{k,j} &= \partial \text{T}^{1}_{k,j} \cup \partial \text{T}^{2}_{k,j} \cup \partial \text{T}^{3}_{k,j} \cup \partial \text{T}^{4}_{k,j}. \end{aligned} $$
(1)

Given two surfaces j and k, \(\partial \text{S}_{j,k}\) and \(\partial \text{T}_{k,j}\) are determined as follows. \(\partial \text{S}^{1}_{j,k}\) and \(\partial \text{T}^{1}_{k,j}\) are formed by the set of rays originating at the left endpoint of j tracing out surface k. \(\partial \text{S}^{2}_{j,k}\) and \(\partial \text{T}^{2}_{k,j}\) are formed by the set of rays tracing out surface j reaching the right endpoint of k. \(\partial \text{S}^{3}_{j,k}\) and \(\partial \text{T}^{3}_{k,j}\) are formed by the set of rays originating at the right endpoint of j tracing out surface k. \(\partial \text{S}^{4}_{j,k}\) and \(\partial \text{T}^{4}_{k,j}\) are formed by the set of rays tracing out surface j reaching the left endpoint of k. As an example Fig. 3 shows these sets of rays in the two-faceted cup where \(j=1\) and \(k=4\), i.e., the source and target. The segments formed by these rays in S1 are shown in Fig. 4. The segments formed in T4 are shown in Fig. 5.

Figure 3

Rays on the boundaries of the regions S1,4 and T4,1 in the two-faceted cup

Figure 4

Boundary of the region S1,4 of the two-faceted cup consisting of segments \(\partial \text{S}^{1}_{1,4}\) (green), \(\partial \text{S}^{2}_{1,4}\) (blue), \(\partial \text{S}^{3}_{1,4}\) (purple) and \(\partial \text{S}^{4}_{1,4}\) (orange)

Figure 5

Boundary of the region T4,1 of the two-faceted cup consisting of segments \(\partial \text{T}^{1}_{4,1}\) (green), \(\partial \text{T}^{2}_{4,1}\) (blue), \(\partial \text{T}^{3}_{4,1}\) (purple) and \(\partial \text{T}^{4}_{4,1}\) (orange)

Previously [1], [4, Ch. 7], we only had analytic expressions for the boundaries of Eq. (1) when j and k were straight surfaces. Here, we introduce a general analytic expression for each boundary of Eq. (1) when surfaces j and k are described by parametric equations. Let surface j be described by the parameterization \(\boldsymbol{P}_{j}(\gamma ) = \big( x_{j}(\gamma ), z_{j}(\gamma ) \big) \ (\gamma _{\text{min}} \leq \gamma \leq \gamma _{\text{max}})\) and surface k by the parameterization \(\boldsymbol{P}_{k}(\lambda ) = \big( x_{k}(\lambda ), z_{k}(\lambda ) \big) \ (\lambda _{\text{min}} \leq \lambda \leq \lambda _{\text{max}})\), then the rays that form the boundaries \(\partial \text{S}^{1}_{j,k}\) and \(\partial \text{T}^{1}_{k,j}\) are parameterized by

$$ \boldsymbol{r}^{1}_{j,k}(\lambda ) = \boldsymbol{P}_{k}(\lambda ) - \boldsymbol{P}_{j}(\gamma _{\text{min}}), \ (\lambda _{\text{min}} \leq \lambda \leq \lambda _{\text{max}}). $$
(2)

The rays form a vertical segment in \(\text{S}_{j}\) as only the direction coordinate changes; they form a curved segment in \(\text{T}_{k}\) because both the position and direction coordinates change. The analytic expressions for \(\partial \text{S}^{1}_{j,k}\) and \(\partial \text{T}^{1}_{k,j}\) are

$$\begin{aligned} \partial \text{S}^{1}_{j,k}(\lambda ) &= \Big\{ \big(x_{j}(\gamma _{ \text{min}}), \ \hat{\boldsymbol{\tau }}_{j}(\gamma _{\text{min}}) \boldsymbol{\cdot }\hat{\boldsymbol{r}}^{1}_{j,k}(\lambda ) \big) \ \Big| \ \lambda _{\text{min}} \leq \lambda \leq \lambda _{\text{max}} \Big\} , \end{aligned}$$
(3a)
$$\begin{aligned} \partial \text{T}^{1}_{k,j}(\lambda ) &= \Big\{ \big(x_{k}(\lambda ), \ -\hat{\boldsymbol{\tau }}_{k}(\lambda ) \boldsymbol{\cdot } \hat{\boldsymbol{r}}^{1}_{j,k}(\lambda ) \big) \ \Big| \ \lambda _{ \text{min}} \leq \lambda \leq \lambda _{\text{max}} \Big\} . \end{aligned}$$
(3b)

We indicate with \(\hat{\boldsymbol{r}}^{1}_{j,k}(\lambda )\) the normalization of the ray in Eq. (2) and with \(\hat{\boldsymbol{\tau }}_{j}(\gamma )\) and \(\hat{\boldsymbol{\tau }}_{k}(\lambda )\) the normalized tangent vectors to surfaces j and k respectively. The tangent vectors are obtained by rotating the inward facing surface normals \(\hat{\boldsymbol{\nu }}_{j}\) and \(\hat{\boldsymbol{\nu }}_{k}\) by an angle of \(\pi /2\) counterclockwise. Note that the parameter in Eq. (3a)–(3b) corresponds to the surface that is traced out. The expressions for the other boundary segments are similar.
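As a concrete illustration of Eqs. (2)–(3b), the sketch below samples the boundary segments \(\partial \text{S}^{1}_{j,k}\) and \(\partial \text{T}^{1}_{k,j}\) for two parametrically described surfaces. This is a minimal sketch assuming NumPy; the function name, the callable interfaces and the sampling resolution are our own choices and not part of [1].

import numpy as np

def boundary_segment_1(P_j, tau_j, P_k, tau_k, gamma_min, lam_min, lam_max, n=200):
    """Sample dS^1_{j,k} and dT^1_{k,j} of Eqs. (3a)-(3b) for parametric surfaces.

    P_j, P_k     : callables gamma -> (x, z) and lambda -> (x, z)
    tau_j, tau_k : callables returning the unit tangent vectors of surfaces j and k
    The rays of Eq. (2) start at the left endpoint P_j(gamma_min) and trace out k.
    """
    lam = np.linspace(lam_min, lam_max, n)
    origin = np.asarray(P_j(gamma_min), dtype=float)
    pts_k = np.array([P_k(l) for l in lam], dtype=float)
    rays = pts_k - origin                                   # Eq. (2)
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)     # normalized rays r^1_{j,k}

    t_j = np.asarray(tau_j(gamma_min), dtype=float)
    dS1 = np.column_stack([np.full(n, origin[0]), rays @ t_j])                   # Eq. (3a)
    t_k = np.array([tau_k(l) for l in lam], dtype=float)
    dT1 = np.column_stack([pts_k[:, 0], -np.einsum('ij,ij->i', t_k, rays)])      # Eq. (3b)
    return dS1, dT1

The other three boundary segments of Eq. (1) follow in the same way, by fixing the other endpoint of j or by tracing out surface j instead of k.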

2.1 Phase space of the two-faceted cup

The two-faceted cup depicted in Fig. 2 is a simple optical system consisting of four surfaces: a Lambertian light source (surface 1), two reflectors formed by straight line segments (surfaces 2, 3), and a target (surface 4). A ray leaving the source of the cup can reflect many times between the reflectors before reaching the target. Light that reflects on one of the reflectors always propagates to another surface. The phase spaces of the two-faceted cup can be seen in Fig. 6 and are given by the following expressions:

$$ \begin{aligned} \text{S}_{1} &= \text{S}_{1,2} \cup \text{S}_{1,3} \cup \text{S}_{1,4}, \quad \text{S}_{2} = \text{S}_{2,3} \cup \text{S}_{2,4}, \quad \text{S}_{3} = \text{S}_{3,2} \cup \text{S}_{3,4}, \\ \text{T}_{2} &= \text{T}_{2,1} \cup \text{T}_{2,3}, \quad \text{T}_{3} = \text{T}_{3,1} \cup \text{T}_{3,2}, \quad \text{T}_{4} = \text{T}_{4,1} \cup \text{T}_{4,2} \cup \text{T}_{4,3}. \end{aligned} $$
(4)
Figure 6

Source and target phase spaces of the two-faceted cup

2.2 Phase space of the compound parabolic concentrator

The CPC [5, Ch. 1, p. 8] depicted in Fig. 1 is a standard optical system that collects light from a Lambertian source and reshapes it to a focused beam. The system consists of an aperture (surface 4), a receiver (surface 1) and two reflectors that are segments of parabolas (surfaces 2, 3). We take the receiver as the light source and the aperture as the target, thus considering light traveling in the opposite direction. A light ray reflects on at most one reflector of the CPC. It can, however, reflect infinitely many times on a reflector. The parameterization of the right reflector follows from the polar equation of a parabola [5, Ch. 17, p. 471] and is given by:

$$ \boldsymbol{P}(\phi ) = \big( -h(\phi ) \sin{(\phi + \theta )} - a, \ h(\phi ) \cos{(\phi + \theta )} \big), \ \frac{3\pi}{2}-\theta \leq \phi \leq 2\pi -2\theta , $$
(5)

where the axis of the parabola is rotated θ radians around the origin and the parabola is translated horizontally along a distance \(a>0\). Note that the parabola has focus \((-a, 0)\) and intersects the horizontal axis at the point \((a,0)\). The parameterization of the left reflector is similar but it is rotated −θ radians, it is translated in the opposite direction and it has different bounds for ϕ. The value of \(h(\phi )\) is given by:

$$ h(\phi ) = \frac{2a \ (1+\sin{(\theta )})}{1-\cos{(\phi )}}. $$
(6)
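For reference, Eqs. (5)–(6) can be sampled directly; the sketch below is our own (NumPy assumed, function name and resolution illustrative).

import numpy as np

def cpc_right_reflector(a, theta, n=200):
    """Sample the right reflector of the CPC according to Eqs. (5)-(6).

    a     : translation distance of Eq. (5); the parabola has focus (-a, 0)
    theta : rotation angle of the parabola axis (radians)
    """
    phi = np.linspace(1.5 * np.pi - theta, 2.0 * np.pi - 2.0 * theta, n)
    h = 2.0 * a * (1.0 + np.sin(theta)) / (1.0 - np.cos(phi))   # Eq. (6)
    x = -h * np.sin(phi + theta) - a                            # Eq. (5)
    z = h * np.cos(phi + theta)
    return np.column_stack([x, z])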

A ray leaving the source of the CPC can reflect many times on a single reflector before reaching the target. The phase spaces of the CPC can be seen in Fig. 7 and are given by the following expressions:

$$ \begin{aligned} \text{S}_{1} &= \text{S}_{1,2} \cup \text{S}_{1,3} \cup \text{S}_{1,4}, \quad \text{S}_{2} = \text{S}_{2,2} \cup \text{S}_{2,3} \cup \text{S}_{2,4}, \quad \text{S}_{3} = \text{S}_{3,2} \cup \text{S}_{3,3} \cup \text{S}_{3,4}, \\ \text{T}_{2} &= \text{T}_{2,1} \cup \text{T}_{2,2} \cup \text{T}_{2,3}, \quad \text{T}_{3} = \text{T}_{3,1} \cup \text{T}_{3,2} \cup \text{T}_{3,3}, \quad \text{T}_{4} = \text{T}_{4,1} \cup \text{T}_{4,2} \cup \text{T}_{4,3}. \end{aligned} $$
(7)
Figure 7

Source and target phase spaces of the CPC

3 Concatenated backward ray mapping

For each optical system there exists an optical map \(\textbf{M}_{1,\text{N}} : \text{S}_{1} \to \text{T}_{\text{N}}\) such that \(\textbf{M}_{1,\text{N}}(q_{\text{s}, 1}, p_{\text{s}, 1})=(q_{ \text{t}, \text{N}}, p_{\text{t}, \text{N}})\) for every \((q_{\text{s}, 1}, p_{\text{s}, 1}) \in \text{S}_{1}\). All rays that follow the same path Π from source to target form a region \(\text{R}_{\text{s}}(\Pi ) \subset \text{S}_{1}\) and \(\text{R}_{\text{t}}(\Pi ) \subset \text{T}_{N}\). A path is the sequence of surfaces encountered by a ray traveling from source to target. The map \(\textbf{M}_{1,\text{N}}(\Pi )\) is the map \(\textbf{M}_{1,\text{N}}\) restricted to the path Π and relates \(\text{R}_{\text{s}}(\Pi )\) to \(\text{R}_{\text{t}}(\Pi )\).

Étendue is a quantity that describes how light is spread out in terms of area and solid angle [5, Ch. 3, p. 55]. Étendue is also the volume in PS containing light [5, Ch. 3, p. 75]; in 2D phase space it is the area covered by light. The areas covered by light rays in S1 and \(\text{T}_{N}\) are equal because of étendue conservation [5, Ch. 3, p. 57]. Because of the optical mapping, every point in S1 can be mapped to a point in \(\text{T}_{N}\). If an area in \(\text{T}_{N}\) is empty, there are no points in S1 that map to it. So, empty regions in \(\text{T}_{N}\) occur where no light rays exist that travel from source to target.

The light intensity \(I(p)\) at the target for a given \(p=\text{const}\) is computed by integrating the target luminance over the position coordinate q and is defined by:

$$ I(p) = \int _{Q} L(q, p) \, \text{d}q. $$
(8)

The light intensity in \(\text{T}_{N}\) depends on the luminance, which is positive in non-empty regions of PS. Assuming positive luminance on the source gives the following relation:

$$ \begin{aligned} L(q,p) &> 0 \qquad \forall (q,p) \in \text{T}_{\text{N},1}, \\ L(q,p) &\geq 0 \qquad \forall (q,p) \in \text{T}_{\text{N},j}, \ j \in \{2,\dots ,\text{N}-1\}. \end{aligned} $$
(9)

The phase spaces of an optical system are connected through maps that relate the coordinates on every PS. Propagation maps \(\textbf{P}_{j,k} : \text{S}_{j,k} \to \text{T}_{k,j}\) describe light that travels from a surface j to another surface k; they relate coordinates of \(\text{S}_{j}\) to \(\text{T}_{k}\) such that \(\textbf{P}_{j,k}(q_{\text{s}, j}, p_{\text{s}, j}) = (q_{\text{t}, k}, p_{\text{t}, k})\). Reflection maps \(\textbf{R}_{k} : \text{T}_{k} \to \text{S}_{k}\) describe light that reflects on a surface k; they relate coordinates of \(\text{T}_{k}\) to \(\text{S}_{k}\) such that \(\textbf{R}_{k}(q_{\text{t}, k}, p_{\text{t}, k}) = (q_{\text{s}, k}, p_{\text{s}, k})\). Note that \(q_{\text{t}, k} = q_{\text{s}, k}\). Every map \(\textbf{M}_{1,\text{N}}(\Pi )\) can be described by a composition of propagation and reflection maps. Considering all paths Π from source to target, the positive luminance regions \(\text{R}_{\text{t}}(\Pi ) \subset \text{T}_{N}\) can be determined. From Eq. (9) it follows that the luminance for some path Π connecting the source and target is:

$$ \begin{aligned} L(q,p) &> 0 \qquad \forall (q,p) \in \text{R}_{\text{t}}(\Pi ), \\ L(q,p) &= 0 \qquad \text{otherwise}. \end{aligned} $$
(10)
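To make the composition structure of \(\textbf{M}_{1,\text{N}}(\Pi )\) explicit, the sketch below chains propagation and reflection maps along a path. The two factories are placeholders standing in for the system-specific maps; only the order of application is taken from the description above.

def optical_map(path, propagate, reflect):
    """Compose M_{1,N}(Pi) from propagation and reflection maps along the path Pi.

    path      : surface indices from source to target, e.g. (1, 2, 3, 4)
    propagate : propagate(j, k) -> callable mapping (q_s, p_s) on S_j to (q_t, p_t) on T_k
    reflect   : reflect(k)      -> callable mapping (q_t, p_t) on T_k to (q_s, p_s) on S_k
    Both factories are placeholders; their bodies depend on the optical system.
    """
    def M(q, p):
        for j, k in zip(path[:-1], path[1:]):
            q, p = propagate(j, k)(q, p)        # propagation map P_{j,k}
            if k != path[-1]:
                q, p = reflect(k)(q, p)         # reflection map R_k (intermediate surfaces only)
        return q, p
    return M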

Generally the luminance at the source depends on the direction of the emitted light. When the luminance does not depend on the direction, the source is called a Lambertian source [5, Ch. 3, p. 59]. A constant source luminance means that the luminance of light emitted towards an observer is independent of the observation direction: the source appears equally bright from all directions into which it emits light.

Assuming a Lambertian source with luminance equal to 1, computing \(I(p)\) reduces to computing the boundaries \(\partial \text{R}_{\text{t}}(\Pi )\) of the regions of all possible paths Π. The intensity is given by the sum of the interval lengths formed by the intersections of the support of the luminance and the line \(p=\text{const}\). The intersection points between \(p=\text{const}\) and the boundary \(\partial \text{R}_{\text{t}}(\Pi )\) have position coordinates \(q^{\text{min}}(\Pi , p)\) and \(q^{\text{max}}(\Pi , p)\), where \(q^{\text{min}}(\Pi , p) < q^{\text{max}}(\Pi , p)\). Using Eq. (10), it follows that Eq. (8) reduces to:

$$ I(p) = \sum _{\Pi }\int ^{q^{\text{max}}(\Pi , p)}_{q^{\text{min}}(\Pi , p)} L(q, p) \, \text{d}q = \sum _{\Pi}\bigl(q^{\text{max}}(\Pi , p) - q^{\text{min}}(\Pi , p)\bigr). $$
(11)

CBRM computes the light intensity \(I(p)\) using the phase spaces of all surfaces of an optical system. A parallel light beam is represented by a straight line segment on the line \(p=\text{const}\) in the source/target PS of a straight surface since the direction coordinate of all rays is the same. The intersections between \(p=\text{const}\) and the boundaries in PS are computed several times by the algorithm; this is done analytically since \(p=\text{const}\) is a horizontal line, and we have analytical descriptions of all boundaries of all phase spaces. Therefore, the light beam is required to always stay parallel, which in turn requires the optical system to consist of only straight surfaces.
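In the original algorithm these intersection points are available in closed form. Purely as an illustration, the sketch below locates the crossings of a parametrically described PS boundary with the line \(p=\text{const}\) numerically, by bracketing sign changes and refining them with a SciPy root finder; the helper name and scan resolution are our own.

import numpy as np
from scipy.optimize import brentq

def crossings_with_p_const(q_of, p_of, lam_min, lam_max, p_const, n_scan=400):
    """q-coordinates where the PS boundary (q(lam), p(lam)) crosses the line p = p_const."""
    lam = np.linspace(lam_min, lam_max, n_scan)
    f = np.array([p_of(l) for l in lam]) - p_const
    qs = []
    for i in range(n_scan - 1):
        if f[i] == 0.0:                                   # grid point lies exactly on the line
            qs.append(q_of(lam[i]))
        elif f[i] * f[i + 1] < 0.0:                       # sign change brackets a crossing
            root = brentq(lambda l: p_of(l) - p_const, lam[i], lam[i + 1])
            qs.append(q_of(root))
    return sorted(qs)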

The algorithm uses the map \(\textbf{M}_{1,\text{N}}(\Pi ) : \text{R}_{\text{s}}(\Pi ) \to \text{R}_{\text{t}}(\Pi )\) for all possible paths Π to compute \(I(p)\) for a given direction \(p=\text{const}\) in \(\text{T}_{N}\). To construct the map \(\textbf{M}_{1,\text{N}}(\Pi )\), the corresponding path Π should be known. The algorithm computes all possible paths Π by considering rays in \(\text{T}_{N}\) along a given direction \(p \in [-1,1]\) and tracing them backward recursively in PS [1]. The endpoints of the light beam in \(\text{T}_{N}\) are \((q^{\text{min}}_{\text{t},\text{N}}, p_{\text{t},\text{N}})\) and \((q^{\text{max}}_{\text{t},\text{N}}, p_{\text{t},\text{N}})\). The line \(p=\text{const}\) intersects various regions in PS. The intersection points with boundaries \(\partial \text{T}_{\text{N},j}\) are \((u^{\text{min}}_{\text{N},j}, p_{\text{t},\text{N}})\) and \((u^{\text{max}}_{\text{N},j}, p_{\text{t},\text{N}})\). The intersection segment with region \(\text{T}_{\text{N},j}\) is given by \([v^{\text{min}}_{\text{N},j}, v^{\text{max}}_{\text{N},j}] = [q^{ \text{min}}_{\text{t},\text{N}}, q^{\text{max}}_{\text{t},\text{N}}] \cap [u^{ \text{min}}_{\text{N},j}, u^{\text{max}}_{\text{N},j}]\) and corresponds to rays emitted by another surface \(j \neq \text{N}\). The endpoints of the intervals are transformed to coordinates \((q^{\text{min}}_{\text{t},j}, p_{\text{t},j})\) and \((q^{\text{max}}_{\text{t},j}, p_{\text{t},j})\) in \(\text{T}_{j}\) by sequentially applying the maps \(\textbf{P}^{-1}_{j,\text{N}} : \text{T}_{\text{N},j} \to \text{S}_{j, \text{N}}\) and \(\textbf{R}^{-1}_{j} : \text{S}_{j} \to \text{T}_{j}\). The procedure is repeated in \(\text{T}_{j}\). The recursion ends when an intersection segment \([v^{\text{min}}_{k,j}, v^{\text{max}}_{k,j}]\) is traced back to S1 or when an intersection segment is empty. If S1 is found, the endpoints \((q^{\text{min}}_{\text{s},1}, p_{\text{s},1})\) and \((q^{\text{max}}_{\text{s},1}, p_{\text{s},1})\) of the light beam are traced to \(\text{T}_{N}\) along Π by applying \(\textbf{M}_{1,\text{N}}(\Pi )\). This gives two points \((q^{\text{min}}(\Pi , p), p)\) and \((q^{\text{max}}(\Pi , p), p)\) at the boundary of a region \(\text{R}_{\text{t}}(\Pi ) \subset \text{T}_{N}\) with positive luminance. The main steps to calculate \(I(p)\) are given in Algorithm 1.

Algorithm 1

Recursive procedure to compute the intensity for optical systems consisting of only straight surfaces
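Algorithm 1 is reproduced only as a figure; to fix ideas, the sketch below captures its recursive structure as described in the text. All callables are placeholders for operations defined by the phase spaces of the system, and their interfaces are our own assumptions.

def intensity_straight(p, q_interval, N, regions_hit, back_to_T, back_to_S1, map_forward):
    """Sketch of the recursion in Algorithm 1 (placeholder interfaces, not the authors' code).

    regions_hit(k, p_k, interval) : non-empty parts of the beam interval lying in each T_{k,j}
    back_to_T(k, j, p_k, sub)     : the sub-interval mapped to T_j (P_{j,k}^{-1}, then R_j^{-1})
    back_to_S1(k, p_k, sub)       : the sub-interval mapped to S_1 (P_{1,k}^{-1} only)
    map_forward(path, p_1, sub_1) : (q_min, q_max) on T_N obtained with M_{1,N}(Pi)
    Returns I(p) as the sum of interval lengths, Eq. (11).
    """
    intensity = 0.0

    def recurse(k, p_k, interval, path):
        nonlocal intensity
        for j, sub in regions_hit(k, p_k, interval).items():
            if j == 1:                                     # the beam is traced back to the source
                p_1, sub_1 = back_to_S1(k, p_k, sub)
                q_min, q_max = map_forward([1] + path[::-1], p_1, sub_1)
                intensity += q_max - q_min                 # contribution of this path, Eq. (11)
            else:
                p_j, sub_j = back_to_T(k, j, p_k, sub)     # trace the beam back to T_j
                recurse(j, p_j, sub_j, path + [j])

    recurse(N, p, q_interval, [N])
    return intensity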

The range of angular coordinates at the target is divided into Ni equidistant intervals with endpoints \(p^{m}\) and \(p^{m+1}\), where \(m \in \{0, \dots , \text{Ni}-1 \}\). The averaged and normalized intensity Î is given for the midpoint \(p^{m+1/2} = \frac{1}{2}(p^{m}+p^{m+1})\) of every interval by:

$$ \hat{I}(p^{m + 1/2}) = \frac{1}{U_{\text{t}}} \int _{p^{m}}^{p^{m+1}} I(p) \, \text{d}p, $$
(12)

and is computed using the trapezoidal rule [4, Ch. 7, p. 100]. \(U_{\text{t}}\) denotes the total étendue at the target; recall that in PS this corresponds to the area of the region covered by the light rays. The intensity distribution is obtained by plotting \(p^{m+1/2}\) on the horizontal axis and \(\hat{I}(p^{m + 1/2})\) on the vertical axis for each interval. For more details on the CBRM algorithm see [1] and [4, Ch. 7].
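Eq. (12) can be evaluated per interval with the composite trapezoidal rule, for instance as below. The number of quadrature points per interval is our own choice and is not prescribed in [4].

import numpy as np

def averaged_intensity(I, p_edges, U_t, n_quad=5):
    """Averaged, normalized intensity of Eq. (12) using the composite trapezoidal rule.

    I       : callable returning the intensity I(p) for a direction p
    p_edges : interval endpoints p^0, ..., p^Ni
    U_t     : total etendue at the target (area of the lit region in T_N)
    """
    I_hat = []
    for pm, pm1 in zip(p_edges[:-1], p_edges[1:]):
        ps = np.linspace(pm, pm1, n_quad)
        I_hat.append(np.trapz([I(s) for s in ps], ps) / U_t)
    return np.array(I_hat)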

Figure 8 shows the first steps of the algorithm for light on \(p=-0.2\) in T4 on the two-faceted cup. In step 1 (Fig. 8a) we find light traveling directly from source to target. The algorithm updates the intensity and continues to the next region of PS in T4. In step 2 (Fig. 8b) we find light traveling from surface 2 (left reflector) to the target. The light is traced to T2 where it lies on \(p=-0.82\) and the procedure is repeated. In step 3 (Fig. 8c) we find light traveling from the source to surface 2. The algorithm traces this light back to the target and updates the intensity, then it continues to the next region of T2. In step 4 (Fig. 8d) we find light traveling from surface 3 (right reflector) to surface 2. The light is traced to T3 where it lies on \(p=0.29\) and the procedure is repeated. In step 5 (Fig. 8e) we find light traveling from surface 2 to surface 3. The light is traced to T2 where it lies on \(p=0.41\) and the procedure is repeated. In step 6 (Fig. 8f) we find light traveling from surface 3 to surface 2. The light is traced to T3 where it does not intersect any region of PS, meaning it was not emitted by the source. Therefore, the computation for this part of the light beam stops; the algorithm is finished for the light of step 2 reaching the target from surface 2. The process is repeated for the light reaching the target from surface 3.

Figure 8

Concatenated backward ray mapping on the two-faceted cup for \(p=-0.2\) in T4. The intersection segment computed at each step is colored green

3.1 Intensity distribution of the two-faceted cup

The cup is a simple system for which we can compute the intensity distribution exactly. The exact intensity is not computed from a single mathematical expression but with a special procedure that is given in [6]. The target is divided into 100 bins for (Q)MC ray tracing. The intensity in each bin is also computed with CBRM using Eq. (12). The intensity distribution found with MC ray tracing, shown in Fig. 9, is noisy and not close to the exact solution. The intensity distribution computed with CBRM, shown in Fig. 9, matches the exact solution precisely. MC ray tracing required \(10^{4}\) rays, but CBRM required only \(10^{3}\) rays. CBRM is more accurate than MC ray tracing and requires fewer rays to compute the intensity. We can use CBRM to find the boundaries of the positive luminance regions in T4; they are shown in Fig. 10.

Figure 9

Intensity distributions of the two-faceted cup computed with MC ray tracing (green) and CBRM (red) compared to the exact solution

Figure 10

Points on the boundaries of the positive luminance regions in T4 of the cup for all paths Π. Each color represents a different path

4 Generalized algorithm

CBRM only handles parallel beams of light, limiting the algorithm to optical systems consisting of only straight surfaces. We generalize the algorithm to accommodate curved surfaces. Equation (11) defines the intensity in TN as the sum of the interval lengths formed by the intersections of the support of the luminance and the line \(p=\text{const}\). Therefore, we must still consider a parallel beam of light at the target. The direction of all rays of the beam will vary as it reflects on a curved surface. As a result, the light beam is not necessarily parallel at the other surfaces of the system. In PS this means that position and direction coordinates of all rays in the beam can vary. The beam is no longer represented by a straight line segment on \(p=\text{const}\), but by a curved segment for which we generally have no analytical expression.

Intersections in PS cannot be computed analytically since there is no expression for the light beam. We instead discretize the line \(p=\text{const}\) in \(\text{T}_{N}\) at the start of the procedure by taking equidistant points and connect them with straight line segments. This discretization is the PS representation of the light beam; when traced to other surfaces it forms a discretized curve C.

In addition to the light beam, the phase spaces of the optical system are also discretized. Recall from Sect. 2 that there are analytic descriptions of all boundaries of all phase spaces. We discretize each boundary in PS by taking points on these boundaries equidistant in the q-direction, and connect them with straight line segments. Figure 11 shows T4 of the CPC with light at \(p=-0.1\) and the discretization used by the algorithm. Each discretized PS is stored in a doubly connected edge list (DCEL) [7, Ch. 2, p. 29]. The DCEL stores each region of the PS as a face, each point as a vertex and each line segment as a pair of half-edges. All edges are incident to two faces of the DCEL; therefore, they are split into two half-edges such that each half-edge has exactly one incident face. Each face is bounded by the vertices and half-edges that form the boundary of the PS region. The half-edges connect pairs of vertices and are ordered counterclockwise around the face they bound. The DCEL is a convenient data structure for storing geometric information. It makes it easy to perform operations such as traversing the boundary of a given face, accessing a face from an adjacent one if a common edge is given, or visiting all edges around a given vertex.
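A minimal DCEL sketch for a discretized phase space could look as follows; the record layout follows [7], while the field and label names are our own.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Vertex:                          # a point of a discretized PS boundary
    q: float
    p: float

@dataclass
class Face:                            # a region S_{j,k} or T_{j,l} of the discretized PS
    label: tuple                       # e.g. ("T", 4, 1)
    edge: Optional["HalfEdge"] = None  # one half-edge on the boundary of the face

@dataclass
class HalfEdge:                        # one of the two directed copies of a boundary segment
    origin: Vertex
    twin: Optional["HalfEdge"] = None  # opposite half-edge of the same segment
    next: Optional["HalfEdge"] = None  # next half-edge counterclockwise around the face
    face: Optional["Face"] = None      # the single incident face

def face_vertices(face: Face) -> List[Vertex]:
    """Traverse the boundary of a face counterclockwise by following the next-pointers."""
    out, e = [], face.edge
    while e is not None:
        out.append(e.origin)
        e = e.next
        if e is face.edge:
            break
    return out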

Figure 11

The line \(p=-0.1\) in T4 of the CPC

Computing an intersection between C and the boundaries in PS reduces to computing the intersection between two straight line segments: one segment of C and one segment of the discretized boundaries. However, since the PS and the light beam are discretized with many segments, we would have to check many pairs of segments for intersection. To avoid this we build a KD-tree [8] for each PS. The KD-tree places a bounding box around the PS and subdivides it into increasingly smaller regions, storing the boundary segments in its leaves. Non-overlapping regions of PS are stored in the internal nodes and leaves of the KD-tree, i.e., the data structure is a space partition. A boundary segment that crosses several regions of the partition is stored in every leaf whose region it intersects. The tree is a binary tree and every node is split in two along an axis-aligned split plane. We use the surface area heuristic (SAH) [8] to select the best splitting plane for every potential split. With the SAH we compute a cost for all possible split planes of a region of the KD-tree; the split plane with the lowest cost is considered the best one. A region is split only when the cost of the best split plane is less than the cost of not splitting the region; otherwise the region becomes a leaf.
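A sketch of the SAH cost for one candidate split is given below; in 2D the "surface area" of a node reduces to the perimeter of its bounding box. The cost constants and the use of segment endpoints as split candidates are our assumptions; see [8] for the full construction.

import numpy as np

def sah_cost(segments, lo, hi, axis, split, c_trav=1.0, c_isect=1.5):
    """SAH cost of splitting the node box [lo, hi] at 'split' along 'axis' (0 = q, 1 = p).

    segments : (n, 2, 2) array of boundary segments, two (q, p) endpoints each
    lo, hi   : length-2 arrays with the lower and upper corner of the node box
    """
    def perimeter(a, b):
        return 2.0 * ((b[0] - a[0]) + (b[1] - a[1]))

    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[axis], lo_right[axis] = split, split
    n_left = np.count_nonzero(segments[..., axis].min(axis=1) <= split)    # segments reaching the left child
    n_right = np.count_nonzero(segments[..., axis].max(axis=1) >= split)   # segments reaching the right child
    area = perimeter(lo, hi)
    return c_trav + c_isect * (perimeter(lo, hi_left) / area * n_left
                               + perimeter(lo_right, hi) / area * n_right)

def best_split(segments, lo, hi, axis, c_isect=1.5):
    """Lowest-cost split along 'axis'; the node becomes a leaf if no split beats it."""
    candidates = np.unique(segments[..., axis])
    costs = [sah_cost(segments, lo, hi, axis, s) for s in candidates]
    i = int(np.argmin(costs))
    leaf_cost = c_isect * len(segments)            # cost of not splitting the node
    return (candidates[i], costs[i]) if costs[i] < leaf_cost else (None, leaf_cost)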

Given a segment of C we create a half line by extending the origin of the segment beyond the outer boundaries of PS. The origin of the segment is the endpoint with the smallest q-coordinate; the segment is extended along its direction towards increasing q. The segments of the boundaries in PS that can be intersected by the half line are in the cells (leaves) of the KD-tree intersected by the half line; the segments in all cells not intersected by the half line can be ignored. We are interested in the intersection point closest to the origin of the half line. Therefore, we first check for intersections in the cell closest to the origin and continue through the other cells along the half line. When an intersection is found between the half line and a PS segment, all subsequent cells of the tree along the ray can be ignored. If the intersection point lies on the segment of C, then an intersection between C and a discretized boundary is found; if the intersection point is not on the segment of C, then the segment is contained inside a PS region.
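The front-to-back search over the pierced cells can be sketched as follows; producing the ordered list of leaves is the standard KD-tree traversal and is assumed to be available, as is a segment intersection routine.

def first_hit(origin, direction, leaves_along_ray, seg_intersect):
    """Nearest intersection of a half line with the discretized PS boundaries (sketch).

    leaves_along_ray : leaf cells pierced by the half line, ordered front to back; each
                       leaf is assumed to expose the boundary segments it stores
    seg_intersect    : (origin, direction, segment) -> distance t >= 0 along the half line,
                       or None if there is no intersection
    """
    for leaf in leaves_along_ray:
        hits = [t for seg in leaf.segments
                if (t := seg_intersect(origin, direction, seg)) is not None]
        # A full implementation must also check that a hit lies inside the current cell,
        # since a segment stored in several leaves may be intersected in a later cell.
        if hits:
            return min(hits)           # later cells cannot contain a closer valid hit
    return None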

The luminance for all possible paths Π for a given direction \(p \in [-1,1]\) in \(\text{T}_{N}\) is computed recursively, similarly to the original algorithm. The endpoints of the light beam in \(\text{T}_{N}\) are \(C^{\min}_{\text{t},\text{N}}\) and \(C^{\max}_{\text{t},\text{N}}\). The discretized curve \(C_{\text{t},\text{N}}\) intersects various regions in \(\text{T}_{N}\). \(C_{\text{N},j}\) is the subset of \(C_{\text{t},\text{N}}\) intersecting the region \(\text{T}_{\text{N},j}\). The intersection points \(C^{\min}_{\text{N},j}\) and \(C^{\max}_{\text{N},j}\) with \(\partial \text{T}_{\text{N},j}\) are computed analytically. Each subset \(C_{\text{N},j}\) corresponds to rays emitted by another surface \(j \neq \text{N}\). All segments of \(C_{\text{N},j}\) are transformed to a new discretized curve \(C_{\text{t},j}\) with endpoints \(C^{\min}_{\text{t},j}\) and \(C^{\max}_{\text{t},j}\) in \(\text{T}_{j}\). This is done by sequentially applying the maps \(\textbf{P}^{-1}_{j,\text{N}} : \text{T}_{\text{N},j} \to \text{S}_{j, \text{N}}\) and \(\textbf{R}^{-1}_{j} : \text{S}_{j} \to \text{T}_{j}\). The procedure is repeated in \(\text{T}_{j}\). The recursion ends when a subset \(C_{k,j}\) is traced back to S1 or when a subset is empty, like in the original algorithm. If S1 is found, all points of \(C_{\text{s},1}\) are traced to \(\text{T}_{N}\) along Π by applying \(\textbf{M}_{1,\text{N}}(\Pi )\). The endpoints \(C^{\min}(\Pi , p)\) and \(C^{\max}(\Pi , p)\) of \(C(\Pi , p)\) are on the boundary of a region \(\text{R}_{\text{t}}(\Pi ) \subset \text{T}_{N}\) with positive luminance. The steps of the generalized CBRM algorithm are given in Algorithm 2.

Algorithm 2

Recursive procedure to compute the intensity for optical systems containing curved surfaces
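The step that distinguishes Algorithm 2 from Algorithm 1 is that entire polylines, rather than two interval endpoints, are traced back through the inverse maps. A sketch of that step with placeholder maps:

import numpy as np

def trace_subcurve_back(C_Nj, inv_propagate, inv_reflect):
    """Map a discretized sub-curve C_{N,j} (polyline of (q, p) points in T_N) back to T_j.

    inv_propagate and inv_reflect are placeholders for P_{j,N}^{-1} and R_j^{-1}; the
    generalized algorithm applies them to every vertex of the polyline, not only to
    its endpoints.
    """
    C = np.asarray(C_Nj, dtype=float)
    on_S_j = np.array([inv_propagate(q, p) for q, p in C])    # T_{N,j} -> S_{j,N}
    return np.array([inv_reflect(q, p) for q, p in on_S_j])   # S_j -> T_j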

4.1 Intensity distribution of the CPC

Recall from Sect. 2.2 that light in the CPC can reflect an infinite number of times; as a result, generalized CBRM can run indefinitely. To prevent this we do not consider light that reflects more than 10 times. The reference solution of the CPC is computed with QMC ray tracing using \(10^{9}\) rays that reflect no more than 10 times. The target is divided into 110 bins for (Q)MC ray tracing. The intensity in each bin is also computed with generalized CBRM using Eq. (12). The intensity distribution computed with MC ray tracing, shown in Fig. 12, is again noisy like for the two-faceted cup. Generalized CBRM is able to match the reference solution closely, as can be seen in Fig. 12. MC ray tracing required \(10^{5}\) rays reflecting no more than 10 times, but generalized CBRM required only \(1.6 \cdot 10^{4}\) rays. The light beam and the boundaries in PS were discretized with \(10^{3}\) segments each for the generalized CBRM method. The generalized CBRM algorithm is much more accurate than MC ray tracing and requires far fewer rays to compute the intensity. We can also use generalized CBRM to find the boundaries of the positive luminance regions in T4; they are shown in Fig. 13.

Figure 12

Intensity distributions of the CPC computed with MC ray tracing (green) and generalized CBRM (red) compared to a reference solution

Figure 13

Points on the boundaries of the positive luminance regions in T4 of the CPC for all paths Π with a maximum of 10 reflections. Each color represents a different path

5 Numerical experiments

We compare generalized CBRM to CBRM and MC ray tracing on the CPC. Recall from Sect. 2.2 that light in the CPC can reflect an infinite number of times; generalized CBRM can therefore run indefinitely. To prevent this we do not consider light that reflects more than 10 times. This restriction is also applied to CBRM and MC ray tracing for a correct comparison. Since CBRM only handles optical systems consisting of straight surfaces, it is applied to a discretized CPC. We discretize each reflector by taking points on the parabola equidistant in the x-direction, and connect them with straight line segments. The maximum number of reflections that can occur in the discretized CPC is limited by the number of discrete segments. Recall from Sect. 2.2 that a light ray reflects on at most one reflector of the CPC. Therefore, a ray in the discretized CPC can reflect on at most all segments that discretize a reflector. MC ray tracing and generalized CBRM are applied to the regular CPC.

We first compare intensity distributions of the CPC computed with generalized CBRM, CBRM and MC ray tracing to a reference solution. We apply CBRM to a discretized CPC that has 10 segments discretizing each reflector, such that the maximum number of reflections that can occur is equal to the maximum number of reflections we consider. The boundaries in PS and the light beam used by generalized CBRM are all discretized with \(10^{3}\) segments. We compute the reference solution by QMC ray tracing \(10^{9}\) rays that reflect at most 10 times. The target is divided into 110 bins for MC ray tracing. The intensity in each bin is also computed with (generalized) CBRM using Eq. (12) where the endpoints of the bin are \(p^{m}\) and \(p^{m+1}\).

Next, we compare the performance of generalized CBRM for various parameter settings of the algorithm. This is done twice, using 110 and 1010 intervals/bins at the target. We define the performance as the error of the solution compared to the computation time. The error is calculated with:

$$ \text{error} = \frac{1}{\text{Ni}} \ \sum _{m=1}^{\text{Ni}}|\hat{I}(p^{m + 1/2}) - \hat{I}_{\text{ref}}(p^{m + 1/2})|, $$
(13)

where Ni is the number of bins (intervals) at the target and \(\hat{I}_{\text{ref}}\) denotes the reference intensity. We compare five different discretizations of the boundaries in PS using \(100 \cdot 2^{i}\) segments, where \(i \in \{0, \dots , 4\}\). We compute the intensity distribution for each PS discretization 10 times using different discretizations of the light beam consisting of \(100 \cdot 2^{i}\) segments, where \(i \in \{0, \dots , 9\}\). The reference solution is again computed by QMC ray tracing a number of rays that reflect at most 10 times. We use a reference solution of \(10^{9}\) rays when the target is divided into 110 bins and of \(10^{10}\) rays when the target is divided into 1010 bins. The best performing discretization of the boundaries in PS is the one that reaches the smallest error. If the smallest errors are similar then the performance is determined by the time it takes to compute the smallest error. In this case, the best performing discretization of the boundaries in PS is the one with the least computation time.
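In code, the error measure of Eq. (13) is a plain mean absolute difference over the bins (NumPy assumed):

import numpy as np

def mean_abs_error(I_hat, I_ref):
    """Error of Eq. (13): mean absolute difference between computed and reference intensity."""
    I_hat, I_ref = np.asarray(I_hat, dtype=float), np.asarray(I_ref, dtype=float)
    return float(np.mean(np.abs(I_hat - I_ref)))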

Finally, we compare the discretizations (of the boundaries in PS) that give the best performance of generalized CBRM for 110 and 1010 bins to the performance of CBRM. The intensity distribution of CBRM is computed using discretizations of \(2^{i}\) segments on the reflectors, where \(i \in \{0, \dots , 9\}\). The performance of CBRM is also defined as the computation time compared to the error of the solution given in Eq. (13). We compute the error of CBRM using the reference solutions from the previous experiment.

6 Results

The aim of this research was to introduce a generalization of the concatenated backward ray mapping algorithm. We discussed the original backward method and our generalized method. Both algorithms were used to compute the intensity distribution on the compound parabolic concentrator.

Figure 14a and Fig. 14b show three intensity distributions of the CPC computed by generalized CBRM, CBRM and MC ray tracing compared to a reference solution. The figures clearly show the differences between the intensity distributions found by the algorithms. MC ray tracing computed a noisy intensity distribution with a profile similar to that of the reference solution. CBRM on the other hand found an intensity distribution with a profile that differs from the profile of the reference solution. Finally, generalized CBRM found an intensity distribution closely matching the profile of the reference solution and without any noise. It took \(3.6 \cdot 10^{4}\) rays to compute the CBRM solution, \(1.6 \cdot 10^{4}\) rays to compute the generalized CBRM solution and \(10^{6}\) rays to compute the MC ray tracing solution. MC ray tracing gives an intensity distribution with steep sides that is similar to the reference solution because it was applied to the regular CPC. However, the intensity distribution is noisy due to the random selection of rays at the source. This is in agreement with the noisy distributions of the cup and the CPC that were previously discussed in Sect. 3.1 (Fig. 9) and Sect. 4.1 (Fig. 12). CBRM [1] on the other hand was applied to a discretization of the CPC where each reflector is replaced by 10 straight line segments. This optical system is different from the CPC, so it also behaves differently. As a result CBRM gave an intensity distribution without steep sides that differs from the reference solution. A finer discretization of the CPC makes the systems more alike, so the distributions should also be more alike. Generalized CBRM did not require the optical system to be discretized, and it did not use a random set of light rays. It computed an intensity distribution with steep sides that is similar to the reference solution, without any noise. What stands out in these results is that generalized CBRM is more accurate than MC ray tracing and CBRM while requiring fewer rays to compute the intensity distribution.

Figure 14

Comparison of intensity distributions of the CPC. We compare MC ray tracing (green), CBRM (blue) and generalized CBRM (red) to a reference solution computed with QMC ray tracing

Figure 15a compares the performance of generalized CBRM for different parameter settings of the algorithm using 110 intervals/bins at the target surface. Every curve was computed using a different discretization of the boundaries in PS. What is interesting about Fig. 15a is that all curves have a similar shape, which implies that using a finer discretization of the boundaries has only a small effect on the performance of generalized CBRM. The points of each curve were computed using different discretizations of the light beam. The error of all curves became much smaller as the number of light beam segments increased. We can also see in Fig. 15a that all curves stopped improving at the same discretization of the light beam. The discretization of the light beam had a larger effect on performance than the discretization of the boundaries in PS because of the recursion in generalized CBRM. During every iteration, generalized CBRM computes the intersection between the light beam and the regions in PS; the algorithm continues for each intersection segment. As a result, generalized CBRM works with a smaller subset of the discretized light beam after each reflection. Therefore, it is important to discretize the light beam with enough segments to ensure that it still closely resembles the light beam after many reflections. Still, there is a point after which increasing the number of discrete segments on the light beam no longer improves the accuracy of the solution.

Figure 15

Performance of generalized CBRM on the CPC. The error is the difference between the intensity distribution computed with generalized CBRM and a reference solution computed with QMC ray tracing. We compare PS discretizations of 100 (blue), 200 (orange), 400 (yellow), 800 (purple) and \(1.6 \cdot 10^{3}\) (green) segments

The test of Fig. 15a was repeated in Fig. 15b, this time using 1010 bins at the target. The results are similar to those of the previous test, but there are two notable differences. The algorithm was more accurate because more bins were used at the target, which allowed it to use more PS information. Furthermore, using only 100 segments to discretize the boundaries in PS significantly reduced the performance of the algorithm. This happened because generalized CBRM used more PS information to compute the intensity profile; as a result it also required a finer discretization of the boundaries in PS. This shows that there is a correlation between the number of bins at the target and the minimum discretization of the boundaries in PS required by generalized CBRM.

We compared generalized CBRM, using the PS discretization with the best performance, to the performance of CBRM. In Fig. 15a, all discretizations of the boundaries in PS reach a similar error value. So, the best result for the case of 110 bins at the target is the discretization with 100 boundary segments since it takes the least computation time. In Fig. 15b, the discretizations also reach a similar error except for the discretization of 100 boundary segments. So, the best result for the case of 1010 bins at the target is the discretization with 200 boundary segments. The comparison of CBRM and generalized CBRM for the case of 110 bins at the target is shown in Fig. 16a. The most striking observation in this figure is the sudden decrease of the CBRM error for a discretization of 64 reflector segments. The figure also shows that the generalized CBRM error is smaller than the CBRM error in most cases but that CBRM reaches maximum precision slightly faster than generalized CBRM. The sudden decrease of the CBRM error can be explained by the target PS of the target surface of the discretized CPC. The discretized CPC is different from the CPC, meaning that the non-empty regions in the target PS of the target are also different from those of the CPC. Using a discretization with more reflector segments makes the optical systems, and hence the non-empty regions in the target PS of the target, more alike. At a certain discretization the shape of the non-empty regions becomes so similar to that of the CPC that the accuracy suddenly increases.

Figure 16

Performance of generalized CBRM (blue) compared to CBRM (orange) on the CPC. The error is the difference between the intensity distributions and a reference solution computed with QMC ray tracing

Figure 16b compares the performance of CBRM and generalized CBRM for the case where there are 1010 bins at the target. What stands out in this figure is the performance difference between CBRM and generalized CBRM. Generalized CBRM behaved similarly to the case of 110 bins at the target. However, the number of bins at the target had a large effect on the performance of CBRM: it took a much finer discretization of the CPC to reach maximum precision. When CBRM computes with more bins it uses more information about the target PS of the target. As a consequence the algorithm requires a finer discretization of the CPC before the shape of the non-empty regions becomes similar enough to that of the CPC for the sudden increase in accuracy to occur.

7 Conclusion

In this paper we introduced a generalized concatenated backward ray mapping (CBRM) algorithm that handles optical systems with curved surfaces. We presented a more general description of the boundaries in phase space (PS), allowing us to compute the source and target PS of curved optical surfaces. We modified the original CBRM algorithm to be able to compute the intensity distribution for optical systems containing curved surfaces. The generalized algorithm uses a discretization of the phase spaces of an optical system and a discretization of the PS representation of the light beam. We implemented a doubly connected edge list (DCEL) and KD-tree data structure to compute more efficiently in PS. CBRM and generalized CBRM are applied to the compound parabolic concentrator (CPC), a standard optical system that reshapes light from a Lambertian source into a focused beam.

MC ray tracing is limited by the random selection of rays at the source, because of which the intensity distribution is noisy. The number of bins at the target has a large effect on CBRM. The CPC must be discretized by many reflector segments if the number of bins is large; otherwise, CBRM is not accurate enough. However, if the CPC is discretized with many segments, CBRM is slow. Generalized CBRM on the other hand is affected by the discretization of the light beam and the discretization of the boundaries in PS. The number of bins at the target has no effect on generalized CBRM. Of the two discretizations, the discretization of the light beam has the biggest effect on performance. The number of light beam segments required for the discretization depends on the maximum number of reflections for which the algorithm is computed; more segments are required for more reflections. Generalized CBRM performs as well as or better than CBRM, depending on the number of bins at the target.

We have shown that there are general expressions of the boundaries in PS and that they can be used to compute the phase spaces of an optical system with curved surfaces. We introduced a generalized backward ray mapping algorithm that uses this PS information to compute the intensity distribution of an optical system. Our results showed that generalized CBRM computes the intensity distribution of the CPC faster and more accurately than the original CBRM method and Monte Carlo ray tracing.

While generalized CBRM is applied here to an optical system with parabolic reflectors, it can handle arbitrary curved reflectors. The expressions of the boundaries in PS can be applied to the surfaces of a lens as well. This allows us to obtain the source and target PS of the lens surfaces. Point-source to point-target optical systems are not supported because their PS cannot be described by the boundary expressions. Currently, only the law of reflection is implemented. To be able to handle lenses we must also implement the law of refraction. The algorithm should apply the law of reflection when computing in the PS of a reflector; otherwise it should apply the law of refraction. Implementing this change in the algorithm is fairly simple. However, implementing additional optical components is nontrivial and time-consuming.

Future research may involve combining reflection and refraction in the algorithm, which allows us to compute the intensity distribution of most 2D optical systems. Another extension is to modify the expressions of the PS boundaries to support a point source and point target. It is also interesting to explore splines instead of straight segments to discretize the light beam and the boundaries in PS which could improve the accuracy of the algorithm. Another research direction is to extend the method to 3D optical systems which will make it more applicable; a first step could be to find a suitable representation of the boundaries in PS for 3D optical systems.

Data availability

The datasets used and/or analysed during the study are not publicly available at this time but may be obtained from the authors upon request.

Code availability

The simulation code used for obtaining the results presented in this paper is not publicly available at this time but may be obtained from the authors upon request.

Abbreviations

MC: Monte Carlo

QMC: Quasi-Monte Carlo

PS: phase space

CBRM: concatenated backward ray mapping

DCEL: doubly connected edge list

CPC: compound parabolic concentrator

References

  1. Filosa C, ten Thije Boonkkamp J, IJzerman W. Inverse ray mapping in phase space for two-dimensional reflective optical systems. J Math Ind. 2021;11(1):4.


  2. Jensen H, et al. Monte Carlo ray tracing. Siggraph Course notes 44; 2003.

  3. Caflisch RE. Monte Carlo and quasi-Monte Carlo methods. Acta Numer. 1998;7:49. https://doi.org/10.1017/S0962492900002804.


  4. Filosa C. Phase space ray tracing for illumination optics. Eindhoven University of Technology; 2018.

  5. Chaves J. Introduction to nonimaging optics. 1st ed. Boca Raton: CRC Press; 2008.


  6. van den Berg JB, et al. Non-imaging optics for LED-lighting. In: Proceedings of the 84th European study group mathematics with industry (SWI 2012). 2013. p. 70–103.


  7. de Berg M, Cheong O, van Kreveld M, Overmars M. Computational geometry. 3rd ed. Berlin: Springer; 2008.


  8. Wald I, Havran V. On building fast kd-trees for ray tracing, and on doing that in O(N log N). In: 2006 IEEE symposium on interactive ray tracing. 2006. p. 61–9.



Acknowledgements

Not applicable.

Funding

This work was funded by NWO and Technische Wetenschappen in the framework of the project “3D phase space ray tracing for the illumination optics industry”, project 17959.

Author information


Contributions

All authors are equal contributors to this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Willem Jansen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Jansen, W., Anthonissen, M., ten Thije Boonkkamp, J. et al. Concatenated backward ray mapping on the compound parabolic concentrator. J.Math.Industry 14, 11 (2024). https://doi.org/10.1186/s13362-024-00149-6
