
Regularization: Moving Least Squares

To perform stable fiber tracing on experimental data, the data must be filtered. A simple global box or Gaussian filter will not work well, since it blurs (destroys) most of the directional information in the data. We want a filter that adapts to the data, putting more weight on the data along the direction of the traced fiber rather than between fibers. We also want the filter to have an adjustable local support, which can be modified according to the confidence level in the data. Finally, we want the filter to preserve sharp features where they exist (ridges), but eliminate irrelevant noise where it can. The behavior of the filter at a given voxel therefore depends on the ``history of the tracing'' (where the fiber came from), so the filtering must be tightly coupled with the fiber tracing process. For these reasons, we chose a moving least squares (MLS) approach. The idea behind this local regularization method is to find a low-degree polynomial that best fits the data, in the least squares sense, in a small region around the point of interest. We then replace the data value at that point by the value of the polynomial at that point, ${\mathbf T}(x_p,y_p,z_p) \rightarrow \bar {\mathbf T}_p$. Thus, this is a data approximation rather than an interpolation method. The measure of ``fitness'' depends on the filter location, orientation, and the history of motion. The one-dimensional MLS method was first introduced in the signal processing literature [9,17]. We start the derivation by writing a functional $E$ that measures the quality of the polynomial fit to the data in the world coordinate system. To produce $E$, we integrate the squared difference between $\mathbf F$, an unknown linear combination of tensor basis functions, and $\mathbf T$, the known continuous trilinearly interpolated version of the given tensor data. We integrate over all of 3-D space and use a weighting function $G$ to create a region of interest centered at a chosen 3-D point ${\mathbf r}_p$ with coordinates $(x_p,y_p,z_p)$:

\begin{displaymath}
E({\mathbf r}_p) =
\int_{-\infty}^{\infty} G({\mathbf r}-{\mathbf r}_p; {\mathbf T}_p)
~[{\mathbf F}({\mathbf r}-{\mathbf r_p}) - {\mathbf T}({\mathbf r})]^2
~d{\mathbf r}^3
\end{displaymath} (8)
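Before continuing with the 3-D tensor derivation, the MLS idea can be made concrete in one dimension. The sketch below is our own illustration (the function name `mls_smooth_1d` and the Gaussian weight width `h` are assumptions, not the paper's code): around each evaluation point, fit a low-degree polynomial by weighted least squares and keep its value at the center.

```python
import numpy as np

def mls_smooth_1d(x, y, x0, h=1.0, degree=2):
    """Weighted least squares fit of a low-degree polynomial around x0;
    the smoothed value is the polynomial evaluated at x0 itself."""
    w = np.exp(-((x - x0) / h) ** 2)              # weight: region of interest around x0
    # Monomial basis in the local coordinate t = x - x0: columns 1, t, t^2, ...
    V = np.vander(x - x0, degree + 1, increasing=True)
    # Normal equations (V^T W V) a = V^T W y, the 1-D analogue of Eq. (21)
    M = V.T @ (w[:, None] * V)
    b = V.T @ (w * y)
    a = np.linalg.solve(M, b)
    return a[0]                                   # polynomial value at t = 0

# Approximation, not interpolation: the smoothed curve need not pass
# through the noisy samples
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
y_hat = np.array([mls_smooth_1d(x, y, xi, h=0.5) for xi in x])
```

Replacing the scalar samples with tensor components and the isotropic weight with an oriented, data-dependent one is exactly what the rest of the derivation does.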

The second argument ${\mathbf T_p}$ (the value of the tensor at the center point) of the weighting function $G$ determines the weighting function's size and orientation. The square of the tensor difference in (8) is a scalar, defined through the double-dot (component-wise) product [4,3]:

\begin{displaymath}
({\mathbf F} - {\mathbf T})^2 = ({\mathbf F} - {\mathbf T}):({\mathbf F} - {\mathbf T}) =
\sum_{\alpha\beta}({\mathbf F}^{\alpha\beta} - {\mathbf T}^{\alpha\beta})({\mathbf F}^{\alpha\beta} - {\mathbf T}^{\alpha\beta}) =
\sum_{\alpha\beta}({\mathbf F}^{\alpha\beta} - {\mathbf T}^{\alpha\beta})^2
\end{displaymath} (9)

 
Figure 4: Coordinate systems used by linear transformation Eq. $(10)$ to change from $\{x, y, z\}$ coordinates in Eq. $(8)$ into $\{\zeta , \eta , \theta \}$ coordinates in Eq. $(11)$.

 

Within the functional $E$, the tensor function ${\mathbf F}$ is a linear combination of tensor basis functions that we will use to fit the data. The function $G$ is the moving and rotating anisotropic filtering window, centered at the point ${\mathbf r}_p$ (see Figure 4). It is more convenient to perform the computations in the local frame of reference attached to a particular filtering window, rather than in world coordinates. It is straightforward to transform equation (8) to a local frame of reference using a change of variables involving the following translation and rotation:

\begin{displaymath}
\left(
\begin{array}{c}
\zeta \\
\eta \\
\theta \\
\end{array}
\right) =
{\mathbf R}_p^{-1}
\left(
\begin{array}{c}
x - x_p\\
y - y_p\\
z - z_p\\
\end{array}
\right) =
{\mathbf R}_p^{-1}({\mathbf r} - {\mathbf r_p})
\end{displaymath} (10)

where ${\mathbf R}_p = \{ {\mathbf e}_1,{\mathbf e}_2,{\mathbf e}_3
\}$ is a rotation matrix formed by the eigenvectors of the tensor ${\mathbf T}_p$ at the point ${\mathbf r}_p$. Then, in the local frame of reference, $\{\zeta , \eta , \theta \}$, the equation (8) becomes

\begin{displaymath}
E = \int_V [ {\mathbf F}({\mathbf R}_p \{\zeta,\eta,\theta\}) - {\mathbf T}({\mathbf r_p} + {\mathbf R}_p\{\zeta,\eta,\theta\})]^2
~G({\mathbf R}_p\{\zeta,\eta,\theta\};{\mathbf T_p})~d\zeta\, d\eta\, d\theta
\end{displaymath} (11)
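The change of variables in Eq. (10) is easy to sketch numerically. The helper names below (`local_frame`, `to_local`) are ours; we assume only that the tensor is symmetric, so its eigendecomposition is orthogonal and the inverse of ${\mathbf R}_p$ is its transpose.

```python
import numpy as np

def local_frame(T_p):
    """Eigen-decomposition of the symmetric tensor T_p: returns eigenvalues in
    decreasing order and R_p whose columns are the matching unit eigenvectors."""
    lam, vecs = np.linalg.eigh(T_p)          # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1]
    return lam[order], vecs[:, order]

def to_local(r, r_p, R_p):
    """(zeta, eta, theta) = R_p^{-1}(r - r_p); R_p is orthogonal, so R_p^{-1} = R_p^T."""
    return R_p.T @ (r - r_p)

# Example: a tensor whose principal axis is the x direction
T_p = np.diag([3.0, 1.0, 0.5])
lam, R_p = local_frame(T_p)
zeta_eta_theta = to_local(np.array([2.0, 0.0, 0.0]), np.zeros(3), R_p)
```

Note that eigenvectors are defined only up to sign, so a consistent orientation of the local axes along the traced fiber must be enforced separately.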

Integration is performed over the parts of space where $G$ has been chosen to be nonzero. We now instantiate ${\mathbf F}$ and $G$ in the local (rotated and translated) frame of reference. These can be thought of as functions of $\{\zeta , \eta , \theta \}$ only, since ${\mathbf R}_p$ is independent of the integration variables. For $\mathbf F$ we will use a polynomial function of degree $N$ in the variables $\zeta$, $\eta$ and $\theta$:
\begin{displaymath}
{\mathbf F}({\mathbf R}_p~\{\zeta,\eta,\theta\})
\equiv
{\mathbf f}({\mathbf A};
\zeta,\eta,\theta)
\end{displaymath} (12)

where
\begin{displaymath}
{\mathbf f}({\mathbf A}; \zeta,\eta,\theta) = \sum_{mnp} {\mathbf A}_{mnp}
~\zeta^m \eta ^n \theta^p
\end{displaymath} (13)

 

Figure 5: Oriented moving least squares (MLS) tensor filter. The smallest ellipsoids represent the interpolated tensor data; the largest ellipsoid represents the domain of the moving filter $g()$ described in Eq. $(14)$; the dark ellipsoid represents the computed filtered tensor. The filter travels continuously along the fiber line; the grid shows the initial sampling of the tensor data.

 

 

To instantiate $G$, we also use a function of variables $\zeta$, $\eta$ and $\theta$:

\begin{displaymath}
G({\mathbf R_p}\{\zeta,\eta,\theta\};{\mathbf T_p}) \equiv g(\zeta,\eta,\theta;\lambda_p^i)
\end{displaymath} (14)

The function $g()$ is clipped to zero below some small threshold value to create a finite integration volume, and $\lambda_p^i$ are the eigenvalues of the tensor ${\mathbf T_p}$. Substituting expressions (13) and (14) into equation (11), we get:

\begin{displaymath}
E = \int_V [\sum_{mnp} {\mathbf A}_{mnp}
~\zeta^m \eta^n \theta^p - {\mathbf T}({\mathbf r_p} + {\mathbf R}_p\{\zeta,\eta,\theta\})]^2
~g(\zeta,\eta,\theta;\lambda_p^i)~d\zeta\, d\eta\, d\theta
\end{displaymath} (15)

The least squares fitting procedure reduces to minimizing the functional $E$ with respect to the tensor elements ${\mathbf A}_{rst}^{\alpha\beta}$. To do this we differentiate expression (15) with respect to each of the $\mathbf A$ coefficients and equate the result to zero:
\begin{displaymath}
\partial E/\partial {\mathbf A}_{rst}^{\alpha\beta} = 0
\end{displaymath} (16)

This gives us the following linear system for the unknown $\mathbf A$'s:
\begin{displaymath}
\sum_{mnp} {\mathbf M}_{mnp,rst} {\mathbf A}_{mnp}^{\alpha\beta} = {\mathbf B}_{rst}^{\alpha\beta}
\end{displaymath} (17)

where ${\mathbf M}_{mnp,rst}$ are the elements of the matrix and ${\mathbf B}_{rst}$ are the right-hand-side values of the equation:

\begin{displaymath}
{\mathbf M}_{mnp,rst} =
\int_V \zeta^{m+r} \eta^{n+s} \theta^{p+t}~g(\zeta,\eta,\theta;\lambda_p^i)~d\zeta\, d\eta\, d\theta
\end{displaymath} (18)

\begin{displaymath}
{\mathbf B}_{rst}^{\alpha\beta} =
\int_V {\mathbf T}^{\alpha\beta}({\mathbf r_p} + {\mathbf R}_p\{\zeta,\eta,\theta\})~\zeta^r \eta^s \theta^t
~g(\zeta,\eta,\theta;\lambda_p^i)~d\zeta\, d\eta\, d\theta
\end{displaymath} (19)

These integrals can be computed numerically for any specific choice of $g()$. Equation (17) is just a ``regular'' linear system for the $\mathbf A$'s. Written out component-wise for the tensors ${\mathbf A}$, ${\mathbf B}$ and ${\mathbf M}$, with contracted indexing, it becomes
\begin{displaymath}
\begin{array}{lll}
{\mathbf A}_{mnp}^{\alpha\beta} & \equiv & {\mathbf A}^{\alpha\beta}_{m+Nn+N^2p} = {\mathbf a}_j^{\alpha\beta} \\
{\mathbf B}_{rst}^{\alpha\beta} & \equiv & {\mathbf B}^{\alpha\beta}_{r+Ns+N^2t} = {\mathbf b}_i^{\alpha\beta} \\
{\mathbf M}_{mnp,rst} & \equiv & {\mathbf M}_{m+Nn+N^2p,\,r+Ns+N^2t} = {\mathbf M}_{ij}
\end{array}
\end{displaymath} (20)
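The contracted indexing above is simply a base-$N$ flattening of the triple index. The sketch below (our function names) assumes each of $m, n, p$ runs over $0,\dots,N-1$, so that the stride $N$ is large enough for the mapping to be invertible.

```python
def contract(m, n, p, N):
    """Flatten (m, n, p), each in 0..N-1, to j = m + N*n + N^2*p as in Eq. (20)."""
    return m + N * n + N * N * p

def expand(j, N):
    """Invert contract(): recover (m, n, p) from the flat index j."""
    return j % N, (j // N) % N, j // (N * N)
```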

This system is also known as the system of ``normal equations'' for the least squares optimization:
\begin{displaymath}
\sum_j {\mathbf M}_{ij} {\mathbf a}_j^{\alpha\beta} = {\mathbf b}_i^{\alpha\beta}
\end{displaymath} (21)
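Putting Eqs. (18), (19), and (21) together, the normal equations can be assembled by numerical quadrature and solved per tensor component. The following is a Monte-Carlo sketch under assumed names (`mls_tensor_fit`, `T_component`, `g`), not the authors' implementation; note that the matrix ${\mathbf M}$ depends only on the weight, so in practice it can be factored once and reused for all tensor components.

```python
import numpy as np
from itertools import product

def mls_tensor_fit(T_component, g, pts, N=1):
    """Solve Eq. (21) for one tensor component T^{ab} in the local frame.

    T_component : callable T^{ab}(zeta, eta, theta), the interpolated data
    g           : callable weight g(zeta, eta, theta)
    pts         : (K, 3) quadrature points covering the support of g
    N           : maximum monomial power per axis
    Returns one coefficient per (m, n, p) triple, in the order of `powers`;
    the first entry is A_000, the filtered value at the center (Eq. (22)).
    """
    powers = list(product(range(N + 1), repeat=3))        # (m, n, p); (0,0,0) first
    z, e, t = pts[:, 0], pts[:, 1], pts[:, 2]
    w = g(z, e, t)
    V = np.stack([z**m * e**n * t**p for m, n, p in powers], axis=1)
    M = V.T @ (w[:, None] * V)                            # Eq. (18), discretized
    b = V.T @ (w * T_component(z, e, t))                  # Eq. (19), discretized
    return np.linalg.solve(M, b)

# Zero-order fit (N = 0) reduces to the weighted average of Eq. (23):
# fitting a constant function returns that constant
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(5000, 3))
gauss = lambda z, e, t: np.exp(-(z**2 + e**2 + t**2))
a = mls_tensor_fit(lambda z, e, t: 2.0 + 0.0 * z, gauss, pts, N=0)  # a[0] ≈ 2.0
```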

The optimization procedure allows us to compute the polynomial coefficients for the best approximation of the tensor data, by a polynomial of chosen degree, within a region around a chosen point. The value at the point ${\mathbf r}_p$, which is the origin of the $\{\zeta, \eta, \theta\}$ frame of reference, can then be easily calculated using (13):
\begin{displaymath}
{\mathbf {\bar T}}_p^{\alpha\beta} = {\mathbf f}^{\alpha\beta}({\mathbf A}; 0,0,0) = {\mathbf A}_{000}^{\alpha\beta} = {\mathbf a}_{0}^{\alpha\beta}
\end{displaymath} (22)

It is important to notice that the value of ${\mathbf A}_{000}$ depends on the order of the polynomial used for fitting. We also note that using a zero-order polynomial approximation (i.e., $N = 0$) is equivalent to finding a weighted average of the tensor function within the filter volume:
\begin{displaymath}
{\mathbf {\bar T}}_p = \int_V {\mathbf T}({\mathbf r_p} + {\mathbf R}_p\{\zeta,\eta,\theta\})~g(\zeta,\eta,\theta;\lambda_p^i)~d\zeta\, d\eta\, d\theta
\end{displaymath} (23)

The major advantage of the higher-order approximation is that it better preserves the absolute magnitude of features in areas with maxima or minima, compared to simple averaging, which tends to lower the height of maxima and the depth of minima. Finally, for the filter function $g()$ we have chosen an anisotropic Gaussian weighting function with axes aligned along the eigenvector directions and ellipsoidal semi-axes (the radii) proportional to the square roots of the corresponding eigenvalues:
\begin{displaymath}
g(\zeta,\eta,\theta; \lambda_p^i) = e^{-(\zeta/(\sigma a))^2 - (\eta/(\sigma b))^2 - (\theta/(\sigma c))^2}
\end{displaymath} (24)

with
\begin{displaymath}
a = \sqrt{\lambda_p^1},~~~b = \sqrt{\lambda_p^2},~~~ c= \sqrt{\lambda_p^3}
\end{displaymath} (25)

The variable $\lambda_p^1$ is the largest eigenvalue of the diffusion tensor ${\mathbf T_p}$ at the location ${\mathbf r_p}$, $\lambda_p^2$ is the second largest, etc. The value $\sigma$ is a parameter that can enlarge or contract all of the ellipsoid radii. It is important to notice that since we are trying to trace fibers, i.e., to extract structures with very strong directional information, the filter is typically much more influenced by the data points ``in front'' and ``behind'' than by those to the side; thus, usually, $a \gg b, c$. Also note that in the equation for the filter function $g()$ (24), we have a choice of values for the diffusion tensor ${\mathbf T}_p$. In our algorithm we use the filtered tensor value from the previous time step, ${\bar {\mathbf T}}_{p-1}$, to determine the weighting-function ellipsoid for the current time step.
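A direct transcription of Eqs. (24)-(25) (the function name `mls_weight` is ours) illustrates how strongly the weighting favors the fiber direction when $\lambda_p^1 \gg \lambda_p^2, \lambda_p^3$:

```python
import numpy as np

def mls_weight(zeta, eta, theta, lam, sigma=1.0):
    """Anisotropic Gaussian of Eq. (24); the radii a, b, c of Eq. (25) are the
    square roots of the eigenvalues lam = (lambda^1, lambda^2, lambda^3)."""
    a, b, c = np.sqrt(np.asarray(lam))
    return np.exp(-(zeta / (sigma * a))**2
                  - (eta / (sigma * b))**2
                  - (theta / (sigma * c))**2)

lam = (9.0, 1.0, 1.0)                         # strongly anisotropic: a >> b, c
w_front = mls_weight(1.0, 0.0, 0.0, lam)      # unit step along the fiber axis
w_side = mls_weight(0.0, 1.0, 0.0, lam)       # same distance to the side
# w_front > w_side: points "in front" and "behind" dominate the fit
```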

Leonid Zhukov 2003-01-05