wavepy.surface_from_grad

Surface from gradient data

In many x-ray imaging techniques we obtain the differential phase of the wavefront in two directions, which is to say that we measure the gradient of the phase. Therefore, to obtain the phase we need a method to integrate the differential data. Mathematically, we want to obtain the surface \(s(x,y)\) from the (experimental) differential curves \(s_x = s_x(x,y) = \frac{\partial s (x, y)}{\partial x}\) and \(s_y = s_y(x,y) = \frac{\partial s (x, y)}{\partial y}\).

This is not as straightforward as it looks. The main reason is that, to be able to recover the function from its gradient, the partial derivatives must be integrable. A function is integrable when the two cross partial derivatives of the function are equal, that is

\[\frac{\partial^2 s (x, y)}{\partial x \partial y} = \frac{\partial^2 s (x, y)}{\partial y \partial x}\]

However, due to experimental errors and noise, there is no guarantee that the data are integrable (very likely they are not).
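The effect of noise on integrability can be seen numerically. The sketch below (an illustration, not part of wavepy) builds a smooth test surface, takes its analytic gradient, and compares the two numerical cross partial derivatives before and after adding noise:

```python
import numpy as np

# Analytic surface s(x, y) = sin(x) * cos(y) on a grid
x = np.linspace(0, 2 * np.pi, 128)
y = np.linspace(0, 2 * np.pi, 128)
xx, yy = np.meshgrid(x, y)

# Exact partial derivatives of s
s_x = np.cos(xx) * np.cos(yy)
s_y = -np.sin(xx) * np.sin(yy)

# Cross partials of the clean data agree up to truncation error
residual_clean = np.abs(np.gradient(s_x, y, axis=0)
                        - np.gradient(s_y, x, axis=1)).mean()

# Simulated experimental noise breaks integrability
rng = np.random.default_rng(0)
s_x_noisy = s_x + 0.01 * rng.standard_normal(s_x.shape)
s_y_noisy = s_y + 0.01 * rng.standard_normal(s_y.shape)

# d(s_x)/dy vs d(s_y)/dx for the noisy data
dsx_dy = np.gradient(s_x_noisy, y, axis=0)
dsy_dx = np.gradient(s_y_noisy, x, axis=1)
residual_noisy = np.abs(dsx_dy - dsy_dx).mean()
```

Even a small amount of noise makes the cross partials disagree by far more than the discretization error of the clean data.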

Obtaining a signal/image from differential information is a broader topic with applications in other areas of science. A few methods have been developed in different contexts, in particular in computer vision, where this problem is referred to as “Surface Reconstruction from Gradient Fields”. For consistency, we will (try to) stick to this same term.

These methods search for the signal \(s(x,y)\) that best describes the differential curves. For this reason, it is advisable to apply some kind of check to the integration, for instance by calculating the gradient of the result and comparing it with the original gradient. Such a check is provided within this library; see for instance the use of wavepy.error_integration() in the function wavepy.surface_from_grad.frankotchellappa() below.
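The idea of such a check can be sketched in a few lines. The helper below is a simplified, hypothetical stand-in for what wavepy.error_integration reports (that function has its own signature and plotting; this is only the core comparison):

```python
import numpy as np

def gradient_residual(surface, del_f_del_x, del_f_del_y, pixelsize=1.0):
    """RMS mismatch between the numerical gradient of a reconstructed
    surface and the measured gradient data. A simplified, illustrative
    stand-in for the check done by wavepy.error_integration."""
    gy, gx = np.gradient(surface, pixelsize)  # axis 0 is y, axis 1 is x
    err_x = np.sqrt(np.mean((gx - del_f_del_x) ** 2))
    err_y = np.sqrt(np.mean((gy - del_f_del_y) ** 2))
    return err_x, err_y

# Usage: for the exact surface s = x*y the residuals vanish, since
# ds/dx = y and ds/dy = x
x = np.linspace(0, 1, 100)
y = np.linspace(0, 1, 100)
xx, yy = np.meshgrid(x, y)
err_x, err_y = gradient_residual(xx * yy, yy, xx, pixelsize=x[1] - x[0])
```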

It is a goal of this library to add a few different methods, since it is clear that different methods have different strengths and weaknesses (precision, processing time, memory requirements, etc.).

References

[B5], [B1], [B8], [B17], [B9], [B10].

Functions:

frankotchellappa(del_f_del_x, del_f_del_y[, ...]) The simplest method is the so-called Frankot-Chellappa method.
wavepy.surface_from_grad.frankotchellappa(del_f_del_x, del_f_del_y, reflec_pad=True)[source]

The simplest method is the so-called Frankot-Chellappa method. The idea behind this method is to search for (calculate) the integrable gradient field that best fits the data. Luckily, Frankot and Chellappa were able, in their article, to find a simple, single (non-iterative) equation for that. We are even luckier since this equation makes use of the FFT, which is very computationally efficient.

Consider a signal \(s(x,y)\) with differential signals given by \(s_x = s_x(x,y) = \frac{\partial s (x, y)}{\partial x}\) and \(s_y = s_y(x,y) = \frac{\partial s (x, y)}{\partial y}\). The Fourier transforms of \(s_x\) and \(s_y\) are given by

\[\mathcal{F} \left [ s_x \right ] = \mathcal{F} \left [ s_x \right ] (f_x, f_y) = \mathcal{F} \left [ \frac{\partial s (x, y)} {\partial x} \right ](f_x, f_y), \quad \mathcal{F} \left [ s_y \right ] = \mathcal{F} \left [ s_y \right ] (f_x, f_y) = \mathcal{F} \left [ \frac{\partial s (x, y)} {\partial y} \right ](f_x, f_y).\]

Finally, the Frankot-Chellappa method is based on solving the following equation:

\[\mathcal{F} \left [ s \right ] = \frac{-i f_x \mathcal{F} \left [ s_x \right ] - i f_y \mathcal{F} \left [ s_y \right ]}{2 \pi (f_x^2 + f_y^2 )}\]

where

\[\mathcal{F} \left [ s \right ] = \mathcal{F} \left[ s(x, y) \right ] (f_x, f_y)\]

is the Fourier Transform of \(s\).

To avoid the singularity in the denominator, numpy.finfo(float).eps, the smallest representable float number on the machine, is added to it.
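Under the conventions above, a minimal numpy sketch of the method could look like the following. This is an illustration, not the wavepy implementation: it omits the reflective padding discussed in the Notes and assumes unit pixel size.

```python
import numpy as np

def frankot_chellappa_sketch(s_x, s_y):
    """Minimal FFT-based Frankot-Chellappa integration (no reflective
    padding, unit pixel size). Illustrative only."""
    N, M = s_x.shape
    fx = np.fft.fftfreq(M)[np.newaxis, :]   # frequencies along x (columns)
    fy = np.fft.fftfreq(N)[:, np.newaxis]   # frequencies along y (rows)

    Fx = np.fft.fft2(s_x)
    Fy = np.fft.fft2(s_y)

    # eps avoids division by zero at the (0, 0) frequency
    denom = 2 * np.pi * (fx**2 + fy**2) + np.finfo(float).eps
    Fs = (-1j * fx * Fx - 1j * fy * Fy) / denom

    # Complex result; in practice take the real part
    return np.fft.ifft2(Fs)
```

For a periodic test surface with analytically known derivatives, this reconstruction is exact up to an additive constant (the DFT cannot recover the mean of the surface).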

Keep in mind that Frankot-Chellappa is not the best method. More advanced algorithms are available, which use more complex mathematics and iterative methods. Unfortunately, these algorithms are only available for MATLAB.

References

[B5]. The padding we use here is [B10].

Parameters:
  • del_f_del_x, del_f_del_y (ndarrays) – 2-dimensional gradient data
  • reflec_pad (bool) – This flag pads the gradient field in order to obtain a 2-dimensional reflected function. See more in the Notes below.
Returns:

ndarray – Integrated data, as provided by the Frankot-Chellappa algorithm. Note that the result is complex. See the Notes below.

Notes

  • Padding

    Frankot-Chellappa makes intensive use of the Discrete Fourier Transform (DFT), and due to the periodicity property of the DFT, the result of the integration will also be periodic (even though we only get one period of the answer). This property can result in a discontinuity at the edges, and the Frankot-Chellappa method is badly affected by discontinuities.

    In this sense, the idea of this padding is that, by reflecting the function at the edges, we avoid discontinuities. This was inspired by the code of the function DfGBox, available on the MATLAB File Exchange website.

    Note that, since we only have the gradient data, we need to consider how a reflection at the edges will affect the partial derivatives. We show this below without proof (but it is easy to see).

    First, let us consider the data for the \(x\)-direction derivative \(\Delta_x = \dfrac{\partial f}{\partial x}\), consisting of a 2D array of size \(N \times M\). The padded matrix will be given by:

    \[\begin{split}\left[ \begin{matrix} \Delta_x(x, y) & -\Delta_x(N-x, y) \\ \Delta_x(x, M-y) & -\Delta_x(N-x, M-y) \end{matrix} \right]\end{split}\]

    and for the \(y\)-direction derivative \(\Delta_y = \dfrac{\partial f}{\partial y}\) we have

    \[\begin{split}\left[ \begin{matrix} \Delta_y(x, y) & \Delta_y(N-x, y) \\ -\Delta_y(x, M-y) & -\Delta_y(N-x, M-y) \end{matrix} \right]\end{split}\]

    Note that this padding increases the number of points from \(N \times M\) to \(2N \times 2M\). However, the function only returns the \(N \times M\) result, since the other parts are only a repetition of it. In other words, the padding is done only internally.
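One way to build the reflected padding described by the two matrices above (taking \(x\) along columns and \(y\) along rows) is sketched below. This mirrors what frankotchellappa does internally when reflec_pad=True, but it is not the wavepy code itself:

```python
import numpy as np

def reflect_pad_gradients(grad_x, grad_y):
    """Reflective padding of a gradient field: the surface is mirrored
    at the edges, so each derivative flips sign along its own axis."""
    padded_x = np.block([
        [grad_x,             -np.fliplr(grad_x)],
        [np.flipud(grad_x),  -np.fliplr(np.flipud(grad_x))],
    ])
    padded_y = np.block([
        [grad_y,              np.fliplr(grad_y)],
        [-np.flipud(grad_y), -np.fliplr(np.flipud(grad_y))],
    ])
    return padded_x, padded_y
```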

  • Results are Complex Numbers

    Again, due to the use of DFTs, the results are complex numbers. In principle, an ideal gradient field of real numbers yields a purely real result. In practice, this “imaginary noise” is observed even with theoretical functions, which leads to the conclusion that it is numerical noise. It is left to the user to decide what to do with this noise, for instance to use the modulus or the real part of the result; the real part is recommended.
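The scale of this imaginary noise can be illustrated by round-tripping real data through fft2/ifft2 (which is where the complex dtype comes from); the imaginary part sits at floating-point precision, so taking the real part is safe:

```python
import numpy as np

# Round-trip real data through the FFT: the result has complex dtype,
# but its imaginary part is only floating-point noise.
rng = np.random.default_rng(1)
data = rng.standard_normal((32, 32))
result = np.fft.ifft2(np.fft.fft2(data))  # complex dtype
imag_level = np.abs(result.imag).max()    # near machine precision
surface = result.real                     # recommended way to use it
```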

wavepy.surface_from_grad.error_integration(del_f_del_x, del_f_del_y, func, pixelsize, errors=False, shifthalfpixel=False, plot_flag=True)[source]