
Conclusions

Using the terahertz reflected waveforms, we were able to measure the projections and reconstruct the original image. This process was completed in two steps. In the first, inverse and Wiener filtering were used to deconvolve the reference pulse from the data to obtain the actual projections of the test object. In the second, the Filtered Backprojection Algorithm, based on the Fourier Slice Theorem, was used to filter the projections with a ramp filter and backproject the result over the image plane, thus reconstructing the image of the object.
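The second step described above — ramp filtering followed by backprojection — can be sketched as follows. This is a minimal illustration, not the code used in the project; the function names and the nearest-neighbor interpolation are our own simplifications.

```python
import numpy as np

def ramp_filter_projections(sinogram):
    """Filter each projection (row) with a ramp filter in the frequency
    domain, the filtering step implied by the Fourier Slice Theorem."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))          # |f| ramp filter
    spectra = np.fft.fft(sinogram, axis=1) * ramp
    return np.real(np.fft.ifft(spectra, axis=1))

def backproject(filtered, angles_deg, size):
    """Smear each filtered projection back across the image plane."""
    recon = np.zeros((size, size))
    mid = size // 2
    x = np.arange(size) - mid
    X, Y = np.meshgrid(x, x)
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector-axis coordinate of every image pixel at this angle
        t = X * np.cos(theta) + Y * np.sin(theta) + mid
        t = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[t]
    return recon * np.pi / len(angles_deg)
```

A quick sanity check is to feed in the sinogram of a single point at the image center (a spike at the same detector position for every angle) and verify that the reconstruction peaks at the center pixel.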

As far as the deconvolution part of the project is concerned, the regularized inverse filter gives better results than Wiener filtering, as already pointed out. Care should be exercised when choosing the regularization parameter γ, so as not to increase the noise level of the resulting signal; this is usually a case-dependent procedure that requires numerical experimentation. It should be noted that the original data at hand were of very good quality, with a low noise level. Thus the results of Wiener filtering were worse than those obtained by inverse filtering.
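A regularized inverse filter of the kind discussed above can be sketched in the frequency domain as X(f) = Y(f)·H*(f) / (|H(f)|² + γ), where Y is the measured waveform's spectrum and H the reference pulse's. The exact filter form used in the project is not given here, so the γ-regularized form below is one common choice, shown as an assumption:

```python
import numpy as np

def regularized_inverse_deconvolve(measured, reference, gamma=1e-3):
    """Recover a projection by deconvolving the reference pulse from a
    measured waveform:  X(f) = Y(f) conj(H(f)) / (|H(f)|^2 + gamma).
    Larger gamma suppresses noise amplification at the cost of resolution."""
    Y = np.fft.fft(measured)
    H = np.fft.fft(reference, n=len(measured))
    X = Y * np.conj(H) / (np.abs(H) ** 2 + gamma)
    return np.real(np.fft.ifft(X))
```

With a clean synthetic signal a very small γ nearly inverts the convolution exactly, which mirrors the observation that low-noise data favor the (regularized) inverse filter over Wiener filtering.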

For the reconstruction part, it was found that the number of projection angles used was vital to the clarity of the final image. We were able to downsample the data substantially and still maintain image quality up to a point. Due to the size of the data, the algorithms took minutes to run.
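The angle-downsampling experiment amounts to keeping every k-th projection (row of the sinogram) together with its angle; a minimal sketch, with function name our own:

```python
import numpy as np

def downsample_angles(sinogram, angles, factor):
    """Keep every `factor`-th projection and its matching angle,
    shrinking the data passed to the backprojection step."""
    return sinogram[::factor], angles[::factor]
```

Halving or thirding the angle set directly divides the backprojection runtime by the same factor, which is why downsampling was attractive despite the eventual loss of clarity.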

Future work

Much potential exists for future improvement of this method of computerized tomography. Advanced deconvolution methods featuring wavelets for efficient noise reduction could be used to isolate the projections from the measured waveforms. Furthermore, it seems reasonable to cut off the first part of each measured waveform, since it contains only noise and no information useful for the image reconstruction. This can be accomplished by appropriately windowing the raw measurements before any other manipulation takes place. Due to the linear nature of the process, the algorithm could also be restructured to start reconstructing the image immediately after the first projection has been calculated. These and other efficiency measures could greatly increase the speed of the overall reconstruction, making the process applicable for real-time use.
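The windowing idea suggested above — zeroing the noise-only head of each waveform before further processing — can be sketched as below. The half-cosine taper is an assumption on our part, added to avoid introducing a sharp step (and hence spectral ringing) at the cutoff:

```python
import numpy as np

def window_waveform(waveform, cutoff, taper=16):
    """Zero the first `cutoff` samples (noise-only) and ramp smoothly
    into the retained part with a half-cosine edge of `taper` samples."""
    w = np.zeros_like(waveform, dtype=float)
    edge = 0.5 * (1 - np.cos(np.pi * np.arange(taper) / taper))
    w[cutoff:cutoff + taper] = edge
    w[cutoff + taper:] = 1.0
    return waveform * w
```

Because this is a pointwise multiplication, it costs next to nothing per waveform and can be applied before deconvolution without affecting the rest of the pipeline.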


Source:  OpenStax, Terahertz ray reflection computerized tomography. OpenStax CNX. Dec 12, 2005 Download for free at http://cnx.org/content/col10312/1.1