Then, for the measurement itself, one has to select an appropriate stabilization time and measurement duration. Normally, a longer stabilization/duration time results in a more stable signal with less noise, but the time cost should also be considered. Another important parameter is the temperature of the sample. Since many DLS instruments are equipped with temperature-controllable sample holders, one can measure the size distribution at different temperatures and thereby obtain extra information about the thermal stability of the sample analyzed.
Next, since they are used in the calculation of particle size from the light scattering data, the viscosity and refractive index of the solution are also needed. Normally, for solutions of low concentration, the viscosity and refractive index of the solvent (e.g., water) can be used as an approximation.
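To illustrate where these two parameters enter, the viscosity appears directly in the Stokes-Einstein relation that converts the measured diffusion coefficient into a hydrodynamic diameter. The following is a minimal sketch, not instrument software; the function name and the values for water at 25 °C are illustrative assumptions.

```python
# Hydrodynamic diameter from the Stokes-Einstein relation:
#   d_h = k_B * T / (3 * pi * eta * D)
# Viscosity and temperature of water at 25 degC are assumed for illustration.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(diff_coeff, temperature=298.15, viscosity=8.9e-4):
    """Return the hydrodynamic diameter (m) for a measured
    translational diffusion coefficient (m^2/s)."""
    return K_B * temperature / (3 * math.pi * viscosity * diff_coeff)

# A diffusion coefficient of ~4.9e-12 m^2/s corresponds to a
# particle of roughly 100 nm in water at 25 degC.
d = hydrodynamic_diameter(4.9e-12)
print(f"{d * 1e9:.0f} nm")
```

Using the solvent viscosity here is exactly the low-concentration approximation described above.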
Finally, to obtain more reliable data, the DLS measurement on the same sample is normally conducted multiple times, which helps to eliminate unexpected results and also provides error bars for the size distribution data.
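Combining repeated runs into a mean value with an error bar can be sketched as follows; the z-average diameters listed are hypothetical example runs, not measured data.

```python
# Combine repeated DLS runs into a mean size and an error bar.
# The z-average diameters below (in nm) are hypothetical example values.
import statistics

runs_nm = [102.3, 98.7, 101.5, 99.9, 100.6]

mean_size = statistics.mean(runs_nm)
std_dev = statistics.stdev(runs_nm)  # sample standard deviation

print(f"z-average: {mean_size:.1f} +/- {std_dev:.1f} nm")
```

A run that falls far outside the spread of the others (e.g., from a dust particle crossing the beam) would show up clearly against this error bar and could be discarded.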
Although size distribution data can be readily acquired from the software of the DLS instrument, it is still worthwhile to understand the details of the data analysis process.
As mentioned in [link] , the decay rate Γ is mathematically determined from the g _{1} ( τ ) curve; if the sample solution is monodisperse, g _{1} ( τ ) can be regarded as a single exponential decay function e ^{-Γτ} , and the decay rate Γ can in turn be easily calculated. However, in most practical cases the sample solution is polydisperse, g _{1} ( τ ) will be the sum of many single exponential decay functions with different decay rates, and it then becomes significantly more difficult to conduct the fitting process.
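For the monodisperse case just described, extracting Γ amounts to a linear fit of ln g _{1} ( τ ) against τ. The sketch below uses simulated noise-free data; the value Γ = 5000 s⁻¹ is an arbitrary example.

```python
# Recover the decay rate of a single-exponential g1(tau) = exp(-Gamma*tau)
# by a linear fit of ln g1 against tau. The data are simulated with an
# arbitrary example value Gamma = 5000 s^-1.
import numpy as np

gamma_true = 5000.0                 # decay rate, s^-1
tau = np.linspace(1e-6, 1e-3, 200)  # lag times, s
g1 = np.exp(-gamma_true * tau)      # ideal monodisperse correlation function

slope, _ = np.polyfit(tau, np.log(g1), 1)
gamma_fit = -slope
print(f"fitted decay rate: {gamma_fit:.1f} s^-1")
```

With real data the noise at long lag times (where g _{1} is small) would dominate the logarithm, which is one reason weighted or cumulant fits are preferred in practice.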
There are, however, a few methods developed to meet this mathematical challenge: linear fitting and cumulant expansion for monomodal distributions, and exponential sampling and CONTIN regularization for non-monomodal distributions. Among these approaches, cumulant expansion is the most common method and will be illustrated in detail in this section.
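As a numerical preview of the cumulant approach developed below, ln g _{1} ( τ ) can be fitted by a polynomial in τ, whose linear coefficient gives the mean decay rate and whose quadratic coefficient gives the variance of the rate distribution. The sketch below simulates a polydisperse sample as an equal mix of two decay rates; the values 4000 and 6000 s⁻¹ are arbitrary examples.

```python
# Second-order cumulant fit: ln g1(tau) ~ -mean_rate*tau + (mu2/2)*tau^2.
# The polydisperse sample is simulated as an equal mix of two decay
# rates (4000 and 6000 s^-1, arbitrary example values).
import numpy as np

tau = np.linspace(1e-6, 2e-4, 200)
g1 = 0.5 * np.exp(-4000.0 * tau) + 0.5 * np.exp(-6000.0 * tau)

# Fit a quadratic in tau to ln g1; np.polyfit returns [c2, c1, c0].
c2, c1, _ = np.polyfit(tau, np.log(g1), 2)
mean_rate = -c1           # first cumulant: mean decay rate
mu2 = 2.0 * c2            # second cumulant: variance of the rate distribution
pdi = mu2 / mean_rate**2  # polydispersity index

print(f"mean rate {mean_rate:.0f} s^-1, PDI {pdi:.3f}")
```

The fit recovers a mean rate close to 5000 s⁻¹ and a small polydispersity index, consistent with the two nearby rates used in the simulation.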
Generally, the cumulant expansion method is based on two relations: one between g _{1} ( τ ) and the moment-generating function of the distribution, and one between the logarithm of g _{1} ( τ ) and the cumulant-generating function of the distribution.
To start with, the form of g _{1} ( τ ) is equivalent to the definition of the moment-generating function M (- τ , Γ) of the distribution G (Γ), [link] .
The m th moment of the distribution, m _{m} (Γ), is given by the m th derivative of M (- τ , Γ) with respect to τ , [link] .
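The equations referenced above are not reproduced on this page; in standard notation they presumably take the following form, which is a reconstruction rather than a copy of the originals:

```latex
% g1(tau) as the moment-generating function of G(Gamma)
g_1(\tau) = \int_0^\infty G(\Gamma)\, e^{-\Gamma \tau}\, d\Gamma = M(-\tau, \Gamma)

% the m-th moment as the m-th derivative evaluated at tau = 0
m_m(\Gamma) = \left.\frac{d^m M(-\tau,\Gamma)}{d(-\tau)^m}\right|_{\tau=0}
            = \int_0^\infty G(\Gamma)\,\Gamma^m\, d\Gamma
```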
Similarly, the logarithm of g _{1} ( τ ) is equivalent to the definition of the cumulant-generating function K (- τ , Γ), [link] , and the m th cumulant of the distribution, k _{m} (Γ), is given by the m th derivative of K (- τ , Γ) with respect to τ , [link] and [link] .
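In standard notation, the cumulant-generating function and the cumulants described here presumably take the following form (a reconstruction, not a copy of the referenced equations):

```latex
% ln g1(tau) as the cumulant-generating function
K(-\tau, \Gamma) = \ln g_1(\tau) = \ln M(-\tau, \Gamma)

% the m-th cumulant as the m-th derivative evaluated at tau = 0
k_m(\Gamma) = \left.\frac{d^m K(-\tau,\Gamma)}{d(-\tau)^m}\right|_{\tau=0}
```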
By making use of the fact that the cumulants, except for the first, are invariant under a change of origin, the k _{m} (Γ) can be rewritten in terms of the moments about the mean as [link] , [link] , [link] , and [link] , where μ _{m} are the moments about the mean, defined as given in [link] .
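These four relations and the definition of the central moments are standard results of cumulant theory; in the notation used here they read (again a reconstruction of the referenced equations):

```latex
k_1 = \bar{\Gamma}, \qquad
k_2 = \mu_2, \qquad
k_3 = \mu_3, \qquad
k_4 = \mu_4 - 3\mu_2^2

% moments about the mean of G(Gamma)
\mu_m = \int_0^\infty G(\Gamma)\,\bigl(\Gamma - \bar{\Gamma}\bigr)^m\, d\Gamma
```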