The year was 1993, the authors were M. Steinhart and J. Pleštil, and they had already done the same from a different perspective. Credit where credit is due: their as-yet-uncited paper contains a good study of measurement stability and its effect on the inferred information, and indeed provides the equation for the effective time expenditure (though written up in a confusing way).
So all sadness on our side aside (as there is now no short-sweet-and-quick publication possible on this), please use your time wisely and cite that 1993 paper as it deserves. Do not let good methods like this be covered by years of dust.
To give you some more ammunition for your citation gun, here is a good paper detailing dead-time correction and how Poisson statistics fail when these corrections are applied. I found that I should not blindly square-root my photons (real and imaginary), but should instead use the equations they provide where applicable. If you have a Pilatus detector, there is no reason to sweat much, as this correction is only applied in the very extreme count-rate regions.
As always: let us know what you think and leave a comment!
Often, especially when measuring at large facilities, you are given a limited amount of time. This limited time then has to be divided between a measurement of the sample and a measurement of the background.
Normally, one would spend about 50% of the time on a sample, and 50% on the background, or even more time on the background “because the counts are so low” (I know, I did the same!). There must be a better way to calculate the optimum division of time!
So a colleague, Samuel Tardif, and I spent a little time jotting down some equations and plotting the result. It turns out that for large differences in the signal-to-noise ratio (i.e. the ratio of sample count rate to background count rate), significant reductions in uncertainty can be obtained through a better division of time for any small-angle scattering measurement.
In the case of a Bonse-Hart camera or a step-scan small-angle scattering measurement, each measurement point can even be tuned to the optimal dwell times for the sample and background, after a quick initial scan to determine the signal-to-noise ratio at each point. Please let me know how it works for you! The calculation can be checked in this document where we wrote up the results: ideal_background
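For reference, the standard Poisson-statistics result (which should agree with the linked write-up, but do check it against the document) is that the variance of the background-subtracted count rate is minimised when the times are divided in proportion to the square roots of the count rates. A minimal Python sketch, with made-up count rates:

```python
import math

def optimal_split(r_s, r_b, total_time):
    # t_s / t_b = sqrt(r_s / r_b) minimises the variance of the
    # background-subtracted rate, r_s/t_s + r_b/t_b, under t_s + t_b = total_time
    ratio = math.sqrt(r_s / r_b)
    t_s = total_time * ratio / (1.0 + ratio)
    return t_s, total_time - t_s

def subtracted_rate_std(r_s, r_b, t_s, t_b):
    # Poisson counting: var(counts) = counts, so var(rate) = rate / time
    return math.sqrt(r_s / t_s + r_b / t_b)

# hypothetical numbers: sample counts 100x faster than the background, 1 h total
t_s, t_b = optimal_split(100.0, 1.0, 3600.0)
even = subtracted_rate_std(100.0, 1.0, 1800.0, 1800.0)
opt = subtracted_rate_std(100.0, 1.0, t_s, t_b)
print(t_s, t_b, even / opt)
```

With a count-rate ratio of 100, the optimum puts about ten times as much time on the sample as on the background, and the even split ends up with an uncertainty almost 30% larger than the optimal one.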
It is with pleasure that I can announce the publication of another fibre-related work, right here (The electronic reprint will be made available after December on this site). [edit: now available from this page]
This tensile stage was the result of my not wanting to install a massive Instron stage at a beamline, nor wanting to design a vacuum enclosure that would fit around the business end of such a stage. I had a blast designing it and having it built, despite the time constraints imposed by the fast-approaching beamtime. I now have the chance to design and build version two of this stage, and I will keep you posted on the progress of that one.
So, I could not do what I promised last time: the Monte Carlo fitting works on perfect simulated scattering patterns, but it is as yet unable to deal with the addition of a flat background. So I will have to take a rain check.
In the meantime, I have made two videos (part 1 and part 2) together with some colleagues during my time in Denmark. The videos demonstrate small-angle scattering using laser light scattered from a hair. In part 2, the diameter of the hair is calculated.
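Without spoiling the videos: the usual way such a calculation goes is via Babinet's principle, where a hair of diameter d diffracts like a slit of the same width, so the m-th intensity minimum on a screen a distance L away sits at x_m ≈ m·λ·L/d. A sketch with made-up numbers (the actual values in the video will differ):

```python
# Babinet's principle: a hair of diameter d diffracts like a slit of width d,
# with minima at x_m = m * wavelength * L / d (small-angle approximation)
wavelength = 650e-9  # red laser pointer, metres (assumed)
L = 2.0              # hair-to-screen distance, metres (assumed)
x1 = 16.25e-3        # measured position of the first minimum, metres (assumed)
d = 1 * wavelength * L / x1
print(d)  # about 80 micrometres, a plausible hair diameter
```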
(I watched too many Carl Sagan videos and I am impressed and encouraged by them…)
I have written some small, simple bits of Matlab software that can generate scattering patterns in the range you request for polydisperse, dilute spheres or ellipsoids. While nothing new per se, this implementation is guaranteed to produce correct scattering patterns irrespective of the width of the distribution. Allow me to quickly explain.
The normal way of calculating these patterns in fitting functions and the like is to choose an upper size limit (perhaps related to the width and mean of the distribution), and to divide the size range between zero and this upper limit into perhaps 100 different sizes. The scattering pattern for each of these sizes is then calculated, multiplied by its probability (obtained from the probability (or size) distribution function), multiplied by the square of the particle volume for that size, and summed. In all, then, this is a numerical integration over the volume-square-weighted size distribution.
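To make that procedure concrete, here is a minimal Python/NumPy sketch of the fixed-grid approach for spheres (not the Matlab code from this post; the Gaussian size distribution, the five-sigma upper limit, and the units are my own assumptions):

```python
import numpy as np

def sphere_form_factor(q, R):
    # orientationally averaged sphere scattering amplitude for each (q, R) pair
    qR = np.outer(q, R)
    return 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3

def polydisperse_spheres(q, mean_R, sigma_R, n_div=100):
    # fixed grid between (almost) zero and an assumed upper limit
    R = np.linspace(mean_R * 1e-3, mean_R + 5.0 * sigma_R, n_div)
    p = np.exp(-0.5 * ((R - mean_R) / sigma_R) ** 2)  # Gaussian number distribution
    p /= p.sum()                                      # normalise the weights
    V = (4.0 / 3.0) * np.pi * R**3
    # probability times volume squared times form factor, summed over sizes
    return (p * V**2 * sphere_form_factor(q, R) ** 2).sum(axis=1)

q = np.logspace(-2, 0, 200)             # scattering vector, 1/nm (assumed units)
I = polydisperse_spheres(q, 10.0, 2.0)  # mean radius 10 nm, width 2 nm
```

It is exactly the choice of upper limit and grid spacing in this scheme that causes trouble for wide distributions.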
The problem lies in the determination of the upper limit and the number of divisions required. In the past, I have tried using the cumulative distribution function to select “smart” divisions, or adjusting the width and mean to compensate for the volume-square weighting, but this often resulted in the appearance of oscillatory behaviour in the scattering patterns. An alternative solution was therefore required.
These functions are not necessarily fast enough for fitting purposes, but they can be used for checking the applicability of your fitting procedures. You should get out of your fitting functions what you put into these simulated patterns.
These functions work by randomly generating a number of spheres or ellipsoids. A scattering pattern is calculated from an initial block of spheres or ellipsoids. Then the scattering pattern is recalculated for the original block with the addition of a new set of shapes. This is repeated until adding a new block no longer changes the scattering pattern by more than a certain threshold.
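In Python rather than Matlab, the accumulation loop could look something like this (a sketch only; the block size, the threshold, and the log-normal size distribution are arbitrary choices of mine, and the real programs also handle ellipsoids):

```python
import numpy as np

rng = np.random.default_rng(0)

def block_intensity(q, radii):
    # volume-square-weighted sphere form factors, summed over the block
    qR = np.outer(q, radii)
    F = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3
    V = (4.0 / 3.0) * np.pi * radii**3
    return (V**2 * F**2).sum(axis=1)

def accumulate(q, draw_radii, block=1000, tol=1e-3, max_blocks=200):
    total = block_intensity(q, draw_radii(block))
    n = 1
    for _ in range(max_blocks):
        new = total + block_intensity(q, draw_radii(block))
        # relative change of the per-sphere average pattern after adding a block
        change = np.max(np.abs(new / (n + 1) - total / n) / (new / (n + 1)))
        total, n = new, n + 1
        if change < tol:
            break
    return total / (n * block)

q = np.logspace(-2, 0, 100)  # scattering vector, 1/nm (assumed units)
I = accumulate(q, lambda m: rng.lognormal(np.log(10.0), 0.2, m))
```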
Included are the programs for generating scattering patterns from polydisperse distributions of spheres or ellipsoids. All distributions supported by the statistics library's RANDOM function are available. For ellipsoids, an additional distribution can be used for the aspect ratio. If the usage is not clear, let me know and I will write some more extensive documentation.