Nothing new here

2012/03/22 // 0 Comments

So it seems science has beaten us to the punch once again. Remember last week's optimistic story on how you can make better use of your (measurement) time? Turns out it has been done (at least once) before. The year was 1993, the authors were M. Steinhart and J. Pleštil, and they derived the same result from a different perspective [1]. Credit where credit is due: their as-yet uncited paper contains a good study of measurement stability and its effects on the inferred information, and indeed gives the equation for the effective division of measurement time (though written up in a confusing way). So, all sadness on our side aside (as there is now no short, sweet and quick publication possible on this), please use your time wisely and cite that 1993 paper as it deserves. Do not let good methods like this be covered by years of dust.

To give you some more ammunition for your citation gun, here is a good paper detailing dead-time correction, and how Poisson statistics fail when these corrections are applied [2]. I found that I should not blindly square-root my photons (real and imaginary), but should instead use the equations they provide where applicable (a sketch of such a correction follows the reference list below). If you have a Pilatus detector, there is no reason to sweat much, as this correction only matters in the most extreme count-rate regions [3]. As always: let us know what you think and leave a comment!

[1] Steinhart, M. & Pleštil, J. (1993). Possible improvements in the precision and accuracy of small-angle X-ray scattering measurements. Journal of Applied Crystallography, 26, 591–601.
[2] Laundy, D. & Collins, S. (2003). Counting statistics of X-ray detectors at high counting rates. Journal of Synchrotron Radiation, 10, 214–218. doi:10.1107/S0909049503002668
[3] Kraft, P., Bergamaschi, A., Broennimann, C., Dinapoli, R., Eikenberry, E. F., Henrich, B., Johnson, I., et al. (2009). Performance of single-photon-counting PILATUS detector modules. Journal of Synchrotron Radiation, 16, 368–375. doi:10.1107/S0909049509009911
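And here is the promised sketch. The exact expressions are in ref. [2], which you should consult for real corrections; this is my own Python illustration using the textbook non-paralyzable dead-time model, with purely illustrative numbers, and it already shows why the naive square root goes wrong once the correction is applied:

```python
import numpy as np

def deadtime_correct(counts, t_meas, tau):
    """Correct counts for a non-paralyzable dead time tau (seconds).

    The measured rate m = counts / t_meas relates to the true rate n via
    m = n / (1 + n * tau), i.e. n = m / (1 - m * tau). Dead time also
    suppresses the count variance, var(N) ~ N * (1 - m * tau)**2, and
    propagating that through the correction gives
    sigma(N_corrected) ~ sqrt(N) / (1 - m * tau),
    which is larger than the naive sqrt(N_corrected).
    """
    m = counts / t_meas
    live = 1.0 - m * tau                 # live-time fraction of the detector
    n_corrected = counts / live          # dead-time-corrected counts
    sigma = np.sqrt(counts) / live       # propagated uncertainty
    return n_corrected, sigma

# Example: 1e6 counts in 1 s with tau = 200 ns gives a 25% correction;
# the naive sqrt(1.25e6) ~ 1118 underestimates the true sigma ~ 1250.
print(deadtime_correct(1.0e6, 1.0, 200e-9))
```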


Making better use of your time: optimizing measurement time

2012/03/14 // 1 Comment

Often, especially when measuring at large facilities, you are given a limited amount of time, and this time has to be divided between a measurement of the sample and a measurement of the background. Normally, one would spend about 50% of the time on the sample and 50% on the background, or even more time on the background "because the counts are so low" (I know, I did the same!). There must be a better way to calculate the optimal division of time! So a colleague, Samuel Tardif, and I spent a little bit of time jotting down some equations and plotting the result. The upshot is that for large differences in the signal-to-background ratio (i.e. the ratio of sample count rate to background count rate), a significant reduction in uncertainty can be obtained through a better division of time, for any small-angle scattering measurement. In the case of a Bonse-Hart camera or a step-scan small-angle scattering measurement, each measurement point can even be tuned to the optimal dwell times for sample and background, after a quick initial scan to determine the count-rate ratio at each point. Please let me know how it works for you! The calculation can be checked in this document, where we wrote up the results: ideal_background
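For those who want the punchline without opening the document: under Poisson counting statistics, the variance of the background-subtracted rate is minimized when the measurement times are split in proportion to the square roots of the count rates. A minimal sketch (my own Python, not the code behind the write-up):

```python
import numpy as np

def optimal_time_split(rate_sample, rate_background, total_time):
    """Split total_time so that var(N_s/t_s - N_b/t_b) is minimal.

    With Poisson counts, the variance of the net rate is
    R_s/t_s + R_b/t_b; for fixed t_s + t_b this is smallest when
    t_s / t_b = sqrt(R_s / R_b).
    """
    ratio = np.sqrt(rate_sample / rate_background)
    t_background = total_time / (1.0 + ratio)
    return total_time - t_background, t_background

# Example: the sample counts 100x faster than the background alone.
t_s, t_b = optimal_time_split(1000.0, 10.0, total_time=3600.0)
print(f"sample: {t_s:.0f} s, background: {t_b:.0f} s")   # 3273 s / 327 s
```

With a sample counting 100 times faster than the background, the optimal split is roughly 91%/9% rather than 50%/50%.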


A compact pneumatic fibre tensile stage – publication

2011/11/15 // 0 Comments

It is with pleasure that I can announce the publication of another fibre-related work, right here (the electronic reprint will be made available on this site after December). [edit: now available from this page] This tensile stage was the result of my not wanting to install a massive Instron stage at a beamline, nor wanting to design a vacuum enclosure that would fit around the business end of such a stage. I had a blast designing it and having it built, despite the time constraints imposed by the fast-approaching beamtime. I now have the chance to design and build version two of this stage, and I will keep you posted on its progress.


Live FT video

2011/02/04 // 1 Comment

A demonstration of the live Fourier Transform showing scattering patterns can be seen here:
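For anyone wondering what the video actually computes: in the Fraunhofer (far-field) limit, the scattering pattern of a thin object is proportional to the squared magnitude of its two-dimensional Fourier transform. A toy offline version (my own Python example, not the code used for the video):

```python
import numpy as np

# Fraunhofer limit: the far-field scattering pattern of a thin object is
# proportional to |FT(object)|^2.
image = np.zeros((512, 512))
image[206:306, 206:306] = 1.0                       # toy object: a square

pattern = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
log_pattern = np.log10(pattern + 1.0)               # log scale shows fringes
```

Running this on every frame from a camera gives the "live" version.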


More Youtube videos

2010/10/31 // 0 Comments

So, I could not do what I promised last time: the Monte Carlo fitting works on perfect simulated scattering patterns, but is as yet unable to deal with the addition of a flat background. So I will have to take a rain check. In the meantime, I have made two videos (part 1 and part 2) together with some colleagues during my time in Denmark. The videos demonstrate small-angle scattering using laser light scattering from a hair; in part 2, the diameter of the hair is calculated. (I watched too many Carl Sagan videos and I am impressed and encouraged by them…) Check out the videos: part one and part two.
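If you want to try the part-2 calculation yourself: by Babinet's principle, a hair of diameter d produces the same diffraction minima as a slit of width d, so d follows from the position of the first minimum. A back-of-the-envelope sketch (my own Python, with purely illustrative numbers, not the ones from the video):

```python
import numpy as np

# Babinet's principle: a hair of diameter d gives the same diffraction
# minima as a slit of width d: d * sin(theta_m) = m * lambda.
wavelength = 532e-9    # green laser pointer, in m (illustrative value)
screen_dist = 2.0      # hair-to-screen distance, in m (illustrative value)
x_first_min = 1.2e-2   # distance from centre to first minimum, in m

theta = np.arctan(x_first_min / screen_dist)
d = wavelength / np.sin(theta)               # first minimum, m = 1
print(f"hair diameter ~ {d * 1e6:.0f} um")   # ~89 um, a plausible hair
```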


“Perfect” 1D pattern generation software

2010/08/30 // 0 Comments

I have written some small, simple bits of Matlab software that can generate scattering patterns, over the range you request, for polydisperse, dilute spheres or ellipsoids. While nothing new per se, this implementation is guaranteed to produce correct scattering patterns irrespective of the width of the size distribution. Allow me to quickly explain.

The usual way of calculating these patterns in fitting functions and the like is to choose an upper size limit (perhaps related to the width and mean of the distribution), and to divide the size range between zero and this upper limit into perhaps 100 different sizes. The scattering pattern of each of these contributions is then calculated, multiplied by its probability (obtained from the probability (or size) distribution function), multiplied by the square of the particle volume for that size, and summed. In all, then, this is a numerical integration over the volume-square-weighted size distribution. The problem lies in the determination of the upper limit and the number of divisions required. In the past, I have tried using the cumulative distribution function to select "smart" divisions, or adjusting the width and mean to compensate for the volume-square weighting, but this often resulted in oscillatory behaviour appearing in the scattering patterns. An alternative solution was therefore required.

These functions are not necessarily fast enough for fitting purposes, but they can be used for checking the applicability of your fitting procedures: you should get out of your fitting functions what you put into these simulated patterns. The functions work by randomly generating a number of spheres or ellipsoids. A scattering pattern is calculated from an initial set of shapes; then the pattern is recalculated with a new block of shapes added to the original set. This is repeated until the effect of adding a new block on the scattering pattern no longer exceeds a certain threshold.

Included are the programs for generating scattering patterns using polydisperse distributions of spheres or ellipsoids. All distributions supported by the Statistics Toolbox's "random" function are available; for ellipsoids, an additional distribution can be used for the aspect ratio. If the use is not clear, let me know and I will write some more extensive documentation. The programs are: perfectpattern_spheres and perfectpattern_ellipsoids. Have fun!
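The actual programs are the Matlab files named above; to make the block-wise scheme concrete, here is a rough Python transliteration for the sphere case (function names and defaults are mine, assumptions and all):

```python
import numpy as np

def sphere_intensity(q, radii):
    """Sum of sphere form-factor intensities, weighted by volume squared."""
    qr = np.outer(q, radii)                              # shape (nq, n)
    f = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3     # sphere amplitude
    v = (4.0 / 3.0) * np.pi * radii**3
    return (f**2 * v**2).sum(axis=1)

def perfect_pattern_spheres(q, draw_radii, block=10_000, tol=1e-4,
                            max_blocks=1_000, seed=None):
    """Keep adding blocks of randomly drawn radii until the normalized
    pattern changes by less than tol between successive blocks."""
    rng = np.random.default_rng(seed)
    total = sphere_intensity(q, draw_radii(block, rng))
    n = block
    for _ in range(max_blocks):
        previous = total / n
        total += sphere_intensity(q, draw_radii(block, rng))
        n += block
        current = total / n
        if np.max(np.abs(current - previous) / current) < tol:
            break
    return total / n

# Example: a broad lognormal size distribution (median radius 10 nm).
q = np.logspace(-2, 0, 200)                              # in 1/nm
draw = lambda size, rng: rng.lognormal(np.log(10.0), 0.5, size)
intensity = perfect_pattern_spheres(q, draw, seed=1)
```

The design point is simply that drawing sizes directly from the distribution sidesteps the upper-limit and binning questions entirely: every sphere is weighted correctly by construction.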