
Does it matter? part 3: Natural background radiation

2012/08/06 // 0 Comments

[With thanks to Dr. Masato Ohnuma for bringing this to my attention] It is all around us, occasionally all through us, and every now and then your small-angle scattering detector might see it: photons from nature. Do we need to consider these all-natural organic photons in the data corrections we do, or can we safely neglect them as “present in homeopathic concentrations”? The short answer: we absolutely need to take them into account! The slightly longer answer is a bit more nuanced.

Natural background should behave like a darkcurrent measurement: it is detected independently of the state of the X-ray generator or the sample transmission; indeed, it bypasses most of these. It should only be a function of time and location (and maybe wind direction…). Considering that some of this radiation might come from the giant nuclear reactor in the sky, could it be significantly dependent on time of day as well?

At hand we have: 1) a SAXS instrument in a state of institute-mandated shutdown, with a nice 100k (single-panel) PILATUS detector sitting idle, and 2) a weekend of time. Now that the weekend is over, twelve 6-hour measurements have been collected of nothing but natural background on the detector. Surprisingly, the collected counts were quite numerous. They are listed here in the sequence they were measured, divided over timeframes:

(A)fternoon: 13:47 to 19:47
(E)vening: 19:47 to 01:47
(N)ight: 01:47 to 07:47
(M)orning: 07:47 to 13:47

The total collected counts on the single PILATUS panel (195 × 487 pixels) were:

A……64189 E……64063 N……64811 M……65468
A……63748 E……63051 N……64746 M……64694
A……64277 E……65546 N……63967 M……63233

So, as we can see, significant counts are collected in 6-hour measurements, and there is no clear indication of nature’s addiction to sunshine in these results (though statisticians should feel free to prove me wrong).
With a mean of 64414.5 counts and a standard deviation of 702.7 counts, spread over 94965 pixels and 21600 seconds, that works out to a count rate of 31.4 microHertz (micro-counts per pixel per second; note that the number of significant digits roughly follows the accuracy indicated by the standard deviation).

One side note: this natural radiation correction is not automatically taken care of by a normal background subtraction. That is to say, this correction is to be applied to the data before most other corrections are done. For example, the transmission factor does not play a role here, nor do the incoming flux or geometry corrections (see equation 2.5 in the document linked to from this post). It behaves, for all intents and purposes, like a time-dependent variant of a darkcurrent correction. A normal background subtraction only corrects for this natural background radiation if your transmission factor is 1; for lower transmission factors the correction becomes increasingly necessary.

So how much does it matter? To answer that question, I have measured a sample at two different sample-to-detector distances. A significant portion of the data should overlap between the two geometries, and imperfections in the data may show up clearly there. This sample has a transmission factor close to 0.5, and was measured for a wholesome amount of time (6–12 hours). Figure 1 shows a comparison before and after subtraction of a constant natural background radiation value. As is evident, a significant change can be ascribed to this simple subtraction!

So, to summarize: for long measurements of absorbing samples, even on very nice equipment, your data will be significantly affected by natural background radiation. You will not automatically know your data is affected by this, and you are therefore better off taking this correction into account just in case it matters for your analysis. Happy Scattering!
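For those who want to check the arithmetic, here is a minimal sketch that reproduces the per-pixel count-rate estimate from the twelve 6-hour frame totals listed above (the resulting rate rounds to the same 31.4 µHz):

```python
# Background count-rate estimate from twelve 6-hour PILATUS 100k frames.
# The frame totals are the values quoted in the post.
import statistics

counts = [64189, 64063, 64811, 65468,
          63748, 63051, 64746, 64694,
          64277, 65546, 63967, 63233]

n_pixels = 195 * 487       # single PILATUS 100k panel
t_exposure = 6 * 3600      # seconds per frame

mean = statistics.mean(counts)
stdev = statistics.stdev(counts)  # sample standard deviation
# micro-counts per pixel per second:
rate_uHz = mean / (n_pixels * t_exposure) * 1e6

print(f"mean = {mean:.1f} counts, stdev = {stdev:.1f} counts")
print(f"rate = {rate_uHz:.1f} micro-counts per pixel per second")
```

Given the size of the standard deviation relative to the mean, quoting the rate to three significant digits is about as far as these numbers will stretch.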


Ruler vs. Silver behenate: Trust but verify

2012/06/18 // 4 Comments

One of the most important parameters you need when analyzing small-angle scattering patterns is the distance from the sample to the detector, as this defines the scattering wave vector q. This is often determined using the same approach as for wide-angle diffraction: measurement of a standard crystalline sample. Whereas in the past stretched rat tail collagen was the standard of the day, these days people prefer the much more sensible silver behenate. This saves you from having to find a suitable rat donor, preparing the tail, and stretching it just so that it gives you the right answer (as the degree of stretching affects the spacing). Silver behenate is easy to apply, easy to get, and shows a nice round crystalline peak at all but the smallest angles.

For a while now, though, some others and I have measured the sample-to-detector distance with a ruler instead. There are several benefits to this. Firstly, it is dead simple to do (there is no need to recode my diffraction ring-fitting method into Python, for example) and the tools are readily available. Secondly, it is easy to calculate that even if you are off by a centimeter on the meter, the impact on the scattering vector q is minimal compared to the uncertainty in the measured intensity. Lastly, I do this because there is a rumor that silver behenate is in fact unstable, hygroscopic and essentially untrustworthy.

Now, though, it is time to put it to the test: is there really so much variability in silver behenate? To find out, Pawel (one of my colleagues) collected all the silver behenate samples he could find in the labs (five, some of which indeed looked rather dubious), and we set about measuring and analyzing them using his standard method (300 seconds, after which the measurement was analyzed using Jan Ilavsky & co.’s excellent brainchild).
Without further ado, these are the resulting sample-to-detector distances in meters:

Sample A: 1.3602
Sample B: 1.3598
Sample C: 1.3606
Sample D: 1.3595
Sample E: 1.3578

While this is hardly a comprehensive test, these results seem to indicate that silver behenate is about as accurate as a ruler, with a spread of a few millimeters to the meter. While I am not going to use silver behenate in my methodology just yet (I still haven’t programmed a ring-fitting method in my new Python code, you see), I am now less likely to label silver behenate as unreliable.
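To put the spread in perspective, here is a quick sketch using the five distances above. At small angles q is, to a good approximation, inversely proportional to the sample-to-detector distance, so the relative error in q is about the same as the relative error in the distance:

```python
# Spread of the five silver-behenate-derived distances, and the
# resulting relative uncertainty in q (small-angle approximation:
# dq/q ~ dL/L).
import statistics

distances = [1.3602, 1.3598, 1.3606, 1.3595, 1.3578]  # metres, samples A-E

mean_L = statistics.mean(distances)
spread = max(distances) - min(distances)

print(f"mean distance : {mean_L:.4f} m")
print(f"spread        : {spread * 1000:.1f} mm")
print(f"relative dq/q : {spread / mean_L:.2%}")
```

The full spread is under 3 mm on 1.36 m, i.e. roughly a 0.2% effect on q, which is indeed small compared to typical intensity uncertainties.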


Nothing new here

2012/03/22 // 0 Comments

So it seems science has beaten us to the punch once again. Remember last week’s optimistic story on how you can make better use of your (measurement) time? It turns out it has been done (at least once) before. The year was 1993, the authors were M. Steinhart and J. Pleštil, and they did the same from a different perspective [1]. Credit where credit is due: their as-yet uncited paper contains a good study of measurement stability and its effects on the inferred information, and indeed has the equation for effective time-expenditure available (though written up in a confusing way). So, all sadness on our side aside (as there is now no short-sweet-and-quick publication possible on this), please use your time wisely and cite that 1993 paper as it deserves. Do not let good methods like this be covered by years of dust.

To give you some more ammunition for your citation-gun, here is a good paper detailing dead-time correction, and how Poisson statistics fail when these corrections are applied [2]. I found that I should not blindly square-root my photons (real and imaginary), but that I should use the equations they provide where applicable. If you have a PILATUS detector, there is no reason to sweat much, as this correction only matters in the very extreme count-rate regions [3]. As always: let us know what you think and leave a comment!

[1] Steinhart, M., & Pleštil, J. (1993). Possible improvements in the precision and accuracy of small-angle X-ray scattering measurements. Journal of Applied Crystallography, 26, 591–601.
[2] Laundy, D., & Collins, S. (2003). Counting statistics of X-ray detectors at high counting rates. Journal of Synchrotron Radiation, 10, 214–218. doi:10.1107/S0909049503002668
[3] Kraft, P., Bergamaschi, A., Broennimann, C., Dinapoli, R., Eikenberry, E. F., Henrich, B., Johnson, I., et al. (2009). Performance of single-photon-counting PILATUS detector modules. Journal of Synchrotron Radiation, 16, 368–375. doi:10.1107/S0909049509009911
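To make the dead-time point concrete, here is a minimal sketch, my own illustration rather than the exact formulas from the papers above, of the textbook non-paralyzable dead-time correction and of why the error bar on the corrected rate grows faster than plain Poisson square-root statistics would suggest. The dead-time value below is an assumed, merely plausible number:

```python
# Non-paralyzable dead-time correction sketch (illustrative only).
# Observed rate m relates to true rate n via m = n / (1 + n*tau),
# so n = m / (1 - m*tau).

def correct_rate(m, tau):
    """True count rate from observed rate m (counts/s), dead time tau (s)."""
    return m / (1.0 - m * tau)

observed = 5e5   # observed count rate in counts/s (illustrative value)
tau = 125e-9     # dead time in seconds (assumed, order-of-magnitude only)

true_rate = correct_rate(observed, tau)

# Error propagation: sigma_n = sigma_m * dn/dm, with
# dn/dm = 1 / (1 - m*tau)**2, so the corrected counts are
# "super-Poisson": naive sqrt(N) underestimates the uncertainty.
inflation = 1.0 / (1.0 - observed * tau) ** 2

print(f"true rate ~ {true_rate:.0f} counts/s")
print(f"uncertainty inflation ~ {inflation:.3f}x over sqrt(N)")
```

At modest rates the inflation factor stays close to 1, which is why single-photon-counting detectors like the PILATUS only need to worry about this in the extreme count-rate regions.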


Making better use of your time: optimizing measurement time

2012/03/14 // 1 Comment

Often, especially when measuring at large facilities, you are given a limited amount of time. This limited time has to be divided between a measurement of the sample and a measurement of the background. Normally, one would spend about 50% of the time on the sample and 50% on the background, or even more time on the background “because the counts are so low” (I know, I did the same!). There must be a better way to calculate the optimum division of time!

So a colleague, Samuel Tardif, and I spent a little bit of time jotting down some equations and plotting the result. The upshot is that for large differences between the sample and background count rates, significant reductions in uncertainty can be obtained through a better division of time for any small-angle scattering measurement. In the case of a Bonse-Hart camera or a step-scan small-angle scattering measurement, each measurement point can even be tuned to the optimal dwell times for sample and background, after a quick initial scan to determine the signal-to-background ratio at each point. Please let me know how it works for you! The calculation can be checked in this document where we wrote up the results: ideal_background
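The idea can be sketched independently of the linked write-up (whose notation may differ). Assuming both measurements are Poisson processes with sample count rate s and background count rate b, the variance of the estimated difference is s/t_s + b/t_b; minimizing this under the constraint t_s + t_b = T gives the split t_s/t_b = sqrt(s/b):

```python
# Optimal division of a fixed total time T between sample and
# background measurements, assuming Poisson counting statistics.
# Minimizing var = s/t_s + b/t_b subject to t_s + t_b = T yields
# t_s / t_b = sqrt(s / b).
import math

def optimal_split(rate_sample, rate_background, total_time):
    ratio = math.sqrt(rate_sample / rate_background)
    t_background = total_time / (1.0 + ratio)
    t_sample = total_time - t_background
    return t_sample, t_background

# Illustrative example: sample counts 100x faster than background,
# one hour of total time -> a 10:1 split, not 1:1.
t_s, t_b = optimal_split(100.0, 1.0, 3600.0)
print(f"sample: {t_s:.0f} s, background: {t_b:.0f} s")
```

Only when sample and background count rates are equal does the conventional 50/50 split come out as optimal; the larger the rate difference, the more lopsided the ideal division becomes.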


Data processing flowchart and news on an old publication

2012/02/29 // 1 Comment

Short news first: by going through the motions and waiting for Elsevier to get back to me, I have gotten permission (for the royal sum of 0.00 eurodollars) to repost one more paper from Polymer on my site. That has now gone onto the 2010 publications page here.

Then it is time to give you something. For those who have to do their own data processing and would like to see my way of doing it, I have attached my data processing flowchart to this post. It is not a perfect method, but as far as I can tell it works quite well. If you are interested in getting the actual Python code that does all this work, drop me a line. Since the code is quite new, it does not support many strange detectors, so if support needs to be built for a particular detector, I will be happy to spend some of my time looking at whether it can be done. So there: imp_imagecorrect_and_imgint.

Let me know if there are improvements, obscurities, or if you have any other comments on this.