Unfortunately, this week I do not have anything ready for you. It is with great regret that I must therefore skip this week, and break my “once-a-week” posting schedule. Regular scheduling should return next week.
Just a quick note, however. Thanks to Sylvain Prévost for bringing this back to my attention: Brûlet et al. derived an equation very similar to the one derived in last week’s post. Additionally, Strunz et al. discuss transmission-factor corrections that may also need to be considered in X-ray scattering (in particular for ultra-small-angle X-ray scattering).
Hopefully I will have the time to look into these matters in the near future and give you some more insight into the magnitude of these problems. As suggested by Sylvain, it may be a good idea to adapt the “imp2” data reduction software to handle a more general treatment of transmission factors (so that it supports both SAXS and SANS). A bit of thought is needed on how to enable such fancy background subtraction while keeping the modular, flexible nature of the program.
Brûlet, A., Lairez, D., Lapp, A. & Cotton, J.-P. (2007). Improvement of data treatment in small-angle neutron scattering. J. Appl. Cryst. 40, 165–177. [journal link], [free link]
Strunz, P., Šaroun, J., Keiderling, U., Wiedenmann, A. & Przenioslo, R. (2000). General formula for determination of cross-section from measured SANS intensities. J. Appl. Cryst. 33, 829–833. [journal link]
So it seems science has beaten us to the punch once again. Remember last week’s optimistic story on how you can make better use of your (measurement) time? Turns out it has been done (at least once) before.
The year was 1993, the authors were M. Steinhart and J. Pleštil, and they did the same from a different perspective. Credit where credit is due: their as-yet rarely cited paper contains a good study of measurement stability and its effects on the inferred information, and indeed gives the equation for effective time expenditure (though written up in a confusing way).
So all sadness on our side aside (as there is now no short-sweet-and-quick publication possible on this), please use your time wisely and cite that 1993 paper as it deserves. Do not let good methods like this be covered by years of dust.
To give you some more ammunition for your citation-gun, here is a good paper detailing dead-time correction, and how Poisson statistics fail when these corrections are applied. I found that I should not blindly square-root my photons (real and imaginary), but that I should use the equations they provide where applicable. If you have a Pilatus detector, there is no reason to sweat much, as this correction only matters in the very extreme count-rate regions.
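For reference, the simplest dead-time model I know of (the non-paralysable one; this is my own sketch, not necessarily the exact model used in the cited paper) takes only a few lines of Python:

```python
def deadtime_correct(measured_rate, tau):
    """Correct a measured count rate for detector dead time.

    Non-paralysable model: the true rate n relates to the measured
    rate m through m = n / (1 + n * tau), i.e. n = m / (1 - m * tau),
    where tau is the dead time per registered event (in seconds).
    """
    if measured_rate * tau >= 1.0:
        raise ValueError("measured rate exceeds the model's saturation limit")
    return measured_rate / (1.0 - measured_rate * tau)
```

Note that the corrected counts no longer obey plain Poisson statistics, which is exactly the point made in the paper.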
As always: let us know what you think and leave a comment!
Steinhart, M. & Pleštil, J. (1993). Possible improvements in the precision and accuracy of small-angle X-ray scattering measurements. J. Appl. Cryst. 26, 591–601.
Laundy, D. & Collins, S. (2003). Counting statistics of X-ray detectors at high counting rates. J. Synchrotron Rad. 10, 214–218. doi:10.1107/S0909049503002668
Kraft, P., Bergamaschi, A., Broennimann, C., Dinapoli, R., Eikenberry, E. F., Henrich, B., Johnson, I., et al. (2009). Performance of single-photon-counting PILATUS detector modules. J. Synchrotron Rad. 16, 368–375. doi:10.1107/S0909049509009911
Often, especially when measuring at large facilities, you are given a limited amount of beamtime, which has to be divided between a measurement of the sample and a measurement of the background.
Normally, one would spend about 50% of the time on the sample and 50% on the background, or even more time on the background “because the counts are so low” (I know, I did the same!). There must be a better way to calculate the optimal division of time!
So a colleague, Samuel Tardif, and I spent a little time jotting down some equations and plotting the result. It turns out that for large differences between the sample and background count rates, significant reductions in uncertainty can be obtained through a better division of time, for any small-angle scattering measurement.
In the case of a Bonse-Hart camera or a step-scan small-angle scattering measurement, each measurement point can be tuned to the optimal dwell time for the sample and background, after a quick initial scan to determine the signal-to-noise ratio at each point. Please let me know how it works for you! The calculation can be checked from this document where we wrote up the results: ideal_background
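For the impatient, here is a minimal Python sketch of the result as I understand it (assuming Poisson counting statistics; the function name and interface are my own). The variance of the background-subtracted rate is r_s/t_s + r_b/t_b, and minimising this under t_s + t_b = T gives t_s/t_b = sqrt(r_s/r_b):

```python
import math

def optimal_time_split(rate_sample, rate_background, total_time):
    """Divide total_time between sample and background measurements
    so that the variance of the background-subtracted count rate is
    minimised, assuming Poisson statistics.

    The optimum satisfies t_sample / t_background = sqrt(r_s / r_b).
    Returns (t_sample, t_background).
    """
    ratio = math.sqrt(rate_sample / rate_background)
    t_background = total_time / (1.0 + ratio)
    t_sample = total_time - t_background
    return t_sample, t_background
```

For a sample counting 100 times faster than the background, this suggests spending about ten times as long on the sample as on the background, quite different from the usual 50/50 split. Do check this against the write-up linked above.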
Those of you who have been reading this weblog for a while may remember the calculation of the sample self-absorption correction for plate-like samples. The result was a straightforward equation which could be used to correct the scattering of strongly absorbing samples (absorbing more than 30% of the beam) with a plate-like geometry. It was mentioned then that the calculation of this correction for capillary samples is more complicated, but would be good to have. The self-absorption of a capillary shows up as a butterfly-shaped shadow on your scattering pattern.
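As a refresher on the plate-like case, here is a sketch of the standard flat-plate result at normal beam incidence, obtained by averaging the attenuation over the scattering depth (not necessarily written in the same form as in the earlier post):

```python
import math

def plate_absorption(mu_d, two_theta):
    """Self-absorption factor for a flat plate at normal beam incidence.

    mu_d is the optical thickness mu * d; two_theta the scattering
    angle (radians). Averaging exp(-mu*(x + (d - x)/cos(2theta))) over
    the depth x gives
        A = (exp(-mu d) - exp(-mu d sec 2theta)) / (mu d (sec 2theta - 1)).
    """
    sec = 1.0 / math.cos(two_theta)
    if abs(sec - 1.0) < 1e-12:
        # forward direction: the factor reduces to the plain transmission
        return math.exp(-mu_d)
    return (math.exp(-mu_d) - math.exp(-mu_d * sec)) / (mu_d * (sec - 1.0))
```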
In the latest issue of J. Appl. Cryst., there is a new paper discussing exactly this. Sulyanov et al. have programmed a solution to calculate the sample self-absorption factor for cylindrical samples. The code they provide is written in Fortran, and I will spend some time trying to transcode it into Python in the near future. Judging from their solution, which is rather more involved than I expected, I am happy I did not try to derive it myself.
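Until I get around to transcoding their Fortran, a brute-force numerical estimate (my own sketch, emphatically not their algorithm) can be had by averaging the in-and-out attenuation over the capillary cross-section:

```python
import numpy as np

def capillary_absorption(mu, radius, two_theta, n=400):
    """Numerical self-absorption factor for a cylindrical sample.

    Beam along +x, scattering in the xy-plane at angle two_theta
    (radians). For each point of the circular cross-section we add
    the incoming path length and the outgoing chord length, then
    average exp(-mu * total_path) over the illuminated disc.
    """
    xs = np.linspace(-radius, radius, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius**2

    # incoming path: from the entry point (-sqrt(R^2 - y^2), y) to (x, y)
    l_in = X + np.sqrt(np.maximum(radius**2 - Y**2, 0.0))

    # outgoing path: chord from (x, y) along (cos 2theta, sin 2theta)
    dx, dy = np.cos(two_theta), np.sin(two_theta)
    b = X * dx + Y * dy                       # p . d
    disc = b**2 + radius**2 - (X**2 + Y**2)   # >= 0 inside the circle
    l_out = -b + np.sqrt(np.maximum(disc, 0.0))

    return np.exp(-mu * (l_in + l_out))[inside].mean()
```

This neglects refraction and any angular opening of the detector pixel, so treat it as a cross-check for the proper implementation rather than a replacement.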
Additionally, in the same issue, Zeidler has a solution for samples of spherical geometry. While I have not encountered a problem requiring this solution before, it is certainly noteworthy, and may be of use to some of you doing scattering from suspended objects.
Lastly, there is a new video of one of my latest short presentations online here, explaining a little about my work as well as the Monte Carlo analysis method. It is very short, and a more detailed MC method explanation will follow shortly (as I have been promising for quite a while now).
Last weekend, while visiting friends nearby, a copy of the book “Bad Science” by Ben Goldacre was dropped in my lap. Having read the occasional post on his weblog (http://www.badscience.net/), I had already planned to get it, so I started reading the book with rather high expectations. (This copy, if I did not misunderstand my friend, happens to be an unofficial “homeless” book: the idea is that such books are passed along after you have read them. An idea I like!)
Do not fret, for I have more software (with documentation!) lined up for presentation on this website soon, but I am still working on the documentation.
Please bear with me as I shamelessly promote another publication of mine that came out just days ago. The paper is available here: http://dx.doi.org/10.1016/j.polymer.2010.07.045
It concerns curious observations of oscillations in the scattering pattern from looped single aramid filaments. As an aside, loading the 12-micrometre filaments into the microchannel devices is for young eyes only, and even then may be accompanied by expletives.
Nevertheless, I am very happy that this is published.
Catching up with current affairs, I stumbled across this beauty. Now, I find that this paper starts a little chaotically, but very quickly we come across some very useful equations indeed, and a link between the equation used in their analysis of phase transitions in fluids and various other equations, such as the Ornstein-Zernike structure factor and the Debye-Bueche equation. The equations published in this paper appear ready to be applied to a wide variety of amorphous scattering patterns, and are capable of extracting quite a few physical parameters! There will likely be much more on this topic as I get to apply them. To top it all off, the data used in the paper were “extracted” from published graphics by Ms. A. Höhle. I can see her sitting there now with a ruler and paper, meticulously noting down her estimates for the q and S values of each datapoint… Perhaps this would be a good argument for publishing some of our best data online, so that others can have a go at analysing it?
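For reference, the two classical limiting forms mentioned above are easy to write down (a sketch; the parameter names i0 for the forward intensity and xi for the correlation length are mine):

```python
def ornstein_zernike(q, i0, xi):
    """Ornstein-Zernike (Lorentzian) form: I(q) = I0 / (1 + xi^2 q^2)."""
    return i0 / (1.0 + (xi * q) ** 2)

def debye_bueche(q, i0, xi):
    """Debye-Bueche (squared-Lorentzian) form: I(q) = I0 / (1 + xi^2 q^2)^2."""
    return i0 / (1.0 + (xi * q) ** 2) ** 2
```

The appeal of the paper is precisely that it links these special cases into a more general expression; the sketches above are only the familiar end-members.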
On another note, I want to point you towards the horror of textbooks. How fitting, then, that the next special issue of J. Appl. Cryst. is about teaching! As you may have read on Slashdot a few weeks ago, textbooks are still very expensive, even for topics as stagnant as elementary mathematics. Prof. Feynman also had a few words to say on the topic of textbooks. Fortunately, some alternatives are popping up, offering your students an escape from ridiculously expensive textbooks: free textbooks.
If you know more, let me know and I will add them to the list!
I found some interesting papers for you, and a talk. Let me start with the talk. It is a TED talk (naturally) concerning TED talks. This nice introspective talk is actually of interest for all of us as it gives a few pointers to the set-up of excellent (and terrible) talks, with a fascinating slide on the colours used to evoke certain responses from the audience. Funny and applicable to us to make our talks better (and we know we need it, right?). The talk is here.
Then there are some papers, two of which I found to be closely related to my own work. One paper discusses the stretching of voids in tensile experiments, simulating the 2D patterns with cylinders (but unfortunately using 1D slices rather than a full 2D fit to arrive at a solution). That paper is here (yes, you have probably already read it, since it is in J. Appl. Cryst., but just in case you have been too busy, like me, to read the table of contents…). Another one is similar, but I must admit I have not yet managed to read it completely. It looks interesting, though.
Also, it is not every day that you see a new diffractometer geometry being suggested. I wish these guys good luck with the further development of their diffractometer, and I hope they publish some fantastic results when they get to it.
At the moment, I am trying to do a literature review (something I should have done much, much earlier) on in-situ particle growth studies using SAXS. I have come up with quite a few references by now, but if you know of an excellent study, do drop me a line at brian at stack dot nl, and I will be eternally grateful. If you are, on the other hand, interested in co-authoring a small review paper on the topic, I am always open to collaboration!
Dear all, the reason I have been silent for a few weeks was that I was waiting for this:
Herewith (with a little bit of pride), I would like to present the first paper published as a result of my Ph.D. research. This groundbreaking paper is naturally essential reading for all working in the fields of small-angle scattering, fibres, world politics and astrology. The paper is published at:
Journal of Applied Crystallography, 2010, Volume 43, pages 837-849
And is also available for download from here.
Abstract: After consideration of the applicability of classical methods, a novel analysis method for the characterization of fibre void structures is presented, capable of fitting the entire anisotropic two-dimensional scattering pattern to a model of perfectly aligned, polydisperse ellipsoids. It is tested for validity against the computed scattering pattern for a simulated nanostructure, after which it is used to fit the scattering from the void structure of commercially available heat-treated poly(p-phenylene terephthalamide) fibre and its as-spun precursor fibre. The application shows a reasonable fit and results in size distributions for both the lengths and the widths of the ellipsoidal voids. Improvements to the analysis methods are compared, consisting of the introduction of an orientation distribution for the nano-ellipsoids, and the addition of large scatterers to account for the effect of fibrillar scattering on the scattering pattern. The fit to the scattering pattern of as-spun aramid fibre is improved by the introduction of the large scatterers, while the fit to the scattering pattern obtained from the heat-treated fibre improves when an orientation distribution is taken into account. It is concluded that, as a result of the heat treatment, the average width and length of the scatterers increase.
Thank you all for your interest! I will be posting again in a few weeks.
Just a quick heads-up before I start on the ellipsoid form factor alternative: I spoke last post of the efforts towards data archiving for possible open-access purposes. Shortly after that post, this news appeared. It seems we may be heading (more rapidly than I thought) towards an age in which we have to make data public, which means archiving with metadata and storing in an archival format. I do hope that (Matlab) reading and writing functions for the NeXus file format become available soon.
Regarding the ellipsoid form factor, I have mainly been using the ellipsoid adaptation of the Rayleigh sphere scattering function. However, this function requires integration over all orientations (see e.g. equation 3.46 in the SASfit manual), which becomes very time-consuming for use as a fitting function if you additionally want to integrate over minor- and major-axis size distributions.
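To illustrate where the time goes, here is a minimal Python sketch of that orientational average (my own implementation of the standard expression for an ellipsoid of revolution, using Gauss-Legendre quadrature; variable names are mine):

```python
import numpy as np

def sphere_amplitude(qr):
    """Rayleigh sphere scattering amplitude: 3 (sin x - x cos x) / x^3."""
    qr = np.asarray(qr, dtype=float)
    small = qr < 1e-6                       # avoid 0/0 at the origin
    qr_safe = np.where(small, 1.0, qr)
    amp = 3.0 * (np.sin(qr_safe) - qr_safe * np.cos(qr_safe)) / qr_safe**3
    return np.where(small, 1.0, amp)

def ellipsoid_form_factor(q, r_minor, epsilon, n_alpha=64):
    """Orientation-averaged form factor of an ellipsoid of revolution.

    Semi-minor axis r_minor, axis ratio epsilon (semi-major axis is
    epsilon * r_minor). Evaluates
        P(q) = int_0^(pi/2) F^2(q * r_eff(alpha)) sin(alpha) d(alpha),
    with r_eff = r_minor * sqrt(sin^2 alpha + epsilon^2 cos^2 alpha).
    """
    nodes, weights = np.polynomial.legendre.leggauss(n_alpha)
    a = 0.25 * np.pi * (nodes + 1.0)        # map [-1, 1] -> [0, pi/2]
    w = 0.25 * np.pi * weights
    r_eff = r_minor * np.sqrt(np.sin(a)**2 + epsilon**2 * np.cos(a)**2)
    f2 = sphere_amplitude(np.outer(np.atleast_1d(q), r_eff))**2
    return (f2 * (np.sin(a) * w)).sum(axis=-1)
```

Each evaluation already costs n_alpha amplitude evaluations per q point; nesting two further integrals over size distributions multiplies that cost accordingly, which is exactly the problem.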
An alternative form factor appears to have been published in a light-scattering paper by Beattie and Tisinger in 1969. Although their procedure requires an iterative approach, it may prove faster than the aforementioned method. Efforts are underway to implement it. Assistance, as always, is highly appreciated.
Beattie & Tisinger (1969). Light scattering functions and particle-scattering factors for ellipsoids of revolution. J. Opt. Soc. Am. 59, 818.