A Geiger-Mueller (GM) discharge tube is a device designed to detect ionizing radiation. It consists of a long tube with a thin wire running down its center. GM tubes are evacuated and then filled at low pressure with a noble gas such as neon before being sealed. An electric field is maintained between the thin wire and the walls of the tube; the thin wire is the anode and the walls of the tube are the cathode. This field is one of the parameters that must be fine-tuned for each device.

When some form of ionizing radiation enters the Geiger tube, it will collide with one of the electrons of the gas atoms inside the tube. This collision knocks the electron out of its orbit. No longer bound to a nucleus, the electron is pulled towards the anode. Accelerated by the strong electric field, it gains enough energy to knock other electrons out of their orbits on its way to the anode. Each of these electrons knocks still others free, and thus a chain reaction occurs, with more and more electrons being knocked loose as they are pulled towards the anode. Every time an electron is knocked free, a neon atom is ionized. Being far heavier than the electrons, the neon ions are accelerated much more slowly by the electric field, with the result that as more electrons are knocked out of place, the tube fills with positively charged ions.

Geiger-Mueller tubes suffer a dead time after each count, while the voltage across the detector builds back up to its original value. This dead time is constant for a counter operating at a given voltage. Suppose $m$ is the measured count rate of a sample, and $n$ is the true count rate, i.e. the count rate that would be detected by a device with no dead time. If $t$ is the time after each count that the detector is disabled, then $mt$ is the fraction of the time that the counter is disabled and thus the fraction of the counts 'lost'. Therefore, $1 - mt$ is the fraction of particles that were actually counted. So $m = n(1 - mt)$ and $n = \frac{m}{1 - mt}$, or, through the binomial expansion, $n \approx m(1 + mt)$. In order to measure $t$, we will need two samples, which give us rates $m_1$ and $m_2$ separately, and rate $m_{12}$ when counted simultaneously. If $b$ is the background count rate, then we should have $n_1 + n_2 = n_{12} + b$, where $n_i$ is the true rate corresponding to the measured rate $m_i$. Substituting from above and solving for $t$, ignoring the higher-order terms since they will be very small, we get:

$$t \approx \frac{m_1 + m_2 - m_{12} - b}{m_{12}^2 - m_1^2 - m_2^2}$$
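To make the calculation concrete, here is a minimal Python sketch of this two-source dead-time formula. The rates used below are made-up placeholder values, not the rates measured in this experiment.

    def dead_time(m1, m2, m12, b):
        """Estimate the dead time t (in seconds) from measured count rates.

        m1, m2 -- rates of the two samples counted separately (counts/s)
        m12    -- rate with both samples counted together (counts/s)
        b      -- background rate (counts/s)
        """
        # First-order result derived above:
        # t = (m1 + m2 - m12 - b) / (m12^2 - m1^2 - m2^2)
        return (m1 + m2 - m12 - b) / (m12**2 - m1**2 - m2**2)

    # Hypothetical example rates, in counts per second:
    m1, m2, m12, b = 120.0, 95.0, 210.0, 0.5
    print(f"estimated dead time: {dead_time(m1, m2, m12, b) * 1e6:.0f} microseconds")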

I made four different measurements of the count rates of two pieces, separately and then together. I thought that I had made some mistakes taking the first set of data, so I flipped the pieces upside down and moved them to different distances from the detector for the following runs. It turns out that my first set of data gave the most reliable result:


The brochure for the device we are using says its dead time should be 150 microseconds. My value of 170.29 microseconds differs from this by about 13%, which is acceptable considering that the accepted value comes from a product brochure describing the class of devices, not from a measurement of my specific device.
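As a rough check (assuming the quoted error is taken relative to the brochure's nominal 150 microseconds):

$$\frac{170.29 - 150}{150} \approx 0.135,$$

i.e. about 13-14%, in line with the figure above.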

In the next part of our experiment, we wished to determine the reliability of our device. If it repeats the same measurement many times, the variation between counts should be on the order of the natural statistical variation inherent in nuclear decay.

We took three sets of six one-minute counts of the activity of a sample of Tl-204, and measured the standard deviation of each set, as well as the number of counts from each set that were not within one standard deviation of the mean. We then calculated the standard deviation of all the counts, and the number of counts that were within one standard deviation of the average of all the counts. If our device works properly, repeated measurements of the same radioactive source should vary according to the Poisson distribution, the mathematical function that describes the occurrence of independent random events.

For each set, we calculated the average and the standard deviation according to the Poisson model, for which the standard deviation is the square root of the mean. We then counted the number of counts that were within one standard deviation of the average. For the data as a whole, we calculated the standard deviation directly, without assuming a Poisson distribution, and found the number of measurements that fell within one standard deviation.
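The following Python sketch carries out the calculation just described. The one-minute counts listed are invented placeholder values, not our Tl-204 data.

    import math

    # Three sets of six one-minute counts (hypothetical placeholder values).
    sets = [
        [1012, 1048, 1003, 987, 1021, 1039],
        [995, 1030, 1018, 1002, 1044, 1009],
        [1025, 991, 1016, 1038, 1007, 1022],
    ]

    for counts in sets:
        mean = sum(counts) / len(counts)
        poisson_sigma = math.sqrt(mean)  # Poisson model: sigma = sqrt(mean)
        within = sum(abs(c - mean) <= poisson_sigma for c in counts)
        print(f"mean = {mean:.1f}, Poisson sigma = {poisson_sigma:.1f}, within 1 sigma: {within}/6")

    # Pooled statistics for all 18 counts, without assuming a Poisson model.
    all_counts = [c for s in sets for c in s]
    mean_all = sum(all_counts) / len(all_counts)
    sample_sigma = math.sqrt(sum((c - mean_all) ** 2 for c in all_counts) / (len(all_counts) - 1))
    within_all = sum(abs(c - mean_all) <= sample_sigma for c in all_counts)
    print(f"sample sigma = {sample_sigma:.1f}, within 1 sigma: {within_all}/18")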


It should be noted that the non-Poisson standard deviation was smaller than our Poisson-assumed standard deviation. The implication is that our data had a smaller standard deviation than would be predicted when measuring a randomly occurring event. Ten out of the 18 counts fell within one measured standard deviation of the average of all the counts; this finding is roughly consistent with what is expected from a normal distribution.
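As a rough check, treating the large-mean Poisson distribution as approximately normal, about 68% of counts should fall within one standard deviation of the mean:

$$0.683 \times 18 \approx 12.3, \qquad \sqrt{18 \times 0.683 \times 0.317} \approx 2.0,$$

so the observed 10 of 18 falls only slightly more than one expected fluctuation below the prediction, which is not surprising for such a small number of trials.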