The Crab Nebula is the remnant of a supernova which exploded - or rather was seen to explode - in AD 1054. The pulsar at the heart of the nebula - the neutron star left over when the star's core collapsed at the moment of the supernova - was discovered in 1968, very shortly after Jocelyn Bell's discovery of the first pulsar using the Cambridge radio telescope.
The pulsar, and its associated nebula, have been and continue to be of great interest to astronomers. But the pulsar itself has a direct practical application, in that it's a very precise and well-studied clock. Rotating once every 33 milliseconds, the pulsar acts as a cosmic lighthouse, sending out beams of electromagnetic energy which happen to sweep across the Earth. Visible in both the optical and radio bands, the pulsar offers a reliable means of determining the accuracy and precision of time-based astronomical observations.
S-Cam, the instrument I helped work on, was a photon-counting detector. Each of its supercooled pixels was connected to a complex chain of electronics which enabled S-Cam to record not only the energy and position of incoming photons - individual particles of light - but also their arrival times. As each photon arrived, it triggered a cascade of electrons which rose and fell in a well-understood fashion. The amplitude of that electron burst gave one an idea of the energy, or colour, of the photon, and the onset of the burst told one about the arrival time. S-Cam's electronics were connected to a GPS receiver, a piece of hardware which provided a definitive timestamp, accurate to millionths of a second, for each photon.
The raw output data of S-Cam, in simple terms, therefore consisted of a long list of sequential photon events. There would be an arrival time - referenced to some offset zero point - then the X,Y coordinates of the pixel which had seen the event, and finally a parameter which was proportional to the energy of the photon.
In other words, something like this:
0.0045 3 5 15
0.0067 1 4 28
0.0143 2 5 09
... and so on. A typical observation could easily contain more than a million photon events, but we didn't need to concern ourselves with the individual lines of data; we had software to chew the numbers and spit out processed data in astronomically interesting formats: spectra, time versus intensity curves, and so on.
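To give an idea of what that number-chewing involves, here's an illustrative sketch in Python - emphatically not our actual processing software, and the file name is made up - which parses the four-column layout above and bins the photons into a crude time-versus-intensity curve:

import numpy as np

# Illustrative only: parse a four-column event list (time, x, y, energy).
# "scam_events.txt" is a hypothetical file name.
def read_events(path):
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1].astype(int), data[:, 2].astype(int), data[:, 3]

# Count photons per time bin to get a simple light curve.
def light_curve(times, bin_width=0.01):
    edges = np.arange(times.min(), times.max() + bin_width, bin_width)
    counts, _ = np.histogram(times, bins=edges)
    return edges[:-1], counts

times, x, y, energy = read_events("scam_events.txt")
bins, counts = light_curve(times)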
Occasionally, though, we had to dig down really deeply into the data, because something didn't quite make sense.
The Crab pulsar was a high-priority target for S-Cam for one simple reason: it offered us the only independent, microsecond-level test of our time-tagging. It was almost impossible to verify that the electronics chain was working well in the lab. We couldn't use the GPS hardware to generate a test signal because the GPS hardware was already part of the electronics chain - it would have been like trying to use a ruler to measure itself. We were confident that the instrument's absolute time-tagging was good to within a second, and we had no reason to doubt the precision of the individual photon events. But only an observation of the Crab would settle matters.
To our relief, all seemed well. The "heartbeat" of the Crab, as revealed by our instruments, looked the way we expected it to. Importantly, the main peak - the higher of the two "blips" in the pulse profile - was arriving bang on the nail. We could be sure of this because the main dish at Jodrell Bank makes regular observations of the Crab, and the small variations in the Crab's rotation period are tracked and published from month to month.
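In case you're wondering how a pulse profile gets built from a bare list of timetags: you fold each arrival time on the pulsar's rotation period and histogram the resulting phases. A minimal sketch in Python - not our actual software, and the 33 millisecond period is just the round figure quoted above:

import numpy as np

# Fold photon arrival times modulo the rotation period; peaks in the
# phase histogram are the pulsar's "heartbeat". Illustrative only.
def fold(times, period=0.033, nbins=50):
    phases = (np.asarray(times) / period) % 1.0
    profile, edges = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return edges[:-1], profile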
As an aside, the big dish at Jodrell - now the Lovell Telescope - was completed in 1957. The two hinge points, on which the dish swivels, incorporated components from the gun turret mechanisms of the British battleships HMS Revenge and HMS Royal Sovereign.
So, all well and good. We had our well-calibrated instrument with a reliable time-tagging system. We could then go ahead and do lots of actual astronomy, safe in the knowledge that the individual photon arrival times could be trusted.
But it wasn't that simple. Much later in the programme, some of our colleagues raised an interesting point. While our Crab pulse profile looked fine from the standpoint of absolute phasing, there was something a bit fishy about it. If the profile was a heartbeat with two spikes, then the spikes themselves were about 30% fatter than they should have been.
We performed an exhaustive analysis of the system and its data processing software, trying to make the pulse profile conform. But nothing we did fixed the problem. And the deeper we looked into it, the more troubling the discrepancy began to look. That "fattening" of the pulse profile was a hint that, down at the level of the individual time tags, something was going wrong. Some percentage of the photons - some, but not all - were being assigned erroneous timetags.
It only showed up in bright objects. Go back to that list of photon events above:
0.0045 3 5 15
0.0067 1 4 28
0.0143 2 5 09
and imagine a small error - an addition or subtraction of some small number - being applied to the timetags. Now, those photons can't break the laws of physics - they must arrive in strict time order! And indeed, that's what appeared to be the case - most of the time.
But occasionally we'd see a timetag where it appeared as if a photon had come in earlier than its predecessor:
0.0045 3 5 15
0.0067 1 4 28
0.0059 2 5 09
Now, this made no sense. And it only "showed up" in observations where we were getting a sufficiently high flux of incoming photons for one to hop the queue and seem to arrive earlier than the one before it.
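The test for this is trivial once you know to look - something with the flavour of the toy check below (illustrative Python, not the actual analysis code):

# Flag any photon whose timetag precedes that of the photon before it.
# Toy example; in reality this ran over millions of events.
def find_time_reversals(times):
    return [i for i in range(1, len(times)) if times[i] < times[i - 1]]

print(find_time_reversals([0.0045, 0.0067, 0.0059]))  # -> [2]: the queue-hopper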
It took weeks to get to the bottom of the problem. And in the end it turned out to be due to a fault at the actual hardware level. A piece of electronic circuitry was not behaving properly, due - it eventually became clear - to a piece of stray conducting material bridging two parts of the electronics board. This component was a set of binary registers designed to convert the raw arrival time of the photon into a different data format, using something called a "Gray Code".
Now what the hell is a Gray Code? I had no idea, but I quickly got an education. A Gray Code, or "reflected binary code", is a very clever mathematical procedure. Incorporating a Gray Code converter into our electronics made very good sense, because what a Gray Code ensures is that there are no sudden "spikes" in the data transmission system.
Imagine sending pulses down an electronic line, encoding arrival times. If your photon events happened at nice intervals, you might get:
0.991
0.992
0.993
0.994
0.995
0.996
0.997
0.998
0.999
1.000
1.001
... and so on.
But that "roll over" from 0.999 to 1.000 is bad news, because instead of just one digit changing, four have changed. And (at least far as I understood it) that's not good in the context of electronic signal processing, where you want things to be as smooth as possible.
The Gray Code solves that. It cleverly ensures that two successive values will only differ by exactly one bit, meaning that - as far as the electronics cares - there is nothing to hint that there has been a "roll over". Later, when you want the data to be intelligible, you apply a reverse Gray Code to put it back into its normal format. And that's exactly what was going on in S-Cam.
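If you want to see that in action, the conversions are only a couple of lines each. This is the textbook reflected-binary code in Python, not a claim about S-Cam's actual register layout:

def to_gray(n):
    # XOR a number with itself shifted right: standard binary-to-Gray.
    return n ^ (n >> 1)

def from_gray(g):
    # Fold the bits back down to recover the ordinary binary value.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# At the binary roll-over 7 -> 8 (0111 -> 1000) four bits change at once;
# the equivalent Gray-coded step is 0100 -> 1100, a single-bit change.
for n in (7, 8):
    print(f"{n}: binary {n:04b}  gray {to_gray(n):04b}")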
Except that, during the Gray Code conversion, that stray bit of conducting material was screwing things up. Basically, if a digit appeared in one register, it would "contaminate" the one next to it, propagating an error throughout the data analysis.
But this effect was so subtle that we had not seen it until someone noticed that our Crab pulse profile was too fat.
Once we understood what was going on, it was relatively simple to construct some simulation software. This verified that we had a complete, self-consistent grasp of the problem. Of course, that didn't help us fix the data that was already affected by the fault.
But actually, it did! By applying our understanding of the Gray Code issue, we were able to build a piece of software which took old datasets and unscrambled the erroneous time tags. It only worked for relatively bright, high-photon output objects - but those were exactly the ones where correcting the time tags really mattered. It was hugely satisfying, at the end of this months-long analysis, to be able to regenerate our original Crab pulse profile and see what we should have been seeing all along.
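To give a flavour of what that software did, here's a toy reconstruction in Python. The fault model - a set bit in one register leaking into its neighbour during the Gray-coded readout - follows the description above, but the register width, the position of the bridge and the repair heuristic are all invented for illustration (the helpers from the earlier sketch are re-defined so this stands alone):

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Hypothetical fault: if the bridged register bit is set, it leaks into
# the adjacent (lower) bit of the Gray-coded timetag before decoding.
def faulty_readout(n, bridge_bit=3):
    g = to_gray(n)
    if g & (1 << bridge_bit):
        g |= 1 << (bridge_bit - 1)
    return from_gray(g)

# Heuristic repair: where a tag breaks time order, try clearing the
# contaminated bit and keep the result if strict ordering is restored.
def repair(tags, bridge_bit=3):
    fixed = list(tags)
    for i in range(1, len(fixed)):
        if fixed[i] < fixed[i - 1]:
            candidate = from_gray(to_gray(fixed[i]) & ~(1 << (bridge_bit - 1)))
            if candidate >= fixed[i - 1]:
                fixed[i] = candidate
    return fixed

seen = [faulty_readout(t) for t in (11, 12, 13)]  # -> [11, 11, 10]: a reversal
print(repair(seen))  # recovers the queue-hopping tag; in-order errors remain

Note that only the queue-hopping tags can be caught this way - which is one way of seeing why the fix only worked on bright sources, where the photon rate was high enough for the errors to reveal themselves.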
That's almost it - but for one curious twist. The Jodrell Bank observations were crucial to our understanding of the problem, and I've already mentioned that the Lovell Telescope rides on battleship turrets. Gray Codes have many real-life applications, but one of the most useful is in position encoders - especially for rotary shafts.
Think of a shaft sticking up from the floor to the ceiling. Now imagine parts of the shaft painted in an insulating material, and other parts left in bare conductive metal. Now also imagine metal brushes contacting the shaft at different positions. As the shaft rotates, the brushes will either touch a conducting patch or an insulated patch. The question is, can you design a shaft such that these brushes always give an absolutely unambiguous reading of the shaft's momentary rotation angle? Well, you can - but you have to use Gray Codes to do it. And one of the first uses for reliable position encoders was in ... you've guessed it ... battleship turrets.
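Here's the idea in miniature: a hypothetical three-brush encoder dividing one rotation into eight sectors, numbered in Gray code.

def to_gray(n):
    return n ^ (n >> 1)

# Adjacent sectors differ at exactly one brush, so a brush straddling a
# boundary can only ever read one sector out. With plain binary numbering,
# the 011 -> 100 boundary could momentarily read almost anything.
for sector in range(8):
    print(f"sector {sector}: brushes read {to_gray(sector):03b}")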
Very cool story, thanks!
By coincidence, I'm reading Bernard Lovell's "The Story of Jodrell Bank", and it's one of the most entertaining reads I've had. Lovell is an excellent astronomer and he also turns out to be a very good writer - his descriptions of the agonies in building the big telescope (notably watching the cost balloon out of control at several points) have made it a surprisingly compelling read.
The book also has pictures of the building process, and you'll be interested to know that he had a lot of agonies getting the battleship turrets installed in the towers....
As both an SF fanatic AND a current Physics student, these last two posts have probably been the most enjoyable posts I have read!
I for one would love to hear more Science stories. Are you ever directly inspired by scientific articles or theories that lead right into plots or conflicts, etc?
-Chris
You write about this stuff as straightforwardly and compellingly when it's real as when it's fictional -- thanks for posting!
Thanks, all. If I can remember any other choice anecdotes, I'll post them here.
Chris: I'm always looking for ideas, so I read New Scientist, SciAm etc, with half an eye on anything that might have fictional possibilities.
Case in point: the last story I finished was based on a SciAm article by Lawrence Krauss about the limits of cosmological observation. And the story I'm working on now was inspired by a throwaway line relating to the moons of Neptune in a book on solar system astronomy.
Nice story!
Ever thought about writing "popular science stuff"? Because I think you would do an excellent job!!
Not that I would want you to stop writing those excellent stories of yours!
Thanks for all the thrills!
-Tomas
Hi Tomas - I enjoy pop science but there's a difference between posting the odd anecdote here and finding the thematic spine for a long article or even book.
ReplyDeleteI do think that if you find a science as a whole fascinating, rather than any one small part of it, there are basically two career paths open to you: pop science or SF writer.
Cool stuff Al, but could you explain just that little bit after "The Crab Nebula is the remnant of a supernova which exploded..." lol
Did you ever work on James Burke's research staff?
;)
Too much of that made sense to me. Nice round-about.
I'm reading through the collection 'Zima Blue' for the first time. Library in the new town has it. Thoroughly enjoying Merlin's Gun. That, and House of Suns (strikingly similar baroque-ness!) need more. MOar, I say. :)
But chiefly. I would really like a book of your anecdotes. Has anyone ever approached you about doing non-fiction?
Finally, how's the summer treating everyone?
Take care,
Jeff S.