Monday, October 21, 2013


I turned the Rocketometer over to the rocket guys back in July, and had no contact with it until after the flight today. It had been exposed to every environment it was going to see except for vacuum, but I still had no confidence that it would work.

Well, it did.

The first time I stuck the SD card into my computer after the flight, it said "what card?"

That was disappointing.

It took a few minutes for the computer to recognize the card, but when it did, I saw that it had recorded 419 MiB in 66 files. One of the last was also the longest, so I ran it through my quick extraction program, then through IDL, and saw the characteristic acceleration pulse during first and second stage.

The first thing I did after that was press the Victory button on my phone:

No one else in the lab either heard that or got it, so I had to shout, "It Worked!!!".

Now I have to analyze the data...

Minimum mission success achieved!

At about 12:01:12 MDT today, the Rocketometer achieved minimum mission success by riding above 100 km and therefore reaching space.

It will be some time yet before I can recover the device to see that it worked, which will represent full mission success.

Saturday, October 12, 2013

The Curiously Recurring Template Pattern

Go look up on Wikipedia what it is. I am going to talk about how I am having to use it.

I was doing fine with the Rocketometer analysis code in C++, using the NewMat library to handle matrices, with a Quaternion layer on top that I wrote myself. After five days of doing battle with various things, I finally got something that worked, but I was curious if this was the "best" way to do it. The C++ Standard Template Library didn't have anything directly related to matrices. The Boost library had a section called uBLAS, but the documentation for it kind of de-recommended itself. It suggested several alternatives, and the one that looked best is called Eigen.

Eigen is interesting in that it is entirely header files, containing almost all of its code in C++ templates. Templates are cool, mostly because when they are instantiated, the compiler gets to see the code in the context that it is used, and gets to inline and optimize it there. Specifically, Eigen contains a dynamic-sized matrix, but also is a template for fixed-sized vectors and matrices. I want to use these as much as possible because all vector sizes used in Rocketometer data analysis are known at compile-time, so the compiler can unroll loops and so on to best optimize the code.
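As a toy illustration of what fixed sizes buy (my own sketch, not Eigen code), here is a minimal fixed-size vector template. Because the dimension is a template parameter, a size mismatch is a compile-time error rather than a runtime one, and the compiler sees the loop bound as a constant it can unroll:

```cpp
#include <cassert>

// Toy fixed-size vector in the spirit of Eigen's Matrix<double,n,1>.
// The dimension n is a compile-time constant, so these loops are
// fully unrollable and sizes are checked by the type system.
template<int n> struct Vec {
  double v[n];
  double dot(const Vec<n>& other) const {
    double sum = 0;
    for (int i = 0; i < n; i++) sum += v[i] * other.v[i];  // bound known at compile time
    return sum;
  }
  Vec<n> operator+(const Vec<n>& other) const {
    Vec<n> r;
    for (int i = 0; i < n; i++) r.v[i] = v[i] + other.v[i];
    return r;
  }
};
```

With Eigen the same idea applies: a 9-element inertial/magnetic measurement and a 3-element radar measurement are distinct types, and accidentally mixing them fails at compile time instead of at runtime.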

However, templates do not mix with virtual methods, so I had to figure out how to make that work, since I used virtual methods to implement the physics model. I had code that looked like this with NewMat:

class SimModel {
  /** Physics function. Calculates the derivative of the state with respect to time */
  virtual ColumnVector fd_only(double t, const ColumnVector& x)=0;
  /** Measurement function. Calculates the measurement from the given state and time */
  virtual ColumnVector g_only (double t, const ColumnVector& x, int k)=0;
  /** Physics function with process noise. Uses fd_only virtual function to calculate physics, then adds process noise. */
  ColumnVector fd(double t, const ColumnVector& x,        const ColumnVector* v) {ColumnVector result=fd_only(t,x);if(v)result+=*v;return result;}
  /** Measurement function with measurement noise. Uses g_only virtual function to calculate measurement, then adds measurement noise. */
  ColumnVector g (double t, const ColumnVector& x, int k, const ColumnVector* w) {ColumnVector result=g_only (t,x,k);if(w)result+=*w;return result;}
};
But I wanted to adapt that to use Eigen, specifically with the fixed-length vectors, since the size of the state vector is determined by the problem and known at compile time. That means that ColumnVector has to go, to be replaced by Matrix<double,n,1> where n is a template parameter determining the size of the state vector. But what about the measurement? The purpose of the k parameter to g_only is to tell which of several kinds of measurements to use. For instance, in the Rocketometer problem, we have a measurement vector coming from the inertial and magnetic sensors, treated as a single 9-element vector. We also have measurements coming from radar or equivalent, treated as a 3-element vector. So, we need a template function g_only, which generates either a 9-element vector or a 3-element vector. You can't do that and have it be virtual, too. Basically, virtual functions are a runtime binding issue, while templates are a compile-time binding. So, I can't have a virtual g_only function, callable by the base class g function.

Enter the Curiously Recurring Template Pattern (CRTP). As it happens, this is something that I read about just a few days ago, just reading up on the C++ language in general. For us, the pattern goes something like this:

template<int n, class Derived> class SimModel {
  template<int m> Matrix<double,m,1> g (double t, const Matrix<double,n,1>& x, int k, const Matrix<double,m,1>* w) {
    Matrix<double,m,1> result=static_cast<Derived*>(this)->template g_only<m>(t,x,k);
    if(w)result+=*w;
    return result;
  }
};
Note that g_only isn't even defined in this class template, only used. In fact, one of the weaknesses of CRTP is that it implies definitions without expressing them, so it is hard to document. Also note that extra template keyword there after the arrow operator. See here for details.

The derived class then looks like this:

template<int n> class RocketometerModel: public SimModel<n,RocketometerModel<n>> {
  template<int m> Matrix<double,m,1> g_only(double t, const Matrix<double,n,1>& x, int k);
};
So what happens is that the compiler:
  1. Parses the template for SimModel, but doesn't compile it, because it's a template, not actual code yet. Therefore it doesn't matter that g_only is undefined yet.
  2. Parses the template for RocketometerModel, and again doesn't compile it.
  3. Parses the main code, compiling as it goes along until it hits RocketometerModel. It instantiates and compiles RocketometerModel, in the process instantiating and compiling SimModel.
  4. When SimModel is instantiated and compiled, it has a call to RocketometerModel's g_only, but this is OK, since step 2 has already happened and that definition is available.
Now the derived class might not be a template; it might be an actual class. In this case, the base class is instantiated and compiled along with the derived class. In either case, everything is available before it is used, even though the code might look otherwise.

Now this part I will write in bold, so that Google can see it. The other curiously repeating template pattern is having to use the word .template when using (not defining) a template member function. This solves the error invalid operands of types ‘<unresolved overloaded function type>’ and ‘int’ to binary ‘operator<’ .

It appears that there is a weakness in C++ parsing: inside a template, the compiler does not yet know whether a dependent member name is itself a template, so in certain cases it can't distinguish the opening angle bracket of a template parameter list from a normal compare-for-less-than. In order to disambiguate, you throw in the word template after the dot (it will also work after a pointer arrow -> if you are using one of those). I don't understand it completely myself, but Eigen uses this itself, which is how I found out about how it works in the first place.
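Here is a complete, compilable sketch of the whole pattern, with plain std::array standing in for Eigen's fixed-size vectors so it is self-contained. The names mirror the SimModel/g/g_only structure above, but this is an illustration, not the actual Rocketometer code:

```cpp
#include <array>
#include <cassert>

// CRTP sketch: the base class calls down into the derived class's
// template member g_only without any virtual dispatch. The
// "->template" is the disambiguation discussed above: it tells the
// parser that g_only<m> is a template, not a less-than comparison.
template<int n, class Derived> class SimModel {
public:
  template<int m>
  std::array<double,m> g(double t, const std::array<double,n>& x, int k,
                         const std::array<double,m>* w = nullptr) {
    std::array<double,m> result =
        static_cast<Derived*>(this)->template g_only<m>(t, x, k);
    if (w) for (int i = 0; i < m; i++) result[i] += (*w)[i];  // add measurement noise
    return result;
  }
};

// The derived class passes itself as a template argument -- the
// "curiously recurring" part of the pattern.
template<int n>
class ToyModel : public SimModel<n, ToyModel<n>> {
public:
  template<int m>
  std::array<double,m> g_only(double t, const std::array<double,n>& x, int k) {
    std::array<double,m> result{};
    for (int i = 0; i < m && i < n; i++) result[i] = t * x[i];  // dummy "measurement"
    return result;
  }
};
```

The same ToyModel can produce a 2-element or a 9-element "measurement" just by changing the template argument at the call site, which is exactly what the virtual-function version could not do.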

I am gradually coming to the conclusion that Java did it right, with generics which are real compiled code. Java generics are enabled by the "everything is an object" model in which all objects descend from a common class. I am also beginning to think that Java did it right with operator overloading. Operator overloading, even when fully appropriate, like defining * to mean matrix or quaternion multiplication, is fun to use but a nightmare to implement. And, if it is implemented wrong, it might be left to the user to find out when he tries to do something the implementer did not foresee.

All in all, I give Eigen 4/5, especially recommended for new projects, but not for converting old projects. The biggest advantage is speed. What took IDL over an hour, took Java and C++ with NewMat about 4 minutes, but takes Eigen only 20 seconds. Also, templated matrix and vector sizes are nice, because they resolve matrix size mismatches at compile-time. Finally, zero-based component indexing is what I expect, and the reason I don't suggest converting old projects from NewMat. Also be aware that the Eigen quaternion library uses the convention \(\vec{v}_{to}=\mathbf{p}\vec{v}_{from}\mathbf{p}'\), which is fine and internally consistent, but not consistent with the math I had seen for quaternion derivatives. As a consequence, my code is liberally festooned with q.conjugate() and in some places q.conjugate().conjugate(). It's almost a case of two wrongs making a right, but not quite.

Thursday, October 10, 2013


It's kinda weird, but it turns out that C++ is the best language for processing Rocketometer data. There is a cool library NewMat which creates appropriate operator overloads to do matrices in C++, and I have extended it to include quaternions. C++ was doing in minutes what took IDL hours. However, it took me 5 days to translate from IDL to C++, so I had better process a lot of data to ever get that time back.

The time that the Rocketometer spends in zero gravity may be the most valuable calibration time I ever get. Zero acceleration, zero rotation, and I expect a wide range of temperatures.

Remember that the goal of this is to get something into space, but that goal requires no further effort on my part to achieve. The secondary goal is to be able to calibrate the data and report something useful to Tom. The long-term goal though is to measure the track of Space Mountain.

So I am putting something in Space to get it ready for a roller coaster.

That made me laugh.

Saturday, August 3, 2013

Code A113

I was thinking about this at work, explaining it to one of the students I work with. He had never seen Wall-E, so I explained it in a generic and non-spoilery way. Since no one but me reads this anyway, I am going to write it the spoilery way here. You have been warned.

In that story, the systems on Axiom are programmed to look for habitability on Earth, and when it is proven, the ship is to return to Earth. It's an interesting but very expensive backup for just a normal recall message.

So, the pseudocode looks something like this:

while in_space do begin
  if check_earth_habitability() then return_to_earth();
  sleep(1 year);
end

The function check_earth_habitability() is incredibly expensive to evaluate, as it involves sending a spacecraft with several EVE probes all the way back to Earth. It's like us making an implementation of go_to_saturn().

Directive A113 instructed the autopilot to not return to Earth under any conditions. It could have been implemented by cancelling the entire loop (including never sending EVE back to Earth), but instead the implementation looks like this:

while in_space do begin
  check_earth_habitability();  { expensive result, ignored }
  sleep(1 year);
end

A normal person would ask "Why does Auto send Eve back to Earth year after year if it has been ordered to not return?" I ask a similar question "Why did they do the code that way?" The thing is, I have done things similar to this: Evaluate an incredibly expensive function, just to ignore almost all of its output.

An example from work is sim_sample. A lot of times I only care about a small part of the result of sim_sample, or I have already calculated the result and could just run visualize_1c manually. I don't, because it is easier to run sim_sample and wait for it to finish than it is to do it efficiently.

Wednesday, July 31, 2013

Rocketometer has been integrated

The rocketometer is now an integral part of the rocket payload
Attached to the CCD heater card

In the control section card cage

And here we have a short video demonstrating the size of the Rocketometer:

Wednesday, June 26, 2013

Unattended running

The Rocketometer has been running unattended now for several hours. For instance, last I checked it was just over 7 hours, and had generated 77 files of 4 MiB each. This represents a data rate of about 12 KiB/s. At this rate, the device will fill up in about 350 hours, or about 14 straight days.
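The arithmetic above can be checked quickly. Note that the card capacity is my inference from the stated rate and fill time (the post doesn't give it), so treat 16 GiB as an assumption:

```cpp
#include <cassert>

// Back-of-the-envelope data budget from the figures above.
// rate = files * size / elapsed time
double rate_KiB_per_s(double files, double MiB_per_file, double hours) {
  return files * MiB_per_file * 1024.0 / (hours * 3600.0);
}
// time to fill a card of the given size at that rate
double hours_to_fill(double card_GiB, double rate_KiB_s) {
  return card_GiB * 1024.0 * 1024.0 / rate_KiB_s / 3600.0;
}
```

77 files of 4 MiB over 7 hours works out to roughly 12.5 KiB/s, and an assumed 16 GiB card fills in a few hundred hours, consistent with the "about 350 hours" figure.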

So, I asked about which direction would be vertical when the Rocketometer is installed in the rocket. It is vertical when the accelerometer reads most strongly in the +Z direction. Coincidentally, this means that the interface is on the bottom, and the lights will be hidden.

I put in a piece of code which detects when the Z axis is stronger than the others. Mindful of the Genesis reentry failure, I put in code such that if the board is either +Z or -Z, it will decide that it is vertical. I can't test that Z is correct, but I can observe the board in place in the control section and visually confirm that Z is correct.

Now this piece of code will just reduce the rate of readout from once every 3 ms to once every 30ms, still quite quick, but only 1/10 the data rate. Even if it is wrong, we will still get some data.
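The decision logic can be sketched like this. The function name and the reading of "vertical means full rate" are mine (the flight code's actual thresholds and branch may differ), but it captures the Genesis-inspired sign-insensitivity:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the vertical-detection logic: if the Z axis dominates the
// accelerometer reading in EITHER direction (mindful of Genesis, the
// sign of Z is not trusted), the board is taken to be vertical and
// sampled at the full 3 ms period; otherwise readout slows to 30 ms,
// 1/10 the data rate.
int sample_period_ms(double ax, double ay, double az) {
  bool vertical = std::fabs(az) > std::fabs(ax) && std::fabs(az) > std::fabs(ay);
  return vertical ? 3 : 30;
}
```

Even if the vertical test is wrong in flight, the slow path still records data every 30 ms, which is the graceful-degradation property described above.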

I will need a log of when the CCD board is powered in order to match up non-flight measurements with anything, but the flight should be either the last or second-to-last record on the board.

For the unattended run, the system will be set to chunk its data into 1GiB pieces, and use only the last digit of the 4-digit number for chunks in a run, reserving 3 digits for runs. I do not expect the Rocketometer to be power-cycled more than 1000 times.

Monday, June 24, 2013

Bugs crossing Boundaries

I am getting the Rocketometer ready for delivery. The hardware appears to be fine, but the software had one particularly weird bug -- the name of the FAT volume on the SD card was changing. This meant that something was poking into the root directory block where it did not belong, which meant that it could be anything.

As it turned out, it was the packet buffer becoming full. Now how in the world did that corrupt the file system buffers? Because the file system, by design, wasn't using buffers of its own, in order to save memory.

An SD card is addressed a sector at a time. Data can only be read and written in full sectors. So, if you want to change a single entry in the root directory or file allocation table, you have to read the whole sector into memory, change the bits you want, then write the whole sector back. Now the root directory and file allocation table only need to be changed when a file is written, and in this case there is already a buffer which has just been used, so I foolishly decided to have the code use this.
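The read-modify-write cycle looks like this against a toy RAM-backed "card" (the real thing goes over SPI; names here are mine, for illustration of the sector-at-a-time constraint only):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

const int SECTOR_SIZE = 512;

// Toy RAM-backed card: all access is whole sectors.
struct ToyCard {
  uint8_t data[4 * SECTOR_SIZE] = {};
  void readSector(int n, uint8_t* buf) {
    std::memcpy(buf, data + n * SECTOR_SIZE, SECTOR_SIZE);
  }
  void writeSector(int n, const uint8_t* buf) {
    std::memcpy(data + n * SECTOR_SIZE, buf, SECTOR_SIZE);
  }
};

// Change one byte of (say) a directory entry: whole sector in,
// change the bits we want, whole sector back out.
void patchByte(ToyCard& card, int sector, int offset, uint8_t value) {
  uint8_t buf[SECTOR_SIZE];
  card.readSector(sector, buf);
  buf[offset] = value;
  card.writeSector(sector, buf);
}
```

The `buf` scratch area is exactly the buffer the bug was about: whatever memory holds the sector during the modify step must not be shared with anything else that can write to it.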

In the meantime, this same buffer was part of the circular buffer accumulating packets. If the buffer ever filled up, new bytes would be rejected, and the head pointer would not move. If you start to write a packet to an already full buffer, nothing gets written, and when you finish, the packet length seems to be zero. Now the fun begins.

Upon finishing a CCSDS packet, the code counts how many bytes have been written, subtracts 7 as the protocol requires, then goes back and patches the length into the buffer. In the current case, since no data had been written, it actually changed data that had already been marked for writing.
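The length fix-up works like this (function name is mine; the offsets are the CCSDS primary header's, which stores total packet length minus 7 in bytes 4 and 5). When the packet body was silently dropped because the buffer was full, bytesWritten is 0 and the patched value is -7, the 0xFFF9 fingerprint that showed up in the corrupted volume name:

```cpp
#include <cassert>
#include <cstdint>

// Patch the CCSDS packet-length field: the primary header stores
// (total packet length - 7) in bytes 4 and 5, big-endian.
void patchLength(uint8_t* packetStart, int bytesWritten) {
  uint16_t len = (uint16_t)(bytesWritten - 7);  // 0 - 7 wraps to 0xFFF9
  packetStart[4] = len >> 8;
  packetStart[5] = len & 0xFF;
}
```

If packetStart points somewhere it shouldn't, say into a directory sector being modified at the same moment, those two bytes of 0xFF 0xF9 land in the filesystem metadata instead.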

Now we combine this with the fact that the interrupt task was filling the buffer. Here then is the final sequence:

  1. The buffer is full. No more bytes can be written.
  2. The main loop detects this, and starts to drain the buffer and write it to the SD card. It writes the packet, then reads the root directory sector, changes it, then writes it back.
  3. During the write, an interrupt fires.
  4. The interrupt task writes another packet. The bytes are dropped on the floor.
  5. The interrupt task finishes the packet. The calculated length is zero, since no new bytes were written.
  6. The length to be written is -7, which is 0xFFF9 as a 16-bit value.
  7. The length is written to the buffer, but because of timing, it writes it over the top of the root directory sector in memory. The root directory is now corrupted.
  8. The interrupt task finishes
  9. The SD writing code writes the corrupted root directory sector back to the card.
  10. The card is now named RKT\FF\F9 instead of RKTO3.
It would be pretty easy for this same process to shoot holes anywhere in the filesystem metadata. This bug is hazardous!

So, to fix the bug, we do the following:

  1. Give the file system its own scratch buffer, and never let it do scratch work with the user buffer. This doesn't actually fix anything, but it does make the bug, and any similar undiscovered bug, less dangerous.
  2. Handle the full buffer case better. When the buffer is full, it throws away the new incoming bytes. Further, it backs the head pointer to the mid pointer, in effect throwing away the fractional packet accumulated before the buffer became full. Even further, since the buffer is now no longer full, to prevent getting the last half of a packet, we keep a flag that says that the buffer was full, and don't accept any more bytes until the flag is cleared by a drain operation.
  3. The interrupt task checks if the buffer is already full. If it is, we don't bother to read the sensors, a time-consuming chore which will just discard the result and take time away from the SD writing routines.
This clears a major bug, perhaps the one causing all the observed halts. A bug which punches random holes in the metadata can do anything. It only triggered recently when I cranked the data recording rate to 200 Hz, which appears to be just beyond the equipment's capability. Previous rocket flights must have run at a lower rate and not triggered the bug, or not triggered it as hard.
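The buffer-full rules from the fix list can be sketched as a small state machine. This is a toy-sized model with my own names, not the flight code, but it shows the three behaviors: reject bytes when full, roll the head back to the last packet boundary (mid), and latch a "was full" flag that only a drain clears:

```cpp
#include <cassert>

struct CircBuf {
  static const int N = 8;             // toy size; flight code uses sectors
  unsigned char buf[N];
  int head = 0, tail = 0, mid = 0;    // mid = start of the packet in progress
  bool wasFull = false;

  bool full() const { return (head + 1) % N == tail; }

  bool put(unsigned char c) {
    if (wasFull || full()) {
      head = mid;        // throw away the fractional packet
      wasFull = true;    // accept nothing more until a drain
      return false;
    }
    buf[head] = c;
    head = (head + 1) % N;
    return true;
  }
  void finishPacket() { mid = head; } // commit the packet boundary
  void drain() { tail = head = mid = 0; wasFull = false; }  // toy: drain everything
};
```

With these rules the buffer can never hand back a half-accumulated packet, and a full buffer can never leave the head pointer in the "minus one" position that produced the phantom -7 length.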

So how did I find it? Lots of print statements and blinklocks. First, I had the SD driver print out the buffer it was going to write. Then I had the direntry driver (the thing which actually handles the root directory) print out its scratch buffer after it had been read, as it changed each byte in turn, and before it wrote it out. From that I found out that the buffer was changing in between lines. That meant that an interrupt task was doing it. Further puzzling out and seeing 0xFFF9, then counting on my fingers to see what number that is in signed decimal, revealed that it was -7, a key number in calculating CCSDS packet sizes. However, it wasn't writing bytes 4 and 5, where it would be if the packet started at byte 0 in the buffer. Instead it wrote one byte before that, implying that the phantom packet started at -1, a key number in the circular buffer pointers. Once the head pointer is one less than the tail pointer, the buffer is full.

For a while I thought that the code was starting a new packet before it finished an old one, which would result in weird numbers. So, I put in some code which blinklocked the device if a packet had not been finished before the next one was started. That never hit. So, I put in some code which blinklocked when the circular buffer filled up, and that finally hit and proved it all. This interacted weirdly with the overlapping packet checker, so I had to fix a bug there. 

Now the only remaining known weakness is what we do if the first cluster of the root directory is full. We can make a cluster as big as 64kiB, we can handle multiple sectors in a cluster in the root directory, and we can fit 2ki entries into a cluster that big. But, what if we fill it? The file search code will need to know how to go to the next cluster, and the file creation code will need to know how to allocate a new cluster. Or, the format process can write 10000 files then delete them all.

The flight version of the code will not blinklock, but reset, when blinklock() is called. Between that and gathering enough runtime, I can gain some confidence that the device won't lock up.

I set the circular buffer size back to two sectors, down from the 6 I had given it. The next version will put the circular buffer into USB RAM to save space for the stack, and allow me to have up to 16 sectors. USB memory doesn't work for some reason... maybe the USB system needs to be on for it? Then we are exchanging memory for power consumption. I know that the system is dropping packets now, more so than it was at 6 sectors.

Tuesday, June 18, 2013

Not another Rocketometer

I got an Arduino Leonardo and a Sparkfun Pro Micro as breadboard models of the Arduino-based Rocketometer. Based on my experience with those, I conclude that the ATMega32U4 and the current bootloader are Not Ready for Prime Time, and Not Ready for Spaceflight. Further, I would have had to write my own USB Mass Storage driver anyway.

Wednesday, May 29, 2013

Yet another Rocketometer

The Arduino Leonardo can effectively be thought of as an Arduino on a chip, except that an Arduino pretty much already is one chip. What makes a Leonardo special is that its core is an ATMega32U4, which has a USB port. The chip can be programmed as normal over ISP, and once it is programmed with the appropriate Arduino bootloader, it can be programmed over the USB port. The Arduino library for Leonardo also supports using the USB as a CDC serial port, and has much other USB support.

The chip only runs at 8MHz. It can run at 16MHz at 5V, but I am planning on running it in the Rocketometer, which only has 3.3V available, which limits the chip to the aforementioned 8MHz.

Now how do I think I can get away with this? Simple: Even with the LPC2148 running at 60MHz, it was spending most of its time waiting for the bits to crawl across the I2C and SPI buses. And how much can we optimize a busy wait loop?

I bought a Leonardo to practice with, and a Sparkfun Pro Micro to practice some more, especially programming the bootloader. I will not be using the 6-pin ISP header on the Rocketometer, but I got the great idea to use the MicroSD sniffer to get at the SPI signals through the Rocketometer MicroSD slot, and I can just hold down the reset button, or solder one wire to the contact of the Reset switch - which reminds me, I should put the solder bridge on the top side of the reset switch. In that case, I can use the solder bridge itself as a temporary terminal, solder a wire to it long enough to program, connect the rest of the SPI signals to the microSD sniffer, then program it with an ArduinoISP.

While I was at it, I also designed a Precision clock around this new chip. It doesn't need a hardware UART to program or talk to the host computer, so no FT232 and no multiplexer is needed. The chip has more pins, so this design doesn't use literally every single possible pin (those two ADC-only pins don't count, since they are not possible to use in this project).

Monday, May 13, 2013

NTRS is back!

I am happy to say that I was wrong about NTRS. Plus, the EDL kernel has been released (at least it is present on NAIF).

I know what we're doing tonight.

Sunday, April 28, 2013

DirectTask and nested interrupts

One problem with the old rocketometer code is that the sensor readout and SD card writing code were in the same thread, meaning that when the SD card took its endless milliseconds to write the data, the sensors were not being read, leaving an irregular gap in the record. My brilliant idea was to read the sensors in a task at interrupt priority, effectively creating another thread. First effort was with the task manager I described below, which was a dismal failure.

For whatever reason, and perhaps the same reason (see below), the task was not able to read the sensors. I came up with a much simpler task manager with which I am getting incredible accuracy.

I call it the DirectTask manager. Its concept: rather than using one match channel of the timer and a heap priority queue, we just use all three available match channels (there are four, but the zeroth channel resets the timer every minute). This limits the flexibility enormously, but I only need two tasks. I set up one task to reschedule itself on a regular basis (5 ms in my first test) and I use the other task to read the BMP sensor.
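The idea can be modeled in a few lines. This is a simulation with my own names, not LPC2148 register code: each task owns one match channel outright, channel 0 resets the timer once a "minute" (60 ticks here), and a task fires when the timer reaches its match value:

```cpp
#include <cassert>
#include <functional>

// Toy model of the DirectTask manager: one match channel per task,
// no priority queue. Channel 0 is reserved to wrap the timer.
struct DirectTask {
  static const int WRAP = 60;          // channel 0: reset every "minute"
  std::function<void()> task[3];
  long when[3] = {-1, -1, -1};         // match value per channel, -1 = idle
  long timer = 0;

  void schedule(int ch, long delay, std::function<void()> fn) {
    task[ch] = fn;
    when[ch] = (timer + delay) % WRAP; // match values live inside one wrap
  }
  void tick() {                        // one timer tick
    timer = (timer + 1) % WRAP;        // channel-0 match resets the count
    for (int ch = 0; ch < 3; ch++)
      if (when[ch] == timer) { when[ch] = -1; task[ch](); }  // one-shot match
  }
};
```

A periodic task is just one that calls schedule on its own channel from inside its handler, which is how the 5 ms self-rescheduling task above works.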

However, the sensor readout runs an I2C read, which itself is interrupt driven. The code does not currently support nested interrupts, which means all interrupts are delayed until the current one returns. The I2C state machine's interrupts were getting delayed, including the one which makes the state machine stop waiting forever, so the state machine waited forever.

So, we put in an option for I2C to run without interrupts, instead using a busy loop to check the I2C interrupt status bit, then calling the same state machine driver. We weren't gaining anything by being interrupt driven anyway, since we had to wait for the read to finish.

With that, the DirectTask manager worked fine. Maybe the heap task manager would have worked fine too, but this one is simpler.

It takes 641 microseconds to read the sensors. We could probably easily bump the read rate to 500Hz, and maybe to 1000Hz, but this doesn't count anything but reading the MPU and HighAcc sensors.

Wednesday, April 24, 2013

Flight Level 390 and autopilots

Started reading an interesting blog - Flight Level 390 by an airliner pilot who obviously wishes to remain anonymous.

One of the interesting things he pointed to was an incident where the ground crew turned off the pressurization system of a 737, failed to turn it back on, and the flight crew failed three times during the preflight check to turn it back on. As a result, the plane never pressurized on the way up on its flight from Cyprus to Athens, Greece. The oxygen masks in back dropped at 18000 feet, and no communications from the crew happened after the plane flew through 28000 feet. Presumably the crew lost consciousness at this point.

The interesting thing to me is what happened next. The autopilot leveled off at its flight plan altitude of 34000 feet, flew to Athens automatically, and then entered the holding pattern. The program did exactly what it was supposed to do at that point, wait for further instructions, which never came. After flying for almost 3 hours, the plane ran out of fuel and plummeted to the ground.

I find it amazing that the autopilot could be programmed to do that -- not that it is technically difficult, but that they bothered to do it. I always think of an autopilot as more of just a wing leveller and maybe a beam follower. I would have supposed that the plane would get to Athens or wherever the end of its beam was, then keep flying straight and level in the same direction.

If you can program an autopilot to do that, it's not that big of a jump to complete auto control of the whole flight, with the human crew there just to be able to think on their feet if there is trouble. And here then is the interesting part. In my previous essay on the interaction between pilots and automation, I advocated a system where each part of the system was assigned to the component that could do it best. In that case it was lunar landing, where the computer calculated the best course to the landing spot while the human used his vision and judgement to select that spot. In this case, we assign most of the actual flying to the autopilot and assign oversight and handling of emergencies to the human crew. But, here is where the human and autopilot system are anti-complementary. The human crew will have very little to do during the 999,999 flights in 1,000,000 where nothing happens. A crew member might go his entire career without ever seeing a true emergency, although I have been on enough flights to know that a good chunk of them try to fly over, under, or around turbulence. Is the latter just a matter of dialing a different number into the altitude hold?

Maybe the autopilot could have been programmed to descend to 18000 feet if it detects a pressure loss (the masks dropping) and gets no command inputs for several minutes. Will this incident type ever happen again? Maybe, ever is a long time. It turns out that the system did have a pressurization alarm, which worked, but the alarm was the same sound as one which can only occur on the ground. The crew thought it was the alarm for the thing that can only happen on the ground, and therefore ignored it as a bad alarm.

The point is, you become good at what you practice, and lose ability at what you don't. If a crew is mind-numbed by 10,000 hours of babysitting an autopilot, what makes us think that the humans will actually be able to handle an emergency? Air France 447 seems to have run into just that sort of difficulty. The crew was so used to the autopilot and fly-by-wire handling things that when those programs shut down, the human pilot didn't remember how to fly that high up, and stalled the plane for 35,000 feet while believing that the fly-by-wire would keep the plane from stalling.

But here's the thing. When autopilots work, they work better than human pilots. They don't make the same kinds of mistakes as human pilots. And, even if the system was fully manual, being mind-numbed by holding the yoke in place for 10,000 hours is no better training for an emergency.

I look at how astronauts trained for the Apollo missions and I see how football teams practice. They spend at least ten times as much time practicing as they do in the event. I look at how airliners are flown and I see how baseball players and basketball players and basketball band musicians practice -- by playing the game. By performing. By actually doing it every single day.

The airlines are doing something right. They fly 10 million operations per year, and there have been no fatal airline crashes in the United States since 2009. Maybe this is just as good as it gets.

Wednesday, April 3, 2013

They took down NTRS

Back when NASA was testing the X-43, they said something that has stuck in my mind ever since. The X-43 is a scramjet test vehicle. The thing is boosted on a Pegasus to near mach 10, then the scramjet is fired for 10 seconds. Afterwards, the vehicle is not recovered. "The only thing we get back from this mission is data."

When you think about it that way, all the robotic space missions are like that. We just get back data. No artifacts, no samples, just bits. We then assemble those bits back into knowledge back here on the ground. When we talk about the $800M SDO mission, we are spending the $800M to get those bits, so they better be valuable. Furthermore, I paid for those bits so I should have access to them.

The NASA Technical Report Server is where the knowledge collected from those bits go. It is now closed.

They claim that it is due to an ITAR review, and that it will be reopened when that is complete. There are over a million papers on NTRS. That is never going to get the kind of review they are implying.

So: for all those missions that only send back data, the only evidence that we have that they even happened is now gone. We spend $XB a year on NASA, and what we paid for is now locked up. This is a bad precedent. Furthermore, the NASA library is now untrustworthy. We build libraries to hold knowledge, and if they close at random without notice, they fail to serve their purpose.

Similarly, I now don't think that the kernels for reconstructed entry of MSL will ever be released.

This is not a good way to start the day.

Thursday, March 21, 2013

Pinewood Derby and single-track Gray codes

It's time for the annual Pinewood Derby at the Cub Scout pack I serve. Last year I was out of town during the Derby, so I didn't participate. This year I am here, so I am making a car. I am not competing, of course, so I am free of certain constraints. Also, I never do any project like this in some ordinary way. As Emeril would say, I always try to kick it up a notch.

The design of the car is a pretty simple one, inspired by my first car back when I was in Cub Scouts. That year, we (me and my parents) thought aerodynamics was the most important thing, and so we designed the car accordingly. And finished dead last. Anyway, I still have fond memories of that car, like trying to microwave the car to dry the paint and charring the center of the block, nailing AA batteries to the back, putting my Battlestar Galactica 2" pilot action figure as a pilot. The general car design had a duckbill front and a turtle back. It was also solid red (what color do we hate?).

This time I have nicely polished the block to the point that it shines. I am going to leave the block completely unmarked.

But, that's not kicking it up a notch. Kicking it up a notch is installing a rocket engine in it. It's installing a camera. It's installing an electric motor. It's putting in a sensor for the car to time itself. I'm going to do the latter.

I have painted the back right wheel half white, including the tread, but with the hub very carefully masked off so as to not interfere with spinning. Over that wheel, I have installed two QRD1114 line-follower sensors. Each includes a near-infrared LED and a phototransistor. The sensor is close enough to visible light that things like white paint are still white and black is still black. The two sensors are placed roughly 90deg apart around the wheel, and about 1-2mm away from the tread. The idea is that the sensors use the half-white wheel as a single-track Gray code encoder wheel. Think about it this way: suppose the car is rolling, the back sensor is well over the white part, and the front sensor has just transitioned from black to white. Which way did the wheel turn? It must have been forward. If the back sensor sees white but the front sensor changed from white to black, the wheel turned backward. Using this, we can construct a directional odometer and measure the distance the car has moved. By timing the transitions, we can measure the speed of the car as well. By properly using both sensors, we can always know the direction of wheel spin and the orientation of the wheel to within a quarter turn.
I was disappointed to see that Bele and Lokai wear gray gloves. Bele doesn't wear gloves here!
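The transition logic above is ordinary quadrature decoding of a 2-bit Gray code. Here is a minimal sketch; the state encoding, the assumed forward cycle, and the function names are my own illustration, not the actual car firmware:

```python
# Quadrature decode for two sensors over a half-white wheel.
# State is (front << 1) | back; with the sensors 90 degrees apart,
# forward rotation is assumed to walk the Gray cycle 00 -> 01 -> 11 -> 10.
FORWARD_CYCLE = [0b00, 0b01, 0b11, 0b10]

def quad_step(old_state, new_state):
    """Return +1 for a forward quarter-turn, -1 for backward, 0 for no
    change or an invalid double transition (a missed sample)."""
    i = FORWARD_CYCLE.index(old_state)
    if new_state == FORWARD_CYCLE[(i + 1) % 4]:
        return +1
    if new_state == FORWARD_CYCLE[(i - 1) % 4]:
        return -1
    return 0

def odometer(states):
    """Accumulate signed quarter-turns from a stream of sensor states."""
    return sum(quad_step(a, b) for a, b in zip(states, states[1:]))
```

In this encoding, the "back sensor white, front sensor just went black to white" case from the text is the 0b01 to 0b11 transition, which decodes as a forward step.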

So, the car will measure its own speed. We start the car with the wheel just past ticking over, so it has to make one full turn before counting a rotation. The wheels are 15mm in radius, so we can convert rotations to distance traveled. The microcontroller reading these sensors has a clock with microsecond precision and 4-microsecond resolution, plenty to measure the exact time of each wheel rotation. Putting these together gives the car's speed, which it will print on an LCD display.
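The speed arithmetic can be sketched in a few lines; the function name and structure are mine, and only the 15mm radius and the microsecond timing come from the text:

```python
import math

WHEEL_RADIUS_M = 0.015  # 15 mm, from the text

def speed_mps(rotation_time_us):
    """Speed from the time for one full wheel rotation: one
    circumference of travel per rotation, assuming no slipping."""
    circumference = 2 * math.pi * WHEEL_RADIUS_M  # ~94.2 mm
    return circumference / (rotation_time_us * 1e-6)
```

A car finishing at around 3 m/s would tick over a rotation roughly every 31 ms, enormously coarser than the 4-microsecond timer resolution, so timing is not the limiting error here.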

The gem this week is Gray code encoders, in particular single-track encoders. These have one track of code, and several sensors over that track at different angular positions. You need one sensor for every bit of code (two in my case for a 2-bit, 4 position code), but with careful design of the code track, you can use the same track for different bits, by shifting the sensors around the wheel a bit. This can be continued until you use the same track for all the sensors.

Wednesday, March 20, 2013

Rocketometer Flight Test 1 and photos

3/16/2013 Flight 1

Site: Alpine Elementary field
Weather: Overcast, calm winds during flight; station KCOLONGM64: 51.5°F, 29.97 inHg, 6.7 mph wind, 49% RH
Hardware: Rocketometer6050 #2, Estes Star Stryker (modified)

Forgot to pull cap from launch rod before flight. Flew straight, seemed to turn level to ground at apogee, I could see a yellow tinge to the tracking smoke. Parachute ejected but never inflated. Rocket fell free from apogee. Rocketometer was still running after flight. Engine hook unzipped about 1/8 inch to the aft, probably due to ejection. One fin has a cracked glue joint. Minor ding to rocket nose cone due to striking launch rod cap. Rocket could probably be repaired, but not reflown in its current condition.

No video of the flight was captured. I had the camera but forgot to press record.

Rocketometer has one file as expected, RKTO0500.SDS 35,369,984 bytes. Indicates that Rocketometer was running through whole flight.

Switch and card not glued into place. Reset solder bridge in place. Rocketometer ran through flight anyway, indicating that nothing was jarred loose.

Rocketometer on internal power ~11:50am

Nautilus reports weird file sizes for files. fsck.vfat does not report any filesystem errors.

Total packets retrieved: 1,572,103
AD377           STRUCT    = -> <Anonymous> Array[124694]
BMP2            STRUCT    = -> <Anonymous> Array[6234]
DEF             STRUCT    = -> <Anonymous> Array[15]
DUMPDATA        BYTE      = Array[65640]
F               STRING    = 'RKTO0500.SDS'
HMC             STRUCT    = -> <Anonymous> Array[124694]
INF             LONG      =          100
MPU             STRUCT    = -> <Anonymous> Array[1246947]
N_EXTEND        LONG      =       -59604
N_PACKETS       ULONG     = Array[2048]
OUF             LONG      =          100
PKT             STRUCT    = -> <Anonymous> Array[1]
STATUS          INT       =        0
TCC             STRUCT    = -> <Anonymous> Array[68932]
T_PACKETS       ULONG     =      1572103

Greater than 26 minutes of data recorded. Processing took about 14 minutes.
Time stamp difference between first MPU record and last: 1601.1205s (26m41.1205s)
Range zero is 1058s (last top of second before first acceleration)
MPU did saturate during boost

Rocketometer loaded and ready for flight
Flight battery is similar to this, not visible in rocket pictures since it is packed in foam behind the board. Photo from Sparkfun Electronics published under CC BY-NC-SA 3.0

Rocketometer rendering, showing sensor side

Rocketometer rendering, showing interface side

Sunday, March 10, 2013

More on Siding Spring

Red circle - Mars impact locus. Green star - old nominal approach. Black ellipses - old 1,2,3 sigma. Blue dots - new Monte Carlo samples. Orange rings - new 1,2,3 sigma ellipses according to Monte Carlo
They finally published a new set of observations, and based on those, it is now the 3-sigma ring which touches Mars, not the 1-sigma ring. There were zero samples in the Monte Carlo which impacted Mars, so there is less than a 1 in 5000 chance of impact.

Tuesday, March 5, 2013

Comet C/2013 A1 (Siding Spring)

B plane plot - Red circle is Mars, green star is most likely trajectory (miss by 55000km), black ellipses are 1,2,3 sigma direct from JPL Horizons, blue dots are 3,892 Monte Carlo samples

What's in a name?

First off, the name is C/2013 A1: the C/ means it is a non-periodic comet (this is the first recorded observation of the object); 2013 A means it was discovered in the first half of January 2013; and 1 means it was the first comet discovered in that part of the month. Siding Spring is parenthetical and not part of the name, just the name of the observatory that discovered it, but since it's easier to remember than a number, we will probably end up hearing a lot about comet Siding Spring.
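The half-month letter convention (A = first half of January, B = second half, and so on, skipping I) can be captured in a small parser. The function and field names are mine; the convention itself is the IAU's:

```python
import re

# IAU half-month letters: I is skipped, Z is unused.
HALF_MONTHS = "ABCDEFGHJKLMNOPQRSTUVWXY"

def parse_comet_designation(desig):
    """Parse a designation like 'C/2013 A1' into its pieces.
    Only handles the C/ and P/ prefixes for simplicity."""
    m = re.fullmatch(r"([CP])/(\d{4}) ([A-HJ-Y])(\d+)", desig)
    kind, year, letter, seq = m.groups()
    idx = HALF_MONTHS.index(letter)
    return {
        "periodic": kind == "P",  # C/ = non-periodic
        "year": int(year),
        "month": idx // 2 + 1,    # A,B -> January; C,D -> February...
        "half": idx % 2 + 1,      # 1 = days 1-15, 2 = rest of month
        "number": int(seq),       # nth comet found in that half-month
    }
```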

Why do we care?

The comet is on a very close approach to Mars. We care about Mars because we care about the spacecraft present in orbit and on the surface. Even more interestingly, the 1-sigma uncertainty ellipse currently includes Mars. There is about a 1 in 1000 chance of impact.

What are its effects likely to be?

  1. If it hits Mars, it will almost certainly throw enough junk into low Mars space to kill all the spacecraft in orbit. It is also likely to kill any rover which happens to be within several hundred kilometers.
  2. If it misses, well it's a comet visible from 7+ AU out. It's going to be a big'un. All the usual stuff that applies to comets will apply here. It is likely that Mars (and therefore all the satellites) will pass through the coma, and be subjected to a pretty good sandblasting. We will need to review our experience with the Halley armada, and consider that the closer-passing objects were armored.
  3. If we decide that the comet is a hazard to spacecraft and other mechanical things, we have to decide what to do with MAVEN. Do we hold it until the next window? Is it worth firing to get a month's worth of data?


I have been watching this for the past week, and while refined orbits have been produced, the nominal miss distance has been shrinking, and impact has not been excluded yet. The last solution I have seen gives a 0.016% chance of impact.

In particular, Horizons says that the 3-sigma uncertainty ellipse has a semi-major axis of 88000km, a semi-minor axis of 22000km, and that the smallest uncertainty ellipse which touches Mars has a value of 0.98 sigmas. Further, the ellipse is rotated 24.20deg from the line between Mars and the nominal closest approach. I ran all this through an IDL script and got the plot at the top of this article.
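The "smallest uncertainty ellipse which touches Mars" figure is simple geometry in the B-plane. Here is a sketch of the computation as I understand it; the names are mine, and I assume the ellipse is centered on the nominal closest-approach point:

```python
import math

def n_sigma(dist, semi_major, semi_minor, rotation_deg):
    """Smallest k such that the k-sigma ellipse (per-sigma semi-axes
    semi_major and semi_minor, with its major axis rotated rotation_deg
    away from the line to the target) touches a point dist away from
    the ellipse center."""
    th = math.radians(rotation_deg)
    # Target position expressed in the ellipse's principal-axis frame.
    x = dist * math.cos(th)
    y = dist * math.sin(th)
    # Scale each axis by its sigma and take the radius in sigma units.
    return math.hypot(x / semi_major, y / semi_minor)
```

A point on the major axis at exactly one semi-major-axis distance comes out at 1.0 sigma, as a sanity check.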

I believe that the documentation on Horizons is wrong, and that what is documented as the 3-sigma uncertainty ellipse major and minor is really the 1-sigma ellipse. With this change, the Monte Carlo samples (see below) match the error ellipses well. Otherwise, I can't reconcile the 3-sigma ellipse touching Mars with the reported Nsigma of 0.98.

A new and powerful tool

The Minor Planet Center collects observations, and in principle you can fit those observations yourself, if your name is Carl Friedrich Gauss. Or if you get Find_orb from Project Pluto. I'm still learning it myself and playing with it, but it has the ability to import observations in MPC format. The program is under active development, and appears to have accreted features as interesting real-world events have happened. For our purpose, the two best features are auto-fit and Monte Carlo. The latter is done in an especially clever way. The program creates a cloud of objects, but it doesn't require a covariance matrix. Instead, it adds a bit of noise to each observation in RA and Dec, then fits an orbit to those new observations and creates an object.
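Find_orb's trick generalizes to any fitting problem: re-noise the observations, refit, repeat, and the spread of the refit results estimates the uncertainty without ever forming a covariance matrix. Here is a toy version with a linear fit standing in for the orbit fit; everything below is my illustration, not Find_orb's code:

```python
import random

def fit_line(obs):
    """Least-squares slope and intercept -- the stand-in for orbit fitting."""
    n = len(obs)
    sx = sum(t for t, _ in obs)
    sy = sum(y for _, y in obs)
    sxx = sum(t * t for t, _ in obs)
    sxy = sum(t * y for t, y in obs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def monte_carlo(obs, noise_sigma, n_samples, seed=0):
    """Clone the observation set n_samples times, each clone with fresh
    Gaussian noise added, and fit each clone independently."""
    rng = random.Random(seed)
    return [fit_line([(t, y + rng.gauss(0, noise_sigma)) for t, y in obs])
            for _ in range(n_samples)]
```

The cloud of fitted (slope, intercept) pairs plays the same role as the cloud of fitted orbits does for the comet.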

So, I got the observations for C/2013 A1 and dropped them in. After 5015 Monte Carlo samples, 7 of them hit Mars, for an impact probability of 0.14%, quite in line with the JPL Horizons number, but produced directly and solely from the observations. As seen above, they almost perfectly fit the JPL Horizons ellipses.

Friday, February 15, 2013

2012 DA14 in Celestia

1) Go to Horizons and get the spice kernel


type 2012 DA14, and say yes to do a name search. Once it finds it, type s to get an SPK kernel. Give the following answers:

SPK text transfer format  [ YES, NO, ? ] : n
SPK object START [ t >= 1900-Jan-01, ? ] : 2013-Feb-01
SPK object STOP  [ t <= 2101-Jan-01, ? ] : 2013-Mar-01

A binary SPK will be created. You will be asked whether to add more objects; say n.

Now you will get a download link; copy it into your browser and save the file. It will be named something like wld17761.16.

2) Install Celestia

3) Create an extra for Celestia
Create a text file 2012_DA14.ssc in the extras folder and create a data folder within that folder. I needed to run a text editor as administrator to do this.

In the text file, paste the following text:

"2012 DA14" "Sol"
    Class "asteroid"
    Texture "asteroid.jpg"
        Color [ 1.000 0.960 0.919 ]
    BlendTexture true
    Radius 0.025

         Kernel "2012_DA14.bsp"
         Target "3599602"
         Origin "SUN"
         Beginning "2013 2 13"
         Ending "2013 2 17"
         BoundingRadius 1.5
         Period 1.0
    Period 5.918

    Albedo 0.048

4) Copy the spice kernel you downloaded (named something like wld17761.16) to the extras/data folder and rename it 2012_DA14.bsp

5) Start up Celestia and look for 2012 DA14 in the solar system browser. It should be the last item under Sol. If it's not there, use ~ to bring up the debug screen and up and down to scroll it, and see if there are any errors for 2012 DA14.

Closest observed distance is just under 30000km. It travels from south to north past the night side of Earth.

Friday, February 8, 2013

Gem of the Week - On-Demand Printing

The Big One, by Stuart Slade. World War II went very differently, with England basically surrendering after Dunkirk, immediately drawing the USA into the war. With the Western front secure, Germany was able to focus on the Russian front. They captured Moscow (and killed Stalin, good riddance) but were stopped short of the Urals by the combined Russian and American armies, where they stalemated for five years.

One day in 1947, that changed. On that day, over a thousand planes were launched from bases all along the East Coast of the United States, carrying over 200 Mark III atomic bombs. These were upgraded from the Mk3 used in our timeline, with a typical yield of 35kT instead of the 20kT of Trinity and Fat Man. The main bombers were B-36s, with four bombs each. Each bomber had two escorts, also B-36s.

That smaller plane to the left is a B-29. To put it mildly, the B-36 is a large aircraft.

Interestingly, once the bombers were over Germany, they turned on Salvage Fuses, which would set the bombs off once they reached 2000ft, even if still in the plane. So even shooting down a bomber won't save you. In fact, the second blast was due to this very effect. That's just mean...

One hundred fifteen bombs are accounted for in the book, including twelve for Berlin, eight for Munich, and twenty-five between Dortmund and Bonn. Two were in the Frankfurt area, one in Koblenz, one in Heidelberg, but none in Rothenberg. Estimated casualties are twenty million immediate fatalities and probably that many again from radiation, lingering injuries, illness, and the collapse of civilization and famine caused thereby. The population of Germany was about 60 million in 1945, real world timeline.

It's an interesting story, but it has some holes in it, like how in the world did General Groves keep Manhattan secret for two additional years while they built up the stockpile? I got the book largely to see the answer to that question, but I didn't find it. Also, I am surprised they didn't base in Russia. However, looking at the map, it is almost as far from the Front to Germany as it is all the way from America.

But none of that is the Gem. I found out about The Big One on the TV Tropes Wiki in late January, and after failing to find the story in free form on the net, I decided to shell out and buy the book on Friday, 1 Feb 2013. The last page of the book is marked:

Made in the USA
Lexington, KY
02 February 2013

The book was made after I ordered it. It seems that it was literally made for me.

Saturday, February 2, 2013

Gem of the Week - EEPROM I2C Memory

Of course I am designing a Rocketometer around the Propeller. How could it be otherwise?

In doing so, I have to get my own parts. The Sparkfun breakout is an interesting jumping-off point, but they don't even use bypass caps, so I added a set of those. Also, one thing which concerns me about this whole endeavor is that all the memory is limited. Each cog gets 2kiB, for a total of 16kiB, but the images for each cog must be stored in main memory, which has 32kiB of RAM and 32kiB of ROM. There is no way that I am going to be able to get the program source code embedded with that little memory.

This brings us back to the EEPROM on the breakout board. The bootloader burned into the Propeller knows how to bit-bang an I2C interface and read a memory on the bus. It treats the memory as a 32kiB byte-addressable memory with auto-increment, so it can set an address once, then read and read and read and fill main RAM. The Propeller documents say to use a 24LC256 or similar, so I got to looking at Digikey to see if they had that chip, and perhaps more interestingly, if there was a bigger chip which would still work. It turns out that the design of the EEPROM is the gem of the week.

First, the hardware. The 24LC256 is a 256kib memory organized as a 32ki x 8 (byte-addressable) memory. It is in a kind of large package, a whopping 5mm wide, with 8 pins. This is a lot for a device which is I2C and could in principle work with only 4 pins. But here is the cleverness. The device has three address pins, allowing the 7-bit I2C address to be anything in the range 0x50-0x57. So, you could just stack up to 8 devices on the bus, with only the address pins different, and get 32x8=256kiB of memory, at the cost of using 8 devices.

The 32kiB space is addressable with 15 bits, but the I2C protocol is byte-oriented, so you have to send 16 bits, of which only A0-A14 are considered, and A15 is ignored.

These facts work together to allow the address space to be easily extended. First, a 64kiB part is doable just by considering A15 in the address. Next, a device can internally answer more than one address, for instance by ignoring the A0 hardware pin and answering both 0x50 and 0x51. This of course means that you have to readdress the device when you cross from the address space covered by 0x50 to the one covered by 0x51, but you would have to do that anyway if there were multiple real devices.

This all adds up to the STMicro M24M02, which appears to be compatible with the 24LC256, in that if you talk to an 'M02 like a '256, it will answer like a '256. So, you can use the larger device and the bootloader should happily just work. However, it is a 2Mib memory organized as 256ki x 8 (256kiB), eight times as much memory as the reference design, and approaching comparability with the LPC2148 and its 512kiB of Flash. Then when your application takes over, you can use your own bit-bang to access the full chip.
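The resulting addressing scheme can be sketched concretely. This is my illustration of the bit layout as I read the datasheet, assuming the one remaining chip-enable pin is tied low:

```python
def eeprom_address(byte_addr):
    """Split a byte address into (I2C device address, 16-bit word address)
    for an M24M02-style 256kiB part: the low 16 bits go on the wire as the
    word address, and the next 2 bits land in the I2C device-select bits,
    so the chip answers 0x50 through 0x53."""
    dev = 0x50 | ((byte_addr >> 16) & 0x03)
    word = byte_addr & 0xFFFF
    return dev, word
```

Note that the bootloader only ever generates addresses below 0x8000, so its reads all land on device 0x50 and the 'M02 looks exactly like a '256; the readdressing described above only matters once your own code crosses a 64kiB boundary.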

The 'M02 still has one address pin, so in principle a 512kiB part could exist in a single chip, answering all eight addresses, without changing anything. Going beyond this would break the nice de-facto protocol going on here.

Friday, January 25, 2013

Gem of the Week - the Parallax Propeller

I write this before ever using one, so consider this a review of the concept rather than the implementation. The Parallax Propeller is a microcontroller with a couple of interesting features, and perhaps more interestingly, a couple of intentionally missing features.

First off, it has no peripherals other than 32 GPIO pins.

Secondly, it has no such concept as interrupt.

Thirdly, it has eight independent processors, called "cogs", each with its own memory. Each one can run on its own resources without interfering with any other processor. Each cog is a 32-bit processor with 2kiB of RAM, shared between code and data.

Fourth, it has a set of resources that all processors can access, called a "hub". This basically consists of more memory and a round-robin memory controller which each cog can access in turn. The hub has 32kiB of ROM with the bootloader, Spin interpreter, and a couple of tables, and 32kiB of RAM.

The missing features are what make the controller interesting. Want an I2C? Write a program for a cog which can bit-bang it. Same for SPI, UART serial, etc. Presumably it could bit-bang low-speed USB, but high-speed would be difficult due to limited processor speed.

Further -- want an interrupt? Too bad. Instead, assign a cog to sleep-wait for the appropriate signal.

Programming such a beast is clearly a different problem than programming an ARM of any flavor. ARMs are all about peripherals, registers, interrupts, etc. Propeller is about bit-banging. Effectively you can use a cog as a soft-peripheral to do basically any digital process.

As I mentioned above, the processor comes with a Spin interpreter. Spin is a custom language for programming Propeller, which gets compiled into byte code and interpreted with the Spin virtual machine. Of course you give up performance, but they used an interpreter in the Apollo Guidance Computer. There, it was for memory saving - a single interpretive instruction could take the place of many instructions in AGC machine language. They gave up time to gain space. Spin could have similar benefits, but it seems like the main purpose is providing a language which natively handles the very different concepts needed to handle a processor as weird as the Propeller.

A Propeller program then consists of a bunch of Spin and machine language routines, all stored in hub RAM and copied from some external source whenever the chip resets. Aside from self-modification or using a cog to bit-bang a memory bus, this is all the space you get. This is a rather tight restriction, in fact less memory than in the AGC. But, they fit a full-blown Kalman filter into that.

Of course I have already designed it into a Rocketometer. It's what I do. One of the great things about the GPIO and bit-banging style of the Propeller is that I can put the SPI bus on the pins which are closest to their targets outside the chip. This makes routing the board MUCH easier.

So I will say for now that Propeller shows a lot of promise. The design is a gem. We will see about the implementation.

Saturday, January 19, 2013

Can't... bring... the... funny...

Wonderduck has decided to drop Ben-To, which I only downloaded and watched on his recommendation. Thanks for that... Although I guess writing a careful episode-by-episode writeup isn't really a recommendation from him -- consider Rio Rainbow Gate, where it is more like a warning.

Anyway, the Anime of the Week then is Vividred Operation (vivid-red, in case you miss the space and can't parse the title). Episode 1 established that it was that kind of show. I mean magical girl -- what were you thinking of? Even though it is technological magic, it was still magic, since they still had the magical clothing change transformation sequence. That's right, the defining characteristic of a magical girl is magical clothing change.

Episode 2 can be subtitled "Friendship is insufficiently magic".

I thought about doing an episode writeup, but I only thought of a couple of jokes, and Wonderduck got them himself, so I will restrict myself to what I can do -- one line jokes.

Episode 3 features them skipping directly to "Form Blazing Sword", a clear violation of protocol since you are supposed to wait until you are almost defeated before pulling out your game-changing weapon.

Wednesday, January 2, 2013

Gem of the Week: The Free Market

Or:  They had Opportunity, in their very Community

Dear Princess Celestia,

Recently there was a severe shortage of apple cider here in Ponyville, which we had the opportunity to remedy, but because of short-sightedness on the part of some of your subjects and your apparent policy of granting monopolies in too many areas of the market, we chased that opportunity out of town.

In Ponyville, the cider franchise is granted to the Apple family of Sweet Apple Acres. Their orchard is a relatively small business and is incapable of supplying the demand. Cider prices are also too low, due to your policies of price control on cider, so the Apples are losing out as well. As a consequence, hundreds of ponies, a good fraction of the citizens of Ponyville, are deprived of cider, and those that do get it are forced to pay for cider with their time rather than with bits, by camping out at the gate. The time that they are granted, all those moments that will never exist again, they were forced to spend waiting rather than doing what they wanted.

If it seems cruel and heartless to give priority to ponies just because they have more bits, consider that your current policies are cruel and heartless as well to those who don't get any cider because they don't have as much time to waste in line. Even those ponies who do choose to spend those moments that will never exist again, pay for cider with both bits and time. Those customers spent that time to no benefit to anyone -- not themselves, and not the Apple family. If time were bits, it is as if those bits of time were carefully collected, from both the ponies who got cider and those that didn't, and dropped into Tartarus, never to be recovered, and never doing any pony any good. Even worse, those ponies who don't end up getting any cider pay time for cider they don't get.

A pair of entrepreneur ponies, the Flim Flam brothers, tried to remedy this, but because of the cider monopoly, they were unable to simply purchase the Apple family apples as a raw material and make the additional cider the market demands. The new business attempted to enter into collusion with the established players, but were unable to negotiate a cartel, which may have been even worse than the monopoly. By denying market competition, they were forced to compete in other ways, in this case a single-elimination to-the-out-of-business contest of pure quantity of production, rather than the ability to satisfy customer demand. In this competition, the Apple family was forced to work far beyond its sustainable capacity to near-exhaustion, and the Flim Flam brothers were forced to compromise their quality control, leading to inferior cider and tired, thirsty ponies.

If your policy of granting monopoly franchises were rescinded, the Flim Flam brothers could have just purchased apples from the Apple family or other suppliers at the market rate, and the Apples and Flim Flam could have worked together without having to collude. If your policy of cider price control were rescinded, the Apples could have raised the price of their cider to the point that some ponies would have decided that the cider wasn't worth it. The same number of ponies would have been served, the Apples would have made more money, and those ponies at the back of the line but more willing to pay would have been able to get their cider. With another supplier, more ponies would have been able to be served, and even if Apple family cider is better than Flim Flam cider, something is better than nothing, and the market would have decided that Apple family cider was worth more, and would have paid more. Those unwilling to pay that much would have been able to pay less to get Flim Flam cider.

They call it capitalism, but it isn't really an -ism of any kind, just the invisible hoof at work: ponies working toward their own benefit but supplying the needs of all ponies.

Please consider changing your economic policies to allow freedom to your subjects to pursue the interests that seem best to them.

It's a new world with tons of cider, fresh squeezed and ready for drinking -- and also plenty of quills, sofas, anemometers, and maybe even things nopony has even thought of inventing yet, but would if they had time to think and invent instead of waiting in line for cider.

Your Fellow Citizen,
St. Kwan the Just