## Tuesday, April 8, 2014

### Music Structure and Program Structure

I am a programmer by nature. I learned on my father's knee when I was single-digits old. I passed him in skill when I was a teenager. But, I am not only a programmer. "Specialization is for insects."

One of the other things that I am is an amateur musician. I may not have ever been a good, or even an average, tuba player, but I had fun doing it. I had fun marching, and I had fun watching football and basketball games. I realized early on that I was there not really so much as a musician, but as the carrier of a big metal flag. I was there for visual impact as much as anything. I even achieved a certain amount of fame as the "spinning tuba guy" in the early 2000's. One of my proudest moments was when I was featured in the opening montage of SportsCenter for one whole second.

As I said, I was never very good, but to be even a below-average tuba player, you must acquire certain skills. You have to know how to read music, and I did. I could convert note positions into fingerings and play approximately the right note. I could count or clap out rhythms, and with some practice, I could play the music well enough to fit in with the rest of the band.

One of the things I noticed being both a programmer and musician is that there are some similarities between program flow and music flow. In both cases, the most common flow is from one line or measure to the next, just sequential. Programming has loops, music has repeat signs. Programming has if/then, music has first and second endings. Music is somewhat limited, in that it is deterministic. It doesn't have to deal with input. Therefore, in music, some things have to go together, like multiple endings and repeat signs. Otherwise, the second ending would never be played, and would effectively get optimized out.

I have often wondered if this mapping could be made more complete. There are certain concepts we use in programming that tend not to get used in music, but maybe could, like subroutines. That got me wondering if there were concepts in music that could be mapped to programming, but aren't, and I finally came up with one today: the coda.

In music, you will see markings such as D.S. al Coda, indicating that the flow jumps from here back (never forward) to a special sign, then continues from there until it reaches another mark, To Coda. That mark is ignored the first time through, but on the second pass it indicates a jump forward (never back) to the marked coda section.

It first occurred to me today that this is similar to exception handling. When a handled exception is thrown, the flow jumps to the handler. In a sense, this is like taking the coda branch in music.

You could use this in normal flow, throwing an exception when you want to make an early exit from a loop or function. Most functions and loops should have one entry point (enforced by the language, except for Fortran), and one exit point. However, sometimes it is convenient to do an early return from a routine, an early break from a loop body, or an early re-loop in a loop body. Most languages I use support these things with the return, break, and continue statements, respectively. However, there is a good reason for the 'only one exit' rule. Often the routine needs to do some cleanup on exit, closing files, calculating final results from accumulated variables, etc. If you do an early exit, you have to make sure that the cleanup is done appropriately each time you have an exit. If you want to change the cleanup, you have to do that in multiple places. The alternative is that rather than having multiple cleanup-and-exit blocks, you have something like goto end, and then at the end label, you do the cleanup.

However, gotos are to be avoided, for good reason. While the code will work just fine, source code is for communication with humans, not machines. If it were otherwise, we would write code once, keep the binary code, and delete the source. In this case, the humans we are communicating with are most likely our future selves.

Since source code is for humans, semantics matter more than they would if the code were just for machines. Machine language doesn't normally have many advanced control structures, just jumps. The code is effectively strewn with goto statements, and the more optimized the code is, the worse this tends to get. Even disassembled code from a modern compiler is hard to reconcile with its source code, because optimizations make everything dramatically out-of-order.

Source code, on the other hand, has structured statements rather than a spaghetti nest of goto statements, because they mean something to us. To a machine, a for loop means set up an induction variable, adjust it at the end of each pass, check the loop condition, and go back and do it all again as necessary. To a human, it means run the loop a predetermined number of times. This is why we have the rule to not monkey with the induction variable: it disrupts that expectation in the reader's head. In a sense, this is why break and continue are frowned upon, too, because they also disrupt the expectation.

Random goto statements are just that, random. They have no semantics. They can be used to do anything, so they mean nothing. All structured programming constructs can be done with goto (in fact they have to be in machine language) but not all goto constructs can be represented by structured programming. Sure, there is a proof that you can do it, but that's just because structured programming and goto statements are both Turing complete. I was once dealing with a program, SDP4, which was originally written in Fortran with no structured programming constructs, but pure goto spaghetti. The code arrived to me translated into (Turbo) Pascal, and I translated it further into Java. The previous programmer did a pretty good job of translating most of the gotos into structured constructs, but there is one part where he left in the gotos because the original code was so tangled he couldn't figure it out. He had the luxury of leaving in the goto statements in that case, because (Turbo) Pascal supports it, but I in Java did not. I ended up using a case statement inside a while loop and depending on fall-through. It was technically a structured construct, but just emulated the original goto nest. Vallado points to a solution that involved jettisoning that entire block of code and re-writing it from the original math.

So, while you can write a goto end, it doesn't carry the appropriate semantics. What I want is a pair of statements that mean to coda and coda. This carries the semantic that we will shortly be exiting the routine, but that there is a certain amount of cleanup to do. It would be added to the arsenal along with the early return, break, and continue. One way to emulate it in Java is to put the coda in the finally block of a try/catch/finally statement; when you want an early cleanup, you throw an exception. However, this violates the semantics of an exception, which is supposed to be used only for an error condition. I once wrote a program which used exceptions in this sense, asked the guys on TheDailyWTF's forum whether there was a better way to do it, and basically got laughed off the forum.

Therefore, I think that in order to capture this semantic, a new statement is needed. Coda is a fine word for it for me, since I am a musician, but maybe there is a better term. Until this construct is added to our arsenal of structured constructs, we are stuck with goto coda, which is better than nothing, because it does capture the appropriate semantics.

## Wednesday, April 2, 2014

### Regulator

Once again this is hooked up as indicated in the MIC5319 datasheet. I finally am using the switch right, no more useless machine (and useless transistor) for me. As it turns out, that transistor was effectively built-in to the regulator, that's what the EN input is there for. This circuit supplies a nice 3.3V output on VCC to everything else. Previous versions had a current sensor, but this one doesn't, as I haven't gotten one of those to work yet.

It isn't visible in this image, but the bottom rail is GND.

### Charging Circuit

This one is a pretty straightforward hookup of the MCP73831 charging circuit, with one addition: D302 is there so that the charging circuit does not have to treat the whole device as a battery to be charged. If USB is connected, VIN will get the full 5V (minus the voltage drop of a Schottky diode). If not, the battery is used (again minus the voltage drop). Finally, both diodes need to be there so that the battery doesn't feed back to the charger input and try to charge itself. The programming resistor tells the charger to charge the battery at a maximum of 100mA. This is 1C for a 100mAh battery, the smallest one I have and the one that flies with the Rocketometer.

The status light uses a resistor from the 1.5k pack in the USB connection.

Other versions of this circuit have had a voltage divider between VLIPO and GND so as to allow the MCU to measure the battery voltage. I haven't used it in a while, and it does draw some current, so it is gone from this circuit.

### Doing USB right

AN11392 - Guidelines for full-speed USB on NXP's LPC microcontrollers (19 Feb 2014). This finally answers all my questions about what all the USB parts are. Based on it, let's review the Loginator USB/charge/power supply section.

We are now using a Micro USB connector with through-hole posts for better mechanical security and easier alignment during soldering. Micro USB takes up less board space and is compatible with the cords used for Android phone charging.

The first thing the app note says is that there must be a 33 ohm resistor on each of D+ and D-. This, plus the internal resistance of the LPC itself, adds up to the required total impedance: the app note implies that there is 12 ohms on each of the pins inside the LPC, for 45 ohms on each line, or 90 ohms across the differential pair. This is what I have been carrying all along from the Logomatic design.

Secondly, USB_Softconnect is required if the device is self-powered, or if it takes longer than a certain amount of time to boot up. Since my device can be self-powered and might never boot up and connect, I intend to use it as intended. However, I still like a PMOS rather than a PNP transistor. The 1.5k resistor required for softconnect to work is also well-sized for the LED, so I use a resistor array.

The signal lines have capacitors to ground, for exactly the purpose I deduced: shorting high-frequency noise to ground. The app note says that they are not strictly necessary, but that they have been reported to improve certain noise issues. I have always built circuits with these included, so I shall continue. These are one of the rare cases where it makes sense to use a capacitor array.

Next, we have R011 and R015, which I completely whiffed on. In my defense, I am not the only one. My design came from the Logomatic, which came from the microbuilder.eu LPC2148 Reference Design which appears to be a mistranscription of the Keil MCB2140 schematic. Even then, the Keil board does not seem to be what was intended.

The idea is that P0.23, USB_ON, is 5V tolerant if and only if the rest of the circuit is powered. So, if it is not possible for the MCU to be off while the USB voltage is present, then you can just plug VBUS into P0.23. However, if the MCU is not powered, the voltage on that pin is supposed to be limited to less than 3.6V. The app note recommends a voltage divider, with 24k on the high side and 39k on the low side, resulting in about 3.09V on the input.

The way both the microbuilder circuit and Logomatic circuits are arranged, that isn't a voltage divider at all, and the pin eats a full 5V. Since the MCU can be turned off (the power switch disables the 3.3V VCC line), this is technically out of spec.

R011 should be connected to the right pin of R015B, and should be closer to 20k. This divider will draw three times as much current as the recommended value, but that is still less than 1mA.

There is supposed to be between 1 and 10 uF between VBUS and ground, visible through regulators and other parts. The real spec is that the inrush current should be limited, but I don't intend to ever submit my device to USB compliance testing, so as long as it works for me, it's fine. There is 4.7uF on the input to the voltage regulator, so I do not include any additional intentional capacitance in this section.

## Monday, March 10, 2014

### Rocketometer Flight Data Published

This documents the data produced by the Rocketometer during NASA Sounding Rocket Flight 36.290, 2013 Oct 21.

I was getting sick of looking at this data, unable to fully process it but holding it jealously. This has to change. The data wants to be free. Get it at https://drive.google.com/#folders/0B_jwoDhgAF3cNXo0OG5MS2RoM3c

## Thursday, February 20, 2014

### Once More Unto the Breach...

It's official, St Kwan's has re-entered the robot business. Project Yukari Mk2 will be racing on June 21, 2014.

## Monday, October 21, 2013

### IT WORKED!!!

I turned the Rocketometer over to the rocket guys back in July, and had no contact with it until after the flight today. It had been exposed to every environment it was going to see except for vacuum, but I still had no confidence that it would work.

Well, it did.

The first time I stuck the SD card into my computer after the flight, it said "what card?"

That was disappointing.

It took a few minutes for the computer to recognize the card, but when it did, I saw that it had recorded 419 MiB in 66 files. One of the last was also the longest, so I ran it through my quick extraction program, then through IDL, and saw the characteristic acceleration pulse during first and second stage.

The first thing I did after that was press the Victory button on my phone. No one else in the lab either heard it or got it, so I had to shout, "It Worked!!!"

Now I have to analyze the data...

### Minimum mission success achieved!

At about 12:01:12 MDT today, the Rocketometer achieved minimum mission success by riding above 100 km and therefore reaching space.

It will be some time yet before I can recover the device to see that it worked, which will represent full mission success.

## Saturday, October 12, 2013

### The Curiously Recurring Template Pattern

Go look up on Wikipedia what it is. I am going to talk about how I am having to use it.

I was doing fine with the Rocketometer analysis code in C++, using the NewMat library to handle matrices, with a Quaternion layer on top that I wrote myself. After five days of doing battle with various things, I finally got something that worked, but I was curious if this was the "best" way to do it. The C++ Standard Template Library didn't have anything directly related to matrices. The Boost library had a section called uBLAS, but the documentation for it kind of de-recommended itself. It suggested several alternatives, and the one that looked best is called Eigen.

Eigen is interesting in that it is entirely header files, containing almost all of its code in C++ templates. Templates are cool, mostly because when they are instantiated, the compiler gets to see the code in the context where it is used, and gets to inline and optimize it there. Specifically, Eigen provides a dynamic-sized matrix, but also templates for fixed-size vectors and matrices. I want to use the latter as much as possible, because all vector sizes used in Rocketometer data analysis are known at compile time, so the compiler can unroll loops and otherwise optimize the code.

However, templates do not mix with virtual methods, so I had to figure out how to make that work, since I used virtual methods to implement the physics model. I had code that looked like this with NewMat:

```cpp
class SimModel {
protected:
  /** Physics function. Calculates the derivative of the state with respect to time. */
  virtual ColumnVector fd_only(double t, const ColumnVector& x)=0;
  /** Measurement function. Calculates the measurement from the given state and time. */
  virtual ColumnVector g_only(double t, const ColumnVector& x, int k)=0;
  /** Physics function with process noise. Uses the fd_only virtual function to calculate physics, then adds process noise. */
  ColumnVector fd(double t, const ColumnVector& x, const ColumnVector* v);
public:
  /** Measurement function with measurement noise. Uses the g_only virtual function to calculate the measurement, then adds measurement noise. */
  ColumnVector g(double t, const ColumnVector& x, int k, const ColumnVector* w) {
    ColumnVector result=g_only(t,x,k);
    if(w) result+=*w;
    return result;
  }
};
```

But I wanted to adapt that to use Eigen, specifically with the fixed-length vectors, since the size of the state vector is determined by the problem and known at compile time. That means that ColumnVector has to go, to be replaced by Matrix<double,n,1> where n is a template parameter determining the size of the state vector. But what about the measurement? The purpose of the k parameter to g_only is to tell which of several kinds of measurements to use. For instance, in the Rocketometer problem, we have a measurement vector coming from the inertial and magnetic sensors, treated as a single 9-element vector. We also have measurements coming from radar or equivalent, treated as a 3-element vector. So, we need a template function g_only, which generates either a 9-element vector or a 3-element vector. You can't do that and have it be virtual, too. Basically, virtual functions are a runtime binding issue, while templates are a compile-time binding. So, I can't have a virtual g_only function, callable by the base class g function.

Enter the Curiously Recurring Template Pattern (CRTP). As it happens, this is something that I read about just a few days ago, while reading up on the C++ language in general. For us, the pattern goes something like this:

```cpp
template<int n, class Derived> class SimModel {
public:
  template<int m> Matrix<double,m,1> g(double t, const Matrix<double,n,1>& x, int k, const Matrix<double,m,1>* w) {
    Matrix<double,m,1> result=static_cast<Derived*>(this)->template g_only<m>(t,x,k);
    if(w) result+=*w;
    return result;
  }
};
```

Note that g_only isn't even defined in this class template, only used. In fact, one of the weaknesses of CRTP is that it implies definitions without expressing them, so it is hard to document. Also note the extra template keyword after the arrow operator; more on that below.

The derived class then looks like this:

```cpp
template<int n> class RocketometerModel: public SimModel<n,RocketometerModel<n>> {
public: // g_only must be visible to the base class
  template<int m> Matrix<double,m,1> g_only(double t, const Matrix<double,n,1>& x, int k);
};
```
So what happens is that the compiler:
1. Parses the template for SimModel, but doesn't compile it, because it's a template, not actual code yet. Therefore it doesn't matter that g_only is undefined yet.
2. Parses the template for RocketometerModel, and again doesn't compile it.
3. Parses the main code, compiling as it goes along until it hits a use of RocketometerModel. It instantiates and compiles RocketometerModel, in the process instantiating and compiling SimModel.
4. When SimModel is being instantiated and compiled, it has a call to RocketometerModel's g_only, but this is fine, since that is already available thanks to step 2.

Now the derived class might not be a template; it might be an actual class. In that case, the base class is instantiated and compiled along with the derived class. In either case, everything is available before it is used, even though the code might look otherwise.

Now this part I will write in bold, so that Google can see it. **The other curiously repeating template pattern is having to use the word .template when using (not defining) a template member function. This solves the error invalid operands of types ‘<unresolved overloaded function type>’ and ‘int’ to binary ‘operator<’.**

The problem is not really operator precedence but parsing: when the object's expression depends on a template parameter, the compiler cannot tell the opening angle bracket of a template argument list from an ordinary less-than comparison. To disambiguate, you throw in the word template after the dot (it also works after a pointer arrow -> if you are using one of those). I don't understand it completely myself, but Eigen uses this internally, which is how I found out about how it works in the first place.

I am gradually coming to the conclusion that Java did it right with generics, which are real compiled code. Java generics are enabled by the "everything is an object" model, in which all objects descend from a common class. I am also beginning to think that Java did it right by leaving out operator overloading. Operator overloading, even when fully appropriate, like defining * to mean matrix or quaternion multiplication, is fun to use but a nightmare to implement. And if it is implemented wrong, it might be left to the user to find out, when he tries to do something the implementer did not foresee.

All in all, I give Eigen 4/5, especially recommended for new projects, but not for converting old projects. The biggest advantage is speed. What took IDL over an hour, took Java and C++ with NewMat about 4 minutes, but takes Eigen only 20 seconds. Also, templated matrix and vector sizes are nice, because they resolve matrix size mismatches at compile-time. Finally, zero-based component indexing is what I expect, and the reason I don't suggest converting old projects from NewMat. Also be aware that the Eigen quaternion library uses the convention $\vec{v}_{to}=\mathbf{p}\vec{v}_{from}\mathbf{p}'$, which is fine and internally consistent, but not consistent with the math I had seen for quaternion derivatives. As a consequence, my code is liberally festooned with q.conjugate() and in some places q.conjugate().conjugate(). It's almost a case of two wrongs making a right, but not quite.