Monday, December 17, 2012

Gem(?) of the Week - the Minkowski metric

The Pythagorean theorem is one of those gems that I won't go into a lot of detail about, because it has been so well-covered before by others. Let's just say that it is a property of Euclidean space, in particular a consequence of the parallel postulate. I just mention it here because it is the foundation of a topic I am going to go over in great detail.

In 2D Euclidean space, we can figure out the distance of any point from any reference point by setting up a rectangular coordinate system with the reference point at the origin. If there were just two points we cared about, we could align the axes such that the X axis went through the other point, and then we could just read the distance off that axis. That's not really exploiting 2D space, so we will think about one reference point but lots of other points all over the plane. We can use the Pythagorean theorem to measure the distance from the origin to any point in the plane:

\[s^2=x^2+y^2\]

With this coordinate frame you can figure the distance between any two points, even if one is not at the origin, as follows:

\[s^2=(x_2-x_1)^2+(y_2-y_1)^2\]

By using the standard delta-notation from engineering, we can simplify this back to 

\[s^2=\Delta x^2+\Delta y^2\]

Now what is true for 2D Euclidean space is also true for 3D. The Pythagorean formula still works; you just have to extend it to cover the third dimension:
\[s^2=\Delta x^2+\Delta y^2+\Delta z^2\]

Those crazy topologists have generalized the Pythagorean theorem to fit their weird bent rubber spaces. They say that any space, along with a function which takes two points as arguments and returns a number, is called a metric space, where that function is called the metric. The metric must obey certain axioms, among the most important of which is symmetry - the distance from point A to B is the same as the distance from B to A.
Some of the topologists' twisted imaginings don't really admit such a thing as a straight line. They get around this by saying that if you look at any small enough piece of the space, it is close enough to flat that we can define a metric there. Some spaces are so bent as to not even permit this, but those spaces which do are called manifolds. In a manifold we talk about points which are close together, and represent this in our metric with differential notation

\[ds^2=dx^2+dy^2+dz^2\]

Now on a manifold we can specify a series of points to draw a path through, find the distance between each consecutive pair, add them all up, and get the length of the path. In calculus-speak, we have every point on the path, and we integrate along the curve to get the distance. However, in flat space there is such a concept as a straight line, and it is the shortest distance between two points. If you integrate the differential form along a straight line, you get back the delta-form above. Because of this, we will just show things in differential form from now on.

The film Dimensions is all about extending the same concept to four- and higher- dimensional space. All the Euclidean axioms apply, and the fourth dimension is exactly like the other three, so it works into the metric the same way:

\[ds^2=dx^2+dy^2+dz^2+dw^2\]

One narrator talks about how 4D space is the prettiest, because it contains such things as the 24-cell. He also says that it may be because real physical space is 4-dimensional too, once you consider time. Blah blah Einstein, aggressively ignoring history, blah blah blah. Spacetime is 4-dimensional, but here's the weird part.

Time is not the same as Space.

"Wait a minute" I hear you all saying - "Obviously time isn't the same thing as space. Duh." But, the whole reason we call time a dimension, and the same reason we don't call temperature a dimension, is that there are coordinate transformations that mix in time with space. I'm going off of memory, but this argument came to me through a little book called "Relativity and Common Sense". For instance, imagine a plane where every point is painted a different color. So, at each point we can measure three things, its x and y coordinate, and its color. If we rotate the coordinate frame, we mix together the x and y coordinates

\[\begin{eqnarray*}
x'&=&x\cos\theta+y\sin\theta \\
y'&=&-x\sin\theta+y\cos\theta
\end{eqnarray*}\]

But, there is no rotation, no coordinate transformation which preserves our understanding of what coordinate transformation means, which can mix color and spatial coordinates. Color is not a dimension in this sense.

Well, Time is.
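Here is the transformation that does for time what the rotation above does for x and y - the Lorentz boost along the x axis, in units where \(c=1\), with \(v\) the relative speed of the observers and \(\gamma=1/\sqrt{1-v^2}\) (y and z pass through unchanged):

\[\begin{eqnarray*}
x'&=&\gamma(x-vt) \\
t'&=&\gamma(t-vx)
\end{eqnarray*}\]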

This transformation isn't just a rotation: the Lorentz transformation from coordinates measured by one observer to coordinates measured by a relatively moving observer depends on the relative speed of the observers. And here's the weird thing - time is a dimension, but it is not just like the other three dimensions. In fact, the metric for a spacetime obeying the Lorentz transformation is called the Minkowski metric, and it looks like this:

\[ds^2=dx^2+dy^2+dz^2-dt^2\]

See the minus in the time term? It says that the longer the time between two events, the shorter the distance, all other coordinates being equal.

It gets weirder than that. If the time difference is long enough, it drags the whole right side negative. The squared distance between two events is negative, so the distance between two events is imaginary. In special relativity, we say that when \(ds\) is real and positive, it is called proper distance, and it is the distance between two events as seen by some observer who sees them happening at the same time. Furthermore, there is no way for a single observer to be present at both events without exceeding the speed limit in the space, which introduces its own problems. When \(ds\) is imaginary, its (real) magnitude is called proper time, and represents the time interval between the two events as seen by an observer who sees them happening in the same place. Such an observer can be present at both events without exceeding the speed limit.
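To make the sign convention concrete, here is a minimal sketch (plain C++, a hypothetical event type, units where \(c=1\)) that computes and classifies the squared interval:

#include <cstdio>
#include <cmath>

// Hypothetical event: t, x, y, z in consistent units where c=1.
struct Event { double t,x,y,z; };

// Squared Minkowski interval: ds^2 = dx^2 + dy^2 + dz^2 - dt^2
double interval2(Event a, Event b) {
  double dt=b.t-a.t, dx=b.x-a.x, dy=b.y-a.y, dz=b.z-a.z;
  return dx*dx+dy*dy+dz*dz-dt*dt;
}

int main() {
  Event here ={0,0,0,0};
  Event there={2,1,0,0};  // more time than space between them
  double s2=interval2(here,there);
  if(s2>0)      printf("spacelike: proper distance %g\n", sqrt(s2));
  else if(s2<0) printf("timelike: proper time %g\n", sqrt(-s2)); // ds imaginary
  else          printf("lightlike: on the light cone\n");
  return 0;
}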

See what I mean by weird? In Minkowski space, there is a speed limit imposed as part of the fundamental geometry of space. If we think of time as just a dimension like space, this is equivalent to saying that it is impossible for a line to exceed a certain slope (change in the space dimensions per unit change in the time dimension). No such limit exists in Euclidean space. Spacetime is 4D, but not Euclidean. It is not the same 4D space discussed in Dimensions. The Pythagorean theorem is false, and therefore the parallel postulate is false. Minkowski space is the only flat (the metric works across long distances) space I know of which is non-Euclidean.

Now the question is, are the regular polytopes the same in Minkowski space? Does it make sense to talk about polytopes? Is a polytope regular from one point of view but not from another? These and other questions can be answered by the Minkowski metric, but I don't know the answers. I was only recently even able to form the questions.

Let's finish this off with a visualization:

Sunday, December 16, 2012

Putting my hardware where my mouth is...

I had been keeping this quiet, but it is the natural culmination of all that I have written on this blog.


At my day job, I work with space projects, but I have only been in the same room as flight hardware once, and that was purely as a tourist, to get my picture taken with it. I have written code that has gone into space, but never touched the hardware that carried it.

In October 2013, that will change. I am building space hardware. In a sense, that has already changed, since I have touched the hardware, but it's not space hardware yet. I have arranged for a version of the Rocketometer to fly on a rocket all the way into space.

The people I work for at my day job run an instrument in space that needs to be calibrated every so often. Every year or so, we fly a sounding rocket with a copy of our instrument, pop it up above the atmosphere for a few minutes, then let it fall back down into the atmosphere and descend on a parachute. It goes into space (well over 100km) but not into orbit.

On the campaign building up the rocket for the last flight this past summer, I was tinkering with my Rocketometer (actually the 11DoF) and got to talking with my scientists about it. I told them about my daydream to actually get this thing on the rocket, and they said it was a great idea. Naturally it was far too late to get on board that last rocket, but with this next one I have plenty of time, especially considering that the hardware is finished and I could potentially fly it now. The baseline mission is just to collect the data as fast as possible, and not bother with any on-board processing. That code was demonstrated with the speed test I did a few weeks ago.

My test plan and to-do list then looks like this:

  1. Adapt the old code to the new hardware. A couple of the I/O lines were reassigned to simplify the board design.
  2. Write an offline Kalman filter to process the data. IDL will work fine for that. Mostly I just need a set of equations of motion that allow the compass, gyro, and accelerometer to calibrate each other.
  3. Calibrate the sensors. I have an old record player with no needle which will be perfect for this. I may be able to use some stuff in the labs at work to help with this.
  4. Do a test flight in a model rocket. These generate a similar scale of forces and rotation rates, just for much shorter durations, seconds rather than minutes. The Rocketometer was designed to be carried in any rocket with a payload section 1" or larger in diameter.
  5. Get USB Bootloader++ working. This is low priority, as I can program the part over serial as I have been doing for a while.
  6. Consider on-board processing of the data.
I will be using the Rocketometer2148 with an MPU6050 6DoF sensor, an ADXL377 analog high-g accelerometer, an AD7991 12 bit ADC to read it, an HMC5883 compass, and a BMP180 pressure/temperature sensor.

With a couple of changes to main.cpp and gpio.cpp to tell it where the sensors and lights are on this board, the thing works! It may even be working at my goal rate of 1000Hz, thanks to a faster SD card, reading the compass only 1 out of every 10 times that the 6DoF is read, and not reading the HighAcc.
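The readout scheme, roughly (a sketch with hypothetical function names, not the actual main.cpp):

void readMPU6050();    // gyro + acc + temp, every pass
void readHMC5883();    // compass
void writeSDBuffer();  // queue the packets for the SD card

void loop() {
  static int pass=0;
  readMPU6050();                 // every pass, ~1000Hz goal
  if(pass%10==0) readHMC5883();  // compass at 1/10 the rate
  // HighAcc deliberately not read in this test
  writeSDBuffer();
  pass++;
}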

Monday, November 19, 2012

Things that there are never enough of, part 1

There is never enough bandwidth.

I can get an MPU6000 from Newark for about $39, or I can get an MPU6050 from Sparkfun for about $20. The only issue with the 6050 is that it runs on I2C, which maxes out at 400kHz. In burst mode, I can get a byte across in 9 clock cycles. An MPU6050 readout has three 16-bit gyro registers, three 16-bit acc registers, and a 16-bit temperature readout: a total of 14 bytes, plus addressing the part, for 15 bytes. Nine cycles each gives 135 cycles; plus a couple more for starts and stops, call it 140 cycles.

Is this enough? Of course not. There is never enough bandwidth. But is it enough? In 1 millisecond there are 400 cycles, so in theory the MPU6050 being read at 1000Hz will use up 35% of the available bandwidth.

Now there is also the high-g accelerometer, read out using an I2C ADC. The ADC has three 16-bit data registers, so 7 bytes counting the address. 7 bytes at 1000Hz, call it 70 more cycles. Now we are at 52% bus usage. We need to throw in a compass read and a pressure sensor read every once in a while, but it looks like there's enough to run the sensors at whatever rate I want.
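That arithmetic, as a quick sanity check (a sketch, not firmware; the start/stop padding is the same rough allowance as above):

const double busClocksPerSec=400000.0;  // 400kHz I2C
const int mpuClocks=15*9+5;             // 14 data bytes + address: ~140
const int adcClocks=7*9+7;              // 6 data bytes + address: ~70
const double busyFraction=(mpuClocks+adcClocks)*1000.0/busClocksPerSec;
// busyFraction = 0.525, the ~52% quoted above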

So maybe there is enough. One way to get more bandwidth is to use the other I2C bus on the chip, but this involves a fairly heavy redesign of the board. We would put the MPU and compass on one bus, and the HighAcc and pressure sensor on the other bus.

On to the measurements. I put together a program which reads the sensors with no delay, in effect as fast as possible with blocking. The program records the tick count (TC), which increments at 60MHz, before each reading, then reads the ADXL345, HMC5883, MPU6050, and L3G4200D in that order. So, the time spent doing the ADXL345 read is the TC of each HMC packet minus the TC of the corresponding ADXL packet, and so forth. The ADXL345 consistently takes 131 microseconds, with a variation of less than 1 microsecond. The HMC takes 483 microseconds. And for the moment of truth: the MPU6050 takes 539 microseconds. All is fine and good, right? Nope. We spend most of our time waiting for the card to write out. On a good write, it takes 9 milliseconds, about 17 times longer than an MPU read, to write a sector to the SD card. A lot of that could be gotten around by not busy-waiting for the card to finish.

In any case, there is plenty of bandwidth for the sensors, so there is no point in splitting up the traffic to two I2C buses. Reading twice as many sensors as necessary and producing more data than necessary (the real rocketometer will only carry one set of gyros), it still is reading out at about 300Hz.

Dual (or dueling) gyroscopes

The experiment that I wrote about yesterday was actually carrying two gyroscopes: the one in the MPU6050 discussed then, and the L3G4200D on board the 11DoF. This gyro was connected to the Loginator by SPI running at 1MHz. The 11DoF tab was oriented such that during the South integration, +X was East, +Y was Up, and +Z was South. In the same integration, the MPU tab was oriented such that +X was West, +Y was South, and +Z was Up (not Down as marked on the board -- the silkscreen is wrong, as proven by the positive reading on the MPU +Z axis in the 1g field).

So, we match up axes as follows:

MPU6050 | 11DoF
+X | -X
+Y | +Z
+Z | +Y
Therefore we just perform the calculation using +Z on the 11DoF just like yesterday we used +Y.
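That mapping as code (a sketch with a hypothetical vector type):

struct Vec3 { double x,y,z; };

Vec3 dofToMpu(Vec3 g) {
  Vec3 m;
  m.x=-g.x;  // MPU +X = 11DoF -X
  m.y= g.z;  // MPU +Y = 11DoF +Z
  m.z= g.y;  // MPU +Z = 11DoF +Y
  return m;
}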

The result doesn't look so good. First, data from yesterday, showing what a detection looks like:

This shows 1 minute of data from before and after the rotation. The noisy part on both ends is 10 seconds of raw data, and the smoothed bit in the middle shows a 2000-sample (roughly 20 second) boxcar smooth. The white data was with +Y pointing north, and the red data was with +Y pointing south. In that middle region, the difference is due to the rotation of the Earth.

Now, the data from the L3G, also taken yesterday, but not reduced until today.

Since the signals cross, there is no clear detection, even with a 20 second boxcar average. Why not? As it turns out, this sensor was set to 2000°/s, its least sensitive setting, with 1/8 the resolution of the MPU. So it's not a fair test.

Sunday, November 18, 2012

Detection of the Rotation of the Earth

Abstract: An MPU6050 6DoF MEMS sensor is used to measure the rotation of the Earth. The device is run pointed north for 1 minute, then south for 1 minute, taking 98 samples/s during the runs. The actual rotation difference between north and south is 0.0063837°/s, taking into account the cosine(latitude) effect. The measurement is 0.0064822°/s±0.0015299°/s, representing a clear detection of the rotation. The MPU6050 gyroscope is extremely accurate, probably sufficient for the Rocketometer mission.

This is also a review of the MPU6050 6DoF sensor. This part has a three-axis accelerometer with ranges ±2g, ±4g, ±8g, and ±16g, and a three-axis gyroscope with ranges ±250°/s, ±500°/s, ±1000°/s, and ±2000°/s. Since the intended mission is flying on a rocket, and certain rockets that may fly with this have been observed to accelerate at around 25g, it will need to be supplemented with a high-g accelerometer, but the gyroscope will handle the 4.5Hz rotation expected.

The earth rotates 360° in 24h*60m*60s=86400s, or 1° in 240 seconds, or 0.00416666°/s. The gyroscope's most sensitive setting is ±250°/s and reads out at 16 bit resolution, resulting in 500°/s/65536DN=0.0076294°/s/DN, not quite enough to distinguish earth rotation from a standing stop, but enough to distinguish when one axis is pointing north, then south. Unfortunately, that presumes that the gyro is noise-free, which we will soon see is false.

The current experimental setup is on a breadboard connected to a version 1.1 Loginator by I2C at 400kHz, carefully aligned to true north by the following extremely accurate procedure: since the walls of the secret underground laboratory are not aligned with true north, I pulled up the Google map of the lab on my phone and rotated the map until the walls on the map were parallel to the real walls; the case of the phone was then aligned to true north. I put down a line of tape to mark this orientation. In this setup, the MPU6050 axes were aligned with +Z pointing down (thus we expect to see the 1g field read negative), the +Y axis pointing north first then south, and the +X axis pointing east first then west.

Of course, all measurements are contaminated with noise. This can often be beaten back by taking many measurements of the same thing. If you take into account a number of simplifying assumptions, the noise of the average of \(N\) measurements of the same quantity is \(\sigma/\sqrt{N}\). In short, if you take 100 measurements of the same thing, the average of them is expected to have 1/10 the noise of the original measurements. My previous efforts have taken data over very long stretches of time, hundreds of samples per second over hours. This time I decided to take data for 1 minute at a time. I sampled at 98 samples/s (2 samples/s were spent reading the pressure sensor, a topic for another day) for 1 minute with the Y axis pointing north, then 1 minute with the Y axis pointing south.

The estimated noise on each measurement, calculated with the standard deviation of all the measurements, was 11.02DN, or 0.084°/s, about 20 times that of the rotation of the earth, so it is obviously impossible to measure with one sample. But, I took 5880 samples, giving a predicted noise on the average of 0.14DN or 0.001°/s, plenty small enough to measure the rotation of the Earth. But did I? And if I did, why did I fail before?

Data:
Y north: -0.2217195°/s±0.0010972°/s (1σ)
Y south: -0.2282107°/s±0.0010972°/s (1σ)

Difference: 0.0064822°/s±0.0015299°/s (North is greater than South by this much)

Now, what is the expected value? First, is it positive or negative? Relative to inertial space, the device is rotating according to the right-hand rule around the Earth's axis. The device measures a right-handed rotation around an axis as positive, so the device should read more positive when pointing north than when pointing south, as it does.

Also, as mentioned above, the earth rotates at 0.00416°/s, but my Y axis is inclined 40° relative to the earth's rotation axis, so I should only expect to see \(\cos 40^\circ\) as much. Also, I would expect to see twice as much as that, because I am not comparing a standing stop to rotation, but rotation one way to rotation the other way.

Taking all this into account, the predicted measurement is....

0.0063837°/s.

My measurement clearly brackets this. In fact, the measurement is much better than I have any right to expect, being only 0.06σ above the expected value. I would have accepted any measurement with the proper sign and an error of less than 1σ.
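The prediction arithmetic, as code (a sketch; 40° is the latitude of the lab):

#include <cstdio>
#include <cmath>

int main() {
  double rate=360.0/86400.0;          // Earth rotation, deg/s
  double lat=40.0*M_PI/180.0;         // latitude, radians
  double expected=2.0*rate*cos(lat);  // x2: north vs south, not vs a stop
  printf("%.7f deg/s\n", expected);   // prints 0.0063837
  return 0;
}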

Why did I fail before, with measurements taken over hours and hours? Gyro drift. Ideally, the zero point of the measurement would be zero DN, but we are happy if the zero point is just constant. But it's not. Temperature changes and other unmeasured effects cause the zero point to drift, and when measuring for hours and hours, this gyro drift swamps the signal I am trying to measure. To get a good rotation measurement, you want as many samples as possible, but taken over a relatively short time such that gyro drift is small.

So, back to the review. The MPU sensor is kind of noisy, with 11DN noise per sample, but can be read out sufficiently quickly and has sufficiently small gyro drift that it can measure the rotation of the Earth.

Friday, October 12, 2012

The USB protocol stack

SPI is a nice, simple, uncomplicated way of communicating between two or more embedded devices. Set a couple of registers defining clock rate, phase, and polarity, and you are ready to go at up to about 10Mb/s. It's a convenient way to talk to microSD cards, accelerometers, basically anything on board with an embedded device. Lots of device-defined protocols can be layered on top of it to do anything the host and device agree to.

USB is a nice, COMPLICATED way of communicating between a host and multiple devices, and multiple applications within the devices. It's not just a bus protocol, it's practically a network in itself. This has its advantages and drawbacks. If my device learns how to speak Mass Storage, any host computer in the world can use it. But, it is considerably more complicated. I can't just look up in a reference guide and bit-bang a protocol out like I can with SPI or I2C.

USB is in fact a stack of protocols, much like HTTP/TCP/IP/Ethernet. Some layers are handled by the hardware autonomously, some need the cooperation of the hardware and the firmware, and some are up to the firmware completely.

The USB project I had been working off of, LPCUSB, is a set of C routines with no readily apparent structure. You can trace through the handlers, to find handlers on top of handlers and handlers all the way down. For one thing, there is code just to work with the USB hardware, then there is code to implement the mass storage class, and then code to implement the serial port class. Much of the interaction between these is through callbacks. In reorganizing it into C++, I had the goal that the device could operate both as mass storage and serial at the same time. There would be a low-level USB class, and then on top of that a Mass Storage class, and separately a Serial class, which could both be active at the same time.

I am going to abandon that idea for now. I don't know enough about the stack to do this yet. So, we do a USB class with several abstract virtual methods, then an MSC subclass which implements the virtuals purely as mass storage, without worrying about sharing. Likewise, we are going to have a USB control endpoint handler, then a MSC subclass which does what is needed for MSC.
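Here is a sketch of that structure (my names, not necessarily the ones in the actual source; pre-C++11 style to match the toolchain):

// Low-level USB device: owns the hardware, knows nothing about any class.
class USB {
public:
  virtual ~USB() {}
  virtual void handleControlSetup()=0;  // endpoint 0 class-specific requests
  virtual void handleBulkIn()=0;        // device-to-host data endpoint
  virtual void handleBulkOut()=0;       // host-to-device data endpoint
};

// Mass Storage Class: implements the virtuals purely as mass storage.
class MSC: public USB {
public:
  virtual void handleControlSetup();  // answers only MSC requests
  virtual void handleBulkIn();        // sends sectors read from the SD card
  virtual void handleBulkOut();       // receives sectors to be written
};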

Battle of Compression

Everyone else was doing it, so I might as well give it a try also. Here's my use case: I want to compress the C++ source code and anything else needed to rebuild my firmware (mostly Makefiles) into one tight little package, then append that package to the firmware itself. Naturally smaller is better. In particular, I would like the Bootloader++ (I'll explain it when I'm ready to publish, but it's a bootloader for a Logomatic-type circuit which handles FAT32 and SDHC) code and source pack to fit within the 64kiB it has allocated for itself.

So, the test case. I already have a rule in my makefile to pack the code:
$(TARGET).tar.$(TAR_EXT): $(ALLTAR)
        $(CC) --version --verbose > /tmp/gccversion.txt 2>&1
        tar $(TAR_FORMAT)cvf $(TARGET).tar.$(TAR_EXT) -C .. $(addprefix $(TARGETBASE)/, $(ALLTAR)) /tmp/gccversion.txt

$(TARGET).tar.$(TAR_EXT).o: $(TARGET).tar.$(TAR_EXT)
        $(OBJCOPY) -I binary -O elf32-littlearm $(TARGET).tar.$(TAR_EXT) $(TARGET).tar.$(TAR_EXT).o --rename-section .data=.xz -B arm


I'm quite proud of the latter, as it packs the archive into a normal object file, which my linker script makes sure gets packed into the final firmware image, with symbols bracketing it so I can dump just the source code bundle.

Anyway, we will look at our challengers:
  • No compression, just a tar file. This one is actually a bit bigger than the total of the file sizes
  • gzip, the old standard, both with no special flags and with the -9 option
  • compress, the really old standard .Z file using the (expired) patented LZW algorithm
  • bzip2, the second generation compression algorithm notable for both better compression and longer compression time than gzip, used both with no special flags and with the -9 option
  • Lempel-Ziv-Markov algorithm, implemented as the Ubuntu commands lzip and xz. A third generation compression algorithm: once again better compression, once again longer time
  • lzop, a compressor optimized for speed and memory consumption rather than size
  • PKZIP, implemented via the zip command available in Ubuntu. This might not be a fair test, as it is not compressing the TAR file, but is in fact using its own method to compress each file individually. So, it has an index, plus each file is compressed anew, meaning there is no advantage from the previous file's compression.
  • 7z, implemented via the 7z command available in Ubuntu. Same notes as with PKZIP.
  • zpaq, a compressor which at each step tries several methods and picks the best. This one takes a monumental amount of time and memory, but seems to be worth it if minimum file size is the goal.


So, we notice a couple of things. One, sometimes -9 doesn't improve things measurably, and sometimes it makes things worse. Next, zpaq rocks out loud as far as compressing C++ source code. It's still larger than the firmware binary image, which is 12423 bytes. It might take more time and more memory than any other compressor, but all that time and memory is in a beefy desktop machine, and not in the Loginator.

Sunday, October 7, 2012

Gem of the Week - Kepler's and Newton's laws and universal gravitation.

The discovery by Kepler of his laws of planetary motion is one of the more amazing bits of observational science, made more amazing by the lack of tools which he had to work with. But, that's not our gem of the week. Instead, we will see how Newton deduced that there is such a concept as universal gravitation, and proved that it worked. Actually we won't see how he did it, but we will see how it can be done with modern techniques such as vectors.


Monday, October 1, 2012

Gem of the Week - Euler's Identity

I am going to present a new feature to all my 0 readers - the "Gem of the Week". This is a reprise of a sometime feature on my old private blog, "Chemical of the Day". I am expanding the topic somewhat from chemicals to anything I find interesting. None of these are necessarily news, but they might be.

This week, it's Euler's Identity.
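For the record, here it is, tying together \(e\), \(i\), \(\pi\), 1, and 0 in one equation:

\[e^{i\pi}+1=0\]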

Keeping secrets

I hate secrets.

Some people have to keep secrets because they are legally obligated. This includes any government classified information. Boy am I happy I don't have to deal with that headache. I bet Robert did.

Some people have to keep secrets because they are contractually obligated. Some projects LASP works on are with customers who treat some aspects as proprietary. For instance, I was brought in on Sentinel long before it was announced, in October of last year. Ball, perhaps under orders from B612, required us to keep the mission proprietary. It is a really cool mission, and I hated not being able to talk about it for months on end.

I keep some secrets because the time is not yet right to publish. I have something cool in mind for the Loginator, but I don't want to shoot my mouth off before I know that it is going to happen. So, watch this space...


File system driver

My C++ification of the Loginator code continues. As noted below, I have the startup code now in full C++ (with 18 lines of inline asm), and I have overthrown the tyranny of main() (that sounds familiar, have I written on this topic before?). I have taken Roland Riegel's sd_raw driver and heavily modified and simplified it. Basically I made it a pure block device. I have dropped all the buffering. You can open an SD card (SDHC fully supported), read a block, write a block, and get the card info.

I looked into C++ifying the partition and FAT32 drivers as well, but they looked too complicated and messy. One of the things I am dead set against is dynamic memory allocation in an embedded process. What if it fails? When that happens, the software crashes (it wouldn't ask for memory if it didn't desperately need it), and when that happens, it is a good possibility that the device it is flying on crashes too.

So, I get to write a fat32 driver myself. Once again, only whole blocks at a time. And to start with, only that which the USB bootloader and Logomatic need: read a file, write a file, delete a file. Also to start with, we fully support FAT32, but do not support long filenames.

One area where I am going to get myself in trouble is writing the file. Sometimes when you write a file, you have to change the file allocation table. When you do so, you need to read the sector containing the change, make the change, then write the new sector. This is all easy, but you need a buffer to do it. Also, you will need to read the table to find the next cluster. What buffer do you use? I know that the LPC2148 is not really memory-limited, but it still seems a waste to set aside a whole block buffer for this.

I started by writing a partition driver. You pass it an open and started SD object and a partition number, and it reads the partition table to get the info for that partition. From then on, you use the partition object to read and write blocks.
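A sketch of that interface (hypothetical names and signatures, not the published code; SD is the pure block device described above):

#include <stdint.h>

class SD {
public:
  bool read(uint32_t block, char* buf);         // one whole block
  bool write(uint32_t block, const char* buf);
};

class Partition {
  SD& sd;
  uint32_t firstBlock;  // filled in from the MBR partition table
public:
  Partition(SD& Lsd):sd(Lsd),firstBlock(0) {}
  bool begin(int partNum);  // read partition table entry partNum
  // All later access is relative to the start of the partition:
  bool read(uint32_t block, char* buf) {return sd.read(firstBlock+block,buf);}
  bool write(uint32_t block, const char* buf) {return sd.write(firstBlock+block,buf);}
};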

Friday, September 28, 2012

A review of the ADXL345 and BMA180

First off, the ADXL345 is exactly what it claims to be. It's my own fault for not reading the data sheet, or rather reading it but not comprehending the information in it.

So, my first experience with digital accelerometers was with the Bosch BMA180. The part had 14 bits of precision and 6 scale settings, from 1g to 16g. When you switched to the lower scales, you got more precision, as expected.

I finally got all 11 DoFs working on the 11DoF board. The first SPI part I brought up was the ADXL345, so I learned about its SPI protocol that way. For one thing, for those of you used to I2C: SPI is different! For instance, on both the ADXL345 and the gyro on the 11DoF, the L3G4200D, the register addresses are six bits. This is fine, since that is a big enough space. You send this address as the first word of any transfer in either direction with these devices. However, both devices use 8 bit words in the protocol. The other two bits control the direction of transfer for the other words, and whether the device is to expect more than one word -- that is, whether it should increment the register address pointer for the next words.

For instance, say you want to read all the measurement registers in the ADXL345. The register map says that this is registers 0x32 through 0x37. Since SPI is a full-duplex protocol, every time you send a byte, you receive a byte. So, send 0x32, ignore what you receive, then send six more bytes (doesn't matter what, I used 0) to read 6 bytes that you care about. Right? Almost, but not close enough. I did this first with the ID register, and it worked, sometimes. But, when I actually tried to read the data, I got back the same word six times. What gives? First, bit 7 of the address is read/#write. You have to set it to read the register, otherwise it interprets the MOSI data as data to be written to the registers. The data registers are read-only, so the part ignored me. Next, bit 6 is the multibyte flag. Set this bit if you are going to read/write multiple registers in one transaction (one continuous #cs assert). Doing this will cause the part to increment the address pointer each time it sends or receives 8 bytes. Since I set neither of these, my part became very confused, since I was telling it to write to a single read-only register six consecutive times.

tl;dr - Tell it you are using address 0xF2, then read 6 more times to get the 6 data registers.
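In code, the multibyte read looks something like this (a sketch: csAssert/csRelease and spi_transfer are hypothetical stand-ins for whatever the SPI driver provides):

#include <stdint.h>

void csAssert();                    // assert #cs and hold it
void csRelease();                   // release #cs
uint8_t spi_transfer(uint8_t out);  // exchange one byte, full duplex

const uint8_t ADXL_READ =0x80;  // bit 7: read, not write
const uint8_t ADXL_MULTI=0x40;  // bit 6: increment address each word

void readAccel(uint8_t buf[6]) {
  csAssert();                                  // one continuous #cs assert
  spi_transfer(ADXL_READ|ADXL_MULTI|0x32);     // =0xF2, regs 0x32..0x37
  for(int i=0;i<6;i++) buf[i]=spi_transfer(0); // send don't-care, keep reply
  csRelease();
}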

This is an unnamed protocol layered on top of SPI, which by itself knows nothing of registers. It happens that the gyro uses the same protocol, so turning it on was a simple matter of verifying that the protocol was the same, and copypasting the code.

Now for the review. The ADXL345 has a programmable range, with choices ±2, 4, 8, and 16g. We will need 16g for the rocketometer. It has a readout precision of 13 bits, almost equal to the 14 bits the BMA180 gives. Now for the bad part. The readout precision is only 12 bits for 8g, 11 bits for 4g, and 10 bits for 2g. It is as if the part was always running at 16g, but reporting saturation if it was out of its current range.

I might as well use an analog part if I am only going to get 10 bits. I had always known this, but only realized the significance when I finally got it up and running, and only got about 250DN out of the part in the 1g field. So, the ADXL gets 2 of 5 stars, not recommended on the Chizumatic scale.

The BMA180 is what I used before, in flight. It has a programmable range, with choices ±1, 1.5, 2, 3, 4, 8, and 16g. It produces 14 bits of precision, and this 14 bits is constant across all ranges, so if you use the 1.0g range, you have actually zoomed in, and get better resolution than when you are at 16g. I can't speak to its accuracy, so it gets 4 of 5 stars, but still not recommended. Why not? The part was discontinued without a suggested replacement as of today, and is no longer available on Digikey. In fact, I am hoarding three of them still in their cut tape, purchased from Sparkfun today at great expense, not even on a breakout board. None of the other BMA accelerometers are as good, and none of the Analog Devices accelerometers are as good either.

Thursday, September 27, 2012

8 DoFs so far...

I have gotten the ADXL345 accelerometer, HMC5883L compass, and BMP180 pressure/temperature sensors working with the Loginator and 11DoF board. This means that all SPI and I2C signals work on both boards. Last thing to do is to get the STMicro L3G4200D gyroscope working.

I just saw a board basically identical to the 11DoF (same sensors except a BMP085). The problem is that it is all done up in I2C.

Friday, September 21, 2012

Starting up an ARM processor with 90% C code

Now why would you want to do a darn fool thing like that? Startup.S works fine, why mess with it?

One word: Understanding.

Let's face it: assembly is hard to read. Especially ARM assembly, with its special registers, shift-operands, and treating memory (load/store) fundamentally differently from registers (mov). You can't get more than a few lines through an assembly listing without having to crack the ARM ARM.

Besides, I have a philosophy to use a consistent language throughout a program to the extent possible. So, for the embedded stuff, it's C++ when you can, C when you have to, and asm only when you really have to.

So what stands in the way? Mostly it's the fact that the toolchain makes it difficult to put things exactly where you want:

  1. The interrupt vector table. This is what keeps the code from being 99% C instead of 90%. On an x86 processor, the interrupt vector table is purely a list of addresses. Upon interruption, the processor looks up the correct vector and loads it straight into the instruction pointer, causing the processor to branch to the handler. In many other processors, the table is actually code. When an interrupt happens, the processor jumps to the correct slot in the table and executes it. Usually this is a jump to where the handler really is. In ARM, there isn't enough space for a long jump in a single instruction, so each vector says to load the program counter with another value from memory, usually in a table located right after the true interrupt table. In any case, C just isn't a good language for building the table. What we do then is use inline asm. We make a function called vectorg which is purely inline asm (see the sketch after this list). The first half is instructions, specifically the long jump instructions with embedded pointers to the second half, which is the address table. This is populated with symbols, so the linker can patch it up.
  2. Putting things where we want. The old Turbo Pascal had a mechanism for assigning a variable to a particular point in memory. In asm, this is easy: just define a symbol with a hard-coded address. This just isn't possible in C. So, we need cooperation from the linker. We need to specify the linker script. In particular, we need to say that a particular named ELF section is to be linked right at the beginning of flash, and then make sure that the table is in fact in that section, at the beginning of it. The easiest way to do that is to turn on -ffunction-sections when compiling, then link .text.vectorg at the beginning. The source code and the linker script have to agree on this.
  3. Registers - This is the other 1%. To set up the program, the reset handler has to set up the stacks, move the initialized data from flash to RAM, zero out the uninitialized data, and call all the constructors for global objects. But, in order to set up the stacks, the code has to be able to set the CPSR register, so it can flip through modes and set each mode's stack register. It also has to be able to write directly to the stack register itself.
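Here is a sketch of the vectorg idea from item 1 (GCC inline asm for the LPC2148; the handler names are illustrative, and the linker script has to pin .text.vectorg to address 0 as described in item 2). Each LDR slot loads the PC from the table entry 24 bytes ahead of itself; the slot at 0x14 is reserved for the vector checksum that the flash utility fills in.

extern "C" void reset_handler();
extern "C" void undef_handler();
extern "C" void swi_handler();
extern "C" void pabt_handler();
extern "C" void dabt_handler();
extern "C" void irq_handler();
extern "C" void fiq_handler();

__attribute__((section(".text.vectorg"), naked))
void vectorg() {
  asm volatile(
    "ldr pc, [pc, #24]   \n"  // 0x00 Reset
    "ldr pc, [pc, #24]   \n"  // 0x04 Undefined instruction
    "ldr pc, [pc, #24]   \n"  // 0x08 SWI
    "ldr pc, [pc, #24]   \n"  // 0x0C Prefetch abort
    "ldr pc, [pc, #24]   \n"  // 0x10 Data abort
    ".word 0             \n"  // 0x14 reserved; holds the ISP checksum
    "ldr pc, [pc, #24]   \n"  // 0x18 IRQ
    "ldr pc, [pc, #24]   \n"  // 0x1C FIQ
    ".word reset_handler \n"  // 0x20 start of the address table
    ".word undef_handler \n"
    ".word swi_handler   \n"
    ".word pabt_handler  \n"
    ".word dabt_handler  \n"
    ".word 0             \n"  // filler keeps the table lined up with the slots
    ".word irq_handler   \n"
    ".word fiq_handler   \n"
  );
}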

Wednesday, September 19, 2012

Around the world in (considerably less than) 80 hours

A thought experiment. I would need to go home and throw some stuff in a backpack, and get my passport, but this is doable. It was 2012 Sep 19 11:00am MDT as I searched.


All dates are 2012; arrival times are local to the destination airport (the next row's departure airport).

Layover | From (TZ, UTC offset) | Depart local / UTC | Arrive local / UTC | Flight | Flight time
-- | Boulder (MDT, -6) | Sep 19 11:00AM / Sep 19 17:00 | Sep 19 11:00AM / Sep 19 17:00 | Look up flight | 00h00m
09h45m | Denver (DEN) (MDT, -6) | Sep 19 08:45PM / Sep 20 02:45 | Sep 20 12:35PM / Sep 20 11:35 | American 6169 as British Airways 218 | 08h50m
02h10m | London (LHR) (BST, +1) | Sep 20 02:45PM / Sep 20 13:45 | Sep 21 12:50AM / Sep 20 20:50 | Etihad 20 | 07h05m
01h35m | Abu Dhabi (AUH) (GST, +4) | Sep 21 02:25AM / Sep 20 22:25 | Sep 21 03:25PM / Sep 21 07:25 | Etihad 424 | 09h00m
07h05m | Manila (MNL) (PHT, +8) | Sep 21 10:30PM / Sep 21 14:30 | Sep 21 08:00PM / Sep 22 03:00 | Philippine 104 | 12h30m
09h50m | San Francisco (SFO) (PDT, -7) | Sep 22 05:50AM / Sep 22 12:50 | Sep 22 09:21AM / Sep 22 15:21 | United 729 | 02h31m

Total layover: 1d06h25m
Total trip time: 2d12h36m; total flight time: 1d15h56m
Also as of 11:00am, this trip had a cost of $5,351.89. I couldn't swing that right now, and I have work to do for the next few days, but Phileas Fogg wouldn't have any such trouble. He had £20,000 cash in his pocket, and this trip would cost about £55.16. There are probably possible trips with tighter connections. There are surely trips that are cheaper with a longer lead time -- I found one in January for ~$3500.

This trip is definitely around the world. It crosses all the meridians. But it is only 32833km as the crow flies. I have heard that a trip around the world must cover a distance longer than one of the tropic circles (36787km). This trip doesn't cut it if it follows the great circle route, but if the actual routing is about 12% inefficient then it counts.

Wednesday, September 12, 2012

Mathjax

Here is a new cool thing: MathJax

\[x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} \M{A} \MM{P}{^-_i}\]

MathJax is apparently a TeX implementation written entirely in JavaScript. It looks like it scans the source code of your page, looks for delimited equations, then interprets the TeX within and renders it using math fonts.

Instructions for how to get it to work for Blogspot are here.

Now I get to go through all of my Kalman filter stuff and fix the math there into something actually readable.

Wednesday, September 5, 2012

Complete Precision

Project Precision has reached its successful conclusion. I now have a physical hardware clock with an hour, minute, second, and third hand, and enough accuracy to justify needing a third hand.



As I have mentioned before, I noticed that an Arduino Nano has precisely the pinouts necessary to drive a charlieplex with 240 lights. The interesting thing is how few leftover resources there are. There are two analog inputs that are useless in this design. Every single other pin is used. I even considered giving up the crystal inputs to get two more digital pins. I had to include a digital multiplexer since the ATMega only has one serial port, and it needs to listen to both the USB port and the GPS.
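For anyone unfamiliar with charlieplexing, here is roughly how one LED gets lit (a sketch, not the actual clock library; the pin list and wiring map are hypothetical). With 16 pins there are 16*15=240 ordered pairs, one LED per pair:

void lightLed(const uint8_t pins[], int nPins, uint8_t anode, uint8_t cathode) {
  for(int i=0;i<nPins;i++) pinMode(pins[i], INPUT); // tri-state everything
  pinMode(anode, OUTPUT);   digitalWrite(anode, HIGH);
  pinMode(cathode, OUTPUT); digitalWrite(cathode, LOW);
  // Only the LED wired from anode to cathode sees current; scan through
  // the pairs quickly and many LEDs appear to be lit at once.
}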

As I said before, I will not make one for you for less than $300. The parts alone cost almost that much. However, I am going to publish everything you need to make one yourself.

This is the Digikey part list:


Quantity | Digikey Part Number | Part | Value | Case | Placement | Price each | Min qty | Ext price
2 | 445-7483-1-ND | Capacitor | 4.7uF Ceramic | 0603 | C010 C418 | $0.24000 | 1 | $0.48
2 | 478-6025-1-ND | Capacitor | 18pF 2% NP0 Ceramic | 0603 | C407 C408 | $0.40000 | 1 | $0.80
2 | 445-1316-1-ND | Capacitor | 100nF Ceramic | 0603 | C420 C502 | $0.10000 | 1 | $0.20
61 | 754-1359-1-ND | LED | Red 320mcd | LED 0603 | D000-D059 D502 | $0.14040 | 1 | $8.56
60 | 754-1124-1-ND | LED | Yellow 150mcd | LED 0603 | D100-D159 | $0.11160 | 1 | $6.70
60 | 350-2036-1-ND | LED | Green 300mcd | LED 0603 | D200-D259 | $0.51840 | 1 | $31.10
61 | 350-2037-1-ND | LED | Blue 140mcd | LED 0603 | D300-D359 D501 | $0.48960 | 1 | $29.87
4 | CRA4S847CT-ND | Resistor Pack | 47 | CRA04 | R1 R2 R3 R4 | $0.04300 | 10 | $0.43
2 | P680GCT-ND | Resistor | 680 | SMD 0603 | R501 R502 | $0.10000 | 1 | $0.20
1 | CRA4S810KCT-ND | Resistor Pack | 10k | CRA04 | R606 | $0.04300 | 10 | $0.43
1 | SW1021CT-ND | Switch | SPST | B3U-1100P | S429 | $1.03000 | 1 | $1.03
1 | ATMEGA328P-15AZCT-ND | Microcontroller | ATMEGA328P | 32-TQFP | U401 | $6.45000 | 1 | $6.45
1 | 768-1007-1-ND | USB interface | FT232RL | 28-SSOP | U501 | $4.50000 | 1 | $4.50
1 | NC7SZ157P6XCT-ND | Multiplexer | Noninv 2 input | SC-70-6 | U602 | $0.41000 | 1 | $0.41
1 | 887-1319-1-ND | Crystal | 16MHz | 7M | Y401 | $1.69000 | 1 | $1.69

Lights: $76.23
Rest: $16.62
Total: $92.85
This costs on the order of $100, but the vast majority of the cost is in the 242 lights (240 for the hands, 2 for the TX/RX indicator). I got the above prices today from Digikey with no tax or shipping added on. You can get cheaper lights if you are satisfied with not using green or blue.

You will also need some connectors for the boards:
Sparkfun Female Header Pack - a set of two 6-pin and two 8-pin sockets. You will need this complete set, plus another 6- or 8-pin socket that will be cut down to 4 pins. You might as well get two of these sets, since they are cheap.
Two Arduino 6-pin stackable headers
A strip of male straight headers and male right-angle headers. You need 32 pins' worth of straight headers and 4 of right-angle headers.
A long USB-A plug to USB-anything cord. You are going to cut the cord off as far from the A end as possible. You will also need a way to connect this to the 4-pin right-angle connector. I used a 5x2 ribbon connector (yes, 5, even though only 4 pins are needed; it's what I had on my bench at the time).

While we are going through the Sparkfun shopping list, I recommend getting the UP-501 GPS receiver. In principle, any GPS receiver can work, and you can even set the clock over USB and have it run free without any GPS at all, but then you don't get sufficient precision to justify the third hand. The socket on the circuit board is designed for the UP-501, and it fits nicely on the back of the board in between the four screws. If you use another GPS, you will need to make a connector for it. Get one that runs at 3.3V (or has a voltage adapter) and that has a PPS signal.

Finally, you need the light pipe parts. Ponoko does great work, but it is quite a bit more expensive than just the bare plastic sheets cost. The light pipes as I designed them are exceptionally fragile, and I forgot to put tabs on the light pipes to connect them directly to the four screws. If I were to make another clock, I would fix the latter flaw. As it is, the light pipes have holes for each LED, and these are used to hold the hands in place.

The firmware is plain ordinary Arduino code. There are two separate sketches, one to test each light in a controlled condition, and one to actually be a clock. The Charlieplex driver is put into a library so that the test code tests the same code that the clock code uses.

Wednesday, August 29, 2012

AppArmor

AppArmor is one of those things that our distribution engineers like to put into our Linux distributions without telling us. If you don't know about it, it can cause some WEIRD errors.

First off, AppArmor is basically another, more restrictive set of file permissions, based not on the userid but on the filename of the process itself. If a process is not allowed access to a file by AppArmor, it will fail, just as if the permissions were set wrong.

If you don't know about this, it can be a head scratcher. Like for instance, I just moved my MySQL tables from their natural home to the raid. All the permissions are set properly, because mv does that when it can, and because I was root at the time. But MySQL still wouldn't work.

There is a set of file restrictions in /etc/apparmor.d/ . Find the right file, named after the process path with the leading slash dropped and the other slashes replaced by dots (/etc/apparmor.d/usr.bin.mysqld controls access by /usr/bin/mysqld). Set the permissions in there, and things will work.

Wednesday, August 15, 2012

Resurrecting Omoikane

So, as you may or may not know, I ran a nice little Linux server with all my data on it, including a filesystem dating back to at least 2003 with files back to 1999. I used LVM to spread the file system across all the drives I had, so that I didn't worry about which file was on which drive. I let the filesystem driver handle that.

Well, a couple of months ago, one of the drives in Omoikane started emitting this terrible shaking noise, which panicked the kernel. When I restarted the system, that drive was dead.

So, now I get to learn more than I cared to about the LVM and ext4 filesystems, in order to recover what I can from the good drive. As it happens to turn out, the system was in five "stripes", continuous blocks of LVM extents. Three of them, including the first one, are on the good disk, representing 2TB of the total 3.5TB system.

First thing is to write a really primitive LVM driver. I used the LVM tools to get a map of the stripes, then hard coded that map into my recovery program. This means that my program is not directly applicable to your problem, if you stumbled across this looking for LVM-saving hints. But, I did learn something about LVM: It uses the first 384 sectors (512 bytes each, 192kiB total) of each physical volume to record info about the entire LVM system the volume is participating in. This means that if any drive is still good, I can use it to reconstruct the structure, and find out which stripes I have and which I don't.

After the LVM header, each physical volume is just a trackless waste of unstructured bytes. The stripe information in the LVM header is needed just to see the order of the LVM extents on the physical volume. This is actually good, as it means that I don't have to interpret anything in the sea of data, at least in an LVM sense. To find the Nth extent of a logical volume, use the stripe map to get which extent of which stripe on which drive, then seek to 384*512+M*65536*512 to get to that extent, where M is the physical extent number you get from the stripe map.
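That arithmetic, as code (a sketch; the 384-sector header and 65536-sector extents match my volumes, not LVM in general):

#include <stdint.h>

const uint64_t SECTOR_SIZE=512;
const uint64_t LVM_HEADER_SECTORS=384;  // 192kiB of LVM label and metadata
const uint64_t EXTENT_SECTORS=65536;    // 32MiB physical extents

// Byte offset on the physical volume of physical extent M, where M comes
// from the (hard-coded) stripe map.
uint64_t extentOffset(uint64_t M) {
  return (LVM_HEADER_SECTORS + M*EXTENT_SECTORS)*SECTOR_SIZE;
}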

Next it's on to writing a really primitive ext4 driver. Ext4 is rather sophisticated, in that it keeps track of a lot of data and uses complicated algorithms to decide where and when to write what. The good news is that Ext4 is a straightforward extension of Ext3, which again is an extension of Ext2, which was quite a bit simpler. Because it makes an effort at backward compatibility, much of Ext4 is readable with the simpler Ext2 logic.

For instance: Ext4 is divided up into block groups, same as Ext2. Each block group has an inode table and some bitmaps to help the full-blown driver allocate things quickly and optimally. Some block groups, always including the first, include a superblock with information about the entire file system, and a table of block group descriptors. Each block group table contains descriptors for all the block groups. So, in the first block group, we find a superblock, descriptors of all the block groups, and an inode table which lets us start finding files.

Now here's the clever bit. For a variety of reasons, pointers in different structures may point to blocks in other groups. That's ok, because all block pointers are absolute, meaning that they are all relative to the beginning of the filesystem. In one of the ext4 sophistications, it groups block groups into metagroups, and combines the inode tables and bitmaps from several block groups into one contiguous stream. For instance on Omoikane, 16 contiguous block groups made up a metagroup, so that the inodes for all 16 groups are in the first group. My code doesn't care, because the inode pointer in the block group descriptor points to the right place inside the inode metatable.

Another clever bit is in the directory structure. As is typical with a Unix filesystem, all the important information about a file is in the inode, including the file's length, permissions, owner, and block map. Everything you need to read a file, in other words. The directory only contains the name and an inode index. This is how hard linking is implemented: If two directory entries point to the same inode, then the file has two hard links, and each entry has equal claim to being the true name of the file. The directory entries don't even have to be in the same directory.

Ext2 just searched each directory entry linearly. Ext4 has the option of indexing the directory file, but it is done in such a way that Ext2 logic will completely ignore the index. The index data is actually in the directory entry for '..' after the file name.

In ext2, the closest thing to a "file allocation table" analogous to FAT filesystems is the block map. This map starts in the inode, which has the index for the first 12 blocks used by the file. Files of up to 48kiB are accommodated thusly. If the file takes more than 12 blocks, the 13th entry in the index points to another data block, the indirect block, which is completely full of pointers to the actual data blocks, allowing 4kiB*1ki blocks=4MiB more data, with the cost of 4kiB extra index data. For larger still files, we have the 14th entry which points to the double indirect block. Each pointer in this block points to another block full of pointers to the actual data blocks. 4kiB*1ki blocks*1ki blocks=4GiB more data, at the cost of 4MiB+4kiB more index data. Similarly the 15th entry points to the triple indirect block, which allows 4TiB more data at the cost of 4GiB+4MiB+4kiB of index data. Each step means about 0.1% overhead in storing a large file. Larger block sizes make the indirect blocks able to hold more pointers, so the level factor is more than 1024.

However, in ext4, a new block index called an extent tree (not the same as LVM extents) is used. The block map tree is fairly complicated, and needs to mark every block a file uses. Extents basically run-length compress this by using one extent entry for each contiguous run of blocks the file uses.

Sunday, August 5, 2012

I told you it would work!

The above image is a 256x256 thumbnail from one of the front Hazcams on the Curiosity Rover, and represents more science data than sent back by any surface mission not sent by Americans.

And then there was this:

While recording the MSL entry data in canister mode, MRO also snapped this photo of MSL on its parachute. They said that this photo would be harder, because even though (or rather because) it's closer, the angular rates are higher. Well, this is a much better picture than that of Phoenix.

I gotta Feeling that Tonight's going to be a Good Night...

Go MSL! Stick the landing! Thousands of people have worked hard on you, millions are wishing you well.

Some humor:
When I say "overheard", I actually mean that I said that when I was working at a summer job at JPL. I'm not saying that I had any influence over arming the rover, but let's just say that all successful surface missions have been American.

Thursday, August 2, 2012

Why do I do this?

I am throwing a zombified lobotomized Omoikane back into the atomic banana peeler, just for its compute power. I have Unplugged All the Things (drives) and am going to run it just off of a USB stick with Ubuntu Precise on it.

I'm no Dan Maas. I don't have the time necessary to give this the attention it deserves. I don't have enough computer power to get a full model of both the lander and the terrain into view at once.

What I can do is easy. Spice kernels take all of the work out of predicting where things are. I don't need an aero model, I don't need a guidance program, I don't even need a numerical integrator.

So why do I bother? Especially in the face of this?

Because I like it. I love doing computer animations. I love collecting data and models. It's a tradition started with Phoenix. If I had MER data I would probably do them too. And above all, Absurd Accuracy is Our Obsession. I have seen how SUFR works and how the balance masses deploy. They look weird, but a bit of thought suggests it's probably right. I have seen the lander scream into Gale Crater and descend against the backdrop of Mt Sharp. I have seen the parachute deploy and the backshell swing on it. I have seen the Skycrane Maneuver, and it doesn't look half as crazy as it once did. To be honest, it's the part of powered descent before the skycrane that has me most worried.

Besides, it's not all bad. I finally found a nice map of Gale crater. So, my model of Mars is lacking in detail: dozens or hundreds of meters resolution in topography, smaller but still large blocks in the image map. Better is available, but not in color, and difficult to mosaic. Besides, I don't have the memory for better. POV is particularly inefficient with meshes, and the maps I do have tax Aika's memory.

Anyway, with that map in place, with the backshell scorched, and so on, it looks good to me. Maybe I am comparable to Dan Maas. A little more time, a little more greeblies on the descent stage and rover (and perhaps Santa will bring me a copy of SolidWorks to do it with) and a little more patience and memory, and I might have a world-class animation. I at least hope to have a LASP-class animation to show on Sunday night.


Sunday, July 29, 2012

There's always one more thing...

...This time it is deploying various jettisonable objects. I could just Do It, but there is always a philosophy problem.

Jettisoning things is easy. We know ahead of time the exact time and place where each object will be jettisoned. It's easy to get an exact analytical solution ahead of time, so that all we need is the time an object is jettisoned, and its position, velocity, and orientation at that time. There is an analytical solution for its future trajectory; we just apply some random spin to its orientation.

Except, I don't know that there really is an analytical solution when drag is significant. We can fake it with the entry balance masses, since drag is not a big part of their lives. But what about the heat shield? What about the backshell and parachute? What about the descent stage in its flyaway phase? We'll save that for last.

Apparently there is such an analytical solution, but only for drag proportional to the velocity, not the square of the velocity, which is the regime in which conventional drag coefficients work. There is another solution, but the horizontal and vertical velocities are implicit functions of each other. So, it's on to everyone's favorite brute force application! No, not a hammer, a numerical integrator. Almost the same thing though, I can see why there would be confusion.

Now the problem with numerical integration is that it requires a state. It requires memory. And due to the nature of the animation loop (I haven't restored persistent variables to Megapov yet, and it wouldn't do well in the Atomic Banana Peeler anyway) there is no convenient way to maintain this state. So, on each frame, we get to integrate all the way from the time of jettison to now. We can stop when we know the object in question is off-screen, so it will be for a few seconds at most.
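Here is the kind of thing each frame has to do, as a sketch (hypothetical state type; the drag constant is made up for illustration, and the drag is the velocity-squared kind):

#include <cmath>

struct State { double pos[3], vel[3]; };  // z is up

State propagate(State s, double tJettison, double tNow, double dt=0.01) {
  const double g=3.71;   // Mars surface gravity, m/s^2
  const double k=0.001;  // quadratic drag over mass, 1/m -- illustrative
  for(double t=tJettison; t<tNow; t+=dt) {
    double v=sqrt(s.vel[0]*s.vel[0]+s.vel[1]*s.vel[1]+s.vel[2]*s.vel[2]);
    for(int i=0;i<3;i++) {
      double a=-k*v*s.vel[i]-(i==2?g:0.0);  // drag opposes velocity; gravity down
      s.vel[i]+=a*dt;   // simple Euler step; swap in RK4 if it matters
      s.pos[i]+=s.vel[i]*dt;
    }
  }
  return s;
}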

Here's the current version of the animation, with the art for the descent stage and rover in place. Almost all the art is done now. I need some decorations on the backshell, to match reality and to show the capsule rolling (or not). Then I still want to char the backshell, and perhaps the heatshield too, as a function of integrated heat.

Tuesday, July 24, 2012

Megapov+CSPICE documentation

I have added a feature to my version of Megapov which allows access to a limited subset of JPL Spice directly from the Povray scene description language. Here is the sad part: It was written for Project Blinn, but Project Blinn was (may have been) lost in the Great Hard Drive Crash of 2012. So, I am going to use it on the MSL movie I am (still) working on.

Wednesday, June 6, 2012

A new weapon in the war against Bugs and Photino Birds

Keil μVision 4 Debugger/Simulator. This is a small part of the larger (450MB, really guys?) μVision suite, but the only part I care about. With it I was able to debug my Task code, which is interesting given that I did not write or compile my code in μVision. Get the program and install it (it installs fine under Wine on Linux) and then bring up any example project, compile it, and get into debug mode. Then, right-click the disassembly window and select "Load Hex or Object File...". Now select the master .elf file that was compiled with GnuARM. It will load, and the simulated processor will be reset, so it starts at location 0 (of your code, apparently it skips the ISP).

And now for a word (or thousand) about tasks...

Tuesday, June 5, 2012

There's a little black spot on the sun today...

I saw it! I owed it to Captain Cook to make my best effort to see the transit (the next one is more than 100 years away, in 2117) and I did.


OK, I didn't directly see it, but I did the next best thing: saw it with a projection system. I had my binoculars pointed straight at the sun, then held up a box as a screen: white, so I could see the image, and stiff, so it wouldn't blow around in the wind. I used the shadow of the binocs to aim, and then used the focusing knob to, well, focus. I have previously seen sunspots with this setup, but it was too cloudy to really see them today.

Speaking of which, today is a partly cloudy day here at St Kwan's, and I was clouded out of seeing ingress. These images are from about 00:10 UTC 6 Jun 2012.

Saturday, May 19, 2012

SDHC USB Bootloader and Logging Firmware work!

I have the old Logomatic 2.3 working with the new SDHC USB bootloader. At first, FAT32 was not enabled, because the mass storage part doesn't even need a filesystem driver; the host takes care of that. However, in order to find and install FW.SFE, the bootloader does need FAT32, so this version has it.

Also, the old Logomatic firmware has been updated the minimum amount necessary to use SDHC and FAT32.

All of this works for me, and generates log files, but it sometimes takes a LONG time to mount over USB (a minute or more).

So now I have put my code where my mouth is.
USB_SDHC_Bootloader.zip
Logomatic_SDHC_FAT32.zip

Friday, May 18, 2012

More on SDHC

So here's the problem. There are in fact two SD card drivers in the Logomatic firmware. Hey, by the engineering method, anything that works is good, and the Logomatic code works, but it's not great code. There are entire sections that are included but not used, like the USB driver in the main logging firmware. Now we have two pieces of code to do the same thing -- access the SD card. There is Roland Riegel's code, used by rootdir.c and by both the bootloader and the main code to read and write the card, and there is the code used by the USB driver, which is entirely separate and entirely unready to do SDHC.

So, I am going to update the USB driver to use Roland's code. This just means mapping what the code is doing now onto the various parts of sd_raw.c. Piece of cake, probably. We'll see.
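
Concretely, the mapping should look something like this sketch. I'm assuming Roland's byte-offset interface from sd_raw.c here, and the msc_* names are placeholders for whatever the USB mass-storage driver actually calls:

```c
#include <stdint.h>
#include "sd_raw.h"   /* Roland Riegel's raw SD/SDHC access layer */

#define BLOCK_SIZE 512u

/* USB mass storage thinks in 512-byte logical blocks; sd_raw thinks in
   byte offsets, and hides the SDHC block-vs-byte addressing difference
   internally. So the shim is just an offset multiply. */
int msc_read_block(uint32_t lba, uint8_t *buf) {
    return sd_raw_read((offset_t)lba * BLOCK_SIZE, buf, BLOCK_SIZE);
}

int msc_write_block(uint32_t lba, const uint8_t *buf) {
    return sd_raw_write((offset_t)lba * BLOCK_SIZE, buf, BLOCK_SIZE);
}
```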

Anyway, once this is done, I'll post a message about it on the Sparkfun forums, direct them here, and get literally some readers on this blog, rather than just myself like I get now.

No bragging until I can put my code where my mouth is.

By the way: I am doing all of this development on the Loginator 1.1, so the SD card slot and USB port work. Yay for me!

The LPC1768

It's sooooo close to being awesome. Cortex M3 with all the improvements (don't be scared by the word Harvard, the program-visible memory map is flat) and full pin compatibility with the 2368.

Almost.

This part has no SD/MMC port. Still, with SPI and DMA, maybe we don't need SD/MMC. The part is otherwise pin-compatible with the 2368, so the 2368 breakout board should work with a 1768 also.

I am going to get a 2368 talking to an SD card and sensor before I try messing around with the 1768.

Loginator2148 v1.1 is assembled

I have every confidence that the board is correct. I have tested that the known bugs in the old board have been eradicated. I have every confidence that the parts are properly placed and that all joints are good. I have tested the power supply, charge circuit, and lights.

So why am I not more confident in this board?

Is it live...

...or is it Memorex?
All of the switches and the battery terminal were partially melted/burned with hot air. The power switch is especially bad. Everything seems to work, though. It's just ugly. Next time, we bake the plastic parts. Maybe we bake everything. Maybe we salvage this board by lifting all the burned parts, replacing them, and baking it. Maybe we just don't make this board again. I would kinda like to get out of the 2148 business and into the 2368 business.

Update: The thing is alive; the Arduino terminal just doesn't know how to talk to it. I sent ? and saw 'Synchronized' on the logic analyzer (great device, I'll write a review some day). It just didn't show up in the serial terminal. This means that the controller is alive enough to communicate over serial, which means that the crystal and decoupling caps are on and the power is properly hooked up. The thing was running on battery when I tested this, so the whole battery part works too. Wow, I guess a lot is tested just with "Synchronized".
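
For reference, that "Synchronized" is the first step of the LPC ISP autobaud handshake: the part measures the bit timing of the '?' character, which it can only do if power, crystal, and UART are all good. A POSIX-flavored sketch, assuming the serial port is already opened and configured as fd:

```c
#include <unistd.h>

/* Minimal LPC ISP sync; error handling omitted. */
void isp_sync(int fd) {
    char buf[32];
    write(fd, "?", 1);                  /* target autobauds off this byte */
    read(fd, buf, sizeof(buf));         /* expect "Synchronized\r\n"      */
    write(fd, "Synchronized\r\n", 14);  /* echo to confirm the baud rate  */
    read(fd, buf, sizeof(buf));         /* expect "OK\r\n"                */
}
```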

I wish these pictures showed the glorious royal purple and gold of the boards.
This is a NON-FLIGHT configuration. Not that the Loginator is ever intended to fly, anyway. This is just so that I can learn how to talk to the sensors and get the IMUinator working.

Wednesday, May 16, 2012

11DoF is working

I finally put together an 11DoF. All sensors are on board, but I decided to forgo the OR gates until I see that I need them. That means that both solder bridges on the back are closed, and the board is subject to Analog silliness if that decides to rear its head.

Here we have the design...

And here is the physical implementation. Is reality ever as clean as design?

All of these sensors work, in that the I2C compass and baro/temp sensor actually work, and the SPI accel and gyro respond to SPI commands and return their proper chip IDs. I haven't tested full functionality on those guys yet.

So, do I start making and selling these things for $50 each? Or do I just release the design and say "have at it, build it yourself"?

If the latter, here are the files:

11DoF Schematic (Eagle 6.x)
11DoF Board


Next version? There's always a next version. Both STMicro (makers of the L3G4200D) and Invensense (makers of the ITG3200 that I gave a glowing review to) have 6DoF sensors, and one of them even makes a 9DoF sensor with a magnetometer in the same chip as the accelerometer and gyro. The problem is that these parts are not yet available from Digikey, and therefore not really available at all. Also, the interfaces are ugly: Both parts have two SPI interfaces. The Invensense part has one for the gyro and one for the accelerometer, while the STMicro part has one for the gyro and one for the acc/compass. It is almost as if two devices are just crammed into the same case. I can cram together parts, in fact that's what the 11DoF board is. I want a 6/9DoF part to have a single SPI interface with a single chip select, and perhaps more importantly, I want the acc and gyro to sample at the same time and have one interrupt line.

Wednesday, May 9, 2012

Designing a large charlieplex

I saw a new cool way to design a charlieplex, one which is easy and obvious, even for an extremely large LED matrix. This was buried in a post below where I blather on about my own problems, so I broke it out into a post that Google is more likely to find.
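
That post has the layout trick itself; for reference, the arithmetic that makes charlieplexing attractive is that N tri-state pins drive N(N-1) LEDs, one per ordered pair of distinct pins. A throwaway C loop to enumerate the pairs (the pin count is arbitrary):

```c
#include <stdio.h>

int main(void) {
    const int n = 8;   /* number of tri-state I/O pins */
    int led = 0;
    for (int src = 0; src < n; src++)        /* pin driven high */
        for (int snk = 0; snk < n; snk++)    /* pin driven low  */
            if (src != snk)
                printf("LED %2d: anode pin %d, cathode pin %d\n",
                       led++, src, snk);
    printf("total: %d LEDs from %d pins\n", led, n);  /* 56 from 8 */
    return 0;
}
```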

Monday, May 7, 2012

Details Matter...

...and they seem to cost about $50 each.

The boards for Precision 1.1 arrived on Saturday, part of an ocean of purple boards. I had previously ordered all the electronics for it, so I was all ready to assemble it, except...

  1. In Eagle, I copied the crystal from some other source, and it was still labelled 12MHz like it was from some LPC2148 board. Before, I had manually ordered parts, but this time I used add-digikey.ulp, which dutifully noted that the part was 12MHz and the same case as the Logomatic crystals, so it ordered a 12MHz crystal and I missed it.
  2. I had decided to use 91Ω resistors to try to solve the "extra lights" problem, but after getting the clock set up, I decided that brightness was more important than extra lights, so I went back to 47Ω resistors, as on Precision 1.0.
This wasn't that much of a problem, as I had a complete but unusable board (the victim of a previous forgotten detail), so I could lift the 47Ω packs and 16MHz crystal from it without ruining the working front board I already had. This avoids the "one brief shining moment" problem, where you have something that kinda works, but you want to improve it and destroy it in the process. I still have a working Precision 1.0 front board.

Now for today's $50 detail. The primary features of Precision 1.1 are the green wire fixes from last time and a port for the UP501 GPS unit in place of the EM406 port from 1.0. Well, I was planning on using the back board from Precision 1.0 as-is, since it already has $50 of blue and green lights on it. However, it has the old GPS port on it, with certain pins grounded and certain pins connected to VCC - the wrong pins, as it turns out. One of the new 3.3V pins matches one of the old VCC pins, so there is a direct 1.7V short, which I'm surprised didn't completely burn out the FT232. Maybe it did partially burn it out, but there is no way to check right now. The new PPS pin is grounded on the old board, in a flood of polygon that is impossible to isolate. Even though the back board has no electronics on it, it still interferes. This is the $50 detail. Now I have to order a new set of lights. Or... I can lift the lights off of the old board, but that would create a brief shining moment problem. So, it will be new lights.

Friday, April 20, 2012

Program 66

Or: Chauffeur or Pilot?

I just got finished reading Digital Apollo, where one of the major themes is how much automation is the right amount. The author refers to the two poles as the "Chauffeur", someone who just wants to get a job done, and the "Airman" (I will say "pilot" from here on out), someone who flies for the joy of flying and personally controls a complicated machine. This debate can easily be seen in other machines as well. For instance, do you like a manual or an automatic transmission?

Are you worthy of the term "driver" if your idea of driving is getting on I-80, turning on cruise control, and doing only what is necessary to stay on the road and not hit anyone? Would you like to turn those tasks over to your car as well?

On the other hand, do you love choosing the exact shift points to get the most out of your car? Do you want to be able to do things in a manual that you just can't do in an auto, like a push start or engine braking? Why don't you use a manual choke as well as a manual transmission? Is there a manual timing adjust in your car? Fuel injector control? Theoretically, a manual transmission can be more efficient than an auto, but is it really more efficient in your hands?

I for one am squarely in the automation camp. I drive because I want to get somewhere. I program for the fun of it, the way that some people drive for the fun of it. After being around and fascinated by the aerospace business over my whole life, I finally discovered that I am not a pilot. I like the idea of space travel, and I wish I could do it, but I'm not one who likes to control my machines that much.

Digital Apollo is mostly about the struggle between the computer and software engineers, who just wanted to get the job done, and the astronauts, who wanted to actually fly in space rather than just ride. In the late 50's and early 60's, it was becoming increasingly difficult to justify human controllers. For instance, no US rocket has ever been manually flown from the ground into orbit. Ascent guidance was developed for ballistic missiles, which obviously couldn't carry a pilot, and this guidance was adapted to launch vehicles targeting orbit instead of flaming death for our enemies.

Even if a human were to control a rocket, they would not be able to navigate or guide it. Navigation is done by the inertial measurement unit (IMU) and some software which integrates its measurements. Guidance is done by some complicated formulas which take the current and target state vectors as inputs and produce a vector to align the thrust to. All a human would be able to do is point the rocket in the direction that the automatic system already calculated.

All this brings us to the most complicated maneuver ever attempted in space: deorbit and landing on an airless world, at a particular target, in a safe spot. Surveyor performed landings, but it had an error ellipse on the order of tens of miles, and it landed directly from an impact orbit. Apollo was a complicated coordination of humans and machines, each doing what they were best at. The landing software was broken up into several major program modes:

  • P63 was called Braking, and lasted from before powered descent initiation (PDI) to the "high gate", where the landing site came into view. P63 was basically full automatic; the only job of the astronauts was monitoring. Its job was to get from orbital speed, about 1.8km/s at about 16km altitude, to almost stopped, only 200m/s or so at 3km altitude, in the most fuel-efficient way possible, and without hitting the ground. Calculus of Variations provides a unique optimal trajectory, which the guidance system attempted to follow. It carried an approximation of this trajectory, perhaps similar to the powered explicit guidance routine. Early in the descent, the surface of the moon was too far out of range for the landing radar, so the guidance source was the inertial platform. After the vehicle descended lower, the radar became active and the Kalman filter navigation system used both the radar and inertial observations to produce its estimate. The guidance equations took the Kalman filter output as their estimate of where the vehicle was, and calculated the correct descent engine pointing to fly the best approximation of the optimum trajectory from the current location to the high gate.
  • P64 was the visual approach. At this point, the lander tipped over so that the commander could see the landing site through the windows. Before this point, the vehicle was pointed almost level, with the standing astronauts' feet towards the direction of travel and their faces straight up into space. P64 was targeted at a low gate point, and flew the best trajectory it could, subject to the constraint that the commander could see the landing site through the window. It also calculated where it would land if allowed to land automatically, calculated where in the window that point would be, and displayed the Landing Point Designator (LPD) angle, which the lunar module pilot read aloud. The commander would use the angle scale engraved on the windows to see where the vehicle thought it was going to land, and could change this point with the rotational controller. If the commander did so, the program would recalculate the location of the low gate, so that guidance would fly to the new point instead of the old one. This is an example of cooperation between manual and automatic processes. The commander used his senses, particularly vision, to evaluate the landing site. The machine can't see at all; it is basically blind, with a radar as a white cane. The human chooses and evaluates a landing site, and the machine figures out the best way to get there.
  • P65 was the automatic landing program. After passing low gate, the program nulled the horizontal speed and followed an altitude versus vertical speed profile to land. At this point, there was no targeting involved. The program was committed to the last LPD in P64, and just used the radar to measure altitude and run the vertical speed profile, and to measure horizontal speed to null it. No mission was ever landed with P65. During P64, the commander always switched to...
  • P66, altitude rate control. In this mode, the commander chose a descent rate, and changed it by clicking a switch up or down. He also chose the attitude by using the rotational controller. The machine calculated whatever thrust was necessary to hit the commanded descent rate, controlled the descent engine throttle, and calculated but did not control the horizontal velocity. It displayed that on the cross pointers on the commander's instrument panel, and left it up to the commander to control the horizontal rate. This is almost always referred to as the commander taking manual control, but as we can see, there is again a strong automatic assist, with the machine doing the calculating and the human doing the guidance (a sketch of this kind of throttle loop appears after this list). Horizontal speed and position were controlled as in a helicopter, by pitching in the correct direction to use some of the thrust to add to or subtract from the horizontal speed. It was up to the commander to make sure that by the time the vehicle was on the ground, its vertical speed was slow enough and its horizontal speed was close to zero. All Apollo missions were landed with P66. There was also...
  • P67, Full manual. The machine used the radar to calculate altitude, vertical and horizontal speed, and so forth, but only to display them. The commander had full control of the throttle and attitude. This mode was never used.
  • Actual attitude control, even in the semi-manual and full manual modes, was provided by the digital autopilot, in what we would now call fly-by-wire. This program figured out which RCS jets to fire, based on its inputs. The autopilot could be driven by the guidance system, which calculated the best way to get to a chosen attitude (the KALCMANU routine) and drove the jets that way. It could run in semi-automatic Rate Command, where when the commander moved the rotational controller, the machine would figure out the axis the commander wanted, then use the jets to get the vehicle rotating around that axis at a given rate. As the commander held the rotational controller, the vehicle would continue to rotate with the jets quiet, since the vehicle was now up to speed. When the commander released the controller back to center, the digital autopilot would stop the vehicle's rotation. There was also an almost full manual mode, where the jets were on as long as the controller was deflected. This is most similar to the native Orbiter mode.
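
To make the division of labor in P66 concrete, here is a minimal sketch of a rate-of-descent loop. The structure, names, and gain are my own illustration, not the actual Luminary code, which also had to deal with throttle dynamics and the vehicle getting lighter as it burned fuel:

```c
/* Vehicle and commanded state for a P66-style descent rate hold. */
typedef struct {
    double mass;      /* current vehicle mass, kg */
    double g;         /* local gravity, m/s^2 (1.62 on the Moon) */
    double target_vs; /* commanded descent rate, m/s, positive down */
} P66;

/* One guidance cycle: given the radar-derived descent rate, return the
   commanded descent engine thrust in newtons. The commander clicks
   target_vs up or down; attitude is his problem, throttle is ours. */
double p66_throttle(const P66 *p, double measured_vs, double gain) {
    double hover = p->mass * p->g;             /* thrust that exactly cancels gravity */
    double err   = measured_vs - p->target_vs; /* positive: descending too fast */
    return hover + gain * err;                 /* throttle up to slow the fall */
}
```
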
Similarly, the Space Shuttle guidance system could do deorbit braking, entry, descent, and landing all the way to touchdown, all on automatic. The only thing it couldn't do automatically was lower the landing gear. This was for safety, so that the gear was not lowered in space, since it could not be raised again once lowered.

The shuttle was flown in autopilot through 134 of 135 entries, but 0 of 135 landings. The commander and pilot always flew the final approach, from near or in the HAC to touchdown.

The automatic entries are a little bit surprising, as in Orbiter I like to fly space shuttle entries on almost full manual, and I don't even like to fly. Skidding across the atmosphere at Mach 25, 40deg nose up and 80deg banked, may be some of the most fun and exciting flying possible. I use the Glideslope MFD to see the reentry corridor, but I stay in it myself. I'm not just flying to pre-calculated needles either, like I would in an ascent. I am actually doing what pilots do - flying by the seat of my pants.

There was a proposal to do a fully automatic unmanned landing of the lunar module, but among other things the astronauts hated it. I think the same thing happened with the shuttle. If the full auto system is proven, then why do we need pilots?

Automation is your friend, not your master. Besides, if you are the one building the automation, then it is the friend you built. I have heard it said that you can program a computer to dance an Irish jig, but only if you know how to dance an Irish jig. I would add further that while you have to know how it is done, you don't have to be able to do it yourself.

If you are buying your own car or plane, for the fun of driving or flying it, be my guest and have it however manual or automatic you want. If you are flying for me, whether I am a passenger on the aircraft with you or just a taxpayer paying for your ride, you are obligated to do whatever will maximize the chance of mission success. If that's full manual, so be it. If full auto, so be it. Some combination? Choose the right combination. Don't be like some astronauts who thought that they were destined to fly a lunar module and wanted as little automation as possible, to prove how macho they were as pilots. You are there in support of me, the science-consuming public. We did not spend billions of dollars so that you could fly. We spent it so that we would get the mission results.