Monday, December 15, 2014

The Useless Machine

Once upon a time, I accidentally designed and built an electronic circuit that turned itself off, thereby accidentally re-inventing the Useless Machine. My nephew saw a YouTube video of one, and fell in love with the concept. So I got one:

It's a quite elegant machine, in that it is completely powered off when closed, and runs on two switches, with no other logic, no ICs, no nothing. Its circuit is a DPDT toggle switch, which appears to be the "on/off" switch but is more accurately referred to as the "forward/reverse" switch. It also has a microswitch inside which trips when the arm pulls all the way in. When the switch is forward, the motor runs forward until the arm pops out, collides with the switch, and puts it into reverse. The reverse circuit is in series with the microswitch, so it only runs until the arm retracts completely. It's kind of an interesting logic problem: How do you arrange the switches to do what is desired?

Tuesday, September 2, 2014

Mt Evans and the amorphous project

This is the benchmark on the "top" of Mt Evans. I say "top" because there is another boulder within 20 feet which is about 3-4 feet higher, and another one within 50 feet which is 2-3 feet higher still. However, this is the one with a readable altitude (if just barely). The two devices are my phone in GPS mode, and the controller for Project Yukari, running a program called "Southwest" because I intend to take it on various flights I make around the country in the future. One of the interesting things about this recorder is that it carries a pressure and temperature sensor.

Sunday, July 13, 2014

St Kwan's Campanile is complete!

My name is Kwanzymandias, king of villagers: Look on my works, ye Mighty, and despair!

This is a full-scale model of the bell tower in St Mark's Square. I searched for days, making camp three times before discovering this village in a lake, about 150m from the third camp. I created this whole world with the express intent to find such a village, and when I did, I defended it and built my base there.

It's not quite The One Eight, yet, but it does have some nifty features. It is 102m tall, including the Tower of Shininess on the roof, consisting of one obsidian and four gold blocks. Its floor plan is a square 12m on a side, with walls 2m thick, just like the real thing. It shares with the real tower a spiral ramp around an empty center. The interior space is marked by four brick pillars, with a 4m square space in between them and the 1m wide ramp around them. Unlike the real tower, this is a working building, crammed to the gills with FTB technology. There is a floor in the center space every 6m, once per lap of the ramp.

The first floor is the lobby, and has nothing.

The second floor is the power station. It features the building tesseract, used to pipe lava from the pumping station in the Nether. The power station consists of 10 magma dynamos running on the lava pumped in, and their power goes out through the tesseract to power the pump. Until the local Nether lava ocean is drained, I will have power.

The third floor is the main AE room, with a 4m MAC, along with the controller, drive, and crafting terminal and monitor. My power armor tinker table is also here.

The fourth and fifth floors are gardens, with automatic planters and harvesters. These were used to grow earth and water seeds, used to make clay, then bricks for the walls (the walls were made of dirt first). The fifth floor also has the Tinker's Construct station for fixing tools.

The sixth floor is the chicken factory. It uses Xisuma's chicken cooker design to make several cooked chickens per hour, more than enough for my needs.

The seventh floor is the Nether portal room, with a floor of soul sand for growing nether wart.

The eighth floor is the twilight and Mystcraft portal room.

The ninth floor is just below the transition from brick to marble, and has the ore processing system. More buildcraft-type machines will go here as I need them.

The tenth floor is the belfry, where the big arched windows are. Here I have the bedroom, as well as a second crafting terminal, unifier, and uncrafting table. I plan to hang some note blocks from the ceiling to simulate the bells.

The eleventh floor is where the redstone for the bells will go.

The twelfth floor is not spoken for; it is the brick area above the marble and below the roof.

The thirteenth floor is under the roof, and is also not spoken for. I had planned at one point to put a mob farm here (in the dark) and have the mobs plummet through the tower, but I don't need any mob farms at the moment.

Thursday, July 3, 2014

Cleaning up the code

"All the most important mistakes on a project are made on the first day"
--Old design adage

"There is no teacher but the enemy. No one but the enemy will tell you what the enemy is going to do. No one but the enemy will ever teach you how to destroy and conquer. Only the enemy shows you where you are weak. Only the enemy tells you where he is strong. And the only rules of the game are what you can do to him and what you can stop him from doing to you."
-- Mazer Rackham

"Reinhardt regarded the mysteries of the Universe not as indifferent questions of physics or chemistry, but as implacable, malicious foes. They were to be assaulted with science, vanquished at any cost, forced to yield their treasure house of knowledge."
-- from The Black Hole by Alan Dean Foster

In any interesting problem, you don't know enough about the problem to begin with to even form a plan for solving it. You do the best you can, and in the process of solving the problem, you learn what you should have been doing from the beginning. If time and resources weren't an obstacle, you could go back and solve the problem right from the beginning. But, usually for any interesting problem, time and resources are an obstacle, and you are stuck with what you started with. At each point in solving the problem, you only change what came before to the absolute minimum degree necessary.

As a result, sometimes legacy design choices make the solution quite a bit more complicated than a clean solution would be.

Now I have time and resources -- a whole year of time. It's time to clean house.

Sunday, June 22, 2014

A custom board for Yukari

The Loginator is a fine circuit. It is great at what it was designed for, logging. Even as a robot controller, it still spends a good chunk of time logging.

To really spiff it up as a robot controller, it needs a couple of things:

  • Two free PWM pins. PWM5 is AD6, so it is already free. PWM1 and 3 are also UART0, and PWM4 and 6 are also UART1. PWM2 is also part of SPI0, which talks to the SD card, but any GPIO can do the CS thing. All the other pins are already used, but I wouldn't mind parting with one of the I2C ports. In particular, I2C1 is used to talk to the 11DoF. It might make sense to switch that to I2C0, since SDA1 is also BSL. Then again, SDA1 and BSL don't interfere with each other. Another alternative is to give up AD7 or AD8, since I don't use the free pins much.
  • A GP2106 port. I have no complaints about the GP2106. It worked fine. It should have a port on the Loginator, either to interface with the existing breakout, or to connect directly with the 1.8V parts on the Loginator board.
  • Perhaps rather than the 11DoF port, have the sensors directly attached.
An LPC2368 would go a long way towards fixing all of these things. It has 100 pins, so there is less overlap. Perhaps a Propeller would work too.

AVC 2014 Event Report

Or: Give Up! It's time for you to throw in the towel, capitulate and raise the white flag...

I made it farther than I did last time. I made it to the starting line. I turned on the robot, and watched it run straight as an arrow into the fence. It looked like it didn't even try to make the turn.

After that first run, I looked at the wiring and misread it. Since the LPC2148 is short on pins, all of the PWM outputs are multiplexed with other functions, functions it turned out I needed. PWM 1 and 3 are also UART 0, PWM 4 and 6 are also UART 1, and PWM 2 and 5 are also part of SPI 0. As it turns out, I needed all of those things, so I had to pick one to discard. I would have liked the GPS on UART 1 so that I could program the device over UART 0, but since that was the one thing that was possible to do without, I had to give it up. I had the GPS wired to UART 0, but only when it was in robot mode. When it was in bootloader mode, UART0 had to be connected to the host (through the FTDI cable). When I glanced at the board after the run, as I said, I misread it. I thought that the wiring was set up for bootloader. This would have made it impossible to do waypoint navigation. It would have resulted in the robot running straight as an arrow forever (or until it hit the fence).

As it turns out, it did try to make the turn. Like I said, I had misread the wiring and the GPS was fine. I could see from the logs that at about the right place it signalled to turn -100 (out of the full servo range of -128 to 127). It wanted to turn -600, but of course it was limited. I was let down by the steering mechanism again, but this time it wasn't a twitchy servo, it was my own circuit.

I was worried about running a solderless breadboard as the base of the robot controller, so I got some matching solder protoboards, but ended up never using them, mostly because I didn't want to commit the Loginator to this project. I didn't want to solder things down, so I ended up with two short breadboards stuck side by side. One supported the Loginator and FTDI socket, while the other supported the GPS interface and the opto-isolators for the servo control. The opto-isolator was the weakest part, partly from the sheer number of wires in such a small space, partly because I had to switch the channels at the last minute.

During the test run before the race, the thing was running long and seemed to want to turn only after I had picked it up. It might already have had this fault then. I have the logs and can analyze them. Soldering down the opto-isolator and its wires probably would have worked.

In any case, I was looking for an excuse to give up. It is one thing to surrender before you have to, and another to realize that there really is an insurmountable obstacle. During the test run on Friday, I had manually driven the thing around the track and flipped it on the finish line. The board popped out, but was apparently undamaged. I was almost hoping that the board was damaged beyond repair so that I could honestly say that I was beaten, rather than had given up.

I had convinced myself on Saturday that the steering problem was in the former class, and that I needed to solder the whole thing to a board to get it to work. Thinking back on it now, if I had disassembled and reassembled it, it very well might have worked. I had the solder protoboard with me, and I could have borrowed a soldering iron. I could have done it there at the race.

This means I gave up too soon. I haven't finished. I will have to go back next year. Joseph is interested, so maybe I can use him as the motivation I need. Maybe he can help with Yukari, maybe he will help with another 'bot. Yukari is so close, I can feel it.

I did meet some nice people. Team Bloomberg was a dad with his family who had driven out from Wisconsin. Team Deep Space was an RC car with a robot controller, pretty similar to mine, except he went with odometry and a 4WD chassis. However, it had a cool hat, a flying saucer.

Saturday, June 21, 2014


If I had been where I am now a week ago, or even a day ago, I would be in great shape.

As it is, I gave the robot its first free run, about 6 feet down the driveway. I also walked it down the street course, holding it up, checking that the wheels turned and roughly following its steering. It found the waypoint, turned around, and headed right off the other end. I may need to have a catcher's mitt for this one.


  1. I'd like to win
  2. I'd settle for finishing
  3. I'll get the one-corner program.
I am going with two boxes, one computer bag, one live robot, one dead robot, and one robot hat. The boxes are filled with a whole bunch of random stuff, not in the hope that I will need it, but in the hope that someone else is helped out by my being there. I have programmed the robot with my laptop, so I know I am at least ready for the nominal case.

Friday, June 20, 2014

Friday Afternoon Training

The course was open today at 2:00, and I had hoped to be ready to test Yukari there. As it happens, guidance isn't ready, so I drove it around manually, to record the GPS. While I was there, I saw the barrels. There is a clear lane to the left which is what I plan on using. That means no bumper, either. On finishing my first lap, I opened the throttle all the way, hit the start/finish line bump hard, jumped, flipped, and spilled the controller. It detached itself from the battery, both the connector and the foam tape, flipped a couple of times, and came to rest upside down. However, no damage was done except for a few superficial scratches on the 11DoF.

It has become evident that I MUST practice guidance with the host-mode passenger.c as well. That means refactoring all the navigation, guidance, control, and config stuff so that it can be run from host mode.

Compass is working!

I implemented what I talked about in the last post, straight on the robot controller first. I tested it by unplugging the GPS, then holding it in my hands at the desk and turning it. Once I was satisfied with that, that was the extent of the testing I could do inside. So, I set the controller for 38.6deg, the azimuth of the street outside my house. I plugged things in so that I was manually controlling the throttle, but the robot was controlling the steering. I put it on the line, then revved it up. It decided to go about 20deg to the right first, but as it picked up speed and covered distance, it turned almost exactly straight with the road.

Next: Guidance! On a normal day, I would have called this a good day's work and knocked off. Not today. This is only half of what I had planned on testing at 10:00am today.

A setback

The Kalman filter sensor fusion failed. At the part where it does the measurement update for the covariance, it wasn't producing a symmetric covariance. My next idea was to fake it with a scalar weighting. That didn't work either.

The GPS heading is really good, particularly when the vehicle is going straight. So, we will watch for the gyro reading to be large. If it is, we set a counter to 400 (gyro readings). If it is small, we decrement the counter. When the counter is below zero, we just use the angle from the last GPS reading to the next as the heading, and reset the heading state to exactly that. We keep a heading offset and a separate free-roaming gyro heading. In between GPS resets of the offset, we add that offset to the free gyro heading to keep the actual heading up-to-date.

Thursday, June 19, 2014

How to Train your Robot

Here is my best idea yet: Go out and drive the robot manually around a course that can be seen from Google Earth. Record the GPS and compass data with the robot controller in "passenger" mode. Then take that data and run it through a program written on a desktop computer, which can run the filter and navigation faster than real time. It can calculate what guidance it should have produced, and what control it would have used. You can then see if it is working, and do corrections and bug fixes off-line. Only when it can drive perfectly offline do you load the code onto the robot controller and give it the keys.

We have program passenger.cpp, written in C++ just like the robot controller firmware, with much of its code symlinked directly from the YukariII codebase. The only part of passenger.cpp that is different is that the code reads a recording rather than reading live data. When it encounters each packet, it loads it, then calls the YukariII code to act on it. It then prints out the results.

Tuesday, June 17, 2014

It's really going to work!

This one is even better than the previous steering demonstration. The navigate function is about half complete (it reads and integrates the gyro, it reads but does not integrate the GPS yet). The guide function has yet to be written, but the control function is in place!

I'm worried about it being all on a breadboard and held on with foam tape, but I have some solder breadboards in case it shakes apart in testing. I am still going to use sockets for the Loginator and for the GPS interface, because I am going to want those back.

I still need to think about the "Go" button and the bumper. Watching the replays from last year, it looks like I should do the average G before the start of the race. I turn the controller on, step away from it for a few seconds for it to collect average G, then push the green button when they say go.

Tomorrow early morning I will take the bot to the mall parking lot and drive it manually there, collecting data all the while. I can then use that data to test the guide routine as I develop it. Doing the compass writing on a desktop machine, then porting it to the emulator, worked out well. I am going to do that again with guide().

The current long-range weather forecast is good. I don't want to put a hat on the car if I don't have to.

Sunday, June 15, 2014

Closing the Loop

It is going to work.

My latest great idea is processing the data off-line on my main computer. I can do this in a way such that I can re-analyze it repeatedly, pull out what I want from the data, and do it all without having to re-run the in-motion test.

I took Yukari out again, this time just driving it up and down the street. I ran it down the bike lane line, then over to the center line when the bike lane line ended. I did about 6 laps of this, and got good GPS data the whole time.

Once back, I ran the data through the Passenger program. This code is written in C++, just like the main robot firmware. It reads and parses the recorded data one packet at a time, reconstructing on the fly the variables which the robot originally had. The robot will be able to use this same code to do the same things. And what it is doing is integrating the gyro data, producing a full quaternion of orientation data. I then transform the nose vector of the robot (it happens to be -Z) and take the atan2 of the z and x components of that vector. This is the gyro heading.

Of course it lacks any absolute reference, and that is what we will use the GPS heading for. If I walk up to the starting line with the robot in GPS lock, in the direction of the first waypoint, the GPS will remember the heading when it stops. I have tested this today.

There was one sick moment when I thought it wasn't going to work. The curves of GPS and gyro headings weren't anything alike. Then I found a bug dealing with going around the corner with TC, which made all the difference. As I was explaining to one of my friends, it's like a speedometer. If you know that you are going at 50mph, then in 1 hour you will have gone 50 miles. If your clock calculation says instead that you have travelled for -12 hours, you will calculate that you have gone -600 miles.

I have the gyro turned down intentionally to 100Hz, with a bandwidth of 25Hz. The whole point of the low-pass filter is that the gyro samples itself as fast as it can (probably at 800Hz or more), then averages those samples, producing at 100Hz what an ideal integration over each output interval would read. This smooths out the noise and gives my robot brain less work to do.

Next steps:

  1. Figure out how to relate the GPS and Gyro headings. I figure something like a Kalman filter, but somehow have the accuracy of the GPS heading be related to the distance travelled since the last turn.
  2. Close the loop. Use the calculated heading to drive a heading control loop.
  3. Turntable test. Tell the thing to steer to the north, then while the thing is on a turntable (or in my hands) rotate it back and forth and see how it steers.
  4. Road test. Write code which drives forward for 10 seconds, while steering to the heading of the bike lane line. After 10 seconds, change the commanded heading to 90deg right for 1 second, then 90deg right again, then let it run for 10 more seconds.
Also: Last time I was totally panicked about motors and inductonium, so I put in a set of opto-isolators to protect the controller from the motors. This time, when I had the robot brain ground connected to the BEC, the GPS wouldn't lock. So, I am putting in isolators again, this time to break the ground loop.

Wednesday, June 11, 2014

Capacitance Loss

I didn't learn this one myself the hard way, but that's only luck. I very well could have.

I use 100nF (why doesn't the nanofarad get any respect?) capacitors all over the place on my boards, mostly as "filter" or "bypass" caps. They sit next to each Vcc pin of each digital IC, and the way I have heard it, they act like a little bitty power supply right next to where you need it. If the part needs a bit more power than average, it will suck charge off that capacitor before it sucks current off the Vcc rail. The part is protected from its own variations in power, and the rail (and therefore the other parts) are protected from this part.

I also use much larger caps as specified for regulators and other such power supply and sensor analog devices. The sensors want a much smoother power supply, and the regulators and such use them for feedback. These tend to be 1μF or even larger. Some parts call for as much as 10μF. The USB spec says no more than 10μF equivalent on a device across Vcc to ground, since too large of a value will draw a large inrush current as those capacitors are initially charged.

I tend to use ceramic caps for all of these, because I am obsessed with board space. Besides, a ceramic cap is pretty close to ideal. No polarization to worry about, etc.

Our good friend over on the EEVBlog demonstrates an issue which does bite ceramic caps -- capacitance change with applied voltage. I had heard of this before, but effectively ignored it as something I couldn't do anything about. Ignore the accent and presentation quirks, I am sure I would sound even worse.
He takes a 10μF 6.3V 0805 ceramic capacitor and demonstrates how to measure it with an oscilloscope and an RC circuit. With a 0-5V square wave, he demonstrates that the cap has its rated 10μF. But, when he changes the signal to 5V-6V, the capacitance drops to less than 5μF.

At first I breathed a sigh of relief when I saw the first demo. No capacitance loss even with a 5V signal. But then I thought about how the second demo applies to my circuits. A bypass cap is run with a large DC bias -- the Vcc voltage. This is directly relevant to my interests.

I guess all I can say is that the designs I am following call for a certain size capacitor. When these were tested by their original manufacturer, they either took these effects into account, or didn't, and just put in a larger cap than they needed when the circuit didn't originally work. If the design turns out to call for only 3μF, they specify a 10μF cap knowing that it will still have at least 3μF under the prevailing conditions.

More details at .

Sunday, June 1, 2014

Worse is Better

The thesis "Worse is Better" states that Simplicity of implementation is the overriding design goal of quality software, at the expense of everything else, even Correctness. I don't know if I agree with it in all cases, and I'm not even sure the author agrees with it in all cases. Be that as it may, I am applying this principle to Project Yukari. The main result is that if one way is simpler (by which I mean takes less time to implement), I do it that way. If I run into trouble using some advanced language or hardware feature, I will see if I can get around it.

Will it be elegant? No.
Will it be extensible? No.
Will it be an example of how to code, a work of art? No.
Will it work? That is the goal above all else. If the robot navigates the course in three weeks, that is Mission Accomplished, nothing else matters.

  • The hardware has a great USB interface. I know that the part is capable of simultaneously acting as a Mass Storage Class and a serial port. However, I haven't learned how to code it. I can't use the Sparkfun bootloader since it doesn't work with SDHC cards. I can't use my upgraded Bootloader_SDHC since it is too slow. Therefore I don't use the USB at all, except for power. I load the software with the LPC2148 monitor. This means that since the PWM is using serial port 1, the GPS is forced to use the same port 0 as the bootloader.
  • The GPS Rx line doesn't seem to be working on the part that I have. I know it has worked in the past, but I can't get it to work now. As a result, I will let the part speak its native NMEA 4800 baud, 1Hz update.
  • The Rocketometer used a timer-interrupt driven task to read the sensors at a precise given cadence. Yukari will instead read the sensors at whatever rate it can, record the timestamp at which it did read, then go with that. 
  • I am having trouble with reading the serial port, and I suspect it has to do with the interrupt-driven nature. Somehow I am not acknowledging an interrupt, and as a consequence no new interrupts are being generated and no new data is being processed. I will make a new NoIntSerial class, as a replacement for HardwareSerial, which will have blocking write and will use the Rx FIFO as its one and only buffer. This will be fine, as long as the main loop is called sufficiently often. In this case the low bitrate from the first item above works to our advantage. There are only 480 characters per second, about one every 2ms. If we run the main loop at least once every 32ms (the time it takes to fill the 16-byte Rx FIFO), we won't drop any serial data. This is only 30Hz.
  • There will only be a major loop, with no interrupts. In order, in the main loop:
  1. The inertial sensors are read
  2. An inertial packet is written to the card
  3. Every 10th time, the magnetic sensors are read, and a magnetic packet is written to the card
  4. The serial port is checked, and while there is a byte present, the port is read and parsed as NMEA. If there is a complete sentence, update the 2D state and GPS heading.
  5. The heading Kalman filter is run based on the inertial sensors and GPS heading, if new. This is the navigation portion of Navigate, Guide, Control.
  6. The waypoints are consulted and updated as necessary, and the heading to the correct waypoint is calculated. This is the Guide portion of Navigate, Guide, Control.
  7. The difference between the navigation heading and guidance heading is used to calculate the steering. This is the control part of Navigate, Guide, Control.
  8. The control value is written to the steering PWM.

Remind me why I am doing this again?

By the third day his eyes ached unbearably and his spectacles needed wiping every few minutes. It was like struggling with some crushing physical task, something which one had the right to refuse and which one was nevertheless neurotically anxious to accomplish.
-- George Orwell, 1984

Here we are, three weeks until the contest. I have no motivation to do this, but at the same time no desire to call it off. As I hate making decisions by default, that means I have to do it. Since I am unlikely to win the contest, nothing tangible is gained by finishing. Last time, I was tremendously motivated, to the point of using every waking hour, even to the point of taking vacation time to work on the robot. This time I can barely motivate myself to take a weekend to work on it. I just can't get into it.

Yesterday I went to the course. As the newspaper says, "'Tis a privilege to live in Colorado". One benefit is that with 20 minutes of driving and $6.25 to get in the gate, I can visit the course whenever I want. I did so yesterday, with the robot brain on a breadboard. I had the GPS on and locked, connected through the Arduino Nano used as a pure passthrough for its FT232 chip. This was piped through the SiRFDemo program running on Natsumi, and recorded.

Yesterday, however, they were setting up for the Boulder Triathlon. I guess I take a certain amount of inspiration from the runners in that race. Thousands of people join up, most of them with no thought of winning, just finishing. Similarly, I do not plan on winning. I just want to do this so that I can say that I finished something. I am not doing this again by myself. Next time, I plan on being part of a team, preferably as a sponsor/adviser.

In any case, here is the plan. I am going to do this by waypoint navigation. The robot will continuously estimate its 2D position and heading (speed doesn't matter). The position estimate will come straight from the GPS. The heading will be a Kalman filtered result from the GPS heading (to remove gyro biasing) and the gyro (for precision and response time). Perhaps GPS position will work in as well, if I can figure out how.

The hardest part is deciding what to do when we are close to the waypoint. I can easily imagine the waypoint getting inside of the robot's turning circle and the thing chasing its tail forever in vain.

Imagine we have three consecutive waypoints, one at the start line (point 0), one at the first turn (point 1), and one at the second turn (point 2). Also imagine we have just left the start line and are heading towards the first turn. How do we know when to turn? Also, what course do we steer heading towards the point? I think we want to set an imaginary steer-to point (point 1') about 20 meters beyond the actual waypoint 1, on the line from point 0 through point 1. We steer towards 1', while keeping track of the dot product between the vector from point 1 to point 0 and the vector from point 1 to the robot. When this dot-product becomes negative, we have crossed a line perpendicular to the line from 0 to 1. At this time, we set the target point to point 2', 20m beyond waypoint 2 on the line from 1 to 2. We then change all the indexes and steer towards point 2'.

The 20m is arbitrary, but intended to be larger than the GPS error.

Friday, May 16, 2014

June 1 Test

When I last entered the AVC in earnest, I remember there being an April 1 test, where you had to demonstrate the vehicle moving and steering under its own power and control. As it happened, I burned out my helicopter and shifted to the car design after this date, but I got a waiver when I demonstrated the car a mere 1 week before the contest. And we all know how that wound up.
April 1 test from last time, completed on April 17

So, I was expecting something like this, this time around. I was watching the Sparkfun web page, expecting to see something there. My phone automatically checks my gmail, so I was watching that. As it turns out, their email got stuck in a spam trap, and I didn't see it until today. So without further ado, I present the June 1 test, completed on February 22, 2014.
This test shows the steering servo connected to an Arduino Nano, powered by the BEC from the ESC, just like the real controller will be used.
Maya wants to help. Some tests are more successful than others...

This test shows the Arduino controlling both the steering and throttle.

Steering is the easy part. It is half of the Control function in the Guidance, Navigation, and Control triad. All the mechanical parts work and if the robot knew where to go, it could control itself and get itself there. The latter is the hard part. I anticipate using a Kalman filter with inputs from the GPS, magnetic compass, and inertial rotation rate sensor. I also am going to add a bumper, so in case it hits anything, it can back up and try again. This is to get around the barrels. I am not currently planning on using the line for line-following robots.

The parts breakdown is about as follows:

Robot chassis - RC car. Includes drive motor, steering servo, battery, ESC, and radio receiver (still carried in the robot but disconnected). About $150, or maybe $100 if you count only the parts in the chassis that I actually use, not the receiver, transmitter, or battery charger.
Controller - Kwan Systems Loginator (Logomatic clone) - about $60
Inertial Sensors - Kwan Systems 11DoF - about $40
GPS - GP2106 - about $50
Miscellaneous wires and breadboards - scattered on my workbench.

After last time's debacle, I decided not to go crazy with parts this time. All parts except the chassis are from other projects, and I have them all already. This is all the hardware I anticipate needing. I use lots of other parts for bench support, such as Arduinos, FTDI chips, oscilloscopes, logic analyzers, computers, etc. None of these are actually part of the robot.

As such, I believe it is a small enough budget that it might fit in the micro class.

As noted in the parts list above, I have chosen to go with the more powerful LPC2148, in the form of the Kwan Systems Loginator. You can read about the design of the Loginator and 11DoF further down this blog. I have repeated all these tests with that controller, but I didn't record them, and they would look the same as these tests anyway. I will post video from that controller when it does something more advanced.

Tuesday, April 8, 2014

Music Structure and Program Structure

I am a programmer by nature. I learned on my father's knee when I was single-digits old. I passed him in skill when I was a teenager. But, I am not only a programmer. "Specialization is for insects."

One of the other things that I am is an amateur musician. I may not have ever been a good, or even an average, tuba player, but I had fun doing it. I had fun marching, and I had fun watching football and basketball games. I realized early on that I was there not really so much as a musician, but as the carrier of a big metal flag. I was there for visual impact as much as anything. I even achieved a certain amount of fame as the "spinning tuba guy" in the early 2000's. One of my proudest moments was when I was featured in the opening montage of SportsCenter for one whole second.

As I said, I was never very good, but to be even a below-average tuba player, you must acquire certain skills. You have to know how to read music, and I did. I could convert note positions into fingerings and play approximately the right note. I could count or clap out rhythms, and with some practice, I could play the music well enough to fit in with the rest of the band.

One of the things I noticed being both a programmer and musician is that there are some similarities between program flow and music flow. In both cases, the most common flow is from one line or measure to the next, just sequential. Programming has loops, music has repeat signs. Programming has if/then, music has first and second endings. Music is somewhat limited, in that it is deterministic. It doesn't have to deal with input. Therefore, in music, some things have to go together, like multiple endings and repeat signs. Otherwise, the second ending would never be played, and would effectively get optimized out.
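The repeat-sign analogy can be made concrete: a repeat with first and second endings is a loop whose last pass takes a different branch. A minimal sketch, where the played sections are just strings and everything is illustrative:

```java
// A repeat sign with first and second endings, expressed as a loop
// with a branch that depends on which pass we are on.
class ScoreDemo {
    static java.util.List<String> playedRepeat() {
        java.util.List<String> played = new java.util.ArrayList<>();
        for (int pass = 1; pass <= 2; pass++) {
            played.add("measures 1-8");  // the repeated section
            // First ending on the first pass, second ending on the second;
            // without the repeat, the second ending would never be reached.
            played.add(pass == 1 ? "first ending" : "second ending");
        }
        return played;
    }
}
```

The result is the performed order: repeated section, first ending, repeated section, second ending.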

I have often wondered if this mapping could be made more complete. There are certain concepts we use in programming that tend not to get used in music, but maybe could, like subroutines. That got me wondering if there were concepts in music that could be mapped to programming, but aren't, and I finally came up with one today: the coda.

In music, you will see markings such as D.S. al Coda, indicating that the flow jumps from here back (never forward) to a special sign, the segno, and then continues from there to another mark, To Coda. This mark is ignored the first time through, but on the second pass it indicates a jump forward (never back) to the marked coda section.

It first occurred to me today that this is similar to exception handling. When a handled exception is thrown, the flow jumps to the handler. In a sense, this is like taking the coda branch in music.

You could use this in normal flow, throwing an exception when you want to make an early exit from a loop or function. Most functions and loops should have one entry point (enforced by the language, except in Fortran) and one exit point. However, sometimes it is convenient to do an early return from a routine, an early break out of a loop body, or an early re-loop of a loop body. Most languages I use support these with the return, break, and continue statements, respectively. However, there is a good reason for the 'only one exit' rule. Often the routine needs to do some cleanup on exit: closing files, calculating final results from accumulated variables, etc. If you do an early exit, you have to make sure the cleanup is done appropriately at each exit, and if you want to change the cleanup, you have to change it in multiple places. The alternative is that rather than having multiple cleanup-and-exit blocks, you have something like goto end, and at the end label you do the cleanup.
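In Java, which has no goto, the nearest structured equivalent of goto end is a labeled break out of a named block, with the cleanup sitting after the block. A minimal sketch, with hypothetical names:

```java
// Early exit to a single cleanup point, using a labeled block instead
// of goto end. "break scan" jumps past the end of the scan block.
class EarlyExitDemo {
    static int sumUntilNegative(int[] data) {
        int total = 0;
        scan: {
            for (int x : data) {
                if (x < 0) break scan;  // early exit, like "goto end"
                total += x;
            }
        }
        // The single cleanup point: every exit path passes through here.
        return total;
    }
}
```

Every path, early exit or normal completion, funnels through the one block of code after the label, which is exactly the property the 'only one exit' rule is trying to preserve.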

However, gotos are to be avoided, for good reason. While the code will work just fine, source code is for communication with humans, not machines. If it were otherwise, we would write code once, keep the binary code, and delete the source. In this case, the humans we are communicating with are most likely our future selves.

Since source code is for humans, semantics matter more than they do if it were just for machines. Machine language doesn't normally have many advanced control structures, just jumps. The code is effectively strewn with goto statements, and the more optimized the code is, the worse this tends to get. Even disassembled code from a modern compiler is hard to reconcile with its source code. Optimizations make everything dramatically out-of-order.

Source code, on the other hand, has structured statements rather than a spaghetti nest of goto statements, because they mean something to us. To a machine, a for loop means: set up an induction variable, do something to it at the end of each pass, check the loop condition, and go back and do it all again as necessary. To a human, it means: run the loop a predetermined number of times. This is why we have the rule not to monkey with the induction variable: it disrupts the loop semantics, breaking the reader's expectation that the block runs a predetermined number of times. In a sense, this is why break and continue are frowned upon as well, because they also disrupt that expectation.

Random goto statements are just that, random. They have no semantics. They can be used to do anything, so they mean nothing. All structured programming constructs can be done with goto (in fact they have to be in machine language) but not all goto constructs can be represented by structured programming. Sure, there is a proof that you can do it, but that's just because structured programming and goto statements are both Turing complete. I was once dealing with a program, SDP4, which was originally written in Fortran with no structured programming constructs, just pure goto spaghetti. The code came to me already translated into (Turbo) Pascal, and I translated it further into Java. The previous programmer did a pretty good job of translating most of the gotos into structured constructs, but there is one part where he left in the gotos because the original code was so tangled he couldn't figure it out. He had the luxury of leaving the goto statements in, because (Turbo) Pascal supports them, but I in Java did not. I ended up using a case statement inside a while loop and depending on fall-through. It was technically a structured construct, but it just emulated the original goto nest. Vallado points to a solution that involved jettisoning that entire block of code and re-writing it from the original math.
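The switch-inside-a-while trick works like this: a state variable plays the role of the program counter, and each case ends by setting the next "label" to jump to. A minimal sketch with made-up labels, not the actual SDP4 code:

```java
// Emulating a goto tangle with a state variable, a while loop, and a
// switch. Each case is one "labeled" block of the original spaghetti;
// assigning to label is the goto.
class GotoDemo {
    static String run() {
        StringBuilder trace = new StringBuilder();
        int label = 10;          // start at "line 10"
        while (label != 0) {     // label 0 means "fall off the end"
            switch (label) {
                case 10: trace.append("A"); label = 30; break; // goto 30
                case 30: trace.append("C"); label = 20; break; // goto 20
                case 20: trace.append("B"); label = 0;  break; // exit
            }
        }
        return trace.toString();
    }
}
```

The blocks execute in the goto order (10, 30, 20), not the textual order, which is exactly why this construct is structured in letter but spaghetti in spirit.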

So, while you can write a goto end, it doesn't carry the appropriate semantics. What I want is a pair of statements that mean to coda and coda. This carries the semantic that we will shortly be exiting the routine, but that there is a certain amount of cleanup to do. It would be added to the arsenal along with the early return, break, and continue. One way to implement it in Java is to put the coda in a finally block of a try/catch/finally statement. Then when you want an early cleanup, you throw an exception. However, this violates the semantics of an exception, which is supposed to be used only for an error condition. I once wrote a program which used exceptions in this sense, asked the guys on the TheDailyWTF forum what a better way to do it would be, and basically got laughed off the forum.
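The try/finally version of the coda looks like this. A minimal sketch, with the exception type and names invented for illustration:

```java
// "Coda via finally": throwing the exception is "to coda", and the
// finally block is the coda itself, running on every exit path.
class CodaDemo {
    static class EarlyExit extends RuntimeException {}

    static int process(int[] data, StringBuilder log) {
        int total = 0;
        try {
            for (int x : data) {
                if (x < 0) throw new EarlyExit(); // D.S. al Coda
                total += x;
            }
        } catch (EarlyExit e) {
            // Nothing to handle: flow simply resumes at the coda below.
        } finally {
            log.append("cleanup"); // the coda
        }
        return total;
    }
}
```

The cleanup runs whether the loop finishes normally or bails out early, which is the semantic we want, even though abusing an exception to get it is what drew the laughter.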

Therefore, I think that in order to capture this semantic, a new statement is needed. Coda is a fine word for it for me, since I am a musician, but maybe there is a better term. Until this construct is added to our arsenal of structured constructs, we are stuck with goto coda, which is better than nothing, because it does capture the appropriate semantics.

Wednesday, April 2, 2014


Once again this is hooked up as indicated in the MIC5319 datasheet. I am finally using the switch right: no more useless machine (and useless transistor) for me. As it turns out, that transistor was effectively built into the regulator; that's what the EN input is there for. This circuit supplies a nice 3.3V output on VCC to everything else. Previous versions had a current sensor, but this one doesn't, as I haven't gotten one of those to work yet.

It isn't visible in this image, but the bottom rail is GND.

Charging Circuit

This one is a pretty straightforward hookup of the MCP73831 charging circuit, with one addition: D302 is there so that the charging circuit does not have to treat the whole device as a battery to be charged. If USB is connected, VIN will get the full 5V (minus the voltage drop of a Schottky diode). If not, the battery is used (again minus the voltage drop). Both diodes need to be there so that the battery doesn't feed back to the charger input and try to charge itself. The programming resistor tells the charger to charge the battery at a maximum of 100mA. This is 1C for a 100mAh battery, the smallest one I have and the one that flies with the Rocketometer.

The status light uses a resistor from the 1.5k pack in the USB connection.

Other versions of this circuit have had a voltage divider between VLIPO and GND so as to allow the MCU to measure the battery voltage. I haven't used it in a while, and it does draw some current, so it is gone from this circuit.

Doing USB right

AN11392 - Guidelines for full-speed USB on NXP's LPC microcontrollers (19 Feb 2014). This finally answers all my questions about what all the USB parts are. Based on it, let's review the Loginator USB/charge/power supply section.

We are now using a Micro USB connector with through-hole posts for better mechanical security and easier alignment during soldering. Micro USB takes up less board space and is compatible with the cords used for Android phone charging.

The first thing the app note says is that there must be a 33 ohm resistor on each of D+ and D-. These, plus the internal resistance of the LPC itself, add up to the 90 ohm total differential impedance required (45 ohms per line), which implies that there are 12 ohms inside the LPC on each pin. This is what I have been carrying all along from the Logomatic design.

Secondly, USB_Softconnect is required if the device is self-powered, or if it takes longer than a certain amount of time to boot up. Since my device can be self-powered and might never boot up and connect, I intend to use it. However, I still prefer a PMOS rather than a PNP transistor. The 1.5k resistor required for softconnect to work is also well-sized for the LED, so I use a resistor array.

The signal lines have capacitors to ground, for exactly the purpose I deduced: shorting high-frequency signals to ground. The app note says that they are not strictly necessary, but that they have been reported to improve certain noise issues. I have always built circuits with these included, so I shall continue. This is one of the rare cases where it makes sense to use a capacitor array.

Next, we have R011 and R015, which I completely whiffed on. In my defense, I am not the only one. My design came from the Logomatic, which came from the LPC2148 Reference Design which appears to be a mistranscription of the Keil MCB2140 schematic. Even then, the Keil board does not seem to be what was intended.

The idea is that P0.23, USB_ON, is 5V tolerant if and only if the rest of the circuit is powered. So, if it is not possible for the MCU to be off while the USB voltage is present, then you can just plug VBUS into P0.23. However, if the MCU is not powered, the voltage on that pin is supposed to be limited to less than 3.6V. The app note recommends a voltage divider, with 24k on the high side and 39k on the low side, resulting in about 3.09V on the input.

The way both the microbuilder circuit and Logomatic circuits are arranged, that isn't a voltage divider at all, and the pin eats a full 5V. Since the MCU can be turned off (the power switch disables the 3.3V VCC line), this is technically out of spec.

R011 should be connected to the right pin of R015B, and its value should be closer to 20k. This divider will draw three times as much current as the recommended values would, but that is still less than 1mA.
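As a sanity check on the app note's numbers, here is the divider arithmetic in a couple of helper functions. A back-of-the-envelope sketch, not anything from the schematic:

```java
// Resistor-divider arithmetic for the USB_ON (P0.23) sense circuit.
class PowerMath {
    // Output of a divider: vin across rHigh (top) and rLow (bottom),
    // tapped between them.
    static double dividerOut(double vin, double rHigh, double rLow) {
        return vin * rLow / (rHigh + rLow);
    }

    // Current the divider draws from VBUS, in milliamps.
    static double dividerCurrentmA(double vin, double rHigh, double rLow) {
        return 1000 * vin / (rHigh + rLow);
    }
}
```

With the recommended 24k over 39k, 5V in gives about 3.10V on the pin (safely under the 3.6V limit) at about 0.08mA, so even a divider drawing three times that current stays well under 1mA.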

There is supposed to be between 1 and 10 uF between VBUS and ground, as seen through regulators and other parts. The real spec is that the inrush current must be limited, but I don't intend to ever submit my device to USB compliance testing, so as long as it works for me, it's fine. There is 4.7uF on the input to the voltage regulator, so I do not include any intentional capacitance in this section.

Monday, March 10, 2014

Rocketometer Flight Data Published

This documents the data produced by the Rocketometer during NASA Sounding Rocket Flight 36.290, 2013 Oct 21.

I was getting sick of looking at this data, unable to fully process it but holding it jealously. This has to change. The data wants to be free. Get it at

Thursday, February 20, 2014

Once More Unto the Breach...

It's official, St Kwan's has re-entered the robot business. Project Yukari Mk2 will be racing on June 21, 2014.