Sunday, June 22, 2014

A custom board for Yukari

The Loginator is a fine circuit. It is great at what it was designed for, logging. Even as a robot controller, it still spends a good chunk of time logging.

To really spiff it up as a robot controller, it needs a couple of things:

  • Two free PWM pins. PWM5 is AD6, so it is already free. PWM1 and 3 are also UART0, and PWM4 and 6 are also UART1. PWM2 is also part of SPI0, which talks to the SD card, but any GPIO can do the CS thing. All the other pins are already used, but I wouldn't mind parting with one of the I2C ports. In particular, I2C1 is used to talk to the 11DoF. It might make sense to switch that to I2C0, since SDA1 is also BSL. Then again, SDA1 and BSL don't interfere with each other. Another alternative is to give up AD7 or AD8, since I don't use the free pins much.
  • A GP2106 port. I have no complaints about the GP2106; it worked fine. It should have a port on the Loginator, either to interface with the existing breakout, or directly with the 1.8V parts on the Loginator board.
  • Perhaps rather than the 11DoF port, have the sensors directly attached.
An LPC2368 would go a long way towards fixing all of these things. It has 100 pins, so there is less overlap. Perhaps a Propeller would work too.

AVC 2014 Event Report

Or: Give Up! It's time for you to throw in the towel, capitulate and raise the white flag...

I made it farther than I did last time. I made it to the starting line. I turned on the robot, and watched it run straight as an arrow into the fence. It looked like it didn't even try to make the turn.

After that first run, I looked at the wiring and misread it. Since the LPC2148 is short on pins, all of the PWM outputs are multiplexed with other functions, functions it turned out I needed. PWM 1 and 3 are also UART 0, PWM 4 and 6 are also UART 1, and PWM 2 is also part of SPI 0. As it turns out, I needed all of those things, so I had to pick one to discard. I would have liked the GPS on UART 1 so that I could program the device over UART 0, but since that was the one thing that was possible to do without, I had to give it up. I had the GPS wired to UART 0, but only when it was in robot mode. When it was in bootloader mode, UART 0 had to be connected to the host (through the FTDI cable). When I glanced at the board after the run, as I said, I misread it. I thought that the wiring was set up for the bootloader. This would have made it impossible to do waypoint navigation. It would have resulted in the robot running straight as an arrow forever (or until it hit the fence).

As it turns out, it did try to make the turn. Like I said, I had misread the wiring and the GPS was fine. I could see from the logs that at about the right place it signalled to turn -100 (out of the full servo range of -128 to 127). It wanted to turn -600, but of course it was limited. I was let down by the steering mechanism again, but this time it wasn't a twitchy servo, it was my own circuit.

I was worried about running a solderless breadboard as the base of the robot controller, so I got some matching solder protoboards, but ended up never using them, mostly because I didn't want to commit the Loginator to this project. I didn't want to solder things down, so I ended up with two short breadboards stuck side by side. One supported the Loginator and FTDI socket, while the other supported the GPS interface and the opto-isolators for the servo control. The opto-isolator was the weakest part, partly from the sheer number of wires in such a small space, partly because I had to switch the channels at the last minute.

During the test run before the race, the thing was running long and seemed to want to turn only after I had picked it up. It might already have had this fault at that point. I have the logs and can analyze them. Soldering down the opto-isolator and its wires probably would have prevented this.

In any case, I was looking for an excuse to give up. It is one thing to surrender before you have to, and another to realize that there really is an insurmountable obstacle. During the test run on Friday, I had manually driven the thing around the track and flipped it on the finish line. The board popped out, but was apparently undamaged. I was almost hoping that the board was damaged beyond repair so that I could honestly say that I was beaten, rather than had given up.

I had convinced myself on Saturday that the steering problem was in the latter class. I had convinced myself that I needed to solder the whole thing to a board to get it to work. Thinking back on it now, if I had disassembled and reassembled it, it very well might have worked. I had the solder protoboard with me, and I could have borrowed a soldering iron. I could have done it there at the race.

This means I gave up too soon. I haven't finished. I will have to go back next year. Joseph is interested, so maybe I can use him as the motivation I need. Maybe he can help with Yukari, maybe he will help with another 'bot. Yukari is so close, I can feel it.

I did meet some nice people. Team Bloomberg was a dad with his family who had driven out from Wisconsin. Team Deep Space was an RC car with a robot controller, pretty similar to mine, except he went with odometry and a 4WD chassis. However, it had a cool hat, a flying saucer.

Saturday, June 21, 2014


If I had been where I am now a week ago, or even a day ago, I would be in great shape.

As it is, I gave the robot its first free run, about 6 feet down the driveway. I also walked it down the street course, holding it up, checking that the wheels turned and roughly following its steering. It found the waypoint, turned around, and headed right off the other end. I may need to have a catcher's mitt for this one.


  1. I'd like to win.
  2. I'd settle for finishing.
  3. I'll get the one-corner program.
I am going with two boxes, one computer bag, one live and one dead robot and one robot hat. The boxes are filled with a whole bunch of random stuff, not even in the hopes that I need it, but in the hopes that someone else is helped out by my being there. I have programmed the robot with my laptop, so I know I am at least ready for the nominal case.

Friday, June 20, 2014

Friday Afternoon Training

The course was open today at 2:00, and I had hoped to be ready to test Yukari there. As it happens, guidance isn't ready, so I drove it around manually, to record the GPS. While I was there, I saw the barrels. There is a clear lane to the left, which is what I plan on using. That means no bumper, either. On finishing my first lap, I opened the throttle all the way, hit the start/finish line bump hard, jumped, flipped, and spilled the controller. It detached itself from the battery, both the connector and the foam tape, flipped a couple of times, and came to rest upside down. However, no damage was done except for a few superficial scratches on the 11DoF.

It has become evident that I MUST practice guidance with the host-mode passenger.c as well. That means refactoring all the navigation, guidance, control, and config stuff so that it can be run from host mode.

Compass is working!

I implemented what I talked about in the last post, straight on the robot controller first. I tested it by unplugging the GPS, then holding it in my hands at the desk and turning it. Once I was satisfied with that, that was the extent of the testing I could do inside. So, I set the controller for 38.6deg, the azimuth of the street outside my house. I plugged things in so that I was manually controlling the throttle, but the robot was controlling the steering. I put it on the line, then revved it up. It decided to go about 20deg to the right first, but as it picked up speed and covered distance, it turned almost exactly straight with the road.

Next: Guidance! On a normal day, I would have called this a good day's work and called it off for the day. Not today. This is only half of what I had planned on testing at 10:00am today.

A setback

The Kalman filter sensor fusion failed. At the part where it does the measurement update for the covariance, it wasn't producing a symmetric covariance. My next idea was to fake it with a scalar weighting. That didn't work either.

The GPS heading is really good, particularly when the vehicle is going straight. So, we will watch for the gyro reading to be large. If it is, we set a counter to 400 (gyro readings). If it is small, we decrement the counter. When the counter is below zero, we just use the angle from the last GPS reading to the next as the heading, and reset the heading state to exactly that. We keep a heading offset and a separate free-roaming gyro heading. In between GPS resets of the offset, we add that offset to the free gyro heading to keep the actual heading up-to-date.
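That scheme is simple enough to sketch in a few lines. This is a hypothetical sketch, not the actual Yukari code: the class name, the API, and the 0.1 rad/s turn threshold are my inventions here; the 400-reading countdown and the offset bookkeeping follow the description above.

```cpp
#include <cmath>

// Sketch of the gyro/GPS heading blend described above. The 400-sample
// countdown and the GPS offset reset follow the text; everything else
// (names, threshold) is illustrative.
class HeadingFuser {
public:
  // Call on every gyro reading (rate in rad/s, dt in seconds).
  void onGyro(double rate, double dt) {
    gyroHeading += rate * dt;            // free-roaming integrated gyro heading
    if (std::fabs(rate) > turnThreshold)
      quietCount = 400;                  // turning: hold off on GPS resets
    else
      --quietCount;
  }
  // Call on every GPS fix; gpsHeading is the angle from the previous fix
  // to this one.
  void onGps(double gpsHeading) {
    if (quietCount < 0)                  // straight long enough:
      offset = gpsHeading - gyroHeading; // reset heading to exactly the GPS value
  }
  // Current best heading: free gyro heading plus the GPS-derived offset.
  double heading() const { return gyroHeading + offset; }
private:
  double gyroHeading = 0.0;
  double offset = 0.0;
  double turnThreshold = 0.1;            // rad/s; illustrative
  int quietCount = 400;
};
```

Between resets the gyro carries the heading on its own; after a long enough straight stretch, the next GPS fix snaps it back and kills any accumulated bias.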

Thursday, June 19, 2014

How to Train your Robot

Here is my best idea yet: Go out and drive the robot manually around a course that can be seen from Google Earth. Record the GPS and compass data with the robot controller in "passenger" mode. Then take that data and run it through a program written on a desktop computer, which can run the filter and navigate faster than real time. It can calculate what guidance it should have produced, and what control it would have used. You can then see whether it is working, and make corrections and bug fixes off-line. Only when it can drive perfectly offline do you load the code onto the robot controller and give it the keys.

We have a program, passenger.cpp, written in C++ just like the robot controller firmware, with much of its code symlinked directly from the YukariII codebase. The only difference is that passenger.cpp reads a recording rather than live data. When it encounters each packet, it loads it, then calls the YukariII code to act on it. It then prints out the results.

Tuesday, June 17, 2014

It's really going to work!

This one is even better than the previous steering demonstration. The navigate function is about half complete (it reads and integrates the gyro, it reads but does not integrate the GPS yet). The guide function has yet to be written, but the control function is in place!

I'm worried about it being all on a breadboard and held on with foam tape, but I have some solder breadboards in case it shakes apart in testing. I am still going to use sockets for the Loginator and for the GPS interface, because I am going to want those back.

I still need to think about the "Go" button and the bumper. Watching the replays from last year, it looks like I should do the average G before the start of the race. I turn the controller on, step away from it for a few seconds for it to collect average G, then push the green button when they say go.

Tomorrow early morning I will take the bot to the mall parking lot and drive it manually there, collecting data all the while. I can then use that data to test the guide routine as I develop it. Doing the compass writing on a desktop machine, then porting it to the emulator, worked out well. I am going to do that again with guide().

The current long-range weather forecast is good. I don't want to put a hat on the car if I don't have to.

Sunday, June 15, 2014

Closing the Loop

It is going to work.

My latest great idea is processing the data off-line on my main computer. I can do this in a way such that I can re-analyze it repeatedly, pull out what I want from the data, and do it all without having to re-run the in-motion test.

I took Yukari out again, this time just driving it up and down the street. I ran it down the bike lane line, then over to the center line when the bike lane line ended. I did about 6 laps of this, and got good GPS data the whole time.

Once back, I ran the data through the Passenger program. This code is written in C++, just like the main robot firmware. It reads and parses the recorded data one packet at a time, reconstructing on the fly the variables which the robot originally had. The robot will be able to use this same code to do the same things. What it is doing is integrating the gyro data, producing a full quaternion of orientation. Then I transform the nose vector of the robot (it happens to be -Z) and take the atan2 of the z and x components of that vector. This is the gyro heading.
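A generic version of that pipeline looks something like this. This is not the Passenger code itself: the quaternion convention is the common Hamilton one, and I take atan2 of the x and y components for the heading here, where the robot's own frame uses z and x as described above.

```cpp
#include <cmath>

struct Quat { double w = 1, x = 0, y = 0, z = 0; };

// Hamilton product a*b.
Quat mul(const Quat& a, const Quat& b) {
  return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
           a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
           a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
           a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// One small-angle integration step with body rates (wx,wy,wz) over dt,
// followed by renormalization to keep the quaternion unit length.
void integrate(Quat& q, double wx, double wy, double wz, double dt) {
  Quat dq { 1, wx*dt/2, wy*dt/2, wz*dt/2 };
  q = mul(q, dq);
  double n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
  q.w /= n; q.x /= n; q.y /= n; q.z /= n;
}

// Rotate vector v by q: v' = q v q*.
void rotate(const Quat& q, const double v[3], double out[3]) {
  Quat p { 0, v[0], v[1], v[2] };
  Quat qc { q.w, -q.x, -q.y, -q.z };
  Quat r = mul(mul(q, p), qc);
  out[0] = r.x; out[1] = r.y; out[2] = r.z;
}

// Transform the nose vector (-Z in the body frame) into the world frame,
// then take atan2 of the horizontal components.
double gyroHeading(const Quat& q) {
  double nose[3] = { 0, 0, -1 };
  double w[3];
  rotate(q, nose, w);
  return std::atan2(w[1], w[0]);
}
```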

Of course it lacks any absolute reference, and that is what we will use the GPS heading for. If I walk up to the starting line with the robot in GPS lock, in the direction of the first waypoint, the GPS will remember the heading when it stops. I have tested this today.

There was one sick moment when I thought it wasn't going to work. The curves of the GPS and gyro headings weren't anything alike. Then I found a bug dealing with going around the corner with TC, which made all the difference. As I was explaining to one of my friends, it's like a speedometer. If you know that you are going at 50mph, then in 1 hour you will have gone 50 miles. If your clock calculation says instead that you have travelled for -12 hours, you will calculate that you have gone -600 miles.

I have the gyro intentionally turned down to 100Hz, with a bandwidth of 25Hz. The whole point of the low-pass filter is that the gyro samples itself as fast as it can (probably at 800Hz or more), then takes the average, producing what the gyro would read if it were sampled at a much higher rate and integrated over a much longer time. This smooths out the noise and gives my robot brain less work to do.
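The averaging trick itself takes only a few lines. This is a generic block-averaging decimator for illustration, not the gyro's actual internal filter topology:

```cpp
#include <vector>
#include <cstddef>

// Average each block of `factor` fast samples into one slow sample.
// Noise on each output sample shrinks roughly as 1/sqrt(factor).
std::vector<double> decimate(const std::vector<double>& raw, int factor) {
  std::vector<double> out;
  for (std::size_t i = 0; i + factor <= raw.size(); i += factor) {
    double sum = 0.0;
    for (int j = 0; j < factor; ++j) sum += raw[i + j];
    out.push_back(sum / factor);
  }
  return out;
}
```

For example, decimating an 800Hz stream by a factor of 8 yields the 100Hz output rate, with each output sample summarizing the whole 10ms interval.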

Next steps:

  1. Figure out how to relate the GPS and Gyro headings. I figure something like a Kalman filter, but somehow have the accuracy of the GPS heading be related to the distance travelled since the last turn.
  2. Close the loop. Use the calculated heading to drive a heading control loop.
  3. Turntable test. Tell the thing to steer to the north, then while the thing is on a turntable (or in my hands) rotate it back and forth and see how it steers.
  4. Road test. Write code which drives forward for 10 seconds, while steering to the heading of the bike lane line. After 10 seconds, change the commanded heading to 90deg right for 1 second, then 90deg right again, then let it run for 10 more seconds.
Also: Last time I was totally panicked about motors and inductonium, so I put in a set of opto-isolators to protect the controller from the motors. This time, when I had the robot brain ground connected to the BEC, the GPS wouldn't lock. So, I am putting in isolators again, this time to break the ground loop.
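For step 2, the minimal way to close the loop is a proportional controller with angle wrap-around and clamping. This is a hypothetical sketch: the gain is made up, and the signed-byte servo range is an assumption.

```cpp
#include <cmath>

// Wrap an angle difference into [-180, 180) degrees so the robot always
// takes the short way around.
double wrap180(double deg) {
  while (deg >= 180) deg -= 360;
  while (deg < -180) deg += 360;
  return deg;
}

// Commanded and actual headings in degrees; returns a steering command
// clamped to an assumed signed-byte servo range of -128..127.
int steer(double commanded, double actual) {
  double error = wrap180(commanded - actual);
  double cmd = 6.0 * error;   // proportional gain; illustrative only
  if (cmd > 127) cmd = 127;
  if (cmd < -128) cmd = -128;
  return (int)cmd;
}
```

The wrap matters: commanding 350deg while pointing at 10deg should produce a small left turn, not a near-full lap to the right.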

Wednesday, June 11, 2014

Capacitance Loss

I didn't learn this one myself the hard way, but that's only luck. I very well could have.

I use 100nF (why doesn't the nanofarad get any respect?) capacitors all over the place on my boards, mostly as "filter" or "bypass" caps. They sit next to each Vcc pin of each digital IC, and the way I have heard it, they act like a little bitty power supply right next to where you need it. If the part needs a bit more power than average, it will suck charge off that capacitor before it sucks current off the Vcc rail. The part is protected from its own variations in power, and the rail (and therefore the other parts) are protected from this part.

I also use much larger caps as specified for regulators and other such power supply and sensor analog devices. The sensors want a much smoother power supply, and the regulators and such use them for feedback. These tend to be 1μF or even larger. Some parts call for as much as 10μF. The USB spec says no more than 10μF equivalent on a device across Vcc to ground, since too large of a value will draw a large inrush current as those capacitors are initially charged.

I tend to use ceramic caps for all of these, because I am obsessed with board space. Besides, a ceramic cap is pretty close to ideal. No polarization to worry about, etc.

Our good friend over on the EEVBlog demonstrates an issue which does bite ceramic caps -- capacitance change with applied voltage. I had heard of this before, but effectively ignored it as something I couldn't do anything about. Ignore the accent and presentation quirks; I am sure I would sound even worse.

He takes a 10μF 6.3V 0805 ceramic capacitor and demonstrates how to measure it with an oscilloscope and an RC circuit. With a 0-5V square wave, he demonstrates that the cap has its rated 10μF. But when he changes the signal to 5V-6V, the capacitance drops to less than 5μF.

At first I breathed a sigh of relief when I saw the first demo. No capacitance loss even with a 5V signal. But then I thought about how the second demo applies to my circuits. A bypass cap is run with a large DC bias -- the Vcc voltage. This is directly relevant to my interests.

I guess all I can say is that the designs I am following call for a certain size capacitor. When these were tested by their original manufacturer, they either took these effects into account, or didn't, and just put in a larger cap than they needed when the circuit didn't originally work. If the design turns out to call for only 3μF, they specify a 10μF cap knowing that it will still have at least 3μF under the prevailing conditions.


Sunday, June 1, 2014

Worse is Better

The thesis "Worse is Better" states that Simplicity of implementation is the overriding design goal of quality software, at the expense of everything else, even Correctness. I don't know if I agree with it in all cases, and I'm not even sure the author agrees with it in all cases. Be that as it may, I am applying this principle to Project Yukari. The main result is that if one way is simpler (by which I mean it takes less time to implement), I do it that way. If I run into trouble using some advanced language or hardware feature, I will see if I can get around it.

Will it be elegant? No.
Will it be extensible? No.
Will it be an example of how to code, a work of art? No.
Will it work? That is the goal above all else. If the robot navigates the course in three weeks, that is Mission Accomplished, nothing else matters.

  • The hardware has a great USB interface. I know that the part is capable of simultaneously acting as a Mass Storage Class and a serial port. However, I haven't learned how to code it. I can't use the Sparkfun bootloader since it doesn't work with SDHC cards. I can't use my upgraded Bootloader_SDHC since it is too slow. Therefore I don't use the USB at all, except for power. I load the software with the LPC2148 monitor. This means that since the PWM is using serial port 1, the GPS is forced to use the same port 0 as the bootloader.
  • The GPS Rx line doesn't seem to be working on the part that I have. I know it has worked in the past, but I can't get it to work now. As a result, I will let the part speak its native NMEA: 4800 baud, 1Hz update rate.
  • The Rocketometer used a timer-interrupt driven task to read the sensors at a precise given cadence. Yukari will instead read the sensors at whatever rate it can, record the timestamp at which it did read, then go with that. 
  • I am having trouble with reading the serial port, and I suspect it has to do with the interrupt-driven nature. Somehow I am not acknowledging an interrupt, and as a consequence no new interrupts are being generated and no new data is being processed. I will make a new NoIntSerial class, as a replacement for HardwareSerial, which will have a blocking write and will use the 16-byte Rx FIFO as its one and only buffer. This will be fine as long as the main loop is called sufficiently often. In this case the low bitrate from the first item above works to our advantage. There are only 480 characters per second, about one every 2ms, so the FIFO holds about 32ms worth. If we run the main loop at least once every 32ms, we won't drop any serial data. This is only about 30Hz.
  • There will only be a major loop, with no interrupts. In order, in the main loop:
  1. The inertial sensors are read
  2. An inertial packet is written to the card
  3. Every 10th time, the magnetic sensors are read, and a magnetic packet is written to the card
  4. The serial port is checked, and while there is a byte present, the port is read and parsed as NMEA. If there is a complete sentence, update the 2D state and GPS heading.
  5. The heading Kalman filter is run based on the inertial sensors and GPS heading, if new. This is the navigation portion of Navigate, Guide, Control.
  6. The waypoints are consulted and updated as necessary, and the heading to the correct waypoint is calculated. This is the Guide portion of Navigate, Guide, Control.
  7. The difference between the navigation heading and guidance heading is used to calculate the steering. This is the control part of Navigate, Guide, Control.
  8. The control value is written to the steering PWM.
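That main loop can be sketched as follows. Every function body here is an empty stand-in for the real code, and the names are mine, not YukariII's; only the loop order and the every-10th magnetometer cadence come from the list above.

```cpp
// Stand-ins for the real sensor/filter/guidance/control code.
void readInertial()            { }
void logInertialPacket()       { }
void readAndLogMagnetic()      { }
void pollAndParseNmea()        { }
void runHeadingFilter()        { }             // Navigate
void updateWaypointHeading()   { }             // Guide
int  computeSteering()         { return 0; }   // Control
void writeSteeringPwm(int)     { }

int magCalls = 0;  // instrumentation for this sketch only

// One pass through the major loop, numbered as in the list above.
void mainLoopOnce(int iteration) {
  readInertial();                       // 1: read inertial sensors
  logInertialPacket();                  // 2: log inertial packet
  if (iteration % 10 == 0) {            // 3: every 10th pass, do magnetics
    readAndLogMagnetic();
    ++magCalls;
  }
  pollAndParseNmea();                   // 4: drain serial, parse NMEA
  runHeadingFilter();                   // 5: Navigate
  updateWaypointHeading();              // 6: Guide
  int cmd = computeSteering();          // 7: Control
  writeSteeringPwm(cmd);                // 8: drive the steering PWM
}
```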

Remind me why I am doing this again?

By the third day his eyes ached unbearably and his spectacles needed wiping every few minutes. It was like struggling with some crushing physical task, something which one had the right to refuse and which one was nevertheless neurotically anxious to accomplish.
-- George Orwell, 1984

Here we are, three weeks until the contest. I have no motivation to do this, but at the same time no desire to call it off. As I hate making decisions by default, that means I have to do it. Since I am unlikely to win the contest, nothing tangible is gained by finishing. Last time, I was tremendously motivated, to the point of using every waking hour, even to the point of taking vacation time to work on the robot. This time I can barely motivate myself to take a weekend to work on it. I just can't get into it.

Yesterday I went to the course. As the newspaper says, "'Tis a privilege to live in Colorado". One benefit is that with 20 minutes of driving and $6.25 to get in the gate, I can visit the course whenever I want. I did so yesterday, with the robot brain on a breadboard. I had the GPS on and locked, connected through the Arduino Nano used as a pure passthrough for its FT232 chip. This was piped through the SiRFDemo program running on Natsumi, and recorded.

Yesterday, however, they were setting up for the Boulder Triathlon. I guess I take a certain amount of inspiration from the runners in that race. Thousands of people join up, most of them with no thought of winning, just finishing. Similarly, I do not plan on winning. I just want to do this so that I can say that I finished something. I am not doing this again by myself. Next time, I plan on being part of a team, preferably as a sponsor/adviser.

In any case, here is the plan. I am going to do this by waypoint navigation. The robot will continuously estimate its 2D position and heading (speed doesn't matter). The position estimate will come straight from the GPS. The heading will be a Kalman filtered result from the GPS heading (to remove gyro biasing) and the gyro (for precision and response time). Perhaps GPS position will work in as well, if I can figure out how.

The hardest part is deciding what to do when we are close to the waypoint. I can easily imagine the waypoint getting inside of the robot's turning circle and the thing chasing its tail forever in vain.

Imagine we have three consecutive waypoints, one at the start line (point 0), one at the first turn (point 1), and one at the second turn (point 2). Also imagine we have just left the start line and are heading towards the first turn. How do we know when to turn? Also, what course do we steer heading towards the point? I think we want to set an imaginary steer-to point (point 1') about 20 meters beyond the actual waypoint 1, on the line from point 0 through point 1. We steer towards 1', while keeping track of the dot product between the vector from point 1 to point 0 and the vector from point 1 to the robot. When this dot-product becomes negative, we have crossed a line perpendicular to the line from 0 to 1. At this time, we set the target point to point 2', 20m beyond waypoint 2 on the line from 1 to 2. We then change all the indexes and steer towards point 2'.

The 20m is arbitrary, but intended to be larger than the GPS error.
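The steer-to point and the crossing test come down to a few lines of vector arithmetic. Helper names are hypothetical, and flat 2D coordinates (e.g. meters east/north) are assumed:

```cpp
#include <cmath>

struct Vec2 { double x, y; };
Vec2   sub(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
double dot(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }

// The point `extend` meters beyond wp1 on the line from wp0 through wp1
// (e.g. point 1' with extend = 20).
Vec2 steerToPoint(Vec2 wp0, Vec2 wp1, double extend) {
  Vec2 d = sub(wp1, wp0);
  double len = std::sqrt(dot(d, d));
  return { wp1.x + d.x/len*extend, wp1.y + d.y/len*extend };
}

// True once the robot has crossed the line through wp1 perpendicular to
// wp0->wp1: the dot product of (wp0 - wp1) with (robot - wp1) goes negative.
bool passedWaypoint(Vec2 wp0, Vec2 wp1, Vec2 robot) {
  return dot(sub(wp0, wp1), sub(robot, wp1)) < 0;
}
```

Because the test is a half-plane crossing rather than a proximity circle, the waypoint can never end up inside the turning circle with the robot chasing its tail; the robot always passes the line eventually.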