Saturday, June 27, 2015

Splitting a git repository

I keep all my hobby code in one big git repository. The problem is that it is too big: something like 28GB, the vast majority of which is data, meaning the results of data-collection processes such as photos, rocketometer data, etc. It also includes data acquired from other sources, such as SPICE kernels, shape models, image maps, etc. All that bulk makes everything slow.

So, the solution is to split the repository into three: code, data, and docs.

Now, how do we split a repository? One option is to build the new repositories from the data we already have: walk the existing one commit by commit, generate a diff, filter that diff so that only the right files survive, then make a matching commit in the new repository, keeping the commit message, user information, and timestamp.

That seems hard, and it seems like something someone must have done before, so I did some research to see if anyone else has done this. Mostly I was looking for people who permanently remove files from a repository. My idea: make three copies of the existing repository, then remove the files that don't belong from each copy, using the methods described on the Internet.

In the course of my research, I found a program called BFG (the BFG Repo-Cleaner). It removes a given set of files from the entire history of a repository, much more quickly than git filter-branch. BFG deliberately never touches the latest commit, so we first have to remove the files from the head of the repository ourselves, then let BFG scrub the history. So we do things like this:

  1. Make three copies of the original workspace with git clone --mirror. These will become the three new repositories, so call them code.git, data.git, and docs.git.
  2. For each repository, check it out with git clone (no --mirror). This working copy is temporary. We will use the code repository as the typical example.
  3. Because of the way BFG works, we have to rename any folders which are called Data or Docs: BFG doesn't remove just /Data, but any folder named Data anywhere in the tree. As it turned out, there were some such folders. They needed to be either renamed (to data and docs, note the lower case), deleted, or moved.
  4. Remove the files from the working copy with git rm -r Data Docs.
  5. Move all the files and folders from code down one level, since the old code folder will be the new root: git mv code/* . (git doesn't track directories, so once its contents are moved out, the leftover empty code folder just needs an rmdir on disk).
  6. Commit and push the removal and the move.
  7. Now go back to the new mirrored repository code.git and run java -jar ~/bfg-1.12.3.jar --delete-folders Data and then java -jar ~/bfg-1.12.3.jar --delete-folders Docs. This process runs very quickly.
  8. At this point the repository is cleaned and you can't see any evidence of the files, but garbage objects remain which must be removed before the repository actually takes less space. To do this, run git reflog expire --expire=now --all --verbose and git gc --prune=now --aggressive. Git garbage collection takes a long time and an immense amount of memory, more than the 8GiB my file server actually has. I had to run the gc over the network on my game rig, which has 32GiB.
  9. Now the repository is shrunk: the code repository went from 28GB to 3GB.

Friday, June 26, 2015

More on sensor lag

I have done further analysis of the practice runs, and also hooked up the PPS and recorded some data while sitting still with a GPS fix.

From the PPS data, the robot takes an average of 0.54 s from the time the PPS fires to the time it finishes receiving and parsing the RMC sentence. This holds four seconds out of every five, but every fifth second the GPS sends a burst of GSA and GSV sentences, which delays the RMC to an agonizing 0.96 s.

During practice run 3, the log file YUKARI02.SDS was recorded when the robot ran into the barrel. On the second leg it reached a peak speed of 6.57 kt, or over 354 cm'/s, and sustained over 300 cm'/s. This is around 3 m/s.

So, the sentence is typically processed 0.54 s (150 cm') late, but up to 0.96 s (over 300 cm') late. On top of that, the fixes themselves are uniformly randomly distributed relative to the actual time of waypoint passage, with that delay equally likely to be anywhere between 0 and 1 second, which is another 0-300 cm' of delay. That means the robot will detect and react to the waypoint somewhere between 1.5 m and 6 m beyond it. I pulled the waypoint back 6.6 m when I did the short waypoint test.
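As a sanity check on those numbers (assuming a sustained speed of about 300 cm'/s, i.e. 3 m/s), the detection distance past the waypoint is

$$d = v\,(t_{\mathrm{parse}} + t_{\mathrm{fix}}),\qquad t_{\mathrm{parse}} \in [0.54, 0.96]\ \mathrm{s},\quad t_{\mathrm{fix}} \in [0, 1]\ \mathrm{s}$$

which spans from about 3 m/s x 0.54 s = 1.6 m up to 3 m/s x 1.96 s = 5.9 m, consistent with the 1.5 m to 6 m range above.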

Tuesday, June 23, 2015

Yukari 4 and nowcasting

I have been convinced to do Yukari 4 by, of all people, my lovely wife Kellie. I have a theory about what is wrong, and how to fix it with one wire and a software change.

Hey mom! I'm on the Internet!

I just noticed this picture on the Sparkfun AVC2015 site:

That's from AVC 2014, and that is Yukari II, breadboard and everything. Those are my toes on the right.

Friday, June 19, 2015

Multipath

This thing might actually work. The biggest problem I was having this morning was that the GPS was going crazy. I think it is multipath, and I think putting a big plane of copper below the antenna as a ground plane will fix it.

In the meantime, I have calibrated all the gyro axes, which needed a 15% scaling in order to read accurately. I trust the gyro compass far more than the GPS at this point, so I will tune the configuration parameters so that the GPS heading is ignored.

Wednesday, June 17, 2015

Operations Manual

The instructions for the robot in its current state, which might actually work.

  1. The robot has two batteries: one for running the motors and one for running the controller. Make sure both are charged. Use the motor battery charger to charge the motor battery; the battery may get hot during a charge, so its instructions say to charge it outside the vehicle. The controller may be charged either with the 5V/3.3V FTDI cable plugged into its jack, or with a USB Mini-B cable plugged into the jack on the Loginator. Only do one of these, and don't try to charge with a 3.3V FTDI module.
  2. Make sure that the SD cards are in place. The Loginator card is the 16GB red/gray SanDisk card. It is a good idea to format it (backing up old data first, of course) and to make sure it has the proper CONFIG.TXT for the course; 'make format' will do both of these things. The camera card can be formatted using the menu of the camera.
  3. Set the switches as follows: SW1 (GPS TX->Loginator RX) ON, SW2 (FTDI TX->Loginator RX) OFF, SW3 (Bootstrap mode) ON.
  4. Make sure the GPS is in lock. You should see the red PPS light on the GPS interface blink once per second.
  5. Turn on the Loginator. You will see the light blink green or cyan, as the GPS and bootloader try and fail to talk to each other.
  6. Switch SW3 to OFF. This will boot the robot into the main firmware the next time the controller is reset.
  7. Turn on the motor power by sliding the switch forward. The green light on the ESC should blink but then stay off. If the steering is off-center, manually turn the wheels to center.
  8. Take the robot to the starting line. Approach the line from behind, and walk for several seconds in the direction the robot needs to go for the first leg. This sets the GPS heading appropriately.
  9. Set the robot on the line.
  10. Turn on the camera and start recording.
  11. At the starting signal, push the RESET button. This will boot the robot and it will take off within 2-4 seconds.
  12. When the robot comes to a halt, either due to finishing or to hitting an obstacle, turn the motor power off by sliding the switch backwards.
  13. Turn off the Loginator.
  14. Stop recording on the camera and turn it off.
  15. Remove the card, plug it into a computer, and copy the data from the card onto the host computer.
  16. Use extractPacket and a spreadsheet to analyze the data. Drop the NMEA files it produces into Google Earth for map visualization.
  17. Pull the MicroSD card from the camera and copy the results to the host.
The intended startup is slightly different:

  1. Make sure both batteries are charged as above.
  2. Make sure the SD cards are in place as above.
  3. Set the switches to SW1 ON, SW2 OFF, SW3 OFF (changed from above). We will bring the robot up into the main firmware, because it won't go until we push a button.
  4. Make sure the GPS is in lock as above.
  5. Turn on the Loginator as above. The red channel of the LED should come on as well, so the light will blink red/white. This is not an error condition; error conditions are indicated by a red or blue light blinking out a blink code, one second on, one second off. The robot is now recording GPS and inertial measurements, and actively sending zero throttle and steering commands.
  6. SW3 will already be off.
  7. Turn the motor on as above. It should center the steering automatically, and the drive wheels should not spin. The green light on the ESC should be steady on.
  8. Take the robot to the starting line by any route. The robot no longer uses the GPS heading to initialize.
  9. Set the robot so that it is pointed as accurately as possible towards the first waypoint. The robot assumes that its initial heading is towards the first waypoint, and uses that as its initial reference.
  10. Turn on the camera and start recording as above. Do this at least 5 seconds before the start of the race. Any jiggling is in principle remembered forever and incorporated into Average-G, but in practice is more than 99% forgotten after 2.5 seconds.
  11. At the starting signal, push the STOP button on the Loginator or the green button. Do this quickly: any jiggling in the half second or so before you push the button will be incorporated into Average-G, and we don't want that.
  12. When the robot stops, turn the motor off as above.
  13. Turn off the Loginator as above.
  14. Stop camera recording as above.
  15. Read the log as above.
  16. Use the analysis tools as above.
  17. Copy the camera video as above.
To burn a new program into the robot:

  1. Set SW1 to OFF and SW2 to ON. Don't let both SW1 and SW2 be on at the same time.
  2. Set SW3 to ON.
  3. Plug the FTDI cord into its jack, paying attention to the colors; the black wire is closest to the corner. Make sure there is no USB cord plugged into the Loginator USB jack.
  4. Reset the robot by pushing RESET on the Loginator. The robot should come up in bootloader mode.
  5. Use 'make program' to program the robot.

Tuesday, June 16, 2015

Heading Reference

The hardest thing is getting an accurate heading reference. There are at least four possible sources:

  1. Magnetic compass - We are carrying one, but it is close to the motor. I'll have to check some of the data to see if the motors influence it a lot.
  2. Gyroscope - Subject to random walk drift and needs an accurate starting heading
  3. GPS heading - The GPS calculates a heading itself and reports it in RMC
  4. GPS delta-position - Use trigonometry on the current position and the position 1 second ago
A proper solution would use a Kalman filter. I haven't gotten that to work on the robot yet. So, I am using primarily source 2, with source 3 to initialize and source 4 to correct when in motion. It goes like this:

  • We initialize the gyro using "Average-G", the average of the last 50 gyro readings on each axis. This average is subtracted from the in-motion readings to produce a zero-biased rotation rate.
  • We use the first RMC sentence to get our initial heading.
  • Once in motion, we use the gyro as the primary heading reference. We set up an inertial frame parallel to the initial orientation of the robot, with an initial identity quaternion. We use normal quaternion integration of the gyro readings to get a new gyro-based orientation, and use that orientation to transform the vector pointing forward (the nose vector) from the body frame to the reference frame. The gyro heading is then the angle between the initial nose vector (the same as the body nose vector) and the current nose vector. We use the initial heading to calculate the offset between the gyro heading and true north, and add that offset to the gyro heading to get the instantaneous heading relative to true north.
  • Whenever a large maneuver is executed, i.e. a large yaw rate is sensed, a countdown is set to 400 readings (4 seconds). When this countdown expires, and if the speed of the vehicle is high enough, we use method 4 to get the heading and calculate a new heading offset from it. A code sketch of this scheme follows the list.
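To make that concrete, here is a minimal sketch of the quaternion bookkeeping, written from the description above. This is not the flight code: every name is invented for illustration, and it assumes bias-corrected gyro readings in rad/s with the body nose vector along +x.

```cpp
// Minimal sketch of the gyro heading scheme above. Not the flight code;
// every name here is invented for illustration.
#include <cmath>

struct Vec3 { double x, y, z; };

struct Quaternion {
  double w = 1, x = 0, y = 0, z = 0;  // identity: reference frame = initial body frame

  // First-order quaternion integration: q += (dt/2) * q * (0, omega).
  // omega is a zero-biased gyro reading (Average-G already subtracted), in rad/s.
  void integrate(const Vec3& omega, double dt) {
    double dw = -0.5 * dt * (x*omega.x + y*omega.y + z*omega.z);
    double dx =  0.5 * dt * (w*omega.x + y*omega.z - z*omega.y);
    double dy =  0.5 * dt * (w*omega.y + z*omega.x - x*omega.z);
    double dz =  0.5 * dt * (w*omega.z + x*omega.y - y*omega.x);
    w += dw; x += dx; y += dy; z += dz;
    double n = std::sqrt(w*w + x*x + y*y + z*z);  // renormalize to unit length
    w /= n; x /= n; y /= n; z /= n;
  }

  // Rotate a body-frame vector into the reference frame:
  // v' = v + w*t + q_vec x t, where t = 2*(q_vec x v)
  Vec3 rotate(const Vec3& v) const {
    Vec3 t{2*(y*v.z - z*v.y), 2*(z*v.x - x*v.z), 2*(x*v.y - y*v.x)};
    return {v.x + w*t.x + (y*t.z - z*t.y),
            v.y + w*t.y + (z*t.x - x*t.z),
            v.z + w*t.z + (x*t.y - y*t.x)};
  }
};

// Gyro heading: the angle between the initial nose vector and the current one,
// found by rotating the body nose vector (assumed +x) into the reference frame.
double gyroHeading(const Quaternion& q) {
  Vec3 nose = q.rotate({1, 0, 0});
  return std::atan2(nose.y, nose.x);
}

// True heading = gyroHeading(q) + headingOffset. headingOffset is set from the
// first RMC sentence, and recomputed from the delta-position heading after the
// 400-reading post-maneuver countdown expires (if the speed is high enough):
//   headingOffset = deltaPositionHeading - gyroHeading(q);
```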
Improvements are as follows:

  • We know the initial heading from the first two waypoints. Provided the robot is accurately aimed toward the first waypoint, that will be plenty good enough.
  • We are going to add a green button. Before the button is pushed, the robot will continually record the gyro readings, remembering the last 50. On each new reading, we pop the oldest reading out of the queue and average it into the running total, weighted 1 part for the popped reading to 49 parts for the old average. In effect this averages all the measurements since robot reset, but each old measurement carries an exponentially decaying weight. Some code waits until 49 measurements have accumulated before using the full weighting of 49. We remember 50 measurements because the very act of pushing the green button will shake the robot; holding back the newest readings keeps that shake out of the average. A sketch of this scheme follows the list.
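Here is a minimal sketch of that delayed exponential average for a single gyro axis, assuming roughly 100 gyro readings per second; the struct and all names are mine, not the flight code.

```cpp
// Sketch of the delayed exponential Average-G for one gyro axis. Illustrative
// names, not the flight code. Assumes roughly 100 gyro readings per second.
const int QUEUE_LEN = 50;  // hold back the newest 50 readings (about 0.5 s)

struct AverageG {
  double queue[QUEUE_LEN];  // ring buffer of the newest readings
  int filled = 0;           // entries in the queue so far (up to QUEUE_LEN)
  int head = 0;             // index of the oldest entry once the queue is full
  long nAveraged = 0;       // readings folded into avg so far
  double avg = 0;           // exponentially weighted average, 50 readings behind

  void push(double reading) {
    if (filled < QUEUE_LEN) {        // still filling the holdback buffer
      queue[filled++] = reading;
      return;
    }
    double oldest = queue[head];     // the reading from 50 samples ago
    // Ramp the old-average weight up to 49 so early readings aren't swamped
    double oldWeight = (nAveraged < QUEUE_LEN - 1) ? double(nAveraged)
                                                   : double(QUEUE_LEN - 1);
    avg = (avg * oldWeight + oldest) / (oldWeight + 1);  // 1 part new, up to 49 old
    ++nAveraged;
    queue[head] = reading;           // the newest reading replaces the popped one
    head = (head + 1) % QUEUE_LEN;
  }
};
```

At 100 readings per second, the 49/50 decay factor matches the numbers above: a measurement retains (49/50)^250, or under 1%, of its weight after 2.5 seconds, and the 50-reading holdback means the shake from actually pressing the button has not yet been folded into the average when the run starts.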
With these improvements, I expect the initial heading to be much better, and I expect to be able to run almost entirely on gyro readings.

Monday, June 15, 2015

(Real life) Failure

Time to test the robot in the real world. Because it is faster than I am, and because I don't trust it, and because I am testing it on the open road, I am testing it by carrying it, walking with it, and turning myself and the robot as its steering wheels turn. Here is the first result: The green line is the intended course, and the blue track is the measured track (it actually started almost exactly on the start of the green line, and never travelled towards the west).


Notice that it didn't turn. Analysis of the guidance packets suggests that the robot thought it was covering ground at only one tenth the rate we expected. As a consequence, when it reached the waypoint in reality, it thought it had covered only one tenth of the distance to the waypoint. What happened? It was a problem parsing the GPS data. The simulator gave out positions with 7 digits after the decimal place, while the real GPS uses 4. In both cases, the intent was to force the fractional part to a length of 5 digits. It turned out that the code was forcing the part after the decimal point to 4 digits whenever the number needed to be stretched, which is the case for real data and not for simulated data. The effect was that the fractional minute part was only 1/10 of what we expected. As it happens, the test course does not cross a whole-minute grid line, so the gigantic leap when the robot thinks it is going from 11.09999 to 12.00000 never happens; the robot simply thinks it is going 1/10 as far as it really is going.
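The fix, then, is to scale the fractional part by however many digits are actually present, instead of padding to a fixed width. Here is a sketch of that idea; it is illustrative only, not the robot's actual parser.

```cpp
// Illustrative digit-count-proof parsing of an NMEA ddmm.mmmm field.
// Not the robot's actual parser. The simulator emitted 7 fractional
// digits, the real GPS emits 4; scaling by the digits present handles both.
#include <cctype>
#include <cstdio>

double parseDdmm(const char* s) {
  double value = 0;
  while (std::isdigit((unsigned char)*s)) value = value * 10 + (*s++ - '0');
  if (*s == '.') {
    ++s;
    double scale = 0.1;                        // each successive digit is worth 10x less,
    while (std::isdigit((unsigned char)*s)) {  // however many digits there are
      value += (*s++ - '0') * scale;
      scale *= 0.1;
    }
  }
  return value;  // degrees*100 + minutes; the caller splits them apart
}

int main() {
  // Both forms parse to (nearly) the same value; there is no padding step to get wrong.
  std::printf("%.7f\n", parseDdmm("4005.4718000"));  // simulated precision
  std::printf("%.7f\n", parseDdmm("4005.4718"));     // real GPS precision
}
```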

Furthermore, this test inspired a new simulator/reality hybrid, called playback. The playback program is quite similar in structure to the simhost program from before; the difference is that it reads a recording from a real robot run rather than simulating from scratch. Whenever a gyro packet is encountered in the recording, a simulated gyro reading with the same DN and timestamp is created. Whenever an NMEA packet is found, it is stuffed into the virtual GPS serial port. In both cases the simulated timestamp is set to the time indicated in the packet.
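Structurally, playback is a simple dispatch on packet type. The sketch below shows the shape of it; the names, apid values, and stubbed interfaces are all invented, not the real playback source.

```cpp
// Shape of the playback dispatch loop. Invented names and stub interfaces;
// the real playback program differs in the details.
#include <cstdint>

struct Packet { uint16_t apid; double timestamp; /* payload omitted */ };

const uint16_t APID_GYRO = 0x01;  // hypothetical apid values
const uint16_t APID_NMEA = 0x02;

// Stubs standing in for the recording reader and the virtual robot interfaces
bool readPacket(Packet& p)                { return false; }  // next packet from the log
void setSimulatedTime(double t)           {}  // advance the simulated clock to t
void injectGyroReading(const Packet& p)   {}  // simulated reading, same DN and timestamp
void stuffVirtualGpsPort(const Packet& p) {}  // feed the NMEA bytes to the robot code

int main() {
  Packet p;
  while (readPacket(p)) {
    setSimulatedTime(p.timestamp);  // simulated time follows the recording
    switch (p.apid) {
      case APID_GYRO: injectGyroReading(p);   break;
      case APID_NMEA: stuffVirtualGpsPort(p); break;
      default: break;  // other recorded packets are ignored on playback
    }
  }
}
```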

I was able to put the actual robot code under the microscope with gdb and the playback seen above, and by stepping through it, I found the problem with the NMEA parsing. I can't use playback to completely test the fix, but I can check whether the robot tried to turn at the waypoint.

Results:
The simulator was adjusted to give only 4 digits after the decimal and was re-run against the actual AVC course, to confirm that the fixed code still works there, and it does.
The donuts are "victory donuts" done after the robot crosses the finish line. Victory donuts are a bug, but they happen after the robot has already crossed the finish line.

The playback also changes waypoints at about the right time, once I actually have it using the right course. Too many times I ran the playback with the AVC course rather than the local test course.

Friday, June 12, 2015

(Simulated) Success!

If my robot does this at the competition, it will represent full mission success: scoring at least one point.
The left is a spreadsheet plot of the simulated position, and the right is the simulated GPS data dropped into Google Earth.

I had gone into this thinking that the software was good and the hardware flaked out last year. I made a bunch of changes to the software, and it got worse before it got better; I suspect there were enough bugs in the old code that it wouldn't have worked even if the hardware had held up. Last year, it did try to turn, though.

This proves the value of a good simulation. I have simulated the race more than 100 times, including times when the robot just drove around in circles. The above run was the first which actually successfully sought a waypoint. More about the simulation below the fold.

Thursday, June 11, 2015

Telemetry Database

On one of my previous for-work projects, I was involved in flight software for an instrument bound for low Earth orbit. I am Inordinately Proud of that, because the instrument is still working, in year eight of its two-year mission, and it has never crashed. However, that's not what I am talking about today.

One of the things that project used was a command/telemetry database. In the spirit of One Authoritative Source, that database was used to generate PDF documentation for humans (which was then printed out and put into a big three-ring binder for us to take notes on), and it was also used by the ground system to create command packets and interpret telemetry packets. Further, the database was used to generate some of the flight software code involved in interpreting commands and generating telemetry, mostly the structs the flight software used to parse commands and form telemetry. The code had to use the right struct names, but the actual definition of each struct came from the cmd/tlm db.

I have wanted to implement this in Yukari for a long time, and I have finally done it. The TelemetryDatabase.ods file is the authoritative source for all packet creation and interpretation. The table contains one row for each packet field, grouped into contiguous blocks describing each packet, with eleven columns:

  1. apid - Application Process ID, the CCSDS term for "packet ID". Each different kind of packet has a different apid, and all packets with the same apid have the same structure.
  2. shortName - Name of the packet; must be a valid C++ identifier.
  3. wrapRobot - Set to 'y' to automatically generate code to start and end the packet. This is typically set to 'n' when the header is generated in one routine (most commonly in main.cpp) and the body is generated somewhere else, like a sensor routine.
  4. extract - Set to 'y' to extract this kind of packet with extractPacket.
  5. hasTC - A C++ expression which, when evaluated in the right context in the embedded code, produces the timestamp to use on this packet. If the packet doesn't require a timestamp, this field is blank.
  6. extractor - One of "csv", "source", "dump". Controls how the extractPacket ground support program creates extracted packet tables.
  7. source - A C++ expression which, when evaluated in the right context in the embedded code, produces the value for this field.
  8. field - Field name as seen by the outside world.
  9. type - C++ field type. If the field is an array (char[] is most typical), include the array size right after the type.
  10. unit - Unit of this field, typically some SI unit for floating-point values, or DN or TC ticks for integer values.
  11. description - Free-form description of the field in English.
Columns 1 through 6 only need to be set for the first field in a packet. A blank apid is treated as having the same apid (and therefore being in the same packet) as the previous field. Fields in a packet must be contiguous, apids must be unique per packet, and the order of fields matters, but the order of apids doesn't.

This table is converted from ODS format to CSV using ssconvert, a program which comes with Gnumeric. It is then run through a perl script tlmDb.pl, which generates the following things:
  • A bunch of include files of the form write_packet_$shortName.inc which contain C++ code to write the packets in the embedded code.
  • extract_head.inc - This file contains all the packets as C++ structs. It is used in the head of extractPacket.cpp, which is why it is called extract_head.inc.
  • extract_body.inc - This file contains a series of blocks of code. Each one starts an appropriate dump file if necessary, reverses each field from (CCSDS-mandated) big-endian to little-endian as necessary, then dumps the packet into the dump file. This is where the extractor field above comes into play: "csv" dumps the packet into a CSV file, with all fields printed as ASCII decimal, while "dump" is used for packets with a variable-length payload, like source images and NMEA sentences, whose last field is simply concatenated onto the dump file.
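To make that concrete, here is roughly what the generated files might contain for a hypothetical three-field gyro packet. The apid, the field names, and the start_packet/fill16/end_packet calls are all invented for illustration; the real tlmDb.pl output will differ.

```cpp
// Hypothetical generated output for a packet with shortName "fastGyro",
// apid 0x21. Everything here is invented for illustration.

// write_packet_fastGyro.inc: #included in the embedded code where the packet
// is built. Each fill line comes from one row's 'source' expression.
start_packet(0x21, readTimestamp());  // header; timestamp from the hasTC expression
fill16(readGyroX());                  // field gx, type int16_t, unit DN
fill16(readGyroY());                  // field gy
fill16(readGyroZ());                  // field gz
end_packet();

// extract_head.inc: the same packet as a ground-side struct for extractPacket,
// one member per row of the table.
struct fastGyro {
  ccsdsHeader head;    // apid 0x21
  int16_t gx, gy, gz;  // unit: DN
};
```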
The table itself is useful as a human-readable reference for the packets. Since it is the One Authoritative Source, any change here is immediately reflected in the code. 

This is worked into the makefiles, particularly the makefile for the packet library. Since the output of tlmDb.pl depends on the table, we treat extract_head.inc as the primary output (it will always be present). We make it depend on TelemetryDatabase.csv, which in turn depends on TelemetryDatabase.ods (we keep the telemetry database in spreadsheet form to take advantage of formulas). All C++ files which create packets then use #include "write_packet_$shortName.inc" in the proper context, and depend on extract_head.inc even if they don't use that file directly.

I spent some time scouring the Internet for a make-friendly program that could convert an ODS spreadsheet to CSV. I finally found ssconvert after trying and failing to get soffice --convert-to to work. I also tried the program unoconv, but that appears to be a wrapper around soffice.

Saturday, June 6, 2015

June 6 test

This is a simple open-loop test. It runs purely on timing (a code sketch follows the list):
  1. Five second delay after reset
  2. Throttle up and wait two seconds
  3. Turn hard right and wait one second
  4. Straighten up and wait for one second
  5. Throttle down
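For reference, the whole test fits in a dozen lines of code. This sketch uses a made-up motor interface (delaySeconds, setThrottle, setSteering); the real code drives the ESC and steering servo directly.

```cpp
// The June 6 open-loop test as code. Made-up interface names, not flight code.
void delaySeconds(int s)      { /* spin on the timer */ }
void setThrottle(int percent) { /* command the ESC; 0 = stopped */ }
void setSteering(int percent) { /* command the servo; 0 = center, + = right */ }

void openLoopTest() {
  delaySeconds(5);   // 1. five second delay after reset
  setThrottle(50);   // 2. throttle up...
  delaySeconds(2);   //    ...and wait two seconds
  setSteering(100);  // 3. turn hard right and wait one second
  delaySeconds(1);
  setSteering(0);    // 4. straighten up and wait one second
  delaySeconds(1);
  setThrottle(0);    // 5. throttle down
}
```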
This is all exactly the same hardware as last year, except for the breadboard and the wire connecting the GPS. Last year I failed because the board shook apart; this year I am using a soldered main board. As a result, I have lots of confidence in the software, which wanted to turn last year but couldn't because the wire had shaken loose. I'll start testing actual navigation, guidance, and control next week.