SPI is a nice, simple, uncomplicated way of communicating between two or more embedded devices. Set a couple of registers defining clock rate, phase, and polarity, and you are ready to go at up to about 10Mb/s. It's a convenient way to talk to microSD cards, accelerometers, basically anything on the same board as an embedded device. Lots of device-defined protocols can be layered on top of it to do anything the host and device agree to.
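As a taste of just how little there is to it, here is a minimal bit-banged mode-0 transfer sketched in C++. The pin structure is a stand-in for real GPIO register writes (on an LPC2148 those would be FIOSET/FIOCLR/FIOPIN accesses), and the slave's response is simulated so the sketch runs on its own:

```cpp
#include <cstdint>

// Stand-in for GPIO pins; on real hardware these would be register accesses.
struct SpiPins {
  bool sck = false, mosi = false, miso = false;
};

// Mode 0 (CPOL=0, CPHA=0): shift MSB first, sample MISO on the rising edge.
// Full duplex: returns the byte clocked in while `out` is clocked out.
// `slaveResponse` simulates the slave driving MISO, purely for illustration.
uint8_t spiTransfer(SpiPins& p, uint8_t out, uint8_t slaveResponse) {
  uint8_t in = 0;
  for (int bit = 7; bit >= 0; --bit) {
    p.mosi = (out >> bit) & 1;            // set data while SCK is low
    p.miso = (slaveResponse >> bit) & 1;  // stand-in for the slave's output
    p.sck = true;                         // rising edge: both sides sample
    in = (in << 1) | (p.miso ? 1 : 0);
    p.sck = false;                        // falling edge: shift next bit
  }
  return in;
}
```

Hardware SPI peripherals do exactly this shifting in silicon; the registers mentioned above just tell the peripheral the clock rate and which edges to use.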
USB is a nice, COMPLICATED way of communicating between a host and multiple devices, and multiple applications within the devices. It's not just a bus protocol, it's practically a network in itself. This has its advantages and drawbacks. If my device learns how to speak Mass Storage, any host computer in the world can use it. But it is considerably more complicated: I can't just look it up in a reference guide and bit-bang the protocol out like I can with SPI or I2C.
USB is in fact a stack of protocols, much like HTTP/TCP/IP/Ethernet. Some layers are handled by the hardware autonomously, some need the cooperation of the hardware and the firmware, and some are left entirely to the firmware.
The USB project I had been working off of, LPCUSB, is a set of C routines with no readily apparent structure. You can trace through the handlers, only to find handlers on top of handlers and handlers all the way down. There is code just to work with the USB hardware, then code to implement the mass storage class, and then code to implement the serial port class. Much of the interaction between these is through callbacks. In reorganizing it into C++, I had the idea that the device could operate both as mass storage and serial at the same time: there would be a low-level USB class, and on top of that a Mass Storage class and, separately, a Serial class, both of which could be active at the same time.
I am going to abandon that idea for now; I don't know enough about the stack to do it yet. So, we do a USB class with several pure virtual methods, then an MSC subclass which implements those virtuals purely as mass storage, without worrying about sharing. Likewise, we will have a USB control endpoint handler, then an MSC subclass which does what is needed for MSC.
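That plan can be sketched as a base class with pure virtual hooks and an MSC subclass. All names here are hypothetical, not LPCUSB's, and the real control-endpoint traffic is far richer than this:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch of the layering: the base class owns the
// hardware-facing plumbing and defers class-specific behavior to hooks.
class USB {
public:
  virtual ~USB() = default;
  // Called when a setup packet arrives on the control endpoint.
  virtual bool handleSetup(const uint8_t* packet, size_t len) = 0;
  // Called when data arrives on a bulk OUT endpoint.
  virtual void handleBulkOut(uint8_t ep, const uint8_t* data, size_t len) = 0;
};

// Mass-storage subclass: implements the hooks purely as MSC, no sharing.
class USBMassStorage : public USB {
public:
  bool handleSetup(const uint8_t* packet, size_t len) override {
    // bmRequestType 0xA1, bRequest 0xFE is the MSC "Get Max LUN" request;
    // recognizing only that one request keeps the sketch small.
    return len >= 2 && packet[0] == 0xA1 && packet[1] == 0xFE;
  }
  void handleBulkOut(uint8_t, const uint8_t*, size_t) override {
    // Real code: parse the Command Block Wrapper, run the SCSI command, etc.
  }
};
```

A Serial subclass would later implement the same hooks with CDC behavior, which is the point of putting the virtuals in the base class.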
Friday, October 12, 2012
Battle of Compression
Everyone else was doing it, so I might as well give it a try also. Here's my use case: I want to compress the C++ source code and anything else needed to rebuild my firmware (mostly Makefiles) into one tight little package, then append that package to the firmware itself. Naturally, smaller is better. In particular, I would like the Bootloader++ code and source pack (I'll explain it when I'm ready to publish, but it's a bootloader for a Logomatic-type circuit which handles FAT32 and SDHC) to fit within the 64kiB it has allocated for itself.
So, the test case. I already have a rule in my makefile to pack the code:
$(TARGET).tar.$(TAR_EXT): $(ALLTAR)
	$(CC) --version --verbose > /tmp/gccversion.txt 2>&1
	tar $(TAR_FORMAT)cvf $(TARGET).tar.$(TAR_EXT) -C .. $(addprefix $(TARGETBASE)/, $(ALLTAR)) /tmp/gccversion.txt

$(TARGET).tar.$(TAR_EXT).o: $(TARGET).tar.$(TAR_EXT)
	$(OBJCOPY) -I binary -O elf32-littlearm $(TARGET).tar.$(TAR_EXT) $(TARGET).tar.$(TAR_EXT).o --rename-section .data=.xz -B arm
I'm quite proud of the latter, as it packs the archive into a normal object file, which my linker script makes sure gets packed into the final firmware image, with symbols bracketing it so I can dump just the source code bundle.
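For reference, objcopy -I binary emits bracketing symbols (named like _binary_<file>_start and _binary_<file>_end) for the converted blob. Here is a sketch of how firmware code might find and sanity-check the embedded bundle, with stand-in data and shortened symbol names so it runs on its own:

```cpp
#include <cstddef>

// In the real build, objcopy emits _binary_..._start/_end symbols and the
// linker script places the renamed .xz section inside the firmware image.
// Stand-in data and shortened names here so the sketch is self-contained.
static const unsigned char bundle[] = {0xFD, '7', 'z', 'X', 'Z', 0x00};
static const unsigned char* const xz_start = bundle;
static const unsigned char* const xz_end = bundle + sizeof(bundle);

// Size of the embedded source bundle; a real dumper would stream these
// bytes out the serial port so the source can be recovered from the device.
std::size_t bundleSize() { return static_cast<std::size_t>(xz_end - xz_start); }

// Sanity check: an xz stream starts with the magic bytes FD 37 7A 58 5A 00.
bool looksLikeXz() {
  static const unsigned char magic[6] = {0xFD, '7', 'z', 'X', 'Z', 0x00};
  if (bundleSize() < 6) return false;
  for (int i = 0; i < 6; ++i)
    if (xz_start[i] != magic[i]) return false;
  return true;
}
```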
Anyway, we will look at our challengers:
- No compression, just a tar file. This one is actually a bit bigger than the total of the file sizes.
- gzip, the old standard, both with no special flags and with the -9 option.
- compress, the really old standard .Z file, using the (expired) patented LZW algorithm.
- bzip2, the second-generation compression algorithm, notable for both better compression and longer compression time than gzip, used both with no special flags and with the -9 option.
- The Lempel-Ziv-Markov algorithm, implemented by the Ubuntu commands lzip and xz. A third-generation compression algorithm: once again better compression, once again longer time.
- lzop, a compressor optimized for speed and memory consumption rather than size.
- PKZIP, implemented via the zip command available in Ubuntu. This might not be a fair test, as it is not compressing the tar file, but is in fact using its own method to compress each file individually. So, it has an index, plus each file is compressed anew, meaning there is no advantage from the previous file's compression.
- 7z, implemented via the 7z command available in Ubuntu. Same notes as with PKZIP.
- zpaq, a compressor which at each step tries several methods and picks the best. This one takes a monumental amount of time and memory, but seems to be worth it if minimum file size is the goal.

So, we notice a couple of things. One, sometimes -9 doesn't improve things measurably, and sometimes it makes things worse. Two, zpaq rocks out loud as far as compressing C++ source code goes. Its output is still larger than the firmware binary image, which is 12423 bytes. It might take more time and more memory than any other compressor, but all that time and memory is spent on a beefy desktop machine, and not in the Loginator.
Sunday, October 7, 2012
Gem of the Week - Kepler's and Newton's laws and universal gravitation
The discovery by Kepler of his laws of planetary motion is one of the more amazing bits of observational science, made all the more amazing by how few tools he had to work with. But that's not our gem of the week. Instead, we will see how Newton deduced that there is such a concept as universal gravitation, and proved that it worked. Actually, we won't see how he did it, but we will see how it can be done with modern techniques such as vectors.
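As a teaser of the modern route: for a circular orbit, equating Newton's gravitational acceleration GM/r^2 with the centripetal acceleration v^2/r gives T = 2*pi*sqrt(r^3/GM), which is Kepler's third law (T^2 proportional to r^3). A quick numerical check:

```cpp
#include <cmath>

// For a circular orbit, gravity supplies the centripetal force:
//   GM/r^2 = v^2/r  =>  v = sqrt(GM/r)  =>  T = 2*pi*r/v = 2*pi*sqrt(r^3/GM)
// so T^2 is proportional to r^3, and Kepler's third law drops right out.
double orbitPeriod(double GM, double r) {
  return 2.0 * M_PI * std::sqrt(r * r * r / GM);
}
```

Plugging in Earth's GM and the geostationary radius gives a period within seconds of the sidereal day, which is a satisfying sanity check.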
Monday, October 1, 2012
Gem of the Week - Euler's Identity
I am going to present a new feature to all my 0 readers - the "Gem of the Week". This is a reprise of a sometime feature on my old private blog, "Chemical of the Day". I am expanding the topic somewhat from chemicals to anything I find interesting. None of these are necessarily news, but they might be.
This week, it's Euler's Identity.
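The identity e^(i*pi) + 1 = 0 ties e, i, pi, 1, and 0 into a single equation, and it's easy to check numerically with std::complex:

```cpp
#include <complex>
#include <cmath>

// Euler's identity: e^(i*pi) + 1 = 0.  Evaluate e^(i*pi) and measure how
// far the result lands from -1; it should be off by only rounding error.
double eulerIdentityError() {
  std::complex<double> ipi(0.0, M_PI);
  return std::abs(std::exp(ipi) + std::complex<double>(1.0, 0.0));
}
```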
Keeping secrets
I hate secrets.
Some people have to keep secrets because they are legally obligated. This includes any government classified information. Boy am I happy I don't have to deal with that headache. I bet Robert did.
Some people have to keep secrets because they are contractually obligated. Some projects LASP works on are with customers who treat some aspects as proprietary. For instance, I was brought in on Sentinel long before it was announced, in October of last year. Ball, perhaps under orders from B612, required us to keep the mission proprietary. It is a really cool mission, and I hated not being able to talk about it for months on end.
I keep some secrets because the time is not yet right to publish. I have something cool in mind for the Loginator, but I don't want to shoot my mouth off before I know that it is going to happen. So, watch this space...
File system driver
My C++ification of the Loginator code continues. As noted below, I have the startup code now in full C++ (with 18 lines of inline asm), and I have overthrown the tyranny of main() (that sounds familiar, have I written on this topic before?). I have taken Roland Riegel's sd_raw driver and heavily modified and simplified it. Basically I made it a pure block device. I have dropped all the buffering. You can open an SD card (SDHC fully supported), read a block, write a block, and get the card info.
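The resulting interface is about this shape. A sketch with hypothetical names (not Riegel's API), where a small in-memory array stands in for the actual SPI transfers so it runs on its own:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sketch of a pure block device: start the card, read a block,
// write a block, nothing else. An in-memory "card" stands in for the SPI
// traffic; real code would issue CMD0/CMD8/ACMD41 and data commands.
class SDCard {
public:
  static const uint32_t kBlockSize = 512;
  static const uint32_t kBlocks = 16;  // tiny stand-in capacity
  bool begin() { ready_ = true; return true; }
  bool read(uint32_t lba, uint8_t* buf) {
    if (!ready_ || lba >= kBlocks) return false;
    std::memcpy(buf, store_ + lba * kBlockSize, kBlockSize);
    return true;
  }
  bool write(uint32_t lba, const uint8_t* buf) {
    if (!ready_ || lba >= kBlocks) return false;
    std::memcpy(store_ + lba * kBlockSize, buf, kBlockSize);
    return true;
  }
private:
  bool ready_ = false;
  uint8_t store_[kBlocks * kBlockSize] = {};
};
```

Everything above this layer (partitions, FAT32) then deals purely in block numbers, which is what makes dropping the buffering tractable.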
I looked into extending the C++ification to the partition and FAT32 drivers, but it looked too complicated and messy. One of the things I am dead set against is dynamic memory allocation in an embedded process. What if it fails? When that happens, the software crashes (it wouldn't ask for memory if it didn't desperately need it), and when that happens, there is a good possibility that the device it is flying crashes too.
So, I get to write a FAT32 driver myself. Once again, only whole blocks at a time. And to start with, only that which the USB bootloader and Logomatic need: read a file, write a file, delete a file. Also to start with, we fully support FAT32, but do not support long filenames.
One area where I am going to get myself in trouble is writing the file. Sometimes when you write a file, you have to change the file allocation table. When you do so, you need to read the sector containing the change, make the change, then write the new sector. This is all easy, but you need a buffer to do it. Also, you will need to read the table to find the next cluster. What buffer do you use? I know that the LPC2148 is not really memory-limited, but it still seems a waste to set aside a whole block buffer for this.
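The read-modify-write in question looks roughly like this. A sketch with one shared sector buffer and hypothetical names, taking the sector I/O as function pointers so it stands alone (FAT32 entries are 32 bits, with the top four bits reserved):

```cpp
#include <cstdint>
#include <cstring>

// One shared 512-byte buffer serves every FAT update; this is exactly the
// "what buffer do you use?" trade-off discussed above.
static uint8_t sectorBuf[512];

// Update one FAT32 entry: read the sector holding it, patch the 32-bit
// entry in place, write the sector back. fatStart is the first sector of
// the FAT; each 512-byte sector holds 512/4 = 128 entries.
bool writeFatEntry(uint32_t fatStart, uint32_t cluster, uint32_t next,
                   bool (*readSector)(uint32_t, uint8_t*),
                   bool (*writeSector)(uint32_t, const uint8_t*)) {
  uint32_t sector = fatStart + cluster / 128;
  uint32_t offset = (cluster % 128) * 4;
  if (!readSector(sector, sectorBuf)) return false;
  uint32_t entry;
  std::memcpy(&entry, sectorBuf + offset, 4);
  entry = (entry & 0xF0000000u) | (next & 0x0FFFFFFFu);  // top 4 bits reserved
  std::memcpy(sectorBuf + offset, &entry, 4);
  return writeSector(sector, sectorBuf);
}
```

Following the cluster chain needs the same buffer for reads, so one static sector buffer can serve both jobs as long as only one FAT operation is in flight at a time.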
I started by writing a partition driver. You pass it an open and started SD object and a partition number, and it reads the partition table to get the info for that partition. From then on, you use the partition object to read and write blocks.
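Reading the partition table means pulling one of the four 16-byte entries that start at offset 446 of the MBR (block 0 of the card); a sketch with hypothetical names:

```cpp
#include <cstdint>

// One MBR partition entry, as the hypothetical partition driver sees it.
struct PartitionInfo {
  uint8_t type;        // 0x0B/0x0C = FAT32
  uint32_t firstLba;   // first sector of the partition
  uint32_t numSectors; // size of the partition in sectors
};

// MBR fields are little-endian; assemble a 32-bit value byte by byte.
static uint32_t le32(const uint8_t* p) {
  return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
         ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

// mbr is the 512-byte block 0 as read from the SD driver; part is 0..3.
// Entries are 16 bytes starting at offset 446: type at +4, starting LBA
// at +8, sector count at +12; boot signature 0x55 0xAA at 510.
bool readPartition(const uint8_t* mbr, int part, PartitionInfo& out) {
  if (part < 0 || part > 3) return false;
  if (mbr[510] != 0x55 || mbr[511] != 0xAA) return false;
  const uint8_t* e = mbr + 446 + 16 * part;
  out.type = e[4];
  out.firstLba = le32(e + 8);
  out.numSectors = le32(e + 12);
  return out.type != 0;  // type 0 marks an empty slot
}
```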