Tuesday, August 16, 2022

Shipometer

Having given up on #SoME2, it's time to move on to the next project. Next week I will be going on a cruise. On the cruise, and also on the plane, I wish to record GPS signals. The pocketometer has all the right sensors for that, but unfortunately is not sufficiently reliable. The Raspberry Pi has already proven itself capable of recording GPS from one of the ZED-F9R breakout boards. Now the question is whether it can also record the sensors on I2C, and the time pulse.

I want to see if I can do this with just the parts that I already have on my desk.

I have a belt bag big enough to hold all the sensors, the Pi, and a 20Ah USB battery pack. That would be a lot less suspicious than stuffing stuff in my pocket.

It would also be great if the Pi could act like a wifi hotspot and serve SSH through it. That way I could look at it on the phone while (literally) in flight.
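The hotspot side should mostly be configuration rather than code. A minimal hostapd.conf sketch of the kind of thing I have in mind (the interface name, SSID, and passphrase are placeholders, and the DHCP side -- dnsmasq or similar -- is separate):

```
interface=wlan0
driver=nl80211
ssid=shipometer
hw_mode=g
channel=7
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=CHANGE_ME
rsn_pairwise=CCMP
```

Once the phone associates, sshd should just work over the hotspot address like on any other interface.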

The last thing that would be awesome is timer capture on the GPIO, of at least the PPS and maybe others, like the interrupt lines from the sensors. If it can't, maybe we could get a program on the Teensy that would do the timer capture and output on UART or as an I2C slave.


#SoME2 post-mortem

I did not get a video out on time for SoME2. Even just the descoped "good part" video couldn't be finished in time this morning.


Thursday, August 11, 2022

Deadline pressure

I ended up going with the Kalman gain video. I will use this blog as "making of" documentation.

  • I am going to use PictureBox much more than Manim -- I have already forgotten how to use Manim. I think I will only need it for dancing equations, and I don't plan on using those much. Matplotlib knows how to use TeX, so it can make nice-looking equations, but it can't make them dance as well.
  • One video, two videos, N videos? I have about 9 minutes of narration so far, and have just covered up to measurement space. It might take 20 minutes to cover the stuff I want just for linear Kalman filtering. KalmanGain implies that the most important or interesting stuff is calculating the gain. That by itself might only take a minute, but all of the prerequisites might take even more than 20 minutes. For now, I am planning on one long video, covering transformation of uncertainty from state to measurement space and back, and the hand-waving justification for why there even is an optimal gain matrix to transform innovations back into state vector adjustments.
  • Pure video, or text plus video? This interacts with the above. The hard thing about videos is that the maker always has to leave stuff on the cutting room floor. This leads to a dilemma: If I include everything I know, the video will be too long and boring, while if I don't, the pedants in the audience will use this as an opportunity to show how much smarter they are than me. Text plus video would allow me to put the most important bits in a video, which could be run as a continuous playlist. The text part would then include the videos, plus footnotes with all the pedantry included.
  • Narration. I hate the sound of my recorded voice. Plus, I don't have a good audio setup yet. Therefore, I am using Amazon Polly to generate narration from a script. I am having neural British Amy read it. This is the first text-to-speech I have heard that I would say is good enough to be plausible as human-read narration. I would say it's 80% of the way there; I just wish I could adjust the emphasis on some words. American Joanna is also good enough, but as an American, I at least subconsciously buy into the "intelligent British accent" stereotype. I know that I write slightly differently for Amy than I would for myself, and a lot differently than I do on this blog.
Which brings us to deadline pressure. SoME2 was announced in the middle of June, but I didn't start on it in earnest until this past Monday (August 8). If I had all the time in the world, I would make a video covering time updating, nonlinear models, and maybe various filters beyond Extended Kalman. It might take an hour, but I would break it up into roughly 20-minute chapters. With the deadline pressure, I am not going to be able to carry out this whole program. I might be limited to just the gain matrix.

I am *not* going to do things like publish one chapter before the #SoME2 deadline and extend a playlist afterwards. I am committing to publishing a complete thought, or nothing at all.

Why is #SoME2 even important? For one thing, I don't expect to win. This might be a winning concept in the right hands, but I am learning that my hands are not those. I am also learning why 3b1b publishes as infrequently as he does. That's the non-reason; the real reason is that I like having large view counts on my YouTube channel. My late entry for last year's #SoME1 got 104,000 views, even though it didn't make the official #SoME1 playlist.

Also, without deadline pressure, I may never actually make this. One of the issues with my projects is that whenever I am working on one, I wish I was working on another. I'm sure that's a common character flaw among humans in general, but I don't know what it's called or how to fight it.


Thursday, June 9, 2022

Decisions, Decisions

Summer of Math Exposition #2 was finally announced. Deadline is August 15, 2022, which fits in quite well with my summer plans -- it's all in between the Florida trip and the other Florida trip.

I could do the next logical step from Exponential beats All. This one was supposed to be a quick explainer for my real video idea, why a rocket doesn't need a heat shield.


Or, I could do a visualization of the Kalman filter. There are of course lots of interesting visualizations to do, but the one I am interested in is showing off the Kalman gain: the gain matrix that gives the minimum-variance linear unbiased estimate, demonstrating that picking any other value for the gain necessarily produces a larger covariance.

Eventually I plan on doing both, but I am leaning towards the "why doesn't a rocket need a heat shield" video. It's more physics and less math, but "math" is given an especially broad interpretation in #SoME2.

Friday, April 29, 2022

A better C++ than C++?

tl;dr -- Is Rust the language that C++ promised but failed to deliver? It's still unclear.

Programmers are adults. We don't need our hands held, and we certainly don't need to be told "Don't do that!" The only kind of advice that is needed is "I see what you are *trying* to do. This might be a better way to do it..."

Case in point: reading data from external sensors. I am still working on the rollercoasterometer (now for 13 glorious years!) and am doing my usual fighting with C++ about getting formatted data out of a byte buffer. Back in the bad old days, we would cast the address of the buffer as a pointer to a struct, and read the data out directly. Then the compiler writers, with their fancy-schmancy alignment and such, said not to do that, because there is no guarantee that the structure will line up with what you think. The compiler is free to put any amount of padding between fields, reorder fields that have different access specifiers, insert invisible fields like a vptr, and so on.

That by itself isn't bad. The bad part is when I ask how to do what I want, and the answer comes back: "Don't". There is no portable way to guarantee that any field in a struct lands anywhere. This way we can write C++ targeting the Apollo Guidance Computer, with its 15-bit words, no such concept as "byte", ones-complement arithmetic with +0 and -0, etc. It doesn't matter that basically every machine in the last 40 years has been twos complement, 8-bit bytes, and word size of a power of 2 bytes. Basically the only remaining disagreement is endianness.

But, C++ won't let me take advantage of the fact that both machines in a transaction have the same native word format. No, I have to individually extract each byte, shift and OR it myself, etc, to get the data out. If I am lucky, the compiler will see that I am translating from English to English, and optimize it out.

Game designers learned long ago to learn from their users. If all the Minecraft users are building farms, then support the building of farms. If the farm depends on a glitch, consider formalizing the glitch and making it an official feature. Don't just shower them with whatever they are farming for "free", but don't take away their ability to farm either. For instance, Mojang has several times considered changing the mechanics of how villagers and iron golems work to discourage farming iron. They got quite a bit of pushback from the community, and have therefore backed off. They don't "support" iron farming, but they haven't removed it either.

The C++ committee on the other hand seems to be driven by two factors:

  1. Backward compatibility indefinitely into the past
  2. Ability of compiler writers to game benchmarks
C++ has a lot of good ideas (some might say too many) but it is a language which has evolved for decades while at the same time hasn't been able to shed old, bad ideas or old, bad implementations of good ideas. 

C++ for whatever reason also has a burning hatred for the preprocessor. The preprocessor and compiler are married, but it is a loveless marriage, and the simmering hatred has boiled over to the point where the official C++ FAQ considers macros to be "evil". Now the preprocessor does have some minuses, mainly in type checking. So, we were given constants and templates. We were told that templates would basically eliminate the need for macros. However, when we tried to use them like that, we found that the promise has not quite been kept. Templates do some but not all of the compile-time processing that we want. Constexpr functions are helping, but aren't there yet.

My use case is that I want to make a self-documenting logger. The rollercoasterometer reads a bunch of sensors, formats the data into packets, then writes the packets to a file on the SD card. Since the data from the sensors doesn't naturally come in packets, and doesn't naturally have a timestamp, the main firmware timestamps the data and formats it into packets. Since I write the firmware, I also have to write the code on the host which interprets the packets. I came up with a clever idea to have the program emit a series of packet-documentation packets as it starts up. One way to do it is to have the program create a documentation packet the first time it emits each packet, and I have done this. It looks something like this:

void start_packet(apid, packet_desc_str) {
  if apid is not yet documented:
     write a documentation packet for this packet, using the packet_desc_str
  start the packet, perhaps to a backup buffer if we are documenting the packet
}
void fill<type>(data_value, field_desc_str) {
  if apid is not yet documented:
    write a documentation packet for this field, using the field_desc_str
  write the field to the packet, perhaps to the backup buffer
}
void finish(apid) {
  if the apid is being documented:
    take note that we have documented it and don't do it next time
  finish the packet and write it, perhaps from the backup buffer
}

Note that we need to have two buffers. It would be far better if we did something like this:

void start_packet(apid,packet_desc_str) {
  compile_time_code {
    create a packet describing this packet in the packet description block. This block will be a read-only blob as far as the runtime code is concerned
  }
  start the packet, no need for backup buffer
}
void fill<type>(data_value,field_desc_str) {
  compile_time_code {
    Add a packet describing this field to the packet description block
  }
  add the field to the packet
}
void finish() {
  finish the packet and write it
}
void setup() {
   open sd card
   write packet description block
   set up sensors etc
}
void loop() {
   read sensor
   start_packet(0x1234,"sensor packet");
   fillu16(value_from_sensor,"value from sensor");
   ...
   finish();
}

Preprocessor macros might be able to do it, but the preprocessor is Evil. Templates might be able to do it, but it might require template metaprogramming, which is actually evil. It seems like there isn't a way to do it in C++, certainly not a clean way. Therefore we are forced to either use the official preprocessor, write our own preprocessor (which has its own headaches), or do it at runtime.

The language which might be a better C++ than C++ is Rust. This is a statically typed language which is designed to be compiled into good machine code (the reference compiler uses LLVM as its back end) but with some features added and some taken away. I'm not sure if I like mutability and ownership yet, and haven't gotten used to the concept of borrowing yet, but it does look like there is support for forcing a struct to land on certain bytes, and it does look like (with procedural macros) there might be enough compiler support for compile-time computation.

To test this out, I am going to work on three projects:
  1. A conversion of kwantrace to Rust, to experiment with plain application-domain programming.
  2. A packet parser for reading rollercoasterometer logs
  3. Firmware for the rollercoasterometer
The last one is probably not going to be ready in time for my next expedition to a roller coaster.

Tuesday, February 1, 2022

Babbage's Dream

I just got a pair of 512GB MicroSD cards. Of course, any such card should be tested, since these are about the easiest things in the world to counterfeit, and it can be done in a way which normally wouldn't be detected for a while.

For instance, imagine you wanted to fake a 512GB card. If it was inert plastic, it would immediately fail and the user would dispute the transaction. So instead, you make a card with less capacity (say 32GB) and reprogram it to do the following:

  • Report 512GB of capacity
  • Whenever doing a read or write, take the block address given and just mod it with the actual capacity. 
So if you write data to block 0, then to the block at 32GB, the later block would overwrite the first block. This is less than ideal for a storage device... It would work fine until you wrote 33GB on it, which might take a while.

Having said this, there are very many MicroSD cards which are counterfeit in this manner. It behooves us then to test each card immediately.

You can't just test it by writing zeros, as the card may already have zeros on it. You can't just write a fixed pattern (say 0xAA) because a smart enough counterfeit (and note that MicroSD cards have a full-blown microcontroller with its own firmware) would notice this and read back the pattern. You can't just test a small amount of it, because the card (or host) may have a cache. 

So, I have put together a program which writes the output of a cryptographically-secure random number generator to stdout. This can be directed to a file on the SD card to be tested. The chosen CSPRNG is the Keccak-1600 sponge function. We absorb any arbitrary string as the seed, then keep squeezing it forever, or until the device fills up and throws an error message. 

There is no way to beat this, since a CSPRNG by its nature is unpredictable -- there is no detectable pattern unless you know the arbitrary string used as the seed. The internal microcontroller probably could calculate it, since Keccak is a well-known standard algorithm, but it doesn't know the seed. It's not reasonable for the counterfeiters to guess that I am going to be checking the chip this way, and to try to guess the seed (hint -- it's close to a substring of the title of this blog).

There is no way to derive the seed from the output of the CSPRNG -- that's one of the properties that makes the PRNG cryptographically secure. The stream might have a period of 2^1600 bits, but I haven't seen a proof or even any good reason to believe that Keccak is full-period when used this way. Even if it is much less, it is almost certainly much greater than the 2^42 bits it takes to fill this device. So there is no way to store or cache this stream without actually having a functional amount of storage equal to the advertised amount.

In my case, I couldn't find the exact routine I wanted, so I wrote my own based on the XKCP package (since it isn't wise to implement a cryptographic primitive yourself).

#include "KeccakSponge.h"
#include <stdio.h>
#include <string.h>

#define r 576
#define c (1600-r)

int main(int argc, char** argv) {
  if(argc<2) {
    fprintf(stderr,"Usage: %s seed [file_to_check]\n",argv[0]);
    return 2;
  }
  KeccakWidth1600_SpongeInstance s;
  KeccakWidth1600_SpongeInitialize(&s,r,c);
  unsigned char buf[r/8];
  KeccakWidth1600_SpongeAbsorb(&s,(const unsigned char*)argv[1],strlen(argv[1]));
  if(argc==2) {
    for(;;) {
      KeccakWidth1600_SpongeSqueeze(&s,buf,sizeof(buf));
      fwrite(buf,1,sizeof(buf),stdout);
    }
  } else {
    FILE* inf=fopen(argv[2],"rb");
    unsigned char inbuf[r/8];
    size_t last_match=0;
    while(!feof(inf)) {
      KeccakWidth1600_SpongeSqueeze(&s,buf,sizeof(buf));
      size_t incount=fread(inbuf,1,sizeof(inbuf),inf);
      for(size_t i=0;i<incount;i++) {
        if(buf[i]!=inbuf[i]) {
          fclose(inf);
          printf("Different at byte %zu\n",last_match);
          return 1;
        }
        last_match++;
        if((last_match%(1024*1024*32))==0) {
          printf(".");
          if((last_match%(1024*1024*1024))==0) {
            printf("%zu\n",last_match/(1024*1024*1024));
          }
          fflush(stdout);
        }
      }
    }
    fclose(inf);
    printf("All bytes up to %zu matched\n",last_match);
    return 0;
  }
}

This program takes one or two strings as command-line arguments. The first is the seed. If there is only one argument, it uses this seed as mentioned above to absorb, then squeezes the sponge an unlimited amount of times, limited only by the device filling up. It writes to stdout, so the way I use it is to pipe it through pv to see how fast it is going, and then pipe it to a file on the device under test. If the card is genuine, then this test is non-destructive. It also doesn't disturb the original exFAT filesystem formatting that the card came with -- I have heard that there is a non-obvious optimum way to format these cards, and that they come formatted optimally.

If there are two arguments, the first is still the seed, and the second is a file to check to see if it matches. It does this in the simplest way possible, by absorbing the seed, then going into a loop: squeezing once, reading the same amount from the file, and byte-for-byte checking the blocks. It prints a period every 32MiB and a number every 1024MiB. It prints the byte offset of the first non-match (and then exits) or prints the total number of bytes it checked.

As it happens, my chip worked properly. This means the system produced a stream of over 4 TRILLION bits, then perfectly reproduced those 4 TRILLION bits. That's amazing when you think about it, and is Babbage's dream. The legend goes that after being disillusioned by the number of errors in an almanac, he remarked that he wished the tables could be generated by steam power. One of his colleagues said "that is possible", a statement which changed the course of Babbage's life. He originally wanted to use the difference engine and analytical engine to literally crank out mathematical and almanac tables, with no human intervention between the initial conditions and the printed page. The machine was intended to perform the calculation, then with the results, automatically make a plate to be used in a printing press. Many years later, the first and simplest such difference engine was finally constructed, and printed the first several integer multiples of the circle constant pi. It made a mistake in the last entry.

Tuesday, February 2, 2021

Dusting off some old code

In preparation for the Mars Science Laboratory landing, I made a video using the best pre-EDL data available, including a simulated EDL kernel set, MOLA topography, and HiRISE imaging. I'm pretty proud of it:


Wednesday, October 30, 2019

Palpatine was Dead

Palpatine was dead: to begin with. There is no doubt whatever about that. His flailing body was flung down the open reactor shaft. Crackling with dark side power, he fell faster and faster until he hit a bridge far down the shaft, and burst like a bomb. Anakin and Luke felt the concussion even hundreds of meters away. And if that wasn't enough, he was killed twice, once by his former apprentice, and once by the terrorists who destroyed the planet-sized station that the pieces of his body were at rest upon. Anakin saw it, Luke saw it, we in the audience saw it.

There is no doubt that Palpatine was dead. This must be distinctly understood, or nothing wonderful can come of the story I am going to relate....

Wednesday, October 23, 2019

What makes a story?

"I wonder what they'll be like?" he mused. "Will they be nothing but wonderful engineers, with no art or philosophy?" --From Rescue Party by Arthur C. Clarke
Specialization is for insects  --Attributed to Robert A. Heinlein

I might think of myself as an engineer (and by no means wonderful) but sometimes I think about art and philosophy as well. Today over lunch I was thinking about what makes a great story -- what makes a story entertaining to me? For instance, why do I like Star Wars? Is it the characters? Is it the message?

For me, the most entertaining part of the original trilogy was the Battle of Endor (which the Empire totally should have won, by the way. Ewoks?). Specifically the best part of that was the run to the Death Star core. Why? Great visuals. We got to see the intricate detail of the inside of a massive, complicated object. The interior of the Death Star is itself a work of art.

Similarly, the boarding and launch of USS Enterprise in Star Trek: The Motion(less) Picture is the best part of that film by far. The new Star Trek had its moments, but they were too fast and filmed in shaky-vision such that we never got a clean look at all of the great models. From the clips I have seen of Star Trek: Beyond, it looks like Starbase Yorktown was done right. It might be way too large and expensive to be practical, but it does look awesome.

So, why aren't all movies just spectacular visual effects?

Firstly, visual effects are not cheap. It is far cheaper per minute to put a bunch of actors on a sound stage and just record a play, compared to special effects.

Secondly, if we do, we end up with such works as Sonic Vision and The Mind's Eye. These are works of art in themselves, but there is still something missing. I don't think even I could watch Sonic Vision for two straight hours. I think I finally know what it is: consistency. In a well-constructed story, all the pieces just fit together. As long as it maintains consistency, the larger the story, the better. In such a story, you can understand what is going on. You can make predictions, and evaluate the actions of the characters. Did it make sense for Han Solo to do that? Did it make sense for Admiral Holdo to do that? Harry Potter seems consistent, and it maintained that through seven books.

A good, consistent story, it seems, must be well planned from the start. A good story universe must have a solid scaffold of ideas, and all new ideas added to it must remain consistent. The best ideas might expand the scaffold, but in a way that makes it stronger and able to hold even more ideas. The core of any good story universe is one good story.

So: Why did Admiral Holdo do that? It was a visually stunning 10 seconds, but how does it hold up for consistency? In order for it to make sense, she must have had some idea that it could work -- not necessarily a sure thing, but at least a reasonable chance to be worth trading her life for. If hyperspace ramming works, then why isn't it always used?

Also, the Holdo Maneuver didn't even get a mission kill -- Supremacy was not destroyed, only damaged. It was not stopped, only slowed. It still was able to launch an invasion of Crait. She aimed for a wing, rather than the core.

Here are the facts as shown in The Last Jedi:
  • Raddus was almost out of fuel -- it had enough for one jump, and no further fuel to travel through normal space.
  • Admiral Holdo ordered the abandonment of Raddus with her alone staying aboard. 
  • She turned the ship to face Supremacy.
  • The crew of Supremacy thought that Raddus was either trying to escape or was trying to block/stall to give the rest of the fleet a chance to escape.
  • Just before Raddus jumped to hyperspace, the crew of Supremacy realized what Raddus was trying to do, and started to take action to counter, but they no longer had enough time to prevent it.
  • Raddus jumped to hyperspace with its trajectory through the right wing of Supremacy. It isn't clear from the footage whether Raddus actually entered hyperspace, or just hit Supremacy at high speed in normal space. In any case, the right wing of Supremacy was sheared off, and at least four trailing ships were destroyed by debris from the collision.
Other facts seen in other parts of canon:
  • During Rogue One, Devastator jumps into Scarif orbit just as the rebel fleet is trying to flee. At least one GR75 (transport-class) ship hits Devastator's hull and is destroyed, and its pieces are just brushed off.
  • In one of the Legends comic books, a squadron of three star destroyers drops out of hyperspace right on top of Executor. All three ships are smashed to atoms, and while Executor has to stop what it is doing, its shields brush the collision aside, such that the paint isn't even scratched.

We can enumerate the possibilities and dismiss almost all of them. The Holdo Maneuver is inconsistent with what has come before.

  • Admiral Holdo is such a great military leader that this idea is original to her. In the twenty thousand year history of the galaxy, no one else has had this idea. We dismiss this because even a cursory study of either galactic or Earth history would have revealed many examples of ramming as a reasonable tactic. I have seen YouTube videos by one of the numberless online jabberers, posted before The Last Jedi was released, proposing exactly this as an alternate way to destroy the Death Star: use an X-wing in kamikaze mode.
  • The shields of Supremacy should have been able to protect it from the collision, as seen in other hyperspace-shield interactions. Admiral Holdo should have known this and not even have attempted the ram.
  • You can't just jump one X-wing to hyperspace and expect to destroy the Death Star -- a single X-wing isn't big enough. Anything bigger is supposedly too expensive. However, if sheer mass will do the trick, there is enough plain mass in the form of such things as asteroids to do it cheaply.
  • There is something special about Raddus which makes it uniquely qualified to ram. If this is the case, Raddus is just a machine, and any machine can be duplicated.
  • The only theory which is not immediately dismissible is that there is something special about Supremacy, perhaps related to the hyperspace tracker. Admiral Holdo could not have successfully rammed any other ship. Even so, Holdo would have had to know how the hyperspace tracker worked, at least well enough to know that it created this vulnerability. Besides, one of the Incredible Cross Sections books contains text to the effect that hyperspace tracking was merely an algorithm, and the hyperspace tracker on Supremacy was merely a large computer facility.
The point is that in any conceivable scenario, whatever Admiral Holdo did at that moment could have been, and therefore should have been, weaponized long before. Either it was, and all battles in the galaxy would have been different, or it wasn't, because it is impossible.

Saturday, April 6, 2019

C++ Cleverness

Yukari was in some ways an amazing piece of code. It didn't actually drive the robot all that well, but of all the things in it, I am most proud of its self-documentation. Each run of Yukari produced a file which recorded three things:


  1. Data packets showing what the robot was doing and what it was thinking
  2. An image of its code and any other files I thought needed attaching
  3. A description of how to parse the record file, partly in English, partly machine readable. With this description, anyone who had the file could in principle write a piece of code to parse the file.
I am redoing this code for the Loginator. While it is easy to just use the same logic that I used on Yukari, that code was inefficient. It did the following:

  • The packet start function took a pointer to a string describing the packet, and each kind of fill function took a pointer to a string describing the field. This was nice because the documentation for each field in the code is right next to the actual code for it.
  • The start function and each fill function called the writeDoc() function, which took care of documentation. After that was finished, it wrote the field.
  • The writeDoc() function kept track of apids which have already been documented. If this apid has already been documented, writeDoc() returns immediately. Otherwise it writes a field description packet.
  • In order to make this work, the actual packet data had to be stashed somewhere. If the packet was in the process of being documented, writeDoc() for a packet start set up pointers such that the packet being written went to this stash buffer, and writeDoc() got to create actual packets in the proper buffer.
What I have in mind is much more clever. It will have the compiler generate the core of the documentation packets at compile time. These will then just be in ROM, which we have bucketloads of. I think this will involve template metaprogramming and constexpr functions.

Each call to start and fill will be immediately followed by a template class instantiation. This class template will take as parameters the apid, the string description, and perhaps some other stuff (units, conversion, etc). The class will declare a couple of static constant member fields, which will result in them getting stuck in the .rodata section, or maybe a special section. Once it is someplace in the read-only image, the startup code will write out all the documentation in the same way that setup() writes the packet description block in the earlier sketch.

The interface will look like this:

start(apid_blah,TTC(0)); template class packet_doc<apid_blah,0,"This packet records exactly how blah things are">;
fillu16(blah); template class packet_doc<apid_blah,t_u16,"This field records the blah level">;

It's not quite as clean as the old way, but it is purely a compile-time thing. To begin with, we would have a template something like this:

template<int A, int T, const char* D>
class packet_doc {
  static const int __attribute__ ((section(".packet_doc"))) apid=A;
  static const int __attribute__ ((section(".packet_doc"))) fieldType=T;

  static const char* __attribute__ ((section(".packet_doc"))) desc=D;
};

We could make things fancier by keeping track of the position in the packet using more advanced template metaprogramming.

Update:
Nope, defeated. While you can use an address as a template parameter, a string literal doesn't necessarily have an address of its own. You can set up a const array with a string literal in it, and use that as the template parameter, but that starts getting way too ugly. I'll do it the old way, and use the bottom of the stack for temporary space. It isn't safe, because the stack and this buffer could collide, but I'm not going to worry about that.


Wednesday, October 17, 2018

Interesting things about Voyager 1 and 2 launches

There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact. --Mark Twain, Life on the Mississippi
It's amazing how much we can do with such a small amount of data. The following analysis is based solely on the dates of launch and arrival of the Voyager spacecraft, and an ephemeris of the planets.

Voyager 2 was launched on 1977-08-20T14:29:00Z, while Voyager 1 was launched later on 1977-09-05T12:56:00Z. Voyager 1 passed Voyager 2 on the way out (the exact time depends on how you define "pass") and arrived at Jupiter on 1979-03-05T12:05:26TDB, while Voyager 2 arrived on 1979-07-09T22:29:51TDB.

Notice that while the arrival reference is a full-blown set of orbital elements, we did not use these. Instead, we use the Lambert/Gauss targeting method to plot a course which departs from the center of the Earth on the indicated time, considers only the gravity of the Sun and ignores the gravity of the Earth, and arrives at the center of Jupiter at the indicated time. There is one unique trajectory which crosses the indicated places at the indicated times and is prograde.

So, even though Voyager 1 is launched later, it arrives first. This is for two reasons:
  1. Voyager 1 has a higher heliocentric energy
  2. Voyager 1 is launched on the "outside track". Normally the inside track is faster, but not in this case. If both spacecraft are going to intercept Jupiter, the first to arrive must be on the outside track because Jupiter is moving from "outside" to "inside".
We end up with the following picture:

There is a lot to look at in this picture, so here are the thousand words. Red is Voyager 1, blue is Voyager 2. The small white circle is the orbit of Earth, with the sun marked as a yellow dot, Jupiter the next larger white circle, and Saturn the largest. The tick marks are 30-day intervals starting at the launch date of Voyager 2. This means that the ticks for Voyager 1 and 2 are directly comparable. We see Voyager 1 pass at about the 4th tick mark. A zoom in reveals that from this perspective directly from Ecliptic North, the trajectories actually cross twice. Normally the trajectory diagrams don't show them crossing at all.
Another interesting thing: had Voyager 2 not intercepted Jupiter, its orbit would have continued about 1AU past Jupiter's orbit, while Voyager 1's orbit would have gotten almost out to Saturn. Did they use a larger launch vehicle for Voyager 1? No, both spacecraft had nearly identical launch speeds (vinf=10.951km/s for Voyager 1, 10.316km/s for Voyager 2). The main difference is just the direction of launch. Voyager 2 was launched above the ecliptic plane (departure asymptote 17 deg above the ecliptic) while Voyager 1 was launched almost in the ecliptic (departure asymptote 4 deg above the ecliptic). Also, Voyager 1 was launched closer to the direction of travel of the Earth: Voyager 2 was launched 18 deg inward, while Voyager 1 was launched 4 deg outward.

Friday, August 10, 2018

Another blast from the past

Once upon a time, I participated in the Internet Ray-tracing Competition. It turns out that they have an archive, and it turns out that most of the source code for my images are still there. I was able to recover another one today:

A modern render of an ancient image
My notes say that this took 6 hours to model and 1h30m45s to render on an AMD K6-2 300MHz 192MiB memory machine. The re-render at the same resolution takes 4.423 seconds on an Intel Xeon E3-1505M laptop with 32GiB memory. This is over 1200x faster. At HD resolution it takes about 13.7 seconds.

Tuesday, February 6, 2018

Falcon Heavy

Today is the scheduled launch date for the Falcon Heavy demonstration. As of this writing, the launch is on schedule for an 11:30 MST launch.

There is very little official guidance from SpaceX as to what to expect. Elon Musk has stated that minimum mission success is clearing pad 39A far enough such that any further failure doesn't destroy that pad. There has never been a catastrophic failure at pad 39A, and Musk would like to keep it that way.

However, the plan is to do a boost, then three burns of the upper stage. The first finishes the launch to LEO. The second is about 30 seconds, and seems to put the stack into a GTO-like orbit. Lifting a heavier spacecraft into full GTO takes about a minute, so there is some hint that this will go into an elliptical orbit that is short of GTO. Part of the demonstration is a 6-hour coast. They are doing it on this flight because the upper stage is very similar to any normal Falcon 9 upper stage, so any demonstration on this stage would apply there. This coast is what would be needed for a 3-burn GSO direct insertion, which is apparently very interesting to the military. For one thing, it would demonstrate that the upper stage could put a GPS satellite directly into its target orbit, like the much more expensive Delta IV Medium. A bit more oomph and a similar endurance would put a spacecraft directly into GSO.

In any case, the consensus on NasaSpaceflight is that the target high orbit is one with a period of around 6 hours. After this coast, the second stage would be back at perigee, ready to take maximum advantage of the Oberth effect.

SpaceX has claimed that they will put the payload (A cherry-red Tesla Roadster) into an "earth-mars heliocentric orbit". The launch window for Mars is in May, so they will be launching 3 months out of the window, but since this is such a light payload, they should have plenty of C3 and probably could target Mars if they wanted. However, I think that they will instead target an orbit with periapse at Earth and a C3 typical of launching to Mars. The payload will reach the vicinity of Mars orbit, but Mars will be far far away by then. In fact, to be responsible about Planetary Protection, they should launch into an orbit which will not actually intersect Mars orbit at all, so that there is never any possibility of the car impacting Mars.

Running the numbers based on the Trajectory Planner 1.1.1 from Orbit Hangar, I get a departure C3 of 23.9 km^2/s^2, with a departure today and an arrival on October 17, 2018. The flight time is 252 days. This C3 is high for a Mars launch, but should be doable with such a light payload.

If they are targeting Mars, then the launch vehicle must be able to adjust azimuth in order to target Mars at any time during the window. If they are just going for a given C3, they can use the same azimuth whenever they go. Since ASDS is parked somewhere definite to catch the core stage, I estimate that they are targeting a fixed azimuth independent of launch time.

There are no signs of high-gain antennas or solar panels on the payload, so it is almost certain that once the battery runs down, the payload will become inert. However, the payload is an electric car, with many many amp-hours of battery life. The car radio might run for hours or days.

Monday, August 21, 2017

Live, from the eclipse path

The Kid Attractor - works on adults too.

Cookie Monster
I'm in Rexburg, ID where the crowds are not as bad as I feared. I left Logan, UT this morning at 4:00am. Traffic on I-15 was interesting. It never slowed down below the speed limit, but there were easily 20 times as many cars heading north as opposed to south. We arrived at Rexburg at 7:30 and found plenty of parking in a church parking lot just south of the Temple.

Just after first contact
Things of note:


  • It did get noticeably darker as the eclipse passed 50%
  • We could see the shadow approach from the west. It was hazy that direction, and I could see a band of dark start at the horizon. It looked like it was getting vertically wider rather than closer.
  • Totality itself cannot be done justice in pictures. I took pictures but used my eyes too. The light level was close to that of just after sunset. The sky was a deep blue-purple. We saw the diamond ring, which was MUCH brighter than the rest of the corona. The corona was white and rather narrow, with three long streamers (each more than one solar diameter): one at 12:00, one at 1:30, and one at 7:00. The disk of the moon was the same color as the sky.
  • I thought I got crowd shots, and everyone around said that it was totally worth it. Unfortunately for some reason either the video was never captured or was deleted :(
  • No one bothered to stay long after totality - we just all packed up and ignored the reverse of the spectacle we had just seen.


See you all in 2024!

Saturday, July 22, 2017

Today's Episode of "It's my responsibility, but..."

"...The datasheet diagrams didn't have the pins labeled."

This time, it is the encoder board.


Here, it isn't obvious which pad in the footprint goes with which pad on the part.

It doesn't matter anyway, because a closer examination of the footprint and the Digikey list of optical sensors reveals that I bought the wrong part. The footprint on the board is marked QRE1113, but it is actually for an Omron EE-SY193. D'oh!


Well, back to the fab again, for both boards. I'll use the QRE1113 parts that I have to test if the parts even fit in the hole. If not, I'll have to use the EE-SY193.

All parts of this project are my responsibility. It can't be otherwise, since there isn't anyone I report to or who reports to me. My partner is a special case.

Friday, July 21, 2017

Dagnabbit!

That beautiful purple board I received yesterday won't work. If it had been stuffed and plugged in, it would have immediately shorted out anything plugged into it.

One of the features on the board is a super-wide (for a 6-mil PCB) strap between the two adjacent 5V pins coming from the Pi. When I had a close look at it, I saw a thin little gold crescent around part of the hole. It took a little bit for it to dawn on me that this was the ground plane, which was visible through the mask, plated in gold, and not protected from shorting with the other contacts:
 The effect is everywhere on the board. I saw it first on the strap in the upper-left, but this magnified version shows it on the motor power section and connection to the Arduino. Note the crescents around the top of D3~ and RST, and the slivers of ground plane visible through the mask around the isolator footprint.

I don't think that there is anything that can be done to fix it. I also don't think that the encoder board is affected, so at least I can still use that. I'll try to stuff it, but I will check continuity closely. If any of the solder bridges the 6mil gap, then the short will exist.

I'm pretty sure the problem comes from settings in the ground plane. Since OSHPark can make 6-mil boards, I take full advantage of the feature. Unfortunately, this interacts with a bad default in Kicad. Fortunately, that is easy to change, but it would have been nice to know fifteen dollars and two weeks ago. The money isn't a big deal; it's the time.

Kicad was even trying to tell me that something bad was about to happen. Not from the DRC (although that would have been nice) but in the images. Here is the old, bad design:

The purple rings around the pads are the soldermask gap. You can see a slight red tinge around the edge where the mask gap overlaps the ground plane.
The OpenGL preview shows it too, perhaps in an easier-to-understand form.

To fix it, use the Dimensions/Pads Mask Clearance menu option

Change the Solder mask clearance value from its default... (Yes, I design my boards in US units. You got a problem with that?)

Change it to zero.

The borders of all the pads are now black, indicating the solder mask gap doesn't span the space between the pad and the ground plane




This option has settings in several places: the global setting I talk about above, the zone setting (which I haven't found), the footprint setting, and the individual pad setting. Settings later in that list take priority over earlier ones, but the earlier ones are used as defaults.

In general with modern fab processes, you should set this value to zero. This will describe a hole in the solder mask the exact size of the pad. The fab can and will edit this to match their process, so don't worry about getting it too small.

Thanks to a YouTube video by My 2uF for help finding this setting.



Thursday, July 20, 2017

Yukari 5 parts!

Front side of each board
Look how awesome the gold logos look!
Here is the Pi wearing its hat (upside down, because the logos are so cool)
And the hat wearing its accessories




Thursday, August 25, 2016

Radiation-pressure propulsion for nano-spacecraft

Abstract: The Breakthrough Starshot project proposes using radiation-pressure propulsion to send multiple spacecraft with masses in the gram range to interstellar targets. There are numerous engineering challenges to accomplishing this mission, and the project discusses speeds and accelerations previously discussed only for cannon projectiles and particle-physics experiments. This article examines the problem from first principles to see if the project is even orders-of-magnitude possible. A preliminary check on their numbers seems to confirm that the project is possible in principle.
 Radiation pressure has been hypothesized since the early days of special relativity and quantum mechanics. It is a simple consequence of the mass-energy equivalence and the photon nature of light. Quantum mechanics states that light is made of photons, and that each photon is a discrete bundle of energy. The energy \(E_{pho}\) carried by each photon is proportional to its frequency \(\nu=c/\lambda\), and the proportionality constant is Planck's constant \(h\).

\[E_{pho}=h\nu=\frac{hc}{\lambda}\]

Everyone knows the famous mass-energy equivalence equation \(E=mc^2\). However, that is only a special case of the momentum equation:

\[E=\sqrt{p^2c^2+m_0^2c^4}\]

We can solve this for the momentum of a photon, considering that its rest mass is zero.

\[\begin{eqnarray}E_{pho} & = & \sqrt{p_{pho}^2c^2+0^2c^4}\\
& = & \sqrt{p_{pho}^2c^2} \\
& = & pc\\
\frac{hc}{\lambda}&=&p_{pho}c\\
\frac{h}{\lambda}&=&p_{pho}\end{eqnarray}\]

Note that momentum is a vector quantity, but this just deals with the magnitude. The direction of the momentum vector is the direction the photon is traveling.

Now, we know the energy and momentum of each photon, so we know that if we throw so many joules of photons at a target, it will transfer so much momentum. The irradiance is defined as the power \(P\) (energy \(E\) per unit time \(t\)) of the light hitting each unit area of the target, so the units are W/m^2 or J s^-1 m^-2. We can calculate it from the amount of energy striking the target of known area \(A\) per unit time:

\[I=\frac{P}{A}=\frac{E}{tA}\]

So given a certain amount of irradiance with a known photon wavelength, how much momentum does that light carry? First, figure out how many photons per time per area \(I_{pho}\) that irradiance represents:

\[\begin{eqnarray}I_{pho}&=&\frac{I}{E_{pho}}\\
&=& \frac{I\lambda}{hc}\end{eqnarray}\]

The dimension of photon irradiance is photons per time unit per area unit (photon s^-1 m^-2 in SI units).

Now a bit about momentum. Again, everyone knows Newton's second law \(\vec{F}=m\vec{a}\), but again this is a special case. Force is defined as the change in momentum per unit time:

\[\vec{F}=\frac{d\vec{p}}{dt}\]

For massive objects at sub-relativistic speeds, momentum is defined as the mass of the object times the velocity of the object. For relativistic conditions, we use the energy relation above, where energy still has the customary units J=kg m^2 s^-2. Working through the units of the energy equation in the special case of a photon, we find that momentum still has the same units as it does in sub-relativistic conditions.

So from this, a given amount of energy of photons carries a certain amount of momentum. A certain power of photons carries a flow of momentum per unit time, or in other words a force. A certain flow of momentum impinging on a certain area is the same as a force exerted on that area, or a pressure \(\rho\). So, we can calculate the pressure exerted by a given irradiance of light: multiply the intensity in photons/time/area by the momentum of each photon, which gives momentum/time/area, which equals force/area, which equals pressure:

\[\begin{eqnarray}\rho&=&I_{pho}p_{pho}\\
&=&\left(\frac{I\lambda}{hc}\right)\left(\frac{h}{\lambda}\right)\\
&=&\frac{I}{c}
\end{eqnarray}\]

So the pressure exerted by the light is the irradiance of the light divided by the speed of light. That is amazingly simple, and notice that both Planck's constant and the wavelength cancel out. Now we can start plugging numbers.

First, let's do one of Clarke's space yachts. It has a mass on the order of 1000kg and on the order of 1km^2=1,000,000m^2 of sail, and it uses natural sunlight. How much force does it collect and what is its acceleration? Note that since the sails are reflective, the light is reversed in direction and undergoes a momentum change of twice its original momentum, so we collect twice as much force as absorbed light would provide.

\[\begin{eqnarray}I& & &\approx&1400\mbox{W/m}^2\\
c& & &=&299,792,458\mbox{m/s}\approx 3\times 10^8 \mbox{m/s}\\
\rho&=&2\frac{I}{c}&=&2\frac{1400}{3\times 10^8}=9.4\mu \mbox{N/m}^2\\
A& & &=&1,000,000 \mbox{m}^2\\
F&=&\rho A&=&9.4\mbox{N}\\
m& & &=&1000 \mbox{kg}\\
a&=&\frac{F}{m}&=&\frac{9.4}{1000}=9.4\mbox{mm/s}^2\approx1.0\mbox{mg}
\end{eqnarray}\]

So the whole sail collects a measly 9.4 newtons. Pulling on a ton of mass, we get almost a full milli-g of acceleration. Not a whole lot, but on the order of magnitude of that described in the story.

Now we get into the big numbers. The game Adventure Capitalist teaches us not to be afraid of big numbers, so let's go get some. The Starshot project describes using gigawatts of power on spacecraft weighing grams. Among the enormous engineering challenges are:

  • Focusing gigawatts of power onto square-meters sized targets over distances of billions of meters
  • Having a reflective enough sail that the GW/m^2 incident power doesn't vaporize the sail
  • Having a tough enough sail that the MW/m^2 absorbed power doesn't vaporize the sail
  • Having a thin enough sail to stay within the mass budget
  • Building a spacecraft with a useful payload, power source, and communications system that will work over interstellar distances and stay within the mass budget
  • By the way, the mass budget is about 10g total.

Putting all of these aside, they discuss a spacecraft with a mass of about 10g, a sail area of about 16m^2, and multiple gigawatts of power focused on it. Let's go with 1GW/m^2, or 16GW total, to see what we get:

\[\begin{eqnarray}I& & &\approx&1\mbox{GW/m}^2\\
\rho&=&2\frac{I}{c}&=&2\frac{1\times 10^9}{3\times 10^8}=6.66 \mbox{N/m}^2\\
A& & &=&16 \mbox{m}^2\\
F&=&\rho A&=&16\times 6.66=106\mbox{N}\\
m& & &=&0.01 \mbox{kg}\\
a&=&\frac{F}{m}&=&\frac{106}{0.01}=10.6\mbox{km/s}^2\approx1080\mbox{g}
\end{eqnarray}\]

Well, that's some git-up-and-go, alright. How long does it take this to get to a target speed of a good chunk of the speed of light, say 60,000km/s (0.2\(c\))?

\[v=at\]
\[\begin{eqnarray}t&=&\frac{v}{a} \\
 &=&\frac{60,000,000}{10600}&=&5660 \mbox{s}
\end{eqnarray}\]


That's not too bad, a little over 1.5 hours, and real close to 1 LEO period, but not what the paper is discussing, which is on the order of 10 minutes. What acceleration is needed for that?

\[v=at\]
\[\begin{eqnarray}a&=&\frac{v}{t} \\
 &=&\frac{60,000,000}{600}&=&100 \mbox{km/s}^2\\
F&=&ma&=&0.01(100,000)=1000\mbox{N}\\
\rho&=&\frac{F}{A}&=&\frac{1000}{16}=62.5\mbox{N/m}^2\\
I&=&\frac{\rho c}{2}&=&\frac{62.5(300,000,000)}{2}=9.4\mbox{GW/m}^2\\
P&=&IA&=&9.4(16)=150\mbox{GW}
\end{eqnarray}\]

This is about 10000g. Now we are talking cannon-type acceleration. Since the acceleration is about 10 times greater, it will require 10 times the power, or a couple hundred gigawatts. This is a good fraction of the electricity usage of the United States, so it is a large but doable amount of power. We only need it for 10 minutes. The paper discusses putting the laser in space, which would require a gigawatt-scale power source in space. I suppose a couple of square kilometers of solar cells could do that.

If all of the engineering problems are solved, this could work. There isn't anything physically impossible about it. The engineering challenges are large, but the Starshot project claims that each can be solved incrementally. We don't have to shoot at stars first -- imagine getting back to Neptune in a couple of days, and then being able to orbit once you get there.


Saturday, August 6, 2016

August 20 progress passed!

Yesterday Yukari 4 passed the August 20 progress verification point. A whole 15 days early, too. Here is video proof: