Tuesday, February 2, 2021

Dusting off some old code

In preparation for the Mars Science Laboratory landing, I made a video using the best pre-EDL data available, including a simulated EDL kernel set, MOLA topography, and HiRISE imaging. I'm pretty proud of it:


A couple of things that can be improved:

  • NOT the obvious borders between different photo areas. I like that effect, and it makes it harder for others to pass the video off as anything other than a simulation
  • Proper drop-off of the entry balance masses
  • A couple of editing details here and there
  • Better display of the landing ellipse
Things which will be stolen as-is:
  • Model of the lander, entry capsule, and parachute
  • Flame and scorch effect
Available data:
  • Current best simulated trajectory is believed to be this kernel covering cruise, EDL, and on-surface at the landing site
  • No attitude data yet. It might be possible to simulate it myself, based on how MSL flew.
  • Solar system geometry from NAIF, including the DE430 planetary and MAR097 Mars-satellite ephemerides.
  • NASA public-facing website showing the ground track of the rover. Before landing, it only shows the landing ellipse. This has image map data down to ~1m resolution around the landing site, in an extractable format.
  • Mars Global Surveyor MOLA data -- the stuff we want is radii, at 128 pixels per degree.
  • Landing ellipse -- The given landing ellipse has a semi-major axis of 4.24km, a semi-minor axis of 4.00km, and a major-axis azimuth indistinguishable from due east. I believe this to be the 99% confidence curve, which I think is about 3-sigma.
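As a sanity check on that guess: for a 2-D Gaussian, the fraction of samples falling inside the k-sigma ellipse is 1 - exp(-k^2/2), and inverting that for 99% lands almost exactly on 3-sigma:

```python
from math import sqrt, log

# Fraction of a 2-D Gaussian inside the k-sigma ellipse is 1 - exp(-k^2/2);
# invert that to find which ellipse holds 99% of the probability.
k = sqrt(-2 * log(1 - 0.99))
print(round(k, 2))  # -> 3.03
```

So if the published ellipse really is the 99% contour of a 2-D Gaussian, calling it "about 3-sigma" checks out.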
A lot of the old code depended on KwanPov (then called AstroPov). Unfortunately that version was lost in the Great Hard Drive Crash of 2012, so I will be using modern KwanPov, which has the usual SPICE support and double-precision variables, but not the old tile map stuff.

So, what will we be using instead? 

Topography

For topography, we used to use mesh2 objects. Now, we will use rectangular height fields. There is quite a bit of optimization in a height-field that we can't take advantage of with a regular mesh:
  • Height fields can use images as their base data, while meshes require text, which takes much longer to load.
  • Height fields are smaller and use less memory and storage.
  • Height fields are optimized -- imagine each height field is a cityscape, where each pixel is a skyscraper. When tracing a ray, the path of the ray across the height field can be projected down to two dimensions, and only those pixels in the height field which are crossed by the ray need to be checked in full 3D. 
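That projected traversal is essentially a 2-D DDA over the pixel grid. A minimal sketch of the idea (not KwanPov's actual implementation) that walks the ray's shadow across the grid and collects only the crossed cells:

```python
def crossed_cells(x0, y0, dx, dy, nx, ny):
    # Amanatides & Woo style 2-D DDA: step cell-to-cell along the
    # projected ray, visiting only the pixels the ray actually crosses.
    x, y = int(x0), int(y0)
    step_x, step_y = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
    inf = float("inf")
    # Parameter t at which the ray hits the next vertical/horizontal border.
    t_max_x = ((x + (step_x > 0)) - x0) / dx if dx else inf
    t_max_y = ((y + (step_y > 0)) - y0) / dy if dy else inf
    t_dx = abs(1.0 / dx) if dx else inf
    t_dy = abs(1.0 / dy) if dy else inf
    cells = []
    while 0 <= x < nx and 0 <= y < ny:
        cells.append((x, y))  # only these "skyscrapers" need the full 3-D test
        if t_max_x < t_max_y:
            x += step_x; t_max_x += t_dx
        else:
            y += step_y; t_max_y += t_dy
    return cells

print(crossed_cells(0.5, 0.5, 1, 0, 4, 4))  # -> [(0, 0), (1, 0), (2, 0), (3, 0)]
```

For an N×N height field a ray touches O(N) cells instead of the O(N^2) triangles a naive mesh test would consider.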
The minus side is that this will take quite a bit of pre-processing through something like Python (with NumPy and SciPy). The MOLA data will be resampled onto a 3D Cartesian grid. We will rotate the globe so that the part we want is on top. We will grab each section in a square aligned to the XY grid, and then use the Z coordinate as the value of the height field. Since you can't tessellate a disco ball with squares without having gaps, we won't try -- we will use the same rotation for all tiles. The square borders will fall along constant X and Y, not constant latitude and longitude.
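The rotation step might look like this in NumPy -- `to_local_xyz` is a hypothetical helper name, and this is just one workable rotation convention, not necessarily the one I'll end up with:

```python
import numpy as np

def to_local_xyz(lat_deg, lon_deg, radius, lat0, lon0):
    # Rotate the globe so that (lat0, lon0) ends up on the +Z axis;
    # the rotated Z coordinate then serves as the height-field value.
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    xyz = np.stack([radius * np.cos(lat) * np.cos(lon),
                    radius * np.cos(lat) * np.sin(lon),
                    radius * np.sin(lat)])
    cz, sz = np.cos(np.radians(-lon0)), np.sin(np.radians(-lon0))
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])  # lon0 -> meridian 0
    cy, sy = np.cos(np.radians(lat0 - 90)), np.sin(np.radians(lat0 - 90))
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # lat0 -> +Z pole
    return Ry @ Rz @ xyz

# The center of the region of interest should land at (0, 0, radius):
center = to_local_xyz(18.4, 77.5, 3396000.0, 18.4, 77.5)
print(np.round(center, 3))
```

SciPy's `griddata` (or similar) can then interpolate the rotated Z values onto the regular XY sample grid for each tile.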

We will sample each tile so that the full range of Z in that tile is mapped to [0..255], round Z off to the nearest integer, then record the X, Y, and Z bounding box so that the height field can be appropriately translated and scaled to fit back into the world. I'll use 8-bit resolution until it proves unsuitable, because there are many more tools to work with 8-bit grayscale PNG images than 16-bit.
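A sketch of that quantization step (hypothetical helper names; the uint8 array can then be written out as an 8-bit grayscale PNG with any image library):

```python
import numpy as np

def quantize_tile(z):
    # Map the tile's full Z range onto [0..255] and keep the bounds so
    # the height field can be scaled back into world coordinates.
    zmin, zmax = float(z.min()), float(z.max())
    if zmax == zmin:  # flat tile: avoid divide-by-zero
        return np.zeros(z.shape, np.uint8), {"zmin": zmin, "zmax": zmax}
    scaled = np.rint((z - zmin) / (zmax - zmin) * 255).astype(np.uint8)
    return scaled, {"zmin": zmin, "zmax": zmax}  # plus the tile's X/Y bounds

def dequantize(scaled, meta):
    # Inverse mapping, used when translating/scaling the height field
    # back into the world.
    return meta["zmin"] + scaled / 255.0 * (meta["zmax"] - meta["zmin"])

z = np.linspace(-1000.0, 2000.0, 100).reshape(10, 10)  # fake radii tile
scaled, meta = quantize_tile(z)
print(scaled.dtype, float(np.abs(dequantize(scaled, meta) - z).max()))
```

With rounding to the nearest level, the worst-case error is half a quantization step -- about 5.9 m for a tile with 3 km of relief here, which is where the "until it proves unsuitable" caveat comes in.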

Image map

For image maps, we can use the ill-documented repeat and offset keywords in an image map, along with the better-documented once keyword. This will be done as a layered texture, something like this:

object {
  blah blah blah 
  texture {
    pigment {
      image_map {
        png "tile03x04.png" 
        map_type spherical 
        once repeat <tiles_x,tiles_y> offset <3,4>
      }
    }
  }
  texture {
    pigment {
      image_map {
        png "tile03x05.png" 
        map_type spherical 
        once repeat <tiles_x,tiles_y> offset <3,5>
      }
    }
  }
}

The repeat keyword does what it sounds like -- it scales the given image so that it repeats the given number of times in the U and V directions. Either repeat component can be fractional, so you can do things like repeat 0.5 times (use only half the image), 1.5 times, etc.

Similarly, the offset keyword does what it sounds like -- it moves the given image around image space. If the image actually repeats, it doesn't make much sense for the offset to fall outside the range [0,1), but if the image is used only once, any values can be used for U and V.

I think order makes a difference, so if you repeat first then offset, you get a different result than if you offset then repeat.
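Treating repeat as a scale and offset as a translation in UV space (my assumption about the semantics -- I haven't verified this against the KwanPov source), the two orders clearly compose differently:

```python
def repeat_then_offset(u, r, o):
    # scale UV by the repeat count, then translate
    return u * r + o

def offset_then_repeat(u, r, o):
    # translate first, then scale -- the offset gets scaled too
    return (u + o) * r

print(repeat_then_offset(0.5, 2, 3), offset_then_repeat(0.5, 2, 3))  # -> 4.0 7.0
```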

The once keyword means that the image is used only once, rather than tiling the whole UV space. If the texture is evaluated at a point "outside" the original image, texture calculation is immediately aborted and this layer is treated as completely transparent.

We take advantage of layered textures: if more than one texture is specified for an object, and the top texture is transparent at the given point, then the next texture is checked, and so on. I have no idea how efficient it is to stack hundreds or thousands of layers, but I don't think there is any issue other than efficiency.
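Rather than writing hundreds of those layers by hand, a small Python generator can emit them. The helper name and the tileNNxNN.png naming scheme are my own; the emitted blocks just follow the pattern in the example above:

```python
def tile_textures(tiles_x, tiles_y, prefix="tile"):
    # Emit one layered texture per tile, following the image_map pattern
    # shown earlier (repeat/offset/once per the KwanPov extensions).
    blocks = []
    for ix in range(tiles_x):
        for iy in range(tiles_y):
            blocks.append(
                "texture { pigment { image_map {\n"
                f'  png "{prefix}{ix:02d}x{iy:02d}.png"\n'
                "  map_type spherical\n"
                f"  once repeat <{tiles_x},{tiles_y}> offset <{ix},{iy}>\n"
                "} } }")
    return "\n".join(blocks)

sdl = tile_textures(2, 2)
print(sdl.count("texture {"))  # -> 4
```

The returned string can be written to an .inc file and pulled into the scene with #include inside the object block.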

GIS data

Data from the GIS appears to be as follows:

  • Structured in layers, where each layer is exactly twice as zoomed in as the previous.
  • The highest-resolution dataset goes up to level 18
  • There are two base maps -- "Basemap" which appears to be derived from especially-high-resolution, targeted images from HiRISE, and "North East Syrtis Base Map" which is distinctly lower resolution, only going up to level 12.
  • The data is structured as level/column/row, where column strictly increases with east longitude, and row increases with north latitude.
  • Each level is broken up into 256×256-pixel tiles.
Establishing the exact scale will take some doing. On level 18:
  • There is a black border around the data
  • The corner of the light data is visible and can have coordinates measured. 


  • The lower-left darkest-but-not-black corner is in tile 187259/144561, at pixel column 30, row 12 (from top). This is at 77.16126356273891E, 18.21102425519005N
  • The upper-left about 50%-black corner is in tile 187258/144593, at pixel column 137, row 99 (from top). This is at 77.16046560555698E, 18.721192572014488N
  • The upper-right not-quite-black corner is in tile 187650/144953, at pixel [xxx,xxx], at location 77.69832998514177E,18.7211973350726N
  • The lower-right not-quite-black corner is in tile 187650/144561, at pixel [xxx,xxx], at location 77.69832998514177E,18.211024892155642N
  • On a spherical surface of radius 3396km, 1 degree is 59.271km, so 1 millidegree is 59.271m and one microdegree is 59.271mm.
  • At level 18, one pixel at the bottom edge of the basemap spans about 77.69833266735078-77.69832730293275=5.36 microdegrees, or about 318mm.
It looks like there are 262144=2^18 tiles across, which makes perfect sense for being level 18. The leftmost column has its left edge at longitude -180deg. The latitude isn't coming out quite so cleanly.
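The longitude bookkeeping can be checked numerically, assuming 256-pixel tiles and the left edge of column 0 at -180 degrees, as inferred above:

```python
DEG_PER_TILE = 360.0 / 2**18        # 262144 tiles across at level 18
DEG_PER_PX = DEG_PER_TILE / 256     # 256-pixel tiles
M_PER_DEG = 3396000 * 3.141592653589793 / 180  # spherical Mars, R = 3396 km

def pixel_lon(tile_col, px_col):
    # East longitude of a pixel's left edge; assumes column 0's left
    # edge sits at -180 degrees.
    return -180.0 + (tile_col + px_col / 256.0) * DEG_PER_TILE

print(round(DEG_PER_PX * 1e6, 2))        # microdegrees per pixel -> 5.36
print(round(DEG_PER_PX * M_PER_DEG, 3))  # meters per pixel -> 0.318
print(round(pixel_lon(187259, 30), 5))   # lower-left corner -> 77.16126
```

The last value matches the measured lower-left corner longitude (77.16126356E) to within a microdegree, which is good evidence the longitude model is right even while the latitude origin remains unexplained.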
