Monday, April 8, 2024

Totality 2024

Sketch of what the eclipse looked like to my Mk1 eyeball

We almost didn't make this trip at all. We had seen a total eclipse before, and this one was scheduled at a very inconvenient time for us.

The original plan was to drive to Oklahoma and stay with family. Their place isn't in the path, but it's close. The plan was to drive from there to whichever part of the path had the best combination of closeness and good weather. We had penciled in Dallas, since that was the closest spot on the path with good historical weather; the record showed that farther south was more likely to be cloud-free.

As it happened, the weather was very different from the averages. New England was clear, while Texas was cloudy with a danger of severe storms. If we had gone with that plan, we probably would have headed toward Arkansas.

For various reasons, we didn't go with that plan. For a while, we had decided to skip it altogether. However, another event with a different branch of the family came up. They live in Indianapolis, and that event was the Friday before the eclipse on Monday. So our plan changed once again: drive to Indiana, then judge the weather there. We were willing to drive wherever we needed, but Indianapolis is already in the path, so we were hoping for good weather there.

Three weeks ago, the longest-range forecast was excellent for Indiana, so we decided to go there. Ten days ago, the forecast was about the worst imaginable: a frontal system lying almost perfectly along the path, cloudy from coast to coast. Since then, the forecast improved, with Indianapolis showing between 6% and 50% cloud cover depending on the model and lead time.

As late as yesterday I was still contemplating driving to Maine, but we couldn't have taken all the family. We decided to stay here and take our chances. If it was cloudy, we would just see the clouds get dark.

The weather report on Monday morning wasn't great. The front had passed overnight, but there were still lots of cirrus clouds, and I was worried.

As the eclipse started, the sky was still full of cirrus clouds, but they weren't affecting visibility of the partial phase. As usual, the eclipse was only barely noticeable until coverage got over 90%.

The "party" as such was set up in the back yard, with a table and a bunch of lawn chairs. The rest of the party played cards while I did my usual anti-social thing. 

We made a pinhole camera with a cardboard box, foil, paper, and a glue stick. It worked fine, projecting an image maybe a couple of millimeters in diameter but clearly with a bite out of it. We also had a full complement of eclipse glasses and one pair of solar binoculars.

With the binoculars, I could see one sunspot near the center of the sun, maybe at 9:00. 

The soundtrack for the partial phase was "There's a little black spot on the Sun today."

The most interesting thing I remember from the partial phase of the 2017 eclipse was the crescent shadows cast by the trees. There were no trees in the back yard, but there were several out front, so I shouted to ask whether any of the kids wanted to come out front with me to look for something interesting. I did see the crescent shadows, but it also meant I had my wife and all of the family's kids out front with me during totality.

The memorable thing this time was that totality looked like a hole in the sky, like a portal. "There's a hole in the sky, through which things can fly." The eclipse itself was silent of course, but the soundtrack in my head was the sound of the "whale probe" from Star Trek 4. 

Also, there was a clear red spark at about 6:30, which turned out to be a prominence. I only remember noticing it after a minute or so, but everyone around me saw it too.

The moon was the same color as the sky, and the corona was white. During the 2017 eclipse, there were several long corona streamers. I didn't see anything like that this time -- the corona was a relatively thin band, with a definite sharp inner edge and a fade-out on the outer edge, but still pretty thin. The sketch above is about what I saw, and a couple of my party agree it looked like that.

The sky in general was dark like twilight. I don't remember exactly how dim it got last time, but it might have been dimmer this time.

Totality was long enough for me to get a video of my party, a couple of telephoto images through my camera, and still experience the event fully with my Mk1 eyeballs.



I did see the diamond ring at third contact, for about two seconds before putting the glasses back on. I could still see the corona, but along with an overwhelmingly bright patch of sun. The corona was still visible on the opposite side.

Saturday, September 23, 2023

Gravity Simulator #1,000,006

I've written a lot of gravity simulators. This one is the first time I've built one into a game engine. First, I created a Node2D and a Sprite2D with a texture so it's visible. I then attached the following code to the sprite node:


extends Sprite2D


var v:Vector2=Vector2(100,0)
var center:Vector2=Vector2(500,200)
var mu=3e6


# Called when the node enters the scene tree for the first time.
func _ready():
	var vec=Vector2(500,0)
	global_position=vec


# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta):
	var r:Vector2=(global_position-center)
	var a:Vector2=-mu*r/r.length()**3
	v+=a*delta
	global_position+=v*delta

The _process() function is a very simple Euler integrator. The acceleration is the standard test-particle gravity equation of motion: the center is treated as infinitely heavier than the test particle, so the gravity of the test particle doesn't affect the motion of the center.
The next obvious things to do:
  • Use Runge-Kutta instead of Euler
  • Gravitate towards another sprite instead of an invisible point
  • Give both points finite mass 
  • Gravitate towards any number of masses, to make an N-body simulator.
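For the first bullet, here is roughly what the swap might look like, sketched in plain Python rather than GDScript so it can run standalone (the helper names `add`, `scale`, `euler_step`, and `rk4_step` are my own, not anything from Godot):

```python
import math

MU = 3e6  # gravitational parameter, same value as in the script above

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def scale(a, s):
    return (a[0] * s, a[1] * s)

def accel(pos):
    # a = -mu * r / |r|^3, gravity toward the origin
    r = math.hypot(pos[0], pos[1])
    return scale(pos, -MU / r**3)

def euler_step(pos, vel, dt):
    # Same scheme as the GDScript _process(): update v first, then position
    vel = add(vel, scale(accel(pos), dt))
    pos = add(pos, scale(vel, dt))
    return pos, vel

def rk4_step(pos, vel, dt):
    # Classic fourth-order Runge-Kutta on the combined state (pos, vel)
    def deriv(p, v):
        return v, accel(p)
    k1p, k1v = deriv(pos, vel)
    k2p, k2v = deriv(add(pos, scale(k1p, dt / 2)), add(vel, scale(k1v, dt / 2)))
    k3p, k3v = deriv(add(pos, scale(k2p, dt / 2)), add(vel, scale(k2v, dt / 2)))
    k4p, k4v = deriv(add(pos, scale(k3p, dt)), add(vel, scale(k3v, dt)))
    # state += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    pos = add(pos, scale(add(add(k1p, scale(add(k2p, k3p), 2)), k4p), dt / 6))
    vel = add(vel, scale(add(add(k1v, scale(add(k2v, k3v), 2)), k4v), dt / 6))
    return pos, vel
```

Inside _process(delta), the change would just be calling the RK4 step instead of the Euler step. RK4 evaluates the acceleration four times per frame, but it holds a bound orbit far more accurately at the same frame rate.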

It looks like I might be right about how to do physics in Godot, but it still doesn't feel quite right. Godot includes ragdoll physics and joints. In order to integrate custom physics, I would have to be able to generate custom forces and moments, instead of directly doing the numerical integration myself. I expect something more like:

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta):
    var r:Vector2=(global_position-center)
    var F:Vector2=-mu*global_mass*r/r.length()**3
    add_force(F)
    var M=... #calculate moment on the object
    add_moment(M)

This way, if the engine uses a higher-order integrator or adds its own forces, this slides in nicely.


I wish that GDScript were just Python. It isn't, for a reason, but I don't know the reason yet. It could be:

  1. GDScript came first, or at least came before Python was widely known.
  2. Python is too hard to integrate. In this case, they considered *creating an entirely new language interpreter* easier than incorporating Python.
  3. True Python implies that the entire marketplace of Python libraries can be used, and that may have been too difficult to achieve.
  4. Python doesn't match the authors' internal model for how to do scripting
  5. Godot was started as a personal project, and a scripting engine was one of the "fun" things that they wanted to do with it.
In any case, I have reached the part of the tutorial where scripting is introduced. The template for a script for a node includes the following interesting code:

extends Node2D


# Called when the node enters the scene tree for the first time.
func _ready():
	pass # Replace with function body.


# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta):
	pass


The interesting one is _process(). I can imagine a mode of using the game engine where I just use it as a graphics engine -- every frame, this function is executed by the graphics engine, where it does all of my custom physics etc, changes the position properties of a bunch of nodes, then returns and lets the graphics part of the engine do its thing. 

Seriously, this function can implement a Runge-Kutta numerical integrator from the ground up -- it can use the node properties as part of its state, keep global or static variables for the rest, and implement any law of motion I can think of. I'm not smarter than Newton or Euler and I intend to use their laws, but I don't *have* to.

If this was all Godot did for me, it might be just right. I'm still exploring, seeing what else it can do for me (I'm on part 25 of 47 of the Intro to Godot course). I strongly suspect that the engine does have a physics component though, since I have seen "gravity" as a property of some of the nodes.


Friday, September 22, 2023

Fallout from the Unity Disaster

I have always been a low-level kind of programmer. I have a good intuitive sense of How Things Work, but I tend to build my own mental abstractions on top of these, and I often have trouble learning other people's abstractions.

For instance, one of my unfinished projects is making math explainer videos. I knew about Manim from 3blue1brown, but he uses a completely different mental model of how animations work. It's tough to argue with success -- he has hundreds of videos and millions of subscribers, while I have zero and zero. Plus, I could never get the dancing equations to work as well in PictureBox as he does with TransformMatchingTex().

Similarly, I have always wanted to get into 3D simulator development -- not necessarily a game, more like a virtual environment where I can program virtual robots. One of my other long-term unfinished goals is to animate the launch of various historical spacecraft, much like Jim Blinn animated the flybys of Voyager. I want to be able to write a physically realistic rocket guidance program similar to what a Titan would have actually used, and use that to create a physically accurate, properly scaled animation of the launch. The legend goes that the second Voyager launch (Voyager 1, don't ask) only had about two seconds of fuel left in the upper stage when it finished the boost. How much did the first launch (Voyager 2) have? Is two seconds a lot? This can't really be answered without proper scale.

Anyway, one of the things I have been avoiding as someone else's abstraction, is game engine design and usage. I always thought myself capable of writing my own, even though I kept bouncing off of OpenGL and *its* abstractions. I have written simulator engines any number of times, basically just fancy numerical integrators. I have never learned anyone's game engine -- I have never explored anyone else's abstractions.

In the last couple of weeks, the Unity game engine has claimed the authority to retroactively alter its relationship with its developers.



Let's just say that the development community hasn't taken it well.

One aspect of the fallout of this announcement is a mad scramble to other engines. It might be too late for programs that are years along and too intertwined with Unity, but many, many people are looking at alternatives, and one that keeps coming up is Godot (https://godotengine.org/). This one is free and open source, so it can't help but stay free.

Even though it is free, I did spend about $26 on an online course for it (https://www.humblebundle.com/software/everything-you-need-to-know-about-godot-4-encore-software). The biggest thing I am looking for is how well their abstractions match up with my own. This will be interesting, as I'm not even sure what I expect a game engine to do for me. How does the physics part work? Can I write my own physics? Is there a numerical integrator underlying things? Are translational and rotational kinematics already implemented, so that I just have to write my own dynamics? The engine might do too little and require me to implement my own numerical integrator, or it might do too much and implement things in such a way that I can't do a rocket.


Wednesday, May 31, 2023

Velocicoaster

On Friday, May 26, I rode the Velocicoaster at Universal Orlando. I almost didn't, for a couple of reasons, but I am glad I did. I'm not so sure I would go on it again, but I did offer to ride it with my niece once she heard that I went on it.

Monday, April 17, 2023

Shipometer progress

We are on Shipometer 23.04 now. Shipometer 23.03 was a failure, perhaps for multiple reasons.

First, the shipometer is a HAT for a Raspberry Pi that can carry a ZED-F9R GPS+IMU, a BME280 pressure/temperature/humidity sensor, and an ICM20948 9DoF sensor. Even though the ZED-F9R already has an IMU built in, I want the ICM20948 because I can control it at a lower level, and because it includes a magnetometer as well.

Finally, it carries an LPC210x as a precision timer. The ZED-F9R generates a pulse-per-second (PPS) signal timed to have its rising edge right at the top of each second, with sub-microsecond accuracy. This PPS is routed to and captured by the Pi, but it is also routed to the timer capture input of the LPC210x. This microprocessor is an old ARM7TDMI, but it has a hardware 32-bit timer capable of running at 60 MHz, with multiple usable capture inputs. The LPC210x is programmed to count at 60 MHz, reset after 3.6 billion cycles (60 seconds), capture several inputs, and output the exact timer count of each pulse to its serial port, which is wired to the Pi's UART1. This way, the time of each PPS and sensor data-ready signal can be recorded with no latency from things like interrupts and real-time software.
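The arithmetic to turn those captured counts back into times is simple, as long as the reader of the serial stream accounts for the 60-second rollover. A Python sketch (the function names are hypothetical, not from the actual Shipometer code):

```python
TICK_HZ = 60_000_000       # LPC210x timer rate, 60 MHz
ROLLOVER = 3_600_000_000   # timer resets after 3.6 billion counts = 60 s

def counts_to_seconds(count):
    """Position of a captured count within the current 60 s rollover window."""
    return count / TICK_HZ

def delta_counts(earlier, later):
    """Elapsed counts between two captures, tolerating one rollover between them."""
    return (later - earlier) % ROLLOVER
```

Two PPS captures one second apart should differ by exactly 60,000,000 counts; any drift from that measures the LPC210x oscillator's own error against GPS time.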

Wednesday, February 1, 2023

Jewel of the whenever: Matrix Multiplication

Eigenchris (I kinda wish I thought of that first) has a series of videos on tensors. I can't give my opinion on the series yet because I haven't seen it all. However, he does something cool that I have never seen before as a mnemonic for matrix multiplication (start at 4:32):
 

We arrange the multiplication as follows: the final product goes in the bottom right. To its left, we put the left matrix, and *above* it we put the right matrix. Each cell of the product is then the dot product of the row vector to its left and the column vector above it.

\[\begin{matrix}  & \begin{bmatrix}    . & w_0 & . & . \\    . & w_1 & . & . \\    . & w_2 & . & . \\\end{bmatrix}\\ \begin{bmatrix} v_0 & v_1 & v_2 \\ . & . & . \end{bmatrix}& \begin{bmatrix} . & \vec{v}\cdot\vec{w} & . & . \\ . & . & . & . \end{bmatrix}\end{matrix}\]
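The mnemonic translates directly into code. A small Python check, written exactly as the picture reads (one dot product per product cell):

```python
def matmul(A, B):
    """Multiply A (m x n) by B (n x p): each product cell is the dot product
    of the A-row it sits on and the B-column it sits under."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

The arrangement also makes the shape rule obvious: the left matrix's rows and the right matrix's columns tile the product, so a 2x3 times a 3x4 gives a 2x4.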

Friday, September 9, 2022

The One Picture that explains Phase Locked Loops

A Phase Locked Loop had always been mysterious to me, until now. The following three pictures explain it all, and the third one is where the light goes on. Here are the first two, to build dramatic tension and also because they do the best job I have ever seen of explaining the block diagram of a PLL. They're from Shawn Hymel's series on FPGA programming.


First diagram:

The PLL consists of three sections:
  • The phase detector produces a signal based on whether the reference and fed-back signals are in phase. I'm not sure of the details, but it might be something as simple as comparing both signals to zero (returning 1 if positive and 0 if negative) and then XORing those comparisons. If the signals are in phase, they will always be on the same side of zero, and the phase detection output will be constant. If they are out of phase, sometimes they will be on opposite sides and the phase detection output will not be constant.
  • The low-pass filter takes the phase detection signal, treats it as a PWM signal, and converts it to analog just by running it through a resistor-capacitor (RC) circuit. The output is then some analog signal that is a function of the average of the phase detection signal.
  • The voltage-controlled oscillator (VCO) then takes that signal as an error signal. I'm sure it does some fancy PID magic that finds just the right output to keep the input error signal at zero. It feeds this to the oscillator, which then runs at the commanded frequency.
  • The output is fed back to the phase detector to produce a proper closed-loop control system.
In this diagram, the output of the VCO is significantly out of phase with the reference, *because* it is not the right frequency. It's impossible for two signals of different frequency to stay in phase.
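My compare-to-zero-and-XOR guess is easy to test numerically. A Python sketch (this is just my assumed detector, not necessarily what real PLL hardware does):

```python
import math

def square(t, freq_hz, phase_rad):
    """Digital square wave: 1 while the underlying sine is non-negative."""
    return 1 if math.sin(2 * math.pi * freq_hz * t + phase_rad) >= 0 else 0

def xor_phase_detect(freq_hz, phase_offset_rad, samples=10_000):
    """Time-average of XOR(reference, feedback) over one second of samples:
    0.0 when perfectly in phase, rising toward 1.0 at 180 degrees out."""
    hits = sum(square(i / samples, freq_hz, 0.0) ^
               square(i / samples, freq_hz, phase_offset_rad)
               for i in range(samples))
    return hits / samples
```

The average climbs from 0 at zero phase error to 1 at 180 degrees out, which is exactly the kind of signal the low-pass filter can smooth into an error voltage.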

Second diagram:

In this diagram, the output phase has locked. The error signal from the phase detector and LPF is zero, and the controller in the VCO knows that whatever settings it is using now are correct, and keeps them there. (Note that in this diagram, the clock is an analog sine wave. Just pretend it's a digital square wave.)

Third diagram, and critical part:


This shows a clock divider in the feedback path. Digital clock dividers are relatively easy to implement, requiring only a counter. To divide by N, make a counter big enough to count to N. On each input clock, increment the counter, but when the counter is about to reach N, reset it instead. If N is even, then it's pretty easy to set up some logic so that whenever the counter is in the first half of its run, a low signal is output, and vice versa. Odd N is a little trickier, but still doable.
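That counter scheme can be modeled in a few lines. A Python sketch of the easy even-N case (real hardware would be a counter register plus a little logic, but the behavior is the same):

```python
def clock_divider(n, ticks):
    """Divide-by-n for even n: count input ticks 0..n-1, resetting at n.
    Output is 0 during the first half of each cycle and 1 during the second."""
    assert n % 2 == 0, "this sketch only handles the easy (even) case"
    out = []
    count = 0
    for _ in range(ticks):
        out.append(0 if count < n // 2 else 1)
        count += 1
        if count == n:   # about to reach n: reset instead
            count = 0
    return out
```

Eight input ticks through a divide-by-4 produce two full output cycles, i.e. one quarter of the input frequency.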

Multipliers, on the other hand, are difficult, and in fact are why we need all this fancy PLL stuff to begin with. With a PLL and a *divider* in the feedback path, we can implement a *multiplier*.

If you put a divider on the input reference signal as well, you can get frequency multiplication by any rational factor.
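Put together, a divide-by-M on the reference and a divide-by-N in the feedback give f_out = f_ref * N / M. A one-liner with illustrative numbers (the 12 MHz reference and 50 MHz target are made up for the example):

```python
def synthesized_frequency(f_ref_hz, n_feedback, m_input):
    """PLL output frequency: the feedback divide-by-n multiplies, the input
    divide-by-m divides, so f_out = f_ref * n / m."""
    return f_ref_hz * n_feedback / m_input
```

For example, a 12 MHz crystal with N = 25 and M = 6 locks the output at 50 MHz, since the loop forces (f_out / 25) to match (12 MHz / 6).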