Category Archives: Astrophotography

Flat Fields Matter

For a couple of months, High Point Scientific had my new tripod on backorder. It is apparently quite popular, being solid yet inexpensive. I was pretty excited to get the notice that it was shipping.

As it is winter and Orion is quite prominent in the night sky, I thought it would be nice to capture the Orion nebula. Because of its location, basically in Orion’s dagger, it should be easy to find, at least compared to many, maybe most, deep sky objects.

For my first try, I had my CLS filter in place. We have a big sodium light that is basically in the same direction as Orion when I am set up in what is arguably a very handy place, in my driveway, just outside the garage. This filter, however, is pretty dark and it made it more difficult to find anything. At some point, I decided that maybe I was pointed in the right direction and that I just couldn’t see the nebulosity in my test shots, so I set the thing loose taking 180 x 30 second subs. I spent some of the capture time in the house doing things that needed doing and some of it waiting in the car with the heater on, which was kinda novel.

I had started a little later than planned, plus all that attempting to find a nebula pushed my capture kind of late. The exact place I had set up had not really been chosen with my capture plan in mind. This was where the camera was pointed at the end of 180 frames.

I captured a really pretty field of stars and I had only missed the nebula by this much:

As luck would have it, a couple nights later was clear and a Friday, so I set up again. This time I removed the CLS filter, hoping it would make it easier to find the nebula. I am not sure whether or not it made a real difference, but I did find it!

It was very exciting not only to see the nebula show up on the viewscreen, but also to be able to frame it so perfectly.

I captured another set of 180 x 30 second subs, plus about 30 darks, flats and bias frames. I set up a little bit out in the yard to keep from catching the house if the capture went long. I also set up a heater and, for the most part, sat with the equipment during the capture.

I also ran a small test of two other bits of equipment, a dew heater for the astrograph and my Bluetti power station. While the power station was not purchased specifically for astrophotography (power loss during winter was the big thing), using it for possible dark site travel was a consideration.

This was the first time I had even powered up the dew heater. According to the Bluetti display, on high, it draws 6 watts. I could hardly even tell it was warm against the aluminum dew shield of the Redcat. I suppose that all it has to do is keep it just warm enough to discourage condensation. Shrug. Whether or not it was successful would come up soon enough.

We had plans for early Saturday afternoon, so I decided to stay up and do at least a preliminary stack of the capture. I scrolled fairly quickly through the subs, discovered a couple where I presume I had bumped the tripod, and excluded those. The final stack came out… ummm… odd.

This is after a bit of stretching in GIMP. There are two anomalies in this image. The most obvious to me is that the bottom 1/3 or so seems blurred, out of focus. I had run through checking the subs pretty quickly, but certainly none were way out of focus. It also strikes me as odd to be out of focus only on the bottom of the image. The top and middle seem to be in sharp focus.

The other thing, harder to notice because of the blurring, is a definite linear gradient from top to bottom.

It was too late and I was too tired to do much about it just then, so I hit it again the next morning. On one of my more careful trips through the subs, I noticed that several towards the end of the capture seemed to have soft focus, so I excluded them, and the result was essentially unchanged. I reviewed my flats, noticed that they had a linear gradient to them, and thought, oh, that makes sense, so I restacked again without flats: no change. It occurred to me later that I may have unchecked all the flat captures, but maybe not the flat master that the previous stack process created.

I posted a PNG of the stretched blurry image on the Nebula Photos Patreon community page, with some details about the capture. Nico took an interest and, a few private emails later, I had much more carefully tried stacking without the flats. I cleaned all of the .info files out of the lights folder, moved the excluded lights to another folder, and moved the master flat to another folder as well. When I stacked this time, it was 131 lights, zero flats and the presumably good dark master and bias master. The image came out great, and with a couple of stretches and a crop:

My favorite astro image thus far!

To further verify that the flats were the issue, I kept all the rest of the conditions the same, added the flats back, and got this different image. It may seem to be ok, but upon closer examination, it is still very wrong.

It is hard to tell at full size, but in the bottom half of the picture, getting worse toward the bottom, the stars split into three divergent images of red, green and blue.

I will be the first to admit that I do not understand the inner workings of Deep Sky Stacker and how it uses the calibration files, but it now seems obvious that if there is an issue with those files, it can damage your final image in unpredictable ways.

Some of the discussion with Nico was about my flat capture process and I am going to rework how I am doing that. For this session, flats were captured by holding a USB tracing pad up to the end of the dew shield and adjusting exposure until it was just a little underexposed, which turned out to be 1/2500 second. This is probably way too fast, catching a pattern of sensor noise, PWM flicker from the panel’s brightness control, and artifacts from the camera’s shutter itself.
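To put some rough numbers on the PWM flicker idea, here is a quick back-of-the-envelope sketch. The 1 kHz dimming frequency is just a guess for illustration, not a measurement of my panel; actual panels vary widely.

```python
# Why a very fast flat exposure can clash with a PWM-dimmed light panel:
# at 1/2500 s the shutter sees only a fraction of one dimming cycle, so the
# recorded brightness depends on where in the cycle the exposure happens to
# land.  Slower exposures average over many cycles and come out consistent.
# The 1 kHz figure below is an assumption, not a measurement.
pwm_hz = 1000.0

for shutter_s in (1/2500, 1/250, 1/25):
    cycles = shutter_s * pwm_hz
    print(f"1/{round(1/shutter_s):>4} s exposure spans {cycles:6.2f} PWM cycles")
```

At 1/2500 the exposure covers well under one cycle, which would explain flat frames that vary in brightness from shot to shot.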

There are several ways to address this and I will report on what works well for me.

Is the Write Speed the Right Speed?

I have been aware that SD cards, and particularly MicroSD cards, can have read and write speed limitations; however, only recently have I had two separate issues that turned out to be due to slow write speeds.

Though I didn’t realize it at the time, write speed was likely the issue that caused some videos taken by my little DJI Mavic Mini to fail. I started it recording and flew around for a while. Later, the video was only about a minute long; I had definitely intended to record more than that. I now think the slow SD card write speed caused the high resolution video to simply overwhelm the card and the camera just shut off. I presume there was no notice, but I will look for some kind of on screen warning in the future.

I also had trouble with a recent astrophotography capture. I was getting 100 subs of 30 seconds each. The length of the exposure doesn’t affect the size of the file, but when you are going to capture for nearly an hour, you don’t want to wait any longer between shots than necessary. Most DSLR cameras capture to an internal buffer, then write that image to the memory card between pictures. Generally, the write time of the camera is hidden from the user because we tend to take a picture or two, then put the camera down while we wait for something else to take a picture of to come around. However, with astrophotography, you are taking dozens or even hundreds of long exposures in a row.

In this capture, I had the camera set to pause for two seconds between exposures. That pause time accounts for nearly four minutes in the whole capture process. I noticed that a little while into the capture, the busy light was staying lit past the time for the next picture to start. Because the intervalometer just sends a 1 second signal to the camera and the camera was using its internal shutter timer, when this busy event happened, the camera would miss a shutter event. That let it catch up on the write backlog and then sit idle while the 31 or so second wait on the intervalometer timed out. It would then capture 5 or 6 images before the write delays added up enough for it to miss another shutter event. So, without intervention, my 100 captures would have turned out to be 90 or so.
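Here is a toy model of that backlog behavior. All of the numbers (file size, buffer size, card speed) are made-up placeholders, not measurements of my camera or card; the point is just the mechanism of the buffer slowly filling until a shutter event gets skipped.

```python
# Toy model: the camera buffers each RAW in memory and, in this simplified
# model, only flushes to the SD card during the gap between exposures.
RAW_MB     = 30.0    # assumed size of one RAW frame
BUFFER_MB  = 120.0   # assumed in-camera buffer (a handful of frames)
WRITE_MBPS = 5.0     # assumed sustained write speed of a slow card
EXPOSURE_S = 30.0
SHOTS      = 100

def missed_shots(gap_s):
    backlog_mb = 0.0
    missed = 0
    for _ in range(SHOTS):
        if backlog_mb + RAW_MB > BUFFER_MB:
            # Buffer full: the camera is still busy writing when the
            # intervalometer fires, so this shutter event is lost and the
            # whole cycle is spent draining the buffer instead.
            missed += 1
            backlog_mb = max(0.0, backlog_mb - WRITE_MBPS * (EXPOSURE_S + gap_s))
        else:
            backlog_mb += RAW_MB
            backlog_mb = max(0.0, backlog_mb - WRITE_MBPS * gap_s)
    return missed

for gap in (2, 5, 8):
    print(f"{gap} s gap: ~{missed_shots(gap)} missed out of {SHOTS} shots")
```

With these assumed numbers, the 2 second gap drops a shot every handful of frames while the 8 second gap never falls behind, which is the same shape of behavior I saw.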

I changed the delay between shots on the intervalometer to 5 seconds instead of two. This got it to 10 or 12 shots before the camera was busy and missed a shutter event. I then set it to 8 seconds for the remaining 40-50 shots, and it did not miss any more.

Had the 8 second delay been in place for the entire 100 shots, it would have added 14 minutes to the entire process. It’s not like that is a huge part of one’s life, but after you capture 100 lights, then you need to capture 30-50 darks at the same shutter speed and 30-50 flats. The flats will be at a shorter shutter speed, but that actually makes the write speed problem worse.

I found, not surprisingly, that a) the read and write speeds on memory cards are rarely specified; b) when they are, write speed has a bigger effect on price than capacity (64GB cards with 250MB/s write speed cost more than 128GB cards with 130MB/s write speed); and c) anything slower than about 100MB/s will probably not show the spec at all, and those cards are pretty much always inexpensive.

To address both problems, I ordered four 64GB cards from B&H Photo that specify 250MB/s read and 130MB/s write speeds.

By the time they arrived, I had found some somewhat questionable data indicating that the particular card I had used in both the DJI Mavic Mini and the EOS Rebel T6i probably has a write speed more along the lines of 30MB/s. I found a simple disk benchmark program and tested the new and old cards.
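I used an off-the-shelf benchmark tool, but the basic idea of a sequential write test is simple enough to sketch. The mount path below is hypothetical, and real benchmark programs are more careful (direct I/O, several block sizes, read tests too), so treat a result like this as a ballpark figure only.

```python
# Minimal sequential-write test: stream a few hundred MB to the card in
# large blocks, fsync so the data actually reaches the card, and divide
# size by elapsed time.
import os
import time

CARD_FILE = "/Volumes/SDCARD/speedtest.bin"   # hypothetical mount point
BLOCK     = 4 * 1024 * 1024                   # 4 MiB per write
TOTAL_MB  = 512

block = os.urandom(BLOCK)
start = time.monotonic()
with open(CARD_FILE, "wb") as f:
    for _ in range((TOTAL_MB * 1024 * 1024) // BLOCK):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.monotonic() - start

print(f"sequential write: {TOTAL_MB / elapsed:.1f} MB/s")
os.remove(CARD_FILE)
```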

The old card, as expected, was pretty slow:

The new card was much faster, exceeding the write spec, assuming they specify the best spec rather than the average:

No SD card does well with random reads and writes.

For perspective, here is the report on the 250GB SSD in my laptop:

… and my Toshiba 2TB USB drive that I use for various archiving and backup tasks:

In practical terms, I set the camera to its fastest shutter time of 1/4000 second and set it to continuous shooting. Press and hold the shutter button, and with the new card, it takes 7 pictures at the max speed of 5 frames per second, then it slows down to about 1 per second. Release the shutter button and it takes about 5 seconds for the busy light to go out. With the old card, you still get the 7 shots buffered in the camera, but the catch-up is more like 1 shot every 2 seconds, and then it takes nearly 10 seconds for the busy light to go out. The new card should definitely be an improvement.

Star Tracking

Although it was not strictly necessary in order to use my star tracker, I found that decoding its terminology finally helped me understand what right ascension and declination are. As a long-time, low-intensity astronomy geek, I am embarrassed to admit that I never really pursued understanding those terms. As it turns out, they are not particularly complicated. Viewing all of space as the inside surface of an imaginary sphere, right ascension is essentially longitude and declination is latitude. Right ascension refers to the ascension, or the point on the celestial equator that rises with any celestial object as seen from Earth’s equator, where the celestial equator intersects the horizon at a right angle. The origin of the numbers is a point defined by the location of the sun on the March equinox; that line is currently in the constellation of Pisces, but due to Earth’s axial precession, it moves about 1 degree west over a 72 year span. Declination is measured from the projection of Earth’s equator onto the celestial sphere; it follows Earth’s tilt and also slowly shifts in response to Earth’s axial precession.

In order to take longer individual exposures, I bought a tracking mount. A tracking mount, most commonly called a star tracker, is a telescope mount with a motor drive in it. If you set up the unit such that its right ascension axis is parallel to Earth’s rotation axis, then the motor drive can rotate the mount in sync with the apparent motion of the sky, allowing you to capture significantly longer individual exposures with no (or very, very little) distortion of the stars. This axis is generally abbreviated RA for right ascension and pronounced as the letters R A. This axis also usually has a clutch to temporarily disconnect it from the motor drive for coarse positioning.

Connected to the driven right ascension axis is the physical mounting for your camera and/or telescope. This is called the declination mount and is usually abbreviated DEC and pronounced “deck”. This mount also usually has a clutch or other release to facilitate coarse positioning of the telescope.

I chose the iOptron SkyGuider Pro. I happened to purchase mine from High Point Scientific and, at the time of purchase, it was $488 USD. I also got the companion ball mount for my camera. This is not the absolute cheapest tracker I could find, but it’s definitely among the least expensive options available. Its size and flexibility are well suited to my interests.

The SkyGuider Pro can carry up to 5 kg (11 pounds) of balanced load, or up to 1.5 kg (3.3 pounds) unbalanced. This is perfect for mounting a DSLR with a pretty normal lens without the complications of adding the declination mounting bracket with the counterweight attached.

It is about as simple as one could hope to set up. I have thus far used it on a decent photography tripod, though I imagine I will upgrade to a more astronomy minded tripod. However, I have not detected any issues that could be blamed on the tripod. To a point, the heavier and more rigid the tripod, the better.

The SkyGuider Pro can be set to track in the Northern or Southern hemispheres. It has 4 tracking speeds. 1X is straight sidereal tracking for astronomy. 1/2X is apparently for tracking sky and horizon together, though I have not tried this out, so I’m not sure how half speed helps either of those views. It can also track the sun or moon, as they move at slightly different speeds compared to sidereal.
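As a side note on why those separate solar and lunar rates exist: the sun and moon drift eastward against the stars, so they cross the sky at slightly different speeds than the sidereal rate. A rough back-of-the-envelope, using standard period figures rather than anything from iOptron’s documentation:

```python
# Approximate sky-motion rates for the three tracking targets.  Derived from
# round-number period figures; the mount's firmware surely uses more precise
# values.
SOLAR_DAY_S               = 86400.0   # one rotation relative to the sun
SIDEREAL_DAY_S            = 86164.1   # one rotation relative to the stars
MOON_SIDEREAL_PERIOD_DAYS = 27.32     # moon's orbit relative to the stars

sidereal = 360.0 / SIDEREAL_DAY_S * 3600          # arcseconds per second
solar    = 360.0 / SOLAR_DAY_S * 3600
# The moon drifts eastward against the stars, so it appears to move a little
# slower than the sidereal rate.
lunar = sidereal - (360.0 / (MOON_SIDEREAL_PERIOD_DAYS * SOLAR_DAY_S)) * 3600

print(f"sidereal rate: {sidereal:.2f} arcsec/s")  # ~15.04
print(f"solar rate   : {solar:.2f} arcsec/s")     # ~15.00
print(f"lunar rate   : {lunar:.2f} arcsec/s")     # ~14.49
```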

It has a built in polar alignment scope with a reticle to help with proper alignment. The reticle has details for northern or southern hemisphere use and there are several apps to determine exactly where in the reticle Polaris or Sigma Octantis needs to be placed, based on the time and date and your location on the planet.

Polaris and Sigma Octantis are not precisely on the rotational axis of the Earth, just close. The crosshair is the actual axis and, for the time and date in this example, Polaris would need to be placed at the position shown by the little green cross for the tracker to be aligned with the polar axis.

It has two more features that I have not yet needed, an HBX port to connect an external control panel and an ST-4 compatible port for an external guiding signal.

The external control panel gives a little more control over tracking speed and some other parameters. The same port can be connected to a PC via an RS232 serial adapter to provide similar features that way.

The ST-4 port accepts any of several guide scope/camera combos that use image processing to correct the tracking rate for VERY accurate tracking, which allows even longer exposures or longer focal lengths, where a tracking error would be more apparent. I am intrigued, but at this point in the hobby, I am shooting really wide fields and such additional guidance is not yet necessary.

So, for all these words, the SkyGuider Pro can be polar aligned, then the camera attached and pointed at a target and the tracker can kinda be forgotten; it just works.

This image was made from 100 stacked 30 second exposures. Admittedly, it’s not an exciting picture. That night I had hoped to get a shot of the unimaginatively named C/2017 K2 (PanSTARRS) comet. Its closest approach was to be the following night. However, there was also a super moon in the same region of sky and I could not find the comet, due mostly to the moon’s glare. Without much of anything else to specifically capture, I pointed the camera generally where the comet was expected to be and let it run as a test. The really bright moon is why the lower left of the frame is kinda foggy. I can’t find the comet in there anywhere, but if you zoom in on the stars, they are all nice and round; no streaking or egg-shaped stars with a star tracker and 30 second exposures.

There *may* be a little evidence of nebulae in the lower left corner, but it is so overwhelmed by the moonglow that I’m not at all sure that’s what it is. By stretching it extensively, you can see a little bit of the formless form that is a nebula. Maybe. I will definitely be trying for more when the sky is darker.

I did try using the lunar speed setting and tracked the moon perfectly. I did not, however, *focus* the moon perfectly, so the moon shots were not worth sharing.

In any case, though I have not utilized it as heavily as I should, I have been very pleased with the SkyGuider Pro tracker and I will be using it much more this summer, especially after a new thing arrives. 😉

Bahtinov Mask

Short version: I used my laser engraver to cut a custom Bahtinov mask for a particular camera lens I have.

Ripping pretty much directly from Wikipedia: a Bahtinov mask consists of three separate grids, positioned and angled such that the grids produce three angled diffraction spikes at the focal plane of the instrument for each bright image element.

As the instrument’s focus is changed, the central spike appears to move from one side of the star to the other. In reality, all three spikes move, but the central spike moves in the opposite direction to the two spikes forming the “X”. Optimal focus is achieved when the middle spike is centered between the other two spikes.

It didn’t take much searching to find a webpage where someone much more brilliant than me had created a Bahtinov mask generator. The site takes various parameters and outputs an SVG file that will import directly into LightBurn to run my laser.

Sidenote: The links on the Bahtinov mask generator page lead to some other versions of the mask and discussions about them and their development. It is an interesting read, though much of it was about collimating the optics on larger telescopes, so a bit off topic for me. Still, interesting stuff.

The target lens is a (cheap) Opteka 500mm reflex lens. I did not even know about astrophotography when I bought it; I was hoping to catch some wildlife around the house. There are mixed reviews about it and its stablemates, but it’s still the longest lens I currently have. It is an EF full frame lens, so the 1.6 crop factor makes it perform on my Canon Rebel T6 as if it were an 800mm.

The default mask parameters appear to be for a largish telescope, 8+ inches in diameter. The parameters for my lens were:

The “Outer Diameter” is actually the inner diameter of the ring at the front of the lens. The “Inner Diameter” is the outer diameter of the center mirror. Stem/Slit width is the ratio between the elements of the grid; 1:1 means the slits and stems are the same width. Checking “3rd order spectrum” increases the size of the stems and slits and makes the most sense with really large masks. I left mine unchecked. The (typo’d) “Bartinov Factor” is used in the math somewhere to determine the size, and thus the number, of stems and slits, or basically how fine the elements are. I chose 120 experimentally. It yielded stems and slits that were about as wide as my black acrylic is thick, a fine pattern that still seems strong. “inW” and “outW” are the margins between the inner and outer diameters and the elements of the Bahtinov pattern. I originally chose 1mm, and that’s probably ok, but I think 2mm would make a little stronger, more robust mask. Finally, “Rounding” determines whether the ends of the slits are cut square or rounded. I chose rounded. Click on “Draw Bahtinov Mask” and you get:

I downloaded the SVG file, opened it in Lightburn and then proceeded to experiment with cut speed and power.

I have a 10 watt laser, which is adequate for every task I have asked it to do thus far, though I do need to experiment a bit for best results, especially with cutting, as opposed to engraving. For this material, I eventually landed on 100% power and a cutting speed of 8mm per second, with 2 passes. About 70% of the cutouts could be removed with the least pressure and the rest didn’t take much more work. I think I may try again with 6-7 mm per second to see if I can make them all fall free.

Of course, the first one did not go perfectly.

When I started cutting these, I had my wooden surface under the laser. It has a grid of alignment markings on it to help align targets for engraving. When the laser would cut through the acrylic, it was marring that grid surface. I had the acrylic suspended on wooden blocks about 1-1/2″ above the grid, so I grabbed a piece of waste stock and was going to slip it under the acrylic, but I bumped a block and moved the acrylic. I decided to let it finish so I could still test removing the cut parts, but it was not a pretty piece.

Note the off center cut, with no margin at the top and the overlapping cuts on the right side. I put the proper honeycomb aluminum cutting surface under it for the final cut.

As alluded to in the parameters section above, I think I would prefer a heavier margin between the edges and the grid. If I have cause to recut this one, I will make that adjustment. Also, I presume this is an artifact of the laser’s kerf, but note that the stem/slit width ratio is not 1:1, as the parameters would suggest. A future mask may need that adjusted as well.

In any case, here is the completed mask, cleaned up and in place. All I need now is a night to test it. Interestingly, it fits under the lens cap, so I have a place to store it when it is not in use.

Astrophotography

No, not that one.

We’ve all seen stunning images of deep space objects. It turns out that, at pretty basic levels, those pictures aren’t particularly hard to capture. It does take more than pointing the camera and clicking the shutter, though.

I don’t remember for sure which it was, but YouTube recommended one of Nico Carver’s videos and it captured my attention. It may have been this one.

And before I go on, let me brag on Nico Carver a bit. His videos are chock full of how-to information, not just “look what I did” like some of the other channels I found. His is not the only one, but it is largely my go-to for learning how to do this stuff.

As the above video suggests, if you have a DSLR camera, you may have everything needed to capture possibly stunning pictures of deep sky astronomical objects.

Astrophotography is done in two main steps: capturing the image, then postprocessing it. If you don’t capture good data, no amount of postprocessing can bring out the details you hope to see. If you capture good data, you can always process it over and over until you are happy with it.

Capturing the image is really done by capturing a LOT of images, then using postprocessing techniques and software to “stack” them. This accomplishes two important things. First, deep sky objects tend to be dim, often too dim to see with the naked eye. Taking many short exposures and then “stacking” them gives you a composite exposure that is MUCH longer. Taking 300 exposures is not uncommon. If they are 20 second exposures, the final stacked image could represent as much as 6000 seconds, or 100 minutes, of exposure. (Of course, it will take a little longer than 100 minutes to take that many 20 second exposures. More on that later.) Second, averaging that many frames beats down the random noise, so faint details stand out much more clearly than they would in any single exposure.
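To put a rough number on that second point, here is a tiny simulation of the square-root-of-N effect; the signal and noise values are arbitrary stand-ins for a faint pixel, not anything measured.

```python
# Stacking N frames of the same scene: the signal adds coherently while the
# random noise averages out, so the signal-to-noise ratio improves by
# roughly sqrt(N).  All values here are arbitrary illustration numbers.
import numpy as np

rng = np.random.default_rng(42)
true_signal = 5.0      # brightness of a faint nebula pixel (arbitrary units)
noise_sigma = 20.0     # per-frame random noise (arbitrary units)
n_frames    = 300

frames = true_signal + rng.normal(0.0, noise_sigma, size=n_frames)

print(f"single-frame SNR : {true_signal / noise_sigma:.2f}")
print(f"stacked SNR      : {true_signal / (noise_sigma / np.sqrt(n_frames)):.2f}")
print(f"mean of the stack: {frames.mean():.2f}  (true value is {true_signal})")
```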

The absolute minimum to capture data is a DSLR camera with a decent lens, a sturdy tripod, a remote shutter release and a reasonably dark sky. Just a couple of fairly inexpensive accessories will raise the ease and quality of your captures and help ensure success.

An intervalometer is basically a remote shutter release with a programmable timer built in. You can set it to hit the shutter button 300 times, once every 21 seconds, for example. This not only automates the picture taking, but also keeps you from shaking the camera by pressing the shutter button by hand. Most intervalometers as of this writing range from $20-50 on Amazon.

Focusing on stars is more difficult than it seems, and sharp focus is critical to successful astrophotography. One of the simplest focusing aids is a Bahtinov mask, a cleverly arranged grid of lines cut into what is basically a lens cap. It sets up diffraction spikes, and when those spikes are in the proper orientation, sharp focus is assured. Bahtinov masks are widely priced, $15-$50 depending on size. My favorite is the right size to kind of clip into the internal threads of a skylight filter, which can then be screwed onto the end of my lens when needed.

For my first night of astrophotography, I was armed with my Canon Rebel T6 camera, the 75-300mm zoom lens that came with it, a Koolehaoda tripod I had originally purchased for something else and an intervalometer. The Amazon link to the one I purchased goes to a simple remote shutter release, but the link above is a physically identical device with a different brand name 🙂

I also had a camp chair, a flashlight and easy access to beverages.

I did not have a Bahtinov mask at that time, but I did manage to get a reasonably good focus because Jupiter was very easy to find in my west-southwest sky and bright enough to focus on. However, my desired subject was the Andromeda galaxy, to the northwest.

It took quite a while for me to find Andromeda. I used a couple of apps (Sky Map and Star Walk 2), but my own fairly myopic eyesight, even corrected, had trouble seeing Andromeda, which *can* be seen with the naked eye in a dark enough sky. I started taking 10 second exposures to see if the camera could see it. After about 4-5 tries, it finally showed up. Also, once I finally knew exactly where to look, I could just make out one star that was fuzzy, especially with binoculars.

I was able to carefully zoom and recenter, and zoom and recenter, until I finally had as big an image of it as I was gonna get.


That little fuzzy blob in the middle is Andromeda. I did not know or notice at the time that there is another farther galaxy in the same shot. I am pretty sure you can’t see it here. The overall picture seems a little underwhelming at this point.

One of the things to experiment with, especially that first night, was how long of an exposure I could get. With no tracking mount at the time, I had to find the longest exposure, to collect the most light, that still avoided the stars smearing from the earth’s rotation. You can determine this experimentally by starting with a guess, maybe 3 seconds, taking an exposure and zooming in with the camera viewer to see if the stars are round. If they are, go higher, maybe 6 seconds, and look again. When the stars start becoming egg shaped, back off the exposure time a little and check again. Keep at it until you get the longest exposure you can without distorting the stars.

There are also ways to calculate the exposure limit, and several web calculators can determine it more scientifically. However, untracked exposures will always be pretty quick, so I’m not sure it’s worth the trouble when there are only a small number of options in the under-5-second range and it’s really easy to determine experimentally.
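For the curious, one common rule of thumb that those calculators start from is the “500 rule”. This is not what I used (I just tested it at the camera), but it gives a feel for the numbers:

```python
# The "500 rule" of thumb: maximum untracked exposure, in seconds, is
# roughly 500 divided by the effective (crop-adjusted) focal length.
# It is only a starting point; fancier calculators also account for
# pixel size and the target's declination.
def max_untracked_exposure_s(focal_length_mm: float, crop_factor: float = 1.6) -> float:
    return 500.0 / (focal_length_mm * crop_factor)

for fl in (75, 135, 300):
    print(f"{fl:>3} mm lens on a 1.6x crop body -> ~{max_untracked_exposure_s(fl):.1f} s")
```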

For me, with my particular collection of gear that night, it was 3 seconds. I decided to take 100 exposures at 3 seconds each, resulting in 5 minutes of total exposure. I didn’t realize it at the time, but that would be enough to get an ok image, but not nearly enough for the detail I had hoped for.

An important part of the capture process is to capture additional specialized images to help the processing software to do a better job at stacking all your exposures and reducing noise in the final image.

For terminology purposes, exposures are often called “frames” in astrophotography. I have not found a satisfactory explanation, but I presume it is based on frames of photographic film predating digital capture. The research continues. The exposures of your target are most often called “light frames”, meaning they collect the light from your target object; collectively they are called “lights” or “subs”, for sub-exposures.

The calibration process has you capturing a number of frames under certain conditions. “Dark” frames, or darks, are exposures at the same camera settings (ISO, exposure time, etc.), and ideally at the same general time, as your light frames, but of complete darkness. Opinions vary, but most sources seem to recommend 30-50, or as many as 100, darks. This is super easy to accomplish. When you have finished capturing your lights, use a lens cap, and perhaps an additional opaque cover over that, to ensure that no light gets into the camera, then set your gear up to take another 30-50 shots with the same camera settings that were used for the lights. These frames capture what the noise from the camera sensor looks like so that the stacking software can account for it. If you use Photoshop or GIMP to stretch the contrast of these darks, you will find that they are not completely dark. They have little spikes of non-dark which represent the electrical noise introduced by the current conditions in the camera.

The next calibration frames are called “flat” frames, or flats: the camera is set to the same ISO as your lights, pointed at a flat, white, unfocused subject, with the shutter speed adjusted for a proper exposure. The stacking software uses these frames to account for anomalies like dust or scratches on the lens, or vignetting, a tendency for some lenses to not illuminate the sensor evenly, leaving the corners darker than the center. Accomplishing these is pretty easy. One way is to point the camera straight up, put a white T-shirt stretched taut enough not to be wrinkled over the end of the lens, then put an even white light over the T-shirt, such as an iPad or a white LED tracing pad. Adjust the exposure to a reasonable setting, according to the exposure meter on the camera; your camera probably has a histogram feature to help set exposure, and using it is probably the most accurate way. Take another 30-50, or as many as 100, flat frames.

Another set of calibration frames is called “bias” frames. Similar to darks, these are captured with no light coming into the camera, but with the camera set to its fastest shutter speed. This shows the software another type of noise: the base noise pattern from the sensor without the averaging that happens in a longer exposure. Take another 30-50, or as many as 100, bias frames.

Postprocessing is a two step process. The first can be somewhat automated, using software like Deep Sky Stacker. It is certainly not completely automated, but DSS does the heavy lifting. It takes your lights, darks, flats and bias frames and analyzes all the details. It will align the stars in your lights so that they all stack correctly, analyze the calibration files to help eliminate noise and other anomalies and finally stack all your exposures into one low noise output image with a composite exposure time of all the (valid) light frames.
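For a feel of what the software is doing with all those calibration frames, here is a toy version of the arithmetic, with the star alignment step (a big part of the real job) left out. It assumes the lights, darks, flats and bias frames are already loaded as float arrays; real stackers like DSS use much smarter combination and rejection methods than this.

```python
# Toy calibration and stack: build master calibration frames, clean each
# light with them, then average everything into one image.
import numpy as np

def calibrate_and_stack(lights, darks, flats, biases):
    master_bias = np.median(biases, axis=0)
    master_dark = np.median(darks, axis=0)              # bias + thermal signature
    master_flat = np.median(flats, axis=0) - master_bias
    master_flat /= master_flat.mean()                   # normalize so dividing keeps brightness

    # Subtract the sensor signature, divide out vignetting and dust shadows.
    calibrated = [(light - master_dark) / master_flat for light in lights]

    # A plain mean; real software aligns the stars first and rejects outliers
    # (satellites, planes, cosmic rays) while combining.
    return np.mean(calibrated, axis=0)
```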

The next step is to crop the target and “stretch” the contrast with a photo editor like Photoshop or GIMP. This is not a particularly difficult step, but it is kinda fiddly. I will defer the reader to Nico Carver’s videos for more and better information about that.
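The “stretch” itself is just a strongly non-linear brightness curve: most of the interesting data sits in a narrow band near black, and the curve pulls it up without blowing out the stars. In GIMP or Photoshop you do this interactively with curves or levels, but the basic idea fits in a few lines (an arcsinh curve here, as one common choice):

```python
# A simple non-linear stretch of a stacked (linear) image.  The strength
# value is arbitrary; in practice you adjust it, or apply repeated gentler
# stretches, by eye.
import numpy as np

def asinh_stretch(img: np.ndarray, strength: float = 500.0) -> np.ndarray:
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min())   # normalize to 0..1
    return np.arcsinh(strength * img) / np.arcsinh(strength)
```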

While the focus was good, the total exposure was pretty short, so cropping really close made for a disappointing image; this larger field is more pleasing.

The next time, I got 300 exposures of 3 seconds each, resulting in 15 minutes of total exposure. I had an iOptron SkyGuider Pro mount by then, but I was not yet super familiar with it and did not lengthen the individual exposure times, though I really could have.

The postprocessed results were about the same. I think part of the issue was that I had not nailed the focus as well. However, there was more light to work with, so I got a closer crop.

For boring reasons, I did not get to do any more captures before this summer, nearly an entire year.