Decapping Integrated Circuits

TL;DR: I am hoping to fix a broken USB drive….

Purely out of scientific curiosity, I have been interested in chip decapping for a while. The term is slightly archaic, because only the larger integrated circuits tend to be made with actual removable caps these days.

Instead, ICs are FAR more commonly cast in epoxy, often glass-reinforced epoxy.

There are a couple of destructive ways to decap epoxy chips. Sometimes you can literally just break the package and expose all or part of the silicon die with all the interesting bits therein, but the results are really random. You can also use pretty extreme heat to soften the epoxy enough to peel it off the die.

The most effective way, and one which leaves at least a chance that the chip may still operate, is to use heated 70% nitric acid to *dissolve* the epoxy. This is really effective and pretty hazardous, so it requires good preparation and a not insignificant list of supplies, not the least of which is the 70% nitric acid itself. As I write this, a couple of online sources sell it to individuals for about $50 for a 100ml bottle (a little less than 1/2 cup) plus a $40 HazMat fee from the shipper. You need borosilicate glass (Pyrex is a brand name of borosilicate glass) because nitric acid will react with regular soda-lime glass. There are a lot of YouTube videos on the subject. I may tackle it for scientific purposes in the future, whenever I feel like dropping a hundred bucks on a half cup of evil juice.

What brings this whole subject to light is that a) I have had a physically damaged USB drive for many years and b) I have seen a few videos lately where talented individuals have used a fiber laser to vaporize the epoxy off of an SD card to expose the factory test connections in order to recover data on that card.

Now, I don’t have a spiffy fiber laser, but I do have an xTool D1 that I have been using for a variety of cutting and engraving projects. Surprisingly, as of this writing, the laser does not have its own blog category. Must fix.

This USB drive was in a laptop that was dropped. Instantly, it stopped working correctly. It would kinda work, showing a drive letter in Windows, but every attempt to interact with it would produce either an error or a bunch of nothing. For example, drive E: is there, with zero bytes in use, zero bytes available, etc. The card does mount, but there appears to be a disconnect between the controller and the actual storage within the device.

So, disassembly revealed that under all that plastic was a single tiny module, completely potted in epoxy, with a visible crack across its width.

On the other side of this module is the 4 pin USB connector on what is pretty obviously a printed circuit board. I suspect the entire device is a small board with naked chip dies attached and the whole thing potted in epoxy. All the plastic physical packaging is just to position and protect this little module. I should mention that this is a PNY brand device and it’s been long enough that I don’t remember the capacity, but it’s likely a 16MB or maybe 32MB. It’s not big by today’s standards.

This device was dropped quite some time ago, maybe 2012 or so. I think if I ever gain access, I’ll be able to realistically timestamp it, but I’m jumping WAY ahead. During the intervening years, I have occasionally tried to access the data. Mostly, I have tried to flex the crack enough to make contact with whatever conductors were broken by it, which is generally my assumption as to what is wrong. Maybe the PC board cracked in the right place to deny power to the storage chips, but not the controller. I don’t recall the specific utilities I used, but at least two of them were able to verify that the controller was responding, but that it could not see the storage.

So, back to the xTool D1. My particular device is a 10W blue diode laser that will cut up to about 10mm thick plywood, but it’s much happier with thinner stuff than that. I have cut everything from copier paper to acrylic to EVA foam and, with a little experimentation with the speed and power settings, it has performed wonderfully.

I didn’t want to just blindly charge into attempting to cut away the epoxy in my lobotomized USB drive, so I found a known dead circuit board and started eroding the epoxy on a couple of its onboard chips. I happened to pick the dead wireless link device that I had replaced earlier this year.

I set the xTool to ‘fill’ an 11mm square and positioned the laser over the left-most chip shown in the picture above. I played with various speed and power settings until I had an idea how it would chew the epoxy away.

I am not sure if it is specifically ash or if the laser burns away everything except the glass reinforcement, but a pass or two of the laser leaves a gray/white layer on which the laser seems to no longer have much effect. Maybe if I had more than 10 watts to work with, it would be simpler.

After chipping and scraping away some layers of epoxy off of that chip, I decided that I had nothing left to lose by starting with those settings and seeing how it did with the USB drive module.

For the first pass, of COURSE I didn’t have the chip lined up perfectly, so it left a millimeter or so of chip untouched on the right-hand side. I readjusted and ran the fill again. I noticed that the material was less “reactive”; it tended to reflect blue light rather than glow with a bright white burning color. I removed the module and used the corner of a single edge razor blade to chip away the gray/white ash layer.

If I was very careful, I could remove fairly large chips of this ash material. After a couple of passes, it looked like this:

Note a few specifics… I think the rectangle in the upper left may be a capacitor being exposed. I hope the laser does not damage it.

Next, the edges of the crack seem to erode quicker, leaving a trough that follows the crack. This may be a localized thermal effect.

Finally, you can definitely detect the 0.05mm spacing I chose to fill in the pattern. My experiments with the old wireless system chips showed that the default 0.1mm spacing was pretty crude at this level, but it *was* twice as fast to run. 🙂

I ran the pattern a few more times and exposed the mother lode:

A bundle of bond wires, apparently all broken by the crack in the module.

It is good news because I am now confident I have found the problem. It is bad news because instead of maybe one or two broken power traces on the PC board, I have (I think) 29 broken wires that fit roughly into a 3mm space.

I am *planning* to attempt to use fine solder paste and rework gun heat to solder bridge between the broken ends, somehow *not* also bridging the tiny 0.01mm gap between wires. I may make an attempt first to see if it will work at the current depth I have burned away with the laser. If not, I will need to tweak some laser values and try it again….

Starlink Ethernet Adapter

It’s a $20 part that, arguably, we shouldn’t have to add to our system, but that’s the way they built it, so….

My ethernet adapter arrived today. Mechanically, it’s very simple. A cable and box that plugs in between the rectangular dishy and the “router”/power supply.

To properly use it, one must disable the Starlink router (which is kinda bogus) by going into the Starlink app on your phone and setting the wifi mode to “bypass”. This mode setting, once committed, is reversible only by a manual factory reset. Sigh. Great design, guys.

In any case, it pretty much works.

I got a new IP address. It is pretty much just as useless as the previous one, but at least it’s in a different subnet.

The new WAN IP lands in a range that, per ARIN, “is used as Shared Address Space.” All indications are that probably all systems with a square dishy will get the same semi-bogus sort of address. Digging deeper, this range is set aside for what is called Carrier Grade Network Address Translation, or CGNAT. It makes sense that Starlink would use such a NAT scheme. It does cause some slight problems with inbound stuff, like port forwarding.
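The CGNAT reservation is easy to check programmatically. Here is a sketch using Python’s standard `ipaddress` module; 100.64.0.0/10 is the shared address space block RFC 6598 reserves for carrier-grade NAT, and the sample addresses are made up for illustration:

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 as "Shared Address Space" for
# carrier-grade NAT; an address in this block is not publicly routable.
CGNAT_BLOCK = ipaddress.ip_network("100.64.0.0/10")

def is_cgnat(addr: str) -> bool:
    """True if the address falls inside the CGNAT shared space."""
    return ipaddress.ip_address(addr) in CGNAT_BLOCK

# A WAN address in this block explains the port forwarding trouble:
# inbound connections die at the carrier's NAT, not at your router.
print(is_cgnat("100.72.13.5"))  # True (inside 100.64.0.0/10)
print(is_cgnat("8.8.8.8"))      # False (ordinary public address)
```

If your WAN address tests True here, no amount of router configuration will make inbound port forwarding work; the NAT is on the carrier’s side.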

It replaced a WiFi access point, which was wired to my LAN and was the only WiFi device connected to Starlink, depicted below as the FlyingHippy SSID.

This functioned perfectly, but as my whole LAN is arguably overcomplicated already, any simplification has to help. Also, the WiFi specs published for the BrosTrend device are full speed on the wireless side: 867Mbps on 5GHz WiFi or 300Mbps on 2.4GHz WiFi. For my use case, where Starlink is the only WiFi device connected to it, those wireless specifications mean little because the wired ethernet is 100Mbps.

Since Starlink can exceed 100Mbps (sadly, only occasionally in practice), it was considered a potential bottleneck.

Is the Write Speed the Right Speed?

I have been aware that SD cards, and particularly MicroSD cards, can have read and write speed limitations; however, only recently have I had two separate issues that turned out to be due to slow write speeds.

Though I didn’t realize it at the time, write speed was likely the issue that caused some videos taken by my little DJI Mavic Mini to fail. I started it recording and flew around for a while. Later, the video was only about a minute long; I had definitely intended to record more than that. I now think the slow SD card write speed caused the high resolution video to simply overwhelm the card and the camera just shut off. I presume there was no notice, but I will look for some kind of on screen warning in the future.

I also had troubles with a recent astrophotography capture. I was getting 100 subs of 30 seconds each. The length of the capture doesn’t affect the size of the file, but when you are going to capture for nearly an hour, you don’t want to wait any longer between shots than necessary. Most DSLR cameras will capture to an internal buffer, then write that image to the memory card between pictures. Generally, the write time of the camera is hidden from the user because we tend to take a picture or two, then put the camera down while we wait for something else to take a picture of to come around. However, with astrophotography, you are taking dozens or even hundreds of long exposures in a row.

In the example above, I had the camera set to pause for two seconds between exposures. That pause time accounts for nearly four minutes in the whole capture process. I noticed that a little while into the capture, the busy light was staying lit past the time for the next picture to be taken. Because the intervalometer just sends a 1 second signal to the camera and the camera was using its internal shutter timer, when this busy event happened, the camera would miss a shutter event. That would allow it to catch up on the write process, then sit idle while the 31 or so second wait on the intervalometer timed out. It would then capture 5 or 6 more images before the write busy would add up enough for it to miss another shutter event. So, my 100 captures would have turned out to be 90 or so without intervention.

I changed the delay between shots on the intervalometer from two seconds to 5. This helped it get to 10 or 12 shots before the camera was busy and missed a shutter event. After I set it to 8 seconds for the remaining 40-50 shots, the busy light did not cause any more missed shots.

Had the 8 second delay been in place for the entire 100 shots, it would have added 14 minutes to the entire process. It’s not like that is a huge part of one’s life, but after you capture 100 lights, then you need to capture 30-50 darks at the same shutter speed and 30-50 flats. The flats will be at a shorter shutter speed, but that actually makes the write speed problem worse.
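The time cost of the longer delay is simple arithmetic. A quick sketch using the shot counts from above (the session numbers are from this post; the function itself is just multiplication):

```python
def delay_overhead_min(shots: int, delay_s: float) -> float:
    """Minutes spent idle between shots for a capture session."""
    return shots * delay_s / 60.0

# Pure delay time for 100 shots:
print(round(delay_overhead_min(100, 2), 1))  # 2s delay: 3.3 min ("nearly four minutes")
print(round(delay_overhead_min(100, 8), 1))  # 8s delay: 13.3 min
# Going from 2 to 8 seconds costs about 10 extra minutes, which is
# still cheaper than losing ~10 frames to missed shutter events.
```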

I found, not surprisingly, that a) the read and write speed on memory cards is rarely specified; b) when it is, write speed has a bigger effect on price than capacity: 64GB cards with 250MB/S write speed cost more than 128GB cards with 130MB/S write speed; and c) anything slower than about 100MB/S will probably not show the spec at all, and those cards are pretty much always inexpensive.

To address both problems, I ordered four 64GB cards that specify 250MB/S read and 130MB/S write speed from B&H Photo.

By the time they arrived, I had found some somewhat questionable data indicating that the particular card I had used in both the DJI Mavic Mini and the EOS Rebel T6i probably has a write speed more along the lines of 30MB/S. I found a simple disk benchmark program and tested the new and old cards.
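If you’d rather not hunt down a benchmark utility, a rough sequential write test is only a few lines. This is a sketch, not the program I used; the block and file sizes are arbitrary choices, and the path should point at a file on the card under test:

```python
import os
import time

def seq_write_mb_s(path: str, total_mb: int = 64, block_kb: int = 1024) -> float:
    """Stream total_mb of data to path in block_kb chunks, fsync,
    and report MB/s. Small totals tend to understate sustained speed."""
    block = os.urandom(block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the card
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# e.g. seq_write_mb_s("E:/speedtest.bin") with the card mounted as E:
```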

The old card, as expected, was pretty slow:

The new card was much faster, exceeding the write spec, assuming they specify the best spec rather than the average:

No SD card does well with random reads and writes.

For perspective, here is the report on the 250G SSD in my laptop:

… and my Toshiba 2TB USB drive that I use for various archiving and backup tasks:

In practical terms, I set the camera to its fastest shutter time of 1/4000 second and set it to continuous shooting. Press and hold the shutter button, and with the new card, it takes 7 pictures at the max speed of 5 frames per second, then it slows down to about 1 per second. Release the shutter button and it takes about 5 seconds for the busy light to go out. With the old card, you still get the 7 shots buffered in the camera, but the catch-up is more like 1 shot every 2 seconds, and then it takes nearly 10 seconds for the busy light to go out. The new card should definitely be an improvement.

Star Tracking

Although it was not strictly necessary in order to use my star tracker, I found decoding its terminology handy for finally helping me understand what ascension and declination are. As a long time, low intensity astronomy geek, I am embarrassed to admit that I never really pursued understanding those terms. As it turns out, they are not particularly complicated. Viewing all of space as the inside surface of an imaginary sphere, right ascension is essentially longitude and declination is latitude. Right ascension refers to the point on the celestial equator that rises with a celestial object as seen from Earth’s equator, where the celestial equator intersects the horizon at a right angle. The origin of the numbers is a point defined by the location of the sun on the March equinox; that line is currently in the constellation of Pisces, but due to Earth’s axial precession, it moves about 1 degree west over a 72 year span. Declination is an extension of the Earth’s equator onto the celestial sphere; it follows the Earth’s tilt and also slowly moves in response to axial precession.

In order to take longer individual exposures, I bought a tracking mount. A tracking mount, most commonly called a star tracker, is a telescope mount with a motor drive in it. If you set up the unit such that its right ascension axis is parallel to Earth’s rotation axis, the motor drive can rotate the mount in sync with the apparent motion of the sky, allowing you to capture significantly longer individual exposures with no (or very, very little) distortion of the stars. This axis is generally abbreviated RA for right ascension and pronounced as the letters R A. It also usually has a clutch to temporarily disconnect it from the motor drive for coarse positioning.

Connected to the driven right ascension axis is the physical mounting for your camera and/or telescope. This is called the declination mount and is usually abbreviated DEC, pronounced “deck”. This mount also usually has a clutch or other release to facilitate coarse positioning of the telescope.

I chose the iOptron SkyGuider Pro. I happened to purchase mine from High Point Scientific and at the time of purchase, it was $488 USD. I also got the companion ball mount for my camera. This is not the absolute cheapest tracker I could find, but it’s definitely among the least expensive options available. Its size and flexibility are well suited to my interests.

The SkyGuider Pro can carry up to 5 kg (11 pounds) of load balanced, but up to 1.5 kg (3.3 pounds) unbalanced. This is perfect for mounting a DSLR with a pretty normal lens without the complications of adding the declination mounting bracket with the counterweight attached.

It is about as simple as one could hope to set up. I have thus far used it on a decent photography tripod, though I imagine I will upgrade to a more astronomy minded tripod. However, I have not detected any issues that could be blamed on the tripod. To a point, the heavier and more rigid the tripod, the better.

The SkyGuider Pro can be set to track in the Northern or Southern hemispheres. It has 4 tracking speeds. 1X is straight sidereal tracking for astronomy. 1/2X is apparently for tracking sky and horizon together, though I have not tried this out, so I’m not sure how half speed helps either of those views. It can also track the sun or moon, as they move at slightly different speeds compared to sidereal.
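For reference, the rates those settings have to match are easy to derive. The sky appears to rotate once per sidereal day (about 86164 seconds) rather than once per 86400 second solar day; a small sketch of the arithmetic (the constants are standard astronomical values, not from the tracker’s documentation):

```python
# The sky rotates 360 degrees per sidereal day, which is shorter than
# the 86400 second solar day because of Earth's orbital motion.
SIDEREAL_DAY_S = 86164.1
ARCSEC_PER_TURN = 360 * 3600

sidereal_rate = ARCSEC_PER_TURN / SIDEREAL_DAY_S  # arcsec per second
solar_rate = ARCSEC_PER_TURN / 86400.0

print(round(sidereal_rate, 3))  # 15.041 arcsec/s - the 1X setting
print(round(solar_rate, 3))     # 15.0 arcsec/s
# The sun and moon drift eastward against the stars, so their apparent
# rates differ slightly from sidereal, hence the separate settings.
```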

It has a built in polar alignment scope with a reticle to help with proper alignment. The reticle has details for northern or southern hemisphere use and there are several apps to determine exactly where in the reticle Polaris or Sigma Octantis needs to be placed, based on the time and date and your location on the planet.

Polaris and Sigma Octantis are not precisely on the rotational axis of the Earth, just close. The crosshair is the actual axis and, for the time and date in this example, Polaris would need to be placed at the position shown by the little green cross for the tracker to be aligned with the polar axis.

It has two more features that I have not yet needed, an HBX port to connect an external control panel and an ST-4 compatible port for an external guiding signal.

The external control panel gives a little more control over tracking speed and some other parameters. The same port can be connected to a PC via an RS232 serial adapter to provide similar features that way.

The ST-4 port allows any of several guide scope/camera combos that use internal image processing to correct the tracking rate for VERY accurate tracking, which allows even longer exposures or longer focal lengths, where a tracking error would be more apparent. I am intrigued, but at this point in the hobby, I am shooting really wide fields and such additional guidance is not yet necessary.

So, for all these words, the SkyGuider Pro can be polar aligned, then the camera attached and pointed at a target and the tracker can kinda be forgotten; it just works.

This image was made from 100 stacked 30 second exposures. Admittedly, it’s not an exciting picture. That night I had hoped to get a shot of the unimaginatively named comet C/2017 K2 (PanSTARRS). Its closest approach was to be the following night. However, there was also a super moon in the same region of sky and I could not find the comet, due mostly to the moon’s glare. Without much of anything else to specifically capture, I pointed the camera generally where the comet was expected to be and let it run as a test. The really bright moon is why the lower left of the frame is kinda foggy. I can’t find the comet in there anywhere, but if you zoom in on the stars, they are all nice and round; no streaking or egg shaped stars with a star tracker and 30 second exposures.

There *may* be a little evidence of nebulae in the lower left corner, but it is so overwhelmed by the moonglow that I’m not at all sure that’s what it is. By stretching it extensively, you can see a little bit of the formless form that is a nebula. Maybe. I will definitely be trying for more when the sky is darker.

I did try using the lunar speed setting and tracked the moon perfectly. I did not, however, *focus* the moon perfectly, so the moon shots were not worth sharing.

In any case, though I have not utilized it as heavily as I should, I have been very pleased with the SkyGuider Pro tracker and I will be using it much more this summer, especially after a new thing arrives. 😉

Bahtinov Mask

Short version: I used my laser engraver to cut a custom Bahtinov mask for a particular camera lens I have.

Ripping pretty much directly from Wikipedia: a Bahtinov mask consists of three separate grids, positioned and angled such that they produce three angled diffraction spikes at the focal plane of the instrument for each bright image element.

As the instrument’s focus is changed, the central spike appears to move from one side of the star to the other. In reality, all three spikes move, but the central spike moves in the opposite direction to the two spikes forming the “X”. Optimal focus is achieved when the middle spike is centered between the other two spikes.

It didn’t take much searching to find a webpage where someone much more brilliant than me had created a Bahtinov mask generator. The site takes various parameters and outputs an SVG file that imports directly into LightBurn to run my laser.

Sidenote: The links on the Bahtinov mask generator page lead to some other versions of the mask and discussions about them and their development. It is an interesting read, though much of it is about collimating the optics on larger telescopes, so a bit off topic for me. Still, interesting stuff.

The target lens is a (cheap) Opteka 500mm reflex lens. I did not even know about astrophotography when I bought it; I was hoping to catch some wildlife around the house. There are mixed reviews about it and its stablemates, but it’s still the longest lens I currently have. It is an EF full frame lens, so the 1.6 crop factor makes it perform on my Canon Rebel T6 as if it were an 800mm.

The default mask parameters appear to be for a largish telescope, 8+ inches in diameter. The parameters for my lens were:

The “Outer Diameter” is actually the inner diameter of the ring at the front of the lens. The “Inner Diameter” is the outer diameter of the center mirror. Stem/Slit width is the ratio between the elements of the grid; 1:1 means the slits and stems are the same width. Checking the “3rd order spectrum” box increases the size of the stems and slits and makes the most sense with really large masks; I left mine unchecked. The (typo’d) “Bartinov Factor” is used in the math somewhere to determine the size, and thus number, of stems and slits, or basically how fine the elements are. I chose 120 experimentally. It yielded stems and slits that were about as wide as my black acrylic is thick, a fine pattern that still seems strong. “inW” and “outW” are the margins between the inner and outer diameters and the elements of the Bahtinov pattern. I originally chose 1mm, and that’s probably ok, but I think 2mm would make a little stronger, more robust mask. Finally, “Rounding” determines whether the ends of the slits are cut square or rounded. I chose rounded. Click on “Draw Bahtinov Mask” and you get:

I downloaded the SVG file, opened it in LightBurn and then proceeded to experiment with cut speed and power.

I have a 10 watt laser, which is adequate for every task I have asked it to do thus far, though I do need to experiment a bit for best results, especially with cutting, as opposed to engraving. For this material, I eventually landed on 100% power and a cutting speed of 8mm per second, with 2 passes. About 70% of the cutouts could be removed with the least pressure and the rest didn’t take much more work. I think I may try again with 6-7 mm per second to see if I can make them all fall free.

Of course, the first one did not go perfectly.

When I started cutting these, I had my wooden surface under the laser. It has a grid of alignment markings on it to help align targets for engraving. When the laser cut through the acrylic, it was marring that grid surface. I had the acrylic suspended on wooden blocks about 1-1/2″ above the grid, so I grabbed a piece of waste stock and was going to slip it under the acrylic, but I bumped a block and moved the acrylic. I decided to let it finish so I could still test removing the cut parts, but it was not a pretty piece.

Note the off center cut, with no margin at the top and the overlapping cuts on the right side. I put the proper honeycomb aluminum cutting surface under it for the final cut.

As alluded to in the parameters section above, I think I would prefer a heavier margin between the edges and the grid. If I have cause to recut this one, I will make that adjustment. Also, I presume this is an artifact of the laser’s kerf, but note that the stem/slit width ratio is not 1:1, as the parameters would suggest. A future mask may need that adjusted as well.

In any case, here is the completed mask, cleaned up and in place. All I need now is a night to test it. Interestingly, it fits under the lens cap, so I have a place to store it when it is not in use.


No, not that one.

We’ve all seen stunning images of deep space objects. It turns out that, at pretty basic levels, those pictures aren’t particularly hard to capture. It does take more than pointing the camera and clicking the shutter, though.

I don’t remember for sure which it was, but YouTube recommended one of Nico Carver’s videos and it captured my attention. It may have been this one.

And before I go on, let me brag on Nico Carver a bit. His videos are chock full of how-to information, not just “look what I did” like some of the other channels I found. His is not the only one, but it is largely my go-to for learning how to do this stuff.

As the above video suggests, if you have a DSLR camera, you may have everything needed to capture possibly stunning pictures of deep sky astronomical objects.

Astrophotography is done in two main steps: capturing the image, then postprocessing it. If you don’t capture good data, no amount of postprocessing can bring out the details you hope to see. If you capture good data, you can always process it over and over until you are happy with it.

Capturing the image is really done by capturing a LOT of images then using post processing techniques and software to “stack” them. This accomplishes two important things. First, deep sky objects tend to be dim, often too dim to see with the naked eye. Taking many short exposures then “stacking” them gives you a composite exposure that is MUCH longer. Taking 300 exposures is not uncommon. If they are 20 second exposures, the final stacked image could represent as much as 6000 seconds or 100 minutes of exposure. Of course, it will take a little longer than 100 minutes to take that many 20 second exposures. More on that later.
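The arithmetic above is worth keeping as a helper, since the gap between shots also adds up. A sketch using the exposure counts from the paragraph above (the 2 second gap is an assumed example, not a recommendation):

```python
def session_times(subs: int, exposure_s: float, gap_s: float = 2.0):
    """Return (integration_minutes, wall_clock_minutes) for a stack of
    `subs` exposures of `exposure_s` seconds with `gap_s` between shots."""
    integration = subs * exposure_s / 60.0
    wall_clock = subs * (exposure_s + gap_s) / 60.0
    return integration, wall_clock

# 300 exposures of 20 seconds each:
integration, wall = session_times(300, 20)
print(integration, wall)  # 100.0 minutes of light, 110.0 minutes at the tripod
```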

The absolute minimum to capture data is a DSLR camera with a decent lens, a sturdy tripod, a remote shutter release and a reasonably dark sky. Just a couple of fairly inexpensive accessories will raise the ease and quality of your captures and help ensure success.

An intervalometer is basically a remote shutter release with a programmable timer built in. You can set it to hit the shutter button 300 times every 21 seconds, for example. This not only automates the picture taking but also keeps you from shaking the camera to hit the shutter button. Most intervalometers as of this writing range from $20-50 on Amazon.

Focusing on stars is more difficult than it seems, and sharp focus is critical to successful astrophotography. One of the simplest focusing aids is a Bahtinov mask: a cleverly arranged grid of lines cut into what is basically a lens cap. It sets up diffraction spikes, and when those spikes are in the proper orientation, sharp focus is assured. Bahtinov masks are widely priced, $15-$50 depending on size. My favorite is the right size to kind of clip into the internal threads of a skylight filter, which can then be screwed onto the end of my lens when needed.

For my first night of astrophotography, I was armed with my Canon Rebel T6 camera, a 75-300mm zoom lens that came with it, a Koolehaoda tripod I had originally purchased for something else and an intervalometer. The Amazon link to the one I purchased goes to a simple remote shutter release, but the link above is a physically identical device, with a different brand name 🙂

I also had a camp chair, a flashlight and easy access to beverages.

I did not have a Bahtinov mask at that time, but I did manage to get a reasonably good focus because Jupiter was very easy to find in my west southwest sky and bright enough to get a good focus on. However, my desired subject was the Andromeda galaxy to the northwest.

It took quite a while for me to find Andromeda. I used a couple of apps (Sky Map and Star Walk 2), but my own fairly myopic eyesight, even corrected, had trouble seeing Andromeda, which *can* be seen with the naked eye in a dark enough sky. I started taking 10 second exposures to see if the camera could see it. After 4-5 tries, it finally showed up. Also, once I finally knew exactly where to look, I could just see one star that was fuzzy, especially with binoculars.

I was able to carefully zoom and recenter, zoom and recenter, until I finally had as big an image of it as I was going to get.


That little fuzzy blob in the middle is Andromeda. I did not know or notice at the time that there is another, more distant galaxy in the same shot. I am pretty sure you can’t see it here. The overall picture seems a little underwhelming at this point.

One of the things to experiment with, especially that first night, was how long of an exposure I could get. With no tracking mount at the time, I had to balance the maximum length of exposure to get the most light versus minimum exposure without the stars smearing from the earth’s rotation. You can determine this experimentally by starting with a guess, maybe 3 seconds, take an exposure and zoom in with the camera viewer to see if the stars are round. If they are, go higher, maybe 6 seconds and look again. When the stars start becoming egg shaped, back off the exposure time a little and check again. Keep at it until you get the longest exposure you can without distorting the stars.

There are also ways to calculate the exposure and several web calculators can be found to determine the exposure scientifically. However, untracked exposures will always be pretty quick, so I’m not sure it’s worth the trouble when there are only a small number of options in the less than 5 second range and it’s really easy to determine experimentally.
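For the curious, those calculators mostly implement some variant of the “500 rule” of thumb: maximum untracked exposure is roughly 500 divided by the effective focal length. A sketch (the rule is a common community heuristic, not something from the gear above; 1.6 is the Canon APS-C crop factor):

```python
def max_untracked_s(focal_length_mm: float, crop_factor: float = 1.6) -> float:
    """'500 rule' estimate of the longest exposure in seconds before
    stars start to visibly trail on an untracked tripod."""
    return 500.0 / (focal_length_mm * crop_factor)

# A 75-300mm kit zoom at 100mm on an APS-C Canon body:
print(round(max_untracked_s(100), 1))  # 3.1 - close to the 3 seconds found by trial
print(round(max_untracked_s(300), 2))  # 1.04 - zoomed all the way in
```

Reassuringly, the rule lands right where my trial-and-error did.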

For me, with my particular collection of gear that night, it was 3 seconds. I decided to take 100 exposures at 3 seconds each, resulting in 5 minutes of total exposure. I didn’t realize it at the time, but that would be enough to get an ok image, but not nearly enough for the detail I had hoped for.

An important part of the capture process is to capture additional specialized images to help the processing software to do a better job at stacking all your exposures and reducing noise in the final image.

For terminology purposes, exposures are often called “frames” in astrophotography. I have not found a satisfactory explanation, but I presume it is based on frames of photographic film predating digital capture. The research continues. The exposures of your target are most often called “light frames”, meaning a collection of the light from our target object, collectively called “lights” or “subs”, for sub-exposures.

The calibration process has you capturing a number of frames under certain conditions. “Dark” frames, or darks, are exposures at the same camera settings (ISO, exposure time, etc.), and ideally at the same general time, as your light frames, but of complete darkness. Opinions vary, but most sources seem to recommend 30-50, or as many as 100, darks. This is super easy to accomplish. When you have finished capturing your lights, use a lens cap and perhaps an additional opaque cover over that to ensure that no light gets into the camera, then set your gear up to take another 30-50 shots with the same camera settings that were used for the lights. These frames capture what the noise from the camera sensor looks like so that the stacking software can account for it. If you use Photoshop or GIMP to stretch the contrast of these darks, you will find that they are not completely dark. They have little spikes of non-dark which represent the electrical noise introduced by the current conditions in the camera.

The next calibration is with the camera set to the same ISO as your lights, but with a flat white unfocused subject and the shutter speed adjusted to a proper exposure; these are called "flat" frames or flats. The stacking software uses these frames to account for anomalies like dust or scratches on the lens, or vignetting, a tendency for some lenses to not illuminate the sensor evenly, leaving the corners darker than the center. Accomplishing these is pretty easy. One easy way is to point the camera straight up, put a white T-shirt pulled taut enough to not be wrinkled over the end of the lens, then put an even white light over the T-shirt, such as an iPad or a white LED tracing pad. Adjust the exposure to a reasonable setting, according to the exposure meter on the camera; your camera probably has a histogram feature to help set exposure, and using it is probably the most accurate way. Take another 30-50 or as many as 100 flat frames.

Another set of calibration frames is called "bias" frames. Similar to darks, these are captured with no light coming into the camera, but with the camera set to its highest shutter speed. This shows the software another type of noise, the base noise pattern from the sensor in the camera without the averaging that happens in a longer exposure. Take another 30-50 or as many as 100 bias frames.

Postprocessing is a two-step process. The first can be somewhat automated, using software like Deep Sky Stacker. It is certainly not completely automated, but DSS does the heavy lifting. It takes your lights, darks, flats and bias frames and analyzes all the details. It will align the stars in your lights so that they all stack correctly, analyze the calibration files to help eliminate noise and other anomalies, and finally stack all your exposures into one low-noise output image with a composite exposure time of all the (valid) light frames.
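As a toy illustration of what the stacking software is doing under the hood, here is a heavily simplified sketch using synthetic data. It skips the star registration (alignment) step entirely, which real tools like DSS spend most of their effort on:

```python
import numpy as np

# Calibrate-and-stack, the short version: subtract a master dark,
# divide by a normalized master flat, then average the lights.
rng = np.random.default_rng(0)
lights = rng.normal(100, 5, size=(50, 64, 64))   # 50 fake 64x64 subs
darks  = rng.normal(2, 1, size=(30, 64, 64))     # 30 fake dark frames
flats  = rng.normal(200, 3, size=(30, 64, 64))   # 30 fake flat frames

master_dark = np.median(darks, axis=0)           # typical sensor noise
master_flat = np.median(flats, axis=0)
master_flat /= master_flat.mean()                # normalize around 1.0

calibrated = (lights - master_dark) / master_flat
stacked = calibrated.mean(axis=0)                # noise drops ~ sqrt(N)
```

The payoff is in that last line: averaging N frames knocks random noise down by roughly the square root of N, which is why 100 subs beat one long exposure of the same total time for noise, if not for faint detail.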

The next step is to crop the target and "stretch" the contrast with a photo editor like Photoshop or GIMP. This is not a particularly difficult step, but it is kinda fiddly. I will defer the reader to Nico Carver's videos for more and better information about that.

While the focus was good, the total exposure was pretty short, so a really close crop made for a disappointing image; this larger field is more pleasing.

The next time, I got 300 exposures of 3 seconds each, resulting in 15 minutes of total exposure. I had an iOptron SkyGuider Pro mount by then, but I was not super familiar with it and did not lengthen the individual exposure times, though I really could have.

The postprocessed results were about the same. I think part of the issue was that I had not nailed the focus as well. However, there was more light to work with, so I got a closer crop.

For boring reasons, I did not get to do any more captures before this summer, nearly an entire year.

Argus Panoptes

Ok, Argus Panoptes may be a little over the top since I currently only have five cameras instead of 100.

In the time we have lived out here on our largely rural property, I have had a few network cameras. My first foray into IP cameras was with a pair of D-Link DCS-932 cameras which, if you follow that link, you will learn (as I did) can now only be found on D-Link's non-US websites.

This camera served me fairly well. It's a Wi-Fi camera that can utilize, but does not require, a "cloud" presence to work. It outputs 640×480 VGA, fairly decent for its time, has auto night vision and built-in IR illumination. It works through a built-in web server and can operate with either an ActiveX or Java applet.

I had one in the barn overlooking the stalls and one by the front door.

I still have them and so far as I know, they still work. I might add them to the mix someday.

More recently, I had an issue with a FedEx delivery. I was expecting a shipment that required a signature. I left the gate open and put a sign on the door that I work from home and to please ring the bell. Long story short, they "made" three delivery "attempts" and were about to return my goods, having not once rung my doorbell. My desk is within sight of the front door and the dogs would not have slept through a doorbell.

I had to call and get a bit nasty and even then, the driver was literally walking away from my door 10 seconds after ringing the bell.

We hates them. They used to be the premier delivery service, but now I generally prefer even USPS.

That whole experience encouraged me to get a doorbell camera.

As mentioned in other blog posts, I am not exactly the best guy to try to sell on The Cloud. I appreciate the concept and my company is quite dependent on cloud services, not the least of which is RingCentral telephones, which is my particular gig within the company.

What I like about the cloud is great; it lets you export high powered data processing and storage to what the cloud really is, just someone else’s servers.

What I hate about many cloud dependent devices is that without a fast and reliable internet connection, it’s completely worthless crap that gets between you and a physical thing that you bought and are holding in your hand.

Thus was the state of affairs at our house. Our DSL was mostly worthless. That is to say, it was better than dialup. My first foray into home automation was with a Nexia Z-Wave controller. I discovered its absolute reliance on an internet connection only once I had it installed. It worked, but the delay was unworkable. The worst case scenario was my wife hearing something while I was working out of town and hitting the button to turn on the outside lights, which, thanks to crappy DSL, would promptly turn on 20-30 seconds later, if at all. That was not gonna work.

So, I discovered Vera, which has cloud features, which are good, but it does not depend on an internet connection to work. That whole thing is documented elsewhere.

Those last few paragraphs are just to explain that I will not install something as critical as a security camera that depends on an internet connection to even work. I'm looking at you, Ring.

My shopping led me in pretty short order to the Amcrest AD110, available from Amazon. Of course.

The installation of the doorbell camera was neat. I ordered a wedge thingy with it that pointed it more directly into the foyer that is our front entry.

The first issue I faced is that the AD110 is powered from the doorbell circuit and it requires at least 16V. It took some failures and some Googling to resolve it. Our existing doorbell transformer was only 10V, so a quick trip to the hardware store for a replacement fixed that issue.

Eventually, things settled in just fine and the next really important delivery, via FedEx of course, went better. I can’t actually claim that the camera had anything to do with it. At least the $1400 guitar my wife won from Sweetwater didn’t go back three times before they let us have it. Plus, I have Toni on camera walking out the front door right past that huge box without seeing it….

Flushed with this success, I ordered a couple more Amcrest cameras, a surface mount and a bullet mount. Both are PoE powered and have SD card slots for local recording. The surface mount ended up looking at the driveway and the bullet is in the barn looking at horse stalls.

I had some 32GB SD cards lying around and, not surprisingly, it doesn't take long to fill a card that size, especially if you don't want to trust all recording to be triggered by motion alarms. The camera has parameters for when to start recording over older material. 32GB worked out to about 3 days. Since 256GB is 8 times the storage of 32GB, you'd think you would get 24-ish days, but instead, it only showed about two weeks.
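The expected math, for what it's worth, using the observed 3-day figure to estimate the camera's average write rate:

```python
# Back-of-the-envelope retention: estimate GB/day from the 32 GB
# card's observed 3-day lifespan, then project onto a bigger card.
def retention_days(card_gb, gb_per_day):
    return card_gb / gb_per_day

gb_per_day = 32 / 3                                # ~10.7 GB/day observed
print(round(retention_days(256, gb_per_day), 1))   # 24.0 days expected
# The camera reported only ~14 days, suggesting it reserves space
# on larger cards or reports conservatively.
```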

I noticed that the camera had an option to store to Network Attached Storage and I had been thinking of building or getting one anyway, so that was all the excuse I needed.

For all the advantages of building a NAS from a Raspberry Pi or repurposing a PC or laptop, et cetera, I have reached an age and station wherein I just don't wanna mess with stuff like that. I expired my cutting edge tech card a couple of decades ago. I like cool tech, but I just want it to work. Consequently, I shopped and decided on what is probably quite literally the lowest spec NAS that Synology currently sells, a DS120j. It is a 1 bay unit with an ARM processor. The ARM processor is the brains in a cubic buttload of smartphones, though it is not the only one. They are what most techies would call "remarkably powerful, within their limitations." Good, cheap, but not the best. In any case, my DS120j, as ordered, came with a 2TB drive.

I’ll have to give it to them, though, it was a flawlessly easy setup. The NAS and the drive came in separate packaging, so the hardest thing was installing the drive into the cabinet. It took, maybe, 5 minutes. They are designed to go together, after all.

Before I even had much chance to set up a shared folder for the cameras, I noticed that one of Synology’s many available apps for the NAS is called Surveillance Station and it is basically a Network Video Recorder that is just included with a Synology NAS. It has WAY more features than the cameras alone and it bundles all your various cameras into a single interface. One throat to choke, so to speak.

One of the coolest things the NAS can do is give me access to data and more specifically, viewing the cameras, from outside of the house. This can be done with either a cloud redirect service Synology maintains or with port forwarding done in my own router. The DSCam app on my phone lets me check in remotely at pretty much any time. Plus, I just added every MP3, OGG and Apple music file I could find to the audio hosting feature it can also do.

Between the cameras and all the other stuff I have running on my NAS, such as backing up home directories on our various PCs, I was not necessarily pushing the 2TB storage, but in looking to the future, I elected to upgrade the space to 6TB anyway.

Anyway, these are night and day views from my five current cameras, four different models by Amcrest.

That magic number five turns out to be kind of an issue.

I quickly discovered that Synology Surveillance Station requires a license for each camera attached to it. The DS120j (and all models, I think) comes with two included licenses. Out of the box, you can only attach two cameras. I’m not a big fan of the license model; if you own a thing, you shouldn’t need a note from your Mom saying it’s ok to use it. I understand it, however, and I can choose to not put cameras on Surveillance Station or I can choose to buy licenses. I elected to purchase a four-pack of licenses because I could easily foresee six cameras in my system.

My next discovery was that the DS120j is limited to five cameras. It turns out that Synology, wisely, limits the number of cameras a NAS can monitor based on the capabilities of the host device hardware. The number is buried in the specs somewhere, but each model Synology NAS has a maximum number of cameras it can handle and mine can handle five. Consequently, I bought six licenses, one of which is not doing me any good.

Upgrade options are plentiful, though. Even the arguably least effective but cheapest upgrade is to the DS118, another single bay NAS, but it has significantly more processing power and it is rated for 12 IP cameras, $180 on Amazon as I write this. A more sensible upgrade would be for a two-bay system, where I could put the old 2TB together with the 6TB or upgrade for even more storage. The DS218 will do 25 cameras and the DS220 will do 20, for $250 or $300 respectively. More cameras and $50 less makes more sense to me.

In practical terms, I have captured a variety of events, some planned, some not.

The very first thing I caught with the doorbell camera, which was the first camera as well, was escaped horses. I was expecting another delivery, so when the motion alarm went off, I hurried to the door and nobody was there. I went outside to see if I could catch them and what I caught instead was our two horses in the driveway. 🙂

We live in the country, so we have wildlife. A family of raccoons frequents our barn. I've caught them on occasion just going out there, so I did expect to see some in the barn camera and was not disappointed. I did find it disappointing that they consider my camera to be a climbing hold, even after I tried to cover it with something very uncomfortable to step on. Note the old D-Link camera post. That has been removed since this pic was taken. It was yet another climbing aid.

I had purchased a camera to put by the gate, but had not yet deployed it, so I set it up temporarily in the barn walkway, pointed at a live trap.

I eventually decided I would want a permanent camera for outside the barn doors and finally deployed the gate camera at the gate.

The challenge was power. There is a gate opener out there with a small solar panel to charge it. Experience has shown that it is adequate for keeping the gate battery charged and I considered just connecting the camera to that battery. The opener pulls such a tiny bit of current unless the operators are actually moving the gate, but the last thing I want to do is have a camera pulling current 24/7 and run down the battery just in time to lock my wife in or out.

The gate camera has a USB power cord and I have ordered a 12V to USB adapter dongle to connect directly to a battery. Due to the nature of the USB power, I could feel reasonably confident it should draw 10W or less. The new solar panel is larger, a 25W panel, probably more than is needed. I also found a charge controller that has USB outputs, so I don’t even need the power dongle anymore. I already had a couple of battery boxes and elected to rob a lawn tractor battery from a mower and replace it later.
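A rough back-of-the-envelope on the solar budget. The camera's average draw here is an assumption (USB power caps it near 10W, but a Wi-Fi camera typically averages far less), and peak sun hours vary by season and location:

```python
# Daily energy in vs. energy out for the gate camera setup.
def daily_load_wh(avg_watts, hours=24):
    return avg_watts * hours

def daily_harvest_wh(panel_watts, peak_sun_hours, efficiency=0.8):
    # efficiency folds in charge-controller and battery losses
    return panel_watts * peak_sun_hours * efficiency

load = daily_load_wh(4)              # assume ~4 W average camera draw
harvest = daily_harvest_wh(25, 5)    # 25 W panel, ~5 peak sun hours
print(load, harvest)                 # 96 Wh needed, 100.0 Wh harvested
```

The margin is thinner than "probably more than is needed" suggests if the camera really pulled its full 10W budget, which is a good argument for the bigger panel and for not hanging the camera off the gate opener's battery.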

I mounted the camera, connected it to my Wi-Fi network and reconfigured it as needed.

The gate controller is on the other side of the board. Yeah, the fence needs painting. I painted the gate back in October, though 🙂

The camera catches any delivery vehicle, trash pickup, our own comings and goings, etc. I have the motion detection set to catch motion directly in front of the mailbox and, due to the close angle, I can get an alarm when the gate itself opens or closes.

I was not surprised to find that the Wi-Fi signal from the gate to the house is ok, but not really solid. My intent was to deploy an old Belkin Wi-Fi extender from years past. I have seen it recently, but I could not find it. I presume it is in a box that I didn’t open. Instead of continuing the search, I ordered a BrosTrend AC1200 to extend the house WiFi into the garage, which is closer to the gate. This increased signal strength somewhat. I think better positioning of the extender to expose it more to the garage window would bring it up even more.

The last camera I have on the NAS for now is another of the same model as the gate camera, mounted outside the barn with a view of the water trough and stall doors.

Sharp eyes may notice a dove nesting in the rafter, behind the minutes digits (43) of the clock display.

One reason I wanted a camera out there is just to monitor the water trough, largely because one of the horses has done mischief with the equipment. That is why there is a steel guard covering the float valve. With the camera in place, it didn’t take long to catch some hijinks.

As I suspected, it’s Bonk. While you can’t see it in this still view, there is a green plastic tub that normally covers the water circulating pump. He had knocked it off earlier, then picked it up and moved it closer to the trough. The splashing he’s doing above was enough to put a couple of inches of water in the bottom of that tub. I found it entertaining.

One funny side effect of the cameras switching to night mode and turning on their IR illumination is that one camera can see evidence of another. The driveway camera can see the gate camera so well that I thought I had left the floodlight feature activated, visible here as a bright slash at the top center.

In this one, the light spilling from the stall doors is actually the illuminator on the stall camera.

Ignorance Is Bliss

One of the many jobs a Synology NAS can do is to serve as a syslog server. Most network gear can support sending its logs to a syslog server and in doing so, you can have one place to go to review logs on all those devices.

On the other hand, what you don’t know can’t hurt you, right? Right?!

Seriously, the issues I have discovered in just over 12 hours of using Log Center are arguably not actually serious, but they are bothersome just because now I know.

I configured Log Center to collect logs from my pfSense router/firewall, two Cisco switches and, just for the entertainment value, one of my IP cameras.

Activity from the switches is very light. I verified that it would log a port being disconnected and reconnected and other than an hourly DHCP refresh from each switch, they have been pretty quiet.

The camera is also pretty quiet. It mostly shows login and logout activity from both my laptop and the NAS while playing with some settings and silence since then.

The router, on the other hand, is quite chatty. It also has more granular control over what gets sent to syslog.

Note that I have unchecked Firewall Events. Before doing that, the log was just stupid busy. The firewall blocks a LOT of traffic. I do need to analyze that traffic at some point. Some of the blocked traffic is internal.

The thing that bothers me but probably shouldn’t is the number of DHCP requests from stuff that is obviously online and operating.

Does my camera out by the gate really need to refresh its IP every 4 seconds ALL DAY? The camera alone accounts for almost 71% (32,552 out of 46,198) of the log events between midnight and a bit after 11AM when I pulled the log to look at it. Why does a camera that is online and operating have to do that?
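A quick sanity check on those numbers suggests one possible explanation: if each DHCP renewal produces several syslog lines (DISCOVER/OFFER/REQUEST/ACK), the per-renewal interval works out close to that 4 seconds:

```python
# Log-rate arithmetic on the camera's share of the syslog traffic.
events = 32_552
window_s = 11 * 3600            # midnight to roughly 11 AM
per_line = window_s / events
print(round(per_line, 2))       # ~1.22 s between individual log lines
print(round(per_line * 4, 1))   # ~4.9 s per renewal if 4 lines each
```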

I will figure it out….

I Re-re-renumbered My Home LAN

As part of deploying Starlink as my primary internet provider, I had to renumber my LAN. Starlink uses 192.168.1.x as its default client network and that is not alterable, so far as I can tell.

pfSense didn't know about the conflict when I started configuring it, so it didn't argue, but it also didn't work. I finally took a guess that the same IP range on both sides of the router could confuse it and changed mine to 192.168.2.x, and my stuff started working!

Well, it did until the first time I connected to my work VPN, Cisco AnyConnect. At first, I didn't specifically notice that the VPN was the trigger. I was working away at something and wanted to check on my camera system to see if the mail had arrived, but I couldn't get to my NAS. I didn't panic or anything, but later (without realizing that the VPN was disconnected), I was able to reach the NAS and cameras just fine. Some time later still, I connected the VPN, tried to refresh the camera display, and it finally dawned on me that with the VPN connected, I couldn't get to the NAS. I tried to verify the issue by connecting to something else, one of the switches I think, and: no luck. Disconnect VPN, get right in. Reconnect VPN, get nowhere. Well, except for my router. I suppose the VPN needs the default gateway.

AnyConnect info includes a list of secured routes. It turns out the secured routes cover most, though thankfully not all, of the RFC-1918 address space. All of 10.0.0.0/8 is either in use or allocated internally. Quite a few legacy locations use various ranges within 192.168.0.0/16, especially in the bottom half or so. Finally, there are only a few routes used within 172.16.0.0/12.
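Python's `ipaddress` module makes checking a candidate home subnet against a route list trivial. The routes below are placeholders for illustration, not the actual AnyConnect secured-route list:

```python
import ipaddress

# Hypothetical secured routes pushed by the VPN client.
secured_routes = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/17"),
]

def conflicts(candidate, routes):
    """Return the secured routes that overlap the candidate subnet."""
    net = ipaddress.ip_network(candidate)
    return [r for r in routes if net.overlaps(r)]

print(conflicts("192.168.2.0/24", secured_routes))   # overlaps 192.168.0.0/17
print(conflicts("172.29.250.0/24", secured_routes))  # [] -> safe candidate
```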

I dashed off a note to the network team explaining my issue and asking for assistance in finding a subnet I could use without interference, but that note as yet has not been answered. I used my Linux jumpbox, which is a VM in one of our data centers, to investigatively Nmap parts of this space, looking for some vacant space. Running Nmap from home is limited to only the secured routes provided through AnyConnect; running it from the data center should also detect systems that are not specifically accounted for in the VPN.

On the first run-through, it looked like a large block near the top of the space should be open. I chose one as being easy to remember and easy to type.

Even though I was just testing, I was still going to have to update the DHCP reservations I have because the devices I need to be able to reach all have reserved IPs. The process is to edit the LAN IP address, but don’t apply the changes yet, edit all of the DHCP reservations, then apply the changes under both menus.

As soon as you apply those changes, the IP you are using to connect to the router will no longer be valid, so you need to refresh your IP address then log in to the router again.

And it still interferes with the VPN.

Just for the entertainment value, I tried again, but this time, since the gateway for Starlink was already connected, pfSense wouldn’t let me use it.

I then went to my next alternative, the 172.16.0.0/12 range. There are a couple in use, but they are at the low end of the range. Ages ago, we had a WAN provider that had assigned the entire range to our company within their network. At that time, about half of our branches were numbered as 172.29.X.0/24 or 172.30.X.0/24, where X was the branch number and 29 or 30 revealed whether that location connected via VPN over internet or a point to point T1. It worked well for us. It was fun to realize that my muscle memory could still type really fast, so….

I renumbered my LAN again and FINALLY, all my stuff coexists with the work VPN!


I was not fully aware of Starlink as an internet service for a while. I must have started paying attention on or about January 28, 2021, which is the date I received confirmation for signing up for emails about the service.

On February 19th, 2021, I received the email that it was available to order. The email came in around 7PM and it was 10ish before I saw it and jumped on to pay my $99 deposit. In a mere 34,387,200 seconds, it arrived.
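For the curious, that figure checks out:

```python
from datetime import date

# The wait, in the units that make it sound most dramatic.
ordered, arrived = date(2021, 2, 19), date(2022, 3, 24)
wait = arrived - ordered
print(wait.days, wait.days * 86_400)   # 398 days = 34,387,200 seconds
```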

Starlink wisely did not email reminders that their dates were sometimes slipping. You have to log in to your account to see the status of your order, which rarely changed. Mine was originally something like “mid to late 2021” then “late 2021” then “first quarter 2022”. In December 2021, it finally said “March 2022”. Since they were willing to narrow the window down to a specific month, I then had the confidence to step up my preparation efforts. The chronology on these various efforts is a bit muddled. I can specify what date I ordered some piece of gear, but for this story, what I did and tried to do are more important than exact dates or times they were tried. It will be like a movie that skips around in the timeline, but it still has a beginning, a plot and an ending. My Starlink kit arrived March 24th, 2022 and it’s working pretty well in my network right now as I write this. This post is the story of making it work.

As detailed elsewhere, I had switched my TP-Link Wi-Fi router for a pfSense router and reconfigured the TP-Link to be an access point. The pfSense router was specifically the Netgate 1100 appliance preloaded with pfSense+.

This appliance has three ports, labeled WAN, LAN and OPT. Most commonly, OPT is used for either a 2nd LAN, like a DMZ, or for a 2nd WAN connection, which was my intent.

I didn’t really start documenting the network until I had pfSense in place, so there aren’t a whole lot of differences between this diagram and the diagram for the network as built.

Even as I look at this now, I see a couple of minor typos in text, but this is a pretty good representation of what was in place around October 2021. I don’t remember for sure, but I think this diagram was inspired by the installation of the TP-Link switch in the house. Before this, it was just whatever little 8 port switch I had, maybe a Belkin.

What is not illustrated well here is the number of Wi-Fi devices in the house. Just as I am writing this, there are 25 Wi-Fi devices that I can list, plus a couple of my wife’s devices that aren’t here right now.

Following r/Starlink and other research indicates that obstructions to the dish are a big issue. Also, though I didn’t know it at the time, the dish was likely to orient itself pointing north. From the house, north is pretty much where all the trees in the back yard are, but there are no such obstructions north of the workshop.

The easiest solution I could think of was to have the dish at the workshop but connected to the router in the house via VLAN. One port on the switch at one end untagged in a unique VLAN and one port on the switch on the other end untagged in the same VLAN should be all that was needed.

What follows is a completely non-exhaustive description of what a VLAN is. I could be wrong, but this is what I think I know about them.

A VLAN is a group of ports on a switch that are separated by an identifying tag in the ethernet frame. Terminology can vary between manufacturers, but most commonly, the physical port VLAN membership is “untagged”, meaning the frames going in and out of that port do not themselves have a VLAN tag. Internally, the switch tags those frames with the VLAN number. No matter what the device is or what it knows about VLANs, everything on that port is in the same VLAN. Most often, this is also the “default” VLAN, which is usually but not always VLAN 1.

More sophisticated switches will have a VLAN mode for each port, usually "access", meaning that port provides access to its untagged VLAN number, and "trunk", meaning the switch will pay attention to the tags in frames going to and from that port. A trunk port can allow all VLANs or just a list of VLANs in or out of the port.

The nice thing about a trunk port is that if you have two or more switches connected by trunk ports in both switches and you have everything configured correctly, ports in both switches configured for untagged VLAN 2, for example, will pass traffic between themselves, yet they will be isolated from the other ports on the switch. It is a simple implementation of this characteristic that I am using to backhaul my Starlink dish over the wireless link between the house and workshop, without anything else in the network seeing, or more importantly responding, to those packets.
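The tag itself is just four bytes inserted after the source MAC address. This little sketch (illustrative only, not a real network stack) shows where the VLAN ID lives in an 802.1Q frame:

```python
import struct

# 802.1Q inserts 4 bytes after the 6-byte destination and 6-byte
# source MACs: a 0x8100 TPID, then a 16-bit TCI whose low 12 bits
# are the VLAN ID. Untagged traffic simply lacks these 4 bytes,
# which is why an access port's traffic carries no VLAN info at all.
def tag_frame(vlan_id, untagged_frame):
    dst, src, rest = untagged_frame[:6], untagged_frame[6:12], untagged_frame[12:]
    tci = vlan_id & 0x0FFF              # priority bits left at zero
    return dst + src + struct.pack("!HH", 0x8100, tci) + rest

def vlan_of(frame):
    tpid, tci = struct.unpack("!HH", frame[12:16])
    return tci & 0x0FFF if tpid == 0x8100 else None

frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
print(vlan_of(tag_frame(2, frame)))   # 2
print(vlan_of(frame))                 # None (untagged)
```

This is also the crux of the TP-Link trouble described below: a switch that strips the tag before forwarding out its uplink has thrown away exactly the information a real trunk port is supposed to preserve.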

Lesson one is that even if a switch advertises VLANs, that doesn't necessarily mean it will do all this. It took a good bit of experimentation to determine that on my TP-Link switches, even when a port is configured for tagged VLANs, it is not actually a trunk port.

In order to connect Starlink to an existing network, it needs an ethernet port. The previous version of dishy, or more precisely, the previous model Starlink power brick, had an ethernet port that could be used for that. The newer rectangular dish comes with a one box power supply/router that does not have an ethernet port. To plug in, you need an ethernet adapter that is cheap enough that I wish they would have just built it in and charged me $20 more. When I completed the dishy order, I also ordered one of these adapters, but it is not expected to ship until mid April. [ed: shipped April 13]

There are all kinds of dishy hacks all over the internet, but for me, waiting for the official adapter is less painful than risking non-warranty damage to the equipment. What I have done in the meantime is use a Wi-Fi extender that also has an ethernet port. I ordered a BrosTrend AC1200 extender and configured it to extend the Starlink Wi-Fi, but I largely ignore that and use the ethernet. It supposedly runs up to 850-something Mbps in 5GHz mode. The bad part is that the ethernet port is only a 100 Mbps port, so that will cap the maximum speed available from this setup to some fraction of 100 Mbps.

When I took Starlink out of the box, it was very simple to set up and use. Knowing it was temporary wiring anyway, I opened the window into the bedroom where the equipment is, fed the dishy cord through and connected everything up. I then moved the dish as far to the west as the cord would reach, while still keeping away from trees to the north.

It was about where the dot is.

I powered it up. There was a little interaction with the Starlink app, mostly setting the SSID and password and not much else. After some time, maybe 5 minutes, probably less, it connected well enough to run a speed test. The first one from my phone was not terribly impressive.

It would turn out that speeds on my cell phone are never as good as on a PC. I presume there is a reason that I have not seen as yet. All the tips tend to be for making sure you have a strong signal. <shrug>

At this point, I had not yet received the Wi-Fi extender, so for a little while, my only option was to connect to the Starlink router for high speed or connect to my existing equipment for anything else. It was only for a day or so.

When it did arrive, testing it was easy and setting it up was easy, too. When you power it up, it provides an admin network to connect to and a webserver for configuration. Basically, tell it what network you want it to extend, provide credentials and it’s done.

I tried for a while to connect it directly to the OPT port of the pfSense, but I couldn't quite figure out how to configure it. I did find that I could just plug it into the WAN port and the router would just get an IP from Starlink and work. It didn't care that it was a different subnet than OneSource was. I began to suspect, however, that the issue may have been related to the fact that Starlink comes up with the same 192.168.1.X network that I am using on the LAN side. As much as I hated doing so, it kind of made sense to renumber my network, since Starlink doesn't give you any control of its subnet.

Sadly, that didn't do it, either. Furthermore, it would soon come to light that the new IP range, 192.168.2.X, conflicts with routes on my work laptop when I connect to the company VPN. If the VPN is connected, I can get to my router, but nothing else on the LAN. The biggest effect is that I can't get to Gnaz to view the cameras and I can't remote to my non-work laptop when Cisco Umbrella blocks a website I want to get to. All that means I will need to renumber again, this time avoiding all the routes in AnyConnect. I think the LAN and WAN subnets should be different, if for no other reason than clarity, so I won't be going back to 192.168.1.X.

Somewhere in here, I had broken something on the router that I couldn't recover, and ended up recording all the static DHCP mappings manually and factory resetting pfSense. This time, I configured it with Starlink on WAN and OneSource on OPT, and I guess I had cleared whatever was keeping it from working, or I had a better YouTube video to follow, because I got it to work and even switch to OneSource on Starlink failure. Even now, I'm not totally happy with the failback. If the gateway goes offline due to failed pings to its monitor IP and comes back on its own, it will switch back, and I've seen that happen a few times naturally. However, if I unplug either provider's cable to simulate an outage, it doesn't want to switch back when that provider comes back up. I have to manually set it to down and then back up.

With that part of the puzzle working, I could move on to local testing of the VLAN setup. At this point, both TP-Link switches were in the same cabinet with Starlink and really everything else. I had Starlink Wi-Fi to the AC1200, the AC1200 plugged into port 8 of the workshop switch, a cable between port 7 on each switch to simulate the link to the workshop, and the WAN port of the router connected to port 8 of the house switch.

I started with the simplest configuration I could think of, with both switches configured the same. Ports 1-6 were untagged on the default VLAN 1. Port 8 was untagged VLAN 2. Port 7 was to be the trunk port, so I first tried untagged VLAN 1, tagged VLAN 2, which should have allowed 7 to carry both VLANs. In short, I tried all four combinations of tagged and untagged VLANs 1 & 2 on port 7.

The two switches would not pass traffic between port 8 on the workshop switch and port 8 on the house switch.

I Googled a bunch and finally found a thread where someone explained it well. Those tagged and untagged settings determine how the local switch treats the ports and VLANs. The frames leaving the switch do not carry the tag information at all, so the VLAN assignment can’t be passed to another switch. In short, those are not trunk ports.
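That explanation clicked for me, and it can be boiled down to a toy model of 802.1Q behavior. A minimal sketch (the port-membership sets here are illustrative, not the actual TP-Link config):

```python
# Toy 802.1Q model of why the setup failed: an "untagged" member port strips
# the VLAN tag on egress, so the frame arrives at the next switch with no
# VLAN information and gets filed under that switch's port VLAN (PVID).
def egress(frame_vlan, untagged_vlans, tagged_vlans):
    """Return the frame as seen on the wire; None means no tag."""
    if frame_vlan in untagged_vlans:
        return None            # tag stripped on the way out
    if frame_vlan in tagged_vlans:
        return frame_vlan      # tag kept: this is real trunking
    return "dropped"           # port is not a member of this VLAN

def ingress(wire_tag, pvid):
    """The receiving switch assigns untagged frames to its own PVID."""
    return pvid if wire_tag is None else wire_tag

# A VLAN 2 frame leaving an untagged-VLAN-2 port loses its tag...
wire = egress(2, untagged_vlans={2}, tagged_vlans=set())
# ...and the next switch files it under its own PVID (say, 1).
print(ingress(wire, pvid=1))   # 1, not 2 -- the VLAN didn't survive the hop

# With VLAN 2 *tagged* on the inter-switch port, the tag survives:
wire = egress(2, untagged_vlans={1}, tagged_vlans={2})
print(ingress(wire, pvid=1))   # 2
```

On switches where tagging genuinely works this way on an inter-switch port, VLANs do survive the hop; the takeaway from the thread was that these particular ports weren’t doing that.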

As I had mentioned a few times in the network blog, I have a Cisco SG200-26P. This switch has 26 ports, 12 of which can provide up to 100W of PoE, and it can definitely trunk VLANs as I expected. I just needed to get it a playmate that could do the same. I ended up ordering an SG250-08, an eight-port switch with essentially the same feature set, except no PoE and, of course, fewer ports.

Even that was not an instant success. The two switches are different enough that the dialog for the same setting may not look the same between them. On the 8 port, the port types are “Access” and “Trunk”, period. On the 26 port, there are “General”, “Access”, “Trunk” and “Customer”. Choosing Access grays out the default VLAN and related settings. General looked like what I probably needed for all but the trunk ports. Customer had a note beside it that made me wonder if it was some kind of auto VLAN, putting each port on a VLAN of its own, like if you were using it in a hotel or other multitenant situation.

Long story short, it still didn’t quite work, but it was getting close. The MAC address of the AC1200 plugged in to the 8 port began showing up in the MAC address table of the 26 port, but it still didn’t pass any traffic. More Googling led to a recommendation that all ports be Trunk, but to control which VLANs are allowed on each trunk. I reconfigured all the previously “access” ports to “trunk”, made sure my tagged and untagged settings were configured as I thought they should be, and it suddenly started working! I was hitting Starlink from the house and the traffic was traversing both switches to get to the WAN port of the router.
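The “everything is a trunk, control the allowed VLANs” recommendation amounts to a simple filter. A minimal sketch with VLAN numbers from my setup (VLAN 1 for the LAN, VLAN 2 for the Starlink backhaul); the function is illustrative, not a Cisco API:

```python
# A trunk forwards a frame (tag intact) only if its VLAN is in the trunk's
# allowed list; everything else is pruned at the port.
def trunk_forward(frame_vlan, allowed_vlans):
    return frame_vlan if frame_vlan in allowed_vlans else None

# The backhaul trunk carries both the LAN (VLAN 1) and Starlink (VLAN 2):
print(trunk_forward(2, allowed_vlans={1, 2}))  # 2 -> forwarded
# A trunk limited to VLAN 1 prunes VLAN 2 traffic:
print(trunk_forward(2, allowed_vlans={1}))     # None -> pruned
```

So instead of deciding access vs. trunk per port, every port carries tags and the allowed-VLAN list does the gatekeeping.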

It was definitely too late to start moving stuff to the roof, so I would leave it like that to run overnight and through the next day.

The plan was a simple one. Move Starlink, the AC1200 and the 8 port switch to the workshop, with the dish on the roof. Though it was kind of slow going, that all worked out well.

At first I grumbled about having to get on the roof to do this, but it suddenly occurred to me that the dish just needs to be off the ground. At this location, there is no particular advantage to the peak of the roof over the spot shown in this picture; there are no obstacles to clear with a few more feet of height. I ran the wire into the barn, where the eaves are screened but not actually sealed, and immediately into the loft/attic area above the workshop, which is roughly the left half of this picture. It’s dusty and dirty up there, but I got the job done. I also ran the wire that feeds one of the cameras through the same path. It had been temporarily run for months. 🙂

Once in place, I find it interesting that the dish orients itself in a more northeasterly direction.

It was a quick thing to change out the old switch in the workshop for the Cisco and connect the AC1200 and Starlink. I waited for the satellite to come up, verified I could connect directly to it and get data, then tried from the Wi-Fi out there. It connected to Wi-Fi, but no internet. Hmmmm… Oh, yeah, I still needed to make the connections in the house. Hooked it up and everything came up!

I plugged the one hanging cord into the switchport it was supposed to go into. It didn’t come up, even after giving it about 5 minutes. I manually down/up’d the WAN interface in pfSense and it came up!

Again, it was too late in the evening to rewire everything in the cabinet. At this point, the Cisco switch had the two ports involving the backhaul and a couple of other ports in use, but the rest of the stuff was still plugged in to the TP-Link switch.

The next afternoon, I took everything out of the cabinet, cleaned it up and recabled with some color coding. It was getting old tracing black wires connected to mostly black equipment inside a black cabinet. When I finished, it looked like this:

Pretty quickly, I noticed that I was connecting to the Wi-Fi but not getting internet. I had a link light and all the wiring appeared fine. I dug into the switch configs and found that a feature called Smartport showed a different status for the port the AP was plugged into. I moved the AP to port 18 and was able to get internet via Wi-Fi on my phone. I tried the laptop and it would connect to Wi-Fi, but no internet. I looked at Smartport again and now the port I had moved the AP to looked like its previous port.

I chased the Smartport stuff in the switch and online for quite a while and got nowhere. I found that if I put the WAN connection back through the TP-Link and also put the AP on the TP-Link, everything worked as it should. Again, I had killed pretty much the whole evening troubleshooting this one issue.

The next day, I mentioned it to a coworker. Networking is not his specialty, but in describing the problem to him, something occurred to me. When I plugged everything in, I could get one complete valid connection, then nothing else could get internet. It tickled the back of my mind that this is sometimes an intentional behavior, to prevent users from adding rogue switches or APs to company switches. It took a little more Googling to remember that it is called Port Security. I found it under Security, not Interfaces or Smartport or VLANs. Sure enough, the ports the APs had been on were locked, configured to allow 1 MAC address. I cleared them, set them to allow 50 MACs and normalized all the wiring. The AP works perfectly now.
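The one-good-connection-then-nothing symptom falls straight out of how port security works: the switch counts source MAC addresses per port, and an AP in bridge mode passes every client’s MAC through. A toy model (MAC names are placeholders):

```python
# Toy model of port security: a port configured to allow N MAC addresses
# learns the first N source MACs it sees and drops frames from any others.
class SecurePort:
    def __init__(self, max_macs):
        self.max_macs = max_macs
        self.learned = set()

    def admit(self, mac):
        if mac in self.learned:
            return True
        if len(self.learned) < self.max_macs:
            self.learned.add(mac)
            return True
        return False   # port "locks": new MACs are rejected

# With a limit of 1, the first MAC through gets the only slot and every
# other Wi-Fi client behind the AP is blocked:
port = SecurePort(max_macs=1)
print(port.admit("ap-mac"))      # True
print(port.admit("phone-mac"))   # False
print(port.admit("laptop-mac"))  # False

# Raising the limit (I used 50) makes room for the clients:
port = SecurePort(max_macs=50)
print(all(port.admit(m) for m in ["ap-mac", "phone-mac", "laptop-mac"]))
```

This also matches the symptom moving with the AP from port to port: whichever port the AP landed on learned one MAC and locked.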

As it turns out, the Smartport thing could have been a clue. When Port Security locked a port, Smartport was responding to that action; it was never Smartport doing it.

All that has been operating untouched for 24 hours, pretty much as I am typing this. For now, I’ma call it good. Here is the updated drawing.

The next thing should be another simple change when the ethernet adapter arrives. I don’t know what the effect of disabling Starlink’s router will be, other than freeing up an AC1200…