Flat Fields Matter

For a couple of months, High Point Scientific had my new tripod on backorder. It is apparently quite popular, being solid yet inexpensive. I was pretty excited to get the notice that it was shipping.

As it is winter and Orion is quite prominent in the night sky, I thought it would be nice to capture the Orion Nebula. Because of its location, basically in Orion’s dagger, it should be easy to find, at least compared to many, maybe most, deep sky objects.

For my first try, I had my CLS filter in place. We have a big sodium light that is basically in the same direction as Orion when I am set up in what is arguably a very handy place, in my driveway, just outside the garage. This filter, however, is pretty dark and it made it more difficult to find anything. At some point, I decided that maybe I was pointed in the right direction and that I just couldn’t see the nebulosity in my test shots, so I set the thing loose taking 180 x 30 second subs. I spent some of the capture time in the house doing things that needed doing and some of it waiting in the car with the heater on, which was kinda novel.

I had started a little later than planned, plus all that attempting to find a nebula pushed my capture kind of late. The exact place I had set up had not really been planned around my capture. This was where the camera was pointed at the end of 180 frames.

I captured a really pretty field of stars and I had only missed the nebula by this much:

As luck would have it, a couple nights later was clear and a Friday, so I set up again. This time I removed the CLS filter, hoping it would make it easier to find the nebula. I am not sure whether or not it made a real difference, but I did find it!

It was very exciting not only to see the nebula show up on the viewscreen, but also to be able to frame it so perfectly.

I captured another set of 180 x 30 second subs, about 30 darks, flats and bias. I set up a little bit out in the yard to keep from catching the house if capture went long. I also set up a heater and for the most part, sat with the equipment for most of the capture time.

I also ran a small test of two other bits of equipment, a dew heater for the astrograph and my Bluetti power station. While the power station was not purchased specifically for astrophotography (power loss during winter was the big thing), using it for possible dark site travel was a consideration.

This was the first time I had even powered up the dew heater. According to the Bluetti display, on high, it draws 6 watts. I could hardly even tell it was warm against the aluminum dew shield of the Redcat. I suppose that all it has to do is keep it just warm enough to discourage condensation. Shrug. Whether or not it was successful would come up soon enough.

We had plans for early Saturday afternoon, so I decided to stay up and do at least a preliminary stack of the capture. I scrolled fairly quickly through the subs and discovered a couple where I presume I had bumped the tripod, and excluded those subs. The final stack came out… ummm… odd.

This is after a bit of stretching in GIMP. There are two anomalies about this image. The most obvious to me is that the bottom 1/3 or so seems blurred, out of focus. I had run through checking the subs pretty quickly, but certainly none were way out of focus. It also strikes me as odd to only be out of focus on the bottom of the image. The top and middle seem to be in sharp focus.

The other thing, harder to notice because of the blurring, is that there is a definite linear gradient from top to bottom.

It was too late and I was too tired to do much about it just then, so I hit it again the next morning. On one of my more careful trips through the subs, I noticed that several towards the end of the capture seemed to have soft focus, so I excluded them; the result was essentially unchanged. I reviewed my flats and noticed that they had a linear gradient to them and thought, oh, that makes sense, so I restacked again without flats: no change. It occurred to me later that I may have unchecked all the flat captures, but maybe not the flat master that the previous stack process created.

I posted a png of the stretched blurry image on the Nebula Photos Patreon community page, with some details about the capture. Nico took an interest and a few private emails later, I had much more carefully tried stacking without the flats. I cleaned all of the .info files out of the lights folder, moved the excluded lights to another folder, as well as moving the master flat to another folder. When I stacked this time, it was 131 lights, zero flats and the presumably good dark master and bias master. The image came out great and with a couple of stretches and a crop:

My favorite astro image thus far!

To further verify that the flats were the issue, I kept all the rest of the conditions the same and added back the flats and I got this different image. It may seem to be ok, but upon closer examination, it is still very wrong.

It is hard to tell at full size, but in the bottom half of the picture, getting worse toward the bottom, the stars split into three divergent images of red, green and blue.

I will be the first to admit that I do not understand the inner workings of DeepSkyStacker and how it uses the calibration files, but it now seems obvious that if there is an issue with those files, it can damage your final image in probably unpredictable ways.

Some of the discussion with Nico was about my flat capture process and I am going to rework how I am doing that. For this session, flats were captured by holding a USB tracing pad up to the end of the dew shield and adjusting exposure until it was just a little underexposed, which turned out to be 1/2500 second. This is probably way too fast, catching a pattern of sensor noise as well as PWM flicker and shutter artifacts from the brightness control of the panel itself.
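As a quick sanity check on the flicker theory, here is a short Python sketch. The 1 kHz PWM frequency is purely a guess for illustration; I have not measured the panel, and the function name is mine.

```python
# If the tracing pad dims via PWM, an exposure much shorter than one PWM
# cycle samples the on/off pattern unevenly, which can show up as banding
# in the flats. The 1 kHz figure below is a guess, not a measurement.

def pwm_cycles_per_exposure(pwm_hz: float, shutter_s: float) -> float:
    """Number of full PWM cycles captured during one exposure."""
    return pwm_hz * shutter_s

print(pwm_cycles_per_exposure(1000, 1 / 2500))  # 0.4: under half a cycle
```

Anything much under a few dozen cycles per exposure risks uneven illumination, which is one more argument for slower flat exposures.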

There are several ways to address this and I will report on what works well for me.

The Gate Keeper

I have mentioned that I have a camera mounted by our front gate, powered by a dedicated solar panel, charger and battery.

I chose this Renogy Wanderer 10A solar charge controller specifically because it has USB power outlets built in, elegantly feeding the USB power cord to the camera.

The battery box lid, designed for indoor use apparently, had air vents on the top and allowed rain inside. The first controller died a wet death. I covered those air vents. It’s not as if the rest of the lid fits with a hermetic seal, so the box is still well ventilated, but now it is at least rain tight.

Wintertime is tough on solar stuff. So many overcast days result in less efficient charging. The camera draws 100-200mA. The granularity of the charge controller’s output meter is 0.1A and it usually says 0.1A, but sometimes it’s 0.2A. Maybe when it is darker and the IR illuminator is on?

A couple of overcast days is enough for the lawn tractor battery to fall behind and drop power, especially after dark. The tractor battery is not intended for deep cycle use, although the gate controller is only on its 2nd battery in about 10 years. I think the constant drain of the camera, even at only 200mA, is probably the issue.

I have considered changing the battery to an actual deep cycle battery. WalMart carries an inexpensive ($80-ish) marine battery in the 24 size class. Real specifications for that specific battery were elusive, but I found several 24 size marine batteries from other sources. If they mentioned it at all, the amp-hour rating was 75-80 AH, so that seems likely for the EverStart at WalMart.

A more useful number for power service is the watt hour, literally how many watts for how long. It’s really handiest when comparing power systems that might not be running the same voltage. Conversion of AH to WH is easy enough. Voltage times amperage is wattage. For example, an off-grid solar system running 12V with 100AH of battery can be compared directly to a 48V system with 25AH of battery because they would both provide 1200 watt hours of power.

12V x 100AH == 48V x 25AH

In the case of the EverStart, 80AH x 12V = 960WH. A 12 volt battery is rarely actually 12 volts. Lead-acid batteries like to charge at 2.25 or so volts per cell, so a 12V battery’s ideal charging voltage is 13.8V. The open circuit voltage for a fully charged battery is about 2.1 volts per cell, so 12.6 is typical. Plug that number into the formula and a 12.6 volt 80AH battery should be good for 1008 watt hours.

Once you have a watt hour rating, figuring out how long a charged battery should last is pretty straightforward. In the above example, under ideal conditions, you should be able to draw one watt for 1008 hours, 2 watts for 504 hours, 50 watts for 20+ hours, and so on.

In practice, lead acid batteries can be permanently damaged if discharged too deeply. Deep cycle batteries are optimized to limit this damage, but you can’t eliminate it. It’s just how the chemistry in there works. It is good practice to limit discharging a lead acid battery to no more than 50% charge, so derating that watt hour rating by half would be wise. One watt for 504 hours, 2 watts for 252 hours, etc.

Assuming 200mA for the camera and the specified 10mA for the charge controller itself, an 80AH battery should be able to run the camera for 192 hours to the 50% charge level, or about 8 days. That should be plenty.
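Since the arithmetic keeps coming up, here is a small Python sketch of the same calculations. The function names are mine, and the voltages and derating fractions are the assumptions discussed above, not measurements.

```python
# Back-of-the-envelope battery runtime, using the figures from the post.

def watt_hours(voltage_v: float, capacity_ah: float) -> float:
    """Capacity in watt hours: volts times amp hours."""
    return voltage_v * capacity_ah

def runtime_hours(voltage_v: float, capacity_ah: float,
                  load_a: float, usable_fraction: float) -> float:
    """Hours a constant-current load can run, derated to the
    usable fraction of the battery's capacity."""
    usable_wh = watt_hours(voltage_v, capacity_ah) * usable_fraction
    load_w = voltage_v * load_a
    return usable_wh / load_w

load = 0.210  # camera (200 mA) plus charge controller (10 mA)

lead_acid = runtime_hours(12.6, 80, load, 0.50)  # 50% discharge limit
lifepo4 = runtime_hours(12.8, 20, load, 0.75)    # BMS cuts off near 75%
print(f"80AH lead acid: {lead_acid:.0f} h, 20AH LiFePO4: {lifepo4:.0f} h")
```

That lands within rounding distance of the 8 days for lead acid and 3 days for LiFePO4 estimated here.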

Then again, lead acid batteries need maintenance, especially during the summer months. All the cool kids are using lithium iron phosphate batteries in their solar systems, mostly because they enjoy certain efficiencies, but also because they are essentially zero maintenance, so I started looking into that.

LiFePO4 batteries can be charged and discharged faster than lead acid, are much lighter and because of smart onboard electronics, don’t really need a super smart charger. They are not particularly cheap, however.

I found that 20AH LiFePO4 batteries are similar in price to 80AH lead acid batteries. 20AH works out to 96 hours and these batteries protect themselves at about 75% discharge, so 72 hours is likely.

So, roughly the same price as the big lead acid battery and only 3 days without a charge, but zero maintenance and an arguably longer service life. I decided it was worth trying one out at the very least.

Via Amazon, I chose a Vestwoods 20AH battery. The Renogy charge controller has a lithium mode. Part of setting that mode involves setting the charge voltage. The labelling on the battery indicates that it is a 12.8 volt battery, so I took that as the best guess.

It was delivered at 7:24 on December 30, coincidentally about a foot from where it would be installed. Here’s Amazon’s delivery photo. The battery box nearest to it is where it goes and the camera above it is what it powers.

It’s about the size of a large motorcycle battery.

I connected it almost immediately, by flashlight.

According to the recordings from that camera, it ran from 9:05PM on December 30 until 12:37AM New Year’s Day.

The immediate presumption was that changing the battery type caused an issue. I didn’t have a lot of free time to troubleshoot, so it took me a few days to determine that what had actually happened was that the power output from the charge controller had failed. All indications are that it is on, but there is no power on either the USB jacks or the screw terminals.

Before I found the Renogy charge controller, I had purchased a 12V to USB adapter, assuming I would be using something like that connected directly to a battery. Using that, I was able to at least get the camera operational again, pending the arrival of a replacement charge controller.

Tendrils of the Power Event

After I finally beat the dongles into submission, it took a little while to discover that many of my devices did not seem to be operating, but in reality, they were just not responding to verbal commands via Alexa.

Complicating things, I was also dealing with some WiFi trauma while settling in the new Ubiquiti gear. I was suffering a lot of disconnections and I could not be 100% sure that the WiFi devices I was trying to control weren’t just switching on and off like a crazy monkey. The short version on that seems to have been overzealous roaming defaults with only two fairly distant APs, but that is a story for that blog.

Once the WiFi was settled down, if not solved, it was obvious that Alexa could definitely see all the devices. I could change a device name and it would be detected, but I still could not control any devices with verbal commands. There were a couple of responses but by far, the most common was that there was no <device> in my profile. Sometimes, I would get that the device was not responding. What I would never get is a device that was controlled.

The timing for this failure is abysmal. My wife just had knee surgery and in preparation for that, I made sure the lamp in the bedroom where she would be staying (she must avoid the dogs for a while) had a smart bulb in it that she could control verbally, except that verbal control wasn’t happening.

After trying no shortage of useless stuff and random restarts of various services, I was getting to be on the right track and had tried removing the skill from Alexa. In Googling about that, I stumbled across a forum posting where someone suggested removing one file and restarting home assistant. I chose instead to rename that one file, just in case. Well, praise Jack Bauer because it worked. All the Alexa stuff works again!

Well, I had to re-re-rename some things back to normal, basically clean up the messes I made troubleshooting. I’ve had to do that a lot lately.

The file in question is /config/.storage/cloud. My old one was 3 times the size of the new one that the system created automatically to replace the missing one. It had several blocks that seemed to be associated with Google devices (of which I have zero, so far as I know) but generally, it was just a bigger file with more stuff in it. I’m sure there was redundancy or perhaps one scrambled line in there.
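For anyone chasing the same fix, here is a minimal Python sketch of the rename step. The `sideline_cloud_file` helper is just an illustrative name of mine, and the path is from my Home Assistant install, so adjust for yours.

```python
from pathlib import Path

def sideline_cloud_file(storage_dir: str) -> bool:
    """Rename Home Assistant's 'cloud' storage file to 'cloud.bak'
    so a fresh one is generated on restart. Returns True if renamed."""
    cloud = Path(storage_dir) / "cloud"
    if not cloud.exists():
        return False
    cloud.rename(cloud.with_name("cloud.bak"))  # keep a copy, just in case
    return True

# On my install, the storage directory is /config/.storage:
# sideline_cloud_file("/config/.storage")
```

Restart Home Assistant afterward so it recreates the file; keeping the renamed copy means you can roll back if things get worse instead of better.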

Now that I can tell Alexa to turn on the Christmas tree and she does, of freakin’ COURSE the rotating base for the tree would stop passing power through to the lights. It is always something.

All That Glitters

My new WiFi is arguably worse than my old WiFi, but I think there is a reason.

I recently completed a pretty major [up|cross]grade of my home LAN. There were two main goals. First is to add some additional wireless VLANs so that I can help isolate my growing network of IoT devices and thus protect them and the rest of my devices from them. Second, and this is admittedly not a super important thing, it would also be nice if I didn’t have to connect to a different SSID between the house and workshop.

As detailed elsewhere, that was a journey of discovery, ending with the eventual replacement of two TP-Link APs and two Cisco switches with new Ubiquiti devices. Between the interoperability and ease of configuration, it was a good, though not particularly cheap, move.

Or so I thought.

A week or so later, it appears that I have a lot of connectivity issues. Once I really noticed it and started looking into it, I think I know what’s happening and at least a good bit of the solution, the full spectrum of which of course involves helping someone at Ubiquiti buy another Bentley.

I put the new APs exactly where the old ones were, but the old ones were perhaps a little more sophisticated, RF-wise. The TP-Link APs had an array of four large antennas and, sadly, physics matters. The UniFi APs look like oversized light switches and simply cannot have the same antenna performance. However, the one in the house seemed to work just fine until I had a compelling reason to move it. I won’t go into those details, but it had to move from about the center of the house to one side of the house. As I am typing this post, the UniFi controller reports my WiFi experience as “Good”, rather than excellent, along with everything else here on this, the opposite side of the house from the AP.

The real issue comes about when, for reasons I have yet to determine, the experience gets enough worse that the connection is lost. The device, most noticeably my laptop, then sees the other AP, all the way out in the workshop, with a signal that is only marginally ok compared to the one it just dropped, and roams to it. That weak signal results in a substantial slowdown, and maybe it drops that connection, too. Then it sees the original AP and connects to it again. This vicious cycle continues.

Once I figured out that this seems to be what is happening, I set the workshop AP to lower power. This makes it a less viable alternative to reconnect to and at least stops the toggling. It does not, however, address the actual problem, which is that the new Ubiquiti APs I chose simply don’t have the coverage that the TP-Link devices have.

So, I ordered a couple more APs, ceiling mounts this time. I think I can justify a unit in the hallway between the bedrooms and the living room and another in the kitchen. If those two provide dense enough coverage in the house, I can deploy the other wall mount in the workshop, to have two out there as well. If needed, however, I can find a place for a third in the house. One has seemed to be enough for the workshop thus far. Everything WiFi out there is within about 20 feet of the AP.

Update: The power setting on the workshop AP has definitely reduced the number of roaming events in the logs. [facepalm] Now they just disconnect instead. [/facepalm]

Bungle in the Dongle

With apologies to Ian Anderson.

Full disclosure. I am still not entirely sure exactly which of the myriad steps finally worked or what order of steps I finally accidentally hit, but the Zigbee and Zwave dongles just came up working again. Furthermore, it’s actually been a few days since that happened, so my memory of it is imperfect as well.

The all-critical final step, however, is emblazoned on my retinas forever. After doing this SEVERAL times already, removing and adding back the USB devices at the virtual machine level in the Synology NAS, with at least some number of minutes/hours/decades of delay, seems to let something reset enough that the device is not only detected (which it has been all along) but also works.

Of course, the damage I had done troubleshooting needed to be undone. Some devices had lost their names and almost worked. They had reverted to the default names given when the devices are first interviewed, generally some variation of their brand and model number.

Most of these were pretty easy to figure out, either by being relatively unique (at that point, I had deployed only one Third Reality motion sensor) or by what automation was still associated with the device. Curiously, those automations didn’t work until the name of the device was adjusted. For the aforementioned motion detector, I had set up a fairly sophisticated timer automation, as suggested by Smart Home Junkie, and parts of the automation needed to be adjusted with new device names as well.

I have all but one or two of those names resolved now, mostly because I just haven’t walked to the doors or plugs in question and verified which was which. Soon.

This power event did inspire me to look into moving Home Assistant over to dedicated hardware. Whether that is really justified depends on whether the advantages of dedicated hardware outweigh the advantages of hosting it as a VM and that really comes down to how it handles USB hardware under unusual conditions like an unplanned power loss.

Raspberry Pi hardware is still very hard to obtain, but I have a Celeron N4100 based fanless PC that is more than adequate to the task. I had a false start (not all the installation methods install identical versions), but did manage to get a suitable Home Assistant image on it and was able to restore a backup of my current configuration. For personal reasons beyond merely being the week of Christmas, I don’t want to tear into the actual move of all these devices to that hardware just this moment. After the holidays, I will revisit it.

Ublissiti

You load 16 ports, what do you get?
Another day older and deeper in debt

The 16 port PoE switch arrived today. With the exception of one adoption hiccup, it works really well and finally, all the VLANs are doing (mostly) what they are supposed to do. Now that the layer two works, I do still need to write firewall rules for them.

I configured DHCP to statically set the IP I wanted to use, so the switch came up where I wanted it to and I clicked Adopt and away it went. About 30 minutes later, it was obvious that it wasn’t going to complete that process, so I did a factory reset and had the controller forget it and this time, it completed the adoption process without further drama.

The only configuration I really needed was to make that one port where Starlink pops out on this end into a native VLAN 50. All the other ports would be just trunk ports. I did label a few ports.

I moved four cables over, the link to the workshop, the Starlink/WAN to the router, the LAN to the router and the laptop I was working from. The router had noted the loss of ping from Starlink and had switched to OneSource, so I had to jump in and force it to switch back, but otherwise everything worked perfectly and I moved the rest of the cables over and powered down the Cisco SG200.

I almost forgot to test connecting my phone to the IoT WiFi, but I did and it worked, as expected. This is, after all, one of the reasons I began this whole exercise. Victory dance!

I think the only thing left now is really more of an irritation than an issue.

When I moved the wiring over, I moved most of the cables jack for jack, so port 4 happened to have the NAS connected to it, and the controller (presumably) discovered it and helpfully labeled it. I decided to move the NAS to port 10 to free up a PoE port, but the controller is apparently finished discovering, and from what I can find thus far, the label does not appear to be an end user editable field. That seems very unlikely, but so far that is what I see, so I am stuck with an empty port labeled with something that has been moved to another port, which remains unlabeled.

Then again, it’s 2AM.

A Delicate Matter of Power

On the heels of solving one problem comes another.

We had a scary storm nearby a few days ago.

This is just a few miles north of us and it is just a tiny bit of the storm damage. It took power out for our neighborhood, though only for about two hours. That was long enough for the little CyberPower UPS feeding my network stuff to run down. It was also the first time I have had something hooked up back there that was apparently sensitive to losing power, namely my Home Assistant VM hosted on a Synology NAS.

Before I get into the rest of this story, I’ll say that my UPS does have a USB port, but it is not specifically listed on Synology’s compatibility page. Still, I will secure the proper cable and give it a try. Doing an orderly shutdown of the NAS might have avoided this whole thing. Or not; I suppose that depends on what it takes to finally fix it.

After we got power back, the Home Assistant VM would not finish booting up.

Note that it had been waiting for nearly 6 minutes for whatever was holding it up. Deep digging seemed to indicate that a database file for something may have gotten corrupted, meta.db. About the time I was figuring this out, I also remembered that I have the Synology doing nightly snapshot backups of this VM!

Though just this moment I don’t remember why, I elected to restore the snapshot from Dec 11. I have also since then locked that snapshot so that it won’t be rolled out. I may need it again.

That process at least got Home Assistant booted up and usable, though it took me a while to realize that the Zigbee controller was not up. Zwave was up and working, which was a relief after the previous debacle involving it. I needn’t have worried, though. The attempts to restore Zigbee operation would soon kill Zwave, too.

The details of both seem to indicate that the software can’t find the hardware. Almost surprisingly, however, the hardware is there.

The kernel sees them. So, why can’t Home Assistant?

My first concern is Zigbee because I have more devices on Zigbee and they include the lights in the back yard, used frequently for letting the doggies out. The oldest of these doggies has enough trouble finding the door in the daytime.

For experimental purposes, I also tried another snapshot option wherein you can configure a new VM from one of these snapshot backups. Interestingly, it generates a new bogus MAC address, so DHCP gives the new instance a new IP address. That was slightly unhandy, but easy to fix. The oldest December snapshot, from the 4th, behaved exactly the same, which leads me to believe that this is probably not a filesystem issue, so I have put back the December 11 snapshot and have been working with it.

I tried something kind of radical and deleted the Zigbee Home Automation (ZHA) integration. Home Assistant discovers the dongle like a newly installed piece of hardware:

Unfortunately, it returns an unknown error upon attempting to configure it.

If I instead choose to Add Integration and search for Zigbee, I get these options:

The first option is the same as trying to configure the new hardware. The second option complains that Zigbee integration is not yet set up, but gives an option to proceed to set it up:

It proceeded to look very normal, like it was going to work:

Then, it ground to a halt.

There is something new here, though. It looks like it has the same port in there twice, neither working.

If I click on either of those, I get the same ‘failed to set up/check logs’ message as always, so insanity.

To break the chain, I have ordered Sonoff’s *other* Zigbee device, based on the Silicon Labs chipset. It would seem they had cause to add to the Texas Instruments architecture, whether it was to help assure a diverse supply chain or in pursuit (or retreat) of some features. I hope to know soon.

WiFi Elevation

Act I: Somewhere deep in my first catchup post in this blog, I related that I had acquired a couple of nice TP-Link wireless routers with the intent to mesh them together so that the house and workshop would be on the same SSID, only to discover that this particular router apparently cannot be a mesh client in that situation. Further, none of the mesh clients in that product line support ethernet backhaul, which is pretty much required for this specific situation.

Act II: Enter home automation and a sudden propagation of WiFi IoT devices. I am not in any real danger of running out of IPs in my main LAN. It would also be trivial to just expand it if I were; it is all RFC1918 address space, with the exception of dealing with the work VPN and the RFC1918 subnets routed there. However, best practices would have the IoT stuff separated from the rest of the network in case they get somehow compromised on the internet and many of the home automation devices need no internet access at all.

With these two related goals in mind, WiFi roaming and multiple WiFi networks, I shopped around and decided that a couple of Ubiquiti APs would serve this need. Between Ubiquiti being affected by chip shortages like everyone else and a ceiling mount not being my first choice for various reasons, I got two in-wall APs as the best compromise that was also not too expensive.

While they were en route, I knew that I would need the controller console software running somewhere. Ubiquiti marketing really makes it sound like one of their hardware hosts is the only option, one of the Dream Machine or Cloud Key devices, but they have downloadable software that can run on a variety of OS platforms. You don’t really need the software except for initial setup unless you want to track statistics, so it can be run as needed on, for example, a Windows PC.

Another option is to run it as a Docker container in your NAS.

At one time, not that many years ago, I was an anti-VM snob. I was a hardware purist. I have pretty much gotten over that. There is almost no computer that is busy enough to justify doing only one task, especially one server task. Even modest hardware spends most of its time waiting for something to do. My Synology DS220+ is arguably pretty modest hardware and I have it recording six IP cameras, hosting my home automation leviathan and now it’s also the controller for my small collection of Ubiquiti networking components. Oh yeah, it’s still a NAS, too, backing up two computers. And it’s barely clicking over most of the time.

I digress.

There was really only one issue I had bringing them up on this controller. The APs need DHCP option 43 in order to find this controller and that took a bit of research, both in how to format the info and how to implement that in pfSense. It’s not difficult, but it did take some Googling to find the details.

The ’01:04′ tells the AP that the following info is the controller IP; the next 4 octets are just the IP of the controller in hexadecimal, 172.29.0.250 in this case.
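The conversion is mechanical enough to script. Here is a small Python sketch that builds the colon-separated hex string from a controller IP, assuming the 01:04 sub-option prefix described above (the function name is mine):

```python
# Build the DHCP option 43 value a UniFi AP looks for:
# sub-option 0x01, length 0x04, then the controller's four IPv4 octets,
# all as colon-separated hex.

def unifi_option43(controller_ip: str) -> str:
    octets = [int(part) for part in controller_ip.split(".")]
    payload = [0x01, 0x04] + octets
    return ":".join(f"{byte:02X}" for byte in payload)

print(unifi_option43("172.29.0.250"))  # 01:04:AC:1D:00:FA
```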

The related DHCP issue wasn’t super critical, but I wanted to assign static IPs for the APs. I neglected to set the static mapping before I plugged them in, so between the option 43 stuff and changing IPs, I had to have the controller ‘forget’ them, then factory reset them a couple of times to get things settled in.

Interestingly, I had not noticed until I pasted this image that the WorkshopAP, currently in the same room with the HouseAP while we wait for a PoE switch to arrive for deployment in the workshop, for some reason connected via WiFi instead of the ethernet by which it is powered. I will look into that, but I don’t expect that to be a long term issue; that AP will be elsewhere soon.

Meanwhile, I have created my VLANs, subnets and WiFi networks for the redesign. At the time I wrote that sentence, I needed only to update the VLAN configuration in the switch for everything to get talkin’. Little did I know…

As mentioned in the Starlink saga, VLANs are not a proprietary thing, but as I discovered, not all engineering teams approach them the same way or to the same degree. To recap that experience, the TP-Link switches would let me segment ports within a switch, essentially dividing it up into smaller switches, but they did not pass that VLAN tagging information on, so no connected switch was aware of any of those VLANs. I had a somewhat more sophisticated Cisco switch on hand, one that as a bonus also provided PoE for my cameras, so I found it a playmate to put in the workshop and I was able to do the VLAN task I needed in order to backhaul Starlink from the workshop to the house while keeping it isolated from the general LAN.

In this diagram, the blue lines are VLAN50, the Starlink backhaul and the green lines are VLAN1, the main LAN. The switches are able to trunk both VLANs over a connection, so whether it was a piece of wire or a couple of wireless devices, the VLAN tagging was preserved.

With the new APs, the task is similar. I want to be able to run multiple isolated WiFi networks, so it would seem to be the same task, but I could not coax the Cisco SG200 into doing the same thing between ports on the same switch, even after days of trying. On the other hand, I also learned that one weird trick about VLANs and pfSense on the Netgate 1100 appliance.

The path has not been a smooth one, either. During the last several days, I have lost access to the switch multiple times and to the router once. With the switch, sometimes a power cycle was in order, sometimes just moving to a port I had not accidentally lobotomized. For the router, I was able to connect to its serial port and restore a previously saved working configuration.

One important and telling troubleshooting step was to monitor DHCP requests with Wireshark. In that process, I could definitely see the discover messages from the AP hitting the switch port it was connected to, but they were not coming back out the switch port to the router. This largely cemented the issue as being the switch not really trunking VLANs as expected.

The solution, as is often the case, has turned out to not be free. I knew I would need a PoE switch for the AP in the workshop, and since (I guess?) I am kinda slowly moving to the Ubiquiti ecosystem, I ordered an 8 port switch to put out there. I started doing all this VLAN stuff while it was en route. Somewhere along the way I wondered, since all the UniFi YouTube videos depict this as a largely click-and-go process, whether the new switch would handle it as expected.

Yes. Yes, it does.

When the switch arrived, I first did all the adoption stuff and got it stable. Leaving it with all ports set to the “All” port profile, which is the default and expected to carry all VLANs, I plugged the USW-Lite-8-PoE switch in line between my Cisco switch and the LAN port on the pfSense router, with one of the APs also on the Ubiquiti switch. The VLAN 1 WiFi came up immediately, but then, it had been working all along. I found one thing (the weird VLAN tagging thing for the internal switch in the Netgate 1100 appliance mentioned above) was not quite right due to all the desperate experimentation I had been doing, but once that was corrected, I was able to connect to the IoT WiFi with my phone and get an IP address. It was glorious. I may have peed a little.

I removed the Ubiquiti switch and put the AP back on the Cisco, just to verify that it would fail again, that I had not just finally gotten all the configurations correct, and it definitely stopped working. So, lesson learned: for all its other strengths, the SG200-26P just doesn’t handle port-to-port VLAN trunking, at least not as the AP needs it to deploy multiple WiFi networks.

I then ordered a USW-Lite-16-PoE to replace the SG200-26P. It should be here in a couple of days.

Meanwhile, I configured a port on the USW-Lite-8-PoE for untagged VLAN50 and moved it out to the workshop. It plugged in and Starlink came up working perfectly. The Cisco again handles this sort of VLAN over a trunk just fine, and it doesn’t care that its partner switch is not a Cisco switch.

Importantly, the AP out there came up on the main VLAN and I can roam on that same WiFi network between the house and workshop; I no longer have to swap between HippyHollow and FlyingDog.

For nostalgic reasons, I will miss FlyingDog. It was a specifically chosen name for the network. I know! I will rename the AP!! Actually, I will rename both APs to what their networks used to be called. I are silly.

Not surprisingly, though, the IoT WiFi doesn’t work from out there, further evidence that the Cisco switch just isn’t passing those VLANs to the router. I am reasonably confident that when the USW-Lite-16-PoE is in place, the various alternate WiFi networks will work from everywhere.

Of course, then I have to start moving all the IoT devices to that network…

Tweaking A Few Things

A few unrelated projects have postponed tackling the plumbing part of putting the other two water meters in service. Meanwhile, I have been tweaking the behaviors and such for a couple other bits.

The heat lamp for the water system has turned itself on every day. It was indeed chilly for several days, so it wasn’t inconceivable that it had reached my low temperature trigger of 36F in the garage, but a couple of days over 50 that still had the light on every morning made me start actually checking.
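For reference, the low-temperature trigger is essentially this shape in Home Assistant (the entity names here are placeholders, not my actual ones):

```yaml
automation:
  - alias: "Water System Heat Lamp On"
    trigger:
      # Fire when the garage ambient temperature drops below 36F
      - platform: numeric_state
        entity_id: sensor.garage_ambient_temperature  # placeholder name
        below: 36
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.heat_lamp  # placeholder name
```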

I used one of the Emporia switches (cloud controlled; none of my Zigbee switches monitor power and that is a whole ‘nother conversation) so that I could monitor the power consumption of the outlet and thus detect a burned out bulb. The log showed some curious stuff though. This thing loses contact all the time.

If you look carefully, you can see the pattern. It becomes unavailable, then exactly one minute later, it comes back, appearing to change state, but I think it just renews its current state. That goes on as far back as you care to scroll.

However, at 6:00, something turns the switch on, then this same pattern just keeps repeating with the new status.

I checked the automation on the water temperature sensor and the outside temperature sensor, even though I had no automation tying *this* switch to the outside temperature.

Finally, I found it. In the Emporia app on my phone, there was a schedule to turn the switch on at 6AM every day. I had originally deployed this plug out in the workshop for the horse trough and wanted to ensure it was not accidentally turned off and left off. I disabled the schedule.

The light was on the next morning.

I deleted the schedule and it finally stopped. 🙂

In related news, I have two CloudFree P2 plugs. These are plugs preconfigured with Tasmota, and it turns out they have very extensive power monitoring capabilities and no cloud dependency. I have removed the other Emporia plug that (for reasons I don’t recall) was also deployed for the horse trough and put a CloudFree P2 in its place.

As you can see, it gathers a lot of power statistics. Currently, there is a recirculating pump that is running basically all the time. It is important, but not particularly critical. However, I am about to redeploy the deicing heater, and if it goes out, particularly during freezing conditions, I need to know about it. The automation I have in place currently will notify me if the outlet draws less than 10 watts for more than 10 minutes, which would currently only tell me if something disconnected the pump, which one of the horses used to do on occasion. The pump draws 108W pretty much continuously. The heater pulls 1200-ish watts when heating and about 1/2W when idle, presumably to work its thermostat. I presume I could script something to separate a pump failure from a heater failure, even though they run on the same outlet. I have more than once thought about separating them. At the very least, I could run them on two switch plugs.
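That notification automation is roughly this shape (the sensor entity and notify target are placeholder names, not my actual ones):

```yaml
automation:
  - alias: "Trough Pump Power Alert"
    trigger:
      # Fire only after power has been under 10 W for a full 10 minutes
      - platform: numeric_state
        entity_id: sensor.trough_plug_power  # placeholder name
        below: 10
        for: "00:10:00"
    action:
      - service: notify.mobile_app_phone  # placeholder notify target
        data:
          message: "Trough outlet under 10 W for 10 minutes - check the pump!"
```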

The bad news is that I only ordered two of the CloudFree P2 units and one of them turned out to be DOA. I have contacted CloudFree about it (no reply, but it has not even been 24 hours yet) but I also ordered several more. I will have other things to control and/or monitor and especially at $10 apiece (Black Friday 2022 pricing apparently; back to $13 as I write this, but still a great price), they are a great little bit of kit.

I had some difficulties excluding and including my old Jasco ZWave switches to my new controller, but all appears forgiven now. I pulled the air gap power plug thingy on them and that seemed to properly reset them enough for them to be happy.

I dug into the wiring for a couple of 3 way switches in the kitchen. They are wired in my second least favorite way for conversion to the Jasco switches I have had on hand for several years, but with some minor rewiring, I can make it work. I will post that separately one day. The one I really want to change out is a set of lights in our sunroom that is technically a 4 way, with one switch in the living room that is almost never touched, but is sure to complicate things, wiring-wise. I really want to deploy my Zooz scene controller where the kitchen switch for that sunroom light is and use it to control several things.

The barn/workshop automation has to be all Zigbee or WiFi. I don’t think I can reasonably extend the ZWave network out there. That isn’t expected to be a problem; I have good Zigbee and excellent WiFi out there. I have a couple of Zigbee light switches, but I’m not sure how they handle 3 way connections as yet. I may need to deploy some wireless toggle switches for those remotes.

I put a new battery in the Ecolink tilting garage door sensor that I have had since Vera was new, along with putting a ZWave repeater in what I hope was a suitable place to extend the network into the garage, but the sensor was last heard from 16 days ago as I write this. I may try again now that the two light switches are working and helping to provide a little more network stability.

Before I discovered ESPHome, I had gotten a Zooz Multirelay with the idea of using it for the water filter project. As it turns out, a couple of its features are not particularly well suited to the needs of the water filter, but they are entirely suitable for automating garage door openers.

All The Water Things

Early in my Home Assistant journey, I had planned to connect something to and/or control our self-cleaning whole house water filter.

I had discovered the ESP8266 and ESPHome’s pulse_meter sensor, and then planned to have one board to connect the sensors for the water meter (and later the unfiltered water meter) and a separate board for the water heater meter, since it is a short distance away. But it really is a very short distance.

After working with the ESP8266 for a bit, I have decided that there is no reason these functions cannot all be on the same board. Thus, the Home Assistant device “watermeter” has been repurposed and one of its identical siblings, “watersystem”, has been configured.

It may not win any prizes for awesome miniaturization, but it is packed in pretty tight. On the left is 5VDC power.

On the right, the pair of three terminal connectors are for the unfiltered and hot water flow meters. The voltage divider resistor networks to adapt the meters’ 5 volt output to 3.3 volts are on the bottom of the board. I have not yet connected them, but my confidence that they will operate is reasonably high.
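For reference, a divider in the usual 2:1 ratio does the 5 V to 3.3 V scaling; these particular resistor values are my assumption for illustration, not necessarily the ones on the board:

```latex
V_{out} = V_{in} \cdot \frac{R_2}{R_1 + R_2}
        = 5\,\mathrm{V} \cdot \frac{20\,\mathrm{k\Omega}}{10\,\mathrm{k\Omega} + 20\,\mathrm{k\Omega}}
        \approx 3.33\,\mathrm{V}
```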

Immediately below those is a Dallas TO92 DS18B20 temperature sensor. There is a 1/4″ hole drilled in the lid of the case to allow it to stick out a bit and measure the ambient temperature. Since all these water system components are installed in an unheated garage and it has gotten below freezing in there before, I need to monitor the ambient temperature and control a heat lamp during winter months.
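In ESPHome, that DS18B20 only needs a few lines. This sketch assumes the data line is on D4 and uses the newer `one_wire` syntax; older ESPHome releases used a `dallas` hub component instead:

```yaml
# 1-Wire bus for the DS18B20
one_wire:
  - platform: gpio
    pin: D4  # assumed data pin

sensor:
  - platform: dallas_temp
    name: "Garage Ambient Temperature"
    update_interval: 60s
```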

Next is the pulse input for the main water meter, shown connected in this picture.
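The pulse input uses ESPHome’s pulse_meter platform; a minimal sketch looks like this (the pin and unit choices are assumptions for illustration, not necessarily what is on this board):

```yaml
sensor:
  - platform: pulse_meter
    pin: D2  # assumed pin for the main meter's pulse output
    name: "Main Water Flow"
    unit_of_measurement: "gal/min"
    # Running total alongside the instantaneous rate
    total:
      name: "Main Water Total"
      unit_of_measurement: "gal"
```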

Finally, there is a connector intended to connect to the water filter to monitor when its cycle runs.

I have covered the details of the configuration elsewhere except for the binary_sensor.

binary_sensor:
  - platform: gpio
    pin:
      number: 15
      # D8 Filter Cycle Sense Switch
      mode:
        input: true
        pullup: true
    name: "Filter Cycle"
    filters:
      - delayed_on: 100ms
      - delayed_off: 100ms

This is the input to monitor the water filter cycle. Since the original water meter build, I have learned that the ESP8266 has built-in pullup resistors that can be activated, eliminating the need to add a physical resistor. In this application, I may not end up needing it at all. The filter settings are currently there assuming I would be debouncing a switch input.

The water filter has a motor that runs a multiposition valve with a cam and a single sense switch. Investigation indicates that the switch is normally open, with 5V from its own controller across its terminals when it is at rest. When the filter cycle begins, the switch closes as the valve cam moves. The filter cycle puts the valve into three positions: a flush and a rinse position during the cycle, and a normal position when done. I think I can monitor the 5V signal across this switch, then either simply time when it has returned to 5V for long enough that the filter cycle must have ended, or possibly even count the two stops in the process. I will need to play around with that.
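One way to sketch the “back at rest long enough” idea in ESPHome is to stretch the off-delay on the sense input; the 60 second threshold here is an assumption to be tuned, not a measured value:

```yaml
binary_sensor:
  - platform: gpio
    pin: 15  # D8 Filter Cycle Sense Switch
    name: "Filter Cycle"
    filters:
      # Only consider the cycle finished once the switch has been
      # back at rest (open, 5V) for a full minute.
      - delayed_off: 60s
    on_release:
      then:
        - logger.log: "Filter cycle appears complete"
```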

The one thing that might be added is to connect one more GPIO pin to a relay output in order to trigger a filter cycle remotely. There is plenty of room for a couple of wires to run to an external relay. The area in front of the USB jack needs to remain open in case it is needed to reprogram the board if it becomes unreachable on WiFi.

Funny story there. In arranging the ESP8266 D1 Mini board on the 40×60 perf board, I dithered on whether to mount the USB jack up or down. I decided to mount the board with the jack down, mostly because all the pinout illustrations of the D1 Mini board are shown that way. However, I did not account for how thick the connector is!

I was able to slightly mutilate the connector on a cable enough to fit, though. For future boards, I will need to put a little space under the board.

Sluggy Blogs All The Subjects