I have mentioned the WiFi camera at our front gate before. It has come up in a discussion about the power system for it and in a mysterious DHCP flooding issue that was discovered when I began looking at router logs.
As it turns out, the DHCP issue arose because the camera usually could not hear the router. The conversation would basically be:
Camera: Hey, can I have this IP?
Router: Sure, sounds good to me.
Camera: Hey, can I have this IP?
Router: Sure, sounds good to me.
Camera: Hey, can I have this IP?
Router: Sure, sounds good to me.
Happily, the camera would default to the last IP it had, and since the data flowed largely one way, from camera to NAS, I didn’t really miss anything. It also didn’t occur to me that the DHCP messages were a symptom of this one-sided communication, though it seems obvious now.
Once this occurred to me, I took some steps to address the problem. When I installed the ceiling mounted APs in the house, I had the wall mounted APs lying around. That model provides passthrough PoE, so I took one into the attic at the garage end of the house, closest to the gate camera. The driveway camera is PoE, so I reconfigured the cabling so that the cable powered the AP and the AP powered the camera. When I reconnected the gate camera to WiFi, the attic AP offered a better signal than whichever AP it had connected to before, and the DHCP issue got better.
It did not go completely away, but instead of DHCP requests several times per minute, all day, every day, they dropped to occasional bursts, though still most days. I don’t know what made it work better at some times than others, but it was definitely better.
I still wanted it to work correctly, so I ordered an actual outdoor AP. I was going to set it up quickly for testing before permanently installing it and headed into the attic with it. I suddenly realized that the new AP doesn’t do PoE passthrough, which would be needed to serve both the AP and the wired camera without running another CAT5 cable.
For the test, I connected the AP and even with it in the attic, the gate camera made only one DHCP request for the entire 30 minutes it was connected, which was definitely an improvement. I went ahead and drilled the hole and mounted the AP outside.
I decided to disconnect the wired camera for a day or two while I waited for the final piece of equipment to arrive, a USW-Flex switch. The USW-Flex is a 5 port switch that is both powered by PoE and provides PoE, within a reasonable power budget. As luck (and shipping) would have it, I had it the next day. It went in like a breeze.
The nails it is nestled between just happened to already be there and work perfectly to hold it in position where I can see it lit up from the attic stairs.
The ports are limited to 25W max each and 46W total, at temperatures up to 131F. The power ratings are reduced at higher temperatures; the high temperature range tops out at 149F and derates the total PoE power available to 25W. It seems obvious that Ubiquiti expected these units to be deployed in attics, where someone would use one the way I have, to keep from running another CAT5 cable.
The AP and the camera each draw less than 5W. I have two ports available for adding cameras as well and, assuming I stay with similar cameras, that seems likely to happen.
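As a sanity check, the power math can be sketched in a few lines of Python. The 25W/46W figures are the switch ratings quoted above; the ~5W per-device draws and the device names are my rough placeholders:

```python
# Rough PoE budget check for a USW-Flex-style switch:
# 25W max per port, 46W total (the low-temperature ratings quoted above).
PORT_MAX_W = 25.0
TOTAL_BUDGET_W = 46.0

def budget_ok(loads_w, port_max=PORT_MAX_W, total=TOTAL_BUDGET_W):
    """True if every device fits its port limit and the sum fits the budget."""
    return all(w <= port_max for w in loads_w.values()) and sum(loads_w.values()) <= total

loads = {"attic-AP": 5.0, "driveway-cam": 5.0}
print(budget_ok(loads))                    # plenty of headroom

loads.update({"future-cam-1": 5.0, "future-cam-2": 5.0})
print(budget_ok(loads))                    # two more similar cameras still fit
```

Even four similar devices land at about 20W, well under the 25W high-temperature derated budget.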
Oh, and the gate camera does one DHCP renewal per hour, which is as it should be. And the frame rate increased.
I was responding to a post on a forum about pfSense and Ubiquiti and how they get along (very well, thank you) and it occurred to me that I left part of the story hanging.
When last we left our heroes, I had kinda iffy WiFi coverage in the house with the new gear, compared to the old gear, but I had all the desired VLANs working. The solution, or at least the obvious solution, was to further finance the purchase of more Bentleys for Robert Pera by upgrading my APs.
As you may (or may not) recall, one problem I had was that stationary devices in the house would suddenly feel the need to roam and would manage to connect to the AP out in the workshop. The workshop was so distant that shortly they would roam off it, and the only other option was back to the AP in the house again. Vicious cycle ensues, cats and dogs, etc.
Well, the new APs arrived. I initially connected and tested by laying them about on desktops, but quickly got serious and ran wire for the main one to be deployed in the kitchen ceiling. The whole WiFi thing in the house got much better.
It is not without the occasional mystery, however. I have an apparently not terribly smart WiFi device, the intermediate display panel for my Accurite weather station. The suite of weather sensors outside communicates via a 433MHz signal to the display panel, which in turn connects to my LAN via WiFi and kicks the data out to WeatherUnderground. This WiFi device is literally 15 feet, line of sight, with no obstacles, from the kitchen AP. As I type this, the display panel is within reach of my left arm and the AP is two steps away. Despite that, this stupid display panel will connect only to the hallway AP on the other side of the house and ONLY that AP. I have tried every conceivable way to reset the display panel and the APs. The only combination that works is the hallway AP online and accepting the Accurite panel. Nothing seems to make it forget the hallway AP and connect to the much closer kitchen AP.
Hard work often pays off after time, but laziness always pays off now.
I received the replacement Renogy solar charge controller. I actually ordered three of them, with visions of them doing other stuff. However, with the current camera connection to the battery working just fine, I am not in a huge hurry to change it and that laziness has paid off because the gate camera has been just fine.
Oh, I will change it out, just not while it’s still cold outside. We will have a warm day soon, I’m sure. It is winter in Texas, after all.
I think I will put the “broken” controller on the Kawasaki Mule, to keep its battery charged even if we don’t use it all the time.
I have mentioned that I have a camera mounted by our front gate, powered by a dedicated solar panel, charger and battery.
I chose this Renogy Wanderer 10A solar charge controller specifically because it has USB power outlets built in, elegantly feeding the USB power cord to the camera.
The battery box lid, designed for indoor use apparently, had air vents on the top and allowed rain inside. The first controller died a wet death. I covered those air vents. It’s not as if the rest of the lid fits with a hermetic seal, so the box is still well ventilated, but now it is at least rain tight.
Wintertime is tough on solar stuff. So many overcast days result in less efficient charging. The camera draws 100-200mA. The granularity of the charge controller’s output meter is 0.1A; it usually says 0.1A, but sometimes 0.2A. Maybe when it is darker and the IR illuminator is on?
A couple of overcast days is enough for the lawn tractor battery to fall behind and drop power, especially after dark. The tractor battery is not intended for deep cycle use, although the gate controller is only on its second battery in about 10 years. I think the constant drain of the camera, even at only 200mA, is probably the issue.
I have considered changing the battery to an actual deep cycle battery. WalMart carries an inexpensive ($80-ish) marine battery in the 24 size class. Real specifications for that specific battery were elusive, but I found several 24 size marine batteries from other sources. If they mentioned it at all, the amp-hour rating was 75-80 AH, so that seems likely for the EverStart at WalMart.
A more useful number for power service is the watt hour, literally how many watts for how long. It’s really handiest when comparing power systems that might not be running the same voltage. Conversion of AH to WH is easy enough. Voltage times amperage is wattage. For example, an off-grid solar system running 12V with 100AH of battery can be compared directly to a 48V system with 25AH of battery because they would both provide 1200 watt hours of power.
12V × 100AH = 48V × 25AH
In the case of the EverStart, 80AH x 12V = 960WH. A 12 volt battery is rarely actually 12 volts, though. Lead-acid batteries like to charge at 2.3 or so volts per cell, so a 12V (six cell) battery’s ideal charging voltage is 13.8V. The open circuit voltage for a fully charged battery is about 2.1 volts per cell, so 12.6V is typical. Plug that number into the formula and a 12.6 volt 80AH battery should be good for 1008 watt hours.
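That arithmetic is trivial to script. A quick sketch (the function name is mine):

```python
def watt_hours(volts, amp_hours):
    """A battery's stored energy: voltage times amp-hour capacity."""
    return volts * amp_hours

# The two example systems store the same energy:
assert watt_hours(12, 100) == watt_hours(48, 25) == 1200

# The EverStart-class battery at a resting 12.6V:
print(watt_hours(12.6, 80))  # 1008.0
```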
Once you have a watt hour rating, figuring out how long a charged battery should last is pretty straight forward. In the above example, under ideal conditions, you should be able to draw one watt for 1008 hours, 2 watts for 504 hours, 50 watts for 20+ hours, and so on.
In practice, lead acid batteries can be permanently damaged if discharged too deeply. Deep cycle batteries are optimized to limit this damage, but you can’t eliminate it. It’s just how the chemistry in there works. It is good practice not to discharge a lead acid battery below 50% charge, so derating that watt hour rating by half would be wise. One watt for 504 hours, 2 watts for 252 hours, etc.
Assuming 200mA for the camera and the specified 10mA for the charge controller itself, an 80AH battery should be able to run the camera for about 190 hours to the 50% charge level, or about 8 days. That should be plenty.
Then again, lead acid batteries need maintenance, especially during the summer months. All the cool kids are using lithium iron phosphate batteries in their solar systems, mostly because they enjoy certain efficiencies, but also because they are essentially zero maintenance, so I started looking into that.
LiFePO4 batteries can be charged and discharged faster than lead acid, are much lighter and, because of smart onboard electronics, don’t really need a super smart charger. They are not particularly cheap, however.
I found that 20AH LiFePO4 batteries run about the same price as 80AH lead acid batteries. At the same load, 20AH works out to 96 hours, and these batteries protect themselves by cutting off at about 75% discharge, so 72 hours is more realistic.
So, roughly the same price as the big lead acid battery and only 3 days without a charge, but zero maintenance and an arguably longer service life. I decided it was worth trying one out at the very least.
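The comparison works out roughly like this, a quick sketch assuming the ~210mA total load (camera plus controller) from earlier:

```python
def runtime_days(amp_hours, usable_fraction, load_amps=0.21):
    """Days of runtime at a constant load, using only the safe slice of capacity."""
    return amp_hours * usable_fraction / load_amps / 24

# ~210mA total: 200mA camera + 10mA charge controller.
lead_acid = runtime_days(80, 0.50)   # discharge lead acid only to 50%
lifepo4   = runtime_days(20, 0.75)   # LiFePO4 BMS cuts off around 75% discharge
print(round(lead_acid, 1), round(lifepo4, 1))  # 7.9 3.0
```

Working in amp-hours keeps the voltage difference between the chemistries out of the rough estimate.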
Via Amazon, I chose a Vestwoods 20AH battery. The Renogy charge controller has a lithium mode. Part of setting that mode involves setting the charge voltage. The labelling on the battery indicates that it is a 12.8 volt battery, so I took that as the best guess.
It was delivered at 7:24 on December 30, ironically about a foot from where it would be installed. Here’s Amazon’s delivery photo. The battery box nearest to it is where it goes and the camera above it is what it powers.
It’s about the size of a large motorcycle battery.
I connected it almost immediately, by flashlight.
According to the recordings from that camera, it ran from 9:05PM on December 30 until 12:37AM New Year’s Day.
The immediate presumption was that changing the battery type caused an issue. I didn’t have a lot of free time to troubleshoot, so it took me a few days to determine that what had actually happened was that the power output from the charge controller had failed. All indications are that it is on, but there is no power on either the USB jacks or the screw terminals.
Before I found the Renogy charge controller, I had purchased a 12V to USB adapter, assuming I would be using something like that connected directly to a battery. Using that, I was able to at least get the camera operational again, pending the arrival of a replacement charge controller.
My new WiFi is arguably worse than my old WiFi, but I think there is a reason.
I recently completed a pretty major [up|cross]grade of my home LAN. There were two main goals. First is to add some additional wireless VLANs so that I can help isolate my growing network of IoT devices and thus protect them and the rest of my devices from them. Second, and this is admittedly not a super important thing, it would also be nice if I didn’t have to connect to a different SSID between the house and workshop.
As detailed elsewhere, that was a journey of discovery, ending with the eventual replacement of two TP-Link APs and two Cisco switches with new Ubiquiti devices. Between the interoperability and ease of configuration, it was a good, though not particularly cheap, move.
Or so I thought.
A week or so later, it appears that I have a lot of connectivity issues. Once I really noticed it and started looking into it, I think I know what’s happening and at least a good bit of the solution, the full spectrum of which of course involves helping someone at Ubiquiti buy another Bentley.
I put the new APs exactly where the old ones were, but the old ones were perhaps a little more sophisticated, RF-wise. The TP-Link APs had an array of four large antennas and, sadly, physics matters. The UniFi APs look like oversized light switches and simply cannot have the same antenna performance. However, the one in the house seemed to work just fine until I had a compelling reason to move it. I won’t go into those details, but it had to move from about the center of the house to one side. As I type this post, the UniFi controller reports my WiFi experience as “Good” rather than Excellent, along with everything else over here on the opposite side of the house from the AP.
The real issue comes about when, for reasons I have yet to determine, the experience gets enough worse that the connection is lost. The device, most noticeable to me when it is my laptop, then sees the other AP, all the way out in the workshop, with a signal that is marginally OK, at least compared to the one it just dropped, and roams to that AP. Now that weak signal results in a substantial slowdown, and maybe it drops that connection, too. Then it sees the original AP and connects to it again. This vicious cycle continues.
Once I figured out that this seems to be what is happening, I set the workshop AP to lower power. This makes it a less viable alternative to reconnect to and at least stops the toggling. It does not, however, address the actual problem, which is that the new Ubiquiti APs I chose simply don’t have the coverage that the TP-Link devices have.
So, I ordered a couple more APs, ceiling mounts this time. I think I can justify a unit in the hallway between the bedrooms and the living room and another in the kitchen. If those two provide dense enough coverage in the house, I can deploy the other wall mount in the workshop, to have two out there as well. If needed, however, I can find a place for a third in the house. One has seemed to be enough for the workshop thus far; everything WiFi out there is within about 20 feet of the AP.
Update: The power setting on the workshop AP has definitely reduced the number of roaming events in the logs. [facepalm] Now they just disconnect instead. [/facepalm]
You load 16 ports, what do you get? Another day older and deeper in debt
The 16 port PoE switch arrived today. With the exception of one adoption hiccup, it works really well and finally, all the VLANs are doing (mostly) what they are supposed to do. Now that the layer two works, I do still need to write firewall rules for them.
I configured DHCP to statically set the IP I wanted to use, so the switch came up where I wanted it to and I clicked Adopt and away it went. About 30 minutes later, it was obvious that it wasn’t going to complete that process, so I did a factory reset and had the controller forget it and this time, it completed the adoption process without further drama.
The only configuration I really needed was to set the one port where Starlink pops out on this end to native VLAN 50. All the other ports would be just trunk ports. I did label a few ports.
I moved four cables over: the link to the workshop, the Starlink/WAN to the router, the LAN to the router and the laptop I was working from. The router had noted the loss of ping from Starlink and had switched to OneSource, so I had to jump in and force it to switch back, but otherwise everything worked perfectly, so I moved the rest of the cables over and powered down the Cisco SG200.
I almost forgot to test connecting my phone to the IoT WiFi, but I did and it worked, as expected. This is, after all, one of the reasons I began this whole exercise. Victory dance!
I think the only thing left now is really more of an irritation than an issue.
When I moved the wiring over, I went mostly jack for jack, so port 4 happened to have the NAS connected to it, and the controller (presumably) discovered it and helpfully labeled the port. I decided to move the NAS to port 10 to free up a PoE port, but the controller has apparently finished discovering, and from what I can find thus far, the label does not appear to be an end user editable field. That seems very unlikely but, so far, that is what I see, so I am stuck with an empty port labeled for something that has been moved to another port, which remains unlabeled.
Act I: Somewhere deep in my first catchup post in this blog, I related that I had acquired a couple of nice TP-Link wireless routers with the intent to mesh them together so that the house and workshop would be on the same SSID, only to discover that this particular router apparently cannot be a mesh client in that situation. Further, none of the mesh clients in that product line support ethernet backhaul, which is pretty much required for this specific situation.
Act II: Enter home automation and a sudden propagation of WiFi IoT devices. I am not in any real danger of running out of IPs in my main LAN. It would also be trivial to just expand it if I were; it is all RFC1918 address space, with the exception of dealing with the work VPN and the RFC1918 subnets routed there. However, best practices would have the IoT stuff separated from the rest of the network in case they get somehow compromised on the internet and many of the home automation devices need no internet access at all.
With these two related goals in mind, WiFi roaming and multiple WiFi networks, I shopped around and decided that a couple of Ubiquiti APs would serve this need. Between Ubiquiti being affected by chip shortages like everyone else and a ceiling mount not being my first choice for various reasons, I got two in-wall APs as the best compromise that was also not too expensive.
While they were en route, I knew that I would need the controller console software running somewhere. Ubiquiti marketing really makes it sound like one of their hardware hosts, a Dream Machine or Cloud Key device, is the only option, but they have downloadable software that can run on a variety of OS platforms. You don’t really need the software running except for initial setup, unless you want to track statistics, so it can be run as needed on, for example, a Windows PC.
Another option is to run it as a Docker container in your NAS.
At one time, not that many years ago, I was an anti-VM snob. I was a hardware purist. I have pretty much gotten over that. There is almost no computer that is busy enough to justify doing only one task, especially one server task. Even modest hardware spends most of its time waiting for something to do. My Synology DS220+ is arguably pretty modest hardware and I have it recording six IP cameras, hosting my home automation leviathan and now it’s also the controller for my small collection of Ubiquiti networking components. Oh yeah, it’s still a NAS, too, backing up two computers. And it’s barely clicking over most of the time.
There was really only one issue I had bringing them up on this controller. The APs need DHCP option 43 in order to find this controller and that took a bit of research, both in how to format the info and how to implement that in pfSense. It’s not difficult, but it did take some Googling to find the details.
The ’01:04′ tells the AP that the following info is the controller IP, then the next 4 octets are just the IP of the controller in hexadecimal format, 172.29.0.250 in this case.
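Working out that hex string by hand is error prone, so here is a small sketch that builds it for any controller IP. The colon-separated output is the format pfSense accepted in my case; the function name is mine:

```python
import ipaddress

def unifi_option_43(controller_ip: str) -> str:
    """Build the DHCP option 43 hex string that points UniFi gear at a controller.

    Suboption 0x01 (controller address), length 0x04, then the IPv4 address
    as four hex octets, formatted as a colon-separated string.
    """
    octets = ipaddress.IPv4Address(controller_ip).packed
    return ":".join(f"{b:02x}" for b in (0x01, 0x04, *octets))

print(unifi_option_43("172.29.0.250"))  # 01:04:ac:1d:00:fa
```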
The related DHCP issue wasn’t super critical, but I wanted to assign static IPs for the APs. I neglected to set the static mapping before I plugged them in, so between the option 43 stuff and changing IPs, I had to have the controller ‘forget’ them, then factory reset them a couple of times to get things settled in.
Interestingly, I had not noticed until I pasted this image that the WorkshopAP, currently in the same room with the HouseAP while we wait for a PoE switch to arrive for deployment in the workshop, for some reason connected via WiFi instead of the ethernet by which it is powered. I will look into that, but I don’t expect that to be a long term issue; that AP will be elsewhere soon.
Meanwhile, I have created my VLANs, subnets and WiFi networks for the redesign. At the time I wrote that sentence, I needed only to update the VLAN configuration in the switch for everything to get talkin’. Little did I know…
As mentioned in the Starlink saga, VLANs are not a proprietary thing, but as I discovered, not all engineering teams approach them the same way or to the same degree. To recap that experience, the TP-Link switches would let me segment ports within a switch, essentially dividing it up into smaller switches, but they did not pass the VLAN tagging information on, so no connected switch was aware of any of those VLANs. I had a somewhat more sophisticated Cisco switch on hand, one that as a bonus also provided PoE for my cameras, so I found it a playmate to put in the workshop and I was able to do the VLAN work needed to backhaul Starlink from the workshop to the house while keeping it isolated from the general LAN.
In this diagram, the blue lines are VLAN50, the Starlink backhaul and the green lines are VLAN1, the main LAN. The switches are able to trunk both VLANs over a connection, so whether it was a piece of wire or a couple of wireless devices, the VLAN tagging was preserved.
The path has not been a smooth one, either. During the last several days, I have lost access to the switch multiple times and to the router once. With the switch, sometimes a power cycle was in order, sometimes just moving to a port I had not accidentally lobotomized. For the router, I was able to connect to its serial port and restore a previously saved working configuration.
One important and telling troubleshooting step was to monitor DHCP requests with Wireshark. In that process, I could definitely see the discover messages from the AP hitting the switch port it was connected to, but they were not coming back out the switch port to the router. This largely cemented the issue as being the switch not really trunking VLANs as expected.
The solution, as is often the case, has turned out to not be free. I knew I would need a PoE switch for the AP in the workshop and since (I guess?) I am kinda slowly moving to the Ubiquiti ecosystem, I ordered an 8 port switch to put out there. I started doing all this VLAN stuff while it was en route. Somewhere along the way I wondered, since all the UniFi YouTube videos depict this as a largely click and go process, whether the new switch would handle it as expected.
Yes. Yes, it does.
When the switch arrived, I first did all the adoption stuff and got it stable. Leaving it with all ports set to the “All” port profile, which is the default and expected to carry all VLANs, I plugged the USW-Lite-8-PoE switch in line between my Cisco switch and the LAN port on the pfSense router, with one of the APs also on the Ubiquiti switch. The VLAN 1 WiFi came up immediately, but then, it had been working all along. I found one thing (the weird VLAN tagging thing for the internal switch in the Netgate 1100 appliance mentioned above) was not quite right due to all the desperate experimentation I had been doing, but once that was corrected, I was able to connect to the IoT WiFi with my phone and get an IP address. It was glorious. I may have peed a little.
I removed the Ubiquiti switch and put the AP back on the Cisco, just to verify that it would fail again, that I had not just finally gotten all the configurations correct, and it definitely stopped working. So, lesson learned: for all its other strengths, the SG200-26P just doesn’t handle port to port VLAN trunking, at least not the way the AP needs it to deploy multiple WiFi networks.
I then ordered a USW-Lite-16-PoE to replace the SG200-26P. It should be here in a couple of days.
Meanwhile, I configured a port on the USW-Lite-8-PoE for untagged VLAN50 and moved it out to the workshop. It plugged in and Starlink came up working perfectly. The Cisco again handles this sort of VLAN over a trunk just fine, and it doesn’t care that its partner switch is not a Cisco switch.
Importantly, the AP out there came up on the main VLAN and I can roam on that same WiFi network between the house and workshop; I no longer have to swap between HippyHollow and FlyingDog.
For nostalgic reasons, I will miss FlyingDog. It was a specifically chosen name for the network. I know! I will rename the AP!! Actually, I will rename both APs to what their networks used to be called. I are silly.
Not surprisingly, though, the IoT WiFi doesn’t work from out there, further evidence that the Cisco switch just isn’t passing those VLANs to the router. I am reasonably confident that when the USW-Lite-16-PoE is in place, the various alternate WiFi networks will work from everywhere.
Of course, then I have to start moving all the IoT devices to that network…
There was really nothing wrong with my old NAS. It was largely underutilized, other than that I had hit the limit on how many cameras it could record in Surveillance Station; Synology is smart enough not to oversubscribe the hardware’s capabilities and limits the DS120j to 5 cameras. That was really the main thing I wanted to upgrade for, and even that wasn’t a big rush.
Still, I had planned to upgrade once I paid off my Amazon credit card and since I had done that…
I did some comparison shopping and finally decided that I would get the most bang for my buck with the DS220+. I wanted the ability to add a second drive, and the Intel Celeron in the DS220+, with only 2 cores running at 2.0 to 2.9GHz, could still outperform the DS218/DS218play. It also had an expansion memory slot, a feature that alone would turn out to be handy for me.
Well, it would be once I got the right memory module. I ordered the right stuff, DDR4, and the labelling on the packaging was correct, but the module in the package was a DDR3 SODIMM, with a different pinout. I processed the return with Amazon thinking I had just ordered the wrong thing, before I looked at the packaging carefully. No matter, it’s sorted out now and the DS220+ is now maxed out at 6GB.
It was not clear ahead of time whether I could swap the physical drive from my DS120j into the DS220+ and have it work correctly. I seem to recall that once I had the hardware in hand, it became obvious that it could be done, but that the performance would suffer because of the differences in the newer and older file systems. I chose what was probably the best way, though it would turn out to be very slow. I set up the new unit fresh, then did a backup of the old unit’s files, using a share on the new unit as the destination. Then I could restore the old unit’s files from that now local source.
It took a surprisingly and irritatingly long time for that backup to complete, and the local restore wasn’t a whole lot faster. The whole process took the better part of 48 hours. On the other hand, it was also completely trouble free.
When everything was restored and settled down, I was able to move my camera license pack from the old NAS to the new NAS and, although I didn’t have another camera immediately available, it did not complain when I attempted to add one more camera. Mission accomplished!
I have currently set up the old NAS as a backup target for the new NAS and set up a schedule. It just runs. I am not backing up the surveillance footage simply because that is just huge and pretty perishable anyway.
Soon, however, I would find an even better reason why I am glad I went for the more powerful CPU and full boat of RAM.
It’s a $20 part that, arguably, we shouldn’t have to add to our system, but that’s the way they built it, so….
My ethernet adapter arrived today. Mechanically, it’s very simple. A cable and box that plugs in between the rectangular dishy and the “router”/power supply.
To properly use it, one must disable the Starlink router by going into the Starlink app on your phone and setting the WiFi mode to “bypass”, which is kinda bogus. This mode setting, once committed, is reversible only by a manual factory reset. Sigh. Great design, guys.
In any case, it pretty much works.
I got a new IP address. It is pretty much just as useless as the previous one, but at least it’s in a different subnet.
The new IP for WAN is 100.64.0.1. All indications are that probably all systems with a square dishy will get the same semi-bogus address. According to ARIN, this block “is used as Shared Address Space.” Digging deeper, this range is set aside for what is called Carrier Grade Network Address Translation, or CGNAT. It makes sense that Starlink would use such a NAT scheme, but it does cause some slight problems with inbound stuff, like port forwarding.
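For anyone curious whether an address falls in that shared space, a quick check with Python's standard library:

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 as shared (CGNAT) address space.
CGNAT = ipaddress.ip_network("100.64.0.0/10")

def is_cgnat(addr: str) -> bool:
    """True if the address is in the carrier-grade NAT shared range."""
    return ipaddress.ip_address(addr) in CGNAT

print(is_cgnat("100.64.0.1"))   # True: the Starlink WAN address above
print(is_cgnat("8.8.8.8"))      # False: ordinary public space
```

Traffic from such an address is NATed at least twice before it reaches the public internet, which is exactly why inbound port forwarding is a problem.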
It replaced a WiFi access point, which was wired to my LAN and was the only WiFi device connected to Starlink, depicted below as the FlyingHippy SSID.
This functioned perfectly, but as my whole LAN is arguably overcomplicated already, any simplification has to help. Also, the WiFi specs published for the BrosTrend device are full speed on the wireless side, 867Mbps on 5GHz or 300Mbps on 2.4GHz, but for my use case, where Starlink is the only WiFi device connected to it, those wireless specifications mean little because the wired ethernet side is 100Mbps.
Since Starlink can exceed 100Mbps (sadly, only occasionally in practice), it was considered a potential bottleneck.