EVs, Renewable Energy Systems and a few aircraft

On 25 Jan 2023 I held a tour of The Vale’s energy system (https://aeva.asn.au/events/450/) for members of AEVA in Tasmania. We had a lot of people (and a lot of EVs) turn up 🙂

There was a slide show the same evening in Devonport where I found myself telling the story of my personal journey with EVs over the years, alongside the story of building the Vale’s Renewable Energy System (and mentioning a few aircraft here and there too).

The recording is here: https://youtube.com/watch?v=fP5Ce2XHLNI&t=2017s (start at 33 min in).

It runs for about an hour, with questions.

How to make AXIS Web Cameras work properly on Safari (macOS and iOS)

Do you have AXIS web cameras? Do you use macOS or iOS?

Chances are that you have been suffering the same issue that I have for more than a year now… and that has been driving me a bit mad!

The issue is that current versions of the AXIS OS web interface interact badly with Safari, such that Safari keeps popping up a login window over and over (and over and over). You can hit ‘cancel’ to dismiss the login box, but it keeps on coming back, and back, and back. It’s infuriating.

When I asked AXIS support for help, they didn’t help me at all. They just told me to use another browser (which works, but I don’t want to use another browser).

I have finally found the fix, and I wanted to write it down in case it helps others. I found it today on an AXIS web page, in reference to a specific camera and iOS, but it turns out that the fix works in general with all AXIS cameras, and with Safari on macOS as well as on iOS. Yay!

To fix this issue on iOS:

To use AXIS OS web interface with iOS 15 or iPadOS 15, go to Settings > Safari > Advanced > Experimental Features and disable NSURLSession Websocket.

To fix this issue on macOS:

If the ‘Develop’ menu is not visible in Safari on the Mac, first make it appear by doing this:

Go to Safari > Preferences, click Advanced, then select “Show Develop menu in menu bar”.

Now, turn off the same ‘Experimental Feature’ (NSURLSession Websocket) by finding it and un-checking it under the Develop > Experimental Features menu.

and…happy days 🙂

Fixing constant Sonos disappearances from a Ubiquiti UniFi WiFi network

Subtitle: “Ubiquiti Auto-Optimise breaks stuff again” (it is also responsible for the issue described in the next post).

The Scenario

I have been having persistent, annoying and sustained issues with older Sonos devices dropping off my WiFi network after a while. This is on a network driven entirely by Ubiquiti UniFi products (switches and access points connected to a UDM-Pro).

The older Sonos products would regularly disappear from view in the Sonos apps.

Power cycle them and they return, but then any time from a few minutes to a few hours later… they’re gone again. Power cycle yet again. Rinse and Repeat. Grrr.

The Problem

The Ubiquiti ‘auto-optimize’ function strikes again – and breaks stuff. Hmmm.

The ‘optimize’ function modifies the minimum data rate for 2.4 GHz ‘beacon’ frames to a rate above the 802.11b default. Doing this breaks WiFi connectivity for devices that can only use 802.11b. The older Sonos models use 802.11b. So… this breaks them.

The Fix

(1) Turn off ‘auto-optimize’ (under Advanced settings in the Network settings page).

(2) Turn off the “2G Data Rate Control” checkbox on the Wireless Network page for the WiFi SSID concerned (see images below) to restore working 802.11b WiFi connectivity.

The Details (if you need or want them)

I have Sonos One devices and (older) Sonos Play:5 devices. It’s the Play:5s that kept going away.

Importantly, the older Sonos units are 2.4 GHz, 802.11b-only devices… they can’t talk on 5 GHz (but the UniFi APs can bridge them to controlling devices running on newer hardware and newer bands).

After a lot of head scratching, I realised that ‘something’ had changed a critical setting on my UDM-Pro.

The setting that was changed was the one (in the Wireless Network page) called “2G data rate control”.

This had been turned on (by the ‘auto optimizer’) and set to 5.5Mb/s minimum:

This is a minimum beacon rate that (as it says on the page) causes “Limited range and limited connectivity for 802.11b devices”

The older Sonos devices are 802.11b, 2.4 GHz-only devices! Hence this setting is guaranteed to break Sonos network access – and to keep breaking it. Argh.

So: The fix is to set that minimum data rate back down to 1Mb/s, at which point the on-screen text changes to “Full Device Compatibility and Range”:

Fixed! Yay!

It is simpler (and more direct) to just disable the rate changing function entirely, by un-checking the ‘Enable minimum data rate control’ box.

However, I wanted to point out the changes in that explanatory text with the screen images above. These underscore that the optimiser (which is on by default) really does make changes that break older WiFi device access (and without warning the user that this is what will happen).

Note that on the UDM (vs the UDM-Pro), there are different checkboxes to restore 802.11b connectivity. On that platform, you (again) turn off the optimiser, and then under your WiFi network configuration page you’ll find a switch hiding in the “Advanced” section called “Enable Legacy Support”. This switch is explained with the text ‘Enable legacy device support (i.e. 11b)’. Duh. So turn that on (i.e. do enable the legacy device support).

I’m sure this is breaking connectivity for other devices too – there are going to be lots of little gadgets in people’s networks that are older or simpler and that only support 802.11b. If you want those to work… you’ll want to enable that legacy support.

Update: We have some wall mounted underfloor heating controllers (with WiFi) in the house… that were unreliable and just couldn’t hold a WiFi connection. You know where this is going … now they work just fine.

Ubiquiti UniFi OS bug: Wireless LAN peer-to-peer traffic (incl. printing and Sonos audio) broken by a recent software update

I’m writing this down in the hope that someone else trying to solve this issue via some random ‘Googling’ may find this article and that it may save them some time… compared to how long it took me to solve this!

I run a moderately sized UniFi based network consisting of a UDM-Pro, 10 UniFi switches on a fibre ring, and 19 UniFi wireless LAN access points.

I turned on automatic software updates on the UDM-Pro a while ago, and I am presuming that the issue that has occurred here is as a result of the latest software that got auto-installed (UniFi UDM Pro Version 1.10.4 and ‘Network’ version 6.5.53).

It took me a few days to figure out that the UniFi network was the cause of broken printer operation and also the cause of our on-site Sonos audio devices all ceasing to work.

The way the ‘breakage’ occurs is subtle – some, but not all, peer-to-peer IP traffic over the wireless LAN was being silently dropped. Broken things included multicast based protocols (such as AirPrint) and also initial connection establishment (with the ARP protocol’s broadcast frames apparently being filtered as well).

This impacted some network nodes, but not all of them.

The way this first manifested was that our printer stopped working. It didn’t work wireless-to-wireless, and it didn’t work when I tried cabling it into a switch and accessing it via wireless on my laptop.

The failure to work even on a wired connection convinced me the printer was just… b0rked, and that it was time to replace it.

In a similar timeframe, the Sonos audio devices in our house also stopped working. I didn’t initially register the timing coincidence with the printer fault because, frankly, Sonos network communication sometimes gets flaky ‘all by itself’… and I had shelved that issue to figure out ‘later’.

In the meantime, we actually went and bought a new printer! When I fired that up, it failed to work as well (!). Ah hah… ‘lightbulb moment’. It wasn’t the printer. It was something far more fundamental about the network. It had to be.

I started digging deeper, with that new data point (that it wasn’t actually the printer at fault). Eventually, via various network diagnostic steps, I figured out that a strange network failure mode was the real issue.

Since it seemed to be impacting broadcast and multicast frames, I started looking for, and adjusting, settings related to that functionality in the UDM-Pro.

Eventually I landed on the fix:

  1. Turn OFF the ‘AUTO OPTIMIZE NETWORK’ setting under the Settings->Site menu. This is necessary in order to unlock the ability to change the next setting
  2. Turn OFF the checkbox called “Block LAN to WLAN Multicast and Broadcast Traffic” on the Settings->Wireless Networks-> [ Your network name here ] -> EDIT WIRELESS NETWORK page
Turn this off to fix the fault

It is that latter setting that is doing the damage. When on, it is literally blocking things as fundamental as ARP broadcast packets, so that IP connections can’t even be established reliably between hosts on the local network. Hosts can all talk to ‘the wider Internet’ just fine, but they can’t talk to each other.

I didn’t take any manual action to break the network – all I did was leave automatic OS updates turned on, and a few days ago these mysterious faults appeared.

See the screen shots below to help you find where those settings are located in the UniFi web console interface.

As soon as I took the steps above, the UDM-Pro re-provisioned all my UniFi WiFi access points and everything started working again – the printer started working perfectly, and Sonos started working fine. I proved this is the issue by turning the setting back on and – bingo – instant network faults. Off again… and it all works again.

I hope this helps someone else!

I’ve reported it to Ubiquiti because I think they need to urgently make a further update to fix this pretty fundamental bug, and hopefully they’ll indeed fix it.

In the meantime, I hope this post helps someone else avoid the head scratching (and/or throwing out perfectly good printers and/or Sonos units!).

This is where the ‘AUTO-OPTIMISE’ switch is (turn off to unlock the following setting change)
Turn off the checkbox at the bottom of this screen to fix the WLAN network fault

Update-1:

12 Dec 2021: The ‘Network’ version has been incremented to 6.5.54.

Some good has been done in this incremental release: it has been kindly pointed out to me that a workaround is now in there for small sites, so at least small sites (fewer than 10 APs) won’t suffer this issue any longer.

The following comment has been added to the release notes for 6.5.54 (from here):

  • Enable multicast block if Auto-optimize is enabled, and there are more than 10 APs assigned to SSID.

My site does have more than 10 APs assigned to the SSID concerned. So, for me, 6.5.54 still shows the issue (which explains why, in my testing, the fault wasn’t fixed in 6.5.54).

This is a good thing – sites with fewer than 10 APs will now no longer see this bug.

However: The bug still exists and this workaround is just hiding it until it leaps out and bites sites in the tail when they expand to 10 or more APs (on the same SSID). Argh.

This feature (when enabled) is breaking fundamental aspects of TCP/IP network operation on a routine basis.

There are two issues here.

The first is that it is quite possible for two hosts to be associated to two different APs while being in the same physical location. Picture a printer on a desk and a user of that printer who has walked in from the AP next door (and is still associated with the AP next door).

Or picture a location that just happens to be at the intersection of two roughly equidistant APs (which is going to happen all the time on a network with more than 10 APs).

When this happens, the outcome for users in terms of multicast/broadcast activity is going to become intermittent – sometimes it’ll block packets, and sometimes (if the hosts do happen to both be on the same AP) it might work… for a while. And then stop again mysteriously later.

This intermittency was evident in my initial testing (and now I appreciate why).

As people make their networks larger (and of course for anyone who already was running a large network and who has auto-updated), they will see this mysterious problem happen both without warning and without explanation.

I actually think that’s worse… because now it’s a fault that unexpectedly occurs when the network expands beyond a certain point. Pity the IT guy who has to figure that one out, with the sole clue being that one line in the Network 6.5.54 release notes.

The signature example of the seriousness of this problem is something completely fundamental to working TCP/IP networks:

This feature blocks ARP packets

As a result, the establishment of working unicast connections between hosts in the local network (e.g. as fundamental as connecting to a printer and using it) will not work reliably (or in many cases, will not work at all)… which is where we came in.

It also means, for instance, that if you try to ‘ping’ another host on your local IP range, that ping might work, or it might not, depending on where the other host is, across your network (or on whether it has roamed to another AP that is reachable in the same physical location).

Debugging that sort of thing could drive people a bit crazy.
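If you want to catch this sort of intermittency in the act, something as crude as a timestamped reachability log is enough. Here is a minimal sketch (the peer address is just a placeholder; point it at a printer or Sonos unit on your own LAN) that pings the peer once a minute and records whether it answered, so you can line the drop-outs up against where the devices have roamed:

```python
#!/usr/bin/env python3
# Minimal sketch: log whether a LAN peer answers ping, once a minute.
# The peer address below is a placeholder - substitute a printer/Sonos IP.
import subprocess
import time
from datetime import datetime

PEER = "192.168.1.50"   # hypothetical printer / Sonos IP on the same subnet

while True:
    # One echo request; rely on the subprocess timeout rather than ping flags,
    # since timeout flags differ between macOS and Linux ping.
    try:
        result = subprocess.run(
            ["ping", "-c", "1", PEER],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=5,
        )
        ok = (result.returncode == 0)
    except subprocess.TimeoutExpired:
        ok = False
    print(f"{datetime.now().isoformat(timespec='seconds')}  {PEER}  "
          f"{'reachable' if ok else 'NO REPLY'}", flush=True)
    time.sleep(60)
```

On a healthy LAN this should print ‘reachable’ every minute; with the multicast/broadcast block enabled you would expect to see runs of ‘NO REPLY’ appear and disappear as hosts move between APs.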

Without consideration of issues like this, the feature as a whole is pretty broken.

I get the point, and in implementing this, Ubiquiti means well, but it has not been fully thought out. It is going to be the cause of nasty (and worse, subtle) network faults on a continuing basis until more effort is put into how this feature works (and into allowing the operator to select broadcast and multicast packet types that should continue to be forwarded!).

I’ve noticed that, when the feature is turned on, it allows for the addition of hosts (by MAC address) that remain visible network-wide:

You can add HOSTS to exclude from blocking but you can’t add PROTOCOLS (such as ARP) to exclude!

That is a tacit admission of how this feature breaks stuff.

Adding hosts to the exclude-from-blocking list by MAC address is well meaning, but network operators will be perpetually chasing their own tails as people add printers or audio devices (or replace busted ones). Maintaining a MAC address exclusion list is just a ‘make work’ activity that no network administrator (or their users) needs. Not ever.

Ubiquiti has implemented extensive ‘fingerprinting’ of devices over time. This means they can figure out what things are. If this feature is going to exist at all (and be silently turned on without warning!!), then it has to be configurable in terms of device types and/or broadcast/multicast protocols that can be whitelisted, not hosts.

Again the issue here is that there are protocols (like ARP – argh) that you just can’t block between APs at all, without breaking fundamental aspects of how TCP/IP networks work.

This isn’t good, and until it is further improved, the underlying problem remains. The change in .54 does help a bit, for most people… but for the people it doesn’t help, it has made the real problem (that the feature itself is untenable as it stands) both harder to find and hence harder to fix.

How much renewable energy does it take to offset flying a Pilatus PC12?

The other day, I was talking with someone about the wonders (and the satisfaction) of operating a large renewable energy system at our Tasmanian farm, and how I get to charge up my electric motor glider and go flying on sunshine, and how we’ll replace all the farm machinery that burns diesel with electric vehicles as soon as someone will sell that electric farm machinery to me (all of which is true).

One of our children kindly (and accurately) popped that balloon for me with a single sentence, by saying: ‘Yeah, but you also fly a turbine aircraft’.

The plane we fly is a most wonderful beast called a Pilatus PC12 NGX. The convenience, speed, capability and sheer reach is just fantastic. I also get huge personal satisfaction from flying it. However, ‘satisfaction’ is not a Carbon offset.

This conversation led me to pose a question to myself:

Can our solar array create enough renewable electrical energy to completely offset the carbon dioxide emissions involved in flying our aircraft?

I decided to work it out.

I don’t claim to be any sort of saint – the idea is just to see if it is possible to achieve something like ‘Carbon Neutrality’ by offsetting the aircraft Carbon emissions with solar array Carbon savings.

I’ve tried to get the numbers right here (and they make sense to me)… but if I’m getting the sums wrong somehow (or misunderstanding the source data), I’d be very keen to find that out. That’s one of the reasons why I’ve posted it all here… to subject these calculations to the light of day.

Source Data

My annual flying hours in the PC12: 200 (average over the last 3 years)

Average hourly fuel burn for my mission profile: around 250 litres per hour

Carbon Dioxide emitted per litre of Jet-A1 burned: 2.52Kg (source: “COP25: What is the impact of private jets?“)

Solar array size at The Vale: 200 kW

Average energy generated per annum per 1 kW of array size at The Vale: 1340kWh

Thus for a 200kW array we will make about 200 x 1340 = 268,000 kWh annually (Source: LG Solar Output Calculator ; My ‘actuals’ to date are highly consistent with that calculator).

Whether we use it on site for buildings or for electric tractors, or whether we export it, this is all energy that isn’t being generated somewhere else, hence it is net electrical energy we are adding to the total renewable electrical generation of the world.

Our actual export figure right now is above 90%, though that will reduce as we add more electric farm machinery over the coming years – in the process of progressively reducing our diesel burn figure to zero.

Our farm is in Tasmania. This complicates things because the Tasmanian energy grid is already incredibly ‘green’ – see below:

Source: https://www.industry.gov.au/sites/default/files/2020-12/australias-emissions-projections-2020.pdf

However: Tasmania has one substantial inter-connector to Victoria (Basslink) and there is another big one, Marinus Link, on the way. Those interconnections allow Tasmania to sell electricity into the Victorian grid. So we’ll use the Victorian grid as our imputed destination.

The current official figure for Carbon Dioxide emission per kWh generated in Victoria is 1.13Kg per kWh (Source: The Victorian Essential Services Commission).

Now we have all the numbers we need. It is time to start doing some maths.

Annual PC12 Aircraft Carbon Dioxide Emission Created

200 hours x 250 litres per hour x 2.52 Kg per litre = 126,000 Kg

Annual 200kW Solar Array Carbon Dioxide Emission Avoided

268,000 kWh x 1.13 Kg per kWh = 302,840 Kg (or 2.4 times the PC12 emissions)

Outcome

Assuming the energy destination is the Victorian energy grid, we are offsetting the aircraft Carbon footprint more than twice over! This was a (good) surprise to me.

That said, Victoria has a particularly ‘dirty’ grid. Sigh…coal…sigh.

What happens if we make this harder, by using the global average Carbon intensity value for energy grids instead of the value for Victoria?

The global average figure is far lower than Victoria’s, at around 0.5Kg per kWh generated (source: https://www.iea.org/reports/global-energy-co2-status-report-2019/emissions ).

Taking 126,000Kg and dividing it by 0.5Kg per kWh, we get a clean energy generation target of 252,000kWh.

This is still below the 268,000 kWh annualised energy production from the solar array at The Vale. Even on this ‘global average’ Carbon intensity basis, we are (more than) completely offsetting the Carbon footprint of my annual PC12 flying time.

One other thing we can derive from all of this is the ratio between flying-hours-per-year and the needed solar array size (for a solar array in Tasmania, and using the higher bar of 0.5Kg offset per kWh generated):

Dividing 252,000 kWh by 200 hours means 1260 kWh of annual energy production is needed per annual-flying-hour. Given that each kW of array size generates 1340kWh per year (in Tasmania), we need 1260/1340=0.94 kW of solar array size per annual-flying-hour in the aircraft to achieve a full offset of the annual flying time concerned.

To put it another way, we need 94kW of solar array size to offset (on a continuing basis) each 100-hours-per-year of flying time in the aircraft.
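For anyone who wants to check these sums, or re-run them with their own aircraft and array numbers, here is the same arithmetic as a small Python script. All of the input figures are simply the source data listed above:

```python
# Reproduces the offset arithmetic above - change the inputs to suit your own case.
flying_hours_per_year = 200        # PC12 hours per year
fuel_burn_l_per_hour  = 250        # litres of Jet-A1 per hour
co2_kg_per_litre      = 2.52       # kg CO2 per litre of Jet-A1

array_kw              = 200        # solar array size (kW)
kwh_per_kw_per_year   = 1340       # annual yield per kW of array in Tasmania
grid_co2_kg_per_kwh   = 0.5        # global-average grid intensity (use 1.13 for Victoria)

aircraft_co2_kg = flying_hours_per_year * fuel_burn_l_per_hour * co2_kg_per_litre
array_kwh       = array_kw * kwh_per_kw_per_year
offset_co2_kg   = array_kwh * grid_co2_kg_per_kwh

print(f"Aircraft emissions: {aircraft_co2_kg:,.0f} kg CO2/year")   # 126,000
print(f"Array generation:   {array_kwh:,.0f} kWh/year")            # 268,000
print(f"Emissions avoided:  {offset_co2_kg:,.0f} kg CO2/year")
print(f"Offset ratio:       {offset_co2_kg / aircraft_co2_kg:.2f}x")  # 1.06x at 0.5, 2.4x at 1.13

# Array size needed per 100 flying hours per year, at the chosen grid intensity:
kwh_needed = aircraft_co2_kg / grid_co2_kg_per_kwh                  # 252,000 kWh at 0.5
kw_per_100_hours = (kwh_needed / flying_hours_per_year / kwh_per_kw_per_year) * 100
print(f"Array needed per 100 flying hours/year: {kw_per_100_hours:.0f} kW")  # ~94 kW
```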

Time for a bigger calculation.

How much solar would it take to offset the entire global aviation industry?

According to this source, around 900 million tons of carbon dioxide were emitted annually due to global aviation immediately pre-COVID (assume we wind up ‘back up there’ post COVID… eventually).

So that is 900,000,000 t x 1000 Kg per tonne = 900,000,000,000 Kg of CO2. Yikes.

Dividing by 0.5 Kg per kWh means we would need to generate 1,800,000,000,000 kWh of electricity from (new) renewable sources to offset the entire global aviation industry.

We are a small investor in a big project: “Sun Cable”. The first major project for Sun Cable will build around 20 Gigawatts (!) of solar arrays in the wilds of the Northern Territory, and export most of that energy to Singapore.

Yes, really. If you don’t think big, you don’t get big.

The LG Solar Calculator says one could expect 1940 kWh of electricity per kW of solar array in Alice Springs. Multiplying 1940 kWh by 20,000,000 kW gets us 38,800,000,000 kWh (38,800 million kWh, or about 38.8 TWh) per year.

This is just my back of the envelope approximation, and the real outcome in terms of output energy from Sun Cable could well differ somewhat from that estimation for a whole host of rational technical reasons, including things as obvious as energy loss over long transmission paths, that the project isn’t actually in Alice Springs, etc etc.

So: We’ll de-rate that annual production estimate by an arbitrary 25% to fold in some pessimism and call it a ‘mere’ 29,100,000,000 kWh per annum.

Time for the punchline:

1,800,000,000,000 / 29,100,000,000 = around 60 (these are all huge approximations – so – measure with a micrometer, mark with chalk, cut with an axe)

The punchline (and this was also a surprise to me) is this:

It could take just 60 Sun Cable-sized projects to offset the Carbon emissions of the entire global aviation industry

The world could actually do that. If we can make one, we can make sixty.
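Here is that back-of-the-envelope calculation in script form, using the same figures quoted above (including the arbitrary 25% de-rating):

```python
# Back-of-the-envelope: how many Sun Cable-sized projects to offset global aviation?
aviation_co2_kg      = 900e6 * 1000        # 900 million tonnes -> kg of CO2 per year
grid_co2_kg_per_kwh  = 0.5                 # global-average grid intensity
clean_kwh_needed     = aviation_co2_kg / grid_co2_kg_per_kwh     # 1.8e12 kWh

suncable_kw          = 20e6                # ~20 GW of solar array
kwh_per_kw_alice     = 1940                # annual yield per kW (Alice Springs estimate)
derating             = 0.75                # the arbitrary 25% pessimism factor
suncable_kwh         = suncable_kw * kwh_per_kw_alice * derating  # ~29.1e9 kWh

print(f"Clean energy needed:    {clean_kwh_needed:,.0f} kWh/year")
print(f"One Sun Cable produces: {suncable_kwh:,.0f} kWh/year")
print(f"Projects needed:        {clean_kwh_needed / suncable_kwh:.0f}")  # ~62, i.e. 'around 60'
```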

The Sun Cable web site says that the initial project for the company is an AUD$30+ billion project (US$21bn at the time of writing).

Sixty of those would be a mere US$1260 billion (US$1.3tn). An impossibly large number to consider? Well, the four largest American companies each have a market cap well above this level.

Apple has enough cash on hand (at the time of writing) to build the first 9 of these mega-projects without even taking out a loan. Remember, too, that these will be highly profitable projects, not donations. They won’t merely mitigate carbon – they’ll (literally) power the world.

We have enough sunlight. We have enough land. What we need is enough ambition.

Deploying the world’s smallest flow battery (the Redflow ZBM2) from small sites up to grid scale

I delivered a (virtual) talk at a recent (August 2021) battery technology conference in South Africa.

Having taken a look at the recording, I think it has come out as a reasonably clear and cogent summary of the current state of play in terms of the deployment of, and the scaling of, Redflow ZBM2 based energy storage systems.

The talk runs for about half an hour, and you can find it here:

Deploying the world’s smallest flow battery at grid scale

The slide deck that went with it is here:

Fixing Starlink “Poor Ethernet Connection”

I’ve recently received my first SpaceX Starlink connection kit and fired it up in the wilds of Adelaide, South Australia. I’ve been figuring out how it all works, and commencing some efforts toward a medium-term project of deploying another Starlink service at a remote wilderness site in the future.

When I fired up my service, I had an initial issue with it that really had me scratching my head, so I felt there was merit in documenting what happened and how to fix it.

Despite the warnings in the user documentation about potential issues if the cable length is extended, I had initially tested the service by sitting Dishy on the back lawn and plugging the (long!) ethernet/data cable into an RJ45 socket that I already had on the outside of my house (intended for an outdoor PoE WiFi access point). The other end of that RJ45 socket emerges on a patch panel in my study, where I plugged in the Starlink power brick and WiFi adaptor.

Dishy in the backyard on the lawn for initial testing

I tried that, and it worked really nicely. Immediate acquisition of signal and 300/35 Mbps average speeds (!), with short term peak speeds above 400/40 (!!)… wow. I mean seriously… wow.

Having done that test, I got my friendly local sparky to install Dishy on the roof in a suitable location, and to run my ethernet cable into the roof space, and out into the study directly, as the permanent installation. I tried really hard to ‘do it right’, following the instructions about not cutting or extending the ethernet cable.

Dishy on the roof

When I plugged it all back together (no cable connections, using only the original cable run back into the power brick), the service didn’t work.

What I saw on the Starlink app was a fault indication… ‘Poor Ethernet Connection’

Starlink: Poor Ethernet Connection fault report

This fault showed up despite the connection being directly into the power brick, in accordance with the instructions…

The Poor Ethernet Fault appeared despite no intermediate patching

Worse, the word ‘poor’ was an understatement.

Despite the Starlink App being able to see and control the dish, with Dishy visible in statistical terms on the app, there was in fact zero data flow.

No Internets For Me.

The physical connection from the RJ45 cable into the power brick was not 100% tight, but it didn’t seem terrible, and no amount of jiggling it made any difference to the total lack of service.

A visit from my sparky to re-check for the absence of any cable damage in the installation (and there was none) left us both scratching our heads, until I had one of those counter-intuitive ideas:

The service worked when I had an intermediate set of cable paths and patch points in the data path (and quite long ones). What if I put those back in?

Well, I did that – and – it worked perfectly again(!).

Ah hah.

So that very slightly loose RJ45 connection might just be the issue. Dishy (according to things I’d read online) uses PoE but needs a lot of power (90+ watts), and hence it would need a pretty much perfect RJ45 connection to make this work.

Next, I tried the smallest possible workaround to that slightly loose RJ45 connection on the original equipment… a very short patch lead and an RJ45 joiner:

How to fix a Starlink “Poor Ethernet Connection” – by adding an additional ethernet connection (!)

Bingo – perfect working connection, and it has kept working brilliantly ever since.

If I remove that little patch segment, it fails again. Oh well, it can stay there.

I hope this helps someone else with similar issues…!

This is a really easy fix, and hardly worth getting the hardware replaced by SpaceX when the self-service resolution is so simple, but it is somewhat counter-intuitive (given all the admonishment in the documentation against adding extra ethernet segments).


Update: I reported the issue to Starlink via the support path in the app. I got sent an example photo of what looks like a ‘known’ issue and got asked to check and photograph my own RJ45 plug and socket on the system.

This is what I found on my Dishy plug end when I looked hard at it (and took a careful photo):

Bent pin guide on Dishy’s RJ45 plug end

Well, that’s obviously ‘it’. That’s all it takes.

In response to my photo of that bent RJ45 connector pin, SpaceX are immediately forward-shipping me an entire new Starlink kit, and they have sent return instructions / vouchers for the existing kit.

Notwithstanding that I could, in practice, just re-terminate the cable with a new plug, that’d likely void the warranty, so I am happy enough to swap the whole thing out for that reason (to keep the setup entirely ‘supported’).

I’ll have to get my sparky to pull the existing Dishy+cable and install the new one, when that new kit turns up, but – well – I can’t fault the customer service in this case. No arguments, just ‘have a new one’.

Interesting process, and interesting resolution. I wonder if they’ll send me a shiny new square dish this time?

Gigabit Internet at Home

If something is worth doing … it is worth overdoing 🙂

Last night I noticed that my suburb had been upgraded by NBNCo, and that Aussie Broadband could now offer me a 1000 megabit per second Internet service via NBN HFC at home (the previous NBNCo HFC ‘limit’ at my house was 250 megabits per second).

Better still, I was delighted to discover that the Aussie Broadband app allows you to implement a plan/speed change ‘in app’ and has the option to do it ‘right now’.

So I did it ‘right now’ – and – a minute or two later – this:

Wow. Thanks Very Much

Well strap my face to a pig and roll me in the mud

…that is a really, really pleasant Internet speed to have at home 🙂

I’ve had gigabit fibre Internet at the office for years, but having it right in your house is pretty darn cool. Finally feel like we’re starting to catch up with some other countries.

It turns out that more than seven years ago (!!) when I was on the NBN board, I wrote about the potential for NBN HFC to support gigabit Internet speeds.

I don’t think I expected it to take quite that long to get to my house, frankly – but – it’s here now.

That SpeedTest result is on a wired network port… on an 802.11ac wireless connection to an iMac in another room, I’m maxing out at a lazy 300 megabits or so right now. Finally my WiFi is the speed constraint, and nothing else is getting in the way. WiFi speeds fall off very sharply with distance, which is why I tend to put ethernet ports into any buildings I’m doing any sort of work on. You just can’t beat the speed of a wired connection.

The outcome (even via WiFi) is materially snappier compared to even the 250 megabit per second service. It’s like my office has been for years – click on something and (if it is well connected), then ‘blink’, the page has updated completely in an instant.

The one bummer is that ‘mere’ 50 megabit per second upload speed – I still can’t quite understand why NBNCo insists on that level of artificial throttling. Speed limiting just to make your ‘business’ products more valuable is the sort of evil tactic we used to complain about Telstra engaging in.

That said, 50 megabits per second upload is still ‘substantial’, and it is the increased upload speed that is actually the major factor in the above-mentioned improved ‘snappiness’ of updates. The extent to which upload speed is a real-world constraint on download performance is still a widely un-appreciated thing.

If this inspires you to move to Aussie Broadband as well, just remember you can type in the magic referral code 4549606 when you sign up, to save yourself (and me!) $50 in the process 🙂

The Vale Energy System

About The Vale

The Vale is a 170 acre farm in the north-west of Tasmania. It is located in a river valley in the shadow of Mount Roland.

Various crops are grown on the property along with the running of sheep and cattle. The property also features a large private runway.

We wanted to future-proof the property in terms of electrical energy self-sufficiency by building a large renewable energy system.

Here is what we built…

System Components

  • Three phase grid feed via a 500 kVA transformer (configured for up to 200 kWp export)
  • 200 Kilowatt Peak (kWp) ground-mounted solar array using LG 375W panels on Clenergy ground mount systems into 8 x 25kWp Fronius Symo AC Inverters
  • Provision for future on-site generator
  • 144 kW / 180 kVA Victron Energy Inverter/Charger array (12 x Victron Quattro 48/15000)
  • 280 kWh of Flow Battery energy storage (28 x 10kWh Redflow ZBM2 zinc-bromide energy storage modules)
  • Victron Cerbo GX system controller interfaced to 3 x Redflow Battery Management System units
  • Underground sub-main distribution system servicing multiple houses, farm buildings and an aircraft hangar across the entire farm
  • Underground site-wide single-mode optical fibre network serving site-wide indoor and outdoor WiFi access points and networked access control and building management systems

A shout-out to DMS Energy in Spreyton, Tasmania. I designed the system with them, and they built it all extremely well. The installation looks great and it works brilliantly.

Here is a gallery of images from the energy system

Flow Batteries

The system stores surplus energy in Redflow Zinc-Bromide flow batteries. These are a product that I have had a lot to do with over a long period (including as an investor in the company and as the architect of the Redflow Battery Management System).

These batteries have a lot of advantages, compared to using Lithium batteries, for stationary energy storage applications such as this one.

You can read more about them on the Redflow site and also in various other blog posts here.

System Performance and Future Plans

Tasmania is interesting as a solar power deployment area, because it has the distinction (due to being a long way south!) of being the best place in Australia for solar production in summer, and the worst place in the country for solar production in winter!

This was a key driver for the decision to deploy a relatively large solar array, with the aim of obtaining adequate overall performance in the winter months.

The large solar array is also a renewable transport fuel station!

We already run one Tesla Model S sedan, a Polaris ‘Ranger’ electric ATV, and an electric aircraft on the property.

Our plan is to progressively eliminate the use of diesel on the property entirely, by running electric 4WD vehicles, electric tractors, and electric excavators as they become available on the Australian market. The beauty of the large on-site solar array is that all of these vehicles can be charging directly from on-site solar generation when they are not being driven.

During this winter, we’ve observed that we typically manage to half-fill the battery array, and that it then lasts about half the night before grid energy is required.

That’s why we are now in the midst of doubling the size of the solar array. Once we have done so, we will have a system that (even in mid winter) can supply all of the on-site energy demands of the property on most days, without drawing any grid energy at all.

Of course, in summer, we’ll be exporting plenty of energy (and being paid to do so). Even with the relatively small feed-in tariff offered in Tasmania, the system generates a reasonable commercial return on the solar array investment in non-winter months.

Here are some (summer time) screen shots from the on-site control system and from the outstanding Victron VRM site data logging portal.

On the image from the on-site Cerbo GX controller, you can see a point in time where the solar array was producing more than 90 kW, the battery array was mostly full and starting to roll back its charging rate, and plenty of that solar energy was also being exported to the grid.

The ‘System Overview’ and ‘Consumption’ charts show the outcome of all that sunshine…with the battery ending the day pretty much full, the site ran all night on ‘time shifted sunshine’ and started the following day half full, ready to be filled up once more.

We exported plenty of green energy to our neighbours and we used practically no inward grid energy at all.

Once we have doubled up the solar array size, we are looking forward to achieving a similar outcome on most winter days, not just during summer, along with exporting even more surplus green energy into the grid.

Once we have transitioned all the on-site vehicles to electric, our total export energy will diminish somewhat, but it will be more than offset by a $0.00 diesel fuel bill (and by zero CO2 and Diesel particulate emission from our on-site activities).

On-site Energy Efficiency

One thing that matters a great deal is to do the best you can in terms of energy consumption, not just energy generation and storage. To state the obvious: The less energy you need to use, the longer your battery lasts overnight.

All the houses on the farm are heated/cooled using heat pumps.

This is the most efficient way to do it, by far. It is often poorly understood just how much more efficient a heat pump is, compared to any other way to cool or heat something.

That’s simply because a heat pump doesn’t create the heat – rather, it moves heat energy from the outside environment into the house (or vice versa, to cool it). Typical values for the Coefficient of Performance (COP) – the ‘multiplier effect’ between kilowatts used to run a heat pump and kilowatts of heat energy that can be moved – are of the order of 3-4 times. That literally means that 3-4 kilowatts of heating or cooling are delivered for every kilowatt of energy put into the device to do it. By contrast, heating using an electrical ‘element’ has a COP of 1, meaning there is literally no multiplier effect at all.
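As a purely illustrative example of that multiplier effect (the COP value here is an assumption in the typical 3-4 range, not a measured figure from our units):

```python
# Illustrative only: what a COP of 3.5 means for heat delivered per kW consumed.
electrical_input_kw = 2.0    # electrical power drawn by the heater (assumed)
heat_pump_cop       = 3.5    # assumed COP, in the typical 3-4 range
element_cop         = 1.0    # a resistive element has no multiplier effect

print(f"Heat pump delivers:         {electrical_input_kw * heat_pump_cop:.1f} kW of heat")   # 7.0 kW
print(f"Resistive element delivers: {electrical_input_kw * element_cop:.1f} kW of heat")     # 2.0 kW
```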

Because we’re in Tasmania, and it does get cold in winter, we have put in a wonderful indulgence in the form of a spa pool. A spa obviously needs a fair bit of energy to keep the water hot, and we have done two things to minimise that energy draw.

First, we have used a Spa heat pump to do the hot water heating, which accesses that fantastic multiplier effect mentioned above. It means we are heating the water by just moving heat energy out of the surrounding air and into that water.

Second, we have installed an optional monitoring and control device so we can access the Spa and remotely control it. We can turn the heating off when we are leaving home, and we can then remotely turn the heating back on when we are heading back, so it is nice and hot when we arrive.

We have a third heat pump at our home, the one that heats our hot water. We are using a Sanden Heat Pump based hot water system that (also) performs really well.

On-site Energy Monitoring and Control

The key to optimising energy usage is to be able to actually measure it.

The Victron Energy Cerbo GX at the heart of the energy system monitors all aspects of our renewable power plant in detail (and uploads them for easy review to the no-extra-cost Victron Energy VRM portal). This gives us fantastic (and super detailed) visibility into energy generation, storage, and consumption on site.

However, we have a lot of separate buildings on the farm, and the key to understanding and optimising energy draw is to get deeper insight into which buildings are using energy and when.

To that end, we have installed many Carlo Gavazzi EM24 ethernet interfaced energy meters all around the site-wide underground power network. At each delivery point into a building, there is an ethernet-attached meter installed, so that energy usage can be narrowed down to each of these buildings with ease.

I am currently working on the design of an appropriate monitoring system that will draw this data in and use it to provide me with detailed analytics of where our energy is going on a per-building basis (and when!).
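By way of illustration only, this is the general shape of the per-building poller I have in mind. It uses the pymodbus library to read a block of registers from one meter over Modbus TCP; the meter IP address, register address, register count and word order shown here are placeholders and assumptions, not values taken from the EM24 documentation, so check the Carlo Gavazzi Modbus register map before using anything like this:

```python
# Illustrative sketch only: poll one Carlo Gavazzi EM24 over Modbus TCP.
# Assumes pymodbus 3.x. The IP address, register address/count and word order
# below are placeholders / assumptions - consult the EM24 Modbus register map.
from pymodbus.client import ModbusTcpClient

METER_IP = "192.168.10.21"   # hypothetical address of one building's meter
REGISTER = 0x0000            # placeholder register address
COUNT    = 2                 # many EM24 quantities are 32-bit (two 16-bit registers)

client = ModbusTcpClient(METER_IP)
if client.connect():
    result = client.read_holding_registers(REGISTER, count=COUNT)
    if not result.isError():
        # Assumption: 32-bit value, least-significant word first.
        raw = (result.registers[1] << 16) | result.registers[0]
        print(f"{METER_IP}: raw register value = {raw}")
    client.close()
```

Depending on the pymodbus version and how the meter is addressed, you may also need to pass the Modbus unit/slave id explicitly; the real system will poll every meter on a schedule and log the scaled values for later analysis.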

In terms of control we have deployed KNX based sensor and control devices in a variety of places around the property, and we plan to deploy much more of it. Over time, we’ll be able to dynamically control and optimise energy consumption in a variety of useful ways.

KNX is a whole separate story, but – in brief – it’s an extremely good way to implement building automation, using a 30+ year old standardised protocol with full backwards compatibility for older devices and with support from over 500 hardware manufacturers. It allows for the successful deployment of a totally ‘mix and match’, multi-vendor collection of the best devices for each desired building automation monitoring or control task.

We are continuing to learn as we go.

With the upcoming enhancements in site monitoring and control, we expect to deepen our understanding of where energy is being used, to (in turn) allow us to further optimise that usage, using techniques as simple as moving various high energy demands to run ‘under the solar curve’ wherever possible. These are the times when on-site energy usage is essentially ‘free’ (avoiding the ‘energy round trip’ via the battery, and leaving more battery capacity for energy demands that cannot be time-shifted overnight)
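As a trivially simple illustration of the kind of rule such a control system could apply (the function, names and thresholds here are made up for the example; in practice the readings would come from the Cerbo GX / VRM data and the decision would be actioned via the KNX control layer):

```python
# Illustrative rule only: run a deferrable load "under the solar curve".
# The readings would come from the site's metering; here they are plain arguments.
def should_run_deferrable_load(pv_kw: float, site_load_kw: float,
                               load_kw: float, margin_kw: float = 1.0) -> bool:
    """Return True if surplus solar can cover the load with some headroom."""
    surplus_kw = pv_kw - site_load_kw
    return surplus_kw >= load_kw + margin_kw

# Example: 150 kW of PV, 40 kW of existing site load, a 22 kW vehicle charger.
print(should_run_deferrable_load(pv_kw=150.0, site_load_kw=40.0, load_kw=22.0))  # True
```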

Summary

Overall, this system is performing extremely well, and we are extremely pleased with it.

When we have added even more solar, it will do even better.

The #1 tip – even in Tasmania – is clear: Just Add More Solar 🙂

The other big tip is to move your transport energy usage to electric.

The more electric vehicles we can deploy here over time (farm machinery as well as conventional cars), the better.

We’ll charge them (in the main) directly ‘under the solar curve’ and achieve a huge win-win in terms of both energy usage and carbon intensity.

As we keep learning and keep improving the monitoring and control systems… it will only get better from here.

Letter to America: The Australian experience with the COVID-19 Pandemic

This is a note to my American friends, many of whom have asked me how we are doing in Australia with the COVID-19 thing. 

I felt that it would be simplest to write this down once instead of explaining it separately to every one of them…

In terms of COVID-19 Pandemic response, the USA is in the midst of snatching victory from the jaws of defeat.

The US experience has been one of significant failure during the early and mid term handling of the COVID-19 pandemic. Those failures, however, created powerful incentive for the US to become a global leader in the creation and delivery of vaccinations to its population.

The Australian experience is the exact opposite. Over here, we are busy snatching defeat from the jaws of victory!

Here is how the story has unfolded in Australia…

Both Australia and New Zealand used their ‘island nation’ situation to advantage.

Very early in the pandemic period, Australia imposed (and continues to impose) severe rate limits and severe quarantine processes upon those seeking to return from overseas.

Unusually, Australia also moved at the same time to prevent its own citizens from routinely leaving the country.

Australia then began an aggressive process of suppressing the virus where present in the community.

At the time, the official government line was the same as it was the world over, about ‘flattening the curve’. However there was a clear unofficial target to do far better than that, and against the odds, it happened:

Australia has achieved effective elimination of COVID-19 from the community.

This has been the result of enormous and sustained community engagement, coordinated and resourced through herculean efforts by Australian state and territory governments.

State and territory public health teams operate annexed commercial hotels, renamed as ‘Quarantine Hotels’ (located, incredibly, in the centres of our major capital cities) to quarantine and process incoming international passengers at a heavily limited weekly rate. 

Inevitably, COVID-19 outbreaks emerge out of those quarantine facilities. This usually (and predictably) happens as the result of cross-person infection within those facilities themselves, as these facilities were never designed for this purpose.

Each outbreak is then wrestled to the ground with yet another round of snap lockdowns.  The state of Victoria has suffered the longest durations and the largest number of these lockdowns, and continues to do so:

https://www.theguardian.com/australia-news/2021/jun/11/victoria-covid-update-lockdown-lifts-but-strict-health-orders-remain-as-no-new-local-cases-recorded

We have, as a society, agreed to reliably and voluntarily ‘sign in’ using QR codes at every public shop and venue, in order to leave an intentional digital breadcrumb trail of our movements.

This facilitates the rapid creation and update of site ‘exposure’ lists (sometimes hundreds of sites long) that are constructed and updated as a public health tool each time an outbreak occurs. 

People whose movements intersect those lists obligingly get tested and self-isolate, almost without fail. In doing so, we collectively manage to grind every one of these outbreaks into dust (at considerable personal and societal cost).

As a society we have become trained in, and indeed expert at, a high stakes national game of ‘Whack-a-Mole’. 

There has been a payoff for all of this effort. By and large, Australians now wander about this country in a state of relative normalcy, living (most of the time) free of any COVID-19 at all.

It is clear, though, that we are living in the eye of a storm, and at face value we might be stuck here for a long time.

A major contributor to our situation is the fact that the Federal government in Australia continues to fail in its manifest duty to implement a rapid, high priority vaccination process across our entire community.

We originally had appropriate, aggressive, government vaccination delivery timeframes and targets. 

Sadly, those targets were (badly) missed, before being just completely abandoned with barely a murmur. 

Today the official government line is that – international experience notwithstanding – this is “not a race” after all (!):

https://www.abc.net.au/news/2021-04-11/scott-morrison-abandons-covid-19-vaccination-target/100061998

https://www.theguardian.com/australia-news/2021/jun/01/the-morrison-governments-vaccine-rollout-is-not-a-race-nonsense-tells-us-a-lot-about-whats-gone-wrong

The manifestly botched Australian vaccination rollout generates the expectation that (if not resolved) Australia will have to keep playing ‘Whack-a-Mole” behind closed borders for at least another 18 months:

https://www.bloomberg.com/news/articles/2021-05-06/australia-s-borders-may-not-open-until-late-2022-minister-says

By then, it seems clear that Australia will truly be ’The Country In a Bubble’ (or, along with New Zealand and Singapore, the ‘Country Group In A Bubble’: https://www.abc.net.au/news/2021-06-11/australia-singapore-travel-bubble-talks/100206972).


It is entirely possible to change this path and return Australians to the world community sooner than that.

We must improve and accelerate vaccination rates in Australia urgently. 

We also need to get on the bandwagon as the mRNA vaccine producers gear up for making annual, variant-updated booster shots for next year, and the year after, and so on – just as we already do each year for the ‘seasonal flu’.

Doing this will require leadership, commitment, resourcing, simplified eligibility and access mechanisms, and a range of positive incentives (including differential access and travel rules for those who have been vaccinated and whose booster shot status is also maintained).

We can’t just hide away in our own private (national) Petri dish, telling ourselves that it is ‘not a race’. 

It is time to dust off those running shoes.