The Vale Airfield is a 1300m grass runway running parallel to the Dasher River in north-west Tasmania. It is situated only a couple of nautical miles from a beautiful mountain, Mount Roland.
This 4000ft granite beast dominates the local view all the way to the nearby town of Sheffield. It is part of a series of ridge lines that give way to the Tasmanian Central Plateau. The Plateau is a large, gorgeous and pristine alpine and lake region that includes the world-famous Cradle Mountain National Park.
Over the last several months, I’ve been using a Pipistrel Taurus G2 electric self-launch motor glider to gradually (and carefully) explore this complex, fascinating, and beautiful area from the air, through a variety of weather conditions. The conclusion I’ve reached is that we are fortunate indeed to have an airfield that is surely one of the best places in Tasmania to fly gliders.
There are many opportunities to go soaring here, using a wide variety of ‘lift’ mechanisms enabled by this fascinating and complex terrain – and to do it all year round!
What follows is a study in the successful use of the three major ways to sustain soaring flight in gliders, with flights conducted over three successive days at The Vale, in three distinct weather systems.
Let’s start with an annotated Google Earth image of the local area from the point of view of a soaring pilot (click image to enlarge) and then we’ll turn to Day One:
(The orange dotted lines are some of the local area ridge lines)
Day One: Mountain Lee Wave Soaring
One of the most wonderful ways to go soaring in a glider is the use of ‘Mountain Lee Waves’.
Wave (in this context) refers to a large standing wave that forms in the atmosphere downstream (in the lee) of a large physical feature (such as a mountain) in the presence of a strong and consistent wind that increases in strength with increasing height.
Mountain waves can extend into the sky to heights that are multiples of the height of the ground feature that triggers them. What forms in the air is an ‘echo’ of the shape of the ground feature, high up in the sky, with the into-wind side being a tide of rising air that can be surfed in a glider, to gain height.
Even better: If the wind keeps getting stronger with height, the primary wave system can act like another mountain! One wave can trigger another wave system, located further downwind and higher than the primary wave. This can keep happening, with multiple wave systems capable of ‘stacking up’ in a rising sequence.
Clouds can form in the middle of a wave system, appearing in a classic “Lenticular” shape, being quite literally ‘polished’ by the air rotating around the wave core. In the presence of multiple secondary waves, there can be a ‘stack’ of these lenticular clouds.
On the day we flew, there were no lenticular clouds to indicate the presence of the wave system… it was still there, but it was invisible.
However, I had another way to find the wave. I used a fabulous soaring pilots’ weather prediction application called Skysight.
Skysight has access to global, high accuracy weather forecasting data and it uses this data and a great deal of smart number-crunching to generate predictive, visual, forecasts for glider pilots. These forecasts help them to predict (with high accuracy in both space and time) the presence of various distinct sorts of weather systems that can be used to sustain soaring flight.
You can explore these images (generated by Skysight) to see what I mean:
As per those images, the Skysight model showed the presence of a substantial primary wave system above 5000 feet, then extending through multiple secondary wave systems all the way up to over 20,000 feet (!).
It turns out that this wave system sets up quite frequently in Tasmania in the cooler months.
An impressive example of this happened back on 12th April 2020. This was not a day that I could fly (darn). Have a look (below) at just how impressive the wave system was, right across Tasmania. Lenticular clouds (‘lennies’) were very much in evidence in the sky to go with it. A soaring pilot could have hopped from wave to wave, literally across the entire state.
Back to the present – and with my son Gabe as our photographer and co-pilot, it was time to see if the computer model was accurate in telling us that the wave was there, even though the indicator clouds were not.
To help us to find this quite invisible lift system, it was time to engage another piece of technology, the LX9000 soaring glass-cockpit system in my glider. The LX9000 is an incredible instrument. One of its plethora of features is the ability to import Skysight predictive map overlays directly onto the device for display in flight.
This means that I could fly the glider with the wave predictive model ‘on screen’, so we could fly under power up to the height and position needed to contact the wave system, and then shut down the motor and start playing.
We did precisely this. We climbed to about 5000 feet, flew to the edge of the predicted lift zone, and shut down the engine. As if by magic – there it was, and we just started going up.
Here’s what the Skysight wave overlay looks like in the LX, in flight, in the aircraft. On the image, the lift zone is the yellow/orange/red zone on the map.
This photo was taken at a later point, when we had already climbed in wave up to over 9000 feet:
Wave lift is wonderful – it is a smooth, quiet journey of exploration, quietly working your way back and forth along the lift band.
Being a system driven entirely by wind, wave conditions can be (and mostly are) present in the depths of winter, when flat-land glider pilots have given up gliding for the season due to the lack of any useable thermals.
We flew the glider up and down the Mole Creek Valley on our climb, and wound up high over the edge of the Central Plateau. The Plateau was covered in a layer of snow from the previous night, and it looked rugged and wonderful.
Here are some images from the wave flight:
(Photo credit for many of these images: Gabe Hackett)
The next post will be about Day Two of Three when the wind moderated, the sun came out, and the lift was there again – but this time it was Thermals.
I am fortunate to own the first electric self-launch glider to fly under Australian skies. It is a Pipistrel Taurus Electro G2.
A few months ago, I wrote a story that explained the background and my journey to owning and flying this impressive little aircraft. The story was published in the Gliding Federation of Australia’s Gliding Australia magazine.
This post documents my learning curve around the difference, in the KNX world, between a KNX/IP Interface, and a KNX/IP Router.
KNX terminology in this context is very important to understand – in part because this is a case of words (really) mattering, and in part because the meaning of those words differs substantially from their meaning in the pure TCP/IP networking context.
We will start with something that took me multiple product purchases and many hours of head-scratching to appreciate:
A KNX/IP Router is also a KNX/IP Interface
A KNX/IP Interface is not also a KNX/IP Router
What is a KNX/IP ‘Interface’?
A KNX/IP ‘interface’ is any device that ETS5 can use to program your KNX devices over your local TCP/IP network from a Windows PC, and/or that allows KNX twisted-pair (TP) devices to be accessed and controlled by apps/touchscreens/etc. from the local TCP/IP LAN.
However, if a KNX/IP product does not explicitly use the word ‘Router’ in the product name, it is not (also) a router and will not provide KNX/IP routing.
What is a KNX/IP ‘Router’?
In KNX parlance, what a KNX/IP ‘router’ actually provides is the functionality of a KNX ‘area coupler’ or ‘line coupler’, using a TCP/IP network as the linking medium.
An area coupler or a line coupler is a packet forwarding bridge (with built in packet filtering) that moves KNX telegrams (packets) between KNX physical network segments.
The distinction between an ‘area’ coupler and a ‘line’ coupler is simply based on whether you are inserting the coupler (‘router’) to connect distinct areas (first number in the Individual Address is different) or just between distinct ‘lines’ (where the area number is the same but the ‘middle’ number differs).
In either designation, appropriate KNX ‘telegrams’ (packets) get forwarded to the other network segment if they need to be, and they are ‘filtered’ (not forwarded) if they don’t need to be.
Normally this process is automatic, provided you have given the KNX/IP router the appropriate form of Individual Address (IA) to tell it how it is supposed to act (more on this later).
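To make the addressing rule concrete, here is a tiny Python sketch that classifies an Individual Address by the coupling role it implies. This is purely illustrative – the helper name is my own, not any KNX API – and it assumes the usual convention that an area coupler takes an address of the form x.0.0 while a line coupler takes x.y.0:

```python
def coupler_role(ia: str) -> str:
    """Classify a KNX Individual Address (area.line.device) by coupler role.

    Illustrative sketch only: real couplers are configured in ETS, this
    just mirrors the addressing convention described above.
    """
    area, line, device = (int(part) for part in ia.split("."))
    # IA fields are 4 + 4 + 8 bits: area 0-15, line 0-15, device 0-255.
    if not (0 <= area <= 15 and 0 <= line <= 15 and 0 <= device <= 255):
        raise ValueError(f"out of range: {ia}")
    if device != 0:
        return "ordinary device (no coupling role)"
    if line == 0:
        return f"area coupler (joins area {area} to the backbone)"
    return f"line coupler (joins line {area}.{line} to its area)"

print(coupler_role("1.1.23"))  # ordinary device (no coupling role)
print(coupler_role("1.1.0"))   # line coupler (joins line 1.1 to its area)
print(coupler_role("1.0.0"))   # area coupler (joins area 1 to the backbone)
```

The point of the sketch is simply that the device part of the IA – the ‘.0’ or not – is what decides whether a box acts as a coupler at all.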
In writing this down, I will say that it is no wonder this is confusing to people already steeped in the terminology and operation of TCP/IP networks (as I am).
It turns out that a KNX/IP ‘router’ is not really a ‘layer 3’ router at all. It is ‘merely’ a layer 2 media bridge. The fact that there is a routing-capable TCP/IP protocol stack inside every IP enabled KNX device doesn’t magically make all those devices into KNX ‘routers’.
One thing I would love ETS5 to feature (and it certainly doesn’t, today) is a dialog box to warn you about the absence of any configured-in area or line couplers in your setup, any time you try to construct a group address that spans areas or lines in your project. It seems to me that a simple warning (“No configured area or line couplers are available to forward this telegram”) would save a heap of future grief for others.
Time to look at some real world product examples.
Here are some devices in my home KNX network:
Per the discussion above – only one of these three (the one on the left) is a KNX/IP router, despite all three boxes having more than enough technical ‘grunt’ to be capable of ‘routing’ (and for all I know, they might well all be running identical underlying hardware).
If you don’t have the KNX/IP router there (and at first, I didn’t), then the twisted pair segment concerned is an island. It doesn’t matter how grunty or wonderful your KNX/IP interface products are (the X1 and S1 are both highly capable things)… neither of those is going to route a group telegram between the TP and IP networks for you, no matter how much you try to convince it to do so.
It was the Gira S1 that drove my initial confusion, ironically because it is such a cool and capable device in other ways. Based on my historical TCP/IP experience, I’d thought that because the S1 is a remote-access VPN and local area TCP/IP node, it would also be a KNX twisted pair router – I mean.. why not? Well… no, it doesn’t do that.
One confusing thing is that even if your device isn’t a router, ETS5 will still let you manually define a filter table for it, when there is absolutely no (obvious) point! That is how ETS5 helped to cement my (mistaken) belief that my Gira S1 was a KNX/IP router, when it wasn’t.
I wasted a good day fiddling about, trying to work out why the thing wouldn’t forward packets over the network, manually adding filter table entries for the group telegrams I wanted forwarded, and it just … wouldn’t. No error message, no sign of problems, just no packet forwarding. It is obvious now, but it surely wasn’t obvious to me up front.
How to buy the wrong product without really trying
Here are three more KNX devices, all of which are KNX/IP interfaces:
The unit on the left in the photo above is a USB-KNX interface, allowing ETS5 programming without any IP components in the system. The other two units are different brands of KNX/IP interface, and they are functionally identical. They can be used for programming KNX devices in your project over your local TCP/IP network, and/or for facilitating access to KNX TP devices from other TCP/IP applications.
I bought the right hand unit (the MDT one) after I realised the Weinzierl product (installed many years ago, with my underfloor heating system) was not a router. When I did buy the MDT unit to replace it (so I could start doing KNX/IP routing), I mis-ordered it.
MDT make two products with a one digit difference in product ID code:
I purchased a 000.03 and I should have purchased a 100.03. The extreme similarity (in specification and visual appearance) underscores that they are most likely the very same physical box, sold at two different price points, where the lower cost one has simply got the KNX/IP ‘router’ functionality switched off.
Your choice of KNX/IP router Physical Address controls the functionality of the device!
This was a huge part of my learning curve, and not fully understanding it drove a lot of confusion for me in the first instance. In a TCP/IP network, the IP address is irrelevant to the functionality of a device. In a KNX/IP network, the Physical Address you choose has a dramatic impact on what your device will actually do.
The table (below) – or a variant of it – is a familiar component of the setup instructions for any KNX/IP router (this one is extracted from the startup document for a Gira KNX/IP router):
If you do not assign a ‘.0’ as the last part of the IA for your router, then it will not operate as a router at all. If you choose an address that does not end in ‘0’, then your KNX/IP ‘router’ will only function as a KNX Interface, and will not forward (route) KNX telegrams!
This is in fact a very good design decision in the KNX architecture.
If only a device ending in ‘.0’ can act as an area or line coupler, then you can never have more than one active KNX/IP router per physical twisted pair network segment. This is a good thing, as it ‘naturally’ avoids all sorts of complexities that occur in the TCP/IP context (including the need to implement a network routing table, versus a bridge filtering list).
Just to labour that point slightly – the absence of a layer 3 routing table in KNX is why what KNX does, in my view, is really layer 2 bridging, not ‘routing’. I think they really should have called these things KNX/IP bridges or, even better, KNX/IP ‘couplers’ (but… ‘too late now’ 🙂 ).
As another aside: the use of IP Multicast to carry KNX telegrams on the IP side of this process is a smart design choice. It nicely leverages the merits of IP multicast to ensure carriage of those telegrams (‘packets’) to any other KNX/IP routers that need to hear them, without any explicit configuration work being needed on the TCP/IP ‘side’ of the equation.
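For the curious, this multicast traffic is easy to recognise on the wire: KNXnet/IP routing uses the well-known multicast group 224.0.23.12 on UDP port 3671, and every routed telegram is wrapped in a fixed six-byte KNXnet/IP header. Here is a minimal Python sketch of just that header (the cEMI telegram body that follows it is omitted – this is an illustration of the framing, not a working router):

```python
import struct

KNX_MCAST_GROUP = "224.0.23.12"  # standard KNXnet/IP routing multicast group
KNX_PORT = 3671                  # standard KNXnet/IP UDP port
ROUTING_INDICATION = 0x0530      # KNXnet/IP service type for routed telegrams

def knxip_header(service_type: int, body_len: int) -> bytes:
    """Build the fixed 6-byte KNXnet/IP header.

    Fields: header length (0x06), protocol version (0x10),
    service type (2 bytes), total frame length (2 bytes).
    """
    return struct.pack("!BBHH", 0x06, 0x10, service_type, 6 + body_len)

# A routing indication header for an (assumed) 11-byte cEMI body:
print(knxip_header(ROUTING_INDICATION, 11).hex())  # 061005300011
```

Any KNX/IP router on the segment simply joins that multicast group and listens – which is exactly why no per-router IP configuration is needed.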
Clues to help you to realise your KNX/IP device is not really a router after all
ETS5 is slightly maddening at times, in that it ‘knows’ things about your devices that it doesn’t bother to mention to you – assuming you just ‘already know’. A key one here being whether the thing you (wrongly!) believe to be a KNX/IP router is really just a KNX/IP interface after all.
In other words, ETS5 ‘knew’ full well that I bought the wrong MDT device (see above), and it kinda-sorta tried to tell me this, in ways that were too subtle for me to notice them at the time.
So – to save you the same angst, here are the clues that will help you know when your KNX/IP router is really not a router after all (i.e. when you ordered the wrong product):
Clue: ETS5 refuses to let you assign a .0 address to your device
This is a dead giveaway (in hindsight). ETS5 ‘knows’ full well it’s not a router, so when you try to set the last byte to ‘0’, it renumbers it to ‘1’, despite your best efforts to talk it into the ‘0’ at the end.
Annoyingly, there is no error message – ETS5 ‘should’ (in my view) pop up a message telling you that ‘Only KNX/IP routers can be assigned an address ending in 0’. Instead, the address box turns red, and then turns black again after ETS5 silently ‘fixes your mistake’ (instead of telling you about it).
Clue: The ETS5 right-click menu on your device doesn’t contain ‘Preview Filter Table’
I finally figured this one out… if you right-click on a device and the pop-up menu contains “Preview Filter Table” then you are looking at a properly configured-in-to-your-project KNX/IP router (i.e. area or line coupler):
If you think you have got your router configured in properly but that “Preview Filter Table” entry is mysteriously missing when you right-click on it, then… you haven’t actually got it in there properly after all.
It would be really nice if KNX flagged a validly configured KNX/IP router more clearly on the device display. Displaying the device name in green or… something.
Clue: You can check if a KNX device is really a router before you buy it
ETS5 knows whether a device is a router or not. And it knows this based on the product catalogue, not based on your real physical device.
This is a subtle but important point.
You can select any device in ETS5 out of the global KNX product catalog, load it into your project, and use the earlier subtle clues (above) to assure yourself that it really is a router, and that it really will route… and then you can go buy it.
I wish I’d appreciated that before I started buying things!
I want to deploy KNX in some existing and new buildings. KNX is something I’ve been interested in learning ‘how to do’ for ages. Finally, a good intersection of time and opportunity has led to this being the right time for me to undertake the learning journey.
In the COVID era, I couldn’t easily attend the hands-on physical accreditation training courses in KNX design and deployment that are offered by the very fine people at IvoryEgg. So, instead, I started with a free KNX intro seminar they delivered online.
Following that, well, I just licensed ETS5, bought some KNX gadgets from IvoryEgg, and started to just ‘figure it out myself’ because, to borrow the classic geek thought bubble:
How hard can this be?
The answer is that it’s not too hard, really, except that in doing it this way I’ve missed out on a plethora of small nuances (especially about how to drive ETS5, and how KNX concepts map to ETS5 reality).
These pieces of missing knowledge and understanding would have been gained by doing a formal hands-on training course. Without that, I found myself repeatedly in the situation of knowing something can be done in ETS5/KNX, but being unable to figure out in the first instance how to do it.
I felt that there may be some merit, for others, in writing down the non-obvious things – and their solutions – as I go along. Hence this blog post (and any others that may follow).
What is KNX?
KNX is a remarkable thing. It is a 30+ year old, published and standardised protocol for building automation. It includes a standardised set of mechanisms for physically wiring and installing KNX-speaking devices in a building, and it is supported by more than 450 hardware vendors, with guaranteed interoperation.
There is one ‘master’ programming tool (“ETS5”) that is used to program/configure and deploy KNX devices in real world environments. It imports definition databases and applets from each manufacturer, on demand, to support the myriad variations offered by each device, in a (moderately) consistent manner.
Now, let’s start looking at the tricks and traps I have encountered on my self-taught journey into the wonderful world of KNX and ETS5:
Using ETS5 on a Mac: Making it work with VMWare Fusion
ETS5 is Windows software. I use a Mac.
I am running ETS5 on Windows 10 by using VMWare Fusion, which turns the Windows environment into a window on my Mac. This works really nicely, apart from one particularly frustrating and non-obvious trap related to ‘Network Settings’.
Out of the box, VMWare Fusion configures the Windows virtual “Network Adaptor” using NAT. This means that your Windows instance has no direct TCP/IP connection to your Mac, but rather that it is sharing the IP address that your MacOS system uses.
Far from being a good thing – with ETS5 this is a very bad thing.
That’s because ETS5 relies on being on the same physical LAN segment as any IP-connected KNX devices (KNX ‘Interfaces’) that you want to use.
If you leave the VMWare environment set up using VMWare’s internal NAT then ETS5 cannot ‘see’ any of your TCP/IP-based KNX access devices at all – it is as if they just don’t exist.
You can’t make them turn up on the ETS5 interface selection page… nothing works. Even worse, there are no error messages to help you figure out what is going on, either… it’s just that nothing works as expected.
Fortunately, the solution is simple.
In VMWare fusion, go to the Network Adaptor settings for the Windows instance, and change it to use ‘Bridged (Autodetect)’ instead.
That is all there is to it! Now your Windows instance reaches out and obtains its own TCP/IP address on the LAN, giving it a distinct (and direct) network identity. Now it can see the LAN environment properly, including (in particular) the IP Multicast packets that KNX relies upon to work properly.
Fire up your ETS5 software again, and now all your KNX interface devices magically autodetect and work as expected. Win!
The only other thing of operational consequence in using VMWare Fusion to run the Windows instance is that you need to plug the ETS5 license dongle into the Mac after activating the Windows instance in VMWare Fusion, or ETS5 doesn’t ‘see’ it.
At worst (or if you forget), just unplug and re-plug the device after launching ETS5, and ETS5 will re-scan and notice the device soon afterward (no need to quit/relaunch ETS5).
How to avoid losing the (tiny) dongle
The ETS license is implemented via a physical USB Dongle. I have some issues with that dongle.
It is tiny, the same size as those little plug-in wireless keyboard dongles. This is something I hate – because a software license worth thousands of dollars shouldn’t be deployed in something that is so very easy to lose!
(Note that if you do lose the dongle, you can get it replaced for a moderate re-issue fee – but – why make it so easy to lose the thing and suffer financial cost and replacement delay in the first place?)
The ETS dongle uses a conventional USB-B plug, and my Mac is 100% USB-C. Given how expensive the license (and hence dongle) is, I have no qualms about dedicating an Apple USB-B to USB-C adaptor for the exclusive use of the dongle.
Because of that, though, I can’t leave it plugged in to my Mac all the time.
When you license ETS5, the KNX association sends your dongle in a lovely little presentation box. The box is made of sturdy cardboard material with nice printing on it, and the dongle sits inside, in a little foam bed. Very swish.
This box is big enough that I can put it in the accessory pouch in my laptop bag without losing it. Having to open that presentation box and clip the dongle into the Apple USB-C adaptor to use it, and then having to take it all apart again after each use, was painful. I lived in fear of the dongle getting detached from the adaptor and being lost.
This led me to a pragmatic solution to the issue of the thing being too small.
What I did was to cut out a little hole in the end of the cardboard presentation box, plugged the dongle into the USB-C adaptor, closed that into the box and taped it shut (and wrote my name and number on the back).
In effect, I have created my own ‘super size’ and natively USB-C version of the ETS dongle, that is much harder to lose! This makes it much, much easier to find (and harder to lose down a drain) versus dealing with a little green dongle the size of a coffee bean.
Make a test bench to play with
It is very worthwhile getting a selection of real KNX devices as soon as you can, and starting to play. Here’s a photo of my home test bench at an early stage of my own exploration:
What I did soon afterwards was to segment my KNX environment between that test bench and some pre-existing KNX equipment that was installed into my house some years ago by someone else (to implement an underfloor heating system). I segmented it using KNX/IP routers (more about those in a later blog post).
This means my test bench can be turned off and on, or futzed with in general, without breaking the production environment in the house. However, because I am using a couple of KNX/IP routers (one in the production setup, and one on the test bench), I can still reach back and forth between the deployed hardware and the test bench to try things (e.g. making a switch on my test bench drive a real world gadget somewhere else in the house).
Scenes are an excellent concept in KNX – and it seems to me that driving rooms (and outcomes) via scenes is much more rational than having a forest of individual light switches and/or dimmers, even if all you are adjusting is lighting.
That said, once you realise a scene can drive outcomes across multiple types of actuators at once (lights, climate control, blinds, locks, audio systems, you name it)…that is the real lightbulb moment, starting as simply as the notion of having one button at the building exit labelled ‘Home/Away’.
There’s a key (and quite useful) concept related to scenes, called “Teach-In”. More about that later.
The ETS5 menu structure for device Parameter configuration and adjustment is highly dynamic – sub-menus come and go depending on other menu selections
The Theben TA x S binary input device (you can buy them in various values of ‘x’, e.g. 2, 4, 6 or 8) is a great device.
This was the first device I tried to get the hang of configuring using ETS5. When I started trying to do that, I just could not work out how to send anything but binary (single bit on/off) outcomes from it.
What I really wanted to do was to make a row of buttons that are ‘scene selection’ buttons. I made up a nice metal box with a row of pushbuttons on it, I wired them into a TA 4 S unit, and I set out to make each button select a scene number (1 through 4).
In the first instance, when I tried to program it in ETS5 – moving to the “Parameters” tab for the device – I just could not work out how to send a scene number with this device.
“Out of the box” the unit let me drive the switches as binary devices (one bit per input) only. It has lots of flexibility about how that works (in terms of whether the bit sent represents absolute switch position or a toggling value, various de-bouncing parameters, etc etc). All lovely, but I wanted to send scene numbers, and they were just absent from the menu structure entirely.
Finally I figured out the subtle quirk (in my view) of the ETS5 interface, and it is this:
When you are looking at a Parameter menu for a device, take note of whether there is a ‘+’ or a ‘-‘ to the left of the menu concerned. If it is a ‘+’ then there is a hidden sub-menu, waiting for you to discover it, by double-clicking on the menu concerned.
In my case, realising that and opening the button menu revealed a sub-menu that let me chose the type of data to be sent in response to input changes. Yay!
This situation can continue through multiple levels…there can be rabbit holes within rabbit holes.
I discovered yet more sub-menus allowing me to configure other cool things to do with that TA 4 S device.
The crux, then, is that there is a pandora’s box in the Parameter system, and you just need to know to look for it. Changing parameter settings can bring additional sub-menus dynamically into existence, related to the thing you just changed… it’s an exploratory process, and once you know how to open the door to deeper levels… keep doing it!
With that successful discovery made, I programmed my button box to send scene numbers 1, 2, 3 and 4 in group telegrams in response to pressing buttons 1, 2, 3 and 4. I downloaded the configuration into my devices and started pressing buttons… and… weird stuff happened (!).
This led me (after significant head-scratching) to the next discovery:
Scene Numbers officially start at 1 but they have an underlying Index Origin of 0
If your device supports setting or recognising Scenes using the data type ‘Scene’, then you have 64 scenes to choose from, numbered 1 through 64.
However, ‘under the hood’, the Scene numbering that is actually sent ‘on the wire’ is a value from 0 to 63.
This means ‘Scene 1’ is actually sent as an unsigned 8 bit value of ‘0’, Scene 2 is sent as a value of ‘1’, and so on.
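The off-by-one mapping can be captured in a pair of tiny helpers – purely illustrative, with names of my own invention, since there is no official API involved here:

```python
def scene_to_wire(scene: int) -> int:
    """Human-facing Scene number (1-64) -> on-the-wire byte (0-63)."""
    if not 1 <= scene <= 64:
        raise ValueError("KNX scenes are numbered 1 to 64")
    return scene - 1

def wire_to_scene(value: int) -> int:
    """On-the-wire byte (0-63) -> human-facing Scene number (1-64)."""
    if not 0 <= value <= 63:
        raise ValueError("wire values run from 0 to 63")
    return value + 1

print(scene_to_wire(1))  # 0  -- 'Scene 1' travels as a zero byte
print(wire_to_scene(3))  # 4  -- a '3' on the wire selects Scene 4
```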
The source of confusion here is that some devices (like the Theben TA x S units) don’t seem to allow you to send a ‘Scene number’ as a data type (if you can, I haven’t found it yet). They do let you send a “Value” perfectly happily though (as an 8 bit unsigned byte).
Devices that receive scene numbers, and that describe them as a Scene, then operate on the received byte as a scene number starting from 1. Hence when configuring an actuator to respond to, say, Scene 4, you select “Scene 4″… and that scene activates when a ‘3’ comes in over the wire.
Understanding this, at last, I reprogrammed my TA 4 S to send ‘Value’ bytes of 0, 1, 2 and 3 for my four buttons. When my group telegram packets then landed on the actuators I had configured to respond to Scenes 1, 2, 3 and 4… the right thing started to happen at last.
I expect this could also be fixed by tweaking the data type for the TA 4 S unit in the ETS5 setup for the device, to make sure the TA 4 S ‘knows’ that what is being sent is a scene (and thus avoid this confusion). In the end it doesn’t matter, provided you understand the underlying issue.
What the heck is “Teach-In”?
Teach-in is a pretty cool concept, but some Googling on my part failed to turn up a good explanation of what that really meant in practice. Various KNX product data sheets mention that their Scene logic supports Teach-In, but they mention it as if it is an axiomatically understood concept for the reader. Well, it wasn’t at all obvious to me.
One way to think about Teach-In is as if Scene numbers are (in my case, literally) a row of numbered buttons on a button box. A way to think about that row of buttons is to compare them to the row of ‘station selection’ buttons on an old-style car radio. Just hit a button to recall a previously saved radio station frequency, to save manually adjusting until you bump into it.
To continue the car radio analogy – if you want to set up one of those channel buttons to select a station, you first tune in the station manually, and then you press-and-hold the button concerned for a few seconds, which locks the current station in to the button you are pressing right now. In other words, long-press means ‘save station here’.
Teach-in, it turns out, is the analogous thing in KNX!
Teach-in is the way to update (save) the current actuator settings back into a scene number in those actuators ‘on the fly’. This can be far better than statically programming them in ETS5 and hoping that they somehow come out ‘perfect’ in your real-world building (and that the occupants’ needs won’t evolve over time).
If your actuator(s) support Teach-In, then this is how Teach-In works:
– Adjust settings on various actuators in some way other than via a scene change. This might be by using manual control buttons on the actuator (if present), and/or by using other KNX sensors to individually change settings, and/or by using a KNX whole-of-building control panel or an app to adjust individual lights, sounds, blinds, whatever, to be ‘just how you want them’
– Once you have your room and/or entire building ‘just the way you like it’, you can save this entire setup into a Scene number across all the relevant actuators by sending a group telegram to those actuators containing the scene number plus 128 (i.e. with binary bit 7 in the byte ‘set’).
Hence to update (re-save) the current actuator configurations into Scene number 6, you would send a group telegram specifying Scene number 6+128=134 (or if sending Values, that would be 5+128=133) to the Scene selection element of the actuators concerned. Bingo – you’ve saved your current Scene state away for future use!
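That arithmetic is simple enough to capture in a couple of lines of Python – a sketch of the byte values only, with helper names of my own:

```python
LEARN_BIT = 0x80  # bit 7 set = "store current state into this scene"

def recall_value(scene: int) -> int:
    """Wire byte that recalls Scene `scene` (1-64): the scene number minus one."""
    if not 1 <= scene <= 64:
        raise ValueError("KNX scenes are numbered 1 to 64")
    return scene - 1

def teach_in_value(scene: int) -> int:
    """Wire byte that re-saves the current actuator state into Scene `scene`."""
    return recall_value(scene) | LEARN_BIT

print(recall_value(6))    # 5   -- recall Scene 6
print(teach_in_value(6))  # 133 -- matching the 5 + 128 arithmetic above
```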
Once I understood this, some nice features back on the Theben TA 4 S suddenly made sense:
By opening up yet more of those hidden sub-menus in the Parameters section for the TA 4 S, it turns out that you can program each button to send three distinct Value numbers based on ‘how’ you press the button!
You can send a distinct Value from each button depending on whether you (1) short-press the button; (2) long-press the button (i.e. press-and-hold); or (3) double-tap the button.
Understanding this, back on my test bench, here is what I did to prove it up:
I programmed the TA 4 S in my button box to send 0, 1, 2, and 3 in group telegrams to my actuators, for short presses of buttons 1, 2, 3 and 4 (Scenes 1 to 4)
I programmed it to send the values 128, 129, 130, and 131 in response to a long press (Scenes 1 to 4, plus 128)
I also programmed all four inputs to send the value 4 (i.e. select Scene 5) in response to a double-tap on any of those inputs. I programmed Scene 5 in all of my actuators to mean ‘turn everything off’.
And voila – a nice demonstration of Teach-In:
– Press a button to select a scene.
– Adjust manually by other means, and then press-and-hold any button to store the current actuator settings back into that button (car-radio style) – nifty!
– Double-tap any button to turn everything off (Scene 5) – just to demonstrate an outcome for that third way to use the very same buttons.
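The test-bench programming above can be sketched as a simple mapping – illustrative Python only, since the real mapping lives in the TA 4 S parameters, not in code:

```python
SAVE_BIT = 0x80        # bit 7 set = Teach-In ('save current state here')
ALL_OFF_SCENE = 5      # Scene 5 programmed in the actuators as 'everything off'

def button_value(button: int, press: str) -> int:
    """Value sent on the group address for buttons 1-4 of the test bench."""
    scene = button                        # buttons 1-4 map to Scenes 1-4
    if press == "short":
        return scene - 1                  # recall the scene (zero-based value)
    if press == "long":
        return (scene - 1) | SAVE_BIT     # teach-in: same value plus 128
    if press == "double":
        return ALL_OFF_SCENE - 1          # recall Scene 5 (value 4)
    raise ValueError(press)
```

For example, a short press of button 2 sends 1 (recall Scene 2) while a long press of button 2 sends 129 (save into Scene 2).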
In the next Part, I’ll discuss some tricks and traps around the selection/purchase and programming of KNX/IP routers (and what is, and is not, a ‘router’ in the KNX world).
I’ll also give you a tip on how to deal with a KNX device that is physically inaccessible – one where ETS5 really wants you to press the ‘programming’ button again before it will change anything, but you just can’t do that (because you don’t know where the device is, or because you do know, but you can’t physically ‘get to it’).
Ken Thompson has an automobile which he helped design. Unlike most automobiles, it has neither speedometer, nor gas gauge, nor any of the numerous idiot lights which plague the modern driver. Rather, if the driver makes any mistake, a giant “?” lights up in the center of the dashboard. “The experienced driver”, he says, “will usually know what’s wrong.”
(Source: BSD Unix Fortune Program)
I recently managed to install a current Windows 10 distribution onto an older iMac that I had in storage. I wanted to set up this machine to run some specific Windows software for which it was well suited, and that let me make good use of an otherwise idle machine.
The iMac has a then-fastest-around 2.9GHz CPU and features the (then) latest and greatest storage innovation, the ‘Fusion Drive’. This is a small SSD blended with a 1TB hard drive. The Fusion Drive was designed to combine fast-but-expensive SSDs with slow-but-cheap hard drives, before SSDs got so cheap that the hard drive became almost irrelevant.
My intention was to install Windows 10 using Bootcamp, with an arbitrary 50/50 split of the 1.1TB Fusion Drive.
At the start of the fateful weekend concerned, I recall thinking ‘how hard can this be?’ because I’d installed Windows using Bootcamp on my current-generation MacBook Pro (with a big SSD) with zero issues at all.
Turns out the answer is: ‘Very Hard’.
I had to get past multiple ‘I should give up because there is no apparent way around this, and the error message gives me no help at all’ situations, spread across what became an entire weekend of trial-and-error and repeated fruitless attempts at things that took ages, punctuated with just enough ‘ah hah’ moments and clues found via Google to keep me going…!
I didn’t find the entire list of challenges I faced in any single web site, so I have decided to write my discoveries down here, in an ‘integrated’ manner. Each of these issues represents some hours of repeated head-banging attempts to get past it that I hope to save you, dear reader, from repeating.
I am assuming in the below that you know how to do a Windows installation using Bootcamp (or are prepared to work that out elsewhere). This isn’t a guide to doing that – it’s a guide to why the process failed – and failed, and failed, and failed – for me.
Each item below starts with a headline that frames the fix – so if you mostly just want to get it done – just dance across those headlines for a fast path to a working result.
You really need to be running Mojave (Mac OS X 10.14)
I fired up Bootcamp under the OS on the machine at the time – Mac OS X 10.13 – and it said it could install Windows 7 or later. Well, I wanted to install the latest release of Windows 10, and that’s ‘later’, right?
On this model of Mac you need to use an appropriately large (16GB or more) USB stick. Bootcamp writes the Windows 10 install ISO you’ve downloaded by now (you have, right?) onto that USB stick and turns that into a bootable Windows install drive (including throwing the ‘Bootcamp’ driver set onto it, to be installed into the Windows image once the base install is done).
Well, I plugged in a 16GB USB stick (actually, I tried several sticks ranging from 8GB to 32GB, fruitlessly). In each case, after scratching around for ages, Bootcamp failed with an error message saying that my USB stick wasn’t large enough.
Some Google searching turned up the key information here – recent Windows 10 ISOs are large enough that they cross an internal 4GB size boundary, which in turn leads to Bootcamp not being able to cope with them properly.
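As far as I can tell, the boundary concerned matches the FAT32 per-file size limit on the bootable USB stick. A quick illustrative check (`fits_fat32` is my own helper name, not anything Apple or Microsoft ship):

```python
# FAT32 (the format used for the bootable USB stick) caps any single
# file at 4 GiB minus one byte; recent Windows 10 images cross that
# line, which is what older Bootcamp versions can't cope with.
FAT32_MAX_FILE = 4 * 1024**3 - 1   # 4,294,967,295 bytes

def fits_fat32(size_bytes: int) -> bool:
    """True if a file of this size can live on a FAT32 volume."""
    return size_bytes <= FAT32_MAX_FILE
```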
The answer looked to be easy – upgrade to Mojave.
Ok, annoying but straightforward. Cue the download and install process, and come back in several hours…
You also need to update Mojave to the very latest version
Turns out that the build of Mojave one downloads from the App Store isn’t the very latest version (why isn’t it the very latest version? Beats me!).
Bootcamp on the base release of Mojave says it can install Windows 10 or later (not ‘Windows 7 or later’). Yay – that suggests the bug has been sorted out – after all, it mentions Windows 10!
Sorry, but no. Same failure mode, after the same long delay to find out. Argh!
More Googling – turns out the bug didn’t get triggered until some very recent Windows 10 builds, and the base Mojave build still had that (latent) bug when it was released.
Next step is, thus, a Mac OS update pass to move up to the very latest Mojave build, including a version of Bootcamp with the issue resolved in it. This is in fact documented on the Apple support site (if you own 20:20 hindsight).
You may need to back up, wipe and restore your entire Mac OS Drive before Bootcamp’s Partitioning phase will succeed
This one was painful.
After Bootcamp managed to set up my USB stick properly, and managed to download and copy on the Bootcamp windows drivers in as well, it then failed to partition the drive successfully (the last step before it triggers the Windows installation to commence).
As usual, the error message was useless:
Your disk could not be partitioned: An error occurred while partitioning the disk. Please run Disk Utility to check and fix the error.
The problem here is that I did run Disk Utility to check and fix the error, and no error was fixed!
The Disk First Aid run came up clean – said my disk was fine.
I tried booting from “Recovery Mode” and running Disk First Aid again – nope, still no error found or fixed.
Time to dive deeper – open up the display of detailed information (the little triangle that can be used to pop a window of debug text) during the underlying fsck…
…One tiny clue turns up – a succession of warnings (not failures) in the midst of the checking process, involving something about ‘overflows’. Turns out that Disk First Aid (‘fsck’, really), within Disk Utility, doesn’t fix these issues – it just declares the disk to be ok and finishes happily despite them.
Disk Utility can even partition the drive just fine – but the Partition function in Bootcamp itself … fails.
The fix turns out to be annoyingly radical: Do a full system backup, and then do a full system restore.
So – break out a spare USB hard drive to direct-connect (less angst and potentially higher I/O rate than doing it over the network). Use Time Machine to back up the whole machine to that local storage, then boot in recovery mode and restore the system from that drive again.
This takes… a long time. All day and half the night.
However – it helped! When I tried yet once more, after this radical step… now the Bootcamp partition step works – huzzah!
And then Windows 10 starts to install itself at last – huzzah!
In the windows installer, you may need to format the partition designated for Windows
Once Windows starts its install process, it reaches a point where it displays all drive partitions and asks you to pick the one to install Windows onto.
Merely selecting the right partition (the one helpfully labelled BOOTCAMP) doesn’t work. It fails, saying the partition is in the wrong format.
It seems that for some inexplicable reason Bootcamp has left the intended Windows partition in the wrong state as far as the Windows installer is concerned.
The fix is to bravely select the partition concerned (again: it’s helpfully labelled BOOTCAMP)… and hit the ‘Format’ button to reformat it. Then you can re-select it – and the installation now starts to work – yay!
Use a directly attached USB keyboard when the wireless Apple Keyboard stops working
This one is self-explanatory. My Apple wireless keyboard didn’t work in Windows.
I thought I’d just need to load the Bootcamp drivers to fix that but – not so fast! (see the next issue, below).
Meantime I just switched to a wired keyboard – ironically the one I found in my storage room was a genuine Microsoft branded one with lots of useful extra function keys on it.
I’ve been perfectly happy to just stay with that – especially noting the next issue.
Remove/Rename a magic driver file to avoid Bootcamp support causing a Windows “WDF Violation” Blue-Screen-Of-Death a minute or so after Windows boots
Well, with Windows ‘up’, I installed the Bootcamp Mac hardware support drivers. This is important for all sorts of reasons (including WiFi not working until you do).
I did that by selecting the (still mounted/attached) USB installer stick and running ‘Setup’.
The installation of drivers worked fine.
What didn’t come out fine was the unintended consequence.
Once the Bootcamp hardware support was installed, Windows started crashing a minute or so after each boot up, with a “WDF Violation”.
You can log in and start working – just – and then ‘bang!’ – sad/dead Windows.
After everything else (and one and a half days of this stuff) – this was really frustrating.
Cue yet more googling – and at least this one seemed to be an ‘understood’ issue.
It appears to be the case that the wrong version of a crucial driver file (keyboard support related, by the looks of it) is loaded in by Bootcamp when installing onto this particular generation of iMacs.
The fix – after I found it – involves booting Windows in diagnostic mode and disabling that driver file.
Even getting into that diagnostic mode is a challenge… it turns out that you don’t reboot holding down the shift key for ‘safe mode’ in Windows any more – that would be too easy…
…Instead, now you boot up and then select restart… and while doing that restart, you hold down the shift key. You then wind up with the opportunity, during the reboot, to access diagnostic functions.
Sure, that’s obvious… not.
Anyway – once booted in diagnostic mode, select to bring up a ‘DOS’ command window.
Now select drive C: and then locate and rename (or delete) the errant driver file concerned (C:\Windows\System32\Drivers\MACHALDRIVER.SYS) as per this screen shot:
One trap to watch out for: Make sure you’ve changed to drive C:, and that you’re not still on drive ‘X:’ looking for that file.
That drive – which you start out on when bringing up the command window – contains a whole separate copy of Windows… without the Bootcamp files on it. So you think you’re searching in the right filesystem – after all, Windows is on it… but you aren’t.
I guess that’s a consequence of using the Diagnostic mode, but it fooled me for a while, as I was trying to find the errant driver file there (on drive ‘X:’) at first…and failing to do so.
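For reference, the rename itself is trivial – here it is as an illustrative Python helper (`disable_driver` is my own name; in the actual recovery prompt you would just use `ren`):

```python
from pathlib import Path

def disable_driver(path: str) -> str:
    """Rename a driver file out of the load path, the equivalent of
    'ren MACHALDRIVER.SYS MACHALDRIVER.SYS.disabled' in the recovery
    command prompt. Renaming rather than deleting keeps a copy around
    in case the driver turns out to be needed after all.
    Returns the new path."""
    p = Path(path)
    target = p.with_name(p.name + ".disabled")
    p.rename(target)
    return str(target)
```

i.e. run it (or the `ren` equivalent) against `C:\Windows\System32\Drivers\MACHALDRIVER.SYS` – after double-checking that you really are on drive C:, not X:.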
Now reboot and – yay – no more WDF blue-screen-of-death failures.
… but also, no bluetooth keyboard support.
No problem to me – I really prefer the direct-attach larger keyboard I found with all the Microsoft specific buttons on it anyway, for this task.
Contrary to warnings on the web sites that had helpfully pointed out the incorrect/broken MACHALDRIVER.SYS file issue, I have had no practical issues with volume control or similar things as a consequence of disabling that file.
For me, it all seems to work fine without this file in my life at all.
At this point, I have a working Windows 10 installation on my machine.
I have subsequently installed the software I wanted to run in the first place and it’s all working just fine.
I do hope someone else finds this useful – and that if you do go down this road, that you have a smoother ride than I did! 🙂
Today Redflow announced the appointment of John Lindsay as a non-executive director of Redflow Limited. John has deep skills and experience around technology and technology related business matters. He is, to use a favourite phrase (for us both), ‘smart and gets things done’.
It’s worth appreciating that John has specific expertise and experience in precisely the realms that Redflow needs. I sent John over to Brisbane when I originally invested in Redflow, to help me assess the technical merit of the technology. He, like me, has been a shareholder in Redflow ever since.
In addition to being a great businessman, John is also a technology geek at heart (as am I). He has been an active member of the electric vehicle and renewable energy community for many years. His daily driver is electric (as is mine) – of course. He knows which end of a soldering iron is the hot end.
His idea of a fun weekend hobby was (literally – and recently) to set up a D.I.Y. solar and battery off-grid system in his own garage to charge up his electric car from renewable energy because… he can (and because he knows how to).
His appointment frees me up to transition my own head space in the Redflow context totally into the technology around making our battery work in the real world. Doing that stuff is what I really love about being involved with Redflow. I love helping to make this amazing technology sing and dance smoothly for real people, solving real problems.
The technical lever I designed, to help Redflow to move this particular part of the world, is the Redflow Battery Management System (BMS). I am very proud of the great work done by the technical team at Redflow who have taken many good ideas and turned them into great code – and who continue to do that on an ongoing basis.
So… while there can be a natural tendency, when looking at this sort of transition, to wonder whether my leaving the board (given how influential I’ve been at board level in the last few years) is because something ‘bad’ is happening, or because I don’t like it any more, or because I don’t feel confident about things at Redflow, the reality is precisely the opposite.
My being happy to step back from board level involvement over the next few months is the best possible compliment that I can give to the current board, led by Brett Johnson (and now including John), and to the current executive (now ably led by Tim Harris).
I’ve put my money where my mouth is, to a very large extent, with Redflow. I am its largest single investor – and I have also put my money down as a customer, too, in my home and in my office.
At this point, I’m happy to note that we are seeing great new batteries turning up from our new factory. We are on the verge of refreshing our training processes to show our integrators – and their customers – how far the BMS and our integration technology has come at this point (and just how easy it all is, now, to make the pieces work). We are looking forward to the integration industry installing more of our batteries into real world situations around the world again – at last.
We do this with confidence and we do this with eagerness.
I am proud to be a shareholder in Redflow and I look forward to the next chapter of this story.
At the Australian Energy Storage conference held in Adelaide, South Australia on May 23-24 2018, I delivered this keynote address about the role of flow batteries and other energy storage technologies in the context of building an energy grid with renewable energy in the majority and with “Baseload” generation on the wane.
The core thematic question I posed was this: will a future grid with large amounts of renewable energy necessarily use Lithium-Ion (or other, otherwise conventional) battery systems for the majority of its large scale energy storage – or are there better ways?
A specific underlying aspect of that conversation is environmental impact – the notion that ‘environmentally friendly’ energy generation and storage must factor in the ultimate environmental impact of each storage technology and not just its up-front cost.
The video below is a recording of my address synchronised to the slide deck that I used.
Updated Feb 2019: System now operating at full battery capacity and with increased solar array size
The Base64 energy system has been a fantastic learning experience for us in general and me in particular.
The system is built around a large Redflow ZBM2 battery array. We call these configurations an “LSB” (Large Scale Battery). It is charged with solar energy harvested from a large solar array (most of which is ‘floating’ above the staff carpark).
We first deployed it some time ago, before we had become so deeply experienced with using Victron Energy inverter/charger systems. At the time we (Base64) purchased a big custom industrial AC inverter that came with no monitoring or logging system, and no control system to drive it to interact properly with on-site solar.
All of the necessary energy system control, management and data logging technology comes ‘out of the box’ with the Victron Energy CCGX controller unit in a Victron installation, so I imagined ‘everyone’ provided such things. Well, I was wrong about that.
The big industrial unit we bought came with nothing but a MODBUS programming manual and created a lot of head-scratching along the lines of… ‘now what?’. For some reason, industrial scale systems are in the dark ages in terms of the stuff that Victron Energy have ‘nailed’ for the residential/SOHO battery market – Victron supply great, easy to use, easy to understand, effective and powerful out-of-the-box energy system control software and hardware (centered around their CCGX/Venus system), along with an excellent (no extra cost) web-accessible portal for remote data logging, analysis and remote site system control.
Meantime, we were exercising our large battery ‘manually’ – charging and discharging it happily on a timed basis to prove it worked – but we were unable to run it in a manner that properly integrated it with the building energy use, for the lack of that control system in the inverter we had at the time. We didn’t want to write one from scratch just for us – that’d be a bit mad. We also didn’t want to pay someone else thousands of dollars to set up a third party control system and make it work – a major consulting project – just to do what the Victron Energy CCGX does on a plug-and-play basis at very low cost.
In parallel, and importantly – it also took ages to get substantial on-site solar operating at Base64 – and there wasn’t much point in driving the LSB in production until we did have a decent amount of on-site solar to sustainably charge it with.
To the latter point – we are in a massively renovated and reworked heritage listed building, and I was unable to get permission to mount solar on the massive north-facing roof of the main building.
Instead we commissioned a rather innovative mounting system that has (at last) let us complete the installation of a 50kWp solar array that literally ‘floats’ above our staff car park on four big mount poles supporting what we call ‘trees’ – suspended metal arrays holding the solar panels up.
That system was commissioned and imported from a company called P2P Perpetual Power in California to suit our site. There are lower cost systems – but (by comparison) they’re ugly. We wanted it to be beautiful, as well as functional – because Base64 in all other respects is…both of those things.
It was worth the wait.
The result is (in my humble opinion) quite spectacular.
Including that ‘floating’ 50kWp array, we have a total of 99kWp of solar on the site, though some of the rest of it is on ‘non-optimal’ roof orientations, so on a good day we see around 80kW generated at peak in the high (solar) season.
That said, the advantage of some other parts of the solar system being on east and west facing rooftops is that our solar generation curve runs for more hours of the day. We get power made from earlier in the day (from the eastern array) and later into the evening (from the western one) – and that’s quite helpful in terms of providing a solar energy generation offset to local demand patterns.
In parallel, we pulled the LSB apart and rebuilt it using Victron Energy products and control systems, so that we could get a fantastic operational result and have optimal use of the solar energy to drive the building, charge the batteries, and support the building load at night – the very same stuff we do in houses with our batteries, just on a bigger scale – without facing a one-off software development exercise for the old proprietary inverter system we had been using.
Swapping the Victron Energy gear in has turned out cheaper and far better than the bespoke software exercise would ever have been. It has also created a signature example of a large scale Victron Energy deployment running a decently sized multiple building site. I hope that this, in turn, may inspire more of the global Victron Energy installation community to consider the use of Redflow battery technology at this sort of scale.
The battery array is built with 45 x ZBM2 = 450kWh of Redflow energy storage.
We have 72kWp of Victron inverters installed right into the container as well. We could have gone larger (in terms of peak inverter power), but these have been ‘right-sized’ to the building demand at Base64, with summer peaks normally around 60kW (75-80kW worst case) and typical draw around the 30-40kW level when the building complex is in daytime operation.
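A quick sanity check of those sizing figures (assuming the nominal 10kWh per ZBM2 that ‘45 x ZBM2 = 450kWh’ implies, and the 12kW-per-Quattro figure from the gallery below):

```python
# Base64 LSB sizing, per the figures in this post.
ZBM2_KWH = 10                        # nominal energy per Redflow ZBM2
storage_kwh = 45 * ZBM2_KWH          # 45 batteries -> 450 kWh of storage

inverter_kw = 6 * 12                 # six 12kW Victron Quattro units -> 72kW

# 'Right-sized': 72kW comfortably covers the typical 30-40kW daytime
# draw and the ~60kW summer peaks. At a typical 35kW load, the battery
# alone could carry the building for roughly storage_kwh / 35 hours.
typical_kw = 35
hours_on_battery = storage_kwh / typical_kw
```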
It is all linked to that 99kWp distributed solar array via multiple Fronius AC solar inverters.
I’m thrilled with how well the system is working – it’s a monument to all of our Redflow BMS development work that the whole thing – at this scale – really is ‘plug and play’ with the Victron CCGX energy system controller and the associated inverter/charger equipment.
It is very satisfying to run an office in the middle of a major city that typically uses very little grid energy, that is resilient to grid faults, and that even still exports solar energy to the grid as well.
A subsequent step will be to interface with a grid energy ‘virtual power plant’ operator in the future, so that we can sell battery energy back to the grid during times of high grid demand.
Every battery system on an energy grid has the potential to also become a programmable grid-supporting energy source during peak load periods. The missing links are software, regulation, and attitude – with the software part being the easiest of the three.
We can easily set up proactive control over when the battery charges and discharges in response to, for instance, the wholesale market price. The Victron control system makes that easy. What’s needed to give that project legs is an innovative retailer who will work with us on it, plus a small amount of software ‘glue’ to make it happen on our local site.
Here is a little gallery of photos of the system that we’ve installed – click through them for a little more information about the system.
Image from the Victron VRM portal for our site
Test run of a 15 x ZBM2 array in the Base64 LSB
72kW peak over three phases using 6 x 12kW Victron Quattro 48/15000 units.
The primary solar array ‘floating’ over the Base64 staff carpark
Main 50kWp solar array over carpark with additional arrays on other rooftops. Further solar has subsequently been added.
I installed an Internode NBN HFC service in an apartment a few months ago. It comes with a Huawei HG659 router, attached to the NBN standard issue Arris HFC cable modem.
I really don’t like that router. It’s got some negative characteristics – including a DHCP server that can get itself confused and conspire to keep handing out a conflicting IP address on the active network. I also much prefer using Apple Airport Extreme base stations for WiFi networking rather than the built-in stuff in routers of that ilk (let’s call them ‘low cost and cheerful’) – especially when I’m running multiple WiFi base stations (as is the case at the site concerned).
I’ve had great success at another site using Internode NBN via Fixed Wireless by just configuring the PPPoE client into the Airport Extreme and plugging that straight into the incoming connection from the Fixed Wireless NTD. That worked like a charm, and eliminated a similarly ‘cheerful’ router in that circumstance. However, this simple approach just didn’t work on the NBN HFC connection – configuring the PPPoE client in the Airport Extreme and plugging it into the Arris HFC cable modem directly led to no joy.
Each NBN ISP has some choice over how the HFC based NBN connection gets deployed to their customers. Some digging turned up the data point that the Internode service delivered via the NBN-Arris HFC modem is implemented as two ethernet VLANs, with VLAN 1 delivering the bundled VoIP fixed line phone service and with the Internet service delivered over VLAN 2.
There is no way to configure the use of an upstream VLAN in the Airport Extreme – it expects the PPPoE frames to turn up natively (with no VLAN tagging).
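To illustrate what the HG659 is doing in its ‘decoder’ role, here is a sketch (illustrative only – not any router’s actual code) of pulling the 802.1Q VLAN ID out of an ethernet frame; this is the tag the Airport Extreme simply cannot parse:

```python
import struct

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an ethernet frame, or None if the
    frame is untagged. On this service, PPPoE for the Internet path
    arrives tagged VLAN 2 and VoIP arrives tagged VLAN 1; the Airport
    Extreme expects untagged PPPoE frames, hence the HG659 sitting in
    between to strip the tag."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != 0x8100:       # 0x8100 marks an 802.1Q tagged frame
        return None
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF           # VLAN ID lives in the low 12 bits
```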
Some more digging and the solution emerged, namely to keep the Huawei HG659 in the picture but use it merely as an ethernet VLAN decoder. In that role, its job is so simple that it can do it without losing the plot.
and… it works (yay!)… but there are – of course – wrinkles 🙂
The steps involved should have been this simple:
Configure the HG659 using its wizard to ‘connect with another modem’. This is what the HG659 uses as its description for bridging the incoming VLAN to the local LAN ports.
Keep the HG659 WAN port connected to the Arris HFC modem (obviously)
Cable the Airport Extreme WAN port into one of the LAN ports of the HG659
Using the Airport Utility on a Mac, configure your PPPoE account details into the Extreme (Internet tab, select PPPoE and then fill in the username and password, leave the ‘Service Name’ blank)
However, this is what I also had to do (all in the Airport Utility)…
The DHCP IP range configured into the Airport Extreme needed to be changed (at least, I needed to change it, to make things work – YMMV). I switched it from its default of the 10.x range, and instead set it to use NAT on the 172.16 range (Network tab, Network Options button, IP v4 DHCP Range drop-down)
I had to turn off IPv6 entirely to avoid an ‘IPv6 Relay’ error coming up (Internet tab, Internet Options button, Configure IPv6 drop-down set to ‘Link Local Only’).
Turn off ‘Setup over WAN’ to avoid an alert coming up on the Airport Utility and the base station light flashing amber (Base Station Tab, clear the “Allow Setup over WAN’ check box). The point here is to explicitly disable the capacity for the Airport Extreme to be accessed (by the Airport Utility) over the WAN path. That’s definitely something I want disabled. My only issue here is that I’m surprised this checkbox is actually on by default in the first place!
One more bit of collateral damage here is that I probably can’t access the free VoIP phone service delivered over HFC VLAN 1 and out via the analog port on the HG659. I don’t care, I wasn’t interested in using it in the first place. It may well be the case that some cunning manual configuration of the HG659 could make that work (too) – but I really don’t care about it – so I just haven’t tried.
The one silly thing left out of all of this is that I didn’t get rid of any physical devices in the process, so I have this conga line of three hardware devices between the cable modem wall plate and the user devices in the site – the Arris HFC modem, the HG659 (now as a VLAN 2 decoder box only) and the Airport Extreme (as the site router plus central ethernet switch to some downstream Airport devices).
Speed tests are just as good as they were already, with downstream rates testing reliably in the mid-90s (Mbps) and upstream in the high 30s – pretty darned good (especially through that crazy hardware conga line) on a 100/40 Internode connection. Importantly, the issues I had with the HG659 router and DHCP are gone.
The Internode NBN HFC service is in fact deployed on TPG infrastructure, so the above should apply equally to a ‘native’ TPG NBN service too. This also explains why IPv6 doesn’t work (sniff).
The VoIP service should still be capable of being used, perhaps with some custom configuration of the HG659, and I may try to find a way to make that work, just for the sake of the challenge.
A router such as a FritzBox, which is capable of VLAN decoding on the WAN port, should be able to deliver the Internet service directly via the Arris HFC modem without using the HG659 at all (eliminating one device). It’s also possible the FritzBox may be smart enough to support logging in to the voice service via WAN VLAN 1 as well… and that is something to try out another day…!
Arris HFC Modem
Apple Airport Extreme
Postscript: There is another approach to the removal of the Huawei device from the critical path that has been pointed out to me on another blog – here. This won’t work with the Airport, but it is a way to allow a FritzBox, a high end Billion, or another router with WAN port VLAN support to be used for the Internet path instead of the HG659, leaving the HG659 functional as well – in parallel – to provide the voice port service that is bundled in with the Internode NBN HFC service. The benefit here is for people who do want to use that bundled voice service while also removing the HG659 from the critical path in Internet access terms. While it does need yet more hardware (an ethernet switch) – it’s a really creative and effective answer that might be very helpful for others to know about!