250 Megabit Internet at Home

The Need For Speed

While nothing special in some other countries (like… New Zealand… let alone the USA), a better-than-100-Megabit Internet service at home has been an unattainable goal for me in the wilds of Adelaide, South Australia, until now.

I have long had access to much faster speeds at the office, via path diverse gigabit fibre links that were installed back when I owned an Internet company, but not at home.

The companies who are giving the NBN a run for their money using fixed wireless services couldn’t help me, because I live in one of those leafy streets full of those tall things that leaves grow on. Our house has no radio line-of-sight to anywhere and no way to ‘fix’ that without the use of chainsaws. Not doing that.

Wait, but Why?

Why bother? To some extent this is for the same reason that I have a Tesla Model S P100D (capable of accelerating from 0 to the open road speed limit in less than 2.3 seconds)… ‘Because’.

At least, that’s how I felt about it before I’d done it.

I have since found that there are some genuine benefits beyond mere geek bragging rights.

Our home is in the HFC network footprint. Back in December 2013 (!) I penned a blog post about how HFC (while definitely not as good as pure fibre) was still capable of speeds well over 100 Megabits per second, and definitely a dramatic improvement over (sigh) FTTN.

I don’t think I was expecting it to be a seven year wait (!) but at last, here in 2020, I have finally got there, via the very same HFC box pictured in that 2013 blog post.

To my great chagrin, I’ve not been able to obtain those > 100 Meg speeds with the ISP I founded, Internode. It seems that Internode is a prisoner of the TPG group’s apparent lack of interest in keeping up with state-of-the-art NBN home Internet speeds.

The fastest home Internet service currently offered by the TPG group companies is 100 Megabits, despite the release of higher fixed line speeds in the underlying network by NBNCo in May 2020.

This is a direct mirror of the long term TPG group decision to artificially constrain the speeds they offer on the NBN Fixed Wireless footprint, as I related recently. On the Fixed Wireless (FW) footprint, the fastest speeds being sold by Internode are 25 Megabits per second, despite NBNCo having offered Internet providers FW speeds of up to 75 Megabits per second.

The TPG group have ignored higher speed options on fixed-wireless for more than two years so far (and yes, I have asked – repeatedly), so I have little optimism for the group to return to the forefront in fixed line speeds on the NBN in general any time soon.

Time to change providers. This was a decision I was sad about because, well, I did start Internode!

The changeover process on the NBN fixed line network is incredibly smooth and simple – such a contrast to the complicated realm that Internode and others had to navigate when it came to switching between ADSL2+ DSLAM networks.

Online signup took just a few minutes. A little later the same day I got an SMS to say that I had a 250 Megabit Internet service running with Aussie Broadband.

There was no physical change to anything. I simply got an SMS message to say it was done, and without even resetting or logging into the router, the world got…faster.

In fact, I was a bit shocked at how fast it was:

Over-achieving on a ‘250’ megabit Aussie Broadband service

I’m used to real-world download speeds being lower than the ‘advertised’ line rate, because that advertised raw data rate typically includes TCP/IP packet overheads. By contrast, this service is achieving noticeably more than the advertised speed(!).

It is also amazingly consistent. At 8pm I tried again and instead of a 274 megabit per second speedtest result, I managed a ‘mere’ 273 Megabits per second. Indeed I am yet to see a speedtest result below 250.

“Nice upstream speed, kid…gonna miss it after the upgrade?”

One thing I am a bit sad about, and it is not Aussie Broadband’s fault, is the NBNCo decision to speed constrain the upstream direction on the NBN ‘250’ services to a mere 25 megabits per second. The NBNCo 100M service has a 40M upstream, and the loss of that faster upstream (and that’s what I used to have) really does peeve me a bit.

In my view, constraining the upload speed artificially is akin to a gangster charging ‘protection money’. This level of asymmetry (10:1) is a bit unreasonable when all of the underlying backhaul/CVC/etc links are full duplex (i.e. same-speed-both-ways) data paths, so the upstream pipes are mostly full of ‘air’. At the same ratio as the 100M NBNCo service, there really should be a 100M uplink speed on this service.
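For the record, here is that ratio argument as a few lines of Python – just back-of-envelope arithmetic using the plan speeds quoted above, nothing official:

    # Down/up ratios of the NBN plan tiers mentioned above (Mbps).
    tiers = {"100/40": (100, 40), "250/25": (250, 25)}
    for name, (down, up) in tiers.items():
        print(f"{name}: {down / up:.1f}:1 down/up")   # 2.5:1 and 10.0:1

    # At the 100/40 plan's 2.5:1 ratio, a 250 Mbps downstream would imply:
    print(f"{250 / 2.5:.0f} Mbps upstream")           # 100 Mbps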

Anyway – it is what it is, and in this regard I am merely a paying customer.

(My Aussie Broadband Refer-A-Friend Code is 4549606 if you feel like doing the same thing and if you’d like a $50 credit when you sign up 🙂 )

Does it matter – can you tell the difference?

It turns out that you can.

Web browsing of even content-rich sites is now visibly ‘snappier’, which isn’t earth-shattering, but it is very nice.

It is (of course) in the downloading of large chunks of data that the speed difference really comes to the fore.

I found myself downloading the latest Mac OS X release, Catalina, that weighs in at around 12.5 Gigabytes (!). I hit the ‘Download’ button on the Mac App Store and went off to make a cup of coffee, being used to this sort of thing taking a fair old while, even on a 100M link.

I came back to the Mac a little over 5 minutes later and it was fully downloaded and waiting for me to hit the ‘start’ button to do the upgrade. I had to get the calculator out to decide if that was even possible…and it is. The speeds I am achieving equate to more than 2 Gigabytes per minute of achieved payload data rate. Mercy Sakes that is quick.
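If you’d like to check my calculator work, here is the back-of-envelope conversion as a few lines of Python, assuming a payload rate of roughly 270 Megabits per second (as per the speedtest results above):

    # Convert the observed download rate into payload delivered per minute.
    mbits_per_second = 270                               # assumed from the speedtests above
    bytes_per_second = mbits_per_second * 1_000_000 / 8  # ~33.75 megabytes per second
    gb_per_minute = bytes_per_second * 60 / 1_000_000_000
    print(f"{gb_per_minute:.2f} GB per minute")          # ~2.0 GB per minute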

Another few hundred gigabytes of Dropbox folders needed to be synchronised over the Internet link into that same Mac. Sure, that took a few hours, but again it was way faster than it had ever happened before. A few hundred gigabytes.

Overall – I’m really loving this.

There is just no sense of conflict in usage by different household members, even when a few household members are streaming high bandwidth 4K HDR content at the same time (and…they really do).

Even while that Mac was chugging away in a corner, re-synchronising hundreds of Gigabytes of Dropbox folders onto its onboard SSD, the Internet service remained just lightning-fast for everyday tasks.

The Weakest Link

Back in the ADSL2+ days at Internode, we would often have to chase down apparent Internet link speed issues that really turned out to be local (in-house) issues with WiFi base stations or other in-house network issues – even at a mere 10-20 megabits per second. The state of the art in routers and wifi at the time was a lot worse than it is today.

By contrast, the 270 megabit per second down speed test results I am consistently obtaining with my shiny new Aussie Broadband service are being achieved to a laptop over WiFi on the kitchen table – not even using a wired network port (!).

I have tried again on a wired port, just to see if it was different and it was exactly the same. Somewhere between my glass-half-full blog post about HFC in 2013 and now, the rest of the home network technology concerned has comprehensively ‘caught up’.

For interest, the on-site data path is:

  1. A Ubiquiti EdgeRouter-X. This router is more than up to the speed task, rock solid and reliable, has automatic backup link failover, and the 5 port model I have at home comes in at under A$90. Incredible. This is a disruptive, excellent value device that is worthy of a separate review in its own right.
  2. An old TP-Link rack-mount gigabit switch.
  3. Multiple trusty Apple Airport Extreme base stations spread around the house, all connected on wired ethernet back to the central switch. Also well up to the task, but Apple don’t make ’em any more.
  4. My (now) 3.5 year old MacBook Pro.

I’m intending to swap all of that out in a little while for a new set of Ubiquiti ‘UniFi’ series hardware (UDM-Pro, UniFi PoE switches and UniFi PoE Wireless Access Points).

I do not expect that change to create a speed gain. However, I deployed that full product set on our farm recently across a six site single mode fibre ring and – wow. That product set achieves everything on a complex site that used to take days of head-scratching with a Unix command line, and it turns it all into 10 minutes of point-and-click with a web browser. Again well deserving of a separate review sometime.

Conclusion

I am just loving the new 250 Megabit per second Internet service at home. Having spent most of my business career involved in the engineering of local, national and international many-gigabit-per-second networks, it’s nice to have something at home that – at last – feels like it is decently quick.

I’m hanging out for the full Gigabit service, though, on the happy day when NBNCo manage to get fibre down my street. Bring that on!

How to ignore a customer without even trying

Today I experienced an ironic example of an Internet Service Provider (ISP) successfully avoiding any consideration of a well meaning (and simple!) suggestion to improve their offerings. It is ironic because the ISP concerned is Internode, the company I founded in 1991.

I meant well in trying to help them to improve their service offering, but all I wound up doing was falling down a funny / sad rabbit hole in terms of where those efforts landed me, as you will see.

I used what appeared to be the appropriate email address (found on this page):

This is what I sent (very lightly edited for additional clarity):

From: Simon Hackett

Subject: The absence of support for Fixed Wireless Plus is a strange
and unfortunate deficiency

Date: 17 October 2020 at 1:21:38 pm ACDT

To: customer-relations@internode.com.au


Hi guys,

I have a 25 Megabit fixed wireless service in Tasmania. 

This is the fastest Fixed Wireless offering available from
Internode/iiNet/TPG.

Fully appreciate this sheets home to TPG decisions on how the NBN
Fixed Wireless service is operated - but - NBNCo introduced a new,
higher speed/best effort (up to 75/10) Fixed Wireless service a
long time ago (December 2018!).

I have tried repeatedly since to get my service upgraded to
support those higher speeds, but I have confirmed (on multiple
occasions) with the sales team that there is no plan to have
Internode able to offer those higher speeds… which is just crazy,
frankly.

I think I’ve given Internode at least a year to fix this - and it
isn’t getting fixed - that much is clear. 

So - I’ve now given up and signed up with Aussie Broadband and as
of yesterday, I am indeed enjoying > 60 megabit per second
speeds on the same site with the same hardware and the performance
change is dramatic. 

I will call the accounts team on Monday to cancel down the old
Internode services at the site concerned (snbs client ID is
<REDACTED>, for reference).

As the person who founded Internode, I have found it hugely
disappointing - indeed actually upsetting - to have had to do
this… but (sincerely) this ball (in terms of supporting fixed
wireless customers) has been comprehensively dropped on a long
term basis by the TPG group. Supporting the now-current Fixed
Wireless service offering and rolling existing customers over to
it would be trivial. 

It beggars belief that this is not being done - but - well -
obviously it is not.

For the sake of not losing customers in Fixed Wireless over time
in this entirely avoidable manner, I would challenge you to
actually fix this. It won’t help me, any longer, but it would help
YOU (and your existing and future customers). 

Yours sincerely,
 Simon Hackett
 Founder, Internode

I got an email reply promptly back from iiNet (note: not from Internode), which said:

Hi Simon 

Would you mind providing your account number or mobile number for us to
assist you further.

Kind Regards
<REDACTED>
Case Manager 
iiNet Customer Relations

I pointed out in reply that I had in fact already provided this information.

What floored me is what came back next:

Hello Simon,

Thank you for your email and I do apologize for the delayed response.

Please contact internode directly via the following link: 
https://www.internode.on.net/contact/?dep=support

Their contact details are via the above website.

Warm Regards,

Customer Service Representative
iiNet Support

iiNet

iiNet Limited, Locked bag 16, Cloisters Square WA 6850
ph: 13 22 58 fax: 1300 785 632
email: support@iinet.net.au
web: www.iinet.net.au

Um… excuse me?

Here’s the bottom line – I tried, but – having been taken on a complete runaround for my trouble, well, I’m outta there…

…and wondering why I gave them more than a year to fail to address my original issue (as per my email above) before I left. Loyalty, I guess.

My Aussie Broadband ‘Refer-a-Friend’ code is 4549606 if you’re considering the same move, and it will get you (and me!) a $50 credit if you use it.

Thus far I’ve been highly impressed with the outcome, and I’ll have more to say about that later.

(Full Disclosure: I have also purchased some ASX:ABB shares after their recent IPO)

Three Days Soaring at The Vale – Day Three – Ridge Soaring

Ridge soaring is perhaps the simplest soaring lift method to understand. If the ambient wind strikes a perpendicular obstacle (like a ridge line), the air has no choice but to go… up.

The 4000 foot Mount Roland, right beside the airfield, works really well for ridge soaring. The mountain is almost square, with sheer faces on the west, north and east sides. You can see this shape clearly on this Google Earth image of the local area:

(orange dotted lines show soarable ridge faces)

Annotated Google Earth image showing relevant features for soaring pilots

I’ve done a lot of ridge soaring on Mount Roland and on the ridge line extending immediately to the west, toward Mount Claude. However, until Day Three of this particular three day soaring exercise, I had never been over to the eastern ridge line – the Gog Range.

I took off and motored up in the Pipistrel Taurus Electro above the Gog Ranges, shut down the engine, and wafted down to the ridge line to give it a shot.

The wind was in the right direction but wasn’t very strong, so I couldn’t get much above ridge-top height, but I had no problems in maintaining that height, while flying end to end along the Gog Ranges ‘at will’, with an armchair view, watching the world go by 🙂

After a few passes back and forth along the full length of the ridge, I recorded a short video of the experience:

The beeping sound in the video is the sound of the “Audio Vario”. It is a good sound to hear when gliding.

The Audio Vario is a standard piece of gliding instrumentation that converts aircraft rate-of-climb into a tone sequence that becomes more urgent/higher pitched as the climb rate increases. The tone falls away entirely when you are not in lift. This sound lets a soaring pilot keep their eyes outside the cockpit, while using their ears to gauge their soaring performance.

The Gog Range is around 2500 feet high, and the terrain and the forest are really quite pretty. Ridge soaring really allows the opportunity to see it all ‘up close and personal’.

Interestingly, the Skysight ridge lift prediction (below) didn’t highlight the Gog Range, but it did show good ridge conditions on the edge of the Central Plateau itself – parallel and to the south of the Gog Range. It was that prediction that gave me the impetus to try the nearer, smaller Gog Range line.

The Central Plateau is a much higher, much more sheer, face – but it is also somewhat further away (with a long motor run back into wind to get home from it). That is something to try on another day.

Here is how the ridge looked, from the far (eastern) end, looking back toward Mount Roland in the distance:

This ridge flight in the Taurus Electro capped off three excellent days, experiencing three different weather systems and three different sorts of soaring technique, all in the same place.

What a wonderful spot to go gliding 🙂

Three Days Soaring at The Vale – Day Two – Soaring using Thermals

Thermals are columns of rising hot air, driven by the sun differentially heating the ground. When there is sufficient moisture in the atmosphere, that rising air condenses at the top of the thermal to form a Cumulus cloud (or ‘Cu’).

Cumulus clouds are the classic fluffy white clouds often seen on a sunny day. These clouds show a glider pilot where the top of a thermal is (or where it was – the lift under them tends to ‘cycle’ on and off over time).

Thermals can exist whether the Cu clouds are there as indicators of it or not. The gliding term for the Cu cloud right above you is a “Near Cu”. The term for the even better looking Cu Cloud that is just too far away to reach is a “Far Cu”.

Covering ground on a Thermal day involves circling slowly and tightly in the core of the rising air, gaining height, until the thermal starts to weaken. Then it is time to set sail for your intended destination, optimising your cruising performance by slowing down in lift and speeding up in ‘sink’ (a technique called ‘Dolphin Soaring’). If you get low again, it is time to find another thermal.

Australia is a great place to fly gliders in general. In the arid areas of the mainland it is possible to achieve quite spectacular soaring distances in the middle of summer. How far can you go? Just take a look at the current Australian Distance Records.

Back in Tasmania, on Day Two of Three, the wind had moderated and the day was several degrees warmer. The Skysight weather model indicated that there would be thermals in the middle of the day rising to 5500 feet or so, which is easily high enough to have a very fine time going gliding.

We set off in the Pipistrel Taurus Electro to explore those thermals and found that they were big, wide and gentle (not always the case!), and that the intermediate sink zones were also quite moderate.

Gabe and I wound up reaching around 6000 feet (very much in accordance with the prediction) over the very same valley that we had wave soared across the day before. The snow on the Central Plateau from the previous day had already started to melt.

We had a lovely time of it, just wafting about the neighbourhood, and the living was easy. Indeed, as is often the case on a good thermalling day – by late in the afternoon it seemed hard to go down 🙂

Here are some pictures from Day Two:

In the next post – on day three – I had a chance to use yet another soaring technique – Ridge Soaring.

Three Days Soaring at The Vale – Day One – Wave Soaring

The Vale Airfield is a 1300m grass runway running parallel to the Dasher River in NorthWest Tasmania. It is situated only a couple of nautical miles from a beautiful mountain, Mount Roland.

This 4000ft granite beast dominates the local view all the way to the nearby town of Sheffield. It is part of a series of ridge lines that give way to the Tasmanian Central Plateau. The Plateau is a large, gorgeous and pristine alpine and lake region that includes the world famous Cradle Mountain national park.

Over the last several months, I’ve been using a Pipistrel Taurus G2 electric self-launch motor glider to gradually (and carefully) explore this complex, fascinating, and beautiful area from the air, through a variety of weather conditions. The conclusion I’ve reached is that we are fortunate indeed to have an airfield that is surely one of the best places in Tasmania to fly gliders.

There are many opportunities to go soaring here, using a wide variety of ‘lift’ mechanisms enabled by this fascinating and complex terrain – and to do it all year round!

What follows is a study in the successful use of the three major ways to sustain soaring flight in gliders, in flights conducted over three successive days at The Vale, in three distinct weather systems.

Let’s start with an annotated Google Earth image of the local area from the point of view of a soaring pilot (click image to enlarge) and then we’ll turn to Day One:

(The orange dotted lines are some of the local area ridge lines)

Map showing local geographic features
Many opportunities for soaring flight are driven by this complex geology

Day One: Mountain Lee Wave Soaring

One of the most wonderful ways to go soaring in a glider is the use of ‘Mountain Lee Waves’.

Wave (in this context) refers to a large standing-wave that forms in the atmosphere downstream (to the Lee) of a large physical feature (such as a mountain) in the presence of a strong and consistent wind that increases in strength with increasing height.

Mountain waves can extend into the sky to heights that are multiples of the height of the ground feature that triggers them. What forms in the air is an ‘echo’ of the shape of the ground feature, high up in the sky, with the into-wind side being a tide of rising air that can be surfed in a glider, to gain height.

Even better: If the wind keeps getting stronger with height, the primary wave system can act like another mountain! One wave can trigger another wave system, located further downwind and higher than the primary wave. This can keep happening, with multiple wave systems capable of ‘stacking up’ in a rising sequence.

Clouds can form in the middle of a wave system, appearing in a classic “Lenticular” shape, being quite literally ‘polished’ by the air rotating around the wave core. In the presence of multiple secondary waves, there can be a ‘stack’ of these lenticular clouds.

On the day we flew, there were no lenticular clouds to indicate the presence of the wave system… it was still there, but it was invisible.

However, I had another way to find the wave. I used a fabulous soaring pilots’ weather prediction application called Skysight.

Skysight has access to global, high accuracy weather forecasting data and it uses this data and a great deal of smart number-crunching to generate predictive, visual, forecasts for glider pilots. These forecasts help them to predict (with high accuracy in both space and time) the presence of various distinct sorts of weather systems that can be used to sustain soaring flight.

You can explore these images (generated by Skysight) to see what I mean:

As per those images, the Skysight model showed the presence of a substantial primary wave system above 5000 feet, then extending through multiple secondary wave systems all the way up to over 20,000 feet (!).

It turns out that this wave system sets up quite frequently in Tasmania in the cooler months.

An impressive example of this happened back on 12th April 2020. This was not a day that I could fly (darn). Have a look (below) at just how impressive the wave system was, right across Tasmania. Lennies very much in evidence in the sky to go with it. A soaring pilot could have hopped from wave to wave, literally across the entire state.

Back to the present – and with my son Gabe as our photographer and co-pilot, it was time to see if the computer model was accurate in telling us that the wave was there, even though the indicator clouds were not.

To help us to find this quite invisible lift system, it was time to engage another piece of technology, the LX9000 soaring glass-cockpit system in my glider. The LX9000 is an incredible instrument. One of its plethora of features is the ability to import Skysight predictive map overlays directly onto the device for display in flight.

This means that I could fly the glider with the wave predictive model ‘on screen’, so we could fly under power up to the height and position needed to contact the wave system, and then shut down the motor and start playing.

We did precisely this. We climbed to about 5000 feet and flew to the edge of the predicted lift zone, and shut down the engine. As if by magic – there it was, and we just started going up.

Here’s what the Skysight wave overlay looks like in the LX, in flight, in the aircraft. On the image, the lift zone is the yellow/orange/red zone on the map.

This photo was taken at a later point, when we had already climbed in wave up to over 9000 feet:

Successfully working the primary wave system with the Skysight predictive model overlaid on the LX9000

Wave lift is wonderful – it is a smooth, quiet journey of exploration, quietly working your way back and forth along the lift band.

Being a system driven entirely by wind, wave conditions can be (and mostly are) present in the depths of winter, when flat-land glider pilots have given up gliding for the season due to the lack of any useable thermals.

We flew the glider up and down the Mole Creek Valley on our climb, and wound up high over the edge of the Central Plateau. The Plateau was covered in a layer of snow from the previous night, and it looked rugged and wonderful.

Here are some images from the wave flight:

(Photo credit for many of these images: Gabe Hackett)

The next post will be about Day Two of Three when the wind moderated, the sun came out, and the lift was there again – but this time it was Thermals.

Flying On Electric Avenue

I am fortunate to own the first electric self-launch glider to fly under Australian skies. It is a Pipistrel Taurus Electro G2.

A few months ago, I wrote a story that explained the background and my journey to owning and flying this impressive little aircraft. The story was published in the Gliding Federation of Australia’s Gliding Australia magazine.

You can read the article here.

Alternatively here is the same article as a PDF file:

Electric Avenue – Taurus Electro G2

I’m posting these links as a precursor/background to a story I will write soon about three wonderful days of flying this aircraft from our airstrip in Tasmania.

KNX Tips and Traps Part 2: KNX/IP Routing

This post documents my learning curve around the difference, in the KNX world, between a KNX/IP Interface, and a KNX/IP Router.

KNX terminology in this context is very important to understand, partly because this is a case of words (really) mattering, and partly because the meaning of those words differs substantially from their meaning in the pure TCP/IP networking context.

We will start with something that took me multiple product purchases and many hours of head-scratching to appreciate:

A KNX/IP Router is also a KNX/IP Interface

A KNX/IP Interface is not also a KNX/IP router

What is a KNX/IP ‘Interface’?

A KNX/IP ‘interface’ is any device that ETS5 can use to program your KNX devices over your local TCP/IP network from a Windows PC and/or that can allow KNX twisted-pair (TP) device access and control with apps/touchscreens/etc from the local TCP/IP LAN.

However, if a KNX/IP product does not explicitly use the word ‘Router’ in the product name, it is not (also) a router and will not provide KNX/IP routing.

What is a KNX/IP ‘Router’?

In KNX parlance, what a KNX/IP ‘router’ actually provides is the functionality of a KNX ‘area coupler’ or ‘line coupler’, using a TCP/IP network as the linking medium.

An area coupler or a line coupler is a packet forwarding bridge (with built in packet filtering) that moves KNX telegrams (packets) between KNX physical network segments.

The distinction between an ‘area’ coupler and a ‘line’ coupler is simply based on whether you are inserting the coupler (‘router’) to connect distinct areas (first number in the Individual Address is different) or just between distinct ‘lines’ (where the area number is the same but the ‘middle’ number differs).

In either designation, appropriate KNX ‘telegrams’ (packets) get forwarded to the other network segment if they need to be, and they are ‘filtered’ (not forwarded) if they don’t need to be.

Normally this process is automatic, provided you have given the KNX/IP router the appropriate form of Individual Address (IA) to tell it how it is supposed to act (more on this later).
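To make that concrete, here is a rough Python sketch of how I understand the Individual Address convention to map onto coupler roles. It is purely illustrative – the function and the sample addresses are just mine, not anything from the KNX specification:

    def coupler_role(individual_address: str) -> str:
        """Classify a KNX Individual Address of the form 'area.line.device'."""
        area, line, device = (int(part) for part in individual_address.split("."))
        if device != 0:
            return "ordinary device (no coupling role)"
        if line == 0:
            return f"area coupler (joins area {area} to the backbone)"
        return f"line coupler (joins line {area}.{line} to its area)"

    for ia in ("1.0.0", "1.1.0", "1.1.20"):
        print(ia, "->", coupler_role(ia))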

In writing this down, I will say that it is no wonder this is confusing to people already steeped in the terminology and operation of TCP/IP networks (as I am).

It turns out that a KNX/IP ‘router’ is not really a ‘layer 3’ router at all. It is ‘merely’ a layer 2 media bridge. The fact that there is a routing-capable TCP/IP protocol stack inside every IP enabled KNX device doesn’t magically make all those devices into KNX ‘routers’.

One thing that I would love ETS5 to feature (and it certainly doesn’t, today) is the addition of a dialog box to warn you about the absence of any configured-in area or line couplers in your setup, any time that you try to construct a group address that spans areas or lines in your project. It seems to me that a simple warning (“No configured area or line couplers are available to forward this telegram”) would save a heap of future grief for others.

Time to look at some real world product examples.

Here are some devices in my home KNX network:

Per the discussion above – only one of these three (the one on the left) is a KNX/IP router, despite all three boxes having more than enough technical ‘grunt’ to be capable of ‘routing’ (and for all I know, they might well all be running identical underlying hardware).

If you don’t have the KNX/IP router there (and at first, I didn’t), then the twisted pair segment concerned is an island. It doesn’t matter how grunty or wonderful your KNX/IP interface products are (the X1 and S1 are both highly capable things)… neither of those is going to route a group telegram between the TP and IP networks for you, no matter how much you try to convince it to do so.

It was the Gira S1 that drove my initial confusion, ironically because it is such a cool and capable device in other ways. Based on my historical TCP/IP experience, I’d thought that because the S1 is a remote-access VPN and local area TCP/IP node, it would also be a KNX twisted pair router – I mean… why not? Well… no, it doesn’t do that.

One confusing thing is that even if your device isn’t a router, ETS5 will still let you manually define a filter table for it, when there is absolutely no (obvious) point! That is how ETS5 helped to cement my (mistaken) belief that my Gira S1 was a KNX/IP router, when it wasn’t.

I wasted a good day fiddling about, trying to work out why the thing wouldn’t send packets over the network, adding in manually added filter table entries for the group telegrams I wanted to forward and it just … wouldn’t. No error message, no sign of problems, just no packet forwarding. It is obvious now, but it surely wasn’t obvious to me up front.

How to buy the wrong product without really trying

Here are three more KNX devices, all of which are KNX/IP interfaces:

The unit on the left in the photo above is a USB-KNX interface, allowing ETS5 programming without any IP components in the system. The other two units are different brands of KNX/IP interface, and they are functionally identical. They can be used for programming KNX devices in your project over your local TCP/IP network, and/or for facilitating access to KNX TP devices from other TCP/IP applications.

I bought the right hand unit (the MDT one) after I realised the Weinzierl product (installed many years ago, with my underfloor heating system) was not a router. When I did buy the MDT unit to replace it (so I could start doing KNX/IP routing), I mis-ordered it.

MDT make two products with a one digit difference in product ID code:

MDT IP Interface and IP router product codes

I purchased a 000.03 and I should have purchased a 100.03. The extreme similarity (in specification and visual appearance) underscores that they are most likely the very same physical box, sold at two different price points, where the lower cost one has simply got the KNX/IP ‘router’ functionality switched off.

Your choice of KNX/IP router Physical Address controls the functionality of the device!

This was a huge part of my learning curve, and not fully understanding this drove a lot of confusion for me in the first instance. In a TCP/IP network, the IP address is irrelevant to the functionality of a device. In a KNX/IP network, the Physical Address you choose has a dramatic impact on what your device will actually do.

The table (below) – or a variant of it – is a familiar component of the setup instructions for any KNX/IP router (this one is extracted from the startup document for a Gira KNX/IP router):

The mapping between Physical Address and device function for a KNX/IP router

If you do not assign a ‘.0’ as the last part of the IA for your router, then it will not operate as a router at all. If you choose to use an address ending in ‘not 0’, then your KNX/IP ‘router’ will only function as a KNX Interface and will not forward (route) KNX telegrams!

This is in fact a very good design decision in the KNX architecture.

If only a device ending in ‘.0’ can act as an area or line coupler, then you can never have more than one active KNX/IP router per physical twisted pair network segment. This is a good thing, as it ‘naturally’ avoids all sorts of complexities that occur in the TCP/IP context (including the need to implement a network routing table, versus a bridge filtering list).

Just to labour that point slightly – the absence of a layer 3 routing table in KNX is why what KNX does, in my view, is really layer 2 bridging, not ‘routing’. I think they really should have called these things KNX/IP bridges or even better, KNX/IP ‘couplers’ (but… ‘too late now’ 🙂 ).

As another aside: the decision to use IP Multicast to carry KNX telegrams on the IP side of this process is a smart one. It nicely leverages the merits of IP multicast to ensure carriage of those telegrams (‘packets’) to any other KNX/IP routers that need to hear them, without any explicit configuration work being needed on the TCP/IP ‘side’ of the equation.
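To illustrate just how little IP-side configuration is involved, here is a minimal Python sketch that joins what are (as I understand it) the usual KNXnet/IP routing defaults – multicast group 224.0.23.12 on UDP port 3671 – and simply prints whatever routed frames it hears on the LAN:

    import socket
    import struct

    KNX_MCAST_GROUP = "224.0.23.12"   # usual KNXnet/IP routing multicast group
    KNX_PORT = 3671                   # usual KNXnet/IP UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", KNX_PORT))

    # Join the multicast group on the default interface.
    membership = struct.pack("4s4s", socket.inet_aton(KNX_MCAST_GROUP),
                             socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    while True:
        data, sender = sock.recvfrom(1024)
        # Each datagram is a raw KNXnet/IP frame wrapping a KNX telegram.
        print(sender[0], data.hex())

Every KNX/IP router that joins the same group sees the same telegrams, which is exactly why no routing tables are needed on the IP side.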

Clues to help you to realise your KNX/IP device is not really a router after all

ETS5 is slightly maddening at times, in that it ‘knows’ things about your devices that it doesn’t bother to mention to you – assuming you just ‘already know’. A key one here being whether the thing you (wrongly!) believe to be a KNX/IP router is really just a KNX/IP interface after all.

In other words, ETS5 ‘knew’ full well that I bought the wrong MDT device (see above), and it kinda-sorta tried to tell me this, in ways that were too subtle for me to notice them at the time.

So – to save you the same angst, here are the clues to help you know when your KNX/IP router is really not a router after all (i.e. when you ordered the wrong product):

Clue: ETS5 refuses to let you assign a .0 address to your device

This is a dead giveaway (in hindsight). ETS5 ‘knows’ full well it’s not a router, so when you try to set the last byte to ‘0’, it renumbers it to ‘1’, despite your best efforts to talk it into the ‘0’ at the end.

Annoyingly, there is no error message – ETS5 ‘should’ (in my view) pop up a message to tell you that ‘Only KNX/IP routers can be assigned an address ending in 0’. Instead, the address box turns red, and then turns black again after ETS5 silently ‘fixes your mistake’ (instead of telling you about it).

Clue: The ETS5 right-click menu on your device doesn’t contain ‘Preview Filter Table’

I finally figured this one out… if you right-click on a device and the pop-up menu contains “Preview Filter Table” then you are looking at a properly configured-in-to-your-project KNX/IP router (i.e. area or line coupler):

KNX/IP Routers have “Preview Filter Table” on the menu. Non routers… do not.

If you think you have got your router configured in properly but that “Preview Filter Table” entry is mysteriously missing when you right-click on it, then… you haven’t actually got it in there properly after all.

It would be really nice if KNX flagged a validly configured KNX/IP router more clearly on the device display. Displaying the device name in green or… something.

Clue: You can check if a KNX device is really a router before you buy it

ETS5 knows whether a device is a router or not. And it knows this based on the product catalogue, not based on your real physical device.

This is a subtle but important point.

You can select any device in ETS5 out of the global KNX product catalog, load it into your project, and use the earlier subtle clues (above) to assure yourself that it really is a router, and that it really will route… and then you can go buy it.

I wish I’d appreciated that before I started buying things!

 

KNX Tips And Tricks Part 1: VMWare, Dongles, and Scene Teach-In

The Back Story

I want to deploy KNX in some existing and new buildings. KNX is something I’ve been interested in learning ‘how to do’ for ages. Finally a good intersection of time and opportunity has led to this being the right time for me to undertake the learning journey.

In the COVID era, I couldn’t easily attend the hands-on physical accreditation training courses in KNX design and deployment that are offered by the very fine people at IvoryEgg. So, instead, I started with a free KNX intro seminar they delivered online.

Next, I did the short online ‘eCampus’ introductory course at the knx.org site to get a high level understanding of how to drive the ETS5 app (and I will say that it was indeed very useful for that purpose).

Following that, well, I just licensed ETS5, bought some KNX gadgets from IvoryEgg, and started to just ‘figure it out myself’ because, to borrow the classic geek thought bubble:

How hard can this be?

The answer is that it’s not too hard, really, except that in doing it this way I’ve missed out on a plethora of small nuances (especially about how to drive ETS5, and in how KNX concepts map to ETS5 reality).

These pieces of missing knowledge and understanding would have been gained by doing a formal hands-on training course. Without that, I found myself repeatedly in the situation of knowing something can be done in ETS5/KNX, but being unable to figure out in the first instance how to do it.

I felt that there may be some merit, for others, in writing down the non-obvious things – and their solutions – as I go along. Hence this blog post (and any others that may follow).

What is KNX?

KNX is a remarkable thing. It is a 30+ year old, published and standardised protocol for building automation. It includes a standardised set of mechanisms for physically wiring and installing KNX-speaking devices in a building. More than 450 hardware vendors support it, with guaranteed interoperation.

There is one ‘master’ programming tool (“ETS5”) that is used to program/configure and deploy KNX devices in real world environments. It imports definition databases and applets from each manufacturer, on demand, to support the myriad variations offered by each device, in a (moderately) consistent manner.

Now, let’s start looking at the tricks and traps I have encountered on my self-taught journey into the wonderful world of KNX and ETS5:

Using ETS5 on a Mac: Making it work with VMWare Fusion

ETS5 is Windows software. I use a Mac.

I am running ETS5 on Windows 10 by using VMWare Fusion, which turns the Windows environment into a window on my Mac. This works really nicely, apart from one particularly frustrating and non-obvious trap related to ‘Network Settings’.

Out of the box, VMWare Fusion configures the Windows virtual “Network Adaptor” using NAT. This means that your Windows instance has no direct TCP/IP connection to your Mac, but rather that it is sharing the IP address that your MacOS system uses.

Far from being a good thing – with ETS5 this is a very bad thing.

That’s because ETS5 relies on being on the same physical LAN segment as any IP-connected KNX devices (KNX ‘Interfaces’) that you want to use.

If you leave the VMWare environment set up using VMWare’s internal NAT then ETS5 cannot ‘see’ any of your TCP/IP-based KNX access devices at all – it is as if they just don’t exist.

You can’t make them turn up on the ETS5 interface selection page… nothing works. Even worse, there are no error messages to help you figure out what is going on, either… it’s just that nothing works as expected.

Fortunately, the solution is simple.

In VMWare Fusion, go to the Network Adaptor settings for the Windows instance, and change it to use ‘Bridged (Autodetect)’ instead.

That is all there is to it! Now your Windows instance reaches out and obtains its own TCP/IP address on the LAN, giving it a distinct (and direct) network identity. It can now see the LAN environment properly, including (in particular) the IP Multicast packets that KNX relies upon to work.

Fire up your ETS5 software again, and now all your KNX interface devices magically autodetect and work as expected. Win!

The only other thing of operational consequence in using VMWare Fusion to run the Windows instance is that you need to plug the ETS5 license dongle into the Mac after starting the Windows instance in VMWare Fusion, or ETS5 doesn’t ‘see’ it.

At worst (or if you forget), just unplug and re-plug the device after launching ETS5, and ETS5 will re-scan and notice the device soon afterward (no need to quit/relaunch ETS5).

How to avoid losing the (tiny) dongle

The ETS license is implemented via a physical USB Dongle. I have some issues with that dongle.

It is tiny, the same size as those little plug-in wireless keyboard dongles. This is something I hate – because a software license worth thousands of dollars shouldn’t be deployed in something that is so very easy to lose!

(Note that if you do lose the dongle, you can get it replaced for a moderate re-issue fee – but – why make it so easy to lose the thing and suffer financial cost and replacement delay in the first place?)

The ETS dongle uses a conventional USB-A plug, and my Mac is 100% USB-C. Given how expensive the license (and hence the dongle) is, I have no qualms about dedicating an Apple USB-A to USB-C adaptor to the exclusive use of the dongle.

Because of that, though, I can’t leave it plugged in to my Mac all the time.

When you license ETS5, the KNX association sends your dongle in a lovely little presentation box. The box is made of sturdy cardboard material with nice printing on it, and the dongle sits inside, in a little foam bed. Very swish.

This box is big enough that I can put it in the accessory pouch in my laptop bag without losing it. Having to open that presentation box and clip the dongle into the Apple USB-C adaptor to use it, and then having to take it all apart again after each use, was painful. I lived in fear of the dongle getting detached from the adaptor and being lost.

This led me to a pragmatic solution to the issue of the thing being too small.

What I did was to cut out a little hole in the end of the cardboard presentation box, plugged the dongle into the USB-C adaptor, closed that into the box and taped it shut (and wrote my name and number on the back).

In effect, I have created my own ‘super size’ and natively USB-C version of the ETS dongle, that is much harder to lose! This makes it much, much easier to find (and harder to lose down a drain) versus dealing with a little green dongle the size of a coffee bean.

Make a test bench to play with

It is very worthwhile getting a selection of real KNX devices as soon as you can, and starting to play. Here’s a photo of my home test bench at an early stage of my own exploration:

 

What I did soon afterwards was to segment my KNX environment between that test bench and some pre-existing KNX equipment that was installed into my house some years ago by someone else (to implement an underfloor heating system). I segmented it using KNX/IP routers (more about those in a later blog post).

This means my test bench can be turned off and on, or futzed with in general, without breaking the production environment in the house. However, because I am using a couple of KNX/IP routers (one in the production setup, and one on the test bench), I can still reach back and forth between the deployed hardware and the test bench to try things (e.g. making a switch on my test bench drive a real world gadget somewhere else in the house).

Understanding ‘Scenes’

Scenes are an excellent concept in KNX – and it seems to me that driving rooms (and outcomes) via scenes is much more rational than having a forest of individual light switches and/or dimmers, even if all you are adjusting is lighting.

That said, once you realise a scene can drive outcomes across multiple types of actuators at once (lights, climate control, blinds, locks, audio systems, you name it)…that is the real lightbulb moment, starting as simply as the notion of having one button at the building exit labelled ‘Home/Away’.

There’s a key (and quite useful) concept related to scenes, called “Teach-In”. More about that later.

The ETS menu structure for device Parameter configuration and adjustment is highly dynamic – sub-menus come and go depending on other menu selections

The Theben TA x S binary input device (you can buy them in various values of ‘x’, e.g. 2, 4, 6 or 8) is a great device.

This was the first device I tried to get the hang of configuring using ETS5. When I started trying to do that, I just could not work out how to send anything but binary (single bit on/off) outcomes from it.

What I really wanted to do was to make a row of buttons that are ‘scene selection’ buttons. I made up a nice metal box with a row of pushbuttons on it, I wired them into a TA 4 S unit, and I set out to make each button select a scene number (1 through 4).

In the first instance, when I tried to program it in ETS5 – moving to the “Parameters” tab for the device – I just could not work out how to send a scene number with this device.

“Out of the box” the unit let me drive the switches as binary devices (one bit per input) only. It has lots of flexibility about how that works (in terms of whether the bit sent represents absolute switch position or a toggling value, various de-bouncing parameters, etc etc). All lovely, but I wanted to send scene numbers, and they were just absent from the menu structure entirely.

Finally I figured out the subtle quirk (in my view) of the ETS5 interface, and it is this:

When you are looking at a Parameter menu for a device, take note of whether there is a ‘+’ or a ‘-’ to the left of the menu concerned. If it is a ‘+’ then there is a hidden sub-menu, waiting for you to discover it by double-clicking on the menu concerned.

In my case, realising that and opening the button menu revealed a sub-menu that let me choose the type of data to be sent in response to input changes. Yay!

This situation can continue through multiple levels…there can be rabbit holes within rabbit holes.

I discovered yet more sub-menus allowing me to configure other cool things to do with that TA 4 S device.

The crux, then, is that there is a Pandora’s box in the Parameter system, and you just need to know to look for it. Changing parameter settings can bring additional sub-menus dynamically into existence related to the thing you just changed… it’s an exploratory process, and once you know how to open the door to deeper levels… keep doing it!

With that successful discovery made, I programmed my button box to send scene numbers 1, 2, 3 and 4 in group telegrams in response to pressing buttons 1, 2, 3 and 4. I downloaded the configuration into my devices and started pressing buttons… and … weird stuff happened (!).

This lead me (after significant head-scratching) to the next discovery:

Scene Numbers officially start at 1 but they have an underlying Index Origin of 0

If your device supports setting or recognising Scenes using the data type ‘Scene’, then you have 64 scenes to choose from, numbered 1 through 64.

However, ‘under the hood’, the Scene numbering that is actually sent ‘on the wire’ is a value from 0 to 63.

This means ‘Scene 1’ is actually sent as an unsigned 8 bit value of ‘0’, Scene 2 is sent as a value of ‘1’, and so on.

The source of confusion here is that some devices (like the Theben TA x S units) don’t seem to allow you to send a ‘Scene number’ as a data type (if you can, I haven’t found it yet). They do let you send a “Value” perfectly happily though (as an 8 bit unsigned byte).

Devices that receive scene numbers and describe them as a Scene then map the byte received back to a scene number starting from 1. Hence when configuring an actuator to respond to, say, Scene 4, you select “Scene 4”… and that scene activates when a ‘3’ comes in over the wire.

Understanding this, at last, I reprogrammed my TA 4 S to send ‘Value’ bytes of 0, 1, 2 and 3 for my four buttons. When my group telegram packets then landed on the actuators I had configured to respond to Scenes 1, 2, 3 and 4… the right thing started to happen at last.

Success!

I expect this could also be fixed by tweaking the data type for the TA 4 S unit in the ETS5 setup for the device, to make sure the TA 4 S ‘knows’ that what is being sent is a scene (and thus to avoid this confusion). In the end it doesn’t matter, providing you understand the underlying issue.
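If it helps, the whole off-by-one business boils down to a two-line conversion. A purely illustrative Python sketch (the function names are just mine):

    def scene_to_wire(scene_number: int) -> int:
        """1-based scene number (1-64) -> 0-based byte actually sent on the bus."""
        return scene_number - 1

    def wire_to_scene(value: int) -> int:
        """0-based byte received from the bus -> 1-based scene number."""
        return value + 1

    print(scene_to_wire(4))   # 3: what my button box has to send to trigger 'Scene 4'
    print(wire_to_scene(3))   # 4: what the actuator understands that 3 to mean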

What the heck is “Teach-In”?

Teach-in is a pretty cool concept, but some Googling on my part failed to turn up a good explanation of what that really meant in practice. Various KNX product data sheets mention that their Scene logic supports Teach-In, but they mention it as if it is an axiomatically understood concept for the reader. Well, it wasn’t at all obvious to me.

One way to think about Teach-In is as if Scene numbers are (in my case, literally) a row of numbered buttons on a button box. A way to think about that row of buttons is to compare them to the row of ‘station selection’ buttons on an old-style car radio. Just hit a button to recall a previously saved radio station frequency, to save manually adjusting until you bump into it.

To continue the car radio analogy – if you want to set up one of those channel buttons to select a station, you first tune in the station manually, and then you press-and-hold the button concerned for a few seconds, which locks the current station in to the button you are pressing right now. In other words, long-press means ‘save station here’.

Teach-in, it turns out, is the analogous thing in KNX!

Teach-in is the way to update (save) the current actuator settings back into a scene number in those actuators ‘on the fly’. This can be far better than statically programming them in ETS5 and hoping that they somehow come out ‘perfect’ in your real-world building (and that the occupants’ needs won’t evolve over time).

If your actuator(s) support Teach-In, then this is how Teach-In works:

– Adjust settings on various actuators in some way other than via scene change. This might be by using manual control buttons on the actuator (if present), and/or by using other KNX sensors to individually change settings, and/or using a KNX whole-of-building control panel or an app to adjust individual lights, sounds, blinds, whatever to be ‘just how you want them’

– Once you have your room and/or entire building ‘just the way you like it’, you can save this entire setup into a Scene number across all the relevant actuators by sending a group telegram to those actuators containing the scene number plus 128 (i.e. with binary bit 7 in the byte ‘set’).

Hence to update (re-save) the current actuator configurations into Scene number 6, you would send a group telegram specifying Scene number 6+128=134 (or if sending Values, that would be 5+128=133) to the Scene selection element of the actuators concerned. Bingo – you’ve saved your current Scene state away for future use!
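Expressed as code, the convention described above looks something like this (again just an illustrative Python sketch, with the ‘learn’ flag being bit 7 of the byte):

    LEARN_BIT = 0x80  # bit 7 set = "save current state into this scene" rather than "recall it"

    def scene_byte(scene_number: int, teach_in: bool = False) -> int:
        """Byte to send in a scene group telegram for a 1-based scene number (1-64)."""
        value = scene_number - 1          # scenes are 0-indexed on the wire
        return value | LEARN_BIT if teach_in else value

    print(scene_byte(6))                  # 5   -> recall Scene 6
    print(scene_byte(6, teach_in=True))   # 133 -> save current state as Scene 6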

Once I understood this, some nice features back on the Theben TA 4 S suddenly made sense:

By opening up yet more of those hidden sub-menus in the Parameters section for the TA 4 S, it turns out that you can program each single button to be able to send three distinct Value numbers based on ‘how’ you press the button!

You can send distinct Values out on each button depending on whether you (1) short-press the button; (2) long-press the button (i.e. press-and-hold), or (3) double-tap the button

Understanding this, back on my test bench, here is what I did to prove it up:

I programmed the TA 4 S in my button box to send 0, 1, 2, and 3 in group telegrams to my actuators, for short presses of buttons 1,2,3 and 4 (Scenes 1 to 4)

I programmed it to send the values 128, 129, 130, and 131 in response to a long press (Scenes 1 to 4, plus 128)

I also programmed all four inputs to send the value 4 (i.e. select Scene 5) in response to a double-tap on any of those inputs. I programmed Scene 5 in all of my actuators to mean ‘turn everything off’.

And voila – a nice demonstration of Teach-In:

– Press a button to select a scene.

– Adjust manually by other means, and then press-and-hold any button to store the current actuator settings back into that button (car-radio style) – nifty!

– Double-tap any button to turn everything off (Scene 5) – just to demonstrate an outcome for that third way to use the very same buttons.

Next Time

In the next Part, I’ll discuss some tricks and traps around the selection/purchase and programming of KNX/IP routers (and what is, and is not, a ‘router’ in the KNX world).

I’ll also give you a tip on how to deal with a KNX device that is physically inaccessible – one where ETS5 really wants you to press the ‘programming’ button again before it will change anything, but you just can’t do that (because you don’t know where it is, or because you do know, but you can’t ‘get to it’ physically).

Solved: Installing Windows 10 using Bootcamp on iMac with a Fusion Drive

 

Ken Thompson has an automobile which he helped design. Unlike most automobiles, it has neither speedometer, nor gas gauge, nor any of the numerous idiot lights which plague the modern driver. Rather, if the driver makes any mistake, a giant “?” lights up in the center of the dashboard. “The experienced driver”, he says, “will usually know what’s wrong.”

(Source: BSD Unix Fortune Program)

I recently managed to install a current Windows 10 distribution onto an older iMac that I had in storage. I wanted to set up this machine to run some specific Windows software for which it was well suited, and that let me make good use of an otherwise idle machine.

The iMac has a then-fastest-around 2.9GHz CPU and features the (then) latest and greatest storage innovation, the ‘Fusion Drive’. This is a small SSD blended with a 1TB hard drive. The Fusion Drive was designed to combine fast-but-expensive SSDs with slow-but-cheap hard drives, before SSDs got so cheap that the hard drive became almost irrelevant.

My intention was to install Windows 10 using Bootcamp, with an arbitrary 50/50 split of the 1.1TB Fusion Drive.

At the start of the fateful weekend concerned, I recall thinking ‘how hard can this be?’ because I’d installed Windows using Bootcamp on my current-generation MacBook Pro (with a big SSD) with zero issues at all.

Turns out the answer is: ‘Very Hard’.

I had to get past multiple ‘I should give up because there is no apparent way around this, and the error message gives me no help at all’ situations, spread across what became an entire weekend of trial-and-effort and repeated fruitless attempts at things that took ages, punctuated with just enough ‘ah hah’ moments and clues found via Google to keep me doing it…!

I didn’t find the entire list of challenges I faced in any single web site, so I have decided to write my discoveries down here, in an ‘integrated’ manner. Each of these issues represents some hours of repeated head-banging attempts to get past it that I hope to save you, dear reader, from repeating.

I am assuming in the below that you know how to do a Windows installation using Bootcamp (or are prepared to work that out elsewhere). This isn’t a guide to doing that – it’s a guide to why the process failed – and failed, and failed, and failed – for me.

Each item below starts with a headline that frames the fix – so if you mostly just want to get it done – just dance across those headlines for a fast path to a working result.

You really need to be running Mojave (Mac OS X 10.14)

I fired up Bootcamp under the OS on the machine at the time – Mac OS X 10.13 – and it said it could install Windows 7 or later. Well, I wanted to install the latest release of Windows 10, and that’s ‘later’, right?

Wrong.

On this model of Mac you need to use an appropriately large (16GB or more) USB stick. Bootcamp writes the Windows 10 install ISO you’ve downloaded by now (you have, right?) onto that USB stick and turns that into a bootable Windows install drive (including throwing the ‘Bootcamp’ driver set onto it, to be installed into the Windows image once the base install is done).

Well, I plugged in a 16GB USB stick (actually, I tried several sticks ranging from 8GB to 32GB, fruitlessly). In each case, after scratching around for ages, Bootcamp failed with an error message saying that my USB stick wasn’t large enough.

Some Google searching turned up the key information here – that recent Windows 10 ISOs are large enough that they cross an internal 4GB size boundary, which in turn leads to Bootcamp not being able to cope with them properly.
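Incidentally, if you want to see how close your ISO is to that boundary before spending hours finding out the hard way, a quick size check from Terminal does the trick (the filename here is just an example – substitute whatever you actually downloaded):

    ls -lh ~/Downloads/Win10*.iso
    # anything well past the 4GB mark is in the danger zone with older Bootcamp versions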

The answer looked to be easy – upgrade to Mojave.

Ok, annoying but straightforward. Cue the download and install process, and come back in several hours…

You also need to update Mojave to the very latest version

Turns out that the build of Mojave one downloads from the App Store isn’t the very latest version (why isn’t it the very latest version? Beats me!).

Bootcamp on the base release of Mojave says it can install Windows 10 or later (not ‘Windows 7 or later’). Yay – that suggests the bug has been sorted out – after all, it mentions Windows 10!

Sorry, but no. Same failure mode, after the same long delay to find out. Argh!

More Googling – turns out the bug didn’t get triggered until some very recent Windows 10 builds, and the base Mojave build still had that (latent) bug when it was released.

The next step is, thus, a macOS update pass to move up to the very latest Mojave build, including a version of Bootcamp with the issue resolved in it. This is in fact documented on the Apple support site (if you own 20/20 hindsight).
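If you’d rather drive that update pass from Terminal than keep revisiting System Preferences, something along these lines works – run it until nothing further is offered:

    softwareupdate --list
    # then install everything on offer and restart as needed
    sudo softwareupdate --install --all --restart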

You may need to back up, wipe and restore your entire macOS drive before Bootcamp’s partitioning phase will succeed

This one was painful.

After Bootcamp managed to set up my USB stick properly, and to download and copy the Bootcamp Windows drivers onto it as well, it then failed to partition the drive successfully (the last step before it triggers the Windows installation to commence).

As usual, the error message was useless:

An-error-occurred-partitioning

Your disk could not be partitioned. An error occurred while partitioning the disk. Please run Disk Utility to check and fix the error.

The problem here is that I did run Disk Utility to check and fix the error, and no error was fixed!

The Disk First Aid run came up clean – it said my disk was fine.

I tried booting from “Recovery Mode” and running Disk First Aid again – nope, still no error found or fixed.

Time to dive deeper – open up the display of detailed information (the little triangle that pops open a window of debug text) during the underlying fsck…

…One tiny clue turns up – a succession of warnings (not failures) in the midst of the checking process, involving something about ‘overflows’. It turns out that Disk First Aid (‘fsck’, really) within Disk Utility doesn’t fix these issues – it just declares the disk to be OK and finishes happily despite them.
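For what it’s worth, you can run the same sort of check from Terminal, which makes that ‘overflows’ chatter rather easier to capture and read than the little debug window (the volume name is whatever yours is called – ‘Macintosh HD’ is just the default):

    diskutil verifyVolume "/Volumes/Macintosh HD"
    # read-only verification: it reports the warnings but, as above, won't fix them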

Disk Utility can even partition the drive just fine – but the Partition function in Bootcamp itself … fails.

The fix turns out to be annoyingly radical: Do a full system backup, and then do a full system restore.

So – break out a spare USB hard drive to direct-connect (less angst and potentially higher I/O rate than doing it over the network). Use Time Machine to back up the whole machine to that local storage, then boot in recovery mode and restore the system from that drive again.
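If you prefer to drive the backup itself from the command line, tmutil can do that part (the volume name of the spare drive here is obviously just an example):

    sudo tmutil setdestination /Volumes/SpareUSB
    # --block makes tmutil wait until the backup completes rather than returning immediately
    sudo tmutil startbackup --block

The restore still happens via Recovery Mode in the usual way.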

This takes… a long time. All day and half the night.

However – it helped! When I tried once more, after this radical step… now the Bootcamp partition step works – huzzah!

And then Windows 10 starts to install itself at last – huzzah!

In the Windows installer, you may need to format the partition designated for Windows

Once the Windows install process starts, it reaches a point where it displays all the drive partitions and asks you to pick the one to install Windows onto.

Merely selecting the right partition (the one helpfully labelled BOOTCAMP) doesn’t work. It fails, saying the partition is in the wrong format.

It seems that for some inexplicable reason, Bootcamp has left the intended Windows partition in the wrong state as far as the Windows installer is concerned.

The fix is to bravely select the partition concerned (again: it’s helpfully labelled BOOTCAMP)… and hit the ‘Format’ button to reformat it. Then you can re-select it – and the installation now starts to work – yay!
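As an aside: if the Format button ever refuses to play along, there is also a command-line route. Pressing Shift-F10 in the Windows installer opens a command prompt, and diskpart can reformat the partition from there. The volume number below is purely illustrative – check the ‘list volume’ output and make very sure you pick the one actually labelled BOOTCAMP before formatting anything:

    diskpart
    list volume
    select volume 3
    format fs=ntfs quick label=BOOTCAMP
    exit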

Use a directly attached USB keyboard when the wireless Apple Keyboard stops working

This one is self-explanatory. My Apple wireless keyboard didn’t work in Windows.

I thought I’d just need to load the Bootcamp drivers to fix that but – not so fast! (see the next issue, below).

In the meantime I just switched to a wired keyboard – ironically, the one I found in my storage room was a genuine Microsoft-branded one with lots of useful extra function keys on it.

I’ve been perfectly happy to just stay with that – especially noting the next issue.

Remove/Rename a magic driver file to avoid Bootcamp support causing a Windows “WDF Violation” Blue-Screen-Of-Death a minute or so after Windows boots

Well, with Windows ‘up’, I installed the Bootcamp Mac hardware support drivers. This is important for all sorts of reasons (including WiFi not working until you do).

I did that by selecting the (still mounted/attached) USB installer stick and running ‘Setup’.

The installation of drivers worked fine.

What didn’t come out fine was the unintended consequence.

Once the Bootcamp hardware support was installed, Windows started crashing a minute or so after each boot up, with a “WDF Violation”.

You can log in and start working – just – and then ‘bang!’ – sad/dead Windows:

WDF-failure-imac

After everything else (and one and a half days of this stuff) – this was really frustrating.

Cue yet more googling – and at least this one seemed to be an ‘understood’ issue.

It appears to be the case that the wrong version of a crucial driver file (keyboard support related, by the looks of it) is loaded in by Bootcamp when installing onto this particular generation of iMacs.

Yay.

The fix – after I found it – involves booting Windows in diagnostic mode and disabling that driver file.

Even getting into that diagnostic mode is a challenge… it turns out that you don’t reboot holding down the shift key for ‘safe mode’ in Windows any more – that would be too easy…

…Instead, now you boot up and then select restart… and while doing that restart, you hold down the shift key.  You then wind up with the opportunity, during the reboot, to access diagnostic functions.

Sure, that’s obvious… not.
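For what it’s worth, there is also a command-line way to land in the same place – if you can get a command prompt open before the next crash, this restarts straight into those advanced startup options (the /o switch is what sends it there, and /t 0 makes it happen immediately):

    shutdown /r /o /t 0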

Anyway – once booted in diagnostic mode, select the option to bring up a ‘DOS’ command window.

Now select drive C: and then locate and rename (or delete) the errant driver file concerned  (C:\Windows\System32\Drivers\MACHALDRIVER.SYS) as per this screen shot:

WDF-resolution-iMac
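In case that screen shot ever goes walkabout, the commands involved amount to something like this (the new file name is arbitrary – anything that stops Windows loading the driver will do):

    C:
    cd \Windows\System32\Drivers
    ren MACHALDRIVER.SYS MACHALDRIVER.SYS.disabled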

One trap to watch out for: Make sure you’ve changed to drive C:, and that you’re not still on drive ‘X:’ looking for that file.

That drive – which you start out on when bringing up the command window – contains a whole separate copy of Windows… without the Bootcamp files on it. So you think you’re searching in the right filesystem – after all, Windows is on it… but you aren’t.

I guess that’s a consequence of using the Diagnostic mode, but it fooled me for a while, as I was trying to find the errant driver file there (on drive ‘X:’) at first…and failing to do so.

Now reboot and – yay – no more WDF blue-screen-of-death failures.

… but also, no bluetooth keyboard support.

No problem to me – I really prefer the direct-attach larger keyboard I found with all the Microsoft specific buttons on it anyway, for this task.

Contrary to warnings on the web sites that had helpfully pointed out the incorrect/broken MACHALDRIVER.SYS file issue, I have had no practical issues with volume control or similar things as a consequence of disabling that file.

For me, it all seems to work fine without this file in my life at all.

Success!

At this point, I have a working Windows 10 installation on my machine.

I have subsequently installed the software I wanted to run in the first place and it’s all working just fine.

I do hope someone else finds this useful – and that if you do go down this road, that you have a smoother ride than I did! 🙂

Windows-iMac-running

 

Life, the universe, and Redflow

Today Redflow announced the appointment of John Lindsay as a non-executive director of Redflow Limited. John has deep skills and experience around technology and technology-related business matters. He is, to use a favourite phrase (for us both), ‘smart and gets things done’.

It’s worth appreciating that John has specific expertise and experience in precisely the realms that Redflow needs. I sent John over to Brisbane when I originally invested in Redflow, to help me assess the technical merit of the technology. He, like me, has been a shareholder in Redflow ever since.

In addition to being a great businessman, John is also a technology geek at heart (as am I). He has been an active member of the electric vehicle and renewable energy community for many years. His daily driver is electric (as is mine) – of course. He knows which end of a soldering iron is the hot end.

His idea of a fun weekend hobby is (literally – and recently) to set up a D.I.Y. solar and battery off-grid system in his own garage to charge his electric car from renewable energy, because… he can (and because he knows how to).

His appointment frees me up to transition my own head space in the Redflow context totally into the technology around making our battery work in the real world. Doing that stuff is what I really love about being involved with Redflow. I love helping to make this amazing technology sing and dance smoothly for real people, solving real problems.

It was just the same at Internode – the company I spent more than two decades running. The ideal situation is to do things in business because you’re passionate about them. In the words of Simon Sinek: People don’t buy what you do, they buy why you do it.

I care about Redflow because I believe that Redflow’s technology can genuinely help to accelerate the world’s transition to renewable energy as a replacement for burning things to make electricity. It’s really that simple.

The technical lever I designed, to help Redflow to move this particular part of the world, is the Redflow Battery Management System (BMS). I am very proud of the great work done by the technical team at Redflow who have taken many good ideas and turned them into great code – and who continue to do that on an ongoing basis.

So… while there can be a natural tendency, when looking at this sort of transition, to wonder whether my leaving the board (given how influential I’ve been at board level in the last few years) is because something ‘bad’ is happening, or because I don’t like it any more, or because I don’t feel confident about things at Redflow, the reality is precisely the opposite.

My being happy to step back from board-level involvement over the next few months is the best possible compliment that I can give to the current board, led by Brett Johnson (and now including John), and to the current executive (now ably led by Tim Harris).

I’ve put my money where my mouth is, to a very large extent, with Redflow. I am its largest single investor – and I have also put my money down as a customer, too, in my home and in my office.

At this point, I’m happy to note that we are seeing great new batteries turning up from our new factory. We are on the verge of refreshing our training processes to show our integrators – and their customers – how far the BMS and our integration technology have come (and just how easy it all is, now, to make the pieces work). We are looking forward to the integration industry installing more of our batteries into real-world situations around the world again – at last.

We do this with confidence and we do this with eagerness.

I am proud to be a shareholder in Redflow and I look forward to the next chapter of this story.