Internet of Somebody Else’s Things

It seems a bit odd, but this whole Internet of Things (IoT) movement relies heavily on the “cloud”: millions or billions of finicky little devices tucked all over, each bound to a stable internet connection. Data is produced, shipped off somewhere opaque, then presented back in a usable fashion. That strategy becomes a problem when our rural internet service is intermittent and not particularly performant. Given that the cloud is just someone else’s computer, why can’t I have my own personal IoT?

It’s a bit tough to see, and I could have chosen a better time to grab the graph, but my router comes with software that shows a dashboard approximating network performance over the last 24 hours. I’ve added a couple of timestamps to make it more useful as a static image.

[Graph: network performance over the last 24 hours, Friday at 14:00 to Saturday at 14:00]

You can see that the fastest speeds we see are ~40 Mbps down and 2.5 Mbps up, and that’s in the middle of the night. During peak hours (21:00 seems to be pretty terrible) latency skyrockets and both the average and max downlink plummet.

That said, the problem for us and IoT is that second graph: we’re averaging < 1 Mbps up at any given time, and regularly dropping as low as 0.1 Mbps. That means any service that pushes significant (it doesn’t even have to be *that* significant when you consider we actually want to use the internet at the same time) amounts of data to the cloud is going to be an issue for us. Never mind the regular power outages and internet degradation when storms roll through; even when things are all going perfectly, we have an extremely limited ability to push data to the internet.
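To put that uplink in perspective, here’s a rough back-of-envelope calculation. The payload sizes are illustrative assumptions, not measurements from any particular device:

```python
def upload_seconds(payload_bytes: float, uplink_mbps: float) -> float:
    """Seconds to push a payload over a given uplink (protocol overhead ignored)."""
    return payload_bytes * 8 / (uplink_mbps * 1_000_000)

# A hypothetical 2 MB camera clip vs. a 200-byte sensor reading
# at our 0.1 Mbps worst-case uplink:
print(round(upload_seconds(2_000_000, 0.1)))   # → 160 (seconds): unusable
print(round(upload_seconds(200, 0.1), 3))      # → 0.016 (seconds): fine
```

Small, infrequent sensor readings squeak through even a terrible uplink; anything chunkier monopolizes the connection.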

We do a fair bit of heating with wood in the winter months, so the thermostat that was installed when we moved in wasn’t particularly useful – the temperature differences between rooms are huge, let alone between rooms on separate floors. With the traditional propane forced-air furnace everything is rosy, but heat is not so easily distributed from a fireplace. Because of that, I wanted a more holistic view of the temperature conditions in the house. Originally I tried to make my own temperature sensors, but I’m not nearly capable or motivated enough to pull that off without having to plug something in. I dropped the idea for a while, but took notice when the Ecobee came out.

I wasn’t interested in the Ecobee as a thermostat (it can’t throw logs on the fire…) so much as for the sensors it comes with. The Ecobee proper plus three remote sensors was a little over $300. Plus, Ecobee has an API for consuming and manipulating the data. A bit of a slam dunk, from my point of view – someone went and made a nice bundled product that addressed a project I had mostly given up on.
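For a sense of what that API looks like: Ecobee exposes thermostat and sensor readings as JSON over HTTPS. The sketch below shows roughly what a read request looks like; the endpoint and the `selection` body follow Ecobee’s published API as I understand it, but treat the details (and the token handling) as assumptions rather than a working client:

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://api.ecobee.com/1/thermostat"  # per Ecobee's API docs

def build_request(access_token: str) -> urllib.request.Request:
    """Build a read request asking for remote sensor data.

    The 'selection' object follows Ecobee's documented JSON shape;
    includeSensors asks for the remote temperature/occupancy sensors.
    """
    selection = {
        "selection": {
            "selectionType": "registered",
            "selectionMatch": "",
            "includeSensors": True,
        }
    }
    query = urllib.parse.urlencode({"format": "json", "body": json.dumps(selection)})
    return urllib.request.Request(
        f"{API_URL}?{query}",
        headers={"Authorization": f"Bearer {access_token}"},
    )

if __name__ == "__main__":
    # The token comes from Ecobee's OAuth PIN-authorization flow.
    req = build_request("YOUR_ACCESS_TOKEN")
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    for tstat in data.get("thermostatList", []):
        for sensor in tstat.get("remoteSensors", []):
            print(sensor.get("name"), sensor.get("capability"))
```

Note the catch, though: every one of those requests goes to `api.ecobee.com`, not to the thermostat sitting on my own LAN.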

The only stumbling block is that it all relies on Ecobee’s servers. If Ecobee goes under, I’m not really sure what happens to our thermostat. If our power or internet goes out, we can no longer use an app to look at the house stats or adjust the temperature, even though all the data is produced and processed locally. Damn.

Ecobee does have a pretty compelling answer to that here (in response to a question asking why there’s no LAN version), transcribed below:

MarkK (API Architect), over 3 years ago

Hi,

Rather than argue the merits and faults of a specific implementation, I will explain why we made the choices we have made in the implementation of the API:

1) Everything, absolutely everything speaks HTTP today. From embedded devices to PCs. HTTP has become the de-facto protocol in the interconnected world.

2) We implemented a REST-like interface which has become the most common method of delivering API connectivity. JSON is a lightweight serialization protocol supported by all commonly used languages out there. It is also extremely common today, almost every API on the web uses it today.

3) Industry standards were chosen because it is a common language everyone today speaks. It is easy to integrate with 3rd party services when they all speak the same language and operate in the similar ways.

4) Polling is a scalability strategy which allows us to scale to millions+ requests. Push technology is expensive and would require on our part to ensure delivery, track state, etc. We choose to be light weight on our end so that we can continue to support millions of devices and 3rd party applications out there.

5) The thermostat is already connected to our servers. This is how we provide data services (Home IQ). Our servers ensure that your thermostat is protected from attacks against it. It also does not require you to open any ports on your home firewall/router to communicate with your thermostat. This is a simpler and safer solution. It also makes our thermostat safer and stable by not including server code into it. It also makes us more agile by being able to update our servers much more quickly than we can with the thermostat device firmware. You get more quicker this way.

6) PDF documentation gets out of date quickly. On the web you get the latest and greatest documentation every time you read it. If PDF is your forte, there are plugins and tools which can generate a PDF from a web site.

<snipped answer to irrelevant question>

Thanks,
Mark

So according to Mark, it’s a business decision to keep the thermostats dumb and put all the smarts in the servers. A simple thermostat API means they have to update customers’ firmware (which is WAY more difficult) far less frequently, and can instead update their own servers (which is WAY easier). Brilliant move from a business perspective, but one more service that erodes personal data ownership.
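Mark’s polling rationale cuts both ways: the same dead-simple polling loop works for keeping my own local copy of the data the cloud would otherwise own. A minimal sketch, where `fetch_readings` is a hypothetical stand-in for an actual API call and the polling interval is my assumption, not Ecobee’s documented limit:

```python
import json
import time
from pathlib import Path

LOG = Path("sensor_log.jsonl")  # one JSON reading per line, kept locally

def fetch_readings() -> dict:
    """Hypothetical stand-in for a call to the cloud API.

    In practice this would hit Ecobee's /1/thermostat endpoint and
    pull the remote-sensor temperatures out of the response.
    """
    return {"ts": time.time(), "living_room": 20.5, "office": 18.2}

def poll_once() -> None:
    """Fetch the current readings and append them to the local log."""
    reading = fetch_readings()
    with LOG.open("a") as f:
        f.write(json.dumps(reading) + "\n")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(180)  # assumed interval; check the API's rate guidance
```

It’s not a LAN-local thermostat, but at least the history survives an outage – which, given our uplink, is the part I actually care about.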
