Where it all ends up – the database

It’s a bit much to try and tackle the entire web application stack all at once. It can often be helpful to look at a particular area, see how the various technologies fit with your end goal, then let that guide you towards complementary technologies.

Database first?

One of the most approachable decisions is the data persistence layer. I think this is a great way to start narrowing your focus. Databases are pleasantly abstracted from applications, with generally usable middleware to make integration with everything else doable. Whether you use Redis or PostgreSQL or MongoDB, you can usually move from one to the other without impacting the majority of your application logic. That’s not to say it’ll be easy, just that you *can*. I recently took part in a MySQL to PostgreSQL migration, and it definitely had its quirks, but it was quite manageable. Also, one of the only real manifestations of “I made a bad choice!” is that performance suffers at scale – that’s a great problem to have!

NoSQL seems to be all the rage these days. I’m suspicious it’s mostly because people don’t want to learn SQL or generally don’t know what they want to do, which is fair-ish. Certainly document stores like MongoDB come across as kind of a cop-out. It seems like most data is pretty well structured, and worst case you can store JSON in PostgreSQL, so some of the more “modern” choices seem a bit suspect. That’s not to say PostgreSQL solves all problems for all people. Graph databases like Neo4j are a great example of an excellent reason to use a different technology – they overlap very little with what RDBMSs excel at. On the other hand, is a time-series database a special enough case to warrant something like InfluxDB, or would you be better off with more traditional choices? That’s a much tougher call to make. For the general case, and for my own purposes, I feel like the ultimate answer is twofold.

You’re guaranteed to have structured data in your application. I contend that while it’s marginally more work to get started with a schema and have to deal with migrations and types, those are all good problems to have. Sure it sucks when you’re just starting and you can’t do anything until you write a migration or a schema, but as soon as you get out of that very first step and have data you want to keep around, it’s SO much easier when you have some structure and integrity. The bugs that arise from unstructured data storage feel far too much like the bugs that are only possible with dynamically typed languages. Use a SQL database for data that fits in it. Your user has a name and a phone number – sure some day you may add an address, but fundamentally I have a hard time believing the model will shift enough to justify a schema-less technology.
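To make the dynamic-typing comparison concrete, here’s a minimal Ruby sketch (the record and its keys are made up) of the kind of bug schema-less storage invites: a typo’d key fails silently instead of loudly.

```ruby
# A schema-less "record" is just a hash: nothing enforces its shape.
user = { "name" => "Alice", "phone" => "555-0123" }

# Typo the key on write ("adress") and nothing complains...
user["adress"] = "123 Main St"

# ...the bug only surfaces later, as a silent nil on read.
puts user["address"].inspect  # => nil
```

A schema with an actual address column would have rejected the bad write at the boundary instead of letting it rot in storage.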

You’re guaranteed to have unstructured or semi-structured data in your application. This is where the low friction of accepting arbitrary JSON is particularly valuable. You don’t want to have to translate from your form submission to your database, you want some magical ORM angel to swoop down and save you from knowing what you’re doing. Someone filled out an arbitrary 3/4 of a form – do you put that in your database? Probably not! You desperately don’t want to sacrifice data integrity, and you’re immediately going to run into constraints you’ve put in place that provide value in general but provide friction while iterating over invalid data to get it to a usable state. Side note: this is one of the fundamental design issues I have with ActiveRecord in Rails – it doesn’t provide any real separation, from the application perspective, between an “immutable” object that can be constructed from data in the database and the super-mutable object that is destined to maybe one day make it into the database. To address that, storing things directly as JSON, or alternatively looking at a parallel deployment of something like MongoDB or Redis (depending on use cases), may make more sense. Once they’ve been completed and validated, then you can fit them into the structure you’ve defined.

 

For my own foray into end-to-end web development, I’ve decided I’ll have a strong bias towards technologies that allow me to use PostgreSQL.

Pros:

  • It’s the data persistence technology I’m most familiar with (bit of a trump card)
  • It’s got a huge community to provide tooling and help
  • It’s extremely mature, which means it’s rock solid and scalable

Cons:

  • It’s not that cool, what with its rules and structure

In the same sense that I feel pretty strongly that static typing (when implemented properly in a language) is undeniably “better” than dynamic typing, I’m pretty suspicious that most of the reasons people use NoSQL are bullshit. I’d love to see some numbers that show modern MongoDB (for example) panning out better than PostgreSQL for… well… anything really. The only thing I’d buy is the same argument for dynamic typing. It’s “easier” at the very outset of a project, largely because it doesn’t make you stop and figure out what you’re doing.

Current Web App Tech Stack

At work, the product I currently work on uses PostgreSQL and Ruby on Rails 4. It was started well before the current front-end craze, so the front end is basically a mess of JavaScript, jQuery, and the ERBs that Rails provides. There’s been a push to move the front end to Angular 2, which seems like it’ll be a massive improvement over what we have now.

My impressions of the current stack:

Database

PostgreSQL is awesome. In the realm of RDBMSs, PostgreSQL certainly seems like a top contender for the “prosumer” market. Every time I’ve seen a “MySQL vs PostgreSQL” comparison, modern PostgreSQL has slapped MySQL about. Also, MySQL being backed by Oracle is instantly a turn-off for me (Oracle sucks, don’t you know?). I’ve only used MySQL in passing, and Oracle DB even more briefly than that, so my opinion is grounded in almost nothing but looking up internet articles. I managed the transition for my work’s current web app from MySQL to PostgreSQL (mainly for licensing concerns) and there didn’t seem to be a huge difference. MySQL’s historical lack of a first-class boolean is a bit of a pain, though. It’s harder to reason about a column when that tinyint(1) is *probably* a boolean, but you’d best make sure…

Application layer

When I first started working in Ruby on Rails, I had a pretty strong aversion to it. I came from a mostly Java background with a tiny bit of C++, C, and Python. Syntactically, things like not explicitly calling “return” really threw me for a loop. Right off the bat I thought “how do you know whether the function just does something or whether it returns something?!?!? You can never trust any function’s return value again!”. I have really come around on that particular issue – the explicit return provides very little meaningful value from my perspective in terms of both development and maintenance. You never really call a function from your own code without understanding what it’s doing, and you don’t refactor without considering breaking changes, so it just never comes up. That said, I also never felt like typing “return” was bloat, and now there’s an inconsistency between early returns and regular returns, so while I don’t think it’s a black mark, I’m also not convinced it’s any better. Not having return types on the other hand…
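As a quick illustration of the implicit-return behaviour in question (the method names are my own):

```ruby
# Ruby methods return the value of the last evaluated expression,
# so `return` is only ever needed for early exits.
def double(x)
  x * 2               # implicit return
end

def safe_double(x)
  return 0 if x.nil?  # an early exit still uses the keyword
  x * 2
end

puts double(21)        # 42
puts safe_double(nil)  # 0
```

Hence the inconsistency: the keyword appears for early exits but is idiomatically omitted at the end of the method.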

Now I can safely say that I love small pieces of Ruby. The collection operators are so very terse and useful, it really showed me how much a language depends on them. Their implementation within Java with the .stream() function and all its bloat just seems cumbersome by contrast. Functions as first class citizens just makes for better code. No I don’t want to create a SortingStrategy interface and then pass in a concrete class which implements it – I just want to pass in a method. Go away needless layers of abstraction!
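For instance, the kind of terseness I mean (the values here are purely illustrative):

```ruby
# Chainable collection operators: filter then transform in one line.
temps = [18.5, 21.0, 19.5, 22.0]
warm  = temps.select { |t| t > 19 }.map(&:round)
puts warm.inspect  # => [21, 20, 22]

# And no SortingStrategy interface: the behaviour is just a block.
words = %w[pear fig banana apple]
puts words.sort_by(&:length).inspect  # => ["fig", "pear", "apple", "banana"]
```

The equivalent Java stream pipeline needs .stream(), lambdas, and a collector to say the same thing.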

I can also safely say that I despise Ruby. I didn’t realize that something like an interface was such blasphemy. Heaven forbid I want two separate implementations of a concept adhering to the same API to be kept in sync. Don’t even get me started on enums. Obviously I wanted a typo in that symbol I had to type out in 20 different places to make it all the way to production – who wouldn’t? One of the daily pet peeves, though, is how constrained I feel with naming in Ruby. The lack of method overloading leads to either gross-looking function signatures, or resorting to something like repeating the parameter in the method name. Additionally, if you ever have any desire to refactor anything on a large scale, don’t even think about naming methods with common words. Thinking something like “find” or “valid?” would be a great name? Good luck changing it later! You’ll curse as you manually curate the list of possible invocations your IDE has presented you with. I constantly find myself resorting to string searches, which is a terrible sign.
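Two of those gripes in miniature (method and symbol names invented for illustration):

```ruby
# No method overloading: redefining `find` would silently replace it,
# so the parameter ends up encoded in the method name instead.
def find_by_name(name)
  "found by name: #{name}"
end

def find_by_phone(phone)
  "found by phone: #{phone}"
end

# No enums: a typo'd symbol is happily carried all the way to runtime.
VALID_STATES = [:active, :archived]
state = :activ                     # typo, and nothing flags it
puts VALID_STATES.include?(state)  # false
```

A compiler with enums and overloading catches both of these before the code ever runs.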

I don’t have particularly strong opinions about Rails. It definitely gets you up and running quickly. At first I was upset with all the magic – “how am I supposed to know that if you name it like this, then this other file way over here is magically attached?”. The first time I had to debug having pluralized a name I definitely did not say anything nice about Rails. Then I had some exposure to the alternative – explicitly linking the TypeScript, CSS, and HTML files in Angular – and I thought “they’re all sitting right there next to each other and named the same thing, why do I have to spell it all out?”. So there are definitely downsides to both strategies, but I personally find I don’t mind the “bloat”, as it pulls back the curtain and there’s less magic, and I think automatic code generation hits the sweet spot in between the two. Code generation with tooling support to keep it out from under developers’ feet seems like it’s better than both options. I have my issues with how Rails treats all models as mutable and with ActiveRecord in general, but I haven’t seen something that is obviously much better, so I’ll reserve judgement.

All in all, I think Ruby on Rails is great to get going, but painful to maintain and scale up. If the app is a continuous project and not a “build it and move on” situation, I couldn’t recommend it. There are just too many avenues for bugs that could have been solved by static typing.

Front End

The unstructured world of old-timey terrible JavaScript and jQuery mangling HTML is awful. Throw in some Rails ERBs for even more inconsistency and it’s a giant unsustainable mess. I’m sure the technology *could* be used well – built in a sensible fashion, relatively decoupled and scalable. It’s just not, in my experience… Historically all of the “real work” has been done on the server side, so the client side hasn’t gotten the focus and rigor it needs. Definitely there are people who can write excellent client-side code with JavaScript and jQuery, they’re just few, far between, and likely underappreciated because people see “the button is more red than I’d like” and not how easily that button was integrated with the rest of the app due to well structured code. The web app I work on is definitely larger than a personal project, but it’s not *huge* by any means. It makes me wonder what the massive websites using similar technologies did. I suppose a lot of that is that nobody was really a professional JavaScript developer where I work, and code written by people who are learning as they go is never going to age well.

Angular is a conceptual change and a technology leap forwards for my work’s application. That said, I’m not totally positive it was the right step. The goal is to slowly transition from a Rails-powered front end to an Angular-powered front end. That makes a lot of sense by itself, but the entire transition period is pretty gross. Angular and Rails don’t go together particularly seamlessly, as there is quite a bit of overlap between the two. First of all, it should be a no-brainer that a technology built to support single page apps is at odds with the very not-single-page-app model that is Rails. So how do you address that in terms of deployment? It’s hard to go one page at a time when it means that every time you navigate off an Angular page and then back, it’s essentially restarting the application.

I really like what I’ve seen of Angular so far, and it makes a lot of sense to me. I’m looking forward to compile time checks of the template code that is allegedly coming in Angular 3, as I *hate* the magic string matching that shares so much with Ruby and Rails. Other than that, it seems like client side development is finally starting to mature to the point where frameworks can hit it big even when they impose structure and opinions about how to do the work.

I’m excited to work with it more on my own!

Web app development stack landscape

The basic building blocks of the run-of-the-mill web application come down to:

  1. Something to store data
  2. Something to manipulate that data into meaningful pieces
  3. Something to show those pieces to something else

A simple example of how that could play out in terms of an end product could be:

  1. A regular old text file with the temperature recorded every hour
  2. A program that reads the file and finds the daily average temperature
  3. A graph of the daily average temperature for the last month

Unfortunately (or fortunately?), to satisfy those 3 simple tasks, there is a ridiculously intimidating number of technologies that can all kind of work together, but mostly don’t. Well, ultimately everything *can* work together; it’s often just far more trouble than it’s worth. For a long time, The Answer was the LAMP stack. I think it’s safe to say that LAMP is relegated to maintenance work, with new projects being rare – mostly the product of people who don’t want to learn new technologies.

I personally feel as though the weakest part of LAMP is the P – I continue to hold the opinion that PHP is a nightmare. It just is. It’s everywhere, and that’s great and all, but its adoption as a “general purpose” programming language seems misguided (hmm… what’s this about JavaScript popping up on the server side?). Additionally, M hasn’t aged well for different reasons – PostgreSQL has largely usurped MySQL as the go-to RDBMS. So what technologies power the modern web application stack?

Looking at StackOverflow’s 2016 developer survey is always a good place for a sanity check. Largely, I think the front end choices are a bit easier. Everything is JavaScript and HTML and CSS. Sure, there are tons of technologies that jump in there, which can cause a mild panic attack…

“Do I go the modular route and try React, or start erecting the Angular monolith? Is Sass better than LESS? Is HTML good enough by itself or should I use Mustache or Jade or Handlebars or… and what if I screw up?!”

The real kicker with these types of investigations is the overlap. It doesn’t really work to ask “Angular or React” in the same way you can ask “MySQL or PostgreSQL” – they do different things and satisfy different requirements, they just both happen to be popular. This can quickly overwhelm someone (e.g. me) trying to compare and contrast and end up with a defensible choice. Largely, I don’t think it really matters too much what you go with on the front end, as that landscape is shifting preposterously fast and virtually all choices are better than writing HTML and jQuery, which has done a surprisingly good job so far. Soon enough, my guess is the client side will mature a bit and we’ll see the same layering we’re familiar with on the server side – mobile apps seem to already be there. Persistent client-side storage, native computation power and responsiveness, while still serving highly dynamic content are all simply too compelling. The convergence of mobile applications and web is well on its way, as browsers are either the new OS, or OSs are the future of browsers or something.

So how do you start?

I think the best option here is to already have started. By that, I mean isolate the technologies that you’re most familiar with, so you can focus on building something you can actually think of as “done” at some point. If you try and learn everything about everything all in one shot, chances are (if you’re anything like me) that you’ll find yourself frustrated and give up. Do you know MongoDB intimately? Awesome, use it. Have you done extensive work with CSS and somehow managed to not have come across Sass? Don’t bother with it now. I’m a strong advocate for surging blindly forward when stakes are low and the threat of spinning your wheels investigating everything and producing nothing is high. Get *something* working end to end, then fan out and refine. This can also guide you towards a particular set of technologies. Often frameworks are drawn to each other as people refine their development processes, so pick one that goes well with what you know. Perhaps even let whatever general purpose language you know guide you:

  • Know Java or Scala? Check out Play Framework (side note, TERRIBLE choice of name, Googling for it is *so* much harder than it needs to be)
  • Know Ruby? Check out Ruby on Rails
  • Know Python? Check out Django
  • Know JavaScript? Check out Meteor
  • Know Go? You’re cooler than me, why don’t you just go build your own

 

Web App Development

About a year ago my job responsibilities changed from very enterprisey Java application development to an enterprise Ruby on Rails web application. I started out my career doing mobile development, so web development feels very similar. Both have a very tangible result and a strong feedback loop from development. The write code -> deploy code -> see results cycle takes very little time with modern tooling, and the changes made are typically far more visible. For example, previously one of my responsibilities was to implement a particular algorithm. It was fun and everything, but I did not find the end result tremendously satisfying – slightly less data being used in one aspect of the client/server communication. On the other hand, a similar amount of time on a web app yields a new analysis feature with a UI that gives the user configuration options and renders a pretty graph that provides some value (presumably). Generally, it seems the “server-side” problems are closer to “real” problems – they’re more interesting, they’re more substantial, and there may be some actual design and figuring out to do. The “client-side” problems strike me more as tasks. The hard work is the UX, while the rest of the feature is generally following the script:

  1. Get user input
  2. Join tables and run query
  3. Apply algorithm
  4. Send result to client

This means that there’s far more time spent figuring out the framework than figuring out the “problem”. I feel like that’s generally true of user facing applications though – most of the work is not particularly novel. Yet somehow we still manage to botch it up an astonishing amount…

So how do you make web application development interesting then? Why, making it yourself! This has all the signs of a project I’ll start and never really finish, but I’d like to give it a shot. The whole environment is moving so quickly that I’m sure doing anything at all will yield some insight.

As with anything, the first step is often the hardest. What makes it even harder is I have no use case to match up to – I just want the learning experience of building a web application from the bottom up, and any actual goal is only a pretense. That may seem a bit backward, but what I’m saying is I’d rather change my “project” to better complement a technology I’m interested in than change technologies to best serve my “project”.

Internet of Somebody Else’s Things

It seems a bit odd, but this whole Internet of Things (IoT) movement seems to heavily rely on the “cloud”. Millions or billions of finicky little devices tucked all over that are each bound to a stable internet connection. Data is produced, shipped off somewhere opaque, then presented back in a usable fashion. That strategy becomes a problem when our rural internet service is intermittent and not particularly performant. Given that the cloud is just someone else’s computer, why can’t I have my own personal IoT?

It’s a bit tough to see, and I could have chosen a better time to grab the graph, but my router comes with some software that shows a dashboard – this is an approximation of the network performance over the last 24 hours. I’ve added a couple of timestamps to make it more useful as a static image.

last_24_hours

Friday at 14:00 to Saturday at 14:00

You can see that the fastest max speed we see is ~40 Mbps down and 2.5 Mbps up, and that’s in the middle of the night. During peak hours (21:00 seems to be pretty terrible) latency skyrockets and both average and max downlink plummet.

That said, the problem for us and IoT is that second graph – we’re averaging < 1 Mbps up at any given time, and dropping down as low as 0.1 Mbps regularly. That means that any service that pushes significant (doesn’t even have to be *that* significant when you consider we actually want to use the internet at the same time) amounts of data to the cloud is going to be an issue for us. Nevermind the regular power outages and internet degradation when storms roll through, when things are all going perfectly we have extremely limited ability to push data to the internet.
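Some back-of-the-envelope arithmetic makes the pain concrete (the payload size here is hypothetical, just to show the scale):

```ruby
# At an average 0.5 Mbps uplink, even a modest payload ties up the link.
uplink_mbps = 0.5
payload_mb  = 10.0                           # hypothetical daily sensor data, in megabytes
seconds     = (payload_mb * 8) / uplink_mbps # megabits / (megabits per second)
puts seconds  # => 160.0 seconds of saturated uplink
```

And at the 0.1 Mbps lows, that same payload takes five times as long – while we’re also trying to actually use the internet.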


 

We do a fair bit of heating with wood in the winter months, so the thermostat that was installed when we moved in wasn’t particularly useful – the temperature differences between rooms are huge, let alone between rooms on separate floors. With the traditional propane forced air furnace everything is all rosy, but heat is not so easily distributed with a fireplace. Because of that, I wanted to get a more holistic view of the temperature conditions in the house. Originally I was trying to make my own temperature sensors, but I’m not nearly capable or motivated enough to pull that off without having to plug something in. I dropped the idea for a while, but took notice when the Ecobee came out.

I wasn’t interested in the Ecobee as a thermostat (it can’t throw logs on the fire…) but more for the sensors it comes with. The Ecobee proper + 3 remote sensors was a little over $300. PLUS, Ecobee has an API for consuming and manipulating the data. A little bit of a slam dunk, from my point of view – someone went and made a nice bundled product that would address a project I had mostly given up on.

The only stumbling block is it all relies on Ecobee’s servers. If Ecobee goes under, I’m not really sure what happens to our thermostat. If our power or internet goes out, we can no longer use an app to look at the house stats or adjust the temperature even though all the data is produced and processed locally. Damn.

Ecobee does have a pretty compelling answer to that here (an answer to a question of why not make a LAN version), transcribed below:

MarkK(API Architect) over 3 years ago

Hi,

Rather than argue the merits and faults of a specific implementation, I will explain why we made the choices we have made in the implementation of the API:

1) Everything, absolutely everything speaks HTTP today. From embedded devices to PCs. HTTP has become the de-facto protocol in the interconnected world.

2) We implemented a REST-like interface which has become the most common method of delivering API connectivity. JSON is a lightweight serialization protocol supported by all commonly used languages out there. It is also extremely common today, almost every API on the web uses it today.

3) Industry standards were chosen because it is a common language everyone today speaks. It is easy to integrate with 3rd party services when they all speak the same language and operate in the similar ways.

4) Polling is a scalability strategy which allows us to scale to millions+ requests. Push technology is expensive and would require on our part to ensure delivery, track state, etc. We choose to be light weight on our end so that we can continue to support millions of devices and 3rd party applications out there.

5) The thermostat is already connected to our servers. This is how we provide data services (Home IQ). Our servers ensure that your thermostat is protected from attacks against it. It also does not require you to open any ports on your home firewall/router to communicate with your thermostat. This is a simpler and safer solution. It also makes our thermostat safer and stable by not including server code into it. It also makes us more agile by being able to update our servers much more quickly than we can with the thermostat device firmware. You get more quicker this way.

6) PDF documentation gets out of date quickly. On the web you get the latest and greatest documentation every time you read it. If PDF is your forte, there are plugins and tools which can generate a PDF from a web site.

<snipped answer to irrelevant question>

Thanks,
Mark

So according to Mark, it’s a business decision to keep the thermostats dumb and put all the smarts in the servers. A simple thermostat API means they have to update customers’ firmware (which is WAY more difficult) far less frequently, and can instead update their own servers (which is WAY easier). Brilliant move from a business perspective, but one more service that erodes personal data ownership.

12 Months Later

*Knock on wood* I have had absolutely no problems with the Prius. More than 25 000 km later and it’s as though the battery never gave me any trouble in the first place. There’s not a lot more to say about it than that, really. If the battery isn’t giving you any trouble, the car just drives. I don’t have any easy way of determining the pack’s health, and it’s completely possible that I’ve wrecked everything and shortened the lifespan of the traction battery, but it sure doesn’t seem to be the case. I have over 250 000 extremely rough kilometers on the car – I did construction and renovation projects to help pay for university, and used the Prius as a truck. I’m sure that contributed to the early wear of the battery pack, as there was an inordinate amount of dirt and debris in the car that probably very few other Priuses in the world are subjected to. For example, I once filled the back with the waste shingles and roof debris from a re-roofing job. So that all said, I think the battery is probably one of the “healthier” pieces of the Prius as it stands right now. It’s missing most of its front bumper, and numerous non-essential pieces of the body. A small oil leak, a couple rust spots, but it still runs.

Would I do it again?

Absolutely. Particularly having done it before, but looking back, if I were doing it for the first time I still think it would have been well worth it. I very briefly thought about trying to set up a business offering assistance with other people’s hybrids, but I really don’t care enough to learn all I would need to do it properly. I’d certainly help anyone looking to do it themselves though. It’s utterly preposterous that the status quo is “sell us your old pack, we’ll sell you a new one”. I learned a massive amount, overcame most of my fear of electricity, and saved a few thousand dollars. I’m lucky enough to have both a job that was very accommodating about letting me work from home when I needed to, and a partner who was willing to ferry me around a bit. That bought me the ~1 month I needed to diagnose what was wrong with the Prius, research my options, buy everything I needed, wait for it to ship, fix the battery, and put it back together.

Done at last

With all the interesting things done, this is just the first couple steps in reverse.

DSC_0216

Collect cleaned contacts and nuts

DSC_0217

Make sure the battery pack is securely fastened together

DSC_0225

Pop the plastic covers back on the bus bar and the vent tubes on the top

DSC_0226

And we’re all done! Battery pack fixed, balanced, reassembled and ready to go! Putting the battery back in the Prius is exactly the opposite of taking it out. Screw in a bunch of stuff, try not to break anything or strip anything. I didn’t put it back together though, on the off chance I would have to take it out again…

One last note

I put the battery pack in, crossed my fingers, and turned the Prius on. Everything went terribly. The dash lit up like a Christmas tree. My heart sank; this had been an almost month long experiment of stress and an intimidating amount of learning. In an effort to not curl up in a ball and lay there for a while, I turned the Prius off and tried the only thing that was left: wiggle the safety plug. I had read that it was a bit tricky to set it in properly. I had taken extra care to make sure it sat properly. I had screwed it up though! Holy jumpin’, I tried not to get too excited with another shot. I took the plug out, pushed it back in, then repeated a couple times. Finally it sat in really securely – once clipped in it didn’t wiggle at all.

Crossed my fingers and my toes this time, pressed the power button and… success! No lights anywhere! Glorious lack of abject failure. I took the car for a quick drive, everything went as well as could be expected, and I parked it for the night. So if you’re in the same situation – double and triple check that the safety plug is seated correctly before having a stress breakdown. And then probably take it out and try all over again, because it’s not exceedingly easy to seat it perfectly. Good luck!

Reassembling the Pack

I labelled all the modules and kept track of their performance individually throughout the process. When it came time to rebuild the pack, I was curious if I should put it back exactly how I took it apart or “rotate the tires” or what. I couldn’t find a definitive answer on what the best thing to do was, so I made use of the only metric I had. I averaged out each module’s discharge over the 3 cycles and ranked them. Then I paired the modules together so the “strongest” was with the “weakest” and so on – #1 went with #28, #2 with #27, etc. I saw someone mention this method, and for lack of anything else particularly sensible to do I went with it as well.
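The pairing scheme can be sketched in a few lines of Ruby – the discharge numbers here are illustrative, not my actual measurements:

```ruby
# avg_discharge: module number => average discharge over 3 cycles (mAh).
avg_discharge = { 1 => 6400, 2 => 6150, 3 => 6300,
                  4 => 6250, 5 => 6350, 6 => 6200 }

# Rank strongest-first, then pair strongest with weakest, and so on.
ranked = avg_discharge.keys.sort_by { |m| -avg_discharge[m] }
half   = ranked.size / 2
pairs  = ranked.first(half).zip(ranked.reverse.first(half))
puts pairs.inspect  # => [[1, 2], [5, 6], [3, 4]]
```

With all 28 modules, rank 1 goes with rank 28, rank 2 with rank 27, and so on down the line.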

DSC_0205

From there, the only tricky thing left was getting the modules back into the pack. It’s a tight squeeze! Each module has to line up with the hole in the bottom so that the screw can fasten it, so there’s not a lot of room for error. I used a super high tech block of wood.

DSC_0214

DSC_0215

I removed just enough modules to be able to fit a 2×4 in there. Then I fastened the screws on top and on the bottom and used those screws to compress the modules. This let me get the screws into the bottoms of all the modules, so releasing the pressure wouldn’t cause the modules to shift too much.

The last module was a tight squeeze:

There was no real trick here, I just mashed it in and tightened ‘er down. That said, I did find it easier to hook the plastic end piece on the bolts then jam the last module in as opposed to putting the module in and trying to hook the plastic end piece on.

Balancing the Battery Pack

As you may recall, what originally tripped the Prius error was what basically amounts to a voltage difference between the modules. The sensors in the battery pack report the module voltage (I think in groups of 2? You can see the wires leading to a contact in each section of the bus bar), and the ECU will report an error beyond a certain threshold. I believe it’s a difference of anything greater than 0.3V between any two module groups that will trigger an error, but I’m not positive that’s the threshold.

The significance of this is that cycling the batteries isn’t enough to “reset” the pack. The voltage also needs to be balanced across all the modules. The last charge of the modules left it at a relatively arbitrary voltage, as the charge terminated after reaching the 7250 mAh mark. Here’s what I ended up with:

Module Voltage
1 7.96
2 8.01
3 7.97
4 7.96
5 7.95
6 8.04
7 8.02
8 8.03
9 7.95
10 7.93
11 7.94
12 8.00
13 7.97
14 7.97
15 8.03
16 8.03
17 7.92
18 8.00
19 8.03
20 7.95
21 8.09
22 7.99
23 7.98
24 8.04
25 7.98
26 8.05
27 7.93
28 8.03

So the lowest voltage is 7.92V and the highest is 8.09V. That yields a difference of 0.17V – far more spread than a balanced pack should have. The modules, as wired to drive the Prius, are hooked up in series – 28 * 7.2 nominal volts per module = 201.6V. That’s the dangerous part. Welding-level voltages. To balance all the modules, we hook them up in parallel: one ridiculously giant 7.2V battery that could probably run your cell phone for a couple of years. This part is not that dangerous. You could have quite a bit of current, but the voltage is low enough that you’d have to trip and accidentally impale yourself on a couple of the contacts for it to really matter.
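Double-checking those numbers from the table above:

```ruby
# Module voltages from the table above, in volts.
voltages = [7.96, 8.01, 7.97, 7.96, 7.95, 8.04, 8.02, 8.03, 7.95, 7.93,
            7.94, 8.00, 7.97, 7.97, 8.03, 8.03, 7.92, 8.00, 8.03, 7.95,
            8.09, 7.99, 7.98, 8.04, 7.98, 8.05, 7.93, 8.03]

# Spread between the highest and lowest module.
spread = (voltages.max - voltages.min).round(2)
puts spread  # => 0.17 (volts)

# Nominal pack voltage in series; in parallel it's still just 7.2 V.
puts (voltages.size * 7.2).round(1)  # => 201.6
```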

I read about a couple strategies for doing this, but the wiring harnesses didn’t look very robust, easy, or cheap. I really didn’t want to worry about all those wires. It would be extremely easy to accidentally hook up a module backwards and ruin it. Given that removing the bad module had been relatively easy, I opted for a different strategy.

Flip every second pack, so the terminals are all aligned:

DSC_0208

Then run a wire across:

DSC_0209

DSC_0211

DSC_0210

This worked wonderfully! And it was super easy to avoid screwing up, which is a big perk in my book. I left it like this for a little over a day, and it all seemed to work out. The pack equalized at (I didn’t actually write this down, so I’m just going off memory) 7.96V and was all good to go!

Replacing a dead module

I had a single bad module, as tagged in the photo below:

DSC_0192.JPG

No two ways about it, I’m going to have to open everything up if I’m going to get the module out. The modules are pretty packed in there, I guess they’re compressed slightly so they don’t expand while charging/discharging. The white piece of plastic at the end is roughly the size of a module, and keeps everything snug. There are two screws in the two black bars on top that span the entire pack, two screws in two almost identical bars on the underside of the pack, as well as two bolts sticking up through the frame that keep the bottom of the plastic end cap in place – only one of which had a nut in my battery pack.

First things first, there’s a temperature sensor underneath the modules. It’s just a couple relatively small gauge wires, but nothing works if the temperature sensor is damaged. It’s held on with two little black plastic clips that pop right off. It’s hard to see in the picture, but hopefully it gives you an idea (it’s the little white wire on the right):

DSC_0125.JPG

Jumping ahead to make it more clear, you can see how it attaches here:

DSC_0212.JPG

It’s fastened in two places:

DSC_0206

Jumping back, after that it’s pretty straightforward. Undoing the 4 screws through the top and bottom bars put a lot of pressure on the two upright bolts – I was a little concerned they would break, but it seemed to hold together without too much problem. The plastic part comes off relatively easily.

DSC_0197

What isn’t quite as easy is the surprise that is waiting for you after that. The modules don’t lift right out.

DSC_0199

Underside of the battery pack

Each module is fastened on one end with a screw through the bottom. With everything being reasonably squished in, it seemed like it was going to be extremely difficult to get a module out of the center of the pack – I would have to keep the modules on both sides of the dead one compressed. Instead of fiddling with it and stressing out, I decided I’d take everything out. Before doing this, I numbered each of them, so I didn’t mix anything up. I set the battery pack on a small side table and hung one side off at a time. Slow and steady, and the pack is apart.

DSC_0201.JPG

Just like that, the bad module is gone!