Logitech G815 Review / Impressions

I recently decided I wanted to upgrade my keyboard. I had two principal goals: the first was to find a keyboard still in production that I could actually buy. My former go-to was the Logitech K740 (Logitech Illuminated Keyboard), which has been out of production for a very long time. The last time I tried to replace one I ended up buying about 3 keyboards off eBay before I succeeded in getting what I was actually after.

With that one now on the way out due to the key caps breaking off on frequently used keys like backspace, and some suspected trouble with key registration, it seemed like it was finally time to choose a new keyboard and adapt to it. The typing experience and its ergonomics have become important to me, between age and profession, so it's a big decision.

Why a mechanical keyboard?

I've been curious to try a mechanical keyboard essentially due to hype, although there is some solid logic behind it. My K740s have failed due to the scissor-type plastic (nylon) mechanism failing, and once it goes there's nothing you can do. They also build up dust underneath the keys, but removing the key caps is not super-well supported - and I've lived with a very fiddly backspace for a while now, as well as some problems with key registration if I don't hit the larger keys (backspace, tab, enter) suitably dead-center.

To be clear: these are emergent problems - as new, the keyboards were solid but they failed in a predictable way.

So what I'm looking for by going with a mechanical keyboard is improved durability for key registration, and a nice typing experience. With the G815 I'm buying a gaming keyboard, but I'm buying it because I want good key registration for typing.

G815: First impressions - there's an ergonomics change

The K740 is a very thin keyboard with a built-in palm rest. It is 9.3mm thick - incredibly slender, and no mechanical keyboard is going to beat that. The G815/915 series is the thinnest mechanical keyboard on the market at 22mm thick, but that's still more than double. Up front: it's noticeable, and my typing position changed substantially.

The G815 doesn't come with a palm rest out of the box. People have said they don't think it needs one; I disagree. The first thing I found myself doing was raising my arm rests to get my hands flat to the keyboard - it's what I'm doing while typing this review. I'll be buying a palm rest soon and updating this post when I do.

The G Keys

The bigger issue I found, which I did not see discussed in reviews before buying and which is probably universal to this type of gaming keyboard design, is the addition of the G keys on the left-hand side of the keyboard.

I did not realize this before I bought the keyboard because it's a habit I don't think about, but I essentially use my left pinky to find the top-left of the keyboard when typing. On a regular keyboard, holding the top-left of the chassis like this works fine because it's pretty well lined up with escape and the top row of number keys.

The addition of the G keys, however, changes this ergonomics in a big way: my initial attempts at typing were frustrating and difficult because all my instincts about where the keys are were wrong. I'm so used to using that pinky to locate the top of the keyboard that it was very difficult to adapt without it. If you are considering this keyboard, or any gaming-style keyboard with extra left-hand macro keys, you would be well advised to check whether this is something you do: it was a huge surprise to me, and the change in how I type is, as of writing (about 45 minutes after unboxing), still feeling rough. I'm expecting to adapt, but I'm also feeling a muscle strain in my left arm due to the new typing position, so it's not an easy adaptation, and as noted above it may involve more peripherals to get it comfortable.

I strongly encourage not underestimating this - this is a peripheral I use for 8 hours a day for my job. Its function, and whether it causes muscle strain, is vital.

The Key Action

Mechanical keyboards are all about the key action. I can't give much advice here: YouTube will show you people using the keyboard, how it sounds, and tell you how it feels, but it is something which needs to be experienced for yourself. I can say that despite my complaints about the additional G keys, and the fact it's not as thin as the K740, the "Linear" key model of the G815 feels great to type on once you're in the zone on it. The action is smooth, comfortable and feels solid - this is consistent with other reviews which noted that the linear key switches tended to feel the best after a little while of typing, and this I can believe.

Some very good advice when you get into reviewing keyboards and other "things you never think about" is that almost all of them can be criticized - perfect doesn't exist, and the criticisms always feel louder than the good points. The most I can add is that if you can use one in person, that's the best way to explore the space (this is an expensive keyboard, so just buying a whole lot of them - which I suspect is how most YouTubers end up making YouTube videos about keyboards - is a danger).

Conclusions - we'll see

It's no fun getting a fairly expensive new thing and feeling "hmmm" about how well it works. The G keys might be the real problem here - that change in typing experience was a huge surprise to me, so if you find this review, that's my core takeaway: be wary of layout changes like that. There is a numpad-less variant of the G815, but I like my media keys and numpad, which is why I bought the larger one. If you don't need or want a numpad, I'd recommend that one at the present time - no G keys means no problems.

I'm hoping at the moment I'll adapt to the G keys: their potential utility is high (though you can't program them on Linux), but if I could buy a full-size variant without them tomorrow I'd do it and not bother with the adaptation.

But the keys feel great to use, so hence the conclusion: we'll see.

Conclusions Update (same day) - went back to the K740

This is probably a good gaming keyboard.

I say that because I'm sure the G keys are effective for gaming purposes. But for the way I type, which is not true touch typing, the presence of the G keys and the offset they introduce had two pronounced effects: (1) it was almost impossible for me to re-centre my typing on the keyboard after moving my hands away without a pronounced, noticeable process of feeling out where the top-left edge of the keyboard is.

The problem of key-centering was replicable with my wife, who has much smaller hands, typing on the keyboard - she found the same subtle problem trying to line up, and inevitably ended up hitting the caps lock key when she did.

The second problem (2) was wrist strain: because the G keys are actual keys and live on the left-hand side of the keyboard, my natural resting position for my left hand - off to the side with my palm free - introduced a great deal of strain in my left arm specifically. The pictures below of my hands sort of show the problem - on the top is my backup K740 and on the bottom the G815:

K740 resting position
G815 resting position

This is with my hands trying to rest in a ready position on the keyboard: you can see the problem - I'm having to actively support the left hand to stop it from depressing the G keys. In my experience this put a strain through the tendon running right up my arm and was quite painful after a short amount of use. It is possible a wrist rest would help fix this problem, but I'm not wild about the prospect since it's not an included feature of the keyboard unlike the K740, and I also do not experience this problem using other normal-thickness keyboards - this seems to be an issue specifically with how I hold my hands to type and the existence of the extra macro keys.

Wrapping Up

None of the reviews I read or watched for this keyboard before buying it mentioned this possible issue with the full-size keyboard and G keys, though I do recall that most reviewers favored using TKL (tenkeyless) variants of the keyboard for endurance typing - which notably do not have the G keys.

Please keep in mind that if you're reading this, this is all based on quirks of typing which may be specific to just how I hold my hands - I am not a touch typist, just a decently fast one from long practice, and most of my typing is done using two fingers on each hand. You may have a fundamentally different experience with this keyboard than I do.

But I have seen no reviews of gaming keyboards with extra macro keys in this position which commented on the possible issues they may introduce in use - it was a huge surprise when I unboxed this keyboard, and significantly impactful in a very direct way.

Easy Ephemeral Virtual Machines with libvirt

The Situation

At a previous job I finally got fed up with Docker containers: generally speaking I was always working to set up whole systems or test whole-system stuff, and Docker containers - even when suitable - don't look anything like a whole system.

While Vagrant does exist, there was always something slightly "off" about the feeling of using it - it did what you wanted, but it had a lot of opinions.

So the question I asked myself was: what did I actually want to do?

What we want to do

Since this was a job-specific issue, the thing I wanted to do was boot cloud-specific environments quickly, in a way which would let me deploy the codebase as it ran in the cloud. The company had since simply moved to launching cloud VM instances on AWS for this, but that left holes in the experience - e.g. try getting access to the disk of a cloud VM. On my local machine I can just mount it directly, or dive in with wxHexEditor if I really want to; on the cloud I get to spend some time trying to security-manage an instance into the right environment, attaching EBS volumes and... just a lot of not the current problem.

So: the problem I wanted to solve is, given a cloud-init compatible disk image, give myself a command line invocation which would provision and boot the machine with sensible defaults, and give me an SSH login for it that would just work.

The Solution

What I ended up pulling together to do this is called kvmboot, and for me at least it works pretty nicely. It has also accidentally become my repository of build recipes for getting various flavors of Windows VMs kicked out in a non-annoying state as quickly as possible - the result of the job I took after the original inspiration.

The environment currently works on Ubuntu (what I'm running at home) and should work on Fedora (what I was running when I developed it - hence the SELinux workarounds in the repository).

What it is is pretty simple - launch-cloud-image is a large bash script which spits out an opinionated take on a reasonable libvirt VM definition. libvirt ships with a number of tools to accomplish things like this, but no real set of instructions to produce something as useful as I've found this customization - of course that might just be me.

Usage

The basic usage I have for it today is setting up Amazon AMI provisioning scripts. Amazon provides a downloadable version of Amazon Linux 2 for KVM, and launch-cloud-image makes using it very easy:

kvmboot $ time ./launch-cloud-image --ram 2G --video amzn2-kvm-2.0.20210813.1-x86_64.xfs.gpt.qcow2 blogtest

xorriso 1.5.2 : RockRidge filesystem manipulator, libburnia project.

Drive current: -outdev '/tmp/lci.blogtest.userdata.3dQylgsKb.iso'
Media current: stdio file, overwriteable
Media status : is blank
Media summary: 0 sessions, 0 data blocks, 0 data, 51.0g free
xorriso : NOTE : -blank as_needed: no need for action detected
xorriso : WARNING : -volid text does not comply to ISO 9660 / ECMA 119 rules
xorriso : UPDATE :      12 files added in 1 seconds
Added to ISO image: directory '/'='/tmp/lci.blogtest.userdata.kq9RDblTKJ'
ISO image produced: 41 sectors
Written to medium : 192 sectors at LBA 32
Writing to '/tmp/lci.blogtest.userdata.3dQylgsKb.iso' completed successfully.

xorriso : NOTE : Re-assessing -outdev '/tmp/lci.blogtest.userdata.3dQylgsKb.iso'
xorriso : NOTE : Loading ISO image tree from LBA 0
xorriso : UPDATE :      12 nodes read in 1 seconds
Drive current: -dev '/tmp/lci.blogtest.userdata.3dQylgsKb.iso'
Media current: stdio file, overwriteable
Media status : is written , is appendable
Media summary: 1 session, 41 data blocks, 82.0k data, 51.0g free
Volume id    : 'config-2'
User Login: will
Root disk path: /home/will/.local/share/libvirt/images/lci.blogtest.root.qcow2
ISO file path: /home/will/.local/share/libvirt/images/lci.blogtest.userdata.3dQylgsKb.iso
Virtual machine created as: blogtest
blogtest.default.libvirt : will : aedeebootahnouD7Meig

real    0m16.764s
user    0m0.326s
sys 0m0.077s

16 seconds isn't bad to go from nothing to what I'd get in an EC2 VM - and since I have SSH access I can jump right into using Ansible or something else to provision that machine. Or just alias it so I can kick one up quickly to try silly things.
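For example, a throwaway alias along these lines (hypothetical - the path, image and naming are just lifted from the invocation above) makes a scratch VM a one-liner:

alias scratchvm='~/src/kvmboot/launch-cloud-image --ram 2G --video amzn2-kvm-2.0.20210813.1-x86_64.xfs.gpt.qcow2 scratch'
# then, following the same naming pattern as above:
#   scratchvm && ssh will@scratch.default.libvirt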

kvmboot $ ssh will@blogtest.default.libvirt

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
19 package(s) needed for security, out of 59 available
Run "sudo yum update" to apply all updates.
[will@blogtest ~]$ # and then you try stuff here

What's nice is that this is absolutely standard libvirt. It appears in virt-manager, and you can play around with it using all the standard virt-manager commands and management. It'll work with remote libvirtds if you have them, but it's a super-convenient way to use a barebones VM environment - about as easy as doing docker run -it ubuntu bash or something similar, but with way more isolation.
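For instance, the usual virsh commands work on it like on any other domain (using the qemu:///session connection, which is where the root disk path above suggests the VM lives):

virsh --connect qemu:///session list --all
virsh --connect qemu:///session dominfo blogtest
virsh --connect qemu:///session destroy blogtest                        # hard power-off
virsh --connect qemu:///session undefine blogtest --remove-all-storage  # clean it up entirely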

But it also works for Windows!

This was the real joy of this solution: when I stumbled into a bunch of Windows provisioning, I'd never had a good solution for it. But it turns out launch-cloud-image (I should probably rename it kvmboot, like the repo) actually works really well for this use case. With the addition of an installation mode, and some support scripting to build the automatic installation disk images, it can in fact support the whole lifecycle from "Windows ISO" to "cloud-initable Windows image" to "Windows workstation with all the cruft removed".

As a result the repository itself has grown a lot of my research into how to easily get usable Windows environments, but it does work and it works great - with Windows 10 we can automate the SSH installation and have it drop you straight into PowerShell, ready for provisioning.

Conclusion

I use this script all the time. It's the fastest way I know to get VM environments up which look like the kind of cloud instances you would be using in the public cloud, and the dnsmasq integration and naming make them super easy to work with while being standard, boring libvirt - no magic.

Log OpenSSH public keys from failed logins

Problem

I set up an autossh dialback on a machine in the office and forgot to note down the public key.

While certainly not safe to do so, how hard could it really be to grab the public key from the machine with the fixed IP that has been hitting my server every 3 seconds for the last 24 hours, and give it a login? (To be clear: a login to my reverseit tool, which is only ever going to allow me to connect back to the other end if it is in fact the machine I think it is.)

Solution

This StackOverflow solution looks like what I needed, only when I implemented it the keys I got back still didn't work.

The reason is: you don't need to do it.

As of OpenSSH 8.9 in Ubuntu Jammy, debug level 2 will produce log messages that start with

debug2: userauth_pubkey: valid user will querying public key rsa-sha2-512 AAAAB3Nz....

and just give you the whole public key...almost.
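(Getting those messages in the first place just means turning the log level up. On a stock Jammy install something like this drop-in should do it - a sketch, adjust to taste:)

# sshd_config.d drop-ins are included by the default Ubuntu sshd_config
echo 'LogLevel DEBUG2' | sudo tee /etc/ssh/sshd_config.d/90-debug2.conf
sudo systemctl restart ssh
# watch the offered keys arrive
journalctl -u ssh -f | grep 'querying public key'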

The problem is that OpenSSH log messages are truncated by default - if longer than 1024 characters, to be precise - which modern public keys are (when RSA; ECC would fit).

This is controlled by a #define in log.c:

#define MSGBUFSIZ 1024

Upping this to 8192 I recompiled and...it still didn't work.

Pasting the log lines I was getting into VS Code, I found that all of them were exactly 500 characters. That sounds like a format string to me, so some more spelunking and there it is - in log.c the do_log function has this line:

openlog(progname, LOG_PID, log_facility);
syslog(pri, "%.500s", fmtbuf);
closelog();

I'm guessing this is to work with legacy syslog limited to about 512 byte messages. We're trying to log to journald so let's just increase that to 8192 and try it out.

debug2: userauth_pubkey: valid user will querying public key rsa-sha2-512 AAAAB3NzaC1yc2EAAAADAQABAAABgQCklLxvJWTabmkVDFpOyVUhKTynHtTGfL3ngRH41sdMoiIE7j5WWcA+zvJ2ZqXzH+b5qIAMwb13H4ZkXmu6HLidlaZ0T9VBkKGjUpeHDhJ4fd1p+uw9WTRisVV+Xmw9mjbpiR8+AGXnoNwIeX5tMukglAFwEIQ8GQtM8EV4tS36RWxZjOSoT5sQlAjYsgEzQ7PHXsH3hgM7dyIK1HXrr2XcwFZPCts2EhOyh4e0hyUsvm9Nix2Y7qlqhFA+nH4buuSNpJZ2LjNb9CmWo5bjiYvrRLnU0qJMuPXp0jJeV+LwGA+W/JMbsep9xoqSA6aEQvlRUQx5jRyaJZf9GKqGBNe+v55vEbaTb+PXBU4o7nVFGCygZj2fLrW475o7vZBXJJjdgW/rZ1Eh4G2/Aukz3kfrMiJynRQOc5sFHL1ogZhHEVDqViZVLAHA2aoMCYtrsBJ9BBr/r73bzs9HbsND1wqi5ejYSiODZwX0DGmWZD21OPAj/SDMPUap6Nt/tG7oqs0= [preauth]

Oh wow - there's a lot there! In fact there's the [preauth] tag at the end, which is completely cut off normally.

Full Patch

diff --git a/log.c b/log.c
index bdc4b6515..09474e23a 100644
--- a/log.c
+++ b/log.c
@@ -325,7 +325,7 @@ log_redirect_stderr_to(const char *logfile)
    log_stderr_fd = fd;
 }

-#define MSGBUFSIZ 1024
+#define MSGBUFSIZ 8192

 void
 set_log_handler(log_handler_fn *handler, void *ctx)
@@ -417,7 +417,7 @@ do_log(LogLevel level, int force, const char *suffix, const char *fmt,
        closelog_r(&sdata);
 #else
        openlog(progname, LOG_PID, log_facility);
-       syslog(pri, "%.500s", fmtbuf);
+       syslog(pri, "%.8192s", fmtbuf);
        closelog();
 #endif
    }
--

Use git apply in the working tree of the OpenSSH source, which I recommend getting and editing with dgit.
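Roughly, the rebuild loop looks like this (a sketch - it assumes deb-src entries are enabled and the diff above has been saved as msgbufsiz.patch):

sudo apt build-dep openssh-server        # pull in the build dependencies
apt source openssh-server                # or: dgit clone openssh
cd openssh-*/
git apply ~/msgbufsiz.patch              # the MSGBUFSIZ / syslog width patch above
dpkg-buildpackage -us -uc -b             # unsigned binary build
sudo dpkg -i ../openssh-server_*.deb
sudo systemctl restart ssh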

Conclusions

OpenSSH does log offered public keys, at DEBUG2 level. But on any standard Ubuntu install, you will not get enough text to see them.

The giveaway that these lines, at least, are being truncated is whether you can see [preauth] after them. This behavior is kind of silly (and should be configurable) - ideally we would at least get a ... or <truncated> marker when this is happening, because with variable-length fields like public keys it is not obvious.
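Which also gives a quick way to check whether your own logs are affected - lines that never reach the trailing [preauth] tag were cut short:

journalctl -u ssh | grep 'querying public key' | grep -v 'preauth'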

Jipi and the Paranoid Chip

This is a short story by Neal Stephenson which used to be hosted online here. It's outlined in more detail in the Wikipedia article here, and I've been wanting to read it again due to the recent furor surrounding Google's LaMDA (Is Google's LaMDA conscious? A philosopher's view).

But alas! The original hosting returns a 404 now. Fortunately the Google-cached version is still available, and I've downloaded that and made it part of my private collection.

So: to ensure this stays up I'm also including the cached copy as a part of this blog. It goes without saying that all rights to this story belong to the original author.

Click here to read Jipi and the Paranoid Chip (or any of the links above).

Install Firefox as a deb on Ubuntu 22.04

Introduction

Ubuntu 22.04 replaces the native Firefox deb package with a snap package. I'm sure this has advantages.

But for me the reality was several problems: startup times were noticeably slower, and the Selenium geckodriver just plain didn't work (issue here), with some debate online but no canonical solution. I also couldn't get JupyterLab to autolaunch (minor, but annoying).

Solution below reproduced from https://balintreczey.hu/blog/firefox-on-ubuntu-22-04-from-deb-not-from-snap/ with adaptations which worked for me.

Solution

You can still install Firefox as a native deb from the Mozilla team PPA. The process which worked for me was:

Step 1

Add the (Ubuntu) Mozilla team PPA to your list of software sources by running the following command in a terminal:

sudo add-apt-repository ppa:mozillateam/ppa

Step 2

Pin the Firefox package

echo '
Package: *
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
' | sudo tee /etc/apt/preferences.d/mozilla-firefox

Step 3

Ensure upgrades will work automatically

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox

Step 4

Install Firefox (this will warn of a downgrade - ignore it)

sudo apt install firefox

Step 5

Remove the Firefox snap

sudo snap remove firefox
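As a sanity check that the pin took, apt policy should now show the PPA build installed at priority 1001:

apt policy firefox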

Conclusion

This worked for me - Firefox starts, my existing Selenium scripts work.

Running npm install (and other weird scripts) safely

Situation

You do this:

$ git clone https://some.site/git/some.repo.git
$ cd some.repo
$ npm install

Pretty common right? What can go wrong?

What about this:

curl -L https://our-new-thing.xyz/install | bash

This looks a little unsafe. Who would recommend it? Well it's still one of the ways to install pip in unfamiliar environments. Or Rust.

Now, installing from these places is safe: why? Because they're trusted - there's a huge reputational defense going on. But the reality for a lot of tools - npm being a big offender, pip too - is that while sudo and user permissions will protect your system from going down, your data - $HOME and the like, basically all the important things on your system - is exposed.

This is key: you are always running as the "superuser" of your data - in fact of your entire operating environment. systemctl --user provides a very useful and complete way to schedule tasks and persistent daemons for your entire user session. There's a lot of power and persistence there.
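To make that concrete - anything running as you can already do all of this without sudo, and a malicious post-install script could just as easily drop its own unit into ~/.config/systemd/user/ for persistence:

systemctl --user list-timers                 # scheduled jobs running as your user
systemctl --user list-units --type=service   # persistent daemons in your session
ls ~/.config/systemd/user/                   # where user-level units live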

Problem

There are two competing demands here: it's pretty easy to build isolated environments when you feel like you're under attack, but it takes time - time you don't really want to commit to the problem. It's inconvenient - which is basically the currency we trade in when it comes to security.

But the convenience<->security exchange rate is not fixed. It has a floor price, but if we can build more convenient tools, then we can protect ourselves against some threats for almost no cost.

Goals

What we want to do is find a safe way to do something like npm install and not be damaged by anything which might get run by it. For our purposes, damage is data destruction or corruption beyond a sensible scope.

We also want this to be lightweight: this should be a momentary "that looks unsafe" sort of intervention, not "let me plan out my secure dev environment".

Enter Bubblewrap

bubblewrap is intended to be an unprivileged container sandboxing tool and has, as a specific goal, the elimination of container-escape CVEs. It's also available right in the Ubuntu repositories, which makes things a lot easier.

This is a fairly low level tool, so let's just cut to the wrapper script usage:

#!/bin/bash
# Wrap an executable in a container and limit writes to the current directory only.
# This system does not attempt to limit access to system files, but it does limit writes.

# See: https://stackoverflow.com/questions/59895/how-to-get-the-source-directory-of-a-bash-script-from-within-the-script-itself
# Note: you can't refactor this out: its at the top of every script so the scripts can find their includes.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
  DIR="$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )"
  SOURCE="$(readlink "$SOURCE")"
  [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
SCRIPT_DIR="$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )"

function log() {
  echo "$*" 1>&2
}

function fatal() {
  echo "$*" 1>&2
  exit 1
}

start_dir="$(pwd)"

bwrap="$(command -v bwrap)"
if [ ! -x "$bwrap" ]; then
    fatal "bubblewrap is not installed. Try running: apt install bubblewrap"
fi

export PS_TAG="$(tput setaf 14)[safe]$(tput sgr0) "

exec "$bwrap" \
    --die-with-parent \
    --tmpfs / \
    --dev /dev \
    --proc /proc \
    --tmpfs /run \
    --mqueue /dev/mqueue \
    --dir /tmp \
    --unshare-all \
    --share-net \
    --ro-bind /bin /bin \
    --ro-bind /etc /etc \
    --ro-bind /run/resolvconf/resolv.conf /run/resolvconf/resolv.conf \
    --ro-bind /lib /lib \
    --ro-bind /lib32 /lib32 \
    --ro-bind /libx32 /libx32 \
    --ro-bind /lib64 /lib64 \
    --ro-bind /opt /opt \
    --ro-bind /sbin /sbin \
    --ro-bind /srv /srv \
    --ro-bind /sys /sys \
    --ro-bind /usr /usr \
    --ro-bind /var /var \
    --ro-bind /home /home \
    --bind "${HOME}/.npm" "${HOME}/.npm" \
    --bind "${HOME}/.cache" "${HOME}/.cache" \
    --bind "${start_dir}" "${start_dir}" \
    -- \
    "$@"

In addition to this script, I also have this in my .bashrc file to get nice shell prompts if I spawn a shell with it:

if [ ! -z "$PS_TAG" ]; then
  export PS1="${PS_TAG}${PS1}"
fi

The basic structure of this invocation is that the resultant container has networking and my full operating environment in it... just not write access to anything beyond the current directory (plus the ~/.npm and ~/.cache caches the script deliberately leaves writable).

This is a handy safety feature for reasons beyond a malicious npm package - I've known more than one colleague to wipe out their home directory writing make clean directives.

Usage

Usage could not be simpler. With the script in my PATH under the name saferun, I can isolate any command or script I'm about to run to only be able to write to the current directory with: saferun ./some-shady-command

I can also launch a protected session with saferun bash which gives me a prompt like:

[safe] $
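And a quick way to convince yourself it's doing something - outside the current directory, writes bounce off the read-only binds (output approximate, home path taken from the examples above):

$ saferun bash -c 'touch ~/important-file'
touch: cannot touch '/home/will/important-file': Read-only file system
$ saferun bash -c 'touch ./scratch-file && echo ok'
ok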

This is about as low overhead as I can imagine for providing basic filesystem protection.

Conclusions

This is not bullet-proof armor. And it certainly won't keep nosy code from poking around the rest of the filesystem - are you 100% confident you never saved an important password to some file? I'm not. But I do normally work with a lot of auxiliary commands and functions around my home directory, and I like them being mostly available when doing risky things. This strikes a good balance - at the very least it limits the scope for some random script you downloaded to cause real damage.

I recommend checking out bubblewrap's full set of features to figure out what it can really do, but for something I knocked up after a few hours of reading, this has added a handy tool to my toolkit.

Reconditioning the Gen 2 Prius HV battery

The Problem

So I've had a Generation 2 Toyota Prius since 2004. It's coming up on 17 years old now in Australia, and recently I finally had what turns out to be the dreaded P0A80 fault code thrown - this is a general hybrid traction battery error.

Since the battery is relatively expensive compared to the value of the car and I don't like spending money anyway, the question becomes what can we do about this?

DIY Reconditioning

Fortunately, the car is old enough that this problem has happened to others before. Over at https://priuschat.com and elsewhere on the web, people have disassembled the Prius traction battery and fixed this problem themselves.

There are basically 2 issues at play: general NiMH degradation, and polarity reversal - outright cell failure.

Cell Failure

In general, the P0A80 code (at least in my experience) appears when a battery module suddenly drops its voltage by over 1V.

This happens due to a phenomenon in NiMH cells called "polarity reversal" - characterized by a discharge curve like this one:

[Figure: NiMH discharge curve showing polarity reversal (Source)]

It is what it sounds like: under extreme discharge conditions, the NiMH cell will go to 0V, and if left in this state for too long (or in a battery pack where current continues to be pulled through the cell) it will then enter polarity reversal - positive becomes negative, negative becomes positive. This is disastrous in a normal application, and devastating in a battery pack, as the cell is now driven in this condition by regular charging, continuing to soak up current and producing heat.

At this point the cell is dead. In a Prius battery module of 6 cells, a reduction in voltage of about 1V means you know you've had a cell drop into reverse polarity, and it's not coming back.

NiMH battery cells primer

It's important to understand NiMH cells to understand why "battery reconditioning" is possible and advisable.


Standard NiMH battery chemistry has a nominal voltage of 1.2V. This has little bearing on the real voltages you see from the cells - a fully charged cell goes up to 1.5V, considered to be the absolute top (you're evolving hydrogen at that point), and a single, standalone cell can be taken all the way to 0V (this is not safe - miss the mark and you wind up in polarity reversal).

In a battery pack of NiMH cells, these lower limits are raised for safety: pack cells all have slightly different capacities, and once you hit 0V on one, if the others don't hit 0V at exactly the same time, the empty ones will get driven into polarity reversal. At roughly 0.8V you start running into a cliff of voltage decay anyway, so that's generally the stopping point.

The graph below is an excellent primer on the voltage behaviors of NiMH at different states of charge. Note that the nominal voltage is measured right before the cell is practically empty, but for most of its duration voltage is very constant - almost linear - until the cell is almost full.

[Figure: NiMH cell voltage vs. state of charge (Source)]
$$\require{mhchem}$$

Degradation Mechanisms

The above explains the behavior of NiMH cells, but not why we can recondition them in a vehicle like the Prius. To understand this, we need to understand the common NiMH battery degradation mechanisms.

NiMH chemistry is based on the following 2 chemical reactions:

Anode: $\ce{H2O + M + e^- <=> OH^- + MH}$

Cathode: $\ce{Ni(OH)2 + OH^- <=> NiO(OH) + H2O + e^-}$

Note the M: this is an intermetallic compound rather than any specific metal, and it is essentially where a lot of the R&D in NiMH batteries goes.

Our target for recovery is the cathodic reaction involving the nickel. In normal operation the Prius runs the NiMH batteries between 20-80% of their rated capacity. This is, in general, the correct answer - deep discharging batteries causes degradation of the electrode materials, which is a permanent killer (over the order of 500-1000 cycles, though).

Crystal Formation

The problem enters with an issue known as "crystal formation" when the batteries are operated in this way over an extended period. Search around and you'll see this referenced a lot, without a lot of explanation, and mostly in the context of nickel-cadmium (NiCd) batteries.

NiMH cells were meant to address, and were a huge improvement on, most of the "memory effect" degradation mechanisms of NiCd batteries; however some of the fundamental mechanisms involved still apply, as they are still based on the same basic active materials on the cathode - nickel hydroxide and nickel oxide hydroxide.

There are many, many mechanisms of permanent and transient change in NiMH batteries, but there are 2 identified ones which can be treated by the deep charge-discharge cycling recommended for reconditioning.

One is that observed by Sato et al.: nickel oxide hydroxide has 2 primary crystal structures when used in batteries - β‐NiOOH and γ‐NiOOH.

β‐NiOOH and γ‐NiOOH are generally recognized as being two in-flux crystal states of the nickel electrodes of any nickel-based battery, with a (simplified) schema looking like the following:

[Figure: simplified β‐NiOOH / γ‐NiOOH phase schema (Source)]

γ‐NiOOH is the bulkier crystal form, and has more resistance to hydrogen ion diffusion - this is important because the overall ability of the battery to be recharged is entirely dependent on the accessibility of the surface to $\ce{H^+}$ ions to convert it back to $\ce{Ni(OH)2}$.

What Sato et al. observed is that during shallow discharging and overcharging of NiCd cells, they saw a voltage depression effect correlated with a rise in γ‐NiOOH peaks on XRD spectra. When they fully cycled the cells, the peaks disappeared - the γ‐NiOOH crystals were, over several cycles, dissolved back to $\ce{Ni(OH)2}$ during the recharge cycle.

[Figure: SEM photographs captured at 10 μm of the positive plates of (a) a good battery, (b) an aged battery, and (c) a restored battery. Note: these were NiCds, but a similar process applies to the nickel electrode of an NiMH cell. (Source)]

Although the Prius works hard to avoid this sort of environment - i.e. the battery is never overcharged - it's worth remembering that the battery is only not overcharged in aggregate. It's a physical system in a physical environment: ions need to move around in solution, and so while in aggregate you can avoid ever overcharging a cell, on a microscopic level, through random chance, an overcharge-like condition can manifest every now and again. That said - it took my car 17 years to get to this point.

There's more detail to this story - a lot more - and pulling a complete picture out of the literature is tricky. For example, the γ‐NiOOH phase isn't considered true γ‐NiOOH but rather γ'‐NiOOH - the product of nickel intercalating into γ‐NiOOH rather than potassium ions (from the potassium - $\ce{K^+}$ - in the electrolyte of the cell). It's also a product of rest time on the battery - the phase grows when the battery is resting in a partly charged state.

The punchline of all of this is the reason Prius battery reconditioning works: the Prius is exceptionally good at managing its NiMH cells, and mostly fights known memory effects while driving. However, it can't fight them all the time, and with time and age you wind up with capacity degradation due to crystal formation in this ~50% state-of-charge (SOC) range. And importantly: it has been experimentally shown that several full cycles are highly effective at restoring capacity by dissolving away the unwanted phase.

Dehydration

There's a secondary degradation mechanism that's worth noting for those who have seemingly unrecoverable cells in a Prius: dehydration.

Looking again at the NiMH battery chemistry -

Anode: $\ce{H2O + M + e^- <=> OH^- + MH}$

Cathode: $\ce{Ni(OH)2 + OH^- <=> NiO(OH) + H2O + e^-}$

you can see that water - $\ce{H2O}$ - is involved but not consumed in the reactions. This is also kind of transparently obvious: you need an electrolyte for ion exchange. What is not obvious, though, is that the situation under battery charging is technically competitive with a straight electrolytic water-splitting reaction:

$\ce{2H2O <=> 2H2 + O2}$

This is a known problem - though it is largely resolved by normal recombinative processes in the battery (having a shared gas headspace allows the H2 and O2 to recombine back into water), can be assisted by adding specific recombination chemistry, and normally just resembles a loss term when charging the cells, simply producing heat.

This is a tradeoff in battery design: a sealed cell doesn't leak gas, which ensures it can eventually recombine. But a sealed cell can overpressure and rupture, at which point the cell is destroyed. The Prius cells are not fully sealed - a one-way overpressure blow-off valve is present which vents at 80-120 psi (550-828 kPa - this is substantial), and the cells themselves depend on being clamped to prevent gas pressure from damaging them during charging.

But the result is the same: cells with failed seals, or cells overheated over a long duration, may have lost water through these electrolysis processes.

There are ways to fix this sort of failure - and the results are spectacular - but this is definitely a "last resort for experimentalists" sort of intervention. Typical NiMH design uses a 20-40% w/v KOH solution in water. LiOH is added to improve low-temperature performance, and NaOH is substituted partially or fully for reduced corrosion in high-temperature applications.

Per this link, 30% w/v KOH and 1.5 g/L LiOH is suggested. For the purposes of cell rehydration, an exact match is probably not important, as a "dried out" cell will still contain all its salt components (though depending on redissolving them may not be the best option). A starting point for other mixes might be this paper, which concludes a 6M KOH solution is optimal.
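As a rough sanity check on how those two recommendations relate (my arithmetic, not from either source), with KOH at about 56.1 g/mol:

$$6\ \mathrm{mol/L} \times 56.1\ \mathrm{g/mol} \approx 337\ \mathrm{g/L} \approx 34\%\ \mathrm{w/v}$$

so the 30% w/v and 6M figures are in the same ballpark.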

The big results reported by this PriusChat member, for anyone considering this, are here - where he notes he used 20% KOH. Of note: getting deionized water, and a suitably un-metal-contaminated salt, is probably key to success here (as is sealing up the cells properly - the trickiest part by all accounts). That said, various metal dopants are used in NiMH cells to contribute all sorts of properties, so this may be a small effect. It is worth worrying about polymeric impurities in salts - you can eliminate these by "roasting" the salt to turn them into carbon ash.

It is noted in the literature that 6-8M KOH is the sweet spot for discharge capacity - however the use of a 1M solution for total cycle life has also been noted here.

One key parameter for anyone considering this is a rule-of-thumb figure for electrolyte volume of 1.5-2.5 mL per Ah of cell capacity. For Prius cells this corresponds to 9.75-16.25 mL per cell, or 58.5-97.5 mL per module (each module has 6 cells).
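Those numbers fall out of the 6.5 Ah rated capacity of the Gen 2 modules (my arithmetic - the capacity is also implied by the figures themselves):

$$6.5\ \mathrm{Ah} \times (1.5\text{-}2.5\ \mathrm{mL/Ah}) = 9.75\text{-}16.25\ \mathrm{mL\ per\ cell}, \qquad \times 6\ \mathrm{cells} = 58.5\text{-}97.5\ \mathrm{mL\ per\ module}$$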

Doing the Work

You'll need to remove the battery from your car and dismantle it to do this. This can be done quickly once you know what you're doing, but follow a YouTube tutorial and take a lot of photos while you do it. Also read the following section and understand what you're dealing with.

Safety

This is the part in the story where we include the big "high voltage can kill" warning, but let me add some explanatory detail: the Prius HV battery is 201.6V nominal - in Australia this is lower than the voltage you use at an electrical outlet every day. But it is a battery - it has no shutoff, and it's DC power (so being shocked will trigger muscle contraction that will prevent you letting go).

Before you do anything to get the battery out of the car, make sure you pull the high voltage service plug, and then take a multimeter and always verify anything you're about to touch is showing 0V between the battery and car chassis.

Now the tempering factor is that, handled properly, this battery is quite safe to work with once disassembled. High voltage is only present between the end terminals when the bus bars are connected - broken down into the individual modules, the highest voltage present is about 9V from an individual NiMH module.

Specific Advice

What does the High Voltage disconnector do?

The big orange plug you pull out of the battery does two things. First, it breaks the circuit between positive and negative inside the battery, which makes the voltage at the battery terminals in the car go to 0V. This makes the battery safe to handle with the cover on.

It does this specifically by sitting between the 2 battery modules in block 10, and breaking the connection there. Because the battery output is wired from the last module positive, to the first module negative, this breaks the circuit.

There's a secondary benefit to this once the battery is open: breaking the battery wire here limits the total possible voltage inside the battery to ~130V (from block 1 to block 10). This is still a lethal voltage though.