Hack-A-Day: Hack-A-Farm

It’s November, and I’ve decided this month that I’m going to do 30 projects in 30 days. It’s an all-month hack-a-thon!

Today’s project is Hack-A-Farm (demo, source). It’s a simple tile-based RPG. You can walk around as a chicken, admire your house, and plant and harvest two types of crops.

My main goal with this project was to work with spritesheets or animation, which I had never done before. Showing off the individual tiles is deliberate. Also, I hope the game responds well to both smaller and larger screens.

I had a good time with this one, and I’m happy with how much I got done in a day. I originally planned to do more fluid walking (it was called Hack-A-Walk), but it was more fun to add crops instead.

I re-used some of the logic from Hack-A-Minigame and Hack-A-Snake. I’ve been finding d3 to be mildly useful, if a little annoying.

tty audit logs

I recently wrote a program that records all tty activity. That means bash sessions, ssh, raw tty access, screen and tmux sessions, the lot. I used script. The latest version of my software can be found on github.
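
The core of it is just script(1) writing each session to a timestamped file. A minimal sketch of the underlying idea:

# record this shell session to a timestamped typescript, flushing as it goes
mkdir -p ~/ttylogs
script -f ~/ttylogs/"$(date +%Y-%m-%d.%H%M%S)".log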

Note that it’s been tested only with bash so far, and there’s no encryption built in.

To just record all shell commands typed, use the standard eternal history tricks (bash).
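
For reference, the usual eternal-history recipe is a few lines in ~/.bashrc, roughly:

# ~/.bashrc -- keep unlimited history and append to it after every command
HISTSIZE=-1
HISTFILESIZE=-1
HISTTIMEFORMAT='%F %T '
shopt -s histappend
PROMPT_COMMAND="history -a; $PROMPT_COMMAND"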

problem-log.txt

One of the more useful things I did was to start logging all my technical problems. Whenever I hit a problem, I write an entry in problem-log.txt. Here’s an example:

2022-08-02
Q: Why isn't the printer working? [ SOLVED ]
A: sudo cupsenable HL-2270DW

// This isn't in the problem log, but the issue is that CUPS will silently disable the printer if it thinks there's an issue. This can happen if you pull a USB cord mid-print.

I write the date, the question, and the answer. Later, when I have a tough or annoying problem, I try to grep problem-log.txt. I’ll add a note if I solve a problem using the log, too.
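
Searching it is nothing fancier than grep with a little context, for example:

grep -i -B1 -A2 printer problem-log.txt   # show the date, Q, and A lines around each match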

This was an interesting project to look at 5 years later. I didn’t see benefits until 1-2 years in. It does not help me think through a problem, and it’s hard to remember to do. But over time it’s built up and become invaluable to me. I hit a tricky problem, I can’t immediately find an answer on the web, and it turns out the answer is in problem-log.txt–written up for exactly my hardware (and sometimes even my folder names). Cool!

Here’s another example:

2018-10-21
Q: How do I connect to the small yellow router?

Not every problem gets solved. Oh well.

Scan Organizer

I scan each and every piece of paper that passes through my hands. All my old to-do lists, bills people send me in the mail, the manual for my microwave, everything. I have a lot of scans.

scan-organizer is a tool I wrote to help me neatly organize and label everything, and make it searchable. It’s designed for going through a huge backlog by hand over the course of weeks, and then dumping in new batches of raw scans whenever they come along afterwards. My specific processing pipeline is discussed below. However, if you have even a little programming skill, I’ve designed this to be modified to suit your own workflow.

Input and output

The input is some raw scans. They could be handwritten notes, printed computer documents, photos, or whatever.

A movie ticket stub

The final product is that for each file like ticket.jpg, we end up with ticket.txt. This has metadata about the file (tags, category, notes) and a transcription of any text in the image, to make it searchable with grep & co.

---
category: movie tickets
filename: seven psychopaths ticket.jpg
tags:
- cleaned
- categorized
- named
- hand_transcribe
- transcribed
- verified
---
Rialto Cinemas Elmwood
SEVEN PSYCHOPAT
R
Sun Oct 28 1
7:15 PM
Adult $10.50
00504-3102812185308

Rialto Cinemas Gift Cards
Perfect For Movie Lovers!

Here are some screenshots of the process. Apologies if they’re a little big! I just took actual screenshots.

At any point I can exit the program, and all progress is saved. I have 6000 photos in the backlog–this isn’t going to be a one-session thing for me! Also, everything has keyboard shortcuts, which I prefer.

Phase 1: Rotating and Cropping

First, I clean up the images: crop them, and rotate them if they’re not facing the right way. I can rotate images with keyboard shortcuts, although there are also buttons at the bottom. Once I’m done, I press a button, and scan-organizer advances to the next un-cleaned photo.

Phase 2: Sorting into folders

Next, I sort things into folders, or “categories”. As I browse folders, I can preview what’s already in that folder.

Phase 3: Renaming Images

Renaming images comes next. For convenience, I can browse existing images in the folder, to help name everything in a standard way.

Phase 4: Tagging images

I tag my images with the type of text. They might be handwritten. Or they might be printed computer documents. You can imagine extending the process with other types of tagging for your use case.

Not yet done: OCR

Printed documents are run through OCR. This isn’t actually done yet, but it will be easy to plug in. I will probably use tesseract.
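
When I do plug it in, the core will presumably be a loop over the images tagged as printed text–something like this sketch, not actual scan-organizer code:

# sketch: OCR each scan into a sibling .ocr.txt, leaving the metadata .txt alone
for img in *.jpg; do
    tesseract "$img" "${img%.jpg}.ocr"    # writes ${img%.jpg}.ocr.txt
done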

Phase 5: Transcribing by hand

I write up all my handwritten documents. I have not found any useful handwriting recognition software. I just do it all by hand.

The point of scan-organizer is to filter based on tags. So only images I’ve marked as needing hand transcription are shown in this phase.

Phase 6: Verification

At the end of the whole process, I verify that each image looks good, and is correctly tagged and transcribed.

One Screenshot Per Minute

One of my archiving and backup contingencies is taking one screenshot per minute. You can also use this to get a good idea of how you spend your day, by turning it into a movie. Although with a tiling window manager like mine, it’s a headache to watch.
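
If you do want the movie version, a day’s worth of (decrypted) screenshots turns into a video with ffmpeg, something like:

# one screenshot per frame: a full day of minutes plays back in about a minute at 24fps
ffmpeg -framerate 24 -pattern_type glob -i '2022-07-10/*.jpg' day.mp4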

I send the screenshots over to another machine for storage, so they’re not cluttering my laptop. It uses up 10-20GB per year.

I’ll go over my exact setup below in case anyone is interested in doing the same:

/bin/screenlog

#!/bin/sh
set -x  # trace each command to stderr (useful when running it by hand)
GPG_KEY=Zachary
TEMPLATE=/var/screenlog/%Y-%m-%d/%Y-%m-%d.%H:%M:%S.jpg
export DISPLAY=:0
export XAUTHORITY=/tmp/XAuthority

IMG=$(\date +$TEMPLATE)
mkdir -p $(dirname "$IMG")
scrot "$IMG"
gpg --encrypt -r "$GPG_KEY" "$IMG"
shred -zu "$IMG"

The script

  • Prints everything to stderr if you run it manually
  • Makes a per-day directory. We store everything in /var/screenlog/2022-07-10/ for the day
  • Takes a screenshot. By default, crontab doesn’t have X Windows (graphics) access. To allow it, the XAuthority file which allows access needs to be somewhere my crontab can reliably access. I picked /tmp/XAuthority. It doesn’t need any unusual permissions, but the default location has some random characters in it.
  • GPG-encrypts the screenshot with a public key and deletes the original. This is extra protection in case my backups somehow get shared, so I don’t literally leak all my habits, passwords, etc. I just use my standard key so I don’t lose it. It’s public-key crypto, so put the public key on your laptop. Put the private key on neither machine, one, or both, depending on which of them you want to be able to read the screenshots. (Decryption example below.)
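
To actually look at a screenshot later, decrypt it with the private key, wherever you keep that:

# decrypt one screenshot back into a viewable jpg
gpg --output screenshot.jpg --decrypt 2022-07-10.12:00:01.jpg.gpg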

/etc/cron.d/screenlog

* * * * * zachary  /bin/screenlog
20  * * * * zachary  rsync --remove-source-files -r /var/screenlog/ backup-machine:/data/screenlog/laptop
30  * * * * zachary  rmdir /var/screenlog/*

That’s

  • Take a screenshot once every minute. Change the first * to */5 for every 5 minutes, and so on.
  • Copy over the gpg-encrypted screenshots hourly, deleting the local copy
  • Also hourly, delete empty per-day folders after the contents are copied, so they don’t clutter things

~/.profile

export XAUTHORITY=/tmp/XAuthority

I mentioned /bin/screenlog needs to know where the XAuthority file is. On Arch Linux, this one line in ~/.profile is all I need.

Encrypted root on debian part 2: unattended boot

I want my debian boot to work as follows:

  1. If it’s in my house, it can boot without my being there. To make that happen, I’ll put the root disk key on a USB stick, which I keep in the computer.
  2. If it’s not in my house, it needs a password to boot. This is the normal boot process.

As in part 1, this guide is debian-specific. To learn more about the Linux boot process, see part 1.

First, we need to prepare the USB stick. Use ‘dmesg’ and/or ‘lsblk’ to make a note of the USB stick’s path (/dev/sdae for me). I chose to write to a filesystem rather than a raw block device.

sudo mkfs.ext4 /dev/sdae # Make a filesystem directly on the device. No partition table.
sudo blkid /dev/sdae # Make a note of the filesystem UUID for later

Next, we’ll generate a key.

sudo mount /dev/sdae /mnt
sudo dd if=/dev/urandom of=/mnt/root-disk.key bs=1000 count=8

Add the key to your root partition’s LUKS header so it can actually decrypt things. You’ll be prompted for your existing password:

sudo cryptsetup luksAddKey ROOT_DISK_DEVICE /mnt/root-disk.key

Make a script at /usr/local/sbin/unlockusbkey.sh

#!/bin/sh
USB_DEVICE=/dev/disk/by-uuid/a4b190b8-39d0-43cd-b3c9-7f13d807da48 # copy from blkid's output UUID=XXXX

if [ -b $USB_DEVICE ]; then
  # if device exists then output the keyfile from the usb key
  mkdir -p /usb
  mount $USB_DEVICE -t ext4 -o ro /usb
  cat /usb/root-disk.key
  umount /usb
  rmdir /usb
  echo "Loaded decryption key from USB key." >&2
else
  echo "FAILED to get USB key file ..." >&2
  /lib/cryptsetup/askpass "Enter passphrase"
fi

Mark the script as executable, and optionally test it.

sudo chmod +x /usr/local/sbin/unlockusbkey.sh
sudo /usr/local/sbin/unlockusbkey.sh | cmp /mnt/root-disk.key

Edit /etc/crypttab to add the script.

root PARTLABEL=root_cipher none luks,keyscript=/usr/local/sbin/unlockusbkey.sh

Finally, re-generate your initramfs. I recommend either having a live USB or keeping a backup initramfs.

sudo update-initramfs -u

[1] This post is loosely based on a chain of tutorials that are themselves based on each other, including this one.
[2] However, those collectively looked both out of date and like they were written without true understanding, and I wanted to clean up the mess. More definitive information was sourced from the actual cryptsetup documentation.

Migrating an existing debian installation to encrypted root

In this article, I migrate an existing debian 10 buster release, from an unencrypted root drive, to an encrypted root. I used a second hard drive because it’s safer–this is NOT an in-place migration guide. We will be encrypting / (root) only, not /boot. My computer uses UEFI. This guide is specific to debian–I happen to know these steps would be different on Arch Linux, for example. They probably work great on a different debian version, and might even work on something debian-based like Ubuntu.

In part 2, I add an optional extra where root decrypts using a special USB stick rather than a keyboard passphrase, for unattended boot.

Apologies if I forget any steps–I wrote this after I did the migration, and not during, so it’s not copy-paste.

Q: Why aren’t we encrypting /boot too?

  1. Encrypting /boot doesn’t add much security. Anyone can guess what’s on my /boot–it’s the same as on every debian install. And encrypting /boot doesn’t prevent tampering–someone could easily replace my encrypted partition with an unencrypted one without my noticing. Something like Secure Boot would resist tampering, but still doesn’t require an encrypted /boot.
  2. I pull a special trick in part 2. Grub2 has new built-in encryption support, which is what would allow encrypting /boot. But as of this writing, grub2 can’t handle the keyfiles or keyscripts that I use.

How boot works

For anyone that doesn’t know, here’s how a typical boot process works:

  1. Your computer has built-in firmware, which on my computer meets a standard called UEFI. On older computers this is called BIOS. The firmware is built-in, closed-source, and often specific to your computer. You can replace it with something open-source if you wish.
  2. The firmware has some settings for what order to boot hard disks, CD drives, and USB sticks in. The firmware tries each option in turn, failing and using the next if needed.
  3. At the beginning of each hard disk is a partition table, a VERY short section describing what partitions are on the disk, and where they are. There are two partition table types: MBR (older) and GPT (newer). UEFI can only read GPT partition tables. The first thing the firmware does for each boot disk is read the partition table, to figure out which partitions are there.
  4. For UEFI, the firmware looks for an “EFI” partition on the boot disk–a special partition which contains bootloader executables. EFI always has a FAT filesystem on it. The firmware runs an EFI executable from the partition–which one is configured in the UEFI settings. In my setup there’s only one executable–the grub2 bootloader–so it runs that without special configuration.
  5. Grub2 starts. The first thing Grub2 does is… read the partition table(s) again. It finds the /boot partition, which contains grub.cfg, and reads grub.cfg. (There is a file in the efi partition right next to the executable, which tells grub where and how to find /boot/grub.cfg. This second file is confusingly also called grub.cfg, so let’s forget it exists, we don’t care about it).
  6. Grub2 invokes the Linux kernel specified in grub.cfg, with the options specified in grub.cfg, including an option to use a particular initramfs. Both the Linux kernel and the initramfs are also in /boot.
  7. Now the kernel starts, using the initramfs. initramfs is a tiny, compressed, read-only filesystem only used in the bootloading process. The initramfs’s only job is to find the real root filesystem and open it. grub2 is pretty smart/big, which means the initramfs may not have had anything left to do on your system before you added encryption. If you’re doing decryption, it happens here. This is also how Linux handles weird filesystems (ZFS, btrfs, squashfs), some network filesystems, or hardware the bootloader doesn’t know about. At the end of the process, we have switched over to the REAL root filesystem. (You can peek inside an initramfs yourself; see the example after this list.)
  8. The kernel starts. We are now big boys who can take care of ourselves, and the bootloading process is over. The kernel runs /sbin/init from the filesystem, which on my system is a symlink to systemd. This does all the usual startup stuff (start any SSH server, print a bunch of messages to the console, show a graphical login, etc).
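
If you want to poke at this machinery yourself, debian’s initramfs-tools can list what got packed into the initramfs–handy for checking that the cryptsetup bits actually made it in:

lsinitramfs /boot/initrd.img-$(uname -r) | grep -i crypt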

Setting up the encrypted disk

First off, I used TWO hard drives–this is not an in-place migration, and that way nothing is broken if you mess up. One disk was in my root, and stayed there the whole time. The other I connected via USB.

Here’s the output of gdisk -l on my original disk:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  # EFI, mounted at /boot/efi
   2         1050624       354803711   168.7 GiB   8300  # ext4, mounted at /
   3       354803712       488396799   63.7 GiB    8200  # swap

Here will be the final output of gdisk -l on the new disk:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          526335   256.0 MiB   EF00  efi # EFI, mounted at /boot/efi
   2         1050624       135268351   64.0 GiB    8200  swap # swap
   3       135268352       937703054   382.6 GiB   8300  root_cipher # ext4-on-LUKS. ext4 mounted at /
   4          526336         1050623   256.0 MiB   8300  boot # ext4, mounted at /boot

  1. Stop anything else running. We’re going to do a “live” copy from the running system, so at least stop doing anything else. Also most of the commands in this guide need root (sudo).
  2. Format the new disk. I used gdisk and you must select a gpt partition table. Basically I just made everything match the original. The one change I need is to add a /boot partition, so grub2 will be able to do the second stage. I also added partition labels with the c gdisk command to all partitions: boot, root_cipher, efi, and swap. I decided I’d like to be able to migrate to a larger disk later without updating a bunch of GUIDs, and filesystem or partition labels are a good method.
  3. Add encryption. I like filesystem-on-LUKS, but most other debian guides use filesystem-in-LVM-on-LUKS. You’ll enter your new disk password twice–once to make an encrypted partition, once to open the partition.
    cryptsetup luksFormat /dev/disk/by-partlabel/root_cipher
    cryptsetup open /dev/disk/by-partlabel/root_cipher root
  4. Make the filesystems. For my setup:
    mkfs.ext4 /dev/mapper/root # the opened LUKS mapping from step 3, not the raw partition
    mkfs.ext4 /dev/disk/by-partlabel/boot
    mkfs.vfat /dev/disk/by-partlabel/efi
  5. Mount all the new filesystems at /mnt. Make sure everything (cryptsetup included) uses EXACTLY the same mount paths (ex /dev/disk/by-partlabel/boot instead of /dev/sda1) as your final system will, because debian will examine your mounts to generate boot config files.
    mount /dev/mapper/root /mnt
    mkdir /mnt/boot && mount /dev/disk/by-partlabel/boot /mnt/boot
    mkdir /mnt/boot/efi && mount /dev/disk/by-partlabel/efi /mnt/boot/efi
    mkdir /mnt/dev && mount --bind /dev /mnt/dev # for chroot
    mkdir /mnt/sys && mount --bind /sys /mnt/sys
    mkdir /mnt/proc && mount --bind /proc /mnt/proc
  6. Copy everything over. I used rsync (exact flags below), but you can also use cp -ax. To learn what all these options are, read the man page. Make sure to keep the trailing slashes in the folder paths for rsync.
    rsync -xavHAX / /mnt/ --no-i-r --info=progress2
    rsync -xavHAX /boot/ /mnt/boot/
    rsync -xavHAX /boot/efi/ /mnt/boot/efi/
  7. Chroot in. You will now be “in” the new system using your existing kernel.
    chroot /mnt
  8. Edit /etc/crypttab. Add:
    root PARTLABEL=root_cipher none luks
  9. Edit /etc/fstab. Mine looks like this:
    /dev/mapper/root / ext4 errors=remount-ro 0 1
    PARTLABEL=boot /boot ext4 defaults,nofail 0 1
    PARTLABEL=efi /boot/efi vfat umask=0077,nofail 0 1
    PARTLABEL=swap none swap sw,nofail 0 0
    tmpfs /tmp tmpfs mode=1777,nosuid,nodev 0 0
  10. Edit /etc/default/grub. On debian you don’t need to edit GRUB_CMDLINE_LINUX.
    GRUB_DISABLE_LINUX_UUID=true
    GRUB_ENABLE_LINUX_PARTLABEL=true
  11. Run grub-install. This will install the bootloader to efi. I forget the options to run it with… sorry! (A typical invocation is sketched after this list.)
  12. Run update-grub (with no options). This will update /boot/grub.cfg so it knows how to find your new drive. You can verify the file by hand if you know how.
  13. Run update-initramfs (with no options). This will update the initramfs so it can decrypt your root drive.
  14. If there were any warnings or errors printed in the last three steps, something is wrong. Figure out what–it won’t boot otherwise. Especially make sure your /etc/fstab and /etc/crypttab exactly match what you’ve already used to mount filesystems.
  15. Exit the chroot. Make sure any changes are synced to disk (you can unmount everything under /mnt in reverse order to make sure if you want)
  16. Shut down your computer. Remove your root disk and boot from the new one. It should work now, asking for your password during boot.
  17. Once you boot successfully and verify everything mounted, you can remove the nofail from /etc/fstab if you want.
  18. (In my case, I also set up the swap partition after successful boot.) Edit: Oh, also don’t use unencrypted swap with encrypted root. That was dumb.
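
(A note on step 11: I don’t remember my exact grub-install options, but on a UEFI system with the EFI partition mounted at /boot/efi, a typical invocation looks something like the line below–double-check it against the debian docs for your setup.)

grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian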

Making a hardware random number generator

If you want a really good source of random numbers, you should get a hardware generator. But there aren’t a lot of great options out there, and most people looking into this get (understandably) paranoid about backdoors. But, there’s a nice trick: if you combine multiple random sources together with xor, it doesn’t matter if one is backdoored, as long as they aren’t all backdoored. There are some exceptions–if the backdoor is actively looking at the output, it can still break your system. But as long as you’re just generating some random pads, instead of feeding a kernel entropy pool, you’re fine with this trick.

So! We just need a bunch of sources of randomness. Here are the options I’ve tried:

  • /dev/urandom (40,000KB/s) – this is nearly a pseudo-random number generator, so it’s not that good. But it’s good to throw in just in case. [Learn about /dev/random vs /dev/urandom if you haven’t. Then unlearn it again.]
  • random-stream (1,000 KB/s), an implementation of the Mersenne Twister pseudo-random-number generator. A worse version of /dev/urandom; just use that instead, unless you don’t trust the Linux kernel for some reason.
  • infnoise (20-23 KB/s), a USB hardware random number generator. Optionally whitens using keccak. Mine is unfortunately broken (probably?) and outputs “USB read error” after a while
  • OneRNG (55 KiB/s), a USB hardware random number generator. I use a custom script which outputs raw data, instead of the provided scripts (although those look totally innocuous, and I do recommend them).
  • /dev/hwrng (123 KB/s), which accesses the hardware random number generator built into the raspberry pi. This device is provided by the raspbian package rng-tools. I learned about this option here.
  • rdrand-gen (5,800 KB/s), a command-line tool to output random numbers from the Intel hardware generator instruction, RDRAND.

At the end, you can use my xor program to combine the streams/files. Make sure to limit the output size if using files–by default it does not stop outputting data until EVERY file ends. The speed of the combined stream is at most the speed of the slowest component (and a little slower still, from xoring everything together). Here’s my final command line:

#!/bin/bash
# Fill up the folder with 1 GB one-time pads. Requires 'rng-tools' and a raspberry pi. Run as sudo to access /dev/hwrng.
while true; do
  sh onerng.sh | dd bs=1K count=1000000 of=tmp-onerng.pad 2>/dev/null
  infnoise --raw | dd bs=1K count=1000000 of=tmp-infnoise.pad 2>/dev/null
  xor tmp-onerng.pad tmp-infnoise.pad /dev/urandom /dev/hwrng | dd bs=1K count=1000000 of=/home/pi/pads/1GB-`\date +%Y-%m-%d-%H%M%S`.pad 2>/dev/null;
done

Great, now you have a good one-time-pad and can join ok-mixnet 🙂

P.S. If you really know what you’re doing and like shooting yourself in the foot, you could try combining and whitening entropy sources with a randomness sponge like keccak instead.

fabric1 AUR package

Fabric is a system administration tool used to run commands on remote machines over SSH. You program it using python. In 2018, Fabric 2 came out. In a lot of ways it’s better, but it’s incompatible, and removes some features I really need. I talked to the Fabric dev (bitprophet) and he seemed on board with keeping a Fabric 1 package around (and maybe renaming the current package to Fabric 2).

Here’s an arch package: https://aur.archlinux.org/packages/fabric1/
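
If you haven’t built an AUR package by hand before, the usual flow is roughly:

git clone https://aur.archlinux.org/fabric1.git
cd fabric1
makepkg -si   # build the package and install it with pacman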

Currently Fabric 1 runs only on Python2. But there was a project to port it to Python 3 (confusingly named fabric3), which is currently attempting to merge into mainline fabric. Once that’s done, I’m hoping to see a ‘fabric1’ and ‘fabric2’ package in all the main distros.

mon(8)

I had previously hand-rolled a status monitor, status.za3k.com, which I am in the process of replacing (new version). I am replacing it with a linux monitoring daemon, mon, which I recommend. It is targeted at working system administrators. ‘mon’ adds many features over my own system, but still has a very bare-bones feeling.

The old service, ‘simple-status’, worked as follows:

  • You visited the URL. Then, the status page would (live) kick off about 30 parallel jobs, to check the status of 30 services.
  • The list of services is one per file in the services.d directory.
  • For each service, it ran a short script, with no command line arguments.
  • All output is displayed in a simple html table, with the name of the service, the status (with color coding), and a short output line.
  • The script could return a success (0) or non-success status code. If it returned success, the status line would be highlighted green; if it failed, red.
  • Scripts can be anything, but I wrote several utility functions to be called from scripts. For example, “ping?” checks whether a host is pingable. (A hypothetical example script is sketched after this list.)
  • Each script was wrapped in timeout. If the script took too long to run, it would be highlighted yellow.
  • The reason all scripts ran every time is to prevent a failure mode where the information could ever be stale without me noticing.
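
To give a flavor, a check script in services.d could be as small as this (a made-up example, not one of my real checks):

#!/bin/sh
# services.d/backups -- exit 0 shows a green row, non-zero shows red; stdout is the short output line
latest=$(ls -1t /data/backups 2>/dev/null | head -n1)
echo "latest backup: ${latest:-none}"
test -n "$latest"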

Mon works as follows:

  • The list of 30 services is defined in /etc/mon/mon.cf. (A sketch of the format follows this list.)
  • For each service, it runs a single-line command (monitor) with arguments. The hostname(s) are added to the command line automatically.
  • All output can be displayed in a simple html table, with the name of the service, the status (with color coding), the time of last and next run, and a short output line. Or, I use ‘monshow’, which is similar but in a text format.
  • Monitors can be anything, but several useful ones are provided in /usr/lib/mon/mon.d (on debian). For example the monitor “ping” checks whether a host is pingable.
  • The monitor can return a success (0) or non-success status code. If it returns success, the status line displays in green (on the web interface), or red for failure.
  • All monitors run periodically. A service can have many states, not just “success” or “failure”–for example “untested” (not yet run) or “dependency failing” (I assume; not yet seen).
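
For a sense of what /etc/mon/mon.cf looks like, here is a rough sketch in the style of the examples shipped with the debian package–check mon(8) for the exact syntax, since I’m writing this from memory:

hostgroup homeservers server1.example.com server2.example.com

watch homeservers
    service ping
        interval 5m
        monitor fping.monitor
        period wd {Sun-Sat}
            alert mail.alert zachary@example.com
            alertevery 1h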

As you can see, the two have a very similar approach to the component scripts, which is natural in the Linux world. Here is a comparison.

  • ‘simple-status’ does exactly one thing. ‘mon’ has many features, but does the minimum possible to provide each.
  • ‘simple-status’ is stateless. ‘mon’ has state.
  • ‘simple-status’ runs on demand. ‘mon’ is a daemon which runs monitors periodically.
  • Input is different. ‘simple-status’ is one script which takes a timeout. ‘mon’ listens for trap signals and talks to clients who want to know its state.
  • both can show an HTML status page that looks about the same, with some CGI parameters accepted.
  • ‘mon’ can also show a text status page.
  • both run monitors which return success based on status code, and provide extra information as standard output. ‘mon’ scripts are expected to be able to run on a list of hosts, rather than just one.
  • ‘mon’ has a config file. ‘simple-status’ has no options.
  • ‘simple-status’ is simple (27 lines). ‘mon’ has longer code (4922 lines)
  • ‘simple-status’ is written in bash, and does not expose this. ‘mon’ is written in perl, all the monitors are written in perl, and it allows inline perl in the config file
  • ‘simple-status’ limits the execution time of monitors. ‘mon’ does not.
  • ‘mon’ allows alerting, which call an arbitrary program to deliver the alert (email is common)
  • ‘mon’ supports traps, which are active alerts
  • ‘mon’ supports watchdog/heartbeat style alerts, where if a trap is not regularly received, it marks a service as failed.
  • ‘mon’ supports dependencies
  • ‘mon’ allows defining a service for several hosts at once

Overall I think that ‘mon’ is much more complex, but only to add features, and it doesn’t have a lot of features I wouldn’t use. It still is pretty simple with a simple interface. I recommend it as both good, and overall better than my system.

My only complaint is that it’s basically impossible to Google, which is why I’m writing a recommendation for it here.