Making a hardware random number generator

If you want a really good source of random numbers, you should get a hardware generator. But there aren't a lot of great options out there, and most people looking into this get (understandably) paranoid about backdoors. There's a nice trick, though: if you combine multiple random sources together with xor, it doesn't matter if one is backdoored, as long as they aren't all backdoored. There are some exceptions–if the backdoor is actively looking at the output, it can still break your system. But as long as you're just generating some random pads, rather than feeding a kernel entropy pool, this trick is fine.

So! We just need a bunch of sources of randomness. Here’s the options I’ve tried:

  • /dev/urandom (40,000KB/s) – this is nearly a pseudo-random number generator, so it’s not that good. But it’s good to throw in just in case. [Learn about /dev/random vs /dev/urandom if you haven’t. Then unlearn it again.]
  • random-stream (1,000 KB/s), an implementation of the Mersenne Twister pseudo-random number generator. Basically a worse version of /dev/urandom; use /dev/urandom instead, unless you don’t trust the Linux kernel for some reason.
  • infnoise (20-23 KB/s), a USB hardware random number generator. Optionally whitens using keccak. Mine is unfortunately (probably?) broken and outputs “USB read error” after a while.
  • OneRNG (55 KiB/s), a USB hardware random number generator. I use a custom script which outputs raw data instead of the provided scripts (although the provided scripts look totally innocuous, do recommend).
  • /dev/hwrng (123 KB/s), which accesses the hardware random number generator built into the Raspberry Pi. This device is provided by the Raspbian package rng-tools. I learned about this option here
  • rdrand-gen (5,800 KB/s), a command-line tool to output random numbers from the Intel hardware generator instruction, RDRAND.

At the end, you can use my xor program to combine the streams/files. Make sure to limit the output size if using files–by default it does not stop outputting data until EVERY file ends. The speed of the combined stream is at most that of the slowest component (plus a little slowdown to xor everything). Here’s my final command line:

#!/bin/bash
# Fill up the folder with 1 GB one-time pads. Requires 'rng-tools' and a raspberry pi. Run as sudo to access /dev/hwrng.
while true; do
  sh onerng.sh | dd bs=1K count=1000000 of=tmp-onerng.pad 2>/dev/null
  infnoise --raw | dd bs=1K count=1000000 of=tmp-infnoise.pad 2>/dev/null
  xor tmp-onerng.pad tmp-infnoise.pad /dev/urandom /dev/hwrng | dd bs=1K count=1000000 of=/home/pi/pads/1GB-`\date +%Y-%m-%d-%H%M%S`.pad 2>/dev/null;
done
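
If you want a quick sanity check on a finished pad, rngtest (shipped with rng-tools) and ent both read from stdin. This is only a smoke test, not proof of quality; substitute a real pad filename:

head -c 10M /home/pi/pads/<some pad>.pad | rngtest -c 1000   # FIPS 140-2 block tests
head -c 10M /home/pi/pads/<some pad>.pad | ent               # entropy / chi-square estimate ('ent' package)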

Great, now you have a good one-time-pad and can join ok-mixnet 🙂

P.S. If you really know what you’re doing and like shooting yourself in the foot, you could try combining and whitening entropy sources with a randomness sponge like keccak instead.


fabric1 AUR package

Fabric is a system administration tool used to run commands on remote machines over SSH. You program it using python. In 2018, Fabric 2 came out. In a lot of ways it’s better, but it’s incompatible, and removes some features I really need. I talked to the Fabric dev (bitprophet) and he seemed on board with keeping a Fabric 1 package around (and maybe renaming the current package to Fabric 2).

Here’s an arch package: https://aur.archlinux.org/packages/fabric1/
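
If you haven’t installed from the AUR before, the standard manual workflow is something like this (any AUR helper works too):

git clone https://aur.archlinux.org/fabric1.git
cd fabric1
makepkg -si   # build the package and install it with pacman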

Currently Fabric 1 runs only on Python 2. But there was a project to port it to Python 3 (confusingly named fabric3), which is currently attempting to merge into mainline Fabric. Once that’s done, I’m hoping to see ‘fabric1’ and ‘fabric2’ packages in all the main distros.


mon(8)

I had previously hand-rolled a status monitor, status.za3k.com, which I am in the process of replacing (new version). I am replacing it with a linux monitoring daemon, mon, which I recommend. It is targeted at working system administrators. ‘mon’ adds many features over my own system, but still has a very bare-bones feeling.

The old service, ‘simple-status’, worked as follows:

  • You visited the URL. Then, the status page would (live) kick off about 30 parallel jobs, to check the status of 30 services.
  • The list of services is one-per-file in the services.d directory.
  • For each service, it ran a short script, with no command line arguments.
  • All output is displayed in a simple html table, with the name of the service, the status (with color coding), and a short output line.
  • The script could return with a success (0) or non-success status code. If it returned success, that status line would display in green for success. If it failed, the line would be highlighted red for failure.
  • Scripts can be anything, but I wrote several utility functions to be called from scripts. For example, “ping?” checks whether a host is pingable.
  • Each script was wrapped in timeout. If the script took too long to run, it would be highlighted yellow.
  • The reason all scripts ran every time was to prevent a failure mode where the information could ever be stale without me noticing.

Mon works as follows (a minimal example config follows this list):

  • The list of 30 services is defined in /etc/mon/mon.cf.
  • For each service, it runs a single-line command (monitor) with arguments. The hostname(s) are added to the command line automatically.
  • All output can be displayed in a simple html table, with the name of the service, the status (with color coding), the time of last and next run, and a short output line. Or, I use ‘monshow‘, which is similar but in a text format.
  • Monitors can be anything, but several useful ones are provided in /usr/lib/mon/mon.d (on debian). For example the monitor “ping” checks whether a host is pingable.
  • The monitor could return with a success (0) or non-success status code. If it returned success, the status line would display in green (on the web interface); if it failed, red.
  • All monitors run periodically. A service can have many states, not just “success” or “failure”–for example “untested” (not yet run) or “dependency failing” (I assume; not yet seen).
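
To give a flavor of the config format, here is a minimal /etc/mon/mon.cf sketch–the hostnames and alert address are placeholders, and the monitor/alert names (fping.monitor, mail.alert) are the ones mon ships with under /usr/lib/mon, so adjust to your install:

# /etc/mon/mon.cf (sketch)
mondir   = /usr/lib/mon/mon.d
alertdir = /usr/lib/mon/alert.d

hostgroup webservers www1.example.com www2.example.com

watch webservers
    service ping
        interval 5m
        monitor fping.monitor
        period wd {Sun-Sat}
            alert mail.alert admin@example.com
            alertevery 1h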

As you can see, the two have a very similar approach to the component scripts, which is natural in the Linux world. Here is a comparison.

  • ‘simple-status’ does exactly one thing. ‘mon’ has many features, but does the minimum possible to provide each.
  • ‘simple-status’ is stateless. ‘mon’ has state.
  • ‘simple-status’ runs on demand. ‘mon’ is a daemon which runs monitors periodically.
  • Input is different. ‘simple-status’ is one script which takes a timeout. ‘mon’ listens for trap signals and talks to clients who want to know its state.
  • both can show an HTML status page that looks about the same, with some CGI parameters accepted.
  • ‘mon’ can also show a text status page.
  • both run monitors which return success based on status code, and provide extra information as standard output. ‘mon’ scripts are expected to be able to run on a list of hosts, rather than just one.
  • ‘mon’ has a config file. ‘simple-status’ has no options.
  • ‘simple-status’ is simple (27 lines). ‘mon’ has longer code (4922 lines)
  • ‘simple-status’ is written in bash, and does not expose this. ‘mon’ is written in perl, all the monitors are written in perl, and it allows inline perl in the config file
  • ‘simple-status’ limits the execution time of monitors. ‘mon’ does not.
  • ‘mon’ allows alerting, which call an arbitrary program to deliver the alert (email is common)
  • ‘mon’ supports traps, which are active alerts
  • ‘mon’ supports watchdog/heartbeat style alerts, where if a trap is not regularly received, it marks a service as failed.
  • ‘mon’ supports dependencies
  • ‘mon’ allows defining a service for several hosts at once

Overall I think that ‘mon’ is much more complex, but only to add features, and it doesn’t have a lot of features I wouldn’t use. It still is pretty simple with a simple interface. I recommend it as both good, and overall better than my system.

My only complaint is that it’s basically impossible to Google, which is why I’m writing a recommendation for it here.


Cron email, and sending email to only one address

So you want to know when your monitoring system fails, or your cron jobs don’t run? Add this to your crontab:

MAILTO=me@me.com
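
For context, MAILTO applies to every job below it, and anything a job prints (stdout or stderr) is mailed there. A hypothetical crontab (the job paths are made up):

MAILTO=me@me.com
# any output from the jobs below is emailed to MAILTO
0 3 * * *   /usr/local/bin/backup.sh
*/5 * * * * /usr/local/bin/check-monitoring.sh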

Now install a mail-sending agent. I like ‘nullmailer‘, which is much smaller than most mail-sending agents. It can’t receive or forward mail, only send it, which is what I like about it. No chance of a spammer using my server for something nasty.

The way I have it set up, I’ll have a server (avalanche) sending all email from one address (nullmailer@avalanche.za3k.com) to one email (admin@za3k.com), and that’s it. Here’s my setup on debian:

sudo apt-get install nullmailer
echo "admin@za3k.com" | sudo tee /etc/nullmailer/adminaddr # all mail is sent to here, except for certain patterns
echo "nullmailer@`hostname`.za3k.com" | sudo tee /etc/nullmailer/allmailfrom # all mail is sent from here
echo "`hostname`.za3k.com" | sudo tee /etc/nullmailer/defaultdomain # superseded by 'allmailfrom' and not used
echo "`hostname`.za3k.com" | sudo tee /etc/nullmailer/helohost # required to connect to my server. otherwise default to 'me'
echo "smtp.za3k.com smtp --port=587 --starttls" | sudo tee /etc/nullmailer/remotes && sudo chmod 600 /etc/nullmailer/remotes

Now just run echo "Subject: sendmail test" | /usr/lib/sendmail -v admin@za3k.com to test and you’re done!


Postmortem: bs-store

Between 2020-03-14 and 2020-12-03 I ran an experimental computer storage setup. I moved or copied 90% of my files into a content-addressable storage system. I’m doing a writeup of why I did it, how I did it, and why I stopped. My hope is that it will be useful to anyone considering using a similar system.

The assumption behind this setup is that 99% of my files never change, so it’s fine to store only one, static copy of them. (Think movies, photos… they’re most of your computer space, and you’re never going to modify them). There are files you do change; I just didn’t put those into this system. If you run a database, this ain’t for you.

Because I have quite a lot of files and 42 drives (7 in my computer, ~35 in a huge media server chassis), there is a problem of how to organize files across drives. To explain why it’s a problem, let’s look at the two default approaches:

  • One Block Device / RAID 0. Use some form of system that unifies block devices, such as RAID 0 or ZFS’s striped vdevs. Writing files is very easy–you see a single 3000GB drive.
    • Many forms of RAID0 use striping. Striping splits each file across all available drives. 42 drives could spin up to read one file (wasteful).
    • You need all the drives mounted to read anything–I have ~40 drives, and I’d like a solution that works if I move and can’t keep my giant media server running. Also, it’s just more reassuring that nothing can fail if you can read each drive individually.
  • JBOD / Just a Bunch of Disks. Label each drive with a category (ex. ‘movies’), and mount them individually.
    • It’s hard to aim for 100% (or even >80%) drive use. Say you have 4x 1000GB drives, and you have 800GB movies, 800GB home video footage, 100GB photographs, and 300GB datasets. How do you arrange that? One drive per dataset is pretty wasteful, as everything fits on 3. But, with three drives, you’ll need to split at least one dataset across drives. Say you put together 800GB home video and 100GB photographs. If you get 200GB more photographs, do you split a 300 GB collection across drives, or move the entire thing to another drive? It’s a lot of manual management and shifting things around for little reason.

Neither approach adds any redundancy, and 42 drives is a bit too many to deal with for most things. Step 1 is to split the 42 drives into 7 ZFS vdevs, each with 2-drive redundancy. That way, if a drive fails or there is a small data corruption (likely), everything will keep working. So now we only have to think about accessing 7 drives (but keep in mind, many physical drives will spin up for each disk access).

The ideal solution:

  • Will not involve a lot of manual management
  • Will fill up each drive in turn to 100%, rather than all drives at an equal %.
  • Will deduplicate identical content (this is a “nice to have”)
  • Will only involve accessing one drive to access one file
  • Will allow me to add and remove drives, ideally across heterogeneous systems.

I decided a content-addressable system was ideal for me. That is, you’d look up each file by its hash. I don’t like having an extra step to access files, so files would be accessed by symlink–no frontend program. Also, it was important to me that I be able to transparently swap out the set of drives backing this. I wanted to make the content-addressable system basically a set of 7 content-addressable systems, and somehow wrap those all into one big content-addressable system with the same interface. Here’s what I settled on:

  • (My drives are mounted as /zpool/bs0, /zpool/bs1, … /zpool/bs6)
  • Files will be stored in each pool in turn by hash. So my movie ‘cat.mpg’ with sha hash ‘8323f58d8b92e6fdf190c56594ed767df90f1b6d’ gets stored in /zpool/bs0/83/23/f58d8b9 [shortened for readability]
  • Initially, we just copy files into the content-addressable system; we don’t delete the original. I’m cautious, and I wanted to make sure everything worked before getting rid of the originals.
  • To access a file, I used read-only unionfs-fuse for this. This checks each of /zpool/bs{0..6}/<hash> in turn. So in the final version, /data/movies/cat.mpg would be a symlink to ‘/bs-union/83/23/f58d8b9’
  • We store some extra metadata on the original file (if not replaced by a symlink) and the stored copy–what collection it’s part of, when it was added, how big it is, what its hash is, etc. I chose to use xattrs. (A sketch of the store step follows this list.)
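
Here is a rough sketch of the “store one file” step, reconstructed from the description above–the pool choice, xattr names, and collection value are stand-ins, not the original scripts:

#!/bin/bash
# sketch: ingest one file into the content-addressable store (illustrative only)
set -e
f="$1"                                     # e.g. /data/movies/cat.mpg
hash=$(sha1sum "$f" | cut -d' ' -f1)
pool=/zpool/bs0                            # in practice: pick a pool with free space
dest="$pool/${hash:0:2}/${hash:2:2}/${hash:4}"
if [ ! -e "$dest" ]; then
    mkdir -p "$(dirname "$dest")"
    cp -a "$f" "$dest"                     # copy only; originals get deleted much later
    setfattr -n user.bs.collection -v movies "$dest"   # example metadata stored as xattrs
fi
# a later pass replaces the original with a symlink through the union mount:
# ln -sf "/bs-union/${hash:0:2}/${hash:2:2}/${hash:4}" "$f"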

The plan here is that it would be really easy to swap out one backing blockstore of 30GB, for two of 20GB–just copy the files to the new drives and add it to the unionfs.

Here’s what went well:

  • No problems during development–only copying files meant it was easy and safe to debug prototypes.
  • Everything was trivial to access (except see note about mounting disks below)
  • It was easy to add things to the system
  • Holding off on deleting the original content until I was 100% out of room meant migrating off of the system later was easy and rather risk-free
  • Running the entire thing on top of ZFS ZRAID2 was the right decision: I had no worries about failing drives or data corruption, despite a lot of hardware issues developing at one point.
  • My assumption that files would never change was correct. I made the unionfs filesystem read-only as a guard against error, but it was never a problem.
  • Migrating off the system went smoothly

Here are the implementation problems I found

  • I wrote the entire thing as bash scripts operating directly on files, which was OK for access and putting stuff in the store, but just awful for trying to get an overview of data or migrating things. I definitely should have used a database. I maybe should have used a programming language.
  • Because there was no database, there wasn’t really any kind of regular check for orphans (content in the blobstore with no symlinks to it), and other similar checks.
  • unionfs-fuse suuucks. Every union filesystem I’ve tried sucks. Its read bandwidth is much lower than the component devices (unclear, probably), it doesn’t cache where to look things up, and it has zero xattrs support (can’t read xattrs from the underlying filesystem).
  • gotcha: ZFS xattrs waste a lot of space by default; you need to change the setting (xattr=sa stores them much more compactly).

But the biggest problem was disk access patterns:

  • I thought I could keep 42 spinning drives cool, or at least a good portion of them. This was WRONG by far, and I am not sure how possible it is in a home setup. To give you an idea how bad this was, I had to write a monitor to shut off my computer if the drives went above 60C, and I was developing fevers in my bedroom (where the server is) from overheating. Not healthy.
  • unionfs has to check each backing drive. So we see 42 drives spin up. I have ideas on fixing this, but it doesn’t deal with the other problems
  • To fix this, you could use double-indirection.
    • Rather than pointing a symlink at a unionfs: /data/cat.mpg -> /bs-union/83/23/f58d8b9 (which accesses /zpool/bs0/83/23/f58d8b9)
    • Point a symlink at another symlink that points directly to the data: /data/cat.mpg -> /bs-indirect/83/23/f58d8b9 -> /zpool/bs0/83/23/f58d8b9
  • The idea is that backing stores are kinda “whatever, just shove it somewhere”. But, actually it would be good to have a collection in one place–not only to make it easy to copy, but to spin up only one drive when you go through everything in a collection. It might even be a good idea to have a separate drive for more frequently-accessed content. This wasn’t a huge deal for me since migrating existing content meant it coincidentally ended up pretty localized.
  • Because I couldn’t spin up all 42 drives, I had to keep a lot of the array unmounted, and mount the drives I needed into the unionfs manually.

So although I could have tried to fix things with double-indirection, I decided there were some other disadvantages to symlinks: estimating sizes, making offsite backups foolproof. I decided to migrate off the system entirely. The migration went well, although it required running all the drives at once, so some hardware errors popped up. I’m currently on a semi-JBOD system (still on top of the same 7 ZRAID2 devices).

Hopefully this is useful to someone planning a similar system someday. If you learned something useful, or there are existing systems I should have used, feel free to leave a comment.


Printing on the Brother HL-2270DW printer using a Raspberry Pi

Although the directions below were written for a Raspberry Pi, they should also work on any other system. The Brother-provided driver does not run on ARM processors[1] like the Raspberry Pi, so we will instead use the open-source brlaser[2].

Edit: This setup should also work on the following Brother monochrome printers, just substitute the name where needed:

  • brlaser 4, just install from package manager: DCP-1510, DCP-1600 series, DCP-7030, DCP-7040, DCP-7055, DCP-7055W, DCP-7060D, DCP-7065DN, DCP-7080, DCP-L2500D series, HL-1110 series, HL-1200 series, HL-L2300D series, HL-L2320D series, HL-L2340D series, HL-L2360D series, HL-L2375DW series, HL-L2390DW, MFC-1910W, MFC-7240, MFC-7360N, MFC-7365DN, MFC-7420, MFC-7460DN, MFC-7840W, MFC-L2710DW series
  • brlaser 6, follow full steps below: DCP-L2520D series, DCP-L2520DW series, DCP-L2540DW series (unclear, may only need 4), HL-2030 series, HL-2140 series, HL-2220 series, HL-2270DW series, HL-5030 series

Also, all these steps are command-line based, and you can do the whole setup headless (no monitor or keyboard) using SSH.

  1. Get the latest raspbian image up and running on your pi, with working networking. At the time of writing the latest version is 10 (buster)–once 11+ is released this will be much easier. I have written a convenience tool[3] for this step, but you can also find any number of standard guides. Log into your raspberry pi to run the following steps
  2. (Option 1, not recommended) Upgrade to Debian 11 bullseye (current testing release). This is because we need brlaser 6, not brlaser 4 from debian 10 buster (current stable release). Then, install the print system and driver[2]:
    sudo apt-get update && sudo apt-get install lpr cups ghostscript printer-driver-brlaser
  3. (Option 2, recommended) Install ‘brlaser’ from source.
    1. Install print system and build tools
      sudo apt-get update && sudo apt-get install lpr cups ghostscript git cmake libcups2-dev libcupsimage2-dev
    2. Download the source
      wget https://github.com/pdewacht/brlaser/archive/v6.tar.gz && tar xf v6.tar.gz
    3. Build the source and install
      cd brlaser-6 && cmake . && make && sudo make install
  4. Plug in the printer, verify that it shows up using sudo lsusb or sudo dmesg. (author’s shameful note: if you’re not looking, I find it surprisingly easy to plug USB B into the ethernet jack)
  5. Install the printer.
    1. Run sudo lpinfo -v | grep usb to get the device name of your printer. It will be something like usb://Brother/HL-2270DW%20series?serial=D4N207646
      If you’re following this in the hopes that it will work on another printer, run sudo lpinfo -m | grep HL-2270DW to get the PPD file for your printer.
    2. Install and enable the printer
      sudo lpadmin -p HL-2270DW -E -v usb://Brother/HL-2270DW%20series?serial=D4N207646 -m drv:///brlaser.drv/br2270dw.ppd
      Note, -p HL-2270DW is just the name I’m using for the printer, feel free to name the printer whatever you like.
    3. Enable the printer (did not work for me)
      sudo lpadmin -p HL-2270DW -E
    4. (Optional) Set the printer as the default destination
      sudo lpoptions -d HL-2270DW
    5. (Optional) Set any default options you want for the printer
      sudo lpoptions -p HL-2270DW -o media=letter
  6. Test the printer (I’m in the USA so we use ‘letter’ size paper, you can substitute whichever paper you have such as ‘a4’).
    1. echo "Hello World" | PRINTER=HL-2270DW lp -o media=letter (Make sure anything prints)
    2. cat <test document> | PRINTER=HL-2270DW lp -o media=letter (Print an actual test page to test alignment, etc)
    3. cat <test document> | PRINTER=HL-2270DW lp -o media=letter -o sides=two-sided-short-edge (Make sure duplex works if you plan to use that)
  7. (Optional) Set up an scp print server, so any file you copy to a /printme directory gets printed. For the 2270DW, I also have a /printme.duplex directory.
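    One way to implement that directory, as a sketch (assumes the inotify-tools package is installed and the printer name used above):
      #!/bin/bash
      # print anything copied into /printme, then remove it (sketch)
      inotifywait -m -e close_write --format '%w%f' /printme | while read -r f; do
        lp -d HL-2270DW -o media=letter "$f" && rm -- "$f"
      done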

Links
[1] brother driver does not work on arm (also verified myself)
[2] brlaser, the open-source Brother printer driver
[3] rpi-setup, my convenience command-line script for headless raspberry pi setup
[4] stack overflow answer on how to install one package from testing in debian


Streaming Linux->Twitch using ffmpeg and ALSA

I stopped using OBS a while ago for a couple reasons–the main one was that it didn’t support my video capture card, but I also had issues with it crashing or lagging behind with no clear indication of what it was doing. I ended up switching to ffmpeg for live streaming, because it’s very easy to tell when ffmpeg is lagging behind. OBS uses ffmpeg internally for video. I don’t especially recommend this setup, but I thought I’d document it in case someone can’t use a nice GUI setup like OBS or similar.

I prefer fewer layers, so I’m still on ALSA. My setup is:

  • I have one computer, running linux. It runs what I’m streaming (typically minecraft), and captures everything, encodes everything, and sends it to twitch
  • Video is captured using libxcb (captures X11 desktop)
  • Audio is captured using ALSA. My mic is captured directly, while the rest of my desktop audio is sent to a loopback device which acts as a second mic.
  • Everything is encoded together into one video stream. The video is a Flash video container with x264 video and AAC audio, because that’s what twitch wants. Hopefully we’ll all switch to AV1 soon.
  • That stream is sent to twitch by ffmpeg
  • There is no way to pause the stream, do scenes, adjust audio, see audio levels, etc while the stream is going. I just have to adjust program volumes independently.

Here’s my .asoundrc:

# sudo modprobe snd-aloop is required to set up hw:Loopback
pcm.!default {
  type plug
  slave.pcm {
    type dmix
    ipc_key 99
    slave {
      pcm "hw:Loopback,0"
      rate 48000
      channels 2
      period_size 1024
    }
  }
}
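
The comment above matters: the Loopback device only exists once the module is loaded. To load it now and at every boot (assuming a distro that reads /etc/modules-load.d, which current Debian and Arch do):

sudo modprobe snd-aloop                                        # load it now
echo snd-aloop | sudo tee /etc/modules-load.d/snd-aloop.conf   # load it at boot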

My ffmpeg build line:

./configure --enable-libfdk-aac --enable-nonfree --enable-libxcb --enable-indev=alsa --enable-outdev=alsa --prefix=/usr/local --extra-version='za3k' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-libaom --enable-libass --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopus --enable-libpulse --enable-libvorbis --enable-libvpx --enable-libx265 --enable-opengl --enable-libdrm --enable-libx264 --enable-shared --enable-librtmp && make -j 4 && sudo make install

And most importantly, my ffmpeg command:

ffmpeg \
  -video_size 1280x720 -framerate 30 -f x11grab -s 1280x720 -r 30 -i :0.0 \
  -f alsa -ac 1 -ar 48000 -i hw:1,0 \
  -f alsa -ac 2 -ar 48000 -i hw:Loopback,1 \
  -filter_complex '[1:a][1:a]amerge=inputs=2[stereo1] ; [2:a][stereo1]amerge=inputs=2[a]' -ac 2 \
  -map '[a]' -map 0:v \
  -f flv -ac 2 -ar 48000 \
  -vcodec libx264 -g 60 -keyint_min 30 -b:v 3000k -minrate 3000k -maxrate 3000k -pix_fmt yuv420p -s 1280x720 -preset ultrafast -tune film \
  -c:a libfdk_aac -b:a 160k -strict normal -bufsize 3000k \
  rtmp://live-sjc.twitch.tv/app/${TWITCH_KEY}

Let’s break that monster down a bit. ffmpeg structures its command line into input streams, transformations, and output streams.

ffmpeg input streams

-video_size 1280x720 -framerate 30 -f x11grab -s 1280x720 -r 30 -i :0.0:
Grab 720p video (-video_size 1280x720) at 30fps (-framerate 30) using x11grab/libxcb (-f x11grab), and we also want to output that video at the same resolution and framerate (-s 1280x720 -r 30). We grab :0.0 (-i :0.0)–that’s X language for first X server (you only have one, probably), first display/monitor. And, since we don’t say otherwise, we grab the whole thing, so the monitor better be 720p.

-f alsa -ac 1 -ar 48000 -i hw:1,0:
Using alsa (-f alsa), capture mono (-ac 1, 1 audio channel) at the standard PC sample rate (-ar 48000, audio rate=48000 Hz). The ALSA device is hw:1,0 (-i hw:1,0), my microphone, which happens to be mono.

-f alsa -ac 2 -ar 48000 -i hw:Loopback,1:
Using alsa (-f alsa), capture stereo (-ac 2, 2 audio channels) at the standard PC sample rate (-ar 48000, audio rate=48000 Hz). The ALSA device is hw:Loopback,1. In the ALSA config file .asoundrc given above, you can see we send all computer audio to hw:Loopback,0. Something sent to play on hw:Loopback,0 is made available to record as hw:Loopback,1, that’s just the convention for how snd-aloop devices work.

ffmpeg transforms

-filter_complex '[1:a][1:a]amerge=inputs=2[stereo1] ; [2:a][stereo1]amerge=inputs=2[a]' -ac 2:
All right, this one was a bit of a doozy to figure out. In ffmpeg’s special filter notation, 1:a means “stream #1, audio” (where stream #0 is the first one).

First we take the mic input [1:a][1:a] and convert it from a mono channel to stereo, by just duplicating the sound to both ears (amerge=inputs=2[stereo1]). Then, we combine the stereo mic and the stereo computer audio ([2:a][stereo1]) into a single stereo stream using the default mixer (amerge=inputs=2[a]).

-map '[a]' -map 0:v :
By default, ffmpeg just keeps all the streams around, so we now have one mono, and two stereo streams, and it won’t default to picking the last one. So we manually tell it to keep that last audio stream we made (-map '[a]'), and the video stream from the first input (-map 0:v, the only video stream).

ffmpeg output streams

-f flv -ac 2 -ar 48000:
We want the output format to be Flash video (-f flv) with stereo audio (-ac 2) at 48000Hz (-ar 48000). Why do we want that? Because we’re streaming to Twitch and that’s what Twitch says they want–that’s basically the reason for everything in the output format.

-vcodec libx264 -g 60 -keyint_min 30 -b:v 3000k -minrate 3000k -maxrate 3000k -pix_fmt yuv420p -s 1280x720 -preset ultrafast -tune film:
Ah, the magic. Now we do x264 encoding (-vcodec libx264), a modern wonder. A lot of the options here are just what Twitch requests. They want keyframes every 2 seconds (-g 60 -keyint_min 30, where 60=30*2=FPS*2, 30=FPS). They want a constant bitrate (-b:v 3000k -minrate 3000k -maxrate 3000k) between 1K-6K/s at the time of writing–I picked 3K because it’s appropriate for 720p video, but you could go with 6K for 1080p. Here are Twitch’s recommendations. The pixel format is standard (-pix_fmt yuv420p) and we still don’t want to change the resolution (-s 1280x720). Finally the options you might want to change. You want to set the preset as high as it will go with your computer keeping up–mine sucks (-preset ultrafast, where the options go ultrafast,superfast,veryfast,faster,fast,medium, with a 2-10X jump in CPU power needed for each step). And I’m broadcasting minecraft, which in terms of encoders is close to film (-tune film)–lots of panning, relatively complicated stuff on screen. If you want to re-encode cartoons you want something else.

-c:a libfdk_aac -b:a 160k:
We use AAC (-c:a libfdk_aac). Note that libfdk is many times faster than the default implementation, but it’s not available by default in debian’s ffmpeg for (dumb) license reasons. We use 160k bitrate audio (-b:a 160k) since I’ve found that’s good, and 96K-160K is Twitch’s allowable range.

-strict normal: Just an ffmpeg option. Not interesting.
-bufsize 3000k: One second of buffer with CBR video

rtmp://live-sjc.twitch.tv/app/${TWITCH_KEY}:
The twitch streaming URL. Replace ${TWITCH_KEY} with your actual key, of course.

Sources:

  • jrayhawk on IRC (alsa)
  • ffmpeg wiki and docs (pretty good)
  • ALSA docs (not that good)
  • Twitch documentation, which is pretty good once you can find it
  • mark hills on how to set up snd-aloop

Capturing video on Debian Linux with the Blackmagic Intensity Pro 4K card

Most of this should apply for any linux system, other than the driver install step. Also, I believe most of it applies to DeckLink and Intensity cards as well.

My main source is https://gist.github.com/afriza/879fed4ede539a5a6501e0f046f71463. I’ve re-written for clarity and Debian.

  1. Set up hardware. On the Intensity Pro 4K, I see a black screen on my TV when things are set up correctly (a clear rectangle, not just nothing).
  2. From the Blackmagic site, download “Desktop Video SDK” version 10.11.4 (not the latest). Get the matching “Desktop Video” software for Linux.
  3. Install the drivers. In my case, these were in desktopvideo_11.3a7_amd64.deb.
    After driver install, lsmod | grep blackmagic should show a driver loaded on debian.
    You can check that the PCI card is recognized with lspci | grep Blackmagic (I think this requires the driver but didn’t check)
  4. Update the firmware (optional). sudo BlackmagicFirmwareUpdater status will check for updates available. There were none for me.
  5. Extract the SDK. Move it somewhere easier to type. The relevant folder is Blackmagic DeckLink SDK 10.11.4/Linux/includes. Let’s assume you move that to ~/BM_SDK
  6. Build ffmpeg from source. I’m here copying from my source heavily.

    1. Get the latest ffmpeg source and extract it. Don’t match the debian version–it’s too old to work.
      wget https://ffmpeg.org/releases/ffmpeg-4.2.tar.bz2 && tar xf ffmpeg-*.tar.bz2 && cd ffmpeg-*
    2. Install build deps.
      sudo apt-get install nasm yasm libx264-dev libx265-dev libnuma-dev libvpx-dev libfdk-aac-dev libmp3lame-dev libopus-dev libvorbis-dev libass-dev
    3. Build.
      PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --pkg-config-flags="--static" --extra-cflags="-I$HOME/ffmpeg_build/include -I$HOME/ffmpeg_sources/BMD_SDK/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" --extra-libs="-lpthread -lm" --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-decklink

      make -j $(nproc)

      sudo cp ffmpeg ffprobe /usr/local/bin/

  7. Use ffmpeg.
    ffmpeg -f decklink -list_devices 1 -i dummy should show your device now. Note the name for below.

    ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro 4K' shows supported formats. Here’s what I see for the Intensity Pro 4K:

[decklink @ 0x561bd9881800] Supported formats for 'Intensity Pro 4K':
        format_code     description
        ntsc            720x486 at 30000/1001 fps (interlaced, lower field first)
        pal             720x576 at 25000/1000 fps (interlaced, upper field first)
        23ps            1920x1080 at 24000/1001 fps
        24ps            1920x1080 at 24000/1000 fps
        Hp25            1920x1080 at 25000/1000 fps
        Hp29            1920x1080 at 30000/1001 fps
        Hp30            1920x1080 at 30000/1000 fps
        Hp50            1920x1080 at 50000/1000 fps
        Hp59            1920x1080 at 60000/1001 fps
        Hp60            1920x1080 at 60000/1000 fps
        Hi50            1920x1080 at 25000/1000 fps (interlaced, upper field first)
        Hi59            1920x1080 at 30000/1001 fps (interlaced, upper field first)
        Hi60            1920x1080 at 30000/1000 fps (interlaced, upper field first)
        hp50            1280x720 at 50000/1000 fps
        hp59            1280x720 at 60000/1001 fps
        hp60            1280x720 at 60000/1000 fps
        4k23            3840x2160 at 24000/1001 fps
        4k24            3840x2160 at 24000/1000 fps
        4k25            3840x2160 at 25000/1000 fps
        4k29            3840x2160 at 30000/1001 fps
        4k30            3840x2160 at 30000/1000 fps

Capture some video: ffmpeg -raw_format argb -format_code Hp60 -f decklink -i 'Intensity Pro 4K' test.avi

The format (raw_format and format_code) will vary based on your input settings. In particular, note that -raw_format uyvy422 is the default, which I found did not match my computer output. I was able to switch either the command line or the computer output settings to fix it.
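
Writing raw video to an .avi fills a disk fast, so in practice you may want to encode while capturing. A variant of the same command (format codes as above; the x264/AAC settings are just reasonable defaults, not a Blackmagic recommendation):

ffmpeg -raw_format argb -format_code Hp60 -f decklink -i 'Intensity Pro 4K' \
  -c:v libx264 -preset veryfast -crf 18 -c:a aac -b:a 160k test.mkv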

Troubleshooting

  • I’m not running any capture, but passthrough isn’t working. That’s how the Intensity Pro 4K works. Passthrough is not always-on. I’d recommend a splitter if you want this for streaming.
  • ffmpeg won’t compile. Your DeckLink SDK may be too new. Get 10.11.4 instead.
  • I can see a list of formats, but I can’t select one using -format_code. ffmpeg doesn’t recognize the option. Your ffmpeg is too old. Download a newer source.
  • When I look at the video, I see colored bars. The HDMI output turns on during recording. The Intensity Pro 4K outputs this when the resolution, hertz, or color format does not match the input. This also happens if your SDK and driver versions are mismatched.



github.com archive – Background Research

My current project is to archive git repos, starting with all of github.com. As you might imagine, size is an issue, so in this post I do some investigation on how to better compress things. It’s currently Oct, 2017, for when you read this years later and your eyes bug out at how tiny the numbers are.

Let’s look at the list of repositories and see what we can figure out.

  • Github has a very limited naming scheme. These are the valid characters for usernames and repositories: [-._0-9a-zA-Z].
  • Github has 68.8 million repositories
  • Their built-in fork detection is not very aggressive–they say they have 50% forks, and I’m guessing that’s too low. I’m unsure what github considers a fork (whether you have to click the “fork” button, or whether they look at git history). To be a little more aggressive, I’m looking at collections of repos with the same name instead. There are 21.3 million different repository names. 16.7 million repositories do not share a name with any other repository. Subtracting, that means there are 4.6 million repository names representing the other 52.1 million possibly-duplicated repositories.
  • Here are the most common repository names. It turns out Github is case-insensitive but I didn’t figure this out until later.
    • hello-world (548039)
    • test (421772)
    • datasciencecoursera (191498)
    • datasharing (185779)
    • dotfiles (120020)
    • ProgrammingAssignment2 (112149)
    • Test (110278)
    • Spoon-Knife (107525)
    • blog (80794)
    • bootstrap (74383)
    • Hello-World (68179)
    • learngit (59247)
    • – (59136)
  • Here’s the breakdown of how many copies of things there are, assuming things named the same are copies:
    • 1 copy (16663356, 24%)
    • 2 copies (4506958, 6.5%)
    • 3 copies (2351856, 3.4%)
    • 4-9 copies (5794539, 8.4%)
    • 10-99 copies (13389713, 19%)
    • 100-999 copies (13342937, 19%)
    • 1000-9999 copies (7922014, 12%)
    • 10000-99999 copies (3084797, 4.5%)
    • 100000+ copies (1797060, 2.6%)

That’s about everything I can get from the repo names. Next, I downloaded all repos named dotfiles. My goal is to pick a compression strategy for when I store repos. My strategy will include putting repos with the same name on the same disk, to improve deduplication. I figured ‘dotfiles’ was a usefully large dataset, and it would include interesting overlap–some combination of forks, duplicated files, similar, and dissimilar files. It’s not perfect–for example, it probably has a lot of small files and fewer authors than usual. So I may not get good estimates, but hopefully I’ll get decent compression approaches.

Here’s some information about dotfiles:

  • 102217 repos. The reason this doesn’t match my repo list number is that some repos have been deleted or made private.
  • 243G disk size after cloning (233G apparent). That’s an average of 2.3M per repo–pretty small.
  • Of these, 1873 are empty repos taking up 60K each (110M total). That’s only 16K apparent size–lots of small or empty files. An empty repo is a good estimate for per-repo overhead. 60K overhead for every repo would be 6GB total.
  • There are 161870 ‘refs’ objects, or about 1.6 per repo. A ‘ref’ is a branch, basically. Unless a repo is empty, it must have at least one ref (I don’t know if github enforces that you must have a ref called ‘master’).
  • Git objects are how git stores everything.
    • ‘Blob’ objects represent file content (just content). Rarely, blobs can store content other than files, like GPG signatures.
    • ‘Tree’ objects represent directory listings. These are where filenames and permissions are stored.
    • ‘Commit’ and ‘Tag’ objects are for git commits and tags. Makes sense. I think only annotated tags get stored in the object database.
  • Internally, git both stores diffs (for example, a 1 line file change is represented as close to 1 line of actual disk storage), and compresses the files and diffs. Below, I list a “virtual” size, representing the size of the uncompressed object, and a “disk” size representing the actual size as used by git. For more information on git internals, I recommend the excellent “Pro Git” (available for free online and as a book), and then if you want compression and bit-packing details the fine internals documentation has some information about objects, deltas, and packfile formats. (A sketch of how to collect these per-type numbers follows this list.)
  • Git object counts and sizes:
    • Blob
      • 41031250 blobs (401 per repo)
      • taking up 721202919141 virtual bytes = 721GB
      • 239285368549 bytes on disk = 239GB (3.0:1 compression)
      • Average size per object: 17576 bytes virtual, 5831 bytes on disk
      • Average size per repo: 7056KB virtual, 2341KB on disk
    • Tree
      • 28467378 trees (278 per repo)
      • taking up 16837190691 virtual bytes = 17GB
      • 3335346365 bytes on disk = 3GB (5.0:1 compression)
      • Average size per object: 591 bytes virtual, 117 bytes on disk
      • Average size per repo: 160KB virtual, 33KB on disk
    • Commit
      • 14035853 commits (137 per repo)
      • taking up 4135686748 virtual bytes = 4GB
      • 2846759517 bytes on disk = 3GB (1.5:1 compression)
      • Average size per object: 295 bytes virtual, 203 bytes on disk
      • Average size per repo: 40KB virtual, 28KB on disk
    • Tag
      • 5428 tags (0.05 per repo)
      • taking up 1232092 virtual bytes = ~0GB
      • 1004941 bytes on disk = ~0GB (1.2:1 compression)
      • Average size: 227 bytes virtual, 185 bytes on disk
      • Average size per repo: 12 bytes virtual, 10 bytes on disk
    • Ref: ~2 refs, above
    • Combined
      • 83539909 objects (817 per repo)
      • taking up 742177028672 virtual bytes = 742GB
      • 245468479372 bytes on disk = 245GB
      • Average size: 8884 bytes virtual, 2938 bytes on disk
    • Usage
      • Blob, 49% of objects, 97% of virtual space, 97% of disk space
      • Tree, 34% of objects, 2.2% of virtual space, 1.3% of disk space
      • Commit, 17% of objects, 0.5% of virtual space, 1.2% of disk space
      • Tags: 0% ish
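
For reference, here is one way to collect per-type object counts and sizes like the ones above for a single repo (the totals in this post are aggregated over all the dotfiles repos):

git cat-file --batch-all-objects \
    --batch-check='%(objecttype) %(objectsize) %(objectsize:disk)' |
  awk '{n[$1]++; v[$1]+=$2; d[$1]+=$3}
       END {for (t in n) printf "%-7s %10d objects %15d virtual bytes %14d disk bytes\n", t, n[t], v[t], d[t]}'
git count-objects -vH   # quick totals and pack size for the same repo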

Even though these numbers may not be representative, let’s use them to get some ballpark figures. If each repo has about 817 objects (the per-repo average above), and there are 68.6 million repos on github, we would expect there to be 56 billion objects on github. At an average of 8,884 bytes per object, that’s 498TB of git objects (164TB on disk). At 40 bytes per hash, it would also be 2.2TB of hashes alone. Also interesting is that files represent 97% of storage–git is doing a good job of being low-overhead. If we pushed things, we could probably fit non-files on a single disk.
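
That extrapolation is simple enough to sanity-check with shell arithmetic (pure back-of-the-envelope, using the per-repo averages above):

repos=68600000; objects_per_repo=817; virtual=8884; disk=2938; hash_bytes=40
objects=$((repos * objects_per_repo))                      # ~56 billion objects
echo "$((objects * virtual    / 10**12)) TB virtual"       # ~498 TB
echo "$((objects * disk       / 10**12)) TB on disk"       # ~164 TB
echo "$((objects * hash_bytes / 10**12)) TB of hashes"     # ~2.2 TB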

Dotfiles are small, so this might be a small estimate. For better data, we’d want to randomly sample repos. Unfortunately, to figure out how deduplication works, we’d want to pull in some more repos. It turns out picking 1000 random repo names gets you 5% of github–so not really feasible.

164TB, huh? Let’s see if there’s some object duplication. Just the unique objects now:

  • Blob
    • 10930075 blobs (106 per repo, 3.8:1 deduplication)
    • taking up 359101708549 virtual bytes = 359GB (2.0:1 dedup)
    • 121217926520 bytes on disk = 121GB (3.0:1 compression, 2.0:1 dedup)
    • Average size per object: 32854 bytes virtual, 11090 bytes on disk
    • Average size per repo: 3513KB virtual, 1186KB on disk
  • Tree
    • 10286833 trees (101 per repo, 2.8:1 deduplication)
    • taking up 6888606565 virtual bytes = 7GB (2.4:1 dedup)
    • 1147147637 bytes on disk = 1GB (6.0:1 compression, 2.9:1 dedup)
    • Average size per object: 670 bytes virtual, 112 bytes on disk
    • Average size per repo: 67KB virtual, 11KB on disk
  • Commit
    • 4605485 commits (45 per repo, 3.0:1 deduplication)
    • taking up 1298375305 virtual bytes = 1.3GB (3.2:1 dedup)
    • 875615668 bytes on disk = 0.9GB (3.3:1 dedup)
    • Average size per object: 282 bytes virtual, 190 bytes on disk
    • Average size per repo: 13KB virtual, 9KB on disk
  • Tag
    • 2296 tags (0.02 per repo, 2.7:1 dedup)
    • taking up 582993 virtual bytes = ~0GB (2.1:1 dedup)
    • 482201 bytes on disk = ~0GB (1.2:1 compression, 2.1:1 dedup)
    • Average size per object: 254 virtual, 210 bytes on disk
    • Average size per repo: 6 bytes virtual, 5 bytes on disk
  • Combined
    • 25824689 objects (252 per repo, 3.2:1 dedup)
    • taking up 367289273412 virtual bytes = 367GB (2.0:1 dedup)
    • 123241172026 bytes of disk = 123GB (3.0:1 compression, 2.0:1 dedup)
    • Average size per object: 14222 bytes virtual, 4772 bytes on disk
    • Average size per repo: 3593KB, 1206KB on disk
  • Usage
    • Blob, 42% of objects, 97.8% virtual space, 98.4% disk space
    • Tree, 40% of objects, 1.9% virtual space, 1.0% disk space
    • Commit, 18% of objects, 0.4% virtual space, 0.3% disk space
    • Tags: 0% ish

All right, that’s 2:1 disk savings over the existing compression from git. Not bad. In our imaginary world where dotfiles are representative, that’s 82TB of data on github (1.2TB non-file objects and 0.7TB hashes)

Let’s try a few compression strategies and see how they fare:

  • 243GB (233GB apparent). Native git compression only
  • 243GB. Same, with ‘git repack -adk’
  • 237GB. As a ‘.tar’
  • 230GB. As a ‘.tar.gz’
  • 219GB. As a ‘.tar.xz’. We’re only going to do one round with ‘xz -9’ compression, because it took 3 days to compress on my machine.
  • 124GB. Using shallow checkouts. A shallow checkout is when you only grab the current revision, not the entire git history. This is the only compression we try that loses data.
  • 125GB. Same, with ‘git repack -adk’

Throwing out everything but the objects allows other fun options, but there aren’t any standard tools and I’m out of time. Maybe next time. Ta for now.


Getting the Adafruit Pro Trinket 3.3V to work in Arch Linux

I’m on Linux, and here’s what I did to get the Adafruit Pro Trinket (3.3V version) to work. I think most of this should work for other Adafruit boards as well. I’m on Arch Linux, but other distros will be similar, just find the right paths for everything. Your version of udev may vary on older distros especially.

  1. Install the Arduino IDE. If you want to install the adafruit version, be my guest. It should work out of the box, minus the udev rule below. I have multiple microprocessors I want to support, so this wasn’t an option for me.
  2. Copy the hardware profiles to your Arduino install. pacman -Ql arduino shows me that I should be installing to /usr/share/arduino.  You can find the files you need at their source (copy the entire folder) or the same thing is packaged inside of the IDE installs.

    cp -r adafruit-git /usr/share/arduino/adafruit
    
  3. Re-configure “ATtiny85” to work with avrdude. On arch, pacman -Ql arduino | grep avrdude.conf says I should edit /usr/share/arduino/hardware/tools/avr/etc/avrdude.conf. Paste this revised “t85” section into avrdude.conf (credit to the author)

  4. Install a udev rule so you can program the Trinket Pro as yourself (and not as root).

    # /etc/udev/rules.d/adafruit-usbtiny.rules
    SUBSYSTEM=="usb", ATTR{product}=="USBtiny", ATTR{idProduct}=="0c9f", ATTRS{idVendor}=="1781", MODE="0660", GROUP="arduino"
    
  5. Add yourself as an arduino group user so you can program the device with usermod -G arduino -a <username>. Reload the udev rules and log in again to refresh the groups you’re in (exact commands after this list). Close and re-open the Arduino IDE if you have it open to refresh the hardware rules.

  6. You should be good to go! If you’re having trouble, start by making sure you can see the correct hardware, and that avrdude can recognize and program your device with simple test programs from the command link. The source links have some good specific suggestions.
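
For steps 4 and 5, the exact commands I would expect to use (substitute your username; newgrp is optional if you just log out and back in):

sudo usermod -a -G arduino <username>
sudo udevadm control --reload-rules && sudo udevadm trigger
newgrp arduino   # or log out and back in to pick up the new group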

Sources:
http://www.bacspc.com/2015/07/28/arch-linux-and-trinket/
http://andijcr.github.io/blog/2014/07/31/notes-on-trinket-on-ubuntu-14.04/
