Let me describe the scenario:

  • You have a linux software raid (raid5, in my case, created with mdadm).
  • On top of it, you have a few LVM volumes, and LUKS encrypted partitions.
  • You literally set this up 10 years ago: 4 disks, 2 TB each.
  • It has been running strong for the last 10 years, with the occasional disk replaced.
  • You just bought new 8 TB disks.

And now, you want to replace the old disks with the new ones, increase the size of the raid5 volume and, well, you want to do it live (with the partition in use, read-write, without unmounting it, and without rebooting the machine).

All of this with consumer hardware that DOES NOT SUPPORT ANY SORT OF HOT SWAP. Basically, no hardware raid controller, just the cheapest SATA support offered by the cheapest atom motherboard you bought 10 years ago that happened to have enough SATA plugs.

Not for the faint of heart, but it turns out this is possible with a stock linux kernel, fairly easy to do, and it worked really well for me.

All you need to do is type a few extra commands in your shell, so that your incredibly cheap and naive SATA controller and your linux system know what you're up to before you go around touching the wiring.
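In rough strokes, the dance for each disk looks something like this (a minimal sketch, assuming the array is /dev/md0 and the disk being swapped shows up as /dev/sdb on SATA host1; your device and host numbers will differ):

# Mark the old disk as failed and pull it out of the array:
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Tell the kernel to forget about the disk before touching the wiring:
echo 1 > /sys/block/sdb/device/delete

# Swap the physical disk, then ask the SATA controller to rescan:
echo "- - -" > /sys/class/scsi_host/host1/scan

# Partition the new disk, add it to the array, and wait for the resync:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat

# Once every disk has been replaced, grow the array to use the new space:
mdadm --grow /dev/md0 --size=max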

[ ... ]

When you run an application under docker, you have a few different mechanisms to choose from to provide networking connectivity.

This article digs into some of the details of two of the most common mechanisms, while trying to estimate the cost of each.

The most common way to provide network connectivity to a docker container is to use the -p parameter to docker run. For example, by running:

docker run --rm -d -p 10000:10000 envoyproxy/envoy

you have exposed port 10000 of an envoy container on port 10000 of your host machine.

Let's see how this works. As root, from your host, run:

netstat -ntlp

and look for port 10000. You'll probably see something like:

[...]
tcp6   0  0 :::10000    :::*   LISTEN   31541/docker-proxy  
[...]

This means that port 10000 is held open by a process called docker-proxy, not by envoy.

As the name implies, docker-proxy is a networking proxy similar to many others: a userspace application that listens on a port, forwarding bytes and connections back and forth as necessary.
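To get a feel for the difference with the other common mechanism, you can skip the proxy entirely and share the host's network namespace (a quick sketch for comparison, not necessarily the exact setup measured later):

# Published port: connections go through docker-proxy and/or NAT rules.
docker run --rm -d -p 10000:10000 envoyproxy/envoy

# Host networking: the container binds port 10000 on the host directly.
docker run --rm -d --network host envoyproxy/envoy

# With -p, docker also programs NAT rules you can inspect (classic iptables backend):
iptables -t nat -L DOCKER -n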

[ ... ]

I recently had to increase the size of an encrypted partition on my Debian server. I have been a long time user of LVM and dm-crypt and tried similar processes in the early days of the technology.

I was really impressed by how easy it was today, and how it all just worked, without having to reboot the system, mark the filesystems read-only, or unmount them.

Here are the steps I used, on a live system:

  1. Used mount to determine the name of the cleartext partition to resize. In my case, I wanted to add more space to /opt/media, so I ran:

    # mount |grep /opt/media
    /dev/mapper/cleartext-media on /opt/media type ext4 (rw,nosuid,nodev,noexec,noatime,nodiratime,discard,errors=remount-ro,data=ordered)
    

    which means that /opt/media is backed by /dev/mapper/cleartext-media.

  2. Used cryptsetup to determine the name of the encrypted Logical Volume backing the encrypted partition:

    # cryptsetup status /dev/mapper/cleartext-media
    /dev/mapper/cleartext-media is active and is in use.
      type:    LUKS1
      cipher:  [...]
      keysize: [...]
      device:  /dev/mapper/system-encrypted--media
      offset:  4096 sectors
      size:    [...] sectors
      mode:    read/write
      flags:
    

    From this output, you can tell that /dev/mapper/cleartext-media is the cleartext version of /dev/mapper/system-encrypted--media, where system is the name of the Volume Group and encrypted-media is the name of the Logical Volume (the double dash is how the device mapper escapes the dash inside the LV name).
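
    As a quick sanity check, you can ask LVM directly about that Logical Volume (the VG and LV names here are the ones from my setup):

    # lvs -o lv_name,vg_name,lv_size system/encrypted-media
      LV              VG     LSize
      encrypted-media system [...]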

[ ... ]

When talking about using Debian, one of the first objections people raise is that it only has "old packages" and is not updated often enough.

This is generally not true. Or well, it is "true" only if you stick to the "stable" release of Debian, which might not be the right release for you.

Also, people often don't realize that it's easy to use more than one "release" on a given system. For example, you can configure apt-get to install "stable" packages by default, while still allowing a manual override to install from "testing" or "unstable", or vice versa.

Before starting, it's worth noting that this might not be for the faint of heart: mixing and matching debian releases is generally risky and discouraged. Why is this a problem? Well, it all has to do with dependencies, backward compatibility, and the fact that they may not always be correctly tracked.

For example: let's say you install a cool new gnome application from unstable. This cool new application depends on the latest icons, which apt also correctly installs. Now, in the new icons package, some old icons have been removed. Old applications using them will either need to be upgraded, or their icons will break. This is generally handled correctly by apt-get dependencies, assuming the maintainer did a really good job tracking versions. But this is hard to do, and error prone at times. Worse can happen with C libraries or different GCC versions using different ABIs, or with systemic changes like the introduction of systemd.
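
With that caveat out of the way, the basic setup boils down to something like this (a sketch of one common recipe, not necessarily the exact one the rest of the article uses; the mirror URL and package name are illustrative):

# In /etc/apt/sources.list, list both releases:
deb http://deb.debian.org/debian stable main
deb http://deb.debian.org/debian unstable main

# In /etc/apt/apt.conf.d/99default-release, make "stable" the default:
APT::Default-Release "stable";

# Normal installs now come from stable; override per command with -t:
apt-get update
apt-get install some-package
apt-get -t unstable install some-package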

[ ... ]

Recently, I tried to run a Java application on my Debian workstation that needed to establish SSL / HTTPS connections.

But... as soon as a connection was attempted, the application failed with an ugly stack trace:

ValidatorException: No trusted certificate found
sun.security.validator.ValidatorException: No trusted certificate found
        at net.filebot.web.WebRequest.fetch(WebRequest.java:123)
          at net.filebot.web.WebRequest.fetchIfModified(WebRequest.java:101)
          at net.filebot.web.CachedResource.fetchData(CachedResource.java:28)
          at net.filebot.web.CachedResource.fetchData(CachedResource.java:11)
          at net.filebot.web.AbstractCachedResource.fetch(AbstractCachedResource.java:137)
          at net.filebot.web.AbstractCachedResource.get(AbstractCachedResource.java:82)
          at net.filebot.cli.ArgumentProcessor$DefaultScriptProvider.fetchScript(ArgumentProcessor.java:210)
          at net.filebot.cli.ScriptShell.runScript(ScriptShell.java:82)
          at net.filebot.cli.ArgumentProcessor.process(ArgumentProcessor.java:116)
          at net.filebot.Main.main(Main.java:169)
Failure (°_°)

First attempts at solving the problem were trivial: install all trusted SSL certificates on the Debian box.
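
On Debian, that usually means making sure the system CA bundle, and the Java trust store generated from it, are installed (a sketch of the standard packages and paths; the alias and certificate file in the last command are hypothetical):

apt-get install ca-certificates ca-certificates-java
update-ca-certificates

# The Java trust store generated from the system certificates:
ls -l /etc/ssl/certs/java/cacerts

# A single extra certificate can also be imported by hand:
keytool -importcert -keystore /etc/ssl/certs/java/cacerts -alias my-ca -file my-ca.crt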

[ ... ]

Down with the spinning disks! And hail the SSDs!

That's about what happened the last time I upgraded my laptop. SSDs were just so much faster, more energy efficient, and quieter that I couldn't stand the thought of remaining loyal to the trusty spinning disks.

So... I just said goodbye to a few hundred dollars to welcome a Corsair Force GS into my laptop, and have been happy ever after.

Or so I thought. Back to the hard reality: last week my linux kernel started spewing read errors at my face, and here is a tale of what I had to do in order to bring my SSD back to life.

It all started on a Friday morning with me running an apt-get install randomapp on my system.

The command failed with an error similar to:

# apt-get install random-app-whatever-it-was
...
(Reading database ... dpkg: error processing whatever.deb (--install):
dpkg: unrecoverable fatal error, aborting:
   reading files list for package 'libglib2.0-data': Input/output error
E: Sub-process /usr/bin/dpkg returned an error code (2)

where libglib2.0-data had nothing to do with what I was trying to install.
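
When dpkg starts throwing Input/output errors, the problem is usually below the package manager, so the first reflex is to look at the kernel log and the drive's SMART data (a sketch; /dev/sda is an assumption, and smartctl comes from the smartmontools package):

# Look for ATA / filesystem errors in the kernel log:
dmesg | grep -iE 'ata|i/o error|ext4'

# Ask the drive itself how it is feeling:
smartctl -a /dev/sda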

[ ... ]

With my last laptop upgrade I started using awesome as a Window Manager.

I wasn't sure of the choice at first: I have never liked graphical interfaces, and the thought of having to write lua code to get my GUI to provide even basic functionality wasn't very appealing to me.

However, I have largely enjoyed the process so far: even complex changes are relatively easy to make, and the customizability has improved my productivity while making the interface more enjoyable to use.

The switch, however, has forced me to change several things in my setup. Among other things, I ended up abandoning xscreensaver in favor of i3lock and xautolock, and changed a few things on my system to better integrate with the new environment.

In this article, you will find:

  • A description of how to use xautolock together with i3lock to automatically lock your screen after X minutes of inactivity and when the laptop goes to sleep via ACPI (a minimal sketch follows this list).

  • My own recipe to display the battery status on the top bar of Awesome. This is very similar to existing suggestions on the Awesome wiki, except it supports displaying the status of multiple batteries at the same time. Which, however rare it may sound, is something my laptop supports and that I use regularly (x230 with a 19+ cell slice battery).
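
For the first point, the core of the recipe is small enough to show up front (a minimal sketch, typically started from ~/.xsession or awesome's rc.lua; the timeout and colors are just examples):

# Lock the screen with i3lock after 10 minutes of inactivity:
xautolock -time 10 -locker "i3lock -c 000000" &

# Lock immediately on demand (e.g. bound to a key in awesome, or from an ACPI sleep hook):
xautolock -locknow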

[ ... ]


Let's say you want to make the directory /opt/test on your desktop machine visible to a virtual machine you are running with libvirt.

All you have to do is:

  • virsh edit myvmname, edit the XML of the VM to have something like:

    <domain ...>
      ...
    
      <devices ...>
        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/opt/test'/>
          <target dir='testlabel'/>
        </filesystem>
      </devices>
    </domain>
    

    where /opt/test is the path you want to share with the VM, and testlabel is just a mnemonic of your choice.

    Make sure to set accessmode to something reasonable for your use case. According to the libvirt documentation, you can use:

    mapped
        To have files created and accessed as the user running kvm/qemu. Uses extended attributes to store the original user credentials.
    passthrough
        To have files created and accessed as the user within kvm/qemu.
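
    Once the VM has been restarted with this configuration, the directory can be mounted from inside the guest over 9p, using the label chosen above (a sketch; the mount point is arbitrary and the guest kernel needs virtio 9p support):

    # Inside the guest:
    mount -t 9p -o trans=virtio testlabel /mnt/test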

[ ... ]

All the libvirt related commands, like virsh, virt-viewer or virt-install, take a connect URI as a parameter. The connect URI can be thought of as specifying which set of virtual machines you want to control with that command, on which physical machine, and how.

For example, I can use a command like:

virsh -c "xen+ssh://admin@corp.myoffice.net" start web-server

to start the web-server virtual machine on the xen cluster running at corp.myoffice.net, by connecting as admin via ssh to the corresponding server.

If you don't specify any connect URI to virsh (or any other libvirt related command), by default libvirt will try to start the VM as your username on your local machine (e.g., qemu:///session), unless you are running as root, in which case libvirt will try to run the image as a system image, not tied to any specific user (e.g., qemu:///system).

I generally run most of my VMs as system VMs, and systematically forget to specify which connect URI to use to commands like virsh or virt-install. What is more annoying is that some of those commands take the URI as -c while others as -C.
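
One way to stop fighting with the flags (a small sketch, not necessarily what the rest of the article suggests) is to set the default connect URI once in the environment, which the libvirt tools honor:

# In ~/.bashrc or similar:
export LIBVIRT_DEFAULT_URI="qemu:///system"

# Plain invocations now talk to the system instance:
virsh list --all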

[ ... ]

While traveling, I have been asked a few times by security agents at airports to turn on my laptop and, well, show them that it did work and looked like a real computer.

Although they never searched the content and nothing bad ever happened, every time I cross the border or go through security I am worried about what might happen, especially given recent stories of people being searched and their laptops taken away for further inspection.

The fact that I use full disk encryption does not help: if I was asked to boot, my choice would be to either enter the password and log in, thus disclosing most of the content of the disk, or refuse and probably have my laptop taken away for further inspection.

So... for the first time in 10 years, I decided to keep Windows on my personal laptop. Even more: leave it as the default operating system in GRUB and, well, not show GRUB at all during boot.

Not because I think it is safer this way, but just to give anyone as few pretexts or excuses as possible to further poke at my laptop, in case I need to show it or they need to inspect it.
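
On a Debian-style install, hiding GRUB and defaulting to Windows comes down to a couple of lines in /etc/default/grub (a sketch; the Windows menu entry name depends on what os-prober generated, and older GRUB versions use GRUB_HIDDEN_TIMEOUT instead of GRUB_TIMEOUT_STYLE):

# /etc/default/grub (illustrative values):
GRUB_DEFAULT="Windows Boot Manager (on /dev/sda1)"
GRUB_TIMEOUT=0
GRUB_TIMEOUT_STYLE=hidden

# Regenerate the configuration:
update-grub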

[ ... ]

Just a few days ago I realized that the Raspberry PI I use to control my irrigation system was dead. Could not get to the web interface, pings would time out, could not ssh into it.

The first thing I tried was a simple reboot. The raspberry is in a black box in my backyard, maybe the hot summer days were... too hot? I have a cron job that shuts it down if the temperature goes above 70 degrees. Or maybe the shady wireless card and its driver stopped working? I have another cron job to restart it, so this seems less likely.

So... I reboot it by physically unplugging it, but still nothing happens. The red led on the board, next to the ethernet plug, is on, which means it is getting power. The green led next to it flashes only once. From reading online, this led can flash to report an error, or to indicate that the memory card is being read.

There is no error corresponding to one, single, flash, so I assume it means that it tried to read the flash, and somehow failed. It is supposed to be booting now, so I would expect much more activity from the memory card.

[ ... ]

I've always liked text consoles more than graphical ones. This at least until some time in 2005, when I realized I was spending a large chunk of my time in front of a browser, and elinks, lynx, links and friends did not seem that attractive anymore.

Nonetheless, I've kept things simple: at first I started X manually, with startx, on a need-by-need basis. I used ion (yes! ion) for a while, until it stopped working during some upgrade. Then I decided it was time to boot into a graphical interface, and started using slim. Despite some quirks, I've been happy since.

In terms of window managers, I really don't like personalizing or tweaking my graphical environment. I see it as a simple tool that should be zero overhead, require no maintenance, and not get in the way of what I want to do with a computer. I don't want to learn which buttons to click on, how to do transparency, which icons mean what, or where the settings I am looking for were moved to in the latest version.

[ ... ]

If you like hacking and have a few machines you use for development, chances are your system has, at least once in your lifetime, become a giant meatball of services running for who knows what reason, with your PATH clogged with half-finished scripts and tools you don't even remember the purpose of.

If this never happened to you, don't worry: it will happen, one day or another.

My first approach to sorting this mess out was chroots. The idea was simple: always develop on my laptop, but create a self contained environment for each project. In each such environment, install all the needed tools, libraries, services, anything that was needed for my crazy experiments. This was fun for a while and worked quite well: I became good friends with rsync, debootstrap, mount --rbind and sometimes even pivot_root, and I was happy.
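
The basic recipe was roughly the following (a rough sketch with made-up paths; the bind mounts are the part that is easy to forget):

# Create a minimal Debian environment for a project:
debootstrap stable /srv/chroots/myproject http://deb.debian.org/debian

# Make the usual pseudo-filesystems visible inside it:
mount --rbind /dev  /srv/chroots/myproject/dev
mount --rbind /proc /srv/chroots/myproject/proc
mount --rbind /sys  /srv/chroots/myproject/sys

# Enter the environment and install whatever the experiment needs:
chroot /srv/chroots/myproject /bin/bash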

Until, well, I ran into the limitations of chroots: they can't really simulate networking, can't run two processes on port 80, can't run different kernels (or OSes), and don't really help if you need to work on something boot related or that has to do with userspace and kernel interactions.

[ ... ]

Just a few days ago I finally got a new server to replace a good old friend of mine which has been keeping my data safe since 2005. I was literally dying to get it up and running and move my data over when I realized it had been 8 years since I last set up dmcrypt on a server I only had ssh access to, and I had no idea what current best practices were.

So, let me start by describing the environment. Like my previous server, this new machine is set up in a datacenter somewhere in Europe. I don't have any physical access to this machine; I can only ssh into it. I don't have a serial port I can connect to over the network, I don't have IPMI, nor something like intel kvm, but I really want to keep my data encrypted.

Having a laptop or desktop with your whole disk encrypted is pretty straightforward with modern linux systems. Your distro will boot up, the kernel will be started, the scripts in the initrd will detect the encrypted partition, pause the boot process, ask you for a passphrase, decrypt your disk, and happily continue booting.
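
On such a system, the initrd knows what to unlock thanks to /etc/crypttab, something along these lines (a sketch; the name and UUID are placeholders):

# /etc/crypttab: <mapped name>  <source device>  <key file>  <options>
cryptroot  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  luks

# After changing it, rebuild the initramfs so the passphrase prompt shows up at boot:
update-initramfs -u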

[ ... ]

While trying to get ldap torture back in shape, I had to learn again how to get slapd up and running with a reasonable config. Here are a few things I had long forgotten and had to relearn this morning:

  1. The order of the statements in slapd.conf is relevant. Don't be naive: even though the config looks like a normal key-value store, some keys can be repeated multiple times (like backend, or database), and some can only appear before / after other statements.
  2. My good old example slapd.conf file no longer worked with slapd. Some of it is because the setup is just different, some of it because I probably had a few errors to begin with, some of it because a few statements moved around or are no longer valid. See the changes I had to make.
  3. Recent versions of slapd support keeping the config in the database itself, or at least represented in LDIF format within the tree. Many distros ship slapd with the new format. To convert from the old format to the new one, you can use:

    slapd -f slapd.conf -F /etc/ldap/slapd.d
    
  4. I had long forgotten how quiet slapd can be, even when things go wrong. Looking in /var/log/syslog might often not be enough. In fact, my database was invalid, my configs had errors, and there was very little indication that when I started slapd, it was sitting there idle because it couldn't really start. To debug errors, I ended up running it with:

    slapd -d Any -f slapd.conf
    
  5. slapd will not create the initial database by itself. To do so, I had to use:

    /usr/sbin/slapadd -f slapd.conf < base.ldiff
    

    with base.ldiff being something like this.

[ ... ]

Have you ever been lost in conversations or threads about one or the other file system? Which one is faster? Which one is slower? Is that feature stable? Which file system should you use for this or that workload?

I was recently surprised to see ext4 as the default file system on a new linux installation. Yes, I know, ext4 has been around for a good while, and it does offer some pretty nifty features. But when it comes to my personal laptop and my data, well, I must confess switching to something newer always sends shivers down my spine.

Better performance? Are you sure it's really that important? I'm lucky enough that most of my coding & browsing can fit in RAM. And if I have to recompile the kernel, I can wait that extra minute. Is the additional slowness actually impacting your user experience? And your productivity?

Larger files? I never had to store anything that ext2 could not support. Even with a 4 GB file limit, I've only rarely had problems (no, I don't use FAT32, but when dmcrypt/ecryptfs/encfs and friends did not exist, I used the good old CFS for years, which turned out to have a 2 GB file size limit). Less fragmentation? More contiguous blocks? C'mon, how often have you had to worry about the fragmentation of your ext2 file system on your laptop?

What I generally worry about is the safety of my data. I want to be freaking sure that if I lose electric power, forget my laptop in suspend mode, or my horrible wireless driver causes a kernel panic, I don't lose any data. I don't want some freaking bug in the filesystem to cause any data loss or inconsistency. And of course, I want a good toolset to recover data in case the worst happens (fsck, debug.*fs, recovery tools, ...).

[ ... ]