Managing your gpg keys – the right way

When installing software from non-official repositories in Debian-based distributions, you might come across “key problems”, such as:

The following signatures couldn't be verified because the public key is not available: NO_PUBKEY <key>

When it appears, you might scratch your head for quite some time.

There is a simple way of dealing with those. However, as I recently experienced while upgrading a machine, most tutorials are incomplete, or sometimes downright misleading.

Why keys?

First, let’s see what these keys are for.

When installing software from non-official repositories, Linux needs to download packages from those external sources. However, attackers may plant malware in the files hosted on those servers. This type of attack is not easy to pull off, since the administrators of those sites watch them closely. But when it succeeds, the attackers can distribute their malware to a lot of computers at once. Consequently, everything should be put in place to avoid spreading Trojan horses this way.

This is why any file that can be downloaded is digitally signed by the actual provider of the source. If an attacker alters a file, the digital signature no longer matches the content of the file. This way, your Linux distribution can make sure that anything it downloads is exactly the file the source originally published.

To verify the signature, your system only needs the public key of the source. And that is why your distribution keeps a list of the public keys of all the non-official sources.
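Under the hood, apt downloads the repository's Release file together with a detached signature and checks it against the keyring before trusting any package index. A minimal sketch of that verification step, assuming curl and gpgv are installed – the verify_release helper and its arguments are mine, not an apt command:

```shell
# Sketch: verify a repository's Release file the way apt does, by hand.
verify_release() {
  local repo_url="$1"   # dists directory of the repo, e.g. .../dists/jammy
  local keyring="$2"    # binary keyring file, e.g. /etc/apt/keyrings/docker.gpg
  local tmp
  tmp=$(mktemp -d) || return 1
  curl -fsSL "$repo_url/Release"     -o "$tmp/Release"     || return 1
  curl -fsSL "$repo_url/Release.gpg" -o "$tmp/Release.gpg" || return 1
  # gpgv succeeds only if the signature matches the file
  # and the signing key is present in the given keyring.
  gpgv --keyring "$keyring" "$tmp/Release.gpg" "$tmp/Release"
}
```

If anyone had altered the Release file (which carries the checksums of all the package indexes), gpgv would fail, exactly like apt does when it prints NO_PUBKEY or a bad-signature error.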

The apt-key way (deprecated since Ubuntu 22.04)

Previously, one could import keys using a tool called “apt-key”. This method is still mentioned in many tutorials, in the form:

apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys <KEY>

Transitioning to GPG (the new way)

Ubuntu and other distributions are switching to a different, more secure way of storing keys – still not perfectly secure, but it is what it is.

Keys are now stored with GPG. To transition, it is possible to import keys from apt-key to gpg. This is done in two steps:

  • listing the existing keys with “apt-key list”, which gives the following type of result:

pub   rsa4096 2022-01-31 [SC] [expires: 2024-01-31]
      DF44 CF0E 1930 9195 C106  9AFE 6299 3C72 4218 647E
uid           [ unknown] Vivaldi Package Composer KEY08 <packager@vivaldi.com>
sub   rsa4096 2022-01-31 [E] [expires: 2024-01-31]

  • importing those keys to gpg, using a command of the form:

apt-key export 4218647E | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/vivaldi.gpg
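If you have several keys to migrate, the two steps can be wrapped in a small loop. A sketch, assuming you have already noted the short key ID and picked an output name for each key – the migrate_apt_keys helper and the id:name pairs are placeholders of my own:

```shell
# Migrate a list of apt-key entries to individual keyring files.
# Each argument is "<short-key-id>:<output-name>" – substitute your own.
migrate_apt_keys() {
  local pair keyid name
  for pair in "$@"; do
    keyid="${pair%%:*}"   # e.g. 4218647E
    name="${pair##*:}"    # e.g. vivaldi
    apt-key export "$keyid" | sudo gpg --dearmor -o "/etc/apt/trusted.gpg.d/${name}.gpg"
  done
}
# Usage: migrate_apt_keys 4218647E:vivaldi
```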

Importing directly into gpg

So, for new sources, you should import keys with gpg directly rather than through apt-key. The commands take the form:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
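One detail many tutorials skip: /etc/apt/keyrings may not exist yet, and the resulting file must be readable by apt's unprivileged fetcher. A sketch wrapping the whole import, assuming curl and gpg are installed – the add_apt_key helper name and its arguments are mine:

```shell
# Import an ASCII-armored key from a URL into /etc/apt/keyrings.
add_apt_key() {
  local url="$1" name="$2"
  sudo install -m 0755 -d /etc/apt/keyrings          # create the directory if missing
  curl -fsSL "$url" | sudo gpg --dearmor -o "/etc/apt/keyrings/${name}.gpg"
  sudo chmod a+r "/etc/apt/keyrings/${name}.gpg"     # apt fetches as an unprivileged user
}
# Usage: add_apt_key https://download.docker.com/linux/ubuntu/gpg docker
```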

Matching package source and gpg file

Now, here comes the trick. In the command above importing a gpg key for docker, the target file for the key was:

/etc/apt/keyrings/docker.gpg

What actually happens when running “apt” is that it reads the package information from files located in the directory:

/etc/apt/sources.list.d

For instance, you may have followed instructions to add the docker ppa using the following command:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Note the “signed-by” part of the command, which specifically points to the following gpg file:

/usr/share/keyrings/docker-archive-keyring.gpg

So that’s exactly where your gpg file should be – yet the import command earlier put it in /etc/apt/keyrings/docker.gpg. The confusing part is that some tutorials prefer /etc/apt/keyrings while others use /usr/share/keyrings. Either location works; what matters is that the path after “signed-by” and the actual location of the gpg file match.
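To spot mismatches quickly, you can scan every “signed-by” entry in your sources and check that the referenced file actually exists. A sketch assuming GNU grep (as on Ubuntu) – check_signed_by is my own helper name:

```shell
# List every signed-by path referenced by your apt sources
# and flag the ones whose keyring file is missing.
check_signed_by() {
  local dir="${1:-/etc/apt/sources.list.d}"
  grep -rhoP 'signed-by=\K[^] ]+' "$dir" 2>/dev/null |
    sort -u |
    while read -r keyfile; do
      if [ -e "$keyfile" ]; then
        echo "OK      $keyfile"
      else
        echo "MISSING $keyfile"
      fi
    done
}
```

Any line flagged MISSING points at a source whose key file is not where the .list file says it should be – exactly the situation that produces the NO_PUBKEY error.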

Once your gpg key is in the correct place, your problem is solved.

I hope this will help prevent some of you from scratching your heads over this.

Mounting Synology drives on Linux

I’ve just removed the drives from my Synology box, which I replaced with a home-brewed server. I’ll write another article about the reasons that made me switch. Just in case you wonder, the Synology box is running fine. That’s not the point.

I took the disks from the Synology box and plugged them into a host running a plain Linux distribution (in my case, Ubuntu, but that shouldn’t matter).

Just type:

mount /dev/vg1000/lv /mnt
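If /dev/vg1000/lv does not show up by itself after plugging the disks in, the usual missing steps are assembling the RAID arrays and activating the volume group, since Synology builds its volumes on mdadm and LVM. A sketch, assuming a Debian-based host (package names may differ elsewhere) – the helper name is mine:

```shell
# Bring up a Synology volume by hand: RAID first, then LVM, then mount.
mount_synology_volume() {
  sudo apt-get install -y mdadm lvm2   # the tools Synology volumes rely on
  sudo mdadm --assemble --scan         # reassemble the md arrays from the disks
  sudo vgchange -ay vg1000             # activate the volume group Synology created
  sudo mount /dev/vg1000/lv /mnt
}
```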

That’s it. You have the file system from your Synology box on your Linux machine. It may come in handy if your box has crashed and you are waiting for a new one: in the meantime, you still have access to your data.

In case you want to reuse the disks or dispose of them (WARNING: the following will destroy the data on those disks), here is how to do it.

vgremove vg1000

Now check which md volumes are present that you didn’t create yourself (just use ls /dev/md12*). Then stop those volumes, replacing md12? with the ones you want to stop if your system has additional volumes you obviously don’t want to touch (they won’t stop while mounted anyway):

mdadm -S /dev/md12?

Empty the beginning of each of the related disks, replacing the … with your disk letter:

dd if=/dev/zero of=/dev/sd… bs=1M count=1024
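Wrapped up for several disks (DANGER: double-check the device names, this is irreversible) – wipe_member_disks and its arguments are placeholders of my own:

```shell
# Zero the first gigabyte of each given disk, destroying the RAID/LVM metadata.
wipe_member_disks() {
  local d
  for d in "$@"; do
    sudo dd if=/dev/zero of="/dev/$d" bs=1M count=1024 status=progress
  done
}
# Usage: wipe_member_disks sda sdb   # substitute your actual disk names
```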

And now you can play around with partitioning etc without being bothered again by vg1000 or mdadm.