How To Use GPG on the Command Line


I use GPG (also known as GnuPG) software for encrypting files that contain sensitive information (mostly passwords).  As a systems engineer, I do most of my work on remote servers, accessible via a command line interface.  Naturally, I find it easier to use the command line version of GPG to directly encrypt and decrypt documents.

GPG (GNU Privacy Guard) is a free, open source version of PGP (Pretty Good Privacy) encryption software.  Conceptually, both use the same approach to cryptography (i.e. encryption and decryption).  However, the two differ in their implementations.

What follows is a quick primer on how to install the GPG command line tools, as well as a list of basic commands you are most likely to need.

Installing GPG

GPG can be installed in a number of different ways.  The instructions here will install the core GPG command line tools, which are intended to be used in a terminal.

If, on the other hand, you prefer a graphical user interface (or GUI) for accessing GPG functionality (e.g. encrypting email communications, or encrypting documents in a GUI text editor), refer to the links at the end of this article.

Red Hat / CentOS

yum install gnupg

Ubuntu / Debian

apt-get install gnupg

Mac OS X

The easiest way to install the GPG command line tools on your Mac is to first install Homebrew, a package management system that makes thousands of software packages available for install on your Mac.

Open a Terminal window (Applications > Utilities menu), then enter the following command.

ruby -e "$(curl -fsSL"

When that’s complete, install the GPG software package with the following command.

brew install gnupg

GPG Quick How To

What follows is a very brief introduction to command line usage of GPG.  Think of it as a “quick reference” or a “cheat sheet.”  You should certainly learn more about GPG than what is explained within this post.  It is intended only to get you started.  If you expect to use GPG more extensively, I strongly advise you to read more documentation (see the Links section below).

GPG is powerful encryption software, but it can also be easy to learn — once you understand some basics.  GPG uses a method of encryption known as public key cryptography, which provides a number of advantages and benefits.  However, to obtain these advantages, a minimal level of complexity is required to make it all work.  For an overview of how public key cryptography works, read the Introduction to Cryptography (link at the bottom of this post).

Typographical conventions used in commands:

In all examples below, text that you will need to replace with your own values (e.g. usernames, email addresses, filenames) is shown in “gray italic”.  Text that you will type literally (unchanged) is indicated with “black constant width”.

"gray italic"
"black constant width"

Create your GPG key:

To get started with GPG, you first need to generate your key pair.  That is, you will generate both a private and a public key with a single command.  Enter your name and email address at the prompts, but accept the default options otherwise.

gpg --gen-key

The first key is your private (or secret) key.  You must keep this private key safe at all times, and you must not share it with anyone.  The private key is protected with a password.  Try to make the password as long as possible, but something you will not forget.  If you forget the password, there’s no way to recover it.  For the same reason, you should also make a backup copy of your private key.  (Consider using Time Machine for backups on Mac OS X.)

The second key is your public key, which you can safely share with other people.

The relationship of the private and public key is actually very simple.  Anything that is encrypted using the public key can only be decrypted with the related private key.  Therefore, you will provide your public key to another person, and they will provide you with their public key.  Anything encrypted to your public key can only be decrypted by you.  Anything encrypted to the other person’s public key can only be decrypted by the other person.
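The relationship described above can be sketched end to end.  This is a minimal, illustrative sketch only: it uses a throwaway keyring (via GNUPGHOME) and an unprotected test key so that it runs without prompts, and the name, email address, and filenames are all made up.  A real key should always be protected with a passphrase.

```shell
# Illustrative only: throwaway keyring, unprotected key, made-up identity.
export GNUPGHOME="$(mktemp -d)"

# Generate a test key pair non-interactively (never use an empty
# passphrase for a real key).
gpg --batch --passphrase '' --quick-gen-key 'Test User <test@example.com>' default default never

# Encrypt to the public key, then decrypt with the matching private key.
echo 'secret note' > note.txt
gpg --quiet --encrypt --recipient test@example.com note.txt   # writes note.txt.gpg
gpg --quiet --decrypt note.txt.gpg                            # prints the plaintext
```

Because the public and private keys are generated as a pair, the decrypt step succeeds only in a keyring that holds the private half.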

Export your public key:

The next step is to export your public key and share it with another person.  That person should do the same, and export their public key.

gpg --export --armor > mypubkey.asc

Import another person’s public key:

When you import a public key, you are placing it into what is commonly referred to as your GPG “keyring.”

gpg --import theirpubkey.asc

List the public keys in your keyring:

You can now view a list of public keys in your keyring, as well as the name and email address associated with each key.

gpg --list-keys

List private keys in your keyring:

The following command will list the private keys in your keyring.  This will show your own private key, which you created earlier.

gpg --list-secret-keys

Trust a public key:

Once you have imported the other person’s public key, you must now set the trust level of the key.  This prevents GPG from warning you every time you encrypt something with that public key.

Specify the other person’s name or email in the command.

gpg --edit-key glenn

trust (invoke trust subcommand on the key)
5 (ultimate trust)
y (if prompted)
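If you ever need to script this, the same session can be driven non-interactively by feeding the subcommands to --command-fd.  A hedged sketch, again using a throwaway keyring and a made-up identity (a key you generate yourself is already ultimately trusted, so this simply re-applies the same setting):

```shell
# Illustrative only: throwaway keyring and made-up identity.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --passphrase '' --quick-gen-key 'Glenn Test <glenn@example.com>' default default never

# Feed the same answers (trust, 5, y, save) on stdin.
printf 'trust\n5\ny\nsave\n' | gpg --batch --no-tty --command-fd 0 --edit-key glenn@example.com

gpg --list-keys glenn@example.com    # the uid line should show "[ultimate]"
```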

Useful GPG Commands

GPG has many options, most of which you will never need.  Here’s a quick list of the commands you are most likely to use.

Encrypt a file:

To encrypt a file named filename.txt for a single individual, specify that individual as a recipient.

gpg --encrypt --recipient glenn filename.txt

This will create a new encrypted file named filename.txt.gpg.

If you want to encrypt a file so that only you yourself can decrypt it, then specify yourself as the recipient.

gpg --encrypt --recipient 'my_name' filename.txt

If you want to encrypt a file so that both you and another person can decrypt the file, specify both you and the other person as recipients.

gpg --encrypt --recipient glenn --recipient 'my_name' filename.txt

If you want to encrypt a file for a group of people, define the group in your gpg.conf file (see section below), and then specify the group as a recipient.

gpg --encrypt --recipient journalists filename.txt

After a while, you’ll want to be more concise and use the short versions of the command line options.  Here’s the same command.

gpg -e -r journalists filename.txt

Decrypt a file to terminal (standard output):

The first version of this command will display the content of a file within the terminal window itself.

gpg --decrypt filename.txt.gpg

Use the --decrypt option only if the file is an ASCII text file.  If it’s a binary file, then omit the --decrypt option, which will write the decrypted file to disk.  At that point, you can open the binary file in whatever application is used to view the file.

Decrypt a file to disk:

Whether the file is ASCII or binary, if you want to make changes to the content of an encrypted file, you must first decrypt it, make your changes, then re-encrypt the file.  As I mentioned in the previous paragraph, you write the decrypted version of a file to disk by omitting the --decrypt option from the command.

gpg filename.txt.gpg

If the encrypted file was named filename.txt.gpg, the above command will create a decrypted version named filename.txt (with the .gpg extension removed).
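You can also name the decrypted output file explicitly with the --output (-o) option.  The sketch below uses symmetric encryption (--symmetric) with a passphrase supplied on stdin purely so the example is self-contained (no keyring needed); the filenames and passphrase are made up.

```shell
# Illustrative: --symmetric needs no keyring; the passphrase comes from stdin.
echo 'some contents' > demo.txt
echo 'not-a-real-passphrase' | gpg --batch --yes --pinentry-mode loopback \
    --passphrase-fd 0 --symmetric --output demo.txt.gpg demo.txt

# Decrypt to an explicitly named file instead of the default name.
echo 'not-a-real-passphrase' | gpg --batch --yes --pinentry-mode loopback \
    --passphrase-fd 0 --decrypt --output demo-restored.txt demo.txt.gpg

diff demo.txt demo-restored.txt && echo 'files match'
```

Note that --pinentry-mode loopback assumes GnuPG 2.1 or later.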

Create Groups of People in Your GPG Configuration File

For convenience, you can pre-define a group of people in your GPG configuration file.  This has the benefit of allowing you to encrypt a file to every member of the group by specifying only the group name as the recipient, rather than tediously specifying every individual member of the group.

Your GPG software configuration is stored in your home directory, within the ~/.gnupg/gpg.conf file.  Edit this file using your favorite command line text editor (vim, nano, pico, emacs, etc.).  There are numerous settings available in the configuration file, but for now, go to the part of the file where groups are defined.

When defining a group, you list the members of the group.  Each member is referenced by some attribute of their public key found in your GPG keyring — typically a person’s name (or partial name, such as first or last name) or an email address (or partial email address).

If you are a member of the group, remember to include yourself in the group!  If you do not list yourself in the group, you won’t be able to decrypt any files you encrypt to the group.

Here’s an example of a group named “journalists”, listing the first name of each person.

group  journalists  =  glenn  laura  ewan  barton

Where To Go From Here

I encourage you to learn more about GPG.  See the Links below.

You may also want to learn about secure methods to erase files from your computer hard drive.  Mac OS X has the “Secure Empty Trash” option within Finder.  There are also numerous third-party tools you can install.

Since we’re on the theme of learning how to use GPG on the command line, you may want to try “bcwipe” — a program to securely erase files from the command line.

On Mac OS X, you can install bcwipe via Homebrew.

brew install bcwipe



GUI Tools


On the Nature of DevOps

As with the Tao of the Tao Te Ching, those who seek DevOps often come up empty-handed.  Both the Tao and DevOps are elusive.  (Recruiters find it particularly so.)  And yet, if they open their minds, there it will be, right in front of them.

The DevOps that can be spoken is not the eternal DevOps.
The name that can be named is not the eternal name.
The nameless is the origin of Development and Operations.

Today, DevOps is a buzzword.  The term has as many different meanings as the number of people you ask to define it.  Many people today are captivated by the word, but they fail to see beyond it, to the essence of what it means.  They seek to find practitioners of DevOps, but are disappointed when they can’t find any.

Officially, I began my career as a systems administrator in 2001.  At that company, the high-level view of the technical organization was as follows.  I worked within the Technical Operations group.

                        | Software    |
                        | Development |
                          /          \
                        /              \
                      /                  \
            +------------+             +-------------+
            | Technical  | ----------- | Release     |
            | Operations |             | Engineering |
            +------------+             +-------------+

From the beginning, I had begun to apply many of the principles that would later become tenets of the DevOps movement.

  • Automation of our operating system installs (Kickstart and Jumpstart).
  • Lots of scripting (Perl, Bash, Python).
  • Configuration management of servers (CFEngine).
  • A central management server, with password-less SSH access to all the other servers.
  • A whole lot of monitoring tools (too many to list here).
  • Code and configurations were in a revision control system.
  • As a company, we also automated our code builds and deployments
    (done by our Release Engineering team).
  • Technical Operations met weekly with our colleagues in both Software Development and Release Engineering — to discuss upcoming releases, the potential impacts of new features to our infrastructure, how best to design those features so that they were scalable, how best to monitor the new services and features so we could identify problems early, as well as planning for additional infrastructure capacity.
  • We also communicated with each other through IRC chat.

That was nearly a decade before the term DevOps was eventually coined in 2009.  We were already doing these things, not because it was cool, but because it was common sense.  We saw the value of investing our time and efforts into doing these things.  We were highly productive at what we did.  We were able to manage a large installation of thousands of servers and storage nodes with a surprisingly small number of people.  We were early adopters of DevOps principles, years before it became fashionable to do so.

Another important point worth mentioning.  The company I worked for invested time and money into training and developing the employees within the Technical Operations department.  As “ops” people, we were encouraged to learn scripting and coding.  In essence, the company developed the people it needed to accomplish the work that needed to be done.  It was a mutually beneficial arrangement.  There was very little employee turnover within the Technical Operations group.  I worked at that company for 8 years, and many of my colleagues had similar tenures there.

So, the next time you find yourself looking for those elusive DevOps engineers that you can’t seem to find, you should think about what you are really looking for.  Ask yourself whether you are really looking for the right things in the right places.  For most of you, I suspect you are not.

Mac OS X … “fork: Resource temporarily unavailable”

Almost a year ago, I made the switch to using Mac OS X as my primary workstation.  As difficult as the transition was during my first 3 weeks, it has now become second nature to me.  I haven’t had any major issues along the way, but now and then, I have had to tweak a thing or two.  Here is the most recent such example.

Lately, I’ve been seeing the following error fairly often when launching commands under Bash within a local Terminal session.  It’s rather annoying, because it stops me from getting any work done.  As a systems administrator, I have lots of Terminal sessions open, and this is where the error has always manifested itself.

[laptop:~ root]$ ssh somehost

-bash: fork: Resource temporarily unavailable

The first time this happened to me, I didn’t think much of it, so I rebooted my system.  Today, I did not feel like rebooting my system, because I have a ton of work in progress.  I suspect most people won’t hit this resource limit, but if you’re a power user, you eventually might.

Like other Unix-like operating systems, Mac OS X limits the maximum number of processes, system-wide and per-user, but it does so with a rather conservative number.  On my Mac, system-wide processes were limited to 512, and per-user processes were limited to 266.  I’ve never encountered this problem on Linux, because these values are dynamically set in the kernel, based on the amount of memory installed in the Linux system.

Here’s how I resolved this problem.

  • First, determine what your system’s current limits are.  These are the values on my installation of Mac OS X 10.5 with 4 GB of RAM.
[laptop:~ root]$ sysctl -a | grep maxproc | grep =

kern.maxproc = 512
kern.maxprocperuid = 266
  • Now set these to higher values.  I suggest approximately doubling what you already have.
[laptop:~ root]$ sysctl -w kern.maxproc=1024
[laptop:~ root]$ sysctl -w kern.maxprocperuid=512
  • You’ll also want to apply these changes automatically at system bootup.  To do so, add them to the /etc/sysctl.conf file.
[laptop:~ root]$ sudo vi /etc/sysctl.conf
[laptop:~ root]$ cat /etc/sysctl.conf

kern.maxproc=1024
kern.maxprocperuid=512

  • At this point, launched processes will still inherit the default maxprocperuid value of 266.  I would like to change this inherited default value for new processes automatically, but I haven’t found a way to increase the default system-wide soft-limit of maxprocperuid on Mac.  On Linux, I simply update the /etc/security/limits.conf file, but I don’t see such a file under Mac OS X.  If you know of a way to do this in Mac OS X, leave a comment below.  For now, I’ve added the following command to my user’s .bash_profile file.  The value of 512 is intended to match that of kern.maxprocperuid shown above.
ulimit -u 512
  • To confirm the change, launch a new Terminal session, and run the following command.  Before the change, you will have seen “266.”  After the change, you should see “512”.
[laptop:~ root]$ ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) 6144
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 512
virtual memory          (kbytes, -v) unlimited
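A quick way to see how close you are to the limit (and thus how likely the fork error is) is to compare your current process count against ulimit -u.  A small sketch; it should behave the same under Bash on Mac OS X and Linux:

```shell
# Compare the user's running process count against the per-user limit.
limit=$(ulimit -u)
count=$(ps -u "$(id -u)" -o pid= | wc -l | tr -d ' ')
echo "processes in use: $count  (per-user limit: $limit)"
```

If the count is near the limit, new forks will start failing with “Resource temporarily unavailable.”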

A commenter posted a link below, referring me to a method of changing the default system-wide values at boot time.  I’ve elaborated in greater detail below.

My system is running a newer version of Mac OS X (10.5.8), and it uses a launchd daemon, which is the parent of all processes (similar to init under Linux).  This launchd daemon can read options from a configuration file, which can be used to set the various “limit” values.  There is both a system-wide and a per-user launchd daemon, and each can be configured independently, although the user’s launchd daemon will inherit its values from the system’s launchd at boot time.  The system-wide daemon values will always limit the maximum values that a per-user daemon can set.

Further, the launchd daemon can be re-configured live (with certain limitations), using the launchctl command, without requiring a reboot.  However, I suggest that you add your custom settings to /etc/launchd.conf to persist your changes across reboots.

  • First, check your existing user’s launchd settings.  For each setting, there are two values.  A “soft” limit on the left, and a “hard” limit on the right.  The soft limit is the currently active limit.  However, you can increase the soft limit up to (but not greater than) the hard limit.  The root user can increase the hard limits of the system-wide launchd, without a reboot.  However, you cannot change a user’s launchd hard limits without a reboot (even if trying to do so as the root user).
[laptop:~ user]$ launchctl limit

         cpu         unlimited      unlimited
         filesize    unlimited      unlimited
         data        6291456        unlimited
         stack       8388608        67104768
         core        0              unlimited
         rss         unlimited      unlimited
         memlock     unlimited      unlimited
         maxproc     266            532
         maxfiles    256            unlimited
  • Next, check your system-wide settings.  Notice how the user-specific values are identical to the system-wide settings, as they were inherited at boot time.
[laptop:~ user]$ sudo su -
[laptop:~ root]$ launchctl limit

         cpu         unlimited      unlimited
         filesize    unlimited      unlimited
         data        6291456        unlimited
         stack       8388608        67104768
         core        0              unlimited
         rss         unlimited      unlimited
         memlock     unlimited      unlimited
         maxproc     266            532
         maxfiles    256            unlimited

[laptop:~ root]$ exit
  • If you do not need to increase the user-specific values before rebooting, then go on to the next step.  Otherwise, if you need to resolve your immediate “fork: Resource temporarily unavailable” problem, try the following steps.  (You may need to close some applications first.)  Remember, you will not be able to increase the soft values higher than the hard limit imposed by the user’s launchd daemon.  If you try to set the soft limit too high, the value will be reduced to the hard limit value.  As illustrated in the example below, increasing the maxproc soft limit to 512 succeeds, while increasing the maxproc hard limit to 1024 fails.  Run the following commands as the user (not root).
[laptop:~ user]$ launchctl limit maxproc   512 1024
[laptop:~ user]$ launchctl limit maxfiles  512 unlimited
[laptop:~ user]$ launchctl limit

         cpu         unlimited      unlimited
         filesize    unlimited      unlimited
         data        6291456        unlimited
         stack       8388608        67104768
         core        0              unlimited
         rss         unlimited      unlimited
         memlock     unlimited      unlimited
         maxproc     512            532
         maxfiles    512            unlimited
  • Now, place new system-wide default settings into the /etc/launchd.conf file.  You will need to create this file, as it does not exist by default.  This file contains only the options that are passed to launchctl (not the launchctl command itself).  Note that on OS X 10.5.8, the maxproc value has a maximum hard limit of 2500.
[laptop:~ user]$ sudo vi /etc/launchd.conf
[laptop:~ user]$ cat /etc/launchd.conf

limit maxproc 1024 2048
limit maxfiles 1024 unlimited
  • You may also create a configuration file at $HOME/.launchd.conf, which can be used to set the per-user values.  But you need this only if you want smaller default values than what the system allows the user-specific launchd to inherit.

For additional information on launchctl settings, check out the following man pages.

[laptop:~ user]$ man launchd.conf
[laptop:~ user]$ man launchctl
[laptop:~ user]$ man getrlimit

System Administrator Appreciation Day

Listening to yesterday’s broadcast of the Marketplace radio program on NPR this morning (as a podcast), I heard the following mentioned during the “Datebook for July 31, 2009” segment of the program.

And bring some goodies for the IT department. You need those folks for the health of your server, firewall and computer stuff. It’s System Administrator Appreciation Day.

Considering that I was in the office until 2 AM last night, I find it particularly serendipitous.  If you’re a system administrator, you already know (too well) the propensity for late nights and working weekends.  For those of you not familiar, here’s a quick synopsis of my evening’s circumstances to help you understand how it all went down.

Yesterday afternoon, my office lost power when a car crash brought down a neighborhood utility pole.  As we waited for utility power to be restored, our server room’s UPS kept the battery power rolling to the network, the wireless access points, and the servers.  The overhead lights were out, but the Internet was still on.

When the UPS reported 25 minutes of power remaining, we decided to shut down the non-critical servers.  At 10 minutes of power remaining, we began to shut down all critical servers.  Unfortunately, it was not enough.  About 40 minutes after the outage started, all remaining equipment went dark.  When the power was restored another 20 minutes after that, we discovered that the Cisco Catalyst switch was dead.  The supervisor card on the Cat was unable to boot the switch properly, and it would be 4 hours before a replacement would be available on-site.

Some of you might point out that we could have designed better redundancy into our office network.  And yes, we could have.  But it all comes down to balancing cost and risk.  All of our production data centers (i.e. live website presence) are configured with redundant sets of A/B switches.  However, our office network is not.  A production site outage can potentially impact our website revenue to the tune of $1,000,000 or more, depending on the timing and duration.  An office outage, on the other hand, represents lost productivity at a cost that is an order of magnitude smaller.  And for this reason, we do not have a redundant Catalyst switch in the office, but we do have a Cisco support contract with a 4-hour response turn-around time.

While waiting for the new supervisor card to arrive, we began the process of bypassing the larger Catalyst switch with a smaller 24-port switch, patching in the most critical servers (e.g. email, VPN, phones, and various other backend processing).  The new card arrived later that evening, and we then began the process of getting the larger Catalyst back online and reconfigured.  We eventually moved all the bypassed servers and devices back to the original switch, brought up all remaining servers and storage, and fixed all the other little things that break during an unplanned outage.

When it was all done, the time was 2 AM.

I’m not complaining about my evening, because it’s what I do.  It’s my job, and after all, I love my job.  But it is the nature of the work.  And I hope this story helps you to gain a little insight into what we system administrators do.

So, on the last Friday in July, put a smile on a system administrator’s face, and send a brief mention of thanks or appreciation for all the work they do.


DNS Cache Poisoning: Testing & Verifying the Patch

It seems there is some confusion surrounding how to test for the DNS flaw, and/or confirming that a patch is working.

Unfortunately, even if your DNS server is patched, it may not be entirely safe. The use of NAT may be interfering: many NAT devices reduce the randomness of the source UDP port of queries. More about the NAT problem can be found in my other posting.

Here are my suggestions for testing your DNS. Section 1 (“browser”) and Section 2 (“commandline”) are good tests for checking

  1. that recursive queries are not affected as they leave your network (e.g. via NAT), and
  2. that an upstream DNS server (e.g. one that your DNS server may forward all requests to) is not vulnerable.

1. From a Web Browser

You can test the path of DNS servers from your browser to a test server, using one of the following web pages.

2. From a Unix/Cygwin Commandline

If you have access to a Unix or Cygwin commandline on a computer whose DNS path you want to test, you can perform a special DNS query. You’ll need either “dig” or “nslookup” installed.

If you have dig available, use the following command, which is taken from this page at DNS-OARC.

dig +short TXT

If you want to be more specific about the DNS server you wish to test, specify it on the dig commandline as follows. For example, if you run this command on the same server that runs your DNS software, you should use “@localhost” or “@” for the query. Otherwise, specify the hostname or the IP address of the server you want to test (e.g. “” or “@”).

dig +short @ TXT

If you have only nslookup available (e.g. at a Windows DOS prompt), then you can try the following. The text you type is shown after each prompt.

C:\Documents and Settings\gitm> nslookup
Default Server:
> set type=txt
Default Server:
[your results show up here]
> exit

To interpret your results, check this page at DNS-OARC. Essentially, you are looking for a high standard deviation, as reported by a “GREAT” result. A result of “POOR” indicates a problem.

3. TCP Dump

This section will help you verify that your DNS server is patched. I’ll use “tcpdump” on Linux as an example, but you can also use “snoop” on Solaris. Other Unix operating systems will have the same or a similar tool. The “tcpdump” command must be run by a system administrator who has root user access on the DNS server.

First, we’ll initiate a tcpdump session on the server that is running the DNS software. In my case, I am using ISC Bind. Additionally, my DNS server receives about 10,000 queries per second, so I want to view only the relevant queries to my test. For that, I’ll need to filter the output based on the destination server I will be querying. Note: keep the IP address in the command.

tcpdump -nn host

Now, in another terminal window, type the following command, which is similar to the one used by the DOXPARA web page listed above. If you run the command on the same server that runs your DNS software, you should use “@localhost” or “@” for the query. Otherwise, specify the hostname or the IP address of the server you want to test (e.g. “” or “@”). The “date” subcommand is used to prevent hostname caching on the DNS server you are testing.

dig @ $(date +%s)
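The $(date +%s) substitution simply yields the current Unix timestamp, so every run of the dig command queries a name that has never been seen before (and therefore cannot be answered from cache). You can see the label change between runs:

```shell
# Each invocation yields a new label one second later, so no two test
# queries use the same hostname.
first=$(date +%s)
sleep 1
second=$(date +%s)
echo "$first vs $second"
```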

The “dig” command should output a series of recursive CNAME lookups.

The “tcpdump” output should look similar to the following. In this example, is the IP address of the DNS server that I am testing, and is the DNS server for the domain.

22:49:42.870301 IP >  28554 [1au] A? (50)
22:49:42.992076 IP    >  28554*- 1/0/0 CNAME[|domain]
22:49:42.992635 IP >  13098 [1au][|domain]
22:49:43.101637 IP    >  13098*-[|domain]
22:49:43.102151 IP >  10725 [1au][|domain]
22:49:43.216196 IP    >  10725*-[|domain]
22:49:43.216671 IP  >  21053 [1au][|domain]
22:49:43.327506 IP    >   21053*-[|domain]
22:49:43.327997 IP >  59611 [1au][|domain]
22:49:43.436943 IP    >  59611*-[|domain]

Each pair of lines in the output represents a query and a response. A query goes to the DNS server, and a reply comes back to my DNS server. The source UDP port of each outgoing query appears just after my DNS server’s address, while the replies are addressed from the standard UDP port (53) for the dns/domain service on the DNS server.

Note the randomness of the source UDP ports (45399, 16585, 41503, 8699, 55354). This indicates that my DNS server is patched. If the source port number were the same for all queries, that would indicate an unpatched server.
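A rough way to quantify this from a capture is to look at the spread of the source ports. Here is a small sketch over the five sample ports above; a healthy resolver should span tens of thousands of ports, while a NAT-squashed one may span only a handful:

```shell
# Compute min, max, and spread of the sampled source ports.
printf '%s\n' 45399 16585 41503 8699 55354 | sort -n | awk '
    NR == 1 { min = $1 }
            { max = $1 }
    END     { print "min:", min, "max:", max, "spread:", max - min }'
```

On this sample, the spread is well over 40,000 ports, which is consistent with a patched server.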

But wait! There’s more.

Just because my DNS server is patched and sending out queries from random UDP source ports, it does not mean I am out of the woods yet. I still need to verify that the source port numbers still look randomized at the destination DNS server at For that, I will need to use one of the test methods (web-based or commandline) described in sections 1 or 2 above.

It’s entirely plausible that a NAT device (e.g. your DSL/cable router, a Cisco router, etc) is rewriting the random source ports to a not-so-random sequence. Many NAT products I have tried will rewrite the random source port numbers to a predictable sequential series of source port numbers.

For example, clicking the “Check My DNS” link on this DOXPARA web page, here’s what the page reported to me (slightly altered to protect my DNS server’s identity).

Your name server, at, may be safe, but the
NAT/Firewall in front of it appears to be interfering with
its port selection policy.  The difference between largest
port and smallest port was only 5.
Please talk to your firewall or gateway vendor -- all are
working on patches, mitigations, and workarounds. TXID=21918 TXID=55556 TXID=45625 TXID=8942 TXID=359

Note how the source UDP ports reported are nearly sequential. In this case, I will need to get a patch or firmware update from the manufacturer of my NAT device. Unfortunately, they do not yet have a patch available.