Free SSL Certificate from Let’s Encrypt for Nginx

See my previous post in the unmanaged VPS (virtual private server) series, Automate Remote Backup of WordPress Database, on how to create and schedule a Windows batch script to back up the WordPress database.

Update: Let’s Encrypt has made changes that have broken my pre-existing certificate renewal. First, the new certbot package has replaced the original letsencrypt package. Second, the certbot package does not recognize the pre-existing certificates in the “/etc/letsencrypt” directory (generated by the old letsencrypt package). If you have the old letsencrypt package, I recommend deleting it, the “~/.local” directory, and the “/etc/letsencrypt” directory before installing the new certbot package. I’ve updated the instructions below to use the new certbot package.

I’ve been meaning to enable HTTPS/SSL access to this WordPress site since I heard that Google had started giving ranking boosts to secure HTTPS/SSL websites; however, the thing stopping me was the expensive yearly cost of an SSL certificate. (Unfortunately, self-signed SSL certificates wouldn’t work because browsers throw security warnings when encountering them.) But now, there is a new certificate authority, Let’s Encrypt, which provides free SSL certificates.

The only catch is that the SSL certificates expire after 90 days. But that’s okay because Let’s Encrypt provides a command-line client which can create and renew the certificates. Some elbow grease and a weekly Cron job should automatically renew any expiring SSL certificates.

Note: StartSSL provides a free, non-commercial SSL certificate which can be manually renewed after a year. I learned about it at the same time as Let’s Encrypt, but decided to go with Let’s Encrypt because of the possibility of automation and no restrictions on usage.

Below are the steps I took to create and install the SSL certificates on the Nginx server running on my unmanaged DigitalOcean VPS.

Create SSL Certificate

Ubuntu didn’t have a certbot package at the time, so we will need to build the client from source. Secure shell into your VPS server and run these commands:

# Install Git version control (alternative to Subversion)
sudo apt-get install git

# Double-check that Git is installed by getting version
git --version

# Download the certbot client source code (to the home directory)
cd
git clone https://github.com/certbot/certbot

# Install dependencies, update client code, build it, and run it using sudo
cd certbot
./certbot-auto --help
# Input the sudo password if requested to

# Get a SSL certificate for mydomain.com
./certbot-auto certonly --webroot -w /var/www/wordpress -d mydomain.com -d www.mydomain.com
# Input your email address (for urgent notices and lost key recovery)

# Get another SSL certificate for mydomain2.com
./certbot-auto certonly --webroot -w /var/www/mydomain2 -d mydomain2.com -d www.mydomain2.com

Note: The Let’s Encrypt Ubuntu Nginx install instructions suggest using the wget command to get the latest available certbot version. I think “git clone” is a better method because it provides a more powerful way to update the certbot package, as we will see later.

Running the “certbot-auto” script will do the following:

  1. Install any missing dependencies including the GCC compiler and a ton of libraries.
  2. Update the certbot client source code.
  3. If necessary, build or update the certbot client, located at “~/.local/share/letsencrypt/bin/letsencrypt”. (The name switch from letsencrypt to certbot is not complete and thus a little confusing.)
  4. Run the certbot client using sudo; thus, you may be prompted to input the sudo password.

Note: If you want to speed it up by avoiding the extra update steps, you can just run the “sudo ~/.local/share/letsencrypt/bin/letsencrypt” command directly, instead of the “~/certbot/certbot-auto” script.

When running the "certbot-auto certonly --webroot" certificate generation option, the following (with some guesses on my part) occurs:

  1. The certbot client will create a challenge response file under the domain’s root directory (indicated by the “-w /var/www/wordpress” parameter); for example, “/var/www/wordpress/.well-known/acme-challenge/Y8a_KDalabGwur3bJaLfznDr5vYyJQChmQDbVxl-1ro”. (The sudo access is required to write to the domain’s root web directory.)
  2. The certbot client will then call the letsencrypt.org ACME server, passing in necessary credential request info such as the domain name (indicated by the “-d mydomain.com -d www.mydomain.com” parameters).
  3. The letsencrypt.org ACME server will attempt to get the challenge response file; for example, by browsing to “http://mydomain.com/.well-known/acme-challenge/Y8a_KDalabGwur3bJaLfznDr5vYyJQChmQDbVxl-1ro”. This verifies that the domain has valid DNS records and that you have control of the domain.
  4. The letsencrypt.org ACME server passes the generated SSL certificate back to the certbot client.
  5. The certbot client writes the SSL certificate to the “/etc/letsencrypt” directory, including the private key. If you can only backup one thing, it should be the contents of this directory.
  6. The certbot client deletes the contents of the “.well-known” directory; for example, leaving an empty “/var/www/wordpress/.well-known” directory once done. You can manually delete the “.well-known” directory.

Note: It is possible to create a multi-domain certificate containing more than one domain, but I recommend keeping it simple. Multi-domain certificates are bigger to download and may be confusing to the user when viewed.
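
Before requesting a certificate, you can sanity-check that files under the web root’s ".well-known" path are reachable over plain HTTP. Below is a quick test of my own (assuming curl is installed and the domain’s DNS already points at your server):

# Create a throwaway test file under the challenge path
sudo mkdir -p /var/www/wordpress/.well-known/acme-challenge
echo "challenge-test" | sudo tee /var/www/wordpress/.well-known/acme-challenge/test.txt

# Fetch it over plain HTTP; you should see "challenge-test" echoed back
curl http://mydomain.com/.well-known/acme-challenge/test.txt

# Remove the test file once satisfied
sudo rm /var/www/wordpress/.well-known/acme-challenge/test.txt

If the fetch fails, check that the server block’s root matches the web root you plan to pass with the "-w" parameter and that nothing in your configuration blocks hidden "." paths.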

Configure Nginx

The directory where the SSL certificates are located, “/etc/letsencrypt/live”, requires root user access, so we will need to copy the certificate files to a directory which Nginx can read.
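
Note: If the “/etc/nginx/ssl” directory does not already exist on your server (I originally created it when setting up a self-signed certificate in an earlier post), create it first:

# Create the directory that will hold the copied certificate files
sudo mkdir -p /etc/nginx/ssl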

# Copy out the SSL certificate files
sudo cp /etc/letsencrypt/live/mydomain.com/fullchain.pem /etc/nginx/ssl/mydomain-fullchain.pem
sudo cp /etc/letsencrypt/live/mydomain.com/privkey.pem /etc/nginx/ssl/mydomain-privkey.pem
sudo cp /etc/letsencrypt/live/mydomain2.com/fullchain.pem /etc/nginx/ssl/mydomain2-fullchain.pem
sudo cp /etc/letsencrypt/live/mydomain2.com/privkey.pem /etc/nginx/ssl/mydomain2-privkey.pem

# Double-check that the files exist and are readable by group and others
ls -l /etc/nginx/ssl

Because our website will behave the same under HTTPS as under HTTP, we will only need to make minimal changes to the existing HTTP server configuration.

Edit Nginx’s server block file for mydomain (“sudo nano /etc/nginx/sites-available/wordpress”) and add the “#Support both HTTP and HTTPS” block to the “server” section:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6

        #Support both HTTP and HTTPS
        listen 80;
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/mydomain-fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/mydomain-privkey.pem;

        root /var/www/wordpress;
        #index index.php index.html index.htm;
        index index.php;

        # Make site accessible from http://localhost/
        #server_name localhost;
        server_name mydomain.com www.mydomain.com;

If you wish to have this server block be the default to serve (for both HTTP and HTTPS) when your server is accessed by IP address, change the “listen” directives to the following:

server {
        #Support both HTTP and HTTPS
        listen 80 default_server; # default_server replaces older default
        listen 443 ssl default_server;
}

Note: Make sure that only one of your server block files is set to be the default for IP address access. Also, when you browse to the IP address using HTTPS, you will still get a security warning because the IP address won’t match the domain name.
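
A quick way to confirm that only one enabled server block claims default_server (adjust the path if your server block files live elsewhere):

# List every default_server occurrence across the enabled server blocks
grep -rn "default_server" /etc/nginx/sites-enabled/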

Repeat the above modifications for any other domain’s server block file.

Note: If you want HTTPS to behave differently than HTTP, leave the HTTP server section alone, uncomment the “# HTTPS server” section (at bottom of the server block file) and make your updates there.

Once you are done updating the server block files, tell Nginx to reload its configuration:

# Reload Nginx service
sudo service nginx reload

# If Nginx throws an error, look at the error log for clues
sudo tail /var/log/nginx/error.log

To test, browse to your server using the HTTPS protocol; for example, “https://mydomain.com/”.
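
You can also test from the command line (assuming curl is installed). A normal response header with no certificate complaints means HTTPS is working:

# Fetch only the response headers over HTTPS
curl -I https://mydomain.com/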

Renew SSL Certificate

To renew all your SSL certificates, run this command 30 days before the expiration date:

~/certbot/certbot-auto renew

Note: If you run it earlier than 30 days before the expiration date, no action will be taken.
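
If you want to see exactly when a certificate expires, one option (assuming openssl is installed, which it normally is) is to query the live site:

# Print the certificate's validity window for mydomain.com
echo | openssl s_client -servername mydomain.com -connect mydomain.com:443 2>/dev/null | openssl x509 -noout -dates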

Cron Job To Renew

To automate the SSL certificate renewal, we will use a Cron job that runs weekly under the root user. Running the job weekly is sufficient to guarantee a certificate renewal within the 30 days before expiration window.

First, create a script by running “nano ~/certbot_cron.sh” and inputting the content below (make sure to replace “mynewuser” with your actual username):

#!/bin/bash

# Run this script from root user's crontab

# Log file
LOGFILE="/tmp/certbot-renew.log"

# Print the current time
echo $(date)

# Try to renew certificates and capture the output
#/home/mynewuser/certbot/certbot-auto renew --no-self-upgrade > $LOGFILE 2>&1
/home/mynewuser/certbot/certbot-auto renew > $LOGFILE 2>&1

# Check if any certs were renewed
if grep -q 'The following certs have been renewed:' $LOGFILE; then
  # Copy SSL certs for Nginx usage
  cp /etc/letsencrypt/live/mydomain.com/fullchain.pem /etc/nginx/ssl/mydomain-fullchain.pem
  cp /etc/letsencrypt/live/mydomain.com/privkey.pem /etc/nginx/ssl/mydomain-privkey.pem
  cp /etc/letsencrypt/live/mydomain2.com/fullchain.pem /etc/nginx/ssl/mydomain2-fullchain.pem
  cp /etc/letsencrypt/live/mydomain2.com/privkey.pem /etc/nginx/ssl/mydomain2-privkey.pem

  # Reload Nginx configuration
  service nginx reload
fi

Notes about the script:

  • To test the script, run this command:
    sudo sh ~/certbot_cron.sh

    If none of your certificates are renewed, you won’t get a “Reloading nginx configuration” message (outputted by the “service nginx reload” command).

  • The "--no-self-upgrade" argument flag can be passed to certbot to prevent certbot from upgrading itself. At first, because we will be running the script under the root user, I hesitated to allow certbot to update itself with root permissions automatically. Avoiding the update seemed more secure and was definitely faster to execute. However, without the update, by the time the three months had passed, certbot was hopelessly outdated and would not successfully renew the certificates. So, I had to allow certbot to automatically self-upgrade with root privileges to avoid having to do manual updates.
  • To simulate certificate renewals, use the "--dry-run" argument flag to simulate a successful renewal. Change the "certbot-auto" command in the script to the following:
    /home/mynewuser/certbot/certbot-auto renew --dry-run > $LOGFILE 2>&1

    When you re-run the “certbot_cron.sh”, you will get the “Reloading nginx configuration” message. Don’t forget to remove this change from the script once you are done testing.

  • The script copies out all the SSL certificates, instead of checking for and only copying certificates which have been modified. I don’t think the effort to do the latter is worth it.
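
    For reference, a rough sketch of what the selective copy might look like (my own untested variation, assuming the same file names and layout used above):

    # Hypothetical refinement: copy a domain's certificate files only if the
    # live copy is newer than the copy Nginx is using
    for domain in mydomain.com mydomain2.com; do
      name="${domain%.com}"    # e.g. "mydomain"
      for file in fullchain privkey; do
        src="/etc/letsencrypt/live/${domain}/${file}.pem"
        dst="/etc/nginx/ssl/${name}-${file}.pem"
        # -nt is true when src is newer than dst (or dst does not exist yet)
        if [ "$src" -nt "$dst" ]; then
          cp "$src" "$dst"
        fi
      done
    done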

Add the script to the root user’s Crontab (Cron table) by running these commands:

# Edit the root user's crontab
sudo crontab -e
  # Insert this line at the end of the file
  @weekly sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1

# List content of root user's Crontab
sudo crontab -l

# Find out when @weekly will run; look for cron.weekly entry
cat /etc/crontab

Note: Instead of “@weekly”, you may wish to set a specific time that works best for your situation. Refer to these Cron examples for info on how to set the time format.
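
For example, a crontab entry that runs the script every Sunday at 4:30 AM (an arbitrary time picked for illustration) would look like this:

# m h dom mon dow   command
30 4 * * 0 sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1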

If you want to test the Cron job, do the following:

# Delete the script's output log file
sudo rm /tmp/certbot-renew.log

# Change "@weekly" to "@hourly" in the Crontab
sudo crontab -e
  # Edit this line at the end of the file
  @hourly sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1

# Wait more than an hour

# Check if output log files were generated
cat /tmp/certbot-renew.log
cat /tmp/certbot-cron.log

# Change "@hourly" back to "@weekly" in the Crontab
sudo crontab -e
  # Edit this line at the end of the file
  @weekly sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1

Update Certbot

If you have chosen to disable the certbot self-upgrade in the cron script (using the "--no-self-upgrade" argument flag), I recommend manually running the “certbot-auto” command (without any arguments) once a month to make sure that certbot is up-to-date.

If you find that the “certbot-auto” command is unable to self-update or doing the self-update doesn’t solve an issue, you can try to update using the “git pull” command.

# Update the certbot source code using git
cd certbot
git pull

# See status of certbot source code and version
git status

Backup SSL Certs & Keys

Note: Re-issuing the SSL certificates (because of the switch from the letsencrypt to the certbot package) proved to be painless and fast. Thus, I’ve realized that backing up the “/etc/letsencrypt” directory is not necessary. If something goes wrong, just re-issue the SSL certificates.

In Automate Remote Backup of WordPress Database, we created a Windows batch file to download MySQL dump files and other files from the server. Let us add an additional command to that Windows batch script to download a zip archive of the “/etc/letsencrypt” directory.

Originally, I added an ssh command to the Windows batch file to zip up the “/etc/letsencrypt” directory. Unfortunately, accessing that directory requires sudo privileges, which causes the script to prompt for the sudo password. I looked at two solutions for running sudo over SSH without interruption. The first involved echo’ing the sudo password (in plaintext) to the ssh command. The second involved updating the sudoers file to allow running a particular file without requiring the password. I didn’t actually test the two solutions, but they didn’t look secure, so I decided to go with a very simple solution: run the zip command in the “~/certbot_cron.sh” script.

First, edit the Cron script (“nano ~/certbot_cron.sh”) and add the tar zip command after reloading the Nginx server:

# Check if any certs were renewed
if grep -q 'The following certs have been renewed:' $LOGFILE; then
  ...

  # Reload Nginx configuration
  service nginx reload

  # Zip up the /etc/letsencrypt directory
  tar -zcvf /tmp/letsencrypt.tar.gz /etc/letsencrypt
fi

Note: We are using the tar command (with gzip compression via the -z flag), instead of the zip command, because zip doesn’t preserve the symbolic links under the “/etc/letsencrypt/live” directory correctly.
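
You can confirm that the symbolic links under "live" were preserved by listing the archive contents (symlinks show up with an "l" file type and an arrow pointing at their target):

# List the archive contents without extracting anything
tar -tzvf /tmp/letsencrypt.tar.gz | head -20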

There is a security issue because “/tmp/letsencrypt.tar.gz” is readable by others; if this is a concern, you can adjust access permissions by adding the following commands to the “~/certbot_cron.sh” script:

  ...

  # Zip up the /etc/letsencrypt directory
  tar -zcvf /tmp/letsencrypt.tar.gz /etc/letsencrypt

  # Change owner and restrict access to owner
  chown mynewuser /tmp/letsencrypt.tar.gz
  chmod 600 /tmp/letsencrypt.tar.gz
fi

Second, edit the Windows batch script file “C:\home\myuser\backups\backup_wordpress.bat” and add the following to the end:

REM Download the /etc/letsencrypt tar file

mkdir \home\myuser\backups\letsencrypt
cd \home\myuser\backups\letsencrypt
rsync.exe -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/tmp/letsencrypt.tar.gz %date:~10,4%.%date:~4,2%.%date:~7,2%-letsencrypt.tar.gz

And we are done with the backup. In the future, if you ever need to restore the contents of “/etc/letsencrypt”, upload the tar archive to the server’s “tmp” directory and run the following on the server:

# Unzip the tar file
cd /tmp
tar -xvzf letsencrypt.tar.gz
# Will uncompress everything to /tmp/etc/letsencrypt

# Copy the contents to its original location
sudo cp -r /tmp/etc/letsencrypt /etc/
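
To double-check that the restore worked and the symbolic links under "live" survived the round trip, list one of the domain directories (root access is needed to read it):

# Each .pem entry should be a symlink pointing into /etc/letsencrypt/archive
sudo ls -l /etc/letsencrypt/live/mydomain.com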

Redirect HTTPS to HTTP

If you have a domain which you don’t wish to provide HTTPS access to (i.e., go through the trouble of creating a SSL certificate for), you can configure Nginx to redirect HTTPS requests to HTTP. Uncomment the “# HTTPS server” section in the domain’s server block file and add a redirect statement:

# HTTPS server
#
server {
        listen 443 ssl;
        server_name monieer.com www.monieer.com;

        #To redirect, use return instead of no longer recommended rewrite:
        #rewrite ^(.*) http://$host$request_uri;
        return 302 http://$host$request_uri;
}

Because we did not set the “ssl_certificate” and “ssl_certificate_key” values in the server block above, Nginx will use the default_server’s SSL certificate instead. Unfortunately, the browser will show a security warning because the domain name in the default_server’s SSL certificate won’t match the requested domain name. If the user agrees to proceed, the redirection to non-secure HTTP access will correctly take place.


Automate Remote Backup of WordPress Database

See my previous post, Subversion Over SSH on an Unmanaged VPS, to learn how to set up Subversion on Ubuntu (running on a DigitalOcean VPS). In this post, we will learn how to create a script to back up the WordPress database and copy it from the server to our local Windows client. We’ll also look at copying other files on the server to our local client’s hard drive. Finally, we will automate the execution of the backup script to run at regular intervals on the local client.

Install Windows SSH Tools

The backup script will use Unix tools, like ssh (secure shell) and rsync (remote sync), which are not included with Windows. Fortunately, there are free distributions of these tools for Windows. Let’s install them.

Get the ssh and rsync tools:

  1. Download the version of DeltaCopy without the installer (see “Download Links” located top-right).
  2. Unzip the downloaded “DeltaCopyRaw.zip” to “C:\Program Files (x86)\DeltaCopy”.
  3. Add DeltaCopy to the execution path and set the home directory (where we will save the public/private RSA key pair files later):
    1. Open up the “System Properties” dialog by running “Edit system environmental variables” (or “sysdm.cpl”). Click on the Advanced tab. Click on the “Environmental Variables” button near the bottom to launch the “Environmental Variables” dialog.
    2. In the “Environmental Variables” dialog, select “Path” under “System variables” and click the “Edit…” button.
    3. Add “;C:\Program Files (x86)\DeltaCopy” (without the double-quotes) to the end of the existing “Variable value” field. Click Ok to save the change.
    4. Back in the “Environmental Variables” dialog, click “New…” button under “System variables”.
    5. Set “Variable name” to “HOME” and “Variable value” to your home directory like “C:\home\myuser”. Click Ok to save the change.
    6. Click Ok to close the “Environmental Variables” dialog and “Ok” again to close the “System Properties” dialog.

Get the ssh-keygen (secure shell authentication key generation) tool:

  1. Download the free version of cwRsync (click on the Get tab).
  2. Unzip the downloaded “cwRsync_5.5.0_x86_Free.zip” to a temporary directory like “C:/temp/cwRsync”. We will only need to use ssh-keygen once to generate the public/private RSA key pair.
  3. Besides ssh-keygen, cwRsync includes ssh and rsync, which we won’t use; cwRsync’s ssh and rsync are not as Windows-compatible as DeltaCopy’s. For example, cwRsync’s ssh and rsync require that the RSA key pair files stored on Windows have Unix-like 0600 permissions, which then requires the chmod tool (ironically included with DeltaCopy, but not cwRsync). DeltaCopy doesn’t have such issues. (Both DeltaCopy and cwRsync are based on a tiny part of Cygwin, and DeltaCopy is the most Windows-friendly option of the three.)

Get the scp (secure copy) tool:

  1. Download the “pscp.exe” file from PuTTY.
  2. Move it into the “C:\Program Files (x86)\DeltaCopy” directory.

Create the “.ssh” directory under the home directory and test the environmental variables by running the “Command Prompt” (or “cmd.exe”) and inputting these commands. (Don’t type the comment lines below that start with the # pound character.)

# Test the HOME variable
echo %HOME%
c:\home\myuser

# Create the .ssh directory
mkdir %HOME%\.ssh

# Test the PATH variable; ssh should be found and executed
ssh -p 3333 mynewuser@mydomain.com

Server, Trust Me

To enable the backup script to run without requiring password input from the user, we need to establish trust between the remote server and the local client. To do so, we will create a client public/private RSA key pair and configure the server to trust the client public key. Tools like ssh and rsync can then authenticate against the server using the RSA key pair to avoid requiring the user to input a password.

Open the “Command Prompt” and do the following:

# Go the directory where we unzipped the ssh-keygen tool to
cd /temp/cwRsync/bin

# Generate client RSA key pair (for security, 2048 bits is the new minimum)
ssh-keygen -b 2048

Generating public/private rsa key pair.
# When prompted, select the current directory to write to;
# if you keep the default, it will fail
Enter file in which to save the key (/home/myuser/.ssh/id_rsa): ./id_rsa
# Keep the default; do not input a passphrase
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ./id_rsa.
Your public key has been saved in ./id_rsa.pub.

# Move client RSA key pair to .ssh directory
move ./id_rsa* /home/myuser/.ssh

# Copy client public key to the server
pscp -P 3333 /home/myuser/.ssh/id_rsa.pub mynewuser@mydomain.com:/home/mynewuser/

# Secure shell into the server; you will be prompted for password
ssh -p 3333 mynewuser@mydomain.com

# On server, double-check that we are in the home directory
pwd
/home/mynewuser

# Create the .ssh directory
mkdir .ssh

# Create authorized_keys file and append the client public key to it
cat id_rsa.pub >> .ssh/authorized_keys

# Delete the client public key (no longer needed)
rm id_rsa.pub

# Optionally, restrict access to .ssh directory
chmod -R 700 .ssh

# Exit the secure shell
exit

# Secure shell into the server again; you won't be prompted for the password
ssh -p 3333 mynewuser@mydomain.com

On the last secure shell attempt, you should be able to log into the server without having to input a password.

Create and Schedule Backup Script

Create a file “C:\home\myuser\backups\backup_wordpress.bat” and input the following content:

@echo off

REM Display current date and time

date /t
time /t

REM Dump the WordPress database
REM The -v verbose flag is optional

ssh -p 3333 -v mynewuser@mydomain.com "mysqldump -uwordpress -pmypassword wordpress | gzip -c > /tmp/wordpress.sql.gz"

REM Download the database dump file to local directory.
REM Using rsync over ssh to avoid the need for a rsync server on VPS.
REM The %date:~10...% below helps to date-stamp the file,
REM resulting in a filename like 2015.04.23-wordpress_4.4.2.sql.gz.

mkdir \home\myuser\backups\wordpress
cd \home\myuser\backups\wordpress
rsync -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/tmp/wordpress.sql.gz %date:~10,4%.%date:~4,2%.%date:~7,2%-wordpress_4.4.2.sql.gz

REM Sync all Nginx server block files to local directory
REM Note: Be careful, the --delete flag allows Rsync to delete local files
REM       if they do not exist on the server also!

mkdir \home\myuser\backups\nginx
cd \home\myuser\backups\nginx
rsync -vrt --progress --delete -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/etc/nginx/sites-available/* .

REM Rsync may not set local permissions correctly, so we'll fix with DeltaCopy's chmod.
REM Note: chmod fails for files with Windows-style perms already set, but that is ok.

chmod 660 *

You may wish to add additional rsync commands to download the WordPress configuration file (“/var/www/wordpress/wp-config.php”) and pictures uploaded to WordPress (“/var/www/wordpress/wp-content/uploads/*”).
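
For example, the extra commands might look like the following (hypothetical additions that mirror the rsync commands above; adjust the paths to your setup):

REM Download the WordPress configuration file
mkdir \home\myuser\backups\wordpress-config
cd \home\myuser\backups\wordpress-config
rsync -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/var/www/wordpress/wp-config.php .

REM Download the pictures uploaded to WordPress
mkdir \home\myuser\backups\wordpress-uploads
cd \home\myuser\backups\wordpress-uploads
rsync -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/var/www/wordpress/wp-content/uploads/* .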

Schedule the script:

  1. Run the “Task Scheduler”.
  2. Click on “Create Basic Task…” on the right sidebar.
  3. Input a name like “Backup WordPress”. Click Next.
  4. Select your schedule. I recommend “Weekly”. Click Next. Select a specific day and time that works for you. Click Next.
  5. Keep Action as “Start a program”. Click Next.
  6. Input “C:\home\myuser\backups\backup_wordpress.bat” into the “Program/script” box.
  7. Input “> C:\home\myuser\backups\backup_log.txt 2>&1” into the “Add arguments (optional)” box. This will redirect any standard or error outputs from the “backup_wordpress.bat” to “backup_log.txt” for review at your convenience.
  8. Click Next and Finish.
  9. To manually test, click on “Task Scheduler Library” on the left sidebar, right-click on the “Backup WordPress” task in the top-center panel (if you don’t see it, click on the Refresh action to the right first), and select Run.

Getting PuTTY to Use Private/Public Key Pair

You may notice that the PuTTY pscp tool still requires a password to be inputted. Unfortunately, PuTTY does not use the OpenSSH RSA key format or the %HOME% environmental variable.

If you wish to use the pscp tool in the backup script, we’ll need to convert the RSA private key to the PPK (PuTTY Private Key) format:

  1. Download the “puttygen.exe” file from PuTTY.
  2. Run it.
  3. Go to menu Conversions and select “Import key”. Browse to the client RSA private key at “C:/home/myuser/.ssh/id_rsa”.
  4. Click the “Save private key” button. Answer Yes to the “Are you sure you want to save this key without a passphrase to protect it?” dialog.
  5. Input filename “id_rsa.ppk” and save to the same location as the original RSA key pair files.

When running the pscp tool in the script, use the “-i” option to tell it where to find the PPK file like so:

REM Copy the WordPress config file to local directory

cd \home\myuser\backups\wordpress
pscp -P 3333 -i /home/myuser/.ssh/id_rsa.ppk mynewuser@mydomain.com:/var/www/wordpress/wp-config.php .

Hopefully the above will help you to sleep well, knowing that your WordPress data is safe.

See my followup post, Free SSL Certificate from Let’s Encrypt for Nginx, on how to install a free SSL certificate for HTTPS access and as a result, maybe give your Google ranking a boost.

DigitalOcean After A Year: Still Good

Two weekends ago, on a Friday, my droplet was automatically upgraded to DigitalOcean‘s new cloud. I had received an email about the upgrade but had ignored it, believing that the upgrade would go smoothly. I was on a trip that Friday and the weekend, so did not check my website until Monday morning. Unfortunately, my website was unreachable and had been so since Friday.

Droplet Up, Website Down

I logged into DigitalOcean and the web interface said that my droplet was up and working fine. However, I could not ping it or secure shell to it. DigitalOcean’s Console Access web interface did work and showed the necessary processes running on my droplet. The droplet was working fine, but network access to it appeared to be broken.

I contacted support and was eventually routed to second level support (an engineer) who told me that I had to manually power off (run “sudo poweroff” on the command line) and then power on the droplet (using DigitalOcean’s web interface). This fixed the network connection issue and my website was back online. Note that doing a “shutdown -r” command or a “Power Cycle” (using the web interface) did not fix the connectivity problem.

DigitalOcean support was very responsive. Of the three support cases I’ve opened in the year that I’ve been with them, first line support had responded promptly. Of course, support did make use of canned responses (which I didn’t object to because it made sense to filter out beginners). Though I was vexed by the network connectivity issue (my website was offline for 3 days) and my irritation showed in my communications, the support staff always remained very polite and gracious.

Doing such a system-wide upgrade without checking network connectivity to affected droplets concerns me. Checking that upgraded droplets are up and reachable would have been my first validation test after the upgrade, instead of putting the burden on the customer to make sure everything is okay. Then again, this expectation might be acceptable for an unmanaged VPS; though I think it is a grey area because the upgrade was initiated by DigitalOcean. For full disclosure, DigitalOcean did provide a manual upgrade process; which in hindsight, I should have taken advantage of. Lesson learned.

Slow and Slowerer

When I configured my droplet a year ago, I was very impressed by the performance. My website loaded pages within 1 second, as opposed to the 2-4 seconds on my previous shared web hosting service. Recently, I would have been very glad to get back my 2-4 seconds page load time.

Over the past few months, I had noticed my website getting slower and slower. Even a simple PHP application I had running (also on the droplet) took longer and longer to process. Like a frog slowly being boiled, I got used to a 4-6 second page load time as being “normal”.

Worse, after my droplet was upgraded, the page load time jumped to 8-9 seconds. I installed the “WP Super Cache” WordPress plugin in a quick-fix attempt to increase performance, and it worked. Once WP Super Cache was activated, page load times dropped back to 4-6 seconds.

You know what they say about quick fixes. A week later, the page load times increased to 8-15 seconds. 15 seconds! I disabled WP Super Cache and page load times dropped to 4-6 seconds. I didn’t understand why but at least, the crisis was averted.

Bottleneck? What Bottleneck?

The performance of any VPS (or shared web hosting) is determined by the allocated CPU power, amount of memory, disk access speed, software stack (programs running), and network traffic. The first three can be collectively known as the hardware or virtual hardware. In my website’s case, the software stack is composed of the Ubuntu operating system, LEMP infrastructure, WordPress and its plugins. And though I would love to say that the slowdown was due to increased network traffic to my website, it wasn’t.

When optimizing for performance, it pays to determine where the bottleneck is. For example, you could waste time optimizing the LEMP (by adding Varnish) or WordPress (by adding the WP Super Cache plugin) when the bottleneck is that you are out of memory (Varnish won’t help) or free disk space (WP Super Cache could actually make this worse with its caching mechanism). Having said that, there are ways to optimize LEMP (and WordPress to a lesser extent) to reduce memory usage; but then, it is usually at the cost of performance.

I contacted DigitalOcean support for help. I got a mostly canned reply back. They stated that the fault wasn’t because of network connectivity, hardware, or over-subscription (where there are too many droplets running on the same physical hardware). They had tested loading a couple of static images from my website, which only took 100-200ms each and proved that the problem was not on their end. The canned reply suggested using sysstat to figure out the problem with the droplet.

Sysstat is a collection of utilities to monitor performance under Linux. Here’s how to install and use sysstat:

# Install sysstat
sudo apt-get install sysstat

# Enable sysstat system metrics collection
sudo vi /etc/default/sysstat
  # Change ENABLE="false" to "true"

# Start sysstat
sudo /etc/init.d/sysstat start

# Check CPU usage
sar -u

# Check memory usage
sar -r

Because we have just started the sysstat process, the CPU and memory checks above will only return the current CPU and memory usage. Sysstat collects system metrics every 10 minutes; so in the future, the “sar” commands above will return the CPU and memory usage collected every 10 minutes in the past. Sysstat has a lot more functionality which I have yet to explore.
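
For example, once sysstat has collected data for a while, you can limit the report to a time window (the window below is arbitrary):

# CPU usage recorded between 8 AM and noon today
sar -u -s 08:00:00 -e 12:00:00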

My favorite performance monitoring tool is the “top” command. It displays a real-time summary of the CPU, memory, and swap usage with a list of the processes consuming the most CPU. (Note that the Ubuntu image from DigitalOcean has swap disabled by default.) The top command allows me to see what is happening on the system as I load a page.

Right off the bat, I noticed that my CPU usage was constantly around 90%, which was a big red flag. After a day of recording, sysstat reported the same 90% average CPU usage. This might explain why the WP Super Cache plugin, which required more CPU and disk access to do the page caching, made the website performance worse. I didn’t recall seeing the CPU that high when I first configured the droplet a year ago (it would have concerned me very much then).

Memory also looked alarming with a 98% usage (493424 used / 501808 total); however, it was a false alarm. Evidently, Linux operating systems like Ubuntu will allocate all the free memory for disk caching. Then, when applications need more memory, they get it from the disk cache. So, the important data to look for is the cache size. Here, the cache size was 28% of total memory (144132 cached Mem / 501808 total), which means only about 2/3 of memory was actually used by applications.

Note: The Linux command to display free memory, “free -m”, supports the same conclusion. Look for the reported “cached” number.

What Is Eating My Hard Drive?

Running the Linux command to report file system disk space usage, “df -h”, indicated that 93% of my 20GB quota was used. I remembered that my droplet used much less than 50% of the 20GB a year ago.

Find the space hogs:

cd /
sudo du -h --max-depth=1
16G    ./var

cd var
sudo du -h --max-depth=1
6.0G   ./www
7.4G   ./lib
2.0G   ./log

Note: If your system’s “du” (estimate file space usage) command does not support the “--max-depth” flag, then you will need to run the command on each directory one by one like so:

sudo du -sh /var/www
sudo du -sh /var/lib
sudo du -sh /var/log

The “/var/www” directory contained my website content so that was a keeper. The “/var/lib” directory contained important system and application files, so we could not just delete anything in it without a lot of caution. The “/var/lib” directory’s large size was caused primarily by the MySQL database file, “/var/lib/mysql/ibdata1”, which we will examine in detail later. I was certain that I could safely delete the archived log files from the “/var/log” directory though.

# Manually rotate log files (basically create new active log files)
sudo /etc/cron.daily/logrotate

# Delete all gzipped archive files
sudo find /var/log -type f -name "*.gz" -delete

# Delete all next-to-be-archived files
sudo find /var/log -type f -name "*.1" -delete

# Double-check size again
sudo du -sh /var/log
563M   /var/log

Strangely, I found the mail directory, “/var/mail”, taking up 150MB. There were a lot of non-delivery notification emails sent to the website address. (I don’t know why but plan to investigate it at a later time.) I was sure that it is also safe to delete those emails.

# Check usage
sudo du -sh /var/mail
156M   /var/mail

# Truncate all mail files to zero size
sudo find /var/mail -type f -exec truncate -s 0 {} \;

# Double-check usage
sudo du -sh /var/mail
4.0K   /var/mail

Note: I did read a recommendation to enable swap on Ubuntu to guard against out of memory errors (because swap allows disk space to be used as additional memory at the expense of performance); however, because I have 1/3 free memory and a performance problem, I don’t think enabling swap is the appropriate solution for my case.
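
If you want to confirm whether swap is enabled on your own droplet, a quick check:

# Lists configured swap devices; no entries means swap is disabled
swapon -s

# Alternatively, the Swap row of the free command should show zero totals
free -m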

Die, WordPress Plugin, You Die!

I strongly believed that the bottleneck was the CPU; the top five most CPU-intensive processes were the “php5-fpm” processes (responsible for executing PHP scripts). So, optimizing LEMP by adding Varnish (an additional HTTP accelerator process) would probably not help, and might even harm the performance further. What could be causing so much CPU usage?

According to Google Analytics, the traffic to my website had not changed significantly this past year. Even if it had, the now roughly 200 visitors per day should not cause such a high CPU usage. I had not changed my website in any way (for example, by adding new plugins). The only changes had been software updates to Ubuntu, WordPress and its plugins.

For LEMP infrastructure issues, the recommended step is to check the log files for errors.

# Linux system log
sudo tail /var/log/dmesg

# Nginx log
sudo tail /var/log/nginx/error.log

# PHP log
sudo tail /var/log/php5-fpm.log

# MySQL log
sudo tail /var/log/mysql/error.log

Looking at the Nginx log was eye-opening because I could see hacking attempts against my website using invalid URLs. However, that could not be the cause of the high CPU usage. There were no other errors or clues in the log files.

For WordPress performance issues, the universally-recommended first step is to disable the plugins and see if that fixes the issue. Rather than disabling all the plugins and re-enabling them one by one, my gut told me that the culprit might be the “WordPress SEO” plugin. When a piece of software gets updates several days in a row, or even twice a day, I know that the software is very buggy. WordPress SEO was guilty of that behavior. Disabling the WordPress SEO plugin resulted in an immediate drop in CPU usage to the 30-50% range. Page load times dropped to 2-3 seconds.

Unfortunately, when I checked a few days later, the CPU was back up to 90% and page load times had increased back to 8-10 seconds. The WordPress SEO plugin was a contributor, but it was not the primary cause of my droplet’s performance issue.

MySQL, What Big Eyes You Have

In addition, the “/var/lib” directory had grown another 1.5GB in size and at a total 9GB, had consumed almost half of my 20GB allocation. Digging further, I found that it was the “/var/lib/mysql/ibdata1” file that had grown to over 6GB. The “ibdata1” file was where MySQL (specifically the InnoDB storage engine) stored the database data and while it can grow, unfortunately it can never decrease in size.

A MySQL query on the database size was necessary to investigate further. Log into MySQL as the root user and run this query to show the sizes of the existing databases:

SELECT table_schema "Data Base Name",
sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
sum( data_free )/ 1024 / 1024 "Free Space in MB"
FROM information_schema.TABLES
GROUP BY table_schema;

I found that my MediaWiki database was over 6GB in size. I had a MediaWiki for personal use. Access to it was restricted by a password-protected directory. I hadn’t used it in a long time (over half a year) so hadn’t paid any attention to it. When I logged into it, I found the main page was blank with a link to an unknown website. A check of the history indicated that multiple unknown revisions had been made to it since February of this year. My MediaWiki had been hacked.

Evidently, someone had gotten past the password-protection and was using the MediaWiki to store 6GB of data. Worse, that someone may have hacked MediaWiki to run their own PHP code (very unlikely but not impossible as I had a very old version of MediaWiki running). This explained the high CPU usage and the low free disk space.

I salvaged my personal info from the MediaWiki (using the history to view old page revisions). I then deleted the MediaWiki database and the directory containing the MediaWiki PHP code. The CPU usage immediately went down to a few percent. Page load time dropped to around one second. Hurrah! (I also changed all my passwords just in case.)

MySQL Database Surgery

To reclaim the 6GB in space used by MySQL’s “ibdata1” file required major surgery. I needed to delete the “ibdata1” file which required deleting and re-creating the WordPress database (and my other personal databases).

Before starting, I recommend configuring MySQL to store each InnoDB table in its own separate file, instead of in the “ibdata1” file, to allow more options to manage drive space usage. Doing this will support the MySQL “Optimize Table” command, which can reduce the table’s file size.

sudo nano /etc/mysql/my.cnf
  # Add "innodb_file_per_table" to the [mysqld] section
  [mysqld]
  innodb_file_per_table

The change above won’t take effect until we restart MySQL.
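
Once the databases have been re-created per the steps below, each InnoDB table lives in its own file, and you can reclaim space from an individual table later with the “Optimize Table” command. A minimal example (assuming the default WordPress table prefix):

# Run inside the mysql client (mysql -u root -p)
mysql> OPTIMIZE TABLE wordpress.wp_posts;
# For an InnoDB table, MySQL will typically report "Table does not support
# optimize, doing recreate + analyze instead", which still rebuilds the table
# and shrinks its file.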

We need to do some steps before and after deleting the “ibdata1” file:

# Dump a backup of "wordpress" database (and any other personal database)
mysqldump -u[username] -p[password] wordpress > /tmp/wordpress.sql

# Delete "wordpress" database (and any other database except "mysql" and "performance_schema")
mysql -u root -p
mysql> drop database wordpress;
mysql> quit

# Stop MySQL server
sudo service mysql stop

# Log into root user (necessary to access "/var/lib/mysql" directory)
su

# Delete subdirectories and files ("ibdata1") under "/var/lib/mysql" except for "/var/lib/mysql/mysql"
cd /var/lib/mysql
ls -I "mysql" | xargs rm -r -f

# Exit root user
exit

# Start MySQL server
sudo service mysql start

# Create "wordpress" database (and any other database)
mysql -u root -p
mysql> create database wordpress;
mysql> quit

# Restore "wordpress" database (and any other database)
mysql -u [username] -p[password] wordpress < /tmp/wordpress.sql

Viewing the “/var/lib/mysql” directory showed a much smaller “ibdata1” file (about 18M). Strangely, my WordPress database was configured to use MyISAM (an alternative storage engine to InnoDB) by default, so it didn’t use the “ibdata1” file. The “/var/lib/mysql/wordpress” directory contained MyISAM .myd storage files. However, my other personal database did use InnoDB and its directory, “/var/lib/mysql/personal_database”, did contain individual InnoDB .ibd storage files (per table).
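
If you are curious which storage engine your own tables use, a query like the following (run inside the mysql client) will show it:

-- List each table and its storage engine for the wordpress database
SELECT table_name, engine FROM information_schema.TABLES WHERE table_schema = 'wordpress';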

WordPress On A Diet

While I was poking around WordPress, I decided to optimize the MySQL database by deleting unnecessary data such as previous versions of posts. Rather than manually truncating database tables myself (a very dangerous, though oddly satisfying pastime), I decided to use the “Optimize Database after Deleting Revisions” plugin, which did exactly what its name said it did.

Before running the “Optimize Database after Deleting Revisions” plugin, back up your WordPress MySQL database. Then do the following to manually optimize your database:

  1. Go to “Settings/Optimize Database” in the WordPress administration.
  2. Configure the options. I checked all the “Delete…” options except for “Delete pingbacks and trackbacks”. I did not enable the Scheduler because I only wish to run this plugin manually when I decide to.
  3. Click the “Save Settings” button.
  4. Click the “Go To Optimizer” button.
  5. Click the “Start Optimization” button.

Thoughts on Hardware Upgrade

Had I not fixed the high CPU usage issue (and it had been a valid issue), the next step would have been to look at options to upgrade the hardware. This would mean upgrading to DigitalOcean’s higher-priced plans or even another VPS provider (I have heard that Linode has better performance overall).

Because my bottleneck was the CPU, I would have had to upgrade to DigitalOcean’s $20/month plan, which includes a 2 core processor. Upgrading to the $10/month plan, which only included a 1 core processor (the DigitalOcean website didn’t say whether it is a faster processor than the 1 core processor in the $5/month plan), would not have fixed my CPU issue. Had my bottleneck been a memory limitation, I would have chosen the $10/month plan, which would have doubled the memory size (1GB versus 512MB).

Thinking outside the box, a cheaper option than the $20/month plan would be to get a second $5/month droplet to run a dedicated MySQL server (hosting the WordPress database). The original droplet would run only the WordPress application and talk to the second droplet’s database. This $10/month option with two $5/month droplets would have two 1 core processors, which might be better than a single 2 core processor! Alas, the MySQL process used only 10-15% of the CPU so removing it from the original droplet would not have made much of a difference.

Hopefully documenting my trials and tribulations above will help you to have an easier time with the performance of your unmanaged VPS.


Subversion Over SSH on an Unmanaged VPS

See my previous post, Upgrade Ubuntu and LEMP on an Unmanaged VPS, to learn how to upgrade LEMP and Ubuntu to the latest versions. In this post, we will install Subversion on the server and learn how to access it using Subversion over SSH (svn+ssh).

Note: Though I’m doing the work on a DigitalOcean VPS running Ubuntu, the instructions may also apply to other VPS providers.

Subversion allows a client to execute svn commands on the server over SSH. As a result, there is no need to have a Subversion server process (svnserve) running or an Apache server configured to support Subversion (mod_dav_svn); one only needs SSH access. Subversion over SSH is simple and sufficient for my needs.

For svn+ssh, access to Subversion is controlled by the Linux user login. To avoid having to input your SSH login password every time you run a svn command, I recommend configuring SSH with public key authentication between your client and the server. For instructions, see the “SSH With Public Key Authentication” section in my previous post, SSH and SSL With HostGator Shared Web Hosting.

To begin, on the server, install the Subversion package and create a repository:

# Install subversion
sudo apt-get install subversion

# Check that subversion is installed
svn --version

# Make a repository directory
sudo mkdir /var/repos

# Create a repository
sudo svnadmin create /var/repos

We need to change the permissions on the newly-created repository directory so that our Linux user can have read-write access. I recommend adding your user to the ‘www-data’ group and giving that group modify access to the repository like so:

# Change mynewuser's primary group to www-data
sudo usermod -g www-data mynewuser

# Check by showing all groups that mynewuser belongs to
groups mynewuser

# Change repository group owner to be www-data
sudo chgrp -R www-data /var/repos

# Add group write permission to repository
sudo chmod -R g+w /var/repos

On the remote client machine, we will use the Subversion client with svn+ssh to access the repository. Because we are using a custom SSH port and the Subversion command line does not provide an option to input the SSH custom port, we have to configure SSH to use the custom port automatically.

Configure SSH to use the custom port when connecting to your server by creating a SSH configuration file located at “~/.ssh/config” (on Mac OS X) or “%HOME%/.ssh/config” (on Windows). Input the following file content:

Host mydomain.com
  Port 3333
  PreferredAuthentications publickey,password

After this, you can run “ssh mynewuser@mydomain.com” instead of “ssh -p 3333 mynewuser@mydomain.com” because SSH will use the custom 3333 port automatically when connecting to “mydomain.com”.

Note: On Windows, I am using the DeltaCopy “ssh.exe” client in combination with the CollabNet “svn.exe” Subversion client. On Mac OS X, I am using the built-in ssh client and the svn client (installed using MacPorts).

To test access to the repository, run the following command on the client:

# List all projects in the repository.
svn list svn+ssh://mynewuser@mydomain.com/var/repos

This command will return an empty line because there are no projects in the repository currently. If you do not see an error, then the command works correctly.

On the client, you can now issue the standard Subversion commands like the following:

# Import a project into the repository
svn import ./myproject svn+ssh://mynewuser@mydomain.com/var/repos/myproject -m "Initial Import"

# The list command should now show your newly-imported project
svn list svn+ssh://mynewuser@mydomain.com/var/repos

# Check out a local, working copy of the project from the repository
svn co svn+ssh://mynewuser@mydomain.com/var/repos/myproject ./myproject2

# View the working copy's info (no need to input the svn+ssh URL once inside the project)
cd ./myproject2
svn info

# Update the project to the latest version
svn update

If you should wish to run Subversion commands locally on the server, you can do so using the “file:///” path instead of “svn+ssh://” URL.

# List all projects in the repository.
svn list file:///var/repos

# Check out a local, working copy of the project from the repository
svn co file:///var/repos/myproject ./myproject2

And we are done. Hopefully the above info will be useful should you ever need to get Subversion working.

See my followup post, Automate Remote Backup of WordPress Database, on how to create and schedule a Windows batch script to backup the WordPress database.

Upgrade Ubuntu and LEMP on an Unmanaged VPS

See my previous post in my unmanaged VPS (virtual private server) series, Nginx HTTPS SSL and Password-Protecting Directory, to learn how to configure Nginx to enable HTTPS SSL access and password-protect a directory. In this post, I will explore how to upgrade LEMP and Ubuntu.

Upgrade LEMP

While one can upgrade each component of LEMP (Linux, Nginx, MySQL, PHP) separately, the safest way is to upgrade all software components installed on the system to ensure that the dependencies are handled properly.

Upgrade all software packages, including LEMP, by running the following commands:

# Update apt-get repositories to the latest with info
# on the newest versions of packages and their dependencies.
sudo apt-get update

# Use apt-get dist-upgrade, rather than apt-get upgrade, to
# intelligently handle dependencies and remove obsolete packages.
sudo apt-get dist-upgrade

# Remove dependencies which are no longer used (frees up space)
sudo apt-get autoremove

Some changes may require a reboot. To initiate a reboot, execute this recommended command:

# Following command equivalent to: sudo shutdown -r now
sudo reboot

Updating PHP-FPM Breaks WordPress

If the PHP-FPM (FastCGI Process Manager for PHP) package is updated, one may be prompted to overwrite the “/etc/php5/fpm/php.ini” and “/etc/php5/fpm/pool.d/www.conf” configuration files with the latest versions. I recommend selecting the option to show the differences, making a note of the differences (hitting the “q” key to quit out of the compare screen), and accepting the latest version of the files.

After the upgrade, WordPress may be broken because the PHP-FPM is no longer configured correctly. To fix this issue, update the two PHP-FPM configuration files with these changes to ensure that Nginx will successfully integrate with PHP-FPM:

# Fix security hole by forcing the PHP interpreter to only process the exact file path.
sudo nano /etc/php5/fpm/php.ini
   # Add the following or change the "cgi.fix_pathinfo=1" value to:
   cgi.fix_pathinfo=0

# Configure PHP to use a Unix socket for communication, which is faster than default TCP socket.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Keep the following or change the "listen = 127.0.0.1:9000" value to:
   listen = /var/run/php5-fpm.sock
   # The latest Nginx has modified security handling which requires
   # uncommenting the "listen.owner" and "listen.group" properties:
   listen.owner = www-data
   listen.group = www-data
   ;listen.mode = 0660

# Restart the PHP-FPM service to make the changes effective.
sudo service php5-fpm restart

Test by browsing to the “info.php” file (containing the call to the “phpinfo” function) to ensure that Nginx can call PHP-FPM successfully. Hopefully, you won’t see the “502 Bad Gateway” error, which would mean that it can’t. If you do, look at the Nginx and PHP-FPM error log files for hints on what could have gone wrong.

sudo tail /var/log/nginx/error.log
sudo tail /var/log/php5-fpm.log
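
If you no longer have an “info.php” test file, you can recreate a throwaway one (remove it after testing, since it exposes server details):

# Create a phpinfo test page in the WordPress document root
echo "<?php phpinfo(); ?>" | sudo tee /var/www/wordpress/info.php

# After browsing to http://mydomain.com/info.php, delete the file
sudo rm /var/www/wordpress/info.php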

Note: If you accidentally select the option to keep the current version of the PHP-FPM configuration files and now wish to get the latest versions, you will need to uninstall and re-install the PHP-FPM service:

sudo apt-get purge php5-fpm
sudo apt-get install php5-fpm

You will then need to update the two PHP-FPM configuration files per the instructions above.

Upgrade May Break iptables

After a recent upgrade, a “problem running iptables” error message was displayed when logging into the droplet. The full error was displayed when I attempted to view the firewall status:

~$ sudo ufw status
ERROR: problem running iptables: modprobe: ERROR: could not insert 'ip_tables': Exec format error
iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Thanks to this page, problem with iptables and Ubuntu 13.10, I found that the issue was caused by the upgrade process switching the kernel to a 64bit version. The problem is that the rest of the system (executables, object code, shared libraries) is 32bit!

# Check the kernel version (x86_64 means 64bit)
~$ uname -a
Linux mydomain 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

# Check the system executables and libraries (32-bit means 32bit!)
~$ file /sbin/init
/sbin/init: ELF 32-bit LSB  shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=c394677bccc720a3bb4f4c42a48e008ff33e39b1, stripped

To fix the 64bit/32bit mismatch, I did the following:

  1. Browse to the DigitalOcean web interface, drill into my droplet, and select “Kernel” configuration (on left panel).
  2. I then selected the 32bit version of the kernel, which is “Ubuntu 14.04 x32 vmlinuz-3.13.0-39-generic” (only difference from the current kernel “Ubuntu 14.04 x64 vmlinuz-3.13.0-39-generic” is changing “x64” to “x32”). Click the Change button.
  3. Power down the droplet by running the “sudo poweroff” command.
  4. Use the DigitalOcean web interface to power on the droplet.

After doing the above, I no longer see the “problem running iptables” error message. Viewing the firewall status now successfully returns the correct set of rules:

~$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
3333/tcp                   ALLOW       Anywhere
80                         ALLOW       Anywhere
25/tcp                     ALLOW       Anywhere
443                        ALLOW       Anywhere
...

Note: Be patient because the DigitalOcean web interface can take a minute to recognize that the droplet is powered off (and then enable the Power On button). Also, the first two times I tried to power on the droplet, I got timeout errors. The 3rd attempt didn’t do anything. Finally, the 4th attempt successfully powered on the droplet. Whew!

Upgrade Ubuntu

The following is particular to my VPS provider, DigitalOcean, but perhaps it may help provide a general idea on what to expect with your own provider when doing an operating system upgrade.

On logging into my server, I saw the following notice:

New release '14.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Your current Hardware Enablement Stack (HWE) is no longer supported
since 2014-08-07.  Security updates for critical parts (kernel
and graphics stack) of your system are no longer available.

For more information, please see:
http://wiki.ubuntu.com/1204_HWE_EOL

To upgrade to a supported (or longer supported) configuration:

* Upgrade from Ubuntu 12.04 LTS to Ubuntu 14.04 LTS by running:
sudo do-release-upgrade

Update: One does not necessarily have to upgrade to the latest Ubuntu release version when prompted to. However, in the case above, support for the 12.04 LTS release had ended so an upgrade to 14.04 LTS was mandatory. Recently, I got a message to upgrade from release 14.04 LTS to 16.04 LTS. However, I don’t plan to upgrade because the 14.04 LTS release will be supported until 2019.

When I ran “sudo do-release-upgrade”, there was a dire warning about running upgrade over SSH (which I ignored) and many prompts to overwrite configuration files with newer versions (which I accepted after taking note of the differences between the new and old versions). There was also a warning about how the upgrade could take hours to complete, though it ended up taking less than 15 minutes. The upgrade ended with a prompt to reboot, which I accepted.

Note: To be safe, one should run the “sudo do-release-upgrade” command from the Console window (accessible through the DigitalOcean web interface), instead of from a SSH session. I was lucky that nothing went wrong with the release upgrade.

After reboot, I updated the two PHP-FPM configuration files, “/etc/php5/fpm/php.ini”
and “/etc/php5/fpm/pool.d/www.conf”, per the instructions in the above section.
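
For reference, here is a minimal sketch of re-applying those two PHP-FPM edits (the same values configured in the original LEMP install post):

sudo nano /etc/php5/fpm/php.ini
   # Make sure the path fix is still set:
   cgi.fix_pathinfo=0

sudo nano /etc/php5/fpm/pool.d/www.conf
   # Make sure PHP-FPM still listens on the Unix socket:
   listen = /var/run/php5-fpm.sock

# Restart the PHP-FPM service to make the changes effective.
sudo service php5-fpm restart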

In addition, I had to re-enable sudo permissions for my user by running the following:

# visudo opens /etc/sudoers using vi or nano editor, whichever is the configured text editor.
# It is equivalent to "sudo vi /etc/sudoers" or "sudo nano /etc/sudoers" but includes validation.
visudo
   # Add mynewuser to the "User privilege specification" section
   root       ALL=(ALL:ALL) ALL
   mynewuser  ALL=(ALL:ALL) ALL

I found the upgrade process, especially upgrading the Ubuntu operating system, to be a relatively painless experience. Hopefully you will find it to be the same when you do your upgrade.

See my followup post, Subversion Over SSH on an Unmanaged VPS, to learn how to install and use Subversion over SSH (svn+ssh).


Nginx HTTPS SSL and Password-Protecting Directory

Linux 1 Comment

See my previous post in my unmanaged VPS (virtual private server) series, Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists, to learn how to configure multiple domains and get Postfix email and Mailman mailing lists working. In this post, I will configure Nginx to enable HTTPS SSL access and password-protect a directory.

Note: Though I’m doing the work on a DigitalOcean VPS running Ubuntu LTS 12.04.3, the instructions may also apply to other VPS providers.

Enable HTTPS/SSL Access

I have a PHP application which I want to secure. If I use HTTP, then the information sent back from the server to my browser is in clear text (and visible to anyone sniffing the network). If I use HTTPS (HTTP Secure) with a SSL (Secure Sockets Layer) server certificate, then the information will be encrypted. In the steps below, I will configure HTTPS/SSL to work for a domain and then force HTTPS/SSL access on a particular directory (where the PHP application would be located).

To get HTTPS working, we need a SSL server certificate. While you can get a 3rd party certificate authority to issue a SSL certificate for your domain for about $10 per year, I only need a self-signed certificate for my purpose. A 3rd party issued SSL certificate is convenient because if the browser trusts the 3rd party certificate authority by default, the browser won’t prompt you to accept the SSL certificate like it would for a self-signed certificate (which the browser can’t establish a chain of trust on). If you run a business on your website, I recommend investing in a 3rd party SSL certificate so that your website presents a professional appearance.

Create a self-signed SSL server certificate by running these commands on the server:

Note: You don’t need to input the lines that start with the pound character # below because they are comments.

# Create a directory to store the server certificate.
sudo mkdir /etc/nginx/ssl

# Change to the newly-created ssl directory.  Files created below will be stored here.
cd /etc/nginx/ssl

# Create a private server key.
sudo openssl genrsa -des3 -out server.key 1024
   # Remember the passphrase you entered; we will need it below.

# Create certificate signing request.
# (This is what you would send to a 3rd party authority.)
sudo openssl req -new -key server.key -out server.csr
   # When prompted for common name, enter your domain name.
   # You can leave the challenge password blank.

# To avoid Nginx requiring the passphrase when restarting,
# remove the passphrase from the server key. (Otherwise, on
# reboot, if you don't input the passphrase, Nginx won't run!)
sudo mv server.key server.key.pass
sudo openssl rsa -in server.key.pass -out server.key

# Create a self-signed certificate based upon certificate request.
# (This is what a 3rd party authority would give back to you.)
sudo openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt

Note: I set the certificate expiration time to 3650 days (10 years); 3rd party certificates will usually expire in 365 days (1 year). The maximum number of expiration days you can input depends on the OpenSSL implementation. Inputting 36500 days (100 years) would probably fail due to an overflow error (100 years converted into seconds is too big to store in a signed 32bit variable). I believe the highest you can go is about 68 years (2^31−1 seconds), but I haven’t tested it.
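
To double-check the validity period that was actually written into the certificate, you can ask OpenSSL to print its dates:

# Print the notBefore/notAfter dates of the self-signed certificate.
sudo openssl x509 -in /etc/nginx/ssl/server.crt -noout -dates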

Configure Nginx to use the SSL server certificate we created by editing the server block file for the domain you want to use it on:

sudo nano /etc/nginx/sites-available/domain2

In the “domain2” server block file, find the commented-out “HTTPS server” section at the bottom, uncomment it, and edit it to look like the following:

# HTTPS server
#
server {
        listen 443;
        server_name mydomain2.com www.mydomain2.com;

        root /var/www/mydomain2;
        index index.php index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

#       ssl_session_timeout 5m;
#
#       ssl_protocols SSLv3 TLSv1;
#       ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
#       ssl_prefer_server_ciphers on;

        location / {
                try_files $uri $uri/ /index.php;
        }

        # pass the PHP scripts to FPM-PHP
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}

Note: The “HTTPS Server” section looks like the “HTTP Section” we configured previously at the top, except for the addition of “listen 443” (port 443 is the HTTPS port) and the SSL enabling configurations.

Open up the HTTPS port in the firewall and reload Nginx by running these commands on the server:

# Allow HTTPS port 443.
sudo ufw allow https

# Double-check by looking at the firewall status.
sudo ufw status

# Reload Nginx so changes can take effect.
sudo service nginx reload

Test by browsing to “https://mydomain2.com/”. When the browser prompts you to accept the self-signed server certificate, answer Yes.
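
You can also test from the command line with curl, if it is installed; the -k option tells curl to accept the self-signed certificate and -I fetches only the response headers:

# Install curl if it is not already present.
sudo apt-get install curl

# Expect a "HTTP/1.1 200 OK" response; -k skips certificate verification.
curl -k -I https://mydomain2.com/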

Require HTTPS/SSL Access on a Directory

To require HTTPS/SSL-only access on a particular subdirectory under the domain, we need to add a directive to the domain’s HTTP Server to redirect to the HTTPS Server whenever a browser accesses that directory.

Note: Apache uses a .htaccess file to allow users to configure such actions as redirecting or password-protecting directories. Nginx does not use .htaccess; instead, we will put such directives in the server block files.

Create a secure test directory by running these commands on the server:

# Create a secure test directory.
sudo mkdir /var/www/mydomain2/secure

# Create a secure test page.
sudo nano /var/www/mydomain2/secure/index.html
   # Input this content:
   <html><body>
   This page is secure!
   </body></html>

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2/secure

Edit the domain’s server block file by running this command on the server:

sudo nano /etc/nginx/sites-available/domain2

Under the “domain2” server block file, in the “HTTP Section” at the top (not the “HTTPS Section” at the bottom), add these lines to do the redirect:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        ...

        # Redirect mydomain2.com/secure to port 443.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        # Note: ^~ means match anything that starts with /secure/
        location ^~ /secure/ {
                rewrite ^ https://$host$request_uri permanent;
        }

        ...
        location / {
        ...
}

Reload Nginx so the changes above can take effect.

sudo service nginx reload

Test by browsing to “http://mydomain2.com/secure/” and the browser should redirect to “https://mydomain2.com/secure/”.
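
A quick way to confirm the redirect without a browser is to request the HTTP URL with curl and look for the 301 response and its Location header:

# Expect "HTTP/1.1 301 Moved Permanently" with "Location: https://mydomain2.com/secure/".
curl -I http://mydomain2.com/secure/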

Password-Protect a Directory

By password-protecting a directory (aka requiring basic authentication), when a browser accesses that directory, the user will get a dialog asking for the user name and password. To get this functionality working, we will create a user and password file and configure the Nginx server block to require basic authentication based upon that file.

Note: Accessing a password-protected directory over HTTP would result in the user and password being sent in clear text by the browser to the server.
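
If that is a concern, one option (a sketch only, reusing the redirect technique from the previous section) is to force HTTPS on the password-protected directory as well, so the credentials always travel over an encrypted connection:

# In the HTTP server block, redirect /protect/ to HTTPS (same idea as /secure/ above).
location ^~ /protect/ {
        rewrite ^ https://$host$request_uri permanent;
}
# Then place the auth_basic directives (configured below) in the HTTPS server block instead.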

Create a protected test directory by running these commands on the server:

# Create a protected test directory.
sudo mkdir /var/www/mydomain2/protect

# Create a protected test page.
sudo nano /var/www/mydomain2/protect/index.html
   # Input this content:
   <html><body>
   This page is password-protected!
   </body></html>

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2/protect

We will need a utility from Apache to create the user and password file. Run this command on the server to install and use it:

# Install htpasswd utility from Apache.
sudo apt-get install apache2-utils

# Create a user and password file using htpasswd.
sudo htpasswd -c /var/www/mydomain2/protect/.htpasswd myuser

# Add an additional user using htpasswd without "-c" create parameter.
sudo htpasswd /var/www/mydomain2/protect/.htpasswd myuser2

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown www-data:www-data /var/www/mydomain2/protect/.htpasswd

Note: If you move the “.htpasswd” file to another location (say, not under the domain’s document root), make sure that the “www-data” user or group can access it; otherwise, Nginx won’t be able to read it.
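
For example, here is a minimal sketch of moving the file outside the document root (the “/etc/nginx/auth” path is just an illustration, not something used elsewhere in this post):

# Move .htpasswd out of the web-accessible directory.
sudo mkdir -p /etc/nginx/auth
sudo mv /var/www/mydomain2/protect/.htpasswd /etc/nginx/auth/htpasswd-protect
sudo chown www-data:www-data /etc/nginx/auth/htpasswd-protect

# Then update the auth_basic_user_file directive to point at the new location:
#    auth_basic_user_file /etc/nginx/auth/htpasswd-protect;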

Edit the Nginx server block file by running this command on the server:

sudo nano /etc/nginx/sites-available/domain2

In the “domain2” server block file, in the “HTTP Section” at the top (not the “HTTPS Section” at the bottom), add these lines to password-protect the “/protect” directory:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        ...

        # Password-protect mydomain2.com/protect directory.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        # Note: ^~ means match anything that starts with /protect/
        location ^~ /protect/ {
                auth_basic "Restricted"; # Enable Basic Authentication
                auth_basic_user_file /var/www/mydomain2/protect/.htpasswd;
        }

        ...
        location / {
        ...

        # Uncomment this section to deny access to .ht files like .htpasswd
        # Recommend to copy this to the HTTPS server below also.
        location ~ /\.ht {
                deny all;
        }

    ...
}

The “^~” in “location ^~ /protect/” above tells Nginx to match anything that starts with “/protect/”. This is necessary to ensure that all files and directories under “/protect/” are also password-protected. Because Nginx stops once it finds a match, it won’t process subsequent match directives, such as the PHP-FPM directive, and PHP scripts won’t execute. If you wish to run PHP scripts under the password-protected directory, you must copy the PHP-FPM directive (and any other directives) under the password-protected location directive like so:

server {
        ...

        # Password-protect mydomain2.com/protect directory.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        # Note: ^~ means match anything that starts with /protect/
        location ^~ /protect/ {
                auth_basic "Restricted"; # Enable Basic Authentication
                auth_basic_user_file /var/www/mydomain2/protect/.htpasswd;

                # pass the PHP scripts to FPM-PHP
                location ~ \.php$ {
                        fastcgi_split_path_info ^(.+\.php)(/.+)$;
                        fastcgi_pass unix:/var/run/php5-fpm.sock;
                        fastcgi_index index.php;
                        include fastcgi_params;
                }

                # deny access to .ht files like .htpasswd
                location ~ /\.ht {
                        deny all;
                }
        }

        ...
        # pass the PHP scripts to FPM-PHP
        location ~ \.php$ {
                ...        
}

Reload Nginx so the changes above can take effect.

sudo service nginx reload

Test by browsing to “http://mydomain2.com/protect/” and the browser should prompt you to input a user name and password.
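
Alternatively, you can test basic authentication from the command line with curl; without credentials the request should get a 401 response, and with the -u option (using whatever user and password you created with htpasswd) it should return the page:

# Expect "HTTP/1.1 401 Unauthorized" without credentials.
curl -I http://mydomain2.com/protect/

# Expect the protected page content when valid credentials are supplied.
curl -u myuser:mypassword http://mydomain2.com/protect/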

Secure Mailman

To run Mailman under HTTPS/SSL, move the “location /cgi-bin/mailman” definition in the server block file, “/etc/nginx/sites-available/mydomain2”, from the HTTP server to the HTTPS server section.

You will also need to modify Mailman to use the HTTPS url:

# Edit Mailman's configuration
sudo nano /etc/mailman/mm_cfg.py
   # Change its default url pattern from 'http://%s/cgi-bin/mailman/' to:
   DEFAULT_URL_PATTERN = 'https://%s/cgi-bin/mailman/'

# Propagate the HTTPS URL pattern change to all the mailists
sudo /usr/lib/mailman/bin/withlist -l -a -r fix_url

Note: It is not necessary to restart the Mailman service for the changes above to take effect.

If you only want the default URL Pattern change to apply to a specific mailing list, like “test@mydomain2.com”, use this command instead:

sudo /usr/lib/mailman/bin/withlist -l -r fix_url test -u mydomain2.com

Take a Snapshot

DigitalOcean provides a web tool to take a snapshot image of the VPS. I can restore using that image or even create a duplicate VPS with it. Because my VPS is now working the way I need it to, it makes sense to take a snapshot at this time.

Unfortunately, performing a snapshot requires that I shut down the VPS first. Worse, the time required to take the snapshot varies from minutes to over an hour (more on this below). Worst of all, there is no way to cancel or abort the snapshot request. I have to wait until DigitalOcean’s system completes the snapshot request before my VPS is automatically restarted.

I did my first snapshot after getting WordPress working on the VPS. There was about 6GB of data (including the operating system) to make an image of. I shut down the VPS and submitted a snapshot request. The “Processing…” status with zero progress was what I saw for over one hour. During this time, my VPS and WordPress site were offline.

A little over an hour later, the status went from “Processing…” with zero progress to done in a split second. My VPS and WordPress site were back online. I think an hour to back up 6GB of data is excessive. DigitalOcean support agreed. Evidently, there was a backlog on the scheduler and requests were delayed. Because I couldn’t cancel the snapshot request, I had to wait for the backlog to clear in addition to however long it took to do the snapshot.

If I had known more about the snapshot feature, I would have opted to pay for the backup feature, which costs more but doesn’t require shutting down the VPS. Unfortunately, the backup feature can only be enabled during VPS creation, so it is too late for me.

The recommended method to shutdown the VPS is to run this command:

# Following command equivalent to: sudo shutdown -h now
sudo poweroff

Update: I just did a snapshot and it only took 5 minutes this time.

See my followup post, Upgrade Ubuntu and LEMP on an Unmanaged VPS, to learn how to upgrade LEMP and Ubuntu to the latest versions.


Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists

Linux No Comments

See my previous post, Install Ubuntu, LEMP, and WordPress on an Unmanaged VPS, to learn how to set up an unmanaged VPS (virtual private server) with Ubuntu, LEMP, and WordPress. In this post, I will configure Nginx to support multiple domains (aka virtual hosts) on the VPS, get Postfix email send and receive working, and install a Mailman mailing list manager.

Note: Though I’m doing the work on a DigitalOcean VPS running Ubuntu LTS 12.04.3, the instructions may also apply to other VPS providers.

Host Another Domain

To host another domain (say mydomain2.com) on the same VPS, we need to add another Nginx server block (aka virtual host) file. Run the commands below on the server.

Note: You don’t need to input the lines that start with the pound character # below because they are comments.

# Create a new directory for the new domain
sudo mkdir /var/www/mydomain2

# Create a test page.
sudo nano /var/www/mydomain2/index.html
   # Input this content:
   <html><body>
   Welcome to mydomain2.com.
   </body></html>

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2

# Create a new Nginx server block by copying from existing and editing.
sudo cp /etc/nginx/sites-available/wordpress /etc/nginx/sites-available/mydomain2
sudo nano /etc/nginx/sites-available/mydomain2
        # Change document root from "root /var/www/wordpress;" to:
        root /var/www/mydomain2;
        # Change server name from "server_name mydomain.com www.mydomain.com;" to:
        server_name mydomain2.com www.mydomain2.com;

# Activate the new server block by creating a soft link to it.
sudo ln -s /etc/nginx/sites-available/mydomain2 /etc/nginx/sites-enabled/mydomain2

# Restart the Nginx service so changes take effect.
sudo service nginx restart

The server block files allow Nginx to match the “server_name” domain to the inbound URL and to use the matching “root” directory. When a browser connects to the VPS by IP address (and thus, doesn’t provide a domain for matching), Nginx will use the first virtual host that it loaded from the “/etc/nginx/sites-enabled/” directory (the order of which could change every time you reload Nginx).

To select a specific virtual host to load when accessed by IP address, edit the related server block file under “/etc/nginx/sites-available/” directory and add a “listen 80 default” statement to the top like so:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        listen 80 default;

Note: The “listen 80 default;” line should only be added to one of the server block files. The behavior may be unpredictable if you add it to more than one block file.

Send Email (using Postfix)

We will install Postfix, a Mail Transfer Agent which works to route and deliver email, on the VPS to support sending and receiving mail. WordPress (and its plugins like Comment Reply Notification) uses Postfix to send emails. While we could use a more simple, send-only mail transfer agent like Sendmail, we will need Postfix later when we install Mailman (a mailing list service) which depends on it. In this section, we will configure Postfix and test the send mail function.

Before we start, we need to talk about Postfix. Postfix is very sophisticated and can be configured in many different ways to receive mail. I want to suggest one way which I believe works well for many domains on a VPS. The setup I’m suggesting is to have one default local delivery domain (mydomain.com) and many virtual alias domains (mydomain2.com, etc). A local delivery domain is an endpoint domain, meaning that when mail arrives there, it is placed into the local Linux user’s mailbox. A virtual alias domain is used to route mail sent to it to a local delivery domain.

For example, if you send an email to “susan@mydomain2.com” (virtual alias domain), Postfix will route the email to “susan@mydomain.com” (local delivery domain), and then deliver the mail to the local Linux “susan” user’s inbox.

Keep the above in mind as we configure Postfix and hopefully everything will be understandable. We will go step by step and build upon our understanding. We will get the local delivery domain working first and then later, add the virtual alias domain into the mix.
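
To make the susan example concrete, here is roughly what the virtual alias mapping (which we will actually create in the “/etc/postfix/virtual” file later in this post) would look like; the susan entries are purely illustrative:

# Sketch of /etc/postfix/virtual entries for the susan example.
mydomain2.com IGNORE
susan@mydomain2.com susan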

Install Postfix by running these commands on the server:

# Install Postfix package and dependencies.
sudo apt-get install postfix
   # Select "Internet Site" and input our local delivery domain (mydomain.com).

# Configure Postfix to use local delivery domain.
sudo nano /etc/postfix/main.cf
   # Update "myhostname = example.com" to:
   myhostname = mydomain.com
   # Double-check that "mydestination" includes local delivery domain:
   mydestination = mydomain.com, localhost, localhost.localdomain

# Reload Postfix so changes will take effect.
sudo service postfix reload

Note: I had trouble inputting the domain name when installing Postfix and ended up with a combination of the default “localhost” and my domain name, specifically “lomydomain.com”. To fix this, I had to modify the “mydestination” value in “/etc/postfix/main.cf” and in the content of “/etc/mailname” file to be the correct domain name. The “myorigin” value in “/etc/postfix/main.cf” file references the “/etc/mailname” file.

The Postfix service should be started already and it is configured to start on boot by default. To send a test email, use the Sendmail command line (Sendmail was installed as dependency of the Postfix installation) on the server:

sendmail emailto@domain.com

To: emailto@domain.com
Subject: PostFix Test Email

This is the body of a test email sent after configuring Postfix.

# Press CTRL-D key combo to end

Note: The From email address is constructed based upon the currently logged-in Linux username and the “myorigin” value in “/etc/postfix/main.cf” (which, in turn, points at the “/etc/mailname” file that contains the local delivery domain name). Thus, the From address should be “mynewuser@mydomain.com”.

To test the PHP send mail feature, run the following commands on the server:

# Install the PHP command line package.
sudo apt-get install php5-cli

# Enable logging of PHP-CLI errors to a file
sudo nano /etc/php5/cli/php.ini
   # Add this line:
   error_log = /tmp/php5-cli.log
   # You must use a writeable directory like /tmp.

# Open the PHP interpretive shell and call the mail() function
php -a
php > mail('emailto@domain.com', 'Subject here', 'Message body here');
php > quit

If the PHP mail function works, then most likely WordPress and its plugins should be able to send emails. To be absolutely sure, you can install the Check Email plugin to test sending an email from within WordPress.

Receive Mail (to Local Delivery Domain)

By default, Postfix is configured to deliver emails sent to “postmaster@mydomain” to the local Linux “root” user’s inbox. You can tell this from the following Postfix settings:

cat /etc/postfix/main.cf
   ...
   alias_maps = hash:/etc/aliases
   mydestination = mydomain.com, localhost, localhost.localdomain
   ...

cat /etc/aliases
   ...
   postmaster: root

In the Postfix’s “main.cf” file, the “alias_maps” value points to an email-to-local-user mapping file for the local delivery domain, and the “mydestination” value contains the default local delivery domain “mydomain.com” (ignore the localhost entries).

In the “alias_maps” file “/etc/aliases”, the “postmaster” email username is mapped to the root user’s inbox. Putting it together, any email sent to “postmaster@mydomain.com” will be delivered to the root user’s inbox.

Note: Mail can be delivered to any of the local Linux users by using their exact usernames, even though they are not listed in “alias_maps”. For example, emails sent to “root@mydomain.com” will be delivered to the local root user and emails sent to “mynewuser@mydomain.com” will be delivered to the local mynewuser user.

To receive external emails sent to the VPS, we need to open up the SMTP (Simple Mail Transfer Protocol) port in the firewall and create a DNS MX (Mail exchange) record. (Port 25 is the default SMTP port used for receiving emails.)

To open up the SMTP port, run the following commands on the server:

# Allow SMTP port 25.
sudo ufw allow smtp

# Double-check by looking at the firewall status.
sudo ufw status

I used DigitalOcean’s DNS management web interface to add a MX record pointing at “@” (which is the A record that resolves to mydomain.com) and priority 10 for mydomain.com. The priority allows us to add more than one MX record and determines the order of mail servers to submit the emails to. Rather than using the highest priority 0, using priority 10 will allow me to easily add a mail server before or after this one in the future.

Note: Most websites will suggest creating a CNAME record (redirecting “mail” to “@”) for mail.mydomain.com and then configuring the MX record to point at mail.mydomain.com. This is not necessary. The simplest configuration is to point the MX record at the A record “@”, as I did above.

To see if the DNS was updated with the MX record, I ran the following test command on the server (or any Linux machine):

dig mydomain.com MX @ns1.digitalocean.com

# Technical details are returned; the most important part is the ANSWER SECTION.
;; ANSWER SECTION:
mydomain.com.         1797    IN      MX      10 mydomain.com.

In the above example “ANSWER SECTION”, we can see that the MX record for mydomain.com points at mydomain.com (as the mail receiving server) with priority 10 (as configured). The 1797 value is the TTL (Time to Live) setting in seconds (1797 seconds is about 29.95 minutes) which indicates how long this MX record is valid for. DNS servers which honor this TTL setting will refresh at that rate; however, some DNS servers may ignore the TTL value in favor of much longer refresh times. (The A and CNAME records also have TTL values. DigitalOcean does not allow me to customize the TTL values for any DNS record.)

If the “ANSWER SECTION” is missing from the output, then your VPS provider may not have updated its DNS servers yet. (DigitalOcean DNS servers took 20 minutes to update the MX record.) Similar to the mydomain.com’s A and CNAME record changes, you may need to wait for the MX record to propagate across the internet (most DNS servers will be updated in minutes, while some may take hours).

Also, you can use the intoDNS website to check your MX record details. Input your domain name, click on the Report button, and look for the “MX Records” section. If your domain’s MX record shows up there, you can be reasonably certain that it has propagated far enough for you to start sending emails to your domain.

Test by sending an email to “postmaster@mydomain.com” from your local mail client (or Google Mail or Yahoo Mail). To see if the mail was received, do the following on your server:

# View the root user's received mail store for the email you sent.
sudo more /var/spool/mail/root

# Alternatively, install the mail client to view and delete received emails.
sudo apt-get install mailutils
# Read mail sent to the local root user.
sudo mail
   # type "head" to see a list of all mail subject lines.
   # type a number (ex: 1) to see the mail content.
   # type "del 1" to delete that mail.
   # type "quit" to exit mail client.

Note: If you want to check a non-root user’s inbox, log in as that non-root user and just run “mail”, instead of “sudo mail”.

Note: According to Internet standards, all mail-capable servers should be able to receive emails sent to their “postmaster” address. While it may be overkill, I decided to create MX records and postmaster aliases for all the domains that I host on the VPS.

Receive Mail (to other Virtual Alias Domains)

Now that we know that the server can receive emails, we want to configure Postfix to support emails sent to the multiple domains hosted on the VPS. (If you want more local users than “root” and “mynewuser”, use the “adduser” command per the previous post to create new users.)

Recall our earlier discussion about how a virtual alias domain will route mail to the local delivery domain (“mydomain.com”), which will finally deliver the mail to the local user’s inbox. We will configure the additional domains like “mydomain2.com” to be virtual alias domains.

First, we need to create a mapping file that will map from the virtual alias domain to the local delivery domain. Run this command on the server to create that file:

sudo nano /etc/postfix/virtual

In the “virtual” mapping file, input the following lines:

mydomain2.com IGNORE # Declare virtual alias domain
postmaster@mydomain2.com postmaster

In the first line, we are using a newer feature of Postfix to declare a virtual alias domain “mydomain2.com” by starting the line with it. (The previous, alternative method was to put the virtual alias domain declarations into a “virtual_alias_domains” property in the “/etc/postfix/main.cf” file.) The rest of the first line, “IGNORE …”, is ignored. The second line indicates that mail sent to “postmaster@mydomain2.com” should be routed to “postmaster” at the local delivery domain; that is, “postmaster@mydomain.com”.

Configure Postfix to use the new virtual alias domain mapping file:

sudo nano /etc/postfix/main.cf
   # Add a new virtual_alias_maps line:
   virtual_alias_maps = hash:/etc/postfix/virtual

# Update the hash db version of /etc/postfix/virtual that Postfix uses.
sudo postmap /etc/postfix/virtual

# Reload Postfix so changes take effect.
sudo service postfix reload

Test by sending an email to the email address configured in the virtual alias mapping file; in this case, “postmaster@mydomain2.com”. Per the previous instructions, check the root user’s inbox by using the “sudo mail” command.
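
If you prefer to test from the server itself, you can reuse the earlier sendmail test, addressed this time to the virtual alias domain:

sendmail postmaster@mydomain2.com

To: postmaster@mydomain2.com
Subject: Virtual Alias Test Email

This mail should be routed to the local delivery domain and land in the root user's inbox.

# Press CTRL-D key combo to end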

Install Mailing List Service (Mailman)

Besides supporting mailing lists, Mailman (GNU Mailing List Manager) allows mailing list administration by web interface or by sending email messages (like subscribe or unsubscribe messages).

Update: Recently (latter half of 2016), emails sent to my mailing list were not delivered to Google Mail recipients. I found the following error from Google in the ‘/var/log/mail.log’ file: “Our system has detected an unusual rate of 421-4.7.0 unsolicited mail originating from your IP address. To protect our 421-4.7.0 users from spam, mail sent from your IP address has been temporarily 421-4.7.0 rate limited.” The issue is not spam because my mailing list gets only a few emails per week. This support page suggests that Google now requires SPF+DKIM for mail relays. I decided to migrate the mailing list to Google Groups for now.

To install Mailman, run the following commands on the server:

# Install Mailman
sudo apt-get install mailman

# Create a mandatory site list named mailman
sudo newlist mailman

The “newlist” command will request the following:

To finish creating your mailing list, you must edit your /etc/aliases (or
equivalent) file by adding the following lines, and possibly running the
`newaliases' program:

## mailman mailing list
mailman:              "|/var/lib/mailman/mail/mailman post mailman"
mailman-admin:        "|/var/lib/mailman/mail/mailman admin mailman"
...

Ignore that instruction. You don’t need to manually edit the Postfix “/etc/aliases” file. Later on, we will configure Mailman to automatically generate its own aliases file, which Postfix will read from.

Once the site wide “mailman” list is created, we can start the Mailman service by running this command:

sudo service mailman start

Mailman is configured to start on boot by default. (Running “sudo service mailman status” won’t output anything usable; to see if Mailman is running, list its processes using “ps -aef | grep -i mailman” instead.)

To get Mailman’s web interface working, we will need to install FcgiWrap (Simple CGI support) so that Nginx can integrate with Mailman. FcgiWrap works similarly to how PHP-FPM (FastCGI Process Manager for PHP) was used by Nginx to pass the processing of PHP files to the PHP platform. FcgiWrap will be used by Nginx to pass the Mailman-related interface calls to Mailman.

To install FcgiWrap, run the following command on the server:

sudo apt-get install fcgiwrap

FcgiWrap will be started automatically after installation. By default, FcgiWrap is configured to start at boot time. FcgiWrap listens on a unix socket file, “/var/run/fcgiwrap.socket” (similar to how PHP-FPM uses “/var/run/php5-fpm.sock”), which Nginx uses to communicate with it. (Similar to Mailman, running “service fcgiwrap status” won’t output anything usable; to see if FcgiWrap is running, list its processes using “ps -aef | grep -i fcgiwrap” instead.)

Edit the Nginx server block file belonging to the domain that you want to make the Mailman web interface accessible under. For example, run this command on the server:

sudo nano /etc/nginx/sites-available/mydomain2

In the mydomain2 server block file, add the following lines to the end of the “server” section:

server {
        ...

        location /cgi-bin/mailman {
               root /usr/lib/;
               fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$;
               fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
               include /etc/nginx/fastcgi_params;
               fastcgi_param PATH_INFO $fastcgi_path_info;
               #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
               fastcgi_intercept_errors on;
               fastcgi_pass unix:/var/run/fcgiwrap.socket;
        }
        location /images/mailman {
               alias /usr/share/images/mailman;
        }
        location /pipermail {
               alias /var/lib/mailman/archives/public;
               autoindex on;
        }
}

Note: I did two things above differently from what most websites would say to do:

  • I put the “fastcgi_param SCRIPT_FILENAME …” line before “include /etc/nginx/fastcgi_params;” to avoid it getting overwritten by “fastcgi_params”; otherwise, the call to Mailman would fail with a 403 Forbidden Access error message.
  • I commented out “fastcgi_param PATH_TRANSLATED …” because it is not necessary.

Reload Nginx to make the changes take effect:

sudo service nginx reload

You can now browse to the following Mailman administrative pages:

  • http://mydomain2.com/cgi-bin/mailman/admin/mylistname – manage the “mylistname” list.
  • http://mydomain2.com/cgi-bin/mailman/listinfo/mylistname – info about the “mylistname” list.
  • http://mydomain2.com/pipermail – view the mailing list archives.

We still need to integrate Mailman with Postfix so that emails sent to mailing lists, especially those belonging to virtual alias domains, will be routed to Mailman by Postfix.

Edit the Mailman configuration by running this command on the server:

sudo nano /etc/mailman/mm_cfg.py

In the “mm_cfg.py” file, add or uncomment and modify these lines like so:

MTA = 'Postfix'
POSTFIX_STYLE_VIRTUAL_DOMAINS = ['mydomain2.com']

Mailman has the capability of generating aliases for Postfix. We will use that capability. Run these commands on the server:

# Create Mailman's aliases and virtual-mailman files.
sudo /usr/lib/mailman/bin/genaliases

# Make the generated files group-writeable.
sudo chmod g+w /var/lib/mailman/data/aliases*
sudo chmod g+w /var/lib/mailman/data/virtual-mailman*

Note: The “genaliases” command will generate “aliases”, “aliases.db”, “virtual-mailman”, and “virtual-mailman.db” files in the “/var/lib/mailman/data” directory.

We then add the generated Mailman aliases and virtual aliases files to the Postfix “alias_maps” and “virtual_alias_maps” properties.

To edit Postfix, run this command on the server:

sudo nano /etc/postfix/main.cf

In the Postfix “main.cf” file, add to the end of the “alias_maps” and “virtual_alias_maps” lines like so:

alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
virtual_alias_maps = hash:/etc/postfix/virtual, hash:/var/lib/mailman/data/virtual-mailman

Note: The changes above will configure Postfix to read and process Mailman’s generated aliases files, in addition to its own aliases files.

Reload Postfix to have the changes take effect:

sudo service postfix reload

Recall earlier, I said to ignore the instructions by “newlist” to add Mailman aliases to the Postfix “/etc/aliases” file because we would do it automatically later. That is just what we did above.

Look at the Mailman’s generated “aliases” file by running this command on the server:

sudo cat /var/lib/mailman/data/aliases

# STANZA START: mailman
# CREATED: Tue Mar 25 05:53:44 2014
mailman:             "|/var/lib/mailman/mail/mailman post mailman"
mailman-admin:       "|/var/lib/mailman/mail/mailman admin mailman"
...
# STANZA END: mailman

It should look exactly like the aliases outputted by the “newlist” command. Mailman’s generated “aliases” file is included in Postfix’s “alias_maps” and thus is processed by Postfix along with the contents of the original “/etc/aliases” file.

To test a mailing list belonging to a virtual alias domain, run these commands on the server:

# Create a test mailing list.
sudo newlist test@mydomain2.com

# Reload Postfix to make changes take effect.
sudo service postfix reload

The “newlist” command will automatically update Mailman’s “aliases” and “virtual-mailman” aliases file with entries for “test@mydomain2.com”. However, we still need to manually reload Postfix so that Postfix will pick up the changes. (Reloading Postfix requires sudo/root access, so Mailman can’t do it automatically).

Let’s look at Mailman’s updated “aliases” and “virtual-mailman” files to see what was added (the pre-existing, generated “mailman” list aliases are omitted below):

sudo cat /var/lib/mailman/data/aliases

# STANZA START: test
# CREATED: Tue Mar 25 05:56:47 2014
test:             "|/var/lib/mailman/mail/mailman post test"
test-admin:       "|/var/lib/mailman/mail/mailman admin test"
...
# STANZA END: test

sudo cat /var/lib/mailman/data/virtual-mailman

# STANZA START: test
# CREATED: Tue Mar 25 05:56:47 2014
test@mydomain2.com              test
test-admin@mydomain2.com        test-admin
...
# STANZA END: test

Recall that a virtual alias domain routes to a local delivery domain, which then delivers to an endpoint (inbox or in the case above, a program called Mailman). For example, when a mail is sent to the “test@mydomain2.com” mailing list, it is routed to “test@mydomain.com” (local delivery domain), and then passed to the “mailman post test” program, which then forwards a copy to each member of the “test” mailing list.

Note: Because all mailing lists also exist under the local delivery domain, the mailing list name must be unique across all the domains hosted on the machine.

To test, access the Mailman web interface at “http://mydomain2.com/cgi-bin/mailman/admin/test” to add members to the “test@mydomain2.com” mailing list. Then send an email to that mailing list and its members should each receive a copy.

Once you are done testing, you can delete the list by running this command on the server:

# Remove list test@mydomain2.com (don't include @mydomain2.com part below).
sudo /usr/lib/mailman/bin/rmlist -a test

# Reload Postfix to make changes take effect.
sudo service postfix reload

Debugging Mail

Both Postfix and Mailman will output error messages and debug logs to:

/var/log/mail.err
/var/log/mail.log
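
A couple of handy commands (standard tail and grep usage) for watching these logs while you send test emails:

# Follow the mail log live while testing.
sudo tail -f /var/log/mail.log

# Search the mail log for recent errors or bounces.
sudo grep -iE "error|bounce" /var/log/mail.log | tail -20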

At this point, my VPS is hosting several domains, I can send and receive emails, and I have mailing lists working. See my followup post, Nginx HTTPS SSL and Password-Protecting Directory, to learn how to configure Nginx to enable HTTPS SSL access and to password-protect a directory.


Install Ubuntu, LEMP, and WordPress on an Unmanaged VPS

Linux 4 Comments

Before this post, I was hosting my website using a shared web hosting provider. Shared web hosting is convenient because the provider takes care of the software platform and its security updates (though I am still responsible for updating a PHP application like WordPress). And if there is a problem with the platform, the provider is responsible for fixing it. Unfortunately, shared web hosting may have performance and scalability issues (resulting from overcrowded websites on a single shared server and strict restrictions on CPU and memory usage) and disallows non-PHP software installations such as a Subversion server.

With the above in mind, I decided to look into unmanaged VPS (virtual private server) hosting as an alternative to shared web hosting. A virtual server is cheaper than a physical server and an unmanaged server is cheaper than a managed server. A managed VPS provider would install the software stack for me and provide support for about $30 or more per month. An unmanaged VPS depends on me to install the software and only costs $5 per month with DigitalOcean. The downside to an unmanaged VPS is that if anything goes wrong with the software, I am responsible for fixing it.

Note: If you decide to, please use this referral link to signup for a DigitalOcean account and get $10 in credit. Once you spend $25, I will get a $25 credit. It’s a win-win for both of us.

In this post, I will outline the steps I took to install WordPress on an unmanaged VPS hosted by DigitalOcean. Most of these instructions may be applicable to other VPS providers.

Create VPS

When creating a VPS, the most important choice is the operating system. I recommend getting the latest Ubuntu Server LTS (long-term support) version, currently 12.04.4. All up-to-date software packages should support the LTS version of Ubuntu so it is a safe choice to make. Unfortunately, DigitalOcean only offered the LTS version 12.04.3 so I chose that. Because it will be a long time, if ever, before I would need a VPS with more than 4GB memory, I decided to choose the 32bit version to keep memory usage as minimal as possible.

You should have an IP address and root password for your VPS before proceeding.

Secure VPS

Remote access to the VPS is accomplished by SSH (Secure Shell). (If you know telnet, think of SSH as an encrypted version of telnet.) By default, servers are set up to use SSH with port 22 and user root. Unsophisticated hackers will attempt to gain access to a server using those settings and a brute-force password generator. While a very hard-to-guess root password would make the server more secure, it is even better to change the SSH port number and use a non-root user.

Note: While Mac OS X comes with a built-in SSH client, Windows does not. I recommend downloading the free DeltaCopy SSH client “ssh.exe” for Windows. Alternatively, you can download the free PuTTY SSH client “putty.exe” if you want a GUI client, instead of a command line client.

Note: Lines below that start with the pound character # are comments and you don’t need to input them.

Run these commands:

# Connect to your server.
ssh root@mydomain.com

# Change the root password.
passwd

# Create a new non-root user.
adduser mynewuser

We will configure the new user to execute commands with root privileges by using the sudo (superuser do) tool. Sudo involves prepending commands with the word “sudo”. Sudo will prompt for your user’s password. (You can also configure sudo to log all commands issued using it.) We will grant all sudo privileges to the new user by adding to “/etc/sudoers” under the “User privilege specification” section like so:

# visudo opens /etc/sudoers using vi or nano editor, whichever is the configured text editor.
# It is equivalent to "sudo vi /etc/sudoers" or "sudo nano /etc/sudoers" but includes validation.
visudo
   # Add mynewuser to the "User privilege specification" section
   root       ALL=(ALL:ALL) ALL
   mynewuser  ALL=(ALL:ALL) ALL

To disallow SSH root login and to change the SSH port number (say from 22 to 3333), edit the SSH configuration “sshd_config” file and make the following changes:

sudo nano /etc/ssh/sshd_config
   # Change the default listen "Port 22" to the custom port:
   Port 3333

   # Do not permit root user login by changing "PermitRootLogin yes" to:
   PermitRootLogin no

   # Allow only mynewuser to connect using SSH
   AllowUsers mynewuser

   # Optionally, disable useDNS as it provides no real security benefit
   UseDNS no

Reload the SSH service so the changes can take effect:

sudo reload ssh

Test the new settings by opening up a command window on your client and running the following commands:

ssh -p 3333 root@mydomain.com
ssh -p 3333 mynewuser@mydomain.com

The attempt to SSH using the root user should fail. The attempt using the new user should succeed. If you cannot SSH into the server with the new user, double-check the changes using your original SSH window (which should still be connected to your server). If you don’t have that original SSH window still connected, your VPS provider should provide console access (like having a virtual keyboard and monitor connected directly to the VPS) through their website for recovery scenarios such as this.

Tip: You can log into the root account after you SSH into the mynewuser account by running the “su -” superuser command. You will be prompted for the root password.

The UFW (Uncomplicated Firewall) tool allows us to easily configure the iptables firewall service, which is built into the Ubuntu kernel. Run these commands on the server:

# Allow access to custom SSH port and HTTP port 80.
sudo ufw allow 3333/tcp
sudo ufw allow http

# Enable the firewall and view its status.
sudo ufw enable
sudo ufw status

The above steps configure a basic level of security for the VPS.

Install LEMP

WordPress requires an HTTP server, PHP, and MySQL. The LEMP (Linux, Nginx, MySQL, PHP) software stack matches those requirements. (Nginx is pronounced “engine-ex”, which explains the “E” in the acronym.) You may be more familiar with the LAMP stack, which uses Apache instead of Nginx as the HTTP server. Nginx is a high-performance HTTP server which uses significantly less CPU and memory than Apache under high-load situations. By using Nginx, we gain the capability of handling a greater number of page requests than usual.

On the server, run these commands:

# Update installed software packages.
sudo apt-get update

# Install MySQL.
sudo apt-get install mysql-server php5-mysql
sudo mysql_install_db

# Secure MySQL.
sudo /usr/bin/mysql_secure_installation

# Do a test connect to MySQL service.
mysql -u root -p
mysql> show databases;
mysql> quit

When installing MySQL, you will be prompted to input a MySQL root password. If you leave it blank, you will have another opportunity to change it when running the “mysql_secure_installation” script. You will want to answer yes to all the prompts from the “mysql_secure_installation” script to remove anonymous MySQL users, disallow remote MySQL root login, and remove the test database.

MySQL is not configured to start on boot by default. To start MySQL at boot time, run only the first command below:

# Start MySQL at boot time.
sudo update-rc.d mysql defaults

# FYI, undo start MySQL at boot time.
sudo update-rc.d -f mysql remove

If you have issues connecting to MySQL, you can start MySQL in an unsecured safe mode (which bypasses the password requirement) to perform a recovery action such as resetting the MySQL root password like so:

# Stop normal MySQL service and start MySQL in safe mode.
sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &

# Connect to MySQL, change root password, and exit.
mysql -u root
mysql> use mysql;
mysql> update user set password=PASSWORD("newrootpassword") where User='root';
mysql> flush privileges;
mysql> quit

# Stop MySQL safe mode and start normal MySQL service.
sudo mysqladmin -u root -p shutdown
sudo service mysql start

Install and start Nginx by running these commands on the server:

sudo apt-get install nginx
sudo service nginx start

Browse to your server’s IP address and you should see a “Welcome to nginx!” page.

To make it possible for Nginx to serve PHP scripts, we need to install the PHP platform and the PHP-FPM (FastCGI Process Manager for PHP) service. PHP-FPM enables Nginx to call the PHP platform to interpret PHP scripts. PHP-FPM should already be installed as a dependency of the “php5-mysql” package (part of the MySQL installation instructions above). We can make sure that PHP-FPM (and its dependency, the PHP platform) is installed by trying to re-install it (trying to install an already installed package doesn’t do any harm).

# List the installed packages and grep for php name matches:
dpkg --get-selections | grep -i php

# Install PHP-FPM package.
sudo apt-get install php5-fpm

# Test the install by displaying the version of PHP-FPM.
php5-fpm -v

Secure and optimize the PHP-FPM service by running these commands on the server:

# Fix security hole by forcing the PHP interpreter to only process the exact file path.
sudo nano /etc/php5/fpm/php.ini
   # Change the "cgi.fix_pathinfo=1" value to:
   cgi.fix_pathinfo=0

# Configure PHP to use a Unix socket for communication, which is faster than default TCP socket.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Change the "listen = 127.0.0.1:9000" value to:
   listen = /var/run/php5-fpm.sock

# Restart the PHP-FPM service to make the changes effective.
sudo service php5-fpm restart

Nginx defines the site host (and each virtual host) in a server block file. The server block file links the domain name to a directory where the domain’s web files (HTML, PHP, images, etc.) are located. When you browse to the VPS, Nginx will serve files from the directory that corresponds to the domain name given by your browser. That is a simple explanation of how Nginx can support hosting more than one domain on a single VPS.

Edit the default server block file to support PHP scripts:

sudo nano /etc/nginx/sites-available/default

In the “default” server block file, change the following:

server {
        # Add index.php to front of "index" to execute it first by default (if it exists)
        index index.php index.html index.htm;
        # Optionally, WordPress sites only need the index.php value like so:
        #index index.php

        # Change "server_name localhost;" to:
        server_name mydomain.com www.mydomain.com;

        # Use these directives if URL is matched to root / location.
        location / {
                # try_files will try a directory if file does not exist, and then
                # if directory does not exist, will try a default like index.html.
                # For WordPress site, we need to change "try_files $uri $uri/ /index.html;" to:
                try_files $uri $uri/ /index.php?$args;
                # The $args is necessary to support the WordPress post preview function.
                # If you don't change this, then non-default permalink URLs will fail with 500 error.
        }

        # Uncomment the whole "location ~ \.php$" block except for the "fastcgi_pass 127.0.0.1:9000;" line.
        # "location ~\.php$" means to match against any files ending in .php.
        # Leave the default "fastcgi_pass unix:/var/run/php5-fpm.sock" line
        # (it already matches what is in /etc/php5/fpm/pool.d/www.conf above).
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;

                # With php5-cgi alone:
                #fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }

Restart the Nginx service to have the changes take effect:

sudo service nginx restart

Create a PHP test script by running this edit command:

sudo nano /usr/share/nginx/www/info.php

In “info.php” file, input the following text:

<?php
phpinfo();
?>

Browse to “http://mydomain.com/info.php” and you should see a page containing information about the PHP installation.

Both PHP-FPM (php5-fpm) and Nginx are configured to start at boot time by default. You can double-check by running the chkconfig utility to list the services and their runlevel configurations:

# Install chkconfig package.
sudo apt-get install chkconfig

# List all services and their runlevels configurations.
chkconfig --list

Note: You won’t be able to use chkconfig to change the runlevels because it is not compatible with the new Upstart runlevel configuration used by Ubuntu. Instead, use update-rc.d or sysv-rc-conf to make runlevel changes.

Debugging LEMP

To debug issues with LEMP, look at these log files:

MySQL: /var/log/mysql/error.log
Nginx: /var/log/nginx/error.log
PHP: /var/log/php5-fpm.log

For performance reasons, the debug logs from the PHP-FPM worker threads are discarded by default. If you wish to see error logs from your PHP applications, you will need to enable logging from worker threads.

Run the following commands on the server:

# Edit the PHP-FPM worker pool config file to enable logging.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Uncomment this line:
   catch_workers_output = yes

# Reload the PHP-FPM service to make the changes take effect.
sudo service php5-fpm reload

You should now see error logs from the PHP worker threads outputted to the “/var/log/php5-fpm.log” file.

Install WordPress

Install WordPress by running the following commands on the server:

# Get the latest WordPress version.
cd /tmp
wget http://wordpress.org/latest.tar.gz

# Uncompress the WordPress archive file.
tar zxvf latest.tar.gz

# Create a wp-config.php configuration file by copying from the sample.
cd wordpress
cp wp-config-sample.php wp-config.php

# Move the WordPress files to the Nginx root document directory.
sudo mv /tmp/wordpress/* /usr/share/nginx/www/

# Change ownership to www-data user (which Nginx worker threads are configured to run under).
sudo chown -R www-data:www-data /usr/share/nginx/www/*

Note: If WordPress detects its configuration file “wp-config.php” is missing, it will offer to run a web-based wizard to create it. However, the wizard won’t work because our MySQL root user requires a password. Besides, using the wizard would not be very secure because the WordPress database’s MySQL user password would be sent in the clear over HTTP. Instead, we manually created the “wp-config.php” file in the above steps and will modify it below.

Create a MySQL database and user for WordPress by running these commands on the server:

# Open a MySQL interactive command shell.
mysql -u root -p

# Create a MySQL WordPress database.
mysql> create database wordpress;

# Create a MySQL user and password.
mysql> create user wordpress@localhost;
mysql> set password for wordpress@localhost = PASSWORD('mypassword');

# Grant the MySQL user full privileges on the WordPress database.
mysql> grant all privileges on wordpress.* to wordpress@localhost identified by 'mypassword';

# Make the privilege changes effective.
mysql> flush privileges;

# Double-check by showing the privileges for the user.
mysql> show grants for wordpress@localhost;

# Exit the MySQL interactive shell.
mysql> quit

Update the WordPress configuration file by running this command:

sudo nano /usr/share/nginx/www/wp-config.php

In the “wp-config.php” file, input the newly-created MySQL database, user, and password like so:

define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'mypassword');

Browse to your server’s IP address and follow the WordPress instructions to complete the installation.

Change WordPress Document Root

This section is optional. If you wish to store the WordPress installation into an alternative directory path, say “/var/www/wordpress”, instead of “/usr/share/nginx/www”, follow the steps below. (I suggest “/var/www/wordpress” instead of “/var/www” so that later, when you host additional domains, the WordPress installation will be in its own separate directory.)

To move WordPress to a new directory, run these commands on the server:

# Move WordPress files to new directory.
sudo mkdir -p /var/www/wordpress
sudo mv /usr/share/nginx/www/* /var/www/wordpress/

# Rename the existing Nginx server block file.
sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/wordpress

# Update the Nginx server block file with new location.
sudo nano /etc/nginx/sites-available/wordpress
   # Change document root from "root /usr/share/nginx/www;" to "root /var/www/wordpress;".

# Enable the renamed Nginx server block by creating soft link.
sudo ln -s /etc/nginx/sites-available/wordpress /etc/nginx/sites-enabled/wordpress

# Remove the old Nginx server block soft link (which points at a non-existing file).
sudo rm /etc/nginx/sites-enabled/default

# Reload the Nginx service so the changes can take effect.
sudo service nginx reload

Test this change by browsing to your server’s IP address. You should see the WordPress website.

Migrate WordPress

When migrating an existing WordPress site to your new VPS, I suggest doing the following steps:

  1. On your old WordPress server, update WordPress and all plugins to the latest versions.
  2. On the new WordPress server, browse to “http://mydomain.com/wp-admin/” to install the same theme and plugins as exist on the old server. Activate the theme. Leave all the plugins inactive. When we do the WordPress database restore, the plugins will be configured and activated to match what was in the old server.
  3. Copy the old image uploads directory to the new server. Supposing that the WordPress on the old server is located at “/home/username/public_html/wordpress”, run the following commands on the new server:
    sudo scp -r username@oldserver:/home/username/public_html/wordpress/wp-content/uploads /var/www/wordpress/wp-content/
    sudo chown -R www-data:www-data /var/www/wordpress/wp-content/uploads

    Note: If the old server uses a custom SSH port number, scp will require the custom port number as a “-P” input parameter; for example, “sudo scp -r -P 2222 username@oldserver…”.

  4. Export the WordPress database from the old server using the recommended phpMyAdmin interface (which generates a more human-friendly SQL output than mysqldump) or by running the following command on the old server:
    mysqldump -u oldusername -p olddatabasename > wordpress.sql
  5. Before importing the WordPress database into the new server, we will need to change references to the image uploads directory (and other directories) in the exported SQL file. If you don’t make this change, then images may not be visible in the WordPress postings. Following the example above, replace every occurrence of “/home/username/public_html/wordpress/” with “/var/www/wordpress/” in the exported database SQL file (see the sed sketch after this list).
  6. Copy the exported SQL file to the new server, say to the “/tmp” directory.
  7. On the new server, run these commands:
    # Open up MySQL command shell.
    mysql -u root -p

    # Empty the existing WordPress database.
    mysql> drop database wordpress;
    mysql> create database wordpress;

    # Exit the MySQL command shell.
    mysql> quit

    # Import the exported SQL file.
    mysql -u root -p wordpress < /tmp/wordpress.sql

    Note: Dropping and re-creating the WordPress database does not affect the WordPress database user and its privileges.
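
For step 5, the search-and-replace can be done with a one-line sed command instead of a text editor. A minimal sketch, assuming GNU sed (as on most Linux servers), the example paths above, and that the exported file is named wordpress.sql:

# Replace every occurrence of the old WordPress path with the new one in the SQL dump.
sed -i 's|/home/username/public_html/wordpress/|/var/www/wordpress/|g' wordpress.sql

The pipe characters are used as sed delimiters so the slashes in the paths don’t need escaping.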

Browse to your new server’s IP address and you should see your WordPress website. Unfortunately, we cannot verify that the images are loading correctly on the new server because the image URLs use the domain name which points at the old server (the images are loaded from the old server, not the new server). We now need to point the domain at our new server.

Migrate Domain

I used DigitalOcean’s DNS (Domain Name System) “Add Domain” tool to create an A (Address) record (“@” => “server_ip_address”) linking mydomain.com to the new server’s IP address. I also added a CNAME (Canonical name) record (“www” => “@”) to have www.mydomain.com point at mydomain.com. I tested whether DigitalOcean’s DNS servers were updated or not by repeatedly running one of these two commands on the server (or any Linux machine):

nslookup mydomain.com ns1.digitalocean.com
nslookup www.mydomain.com ns1.digitalocean.com
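
Rather than rerunning the lookup by hand, the watch utility (available on most Linux machines) can repeat it on a timer until the new record shows up; for example:

# Repeat the lookup every 60 seconds; press Ctrl-C to stop once the new IP address appears.
watch -n 60 nslookup mydomain.com ns1.digitalocean.com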

Note: DigitalOcean’s DNS servers took about 20 minutes to update.

Once DigitalOcean’s DNS servers had lookup entries for my domain name, I went to my domain registrar and updated my domain’s name servers to point at DigitalOcean’s DNS servers. To test whether DigitalOcean’s DNS servers were being used or not, I occasionally ran the following commands on my client machine to check the IP address returned:

nslookup mydomain.com
ping mydomain.com

Once the new IP address was returned consistently (it took 24 hours before my internet provider’s DNS servers were updated), I then browsed to mydomain.com and checked that the images were loading correctly.

You can empty the DNS caches on your machine and browser by using any of these commands:

# On Windows:
ipconfig /flushdns

# On Mac OS X:
sudo dscacheutil -flushcache

# On Chrome, browse to URL below and click on the "clear host cache" button.
chrome://net-internals/#dns
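
On more recent versions of Mac OS X, the dscacheutil command alone may not clear everything; restarting the mDNSResponder service is the commonly suggested extra step (included here as an option in case flushing alone doesn’t do it):

# On newer Mac OS X versions, also restart the mDNS responder to clear its cache.
sudo killall -HUP mDNSResponder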

If you want to check the DNS change propagation across the whole world, try the What’s My DNS? website. It will make calls to DNS name servers located all over the planet.

At this point, I have a working VPS that is reasonably secured and successfully hosting my migrated WordPress website. Some rough page load tests resulted in 0.5-1 second load times, as opposed to the 2-4 seconds on the old server (which was shared web hosting with a LAMP stack). I hope that this guide will help you should you decide to move your WordPress website to an unmanaged VPS.

See my followup post, Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists, to configure Nginx to support multiple domains, get Postfix email flowing, and get Mailman mailing lists working.

Subversion Over SSH With HostGator Shared Web Hosting

Linux 1 Comment

Even more surprisingly, I was able to get Subversion over SSH working with my HostGator shared web hosting account. I had been searching for a private Subversion repository that I could use for my own projects. Something very simple for one developer doing infrequent source control file check-ins. I didn’t want to use the free and/or public Subversion hosting companies because I didn’t want to expose my code; I wanted to strictly control code ownership. So, using Subversion with my shared web hosting account was the perfect answer to my needs.

Disclaimer: Please don’t abuse this info by setting up a Subversion repository on your shared HostGator web hosting account for a bunch of active developers. Because it is shared hosting, such an action would probably cause issues for other customers and Hostgator may decide to prevent this usage. Use this as a private, one-developer Subversion repository (equivalent to having a local repository and rsync’ing it to/from your web host, though a lot more convenient).

Before you start, you must have SSH public key authentication working. On Mac OS X, you also must create the “~/.ssh/config” file to use port 2222 by default. See my previous post, SSH and SSL With HostGator Shared Web Hosting, for instructions.

Create Subversion Repository

Create a Subversion repository on the server by issuing the following commands:

ssh -p 2222 myusername@mydomainname.com
cd mydirectory
mkdir myrepos
svnadmin create myrepos
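
While still logged in over SSH, you can confirm that the repository was created cleanly. svnadmin verify comes with the same Subversion installation used above; on a brand-new repository it should simply report that revision 0 was verified:

# Verify the integrity of the newly created repository.
svnadmin verify myrepos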

Test Subversion Client Connection

On Mac OS X, test your connection to the Subversion repository by running the following:

svn list svn+ssh://myusername@mydomainname.com/myhome/myusername/mydirectory/myrepos

If successful, this command should output an empty line (because you don’t have anything in the repository yet) instead of a connection error message.

On Windows, we need to configure Subversion to use a SSH tunnel with port 2222 by default. Modify the “%APPDATA%\Subversion\config” file by adding a custom line below the “[tunnels]” section:

[tunnels]
ssh = ssh -p 2222

We can then test the Subversion client’s connectivity on Windows by running the same command as on Mac OS X:

svn list svn+ssh://myusername@mydomainname.com/myhome/myusername/mydirectory/myrepos

The change to “%APPDATA%\Subversion\config” will cause all svn+ssh calls to use port 2222. This will break connectivity to servers which do not use 2222 for the SSH port. One can accommodate this scenario by creating a custom tunnel name like so:

[tunnels]
ssh2222 = ssh -p 2222

Then, to use it, the Subversion client command to run would look like this:

svn list svn+ssh2222://myusername@mydomainname.com/myhome/myusername/mydirectory/myrepos

To avoid duplication, we will use “svn+ssh” for both Mac OS X and Windows in the instructions below. Please adjust accordingly if you decide to use a custom tunnel name.
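
As an alternative to editing the Subversion config file, the svn client also honors the SVN_SSH environment variable for the default tunnel command. A sketch for a Windows Command Prompt session (this only affects the current shell, so connections to servers on the standard port 22 in other sessions are not broken):

# Override the default ssh tunnel command for this Command Prompt session only.
set SVN_SSH=ssh -p 2222
svn list svn+ssh://myusername@mydomainname.com/myhome/myusername/mydirectory/myrepos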

Add Project To Repository

Add or import a project into the Subversion repository using the following command on the Mac OS X or Windows client:

svn import ./myproject svn+ssh://myusername@mydomainname.com/myhome/myusername/mydirectory/myrepos/myproject -m "Initial import"

Run the previous “svn list” test command to see the imported project listed in the subversion repository.

Checkout the Project

On the client, checkout or export the project to another local directory:

svn co svn+ssh://myusername@mydomainname.com/myhome/myusername/mydirectory/myrepos/myproject ./myproject2

Once the checkout is complete, you can issue subversion commands in the local, checked out project directory without having to specify the “svn+ssh” URL like so:

cd ./myproject2
svn update
...
svn diff
svn ci -m "first commit"

The subversion commands will use the “svn+ssh” URL stored locally in the checked out project’s “.svn” configuration directory.

The purpose of the Subversion commands above (and others) is explained in a previous blog post, Add Subversion to the XAMPP Apache Server.

Secure File Permissions

The Subversion server may write files which include read access for the group and others. To close this security hole, I suggest manually restricting the file permissions in the Subversion repository after importing a project or adding assets.

These commands will set read-write access for only the user on all folders and files under the Subversion repository:

find ~/mydirectory/myrepos -type f -print0 | xargs -0 chmod 600
find ~/mydirectory/myrepos -type d -print0 | xargs -0 chmod 700
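
Because these permissions need to be re-tightened after every import or other addition to the repository, it may be handy to keep the two commands in a small script on the server. A minimal sketch; the script name and location are just examples:

#!/bin/sh
# fix-repos-perms.sh: restrict the Subversion repository to user-only access.
find ~/mydirectory/myrepos -type f -print0 | xargs -0 chmod 600
find ~/mydirectory/myrepos -type d -print0 | xargs -0 chmod 700

Save it as “~/fix-repos-perms.sh”, make it executable with “chmod 700 ~/fix-repos-perms.sh”, and run it after adding new content to the repository.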

Information concerning the custom SSH tunnel was obtained from How to configure SVN/SSH with SSH on non standard port?.

SSH and SSL With HostGator Shared Web Hosting

Linux No Comments

Surprisingly, I found that a Hostgator shared web hosting account supports secure shell (SSH) access and a shared secure sockets layer (SSL) certificate. For those who might not be familiar with them, SSH provides interactive terminal access to your account and SSL supports secure HTTPS browsing to your website.

Note: Most instructions below are not specific to a Hostgator shared web hosting account. They may work with your shared web hosting account also.

Enable SSH Access

To enable SSH access for your Hostgator shared web hosting account, do the following:

  1. Browse to Hostgator’s Billing/Support System page.
  2. Log in using your billing/support email address and password (this may be different from your cPanel administrative password).
  3. Click on the “View Hosting Packages” link under “Hosting Packages”.
  4. Click on the “Enable Shell Access” link near the top of the middle content pane.

Note: Hostgator SSH uses port 2222, instead of the standard SSH port 22. So when running the SSH client, make sure to use port 2222.

SSH Into Your Hostgator Account

Mac OS X comes with a built-in SSH client. To connect to Hostgator, launch the Terminal application and run ssh on port 2222 with this command:

ssh -p 2222 myusername@mydomainname.com

Windows does not come with a built-in SSH client, so I recommend using the free Putty SSH client. Browse to the PuTTY Download Page and download the “putty.exe” file. Run it and input the following:

  1. Under Session (selected by default under the Category panel on the left), input the “Host Name” and the Port 2222.
  2. Under Connection and then Data, input your username in the “Auto-login username” field.
  3. Optional: To avoid having to re-input these values the next time you run Putty, go back to Session, input a name in the “Saved Sessions” field, and click the Save button. The next time, just select the session you saved and click Load to automatically re-populate the fields.
  4. Click on the Open button to make the SSH connection.

Your website files are located under the “~/www” directory which is soft-linked to the “~/public_html” directory.

SSH With Public Key Authentication

If you SSH into Hostgator often, it may be worthwhile to use public key authentication to avoid having to input your password. Public key authentication consists of two steps: (a) generate a public and private key pair on the client and (b) copy the public key to the server into a trusted location. After those steps, instead of asking for a password, the server will authenticate the SSH connection by checking that the client holds the private key matching the server’s trusted copy of the client’s public key.

Before we start, SSH into your Hostgator account and make sure that the “~/.ssh” directory exists on the server by running these commands:

mkdir -p ~/.ssh
chmod 700 ~/.ssh

The mkdir command above will create the “~/.ssh” directory if it does not already exist. The “~/.ssh” directory is the server’s default location for trusted public and private key files. The chmod command sets the permission on the “~/.ssh” directory to only allow access for the user and no one else. We will copy the client’s public key to this “~/.ssh” directory on the server.
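
A quick way to double-check that the directory permissions took effect is to list the directory itself:

# The permissions column should read "drwx------" (user-only access).
ls -ld ~/.ssh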

SSH Public Key Authentication on Mac OS X

Mac OS X comes with the built-in “ssh-keygen” and “scp” (secure copy) utilities which we can use to generate a public and private key pair, and to copy the public key to the server.

ssh-keygen -t rsa
scp -P 2222 ~/.ssh/id_rsa.pub myusername@mydomainname.com:~/.ssh/authorized_keys
ssh -p 2222 myusername@mydomainname.com 'chmod 600 ~/.ssh/authorized_keys'

The ssh-keygen command above will generate a public and private key pair using RSA protocol 2 (2048 bits by default on recent versions of OpenSSH). It will prompt you to input a passphrase (to protect access to the private key) which I recommend you leave blank; otherwise, you will be prompted for the passphrase each time you connect, which would defeat the purpose of avoiding password input. The private and public key files are created in the client’s “~/.ssh” directory as “id_rsa” and “id_rsa.pub” respectively. The scp command copies the public key to the server as “~/.ssh/authorized_keys”, which is the server’s default trusted public key file. The chmod command sets permission on the “~/.ssh/authorized_keys” file to only allow access for the user and no one else.
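
One caveat: the scp command above replaces any existing “~/.ssh/authorized_keys” file on the server. If that file already contains keys you want to keep, appending is safer; a small sketch using the same port and account:

# Append the new public key to the server's authorized_keys file instead of overwriting it.
cat ~/.ssh/id_rsa.pub | ssh -p 2222 myusername@mydomainname.com 'cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'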

To test, run the SSH command and you should automatically be authenticated using the public key. You should not be prompted to input the password.

ssh -p 2222 myusername@mydomainname.com

If you are tired of having to input the port 2222, you can set it as the default by creating the “~/.ssh/config” file with the following content:

Host mydomainname.com
  Port 2222
  PreferredAuthentications publickey,password

When connecting to your hosting server, the SSH client will use port 2222 by default and either public key authentication (publickey) or password authentication (password).

Once the file above is created, you should be able to SSH without having to input the port 2222:

ssh myusername@mydomainname.com

SSH Public Key Authentication on Windows

Because Windows does not have the built-in “ssh-keygen” and “scp” utilities, you will need to download the following files from PuTTY Download Page: “puttygen.exe” (ssh-keygen), “pscp.exe” (scp), and “plink.exe” (SSH command line).

Then, launch “puttygen.exe” to generate the public and private key pair:

  1. RSA Protocol 2, “SSH-2 RSA”, should be selected by default.
  2. Leave the “Number of bits in a generated key” as 2048 or change it to 1024. (I used 1024 bits which is adequate for my purpose.)
  3. Click the Generate button.
  4. Move the mouse inside the dialog window until the key pair is generated.
  5. I recommend that you leave the “Key passphrase” blank; otherwise, you will be prompted for the passphrase every time you connect.
  6. Copy the contents of the “Public key for pasting into OpenSSH authorized_keys file” textfield to a file named “id_rsa.pub”.
  7. Click the “Save private key” button and name the private key file “id_rsa.ppk”.
  8. Click the “Save public key” button and name the public key file “id_rsa.publickey”. Note that the contents of this public key file are different from the contents of the “Public key for pasting into OpenSSH authorized_keys file” textfield.

Finally, copy the “Public key for pasting into OpenSSH authorized_keys file” to the server using the Windows Command Prompt shell and the Putty versions of the scp and SSH command line utilities:

pscp -scp -P 2222 id_rsa.pub myusername@mydomainname.com:~/.ssh/authorized_keys
plink -P 2222 myusername@mydomainname.com chmod 600 ~/.ssh/authorized_keys

Configure the Putty SSH client to use public key authentication:

  1. Per the previous Putty instructions, input the server’s hostname, port 2222, and your username. Or if you have a saved session, under Session, select your session name, and click the Load button.
  2. Under Connection, SSH, and Auth, click on the “Browse…” button at the bottom and locate the private key file “id_rsa.ppk”.
  3. Optional: You can update your saved session by going to Session, selecting your named session, and clicking the Save button.
  4. Click the Open button to connect to your server by SSH. You should not be prompted to input the password.

Troubleshoot SSH Public Key Authentication

If the above does not work (you are still prompted for the password), then it may be that the server has its own generated public and private key pair installed. For my Hostgator account, I found that the public key authentication failed because my server had its own public and private key files in the “~/.ssh” directory.

To fix this issue, SSH into your Hostgator account and delete all files under the “~/.ssh” directory except the “authorized_keys” file. Try to SSH from your client again and hopefully you won’t need to input the password.

Using Shared SSL Certificate

SSL certificates are used to encrypt the web traffic between your browser and the server. On your browser, the URL will start with “https” (instead of the unsecured “http”), with perhaps a lock icon visible, when SSL is in use. Normally, you would buy a SSL certificate that is linked directly to your domain name; if the domain name doesn’t match the name in the SSL certificate, the browser would display a warning. Purchasing a SSL certificate can be expensive because you must renew it every year; for example, a SSL certificate costs $69/year from GoDaddy.

Hostgator provides a free shared SSL certificate for your use. It is less secure than your own personal SSL certificate because it is shared by all accounts hosted on the same Hostgator server. (Conceivably, another account holder on the same Hostgator server could decrypt the encrypted web traffic to your server, but that requires a lot of know-how and a ton of trouble.)

Because the shared SSL certificate is tied to the Hostgator server’s hostname, you cannot use it when browsing to your domain name. Instead, you would browse to the Hostgator server’s hostname with a relative path to your username, which corresponds to your primary domain website directory.

https://secureXXXX.hostgator.com/~myusername/

To find the hostname of the Hostgator server which your account is hosted on, do the following:

  1. Browse to Hostgator’s cPanel interface using “http://mydomainname/cpanel”.
  2. Log in using your Hostgator administrative username and password.
  3. Look for the “Account Information” panel in the bottom-left corner.
  4. The “Server Name” field contains your hosted server’s hostname (ex: “gator3141”). To get the secured hostname, replace “gator” with “secure” (ex: “secure3141.hostgator.com”).

Instead of using the cryptic secured URL above, you can create a more friendly redirect from your website. You could browse to your domain name and automatically be redirected to the secured URL. I don’t recommend redirecting from your website’s root address (unless that is what you want); instead, I suggest creating a directory called “secure” under the website’s root directory, which will host the content to be accessed by SSL.

To create the redirect, SSH into your Hostgator account and create a file with this path and name, “~/www/secure/.htaccess”, and the following content:

RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://secureXXXX.hostgator.com/~myusername/secure/$1 [R,L]

Please make sure that the “.htaccess” file has 644 permission. When you browse to any file under “http://mydomainname/secure/”, you will be redirected to “https://secureXXXX.hostgator.com/~myusername/secure/”.
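
You can check the redirect from the command line before trying a browser. curl’s -I option shows just the response headers, and the Location header should point at the secured Hostgator URL (the page name below is just a placeholder; use the same example names as above):

# The response should be a 302 redirect with a Location header pointing at the shared SSL URL.
curl -I http://mydomainname.com/secure/somepage.html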

If you wish to use SSL with an add-on or sub domain, just append the add-on or sub domain name to the end of the secured URL:

https://secureXXXX.hostgator.com/~myusername/mysubdomainname.com/

Some info above was derived from How can I force users to access my page over HTTPS instead of HTTP?.
