Buying a Used iPhone in One Word: Painful

Mobile Devices

I purchased a T-Mobile iPhone 5s for a great price on Craigslist. The seller seemed like a nice young man, and the phone worked without any issues. Coming from a Blackberry Pearl, the iPhone was a wonderful treat. Three months later, I couldn’t make or receive phone calls or text messages. The T-Mobile service representative explained that T-Mobile had blacklisted the phone’s IMEI number (the phone’s unique identifier) because the phone was financed and the original owner had stopped paying. I tried calling the seller; no pickup. When I called from a different phone number, he picked up, said he would investigate and get back to me, and never did. I asked T-Mobile how much was due and whether I could pay it myself; T-Mobile said it was against policy to reveal any information or allow such an action. Evidently, only the original owner could remove the blacklisting. So I ended up with a very expensive iPod Touch.

I learned that a used phone’s IMEI number is attached to the account used to activate it, and the original account owner retains some control over whether the phone remains useful. Thankfully, there are online tools to check the IMEI status. Inputting the phone’s IMEI number into T-Mobile’s IMEI Status Check page returned “Your device is blocked and will not work on T-Mobile’s network”. If I had checked the IMEI status before buying, I might have seen that the iPhone was being financed (it would say “financed” and/or “balance due”) and I would not have purchased it (I would not want to depend on the seller to continue making monthly payments). Lesson learned: check the phone’s IMEI number before buying.

At about the same time, my sister bought a used T-Mobile iPhone 5. Her T-Mobile micro SIM card did not fit (the iPhone 5 requires a nano SIM card) so she was not able to test it. The seller insisted that the iPhone 5 worked with T-Mobile. She purchased it and took it to a T-Mobile store to get a nano SIM card. The T-Mobile store representative told her that it was a Sprint phone and would not work with T-Mobile.

I offered to resell the Sprint iPhone in an attempt to recoup some of her lost money. Before posting it on Craigslist, I inputted the phone’s ESN (Sprint’s equivalent of the IMEI) into Swappa’s Check Your ESN page (the Sprint website did not have an ESN status check tool), and it responded that the phone was reported lost or stolen. I called Sprint to see if I could return the phone. The Sprint service representative said that Sprint did not have any process in place to return lost or stolen phones, that she could not provide me with information about the original owner (so I could mail it myself), and that the phone was mine to do with as I wished. Looks like my sister also ended up with a very expensive iPod. Lesson learned: make sure the phone can connect to the carrier before handing over the cash.

Update: There is a way to check the ESN status on the Sprint website if you have an account. Log into your Sprint account, go to the “I want to” menu, select “activate a new phone”, and input the ESN of the new phone. After you click Next, the Sprint website will give you a confirmation page (the ESN is good) or an error message (the ESN is bad).

Later, I purchased a used T-Mobile iPhone 4s for my niece. I checked that the IMEI was good (the Craigslist seller texted me the IMEI when I requested it, and the T-Mobile website said it was usable; no mention of “financed” or “balance due”). Having read up on the subject, I made sure that the security passcode was not set on the phone. And I made sure to delete the iCloud account to disable the Find My iPhone feature; erroneously, the iPhone allowed me to remove the iCloud account without requiring a password. When I came home, I did a reset, and the iPhone booted into the activation lock screen, asking for the iCloud password belonging to a mostly blacked-out iCloud email address. After several phone calls, I finally reached the original seller, who stated that his friend had set up the iCloud account years ago and that he had lost all contact with that friend. So there was no way to recover the iCloud password. He then insisted that the sale was final, even though I remembered asking him if it was okay to do a reset and he had said yes. I ended up with a very expensive brick. Lesson learned: do a reset before forking over the money.

Update: Apple now provides a Check Activation Lock Status page using the IMEI number, which eliminates the need to do a reset.

The above and more really happened to me. I’m not saying that all Craigslist iPhone sellers were dishonest; I’m saying that almost half of them sold me nonworking iPhones. I did buy several iPhones (one iPhone 5c and three iPhone 4s’) that continue to work fine. So my success rate was just over 50 percent.

In this post, I will provide suggestions to help you to avoid purchasing a nonworking used iPhone. Because it is not possible to be 100 percent safe when buying a used phone, I will also offer suggestions on how to recoup some of your loss, should any occur.

Note: Though I talk about the iPhone specifically, most of the following tips are applicable to Android phones. And while my buying experience involved Craigslist, most of the suggestions also apply to other venues like eBay.

Quick Checklist

For your convenience, I’ve summarized the info in this post into a high-level checklist.

  1. Before meeting, ask for the model and IMEI (or ESN) number of the phone.
    1. Google the model; for example, to make sure you are getting an iPhone 5s, instead of an iPhone 5 which looks the same.
    2. Check the IMEI number using Swappa’s Check Your ESN page (or the specific carrier’s IMEI/ESN status check page if available). Look out for phones that are financed or blacklisted (lost/stolen).
    3. Check Apple’s Check Activation Lock Status page to make sure that the iPhone is not iCloud-locked.
  2. When meeting, double-check that the phone has the model and IMEI number given above.
  3. Check out the phone functions. For example, capture a short video and play it back to test camera, microphone, and speakers.
  4. Insert a working carrier-specific SIM card. Make a phone call and send/receive a text message. (Make sure you bring an appropriately-sized SIM card or SIM adapters.)
  5. Finally hand over the cash for the phone.
  6. If, months later, your phone becomes blacklisted or stops working, consider selling it to international buyers or for parts and repair.

Be An Informed Buyer

Because iPhones are very expensive and tend to become obsolete within a few years, buying used can save a significant amount of money. Of course, the usual downsides of buying used apply: not knowing how the phone has been treated (past water damage, a replaced cracked screen), undisclosed defects (Bluetooth doesn’t work), and the possibility that the phone fell off the back of a truck (that is, was lost or stolen).

In the past, a lost or stolen phone could still be useful because the U.S. carriers did not do a good job of tracking such phones or sharing data with each other. For example, a nonworking AT&T phone with a blacklisted IMEI number could be unlocked for use on T-Mobile, where that IMEI wasn’t blacklisted. However, this all changed in 2013, when the carriers, as mandated by law, started sharing a common database of IMEI numbers. So a phone blacklisted by one carrier is now blacklisted by all the others. (Currently, international carriers do not share IMEI numbers with U.S. carriers.)

With the release of iOS 7.0, Apple introduced the Activation Lock feature, which requires the iCloud password to reset an iPhone that has Find My iPhone enabled. This added another way for a used iPhone to become useless: do a reset without knowing the iCloud password, and the phone activation-locks, after which it cannot be used even as an iPod.

To increase your chances of buying a working used iPhone, it pays to be informed about all the pitfalls and to take steps to mitigate the risks.

Before You Meet The Seller

When looking at ads for iPhones, you will want to watch out for clues. For example, I see ads selling just the iPhone without the wall adapter or any other accessories; most likely, this indicates that the phone was found or stolen. Of course, don’t buy phones advertised with text like “bad, blocked, or blacklisted IMEI (or ESN)”; you won’t be able to use that phone with any U.S. carrier. Good signs are text saying “clean IMEI or ESN”, or a seller who includes the original box (this seller takes very good care of her stuff), wall adapter, and accessories (though I would hesitate to re-use someone else’s earphones).

Ask the seller to provide the IMEI or ESN number. Check the IMEI or ESN status on the carrier’s website, or on Swappa if the carrier doesn’t provide such a tool. Beware of statuses that include words like “financed under contract” or “balance due”. Statuses such as “phone paid off”, “ready for use with network”, and “unknown” (new, unused iPhones or unlocked phones from another carrier have this status on T-Mobile) are good. Of course, avoid phones with a status of “blacklisted”, “blocked”, “lost”, or “stolen”.
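
Before pasting an IMEI into a carrier’s status page, you can sanity-check that the 15-digit number is well-formed: valid IMEIs end in a Luhn check digit. (A passing checksum only means the number is plausible, not that the phone is clean.) A minimal POSIX shell sketch:

```shell
# Return success (0) if the argument is a well-formed 15-digit IMEI
# whose Luhn checksum is valid.
imei_ok() {
  imei=$1
  case $imei in *[!0-9]*|"") return 1 ;; esac   # digits only
  [ ${#imei} -eq 15 ] || return 1               # exactly 15 digits
  sum=0
  i=1
  while [ $i -le 15 ]; do
    d=$(printf %s "$imei" | cut -c $i)
    if [ $((i % 2)) -eq 0 ]; then               # double every 2nd digit
      d=$((d * 2))
      [ $d -gt 9 ] && d=$((d - 9))
    fi
    sum=$((sum + d))
    i=$((i + 1))
  done
  [ $((sum % 10)) -eq 0 ]
}

# Example with a commonly cited test IMEI:
imei_ok 490154203237518 && echo "looks well-formed" || echo "invalid IMEI"
```

A mistyped or made-up IMEI usually fails this check, which saves a round trip to the carrier’s lookup page.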

Unfortunately, you cannot be 100 percent certain even with an IMEI status check. For example, “ready for use with network” or “balance fully paid” could mean that the seller has just made the monthly payment while the iPhone is still being financed. Even a “phone paid off” or “unknown” status is no guarantee. There is a scam in which the seller has phone insurance, sells the phone, and months later declares it lost or stolen in order to receive a replacement. This works because the phone is still associated with the original owner’s account, and there is nothing you can do as a buyer to guard against it. (Probably because of such abuse, T-Mobile had to institute a one-replacement-per-year limitation on their phone insurance plan.)

Note: I have read of people arranging to meet at the cell phone store to move the phone to the new account, but have never done it myself. I have also read that it didn’t work. Doing the transfer does sound like a good idea because it may prevent the original owner from declaring the phone lost or stolen later. Meeting at the store definitely provides the opportunity to ask the service representative if the phone is fully paid off or financed. If the seller refuses to meet at the store or goes silent, you will know that it is probably a lost or stolen phone.

Before You Hand Over The Cash

When you finally meet the seller, double-check that the given IMEI matches the IMEI displayed on the iPhone under Settings, General, and About. The IMEI is also printed on the back of the iPhone, but because an iPhone can be repaired with the back cover from another iPhone, the printed IMEI may not match. If the IMEI on the back is not identical to the IMEI in the iPhone Settings, you are probably looking at an iPhone which had significant repairs done.

Bring a SIM card to test the iPhone to be certain that it will work with your carrier. Research the phone model so you will know what size SIM card is required. You can cut your SIM card down to size or use a SIM adapter to make it larger. With the SIM card inserted, if you get an invalid or locked carrier error, then the phone won’t work with your carrier.

I’ve successfully cut SIM cards with a sharp pair of scissors and one time, I used a nail clipper. Unless you like to live dangerously, I recommend going to a cell phone store where they will usually cut it for free. If you don’t want to cut your existing SIM card or don’t have a SIM card (because you are switching carriers), you can buy a starter SIM card. With a starter or unregistered SIM card, the phone will still connect to the carrier, but you won’t be able to make calls. (The T-Mobile website sells a starter SIM card for $10; however, they usually have frequent sales where the SIM card is free or only costs one penny.)

The discerning buyer will check the iPhone’s model number (under Settings, General, About) to determine if the phone is originally made for the carrier and not an unlocked phone from another carrier. The reason to do so is because an unlocked phone may not have the same capabilities; for example, an unlocked AT&T iPhone 4s will not support T-Mobile LTE 4G speeds. (Again, the model number printed on the back of the iPhone may not match what is in the iPhone’s Settings if the back cover has been replaced with another iPhone’s back cover.)

As a minimum hardware check, I suggest launching the Camera app, switching from the back to the front camera, recording a video, and then playing the video back. This will test the cameras, microphone, and speakers. Also, check that Wi-Fi works if you have access to a hotspot. (Due to a hardware bug, the Wi-Fi feature on some iPhone 4s phones was broken by iOS 7.) If you have an activated SIM card installed, make a phone call and send a text message.

Note: Make sure to ask permission from the seller before you do any modifications to the iPhone, especially before doing the reset below.

If the passcode is enabled, remove it by going into Settings, Passcode or “Touch ID & Passcode”, and selecting “Turn Passcode Off”. Technically, you could remove the passcode by doing a full restore using iTunes, but that is avoidable with this little bit of effort.

Most importantly, go into Settings, iCloud, and manually turn off the “Find My iPhone” feature. (Deleting the iCloud account is not guaranteed to disable the “Find My iPhone” function.) Check Apple’s Check Activation Lock Status page to make sure that the iPhone is not activation-locked. Alternatively, you can reset the iPhone to check the activation lock status. Make sure that you have a SIM card inserted and have access to a wireless network before resetting the phone; the iPhone requires a working SIM card and wireless network in order to fully activate. Upon restart after the reset, you will be prompted for the password if the phone is activation-locked. (I think you might be able to activate the iPhone with a SIM card that has the Internet data plan enabled, but I have not tried it.)

What to Do With a Blacklisted iPhone

The above suggestions offer you better odds of getting a working iPhone. Unfortunately, you cannot protect yourself against the seller reporting the phone as lost or stolen months later. If you end up with an iPhone which you cannot use with your carrier (or even use as an iPod), perhaps my experiences below might help you to recoup some of your loss.

I was able to get a working T-Mobile iPhone 5s by exchanging the blocked iPhone 5s with a non-blocked one for $100. I found an eBay seller, iphoneswaps, whose auction offered to exchange my blocked iPhone for an iPhone (of the same model, color, and carrier) with a clean IMEI. The requirements are that the iPhone be in good, original condition (no major dents or repairs) and that the IMEI is not lost or stolen; my iPhone 5s thankfully met those requirements. I took the gamble and got back what looked to be a refurbished iPhone 5s. I don’t know what made the swap possible; I was just grateful that I ended up with a working iPhone.

If the swap is not available or applicable, you can sell your blocked phone to an international buyer who might be able to unlock it for use with a foreign carrier. I was able to do this when the Android phone that I got my sister quit working after six months because the original owner had stopped making payments (the Android was financed; this was before I learned about IMEI checks). I put the Android phone on eBay with full disclosure about why it was blocked, and it was purchased by a buyer from Texas who told me that he planned to ship it down south for use in Mexico.

I found out later that there is a service, advertised on eBay, which could have swapped out the logic board on my sister’s Android phone for $50. This swap would have given my sister’s Android a new, clean IMEI number. Too late, my sister had already purchased a replacement phone directly from T-Mobile; she didn’t want to deal with the hassle of buying used again. I felt bad but was glad that I was able to recoup half of what she had lost by selling her blocked Android to a cowboy.

I resold the activation-locked iPhone 4s on Craigslist for parts or repair. I got a low offer and took it, resulting in a huge loss. (I had originally posted the iPhone on eBay but got a message saying that eBay no longer allowed the sale of activation-locked iPhones. Strangely, eBay still allows the sale of iPhones with blacklisted IMEI numbers.) After the sale, I realized that I could have purchased a replacement logic board with a clean IMEI for $20 and with some elbow grease, gotten a working iPhone 4s; assuming that I didn’t destroy the phone during the process.

While I applaud the government (which mandated the common IMEI database) and the carriers for working to prevent the sale of lost or stolen phones, I don’t think they have implemented the processes and infrastructure needed to support that intention. For example, the carriers could flag abuse when a phone is suddenly used by another person and, several months later, the original owner declares it lost or stolen. The government could require that a phone’s status be either lost or stolen, but not both (currently, the IMEI database does not seem to distinguish between the two), so that carriers could treat lost phones more leniently. And of course, the carriers should have processes in place to return lost or stolen phones to their original owners. As it stands, innocent used-phone buyers are the only ones paying the price; the perpetrators go unpunished and are actually rewarded.

I’m certain that the info in this post is not comprehensive, but I hope it is a good start in helping you to become a more informed buyer of used iPhones.


Bootable USB Flash Drive to Install Mac OS X 10.10 Yosemite

Mac OS X

Update: Go to Install macOS Sierra Using Bootable USB Flash Drive if you want to install macOS Sierra instead.

In this post, I will go over instructions on how to create a bootable USB flash drive containing the Mac OS X 10.10 Yosemite installer. These instructions will also work for Mac OS X 10.9 Mavericks (excluding a Yosemite-specific step) and differ significantly from the instructions for creating a Mac OS X 10.6 Snow Leopard installer. You will need an 8GB USB flash drive for Mac OS X Yosemite or Mavericks.

I tried several methods which failed to create a bootable USB flash drive before finding one that succeeded. The instructions I found that worked, using Disk Utility, were located at How to Make a Bootable OS X Mavericks USB Install Drive and How to Create a Bootable Install USB Drive of Mac OS X 10.10 Yosemite.

Download Mac OS X 10.10 Yosemite

First, download the latest Mac OS X version, which is 10.10 Yosemite. It is the version currently available for download from the “App Store”. (If you want an earlier version like Mac OS X 10.9 Mavericks, you’ll need to get it from elsewhere.)

Launch “App Store” and search for “OS X Yosemite”. Download it. (It is 5.16GB in size.)

Note: If you run the Yosemite installer to upgrade your Mac, the downloaded installer file will be deleted automatically after the upgrade is completed. To keep that file, you will want to move it out of the Applications folder so it won’t be deleted after an upgrade. Launch the “Terminal” app and run this command to move the downloaded installer app to your user’s “Downloads” folder:

sudo mv /Applications/Install\ OS\ X\ Yosemite.app ~/Downloads/

Create Bootable USB Flash Drive Installer

By default, the Finder will hide system files which we will need to see. Run these commands in the “Terminal” app to expose the hidden files:

# Configure Finder to show hidden system files.
defaults write AppleShowAllFiles TRUE

# Close all Finder instances (and re-launch so settings take effect).
killall Finder

Prepare the USB flash drive:

  1. Plug in a USB flash drive of size 8GB or larger.
  2. Launch the “Disk Utility” to format the USB Flash drive.
  3. On the left-hand pane, select the USB drive (not the partition under it, if any).
  4. Click on the “Erase” tab, select “Mac OS Extended (Journaled)” for “Format” and input a name like “Install Yosemite” (or anything because this name will be overwritten later).
  5. Click the “Erase…” button at the bottom and then the “Erase” button in the popup dialog. This format operation should take less than a minute to complete.

Restore the Yosemite installation image to the USB flash drive:


  1. Launch the Finder and locate the “Install OS X Yosemite” app. Right-click (hold the “control” key and click) on it and select “Show Package Contents”.
  2. Open Contents, then SharedSupport, and double-click on the InstallESD.dmg (disk image) file to mount it. A volume called “OS X Install ESD” will show up on the desktop and under DEVICES in the Finder.
  3. In the “OS X Install ESD” volume, right-click on the “BaseSystem.dmg” file and select “Open” to mount it. (Double-click won’t perform any action because it is a hidden file.)
  4. Use Disk Utility to clone the “BaseSystem.dmg” to the USB flash drive:

    1. Select the “BaseSystem.dmg” in the left-hand pane and click on the “Restore” tab. The “Source” field will be populated with “BaseSystem.dmg”.
    2. Drag the “Install Yosemite” partition under the USB flash drive to the “Destination” field.
    3. Click the Restore button and then the Erase button.
    4. The USB flash drive will be written with the contents of “BaseSystem.dmg” file. Depending on the speed of your USB flash drive, it may take several minutes or longer to complete this operation.
    5. Once complete, the “Install Yosemite” partition will be renamed to “OS X Base System”.


  5. Use the Finder to navigate to the USB flash drive. You will see two “OS X Base System” volumes in the Finder’s left-hand pane. The USB flash drive is the last one.
  6. Under the USB flash drive’s “OS X Base System” partition, open the “System/Installation” folder. You will see an alias file named “Packages”. Delete it because we will replace it with a “Packages” folder below.
  7. Use a second Finder window to open the “OS X Install ESD” volume. (To open a second Finder window, you can use the Finder menu’s “File/New Finder Window” command.)
  8. Copy the “Packages” folder from the “OS X Install ESD” volume to the USB flash drive’s “System/Installation” folder.

  9. Required for Yosemite (not required for Mavericks): Copy the “BaseSystem.chunklist” and “BaseSystem.dmg” files from the “OS X Install ESD” volume to the USB flash drive’s root “/” folder. If you don’t do this, you will get an “undefined error 0” when attempting to install Yosemite.
  10. The USB flash drive is now complete. You can use it to boot a Mac to install Mac OS X 10.10 Yosemite.
  11. Unmount all the Yosemite installer volumes by ejecting them; you must eject “OS X Base System” before “OS X Install ESD”.

Re-configure the Finder to hide system files. Run these commands in the “Terminal” app:

# Configure Finder to not show hidden system files.
defaults write AppleShowAllFiles FALSE

# Close all Finder instances (and re-launch so settings take effect).
killall Finder

Boot With USB Flash Drive

To boot a Mac with the USB flash drive:

  1. Insert the USB flash drive.
  2. While holding the “option/alt” key down, turn on the Mac to display the boot Startup Manager.
  3. You should see one or two icons, one for the internal hard drive and/or another called “OS X Base System” for the USB flash drive. (The internal hard drive may not be visible if it does not have a bootable partition installed.)
    • Note: If you don’t see the USB flash drive’s “OS X Base System”, try removing and re-inserting the USB flash drive while viewing the Startup Manager screen. The USB flash drive should then appear after a few seconds.
  4. Select the “OS X Base System” and hit the “return/enter” key to boot from the USB flash drive.

Hopefully, this post will help you to create your own bootable USB flash drive installer for Mac OS X 10.10 Yosemite or Mac OS X 10.9 Mavericks.


Subversion Over SSH on an Unmanaged VPS

Linux

See my previous post, Upgrade Ubuntu and LEMP on an Unmanaged VPS, to learn how to upgrade LEMP and Ubuntu to the latest versions. In this post, we will install Subversion on the server and learn how to access it using Subversion over SSH (svn+ssh).

Note: Though I’m doing the work on a DigitalOcean VPS running Ubuntu, the instructions may also apply to other VPS providers.

Subversion allows a client to execute svn commands on the server over SSH. As a result, there is no need to have a Subversion server process (svnserve) running or an Apache server configured to support Subversion (mod_dav_svn); one only needs SSH access. Subversion over SSH is simple and sufficient for my needs.

For svn+ssh, access to Subversion is controlled by the Linux user login. To avoid having to input your SSH login password every time you run a svn command, I recommend configuring SSH with public key authentication between your client and the server. For instructions, see the “SSH With Public Key Authentication” section in my previous post, SSH and SSL With HostGator Shared Web Hosting.
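
As a sketch of that setup (the key path is arbitrary, and the host, port, and user below are placeholders): generate a key pair with ssh-keygen, then install the public key on the server, for example with ssh-copy-id:

```shell
# Generate an RSA key pair into a scratch directory. No passphrase here
# for brevity; use one in practice.
keyfile=$(mktemp -d)/id_rsa_vps
ssh-keygen -t rsa -b 4096 -N "" -f "$keyfile" -q

# Install the public key on the server (uncomment and adjust the
# placeholder user/host/port to your own server):
# ssh-copy-id -i "$keyfile.pub" -p 3333 mynewuser@myserver.example.com

# Afterward, "ssh -p 3333 mynewuser@myserver.example.com" should log in
# without prompting for the account password.
```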

To begin, on the server, install the Subversion package and create a repository:

# Install subversion
sudo apt-get install subversion

# Check that subversion is installed
svn --version

# Make a repository directory
sudo mkdir /var/repos

# Create a repository
sudo svnadmin create /var/repos

We need to change the permissions on the newly-created repository directory so that our Linux user can have read-write access. I recommend adding your user to the ‘www-data’ group and giving that group modify access to the repository like so:

# Change mynewuser's primary group to www-data
sudo usermod -g www-data mynewuser

# Check by showing all groups that mynewuser belongs to
groups mynewuser

# Change repository group owner to be www-data
sudo chgrp -R www-data /var/repos

# Add group write permission to repository
sudo chmod -R g+w /var/repos

On the remote client machine, we will use the Subversion client with svn+ssh to access the repository. Because we are using a custom SSH port and the Subversion command line does not provide an option to input the SSH custom port, we have to configure SSH to use the custom port automatically.

Configure SSH to use the custom port when connecting to your server by creating a SSH configuration file located at “~/.ssh/config” (on Mac OS X) or “%HOME%/.ssh/config” (on Windows). Input the following file content (replace “myserver.example.com” with your server’s hostname):

  Host myserver.example.com
  Port 3333
  PreferredAuthentications publickey,password

After this, you can run “ssh myserver.example.com” instead of “ssh -p 3333 myserver.example.com” because SSH will use the custom 3333 port automatically when connecting to that hostname.
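
Equivalently, you can append the entry from the command line. The host alias, hostname, user, and port below are placeholders; a Host alias also lets you type a short name instead of the full hostname:

```shell
# Append a host entry to ~/.ssh/config, creating the file if needed.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host vps
    HostName myserver.example.com
    User mynewuser
    Port 3333
    PreferredAuthentications publickey,password
EOF
chmod 600 ~/.ssh/config

# Now "ssh vps" connects with the right host, user, and port.
```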

Note: On Windows, I am using the DeltaCopy “ssh.exe” client in combination with the CollabNet “svn.exe” Subversion client. On Mac OS X, I am using the built-in ssh client and the svn client (installed using MacPorts).

To test access to the repository, run the following command on the client (“mynewuser@myserver.example.com” stands in for your own user and server):

# List all projects in the repository.
svn list svn+ssh://mynewuser@myserver.example.com/var/repos

This command will return an empty line because there are no projects in the repository currently. If you do not see an error, then the command works correctly.

On the client, you can now issue the standard Subversion commands like the following:

# Import a project into the repository
svn import ./myproject svn+ssh://mynewuser@myserver.example.com/var/repos/myproject -m "Initial Import"

# The list command should now show your newly-imported project
svn list svn+ssh://mynewuser@myserver.example.com/var/repos

# Check out a local, working copy of the project from the repository
svn co svn+ssh://mynewuser@myserver.example.com/var/repos/myproject ./myproject2

# View the working copy's info (no need to input the svn+ssh URL once inside the project)
cd ./myproject2
svn info

# Update the project to the latest version
svn update

If you should wish to run Subversion commands locally on the server, you can do so using the “file:///” path instead of “svn+ssh://” URL.

# List all projects in the repository.
svn list file:///var/repos

# Check out a local, working copy of the project from the repository
svn co file:///var/repos/myproject ./myproject2

And we are done. Hopefully the above info will be useful should you ever need to get Subversion working.

See my followup post, Automate Remote Backup of WordPress Database, on how to create and schedule a Windows batch script to backup the WordPress database.


Upgrade Ubuntu and LEMP on an Unmanaged VPS

Linux

See my previous post in my unmanaged VPS (virtual private server) series, Nginx HTTPS SSL and Password-Protecting Directory, to learn how to configure Nginx to enable HTTPS SSL access and password-protect a directory. In this post, I will explore how to upgrade LEMP and Ubuntu.

Upgrade LEMP

While one can upgrade each component of LEMP (Linux, Nginx, MySQL, PHP) separately, the safest way is to upgrade all software components installed on the system to ensure that the dependencies are handled properly.

Upgrade all software packages, including LEMP, by running the following commands:

# Update apt-get repositories to the latest with info
# on the newest versions of packages and their dependencies.
sudo apt-get update

# Use apt-get dist-upgrade, rather than apt-get upgrade, to
# intelligently handle dependencies and remove obsolete packages.
sudo apt-get dist-upgrade

# Remove dependencies which are no longer used (frees up space)
sudo apt-get autoremove

Some changes may require a reboot. To initiate one, run the following command:

# Following command equivalent to: sudo shutdown -r now
sudo reboot

Updating PHP-FPM Breaks WordPress

If the PHP-FPM (FastCGI Process Manager for PHP) package is updated, one may be prompted to overwrite the “/etc/php5/fpm/php.ini” and “/etc/php5/fpm/pool.d/www.conf” configuration files with the latest versions. I recommend selecting the option to show the differences, making a note of the differences (hitting the “q” key to quit out of the compare screen), and accepting the latest version of the files.
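
When you answer the conffile prompt, dpkg also leaves the alternate version on disk: your old file is saved with a “.dpkg-old” suffix if you accept the new one, or the packaged version is saved as “.dpkg-dist” if you keep yours. Either way, you can review the differences later with diff. A toy illustration using scratch files (on a real system you would diff the actual php.ini against its .dpkg-old/.dpkg-dist sibling):

```shell
# Scratch files standing in for the old and new php.ini:
old=$(mktemp); new=$(mktemp)
printf 'cgi.fix_pathinfo=1\n' > "$old"
printf 'cgi.fix_pathinfo=0\n' > "$new"

# On a real system you might run, e.g.:
#   diff -u /etc/php5/fpm/php.ini.dpkg-old /etc/php5/fpm/php.ini
diff -u "$old" "$new" || true   # diff exits nonzero when files differ
```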

After the upgrade, WordPress may be broken because the PHP-FPM is no longer configured correctly. To fix this issue, update the two PHP-FPM configuration files with these changes to ensure that Nginx will successfully integrate with PHP-FPM:

# Fix security hole by forcing the PHP interpreter to only process the exact file path.
sudo nano /etc/php5/fpm/php.ini
   # Add the following or change the "cgi.fix_pathinfo=1" value to:
   cgi.fix_pathinfo=0

# Configure PHP to use a Unix socket for communication, which is faster than the default TCP socket.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Keep the following or change the "listen =" value to:
   listen = /var/run/php5-fpm.sock
   # The latest Nginx has modified security handling which requires
   # uncommenting the "listen.owner" and "" properties:
   listen.owner = www-data = www-data
   ;listen.mode = 0660

# Restart the PHP-FPM service to make the changes effective.
sudo service php5-fpm restart

Test by browsing to the “info.php” file (containing the call to the “phpinfo” function) to ensure that Nginx can call PHP-FPM successfully. Hopefully, you won’t see the “502 Bad Gateway” error, which would mean the integration failed. If you do, look at the Nginx and PHP-FPM error log files for hints on what went wrong.

sudo tail /var/log/nginx/error.log
sudo tail /var/log/php5-fpm.log

Note: If you accidentally select the option to keep the current version of the PHP-FPM configuration files and now wish to get the latest versions, you will need to uninstall and re-install the PHP-FPM service:

sudo apt-get purge php5-fpm
sudo apt-get install php5-fpm

You will then need to update the two PHP-FPM configuration files per the instructions above.

Upgrade May Break iptables

After a recent upgrade, a “problem running iptables” error message was displayed when logging into the droplet. The full error appears when I attempt to view the firewall status:

~$ sudo ufw status
ERROR: problem running iptables: modprobe: ERROR: could not insert 'ip_tables': Exec format error
iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

Thanks to this page, problem with iptables and Ubuntu 13.10, I found that the issue was caused by the upgrade process switching the kernel to a 64-bit version. The problem is that the rest of the system (executables, object code, shared libraries) is 32-bit!

# Check the kernel version (x86_64 means 64-bit)
~$ uname -a
Linux mydomain 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

# Check the system executables and libraries ("ELF 32-bit" means 32-bit!)
~$ file /sbin/init
/sbin/init: ELF 32-bit LSB  shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=c394677bccc720a3bb4f4c42a48e008ff33e39b1, stripped

To fix the 64-bit/32-bit mismatch, I did the following:

  1. Browse to the DigitalOcean web interface, drill into the droplet, and select the “Kernel” configuration (on the left panel).
  2. Select the 32-bit version of the kernel, “Ubuntu 14.04 x32 vmlinuz-3.13.0-39-generic” (the only difference from the current kernel “Ubuntu 14.04 x64 vmlinuz-3.13.0-39-generic” is “x32” in place of “x64”), and click the Change button.
  3. Power down the droplet by running the “sudo poweroff” command.
  4. Use the DigitalOcean web interface to power on the droplet.
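Once the droplet is back up, you can confirm that the kernel and userland architectures now agree, using the same two commands as above:

```shell
# Kernel architecture: i686/i386 indicates a 32-bit kernel, x86_64 a 64-bit one.
uname -m

# Userland architecture: "ELF 32-bit" should now match the 32-bit kernel.
file /sbin/init
```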

After doing the above, I no longer see the “problem running iptables” error message. Viewing the firewall status now successfully returns the correct set of rules:

~$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
3333/tcp                   ALLOW       Anywhere
80                         ALLOW       Anywhere
25/tcp                     ALLOW       Anywhere
443                        ALLOW       Anywhere

Note: Be patient because the DigitalOcean web interface can take a minute to reflect that the droplet is powered off (and then enable the Power On button). Also, the first two times I tried to power on the droplet, I got timeout errors. The third attempt didn’t do anything. Finally, the fourth attempt successfully powered on the droplet. Whew!

Upgrade Ubuntu

The following is particular to my VPS provider, DigitalOcean, but perhaps it may help provide a general idea on what to expect with your own provider when doing an operating system upgrade.

On logging into my server, I saw the following notice:

New release '14.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Your current Hardware Enablement Stack (HWE) is no longer supported
since 2014-08-07.  Security updates for critical parts (kernel
and graphics stack) of your system are no longer available.

For more information, please see:

To upgrade to a supported (or longer supported) configuration:

* Upgrade from Ubuntu 12.04 LTS to Ubuntu 14.04 LTS by running:
sudo do-release-upgrade

Update: One does not necessarily have to upgrade to the latest Ubuntu release version when prompted to. However, in the case above, support for the 12.04 LTS release had ended so an upgrade to 14.04 LTS was mandatory. Recently, I got a message to upgrade from release 14.04 LTS to 16.04 LTS. However, I don’t plan to upgrade because the 14.04 LTS release will be supported until 2019.
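You can check where you stand before committing to anything. The sketch below (guarded so it degrades gracefully on non-Ubuntu systems) reports the installed release and whether a new one is offered, without starting the upgrade:

```shell
# Show the installed release description.
lsb_release -d 2>/dev/null || head -2 /etc/os-release

# On Ubuntu, the -c flag only checks whether a newer release is offered;
# it reports the result without modifying the system.
if command -v do-release-upgrade >/dev/null 2>&1; then
    do-release-upgrade -c || echo "No new release available (or check failed)."
fi
```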

When I ran “sudo do-release-upgrade”, there was a dire warning about running the upgrade over SSH (which I ignored) and many prompts to overwrite configuration files with newer versions (which I accepted after noting the differences between the new and old versions). There was also a warning that the upgrade could take hours to complete, though it ended up taking less than 15 minutes. The upgrade ended with a prompt to reboot, which I accepted.

Note: To be safe, one should run the “sudo do-release-upgrade” command from the Console window (accessible through the DigitalOcean web interface) instead of from an SSH session. I was lucky that nothing went wrong with the release upgrade.

After reboot, I updated the two PHP-FPM configuration files, “/etc/php5/fpm/php.ini”
and “/etc/php5/fpm/pool.d/www.conf”, per the instructions in the above section.

In addition, I had to re-enable sudo permissions for my user by running the following:

# visudo opens /etc/sudoers using vi or nano, whichever is the configured text editor.
# It is equivalent to "sudo vi /etc/sudoers" or "sudo nano /etc/sudoers" but validates the syntax.
sudo visudo
   # Add mynewuser to the "User privilege specification" section:
   root       ALL=(ALL:ALL) ALL
   mynewuser  ALL=(ALL:ALL) ALL

I found the upgrade process, especially upgrading the Ubuntu operating system, to be a relatively painless experience. Hopefully you will find it to be the same when you do your upgrade.

See my followup post, Subversion Over SSH on an Unmanaged VPS, to learn how to install and use Subversion over SSH (svn+ssh).

Some info above derived from:

No Comments

Update to Latest Subversion Using MacPorts

Mac OS X No Comments

Because I use MacPorts to install my development tools on Mac OS X, installing or updating Subversion is simple, consisting of a single command.

Install MacPorts

If you don’t already have MacPorts, go ahead and install it. MacPorts depends upon Xcode and the Xcode Command Line Tools. Instructions for installing both are provided on the MacPorts website.


  • MacPorts is specific to the Mac OS X version, so download and install the correct version.
  • After you install the Xcode Developer Tools (free from the Mac’s App Store), run this Terminal command to install the Command Line Developer Tools:
    xcode-select --install

Update MacPorts

Before installing or updating Subversion, you will want to update MacPorts by issuing this command:

sudo port -v selfupdate

Install Subversion

To install Subversion, issue a MacPorts command to install it like so:

sudo port install subversion subversion-javahlbindings

The Subversion JavaHL Bindings (“subversion-javahlbindings”) package is necessary to support integration with Eclipse, specifically using the Subclipse plugin. Thank heaven that MacPorts got around to supporting the Subversion JavaHL Bindings installation. Previously, I had to manually find a compatible version of the JavaHL Bindings, download it, and install it myself.

Note: When installing the Eclipse Subclipse plugin, you will need to select the specific Subclipse version that uses a Subversion version that is the same as your installed Subversion and JavaHL Bindings. The version numbers don’t match so you will need to look at the Subclipse documentation to determine which version of Subclipse to install. For example, Subclipse 1.10 uses the latest Subversion 1.8.

Update Subversion

You can update Subversion specifically or update all outdated MacPorts-installed packages by issuing these commands:

# Update only Subversion and JavaHL Bindings
sudo port -v upgrade subversion subversion-javahlbindings

# Update all outdated installed packages including Subversion
sudo port -v upgrade outdated

Install or Update Subclipse

To install or update the Eclipse Subclipse plugin, you will use the same installation instructions. Subclipse doesn’t have a separate update mechanism. To update Subclipse, you would basically install a newer version of it (without needing to remove the older version first).

Note: Eclipse has a menu item, Help->Check for Updates, which will update itself and supported plugins; unfortunately, Subclipse does not support this function.

To install or update Subclipse, follow these steps:

  1. Go to Eclipse menu: Help->Install New Software…
  2. Input “” into the “Work with” field and the table will be updated with installation packages available at that location. (Note: Subclipse 1.10 uses the latest Subversion 1.8.)
  3. Check just the Subclipse package and keep clicking Next until the end. Half-way through, you will be asked to accept the license agreement. Select the “I accept the terms of the license agreements” radio button and click Finish.
  4. You will get a security warning popup with the message, “Warning: You are installing software that contains unsigned content.” Click the OK button to proceed.
  5. Eclipse will need to restart. You will be prompted with a “Software Updates” popup asking “You will need to restart Eclipse for the changes to take effect. Would you like to restart now?” Answer Yes.

Use Older Subversion

MacPorts allows you to select an older version of its packages for use, instead of using the latest version. This is useful in case you do an update and realize that you can’t use the latest version of a particular package, perhaps due to software version incompatibility with one of your tools or applications. For example, because the latest version of Subclipse may not support the latest version of Subversion, you may need to force the use of the previous version of Subversion.

To see all the installed versions of Subversion, run this command:

sudo port installed | grep -i subversion

You should see something like the following output:

subversion @1.7.8_2
subversion @1.7.10_1
subversion @1.8.8_0 (active)
subversion-javahlbindings @1.7.8_2
subversion-javahlbindings @1.7.10_0
subversion-javahlbindings @1.8.8_0 (active)

To activate the previous version of Subversion, use these commands:

sudo port activate subversion @1.7.10_1
sudo port activate subversion-javahlbindings @1.7.10_0

If you are using the latest Subversion and want to uninstall all the older versions, run either of these commands:

# To uninstall a specific version of Subversion
sudo port uninstall subversion @1.7.10_1
sudo port uninstall subversion-javahlbindings @1.7.10_0

# To uninstall inactive versions for all packages including Subversion
sudo port uninstall inactive

I’m very glad that MacPorts exists to make installations and updates so painless.

Eclipse Keeps Asking For Subversion Password

I encountered a bug where Eclipse kept prompting me to input the Subversion password whenever I attempted to run a Subversion command such as update. Even though I checked the save password option, Eclipse would still prompt me each time. I did not encounter this issue using the command line Subversion, so I thought it was a Subclipse bug.

Turns out that this was an Eclipse bug, involving how Eclipse interacted with the Mac OS X Keychain where the subversion password was stored. I used the solution found at the bottom of this page, Subclipse 1.10.0 not saving passwords, to update the Eclipse code signature, which eliminated the password prompts.

Quit Eclipse and run this command:

codesign --force --sign - /Applications/eclipse/

Run Eclipse and issue a Subversion command like update. If you get a Keychain access dialog, select “Always Allow”.

Note: The above command will also fix the problem where the latest Eclipse Mars.2 version keeps asking for permission to “Accept Incoming Network Connections” on startup. Just run the “codesign” command and you will only need to answer that prompt once.

codesign --force --sign - /Applications/
No Comments

The Internet’s Future is Blacklisted

Internet No Comments

Over the weekend, I signed up for a shared web hosting plan because of a special deal. I spent a day setting up the host, migrating a website, and testing to make sure it worked. On Monday at work, I checked on the status of my website. Imagine my surprise when I got a security warning that my website was dangerous, known to host viruses and spyware. How could this be? This was a respectable website which I had just moved to a new server.

It turns out that my work’s Intranet is protected by a network security appliance called Ironport. Ironport in turn depends upon SenderBase, a blacklist service that identifies dangerous websites. The blacklist is keyed off the IP address. The new server’s IP address was flagged and thus, anything hosted on it (like my website) inherits the negative status.

When we get a shared web hosting account, we are assigned one of the servers with available capacity. Now, why would that server have excess capacity? Perhaps a previous user was kicked out for bad behavior, like distributing viruses, spyware, or spam. That user’s bad behavior got the IP address blacklisted. And now, I am the proud owner of that banned IP address.

Note: The above doesn’t just apply to shared web hosting. If you get a private server or virtual private server, the provider company will give you an available IP address. That IP address could have belonged to someone previously who had misbehaved.

So maybe I and others whose companies use network security appliances can’t browse to my website. So what, we’re supposed to be working, right? Unfortunately, it turns out that email is also affected. If you expect to send and receive mail using your server, the server’s blacklisted IP address could cause all the email traffic to and from your server to get bounced (not delivered).

Worse, as far as I can tell, once the IP address is blacklisted, it is very hard to get that status removed. You’ll have to hope that your hosting provider is motivated enough to go through the hassle of engaging one or more blacklisting companies to remove that negative status. Even if your provider is willing, it will take time before the IP address is cleared.

Having learned my lesson, I suggest that the first thing you do after getting a web hosting or private server account is to check that its IP address is not blacklisted. You can check the IP address on the following websites:

Note: Not all of the blacklists are widely used, so it may be okay for the IP address to be on one or two blacklists. However, to be on the safe side, it is best to have an IP address which doesn’t appear on any blacklist.
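Many of those sites front DNS-based blacklists (DNSBLs), which you can also query yourself: the IP’s octets are reversed and prepended to the blacklist zone. A sketch using the zen.spamhaus.org zone and a documentation IP (203.0.113.7) as illustrative stand-ins:

```shell
ip="203.0.113.7"   # example address from the RFC 5737 documentation range

# DNSBL convention: reverse the octets and prepend them to the zone name.
reversed=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
query="$reversed.zen.spamhaus.org"
echo "$query"   # -> 7.113.0.203.zen.spamhaus.org

# Resolving the query name (e.g. "dig +short $query") and getting a
# 127.0.0.x answer means the IP is listed; NXDOMAIN means it is clean.
```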

If your IP address is blacklisted, ask your hosting provider company for another. If the company won’t accommodate you, then cancel and go with one that will. Believe me, doing so will avoid a lot of wasted effort and work. You don’t want a customer browsing to your company website only to get a stern warning that your website is known to distribute viruses.

I am afraid that I am seeing the future of the Internet. As security concerns grow, companies will invest in solutions, like network security appliances, that make use of blacklists (and maybe whitelists). Heck, if I were in charge of my company’s network security, a network security appliance would be the minimum I would advocate. I would take more drastic steps like locking down inbound and outbound ports and aggressively running heuristic checks on all internal traffic to detect viruses and spyware.

No Comments

Make Mac Screen Lock Secure and Convenient

Mac OS X No Comments

The Macbook I got for work is configured to require the password after the screensaver turns on or the display goes to sleep. By default, the screen is set to sleep after 2 minutes of inactivity on battery and 10 minutes on power adapter. When I work on two computers, alternating between the Macbook and a desktop, I hate having to keep inputting the password on the Macbook to unlock it.

I understand the need for security, but I draw the line when it makes using the Macbook too inconvenient. I don’t want to eliminate the password requirement, I just want the screen locks (which require the password to exit from) not to occur so often.

I considered adjusting the power settings so that the Macbook won’t go to sleep until an hour of inactivity occurs on either battery or power adapter. (Likewise, changing the screen saver to wait an hour.) However, making such a change would cause the battery usage to increase (the display uses a lot of power) and require a shorter interval between charges. (To preserve the battery capacity, I usually use the battery until it is very low before charging. And when charging, I try to give it an opportunity to charge to 100 percent.) While I don’t use the Macbook differently on battery versus power adapter, having to charge and being tethered to the wall socket more often is inconvenient.

I found the solution in “System Preferences”, under the “Security & Privacy” section. There is an option named “Require password [time interval] after sleep or screen saver begins” that controls when the screen lock activates. I changed the time interval from the initial 5 seconds to 1 hour. (There are 7 selectable time intervals, ranging from immediately to “4 hours”.) Now, when the screen saver runs or the Macbook goes to sleep (for example, when I close the lid), I don’t need to input the password if I wake the Macbook before the 1 hour interval expires.

This setting gave me a good compromise between security and convenience. I am not required to input the password for any inactivity less than an hour and I can leave the power (and screen saver) settings on battery conservation mode.

But what if I need to put the Macbook immediately into screen lock mode? The answer surprisingly lies in the “Keychain Access” application. To support manually locking the Mac, do the following:

  1. Run the “Keychain Access” application (under /Applications/Utilities directory).
  2. Go to the “Keychain Access” menu, Preferences, General, and check the “Show keychain status in menu bar” option.

You should now see a lock icon on the top-right menu bar. When you want to manually lock the Mac, click on the lock icon and select “Lock Screen”.

Hopefully the above will help you to secure your Mac without making it too inconvenient to use.

Note: The “Lock Screen” method above came from Quickly lock your screen. Unfortunately, on Mac OS X Mountain Lion, the re-arrange menu bar icon function (hold Cmd and drag the icon left or right) didn’t work, so I was not able to set up a keyboard shortcut for “Lock Screen”.

No Comments

Nginx HTTPS SSL and Password-Protecting Directory

Linux 1 Comment

See my previous post in my unmanaged VPS (virtual private server) series, Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists, to learn how to configure multiple domains and get Postfix email and Mailman mailing lists working. In this post, I will configure Nginx to enable HTTPS SSL access and password-protect a directory.

Note: Though I’m doing the work on a DigitalOcean VPS running Ubuntu LTS 12.04.3, the instructions may also apply to other VPS providers.

Enable HTTPS/SSL Access

I have a PHP application which I want to secure. If I use HTTP, then the information sent back from the server to my browser is in clear text (and visible to anyone sniffing the network). If I use HTTPS (HTTP Secure) with an SSL (Secure Sockets Layer) server certificate, then the information will be encrypted. In the steps below, I will configure HTTPS/SSL to work for a domain and then force HTTPS/SSL access on a particular directory (where the PHP application would be located).

To get HTTPS working, we need an SSL server certificate. While you can get a 3rd party certificate authority to issue an SSL certificate for your domain for about $10 per year, I only need a self-signed certificate for my purpose. A 3rd party issued SSL certificate is convenient because if the browser trusts the 3rd party certificate authority by default, the browser won’t prompt you to accept the SSL certificate like it would for a self-signed certificate (for which the browser can’t establish a chain of trust). If you run a business on your website, I recommend investing in a 3rd party SSL certificate so that your website behaves professionally.

Create a self-signed SSL server certificate by running these commands on the server:

Note: You don’t need to input the lines that start with the pound character # below because they are comments.

# Create a directory to store the server certificate.
sudo mkdir /etc/nginx/ssl

# Change to the newly-created ssl directory.  Files created below will be stored here.
cd /etc/nginx/ssl

# Create a private server key.
sudo openssl genrsa -des3 -out server.key 1024
   # Remember the passphrase you entered; we will need it below.

# Create certificate signing request.
# (This is what you would send to a 3rd party authority.)
sudo openssl req -new -key server.key -out server.csr
   # When prompted for common name, enter your domain name.
   # You can leave the challenge password blank.

# To avoid Nginx requiring the passphrase when restarting,
# remove the passphrase from the server key. (Otherwise, on
# reboot, if you don't input the passphrase, Nginx won't run!)
sudo mv server.key server.key.pass
sudo openssl rsa -in server.key.pass -out server.key

# Create a self-signed certificate based upon certificate request.
# (This is what a 3rd party authority would give back to you.)
sudo openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt

Note: I set the certificate expiration time to 3650 days (10 years); 3rd party certificates will usually expire in 365 days (1 year). The maximum expiration days you can input is dependent upon the OpenSSL implementation. Inputting 36500 days (100 years) would probably fail due to math overflow errors (once you convert 100 years into seconds, the value is too big to store in a 32bit variable). I believe the highest you can go is about 68 years, but I haven’t tested it.
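To confirm what expiration actually landed in the certificate, openssl can print the validity window. A self-contained sketch that generates a throwaway 10-year certificate in temp files (the real file created above is “/etc/nginx/ssl/server.crt”) and reads its dates back:

```shell
# Throwaway key/cert paths; illustrative CN.
key=$(mktemp); crt=$(mktemp)

# One-step self-signed certificate (no passphrase).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=example.test" -keyout "$key" -out "$crt" 2>/dev/null

# Print the notBefore/notAfter validity window.
openssl x509 -in "$crt" -noout -dates

rm -f "$key" "$crt"
```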

Configure Nginx to use the SSL server certificate we created by editing the server block file for the domain you want to use it on:

sudo nano /etc/nginx/sites-available/mydomain2

In the “mydomain2” server block file, find the commented-out “HTTPS server” section at the bottom, uncomment it, and edit it to look like the following:

# HTTPS server
server {
        listen 443;

        root /var/www/mydomain2;
        index index.php index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

#       ssl_session_timeout 5m;
#       ssl_protocols SSLv3 TLSv1;
#       ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
#       ssl_prefer_server_ciphers on;

        location / {
                try_files $uri $uri/ /index.php;
        }

        # pass the PHP scripts to PHP-FPM
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}
Note: The “HTTPS server” section looks like the “HTTP server” section we configured previously at the top, except for the addition of “listen 443” (port 443 is the HTTPS port) and the SSL-enabling directives.

Open up the HTTPS port in the firewall and reload Nginx by running these commands on the server:

# Allow HTTPS port 443.
sudo ufw allow https

# Double-check by looking at the firewall status.
sudo ufw status

# Reload Nginx so changes can take effect.
sudo service nginx reload

Test by browsing to “”. When the browser prompts you to accept the self-signed server certificate, answer Yes.

Require HTTPS/SSL Access on a Directory

To require HTTPS/SSL-only access on a particular subdirectory under the domain, we need to add a directive to the domain’s HTTP Server to redirect to the HTTPS Server whenever a browser accesses that directory.

Note: Apache uses a .htaccess file to allow users to configure such actions as redirecting or password-protecting directories. Nginx does not use .htaccess; instead, we will put such directives in the server block files.

Create a secure test directory by running these commands on the server:

# Create a secure test directory.
sudo mkdir /var/www/mydomain2/secure

# Create a secure test page.
sudo nano /var/www/mydomain2/secure/index.html
   # Input this content:
   This page is secure!

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2/secure

Edit the domain’s server block file by running this command on the server:

sudo nano /etc/nginx/sites-available/mydomain2

In the “mydomain2” server block file, in the “HTTP server” section at the top (not the “HTTPS server” section at the bottom), add these lines to do the redirect:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6

        # Redirect to port 443.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        # Note: ^~ means match anything that starts with /secure/
        location ^~ /secure/ {
                rewrite ^ https://$host$request_uri permanent;
        }

        location / {

Reload Nginx so the changes above can take effect.

sudo service nginx reload

Test by browsing to “http://mydomain2/secure/” and the browser should redirect to “https://mydomain2/secure/”.

Password-Protect a Directory

By password-protecting a directory (aka requiring basic authentication), when a browser accesses that directory, the user will get a dialog asking for the user name and password. To get this functionality working, we will create a user and password file and configure the Nginx server block to require basic authentication based upon that file.

Note: Accessing a password-protected directory over HTTP would result in the user and password being sent in clear text by the browser to the server.
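The clear-text caveat is easy to demonstrate: Basic Authentication merely base64-encodes “user:password” into the Authorization header, and base64 is trivially reversible (credentials below are placeholders):

```shell
# What the browser actually sends in "Authorization: Basic ..."
printf 'myuser:mysecret' | base64     # -> bXl1c2VyOm15c2VjcmV0

# Anyone sniffing HTTP traffic can decode it just as easily.
printf 'bXl1c2VyOm15c2VjcmV0' | base64 -d
```

This is why basic authentication should be combined with the HTTPS/SSL setup from the previous section whenever the content matters.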

Create a protected test directory by running these commands on the server:

# Create a protected test directory.
sudo mkdir /var/www/mydomain2/protect

# Create a protected test page.
sudo nano /var/www/mydomain2/protect/index.html
   # Input this content:
   This page is password-protected!

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2/protect

We will need a utility from Apache to create the user and password file. Run this command on the server to install and use it:

# Install htpasswd utility from Apache.
sudo apt-get install apache2-utils

# Create a user and password file using htpasswd.
sudo htpasswd -c /var/www/mydomain2/protect/.htpasswd myuser

# Add an additional user using htpasswd without "-c" create parameter.
sudo htpasswd /var/www/mydomain2/protect/.htpasswd myuser2

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown www-data:www-data /var/www/mydomain2/protect/.htpasswd

Note: If you move the “.htpasswd” file to another location (say, not under the domain’s document root), make sure that the “www-data” user or group can access it; otherwise, Nginx won’t be able to read it.
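If apache2-utils isn’t handy, an equivalent entry can be generated with openssl, since htpasswd’s default scheme on Linux is Apache’s MD5-based “apr1”; the user name and password below are placeholders:

```shell
# Generate an apr1 (Apache MD5) password hash, the same scheme
# htpasswd uses by default on Linux.
hash=$(openssl passwd -apr1 'mysecret')

# An .htpasswd line is simply "user:hash"; append it to the file by hand.
echo "myuser:$hash"
```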

Edit the Nginx server block file by running this command on the server:

sudo nano /etc/nginx/sites-available/mydomain2

In the “mydomain2” server block file, in the “HTTP server” section at the top (not the “HTTPS server” section at the bottom), add these lines to password-protect the “/protect” directory:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6

        # Password-protect directory.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        # Note: ^~ means match anything that starts with /protect/
        location ^~ /protect/ {
                auth_basic "Restricted"; # Enable Basic Authentication
                auth_basic_user_file /var/www/mydomain2/protect/.htpasswd;
        }

        location / {

        # Uncomment this section to deny access to .ht files like .htpasswd
        # Recommended to copy this to the HTTPS server below also.
        location ~ /\.ht {
                deny all;
        }

The “^~” in “location ^~ /protect/” above tells Nginx to match anything that starts with “/protect/”. This is necessary to ensure that all files and directories under “/protect/” are also password-protected. Because Nginx stops once it finds a match, it won’t process subsequent match directives, such as the PHP-FPM directive, and PHP scripts won’t execute. If you wish to run PHP scripts under the password-protected directory, you must copy the PHP-FPM directive (and any other directives) under the password-protected location directive like so:

server {

        # Password-protect directory.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        # Note: ^~ means match anything that starts with /protect/
        location ^~ /protect/ {
                auth_basic "Restricted"; # Enable Basic Authentication
                auth_basic_user_file /var/www/mydomain2/protect/.htpasswd;

                # pass the PHP scripts to PHP-FPM
                location ~ \.php$ {
                        fastcgi_split_path_info ^(.+\.php)(/.+)$;
                        fastcgi_pass unix:/var/run/php5-fpm.sock;
                        fastcgi_index index.php;
                        include fastcgi_params;
                }

                # deny access to .ht files like .htpasswd
                location ~ /\.ht {
                        deny all;
                }
        }

        # pass the PHP scripts to PHP-FPM
        location ~ \.php$ {
Reload Nginx so the changes above can take effect.

sudo service nginx reload

Test by browsing to “http://mydomain2/protect/” and the browser should prompt you to input a user and password.

Secure Mailman

To run Mailman under HTTPS/SSL, move the “location /cgi-bin/mailman” definition in the server block file, “/etc/nginx/sites-available/mydomain2”, from the HTTP server to the HTTPS server section.

You will also need to modify Mailman to use the HTTPS url:

# Edit Mailman's configuration
sudo nano /etc/mailman/
   # Change its default url pattern from 'http://%s/cgi-bin/mailman/' to:
   DEFAULT_URL_PATTERN = 'https://%s/cgi-bin/mailman/'

# Propagate the HTTPS URL pattern change to all the mailing lists
sudo /usr/lib/mailman/bin/withlist -l -a -r fix_url

Note: It is not necessary to restart the Mailman service for the changes above to take effect.

If you only want the default URL Pattern change to apply to a specific mailing list, like “”, use this command instead:

sudo /usr/lib/mailman/bin/withlist -l -r fix_url test -u

Take a Snapshot

DigitalOcean provides a web tool to take a snapshot image of the VPS. I can restore using that image or even create a duplicate VPS with it. Because my VPS is now working the way I need it to, it makes sense to take a snapshot at this time.

Unfortunately, performing a snapshot requires that I shut down the VPS first. Worse, the time required to take the snapshot varies from minutes to over an hour (more on this below). Worst of all, there is no way to cancel or abort the snapshot request. I have to wait until DigitalOcean’s system completes the snapshot request before my VPS is automatically restarted.

I did my first snapshot after getting WordPress working on the VPS. There was about 6GB of data (including the operating system) to make an image of. I shut down the VPS and submitted a snapshot request. For over an hour, all I saw was the “Processing…” status with zero progress. During this time, my VPS and WordPress site were offline.

A little over an hour later, the status went from “Processing…” with zero progress to done in a split second. My VPS and WordPress site were back online. I think an hour to back up 6GB of data is excessive. DigitalOcean support agreed. Evidently, there was a backlog on the scheduler and requests were delayed. Because I couldn’t cancel the snapshot request, I had to wait for the backlog to clear in addition to however long it took to do the snapshot.

If I had known more about the snapshot feature, I would have opted to pay for the backup feature, which costs more but doesn’t require shutting down the VPS. Unfortunately, the backup feature can only be enabled during VPS creation, so it is too late for me.

The recommended method to shutdown the VPS is to run this command:

# Following command equivalent to: sudo shutdown -h now
sudo poweroff

Update: I just did a snapshot and it only took 5 minutes this time.

See my followup post, Upgrade Ubuntu and LEMP on an Unmanaged VPS, to learn how to upgrade LEMP and Ubuntu to the latest versions.



Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists

Linux No Comments

See my previous post, Install Ubuntu, LEMP, and WordPress on an Unmanaged VPS, to learn how to set up an unmanaged VPS (virtual private server) with Ubuntu, LEMP, and WordPress. In this post, I will configure Nginx to support multiple domains (aka virtual hosts) on the VPS, get Postfix email send and receive working, and install a Mailman mailing list manager.

Note: Though I’m doing the work on a DigitalOcean VPS running Ubuntu LTS 12.04.3, the instructions may also apply to other VPS providers.

Host Another Domain

To host another domain (referred to below as “mydomain2”) on the same VPS, we need to add another Nginx server block (aka virtual host) file. Run the commands below on the server.

Note: You don’t need to input the lines that start with the pound character # below because they are comments.

# Create a new directory for the new domain
sudo mkdir /var/www/mydomain2

# Create a test page.
sudo nano /var/www/mydomain2/index.html
   # Input this content:
   Welcome to

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2

# Create a new Nginx server block by copying from existing and editing.
sudo cp /etc/nginx/sites-available/wordpress /etc/nginx/sites-available/mydomain2
sudo nano /etc/nginx/sites-available/mydomain2
        # Change document root from "root /var/www/wordpress;" to:
        root /var/www/mydomain2;
        # Change server name from "server_name;" to:

# Activate the new server block by creating a soft link to it.
sudo ln -s /etc/nginx/sites-available/mydomain2 /etc/nginx/sites-enabled/mydomain2

# Restart the Nginx service so changes take effect.
sudo service nginx restart

The server block files allow Nginx to match the “server_name” domain to the inbound URL and to use the matching “root” directory. When a browser connects to the VPS by IP address (and thus, doesn’t provide a domain for matching), Nginx will use the first virtual host that it loaded from the “/etc/nginx/sites-enabled/” directory (the order of which could change every time you reload Nginx).

To select a specific virtual host to load when accessed by IP address, edit the related server block file under “/etc/nginx/sites-available/” directory and add a “listen 80 default” statement to the top like so:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        listen 80 default;

Note: The “listen 80 default;” line should only be added to one of the server block files. The behavior may be unpredictable if you add it to more than one block file.

Send Email (using Postfix)

We will install Postfix, a Mail Transfer Agent that routes and delivers email, on the VPS to support sending and receiving mail. WordPress (and plugins like Comment Reply Notification) uses Postfix to send emails. While we could use a simpler, send-only mail transfer agent like Sendmail, we will need Postfix later when we install Mailman (a mailing list service), which depends on it. In this section, we will configure Postfix and test the send mail function.

Before we start, we need to talk about Postfix. Postfix is very sophisticated and can be configured in many different ways to receive mail. I want to suggest one way which I believe works well for hosting many domains on a VPS: one default local delivery domain and many virtual alias domains. A local delivery domain is an endpoint domain, meaning that when mail arrives there, it is placed into the local Linux user’s mailbox. A virtual alias domain is used to route mail sent to it to a local delivery domain.

For example, if you send an email to “susan” at a virtual alias domain, Postfix will route the email to “susan” at the local delivery domain and then deliver the mail to the local Linux “susan” user’s inbox.

Keep the above in mind as we configure Postfix, and hopefully everything will be understandable. We will go step by step, building upon our understanding: first we will get the local delivery domain working, and later we will add the virtual alias domains into the mix.

Install Postfix by running these commands on the server:

# Install Postfix package and dependencies.
sudo apt-get install postfix
   # Select "Internet Site" and input our local delivery domain.

# Configure Postfix to use local delivery domain.
sudo nano /etc/postfix/
   # Update "myhostname =" to:
   myhostname =
   # Double-check that "mydestination" includes local delivery domain:
   mydestination =, localhost, localhost.localdomain

# Reload Postfix so changes will take effect.
sudo service postfix reload

Note: I had trouble inputting the domain name when installing Postfix and ended up with a combination of the default “localhost” and my domain name. To fix this, I had to correct the “mydestination” value in “/etc/postfix/” and the content of the “/etc/mailname” file to be the correct domain name. The “myorigin” value in the “/etc/postfix/” file references the “/etc/mailname” file.

The Postfix service should already be started, and it is configured to start on boot by default. To send a test email, use the sendmail command line tool (installed as a dependency of Postfix) on the server:


sendmail <recipient-email-address>
Subject: Postfix Test Email

This is the body of a test email sent after configuring Postfix.

# Press CTRL-D key combo to end

Note: The From email address is constructed from the currently logged-in Linux username and the “myorigin” value in “/etc/postfix/” (which, in turn, points at the “/etc/mailname” file containing the local delivery domain name). Thus, the From address should be the username at the local delivery domain.
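This From-address construction can be sketched with shell string interpolation. A minimal illustration, using hypothetical values for the logged-in user and the contents of “/etc/mailname” (a real system would use `whoami` and read the actual file):

```shell
# Sketch of how the From address is assembled (assumed values only;
# a real system would use `whoami` and the contents of /etc/mailname).
user="mynewuser"            # hypothetical logged-in Linux user
mailname="example.test"     # hypothetical contents of /etc/mailname
echo "From: ${user}@${mailname}"
```

This prints “From: mynewuser@example.test”.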

To test the PHP send mail feature, run the following commands on the server:

# Install the PHP command line package.
sudo apt-get install php5-cli

# Enable logging of PHP-CLI errors to a file
sudo nano /etc/php5/cli/php.ini
   # Add this line:
   error_log = /tmp/php5-cli.log
   # You must use a writeable directory like /tmp.

# Open the PHP interpretive shell and call the mail() function
php -a
php > mail('', 'Subject here', 'Message body here');
php > exit

If the PHP mail function works, then most likely WordPress and its plugins should be able to send emails. To be absolutely sure, you can install the Check Email plugin to test sending an email from within WordPress.
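As an extra sanity check, you can search the PHP-CLI error log configured earlier for mail-related failures. A minimal sketch, assuming the log path from the step above:

```shell
# Search the PHP-CLI log (configured earlier) for mail-related errors;
# if the log is missing or has no matches, report that instead.
grep -i "mail" /tmp/php5-cli.log 2>/dev/null || echo "no mail-related errors logged"
```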

Receive Mail (to Local Delivery Domain)

By default, Postfix is configured to deliver emails sent to “postmaster@mydomain” to the local Linux “root” user’s inbox. You can tell this from the following Postfix settings:

cat /etc/postfix/
   alias_maps = hash:/etc/aliases
   mydestination =, localhost, localhost.localdomain

cat /etc/aliases
   postmaster: root

In Postfix’s “” file, the “alias_maps” value points to an email-to-local-user mapping file for the local delivery domain, and the “mydestination” value contains the default local delivery domain (ignore the localhost entries).

In the “alias_maps” file “/etc/aliases”, the “postmaster” email username is mapped to the root user’s inbox. Putting it together, any email sent to “” will be delivered to the root user’s inbox.

Note: Mail can be delivered to any local Linux user by using the exact username, even if it is not listed in “alias_maps”. For example, emails addressed to the “root” username will be delivered to the local root user, and emails addressed to the “mynewuser” username will be delivered to the local mynewuser user.

To receive external emails sent to the VPS, we need to open up the SMTP (Simple Mail Transfer Protocol) port in the firewall and create a DNS MX (mail exchange) record. (Port 25 is the default SMTP port used for receiving emails.)

To open up the SMTP port, run the following commands on the server:

# Allow SMTP port 25.
sudo ufw allow smtp

# Double-check by looking at the firewall status.
sudo ufw status

I used DigitalOcean’s DNS management web interface to add an MX record with priority 10, pointing at “@” (the A record that resolves to the VPS’s IP address). The priority allows us to add more than one MX record and determines the order in which mail servers receive submitted emails. Rather than using the highest priority, 0, choosing priority 10 lets me easily add a mail server before or after this one in the future.

Note: Most websites will suggest creating a CNAME record (redirecting “mail” to “@”) and then pointing the MX record at the “mail” subdomain. This is not necessary. The simplest configuration is to point the MX record at the A record “@”, as I did above.

To see if the DNS was updated with the MX record, I ran the following test command on the server (or any Linux machine):

dig MX

# technical details are returned, the most important is the ANSWER SECTION.
;; ANSWER SECTION:         1797    IN      MX      10

In the example “ANSWER SECTION” above, we can see that the MX record points at the mail-receiving server with priority 10 (as configured). The 1797 value is the TTL (Time to Live) setting in seconds (1797 seconds is about 29.95 minutes), which indicates how long this MX record remains valid. DNS servers that honor the TTL setting will refresh at that rate; however, some DNS servers may ignore the TTL value in favor of much longer refresh times. (The A and CNAME records also have TTL values. DigitalOcean does not allow me to customize the TTL values for any DNS record.)
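The seconds-to-minutes conversion can be reproduced with shell arithmetic. A small sketch that parses the TTL out of a dig-style answer line (the domain names below are hypothetical placeholders):

```shell
# Extract the TTL (2nd column) from a dig-style answer line and convert
# it to whole minutes. The sample line uses hypothetical domain names.
answer="example.test.         1797    IN      MX      10 mail.example.test."
ttl=$(echo "$answer" | awk '{print $2}')
echo "TTL: ${ttl}s (~$((ttl / 60)) minutes)"
```

This prints “TTL: 1797s (~29 minutes)”.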

If the “ANSWER SECTION” is missing from the output, then your VPS provider may not have updated its DNS servers yet. (DigitalOcean’s DNS servers took 20 minutes to update the MX record.) As with the A and CNAME record changes, you may need to wait for the MX record to propagate across the internet (most DNS servers will be updated in minutes, while some may take hours).

Also, you can use the intoDNS website to check your MX record details. Input your domain name, click on the Report button, and look for the “MX Records” section. If your domain’s MX record shows up there, you can be reasonably certain that it has propagated far enough for you to start sending emails to your domain.

Test by sending an email to your domain’s “postmaster” address from your local mail client (or Google Mail or Yahoo Mail). To see if the mail was received, do the following on your server:

# View the root user's received mail store for the email you sent.
sudo more /var/spool/mail/root

# Alternatively, install the mail client to view and delete received emails.
sudo apt-get install mailutils
# Read mail sent to the local root user.
sudo mail
   # type "head" to see a list of all mail subject lines.
   # type a number (ex: 1) to see the mail content.
   # type "del 1" to delete that mail.
   # type "quit" to exit mail client.

Note: If you want to check a non-root user’s inbox, log in as that non-root user and just run “mail”, instead of “sudo mail”.

Note: According to Internet standards, all mail-capable servers should be able to receive emails sent to their “postmaster” address. While it may be overkill, I decided to create MX records and postmaster aliases for all the domains that I host on the VPS.

Receive Mail (to other Virtual Alias Domains)

Now that we know that the server can receive emails, we want to configure Postfix to support emails sent to the multiple domains hosted on the VPS. (If you want more local users than “root” and “mynewuser”, use the “adduser” command per the previous post to create new users.)

Recall our earlier discussion about how a virtual alias domain routes mail to the local delivery domain, which finally delivers the mail to the local user’s inbox. We will configure the additional domains to be virtual alias domains.

First, we need to create a mapping file that will map from the virtual alias domain to the local delivery domain. Run this command on the server to create that file:

sudo nano /etc/postfix/virtual

In the “virtual” mapping file, input the following lines:

IGNORE   # Declare virtual alias domain
postmaster

In the first line, we use a newer feature of Postfix to declare a virtual alias domain by starting the line with it. (The previous, alternative method was to put the virtual alias domain declarations into a “virtual_alias_domains” property in the “/etc/postfix/” file.) The rest of the first line, “IGNORE …”, is ignored. The second line indicates that mail sent to the virtual alias domain’s “postmaster” address should be routed to “postmaster” at the local delivery domain.

Configure Postfix to use the new virtual alias domain mapping file:

sudo nano /etc/postfix/
   # Add a new virtual_alias_maps line:
   virtual_alias_maps = hash:/etc/postfix/virtual

# Update the hash db version of /etc/postfix/virtual that Postfix uses.
sudo postmap /etc/postfix/virtual

# Reload Postfix so changes take effect.
sudo service postfix reload

Test by sending an email to the address configured in the virtual alias mapping file. Per the previous instructions, check the root user’s inbox using the “sudo mail” command.
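You can also check a mapping without sending mail: Postfix’s “postmap -q” can query the real map (e.g. “postmap -q address hash:/etc/postfix/virtual”). The lookup itself is just a key-to-value match, which this self-contained toy imitates with hypothetical entries and domains:

```shell
# Toy imitation of a virtual alias map lookup (hypothetical entries);
# the real check is: postmap -q <address> hash:/etc/postfix/virtual
cat > /tmp/virtual.demo <<'EOF'
postmaster@example.test postmaster
webmaster@example.test  root
EOF
awk -v addr="postmaster@example.test" '$1 == addr {print $2}' /tmp/virtual.demo
```

The lookup prints “postmaster”, the local destination for that address.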

Install Mailing List Service (Mailman)

Besides distributing mailing list messages, Mailman (the GNU Mailing List Manager) supports administration through a web interface or by email commands (like subscribe and unsubscribe messages).

Update: Recently (latter half of 2016), emails sent to my mailing list were not delivered to Google Mail recipients. I found the following error from Google in the ‘/var/log/mail.log’ file: “Our system has detected an unusual rate of 421-4.7.0 unsolicited mail originating from your IP address. To protect our 421-4.7.0 users from spam, mail sent from your IP address has been temporarily 421-4.7.0 rate limited.” The issue is not spam because my mailing list gets only a few emails per week. This support page suggests that Google now requires SPF+DKIM for mail relays. I decided to migrate the mailing list to Google Groups for now.

To install Mailman, run the following commands on the server:

# Install Mailman
sudo apt-get install mailman

# Create a mandatory site list named mailman
sudo newlist mailman

The “newlist” command will request the following:

To finish creating your mailing list, you must edit your /etc/aliases (or
equivalent) file by adding the following lines, and possibly running the
`newaliases' program:

## mailman mailing list
mailman:              "|/var/lib/mailman/mail/mailman post mailman"
mailman-admin:        "|/var/lib/mailman/mail/mailman admin mailman"

Ignore that instruction. You don’t need to manually edit the Postfix “/etc/aliases” file. Later on, we will configure Mailman to automatically generate its own aliases file, which Postfix will read from.

Once the site wide “mailman” list is created, we can start the Mailman service by running this command:

sudo service mailman start

Mailman is configured to start on boot by default. (Running “sudo service mailman status” won’t output anything usable; to see if Mailman is running, list its processes using “ps -aef | grep -i mailman” instead.)

To get Mailman’s web interface working, we will need to install FcgiWrap (Simple CGI support) so that Nginx can integrate with Mailman. FcgiWrap works similarly to how PHP-FPM (FastCGI Process Manager for PHP) was used by Nginx to pass the processing of PHP files to the PHP platform. FcgiWrap will be used by Nginx to pass the Mailman-related interface calls to Mailman.

To install FcgiWrap, run the following command on the server:

sudo apt-get install fcgiwrap

FcgiWrap will be started automatically after installation. By default, FcgiWrap is configured to start at boot time. FcgiWrap listens on a unix socket file “/var/run/fcgiwrap.socket” (similar to how PHP-FPM uses “/var/run/php5-fpm.sock”), which Nginx uses to pass requests to it. (Similar to Mailman, running “service fcgiwrap status” won’t output anything usable; to see if FcgiWrap is running, list its processes using “ps -aef | grep -i fcgiwrap” instead.)

Edit the Nginx server block file belonging to the domain that you want to make the Mailman web interface accessible under. For example, run this command on the server:

sudo nano /etc/nginx/sites-available/mydomain2

In the mydomain2 server block file, add the following lines to the end of the “server” section:

server {

        location /cgi-bin/mailman {
               root /usr/lib/;
               fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$;
               fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
               include /etc/nginx/fastcgi_params;
               fastcgi_param PATH_INFO $fastcgi_path_info;
               #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
               fastcgi_intercept_errors on;
               fastcgi_pass unix:/var/run/fcgiwrap.socket;
        }
        location /images/mailman {
               alias /usr/share/images/mailman;
        }
        location /pipermail {
               alias /var/lib/mailman/archives/public;
               autoindex on;
        }
}

Note: I did two things above differently from what most websites would say to do:

  • I put the “fastcgi_param SCRIPT_FILENAME …” line before “include /etc/nginx/fastcgi_params;” to avoid it getting overwritten by “fastcgi_params”; otherwise, the call to Mailman would fail with a 403 Forbidden Access error message.
  • I commented out “fastcgi_param PATH_TRANSLATED …” because it is not necessary.

Reload Nginx to make the changes take effect:

sudo service nginx reload

You can now browse to the following Mailman administrative pages:

  • – manage the “mylistname” list.
  • – info about the “mylistname” list.
  • – view the mailing list archives.

We still need to integrate Mailman with Postfix so that emails sent to mailing lists, especially those belonging to virtual alias domains, will be routed to Mailman by Postfix.

Edit the Mailman configuration by running this command on the server:

sudo nano /etc/mailman/

In the “” file, add (or uncomment and modify) this line:

MTA = 'Postfix'

Mailman has the capability of generating aliases for Postfix. We will use that capability. Run these commands on the server:

# Create Mailman's aliases and virtual-mailman files.
sudo /usr/lib/mailman/bin/genaliases

# Make the generated files group-writeable.
sudo chmod g+w /var/lib/mailman/data/aliases*
sudo chmod g+w /var/lib/mailman/data/virtual-mailman*

Note: The “genaliases” command will generate “aliases”, “aliases.db”, “virtual-mailman”, and “virtual-mailman.db” files in the “/var/lib/mailman/data” directory.

We then add the generated Mailman aliases and virtual aliases files to the Postfix “alias_maps” and “virtual_alias_maps” properties.

To edit Postfix, run this command on the server:

sudo nano /etc/postfix/

In the Postfix “” file, add to the end of the “alias_maps” and “virtual_alias_maps” lines like so:

alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
virtual_alias_maps = hash:/etc/postfix/virtual, hash:/var/lib/mailman/data/virtual-mailman

Note: The changes above will configure Postfix to read and process Mailman’s generated aliases files, in addition to its own aliases files.

Reload Postfix to have the changes take effect:

sudo service postfix reload

Recall that earlier I said to ignore the “newlist” instruction to add Mailman aliases to the Postfix “/etc/aliases” file because we would handle it automatically later. That is just what we did above.

Look at the Mailman’s generated “aliases” file by running this command on the server:

sudo cat /var/lib/mailman/data/aliases

# STANZA START: mailman
# CREATED: Tue Mar 25 05:53:44 2014
mailman:             "|/var/lib/mailman/mail/mailman post mailman"
mailman-admin:       "|/var/lib/mailman/mail/mailman admin mailman"
# STANZA END: mailman

It should look exactly like the aliases output by the “newlist” command. Mailman’s generated “aliases” file is included in Postfix’s “alias_maps” and thus is processed by Postfix along with the contents of the original “/etc/aliases” file.

To test a mailing list belonging to a virtual alias domain, run these commands on the server:

# Create a test mailing list.
sudo newlist

# Reload Postfix to make changes take effect.
sudo service postfix reload

The “newlist” command will automatically update Mailman’s “aliases” and “virtual-mailman” files with entries for the new “test” list. However, we still need to manually reload Postfix so that it picks up the changes. (Reloading Postfix requires sudo/root access, so Mailman can’t do it automatically.)

Let’s look at Mailman’s updated “aliases” and “virtual-mailman” files to see what was added (the pre-existing, generated “mailman” list aliases are omitted below):

sudo cat /var/lib/mailman/data/aliases

# CREATED: Tue Mar 25 05:56:47 2014
test:             "|/var/lib/mailman/mail/mailman post test"
test-admin:       "|/var/lib/mailman/mail/mailman admin test"
# STANZA END: test

sudo cat /var/lib/mailman/data/virtual-mailman

# CREATED: Tue Mar 25 05:56:47 2014
test
test-admin
# STANZA END: test

Recall that a virtual alias domain routes to a local delivery domain, which then delivers to an endpoint (an inbox or, in the case above, a program called Mailman). For example, when mail is sent to the “test” mailing list at a virtual alias domain, it is routed to “test” at the local delivery domain and then passed to the “mailman post test” program, which forwards a copy to each member of the “test” mailing list.

Note: Because all mailing lists also exist under the local delivery domain, the mailing list name must be unique across all the domains hosted on the machine.

To test, access the Mailman web interface to add members to the “test” mailing list. Then send an email to that mailing list; its members should each receive a copy.

Once you are done testing, you can delete the list by running these commands on the server:

# Remove the test list (use the list name only, not a full email address).
sudo /usr/lib/mailman/bin/rmlist -a test

# Reload Postfix to make changes take effect.
sudo service postfix reload

Debugging Mail

Both Postfix and Mailman will output error messages and debug logs to:

/var/log/mail.log
At this point, my VPS is hosting several domains, I can send and receive emails, and I have mailing lists working. See my followup post, Nginx HTTPS SSL and Password-Protecting Directory, to learn how to configure Nginx to enable HTTPS SSL access and to password-protect a directory.



Install Ubuntu, LEMP, and WordPress on an Unmanaged VPS

Linux 4 Comments

Before this post, I was hosting my website with a shared web hosting provider. Shared web hosting is convenient because the provider takes care of the software platform and its security updates (though I am still responsible for updating a PHP application like WordPress). And if there is a problem with the platform, the provider is responsible for fixing it. Unfortunately, shared web hosting may have performance and scalability issues (resulting from overcrowded websites on the single shared server and strict restrictions on CPU and memory usage) and disallows installing non-PHP software such as a Subversion server.

With the above in mind, I decided to look into unmanaged VPS (virtual private server) hosting as an alternative to shared web hosting. A virtual server is cheaper than a physical server, and an unmanaged server is cheaper than a managed server. A managed VPS provider would install the software stack for me and provide support for about $30 or more per month. An unmanaged VPS depends on me to install the software and costs only $5 per month with DigitalOcean. The downside to an unmanaged VPS is that if anything goes wrong with the software, I am responsible for fixing it.

Note: If you decide to, please use this referral link to signup for a DigitalOcean account and get $10 in credit. Once you spend $25, I will get a $25 credit. It’s a win-win for both of us.

In this post, I will outline the steps I took to install WordPress on an unmanaged VPS hosted by DigitalOcean. Most of these instructions may be applicable to other VPS providers.

Create VPS

When creating a VPS, the most important choice is the operating system. I recommend getting the latest Ubuntu Server LTS (long-term support) version, currently 12.04.4. All up-to-date software packages should support the LTS version of Ubuntu, so it is a safe choice to make. Unfortunately, DigitalOcean only offered the LTS version 12.04.3, so I chose that. Because it will be a long time, if ever, before I would need a VPS with more than 4GB memory, I decided to choose the 32-bit version to keep memory usage as minimal as possible.

You should have an IP address and root password for your VPS before proceeding.

Secure VPS

Remote access to the VPS is accomplished by SSH (Secure Shell). (If you know telnet, think of SSH as an encrypted version of telnet.) By default, servers are set up to use SSH with port 22 and user root. Unsophisticated hackers attempt to gain access to a server using those settings and a brute-force password generator. While a very hard-to-guess root password would make the server more secure, it is even better to change the SSH port number and use a non-root user.

Note: While Mac OS X comes with a built-in SSH client, Windows does not. I recommend downloading the free DeltaCopy SSH client “ssh.exe” for Windows. Alternatively, you can download the free PuTTY SSH client “putty.exe” if you want a GUI client, instead of a command line client.

Note: Lines below that start with the pound character # are comments and you don’t need to input them.

Run these commands:

# Connect to your server.
ssh root@<your-server-ip>

# Change the root password.
passwd

# Create a new non-root user.
adduser mynewuser

We will configure the new user to execute commands with root privileges by using the sudo (superuser do) tool, which involves prepending commands with the word “sudo”. Sudo will prompt for the user’s own password. (You can also configure sudo to log all commands issued through it.) We will grant full sudo privileges to the new user by adding a line to “/etc/sudoers” under the “User privilege specification” section like so:

# visudo opens /etc/sudoers using vi or nano, whichever is the configured text editor.
# It is equivalent to "sudo vi /etc/sudoers" or "sudo nano /etc/sudoers" but includes validation.
sudo visudo
   # Add mynewuser to the "User privilege specification" section:
   root       ALL=(ALL:ALL) ALL
   mynewuser  ALL=(ALL:ALL) ALL

To disallow SSH root login and to change the SSH port number (say from 22 to 3333), edit the SSH configuration “sshd_config” file and make the following changes:

sudo nano /etc/ssh/sshd_config
   # Change the default listen "Port 22" to the custom port:
   Port 3333

   # Do not permit root user login by changing "PermitRootLogin yes" to:
   PermitRootLogin no

   # Allow only mynewuser to connect using SSH
   AllowUsers mynewuser

   # Optionally, disable useDNS as it provides no real security benefit
   UseDNS no

Reload the SSH service so the changes can take effect:

sudo reload ssh

Test the new settings by opening up a command window on your client and running the following commands:

ssh -p 3333 root@<your-server-ip>
ssh -p 3333 mynewuser@<your-server-ip>

The attempt to SSH using the root user should fail. The attempt using the new user should succeed. If you cannot SSH into the server with the new user, double-check the changes using your original SSH window (which should still be connected to your server). If you don’t have that original SSH window still connected, your VPS provider should provide console access (like having a virtual keyboard and monitor connected directly to the VPS) through their website for recovery scenarios such as this.

Tip: You can log into the root account after you SSH into the mynewuser account by running the “su -” superuser command. You will be prompted for the root password.

The UFW (Uncomplicated Firewall) tool allows us to easily configure the iptables firewall service, which is built into the Ubuntu kernel. Run these commands on the server:

# Allow access to custom SSH port and HTTP port 80.
sudo ufw allow 3333/tcp
sudo ufw allow http

# Enable the firewall and view its status.
sudo ufw enable
sudo ufw status

The above steps configure a basic level of security for the VPS.

Install LEMP

WordPress requires an HTTP server, PHP, and MySQL. The LEMP (Linux, Nginx, MySQL, PHP) software stack matches those requirements. (Nginx is pronounced “engine-x”, which explains the “E” in the acronym.) You may be more familiar with the LAMP stack, which uses Apache instead of Nginx as the HTTP server. Nginx is a high-performance HTTP server that uses significantly less CPU and memory than Apache under high load. Using Nginx gives us the capability of handling a greater number of page requests than usual.

On the server, run these commands:

# Update installed software packages.
sudo apt-get update

# Install MySQL.
sudo apt-get install mysql-server php5-mysql
sudo mysql_install_db

# Secure MySQL.
sudo /usr/bin/mysql_secure_installation

# Do a test connect to MySQL service.
mysql -u root -p
mysql> show databases;
mysql> quit

When installing MySQL, you will be prompted to input a MySQL root password. If you leave it blank, you will have another opportunity to change it when running the “mysql_secure_installation” script. You will want to answer yes to all the prompts from the “mysql_secure_installation” script to remove anonymous MySQL users, disallow remote MySQL root login, and remove the test database.

MySQL is not configured to start on boot by default. To start MySQL at boot time, run only the first command below:

# Start MySQL at boot time.
sudo update-rc.d mysql defaults

# FYI, undo start MySQL at boot time.
sudo update-rc.d -f mysql remove

If you have issues connecting to MySQL, you can start MySQL in an unsecured safe mode (which bypasses the password requirement) to perform a recovery action such as resetting the MySQL root password like so:

# Stop normal MySQL service and start MySQL in safe mode.
sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &

# Connect to MySQL, change root password, and exit.
mysql -u root
mysql> use mysql;
mysql> update user set password=PASSWORD("newrootpassword") where User='root';
mysql> flush privileges;
mysql> quit

# Stop MySQL safe mode and start normal MySQL service.
sudo mysqladmin -u root -p shutdown
sudo service mysql start

Install and start Nginx by running these commands on the server:

sudo apt-get install nginx
sudo service nginx start

Browse to your server’s IP address and you should see a “Welcome to nginx!” page.

To make it possible for Nginx to serve PHP scripts, we need to install the PHP platform and the PHP-FPM (FastCGI Process Manager for PHP) service. PHP-FPM enables Nginx to call the PHP platform to interpret PHP scripts. PHP-FPM should already be installed as a dependency of the “php5-mysql” package (part of the MySQL installation instructions above). We can make sure that PHP-FPM (and its dependency, the PHP platform) is installed by trying to install it again (installing an already-installed package does no harm).

# List the installed packages and grep for php name matches:
dpkg --get-selections | grep -i php

# Install PHP-FPM package.
sudo apt-get install php5-fpm

# Test the install by displaying the version of PHP-FPM.
php5-fpm -v

Secure and optimize the PHP-FPM service by running these commands on the server:

# Fix a security hole by forcing the PHP interpreter to only process the exact file path.
sudo nano /etc/php5/fpm/php.ini
   # Change the "cgi.fix_pathinfo=1" value to:
   cgi.fix_pathinfo=0
# Configure PHP to use a Unix socket for communication, which is faster than default TCP socket.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Change the "listen =" value to:
   listen = /var/run/php5-fpm.sock

# Restart the PHP-FPM service to make the changes effective.
sudo service php5-fpm restart

Nginx defines the site host (and each virtual host) in a server block file. The server block file links the domain name to a directory where the domain’s web files (HTML, PHP, images, etc.) are located. When you browse to the VPS, Nginx serves files from the directory that corresponds to the domain name given by your browser. That is a simple explanation of how Nginx can host more than one domain on a single VPS.
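
As a sketch of that idea, each additional domain would get its own server block with its own document root (the domain name and path below are hypothetical, not part of this guide's steps):

```nginx
# Hypothetical second virtual host: requests whose Host header matches
# example.org are served out of that domain's own document root.
server {
        listen 80;
        server_name example.org www.example.org;
        root /var/www/example.org;
        index index.php index.html index.htm;
}
```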

Edit the default server block file to support PHP scripts:

sudo nano /etc/nginx/sites-available/default

In the “default” server block file, change the following:

server {
        # Add index.php to the front of "index" so it is executed first by default (if it exists).
        index index.php index.html index.htm;
        # Optionally, WordPress sites only need the index.php value like so:
        #index index.php;

        # Change "server_name localhost;" to your domain name.

        # Use these directives if the URL matches the root / location.
        location / {
                # try_files will try a directory if the file does not exist, and then
                # if the directory does not exist, will try a default like index.html.
                # For a WordPress site, we need to change "try_files $uri $uri/ /index.html;" to:
                try_files $uri $uri/ /index.php?$args;
                # The $args is necessary to support the WordPress post preview function.
                # If you don't change this, then non-default permalink URLs will fail with a 500 error.
        }

        # Uncomment the whole "location ~ \.php$" block except for the "fastcgi_pass 127.0.0.1:9000;" line.
        # "location ~ \.php$" means to match against any request ending in .php.
        # Leave the "fastcgi_pass unix:/var/run/php5-fpm.sock;" line uncommented
        # (it already matches what is in /etc/php5/fpm/pool.d/www.conf above).
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;

                # With php5-cgi alone:
                #fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}

Restart the Nginx service to have the changes take effect:

sudo service nginx restart

Create a PHP test script by running this edit command:

sudo nano /usr/share/nginx/www/info.php

In the “info.php” file, input the following text:

<?php phpinfo(); ?>

Browse to “http://your_server_ip/info.php” and you should see a page containing information about the PHP installation.

Both PHP-FPM (php5-fpm) and Nginx are configured to start at boot time by default. You can double-check by running the chkconfig utility to list the services and their runlevel configurations:

# Install chkconfig package.
sudo apt-get install chkconfig

# List all services and their runlevel configurations.
chkconfig --list

Note: You won’t be able to use chkconfig to change the runlevels because it is not compatible with the new Upstart runlevel configuration used by Ubuntu. Instead, use update-rc.d or sysv-rc-conf to make runlevel changes.

Debugging LEMP

To debug issues with LEMP, look at these log files:

MySQL: /var/log/mysql/error.log
Nginx: /var/log/nginx/error.log
PHP: /var/log/php5-fpm.log

For performance reasons, the debug logs from the PHP-FPM worker threads are discarded by default. If you wish to see error logs from your PHP applications, you will need to enable logging from worker threads.

Run the following commands on the server:

# Edit the PHP-FPM worker pool config file to enable logging.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Uncomment this line:
   catch_workers_output = yes

# Reload the PHP-FPM service to make the changes take effect.
sudo service php5-fpm reload

You should now see error logs from the PHP worker threads written to the “/var/log/php5-fpm.log” file.

Install WordPress

Install WordPress by running the following commands on the server:

# Get the latest WordPress version.
cd /tmp
wget https://wordpress.org/latest.tar.gz

# Uncompress the WordPress archive file.
tar zxvf latest.tar.gz

# Create a wp-config.php configuration file by copying from the sample.
cd wordpress
cp wp-config-sample.php wp-config.php
cd ..

# Move the WordPress files to the Nginx root document directory.
sudo mv wordpress/* /usr/share/nginx/www/

# Change ownership to www-data user (which Nginx worker threads are configured to run under).
sudo chown -R www-data:www-data /usr/share/nginx/www/*

Note: If WordPress detects its configuration file “wp-config.php” is missing, it will offer to run a web-based wizard to create it. However, the wizard won’t work because our MySQL root user requires a password. Besides, using the wizard would not be very secure because the WordPress database’s MySQL user password would be sent in the clear over HTTP. Instead, we manually created the “wp-config.php” file in the above steps and will modify it below.

Create a MySQL database and user for WordPress by running these commands on the server:

# Open a MySQL interactive command shell.
mysql -u root -p

# Create a MySQL WordPress database.
mysql> create database wordpress;

# Create a MySQL user and password.
mysql> create user wordpress@localhost;
mysql> set password for wordpress@localhost = PASSWORD('mypassword');

# Grant the MySQL user full privileges on the WordPress database.
mysql> grant all privileges on wordpress.* to wordpress@localhost identified by 'mypassword';

# Make the privilege changes effective.
mysql> flush privileges;

# Double-check by showing the privileges for the user.
mysql> show grants for wordpress@localhost;

# Exit the MySQL interactive shell.
mysql> quit
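
The 'mypassword' value above is a placeholder; replace it with a strong password of your own. If openssl is installed (it is by default on Ubuntu), one quick way to generate one:

```shell
# Generate a random 20-character password to use in place of 'mypassword'
# (a suggestion, not part of the original steps).
openssl rand -base64 15
```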

Update the WordPress configuration file by running this command:

sudo nano /usr/share/nginx/www/wp-config.php

In the “wp-config.php” file, input the newly-created MySQL database, user, and password like so:

define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'mypassword');

Browse to your server’s IP address and follow the WordPress instructions to complete the installation.

Change WordPress Document Root

This section is optional. If you wish to store the WordPress installation into an alternative directory path, say “/var/www/wordpress”, instead of “/usr/share/nginx/www”, follow the steps below. (I suggest “/var/www/wordpress” instead of “/var/www” so that later, when you host additional domains, the WordPress installation will be in its own separate directory.)

To move WordPress to a new directory, run these commands on the server:

# Move WordPress files to new directory.
sudo mkdir -p /var/www/wordpress
sudo mv /usr/share/nginx/www/* /var/www/wordpress/

# Rename the existing Nginx server block file.
sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/wordpress

# Update the Nginx server block file with new location.
sudo nano /etc/nginx/sites-available/wordpress
   # Change document root from "root /usr/share/nginx/www;" to "root /var/www/wordpress;".

# Enable the renamed Nginx server block by creating soft link.
sudo ln -s /etc/nginx/sites-available/wordpress /etc/nginx/sites-enabled/wordpress

# Remove the old Nginx server block soft link (which points at a non-existing file).
sudo rm /etc/nginx/sites-enabled/default

# Reload the Nginx service so the changes can take effect.
sudo service nginx reload

Test this change by browsing to your server’s IP address. You should see the WordPress website.

Migrate WordPress

When migrating an existing WordPress site to your new VPS, I suggest doing the following steps:

  1. On your old WordPress server, update WordPress and all plugins to the latest versions.
  2. On the new WordPress server, browse to “http://your_server_ip/wp-admin” (the default admin path) to install the same theme and plugins that exist on the old server. Activate the theme. Leave all the plugins inactive. When we do the WordPress database restore, the plugins will be configured and activated to match what was on the old server.
  3. Copy the old image uploads directory to the new server. Supposing that the WordPress on the old server is located at “/home/username/public_html/wordpress”, run the following commands on the new server:
    sudo scp -r username@oldserver:/home/username/public_html/wordpress/wp-content/uploads /var/www/wordpress/wp-content/
    sudo chown -R www-data:www-data /var/www/wordpress/wp-content/uploads

    Note: If the old server uses a custom SSH port number, scp will require the custom port number as a “-P” input parameter; for example, “sudo scp -r -P 2222 username@oldserver…”.

  4. Export the WordPress database from the old server using the recommended phpMyAdmin interface (which generates a more human-friendly SQL output than mysqldump) or by running the following command on the old server:
    mysqldump -u oldusername -p olddatabasename > wordpress.sql
  5. Before importing the WordPress database into the new server, we need to change references to the image uploads directory (and other directories) in the exported SQL file. If you don’t make this change, images may not be visible in the WordPress posts. Following the example above, replace every occurrence of “/home/username/public_html/wordpress/” with “/var/www/wordpress/” in the exported database SQL file.
  6. Copy the exported SQL file to the new server, say to the “/tmp” directory.
  7. On the new server, run these commands:
    # Open up MySQL command shell.
    mysql -u root -p

    # Empty the existing WordPress database.
    mysql> drop database wordpress;
    mysql> create database wordpress;

    # Exit the MySQL command shell.
    mysql> quit

    # Import the exported SQL file.
    mysql -u root -p wordpress < /tmp/wordpress.sql

    Note: Dropping and re-creating the WordPress database does not affect the WordPress database user and its privileges.
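
The path replacement in step 5 can be done with sed. A quick sketch, demonstrated here on a sample line; run the same substitution (with sed's -i flag) on your exported wordpress.sql file:

```shell
# Replace the old server's WordPress path with the new one.
echo 'src="/home/username/public_html/wordpress/wp-content/uploads/pic.png"' |
  sed 's|/home/username/public_html/wordpress/|/var/www/wordpress/|g'
# Prints: src="/var/www/wordpress/wp-content/uploads/pic.png"
```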

Browse to your new server’s IP address and you should see your WordPress website. Unfortunately, we cannot verify that the images are loading correctly on the new server because the image URLs use the domain name which points at the old server (the images are loaded from the old server, not the new server). We now need to point the domain at our new server.

Migrate Domain

I used DigitalOcean’s DNS (Domain Name System) “Add Domain” tool to create an A (Address) record (“@” => “server_ip_address”) linking my domain to the new server’s IP address. I also added a CNAME (Canonical Name) record (“www” => “@”) to have the “www” subdomain point at the root domain. I tested whether DigitalOcean’s DNS servers were updated or not by repeatedly running one of these two commands on the server (or any Linux machine), substituting your own domain:

nslookup yourdomain.com ns1.digitalocean.com
dig @ns1.digitalocean.com yourdomain.com

Note: DigitalOcean’s DNS servers took about 20 minutes to update.

Once DigitalOcean’s DNS servers had lookup entries for my domain name, I went to my domain registrar and updated my domain’s name servers to point at DigitalOcean’s DNS servers. To test whether DigitalOcean’s DNS servers were being used or not, I occasionally ran the following commands on my client machine to check the IP address returned:

nslookup yourdomain.com
dig yourdomain.com

Once the new IP address was returned consistently (it took 24 hours before my internet provider’s DNS servers were updated), I then browsed to my domain and checked that the images were loading correctly.

You can empty the DNS caches on your machine and browser by using any of these commands:

# On Windows:
ipconfig /flushdns

# On Mac OS X:
sudo dscacheutil -flushcache

# On Chrome, browse to the URL below and click on the "Clear host cache" button.
chrome://net-internals/#dns

If you want to check the DNS change propagation across the whole world, try the What’s My DNS? website. It will make calls to DNS name servers located all over the planet.

At this point, I have a working VPS that is reasonably secured and successfully hosting my migrated WordPress website. Some rough page load tests resulted in 0.5-1 second load times, as opposed to the 2-4 seconds on the old server (which was shared web hosting with a LAMP stack). I hope that this guide will help you should you decide to move your WordPress website to an unmanaged VPS.

See my followup post, Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists, to configure Nginx to support multiple domains, get Postfix email flowing, and get Mailman mailing lists working.
