Rotate Video Without Black Bars


Have you ever taken a vertical portrait video using your iPhone, imported it to your computer, and found that Windows Media Player plays it horizontally, leaving you with a neck crick from holding your head sideways? Since the beginning, vertical portrait videos (tall and skinny) have been unwanted and unsupported, living in the shadow of horizontal landscape videos (short and wide). Portrait videos were usually shoehorned into a widescreen frame, resulting in ugly black bars on the left and right. (Using free video editors, like Windows Movie Maker or Mac OS X iMovie, to rotate videos will result in such travesties.)

Thankfully, things have gotten better. Recent smartphones embed rotation information into the video file. Some video players, like VLC and QuickTime Player, act on that rotation data to show the video correctly on computers. (Unfortunately, Windows Media Player does not make use of the video rotation data.) In addition, VLC allows manual adjustment of the playback video orientation (menu “Tools->Effects and Filters->Video Effects->Geometry->Transform->Rotate by 90 degrees”), but does not permanently change the video file’s rotation data. QuickTime Pro has a rotate video function which does not really rotate the video, but instead permanently adjusts the video file’s rotation data.
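Under the hood, the rotation data these players act on is a display matrix stored in the container’s track header, not rotated pixels. As a rough illustration (a conceptual sketch of the idea, not VLC’s or QuickTime’s actual code), the rotation angle can be recovered from the first row of the matrix’s 2×2 portion:

```python
import math

def rotation_degrees(a: float, b: float) -> int:
    """Recover the display rotation from the first row (a, b) of the
    2x2 part of a QuickTime/MP4 track display matrix.

    Identity (a=1, b=0) means 0 degrees; a 90-degree rotation stores
    (a=0, b=1). Illustrative sketch only, not a container parser.
    """
    return round(math.degrees(math.atan2(b, a))) % 360

# An unrotated track:
print(rotation_degrees(1, 0))  # 0
# A portrait video recorded on a phone typically stores a 90-degree matrix:
print(rotation_degrees(0, 1))  # 90
```

This is why a player can “rotate” a video instantly and losslessly: it changes only how the unrotated frames are displayed, never the encoded pixels.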

I read that some online services, such as YouTube and Google Plus, do support video rotation on imported videos, but have not tried them myself yet. (Supposedly, the new Google Photos does not support video rotation yet, but should eventually. In the meantime, the workaround is to rotate the video using the Google Plus interface.)

If you must rotate a video file (perhaps because you wish to use Windows Media Player), you will want to use a commercial video editor. To rotate a video without introducing black bars requires a program that can rotate the video, change the resolution (to avoid black bars), and re-encode with minimal video quality loss. All three functions are usually only found in commercial video editing software such as Adobe Premiere.

Instructions on how to rotate a video file using Sony Vegas Pro 10 and Adobe Premiere Pro CS5 on Windows 7 follow. I am only a beginner with both programs, so there may be better ways to do what I am attempting to do below.

Sony Vegas Pro 10

Though Sony Vegas is less powerful than Adobe Premiere, I find it much simpler to use. Here’s how to rotate a video using Sony Vegas Pro 10:

  1. Launch Sony Vegas Pro.
  2. Go to menu “File->Import->Media…”, browse to the video file, select it, and click Open.
  3. Surprisingly, Sony Vegas Pro supports the rotation data and displays the video correctly. Because we want to actually rotate the video, we need to tell Sony Vegas Pro not to use the rotation data. To do so:
    1. Right-click on the top-left thumbnail image of the imported video file and select Properties.
    2. Under the Media tab, in the “Stream properties” section at the bottom, select “Video 1” in the “Stream” drop-down list.
    3. Change the Rotation field from “90 degrees clockwise” to “0 degrees (original)”. Click OK and the video file thumbnail will rotate to the true orientation.
  4. Double-click on the video file thumbnail to populate the timeline panel at the bottom.
  5. In the timeline panel, right-click on the video track thumbnail image and select the “Video Event Pan/Crop…” item.

  6. In the Event Pan/Crop dialog:
    1. Disable the “Lock Aspect Ratio” option by clicking on that icon if it is depressed (third icon from the bottom on the left toolbar).
    2. Under Position, switch the values for the Width and Height fields.
    3. Under Rotation, change the Angle field from “0.0” to “-90.0”. (Not sure why but I had to use -90 instead of 90.)
    4. Close the dialog by clicking on the tiny top-right “x” icon.
  7. Go to menu “File->Render As…” to open the “Render As” dialog. In that dialog, do the following:
    1. The “Save as type” and dependent Template fields determine the quality of the rendered video, specifically the resolution. Because our rotated video will have a height of 1920, one “Save as type” option that allows such a height is “Video for Windows (*.avi)”. (Other options will allow different maximum widths and heights.)
    2. Select “HD 1080-24p YUV” in the Template field, which is the closest match to our imported video.
    3. Click on the “Custom…” button to configure a portrait resolution (ex: 1080×1920).
    4. In the “Custom Settings” dialog, inside the Video tab, select “(Custom frame size)” in the “Frame size” field, and switch the Width and Height values. Click OK to close the dialog.
    5. Back in the “Render As” dialog, make sure “Render loop region only” is not checked because we want the whole video to be exported. (The “Render loop region only” box will be disabled if no selection is made on the video track.)
    6. Click on the Save button.
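The essential geometry behind the steps above: a 90-degree rotation swaps the frame’s width and height, which is why the render template must be customized from 1920×1080 to 1080×1920. A minimal sketch:

```python
def rotated_size(width: int, height: int, degrees: int) -> tuple:
    """Return the frame size after rotating by a multiple of 90 degrees.

    A 90- or 270-degree rotation swaps width and height; 0 or 180 keeps
    them. Rendering at the swapped size is what avoids the black bars.
    """
    if degrees % 90 != 0:
        raise ValueError("only multiples of 90 degrees are handled here")
    if (degrees // 90) % 2 == 1:
        return (height, width)
    return (width, height)

print(rotated_size(1920, 1080, -90))  # (1080, 1920)
```

If you render at the original 1920×1080 instead, the rotated 1080×1920 image gets letterboxed, which is exactly the black-bar travesty we are trying to avoid.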

Adobe Premiere Pro CS5

Adobe Premiere is very powerful and thus not simple to use. It is definitely overkill for just rotating a video file. But if you ever need to rotate a video, here’s how to do it in Adobe Premiere Pro CS5:

  1. Launch Adobe Premiere Pro, choose a “New Project” and click OK to accept the defaults. When the “New Sequence” dialog appears, click Cancel to skip creating it.
  2. Right-click inside the top-left Project panel and select “Import…” (or go to menu “File->Import…”). Browse to your video file, select it, and click Open.
    • Note: If you are opening a QuickTime .mov file and Adobe Premiere displays a “no audio or video streams” error message, rename the .mov file extension to .mpg and try again.
  3. When you select the imported video file in the Project panel, the mini-preview on top of the panel will show information concerning the video. Take note of the resolution (ex: 1920×1080), frame rate (ex: 29.97 fps), and audio sample rate (ex: 44100 Hz).
  4. Right-click inside the Project panel and select “New Item->Sequence…” (or go to menu “File->New->Sequence…”) to create a sequence. In the “New Sequence” dialog, do the following:
    1. Open the Settings tab.
    2. Swap the horizontal and vertical field values for the “Frame Size” (ex: 1080×1920).
    3. Select a matching Timebase (ex: 29.97 fps) and Audio “Sample Rate” (44100 Hz).
    4. Under the “Video Previews” section, select “Microsoft AVI” for the “Preview File Format”. We want to select the highest-quality codec that we can. Unfortunately, the high-quality “V210 10-bit YUV” and “Uncompressed UYVY 422 8bit” codecs support a maximum resolution of 607×1080. For 1080×1920, I recommend using the “Intel IYUV codec”. (The “Microsoft RLE” and “Microsoft Video 1” codecs will degrade the video quality noticeably.)
    5. Click the Reset button and the Width and Height fields will be updated to match the “Frame Size” (ex: 1080×1920) or as close to it as possible (depending on the codec selected).
    6. Click OK to create the sequence.
  5. The sequence will appear as a tab in the Timeline panel at the bottom-middle. To populate it, drag the imported video from the Project panel to the very beginning of the “Video 1” track in the timeline. The Preview panel at the top-right will show the sequence video frame with the video data and top/bottom black bars. (We will get rid of those black bars in the steps below.)
  6. Select all the video data in the Timeline panel. This action will populate the “Effect Controls” tab in the top-middle “Source, Effect Controls, Audio Mixer, Metadata” panel.
  7. In the Effect Controls pane, expand the Motion selection under “Video Effects”. Input a value of 90 in the Rotation field to rotate clockwise (or -90 to rotate counter-clockwise). The Preview panel will show the rotated video which fits the sequence frame perfectly without any black bars.
  8. With the sequence selected in the Project panel, go to menu “File->Export->Media…”. Check the “Match Sequence Settings” box. Click on the Output tab to double-check that the exported video will not contain black bars.
  9. Click on the Export button. By default, the exported .avi video file will be created in the documents directory at “C:\Users\your_username\Documents\Adobe\Premiere Pro\5.5”.

Tip: If you want to easily create a sequence that matches the video file exactly, just drag the imported video file to the “New Item” icon on the Project panel’s bottom toolbar. (The “New Item” icon is immediately to the left of the Clear/trash icon.) This action will create a sequence that matches the imported video as closely as possible and populate the sequence’s timeline with the video data automatically.

The exported video files may be significantly larger in size than the original video files. In most cases, re-encoding video will result in loss of quality or increase in file size. I think the best thing to do is to leave the original video file untouched and use a video player that is aware of the embedded rotation data. If the rotation data is wrong or missing, it might make more sense to use a program, like Sony Vegas Pro, to modify or add it without re-encoding the video.


Clone a Hard Drive Using Clonezilla Live


I needed to clone one hard drive to another. In the past, I would have used a bootable MS-DOS CD containing an old copy of Norton Ghost 8. This time, I decided to see what is currently available and could be launched from a bootable USB flash drive. I found the open source Clonezilla Live utility, which is a small GNU/Linux distribution capable of running from a USB flash drive and cloning hard drives.

I decided to follow Clonezilla Live’s “MS Windows Method B: Manual” instructions to create a bootable USB flash drive.

  1. Follow these DiskPart instructions to create a bootable USB flash drive. (Clonezilla Live requires the FAT32 format and at least a 200MB capacity flash drive.)
  2. Download the latest stable release of Clonezilla Live. If you have a 64-bit capable machine, select “amd64” for “CPU architecture”. Or select “i586” for 32-bit. Select “zip” for “file type”.
  3. Unzip the Clonezilla Live Zip archive to the USB flash drive.
  4. Launch the Command Prompt utility, change directory to the USB flash drive, and run “utils\win64\makeboot64.bat” for 64-bit or “utils\win32\makeboot.bat” for 32-bit. The “makeboot64.bat” or “makeboot.bat” script will modify the USB flash drive to boot the small GNU/Linux distribution and run the Clonezilla Live utility. (The makeboot utility will display the drive letter to be modified before continuing; please make sure that it is the correct one belonging to the USB flash drive.)

Clonezilla Live will show a lot of options which unfortunately are not easy to understand. The simplest way to deal with it is to accept the default when you are not sure.

  1. Attach the destination hard drive to the same machine containing the source hard drive.
  2. Start the machine and boot from the USB flash drive. You may need to press a particular function key to load the boot menu (F12 on my Lenovo desktop) or you may need to adjust the BIOS setup to boot from a USB drive before the hard drive.
  3. On Clonezilla Live’s startup screen, keep the default “Clonezilla live (Default settings, VGA 800×600)” and press Enter.
  4. Press Enter to accept the pre-selected language, “en_US.UTF-8 English”.
  5. Keep the default “Don’t touch keymap” and press Enter.
  6. Make sure “Start_Clonezilla” is selected and press Enter to start.
  7. Because I am copying from one hard drive to another, I select the “device-device work directly from a disk or partition to a disk or partition” option. Press Enter.
  8. To keep it simple, stay with the “Beginner mode” option and press Enter.
  9. Select the source hard drive and press Enter.
  10. Select the target destination hard drive and press Enter.
  11. Keep the default “Skip checking/repairing source file system” selection and press Enter.
  12. Type “y” and press Enter to acknowledge the warning that all data on the destination hard drive will be destroyed.
  13. Type “y” and press Enter a second time to indicate that you are really sure.
  14. In answer to the question “do you want to clone the boot loader”, type uppercase “Y” and press Enter. (I need to clone the boot loader so the destination hard drive will be bootable like the source hard drive.)
  15. The hard drive cloning will occur. It took me around 10 minutes copying from one SSD to another SSD. (The length of time required to complete the process is dependent on the speed of both the source and destination hard drives.)
  16. When the cloning completes, press Enter to continue.
  17. Select “poweroff” to shut down the machine.
  18. Once the machine is off, swap the hard drives (or remove the source hard drive) and boot from the destination hard drive.

Even though my destination hard drive was twice the size of the source hard drive, the cloned destination partition was the same size as the original source partition. I then used the free EaseUS Partition Master utility to increase the size of the destination partition (without destroying the data on it). Clonezilla Live’s expert mode probably has a setting to adjust the destination partition size.


Create Bootable USB Flash Drive With DiskPart Command-Line Utility


The instructions below will create a bootable system partition on a USB flash drive, which is exactly the same as creating such a partition on a hard drive. Specifically, I will be using Windows 7’s built-in DiskPart (Disk Partition) command-line utility to create a bootable USB flash drive containing a Windows 8.1 Setup image.

If you are interested, here’s the technical reason why our bootable USB flash drive will use the MBR layout and FAT32 format: Computers, both Windows PCs and Macs, boot using a standard called UEFI (Unified Extensible Firmware Interface), which is based upon the EFI specification (Extensible Firmware Interface). (When folks say EFI, they are usually referring to UEFI because all modern computers use UEFI.) UEFI is a replacement for the previous BIOS method of booting up, but UEFI still supports the older BIOS method. The BIOS boot method uses the MBR (Master Boot Record) layout. In addition to BIOS+MBR, UEFI also supports the new GPT (GUID Partition Table) layout. The UEFI specification requires bootable removable media (such as a bootable USB flash drive) to use the MBR layout and FAT32 format.
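To make the MBR-versus-GPT distinction concrete, here is a minimal sketch (my own illustration, not how DiskPart works internally) that classifies the first 512-byte sector of a disk: a valid MBR ends with the signature bytes 0x55 0xAA, and a GPT disk exposes a “protective MBR” whose partition entry has type 0xEE, which is roughly how tools can tell the two layouts apart.

```python
def classify_sector0(sector: bytes) -> str:
    """Classify a 512-byte boot sector as MBR, GPT (protective MBR), or
    blank. The four partition entries start at offset 446, 16 bytes each,
    with the partition-type byte at offset 4 within each entry."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return "no MBR signature (blank or non-MBR disk)"
    types = [sector[446 + 16 * i + 4] for i in range(4)]
    if 0xEE in types:
        return "GPT layout (protective MBR)"
    return "MBR layout"

# Build a fake MBR sector with one partition of type 0x0C (FAT32 LBA):
mbr = bytearray(512)
mbr[446 + 4] = 0x0C
mbr[510:512] = b"\x55\xaa"
print(classify_sector0(bytes(mbr)))  # MBR layout
```

This is also why DiskPart’s “convert mbr” command (used below) is needed before the “active” command will work: marking a partition active is an MBR concept.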

To create a bootable USB flash drive, do the following:

  1. Insert a USB flash drive with sufficient capacity. (The 64-bit Windows 8.1 Professional ISO image I had is 4.5GB in size and requires at least an 8GB USB flash drive).
  2. Launch the “diskpart” or “diskpart.exe” utility from the Windows Start/Run menu or the Command Prompt. You will be prompted with a popup message asking “Do you want to allow the following program to make changes to this computer?” Answer Yes.
  3. Run the following commands in the DiskPart utility (ignore the comment lines marked by the pound # character):
    # Show all disks (aka drives, like hard drives or removable media).
    DISKPART> list disk
    # Select a disk to operate on.
    DISKPART> select disk [number identifying USB flash drive]
    # Delete all partitions, resulting in a blank disk.
    DISKPART> clean

    # If the USB drive is using the GPT layout, the "list disk" command output
    # above will show an asterisk in the last "Gpt" column.  If this is the
    # case, issue this command to change it to the MBR layout.
    DISKPART> convert mbr

    # Create a primary partition (using MBR).
    DISKPART> create partition primary

    # Show all partitions (should just be the one newly-created partition).
    DISKPART> list partition

    # Select the primary partition to operate on (only 1 partition exists).
    DISKPART> select partition 1
    # Make the primary partition active (aka bootable).
    # This command will fail if the USB drive is using the GPT layout.
    DISKPART> active
    # Format the active primary partition using FAT32.
    # To do a full format, instead of a quick format, omit the "quick" flag.
    DISKPART> format fs=fat32 quick
    # Assign a drive letter to the primary partition
    # (just in case Windows didn't already do it).
    DISKPART> assign

    # Quit DiskPart.
    DISKPART> exit
  4. Test by opening the contents of the USB flash drive using Windows Explorer. If you get an inaccessible error when accessing the drive, unplug and re-plug it back into the computer. You should then be able to access it.
  5. Insert the Windows Setup DVD or mount the Windows Setup ISO file (I recommend using the free Slysoft Virtual CloneDrive utility to perform the ISO mount).
  6. Copy all the Windows Setup content to the USB flash drive by running the xcopy command from the Command Prompt:
    # Supposing the USB flash drive is K: and the Windows Setup media is L:,
    # copy all files and directories from the latter to the former.
    # /e = Copies directories and sub-directories, including empty ones.
    # /f = Displays full source and destination file names while copying.
    xcopy L:*.* /e/f K:

And we are done. The resulting USB flash drive should be bootable on both Windows and Macs. I tested the USB flash drive on my Macbook Pro Retina and it booted fine.

Info above derived from Install Windows 7 From a USB Flash Drive.


Windows 8.1 Boot Camp on 2015 Macbook Pro Retina 13 Inch


I recently upgraded to a 2015 Macbook Pro Retina 13 inch laptop. I attempted to install Windows 7 using the Boot Camp Assistant, which immediately asked for a Windows 8 or later installation media to be inserted. Darn it. I managed to create and insert a USB flash drive containing the latest Windows 8.1 with Update. After that, the Boot Camp Assistant asked me for the Boot Camp Support Software (Windows drivers). I inserted a second USB flash drive containing the latest Boot Camp Support Software I had manually downloaded from the Apple website, but Boot Camp Assistant still complained that it couldn’t be found. It turned out that for newer Macbooks, I must download the Boot Camp Support Software using the Boot Camp Assistant.

After I overcame the above and other issues, I was able to get a Windows 8.1 Boot Camp working. I’ve documented the steps I took below.

Create a Windows 8.1 Install USB Flash Drive

I used my Windows 7 desktop to create a USB flash drive containing the 64-bit version of Windows 8.1 with Update. (2015 Macbooks only support 64-bit Windows 8 or later.) Because Windows 8.1 setup requires 4.5GB of space, you must use an 8GB or larger USB flash drive; I ended up using a spare 16GB flash drive that I had.

Update: Instead of using the WinToFlash utility below and dealing with its browser plugin spam, use Windows’ built-in DiskPart command-line utility to create a bootable USB flash drive containing the Windows Setup.

I used the free Novicorp WinToFlash Lite utility to copy the contents of my Windows 8.1 with Update ISO file (alternatively, you can use a Windows 8.1 DVD) to the USB flash drive. WinToFlash will re-format the USB flash drive using the FAT32 format before copying the content over.

Note: Strangely, WinToFlash won’t throw an error even if you use a USB flash drive that is too small. I tried a 1GB USB flash drive and WinToFlash completed successfully. So make sure to use an 8GB USB flash drive or larger.

Unfortunately, the first time you run the latest version of WinToFlash, it will install a browser plugin called “WinToFlash Suggestor” which adds advertisements to search suggestions. Go ahead and uninstall this unnecessary browser plugin using the Control Panel’s “Uninstall a program” function.

Note: The Microsoft website has a Windows USB/DVD Download Tool which can do what WinToFlash does. Unfortunately, that tool re-formats the USB flash drive as NTFS. Because the Macbook boots removable media using UEFI, which requires the FAT32 format, the USB flash drive created by the Windows USB/DVD Download Tool won’t be bootable.

Download The Boot Camp Support Software

For Macs released in 2014 and 2015, you must use the Boot Camp Assistant to download a specific version of the 64-bit Boot Camp Support Software for your Mac. Apple does not provide links to manually download all the available Boot Camp Support Software versions. (You can manually download the older Boot Camp Support Software 5.1.5640 64bit for Mid and Late 2013 Macs here and Boot Camp Support Software 5.1.5621 64bit for Early 2013 or previous Macs here.) For 32-bit Windows installation and other options, check Apple’s System requirements to install Windows on your Mac using Boot Camp page.

In order to install Windows 8.1, the Boot Camp Support Software needs to be incorporated into the Windows 8.1 Install USB flash drive. The simplest method is to have the Boot Camp Assistant download the Boot Camp Support Software directly to the Windows 8.1 Install USB flash drive.

Note: I tried installing with two USB flash drives, one containing the Windows 8.1 Install and the other containing the Boot Camp Support Software, but the Windows 8.1 setup threw a “No device drivers were found” error even after I had manually selected the I/O driver on the Boot Camp Support Software USB flash drive.

To download the latest Windows drivers from Apple:

  1. Insert the FAT32-formatted Windows 8.1 Install USB flash drive.
  2. Run Boot Camp Assistant.
  3. Select the “Download the latest Windows support software from Apple” option. Click Continue.
  4. Select the Windows 8.1 Install USB flash drive. (Boot Camp Assistant requires FAT32 format and at least 500MB free.) Click Continue.
  5. Boot Camp Assistant will copy all the Boot Camp Support Software content (“$WinPEDriver$” directory, “BootCamp” directory, and “AutoUnattend.xml” file) to the USB flash drive’s root directory (which is where the Windows 8.1 setup will expect them to be).

Install Windows 8.1

To install Windows 8.1, run the Boot Camp Assistant and select the option to “Install Windows 8 or later version”. Follow the instructions to create a BOOTCAMP partition. With the Windows 8.1 Install USB flash drive still inserted, agree to restart the Macbook.

On reboot, the Macbook will boot from the Windows 8.1 Install USB flash drive. (If it doesn’t, shut down the Macbook and power it up while holding the alt/option key. When the boot screen appears, select the USB flash drive’s “EFI Boot” option.)

The Windows 8.1 setup will automatically use the Boot Camp Support Software’s I/O driver to access the hard drive and show the list of partitions. Select the BOOTCAMP partition and allow it to be re-formatted as NTFS. Windows 8.1 setup will then install itself onto that NTFS partition.

After reboot and once the Windows 8.1 initial setup is completed (can take several minutes), the Boot Camp Support Software installer will automatically execute to install the necessary Apple hardware drivers.

Note: The Windows 8 version of Windows Defender is different from the Windows 7 version. Windows 7 Defender only protects against spyware, so the recommendation is to disable it and install Microsoft Security Essentials, which protects against viruses, spyware, and malware. However, Windows 8 Defender protects against viruses, spyware, and malware, so there is no need to replace it. (Microsoft Security Essentials can’t be installed on Windows 8.)

All in all, the Windows 8.1 Boot Camp installation went smoothly once I knew to create a USB flash drive containing both Windows 8.1 and the Boot Camp Support Software.


Unroot and Upgrade Nexus 5 to Latest Android Version


I purchased a used Nexus 5 recently. When I used the built-in Settings app to upgrade it to the latest Android version, it got stuck at the bootloader screen on reboot. The previous owner had rooted the Nexus with a custom bootloader which broke the stock upgrade process. Because I am planning to develop apps on the Nexus 5, I needed a stock device. Below are the steps I took to unroot the Nexus 5 and upgrade it to the latest stock Android 5.0 Lollipop version using my Macbook.

Note: Actually, there are no special steps to unroot the Nexus 5. Re-imaging the phone with the latest stock Android version will automatically do the unroot (by overwriting the custom bootloader and/or Android OS with the stock version).

Install Android SDK Tools

To prepare, I downloaded the stand-alone Android SDK Tools. The latest version for Mac OS X was “”. (Alternatively, you could install Android Studio, which includes the SDK Tools and a graphical IDE equivalent to Eclipse.)

I extracted the Android SDK Tools to my “~/Development/android-sdk-macosx” directory (you can choose your own path) and added it to the execution path by inserting the following into my “~/.profile” file:

export ANDROID_SDK=$HOME/Development/android-sdk-macosx
export PATH=$PATH:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools

Note: The ANDROID_SDK variable is my own shortcut and is not used by the Android SDK Tools in any way.

Unlock the Nexus 5

Before we can do the re-image, we need to put the Nexus 5 phone into Bootloader Mode (aka Fastboot Mode) and unlock it.

To put the phone into Bootloader Mode, do the following:

  1. Power off the phone.
  2. Power on by holding the volume up, volume down, and power buttons simultaneously.
  3. When the phone vibrates, let go of the power button while continuing to hold the volume up and down buttons.
  4. The phone will then display the bootloader screen.

On the bootloader screen, look for the “LOCK STATE” status at the bottom. My Nexus 5 was already unlocked so the display showed “LOCK STATE – unlocked”. If you see a “locked” state instead, connect the phone by USB cable, launch the Terminal and run these Android SDK Tools commands (ignore the comments marked by the pound # symbol below):

# Optionally show all connected devices in Bootloader Mode
fastboot devices

# Unlock bootloader
fastboot oem unlock

Note: When you unlock the bootloader, all data on the phone will be wiped as a security precaution. Also, there is no harm if you run the unlock command on a phone that is already unlocked; you will just get a “FAILED (remote: Already Unlocked)” error.

Re-image the Nexus 5

Download the latest Android 5.0 Lollipop image file for the Nexus 5 from Google’s Factory Images for Nexus Devices. Look for the section named “hammerhead for Nexus 5 (GSM/LTE)”. The latest version when I checked was “5.1.1 (LMY48B)” with a downloaded filename “hammerhead-lmy48b-factory-596bb9c1.tgz” (579 MB in size).

Extract the contents to a directory like “~/Downloads/hammerhead-lmy48b”.

Note: Before doing the steps below, make sure that Nexus 5 phone is connected by USB cable and showing the Bootloader Mode.

The extracted “hammerhead-lmy48b” directory has a script named “” which you can run to do the factory re-image. It will run several commands with hard-coded 5 second pauses in between. To execute it, launch Terminal and run the following commands:

cd ~/Downloads/hammerhead-lmy48b
sh ./

Instead of using the “” script, I recommend running its commands manually just in case the phone takes longer than 5 seconds to complete each command. Also, if an error occurs, it will be easier to tell which command failed. To do so, run the commands in the “” script one-by-one, ignoring the “sleep 5” lines.

cd ~/Downloads/hammerhead-lmy48b
# Reimage the bootloader
fastboot flash bootloader bootloader-hammerhead-hhz12h.img
fastboot reboot-bootloader
# Reimage the radio firmware
fastboot flash radio radio-hammerhead-m8974a-
fastboot reboot-bootloader
# Reimage OS with Android 5.0 Lollipop
fastboot -w update

The re-image process took about 5 minutes for me. The initial boot up (the screen showed flying colored dots) took another 8 minutes. Be patient because some folks reported that the initial boot up could take up to 30 minutes.

Once the phone has booted up, I recommend doing a data wipe to ensure that everything is consistent. To do the data wipe, go to Settings, and under the Personal section, select “Backup & reset”; then under “Personal data”, select “Factory data reset”. Reboot the phone.

Lock the Nexus 5

To avoid any security risks, I recommend locking the bootloader. To do so, put the phone into Bootloader Mode and run the following command in the Terminal:

fastboot oem lock

Once done, exit the Bootloader Mode by clicking on the Start button to boot the Android OS.

If you run into issues, some troubleshooting tips are located at How to Unroot Nexus 5! – Complete Stock.


Revert Mac OS X Yosemite Core Storage Back to Mac OS Extended HFS+


My Macbook Pro’s hard drive died recently, again. The Apple Store had to order the replacement Hitachi 750GB hard drive which took several days. I have a suspicion that Apple is using refurbished hard drives because it has just been two months since the last hard drive replacement. Hitachi is a brand I can trust… to fail quickly… because in addition to these two failures, when I first got my brand new Macbook years ago, the Hitachi drive died within three months. This time, I decided to install a Samsung EVO 850 500GB solid state hard drive to avoid having to visit the Apple Store again.

I also decided to do a fresh installation using the latest Mac OS X 10.10 Yosemite version. When I attempted to partition the hard drive into a Mac OS X Yosemite, a Windows 7 Boot Camp, and a shared FAT32 partition, I found that I couldn’t merge or resize the Mac partition to create the shared FAT32 partition using either Disk Utility or the diskutil command line.

The problem is that Yosemite installs the Mac OS X partition using a new Core Storage volume manager, which in turn uses the HFS+ (aka Mac OS Extended) file system. Unfortunately, Apple has not yet updated the Disk Utility or diskutil command line to resize, merge, or delete a Core Storage volume. Boot Camp does know how to resize the Core Storage volume correctly though. (Supposedly, there is an undocumented “diskutil coreStorage resizeVolume” command that supports resizing a Core Storage volume, but I hesitate to use it. And I couldn’t find an undocumented command to merge a Core Storage volume with a blank partition.)

Run the Terminal app and execute the “diskutil list” command. The physical Apple_CoreStorage partition appears under “/dev/disk0”, and the Core Storage logical volume that wraps the old HFS+ file system appears under “/dev/disk1”.


The Core Storage volume manager is the basis for Apple’s Fusion Drive technology, which is used to present several partitions on multiple drives as a single logical volume. Because I had only one hard drive on my Macbook, I didn’t need that function. (I can see using the Fusion Drive feature on a server which usually has more than one hard drive.) Thankfully, I found a way to convert the Core Storage volume back to a plain old HFS+ partition. Note that this method only works if the Core Storage volume is not encrypted.

To start, you need to find the Core Storage Logical Volume identifier used by the wrapped HFS+ partition. It is the long alpha-numeric string (“1FC06AF9-EED2-4066-BE3F-FE47166A2190” on my machine) listed under the HFS+ partition (“/dev/disk1/”) in the “diskutil list” command output above.

You can also find the Logical Volume identifier and encryption status by running the “diskutil cs list” command in the Terminal app.


Locate the bottom-most “Logical Volume” entry which has the identifier listed. Underneath that is the “Revertible: Yes (no decryption required)” line which indicates that the volume is not encrypted and can be reverted.

Once you have the Logical Volume identifier, you can revert the Core Storage volume back to a plain HFS+ partition using the “diskutil coreStorage revert” command like so:
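Using the example Logical Volume identifier from my machine (substitute your own):

```shell
# Revert the Core Storage logical volume back to a plain HFS+ partition
diskutil coreStorage revert 1FC06AF9-EED2-4066-BE3F-FE47166A2190
```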


You will need to reboot the Macbook. If you run the “diskutil list” command without rebooting, it will still incorrectly show the logical volume (under “/dev/disk1”) even though the Core Storage volume has been replaced by a HFS+ partition (under “/dev/disk0”).


Once the old HFS+ partition was back in place, I was able to delete, merge, and resize to my heart’s content. The solid state hard drive makes Mac OS X blazing fast; I regret waiting so long before installing one.


Sync Google Contacts on iPhone

Mobile Devices 2 Comments

Recently, my niece’s iPhone 4S died. We got her a replacement phone, but all the data on her old phone was gone. Unfortunately, even though she had a Gmail account configured, her iPhone did not sync its addresses to Google Contacts. I checked my own Google Contacts account and found that it contained almost none of the addresses on my iPhone. Even though I had enabled contacts sync for my Gmail account, the iPhone addresses were not getting synced to Google. So I did some research, found the issue, and have summarized my findings and solution below.

Four Contact Types

For my purpose, there are four contact types that exist on the iPhone:

  1. SIM contacts are located on your legacy SIM card (you will need to import them into your iPhone to make them visible).
  2. Local contacts exist only on your iPhone (not synced from Google).
  3. Google contacts synced from Google to the iPhone.
  4. iCloud contacts synced from iCloud to the iPhone. When you enable iCloud contacts sync, it will merge or delete any local contacts so the two types cannot co-exist. iCloud will also merge or delete any Google contacts on the iPhone.

Note: There are other contact types, like Yahoo contacts, but I don’t use them.

When you view your Contacts app, all the addresses (except non-imported SIM contacts) will show in one listing. The iPhone does remember which contacts are of which type, and there is a way to filter what you see. If you have more than one type of contacts, the Contacts app will show a Groups link on the top left. Click on it and you will see two or more of these options to select which types to make visible (by checkmark):

  • “All on My iPhone” (Local contacts; you won’t see this option if iCloud contacts sync is enabled.)
  • “All your_Gmail_address” (Google Contacts)
  • “All iCloud” (iCloud Contacts)

Set Google Contacts as the Default

On my iPhone, I had many local contacts and a few Google contacts. Even after configuring Google contacts sync, newly-created addresses were saved as local contacts and not synced to Google. The problem was a second, separate setting called “Default Account” which controlled the default type for newly-created contacts; on my iPhone, that setting forced all new addresses to be created as local contacts. I believe that the default type was set to local contacts because I had created some local addresses before configuring Google contacts sync and Google contacts sync did not set “Default Account” to itself. To fix this, I had to manually change “Default Account” to be Google contacts.

Note: If you configure Google contacts sync and there are no pre-existing local contacts on your iPhone (and iCloud contact sync is not enabled), then the default type for newly-created addresses will be set to Google contacts automatically.

To configure a Google account, go to “Settings->Mail, Contacts, Calendars” on your iPhone. Under Accounts at the top, click “Add Account” and input your Google account access information. Or if you have already created the Google account, it will be listed and you can just click on it to see its settings. In the Google account settings, make sure that “Contacts” sync is turned on (in addition to other services like “Mail” or “Calendar”). You will be prompted to “Keep on My iPhone” your current contacts (local or iCloud), which I recommend you agree to; otherwise, those pre-existing addresses will be deleted.

Note: Only addresses in your Google “My Contacts” group will be synced to the iPhone. So if you want a particular Google contact available on the iPhone, just move that contact into the “My Contacts” group.

Then make sure to set the iPhone to save new addresses to Google Contacts by going back to “Settings->Mail, Contacts, Calendars” and locating the “Default Account” setting. Change it to “your_Gmail_address” (Google contact), instead of “On My iPhone” (local contact) or “iCloud” (iCloud contact). The “Default Account” setting is also used to identify the default sender email address when you compose a new email.

Note: You will only see the “Default Account” setting if you have more than one contact type existing on your iPhone. If you have two contact types but still do not see the “Default Account” setting, manually close the Settings app (click on Home button twice and drag the Settings app up and out) and then re-open it to refresh the display.

Be aware that when creating a new address in the Contacts app, if the Contacts app is configured to not show Google contacts, then the newly-created address may be stored locally (or to iCloud) instead of to Google, even if the “Default Account” setting is configured to use your Google account.

Transfer Local Contacts to Google

Google contacts sync did not provide an option to merge local contacts. I had to figure out a workaround. The method I decided to use was to merge my local contacts into iCloud and then to export the contacts from iCloud for import into Google. (Alternatively, I could use iTunes sync, but it looked a lot cleaner to use iCloud.)

Developed by Apple, iCloud contacts sync is very comprehensive. When you enable iCloud contacts sync, it will offer the option to merge with any contacts on the iPhone (including local and Google contact types); if you decline to merge, it will delete all the pre-existing addresses. Once merged, whatever you see in the Contacts app (with all types visible) is what you will see in iCloud. Unlike Google contacts sync which only affects Google contacts, iCloud takes ownership of all the contacts on the iPhone. In addition, iCloud contact sync will set itself as the “Default Account” automatically.

First, configure the iCloud account by going to “Settings->iCloud” to add or view an existing iCloud account. (Alternatively, you can go to “Settings->Mail, Contacts, Calendars” and “Add [an] Account” of type “iCloud”.) In the iCloud settings, turn on “Contacts” sync and agree to the merge prompt (to not delete all the pre-existing addresses).

Once iCloud contacts sync was enabled and the local contacts were merged (almost instantaneously because I only had 75 addresses), these are the steps which I took to transfer the addresses to Google:

  1. Browse to the iCloud website, log in, and click on Contacts. (Note: You must use a browser other than Chrome, like Internet Explorer or Firefox, because the iCloud contacts export function is currently broken under Chrome.)
    1. Verify that all addresses from the iPhone are present.
    2. Click on the grey gear icon on the bottom-left.
    3. Click on “Select All” and then “Export vCard” to download a .vcf file containing all the addresses.
  2. At this time, I recommend disabling the iCloud contact sync on the iPhone so that it won’t interfere with the Google contact sync. Disabling iCloud contacts sync will prompt you to delete or leave the addresses on the phone. If you want to start with a clean slate, I recommend deleting all the addresses. Once we are done, the iPhone will re-populate them from Google Contacts.
  3. Browse to Google Contacts and log in.
    1. Unfortunately, Google will default you to their new contacts UI preview which strangely does not make the “My Contacts” group visible or accessible. To see the “My Contacts” group, click on the “More” link at the bottom-left and then select “Leave the Contacts preview” to get back to the old UI. You should now see “My Contacts” listed as the top group.
    2. Click on “Import Contacts…” on the bottom-left.
    3. Browse to the downloaded .vcf file that was exported from iCloud.
    4. Click on the “Import” button.
    5. You will see a new “Imported” group under the “My Contacts” group. It should contain all the addresses exported from iCloud.
    6. Optionally, you can delete the Imported group by selecting it, clicking on the “More” link at the top, and picking “Delete Group”. Don’t worry, the imported contacts won’t be deleted because they are also kept under the parent “My Contacts” group.

I ended up with a lot of duplicate contacts because my Google account had email addresses while my iPhone had telephone numbers. Thankfully, Google provided a mechanism to merge duplicate contacts. I recommend using the more sophisticated merge contacts function under the new UI preview, instead of under the old UI (under the old UI, click on the Imported folder under “My Contacts” and you will see a banner asking if you wish to “Find & merge duplicates”).

On the old Google Contacts UI:

  1. Click on “Try Contacts preview” on the bottom-left menu to return to the new UI preview.
  2. Click on “Find duplicates”.
  3. The duplicate contacts are nicely grouped together (2 or more by name) with their own “Merge” buttons. Click on the “Merge” button to merge the associated set of duplicate contacts.

Contacts Not Syncing From Google

I then checked my iPhone and did not see the newly imported Google contacts. There is an iPhone setting which controls how the iPhone syncs with Google. The default configuration on my iPhone is to manually sync Google contacts, meaning that when I start or use the Contacts app, my Google contacts will be synced. Unfortunately, even when starting and using the Contacts app, my Google addresses weren’t downloaded.

To see the Google sync setting, go to “Settings -> Mail, Contacts, Calendars”, click on “Fetch New Data”, and locate “your_Gmail_address”. Gmail doesn’t support “Push” (where Google would send newly-created contacts to the iPhone), so it is set to “Fetch” (the iPhone queries Google for new contacts) by default. The fetch schedule is located at the bottom and on my iPhone, it was set to “Manually”. The manual fetch meant that I had to start or use the Contacts app for the addresses to sync with Google. I could change the fetch schedule to sync with Google periodically (every 15 or 30 minutes, or hourly), but decided that manual was fine (I want to reduce battery usage).

To force the iPhone to sync immediately with Google Contacts, I disabled and then re-enabled the Google contacts sync (see the Google account under “Settings -> Mail, Contacts, Calendars”). When disabling the Google contacts sync, it will force you to delete all Google contacts from the iPhone; this is okay because when we re-enable, it will re-download all the Google addresses. After that, I was able to see all the Google addresses in the Contacts app. (Make sure to double-check that the “Default Account” is still set to Google.)

Import SIM Contacts

If you have an old SIM card that contains addresses (the SIM storage was used by the older, non-smart phones) and wish to import them into your iPhone, put the SIM card into the iPhone, go to “Settings -> Mail, Contacts, Calendars”, click on “Import SIM Contacts”, and select your Google account (instead of “On My iPhone” or “iCloud”).

Simultaneous Google and iCloud Contacts Sync

It occurred to me that I could have both Google and iCloud contacts sync enabled at the same time. However, I’m not sure about some of the behavior. Every address on the iPhone will replicate from iCloud, while only Google-type addresses will replicate from the Google account. Will downloaded Google addresses that don’t exist in iCloud be replicated then to iCloud? Will downloaded iCloud addresses that don’t exist in Google be replicated to Google? Does this behavior depend upon which contact type “Default Account” is set to? I don’t know. It is probably best to use one or the other, not both.

Because I am leery about unnecessarily exposing my personal information on the Internet, I manually removed all the contacts from iCloud after I was confident that I had imported them into Google Contacts successfully.


Split One WordPress Blog Into Two

Internet No Comments

When I started my Do It Scared! blog (later moved to Folded Life), I did not have in mind any goal beyond sharing technical knowledge and random thoughts. As I’ve written more content, I’ve come to realize that my posts split into two very different camps, technical how-to instructions and non-technical realizations about life. One of my non-technical friends told me a while ago, “I enjoy reading your blog except for the stuff that I have no idea what you are talking about.” My blog had become schizophrenic.

The cure I’ve implemented is to split the blog into two different blogs, one technical and the other non-technical. Readers can focus on one or the other, without getting distracted. Because I’m certain that this is not a rare problem for a blog creator to have, I’ve documented what I’ve done to separate my blog into two.

A Domain By Any Other Name

Choosing a second domain name is probably the toughest step because a good domain name is hard to come up with and when you do, the chances of it being available are very low. However, there is hope because not all domain names are taken. You might just luck out or come up with something so unique that no one else has thought about it before (and had been willing to register it).

I suggest first deciding which content you want to keep under the old domain name and which content you wish to move to the new domain name. The nature of the latter’s content will help you to come up with an appropriate new domain name. I’ve decided to keep the non-technical content at the old domain and move the technical content to the new domain. Because I know the new domain is technical, I could try to come up with a nerdy domain name.

If you don’t have a preference for where each content should go after splitting the content into two, you may wish to keep the larger half at the old domain and move the smaller half to the new domain. This will reduce the effort required later to create redirects from the old domain to the new. Unfortunately, I’ve decided to move the technical content, the much larger half of my blog, to a new domain.

Being dissatisfied with all the potential, available new domain names that I came up with for my technical content, I’ve decided to put the technical content under an existing domain name that I had registered previously; this domain name is composed of my full name.

Attack of the WordPress Clones

To start the move, I’ve duplicated the WordPress content from the old domain to the new domain. This involved copying the Nginx site configuration file, WordPress source files and the WordPress MySQL database, and then making minor modifications. The instructions below were performed on my unmanaged virtual private server.

Note: To keep things consistent, I’ve always taken care to name the Nginx www directory, Nginx domain server configuration file, MySQL database name and username the same as the domain name. The instructions below will reflect this naming convention. Please adjust to match your own custom names accordingly.

Once the new domain name is registered and the DNS records are updated, we can configure Nginx to serve the new domain by doing the following:

# Copy the Nginx site config file
sudo cp /etc/nginx/sites-available/olddomain /etc/nginx/sites-available/newdomain

# Edit the new Nginx config file
sudo nano /etc/nginx/sites-available/newdomain
    # Update the file location and server name
    root /var/www/newdomain;
    server_name newdomain;

# Enable the new Nginx domain
sudo ln -s /etc/nginx/sites-available/newdomain /etc/nginx/sites-enabled/newdomain

# Copy WordPress code from the old domain to the new domain
# (the trailing "/." copies the directory contents, not the directory itself)
sudo mkdir /var/www/newdomain
sudo cp -r /var/www/olddomain/. /var/www/newdomain

# Adjust permissions for the new domain directory
sudo chown -R www-data:www-data /var/www/newdomain
sudo chmod -R g+w /var/www/newdomain

# Update the new domain WordPress configuration
sudo nano /var/www/newdomain/wp-config.php
    # Update database, user, and password variables.
    define('DB_NAME', 'newdomain');
    define('DB_USER', 'newdomain');
    define('DB_PASSWORD', 'newdomain_password');

# Reload the Nginx server to make the changes effective
sudo service nginx reload

Create the new domain’s WordPress MySQL database and user:

# Open a MySQL interactive command shell
mysql -u root -p

# Create MySQL WordPress database for new domain blog
mysql> create database newdomain;

# Create MySQL user and password
mysql> create user newdomain@localhost;
mysql> set password for newdomain@localhost = PASSWORD('newdomain_password');

# Grant the MySQL user full privileges on the WordPress database
mysql> grant all privileges on newdomain.* to newdomain@localhost identified by 'newdomain_password';

# Make the privilege changes effective
mysql> flush privileges;

# Exit the MySQL interactive shell
mysql> quit

The old domain’s WordPress database may contain URL references (to the old domain) and directory references (to the old domain directory). Because I’ve named the old domain directory the same as the old domain name (and likewise for the new domain), modifying the old WordPress database for the new domain requires a simple text replacement. A global search and replacement of the old domain name with the new domain name fixes all the URL and directory references.

# Export the old domain's WordPress database
mysqldump -u[olddomain] -p[mypassword] olddomain > /tmp/olddomain_modified.sql

# Use your favorite editor to search and replace all olddomain matches with newdomain
# Below is an example using the vi editor; inside vi, run the global string
# replace ":%s/olddomain/newdomain/g" and then save with ":wq"
sudo vi /tmp/olddomain_modified.sql

# Import the modified WordPress database into the new domain's WordPress database
mysql -u newdomain -p[mypassword] newdomain < /tmp/olddomain_modified.sql
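As a non-interactive alternative to editing the dump in vi, the same global substitution can be done with sed. The snippet below demonstrates it on one sample line from the dump:

```shell
# Replace every occurrence of the old domain name with the new one;
# demonstrated here on a single sample line
echo "define('DB_NAME', 'olddomain');" | sed 's/olddomain/newdomain/g'
```

Against the actual export file, the in-place form would be `sed -i 's/olddomain/newdomain/g' /tmp/olddomain_modified.sql`.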

At this point, you should be able to browse to the new domain address to see an identical copy of your old domain’s blog.

Note: If your WordPress uses embedded code (like Google Analytics) or WordPress plugins that contain references to the old domain name (like Google FeedBurner), you will want to update them manually to use the new domain name.

Posts Not Here!

To ensure that external links to my old domain’s blog will continue to work, I need to create redirects from the old domain to the new domain for the posts that have been moved. To do so, I recommend using the WordPress Redirection plugin. It is a simple plugin that supports regular expression matching and keeps a history of the number of redirects that occur (very useful to record which URLs are being redirected).

My old website’s permalink format looks like the one below. Many variations of it are also allowed.

# Permalink format
http://olddomain/post_id/post_title/

# Allowed variations include just the post_id
http://olddomain/post_id/

# Also allowed is removal of the ending forward-slash
http://olddomain/post_id/post_title

In addition, the post title doesn’t matter because the title could be incorrect and WordPress will pull up the correct post using the post identifier. Because we need to redirect any of these variations, regular expression matching is required.

An example regular expression matching a URL like “http://olddomain/314/the-value-of-pie/” and its variations would be:

^/314(|/.*)$

Note: The domain portion “http://olddomain” is dropped when doing URL matching, so the actual match is against “/314/the-value-of-pie/”.

Here’s a very brief explanation of the regular expression above.

  • The first caret “^” character says to match starting with the beginning of the string. The last dollar sign “$” character says to match to the end of the string.
  • The “/314” means that the beginning of the string should match that sequence exactly.
  • The parenthesis and vertical bar combination “( | )” creates a logical OR construct.
  • The first alternative “(|” means that nothing follows “/314”. This alternative would match the URL “/314” exactly.
  • The second alternative “|/.*)$” means that there should be a forward-slash “/” followed by zero or more of any character “.*” until the end “$”. This would match URLs like “/314/”, “/314/blah”, “/314/blah/”, “/314/blah/blah/”, and of course, “/314/the-value-of-pie/”.
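The regular expression’s behavior can be spot-checked from a shell with grep -E (GNU grep accepts the empty alternative; the plugin itself uses PCRE, which behaves the same for this pattern):

```shell
regex='^/314(|/.*)$'
# These URLs should match
echo '/314'                   | grep -Eq "$regex" && echo 'match: /314'
echo '/314/the-value-of-pie/' | grep -Eq "$regex" && echo 'match: /314/the-value-of-pie/'
# A different post id should not match
echo '/3140' | grep -Eq "$regex" || echo 'no match: /3140'
```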

To create a URL redirect using the Redirection plugin:

  1. Install and activate the WordPress Redirection plugin on the old domain.
  2. Go to the Redirection plugin’s Settings (also found under menu Tools->Redirection).
  3. Click on the “Redirects” section.
  4. In the “Add new redirection” form at the bottom, input the matching regular expression into the “Source URL”, check the “Regular expression” checkbox, and input the new domain target in the “Target URL”.
  5. Click the “Add Redirection” button when done.
  6. Repeat the steps for each post that will be moved to the new domain.


The Redirection plugin will create a 301 permanent redirect. When encountering 301 redirects, browsers will cache the resulting redirected URLs. Search engines may also update their records accordingly. If you are just experimenting, I recommend editing the redirect (under the Redirection Settings, click on the redirect item, select Edit, and click on the empty square under “Source URL” to expand the set of available options) and choosing “307 – Temporary Redirect” in the “HTTP Code” field.

Besides the permalink format, WordPress accepts the default query format, “/?p=post_id” (for example, “/?p=314”). Unfortunately, the Redirection plugin does not support this format for redirects. I looked at some other redirect plugins but they require that the original posts be kept on the old domain’s WordPress because they use an extended post property to perform the redirect. Because I wish to delete the moved posts, I cannot use any of the existing WordPress redirect plugins to redirect the query formatted URLs. However, because external websites should only use the permalink format when referencing my old blog, I don’t actually need to redirect the query format.

Nginx Rewrites

As an alternative to the WordPress Redirection plugin, Nginx rewrite directives can be used. The advantage is speed because Nginx will redirect before WordPress is even involved. The disadvantage is that there isn’t an easy way to track the redirects that occur.

To perform a redirect using Nginx, edit the Nginx configuration file “/etc/nginx/sites-available/olddomain” and insert the following rewrite statement immediately beneath the “server_name” directive:

# 301 permanent redirect
rewrite ^/314(|/.*)$ http://newdomain/314$1 permanent;

# Or temporary redirect (the "redirect" flag issues a 302)
rewrite ^/314(|/.*)$ http://newdomain/314$1 redirect;

You will need to create a rewrite directive for each post to be moved. The rewrite directives will take effect when the Nginx server is reloaded.

Unlike the WordPress redirect plugins, Nginx supports redirecting post identifier query formatted URLs. In the old domain’s Nginx configuration file, immediately beneath the “server_name” directive, insert the if-return statement below:

# 301 permanent redirect
if ($arg_p = 314) {
    return 301 http://newdomain/?p=314;
}

# Or 307 temporary redirect
if ($arg_p = 314) {
    return 307 http://newdomain/?p=314;
}
Note: You will need an if-return statement for each post which you wish to redirect. If there are many posts to redirect, the Nginx configuration file may become bloated with if-return statements. It may be possible to use the Nginx map function to replace the if-return statements; unfortunately, I haven’t figured out how to use the map function yet.
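For what it’s worth, here is an untested sketch of how the map approach might look. The directive names are real Nginx, but I have not verified this configuration; an http-level map would replace the per-post if-return statements:

```nginx
# In the http block: map the requested URI to a redirect target
# (one regex line per moved post; the empty default means "no redirect")
map $request_uri $redirect_target {
    default         "";
    ~^/314(/.*)?$   http://newdomain/314$1;
    ~^/315(/.*)?$   http://newdomain/315$1;
}

# In the old domain's server block:
if ($redirect_target) {
    return 301 $redirect_target;
}
```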

Batch Import For Redirection

Because I have over a hundred technical URLs to redirect and do not wish to manually input them, I’ve looked into ways to batch import them into the Redirection plugin. The most direct method is to insert the redirects directly into the MySQL database.

Here is an example MySQL insert statement:

# Log into MySQL interface
mysql -u root -p

# Use the old domain's WordPress database
mysql> use olddomain;

# Insert the redirect into Redirection's items table
mysql> INSERT INTO wp_redirection_items VALUES (NULL,'^/314(|/.*)$',1,0,0,

# Exit MySQL interface
mysql> quit

Note: Before executing the MySQL insert statement above, view the Redirection plugin’s Settings page at least once to trigger the Redirection plugin to create its MySQL tables, including “wp_redirection_items”.

Because I am lazy, I wrote a PHP script to generate a file containing the MySQL insert statements for all my moved posts, and then executed that file against the old domain’s WordPress database.
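The original script was PHP, but any scripting language works; below is a hypothetical shell sketch of the same idea. The post ids and the two-column insert are illustrative placeholders, not the plugin’s real schema (the actual wp_redirection_items table has more columns):

```shell
# Hypothetical sketch: emit one INSERT statement per moved post id
# (column names here are placeholders, not the plugin's real schema)
for id in 314 271 198; do
    echo "INSERT INTO wp_redirection_items (url, regex) VALUES ('^/${id}(|/.*)\$', 1);"
done > /tmp/redirect_inserts.sql

cat /tmp/redirect_inserts.sql
```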

The Redirection plugin’s user interface exposes support for importing a comma-separated file containing the redirects. Unfortunately, there is no documentation on the import format and I could not get it to work based upon the few related forum posts which I found. Even had I gotten import to work, it looks like a MySQL update statement is required to enable the regular expression type check on all the imported redirects. Since MySQL would be required in any case, I am satisfied with the MySQL insert solution above.

Very Slow Cleanup

Once the redirects are working, you can start moving the migrated posts from the old domain’s blog to the trash. Likewise, in the new domain’s blog, move to the trash the posts that are staying on the old domain. I recommend waiting a few weeks to make sure everything is working fine before emptying the trash. Once you have emptied the trash, don’t forget to delete any images referenced by the deleted posts.

In a few months, I plan to look at the redirect statistics to figure out which URLs are being redirected. I plan to delete the redirects that are not being used. Once I’ve reduced the number of redirects as much as I can, I intend to convert them into permanent Nginx rewrite statements and disable the Redirection plugin.

Update: The latest version of the Redirection plugin has a bulk action “Reset Hits” which resets the redirect counts, so we wouldn’t need to use the MySQL commands below.

Because I want to do several rounds of checking which URLs are being redirected, I needed a way to reset the redirection counts to zero. Unfortunately, there is no user interface option to reset the counts, so I had to use these MySQL statements:

# Log into MySQL interface
mysql -u root -p

# Use the old domain's WordPress database
mysql> use olddomain;

# Reset the redirection counts to zero
mysql> update wp_redirection_items set last_count=0;

# Exit MySQL interface
mysql> quit

If you decide to split your blog, I hope the instructions above will help.


DigitalOcean After A Year: Still Good

Linux No Comments

Two weekends ago, on a Friday, my droplet was automatically upgraded to DigitalOcean‘s new cloud. I had received an email about the upgrade but had ignored it, believing that the upgrade would go smoothly. I was on a trip that Friday and the weekend, so I did not check my website until Monday morning. Unfortunately, my website was unreachable and had been so since Friday.

Droplet Up, Website Down

I logged into DigitalOcean and the web interface said that my droplet was up and working fine. However, I could not ping it or secure shell to it. DigitalOcean’s Console Access web interface did work and showed the necessary processes running on my droplet. The droplet was working fine but network access to it appeared to be broken.

I contacted support and was eventually routed to second level support (an engineer) who told me that I had to manually power off (run “sudo poweroff” on the command line) and then power on the droplet (using DigitalOcean’s web interface). This fixed the network connection issue and my website was back online. Note that doing a “shutdown -r” command or a “Power Cycle” (using the web interface) did not fix the connectivity problem.

DigitalOcean support was very responsive. For the three support cases I’ve opened in the year that I’ve been with them, first-line support responded promptly. Of course, support did make use of canned responses (which I didn’t object to because it made sense to filter out beginners). Though I was vexed by the network connectivity issue (my website was offline for 3 days) and my irritation showed in my communications, the support staff always remained very polite and gracious.

Doing such a system-wide upgrade without checking network connectivity to affected droplets concerns me. Checking that upgraded droplets are up and reachable would have been my first validation test after the upgrade, instead of putting the burden on the customer to make sure everything is okay. Then again, this expectation might be acceptable for an unmanaged VPS; though I think it is a grey area because the upgrade was initiated by DigitalOcean. For full disclosure, DigitalOcean did provide a manual upgrade process; which in hindsight, I should have taken advantage of. Lesson learned.

Slow and Slowerer

When I configured my droplet a year ago, I was very impressed by the performance. My website loaded pages within 1 second, as opposed to the 2-4 seconds on my previous shared web hosting service. Recently though, I would have been very glad to get my 2-4 second page load times back.

Over the past few months, I’ve noticed my website getting slower and slower. Even a simple PHP application I had running (also on the droplet) took longer and longer to process. Like a frog slowly being boiled, I got used to a 4-6 second page load time as being “normal”.

Worse, after my droplet was upgraded, the page load time rose to 8-9 seconds. I installed the “WP Super Cache” WordPress plugin in a quick-fix attempt to increase performance and it worked. Once WP Super Cache was activated, page load times moved back to 4-6 seconds.

You know what they say about quick fixes. A week later, the page load times increased to 8-15 seconds. 15 seconds! I disabled WP Super Cache and page load times dropped to 4-6 seconds. I didn’t understand why, but at least the crisis was averted.

Bottleneck? What Bottleneck?

The performance of any VPS (or shared web hosting) is determined by the allocated CPU power, amount of memory, disk access speed, software stack (programs running), and network traffic. The first three can be collectively known as the hardware or virtual hardware. In my website’s case, the software stack is composed of the Ubuntu operating system, LEMP infrastructure, WordPress and its plugins. And though I would love to say that the slowdown was due to increased network traffic to my website, it wasn’t.

When optimizing for performance, it pays to determine where the bottleneck is. For example, you could waste time optimizing the LEMP (by adding Varnish) or WordPress (by adding the WP Super Cache plugin) when the bottleneck is that you are out of memory (Varnish won’t help) or free disk space (WP Super Cache could actually make this worse with its caching mechanism). Having said that, there are ways to optimize LEMP (and WordPress to a lesser extent) to reduce memory usage; but then, it is usually at the cost of performance.

I contacted DigitalOcean support for help. I got a mostly canned reply back. They stated that the fault wasn’t because of network connectivity, hardware, or over-subscription (where there are too many droplets running on the same physical hardware). They had tested loading a couple of static images from my website, which only took 100-200ms each and proved that the problem was not on their end. The canned reply suggested using sysstat to figure out the problem with the droplet.

Sysstat is a collection of utilities to monitor performance under Linux. Here’s how to install and use sysstat:

# Install sysstat
sudo apt-get install sysstat

# Enable sysstat system metrics collection
sudo vi /etc/default/sysstat
  # Change ENABLE="false" to "true"

# Start sysstat
sudo /etc/init.d/sysstat start

# Check CPU usage
sar -u

# Check memory usage
sar -r

Because we have just started the sysstat service, the CPU and memory checks above will only return the current usage. Sysstat collects system metrics every 10 minutes; so in the future, the “sar” commands above will return the CPU and memory usage sampled at 10-minute intervals. Sysstat has a lot more functionality which I have yet to explore.

My favorite performance monitoring tool is the “top” command. It displays a real-time summary of the CPU, memory, and swap usage with a list of the processes consuming the most CPU. (Note that the Ubuntu image from DigitalOcean has swap disabled by default.) The top command allows me to see what is happening on the system as I load a page.
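For scripting or capturing a snapshot to a file, top can also be run non-interactively (a batch-mode sketch; the exact header layout varies by top version):

```shell
# One non-interactive iteration of top: the header shows the load
# average, CPU, and memory summary, and the body lists the busiest
# processes at that instant
top -bn1 | head -n 15
```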


Right off the bat, I noticed that my CPU usage was constantly around 90%, which was a big red flag. After a day of recording, sysstat returned the same 90% average CPU usage. This might explain why the WP Super Cache plugin, which required more CPU and disk access to do the page caching, made the website performance worse. I didn’t recall the CPU being that high when I first configured the droplet a year ago (it would have concerned me very much then).

Memory also looked alarming with a 98% usage (493424 used / 501808 total); however, it was a false alarm. Evidently, Linux operating systems like Ubuntu will allocate all the free memory for disk caching. Then, when applications need more memory, they get it from the disk cache. So, the important data to look for is the cache size. Here, the cache size was 28% of total memory (144132 cached Mem / 501808 total), which means only about 2/3 of memory was actually used by applications.

Note: The Linux command to display free memory, “free -m”, supports the same conclusion. Look for the reported “cached” number.
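As a sanity check, the application share of memory is roughly (used - cached) / total. A quick shell calculation using the figures reported by top above:

```shell
# Recompute the figures from top: raw usage looks alarming at 98%,
# but subtracting the disk cache reveals the true application share
used=493424; cached=144132; total=501808
echo "raw usage: $(( used * 100 / total ))%"
echo "app usage: $(( (used - cached) * 100 / total ))%"
```

This prints a raw usage of 98% but an application usage of only 69%, in line with the “about 2/3” figure above.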

What Is Eating My Hard Drive?

Running the Linux command to report file system disk space usage, “df -h”, indicated that 93% of my 20GB quota was used. I remembered that my droplet used much less than 50% of the 20GB a year ago.

Find the space hogs:

cd /
sudo du -h --max-depth=1
16G    ./var

cd var
sudo du -h --max-depth=1
6.0G   ./www
7.4G   ./lib
2.0G   ./log

Note: If your system’s “du” (estimate file space usage) command does not support the “--max-depth” flag, then you will need to run the command on each directory one by one like so:

sudo du -sh /var/www
sudo du -sh /var/lib
sudo du -sh /var/log

The “/var/www” directory contained my website content so that was a keeper. The “/var/lib” directory contained important system and application files, so we could not just delete anything in it without a lot of caution. The “/var/lib” directory’s large size was caused primarily by the MySQL database file, “/var/lib/mysql/ibdata1”, which we will examine in detail later. I was certain that I could safely delete the archived log files from the “/var/log” directory though.

# Manually rotate log files (basically create new active log files)
sudo /etc/cron.daily/logrotate

# Delete all gzipped archive files
sudo find /var/log -type f -name "*.gz" -delete

# Delete all next-to-be-archived files
sudo find /var/log -type f -name "*.1" -delete

# Double-check size again
sudo du -sh /var/log
563M   /var/log
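To keep “/var/log” from filling up again, logrotate can be told to keep fewer, compressed archives. A sketch of the relevant directives (assuming the stock Ubuntu “/etc/logrotate.conf” layout; individual services may override these in “/etc/logrotate.d/”):

```
# /etc/logrotate.conf (excerpt)
weekly          # rotate log files weekly
rotate 4        # keep only 4 weeks of archives
compress        # gzip rotated logs to save space
```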

Strangely, I found the mail directory, “/var/mail”, taking up 150MB. There were a lot of non-delivery notification emails sent to the website address. (I don’t know why, but I plan to investigate at a later time.) I was sure that it was also safe to delete those emails.

# Check usage
sudo du -sh /var/mail
156M   /var/mail

# Truncate all mail files to zero size
sudo find /var/mail -type f -exec truncate --size 0 {} \;

# Double-check usage
sudo du -sh /var/mail
4.0K   /var/mail

Note: I did read a recommendation to enable swap on Ubuntu to guard against out-of-memory errors (swap allows disk space to be used as additional memory at the expense of performance); however, because I had about a third of memory free and a performance problem, I didn’t think enabling swap was the appropriate solution in my case.

Die, WordPress Plugin, You Die!

I strongly believed that the bottleneck was the CPU; the top five most CPU-intensive processes were the “php5-fpm” processes (responsible for executing PHP scripts). So, optimizing LEMP by adding Varnish (an additional HTTP accelerator process) would probably not help, and might even harm the performance further. What could be causing so much CPU usage?

According to Google Analytics, the traffic to my website had not changed significantly this past year. Even if it had, the now roughly 200 visitors per day should not cause such a high CPU usage. I had not changed my website in any way (for example, by adding new plugins). The only changes had been software updates to Ubuntu, WordPress and its plugins.

For LEMP infrastructure issues, the recommended step is to check the log files for errors.

# Linux system log
sudo tail /var/log/dmesg

# Nginx log
sudo tail /var/log/nginx/error.log

# PHP log
sudo tail /var/log/php5-fpm.log

# MySQL log
sudo tail /var/log/mysql/error.log

Looking at the Nginx log was eye-opening because I could see hacking attempts against my website using invalid URLs. However, that could not be the cause of the high CPU usage. There were no other errors or clues in the log files.

For WordPress performance issues, the universally-recommended first step is to disable the plugins and see if that fixes the issue. Rather than disabling all the plugins and re-enabling them one by one, my gut told me that the culprit might be the “WordPress SEO” plugin. When a piece of software gets updates daily, or even twice a day, I know that it is very buggy; WordPress SEO was guilty of that behavior. Disabling the WordPress SEO plugin resulted in an immediate drop in CPU usage to the 30-50% range. Page load times dropped to 2-3 seconds.

Unfortunately, when I checked a few days later, the CPU was back up to 90% and page load times had increased back to 8-10 seconds. The WordPress SEO plugin was a contributor, but it was not the primary cause of my droplet’s performance issue.

MySQL, What Big Eyes You Have

In addition, the “/var/lib” directory had grown another 1.5GB in size and, at a total of 9GB, had consumed almost half of my 20GB allocation. Digging further, I found that it was the “/var/lib/mysql/ibdata1” file that had grown to over 6GB. The “ibdata1” file is where MySQL (specifically the InnoDB storage engine) stores the database data, and while it can grow, it unfortunately can never shrink.

A MySQL query on the database size was necessary to investigate further. Log into MySQL as the root user and run this query to show the sizes of the existing databases:

SELECT table_schema "Data Base Name",
sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
sum( data_free )/ 1024 / 1024 "Free Space in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
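To drill down from the database level to the table level, a similar query against the same information_schema source (a sketch, assuming MySQL 5.x) lists the largest tables:

```sql
-- Largest tables across all databases, biggest first
SELECT table_schema, table_name,
       ROUND( ( data_length + index_length ) / 1024 / 1024, 1 ) "Size in MB"
FROM information_schema.TABLES
ORDER BY ( data_length + index_length ) DESC
LIMIT 10;
```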

I found that my MediaWiki database was over 6GB in size. I had a MediaWiki for personal use. Access to it was restricted by a password-protected directory. I hadn’t used it in a long time (over half a year) so hadn’t paid any attention to it. When I logged into it, I found the main page was blank with a link to an unknown website. A check of the history indicated that multiple unknown revisions had been made to it since February of this year. My MediaWiki had been hacked.

Evidently, someone had gotten past the password-protection and was using the MediaWiki to store 6GB of data. Worse, that someone may have hacked MediaWiki to run their own PHP code (very unlikely but not impossible as I had a very old version of MediaWiki running). This explained the high CPU usage and the low free disk space.

I salvaged my personal info from the MediaWiki (using the history to view old page revisions). I then deleted the MediaWiki database and the directory containing the MediaWiki PHP code. The CPU usage immediately went down to a few percent. Page load time dropped to around one second. Hurrah! (I also changed all my passwords just in case.)

MySQL Database Surgery

Reclaiming the 6GB of space used by MySQL’s “ibdata1” file required major surgery. I needed to delete the “ibdata1” file, which in turn required deleting and re-creating the WordPress database (and my other personal databases).

Before starting, I recommend configuring MySQL to store each InnoDB table in its own separate file, instead of in the “ibdata1” file, to allow more options to manage drive space usage. Doing this will support the MySQL “Optimize Table” command, which can reduce the table’s file size.

sudo nano /etc/mysql/my.cnf
  # Add "innodb_file_per_table" to the [mysqld] section

The change above won’t take effect until we restart MySQL.
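Once MySQL has been restarted (a later step in the procedure below), the setting can be verified from the MySQL prompt:

```sql
-- Should report ON after MySQL restarts with the new my.cnf
SHOW VARIABLES LIKE 'innodb_file_per_table';
```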

We need to do some steps before and after deleting the “ibdata1” file:

# Dump a backup of "wordpress" database (and any other personal database)
mysqldump -u[username] -p[password] wordpress > /tmp/wordpress.sql

# Delete "wordpress" database (and any other database except "mysql" and "performance_schema")
mysql -u root -p
mysql> drop database wordpress;
mysql> quit

# Stop MySQL server
sudo service mysql stop

# Log into root user (necessary to access "/var/lib/mysql" directory)
sudo su -

# Delete subdirectories and files ("ibdata1") under "/var/lib/mysql" except for "/var/lib/mysql/mysql"
cd /var/lib/mysql
ls -I "mysql" | xargs rm -r -f

# Exit root user
exit

# Start MySQL server
sudo service mysql start

# Create "wordpress" database (and any other database)
mysql -u root -p
mysql> create database wordpress;
mysql> quit

# Restore "wordpress" database (and any other database)
mysql -u [username] -p[password] wordpress < /tmp/wordpress.sql

Viewing the “/var/lib/mysql” directory showed a much smaller “ibdata1” file (about 18M). Strangely, my WordPress database was configured to use MyISAM (an alternative storage engine to InnoDB) by default, so it didn’t use the “ibdata1” file. The “/var/lib/mysql/wordpress” directory contained MyISAM .myd storage files. However, my other personal database did use InnoDB and its directory, “/var/lib/mysql/personal_database”, did contain individual InnoDB .ibd storage files (per table).

WordPress On A Diet

While I was poking around WordPress, I decided to optimize the MySQL database by deleting unnecessary data such as previous versions of posts. Rather than manually truncating database tables myself (a very dangerous, though oddly satisfying pastime), I decided to use the “Optimize Database after Deleting Revisions” plugin, which did exactly what its name said it did.

Before running the “Optimize Database after Deleting Revisions” plugin, back up your WordPress MySQL database. Then do the following to manually optimize your database:

  1. Go to “Settings/Optimize Database” in the WordPress administration.
  2. Configure the options. I checked all the “Delete…” options except for “Delete pingbacks and trackbacks”. I did not enable the Scheduler because I only wish to run this plugin manually when I decide to.
  3. Click the “Save Settings” button.
  4. Click the “Go To Optimizer” button.
  5. Click the “Start Optimization” button.

Thoughts on Hardware Upgrade

Had I not fixed the high CPU usage issue (that is, had the load been legitimate), the next step would have been to look at options to upgrade the hardware. This would mean upgrading to one of DigitalOcean’s higher-priced plans or even moving to another VPS provider (I have heard that Linode has better performance overall).

Because my bottleneck was the CPU, I would have had to upgrade to DigitalOcean’s $20/month plan, which includes a 2 core processor. Upgrading to the $10/month plan, which only included a 1 core processor (the DigitalOcean website didn’t say whether it was faster than the 1 core processor in the $5/month plan), would not have fixed my CPU issue. Had my bottleneck been a memory limitation, I would have chosen the $10/month plan, which would have doubled the memory size (1GB versus 512MB).

Thinking outside the box, a cheaper option than the $20/month plan would be to get a second $5/month droplet to run a dedicated MySQL server (hosting the WordPress database). The original droplet would run only the WordPress application and talk to the second droplet’s database. This $10/month option with two $5/month droplets would have two 1 core processors, which might be better than a single 2 core processor! Alas, the MySQL process used only 10-15% of the CPU so removing it from the original droplet would not have made much of a difference.
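For what it’s worth, the two-droplet split would require no change to WordPress itself beyond pointing it at the remote database. A sketch of the one edit needed in “wp-config.php” on the web droplet (the IP below is a placeholder for the database droplet’s private address):

```php
// wp-config.php: point WordPress at the dedicated MySQL droplet
// over the private network (placeholder IP, not a real address)
define('DB_HOST', '10.132.0.2:3306');
```

The MySQL droplet would also need its “bind-address” in “/etc/mysql/my.cnf” changed from 127.0.0.1 to its private IP so that the web droplet could reach it.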

Hopefully documenting my trials and tribulations above will help you to have an easier time with the performance of your unmanaged VPS.


Online Fax For Free (Cheaper And Faster Than A Stamp)

Internet 2 Comments

Recently, my rebate application was rejected. I called and was told that I needed to resubmit the rebate with a copy of the invoice, instead of the order receipt that I had included in the original application. They offered to receive it by fax, in addition to snail mail. (The original rebate application did not mention a fax option. I was told about the fax option verbally by the customer representative.)

Unfortunately, I had just canceled my GreenFax Internet faxing service because I hadn’t used it in over a year. (GreenFax is a paid service which charges 5-10 cents per page sent.) I had concluded that email and online form submission had made faxing obsolete and no longer necessary. But that conclusion was premature, because many companies still use fax and are slow to adopt better technology (such as online form submission). Faxing remains a viable, convenient, and much faster alternative to a stamped letter.

Rather than re-opening a GreenFax account for this rare instance, I decided to look for free options. Surprisingly, there were several free online faxing services. I ended up choosing FaxZero because it had good reviews, had a simple webpage, allowed 3 pages (excluding the cover sheet), and did not require me to create an online account. FaxZero does place its logo on the fax cover sheet. I was concerned about how large that logo would be, but discovered it to be small and unobtrusive. After using FaxZero, I whole-heartedly recommend it. Hey, it saved me a 49 cent stamp and a trip to the post office!

To send a free fax, do the following:

  1. Browse to FaxZero.
  2. Fill out the Sender and Receiver Information.
  3. Attach your multi-page PDF or Word document. You can only send 3 pages for free.
  4. Type some text into the cover sheet. The FaxZero logo will appear on the top-right corner of the cover sheet. (The cover sheet is not counted as one of the 3 free pages.)
  5. Input the displayed confirmation code to prove that you are a living human.
  6. Click on the “Send Free Fax Now” button.
  7. Check your inbox for an email titled “Action Required – Please Confirm Your Fax”. Click on the confirmation link.
  8. You will be directed to a page with a link to your fax’s unique status page. Save that status page link so you can refresh it to see what is going on.
  9. Though FaxZero warns that it can take up to 30 minutes to send the fax, I found that my fax was sent within a few minutes. The status page was updated with a success message and I also got an email stating the same.

The rebate company scanned what I had faxed and helpfully provided the scan online (in the rebate status page). I was able to verify that the FaxZero logo was as small and unobtrusive as the sample image in the FaxZero FAQ.

A question plagued me. Why didn’t the rebate company support uploading the rebate form? Instead, the company forces people to mail in the rebate form, scans it, and then puts it online. I’m afraid that the answer is good, old capitalism.

Companies want customers to jump through hoops when submitting rebates. You have to mail it in, which requires you to get a stamp and go to the post office. And of course, you need to make copies of everything you send in case they claim to have never received anything. You have to wait months before calling in to ask why you haven’t received your check (you need to resubmit because they never got your rebate form) or why your rebate was denied for some reason or another. I must admit that the online rebate status (and emails) that most companies provide nowadays is very helpful, instead of the black hole of waiting without any information that was common in the past.

I found the FaxZero free faxing experience very pleasant, convenient, and quick. So if you ever need to fax something and don’t have an old, clunky fax machine, consider using FaxZero or another free faxing service. I hope that FaxZero remains in business until faxing is finally obsolete.