Outdoor Home Video Surveillance With CleverLoop


My Dad was concerned. He thought he heard noises from the backyard and maybe saw some movement there at night. My Dad grows fruit trees and vegetables, which might attract intruders. His neighborhood is not the safest place; his house had recently been burglarized. He wanted a way to check that the perimeter was safe before leaving the house. The obvious answer, besides moving away, was an outdoor video surveillance system.

After days of research, I settled on the CleverLoop Smart Home Security System. CleverLoop is expensive. I could get video systems (especially coaxial ones) with more cameras for significantly less. However, my primary concern is usability. My Dad is going to use it so it has to be very easy to use.

CleverLoop’s selling point is usability, implemented by their mobile user interface and smart backend (a base station device is included with the cameras). They admit that their cameras are the same generic ones, made in China, that other video surveillance systems use. Hardware is not what CleverLoop is competing on; software is. CleverLoop’s mobile phone app (for iPhone or Android) looked to be more user-friendly and feature-rich than the competitors’ versions.

Here is how CleverLoop did against my selection criteria, in prioritized order:

  1. User-friendly interface – CleverLoop mobile phone app looks easy to use
  2. Accessible from anywhere over Internet – Dad wanted to check the video feeds when he is away from the house
  3. Decent video quality – CleverLoop can support 720p HD (High Definition)
  4. Night vision mode – automatic IR (infrared) night vision
  5. Motion detection and alerts – CleverLoop’s base station handles this
  6. No monthly fee – free CleverLoop plan includes multiple users and backs up alerts to the cloud for 7 days
  7. Wireless – all CleverLoop cameras are wireless, but we ended up not using this function

When I was researching, CleverLoop was selling a kit containing 3 outdoor cameras. At the time, the base station only supported 3 cameras. This was a problem because a house is usually square shaped and I needed 4 cameras to cover the 4 sides. Thankfully, when I was ready to buy, CleverLoop offered a kit containing 4 outdoor cameras. A camera at each corner of the house looking down each side will cover the whole perimeter of the house, albeit with blind spots under each camera. I supposed that they had updated the base station software to support 4 cameras, instead of 3.

What to Buy

Before purchasing an outdoor video surveillance system, you must decide on the type of wiring because that is the hardest part of the installation. I had decided that the most future-proof wiring is PoE (Power over Ethernet) over Cat6 (Category 6) Ethernet cable. PoE reduces installation to running just one cable conveying both power and data. The Cat6 Ethernet cable will support higher data rates for when more powerful cameras become available. If I used a coaxial or non-PoE Ethernet cable, I would need to run a second cable for the power. Trust me on using a PoE cable, because you don’t want to have to drill a second hole through two or more plywood layers (common for exterior walls).

While CleverLoop does not support PoE, there is a way to simulate it. Basically, use adapters, passive PoE injectors and splitters, to feed both data (non-PoE Ethernet cable) and power through the “PoE” Ethernet cable. To protect the PoE adapters from the elements, you can use an outdoor electrical outlet cover box. This very helpful YouTube video, Installing CleverLoop Outdoor IP Camera, was my installation bible. CleverLoop has a support page, Powering an outdoor camera using Power-over-Ethernet (PoE), that references the video along with useful pictures of the PoE injector and splitter wiring.

In addition to ordering the CleverLoop Security System with 4 Outdoor Cameras, I purchased the following for the installation:

  • 500 ft of Cat6 Ethernet Cable (from Amazon). I think I used 100-150 feet at most.
  • 100 Cat6 RJ45 cable connectors (from Amazon). Maybe used a dozen.
  • RJ45 Crimp, Cut, and Strip Tool (from Amazon). Used to cut the Cat6 Ethernet cable and crimp the RJ45 connectors onto the cable ends.
  • 4 Passive PoE Injector and Splitter Kits with 5.5×2.1mm DC Connector (from Amazon). Double-check that the splitters and injectors have 5.5×2.1mm DC connectors because that is what CleverLoop cameras use.
  • 4 Weatherproof Outlet Covers (from Home Depot; Bell Outdoor Weatherproof In-Use Cover for $8.44 each). Get the 2-inch-thick cover, not the 1-inch-thick cover; you’ll need the extra space to hold the PoE splitter, Cat6 Ethernet cable, and CleverLoop Ethernet data and power cables.

I ordered way more Cat6 Ethernet cable and RJ45 connectors than I needed. You can safely order much less, depending upon the size of your house.

If I had to do the installation again, I would also purchase a Network Cable RJ45 Tester (like this one from Amazon). The tester can determine whether the RJ45 connectors are crimped onto the ends of the Ethernet cable correctly or not. Instead, I used an Internet-enabled router and my MacBook to test the cables, which was very inconvenient.

Besides the above, you will probably need a power drill with a long drill bit (one foot or longer). The external frame on my Dad’s house under the roof line (where we placed the cameras) consisted of two 2×6 planks with an inch of spacing between them. The long drill bit was necessary to punch an Ethernet cable-sized hole all the way through.

Drill, Baby, Drill

After everything arrived, I reserved a weekend to complete the installation. The idea was to run the Cat6 Ethernet cables in the attic and drill holes from the attic to the outside at each corner of the house. I didn’t look forward to working in the hot and dirty attic. Thankfully, my older brother volunteered to do the cabling and drilling.

It took my brother the better part of the first day to drill the necessary holes and to run the cables, though with a long break in the middle. He had to drive home to get his 1.5-foot-long drill bit and disposable coveralls (for getting into tight, dirty corners filled with insulation) once he realized both were necessary.

I crimped the RJ45 connectors onto the ends of the Cat6 Ethernet cables and tested them; I only had to redo 5 connectors. When installing the RJ45 connectors, the sequence of colored wires needs to be the same at both ends. I found this page, Making Ethernet Cables – Tricks of the Trade, helpful and followed the sequence of colors that it uses to make a straight-through Ethernet cable.
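
For reference, here is the common T568B straight-through order, assuming that is the standard the page uses (T568A works too, as long as both ends match):

Pin 1: white-orange     Pin 5: white-blue
Pin 2: orange           Pin 6: green
Pin 3: white-green      Pin 7: white-brown
Pin 4: blue             Pin 8: brown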

On the second day, my brother mounted the cameras and outlet covers (containing the PoE splitters, assorted Ethernet cables, and CleverLoop power cables). Because the drilled holes, cameras, and outlet covers were located under the eaves, he did not bother to waterproof the drilled holes (the Ethernet cables fit the holes quite snugly). He also didn’t bother arranging drip loops for the cables entering and leaving the outlet covers.

In the attic, the Cat6 Ethernet cables from each camera met at a point above where my Dad’s Internet router was located. We tried to run a Cat6 Ethernet cable down inside the wall to the router, but a horizontal wood frame member prevented it. Makes no sense; who would put a solid, horizontal plank of wood in the middle of an interior wall? (Even fish tape would not have helped in this scenario!) We ended up drilling a hole in the ceiling close to the wall and running the Cat6 Ethernet cable down the outside of the wall to the router. It’s a little ugly, but the alternative, ripping out the drywall, is even more undesirable.

Good thing I had a spare 5-port Ethernet switch; otherwise, I would have had to purchase one. In the attic, I connected all the cameras’ Cat6 Ethernet cables to the switch and then the switch to the Internet router. Thankfully, the attic had an outlet socket nearby because I needed to plug in all the CleverLoop cameras’ power bricks (I used a power strip), connect them to the PoE injectors, and then connect the PoE injectors to the Cat6 Ethernet cables. In the end, I had a rat’s nest of cables, PoE injectors, and power bricks. I put the whole pile on a sheet of wood to prevent any contact with the attic’s insulation material.

Unfortunately, the cameras did not have any power indicators. (Though, if it had been night, I might have been able to see the IR LEDs light up.) Even worse, each camera took up to 5 minutes to come online (to be accessible by IP address) and to start streaming video. (The CleverLoop manual says it takes 90 seconds to boot up the camera.) Initially, I thought that the passive PoE injectors and splitters didn’t work because the Cat6 Ethernet cables were too long (one ran over 50 feet in length). I re-tested the longest Cat6 Ethernet cable and double-checked that there were no loose connections. Thankfully, one camera’s IP address appeared on my router’s client list (minutes had passed during my debugging) and slowly the rest came online. Everything was fine; I just needed to be patient.
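
Had I known about the slow boot time, I could have just polled each camera from my MacBook instead of second-guessing the wiring. A minimal shell sketch, assuming a camera at the hypothetical address 192.168.1.50:

# Keep pinging a camera (hypothetical address 192.168.1.50) until it responds
while ! ping -c 1 192.168.1.50 > /dev/null 2>&1; do
    echo "Camera not up yet; waiting..."
    sleep 10
done
echo "Camera is online"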

CleverLoop App

I connected the CleverLoop base station directly to the router. I installed the CleverLoop app onto my iPhone, created an account, and scanned the QR code at the bottom of the base station. In the CleverLoop app, I added the four cameras and started seeing their video streams (you have to view each camera and tap the play icon). Now that we could see what the cameras were seeing, my brother made final adjustments to the cameras’ positions with my Dad’s input.

When it got dark, the cameras automatically switched to black-and-white IR night vision mode. I am impressed by the night vision; I was able to see pretty far and the video was clearer than expected.

In the morning, one camera had an “Access Denied” error. And a second camera was still stuck in black-and-white IR mode. Rebooting the two cameras, then removing and re-adding them, did not fix the problem. As is common with software, it turned out the cameras needed firmware updates to eliminate these and other bugs.

Tip: To reboot a camera, I can open up the camera’s outlet cover and disconnect the camera’s power cord from the PoE splitter. This is a lot easier than going into the attic, locating the particular camera’s Cat6 Ethernet cable and disconnecting its power plug from the PoE injector.

Per the CleverLoop support page, Check your camera firmware version (For X series indoor camera and outdoor camera only), I upgraded all the cameras’ firmwares to the latest recommended version. Note that you cannot upgrade directly to the latest version; you must upgrade incrementally, stepping from the installed version to an intermediate version and from there to the latest version.

Upgrading to the latest firmware fixed both the “Access Denied” and stuck in black-and-white IR mode errors. And then 10 minutes after the upgrades, all cameras went offline (inaccessible by the CleverLoop app or browser). I had to power all cameras off and on to get them back. So, I recommend that you always manually reboot the camera after a firmware update.

Tip: You can browse to the camera’s IP address and view its video feed in the browser. The default username and password are “admin” and “123456”. On my Chrome browser running on Windows 7, I had to select “view video -Mode 2” (which uses Adobe Flash) to see the streaming video feed. Selecting “view video -Mode 1” (which uses “application/x-hyplayer” and QuickTime Player) did not work.
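
You can also check that a camera’s web server is answering without opening a browser; a quick sketch with curl (camera address is hypothetical):

# Any HTTP response (200, 401, etc.) means the camera's web server is up
curl -I http://192.168.1.50/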

Video Quality

By default, the cameras are set to stream SD (Standard Definition 480p or 640×480 pixels) video quality. On a smartphone’s small screen, SD video looks pretty sharp. I did enable the 720p HD video on the cameras and only saw a slight improvement in quality on my iPhone 5’s tiny screen (it could be my imagination). I was more concerned about overwhelming the Internet connection’s upload bandwidth (for streaming over the Internet) so I changed it back to SD quality. So far, no complaints about video quality from Dad, who uses an iPhone 6 Plus.

Multiple Users

When I first installed the CleverLoop app, CleverLoop allowed multiple users to log in using the same account. So both my Dad and my sister (who lives with our father and has an Android phone) used the same account. A month later, that was no longer allowed. Logging into an account would log out anyone currently logged into that account. My sister had to create a new CleverLoop account and then add the base station (by scanning the QR code at the bottom of the base station). Thankfully, she didn’t need to re-add the cameras.

I think that CleverLoop disallowed account sharing in order to better support the Geo-Fencing function (automatically arms the motion detection system when your smartphone leaves the home location). However, because my Dad wanted alerts regardless of whether he was home or not, I had disabled the Geo-Fencing feature.

Alerts vs Movements

At first, I was confused by the movement and alert notifications. After reading up on it, I learned that we needed to train the base station to distinguish between harmless movements (like a branch waving in the wind) and important alerts (an intruder at the door). We are expected to view the video clip attached to each notification. And if we don’t agree with the classification (movement or alert), we can tell the system so by tapping on the “This should be an alert/movement” text at the bottom of the video feed.

To eliminate the clutter caused by many notifications, you can manually delete the alerts and movements. There is a delete button that you can press when viewing the movement or alert. Additionally, there is a batch action (the top-right pencil icon when viewing a particular camera) that allows you to select multiple notifications for removal.

You can fine-tune the motion detection by marking areas of the video feed for analysis. Go to camera settings (the top-right gear icon when viewing a particular camera) and select “Fine-tune Smart Detection”. You can create up to 3 rectangular areas (a.k.a. hotspots) for motion detection and analysis. The parts of the video outside of the hotspots are not analyzed.

One major problem I see with the motion detection video clips is that the first few seconds are not shown. So, if you have someone who enters the video feed and exits quickly, the video clip won’t show that person. For example, when someone comes to the front door, drops off a package, and leaves, all I see in the video clip is a part of his back or his shadow as he is leaving. This is a known problem with a known solution called video pre-buffering. Basically, the system records continuously and when motion is detected, the generated video clip includes the previous 5-10 seconds. Hopefully, the CleverLoop base station will be updated to do pre-buffering soon.
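
To illustrate the idea (this is not CleverLoop’s implementation), here is a rough pre-buffering sketch using ffmpeg, assuming the camera exposes an RTSP stream at a hypothetical URL:

# Record the stream into rotating 10-second segments, keeping only the last 3,
# so roughly the previous 30 seconds are always on disk and can be prepended
# to any motion clip
ffmpeg -i rtsp://192.168.1.50/stream -c copy -f segment \
    -segment_time 10 -segment_wrap 3 /tmp/prebuffer_%d.mp4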

Wireless Not Needed

When my Dad first asked about video surveillance for the backyard, he mentioned putting a camera on the detached garage and pointing it at the back of the house. Because I didn’t want to run an Ethernet cable from the house to the detached garage, I decided that a wireless camera was a necessary requirement.

Given a choice, I would prefer to avoid a wireless solution. Having a camera constantly streaming video across a wireless connection does not sound like a good idea, especially if it is 720p HD video. I can foresee complaints about YouTube and Netflix being flaky and having to reboot the wireless router frequently. Besides, if you’re wiring for power (which the camera needs regardless), you might as well wire for data too.

Thankfully, we managed to avoid using the camera’s wireless function. I asked my Dad to wait, to use the existing four wired cameras first. If he still wishes to have a camera on the detached garage, we can buy another CleverLoop base station with one or more outdoor cameras. So far, no requests for additional cameras from Dad.

Progress Report

After a month of operation, the CleverLoop base station stopped working (no alerts sent and CleverLoop app showed blank video feeds) and had to be rebooted. This is normal. I remember when cable modems and wireless routers first came out, I had to reboot them once a week. Then as they were improved, once a month. Now, several months go by before a reboot is necessary. I expect the same progress for the CleverLoop base station.

Because the CleverLoop app showed blank video feeds, I thought the cameras had stopped working. But it was the base station that had gone kaput. The base station pulls video from the cameras, analyzes the video for motion, and sends alert notifications. The CleverLoop app, in turn, pulls video from the base station. So, if the base station stops working, you get no video and no alerts in the CleverLoop app. Browsing directly to the cameras showed all four video feeds working. Surprisingly, all cameras have continued to function flawlessly without requiring any reboots.

Tip: The base station is responsible for doing the motion detection and sending the alert notifications. If you don’t get motion detection alerts after arming the system, check that the base station is powered on and connected (blinking LEDs in the back, right on top of the Ethernet jack).

Feature Requests

Below are my improvement and feature requests for CleverLoop:

  • The alert and movement video clips to include the previous 5 seconds. The video clips are missing the first few seconds immediately after the motion was detected. (See “pre-buffering” reference above.)
  • CleverLoop app to have a user-only mode where administrative functions like add/remove base station, add/remove camera, and camera settings are not accessible. In the current user interface, my Dad could easily remove the base station, remove a camera, or modify a camera’s settings with some accidental touches.
  • Camera list screen to update snapshot images of all four cameras once per second (or even once per 5-10 seconds). Currently, outdated snapshot images are shown.
  • Like the above, camera screen to update camera’s snapshot image once per second.
  • CleverLoop app to disallow adding a 5th camera to a base station. Even though I have added the maximum 4 cameras, the camera list screen still shows the “Add a New Camera” option.
  • Install CleverLoop app on an iPad. Currently, the iPad’s App Store does not list the CleverLoop app. It would be nice to see a bigger video feed on the iPad.

One feature that I think my Dad might like is to display all four video feeds on his large LED TV. If he ever asks for it, I think I can create an HTML page that shows the video feeds from all four cameras (using Adobe Flash). Run it on a laptop (or tiny desktop) attached to his TV and voila, a security guard’s dream come true!


Clone a Big Hard Drive to a Smaller One


I had tried out Windows 10 by installing it on a second 500GB SSD (Solid State Drive), bigger than my existing Windows 7 240GB SSD. Having determined that I wanted to permanently move to Windows 10, I decided to move Windows 10 to the smaller drive, overwriting Windows 7.

First, cloning from a bigger to a smaller drive requires that the bigger drive not contain more data than can fit on the smaller drive. Second, the bigger drive must not have data stored at a location beyond the last location supported by the smaller drive. The safest way to satisfy both requirements is to shrink the source partition to ensure that it will fit 100% onto the smaller drive.

Disable BitLocker

Before doing anything, I decided to decrypt the drive by turning BitLocker off. I had tested cloning a BitLocker-protected Windows 7 drive, but it failed with a blue screen on startup, after getting past the annoying BitLocker recovery procedure (triggered because the hard drive signature had changed). So, I decided that it would be best to decrypt, clone, and then re-encrypt. Turning BitLocker off didn’t take too long (about 20 minutes) because my Windows 10 was a fresh install with just Office and some other apps (about 35GB in size).

To turn BitLocker off, run “Manage BitLocker” and select the “Turn off BitLocker” option.

Resize Source Partition

So, here’s how to resize the Windows 10 source partition:

  1. Run Windows 10’s “Create and format hard disk partitions” application (aka “Disk Management”).
  2. Right-click on the Windows 10 partition and select “Shrink Volume…”.
  3. Adjust the “Enter the amount of space to shrink in MB” until the “Total size after shrink in MB” is significantly smaller than the target hard drive size. (Because my target hard drive is 240GB, I tried to get the partition below 200GB to be on the safe side. Make sure to account for Windows 10’s two system partitions, a 300MB recovery partition and a 500MB EFI partition.)

There may be an upper limit to how much you can shrink the volume. You will see the text, “You cannot shrink a volume beyond the point where any unmovable files are located”, which explains why. Disk Management cannot move files used by system hibernation, paging, and protection (aka system restore), so it cannot shrink the volume past the furthest located of these files.

Note: If you have a non-SSD hard drive, you will want to run “Disk Defrag” (aka “Defragment and Optimize Drives”) first to consolidate the file locations to the head of the hard drive, before shrinking the partition.

Disable System Services

The solution to allow you to shrink the volume further is to disable system hibernation, paging, and protection first.

  • Disable Hibernation
    1. Click on Start, type “Command Prompt”, right-click on it and select “Run as administrator”.
    2. In the Command Prompt, type “powercfg /h off” to turn Hibernation off.
  • Disable Paging
    1. Run “View advanced system settings” to open the “System Properties” dialog, make sure the Advanced tab is selected, and click on the “Settings” button in the Performance section.
    2. Under the Advanced tab in the “Performance Options” dialog, click on the “Change…” button.
    3. Select “No paging file” and click the Set button. (We will need to reboot for this change to take effect.)
  • Disable System Protection
    1. Run “View advanced system settings” to open the “System Properties” dialog and select the “System Protection” tab.
    2. Select the C:\ drive and click on the “Configure…” button.
    3. Check the “Disable system protection” box and click OK. Answer Yes.

Once you have disabled the system services above, reboot (so the paging change can take effect), and repeat the Disk Management steps above to shrink the Windows 10 partition. You should be able to shrink the volume smaller than the destination drive’s size. (If you have more data on the source drive than can be contained by the target drive, you will need to uninstall and/or delete things from the source drive.)
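
As an aside, the same shrink can also be done from the command line with Windows’ built-in diskpart tool; a hedged sketch (the volume letter and size are examples, run in an administrator Command Prompt):

diskpart
list volume
select volume C
rem Shrink by roughly 40GB (the desired value is in MB)
shrink desired=40960
exit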

CloneZilla the Drives

We have to use CloneZilla in expert mode (instead of beginner mode) in order to configure it to allow cloning from a bigger to a smaller drive.

Follow the first set of instructions at Clone a Hard Drive Using Clonezilla Live to create a bootable USB flash drive containing the latest version of Clonezilla Live.

Then follow the revised instructions below to clone the drives. (Steps 1 thru 7 are the same. In Step 8, we select “Expert mode” instead of “Beginner mode”.)

  1. Attach the destination drive to the same machine containing the source drive.
  2. Start the machine and boot from the USB flash drive. You may need to press a particular function key to load the boot menu (F12 on my Lenovo desktop) or you may need to adjust the BIOS setup to boot from a USB drive before the hard drive. (If you get offered Legacy or UEFI bootup options for the USB flash drive, choose UEFI.)
  3. On Clonezilla Live’s startup screen, keep the default “Clonezilla live (Default settings, VGA 800×600)” and press Enter.
  4. Press Enter to accept the pre-selected language, “en_US.UTF-8 English”.
  5. Keep the default “Don’t touch keymap” and press Enter.
  6. Make sure “Start_Clonezilla” is selected and press Enter to start.
  7. Because I am copying from one hard drive to another, I select the “device-device work directly from a disk or partition to a disk or partition” option. Press Enter.
  8. Change to the “Expert mode” option and press Enter.
  9. Keep the first “disk_to_local_disk” option and press Enter.
  10. Select the source drive and press Enter.
  11. Select the target destination drive and press Enter.
  12. Check the “-icds Skip checking destination disk size before creating partition table” flag and press Enter.
  13. Keep the default “Skip checking/repairing source file system” selection and press Enter.
  14. Select the “-k1 Create partition table proportionally” flag and press Enter.
  15. Type “y” and press Enter to acknowledge the warning that all data on the destination hard drive will be destroyed.
  16. Type “y” and press Enter a second time to indicate that you are really sure.
  17. In answer to the question “do you want to clone the boot loader”, type uppercase “Y” and press Enter. (I need to clone the boot loader so the destination hard drive will be bootable like the source hard drive.)
  18. The hard drive cloning will occur.
  19. When the cloning completes, press Enter to continue.
  20. Select “poweroff” to shut down the machine.
  21. Once the machine is off, remove the source drive and boot from the destination drive. (Or use the boot menu to select the destination drive.)

Thankfully, CloneZilla automatically increased the size of the Windows 10 partition on the destination drive to take up the remaining available free space. (If CloneZilla didn’t increase the partition size for you, you can use the “Extend Volume…” function in “Disk Management” to grow the partition size manually.)

Re-enable System Services

Once you are certain that Windows 10 is working successfully off the smaller drive, you can re-enable the system hibernation, paging, and protection.

  • Enable Hibernation
    1. Click on Start, type “Command Prompt”, right-click on it and select “Run as administrator”.
    2. In the Command Prompt, type “powercfg /h on” to turn Hibernation on.
  • Enable Paging
    1. Run “View advanced system settings” to open the “System Properties” dialog, make sure the Advanced tab is selected, and click on the “Settings” button in the Performance section.
    2. Under the Advanced tab in the “Performance Options” dialog, click on the “Change…” button.
    3. Select “Automatically manage paging file size for all drives” at the top and click the OK button. (We will need to reboot for this change to take effect.)
  • Enable System Protection
    1. Run “View advanced system settings” to open the “System Properties” dialog and select the “System Protection” tab.
    2. Select the C:\ drive and click on the “Configure…” button.
    3. Check the “Turn on system protection” box and click OK.

Re-enable BitLocker

If you want to, re-encrypt the hard drive by turning BitLocker on. Run “Manage BitLocker” and select the “Turn on BitLocker” option. (I recommend choosing the option to encrypt the used disk space only, instead of the entire drive, unless you want to make sure that no one can recover deleted files. Encrypting the entire drive takes significantly more time, depending upon the amount of free disk space.)

If BitLocker didn’t already ask you to reboot, do a reboot to ensure that the paging change above takes effect.

Note: If you leave the source drive attached, it won’t show up in Windows 10’s File Explorer. Run “Disk Management” and you will see that the source drive’s status is “Offline (The disk is offline because it has a signature collision with another disk that is online)”. To make the source drive visible and accessible, right-click on the source drive’s label (“Disk 1” in my case) and select Online.

CloneZilla Didn’t Work!

I tried using CloneZilla to clone my laptop’s HDD (Hard Disk Drive) to a smaller SSD. Unfortunately, CloneZilla threw an error, “Write block error: no space left on device”. Even though it then completed the cloning process, my laptop was not able to boot off the resulting SSD.

Instead, I attached the laptop HDD and SSD to my desktop and ran the free version of EaseUS Partition Master on my desktop to successfully clone from the laptop HDD to the SSD. Here is what I did:

  1. Install EaseUS Partition Master Free Edition, run it, and click “Launch Application”.
  2. Select the source disk (in the right-hand panel listing all the disks), right-click and choose “Copy disk”. (Alternatively, you can run menu Wizard, “Clone disk wizard”, and select the source disk.)
  3. After the wizard finishes analyzing the source disk, click Next.
  4. Select the destination disk. Next.
  5. Choose the “Delete partitions on the destination hard disk” option. Next.
  6. The wizard should select the same sizes for the destination partitions as the source partitions’ (except for the last partition, which should be the Windows partition, if the drive sizes are different).
    • On a MBR (Master Boot Record) disk, you should have two partitions, a tiny System partition and a large Windows partition. On a GPT (GUID Partition Table) disk, you will have one or two other tiny system or reserved partitions.
    • Note: The wizard had a problem selecting the destination partition sizes for my laptop’s SSD. It increased the 1GB System partition to 100GB. I had to drag to resize the System partition to a value close to 1GB (couldn’t get it exactly the same) and increase the Windows partition size accordingly (to eliminate the Unallocated space).
    • If you drag the partition to a small enough size, you won’t be able to see the text inside showing the size. Just rest your mouse pointer on the partition and a popup text will appear with the size info.
  7. Once you are satisfied with the sizes, click Next and Finish. The Partition Master’s disk info will change to reflect the changes you made.
  8. Click the top-left Apply button to make those changes take effect.

If you don’t have a second Windows machine to do the above, you can do a self-migration of Windows 10 from the current disk to another disk using the EaseUS Todo Backup Free Edition. Run it and click the top-right Clone button. More instructions can be found at How to Migrate Windows 10 from HDD to SSD?


Node.js Express with Nginx Reverse Proxy and Cache


As a web developer, have you ever asked yourself whether to use a period or a plus sign to concatenate two strings? The former is used by PHP and the latter by Javascript. If you switch between the frontend Javascript and the backend PHP languages often, you’ll find yourself asking this and other syntax questions.

Node.js eliminates that problem by supporting the use of Javascript in the backend. It is a Javascript runtime that replaces the PHP backend. However, using Javascript for both frontend and backend is not the best reason to migrate to Node.js. You will want to use Node.js because of the Node Package Manager (NPM). NPM makes finding, sharing, and reusing Javascript code packages a breeze. When implementing a backend function, the first step to take is to search for an NPM package that provides that function already.

Node.js Express is a simple but powerful Node.js web application framework. We’ll use it as the backend to serve web pages. While Express can serve static files, Nginx is much faster at that task and provides many other benefits.

Using Nginx as a reverse proxy (browsers query Nginx which then calls Express) and cache for Express provides the following benefits:

  • Nginx is built as a high performance server with many optimizations. It can handle many concurrent connections and can perform load balancing. Nginx can serve (or cache and serve) static files like HTML, javascript, images, and CSS more efficiently than Node.js.
  • As the point of entry, Nginx provides better security because it is an older, proven web server solution. Nginx supports both the older SSL/TLS and newer HTTP/2 protocols.
  • Nginx can be used for port 80 and 443 binding, to avoid having to run Node.js as a root user, which is bad practice and a possible vulnerability. (Under Unix, the first 1024 ports require root privileges to bind to; see the sketch below.)
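
For example, here is a quick, hypothetical way to see that restriction in action (assuming Node.js is installed):

# Binding to port 80 as a non-root user fails with EACCES on Unix
node -e "require('http').createServer().listen(80)"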

Below are the steps I took to set up Node.js and configure an Nginx reverse proxy and cache. Though I’m doing the steps below on Mac OS X 10.10 Yosemite, the Node.js code and Nginx configuration should be applicable to other platforms.

Install Node.js

We will install Node.js using MacPorts. Launch the Terminal app and run these commands:

# Node.js uses python scripts; python 2.7 is included with Mac OS X since v10.8
python --version

# Install Node.js
sudo port install nodejs
node -v

# Install Node Package Manager
sudo port install npm
npm -v

# Create a project directory
mkdir myproject
cd myproject
mkdir static

# Create a NPM project package.json file
npm init

# Install Express package
npm install express --save

The “npm init” command will prompt for the project name, version, and description in order to generate a “package.json” file. You can accept the defaults and edit the “package.json” file later.

The “npm install express --save” command will download the Express package to a node_modules subdirectory. The optional “--save” flag will add the Express package and its version as a dependency in the “package.json” file. (The alternative “--save-dev” flag will save the package as a development dependency instead.)

The benefit of the above is that you can give the “package.json” file to another developer and they can run the “npm install” command to install the specific versions of all dependent packages (as listed in the “package.json”).

Express Server

Create a file named “server.js” with the following content:

// Require Express package
var express = require('express');

// Create an Express app
var app = express();

// Serve static files from static dir
app.use(express.static('static'));

// Handle get on root / request
app.get('/', function (req, res) {
  res.send('Hello World!');
});

// Bind to port 8080
var server = app.listen(8080, function () {
  var port = server.address().port;
  console.log('Listening on port %s', port);
});
To test, do the following:

  1. Put an image file, say “earth.gif”, under the static subdirectory.
  2. In a Terminal window, launch the server with this command:
    node server.js
  3. Browse to http://localhost:8080/ and you will see a “Hello World!” response.
  4. Browse to http://localhost:8080/earth.gif to see the static image file.
  5. In the Terminal window running Node.js, press Ctrl-C to quit.

Instead of using the “node server.js” command, you can use the “npm start” command. If you look inside the “package.json” file, you will see that there exists a script target named “start” which runs the “node server.js” command.
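
For reference, the relevant script target in “package.json” looks something like this (your file will have other fields generated by “npm init”):

{
  "scripts": {
    "start": "node server.js"
  }
}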

Install Nginx

Install Nginx with this command:

sudo port install nginx

The Nginx configuration file is located at “/opt/local/etc/nginx/nginx.conf”. The default document root is set to “share/nginx/html”, which maps to the “/opt/local/share/nginx/html” directory.

You can start, reload, and stop Nginx using these commands:

# Start Nginx
sudo port load nginx

# Reload the config (actually restarts)
sudo nginx -s reload

# Stop Nginx
sudo port unload nginx

# Check to see if Nginx is running
ps -e | grep nginx

While Nginx is running, browse to http://localhost/ to see the Nginx welcome page.

If you need to troubleshoot, the Nginx error log file is located at “/opt/local/var/log/nginx/error.log”.

Nginx Reverse Proxy

When Nginx proxies a request to Node.js, it optimizes the request headers it receives from the client:

  • Nginx gets rid of any empty headers from the proxied request.
  • Nginx considers any header names with underscores as invalid. It will remove them. If you wish to preserve these headers, set the Nginx “underscores_in_headers” directive to “on”.
  • The “Host” header is re-written to the value defined by the $proxy_host variable. This will be the IP address or hostname and port number of the upstream, as defined by the “proxy_pass” directive.
  • The “Connection” header is changed to “close” to indicate to the upstream server that this connection will be closed once the original request is responded to.

Configure the Nginx reverse proxy by running “sudo nano /opt/local/etc/nginx/nginx.conf” and making the following changes to the root location:

    server {
        # ...

        # Helpful headers to pass to Node.js
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme; #http or https
        proxy_set_header X-Real-IP $remote_addr; #client IP address
        # List of IP addresses client has been proxied through until now
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location / {
            #root   share/nginx/html;
            #index  index.html index.htm;

            proxy_pass http://localhost:8080;
        }
    }

Reload the Nginx config with the “sudo nginx -s reload” command. Browsing to http://localhost/ will now show the Node.js “Hello World!” message, instead of the Nginx welcome page.

By default, the “Host” header is set to “$proxy_host”, which is the upstream hostname or IP address and port defined in the “proxy_pass” definition. In the above, we have overridden it to be “$host” which is set to, in order of preference: the hostname from the request line itself, the optional “Host” header from the client request, or the server name matching the request. This is a best practice for Nginx and ensures that the “Host” header passed to the proxied server is as accurate as possible.

Note: The headers sent by the client are available in Nginx as variables. The variables start with an “$http_” prefix, followed by the header name in lowercase, and with any dashes replaced by underscores. So the client “Host” header is available in Nginx as “$http_host”.

If you wish to reverse proxy a non-root path location, use this Nginx location configuration:

        location /proxy/ {
            # Ending slash prevents passing /proxy/ path to Node.js
            proxy_pass http://localhost:8080/; # Ending slash required!
        }

Browse to http://localhost/proxy/ to test.

Nginx Serves Static Files

If Nginx has access to the Node.js project directory (for example, “/Users/myuser/myproject”), it is best to configure Nginx to serve the static files directly.

Run “sudo nano /opt/local/etc/nginx/nginx.conf” and add the following “/static/” location:

        location /static/ {
            root /Users/myuser/myproject; # Node.js project location
            expires 30d; # Cache-Control: client cache valid for 30 days
        }

Browse to http://localhost/static/earth.gif to test.

The “expires 30d” command adds a “Cache-Control” response header telling the browser client to only cache the static resource for 30 days maximum. Install Curl and dump the response header to see the “Cache-Control” value:

# Install Curl
sudo port install curl

# Run Curl to dump response headers
curl -X GET -I http://localhost/static/earth.gif
    # Cache-Control: max-age=2592000

Unfortunately, the above solution requires the URL to contain the “/static/” path. One workaround to avoid the “/static/” path is to have Nginx serve static files with specific extensions like so:

        # location ~* means to use case-insensitive match
        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|bmp|js|html|htm)$ {
            root /Users/chvuong/nodejs/static;
            expires 30d;
        }

Because Node.js returns “Cache-Control” with an immediate expiration, we can verify that Nginx is serving the static file by doing the following:

# Get static image file from Nginx
curl -X GET -I http://localhost/earth.gif
    # Cache-Control: max-age=2592000

# Get static image file directly from Node.js
curl -X GET -I http://localhost:8080/earth.gif
    # Cache-Control: public, max-age=0

If we use Node.js to provide APIs only, we can configure Nginx to return static files only if they exist. Because API paths don’t usually correspond to existing directories and/or static files, Nginx will pass the API calls to Node.js. Here’s the Nginx configuration to use:

        location / {
            root /Users/chvuong/nodejs/static;
            expires 30d;

            # Return static file if it exists; otherwise pass to Node.js
            try_files $uri @nodejs;
        }

        location @nodejs {
            proxy_pass http://localhost:8080;
        }

Please remove or comment out the previous “location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|bmp|js|html|htm)$” block because it will interfere with the above configuration.

Test using the Curl commands above to get the “earth.gif” static file. You can also edit “server.js” to remove or comment out the line below to prevent Node.js from serving static files:

app.use(express.static('static')); // Serve static files from static dir

Nginx Cache

If Nginx does not have access to the static files (for example, Nginx is running on another server), you can configure Nginx to cache the static files returned by Node.js. If the browser client requests a static file which is already cached, Nginx will return it without having to request that file from Node.js again.

Run “sudo nano /opt/local/etc/nginx/nginx.conf” and add the following cache configuration:

http {
    # ...

    # Cache for static files
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=staticcache:8m max_size=100m inactive=60m use_temp_path=off;
        # keyzone size 8MB, cache size 100MB, inactive delete 60min
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_valid 200 302 60m; # cache successful responses for 60 minutes
    proxy_cache_valid 404 1m; # expire 404 responses 1 minute

    server {
        # ...

        location / {
            proxy_pass http://localhost:8080;
        }

        # Only cache static files; don't cache the dynamic API response!
        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|bmp|js|html|htm)$ {
            proxy_cache staticcache; # Use "staticcache" we defined above
            proxy_cache_bypass $http_cache_control; # Support client "Cache-Control: no-cache" directive
            add_header X-Proxy-Cache $upstream_cache_status; # Hit or Miss

            # Nginx cache to ignore Node.js default "Cache-Control: public, max-age=0"
            # and don't pass it on to clients
            proxy_ignore_headers Cache-Control;
            proxy_hide_header Cache-Control;
            expires 60m; # "Cache-Control: max-age=3600" tells client to cache for 60 minutes

            proxy_pass http://localhost:8080;
        }
    }
}


  • Double-check that “server.js” is configured to serve static files:
    app.use(express.static('static')); // Serve static files from static dir
  • Nginx will create the “/tmp/nginx-cache” directory because “/tmp” allows write access to everyone. If you change the cache directory location, please make sure to manually create the necessary directories. You will see an error in the Nginx error log if Nginx can’t create or access the cache directory.

You can test the cache by using Curl and taking note of the “X-Proxy-Cache” response header:

# First try will result in cache miss
curl -X GET -I http://localhost/earth.gif
    # X-Proxy-Cache: MISS

# Second try will result in cache hit
curl -X GET -I http://localhost/earth.gif
    # X-Proxy-Cache: HIT
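
You can also peek at the cache directory itself; the “levels=1:2” setting in “proxy_cache_path” produces two levels of hashed subdirectories:

# List the cached files (hashed names under two levels of subdirectories)
ls -R /tmp/nginx-cache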

If you wish to cache a non-root path location, use this Nginx location configuration:

        location /proxy/ {
            proxy_pass http://localhost:8080/; # Ending slash required!
        }

        # Match string changed to capture path after /proxy/
        location ~* ^/proxy/(.+\.(jpg|jpeg|gif|png|ico|css|bmp|js|html|htm))$ {
            proxy_cache staticcache;
            proxy_cache_bypass $http_cache_control;
            add_header X-Proxy-Cache $upstream_cache_status;

            proxy_ignore_headers Cache-Control;
            proxy_hide_header Cache-Control;
            expires 60m;

            # Pass captured path string without /proxy/ to Node.js
            proxy_pass http://localhost:8080/$1;
        }

Nginx Load Balancing

Once you have Nginx reverse proxy working, load balancing is very simple. Here’s the Nginx load balancing configuration:

http {
    # ...

    # Node.js servers for load balancing
    upstream nodejs-backend {
        least_conn; # Give new connection to backend with least active connections
        server nodejs2.example.com; # default port 80
    }

    server {
        # ...

        location / {
            # Pass to the upstream name, instead of the specific nodejs hostname
            proxy_pass http://nodejs-backend;
        }
    }
}

I only tested the above with one Node.js server in the upstream definition (because I only have one server). It worked for one server and it should work for more.
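
If you want to try load balancing on a single machine, one approach is to run two copies of the Express app on different ports and list both in the upstream block; a sketch, assuming “server.js” is changed to read its port from a PORT environment variable (e.g. “process.env.PORT || 8080”):

# Run two local Node.js instances for the upstream to balance across
PORT=8080 node server.js &
PORT=8081 node server.js &

# The upstream block would then list both:
#   upstream nodejs-backend {
#       least_conn;
#       server localhost:8080;
#       server localhost:8081;
#   }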


Free SSL Certificate from Let’s Encrypt for Nginx


See my previous post in the unmanaged VPS (virtual private server) series, Automate Remote Backup of WordPress Database, on how to create and schedule a Windows batch script to back up the WordPress database.

Update: Let’s Encrypt has made changes that have broken my pre-existing certificate renewal. First, the new certbot package has replaced the original letsencrypt package. Second, the certbot package does not recognize the pre-existing certificates in the “/etc/letsencrypt” directory (generated by the old letsencrypt package). If you have the old letsencrypt package, I recommend deleting it, the “~/.local” directory, and the “/etc/letsencrypt” directory before installing the new certbot package. I’ve updated the instructions below to use the new certbot package.

I’ve been meaning to enable HTTPS/SSL access to this WordPress site since I heard that Google had started giving ranking boosts to secure HTTPS/SSL websites; however, the thing stopping me was the expensive yearly cost of an SSL certificate. (Unfortunately, self-signed SSL certificates wouldn’t work because browsers would throw security warnings when encountering them.) But now, there is a new certificate authority, Let’s Encrypt, which provides free SSL certificates.

The only catch is that the SSL certificates expire after 90 days. But that’s okay because Let’s Encrypt provides a command line client which can create and renew the certificates. Some elbow grease and a weekly Cron job should automatically renew any expiring SSL certificates.

Note: StartSSL provides a free, non-commercial SSL certificate which can be manually renewed after a year. I learned about it at the same time as Let’s Encrypt, but decided to go with Let’s Encrypt because of the possibility of automation and no restrictions on usage.

Below are the steps I took to create and install the SSL certificates on the Nginx server running on my unmanaged DigitalOcean VPS.

Create SSL Certificate

Ubuntu didn’t have the certbot package, so we will need to build it from source. Secure shell into your VPS server and run these commands:

# Install Git version control (alternative to Subversion)
sudo apt-get install git

# Double-check that Git is installed by getting version
git --version

# Download the certbot client source code (to the home directory)
git clone https://github.com/certbot/certbot

# Install dependencies, update client code, build it, and run it using sudo
cd certbot
./certbot-auto --help
# Input the sudo password if requested to

# Get a SSL certificate for mydomain.com
./certbot-auto certonly --webroot -w /var/www/wordpress -d mydomain.com -d www.mydomain.com
# Input your email address (for urgent notices and lost key recovery)

# Get another SSL certificate for mydomain2.com
./certbot-auto certonly --webroot -w /var/www/mydomain2 -d mydomain2.com -d www.mydomain2.com

Note: The Let’s Encrypt Ubuntu Nginx install instructions suggest using the wget command to get the latest available certbot version. I think “git clone” is a better method because it provides a more powerful way to update the certbot package, as we will see later.

Running the “certbot-auto” script will do the following:

  1. Install any missing dependencies including the GCC compiler and a ton of libraries.
  2. Update the certbot client source code.
  3. If necessary, build or update the certbot client, located at “~/.local/share/letsencrypt/bin/letsencrypt”. (The name switch from letsencrypt to certbot is not complete and thus a little confusing.)
  4. Run the certbot client using sudo; thus, you may be prompted to input the sudo password.

Note: If you want to speed it up by avoiding the extra update steps, you can just run the “sudo ~/.local/share/letsencrypt/bin/letsencrypt” command directly, instead of the “~/certbot/certbot-auto” script.

When running the “certbot-auto certonly --webroot” certificate generation option, the following (with some guesses on my part) occurs:

  1. The certbot client will create a challenge response file under the domain’s root directory (indicated by the “-w /var/www/wordpress” parameter); for example, “/var/www/wordpress/.well-known/acme-challenge/Y8a_KDalabGwur3bJaLfznDr5vYyJQChmQDbVxl-1ro”. (The sudo access is required to write to the domain’s root web directory.)
  2. The certbot client will then call the letsencrypt.org ACME server, passing in necessary credential request info such as the domain name (indicated by the “-d mydomain.com -d www.mydomain.com” parameters).
  3. The letsencrypt.org ACME server will attempt to get the challenge response file; for example, by browsing to “http://mydomain.com/.well-known/acme-challenge/Y8a_KDalabGwur3bJaLfznDr5vYyJQChmQDbVxl-1ro”. This verifies that the domain has valid DNS records and that you have control of the domain.
  4. The letsencrypt.org ACME server passes the generated SSL certificate back to the certbot client.
  5. The certbot client writes the SSL certificate to the “/etc/letsencrypt” directory, including the private key. If you can only backup one thing, it should be the contents of this directory.
  6. The certbot client deletes the contents of the “.well-known” directory; for example, leaving an empty “/var/www/wordpress/.well-known” directory once done. You can manually delete the “.well-known” directory.

Note: It is possible to create a multi-domain certificate containing more than one domain, but I recommend keeping it simple. Multi-domain certificates are bigger to download and may be confusing to the user when viewed.
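
To double-check a certificate’s expiration date from the command line, you can query the live server with openssl:

# Show the validity window of the certificate served at mydomain.com
echo | openssl s_client -connect mydomain.com:443 -servername mydomain.com 2>/dev/null \
    | openssl x509 -noout -dates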

Configure Nginx

The directory that the SSL certificates are located under, “/etc/letsencrypt/live”, requires root user access, so we will need to copy the certificate files to a directory which Nginx can read.

# Copy out the SSL certificate files
sudo cp /etc/letsencrypt/live/mydomain.com/fullchain.pem /etc/nginx/ssl/mydomain-fullchain.pem
sudo cp /etc/letsencrypt/live/mydomain.com/privkey.pem /etc/nginx/ssl/mydomain-privkey.pem
sudo cp /etc/letsencrypt/live/mydomain2.com/fullchain.pem /etc/nginx/ssl/mydomain2-fullchain.pem
sudo cp /etc/letsencrypt/live/mydomain2.com/privkey.pem /etc/nginx/ssl/mydomain2-privkey.pem

# Double-check that the files exist and are readable by group and others
ls -l /etc/nginx/ssl
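
It’s also worth verifying that each copied certificate still matches its private key:

# The two MD5 hashes should be identical if the cert and key match
openssl x509 -noout -modulus -in /etc/nginx/ssl/mydomain-fullchain.pem | openssl md5
openssl rsa -noout -modulus -in /etc/nginx/ssl/mydomain-privkey.pem | openssl md5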

Because our website will behave the same under HTTPS as under HTTP, we will only need to make minimal changes to the existing HTTP server configuration.

Edit Nginx’s server block file for mydomain (“sudo nano /etc/nginx/sites-available/wordpress”) and add the “#Support both HTTP and HTTPS” block to the “server” section:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6

        #Support both HTTP and HTTPS
        listen 80;
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/mydomain-fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/mydomain-privkey.pem;

        root /var/www/wordpress;
        #index index.php index.html index.htm;
        index index.php;

        # Make site accessible from http://localhost/
        #server_name localhost;
        server_name mydomain.com www.mydomain.com;

If you wish to have this server block be the default to serve (for both HTTP and HTTPS) when your server is accessed by IP address, change the “listen” directives to the following:

server {
        #Support both HTTP and HTTPS
        listen 80 default_server; # default_server replaces older default
        listen 443 ssl default_server;

Note: Make sure that only one of your server block files is set to be the default for IP address access. Also, when you browse to the IP address using HTTPS, you will still get a security warning because the IP address won’t match the domain name.

Repeat the above modifications for any other domain’s server block file.

Note: If you want HTTPS to behave differently than HTTP, leave the HTTP server section alone, uncomment the “# HTTPS server” section (at bottom of the server block file) and make your updates there.

Once you are done updating the server block files, tell Nginx to reload its configuration:

# Reload Nginx service
sudo service nginx reload

# If Nginx throws an error, look at the error log for clues
sudo tail /var/log/nginx/error.log

To test, browse to your server using the HTTPS protocol; for example, “https://mydomain.com/”.
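
You can also test from the command line:

# A clean handshake without curl's -k (insecure) flag means the chain is trusted
curl -I https://mydomain.com/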

Renew SSL Certificate

To renew all your SSL certificates, run this command 30 days before the expiration date:

~/certbot/certbot-auto renew

Note: If you run it earlier than 30 days before the expiration date, no action will be taken.

Cron Job To Renew

To automate the SSL certificate renewal, we will use a Cron job that runs weekly under the root user. Running the job weekly is sufficient to guarantee a certificate renewal within the 30 days before expiration window.

First, create a script by running “nano ~/certbot_cron.sh” and inputting the content below (make sure to replace “mynewuser” with your actual username):


#!/bin/sh
# Run this script from root user's crontab

# Log file
LOGFILE=/tmp/certbot-renew.log
# Print the current time
echo $(date)

# Try to renew certificates and capture the output
/home/mynewuser/certbot/certbot-auto renew --no-self-upgrade > $LOGFILE 2>&1

# Check if any certs were renewed
if grep -q 'The following certs have been renewed:' $LOGFILE; then
  # Copy SSL certs for Nginx usage
  cp /etc/letsencrypt/live/mydomain.com/fullchain.pem /etc/nginx/ssl/mydomain-fullchain.pem
  cp /etc/letsencrypt/live/mydomain.com/privkey.pem /etc/nginx/ssl/mydomain-privkey.pem
  cp /etc/letsencrypt/live/mydomain2.com/fullchain.pem /etc/nginx/ssl/mydomain2-fullchain.pem
  cp /etc/letsencrypt/live/mydomain2.com/privkey.pem /etc/nginx/ssl/mydomain2-privkey.pem

  # Reload Nginx configuration
  service nginx reload
fi

Notes about the script:

  • To test the script, run “sudo sh ~/certbot_cron.sh”. If none of your certificates are renewed, you won’t get a “Reloading nginx configuration” message (outputted by the “service nginx reload” command).
  • The “--no-self-upgrade” argument flag passed to certbot will prevent certbot from upgrading itself. Because we will be running the script under the root user, I hesitate to allow certbot to update with root permissions automatically. Avoiding an update seems more secure and definitely faster to execute.
  • To simulate certificate renewals, use the “--dry-run” argument flag to simulate a successful renewal. Change the “certbot-auto” command in the script to “certbot-auto renew --no-self-upgrade --dry-run > $LOGFILE 2>&1”. When you re-run “certbot_cron.sh”, you will get the “Reloading nginx configuration” message. Don’t forget to remove this change from the script once you are done testing.
  • The script copies out all the SSL certificates, instead of checking for and only copying certificates which have been modified. I don’t think the effort to do the latter is worth it.

Add the script to the root user’s Crontab (Cron table) by running these commands:

# Edit the root user's crontab
sudo crontab -e
  # Insert this line at the end of the file
  @weekly sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1

# List content of root user's Crontab
sudo crontab -l

# Find out when @weekly will run; look for cron.weekly entry
cat /etc/crontab

Note: Instead of “@weekly”, you may wish to set a specific time that works best for your situation. Refer to these Cron examples for info on how to set the time format.
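
For example, replacing “@weekly” with “30 4 * * 1” (minute, hour, day of month, month, day of week) runs the script every Monday at 4:30 AM:

30 4 * * 1 sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1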

If you want to test the Cron job, do the following:

# Delete the script's output log file
sudo rm /tmp/certbot-renew.log

# Change "@weekly" to "@hourly" in the Crontab
sudo crontab -e
  # Edit this line at the end of the file
  @hourly sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1

# Wait more than an hour

# Check if output log files were generated
cat /tmp/certbot-renew.log
cat /tmp/certbot-cron.log

# Change "@hourly" back to "@weekly" in the Crontab
sudo crontab -e
  # Edit this line at the end of the file
  @weekly sh /home/mynewuser/certbot_cron.sh > /tmp/certbot-cron.log 2>&1

Update Certbot

If you have chosen to disable the certbot self-upgrade in the cron script (using the “--no-self-upgrade” argument flag), I recommend manually running the “certbot-auto” command (without any arguments) once a month to make sure that certbot is up-to-date.

If you find that the “certbot-auto” command is unable to update, or an update doesn’t solve an issue, you can try updating the certbot source code directly with the “git pull” command.

# Update the certbot source code using git
cd certbot
git pull

# See status of certbot source code and version
git status

Backup SSL Certs & Keys

Note: Re-issuing the SSL certificates (because of the switch from the letsencrypt to the certbot package) proved to be painless and fast. Thus, I’ve realized that backing up the “/etc/letsencrypt” directory is not necessary. If something goes wrong, just re-issue the SSL certificates.

In Automate Remote Backup of WordPress Database, we created a Windows batch file to download MySQL dump files and other files from the server. Let us add an additional command to that Windows batch script to download a zip archive of the “/etc/letsencrypt” directory.

Originally, I added an ssh command to the Windows batch file to zip up the “/etc/letsencrypt” directory. Unfortunately, accessing that directory requires sudo privileges, which causes the script to prompt for the sudo password. I looked at two solutions for running sudo over SSH without interruption. The first involved echo’ing the sudo password (in plaintext) to the ssh command. The second involved updating the sudoers file to allow running a particular file without requiring the password. I didn’t actually test the two solutions, but they didn’t look secure, so I decided to go with a very simple solution: run the zip command in the “~/certbot_cron.sh” script.

First, edit the Cron script (“nano ~/certbot_cron.sh”) and add the tar zip command after reloading the Nginx server:

# Check if any certs were renewed
if grep -q 'The following certs have been renewed:' $LOGFILE; then

  # Reload Nginx configuration
  service nginx reload

  # Zip up the /etc/letsencrypt directory
  tar -zcvf /tmp/letsencrypt.tar.gz /etc/letsencrypt
fi

Note: We are using the tar command, instead of zip, because zip doesn’t handle the symbolic links under the “/etc/letsencrypt/live” directory correctly (tar preserves them).

There is a security issue because “/tmp/letsencrypt.tar.gz” is readable by others; if this is a concern, you can adjust access permissions by adding the following commands to the “~/certbot_cron.sh” script:


  # Zip up the /etc/letsencrypt directory
  tar -zcvf /tmp/letsencrypt.tar.gz /etc/letsencrypt

  # Change owner and restrict access to owner
  chown mynewuser /tmp/letsencrypt.tar.gz
  chmod 600 /tmp/letsencrypt.tar.gz
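After the next renewal run, you can confirm the tightened permissions:

# Should show -rw------- with mynewuser as the owner
ls -l /tmp/letsencrypt.tar.gz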

Second, edit the Windows batch script file “C:\home\myuser\backups\backup_wordpress.bat” and add the following to the end:

REM Download the /etc/letsencrypt tar file

mkdir \home\myuser\backups\letsencrypt
cd \home\myuser\backups\letsencrypt
rsync.exe -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/tmp/letsencrypt.tar.gz %date:~10,4%.%date:~4,2%.%date:~7,2%-letsencrypt.tar.gz

And we are done with the backup. In the future, if you ever need to restore the contents of “/etc/letsencrypt”, upload the tar archive to the server’s “tmp” directory and run the following on the server:

# Unzip the tar file
cd /tmp
tar -xvzf letsencrypt.tar.gz
# Will uncompress everything to /tmp/etc/letsencrypt

# Copy the contents to its original location
sudo cp -r /tmp/etc/letsencrypt /etc/

Redirect HTTPS to HTTP

If you have a domain which you don’t wish to provide HTTPS access to (i.e., go through the trouble of creating a SSL certificate for), you can configure Nginx to redirect HTTPS requests to HTTP. Uncomment the “# HTTPS server” section in the domain’s server block file and add a redirect statement:

# HTTPS server
server {
        listen 443 ssl;
        server_name monieer.com www.monieer.com;

        # To redirect, use return instead of the no-longer-recommended rewrite:
        #rewrite ^(.*) http://$host$request_uri;
        return 302 http://$host$request_uri;
}

Because we did not set the “ssl_certificate” and “ssl_certificate_key” values in the server block above, Nginx will use the default_server’s SSL certificate instead. Unfortunately, the browser will show a security warning because the domain name in the default_server’s SSL certificate won’t match the requested domain name. If the user agrees to proceed, the redirection to non-secure HTTP access will correctly take place.
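You can verify the redirect from the command line; curl’s “-k” flag skips the certificate name check, mimicking a user who clicks past the browser warning:

# Expect a 302 response with a Location header pointing at http://
curl -kI https://monieer.com/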


Nginx with PHP and MySQL on Windows 7

Windows Development

In the past, whenever I needed a web server on Windows, I would install the XAMPP distribution (comes with Apache, PHP, and MySQL) and call it a day. This time, I wanted to use Nginx instead of Apache as the web server. Below are the steps I took to install Nginx, PHP and MySQL separately on Windows 7.

Install Nginx

Nginx is pretty easy to install on Windows. Just do the following:

  1. Download the latest Nginx for Windows version. (Currently, only 32-bit versions are available.)
  2. Unzip to a directory like “c:\nginx”.
  3. Create two subdirectories which “nginx.exe” expects to exist:
    mkdir c:\nginx\logs
    mkdir c:\nginx\temp
  4. If you want to change the document root from the default “c:\nginx\html” and/or enable directory listing, edit the “c:\nginx\conf\nginx.conf” file and adjust the global “location /” declaration like so:
            location / {
                #root   html; # comment out default root at "nginx_install_dir\html"
                root   /www;  # use new root "c:\www" (assuming Nginx is installed on the c: drive)
                index  index.html index.htm;
                autoindex on; # Add this to enable directory listing
            }
  5. To run the Nginx web server, launch the “Command Prompt” and issue these commands:
    # Go to Nginx installation directory
    cd \nginx

    # Start Nginx
    start nginx.exe


    • Running “c:\nginx\nginx.exe” without changing to the “c:\nginx” directory will fail because Nginx will look for the log and temp subdirectories, which won’t exist under another directory.
    • The “start” command will launch Nginx in a separate window; otherwise, Nginx would take control of the current “Command Prompt” window. That separate window will appear and quickly disappear.
    • It is not necessary to run the “Command Prompt” as an administrator.
  6. Browse to http://localhost/ . You should see a “Welcome to nginx!” page.
  7. To quit Nginx, in the “Command Prompt” window, do the following:
    # Go to Nginx installation directory
    cd \nginx

    # Quit Nginx
    nginx.exe -s quit

    If you have started Nginx more than once, the quit command above will only kill the last Nginx process started. To kill all Nginx processes, run the following:

    taskkill /F /IM nginx.exe
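To check whether (and how many) Nginx processes are currently running, you can use the same tasklist filter that the start/stop batch scripts below rely on:

# List running Nginx processes
tasklist /FI "IMAGENAME eq nginx.exe"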

To avoid launching multiple instances of Nginx, I created the following “intelligent” Nginx start and stop batch script files.

Create a file named “start_nginx.bat” with the content below:


REM Start Nginx
tasklist /FI "IMAGENAME eq nginx.exe" 2>NUL | find /I /N "nginx.exe">NUL
IF NOT "%ERRORLEVEL%"=="0" (
   REM Nginx is NOT running, so start it
   cd \nginx
   start nginx.exe
   ECHO Nginx started.
) else (
   ECHO Nginx is already running.
)

And create a file named “stop_nginx.bat” with this content:


REM Stop Nginx
tasklist /FI "IMAGENAME eq nginx.exe" 2>NUL | find /I /N "nginx.exe">NUL
IF "%ERRORLEVEL%"=="0" (
   REM Nginx is currently running, so quit it
   cd \nginx
   nginx.exe -s quit
   ECHO Nginx quit issued.
) else (
   ECHO Nginx is not currently running.
)

Install and Configure PHP

To install PHP on Windows:

  1. Browse to php.net, click on the “Windows downloads” link, and download the latest thread safe version. Either 32-bit or 64-bit versions will work.
  2. Unzip to a directory, like “c:\nginx\php”.
  3. Select a PHP configuration (I recommend the development version):
    copy c:\nginx\php\php.ini-development c:\nginx\php\php.ini

We will run the “c:\nginx\php\php-cgi.exe” server to allow Nginx to execute PHP scripts using the FastCGI protocol.
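Before wiring PHP into Nginx, it’s worth sanity-checking the unzipped PHP distribution from the “Command Prompt”:

# Print the PHP version to verify the installation
c:\nginx\php\php.exe -v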

  1. Edit “c:\nginx\conf\nginx.conf” to uncomment the FastCGI section and update the fastcgi_param entries like so:
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                root           html;
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_index  index.php;
                #fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
                fastcgi_param  REQUEST_METHOD $request_method;
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include        fastcgi_params;
            }


    • Don’t forget to update the “root” location value if you are not using the default “c:\nginx\html” directory.
    • The fastcgi_param values are extra parameters passed to the PHP FastCGI server; the updated “SCRIPT_FILENAME” value is important because it tells php-cgi which script file under the document root to execute.
  2. Add the following to the bottom of the “start_nginx.bat” file:
    REM Start php-cgi
    tasklist /FI "IMAGENAME eq php-cgi.exe" 2>NUL | find /I /N "php-cgi.exe">NUL
    IF NOT "%ERRORLEVEL%"=="0" (
       REM php-cgi is NOT running, so start it
       start /min c:\nginx\php\php-cgi.exe -b localhost:9000 -c c:\nginx\php\php.ini
       ECHO php-cgi started.
    ) else (
       ECHO php-cgi is already running.
    )


    • Use localhost instead of 127.0.0.1 to support both IPv4 and IPv6 addressing (if it is enabled); the IPv4 loopback address was not resolvable on my IPv6-enabled Windows installation. (Strangely, using 127.0.0.1 in the nginx.conf’s FastCGI section above is okay though.)
    • Unfortunately, “start php-cgi.exe” will show a separate “Command Prompt” window which will remain visible; the “/min” parameter flag is used to minimize that window. If you really want to prevent that window from appearing, you’ll need to use a VBScript to execute the batch file.
    • The order in which Nginx and php-cgi are launched does not matter.
    • The PHP distribution has a “php-win.exe” file which supposedly is the same as “php-cgi.exe” but without throwing up a “Command Prompt” window; however, I could not get “php-win.exe” to run as a server.
  3. Add the following to the bottom of the “stop_nginx.bat” file:
    REM Stop php-cgi
    tasklist /FI "IMAGENAME eq php-cgi.exe" 2>NUL | find /I /N "php-cgi.exe">NUL
    IF "%ERRORLEVEL%"=="0" (
       REM php-cgi is currently running, so quit it
       taskkill /f /IM php-cgi.exe
       ECHO php-cgi killed.
    ) else (
       ECHO php-cgi is not currently running.
    )
  4. Create a PHP test script at “c:\nginx\html\info.php” with the following content:
    <?php phpinfo(); ?>
  5. Run “start_nginx.bat” to launch Nginx and php-cgi. Browse to http://localhost/info.php and you should see information about the PHP installation.

Install MySQL

Let’s get MySQL up and running:

  1. Download the latest MySQL Community Server. I suggest the “ZIP Archive” distributions, either the 32-bit or 64-bit version. Click the Download button. You don’t need to log in to download, just click the “No thanks, just start my download” link at the bottom of the page.
  2. Unzip to a directory like “c:\nginx\mysql”.
  3. Select the default MySQL configuration:
    copy c:\nginx\mysql\my-default.ini c:\nginx\mysql\my.ini
  4. Initialize MySQL by running the “Command Prompt” as an administrator (so Windows registry keys and service can be created) and these commands:
    # Current directory must be the MySQL installation directory
    cd c:\nginx\mysql

    # Initialize database with root user and blank password
    bin\mysqld --initialize-insecure

    # Install MySQL as a Windows service
    bin\mysqld --install-manual

    # Start MySQL Server service
    net start mysql
  5. Test by running these commands (administrator privileges not required):
    # Run MySQL client, skipping password input since blank
    c:\nginx\mysql\bin\mysql.exe -u root --skip-password

    # Run some commands and a query
    mysql> SHOW databases;
    mysql> USE mysql;
    mysql> SHOW tables;
    mysql> DESC user;
    mysql> SELECT * FROM user;

    # Assign new root password
    mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'new_password';

    # Quit MySQL client
    mysql> quit;

    # Run MySQL client with password prompt
    c:\nginx\mysql\bin\mysql.exe -u root -p
    # Input the new_password
  6. Enable the PHP mysqli extension by uncommenting the line below (remove the leading semicolon) in “c:\nginx\php\php.ini”:
    ;extension=php_mysqli.dll
  7. Create a test script at “c:\nginx\html\mysql.php” with the following content:
    <?php
    // HTML response header
    header('Content-type: text/plain');

    // Database connection parameters
    $DB_HOST = 'localhost';
    $DB_PORT = 3306; // 3306 is default MySQL port
    $DB_USER = 'root';
    $DB_PASS = ''; // blank or new_password
    $DB_NAME = 'mysql'; // database instance name

    // Open connection (all args can be optional or NULL!)
    $mysqli = new mysqli($DB_HOST, $DB_USER, $DB_PASS, $DB_NAME, $DB_PORT);
    if ($mysqli->connect_error) {
      echo 'Connect Error (' . $mysqli->connect_errno . ') ' . $mysqli->connect_error . PHP_EOL;
    } else {
      // Query users
      if ($result = $mysqli->query('SELECT User FROM user')) {
        echo 'Database users are:' . PHP_EOL;
        for ($i = 0; $i < $result->num_rows; $i++) {
          $row = $result->fetch_assoc();
          echo $row['User'] . PHP_EOL;
        }
        $result->free();
      } else {
        echo 'Query failed' . PHP_EOL;
      }
      // Close connection
      $mysqli->close();
    }
  8. Run “stop_nginx.bat” followed by “start_nginx.bat” to restart Nginx and php-cgi processes. Browse to http://localhost/mysql.php and you should see a listing of the MySQL database users.
  9. You can stop and/or uninstall the MySQL Server service by running “Command Prompt” as an administrator and issuing these commands:
    # Stop MySQL Server service
    net stop mysql

    # Uninstall MySQL Server service
    sc delete mysql

You don’t have to run MySQL Server as a Windows service. Instead, you can run MySQL Server from the “Command Prompt” (administrator privileges not required) like so:

# Start MySQL Server (mysqld takes over the current window)
c:\nginx\mysql\bin\mysqld.exe --console

# Stop MySQL Server
c:\nginx\mysql\bin\mysqladmin.exe -u root shutdown

Unfortunately, the “mysqld.exe” will take control of the “Command Prompt” window and the “start” command does not work in this case, so you will need to open a second “Command Prompt” window in order to issue the shutdown command.


Setup LAMP (Linux, Apache, MySQL, PHP) on Mac OS X 10.10 Yosemite

Mac OS X

I downloaded a HTML5 and Javascript demo. When I attempted to browse to it, I encountered the infamous “XMLHttpRequest cannot load file:///” error.  The latest Chrome (and other modern browsers) won’t allow cross domain (a.k.a. cross origin) communication, which would occur when a page from one website domain attempts to read data from another domain.   The demo Javascript code was attempting to read a text file in the same file location using GET, but the local “file:///” protocol was not recognized as a proper website domain and Chrome assumed it was a cross domain security violation.

The only certain solution to the above problem is to run a local web server to host the demo code. I have a previous post on setting up Apache on Mac OS X (Install Apache, PHP, MySQL, and phpMyAdmin on Mac OS X 10.6 Snow Leopard) which would have been helpful, but it is outdated. I have adjusted the instructions for Mac OS X 10.10 Yosemite below.

Configure PHP and Start Apache HTTP Server

Mac OS X 10.10 Yosemite continues to ship with PHP and Apache installed.   (The Apache HTTP server is stopped by default.)  You can check their versions by opening the Terminal app and running these commands:

php -v
httpd -v

Before we start the Apache HTTP Server, enable PHP support by editing the Apache config file (“sudo nano /etc/apache2/httpd.conf”) and uncommenting this line (by removing the initial pound # character):

#LoadModule php5_module libexec/apache2/libphp5.so

The “Web Sharing” option was removed from the “System Preferences” dialog so we have to use the command line to start the Apache server.  You can start, stop, or restart using the following commands:

# Start, stop, or restart Apache HTTP Server
sudo apachectl start
sudo apachectl stop
sudo apachectl restart

# Check to see if Apache HTTP Server is running
ps -e | grep httpd
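After restarting Apache, you can also confirm that the PHP module was loaded (httpd may print a harmless ServerRoot warning):

# List loaded Apache modules and filter for PHP
httpd -M | grep php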

Note: The “apachectl start/restart” command will configure Apache to start on bootup. (Internally, “apachectl start” calls “launchctl load” and “apachectl stop” calls “launchctl unload”.)

Start the Apache HTTP Server. Browse to http://localhost/ and you should see the “It Works!” message.

Create a test PHP file under the Apache document root directory, “sudo nano /Library/WebServer/Documents/phpinfo.php”, with the following content:

<?php
// Show all information about PHP
phpinfo();
Browse to http://localhost/phpinfo.php and you should see the PHP configuration information.

If you have problems, check the Apache error log file at “/var/log/apache2/error_log”.

You can change the Apache document root to point to a different directory by editing “/etc/apache2/httpd.conf” and modifying the values for these two declarations:

DocumentRoot "/Library/WebServer/Documents"
<Directory "/Library/WebServer/Documents">

Restart the Apache HTTP Server for the change to take effect. Make sure that your new document root directory and its contents have read permission set for others (for example, “chmod 755” for directories and “chmod 644” for files).

Install and Start MySQL Server

Download the free MySQL Community Server distribution; I selected the “Mac OS X 10.10 (x86, 64-bit), DMG Archive” package.  You don’t need to login or sign up; just select the “No thanks, just start my download” link at the bottom.  Open the downloaded “mysql-5.7.11-osx10.10-x86_64.dmg” disk image file and run the “mysql-5.7.11-osx10.9-x86_64.pkg” package inside to install MySQL Server. (Strangely, even though I downloaded the 10.10 version, the names of the disk image and package files refer to the 10.9 version.)

Note: When the installation completes, you will see a dialog containing the temporary password for the MySQL root user. Please make a copy of it because you will need it below. If you forget to do so, you can follow the MySQL website’s How to Reset the Root Password page to reset the root password.

The MySQL Server will be installed under the “/usr/local/mysql-5.7.11-osx10.9-x86_64” directory. In addition, a symbolic link to that directory is created as “/usr/local/mysql”.

You can start the MySQL Server and configure whether it will run on bootup under “System Preferences, MySQL”. Alternatively, you can start and stop the MySQL Server from the command line:

# Start or stop MySQL Server
sudo /usr/local/mysql/support-files/mysql.server start
sudo /usr/local/mysql/support-files/mysql.server stop

# Check to see if MySQL Server is running
ps -e | grep mysql

Add the following line to your user environment profile, “nano ~/.profile”, to avoid inputting the full path when executing mysql commands:

export PATH=$PATH:/usr/local/mysql/bin
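Apply the change to the current Terminal session and verify that the mysql command is found:

# Reload the profile and locate mysql on the PATH
source ~/.profile
which mysql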

Start the MySQL Server and try these commands:

# Show MySQL version
mysql -u root --version

# Connect to MySQL Server
mysql -u root -p
# Input the temporary root password when prompted

# Reset the root password to blank
mysql> alter user 'root'@'localhost' identified by '';
# Put your password inside the '' at the end if you don't want a blank password

# Some example queries
mysql> show databases;
mysql> use mysql;
mysql> show tables;

# Exit the MySQL interpreter
mysql> quit

Additional info about LAMP setup can be found at Get Apache, MySQL, PHP and phpMyAdmin working on OSX 10.10 Yosemite.


Automate Remote Backup of WordPress Database

Linux

See my previous post, Subversion Over SSH on an Unmanaged VPS, to learn how to set up Subversion on Ubuntu (running on a DigitalOcean VPS). In this post, we will learn how to create a script to backup the WordPress database and copy it from the server to our local Windows client. We’ll also look at copying other files on the server to our local client’s hard drive. Finally, we will automate the execution of the backup script to run at regular intervals on the local client.

Install Windows SSH Tools

The backup script will use Unix tools, like ssh (secure shell) and rsync (remote sync), which are not included with Windows. Fortunately, there are free distributions of these tools for Windows. Let’s install them.

Get the ssh and rsync tools:

  1. Download the version of DeltaCopy without the installer (see “Download Links” located top-right).
  2. Unzip the downloaded “DeltaCopyRaw.zip” to “C:\Program Files (x86)\DeltaCopy”.
  3. Add DeltaCopy to the execution path and set the home directory (where we will save the public/private RSA key pair files later):
    1. Open up the “System Properties” dialog by running “Edit system environmental variables” (or “sysdm.cpl”). Click on the Advanced tab. Click on the “Environmental Variables” button near the bottom to launch the “Environmental Variables” dialog.
    2. In the “Environmental Variables” dialog, select “Path” under “System variables” and click the “Edit…” button.
    3. Add “;C:\Program Files (x86)\DeltaCopy” (without the double-quotes) to the end of the existing “Variable value” field. Click Ok to save the change.
    4. Back in the “Environmental Variables” dialog, click “New…” button under “System variables”.
    5. Set “Variable name” to “HOME” and “Variable value” to your home directory like “C:\home\myuser”. Click Ok to save the change.
    6. Click Ok to close the “Environmental Variables” dialog and “Ok” again to close the “System Properties” dialog.

Get the ssh-keygen (secure shell authentication key generation) tool:

  1. Download the free version of cwRsync (click on the Get tab).
  2. Unzip the downloaded “cwRsync_5.5.0_x86_Free.zip” to a temporary directory like “C:/temp/cwRsync”. We will only need to use ssh-keygen once to generate the public/private RSA key pair.
  3. Besides ssh-keygen, cwRsync includes ssh and rsync, which we won’t use; cwRsync’s ssh and rsync are not as Windows-compatible as DeltaCopy’s. For example, cwRsync’s ssh and rsync require that the RSA key pair files stored on Windows have Unix-like 0600 permissions, which then requires the chmod tool (ironically included with DeltaCopy, but not cwRsync). DeltaCopy doesn’t have such issues. (Both DeltaCopy and cwRsync are based on a tiny part of Cygwin, and DeltaCopy is the most Windows-friendly option of the three.)

Get the scp (secure copy) tool:

  1. Download the “pscp.exe” file from PuTTY.
  2. Move it into the “C:\Program Files (x86)\DeltaCopy” directory.

Create the “.ssh” directory under the home directory and test the environmental variables by running the “Command Prompt” (or “cmd.exe”) and inputting these commands. (Don’t type the comment lines below that start with the # pound character.)

# Test the HOME variable
echo %HOME%

# Create the .ssh directory
mkdir %HOME%\.ssh

# Test the PATH variable; ssh should be found and executed
ssh -p 3333 mynewuser@mydomain.com

Server, Trust Me

To enable the backup script to run without requiring password input from the user, we need to establish trust between the remote server and the local client. To do so, we will create a client public/private RSA key pair and configure the server to trust the client public key. Tools like ssh and rsync can then authenticate against the server using the RSA key pair to avoid requiring the user to input a password.

Open the “Command Prompt” and do the following:

# Go the directory where we unzipped the ssh-keygen tool to
cd /temp/cwRsync/bin

# Generate client RSA key pair (for security, 2048 bits is the new minimum)
ssh-keygen -b 2048

Generating public/private rsa key pair.
# When prompted, select the current directory to write to;
# if you keep the default, it will fail
Enter file in which to save the key (/home/myuser/.ssh/id_rsa): ./id_rsa
# Keep the default; do not input a passphrase
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ./id_rsa.
Your public key has been saved in ./id_rsa.pub.

# Move client RSA key pair to .ssh directory
move ./id_rsa* /home/myuser/.ssh

# Copy client public key to the server
pscp -P 3333 /home/myuser/.ssh/id_rsa.pub mynewuser@mydomain.com:/home/mynewuser/

# Secure shell into the server; you will be prompted for password
ssh -p 3333 mynewuser@mydomain.com

# On server, double-check that we are in the home directory
pwd

# Create the .ssh directory
mkdir .ssh

# Create authorized_keys file and append the client public key to it
cat id_rsa.pub >> .ssh/authorized_keys

# Delete the client public key (no longer needed)
rm id_rsa.pub

# Optionally, restrict access to .ssh directory
chmod -R 700 .ssh

# Exit the secure shell
exit

# Secure shell into the server again; you won't be prompted for the password
ssh -p 3333 mynewuser@mydomain.com

On the last secure shell attempt, you should be able to log into the server without having to input a password.

Create and Schedule Backup Script

Create a file “C:\home\myuser\backups\backup_wordpress.bat” and input the following content:

@echo off

REM Display current date and time

date /t
time /t

REM Dump the WordPress database
REM The -v verbose flag is optional

ssh -p 3333 -v mynewuser@mydomain.com "mysqldump -uwordpress -pmypassword wordpress | gzip -c > /tmp/wordpress.sql.gz"

REM Download the database dump file to local directory.
REM Using rsync over ssh to avoid the need for a rsync server on VPS.
REM The %date:~10...% below helps to date-stamp the file,
REM resulting in a filename like 2015.04.23-wordpress_4.4.2.sql.gz.

mkdir \home\myuser\backups\wordpress
cd \home\myuser\backups\wordpress
rsync -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/tmp/wordpress.sql.gz %date:~10,4%.%date:~4,2%.%date:~7,2%-wordpress_4.4.2.sql.gz

REM Sync all Nginx server block files to local directory
REM Note: Be careful, the --delete flag allows Rsync to delete local files
REM       if they do not exist on the server also!

mkdir \home\myuser\backups\nginx
cd \home\myuser\backups\nginx
rsync -vrt --progress --delete -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/etc/nginx/sites-available/* .

REM Rsync may not set local permissions correctly, so we'll fix with DeltaCopy's chmod.
REM Note: chmod fails for files with Windows-style perms already set, but that is ok.

chmod 660 *

You may wish to add additional rsync commands to download the WordPress configuration file (“/var/www/wordpress/wp-config.php”) and pictures uploaded to WordPress (“/var/www/wordpress/wp-content/uploads/*”).
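For example, a sketch following the same pattern as the commands above (paths taken from this post):

REM Download the WordPress config file and uploaded pictures
mkdir \home\myuser\backups\uploads
cd \home\myuser\backups\uploads
rsync -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/var/www/wordpress/wp-config.php .
rsync -vrt --progress -e "ssh -p 3333 -l mynewuser -v" mydomain.com:/var/www/wordpress/wp-content/uploads/* .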

Schedule the script:

  1. Run the “Task Scheduler”.
  2. Click on “Create Basic Task…” on the right sidebar.
  3. Input a name like “Backup WordPress”. Click Next.
  4. Select your schedule. I recommend “Weekly”. Click Next. Select a specific day and time that works for you. Click Next.
  5. Keep Action as “Start a program”. Click Next.
  6. Input “C:\home\myuser\backups\backup_wordpress.bat” into the “Program/script” box.
  7. Input “> C:\home\myuser\backups\backup_log.txt 2>&1” into the “Add arguments (optional)” box. This will redirect any standard or error outputs from the “backup_wordpress.bat” to “backup_log.txt” for review at your convenience.
  8. Click Next and Finish.
  9. To manually test, click on “Task Schedule Library” on the left sidebar, right-click on the “Backup WordPress” task in the top-center panel (if you don’t see it, click on the Refresh action to the right first), and select Run.

Getting PuTTY to Use Private/Public Key Pair

You may notice that the PuTTY pscp tool still requires a password to be inputted. Unfortunately, PuTTY does not use the RSA key format or the %HOME% environmental variable.

If you wish to use the pscp tool in the backup script, we’ll need to convert the RSA private key to the PPK (PuTTY Private Key) format:

  1. Download the “puttygen.exe” file from PuTTY.
  2. Run it.
  3. Go to menu Conversions and select “Import key”. Browse to the client RSA private key at “C:/home/myuser/.ssh/id_rsa”.
  4. Click the “Save private key” button. Answer Yes to the “Are you sure you want to save this key without a passphrase to protect it?” dialog.
  5. Input filename “id_rsa.ppk” and save to the same location as the original RSA key pair files.

When running the pscp tool in the script, use the “-i” option to tell it where to find the PPK file like so:

REM Copy the WordPress config file to local directory

cd \home\myuser\backups\wordpress
pscp -P 3333 -i /home/myuser/.ssh/id_rsa.ppk mynewuser@mydomain.com:/var/www/wordpress/wp-config.php .

Hopefully the above will help you to sleep well, knowing that your WordPress data is safe.

See my followup post, Free SSL Certificate from Let’s Encrypt for Nginx, on how to install a free SSL certificate for HTTPS access and as a result, maybe give your Google ranking a boost.

No Comments

Rotate Video Without Black Bars

Audio Visual

Have you ever taken a vertical portrait video with your iPhone, imported it to your computer, found that Windows Media Player plays it horizontally, and gotten a neck crick from holding your head sideways? Since the beginning, vertical portrait videos (tall and skinny) have been unwanted and unsupported, living in the shadow of horizontal landscape videos (short and wide). Portrait videos were usually shoehorned into a widescreen frame, resulting in ugly black bars on the left and right. (Using free video editors, like Windows Movie Maker or Mac OS X iMovie, to rotate videos will result in such travesties.)

Thankfully, things have gotten better. Recent smartphones will embed the rotation information into the video file. Some video players, like VLC and QuickTime Player, will act on that rotation data to show the video correctly on computers. (Unfortunately, Windows Media Player does not make use of the video rotation data.) In addition, VLC allows manual adjustment of the playback video orientation (menu “Tools->Effects and Filters->Video Effects->Geometry->Transform->Rotate by 90 degrees”), but does not permanently change the video file’s rotation data. QuickTime Pro has a rotate video function which does not really rotate the video, but does adjust the video file’s rotation data permanently instead.

I read that some online services, such as YouTube and Google Plus, do support video rotation on imported videos, but have not tried them myself yet. (Supposedly, the new Google Photos does not support video rotation yet, but should eventually. In the meantime, the workaround is to rotate the video using the Google Plus interface.)

If you must rotate a video file (perhaps because you wish to use Windows Media Player), you will want to use a commercial video editor. To rotate a video without introducing black bars requires a program that can rotate the video, change the resolution (to avoid black bars), and re-encode with minimal video quality loss. All three functions are usually only found in commercial video editing software such as Adobe Premiere.

Instructions on how to rotate a video file using Sony Vegas Pro 10 and Adobe Premiere Pro CS5 on Windows 7 follow. I am only a beginner with both programs, so there may be better ways to do what I am attempting to do below.

Sony Vegas Pro 10

Though Sony Vegas is less powerful than Adobe Premiere, I find it much simpler to use. Here’s how to rotate a video using Sony Vegas Pro 10:

  1. Launch Sony Vegas Pro.
  2. Go to menu “File->Import->Media…”, browse to the video file, select it, and click Open.
  3. Surprisingly, Sony Vegas Pro supports the rotation data and displays the video correctly. Because we want to actually rotate the video, we need to tell Sony Vegas Pro not to use the rotation data. To do so:
    1. Right-click on the top-left thumbnail image of the imported video file and select Properties.
    2. Under the Media tab, in the “Stream properties” section at bottom, select “Video 1” in the “Stream” drop-down list.
    3. Change the Rotation field from “90 degrees clockwise” to “0 degrees (original)”. Click OK and the video file thumbnail will rotate to the true orientation.
  4. Double-click on the video file thumbnail to populate the timeline panel at the bottom.
  5. In the timeline panel, right-click on the video track thumbnail image and select the “Video Event Pan/Crop…” item.
  6. In the Event Pan/Crop dialog:
    1. Disable the “Lock Aspect Ratio” option by clicking on that icon if it is depressed (third icon from the bottom on the left toolbar).
    2. Under Position, switch the values for the Width and Height fields.
    3. Under Rotation, change the Angle field from “0.0” to “-90.0”. (Not sure why but I had to use -90 instead of 90.)
    4. Close the dialog by clicking on the tiny top-right “x” icon.
  7. Go to menu “File->Render As…” to open the “Render As” dialog. In that dialog, do the following:
    1. The “Save as type” and dependent Template fields determine the quality of the rendered video, specifically the resolution. Because our rotated video will have a height of 1920, one “Save as type” option that allows such a height is “Video for Windows (*.avi)”. (Other options will allow different maximum widths and heights.)
    2. Select “HD 1080-24p YUV” in the Template field, which is the closest match to our imported video.
    3. Click on the “Custom…” button to configure a portrait resolution (ex: 1080×1920).
    4. In the “Custom Settings” dialog, inside the Video tab, select “(Custom frame size)” in the “Frame size” field, and switch the Width and Height values. Click OK to close the dialog.
    5. Back in the “Render As” dialog, make sure “Render loop region only” is not checked because we want the whole video to be exported. (The “Render loop region only” box will be disabled if no selection is done on the video track.)
    6. Click on the Save button.

Adobe Premiere Pro CS5

Adobe Premiere is very powerful and thus, not simple to use. Definitely, it is overkill for just rotating a video file. But if you ever need to rotate a video, here’s how to do it in Adobe Premiere Pro CS5:

  1. Launch Adobe Premiere Pro, choose a “New Project” and click Ok to accept the defaults. When the “New Sequence” dialog appears, click Cancel to skip creating it.
  2. Right-click inside the top-left Project panel and select “Import…” (or go to menu “File->Import…”). Browse to your video file, select it, and click Open.
    • Note: If you are opening a QuickTime .mov file and Adobe Premiere displays a “no audio or video streams” error message, rename the .mov file to .mpg file and try again.
  3. When you select the imported video file in the Project panel, the mini-preview on top of the panel will show information concerning the video. Take note of the resolution (ex: 1920×1080), frame rate (ex: 29.97 fps), and audio sample rate (ex: 44100 Hz).
  4. Right-click inside the Project panel and select “New Item->Sequence…” (or go to menu “File->New->Sequence…”) to create a sequence. In the “New Sequence” dialog, do the following:
    1. Open the Settings tab.
    2. Swap the horizontal and vertical field values for the “Frame Size” (ex: 1080×1920).
    3. Select a matching Timebase (ex: 29.97 fps) and Audio “Sample Rate” (44100 Hz).
    4. Under the “Video Previews” section, select “Microsoft AVI” for the “Preview File Format”. We want to select the highest-quality codec that we can. Unfortunately, the high-quality “V210 10-bit YUV” and “Uncompressed UYVY 422 8bit” codecs support a maximum resolution of 607×1080. For 1080×1920, I recommend using the “Intel IYUV codec”. (The “Microsoft RLE” and “Microsoft Video 1” codecs will degrade the video quality noticeably.)
    5. Click the Reset button and the Width and Height fields will be updated to match the “Frame Size” (ex: 1080×1920) or as close to it as possible (depending on the codec selected).
    6. Click OK to create the sequence.
  5. The sequence will appear as a tab in the Timeline panel at the bottom-middle. To populate it, drag the imported video from the Project panel to the very beginning of the “Video 1” track in the timeline. The Preview panel at the top-right will show the sequence video frame with the video data and top/bottom black bars. (We will get rid of those black bars in the steps below.)
  6. Select all the video data in the Timeline panel. This action will populate the “Effect Controls” tab in the top-middle “Source, Effect Controls, Audio Mixer, Metadata” panel.
  7. In the Effect Controls pane, expand the Motion selection under “Video Effects”. Input a value of 90 in the Rotation field to rotate clockwise (or -90 to rotate counter-clockwise). The Preview panel will show the rotated video which fits the sequence frame perfectly without any black bars.
  8. With the sequence selected in the Project panel, go to menu “File->Export->Media…”. Check the “Match Sequence Settings” box. Click on the Output tab to double-check that the exported video will not contain black bars.
  9. Click on the Export button. By default, the exported .avi video file will be created in the documents directory at “C:\Users\your_username\Documents\Adobe\Premiere Pro\5.5”.

Tip: If you want to easily create a sequence that matches the video file exactly, just drag the imported video file to the “New Item” icon on the Project panel’s bottom toolbar. (The “New Item” icon is immediately to the left of the Clear/trash icon.) This action will create a sequence that matches the imported video as close as possible and populate the sequence’s timeline with the video data automatically.

The exported video files may be significantly larger in size than the original video files. In most cases, re-encoding video will result in loss of quality or increase in file size. I think the best thing to do is to leave the original video file untouched and use a video player that is aware of the embedded rotation data. If the rotation data is wrong or missing, it might make more sense to use a program, like Sony Vegas Pro, to modify or add it without re-encoding the video.


Clone a Hard Drive Using Clonezilla Live

Windows

I needed to clone one hard drive to another. In the past, I would have used a bootable MS-DOS CD containing an old copy of Norton Ghost 8. This time, I decided to see what is currently available and could be launched from a bootable USB flash drive. I found the open source Clonezilla Live utility, which is a small GNU/Linux distribution capable of running from a USB flash drive and cloning hard drives.

I decided to follow Clonezilla Live’s “MS Windows Method B: Manual” instructions to create a bootable USB flash drive.

  1. Follow these DiskPart instructions to create a bootable USB flash drive. (Clonezilla Live requires the FAT32 format and at least a 200MB capacity flash drive.)
  2. Download the latest stable release of Clonezilla Live. If you have a 64-bit capable machine, select “amd64” for “CPU architecture”. Or select “i586” for 32-bit. Select “zip” for “file type”.
  3. Unzip the Clonezilla Live Zip archive to the USB flash drive.
  4. Launch the Command Prompt utility, change directory to the USB flash drive, and run “utils\win64\makeboot64.bat” for 64-bit or “utils\win32\makeboot.bat” for 32-bit. The “makeboot64.bat” or “makeboot.bat” script will modify the USB flash drive to boot the small GNU/Linux distribution and run the Clonezilla Live utility. (The makeboot utility will display the drive letter to be modified before continuing; please make sure that it is the correct one belonging to the USB flash drive.)

Clonezilla Live will show a lot of options which unfortunately are not easy to understand. The simplest way to deal with it is to accept the default when you are not sure.

  1. Attach the destination hard drive to the same machine containing the source hard drive.
  2. Start the machine and boot from the USB flash drive. You may need to press a particular function key to load the boot menu (F12 on my Lenovo desktop) or you may need to adjust the BIOS setup to boot from a USB drive before the hard drive.
  3. On Clonezilla Live’s startup screen, keep the default “Clonezilla live (Default settings, VGA 800×600)” and press Enter.
  4. Press Enter to accept the pre-selected language, “en_US.UTF-8 English”.
  5. Keep the default “Don’t touch keymap” and press Enter.
  6. Make sure “Start_Clonezilla” is selected and press Enter to start.
  7. Because I am copying from one hard drive to another, I select the “device-device work directly from a disk or partition to a disk or partition” option. Press Enter.
  8. To keep it simple, stay with the “Beginner mode” option and press Enter.
  9. Select the source hard drive and press Enter.
  10. Select the target destination hard drive and press Enter.
  11. Keep the default “Skip checking/repairing source file system” selection and press Enter.
  12. Type “y” and press Enter to acknowledge the warning that all data on the destination hard drive will be destroyed.
  13. Type “y” and press Enter a second time to indicate that you are really sure.
  14. In answer to the question “do you want to clone the boot loader”, type uppercase “Y” and press Enter. (I need to clone the boot loader so the destination hard drive will be bootable like the source hard drive.)
  15. The hard drive cloning will occur. It took me around 10 minutes copying from one SSD to another SSD. (The length of time required to complete the process is dependent on the speed of both the source and destination hard drives.)
  16. When the cloning completes, press Enter to continue.
  17. Select “poweroff” to shut down the machine.
  18. Once the machine is off, swap the hard drives (or remove the source hard drive) and boot from the destination hard drive.

Even though my destination hard drive was twice the size of the source hard drive, the cloned destination partition size was the same size as the original source partition. I then used the free EaseUS Partition Master utility to increase the size of the destination partition (without destroying the data on it). Probably, Clonezilla Live’s expert mode has a setting to adjust the destination partition size.


Create Bootable USB Flash Drive With DiskPart Command-Line Utility

Windows

The instructions below will create a bootable system partition on a USB flash drive, which is exactly the same as creating such a partition on a hard drive. Specifically, I will be using Windows 7’s built-in DiskPart (Disk Partition) command-line utility to create a bootable USB flash drive containing a Windows 8.1 Setup image.

If you are interested, here’s the technical reason why our bootable USB flash drive will use the MBR layout and FAT32 format: Computers, including both Windows and Macs, boot using a standard called UEFI (Unified Extensible Firmware Interface), which is based upon the EFI specification (Extensible Firmware Interface). (When folks say EFI, they are usually referring to UEFI because all modern computers use UEFI.) UEFI is a replacement for the previous BIOS method of booting up, but UEFI still supports the older BIOS method. The BIOS boot method uses the MBR (Master Boot Record) layout. In addition to BIOS+MBR, UEFI also supports the new GPT (GUID Partition Table) layout. The UEFI specification requires bootable removable media (such as a bootable USB flash drive) to use the MBR layout and FAT32 format.

To create a bootable USB flash drive, do the following:

  1. Insert a USB flash drive with sufficient capacity. (The 64-bit Windows 8.1 Professional ISO image I had is 4.5GB in size and requires at least an 8GB USB flash drive).
  2. Launch the “diskpart” or “diskpart.exe” utility from the Windows Start/Run menu or the Command Prompt. You will be prompted with a popup message asking “Do you want to allow the following program to make changes to this computer?” Answer Yes.
  3. Run the following commands in the DiskPart utility (ignore the comment lines marked by the pound # character):
    # Show all disks (aka drives, like hard drives or removable media).
    DISKPART> list disk
    # Select a disk to operate on.
    DISKPART> select disk [number identifying USB flash drive]
    # Delete all partitions, resulting in a blank disk.
    DISKPART> clean
    # Create a primary partition (using MBR).
    DISKPART> create partition primary

    # Show all partitions (should just be the one newly-created partition).
    DISKPART> list partition

    # Select the primary partition to operate on (only 1 partition exists).
    DISKPART> select partition 1
    # Make that primary partition active (aka bootable).
    DISKPART> active
    # Format the active primary partition using FAT32.
    # To do a full format, instead of a quick format, omit the "quick" flag.
    DISKPART> format fs=fat32 quick
    # Assign a drive letter to the primary partition
    # (just in case Windows didn't already do it).
    DISKPART> assign

    # Quit DiskPart.
    DISKPART> exit
  4. Test by opening the contents of USB flash drive using Windows Explorer. If you get an inaccessible error when accessing the drive, unplug and re-plug it back into the computer. You should then be able to access it.
  5. Insert the Windows Setup DVD or mount the Windows Setup ISO file (I recommend using the free Slysoft Virtual CloneDrive utility to perform the ISO mount).
  6. Copy all the Windows Setup content to the USB flash drive by running the xcopy command from the Command Prompt:
    # Supposing USB flash drive is K: drive and Windows is L: drive,
    # copy all files and directories from the latter to the former.
    # /e = Copies directories and sub-directories, including empty ones.
    # /f = Displays full source and destination file names while copying.
    xcopy L:*.* /e/f K:

And we are done. The resulting USB flash drive should be bootable on both Windows and Macs. I tested the USB flash drive on my Macbook Pro Retina and it booted fine.

Info above derived from Install Windows 7 From a USB Flash Drive.

