Let’s Encrypt SSL and WordPress

This site doesn’t have anything particularly gripping on it.  But that’s no reason for me not to add SSL encryption for people who visit.  Sure, you may not be buying anything or even using authentication, but the Let’s Encrypt project (run by the Internet Security Research Group, with backing from the Electronic Frontier Foundation, Mozilla, and others) has lowered the bar so far that it seems wrong not to participate.  Especially since, as Google has indicated, it treats HTTPS as a ranking signal for sites.

I used SSL for a while when I ran my own server but dropped it, partly because of the complexity and partly because I migrated all of my sites to Siteground.  I was delighted when, earlier this year, they indicated they were supporting Let’s Encrypt.

It’s easy to add a certificate to your account.  Siteground’s tutorial will create the certificate, and you can then activate SSL; that activation ended up being the more difficult part of the process.  If you look at your folders after the certificate is created, Siteground’s auto-configuration leaves behind traces, including a folder called .well-known.  That folder isn’t a bug: it’s a standard well-known-URI location that Let’s Encrypt uses for its domain-validation challenge, which is why you’ll see it across so many Web hosting setups.

This was perhaps more complex because I am running multisite (MU) WordPress and needed a couple of tweaks to get HTTPS working.  This was a great tutorial on getting it done.  I was already using the Domain Mapping plugin and also looked at the WordPress HTTPS one.

In the end, I decided against WordPress HTTPS for a few reasons.  First, I don’t like running any more plugins than I need.  Second, it is out of date.  Lastly, it was an optional tweak anyway.  Like the author of that post, I used phpMyAdmin to edit the wp_options and wp_sitemeta tables so that the URLs were https://ofaolain.com and not http://ofaolain.com.
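If you’d rather make that change from phpMyAdmin’s SQL tab than edit the rows by hand, it boils down to a couple of UPDATE statements.  This is only a sketch, assuming the default wp_ table prefix; on multisite, each subsite also has its own wp_N_options table to check, and you should back up the database first.

```sql
-- The main site address lives in wp_options under 'siteurl' and 'home'
UPDATE wp_options
   SET option_value = 'https://ofaolain.com'
 WHERE option_name IN ('siteurl', 'home');

-- Multisite keeps the network address in wp_sitemeta
UPDATE wp_sitemeta
   SET meta_value = 'https://ofaolain.com'
 WHERE meta_key = 'siteurl';
```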

By editing my .htaccess file – specifically the WordPress section, so that requests always defaulted to HTTPS – I was able to achieve manually what the plugin offered.

RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

# END WordPress

So far, so good.  However, not everything went smoothly:

  • Jetpack, the WordPress plugin, broke on a couple (but not all) of the sites.  After seeing various accounts suggesting that the fix when Jetpack can’t connect is to delete and manually reinstall it, I instead disconnected, deactivated, and reactivated it.  When I reconnected, it was fine again.
  • My Siteground service went into CPU overload.  It’s unclear to me why, but the overload (about 1,000x normal usage) was contemporaneous with this change.  When they unlocked my account, I deactivated what plugins I could, turned off the Cloudflare CDN, and IP-blocked an address that appeared to have been pinging my site pretty egregiously.  However, as I added one after another back on and watched the CPU load, nothing changed.  What I don’t know is whether it was caused by (a) Siteground’s certificate-creation script, (b) the manual changes I made to the database and/or testing out the WordPress HTTPS plugin, (c) something else, like that Jetpack connectivity, or (d) a combination.  In any event, Siteground cut me off, and when I came back online, the CPU was back to normal.
  • I had a deactivated copy of SSL Insecure Content Fixer installed from my previous run at SSL.  Once I activated it, it eliminated the mixed secure/insecure content warning in Google Chrome and Microsoft Internet Explorer.  I still get a mixed-content warning from Firefox, although I can’t see any insecure references when I view source.  It’s odd.
  • Although my Domain Mappings were correct, the URL is listed in three places: once on the Info tab (uneditable) and twice under the Settings tab (home, siteurl).  When I edited the database table, the two entries on the Settings tab updated.  The one on the Info tab has not, and I can’t find where it’s coming from.  It doesn’t appear to have any impact, but I’m still going to try to hunt it down because I don’t like loose ends.

I edited my .htaccess file to make sure the site automatically forces any visitor over to HTTPS, so hopefully you’re seeing that in your browser as well.

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.ofaolain\.com [NC]
RewriteRule ^(.*)$ https://ofaolain.com/$1 [L,R=301]
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://ofaolain.com/$1 [R=301,L]
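One caveat on the %{SERVER_PORT} test: behind a CDN or proxy (I was running Cloudflare), TLS can terminate at the proxy, so the request may arrive at the server on port 80 even for HTTPS visitors, and the port check can loop or misfire.  A sketch of an alternative, assuming the proxy sets the standard X-Forwarded-Proto header:

```apacheconf
RewriteEngine On
# Redirect only when the original client request came in over plain HTTP
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^(.*)$ https://ofaolain.com/$1 [R=301,L]
```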

Except for the CPU issue, it couldn’t have been more straightforward.  Even with the fiddling around and bringing up two different multisite … sites with HTTPS, I was done in less than 90 minutes.  I’m not entirely sure why I had the CPU problem – in the last 10 months, it’s happened one other time, also for no apparent reason – so I’ll be watching the CPU monitor closely over the next day or so.

If you’ve been toying with the idea of SSL on your Web site, and you have access to Let’s Encrypt, you might give it a try.  All told, it was free to acquire the certificate and activate the settings for WordPress to support it.

What ISN'T Cloud Computing?

I finished my cloud computing manuscript a few weeks ago and have detox’d a bit now, although I did speak about it yesterday.  It’s given me a chance to get some perspective on cloud computing and firm up some thoughts I’d been leaning towards anyway.  As I mentioned at yesterday’s session, cloud is shifting from a marketing term to something with a definition.  Which doesn’t stop every technology company’s marketing department from trying to wrap its product as either cloud or something-as-a-service.  The latest one I came across was desktop-as-a-service.  I loved Bob Lewis’ (I use his name like I know him!) recent riff on leveraging cloud concepts internally:  “… the Conjunction Society of America will soon offer ‘as’ as a service.”

One thing I’m more firmly committed to is the idea that so-called private cloud is marketing hype.  Someone may actually be doing this but the ones I’ve heard of sound like they’re hosted, virtualized services.  And there’s nothing wrong with that, but they’re not cloud just because you traverse the Internet to get to them.  If you are opening up a remote terminal window that looks like a Windows desktop running Windows applications, you’re not using cloud computing.

Data Disappearing Due to Latency

Two comments were made at yesterday’s session, both by people at law firms who have bought something billed as cloud but which isn’t.  The first lawyer noted that they were having significant issues with data not being up to date, so that you might log in to your calendar only to find that appointments that had been there before were now missing.  The provider’s explanation was that that’s how the cloud works.  But that’s the marketing cloud.  The whole point of cloud computing, as I understand it, is that the resources are pooled in such a way that this sort of latency doesn’t happen.  If this is not in fact just a problem with insufficient bandwidth – trying to pour too much honey in the funnel and having to wait for it to empty – then it sounds like these are not really cloud systems but just hosted ones.  That’s the sort of problem I’ve seen in Lotus Notes, Microsoft Exchange, and the like – in other words, systems with some notoriety for being complex and/or clunky, and not built for cloud computing.

Our Cloud Reboots Nightly

The second comment was that they experienced substantial slowdowns in using their systems until the cloud provider shifted to a nightly reboot of their (Windows) servers, which enabled the applications to run more smoothly.  The downside is that the entire private cloud vaporizes for 30 minutes every morning while the system restarts.  Anyone who has administered a Windows system knows the benefits of bouncing your servers – I had to do this about 15 years ago to ensure that our printing cost-recovery system wouldn’t hang with a bunch of print jobs in queue.  And yes, hahaha from the Unix folks, who rarely find themselves having to reboot due to a memory leak or other problem.  I run a pair of Linux servers and that’s my experience too – they can stay up for weeks at a time without any need to restart.

If you have to bounce your servers nightly to keep operations running, you’re not in the cloud.  I think the NIST definition’s resource pooling and rapid elasticity – resources that appear effectively unlimited to the consumer – underscore why: partly because you’re maxing out your resources, but also because these are clearly legacy applications with legacy problems that have been placed on virtual servers.  Virtualized servers are a fantastic breakthrough of the last 10-15 years, whether hosted or not.  But they’re not cloud.  A cloud provider with this problem would be forced to choose between bouncing ALL customers and disabling connectivity for 30 minutes, or fixing the problem.  The rapid, iterative updating that happens with software-as-a-service providers means they’d fix the problem rather than fall back on a nightly restart.

None of which matters a jot.  Lawyers need to use the technology best suited to how they practice and that meets their regulatory and other obligations.  Cloud computing is a potent option but many firms have used hosted services for years, if not decades, enabled by broadband Internet or dedicated lines.  Just don’t confuse cloud computing with hosting companies that market themselves as cloud.