WPStagecoach saved my life


Quit messing around with lesser staging processes and get the real deal.

I don’t mean to be too glowing or make this seem like some infomercial endorsement, but I really do think it saved me from having a heart attack the past couple of days. I’ve been using InfiniteWP to manage most of my stable of WordPress sites, and it’s served me well for managing updates and backups; it’s even handy for migrating websites from one host to another. It’s well worth the $120 or so that I paid for it a few months ago. But its staging features aren’t great.

Part of the problem is that it only wants to install the staging site as a subfolder of the main site. It also makes a copy of the database on the production database server, just with a different table prefix. I shouldn’t have to tell you why this is not great from a performance and quota standpoint. The other problem is that it doesn’t provide much information when things go wrong. Ideally, I want my staging sites on separate subdomains, but IWP just can’t do this, and the documentation is very mum about it. I have a support ticket open with them right now to figure out why I was unable to clone a particular client site, and to make sure that this paragraph is correct. What I can tell you is that I spent days trying to get a proper staging site set up for my client using IWP.

It’s not all their fault. I’m taking over a project that seems to have been abandoned by the original developer, and there were many problems with the site that may have contributed to the issues I’ve been having, as we shall see shortly. IWP has three staging options: on the original site, on my configured staging server, or via custom FTP. I was able to clone the site to my custom staging server, but the theme didn’t operate properly. I believe this may have been a problem with hotlinked theme assets, though I haven’t figured it out yet.

I spent days creating subdomains and updating DNS on the client site, and couldn’t figure out why IWP kept giving me “error: check your hostname” when I tried to update things. I figured it was a DNS propagation issue between the server hosting my IWP instance and the client’s host. I usually only work on sites I host directly, and this was the first time I actually had to use the staging features. I was getting very anxious. I had wasted several days, was already dealing with an irate client, and was starting to get a panicked feeling whenever I worked on the project.

So I decided to go another route and explore other options. I read through several blog posts on WordPress staging sites, and one name that came up repeatedly was WPStagecoach. At only $12 for a month, I signed up for a trial and had the staging site up in less than an hour. No kidding.

The setup process was impressive. Getting the plugin installed and activated was pretty standard, and creating the staging site was very user friendly. It started off by scanning the site for large files, found a backup archive, and asked whether to exclude it. Then it started creating a tar file of the site to move to staging, showing me a status percentage as it went. This was very much needed, considering IWP had been “working” for hours without so much as a log update. After the tar process completed, I did get an error that the archive was missing files, and was asked whether I wanted to abort, retry, or “proceed fearlessly.” I retried, waited another five minutes, and got the same error, so I went ahead and pressed proceed. Another five minutes, and BAM. There was my staging site, and it looked perfect.

One thing that really impressed me was that after the staging site was created, I was given a list of errors that WPS had found, mainly places where the site’s URL was hardcoded in the theme templates. These are likely why I had the rendering issues on my previous staging attempt. So now I have a list of files to target, as hardcoded URLs will play havoc with my development environment as well. This feature shows where WPStagecoach shines as a specialized product.

WPS hosts the staging site on their own servers, giving each site its own subdomain. I got ten with my account, which is way more than I’m going to need anytime soon. So now I can proceed with the next step on this project, which is getting our MemberPress module up and running. Then I’ll be able to see if pushing changes back to the live site is as easy as creating it in the first place. If my experience so far is any indication, it’ll be a cinch.

IT fiction: The Phoenix Project

Thoughts on the first half of the business book

I’ve been reading The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win the past couple of days. It’s an interesting book that takes a fictional approach to teaching “the Three Ways,” a set of DevOps patterns and principles. Although some of the setup seems a bit contrived, the writing is good enough that I found myself blowing through half the book in two days, reading well past my bedtime last night.

Part one of the book is a journey into enterprise IT hell, as our hero, Bill, is promoted from his small operations group to IT director for the large automotive parts company he works for. They’re in the midst of preparing for a huge software rollout, which is bound to fail, and Bill struggles to get a grip on things before they inevitably crash and burn. In short, it’s a train wreck, and the authors use it to start introducing the reader to change management and DevOps concepts.

I think anyone who’s ever worked in an enterprise environment will have PTSD from reading this; I know I sure did. Although it’s aimed squarely at helping workers in larger firms understand these best practices, I think it may be useful to smaller operators and teams like the one I work with. The book was written more than six years ago, which seems like a lifetime in IT, but it doesn’t get into the details of any actual tech tools, focusing instead on process. In fact, the change management system they use in the book is literally postcards on a whiteboard, and the description of the rest of the environment is generic enough that the specifics are irrelevant.

Part one ends with Bill quitting after too many of his warnings are unheeded by the CEO, and part two starts with said CEO seeing the light and bringing him back in as they struggle to work together and save the company.

I’m already thinking that this will be one of those books that I recommend to all my IT colleagues. I may buy a few copies and send them to a few people I’m working with. I think it could be a valuable book for people who haven’t actually operated in a large corporate environment. It may be good for stakeholders as well. Hell, it might actually be good to give a copy out as a sales tool next time we have a big prospect.

One thing that I’ve taken away from the book so far is the breakdown of four types of work: projects, internal IT tasks, changes, and unplanned work, which I’ve always referred to as firefighting. They describe it as anti-work, which is an apt description, and I’m going to be more cognizant about the type of work that I’m doing from day to day.

The Phoenix Project falls into an interesting class of book that I haven’t run into before: business fiction. I’m curious whether there are others like it. I’m sure the situation it describes is real enough, probably cobbled together from various real experiences, names changed to protect the innocent and all that. The first-person voice used by the authors is a style familiar from many business books going all the way back to Dale Carnegie, but I don’t think I’ve ever seen it deployed in quite this way, with the book as one large case study.

Besides the operational side of things, there were a couple of work-life details that stuck out like a sore thumb. During the failed deployment of the new software product, the entire core project team is forced to pull an all-nighter trying to restore operations, and then spends many long days during the following weeks trying to shore things up. After Bill’s promotion to IT director, he seems to lose all grasp on work-life balance. He’s reading a story to his kid and means to look up something about Thomas the Train when he gets drawn into a work email and then another call. The situation completely disrupts his family life. Another employee at the firm, Brent, the key man with a hand in seemingly every system at the company, has gone years without taking a vacation where he wasn’t on call.

Apparently these two issues will somehow be resolved as Part Two progresses, but there was one detail about Bill’s circumstances that really had me shaking my head. Near the end of Part One, as he’s fretting over losing his career, he questions how they’re going to pay off their second mortgage and start saving for their kids’ college. Apparently they were just treading water, and the unexpected promotion had finally put them on the right track. This detail caught my attention. Perhaps it was included to appeal to a broader base of readers, or to elicit sympathy, but to me it struck me as slightly incongruent with the rest of Bill’s disciplined personality.

Maybe I’m reading too much into it. If anything, The Phoenix Project has reminded me of the life that I don’t want. I spent four years working at an enterprise firm, and I came out of there in a rough way. I’m going to need to think long and hard before getting back into a leadership role at a large firm, with the type of responsibility where I’m on call for emergencies in the middle of the night, or sucked into some project deployment that requires anything resembling a war room.

I’ll find out how life changes for Bill and the employees of Parts Unlimited soon, as I’ll probably wrap the book up over the next day or two. I’m looking forward to getting copies in the hands of a few more people to see how they like it, and, more importantly, to see what effect it has on our operations and service delivery.

Mobile Device Management


Small business deployments are still too cumbersome

Today is going to be a busy day. We’ve got a small party to host, so I’ve got to do a bunch of household cleanup, roast a pork shoulder, bake a cake, and then entertain seven to ten children plus parents in the backyard. As if that weren’t enough, I’m behind on both my WordPress project and the Substack post for Monday, which is about bitcoin.

Work picked up a bit last week. I’m helping roll out Git best practices for a software development firm, which is the kind of challenge I’m looking for, and dealing with a failed mobile device management (MDM) solution that I rolled out several years ago and that has been summarily ignored since then. The latter is not what I’d rather be doing.

Microsoft’s MDM, Intune, has evolved over the past few years and, like most Microsoft services, has gone through several iterations; it’s a maze of admin dashboards, documentation, and licensing products. Still, it seems vastly superior to the product we’ve been using from IBM, MaaS360. Even so, figuring out the requirements for a small business client is a huge pain. We’ve been dealing mainly with Apple devices, which means managing all the end-user accounts. Getting the devices enrolled requires managing a signed certificate from Apple (another account), and deploying a device requires not only a configuration profile but additional apps on the device for everything to work.

For our initial MaaS360 deployment, the requirements were pretty simple: the customer mainly wanted to lock down the browser on the phones for content filtering. It was an arduous process, even for a first-time deployment. Setting up the device profiles and testing took me several hours; then another associate of mine had to go through each device, setting up iTunes profiles for each user and downloading our management application. Then, after we deployed it, we discovered that GPS tracking wasn’t working: permission needed to be granted individually on each device.

This initial deployment went unattended for almost two years. Then we got a request to pilot a new service app on one of the phones, and when I went back to check the tenant, all but two of the phones hadn’t checked in to the portal in over six months, and more than half hadn’t in over a year.

By some stroke of luck, one of the two belonged to the individual selected to pilot the new service app, so I was able to proceed with planning for that. I spent the rest of the morning trying to acquaint myself with Microsoft’s MDM offerings. Since most of our clients are on O365, it makes sense to take advantage of whatever is available through the platform. I was able to get a device policy set up under our partner account, but wasn’t able to get my personal iPhone to report in to the console, even after several attempts at connecting it to my O365 Exchange account.

Then, several hours later, after getting a Teams notification, I was prompted to install the device management profile, as well as two other apps, one for a “company portal”, and the Microsoft Authenticator app. Then, I was prompted for a managed Apple ID, and that’s where I stopped for the day.

I decided that if I was going to be forced to redeploy management to a dozen or so client devices, I had best start communicating with the client, so I spoke to them. There had been numerous personnel changes in the past few months, and a lot of other processes were being re-evaluated, which made it a good time to put some of our own in place. First off, a freeze on any device purchases or equipment transfers without keeping me in the loop. (Outsourced IT is usually an afterthought when it comes to hiring and firing.) Second, we were going to audit all existing devices, making sure we have a record of which devices we think we have and who they belong to. That would give us some time to evaluate whether we can move management over to O365, or whether to redeploy with the current solution.

I pulled some spreadsheets down from the management portal and dumped them into the client’s SharePoint site, then scheduled a Tuesday meeting with the pilot user for the new app.

Next week, I’ll have to do some investigation into Apple Business Manager, to see if it allows us to manage user IDs as well as devices. We can barely depend on this firm’s employees to manage their one AD account, let alone another set of Apple IDs. It’s management hell. I’ll also have to draft some written policies for device and user onboarding and so forth. Eventually, I’d like to enroll the client firm in the carrier’s device provisioning program, so new devices come enrolled with minimal supervision. That will likely be a slog for this small firm.

On the brighter side of things, this may force me to develop some concrete MDM deployment best practices that will make me a superstar. I’m not aware of any PowerShell tools that can automate any of this process. Even turning on MDM within O365 requires clicking a box in the admin portal, and the Apple certificate provisioning requires setting up accounts and downloading a file from one portal into another. Drafting an SOP for the entire process, start to finish, would be valuable.

That will have to wait till next week, because today, I have a party and a very special birthday girl to attend to.

Private, internal blog for the family


Maintaining privacy for your kids by running a private WordPress instance in your home network

Well, I may have finally lost my mind. I quit Facebook over a year ago, but one of the things I do miss is the throwback posts that pop up on your feed, with pictures and posts from one or five years ago. It’s great for those “oh, look how little Elder is in that picture!” kinds of moments. I don’t feel comfortable sharing pictures of my kids on my normie feed anymore, but I still want to do more than just have a folder sitting on my computer somewhere that gets looked at once in a blue moon. Plus, Elder is getting more involved with using the computer, and I wanted to give her a chance to express her creativity without the risk of letting her have a YouTube account. So I did the only thing any sensible tech dad would do: I set up an internal WordPress site for the family to use.

Setting up an internal domain

I’m pretty proficient with Windows domain controllers, and manage a lot of contoso.local-style domains that aren’t externally routable. I decided I wanted the same thing here, so that the site could be accessed only from our local network. That way we can easily reach it from any of our personal devices, and can potentially let friends and family look at it when they visit and join our network.

BIND is the go-to DNS server for Ubuntu, so I started by installing and configuring it. However, I quickly got lost in a maze of .conf files and StackExchange posts trying to get it to work, so I dumped it and installed dnsmasq instead. Dnsmasq relies on simple hosts files instead of the more complicated zone files that BIND uses, which is more than enough for what I need at the house.

I set up my /etc/dnsmasq.conf file as follows, using this guide:

# Don't forward plain names or non-routed addresses
domain-needed
bogus-priv

# Use OpenDNS, not ISP's DNS router
server=208.67.222.222
server=208.67.220.220

# Replace second IP with your local interface
listen-address=127.0.0.1,192.168.1.123

expand-hosts
domain=dahifi.internal

Then I set up my /etc/hosts file with the records I need, pointing to the downstairs server and my development workstation.

127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6

192.168.1.123   dahifi.internal
192.168.1.123   homeboy.dahifi.internal   homeboy
192.168.1.102   oberyn.dahifi.internal    oberyn
192.168.1.123   elder.dahifi.internal   berkley

After saving changes, I needed to restart dnsmasq: systemctl restart dnsmasq. From there I was able to validate the configuration on the server and on external machines using nslookup. Once I was comfortable that things were working, I added my internal server’s IP to my router’s DHCP DNS scope and refreshed the client leases on a couple of devices to make sure they would work.
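For anyone following along, the restart-and-verify step looks something like this. The hostnames and IPs are from my setup above; swap in your own.

```shell
# Restart dnsmasq to pick up the new config
sudo systemctl restart dnsmasq

# From another machine on the LAN, query the dnsmasq server directly
nslookup homeboy.dahifi.internal 192.168.1.123

# dig works too, and +short keeps the answer terse
dig @192.168.1.123 oberyn.dahifi.internal +short

# Sanity check that external names still forward to the OpenDNS upstreams
dig @192.168.1.123 example.com +short
```

Querying the server directly (rather than relying on whatever resolver the client happens to be using) rules out the caching issues described below.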

Note about .local domains

I’ve never had issues with domains ending in .local on my Windows business networks, but Ubuntu may have a multicast DNS service called Avahi running, which hijacks any FQDN ending in .local. Interestingly, this service was missing from the actual Ubuntu server install, which complicated my troubleshooting. The simplest way to get us up and running was just to change our internal domain from dahifi.local to dahifi.internal. Any other non-routable TLD should work as well.

Additionally, Ubuntu ships with systemd-resolved, a network name manager service. It runs a caching stub resolver at 127.0.0.53, which interfered with my troubleshooting as well. My internal domain kept getting routed to my ISP’s search page until I ran sudo service systemd-resolved restart to clear the cache.
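If you hit similar weirdness, it helps to check which resolver is actually answering before blaming dnsmasq. A quick diagnostic sketch, assuming a reasonably current Ubuntu with systemd-resolved (and Avahi, if installed):

```shell
# Is Avahi running and grabbing .local lookups?
systemctl status avahi-daemon

# What is the system resolver actually pointing at?
# (systemd-resolved's stub shows up as nameserver 127.0.0.53)
cat /etc/resolv.conf

# Flush systemd-resolved's cache instead of restarting the whole service
sudo resolvectl flush-caches
```

On older releases the last command is systemd-resolve --flush-caches; restarting the service, as I did, accomplishes the same thing.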

Multisite Docker setup using Nginx Proxy

The SSD Nodes site has a nice write-up of how to run multiple websites with Docker and Nginx, which I was able to use to get our WordPress site up and running. I prefer putting everything in docker-compose files. The only prerequisite is creating the network:

docker network create nginx-proxy

docker-compose.yml:

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

And a second file, blog-compose.yml for the blog itself:

version: "3"

services:
   db_node_blog:
     image: mysql:5.7
     command: --default-authentication-plugin=mysql_native_password
     volumes:
       - ./blog_db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: root_password
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress_user
       MYSQL_PASSWORD: wordpress_user
     container_name: blog_wp_db

   wordpress:
     depends_on:
       - db_node_blog
     image: wordpress:latest
     volumes:
       - ./blog_wp_data:/var/www/html
     expose:
       - 80
     restart: always
     environment:
       VIRTUAL_HOST: dahifi.internal
       WORDPRESS_DB_HOST: db_node_blog:3306
       WORDPRESS_DB_USER: wordpress_user
       WORDPRESS_DB_PASSWORD: wp_password
     container_name: blog_wp
volumes:
    blog_db_data:
    blog_wp_data:


networks:
  default:
    external:
      name: nginx-proxy

You’ll notice that the site the blog is published to is identified by the VIRTUAL_HOST environment variable. In this case we’re still pointing at the “top level” domain, dahifi.internal, and not blog.dahifi.internal. This is due to issues we were having with subdomain resolution on the Nginx proxy, something we’ll have to work on later. Originally, we had our internal GitLab instance running on this host on port 80, and had to take it down for Nginx to work. My next step is to make sure that subdomains work properly, and then reconfigure GitLab and this blog to run under something like git.dahifi.internal.

Image uploads

One additional change I needed to make was to raise the default file size limit of 2M. Following along with this setup tutorial, I added a - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini line to the WordPress container’s volumes, then added the following to the uploads.ini file:

memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M

Then I rebuilt the container with docker-compose down and up, making sure to specify my blog-compose.yml file.
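For reference, those rebuild commands with the compose file called out explicitly (file and service names match the examples above):

```shell
# Tear down and recreate just the blog stack
docker-compose -f blog-compose.yml down
docker-compose -f blog-compose.yml up -d

# Watch the WordPress container come up and confirm uploads.ini was mounted
docker-compose -f blog-compose.yml logs -f wordpress
```

Without -f, docker-compose only looks for docker-compose.yml, which here would rebuild the proxy rather than the blog.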

I still encountered errors trying to upload. WordPress kept throwing

Unexpected response from the server. The file may have been uploaded successfully. Check in the Media Library or reload the page.

I couldn’t find any errors in WordPress’s Apache instance, and PHP looked fine. I eventually found a message in the Nginx container log: client intended to send too large body. It turns out Nginx has its own limits. I added a client_max_body_size 64M directive directly to the /etc/nginx/nginx.conf file, then reloaded it with service nginx reload. Problem solved! Not only can I upload files directly through the web interface, but I was also able to add the internal site to the WordPress app running on my phone, and can upload images directly from there.
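One caveat: editing nginx.conf inside a running container won’t survive a rebuild. A more durable approach, if I understand the jwilder/nginx-proxy image correctly (it loads extra config from /etc/nginx/conf.d/, so treat this as an assumption to verify), is to mount a small drop-in file from the host:

```shell
# On the host, next to docker-compose.yml, create a drop-in with the higher limit
echo "client_max_body_size 64M;" > uploads.conf

# Then mount it in docker-compose.yml under the nginx-proxy service's volumes:
#   - ./uploads.conf:/etc/nginx/conf.d/uploads.conf:ro
# and recreate the proxy:
#   docker-compose up -d nginx-proxy
```

That way the limit comes back automatically whenever the proxy container is rebuilt.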

Elder is already working on writing a story in it now, and I’m looking forward to using this site as an internal bulletin board for the family. Let’s see what happens!

The future of work


Tech trends aren’t favorable for small businesses that don’t pivot quickly

We’re now in the third month of The Great Lockdown, and for most of us it’s the new normal. Households and businesses have made adjustments to deal with the restrictions and new procedures needed to keep people safe from the spread of COVID. Many tech firms have officially pivoted to remote-first operations for staff, and many others are doing so unofficially. Perhaps the only question remaining at this point is how far things will return to the old ways. My guess: not much.

I’ve been working from home for several years now, and have some insights into where things are headed, at least in the context of knowledge work. My day job involves supporting a number of small businesses, some of which operate with their staff completely distributed, so my experience is mostly in the medical and professional services industries. Leave aside, for the moment, traditional brick-and-mortar businesses that rely on foot traffic, like restaurants and lifestyle businesses. And I am of course writing while holed up in my cozy house with my family, under no pressure to leave for work and deal with the public. I have friends, neighbors and clients who are working the front lines in healthcare, and I have deep gratitude for them and for the other “frontline heroes” keeping the grocery and retail supply chains moving for the rest of us.

Amazon and other digital platforms have been eating away at the real world for many years, and COVID has accelerated the transition dramatically. To steal a phrase from Tobi Lütke, CEO of Shopify, “2030 came a decade early.” People around the world have been forced to undertake massive changes in reaction to the pandemic, and we’re so far into it now that it’s hard to believe things were ever different. Future Shock has been barrelling at us for decades, with the advent of personal computers, the internet, smartphones and wireless connectivity impacting generation after generation. It seems the Singularity is nearly upon us. The last three and a half years of the Trump administration have seemed like a decade in themselves. And all of that pales in comparison to the changes we as a society have had to undergo these last three months.

To understand where we’re going, we must look back for a moment and review the changes in economic and labor practices that have occurred since the end of World War II. Since then we’ve transitioned from long-term, single-company employment that traditionally ended with a gold watch and a pension, and replaced it with a short-term, shareholder-focused gig economy. Productivity and investor profits have steadily increased for the past thirty years while real wages have remained stagnant. And it’s doubtful that we’ll see any progress on this issue, regardless of who holds the White House at the end of January next year.

Immense social pressure from large swaths of society, as we are currently seeing with the Black Lives Matter protests around the world, can be effective at spurring political action at the national level, but I remain skeptical that we will see much movement in the economic sphere in the next twelve to eighteen months. Much depends on whether we can continue to deal successfully with the pandemic, and I am not an optimist on this point. I will note, though, how relatively quickly universal basic income went from being a humorous component of Andrew Yang’s presidential platform to a serious contender for addressing COVID-related economic woes. For now, we’ll have to settle for our twelve-hundred-dollar stimulus checks.

If one is to use the stock market as a bellwether, it would appear that the economy has already rebounded from the huge hit it took in the weeks following the lockdown. The lack of any meaningful relationship between stocks and reality should be apparent to most. Recent gains have either been driven by the continuing extraction of value from small and local businesses by Amazon and other ecommerce platforms, or by the harvesting of attention and personal data by Facebook, Twitter, Google and others. And now the Fed has turned on the spigots through what it calls “unlimited quantitative easing,” releasing over six trillion dollars to banks, where most of it has gone into the hands of those who need it least. This has caused no lack of excitement in the Bitcoin community, which has taken the “money printer goes BRRR” meme to the next level while building the narrative of bitcoin as a way to defund the state, short (as in stocks) governments, and opt out of the traditional central-bank-based fiat system. I count myself among them, but I’ll save that for a future installment.

Source: VisualCapitalist

That said, data from LinkedIn does show that hiring is up for tech workers. Platform firms that enable developers and entrepreneurs to build on top of their systems are on somewhat of a spree lately. Twilio, a text messaging provider; Stripe and Square, two internet payment firms; and GitLab and GitHub, built around a popular software development tool, are all rapidly scaling up to meet demand. And there seems to be no shortage of software developer or engineering jobs from what I see, especially for those in government-adjacent industries or those with heavy math and science qualifications, like artificial intelligence and data analysis.

It also appears that most of the healthcare industry, especially the mental health profession, will be safe from any negative economic impacts of COVID. There was previously no shortage of people needing treatment for depression or post-traumatic stress disorder, and those numbers are rising as people deal with the stress of the pandemic. Indeed, there are a number of platform companies offering telemedicine-related services for therapy and non-emergency care that seem to be seeing increased uptake. I myself am currently involved with one.


Let’s turn our focus to the technical side of our new normal.

Perhaps the biggest early winner among the remote-work-enabled companies was Zoom, which became synonymous with video conferencing in the weeks following lockdown. Then came the inevitable backlash, following both privacy and security concerns around Zoom’s default settings and misleading encryption claims. Amazon has been a big winner, both because of quarantine-induced shopping and because of cloud-driven usage on AWS. Google Cloud and Microsoft’s Azure are no doubt doing well too, and there are indications that Apple may be trying to enter the space as well.

Microsoft has been in a bit of a battle recently as well, after the CEO of Slack, a remote workspace collaboration service, said that Microsoft was obsessed with “killing us” and announced an integration partnership with Amazon AWS. I’ve been using both Teams and Slack for some time now, in different roles, and I don’t think either is in any immediate danger. I’ve got theories about which type of company chooses one or the other: Slack for more tech-focused firms, especially development houses, and Teams for those that already rely on other Microsoft services such as Exchange Online or SharePoint. I don’t have any hard evidence on that yet, and it may be that a lot of people, like me, will continue to use both for various teams.

My day-to-day responsibilities involve managing Microsoft Office 365 cloud services for clients. We’ve been pushing them off of older SMTP/IMAP services to Exchange Online for several years now, since it’s just plain better. What amazes me, though, is that most of our clients have no idea of the myriad other services that come bundled with O365 these days. In fact, I have a hard time keeping up myself. Besides Exchange, Teams and SharePoint are probably the most prominent, and the ones we push the most, but there are a number of other useful apps bundled with O365, like Forms, Flow and Planner. The problem with O365 is that there are so many product offerings that it’s impossible to keep up unless you dedicate a lot of time and resources to them. The issue is multiplied with Azure and AWS, each of which has dozens of services.

I’ve spent the last dozen years of my career building and managing business networks and servers and supporting end users and their equipment, what’s known as managed services in the industry. I realized several years ago, as the app economy took over, that these types of services would soon become a commodity, and that competition, especially low barriers to entry, would drive prices in a race to the bottom. I think I’m being proven right on that point, but what I didn’t really anticipate was that it’s not just managed services providers being hit by this, but essentially all industries and verticals.

The overwhelming drive today among businesses is toward fewer and fewer employees. Tech tools are increasing employee productivity, and apps and automation are compounding their efforts. Some jobs are more resistant to this than others, but the number of administrative staff needed to manage a large organization is rapidly diminishing. As I noted last April:

Blockbuster, at its height in 2004, employed 84,300 workers with a $5 billion market cap. Today Netflix is valued at $162 billion with only 5,400 employees. Instagram had just 13 employees when it was acquired by Facebook for a $500 million valuation in 2013.

I wrote about businesses as operating systems last month, so I’ll not repeat myself wholesale save to say that successful businesses in the next year or two are going to be the ones that figure out how to utilize these digital tools to allow them to scale. Even currently sustainable businesses with no interest in growth are going to be forced to deploy them, as the pressure to become more lean will build up from competitors that do. My focus remains on exploring how to deploy these tools in a way that helps my clients streamline their operations. My working hypothesis right now is that businesses that do will survive and thrive, and those that don’t will languish and die.

We’ve seen how quickly the world can change. There’s no going back.

Git-ing it done

black green and blue coated wires

Roll your own GitLab

I had trouble falling asleep last night. Younger crawled in our bed just as I was dozing off and kept squirming, so I slept in her bed. It faces East, so I woke up at five and tried to go back to sleep. I heard Elder up, so I got up and started the day. She’s sitting across the room from me, looking up Valentine’s Day gift ideas for the boy in our quarantine bubble down the street. Her sister has been ribbing her about it for days now.

One of our Zombie, LLC clients wants help standing up an internal GitLab server. It got me thinking, so I went ahead and set up a GitLab Docker instance on my downstairs Ubuntu server. I figure it’s good practice. “Do the job you want” has always been good advice, so setting it up was worth the time. Plus it only took about fifteen minutes. The main problem I ran into was an SSH conflict with the existing service on the host, and since you can’t change the published ports on an existing container anyway, I just deleted the container and started over. I’ll probably move SSH if I ever do a real deployment, but here at the house the HTTP functionality is enough.
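For reference, a Compose service for the setup I ended up with looks roughly like this. The port numbers and paths here are my own choices, not anything canonical; the key bits are publishing container port 22 on something other than the host’s 22, and telling GitLab about it via `gitlab_shell_ssh_port` so clone URLs come out right:

```yaml
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    container_name: 'local-gitlab'
    hostname: 'gitlab.local'
    ports:
      - '8929:80'    # web UI
      - '2222:22'    # remapped SSH, avoids the host's own sshd on 22
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.local:8929'
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
    volumes:
      - './gitlab/config:/etc/gitlab'
      - './gitlab/logs:/var/log/gitlab'
      - './gitlab/data:/var/opt/gitlab'
```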

There’s also the mail issue. I didn’t want to use the root account to set up my repos, but the workflow around new accounts wants to send an activation email. I tried installing sendmail on the host, but the password reset didn’t work. I doubt it will work without a publicly routable DNS entry pointing back to it, or proper SMTP services, which I don’t want to mess with right now. Thankfully I found a password change form in the admin interface that didn’t require knowing the old password, and got up and running.

I am nowhere near as strong with my Linux management skills as I am with Windows, where everything is pre-packaged and somewhat unified. I can stand up domain.local services lickety-split, and have a library of PowerShell scripts to set up AD, DNS, and DHCP services within a domain. I have never actually taken the time to set one up at home, though that time may soon be approaching. I’ve been wanting to investigate the use of Ubuntu server as an alternative or supplement to Windows-based AD services, but part of me is skeptical that such a setup is even viable for workstation authentication and services. But I digress. The point I’m trying to make here is that I’ve been in awe of Unix sysadmins ever since I worked at an internet service provider back in the late ’90s and watched our systems guy pop in and out of terminal shells like a wizard. I’ve never felt adequate in that regard.

I made some good progress yesterday working on the WordPress project, and have started converting the client’s site over to the new theme. I’m going over the demo site, examining the Bakery build they’ve got set up, and recreating it using the client’s assets. This allows me to get a bit more familiar with the framework that the theme author is using, and hopefully glean some best practices at the same time. It’s a two-steps-forward, one-step-back process. There are some strange bugs that popped up. Activating WooCommerce seems to bring the site down completely, as does changing the theme back to the original. Then at one point, while I was working on the new header, the previews stopped working completely and would only throw 404 errors. The changes still show on the actual site, so I made do while making edits.

Usual best practices for WordPress development and git repos are to exclude the entire WordPress directory except for whatever theme and custom plugin you’re developing, but since in this case we’re working on an entire site, I’ve added the entire WordPress directory and associated SQL database files. The wp-content/uploads directory is mounted outside the container, along with plugins and themes. I haven’t yet pulled this repo on another machine, so I don’t know if it’s going to work. My main concern is how I’m grabbing the database. Managing PostgreSQL during my Django projects has always been a bit of a pain, as I never learned how to incorporate it into my source control. I’ll have to spend some time correcting this deficiency.
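For contrast, the conventional theme-and-plugin-only approach usually boils down to a whitelist-style ignore file something like the one below. The theme and plugin names are placeholders, and it assumes the repo root is the WordPress root:

```
# .gitignore — track only our own theme and custom plugin
/*
!/wp-content/
/wp-content/*
!/wp-content/themes/
/wp-content/themes/*
!/wp-content/themes/my-custom-theme/
!/wp-content/plugins/
/wp-content/plugins/*
!/wp-content/plugins/my-custom-plugin/
```

The un-ignore (`!`) lines have to re-include each parent directory before the child, since git won’t descend into a directory that is itself ignored.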

Here is a look at the Docker Compose file I am using for my development setup. The SQL mount /docker-entrypoint-initdb.d/backup_to_load.sql gets imported when the container is created; since the MySQL image only runs those init scripts against an empty data directory, it should be ignored on subsequent starts, but we shall soon find out. Also, I haven’t solved the file permissions issues that happen when trying to edit things like the wp-config.php file. I’ll have to save that for a later time.

version: '3.8'
services:

  wordpress:
    container_name: 'local-wordpress'
    depends_on:
      - db
    image: 'wordpress:latest'
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wordpress_password
      WORDPRESS_DB_NAME: wordpress_db
    volumes:
      - "./Wordpress:/var/www/html"
      - "./plugins:/var/www/html/wp-content/plugins"
      - "./themes:/var/www/html/wp-content/themes"
      - "./uploads:/var/www/html/wp-content/uploads"

  db:
    container_name: 'local-wordpress-db'
    image: 'mysql:5.7'
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - './data/mysql:/var/lib/mysql'
      - './data/localhost.sql:/docker-entrypoint-initdb.d/backup_to_load.sql'
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wordpress_password

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Digital Addiction

flat screen computer monitor turned on beside black keyboard

Reclaiming my brain from the pull of the Feed.

So I deleted Twitter off my phone yesterday. I really did it. It just took a chapter or two of Digital Minimalism to convince me that I needed a break.

Getting rid of Facebook on my phone about eighteen months ago was one of the healthiest things that I’ve ever done. It was such a time suck, and I spent way too much time on the platform arguing with people. On the one hand, it did lead to me writing quite a bit, and probably led to my political career, but between the toxic people that I had connections with on there, and all of the privacy problems that were going on there, it was just too much. I had to leave. Given the Cambridge Analytica scandal and all the other bad news about Zuckerberg and the way they manage things over there, I’ve had no desire to go back. I’ve logged on a few times to deal with some messages or check on some family members, but I don’t browse the feed at all.

I always considered Twitter a bit different, since I was curating my feed, and it wasn’t just random friend-of-friend connections. Just because someone wanted to follow me, I didn’t have to follow them. Or vice versa. I still see Twitter as a source of news and information, and being able to remain pseudonymous was part of the main draw as well. Still, I spent way too much time on it, picking up my phone whenever I was idle: watching TV shows with the family, sitting out on the deck, or waiting somewhere out in public.

So I removed it Sunday morning and went about my day. The absence was felt immediately. I found my phone in my hand throughout the day, and I found myself wondering why I was holding it. Then I realized that the habit was still there, but I had short-circuited it with the app gone. It happened several times during random moments, like waking from a dream. I took the kids to a nature park to get out for an hour or two, and felt the urge to pull my phone out while the kids were finishing their lunch. No need. I set the slip and slide up for the girls outside and there’s that habit again. Nothing to do. Watching a movie after dinner, sitting on the couch, I’m always checking my feed. Instead, I worked on the Sunday crossword.

Today’s going to be interesting, since I don’t have the same kind of blocks set up on my workstations. There are ones out there that will whitelist or blacklist certain sites on a timer. I’ve heard of people using them to make sure they get their work done, but I never went that far with it. There’s lots of downtime during the day, when I’m waiting for a download or some sort of progress bar, when I pull up Twitter and browse the feed. That’s going to be the real test. I wonder if I can redirect that energy to something productive, like doing a lesson on LinkedIn Learning or FreeCodeAcademy, or doing one of the competitive coding challenge sites? I have been wanting to take a look at Rust…

I do have a project to finish that is going to take several weeks of deep work. I’m really going to have to delve into WordPress’s innards and really figure out how the theme system works, then actually develop a design for a site. I had been attempting to figure out how this site’s current theme had been developed, but it’s such a mess, and I don’t know if I have it in me. All of the site’s functionality was just dumped into WordPress’s TwentySixteen theme, without even a child theme setup. And the dev hardcoded all of the scripts for Google Analytics and everything else directly in the template files. I’ve got fifty-four plugins, and trying to figure out which ones are needed for the existing site is a mess.

Anyways. There was one moment yesterday when I desperately wished I still had Twitter on my phone. I was driving the kids to the aforementioned nature park, travelling down a two lane divided highway, when there was some sort of traffic slowdown. There was a car pulled off to the right just before an onramp. As I passed it I thought we were clear, but the cars on my left were still slowing up. There, up ahead, was a black man on a horse, just trotting his way down the highway. And there, the perfect tweet formed in my mind: “Is it legal to ride a horse on the parkway? Asking for a friend.”

Well, maybe not. But the next few days will be an interesting experiment to see what happens when I reclaim my brain. Will it unlock my creative superpowers, or have astonishing effects on my mental health and well-being? Probably not anything that dramatic. Being in the moment certainly won’t hurt, and redirecting that nervous energy somewhere else will most likely be helpful.

Here we go.

Phishers step up their game

man standing on building rooftop during night time

This attack is not new, but the tactics are evolving, and some people are still behind the curve.

I’ve been managing business networks for some time, and I’ve witnessed phishing attacks, where attackers attempt to steal a victim’s email login information, evolve over the last few years. Yesterday I was alerted to a new variation on this traditional attack that I thought was worth sharing and dissecting, as you’ll see.

Almost all of the attacks that I’ve seen stem from an email that a victim receives. Usually it’s from someone that the victim has corresponded with in the past. The subject line and body vary, but there’s usually an external link directing the victim to download some secure file. Normally, the victim arrives at a page that looks like a Google or Microsoft landing page, but of course it’s a fake, set up to steal the victim’s credentials.

If the phishers are successful, they’ll have gained access not only to the victim’s mailbox, but also any associated document storage systems like Google Drive, or Microsoft OneDrive or SharePoint. From there it’s all over: the attackers can download whatever they need, or if they discover that they’ve infiltrated a high-value target, they might lurk and prepare additional attacks.

In one particular case that I was involved in a few years ago, attackers managed to phish the CEO of a company. They discovered that the CEO was going to be travelling from the East coast to the West, and waited until the executive was thirty thousand feet in the air to launch a fake-CEO attack, requesting that the finance director wire tens of thousands of dollars to the perpetrators’ bank account as soon as possible. In this case, there were enough red flags that the attack was thwarted, but not before the attackers had used the CEO’s mailbox to resend the phishing attack to everyone in their contact history.

And so the cycle repeats.

How not to be a victim

Normally, there are numerous red flags when phishing attempts happen, but it still surprises me the number of requests I get from people asking me to inspect an email for legitimacy.

Sometimes it’s as easy as examining the sender’s address, or the actual link in the email, and finding that they don’t match. If Jane Doe’s corporate email is jdoe@corp.com, but your email client only displays “Jane Doe”, you might need to hover your mouse over the name to see that the email is really from a different address altogether. Most modern email clients have updated the way they display senders, making sure the actual address is shown as “Jane Doe <jdoe@corp.com>” or something similar.

However, there are still a number of businesses that haven’t taken precautions to protect their own email systems from being spoofed. That’s to say, there may be nothing stopping someone from setting up a rogue email server and sending email as anyone at that company. There are several standards, known as SPF, DKIM and DMARC, that protect against this, so you may want to make sure that your domains are covered.
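You can check these records yourself with any DNS lookup tool (`dig TXT yourdomain.com` for SPF, `dig TXT _dmarc.yourdomain.com` for DMARC). As a rough illustration of what you’re looking at once you have the record, here’s a toy parser for a DMARC TXT record; the sample record and domain are made up:

```python
# Toy parser for a DMARC TXT record -- illustrative only.
# A missing record, or p=none, means spoofed mail claiming to be
# from the domain may still get delivered.

def parse_dmarc(txt_record: str) -> dict:
    """Split a record like 'v=DMARC1; p=reject; rua=...' into tag/value pairs."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@corp.com"
policy = parse_dmarc(record)
print(policy["p"])  # the enforcement policy: none, quarantine, or reject
```

The tag you care most about is `p`, the policy receiving servers are asked to apply to mail that fails authentication.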

The flag that I look for is where the link is pointing. Just like email addresses, these URLs can be spoofed. Modern rich-text or HTML mail clients, which allow special formatting, can be used to trick users with links that misdirect to hacked sites. So always check the URL. That official-looking login page for your Office365 account might just be a fake sitting behind someone’s hacked WordPress site. CHECK. THE. URL.
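The same check can be mechanized. The sketch below compares a link’s actual hostname against the domain the user thinks they’re visiting; the hacked-blog URL is an invented stand-in for a compromised site hosting a fake landing page:

```python
# Sketch: does this link actually point at the domain it claims to?
from urllib.parse import urlparse

def link_matches_domain(url: str, expected_domain: str) -> bool:
    """True if the URL's hostname is the expected domain or a subdomain of it."""
    host = urlparse(url).hostname or ""
    return host == expected_domain or host.endswith("." + expected_domain)

print(link_matches_domain("https://login.microsoftonline.com/", "microsoftonline.com"))   # True
print(link_matches_domain("https://hacked-blog.example/o365/login", "microsoftonline.com"))  # False
```

Note the suffix check includes the leading dot, so a lookalike such as `microsoftonline.com.evil.example` doesn’t pass.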

These tips alone should prevent most people from falling victim to one of these attacks. If I’ve been drawn into investigating at this point, I usually go a step further and try to get the fake landing page taken down. Sometimes it’s easy to find the company whose site has been hijacked, and usually a courtesy call is enough for me to consider my good deed done for the day. Sometimes the site is set up by the hackers themselves. A ten dollar web domain with a three dollar hosting account, paired with a free WordPress template, is enough to start with. In these latter cases, I have to do a bit more work to find where the domain is registered and where the site is hosted. Then, an email to the company’s abuse department, and I’m done.

How you can stop it

And in almost every case that I’ve seen, it’s been a WordPress site that has been hosting the fake landing page. As it’s the software behind more than a third of all websites on the internet, it’s not surprising. But if you’ve got a business website running on WordPress and you’re not maintaining it or paying someone to manage it for you, then not only are you exposing yourself, your firm, and your clients to hacks, but you’re also partially responsible for any victims that fall prey through your site. Update your site, at least quarterly, or purchase a product or hire a firm that can check it on a regular basis for you.

How Chrome marked the site with the fake landing page. Firefox has no such warning.

Making sure the email security protocols mentioned earlier (SPF, DKIM and DMARC) are enabled on your domains will prevent hackers from faking your domain and using it in an attack.

Using updated email software and security applications is also an effective way to mitigate these attacks. Make sure that your email client software is a recent version, or use a cloud-based one, so that you have access to the latest anti-phishing tools. And make sure you use them! It still astonishes me how many small firms haven’t enabled two-factor authentication for their employees, or even looked at the protection services that are available from their email providers.

And one of the most important things you can do is train your staff how not to fall victim to these attacks. There are a number of firms that can deploy phishing attempts against your staff, and provide training to those who fail to avoid it.

Attackers upping their game

What concerned me with the attack I witnessed was the way that the attackers changed their tactics to evade some of the more advanced mitigation techniques that are in place to stop these cybercrimes. A number of enterprise level email security services have the ability to filter out these malicious links and block them from the recipient. They usually rely on some sort of whitelist or blacklist to allow certain domains through. In the case this week, the victim was sent to Live.com, which is Microsoft’s ID portal for Outlook.com and OneDrive accounts. To the casual observer, it looked like a legitimate OneNote notebook, and there was no breach at this point. No doubt most organization administrators would have no problem with users going there.

Of course within this OneNote page was the real trap, a link to the fake landing page. Thankfully the mark in this case, noting that the OneNote page was addressed from a different person than the original email, was suspicious enough not to fall for it. That said, when I was alerted to it and took a look at the OneNote page without the context of the original email, my initial thought was that it was legit. I almost cleared it! A second read turned up some irregular grammar, which is when I noticed the external link and the O365 landing page. Even then, I still had to look up the site’s domain registration, created two months earlier through an Asian registrar, before I was convinced it wasn’t some sort of Single Sign-On configuration.


Technology changes fast, and cybersecurity is a cat-and-mouse game between attackers and the security professionals that protect your personal and business assets from these dangerous breaches. If you need help with managing your infrastructure or mitigation strategy against these attempts, let’s discuss it. Whether it’s email and network infrastructure, securing your website, mock infiltration testing, or employee training, I can help.

Baby slaps

person's left hand wrapped by tape measure

Inch by inch, life’s a cinch

Well I sure did step up my game. I guess I was anxious about writing a big long post for today’s Substack post, so I wrote it last night. Someone put out a message on the company Slack yesterday about a suspicious email, and I almost fell for it. I figured it warranted an exposition, so I spent an hour or so last night writing it up, after everyone went to bed. The experience writing at night, undisturbed, is quite different from my morning writing, when the kids are getting up and wandering in the room every five minutes. I’m not sure I like having a deadline every night like that though, so we’ll keep it limited to Thursday nights. I think I’ll even put it on my calendar just to keep it routine. Done.

I didn’t make any progress on my WordPress client last night; instead, I’ve started reading through the Docker documentation so that I can figure out what the hell I’m doing with my setups. I know a couple commands; it feels like I’m an infant flailing my limbs trying to grab a rattle, but I keep hitting myself in the head instead. I don’t understand half of the stuff I’m reading when I look at some of the image notes in Docker Hub, and trying to read through some of the discussions on GitHub issues is even worse. So I’m just reading through the best practices documentation to try and get a sense of how things run between Dockerfiles and Docker Compose, so that I can load a SQL backup automatically and access my PHP files in both my development environment and in the container’s web services.

And the Docker documentation is pretty interesting. It’s scattered with little goodies like links to Twelve-Factor Apps, and here documents, which is something I didn’t even know about. And if there’s one place I need to step up, it’s my bash skills.

Anyways, I get to keep this post short since today’s a double post day. I thought about writing something for an off day, but figured the ritual of writing morning pages was more important than taking a day off. I’m starting to gather ideas faster, and my writing output has stepped up quite a bit. So I’m going to wrap it up, make some changes to today’s big post before I blast it out to several hundred unknowing Substack subscribers.

Now, time to go grab that rattle by the horns.

Leave it better than you found it

blue plastic trash bins on forest during daytime

Taking over abandoned or mismanaged projects

Taking over a project is a much different beast than starting from scratch. I think everyone knows that it’s easier starting from a blank slate most of the time. What separates the amateurs from the professionals, though, is the documentation they leave for those that come after. For most of my recent work I’ve been the solo technical resource on a small team. Coming into a new project is often a mess, and trying to decipher someone else’s work without any form of documentation is a challenge. I make sure not to leave it that way.

I’ve been doing small business networks for well over a decade, and taking on a new client almost always starts with a network assessment, inventorying the equipment, and running some kind of network or system scanning tool to catch anything we might have missed and put it into a report. Internally, we’ve been using ITGlue to keep track of all our documentation, and it really comes in handy when handing off a client. In the past, taking over an account from another IT management firm has involved sitting down for an interview with the technical resources on the other team, making lots of notes, and then rebuilding the documentation in our system. And it usually involves some sort of roughly drawn-up document with passwords and other critical information.

Lately, it seems we’ve been sending off runbooks left and right, containing all the documentation, checklists and SOPs that we’ve developed for a client. I remember the first time we handed off one to another firm, seeing the surprise in their eyes when we handed over a professional-looking document. It made a real good impression, and I almost considered jumping ship with them. That was over a year ago, and here I am, finding myself going through the same situation again.

My recent consultation work has mostly involved taking over a stable of WordPress sites. I’ve been using WordPress for years for this blog and others, but I’ve usually kept things very simple. Just download a nice theme, and start writing. The sites I’ve been taking on recently are much more complicated. There’s usually two or three dozen plugins deployed, and some sort of complicated theme system in place that has some particular arcane way of adding a page or making changes to a header or footer.

Since my focus here is just about writing, I intentionally decided not to spend any time on presentation. I’m literally still using the default Twenty Seventeen theme that came with WordPress out of the box. I’ve looked at some premium themes for it to give it some zazz, but ultimately decided that the effort wasn’t a priority for me. Not so with the other projects. I’ve been using an Envato Elements subscription to source my themes and templates, but each one seems to carry its own set of required plugins and design methodology. Figuring out how to tweak them is its own challenge.

I recently took over a site for a client. It was mostly in good shape, but had been neglected for several years. I wanted to make some changes to it, but without understanding how everything was put together, it’s proven difficult. On top of that, the original designer used a modified version of one of the default WordPress templates, so the choice was to start delving into the source code or start building from scratch. And again, there were about forty plugins in use, and I’m as yet unaware of a simple way to trace an element in a rendered WordPress site to its source. Plugins will often add elements to the Dashboard UI, or the document editor, and figuring out what goes with what is a slog.

So far, the choice for me has usually been to tear it all down and start from scratch. Cloning production to a staging site and deactivating all the plugins, to see what we’ve got, content-wise, is usually the first step. Then I can go through all the pages and posts to see which elements are missing from the original site. “There’s a shortcode for a slideshow plugin, so let’s note that and re-enable that.” “Why is half of our content missing?” It’s because they used some post taxonomy plugin to put certain content in a separate blog. And so on and so on.

There’s one thing I picked up in the last year or so, called Architecture Decision Records, or ADRs. It’s basically a decision-making artifact that details the reasoning behind taking a particular design approach to something. They’re closely tied to user stories, and can be placed right in a Git repo with the rest of the source code. I’ve been trying to carry some of the ideas behind ADRs into my own projects, and not just software ones. It’s a good practice for any sort of system design, whether that is technical, business, or personal. Leaving these little artifacts behind for future you or for others seems like a valuable practice, and will come in handy when the time comes to tweak something months or years down the line. “Why on Earth did we decide to do x again? Oh yeah, here’s the ADR…”
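A minimal ADR, loosely following the widely used Nygard-style template, is just a short text file in the repo. The content below is a hypothetical example based on this very project:

```
# ADR 0001: Rebuild the site on a new theme instead of patching the old one

## Status
Accepted

## Context
The existing theme is a modified copy of a default WordPress theme with
analytics scripts hardcoded into the template files, and no child theme
was ever set up, so updates would wipe out the customizations.

## Decision
Clone production to staging, deactivate all plugins, and rebuild on a
maintained commercial theme, re-enabling plugins one at a time as the
content that needs them is identified.

## Consequences
More rework up front, but future theme updates won't destroy changes,
and the plugin list shrinks to only what the site actually uses.
```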

I am definitely not a WordPress ninja. As the old saw goes, “the more you learn, the more you realize that you don’t know anything.” Give or take. Managing a web host reseller account and using a dedicated tool to keep WordPress installations and their related plugins up to date is simple enough, but taking over these sites and trying to redesign them with an eye toward user design, ecommerce and SEO is a completely different set of skills than I had hoped to be working on. And I don’t know how far I want to develop it. Most likely I’ll be doing what I can to salvage the projects I’m working on and get them to a state where they can start generating some revenue, then I’ll start bringing in other resources to hand off tasks to. And when I do, I’ll have supporting documentation to hand off to them so that they can quickly get up to speed, a history of how things operate and why they were set up the way they are.