Mobile Device Management


Small business deployments are still too cumbersome

Today is going to be a busy day. We’ve got a small party to host, so I’ve got to do a bunch of household cleanup, roast a pork shoulder, bake a cake, and then host seven or ten children plus parents in the backyard. If that wasn’t enough, I’m behind on both my WordPress project and the Substack post for Monday, which is about bitcoin.

Work picked up a bit last week. I’m helping roll out Git best practices for a software development firm, which is the kind of challenge I’m looking for, and dealing with a failed mobile device management (MDM) solution that I rolled out several years ago and which has been summarily ignored since then. The latter is not what I’d rather be doing.

Microsoft’s MDM, Intune, has evolved over the past few years, and like most Microsoft services, has gone through several iterations and is a maze of admin dashboards, documentation, and licensing products. It still seems vastly superior to the product that we’ve been using from IBM, called MaaS360. Still, figuring out the requirements for a small business client is a huge pain. We’ve been dealing mainly with Apple devices, which means managing all the end user accounts. Getting the devices enrolled requires managing a signed certificate from Apple (another account), and then deploying the device requires not only a configuration profile on the device, but additional apps on the device for it to work.

For our initial MaaS360 deployment, requirements were pretty simple: the customer mainly wanted to lock down the browser on the phones for content filtering. It was an arduous process, even for a first-time deployment. Setting up the device profiles and testing took me several hours, then another associate of mine had to go through each device, setting up iTunes profiles for each user and downloading our management application. Then, after we deployed it, we discovered that GPS tracking wasn’t working. Permission needed to be granted individually on each device.

This initial deployment went unattended for almost two years. We got a request to pilot a new service app on one of the phones, and when I went back to check the tenant, all but two of the phones hadn’t checked in to the portal in over six months, more than half in over a year.

By some stroke of luck one of the two belonged to the individual who was selected to pilot the new service app, so I was able to proceed with the planning for that. I spent the rest of the morning trying to acquaint myself with Microsoft’s MDM offerings. Since most of our clients are on O365, it makes sense to take advantage of whatever is available through the platform. I was able to get a device policy set up under our partner account, but wasn’t able to get my personal iPhone to report in to the console, even after several attempts at connecting it to my O365 Exchange account.

Then, several hours later, after getting a Teams notification, I was prompted to install the device management profile, as well as two other apps, one for a “company portal”, and the Microsoft Authenticator app. Then, I was prompted for a managed Apple ID, and that’s where I stopped for the day.

I decided that if I was going to be forced to redeploy management to a dozen or so client devices, I had best start communicating with the client, so I spoke to them. There had been numerous personnel changes in the past few months, and a lot of other processes were being re-evaluated, which meant that it was a good time to put some processes in place. First off, a freeze on any device purchases or equipment transfers without keeping me in the loop. (Outsourced IT is usually an afterthought when it comes to hiring and firing.) Second, we were going to audit all existing devices, and make sure that we have a record of which devices we think we have, and who they belong to. That would give us some time to evaluate whether we can move management over to O365, or redeploy with the current solution.

I pulled some spreadsheets down from the management portal and dumped them into the client’s SharePoint site, then scheduled a Tuesday meeting with the pilot user for the new app.

Next week, I’ll have to do some investigation into Apple Business Manager, to see if it allows us to manage user IDs as well as the devices. We can barely depend on this firm’s employees to manage their one AD account, let alone another set of Apple IDs. It’s management hell. I’ll also have to draft some written policies for device and user onboarding and so forth. Eventually, I’d like to enroll the client firm in the carrier’s device provisioning program, to get them enrolled with minimal supervision. That will likely be a slog for this small firm.

On the brighter side of things, this may force me to develop some concrete MDM deployment best practices that will make me a superstar. I’m not aware of any PowerShell tools that can be used to automate any of this process. Even turning on MDM within O365 requires clicking a box in the admin portal, and the Apple certificate provisioning requires setting up accounts and downloading a file from one portal into another. Drafting an SOP for the entire process, start to finish, would be valuable.

That will have to wait till next week, because today, I have a party and very special birthday girl to attend to.

Private, internal blog for the family


Maintaining privacy for your kids by running a private WordPress instance in your home network

Well, I may have finally lost my mind. I quit Facebook over a year ago, but one of the things that I do miss is the throwback posts that they pop up on your feed with pictures and posts from one or five years ago. It’s great for those “oh look how little Elder is in that picture!” kinds of moments. I don’t feel comfortable sharing pictures of my kids on my normie feed anymore, but I still want to do more than just have a folder sitting on my computer somewhere that gets looked at once in a blue moon. Plus, Elder is getting more involved with using the computer, and I wanted to give her a chance to express her creativity without the risk of letting her have a YouTube account. So I did the only thing any sensible tech dad would do. I set up an internal WordPress site for the family to use.

Setting up internal domain

I’m pretty proficient with Windows domain controllers, and manage a lot of contoso.local domains that aren’t externally routable. I decided I wanted the same here, so that the site could be accessed only from our local network. That way we could easily access it from any of our personal devices, and can potentially allow friends and family to look at it when they visit and join our network.

Bind is the go-to DNS server for Ubuntu, so I started by installing and configuring it. However, I quickly got lost in a maze of .conf files and StackExchange posts trying to get it to work, so I dumped it and installed dnsmasq instead. Dnsmasq relies on simple host files instead of the more complicated zone files that bind uses, which is more than enough for what I need at the house.

I set up my /etc/dnsmasq.conf file as follows, using this guide. The upstream servers here are the OpenDNS resolvers from that guide; swap in your own, and adjust the listen address for your interface:

# Don't forward plain names or non-routed addresses
domain-needed
bogus-priv

# Use OpenDNS, not ISP's DNS router
server=208.67.222.222
server=208.67.220.220

# Replace second IP with your local interface
listen-address=127.0.0.1,192.168.1.2

Then I set up my /etc/hosts file with the records I need, pointing to the downstairs server and my development workstation. The 192.168.1.x addresses here are stand-ins; substitute your own:

127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6
192.168.1.10    dahifi.internal
192.168.1.10    homeboy.dahifi.internal homeboy
192.168.1.11    oberyn.dahifi.internal  oberyn
192.168.1.12    elder.dahifi.internal   berkley

After saving changes, I needed to restart dnsmasq: systemctl restart dnsmasq. From there I was able to validate the configuration on the server and external machines using nslookup. Once I was comfortable that things were working, I added my internal server’s IP to my router’s DHCP DNS scope and refreshed the client leases on a couple of devices to make sure they would work.
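The validation pass looked something like this (hostnames are from my setup; run the second lookup from a client after renewing its lease):

```shell
sudo systemctl restart dnsmasq
nslookup homeboy.dahifi.internal 127.0.0.1   # query dnsmasq directly on the server
nslookup oberyn.dahifi.internal              # from a client, via the DHCP-assigned DNS
```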

Note about .local domains

I’ve never had issues with domains ending in .local on my Windows business networks, but Ubuntu may have a multicast DNS service called Avahi running, which hijacks any FQDN ending in .local. Interestingly, this service was missing from the actual Ubuntu server install, which interfered with my troubleshooting. The simplest thing to do to get us up and running was just to change our internal domain from dahifi.local to dahifi.internal. Any other non-routable TLD should work as well.
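If you hit the same thing, it’s quick to check whether Avahi is present, and to turn it off if you’d rather keep a .local domain (this assumes a systemd-based install):

```shell
systemctl status avahi-daemon
# Alternative to renaming the domain: stop Avahi from claiming .local
sudo systemctl disable --now avahi-daemon
```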

Additionally, Ubuntu ships with systemd-resolved, a network name resolution service. It runs a caching stub resolver at 127.0.0.53, and it interfered with my troubleshooting as well. My internal domain kept getting routed to my ISP’s search page until I ran sudo service systemd-resolved restart to clear the cache.

Multisite Docker setup using Nginx Proxy

The SSD Nodes site has a nice write-up of how to run multiple websites with Docker and Nginx that I was able to use to get our WordPress site up and running. I prefer putting everything in a docker-compose file. The only prerequisite is creating the network:

docker network create nginx-proxy

version: "3"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy
And a second file, blog-compose.yml, for the blog itself. Note that MYSQL_PASSWORD and WORDPRESS_DB_PASSWORD have to match, or WordPress can’t reach the database:

version: "3"

services:
  db_node_blog:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - ./blog_db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wp_password
    container_name: blog_wp_db

  blog:
    depends_on:
      - db_node_blog
    image: wordpress:latest
    volumes:
      - ./blog_wp_data:/var/www/html
    expose:
      - 80
    restart: always
    environment:
      VIRTUAL_HOST: dahifi.internal
      WORDPRESS_DB_HOST: db_node_blog:3306
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wp_password
    container_name: blog_wp

networks:
  default:
    external:
      name: nginx-proxy

You’ll notice that the site that the blog is published to is identified by the VIRTUAL_HOST environment variable. In this case we’re still pointing to the “top level” domain, dahifi.internal, and not blog.dahifi.internal. This is due to issues we were having with subdomain resolution on the Nginx proxy, and is something we’ll have to work on later. Originally, we had our internal GitLab instance running on this host at port 80, and had to take it down for Nginx to work. My next step is to make sure that subhosts work properly, and then reconfigure GitLab and this blog to run under something like git.dahifi.internal.

Image uploads

One additional change that I needed to make was to raise the default file size limit of 2M. Following along with this setup tutorial, I added a - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini line to the WordPress container’s volumes, then added the following to the uploads.ini file:

memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M

Then I rebuilt the container with docker-compose down and up, making sure to specify my blog-compose.yml file.
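Spelled out, that rebuild is just:

```shell
docker-compose -f blog-compose.yml down
docker-compose -f blog-compose.yml up -d
```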

I still encountered errors trying to upload. WordPress kept throwing

Unexpected response from the server. The file may have been uploaded successfully. Check in the Media Library or reload the page.

I couldn’t find any errors in WordPress’s Apache instance, and PHP looked fine. I eventually found a message in the Nginx container log: client intended to send too large body. It seems that Nginx has its own limits. I added a client_max_body_size 64M directive directly in the /etc/nginx/nginx.conf file, then reloaded it with service nginx reload. Problem solved! Not only can I upload files directly through the web interface, but I was also able to add the internal site to the WordPress app running on my phone, and can upload images directly from my phone.
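For reference, the directive sits in the http block of nginx.conf inside the proxy container, something like:

```
http {
    client_max_body_size 64M;
    # ... existing directives unchanged
}
```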

Elder is already working on writing a story in it now, and I’m looking forward to using this site as an internal bulletin board for the family. Let’s see what happens!

The future of work


Tech trends aren’t favorable for small businesses that don’t pivot quickly

We’re now in the third month of The Great Lockdown, and for most of us it’s now the new normal. Households and businesses have made adjustments to deal with the restrictions and new procedures needed to keep people safe from the spread of COVID. Many tech firms have officially pivoted to remote first operations for staff, and many others are doing so unofficially. Perhaps the only question remaining at this point is how far things will return to the old ways. My guess, not so much.

I’ve been working from home for several years now, and have some insights into where things are headed, at least in the context of knowledge work. My day job involves supporting a number of small businesses, some of which operate with their staff completely distributed, so my experience is mostly in the medical and professional services industries. Leave aside traditional brick and mortar businesses that rely on foot traffic, like restaurants and lifestyle businesses, for the moment. And I am of course writing while holed up in my cozy house with my family, under no pressure to leave for work and deal with the public. I have friends, neighbors and clients who are working the front lines in healthcare, and I have deep gratitude for them and for those other “frontline heroes” keeping the grocery and retail supply chains moving for the rest of us.

Amazon and other digital platforms have been eating away at the real world for many years, and COVID has accelerated the transition dramatically. To steal a phrase from Tobi Lütke, CEO of Shopify, “2030 came a decade early”. People around the world have been forced to undertake massive changes in reaction to the pandemic, and we’re so far into it now that it’s hard to believe that things were ever different. Future Shock has been barrelling at us for decades now, with the advent of personal computers, the internet, smartphones and wireless internet impacting generation after generation. It seems that the Singularity is nearly upon us. The last three and a half years of the Trump administration have seemed like a decade in themselves. All of that pales in comparison to the changes that we as a society have had to undergo these last three months.

To understand where we’re going, we must look back for a moment and review the changes in economic and labor practices that have occurred since the end of World War II. Since then we’ve transitioned from long-term, single-company employment that traditionally ended with a gold watch and a pension, and have replaced it instead with a short-term, shareholder-focused, gig economy. Productivity and investor profits have steadily increased for the past thirty years while real wages have remained stagnant. And it’s doubtful that we’ll see any progress on this issue, regardless of who holds the White House at the end of January next year.

Immense social pressure by large swaths of society, as we are currently seeing with the Black Lives Matter protests around the world, can be effective at spurring political action at the National level, but I remain skeptical that we will see much movement with regard to the economic sphere in the next twelve to eighteen months. It depends on whether we can continue to successfully deal with the pandemic. I am not an optimist on this point. I will note though how relatively quickly universal basic income went from being a humorous component of Andrew Yang’s presidential platform to a serious contender for COVID-related economic woes. For now, we’ll have to settle for our twelve hundred dollar stimulus checks.

If one is to use the stock market as a bellwether, it would appear that the economy has already rebounded from the huge hit it took in the weeks following the lockdown. The lack of any meaningful relationship between stocks and reality should be apparent to most. Recent gains have either been driven by the continuing extraction of value from small and local businesses by Amazon and other ecommerce platforms, or from the harvesting of attention and personal data by Facebook, Twitter, Google and others. And now, the Fed has turned on the spigots through what they call “unlimited quantitative easing”, releasing over six trillion dollars to banks, where most of it has gone into the hands of those who need it least. This has caused no lack of excitement among the Bitcoin community, who have taken the “money printer goes BRRR” meme to the next level as they build the narrative of bitcoin as a way to both defund the state, short (as in stocks) governments, and opt out of the traditional central bank based fiat system. I count myself among them, but I’ll save that for a future installment.

Source: VisualCapitalist

That said, data from LinkedIn does show that hiring is up for tech workers. Platform firms that enable developers and entrepreneurs to build on top of their systems are on somewhat of a spree lately. Twilio, a text messaging provider, Stripe and Square, two internet payment firms, as well as GitLab and GitHub, built around a popular software development tool, are all rapidly scaling up to meet demand. And there seems to be no shortage of software developer or engineering jobs that I see, especially for those in government-adjacent industries or those with heavy math and science qualifications, like artificial intelligence and data analysis.

It also appears that most of the healthcare industry, especially the mental health profession, will be safe from any negative economic impacts of COVID. There was previously no shortage of people needing treatment for depression or post-traumatic stress disorder, numbers that are rising as people deal with the stress of the pandemic. Indeed, there are a number of platform companies that have been offering telemedicine-related services for therapy and non-emergency care, and these seem to be seeing real uptake. I myself am currently involved with one.

Let’s turn our focus to the technical side of our new normal.

Perhaps the biggest early winner among the remote-work enabled companies was Zoom, which became synonymous with video conferencing in the weeks following lockdown. Then there was the inevitable backlash, following both privacy and security concerns around Zoom’s default settings and misleading encryption claims. Amazon has been a big winner, both because of quarantine-induced shopping as well as cloud-driven usage on AWS. Google Cloud and Microsoft’s Azure are no doubt doing well also, and there are indications that Apple may be trying to enter the space as well.

Microsoft has been in a bit of a battle recently as well, after the CEO of Slack, a remote workspace collaboration service, said that Microsoft was obsessed with “killing us” and announced an integration partnership with Amazon AWS. I’ve been using both Teams and Slack for some time now, in different roles, and I don’t think either is in any immediate danger. I’ve got theories about what type of companies choose one or the other: Slack for more tech-focused firms, especially development houses, and Teams for those that already rely on other Microsoft services such as Exchange Online or SharePoint. I don’t have any hard evidence on that yet, and it may be that a lot of people, like me, will continue to use both for various teams.

My day-to-day responsibilities involve managing Microsoft Office 365 cloud services for clients. We’ve been pushing them off of older SMTP/IMAP services to Exchange Online for several years now, since it’s just plain better. What amazes me though is that most of our clients have no idea of the myriad other services that come bundled with O365 these days. In fact, I have a hard time keeping up myself. Besides Exchange, Teams and SharePoint are probably the most prominent, and the ones that we push the most, but there are a number of other useful apps bundled with O365, like Forms, Flow and Planner. The problem with O365 is that there are so many product offerings within it that it’s impossible to keep up unless you dedicate a lot of time and resources to them. The issue is multiplied with Azure and AWS, each of which has dozens of services.

I’ve spent the last dozen years of my career building and managing business networks and servers, and supporting end users and their equipment, what’s known as managed services in the industry. I realized several years ago, as the app economy took over, that these types of services would soon become a commodity, and that competition, especially low barriers to entry, would soon drive prices in a race to the bottom. I think I’m being proven right on that point, but what I didn’t really anticipate was that it was not just managed services providers that are being hit by this, but essentially all industries and verticals.

The overwhelming drive today among businesses is the tendency toward fewer and fewer employees. Tech tools are increasing employee productivity, and apps and automation are compounding their efforts. Some jobs are more resistant to this than others, but the number of administrative staff needed to manage a large organization is rapidly diminishing. As I noted last April:

Blockbuster, at its height in 2004, employed 84,300 workers with a $5 billion market cap. Today Netflix is valued at $162 billion with only 5,400 employees. Instagram had just 13 employees when it was acquired by Facebook for roughly a billion dollars in 2012.

I wrote about businesses as operating systems last month, so I’ll not repeat myself wholesale save to say that successful businesses in the next year or two are going to be the ones that figure out how to utilize these digital tools to allow them to scale. Even currently sustainable businesses with no interest in growth are going to be forced to deploy them, as the pressure to become more lean will build up from competitors that are. My focus remains on exploring how to deploy these tools in a way that helps my clients streamline their operations. My working hypothesis right now is that businesses that do will survive and thrive and those that don’t will languish and die.

We’ve seen how quickly the world can change. There’s no going back.

Git-ing it done


Roll your own GitLab

I had trouble falling asleep last night. Younger crawled in our bed just as I was dozing off and kept squirming, so I slept in her bed. It faces East, so I woke up at five and tried to go back to sleep. I heard Elder up, so I got up and started the day. She’s sitting across the room from me, looking up “Valentine’s Day” gift ideas for the boy in our quarantine bubble down the street. Her sister has been ribbing her about it for days now.

One of our Zombie, LLC clients wants help standing up an internal GitLab server. It got me thinking, so I went ahead and set up a GitLab Docker instance on my downstairs Ubuntu server. I figure it’s good practice. “Do the job you want” has always been good advice, so setting it up was worth the time. Plus it only took about fifteen minutes. The main problem I ran into was an SSH conflict with the existing service on the host, and it doesn’t appear possible to modify the config of an existing container without stopping the Docker daemon, so I just deleted the container and started over. I’ll probably move SSH if I ever do a real deployment, but here at the house the HTTP functionality is enough.
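The whole thing boils down to the standard GitLab Docker invocation, something like the sketch below, with SSH remapped to a spare port to dodge the host conflict. The hostname, paths and ports here are my picks, not requirements:

```shell
docker run --detach \
  --hostname gitlab.dahifi.internal \
  --publish 80:80 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```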

There’s also the mail issue. I didn’t want to use the root account to setup my repos, but the workflow around new accounts wants to send an activation email. I tried installing sendmail on the host, but the password reset didn’t work. I doubt it will work without a publicly routable dynamic DNS entry back to it or SMTP services, which I don’t want to mess with right now. Thankfully I found a password change form in the admin interface that didn’t require knowing the old password and got up and running.

I am nowhere near as strong with my Linux management skills as I am with Windows, where everything is pre-packaged and is somewhat unified. I can stand up domain.local services lickety split, and have a library of PowerShell scripts to setup AD, DNS, DHCP services within a domain. I have never actually taken the time to set one up at home though, but that point may soon be approaching. I’ve been wanting to investigate the use of Ubuntu server as an alternative or supplement to Windows based AD services, but part of me is skeptical that such a setup is even viable for workstation authentication and services. But I digress. The point I’m trying to make here is that I’ve always been in awe of Unix sysadmins ever since I worked at an internet service provider back in the late 90’s and watched our systems guy pop in and out of terminal shells like a wizard. I’ve never felt adequate in that regard.

I made some good progress yesterday working on the WordPress project, and have started converting the client’s site over to the new theme. I’m going over the demo site, examining the Bakery build they’ve got set up, and recreating it using the client’s assets. This allows me to get a bit more familiar with the framework that the theme author is using, and hopefully glean some best practices at the same time. It’s a two-steps-forward, one-step-back process. Some strange bugs have popped up. Activating WooCommerce seems to bring the site down completely, as does changing the theme back to the original. Then at one point, while I was working on the new header, the previews stopped working completely and would only throw 404 errors. They work in the actual site, so I had to make do while I made edits.

Usual best practice for WordPress development and git repos is to exclude the entire WordPress directory except for whatever theme and custom plugin you’re developing, but since in this case we’re working on an entire site, I’ve added the entire WordPress directory and associated SQL database files. The wp-content/uploads directory is mounted outside the container, along with plugins and themes. I haven’t pulled this directory on another machine yet, so I don’t know if it’s going to work. My main concern is how I’m grabbing the database. Managing PostgreSQL during my Django projects has always been a bit of a pain, as I never learned how to incorporate it into my source control. I’ll have to spend some time correcting this deficiency.
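For contrast, the usual theme-only arrangement looks something like this in a .gitignore (the theme name here is hypothetical):

```
# Ignore all of WordPress core...
/*
!/wp-content/
/wp-content/*
!/wp-content/themes/
/wp-content/themes/*
# ...except the theme under development
!/wp-content/themes/client-theme/
```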

Here is a look at the Docker Compose file I am using for my development setup. The SQL mount /docker-entrypoint-initdb.d/backup_to_load.sql gets imported when the container is created; I assume that it’s ignored when pulling the SQL data from source. We shall soon find out. Also, I haven’t solved the file permissions issues that happen when trying to edit things like the wp-config.php file. I’ll have to save that for a later time.

version: '3.8'

services:
  wordpress:
    container_name: 'local-wordpress'
    depends_on:
      - db
    image: 'wordpress:latest'
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: 'db:3306'  # points at the db service below
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wordpress_password
      WORDPRESS_DB_NAME: wordpress_db
    volumes:
      - "./Wordpress:/var/www/html"
      - "./plugins:/var/www/html/wp-content/plugins"
      - "./themes:/var/www/html/wp-content/themes"
      - "./uploads:/var/www/html/wp-content/uploads"

  db:
    container_name: 'local-wordpress-db'
    image: 'mysql:5.7'
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - './data/mysql:/var/lib/mysql'
      - './data/localhost.sql:/docker-entrypoint-initdb.d/backup_to_load.sql'
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wordpress_password

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Digital Addiction


Reclaiming my brain from the pull of the Feed.

So I deleted Twitter off my phone yesterday. I really did it. It just took a chapter or two of Digital Minimalism to convince me that I needed a break.

Getting rid of Facebook on my phone about eighteen months ago was one of the healthiest things that I’ve ever done. It was such a time suck, and I spent way too much time on the platform arguing with people. On the one hand, it did lead to me writing quite a bit, and probably led to my political career, but between the toxic people that I had connections with on there, and all of the privacy problems that were going on there, it was just too much. I had to leave. Given the Cambridge Analytica scandal and all the other bad news about Zuckerberg and the way they manage things over there, I’ve had no desire to go back. I’ve logged on a few times to deal with some messages or check on some family members, but I don’t browse the feed at all.

I always considered Twitter a bit different, since I was curating my feed, and it wasn’t just random friend-of-friend connections. Just because someone wanted to follow me, I didn’t have to follow them. Or vice versa. I still see Twitter as a source of news and information, and being able to remain pseudonymous was part of the main draw as well. Still, I spent way too much time on it, picking up my phone whenever I was idle: watching TV shows with the family, sitting out on the deck, or out somewhere waiting in public.

So I removed it Sunday morning and went about my day. The absence was felt immediately. I found my phone in my hand throughout the day, and I found myself wondering why I was holding it. Then I realized that the habit was still there, but I had short-circuited it with the app gone. It happened several times during random moments, like waking from a dream. I took the kids to a nature park to get out for an hour or two, and felt the urge to pull my phone out while the kids were finishing their lunch. No need. I set the slip and slide up for the girls outside and there’s that habit again. Nothing to do. Watching a movie after dinner, sitting on the couch, I’m always checking my feed. Instead, I worked on the Sunday crossword.

Today’s going to be interesting, since I don’t have the same kind of blocks set up on my workstations. There are apps out there that will whitelist or blacklist certain sites on a timer. I’ve heard of people using them to make sure they get their work done, but I never went that far with it. There’s lots of downtime during the day, when I’m waiting for a download or some sort of progress bar, when I pull up Twitter and browse the feed. That’s going to be the real test. I wonder if I can redirect that energy to something productive, like doing a lesson on LinkedIn Learning or FreeCodeAcademy, or doing one of the competitive coding challenge sites? I have been wanting to take a look at Rust…

I do have a project to finish that is going to take several weeks of deep work. I’m really going to have to delve into WordPress’s innards and really figure out how the theme system works, then actually develop a design for a site. I had been attempting to figure out how this site’s current theme had been developed, but it’s such a mess, and I don’t know if I have it in me. All of the site’s functionality was just dumped into WordPress’s TwentySixteen theme, without even a child theme setup. And the dev hardcoded all of the scripts for Google Analytics and everything else directly in the template files. It’s got fifty-four plugins, and trying to figure out which ones are needed for the existing site is a mess.

Anyways. There was one moment yesterday when I desperately wished I still had Twitter on my phone. I was driving the kids to the aforementioned nature park, travelling down a two lane divided highway, when there was some sort of traffic slowdown. There was a car pulled off to the right just before an onramp. As I passed it I thought we were clear, but the cars on my left were still slowing up. There, up ahead, was a black man on a horse, just trotting his way down the highway. And there, the perfect tweet formed in my mind: “Is it legal to ride a horse on the parkway? Asking for a friend.”

Well, maybe not. But the next few days will be an interesting experiment to see what happens when I reclaim my brain. Will it unlock my creative superpowers, or have astonishing effects on my mental health and well-being? Probably not anything that dramatic. Being in the moment certainly won’t hurt, and redirecting that nervous energy somewhere else will most likely be helpful.

Here we go.

Phishers step up their game

man standing on building rooftop during nigh time

This attack is not new, but the tactics are evolving, and some people are still behind the curve

I’ve been managing business networks for some time, and I’ve watched phishing attacks, in which attackers attempt to steal a victim’s email login credentials, evolve over the last few years. Yesterday I was alerted to a new variation on this traditional attack that I thought was worth sharing and dissecting, and you’ll see why.

Almost all of the attacks that I’ve seen stem from an email that a victim receives, usually from someone the victim has corresponded with in the past. The subject line and body vary, but there’s usually an external link the victim is directed to in order to download some secure file. Normally, the victim arrives at a page that looks like a Google or Microsoft landing page, but of course it’s a fake, set up to steal the victim’s credentials.

If the phishers are successful, they’ll have gained access not only to the victim’s mailbox, but also to any associated document storage systems like Google Drive, or Microsoft OneDrive or SharePoint. From there it’s all over: the attackers can download whatever they need, or, if they discover that they’ve infiltrated a high-value target, they might lurk and prepare additional attacks.

In one particular case that I was involved in a few years ago, attackers managed to phish the CEO of a company. They discovered that the CEO was going to be travelling from the East coast to the West, and waited until the plane was thirty thousand feet in the air to launch a fake-CEO attack, requesting that the finance director wire tens of thousands of dollars to the perpetrators’ bank account as soon as possible. In this case there were enough red flags that the attack was thwarted, but not before the attackers had used the CEO’s mailbox to resend the phishing attack to everyone in their contact history.

And so the cycle repeats.

How not to be a victim

Normally there are numerous red flags when phishing attempts happen, but it still surprises me how many requests I get from people asking me to inspect an email for legitimacy.

Sometimes it’s as easy as examining the sender’s address, or the actual link in the email, and finding that they don’t match. If your email client only displays a sender’s friendly name, say “Jane Doe”, you may need to hover your mouse over it to see that the message is really from a different address altogether. Most modern email clients have updated the way they display senders, showing the friendly name alongside the actual address, which makes the mismatch easier to spot.

However, there are still a number of businesses that haven’t taken precautions to protect their own email systems from being spoofed. That is to say, there may be nothing stopping someone from setting up a rogue mail server and sending an email as anyone at that company. There are several methods, known as SPF, DKIM and DMARC, that protect against this, so you may want to make sure that your domains are covered.
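For reference, all three are published as ordinary DNS TXT records. A sketch of what they look like, using a placeholder domain and example policies (your mail provider dictates the actual values, and DKIM’s selector and key come from them):

```
; Example DNS TXT records (placeholder domain and policies)
example.com.          TXT  "v=spf1 include:spf.protection.outlook.com -all"
_dmarc.example.com.   TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
; DKIM publishes a public key under a selector chosen by the provider:
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key>"
```

SPF says which servers may send as your domain, DKIM lets receivers verify message signatures, and DMARC tells them what to do when either check fails.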

The flag that I look for is where the link is pointing. Just like email addresses, these URLs can be spoofed. Rich-text and HTML mail clients allow special formatting, which can be used to trick readers with links that misdirect them to hacked sites. So always check the URL. That official-looking login page for your Office365 account might just be a fake sitting behind someone’s hacked WordPress site. CHECK. THE. URL.
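Checking the URL can even be automated. A small sketch using Python’s standard library, with made-up example URLs, pulls each link’s real destination out alongside its displayed text so mismatches stand out:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect each anchor's real href alongside its displayed text,
    so mismatches (text shows one domain, href points elsewhere) stand out."""
    def __init__(self):
        super().__init__()
        self.links = []          # list of (href, displayed_text)
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# A link whose visible text looks like Microsoft, but whose real
# destination is a hacked site (both URLs are invented examples):
auditor = LinkAuditor()
auditor.feed('<a href="http://hacked-blog.example/login">https://login.microsoftonline.com</a>')
href, shown = auditor.links[0]
print(href != shown)  # True: the displayed text and real destination differ
```

This is the same check the hover trick performs by eye, just done in bulk.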

These tips alone should prevent most people from falling victim to one of these attacks. If I’ve been drawn into investigating at this point, I usually go a step further and try to get the fake landing page taken down. Sometimes it’s easy to find the company whose site has been hijacked, and usually a courtesy call is enough for me to consider my good deed done for the day. Sometimes the site is set up by the hackers themselves: a ten-dollar web domain with a three-dollar hosting account, paired with a free WordPress template, is enough to start with. In those cases I have to do a bit more work to find where the domain is registered and where the site is hosted. Then an email to the company’s abuse department, and I’m done.

How you can stop it

And in almost every case that I’ve seen, it’s been a WordPress site hosting the fake landing page. As WordPress is the software behind more than a third of all websites on the internet, that’s not surprising. But if you’ve got a business website running on WordPress and you’re not maintaining it or paying someone to manage it for you, then not only are you exposing yourself, your firm, and your clients to hacks, you’re also partially responsible for any victims who fall prey through your site. Update your site at least quarterly, or purchase a product or hire a firm that can check it on a regular basis for you.

How Chrome marked the site with the fake landing page. Firefox has no such warning.

Making sure the email security protocols mentioned earlier (SPF, DKIM and DMARC) are enabled on your domains will prevent hackers from faking your domain and using it in an attack.

Using updated email software and security applications is also an effective way to mitigate these attacks. Make sure that your email client software is a recent version, or use a cloud-based one, so that you have access to the latest anti-phishing tools. And make sure you use them! It still astonishes me how many small firms haven’t enabled two-factor authentication for their employees, or even looked at the protection services available from their email providers.

And one of the most important things you can do is train your staff how not to fall victim to these attacks. There are a number of firms that can deploy phishing attempts against your staff, and provide training to those who fail to avoid it.

Attackers upping their game

What concerned me about the attack I witnessed was the way the attackers changed their tactics to evade some of the more advanced mitigation techniques in place to stop these cybercrimes. A number of enterprise-level email security services can filter out malicious links and block them from reaching the recipient, usually relying on some sort of whitelist or blacklist to allow certain domains through. In the case this week, the victim was sent to a link on Microsoft’s own ID portal for OneDrive accounts. To the casual observer it looked like a legitimate OneNote notebook, and there was no breach at this point. No doubt most organization administrators would have no problem with users going there.

Of course, within this OneNote page was the real trap: a link to the fake landing page. Thankfully the mark in this case noticed that the OneNote page was addressed from a different person than the original email, and was suspicious enough not to fall for it. That said, when I was alerted to it and took a look at the OneNote page without the context of the original email, my initial thought was that it was legit. I almost cleared it! A second read turned up some irregular grammar, which is when I noticed the external link and the fake O365 landing page. Even then I still had to look up the site’s domain registration, created two months earlier through an Asian registrar, before I was convinced it wasn’t some sort of Single Sign-On configuration.

Technology changes fast, and cybersecurity is a cat-and-mouse game between attackers and the security professionals who protect your personal and business assets from these dangerous breaches. If you need help with managing your infrastructure or your mitigation strategy against these attempts, let’s discuss it, whether that’s email and network infrastructure, securing your website, mock infiltration testing, or employee training. I can help.

Baby slaps

person's left hand wrapped by tape measure

Inch by inch, life’s a cinch

Well, I sure did step up my game. I guess I was anxious about today’s big Substack post, so I wrote it last night. Someone put out a message on the company Slack yesterday about a suspicious email, and I almost fell for it. I figured it warranted an exposition, so I spent an hour or so last night writing it up after everyone went to bed. The experience of writing at night, undisturbed, is quite different from my morning writing, when the kids are getting up and wandering into the room every five minutes. I’m not sure I like having a deadline every night like that, though, so we’ll keep it limited to Thursday nights. I think I’ll even put it on my calendar just to keep it routine. Done.

I didn’t make any progress on my WordPress client last night; instead, I’ve started reading through the Docker documentation so that I can figure out what the hell I’m doing with my setups. I know a couple commands; it feels like I’m an infant flailing my limbs trying to grab a rattle, but I keep hitting myself in the head instead. I don’t understand half of the stuff I’m reading when I look at some of the image notes on Docker Hub, and trying to read through some of the discussions in GitHub issues is even worse. So I’m just reading through the best-practices documentation to try and get a sense of how things run between Dockerfiles and Docker Compose, so that I can load a SQL backup automatically and access my PHP files in both my development environment and the container’s web services.
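As a sketch of where I’m headed, something like this Compose file should cover both goals. The service names, paths, and credentials here are placeholders of mine; the one real trick is that the official mysql image runs any .sql file it finds in /docker-entrypoint-initdb.d on first startup:

```yaml
# docker-compose.yml (sketch; paths and credentials are placeholders)
version: "3.8"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
    volumes:
      # Any .sql file here is loaded automatically on first initialization
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
  web:
    image: php:7.4-apache
    ports:
      - "8080:80"
    volumes:
      # Edit PHP files locally; the container serves them live
      - ./src:/var/www/html
    depends_on:
      - db
```

Mounting ./src into the web container means the same files are editable on the host and served inside the container, which is the two-environments problem in a nutshell.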

And the Docker documentation is pretty interesting. It’s scattered with little goodies, like links to Twelve-Factor Apps and here documents, which is something I didn’t even know about. And if there’s one place I need to step up, it’s my bash skills.
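For anyone else who hadn’t met them: a here document feeds a multi-line block straight into a command’s stdin, which is handy for generating config files from a script. A tiny example (the file name and contents are arbitrary):

```shell
# A here document (<<EOF ... EOF) pipes the block into the command's stdin.
# Here it writes a small config file in one shot instead of chained echoes.
cat <<EOF > site.conf
server_name example.com
root /var/www/html
EOF
cat site.conf
```

The delimiter word (EOF here) is arbitrary; quoting it as <<'EOF' also disables variable expansion inside the block.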

Anyways, I get to keep this post short since today’s a double-post day. I thought about writing something for an off day, but figured the ritual of writing morning pages was more important than taking a day off. I’m starting to gather ideas faster, and my writing output has stepped up quite a bit. So I’m going to wrap it up and make some changes to today’s big post before I blast it out to several hundred unknowing Substack subscribers.

Now, time to go grab that rattle by the horns.

Leave it better than you found it

blue plastic trash bins on forest during daytime

Taking over abandoned or mismanaged projects

Taking over a project is a much different beast than starting from scratch. I think everyone knows it’s easier to start from a blank slate most of the time. What separates the amateurs from the professionals, though, is the documentation they leave for those who come after. For most of my recent work I’ve been the solo technical resource on a small team; coming into a new project is often a mess, and trying to decipher someone else’s work without any form of documentation is a challenge. I make sure not to leave it that way.

I’ve been doing small business networks for well over a decade, and taking on a new client almost always starts with a network assessment: inventorying the equipment, running some kind of network or system scanning tool to catch anything we might have missed, and putting it all into a report. Internally, we’ve been using ITGlue to keep track of all our documentation, and it really comes in handy when handing off a client. In the past, taking over an account from another IT management firm has involved sitting down for an interview with the technical resources on the other team, making lots of notes, and then rebuilding the documentation in our system. And it usually involves some sort of roughly drawn-up document with passwords and other critical information.

Lately, it seems we’ve been sending off runbooks left and right, containing all the documentation, checklists and SOPs we’ve developed for a client. I remember the first time we handed one off to another firm, seeing the surprise in their eyes when we handed over a professional-looking document. It made a real good impression, and I almost considered jumping ship with them. That was over a year ago, and here I am, finding myself going through the same situation again.

My recent consultation work has mostly involved taking over a stable of WordPress sites. I’ve been using WordPress for years for this blog and others, but I’ve usually kept things very simple: just download a nice theme and start writing. The sites I’ve been taking on recently are much more complicated. There are usually two or three dozen plugins deployed, and some sort of complicated theme system in place with its own arcane way of adding a page or making changes to a header or footer.

Since my focus here is just about writing, I intentionally decided not to spend any time on presentation. I’m literally still using the default Twenty Seventeen theme that came with WordPress out of the box. I’ve looked at some premium themes to give it some zazz, but ultimately decided that the effort wasn’t a priority for me. Not so with the other projects. I’ve been using an Envato Elements subscription to source my themes and templates, but each one seems to carry its own set of required plugins and design methodology. Figuring out how to tweak them is its own challenge.

I recently took over a site for a client. It was mostly in good shape, but had been neglected for several years. I wanted to make some changes to it, but without understanding how everything was put together, that’s proven difficult. On top of that, the original designer used a modified version of one of the default WordPress templates, so the choice was either to start delving into the source code or to start building from scratch. And again, there were about forty plugins in use, and I’m not yet aware of a simple way to trace an element in a rendered WordPress site back to its source. Plugins will often add elements to the Dashboard UI or the document editor, and figuring out what goes with what is a slog.

So far, the choice for me has usually been to tear it all down and start from scratch. Cloning production to a staging site and deactivating all the plugins, to see what we’ve got content-wise, is usually the first step. Then I can go through all the pages and posts to see which elements are missing from the original site. “There’s a shortcode for a slideshow plugin, so let’s note that and re-enable it.” “Why is half of our content missing?” Because they used some post-taxonomy plugin to put certain content in a separate blog. And so on and so on.

There’s one practice I picked up in the last year or so, called Architecture Decision Records, or ADRs. An ADR is a decision-making artifact that details the reasoning behind taking a particular design approach. They’re closely tied to user stories, and can be placed right in a Git repo with the rest of the source code. I’ve been trying to carry some of the ideas behind ADRs into my own projects, and not just software ones. It’s a good practice for any sort of system design, whether technical, business, or personal. Leaving these little artifacts behind for future you, or for others, can be a valuable practice, and will come in handy when the time comes to tweak something months or years down the line. “Why on Earth did we decide to do x again? Oh yeah, here’s the ADR…”
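A minimal ADR is just a short text file committed next to the code. There’s no formal standard, but a common skeleton looks something like this; the content is my own hypothetical example, loosely based on the site takeover above:

```
# ADR 0007: Rebuild the client site on a stock theme

Status: Accepted
Date: 2020-06-12

## Context
The inherited site hardcodes analytics scripts in a modified default
theme and runs ~40 plugins of unknown purpose; changes are risky.

## Decision
Clone production to staging, deactivate all plugins, and rebuild on an
unmodified theme with a child theme for customizations.

## Consequences
Short-term rework, but future changes survive theme updates, and each
re-enabled plugin gets a documented reason for existing.
```

The Context/Decision/Consequences shape is what makes it useful later: it records not just what was done, but what the alternative cost.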

I am definitely not a WordPress ninja. As the old saw goes, “the more you learn, the more you realize that you don’t know anything.” Give or take. Managing a web host reseller account and using a dedicated tool to keep WordPress installations and their related plugins up to date is simple enough, but taking over these sites and trying to redesign them with an eye toward user experience, ecommerce and SEO is a completely different set of skills than I had hoped to be working on. And I don’t know how far I want to develop it. Most likely I’ll do what I can to salvage the projects I’m working on and get them to a state where they can start generating some revenue, then start bringing in other resources to hand tasks off to. And when I do, I’ll have supporting documentation to hand over so they can quickly get up to speed: a history of how things operate and why they were set up the way they are.

Life is a game

white and blue wallpaper

Reflections as another round begins

It’s my birthday weekend, a fact which I exploited last night to tie one on. The girls and I watched half of The Phantom Menace before we tried to put them to bed at their normal time, then gave up. We’ve got them trained on melatonin gummies, and I don’t give them any on the weekends, so they wind up staying up an hour or more past their bedtime. So we wound up having a little party afterward. I put a song on the piano and Younger banged around on it, singing some silly song she was making up. It was too cute. We took the girls outside to see if we could watch the moon, but it was cloudy, so we just sat out there for a while until Missus took Younger to bed.

I bought Factorio, on Tobi Lutke’s recommendation, and it is right up my alley. Building little systems to harvest resources, transport them, and build more and more things. I am going to spend too much time on that game if I’m not careful. It’s addictive.

Systems thinking is affecting my brain. Everything is a process now, threads of the machine we call life. I only have control over my attention, and I need to guard it carefully. I’ve spent most of my life with it scattered in dozens of different directions, focused on one project or another for a few months, then onto the next. Of course there have been themes around them, music and computers, mainly, but the threads within them have run deep in various directions. Mostly broad, rarely deep. Jack of all trades, master of none.

There has always been so much to do that it’s been hard to focus on what I ought to do. Every decision made is another choice abandoned, and so paralysis ensued. Attention wandered to what was easy, not to what was complex. These days I’m forcing myself to delve into things, tracking them and holding myself accountable to work on them.

I still make music these days, but I don’t write songs. My coding is getting better. My designs are still huge messes that I wouldn’t share with anyone, so I’ve started building test coverage around them so that I can make the changes I need without fear of breaking things, or of getting overwhelmed by my own construction and forgetting where I am in some abstraction. There’s always a purity about starting from scratch that is addictive also, new and promising, that threatens to pull me away. Burn it down and start from scratch. The grass is always greener.

It’s much harder to fix what’s broken when you’re flying by the seat of your pants. After an hour in Factorio, running my little toon from one resource pile to the other, I realized I could have the coal extractor feed itself, and I could extend the conveyor here to feed this other machine. I woke up this morning thinking about it. Instead of maintaining two separate lines of conveyors for different goods, why don’t I just create a loop with everything on it, so that the machines can pull what they need? The game has already infected my mind.

There are tons of games on Steam that gamify things that some people pay to learn at college or tech schools. Learn networking through Hacknet or computer science through 7 Billion Humans. Then there’s Zachtronics, who makes games that teach you how to code in assembly-style languages. Truly insane.

Last night I stayed up too late, drank too much, and woke up this morning hungover. As I lay in bed, staring at the ceiling fan spinning round, I was thinking what I wanted to do with the day. I couldn’t decide, and I didn’t need to. I got up and did what I always do, (after popping an ibuprofen): put water in the microwave, do pushups, make tea, meditate. Write. The day will come to me, I just need to go along with it. Watch out for the traps, optimize this, tweak that.

My designs may look like much of my house: a mess, things strewn haphazardly in this room or that, piles of clutter here and there. As the days, weeks and years go by, I’m becoming more aware of the way everything is threaded together, of how the cycles are repeating. Change this, move that. Optimize, optimize, optimize.

Another trip around the sun almost complete.

Businesses as operating systems

worm's eye-view photography of ceiling

How to automate, and eliminate, bottlenecks in your business operations

I’ve noticed a trend in tech lately, as more startups and firms begin applying software development methodologies to their operations. It’s a useful paradigm, and one that I’m trying to implement personally and professionally, in both my firm’s operations and those of our clients. Some businesses are more suited for this than others. Obviously software firms and startups trying to find traction are more likely to fit this kind of approach, but traditional service and brick-and-mortar businesses can see benefits as well, especially from a change-management perspective.

Every business is a software business.

Watts S. Humphrey

If you haven’t realized it yet, all businesses are now software businesses. And if you haven’t, I guarantee one of your competitors has, and is figuring out how to take advantage of it. Customers want options, and if you’re not providing them with ways to reduce friction, then someone else will.

Make a decision once

In Principles, Ray Dalio writes about businesses as systems, and about the importance of offloading decision making out of one’s head and into a series of standard operating algorithms that can be written down as principles. These guides serve as a reference for future decisions, not just for yourself but for the rest of your team, and can be invaluable for new hires. Eventually this information can be turned into an algorithm and used to automate the decision-making process.

The challenge, of course, is figuring out where to capture this information. If you’re doing it personally, I recommend starting with pen and paper. Teams can use email, electronic documents, or whatever your current messaging platform is, e.g. Teams, Slack, or even Discord. The important habit is making the decision criteria concrete, and continually revisiting these principles the next time a similar decision needs to be made. You may not be able to automate every decision this way, but the process can serve as a filter for the majority of decisions that come your way.
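Once the criteria are concrete, automating them can be as small as a single function. A hypothetical purchasing rule, just to show the shape (the vendor list and dollar threshold are invented):

```python
# A written principle ("approve routine purchases under $500 from
# approved vendors; everything else gets a human review") turned
# directly into code. Vendors and threshold are made-up examples.
APPROVED_VENDORS = {"Acme Supply", "Initech"}

def purchase_decision(vendor: str, amount: float) -> str:
    """Apply the written principle to one purchase request."""
    if vendor in APPROVED_VENDORS and amount < 500:
        return "auto-approve"
    return "needs-review"

print(purchase_decision("Acme Supply", 120.0))   # auto-approve
print(purchase_decision("Acme Supply", 5000.0))  # needs-review
```

The point isn’t the code; it’s that the rule now lives in one reviewable place instead of in someone’s head.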

Writing down the decision making process also has another key benefit: defining your values and helping you focus on your market.

Agile Workflow

If your business is struggling to find its footing, either as a new venture or as an established firm dealing with a rapidly changing market, then you need to build, iterate, and refine quickly. Whether that means refining your operations or pivoting the product or service you deliver, you’ll need agility to survive.

Agile is the name given to a software management process first defined two decades ago. It uses short sprints, usually two to six weeks, in which a team focuses on delivering a particular feature or refining an existing process. Agile is a contrast to the earlier, waterfall-style development process, which had long development cycles; by the time a usable prototype was delivered, the requirements might have changed considerably. Focusing on smaller chunks of time ensures that effort isn’t wasted on deliverables that don’t align with business objectives, and lets development teams keep their work in line with customer or internal requirements.

User stories

There’s another key component from Agile methodology that is extremely helpful for designing business processes and services: the user story.

One mistake often found in business is the solution in search of a problem. Tech people, myself included, will see a new product or tool and start trying to sell it to internal or external clients without a clear idea of the use case around it. I have been particularly egregious about this in the past. In my conversations with clients, I find that while they may have a clear picture of the service they’re trying to provide, they may have trouble articulating what success looks like. They might know that they need new network infrastructure, or a website, but their idea of what that means from a capabilities standpoint might be very different from the perspective of the team implementing it. This inevitably leads to friction, whether as scope creep or as service interruptions that impact the client.

The Agile approach to this, which I’ve begun applying to business systems, is to focus on user stories and make sure there is a clear sync between the business unit’s expectations and those of the technical team. For a new project, this usually requires brainstorming to flesh out the myriad ways a deliverable will be used. For existing businesses, it may be as simple as shadowing workers to enumerate their various activities throughout the day. When I’m working with a client on an external application, I usually ask them to start by describing the customer journey in as much detail as possible. This epic is then broken down into concrete, actionable user stories.

There are several criteria for a well-defined user story: it must focus on a specific user role, it must be discrete, and it must be testable. That last component is the most critical from a technical perspective. A lot of the time, business goals are defined in broad or vague language that lacks specific requirements for success. By writing your system goals down as individual, testable units, you define a way to test for success while also giving your team a target that can be accomplished within a short sprint.
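As an illustration (the scenario is invented), a story meeting all three criteria might read:

```
As an office manager, I can export the month's service work orders
to a CSV file from the dashboard.

Acceptance criteria:
- An "Export CSV" button appears on the work-order list for manager roles.
- The file contains one row per work order: date, client, and status.
- Exporting a month with zero work orders yields a header-only file.
```

Each criterion is a pass/fail check, which is exactly what lets the technical team know when the story is done.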

Give up the paper

Over the years, I’ve walked into too many businesses and looked with dismay at large stacks of paper, whether service work orders, purchasing records or other tracking materials. While certain governmental regulatory environments still require paper, for most commercial and consumer-facing businesses this is a sign of stagnation. It could be a death knell. It’s 2020, and there is hardly a justifiable excuse for a business to generate huge volumes of paper documents as part of day-to-day operations. There are literally hundreds of app and software-as-a-service (SaaS) vendors targeting most established business sectors.

The most common excuse I hear for why a firm hasn’t migrated its service operations or other processes to an electronic system is that it “doesn’t meet all of our needs.” This usually becomes a justification to do nothing, continuing with labor-wasting, inefficient duplication of work: paper forms are scanned, copied and entered into another spreadsheet or accounting system as employees struggle to keep up. Instead of choosing a vendor that can meet the majority of their needs, they let one or two small use cases stop progress in its tracks.

Even if a business can’t find the perfect vendor or SaaS product to meet its needs, there’s no reason it can’t make smaller improvements. I find that most businesses aren’t even taking advantage of the software and vendors they’re already using. One of the most common examples is Office365 Premium users who use it only for desktop apps and email, and don’t even realize the services that are already included, like Teams, SharePoint, and Planner. One of the things I’ve been focusing on with my existing O365 clients is showing them how all these tools work together, and figuring out how to use Teams, Flow and PowerApps to eliminate phone and email traffic. Tools like Airtable are also fantastic, and I’m hoping to build more services using the various text and voice tools available from Twilio.

Connect your services: APIs, APIs, APIs

If I had to guess, I’d say most firms currently use around a dozen different apps, vendors and services on a regular basis: platforms for work and time tracking, accounting, inventory, communications, and so on. Ask yourself: how does data flow through these different systems? How is it generated? As with paper records, manual data entry from one system to another is usually cumbersome, inefficient, and, most important of all, unnecessary. It’s also error-prone.

selective focus photography of computer code monitor display

Enterprise firms have long relied on electronic data interchange (EDI) to transfer structured data between partners. These days, most applications offer some sort of import/export functionality via CSV files, but a more modern approach is an application programming interface, or API. The most common API style is called REST, and it allows you to perform create, read, update, and delete (CRUD) operations on your data. Having a full REST API available for all of your various services not only allows them to talk to each other, but also lets you automate processes you might otherwise perform through an application’s web or graphical interface. This is one of the most powerful tools an organization can deploy. I’ve personally been building Python modules to pull data from our various management systems, letting me perform status checks on systems that would otherwise require logging into a web portal with two-factor authentication. I’ve also built a number of scripts that I use as templates for various tasks I do on a regular basis.
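A sketch of the kind of module I mean. The endpoint, field names, and bearer-token auth are all invented for illustration (no real vendor’s API), and the demo stubs out the network call so it runs offline:

```python
import io
import json
from urllib import request

API_BASE = "https://api.example-vendor.com/v1"  # hypothetical endpoint

def get_device_status(device_id, token, opener=request.urlopen):
    """Read one device record via a REST GET request.

    The opener parameter exists so the network call can be swapped out
    for a canned response in tests; by default it hits the real URL.
    """
    req = request.Request(
        f"{API_BASE}/devices/{device_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with opener(req) as resp:
        return json.load(resp)

# Offline demo: stand in for urlopen with a canned JSON response.
def fake_opener(req):
    return io.BytesIO(json.dumps({"id": "42", "online": True}).encode())

status = get_device_status("42", "not-a-real-token", opener=fake_opener)
print(status["online"])  # True
```

Wrap a handful of these per vendor and a status check becomes one script run instead of a portal login and a 2FA prompt.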

Having an API is so important these days that it’s one of the first things I look for when evaluating a new vendor or app for myself or a client. It’s no longer acceptable for a vendor to lock customer data behind a walled garden, but this is still a problem with a lot of legacy applications, which may not allow CRUD operations on internal data.

One more concept worth noting is the webhook: the ability for one system to send a message to another based on a trigger. Webhooks aren’t as dynamic as REST interfaces, but they do allow a system to push a one-way message to another. A simple example is a shipping vendor sending a notification to a client’s chat system when a delivery status is updated.
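On the receiving side, most vendors sign webhook payloads with a shared secret so you can verify a delivery is genuine. The usual pattern is an HMAC over the request body; the header name and encoding vary by vendor, so treat this as the generic shape rather than any one vendor’s scheme:

```python
import hashlib
import hmac
import json

def verify_webhook(body: bytes, signature: str, secret: bytes) -> bool:
    """Return True if signature matches an HMAC-SHA256 hex digest of body.

    compare_digest does a constant-time comparison, which avoids
    leaking information through timing differences.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A delivery-status payload like the shipping example (values invented):
secret = b"shared-secret"  # placeholder; exchanged with the vendor
body = json.dumps({"order": "A-1001", "status": "delivered"}).encode()
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(body, good_sig, secret))    # True
print(verify_webhook(body, "deadbeef", secret))  # False
```

Without a check like this, anyone who learns your webhook URL can inject fake status updates.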

Corporation as Artificial Intelligence

One might think of the corporation, or even the small business, as a prototypical artificial intelligence. Both have inputs and outputs, and along the way decisions and processes transform the former into the latter. The main difference may be the speed and the way in which those transformations occur. The question business leaders need to ask is how that transformation happens within their own organization. If your organization has bottlenecks caused by its decision-making process or by manual data entry, then you should start implementing some of the tactics described here to automate and eliminate those barriers to growth.

If you’re not taking these steps today, you will find your business falling further behind the organizations that are; soon, you’ll be losing clients to them. Beyond the internal benefits, you’ll find that your customers also want options in how they communicate and interact with you. And if they don’t have the freedom to choose, if your business isn’t flexible enough to provide it, they’ll eventually move somewhere they do.

My focus as a technologist is to assess current trends in business and technology, and extrapolate out where things are headed in the next five to ten years. My goal as your outside technology officer is to make sure that you not only have the tools to succeed today, but help provide the long term vision to get to where you need to be tomorrow. If you’d like help assessing your current operations or would like to discuss things further, please drop me a line.