Generating spelling flash cards with RemNote

Making alphabet and spelling flash cards with a little help from regex

I’ve been getting used to RemNote for a little over a week now. I haven’t really gotten too much into it yet, just taking notes and trying to link things up. I haven’t played with the spaced repetition features yet; I used Anki a few years ago to get through an accounting class, but I haven’t felt the need for it with anything I’ve been dealing with lately. I may start using it for certain CLI commands at some point; we’ll see.

I did start trying to use it for Younger and Elder, though. I set up a document for the alphabet and filled it out like so:

A:: A
B:: B
C:: C

And so on. It doesn’t look like there’s a way to create these cards without having something on either side of the double colons, so I just filled in the letters on both sides. Of course, Younger can’t do these by herself, so I have to sit there with her and push the answer buttons for her. It’s been working ok so far: it takes a couple minutes, and the app makes a nice little fireworks display when you hit your daily goal. She loves it. It of course makes her big sister a little jealous, so I had to find a way to do one for her as well. We settled on third grade vocabulary words.

I found a couple lists online, but I wasn’t trying to copy and paste two hundred words into the proper format, so I did what any programmer worth their salt would do: regex.

Take a list like the following:

additional	event	region
agreeable	examine	repair
argue	example	ridiculous

We want to separate the non-whitespace (\S) from the whitespace (\s) into two capture groups: (\S+)(\s*). Then we can substitute, using \1 as shorthand for the first group: \1::\1\n. This gives us the following output, which imports perfectly into RemNote:

additional::additional
event::event
region::region
agreeable::agreeable
examine::examine
repair::repair
argue::argue
example::example
ridiculous::ridiculous
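
If you’d rather script it than fiddle with an editor’s find-and-replace, the same substitution is a couple lines of Python (a quick sketch; words.txt is a made-up filename):

import re

with open("words.txt") as f:
    text = f.read()

# Each word (\S+) plus its trailing whitespace (\s*) becomes "word::word" on its own line
print(re.sub(r"(\S+)(\s*)", r"\1::\1\n", text), end="")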

Now, while this works fine from a technical perspective, it’s a bit flawed in execution. Elder can’t see the words that she’s trying to spell, obviously, so I have to read them to her while she sits across the room from me. It causes her to miss the reward, the fireworks, and has caused a bit of distress on her part.

So here I am now, brainstorming ways to generate audio files for these words so that I can put them in with the cards. Do I read a list of 200 words, and then go through the editing process to separate them into individual files and attach them to the proper cards, or is there a way to program and automate all this?

Of course there is. There’s a Python module for the Google Text-to-Speech library, so I could generate the files in a few minutes. Then it’s just a question of importing them into RemNote. Unfortunately, RemNote doesn’t seem to support uploads or local audio files, so I would have to either host them somewhere like an AWS bucket, or just use something like Anki, which supports audio within the card decks themselves. We shall see.
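
A minimal sketch of what I have in mind, using the gTTS package (the word list is abbreviated here):

# pip install gTTS
from gtts import gTTS

words = ["additional", "agreeable", "argue"]  # ...plus the rest of the list
for word in words:
    # One MP3 per word, named after the word itself
    gTTS(word).save(f"{word}.mp3")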

I’ll have to keep quizzing Elder on my own now, she seems to do better with the one on one time anyways. I’ll be sure to share any updates.

Storm watch

Hurricane Isaias is making itself known. Wind gusts are pounding the house, making it shake like a freight train. The girls are up; Missus let them start a movie this morning despite my protests. She woke up early because of the storm and apparently isn’t planning on doing any work till later this morning.

Alerts have been popping up on my phone all morning as our managed servers have been going dark across the board. Internet and power have been dropping across the region as the storm makes its way through. It’s not really that much more work for me, since there’s not much I can do about it. Hopefully I’ll be able to get some work done on my two main goals at work: converting a client over to Microsoft’s mobile device management, and building a C++ build pipeline for some embedded controller software.

The RMM vendor that we work with integrated IBM’s MaaS360 product into their offerings two years ago, and we signed on one of our clients for it. It was a bit more involved than we expected for such a small deployment. We had to get a management certificate issued from Apple, which wasn’t too bad, but then we had to manage eleven Apple IDs, one for each user, before we could even enroll the phones. This involved downloading a special management app and profile. The client wanted content filtering on the phones, which meant deploying MaaS’s Secure Browser, which involved several more steps. Then we thought we were done, and I ignored the deployment until about a month ago.

The client contacted me about installing a new service app on the phones, and after figuring out how to log in to the management portal, I found that nine of the eleven mobile devices hadn’t checked in, some in over eighteen months. After contacting my RMM vendor for support and getting frustrated at their lack of knowledge, I started searching for solutions. I knew Microsoft had been offering some options through O365, and since almost all of our clients are 365 clients, I figured any solution that could be managed through it would be a plus. What I found is that the latest MDM offerings, included free with O365, actually give us a lot of what we need: security profiles on the device itself, and the ability to control the software installed on it. I did a quick test with our O365 tenant and my personal device, and I’ve been holding on to a client phone for about a week to test and document procedures so that they can set up the rest of the devices. I’ve been talking to other MSPs in our network, and let me say that there’s a lot of interest in the fact that I’ve been able to set up federation between O365 and Apple Business Manager.

The other project I’m trying to work on involves setting up automated deployments for a development project. The developer workstations are based off of an Ubuntu 16 VirtualBox image with a custom IDE and hardware libraries installed. The setup process runs about five or six pages, and hasn’t been replicated by the client, so I’m hoping to go through the document and create a full script that can be rerun to set things up for new employees, or whenever the developer config changes. I’d like to get them up to Ubuntu 18, at a minimum, but the eventual goal is to make sure that we have a build process that exists outside of the IDE and can be automated via a build job as part of the version control process.

The problem I was running into is that my own computing resources are kind of limited right now. I already run my Windows workstation in a KVM instance on Ubuntu, so running another VirtualBox VM wasn’t really an option. So I decided to use some of the Azure credits that I get from my Microsoft Service Provider benefits. I recently used an Azure VM to stage an on-prem domain deployment, scripting it out using Desired State Configuration (DSC). I was able to validate my AD and DHCP scripts on the Azure server, then copy the files down to the on-prem server, run them, and have my deployment up and running in about an hour. The scripts will need some improvements before they’re really useful, but it’s a start.
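
For anyone unfamiliar, a DSC configuration is a declarative PowerShell block that compiles to a .mof and gets applied to the node. A trimmed-down sketch of the pattern (the names and features here are illustrative, not my actual scripts):

Configuration LabDC {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        # Install the AD DS and DHCP roles; promotion and scope setup follow the same pattern
        WindowsFeature ADDS {
            Name   = 'AD-Domain-Services'
            Ensure = 'Present'
        }
        WindowsFeature DHCP {
            Name   = 'DHCP'
            Ensure = 'Present'
        }
    }
}

LabDC -OutputPath .\LabDC            # compile to a .mof
Start-DscConfiguration -Path .\LabDC -Wait -Verbose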

So before I got started yesterday, I decided to explore deploying my VM via the Azure CLI. I went through a couple practice exercises, and today I’m ready to get started on the actual projects.
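
The basic flow for standing up a test box is only a couple of commands (the resource group and VM names here are made up):

az group create --name BuildLab --location eastus

az vm create \
  --resource-group BuildLab \
  --name ubuntu-dev01 \
  --image UbuntuLTS \
  --admin-username devops \
  --generate-ssh-keys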

A couple days ago, a marketing employee at Zombie made a comment to me that they were thinking about becoming a technician, and I told her to look at cloud engineer tracks, because AWS and Azure jobs are among the highest-paying and most in-demand, after data scientists. Spurred by my own comments, I started exploring the training options for AWS, and started going through the AWS Cloud Practitioner track. The exam is only $120, and why not. I actually prefer AWS over Azure because of the pricing — good luck finding a $15 a month Azure VM! — and I want to really have a handle on it, since that’s where I’ll probably be focusing my own entrepreneurial projects. I’m still locked into Microsoft at work, so learning Azure is going to help me, but everything Microsoft does is convoluted and complicated.

Will having a handle on both AWS and Azure make me a double threat? Doubtful, since I wager most large shops will use one or the other, not both, but that’s just my situation now. So I’m stuck between the two. Jack of all trades, master of none.

Fast, good, cheap

Pick any two

It’s been a little over a year since I started blogging in earnest. I’ve been taking a look at the archives from last July to see what I was writing about back then. When I started, I think I gave myself a three hundred word target, just to get in the habit. Today, these posts routinely run two to three times that length, with some posts in excess of fifteen hundred words. The content of those early posts was more focused; I had the habit of writing a post for every book or magazine that I read, but today these posts are mostly journal exercises.

My most popular posts have been on technical issues; two, about a WordPress hack and a Windows server issue, seem to drive most of the traffic here. My exploration into Facebook’s Prophet machine learning tools gets another trickle. I’ve yet to find a focus for this blog beyond whatever strikes my fancy for the day, and I’m content to continue with it as is, making small adjustments as necessary. However, they say that no one ever got where they wanted to go without a plan, so some critical facsimile of a plan might have to come together at some point if I want this to be part of a long-term career strategy.

For now, it serves well enough as a place where I practice my writing muscle. If I write, I am therefore a writer, so it goes, and every day that I write, the better I get. I’m closing in on three hundred posts here, including ones more than a year old. (This count doesn’t include the archival posts that are monthly roll-ups from the previous incarnation of my WP database.) I’m hoping that by the time I reach five hundred I’ll be even better. We’ll see if the traffic to this blog increases along with it. Time will tell.


The kids have been incredibly difficult this morning. We all got up at pretty much the same time, and I was unable to get much done till after they left for their grandmother’s house. Younger has been especially sensitive this morning, but both of the girls seem intent on making a sport out of disobeying me. I was unable to get either of them to do their studies, and at one point I had them both taking timeouts in the kitchen, which they made into a game where they tried to laugh at each other while I made lunch. I shouldn’t be mad, but I did lose my temper briefly from having to repeat myself while being ignored. Hopefully they’ll be better behaved when they come back.


I’ll admit that part of the reason for the discord here in the house is due to a text I got from my WordPress client basically firing me from the project. When we had set out, I thought I had made it perfectly clear that this was not going to be done quickly. I believe my exact words were something to the effect that we were in the cheap and good corners of the project triangle, and that if we needed to move to fast, they should let me know. As we entered the third month of our engagement, they let it be known that they were frustrated with the pace, and noted that I had expressed some doubts about my ability to deliver the project. I had indeed expressed some frustrations about the work that I had inherited, mostly because of the amicable arrangement that we had started out on.

I think one of the major mistakes I made taking on this project was not properly scoping it and setting expectations. Another WP developer in my area charges twelve hundred dollars for a basic four or five page WP site, and this project involved a major redesign and restructuring of an existing site: easily a six month project at the rates I was charging. That obviously wouldn’t have flown if I had proposed it at the beginning.

I did identify several aspects of the redesign that I wasn’t going to be able to deliver on my own, mainly image assets. I was having a hard time gathering stock photography to match what they were asking me for. When I made this clear to the client, and told them that delivering everything I felt needed to be done within the accelerated timeline was going to be difficult, they told me that they had other developer resources that we could bring in. I said by all means.

This hasn’t been going quite the way I hoped. In anticipation, I wrote up a project summary, invited the outside dev to my Basecamp, where I had all of the project notes and tasks, and spent several sessions building out a backlog of things that needed to be done. I told the dev, a PHP and Laravel developer from Pakistan, that I needed their assistance with one particular task: setting up the MemberPress plugin for us.

It doesn’t seem that any of that has even been considered. When I got the text, to the effect that development would proceed from scratch due to the difficulty of determining what I had done, I checked the logs for the staging site and saw that no one besides me had even logged into it. So something else appears to be going on. I suspect that besides the English language barrier, the outside dev might be more of a Laravel developer than a WordPress one. And I find it highly ironic that they’re starting from scratch, when I literally spent two months trying to figure out what the last dev did.

I’m trying to tread a fine line here, given that this engagement is with someone I consider to be a friend. We had gotten into some heated discussions about this, and you know the old saw about mixing business with pleasure. Still, my friend is enough of an intrepid entrepreneur that I considered this a baby step into what should be the start of a mutually profitable enterprise for both of us. When they broached the subject of terminating the arrangement a few weeks ago, I was so bound by a sense of honor that I basically volunteered to finish the work for free. That’s why this morning’s message stung so much.

I replied with as much tact as was possible given the cortisol flowing. I told them that the outside dev hadn’t even given a cursory look at what I had done, and I asked that they take another look at the progress I had made in the past few days before they pulled the trigger on a redesign. Further, I said, even if they did insist on moving forward with a new project, I intended to continue my development on the staging site until I was satisfied that I had fulfilled my promise to deliver the redesign and the membership features by the end of next week.

This project has taught me a lot already, not just about WordPress development, but also about managing client expectations. I have got to spend more time focusing on the business side of the relationship, and establish some formal contracts and work blueprints so that expectations are better managed up front. For now, I’ve got about twenty hours of work left in the month in which to deliver and salvage this project. Failure is not an option, and neither is ruining this friendship.

WPStagecoach saved my life

Quit messing around with lesser staging processes and get the real deal.

I don’t mean to be too glowing or make this seem like some infomercial endorsement, but I really do think it saved me from having a heart attack the past couple days. I’ve been using InfiniteWP to manage most of my stable of WordPress sites, and it’s served me well for managing updates and backups, and is even handy for migrating websites from one host to another. It’s well worth the $120 or so that I paid for it a few months ago. Its staging features, though, aren’t really that great.

Part of the problem is that it only wants to install the staging site as a subfolder of the main site. It also makes a copy of the database inside the production database, just with a different table prefix. I shouldn’t have to tell you why this is not great from a performance and quota standpoint. The other problem is that it doesn’t provide much information when things go wrong. Ideally, I want my staging sites in separate subdomains, but IWP just can’t do this, and the documentation is mum about it. I have a support ticket open with them right now to figure out why I was unable to clone a particular client site, and to make sure that this paragraph is correct. What I can tell you is that I spent days trying to get a proper staging site set up for my client using IWP.

It’s not all their fault. I’m taking over a project that seems to have been abandoned by the original developer, and there were many problems with the site that may have contributed to the issues I’ve been having, as we shall see shortly. IWP has three staging options: on the original site, on my configured staging server, or custom FTP. I was able to clone the site to my custom staging server, but the theme didn’t operate properly. I believe this may have been a problem with hotlinked theme assets; I haven’t figured it out yet.

I literally spent days creating subdomains and updating DNS on the client site, and couldn’t figure out why IWP kept giving me “error: check your hostname” when I tried to update things. I figured it was a DNS propagation error between the server hosting my IWP and the client’s host. I usually only work on sites I host directly, and this was the first time I actually had to use the staging features. I was getting very anxious. I had wasted several days, was already dealing with an irate client, and was starting to get a panicked feeling when working on the project.

So I decided to go another route and explore some other options. I read through several blog posts on WordPress staging sites, and one name that came up several times was WPStagecoach. And it was only $12 for a month, so I signed up for a trial and had the staging site up in less than an hour. No kidding.

The setup process was impressive. Getting the plugin installed and activated was pretty standard, and creating the staging site was very user friendly. It started off by scanning the site for large files, and found a backup archive, which it asked to exclude. Then it started creating a tar file of the site to move to staging, showing me a status percentage as it went. This was very much needed considering IWP had been “working” for hours without so much as a log update. After the tar process completed, I did get an error that the archive was missing files, and was asked whether I wanted to abort, retry, or “proceed fearlessly.” I retried, waited another five minutes, and got the same error, so I went ahead and pressed proceed. Another five minutes, and BAM. There was my staging site, and it looked perfect.

One thing that really impressed me was that after the creation of the staging site, I was given a list of errors that WPS had found, mainly places where the site’s URL was hardcoded in the theme templates. These are likely why I had the rendering issues on my previous staging attempt. So now I have a list of files that I need to target, as hardcoded URLs will play havoc with my development environment as well. This feature shows how WPStagecoach really shines as a specialized product.

WPS hosts the staging site on their own servers, giving each site its own subdomain. I got ten with my account, which is way more than I’m going to need anytime soon. So now I can proceed with the next step on this project, which is getting our MemberPress module up and running. Then I’ll be able to see if pushing changes back to the live site is as easy as creating it in the first place. If my experience so far is any indication, it’ll be a cinch.

IT fiction: The Phoenix Project

Thoughts on the first half of the business book

I’ve been reading The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win the past couple days. It’s an interesting book that takes a fictional approach to teaching “the Three Ways”, which are some devops patterns and principles. Although some of the setup seems a bit contrived, the writing is good enough that I blew through half the book in two days, and found myself reading past my bedtime last night.

Part one of the book is a journey into enterprise IT hell, as our hero, Bill, is promoted from his small operations group to IT director for the large automotive parts company that he works for. They’re in the midst of preparing for a huge software rollout, which is bound to fail, and Bill struggles to get a grip on things before they inevitably crash and burn. In short, it’s a trainwreck, and the authors use it to start introducing the reader to change management and devops concepts.

I think anyone who’s ever worked in an enterprise environment will have PTSD from reading this; I know I sure did. Although it’s aimed squarely at helping workers in larger firms understand these best practices, I think it may be useful to smaller operators and teams like the one I work with. The book was written more than six years ago, which seems like a lifetime in IT, but it doesn’t get into the details of any actual tech tools, instead focusing on process. In fact, the change management process they use in the book is literally postcards on a whiteboard, and the description of the rest of the environment is generic enough that the specifics don’t matter.

Part one ends with Bill quitting after too many of his warnings are unheeded by the CEO, and part two starts with said CEO seeing the light and bringing him back in as they struggle to work together and save the company.

I’m already thinking that this will be one of those books that I recommend to all my IT colleagues. I may buy a few copies and send them to people I’m working with. I think it could be a valuable book for people who haven’t actually operated in a large corporate environment, and it may be good for stakeholders as well. Hell, it might actually be good to give a copy out as a sales tool next time we have a big prospect.

One thing that I’ve taken away from the book so far is the breakdown of four types of work: projects, internal IT tasks, changes, and unplanned work, which I’ve always referred to as firefighting. They describe it as anti-work, which is an apt description, and I’m going to be more cognizant about the type of work that I’m doing from day to day.

The Phoenix Project falls into an interesting class of book that I haven’t run into before: business fiction. I’m curious if there are any others that are similar. I’m sure that the situation told within it is real enough, probably culled from various real experiences, names changed to protect the innocent and all that. The first-person voice used by the authors is a style that seems familiar from many business books going all the way back to Dale Carnegie, but I don’t think I’ve ever seen it deployed in quite this way, with the book as one large case study.

Besides the operational side of things, there were a couple of work-related things that stuck out at me like a sore thumb. During the failed deployment of the new software product, the entire core project team is forced to pull an all-nighter trying to restore operations, and then spends many long days during the following weeks trying to shore things up. After Bill’s promotion to IT director, he seems to lose all grasp on work-life balance. He’s reading a story to his kid and means to look up something about Thomas the Train when he gets drawn into a work email and then another call. The situation completely disrupts his family life. Another employee at the firm, Brent, the key man with a hand in seemingly every system at the company, has gone years without taking a vacation where he wasn’t on call.

Apparently these two issues will somehow be resolved as Part Two progresses, but there was one detail about Bill’s circumstances that really had me shaking my head. Near the end of Part One, as he’s fretting over losing his career, he questions how they’re going to pay off their second mortgage and start saving for their kids’ college. Apparently they were just treading water, and the unexpected promotion has finally put them on the right track. This detail caught my attention. Perhaps it was meant to appeal to a broader base of readers, or to elicit sympathy, but it struck me as slightly incongruent with the rest of Bill’s disciplined personality.

Maybe I’m reading too much into it. If anything, The Phoenix Project has reminded me of the life that I don’t want. I spent four years working in an enterprise firm, and I came out of there in a rough way. I’m going to need to think long and hard before I consider getting back into a leadership role at a large firm where I have the type of responsibility where I’m going to be on call for emergencies in the middle of the night, or get sucked into some project deployment that’s going to require anything resembling a war room.

I’ll find out how life changes for Bill and the employees of Parts Unlimited soon, as I’ll probably wrap the book up over the next day or two. I’m looking forward to getting copies in the hands of a few more people to see how they like it, and, more importantly, to see what effect it has on our operations and service delivery.

Mobile Device Management

Small business deployments are still too cumbersome

Today is going to be a busy day. We’ve got a small party to host, so I’ve got to do a bunch of household cleanup, roast a pork shoulder, bake a cake, and then host seven to ten children plus parents in the backyard. If that wasn’t enough, I’m behind on both my WordPress project and the Substack post for Monday, which is about bitcoin.

Work picked up a bit last week. I’m helping roll out Git best practices for a software development firm, which is the kind of challenge I’m looking for, and dealing with a failed mobile device management (MDM) solution that I rolled out several years ago and which has been summarily ignored since then. The latter is not what I’d choose to be doing.

Microsoft’s MDM, Intune, has evolved over the past few years, and like most Microsoft services, has gone through several iterations and is a maze of admin dashboards, documentation, and licensing products. It still seems vastly superior to the product that we’ve been using from IBM, called MaaS360. Still, figuring out the requirements for a small business client is a huge pain. We’ve been dealing mainly with Apple devices, which means managing all the end user accounts. Getting the devices enrolled requires managing a signed certificate from Apple (another account), and then deploying a device requires not only a configuration profile, but additional apps on the device for it to work.

For our initial MaaS360 deployment, the requirements were pretty simple: the customer mainly wanted to lock down the browser on the phones for content filtering. It was an arduous process, even for a first-time deployment. Setting up the device profiles and testing took me several hours, then another associate of mine had to go through each device, setting up iTunes profiles for each user and downloading our management application. Then, after we deployed it, we discovered that GPS tracking wasn’t working. Permission needed to be granted individually on each device.

This initial deployment went unattended for almost two years. We got a request to pilot a new service app on one of the phones, and when I went back to check the tenant, all but two of the phones hadn’t checked in to the portal in over six months, more than half in over a year.

By some stroke of luck one of the two belonged to the individual who was selected to pilot the new service app, so I was able to proceed with the planning for that. I spent the rest of the morning trying to acquaint myself with Microsoft’s MDM offerings. Since most of our clients are on O365, it makes sense to take advantage of whatever is available through the platform. I was able to get a device policy setup under our partner account, but wasn’t able to get my personal iPhone to report into the console, even after several attempts connecting it to my O365 Exchange account.

Then, several hours later, after getting a Teams notification, I was prompted to install the device management profile, as well as two other apps, one for a “company portal”, and the Microsoft Authenticator app. Then, I was prompted for a managed Apple ID, and that’s where I stopped for the day.

I decided that if I was going to be forced to redeploy management to a dozen or so client devices, I had best start communicating with the client, so I spoke to them. There had been numerous personnel changes in the past few months, and a lot of other processes were being re-evaluated, which meant that it was a good time to put some processes in place. First off, a freeze on any device purchases or equipment transfers without keeping me in the loop. (Outsourced IT is usually an afterthought when it comes to hiring and firing.) Second, we were going to audit all existing devices, and make sure that we have a record of which devices we think we have, and who they belong to. That would give us some time to evaluate whether we can move management over to O365, or redeploy with the current solution.

I pulled some spreadsheets down from the management portal and dumped them into the client’s SharePoint site, then scheduled a Tuesday meeting with the pilot user for the new app.

Next week, I’ll have to do some investigation into Apple Business Manager, to see if it allows us to manage user IDs as well as the devices. We can barely depend on this firm’s employees to manage their one AD account, let alone another set of Apple IDs. It’s management hell. I’ll also have to draft some written policies for device and user onboarding and so forth. Eventually, I’d like to enroll the client firm in the carrier’s device provisioning program, to get devices enrolled with minimal supervision. That will likely be a slog for this small firm.

On the brighter side of things, this may force me to develop some concrete MDM deployment best practices that will make me a superstar. I’m not aware of any PowerShell tools that can be used to automate any of this process. Even turning on MDM within O365 requires clicking a box in the admin portal, and the Apple certificate provisioning requires setting up accounts and downloading a file from one portal into another. Drafting an SOP for the entire process, start to finish, would be valuable.

That will have to wait till next week, because today, I have a party and a very special birthday girl to attend to.

Private, internal blog for the family

Maintaining privacy for your kids by running a private WordPress instance in your home network

Well, I may have finally lost my mind. I quit Facebook over a year ago, but one of the things that I do miss is the throwback posts that pop up on your feed with pictures and posts from one or five years ago. It’s great for those “oh look how little Elder is in that picture!” kinds of moments. I don’t feel comfortable sharing pictures of my kids on my normie feed anymore, but I still want to do more than just have a folder sitting on my computer somewhere that gets looked at once in a blue moon. Plus, Elder is getting more involved with using the computer, and I wanted to give her a chance to express her creativity without the risk of letting her have a YouTube account. So I did the only thing any sensible tech dad would do: I set up an internal WordPress site for the family to use.

Setting up the internal domain

I’m pretty proficient with Windows domain controllers, and manage a lot of contoso.local domains that aren’t externally routable. I decided that I wanted the same thing here, so that the site could be accessed only from our local network. That way we can easily reach it from any of our personal devices, and can potentially let friends and family look at it when they visit and join our network.

Bind is the go-to DNS server for Ubuntu, so I started by installing and configuring it. However, I quickly got lost in a maze of .conf files and StackExchange posts trying to get it to work, so I dumped it and installed dnsmasq instead. Dnsmasq relies on simple hosts files instead of the more complicated zone files that Bind uses, which is more than enough for what I need at the house.

I set up my /etc/dnsmasq.conf file as follows, using this guide:

# Don't forward plain names or non-routed addresses
domain-needed
bogus-priv

# Use OpenDNS, not ISP's DNS router
server=208.67.222.222
server=208.67.220.220

# Replace second IP with your local interface
listen-address=127.0.0.1,192.168.1.123

expand-hosts
domain=dahifi.internal

Then I set up my /etc/hosts file with the records I need, pointing to the downstairs server and my development workstation.

127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6

192.168.1.123   dahifi.internal
192.168.1.123   homeboy.dahifi.internal   homeboy
192.168.1.102   oberyn.dahifi.internal    oberyn
192.168.1.123   elder.dahifi.internal   berkley

After saving changes, I needed to restart dnsmasq: systemctl restart dnsmasq. From there I was able to validate the configuration on the server and external machines using nslookup. Once I was comfortable that things were working, I added my internal server’s IP to my router’s DHCP DNS scope and refreshed the client leases on a couple devices to make sure they worked.
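
Validation was as simple as querying the dnsmasq box directly, then letting the client’s configured resolver take over (hostnames from the file above):

nslookup homeboy.dahifi.internal 192.168.1.123
nslookup oberyn.dahifi.internal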

Note about .local domains

I’ve never had issues with domains ending in .local on my Windows business networks, but Ubuntu may have a multicast DNS service called Avahi running, which hijacks any FQDN ending in .local. Interestingly, this service was missing from the actual Ubuntu server install, which interfered with my troubleshooting. The simplest way to get us up and running was just to change our internal domain from dahifi.local to dahifi.internal. Any other non-routable TLD should work as well.

Additionally, Ubuntu ships with systemd-resolved, a network name resolution service. It runs a caching resolver at 127.0.0.53, which interfered with my troubleshooting as well. My internal domain kept getting routed to my ISP’s search page until I ran sudo service systemd-resolved restart to clear the cache.

Multisite Docker setup using Nginx Proxy

The SSD Nodes site has a nice write-up of how to run multiple websites with Docker and Nginx, which I was able to use to get our WordPress site up and running. I prefer putting everything in a docker-compose file. The only prerequisite is creating the network:

docker network create nginx-proxy

docker-compose.yml:

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

And a second file, blog-compose.yml, for the blog itself:

version: "3"

services:
   db_node_blog:
     image: mysql:5.7
     command: --default-authentication-plugin=mysql_native_password
     volumes:
       - ./blog_db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: root_password
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress_user
       MYSQL_PASSWORD: wp_password
     container_name: blog_wp_db

   wordpress:
     depends_on:
       - db_node_blog
     image: wordpress:latest
     volumes:
       - ./blog_wp_data:/var/www/html
     expose:
       - 80
     restart: always
     environment:
       VIRTUAL_HOST: dahifi.internal
       WORDPRESS_DB_HOST: db_node_blog:3306
       WORDPRESS_DB_USER: wordpress_user
       WORDPRESS_DB_PASSWORD: wp_password
     container_name: blog_wp
volumes:
    blog_db_data:
    blog_wp_data:


networks:
  default:
    external:
      name: nginx-proxy

You’ll notice that the host the blog is published to is identified by the VIRTUAL_HOST environment variable. In this case we’re still pointing to the “top level” domain, dahifi.internal, and not blog.dahifi.internal. This is due to issues we were having with subdomain resolution on the Nginx proxy, and is something we’ll have to work on later. Originally, we had our internal GitLab instance running on this host at port 80, and had to take it down for Nginx to work. My next step is to make sure that subdomains work properly, and then reconfigure GitLab and this blog to run under something like git.dahifi.internal.

Image uploads

One additional change that I needed to make was to raise the default upload file size limit of 2M. Following along with this setup tutorial, I added a - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini line to the WordPress container’s volumes, then added the following to the uploads.ini file:

memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M

Then I rebuilt the container with docker-compose down and up, making sure to specify my blog-compose.yml file.
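
For reference, that amounts to:

docker-compose -f blog-compose.yml down
docker-compose -f blog-compose.yml up -d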

I still encountered errors trying to upload. WordPress kept throwing:

Unexpected response from the server. The file may have been uploaded successfully. Check in the Media Library or reload the page.

I couldn’t find any errors in WordPress’s Apache instance, and PHP looked fine. I eventually found a message in the Nginx container log: client intended to send too large body. It seems that Nginx has its own limits. I added client_max_body_size 64M directly to the /etc/nginx/nginx.conf file, then reloaded it with service nginx reload. Problem solved! Not only can I upload files directly through the web interface, but I was also able to add the internal site to the WordPress app running on my phone, and can upload images directly from there.
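
One caveat: editing nginx.conf inside the container means the fix disappears whenever the proxy container is recreated. Since nginx pulls in anything under /etc/nginx/conf.d, mounting a one-line snippet through the compose file should make it stick; something like this in the nginx-proxy service (untested on my end):

    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # uploads.conf contains the single line: client_max_body_size 64M;
      - ./uploads.conf:/etc/nginx/conf.d/uploads.conf:ro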

Elder is already working on writing a story in it now, and I’m looking forward to using this site as an internal bulletin board for the family. Let’s see what happens!

The future of work

Tech trends aren’t favorable for small businesses that don’t pivot quickly

We’re now in the third month of The Great Lockdown, and for most of us it’s the new normal. Households and businesses have made adjustments to deal with the restrictions and new procedures needed to keep people safe from the spread of COVID. Many tech firms have officially pivoted to remote-first operations for staff, and many others are doing so unofficially. Perhaps the only question remaining at this point is how far things will return to the old ways. My guess: not much.

I’ve been working from home for several years now, and have some insights into where things are headed, at least in the context of knowledge work. My day job involves supporting a number of small businesses, some of which operate with their staff completely distributed, so my experience is mostly in the medical and professional services industries. Leave aside, for the moment, traditional brick and mortar businesses that rely on foot traffic, like restaurants and lifestyle businesses. And I am of course writing while holed up in my cozy house with my family, under no pressure to leave for work and deal with the public. I have friends, neighbors and clients who are working the front lines in healthcare, and I have deep gratitude for them and for those other “frontline heroes” keeping the grocery and retail supply chains moving for the rest of us.

Amazon and other digital platforms have been eating away at the real world for many years, and COVID has accelerated the transition dramatically. To steal a phrase from Tobi Lütke, CEO of Shopify, “2030 came a decade early”. People around the world have been forced to undertake massive changes in reaction to the pandemic, and we’re so far into it now that it’s hard to believe that things were ever different. Future Shock has been barrelling at us for decades now, with the advent of personal computers, the internet, smartphones and wireless internet impacting generation after generation. It seems that the Singularity is nearly upon us. The last three and a half years of the Trump administration have seemed like a decade in themselves. All of that pales in comparison to the changes that we as a society have had to undergo these last three months.

To understand where we’re going, we must look back for a moment and review the changes in economic and labor practices that have occurred since the end of World War II. Since then we’ve transitioned from long-term, single-company employment that traditionally ended with a gold watch and a pension, and have replaced it instead with a short-term, shareholder-focused, gig economy. Productivity and investor profits have steadily increased for the past thirty years while real wages have remained stagnant. And it’s doubtful that we’ll see any progress on this issue, regardless of who holds the White House at the end of January next year.

Immense social pressure by large swaths of society, as we are currently seeing with the Black Lives Matter protests around the world, can be effective at spurring political action at the national level, but I remain skeptical that we will see much movement in the economic sphere in the next twelve to eighteen months. It depends on whether we can continue to successfully deal with the pandemic. I am not an optimist on this point. I will note, though, how relatively quickly universal basic income went from being a humorous component of Andrew Yang’s presidential platform to a serious contender for addressing COVID-related economic woes. For now, we’ll have to settle for our twelve hundred dollar stimulus checks.

If one is to use the stock market as a bellwether, it would appear that the economy has already rebounded from the huge hit it took in the weeks following the lockdown. The lack of any meaningful relationship between stocks and reality should be apparent to most. Recent gains have either been driven by the continuing extraction of value from small and local businesses by Amazon and other ecommerce platforms, or from the harvesting of attention and personal data by Facebook, Twitter, Google and others. And now, the Fed has turned on the spigots through what they call “unlimited quantitative easing”, releasing over six trillion dollars to banks, where most of it has gone into the hands of those who need it least. This has caused no lack of excitement among the Bitcoin community, who have taken the “money printer goes BRRR” meme to the next level as they build the narrative of bitcoin as a way to defund the state, short (as in stocks) governments, and opt out of the traditional central-bank-based fiat system. I count myself among them, but I’ll save that for a future installment.

That said, data from LinkedIn does show that hiring is up for tech workers. Platform firms that enable developers and entrepreneurs to build on top of their systems are on somewhat of a spree lately. Twilio, a text messaging provider, Stripe and Square, two internet payment firms, as well as GitLab and GitHub, built around a popular software development tool, are all rapidly scaling up to meet demand. And there seems to be no shortage of software developer or engineering jobs from what I see, especially for those in government-adjacent industries or those with heavy math and science qualifications, like artificial intelligence and data analysis.

It also appears that most of the healthcare industry, especially the mental health profession, will be safe from any negative economic impacts of COVID. There was previously no shortage of people needing treatment for depression or post-traumatic stress disorder, numbers that are rising as people deal with the stress of the pandemic. Indeed, there are a number of platform companies offering telemedicine for therapy and non-emergency services, and they seem to be seeing uptake. I myself am currently involved with one.


Let’s turn our focus to the technical side of our new normal.

Perhaps the biggest early winner among the remote-work enabled companies was Zoom, which became synonymous with video conferencing in the weeks following lockdown. Then there was the inevitable backlash, following both privacy and security concerns around Zoom’s default settings and misleading encryption claims. Amazon has been a big winner, both because of quarantine-induced shopping as well as cloud-driven usage on AWS. Google Cloud and Microsoft’s Azure are no doubt doing well also, and there are indications that Apple may be trying to enter the space as well.

Microsoft has been in a bit of a battle recently as well, after the CEO of Slack, a remote workspace collaboration service, claimed that Microsoft was obsessed with “killing us” and announced an integration partnership with Amazon AWS. I’ve been using both Teams and Slack for some time now, in different roles, and I don’t think either is in any immediate danger. I’ve got theories about what type of companies choose one or the other: Slack for more tech-focused firms, especially development houses, and Teams for those that already rely on other Microsoft services such as Exchange Online or SharePoint. I don’t have any hard evidence on that yet, and it may be that a lot of people, like me, will continue to use both for various teams.

My day-to-day responsibilities involve managing Microsoft Office 365 cloud services for clients. We’ve been pushing them off of older SMTP/IMAP services to Exchange Online for several years now, since it’s just plain better. What amazes me though is that most of our clients have no idea of the myriad other services that come bundled with O365 these days. In fact, I have a hard time keeping up myself. Besides Exchange, Teams and SharePoint are probably the most prominent, and the ones that we push the most, but there are a number of other useful apps bundled with O365, like Forms, Flow and Planner. The problem with O365 is that there are so many product offerings within it that it’s impossible to keep up unless you dedicate a lot of time and resources to it. The issue is multiplied with Azure and AWS, each of which has dozens of services.

I’ve spent the last dozen years of my career building and managing business networks and servers and supporting end users and their equipment, what’s known as managed services in the industry. I realized several years ago, as the app economy took over, that these types of services would soon become a commodity, and that competition, especially low barriers to entry, would soon drive prices in a race to the bottom. I think I’m being proven right on that point, but what I didn’t really anticipate was that it’s not just managed services providers being hit by this, but essentially all industries and verticals.

The overwhelming drive today among businesses is the tendency toward fewer and fewer employees. Tech tools are increasing employee productivity, and apps and automation are compounding their efforts. Some jobs are more resistant to this than others, but the number of administrative staff needed to manage a large organization is rapidly diminishing. As I noted last April:

Blockbuster, at its height in 2004, employed 84,300 workers with a $5 billion market cap. Today Netflix is valued at $162 billion with only 5,400 employees. Instagram had just 13 employees when it was acquired by Facebook for a $1 billion valuation in 2012.

I wrote about businesses as operating systems last month, so I’ll not repeat myself wholesale, save to say that successful businesses in the next year or two are going to be the ones that figure out how to utilize these digital tools to allow them to scale. Even currently sustainable businesses with no interest in growth are going to be forced to deploy them, as the pressure to become more lean builds up from competitors that do. My focus remains on exploring how to deploy these tools in a way that helps my clients streamline their operations. My working hypothesis right now is that businesses that do will survive and thrive, and those that don’t will languish and die.

We’ve seen how quickly the world can change. There’s no going back.

Git-ing it done

Roll your own GitLab

I had trouble falling asleep last night. Younger crawled into our bed just as I was dozing off and kept squirming, so I slept in her bed. It faces east, so I woke up at five and tried to go back to sleep. I heard Elder up, so I got up and started the day. She’s sitting across the room from me, looking up “Valentine’s Day” gift ideas for the boy in our quarantine bubble down the street. Her sister has been ribbing her about it for days now.

One of our Zombie, LLC clients wants help standing up an internal GitLab server. It got me thinking, so I went ahead and set up a GitLab Docker instance on my downstairs Ubuntu server. Do the job you want, as the saying goes, so I figured it was good practice and worth the time. Plus it only took about fifteen minutes. The main problem I ran into was an SSH conflict with the existing service on the host, and since it appears that modifying the port config on an existing container requires stopping the entire Docker daemon, I just deleted the container and started over. I’ll probably move SSH if I ever do a real deployment, but here at the house the HTTP functionality is enough.
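
For reference, the standard single-container setup, with SSH remapped out of the way of the host’s sshd, looks roughly like this (the hostname and published ports are my choices, not gospel):

docker run -d --name gitlab \
  --hostname gitlab.dahifi.internal \
  -p 80:80 -p 443:443 -p 2222:22 \
  -v gitlab_config:/etc/gitlab \
  -v gitlab_logs:/var/log/gitlab \
  -v gitlab_data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest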

There’s also the mail issue. I didn’t want to use the root account to set up my repos, but the workflow around new accounts wants to send an activation email. I tried installing sendmail on the host, but the password reset didn’t work. I doubt it will work without a publicly routable dynamic DNS entry back to the server or proper SMTP services, which I don’t want to mess with right now. Thankfully I found a password change form in the admin interface that didn’t require knowing the old password, and got up and running.

I am nowhere near as strong with my Linux management skills as I am with Windows, where everything is pre-packaged and somewhat unified. I can stand up domain.local services lickety-split, and have a library of PowerShell scripts to set up AD, DNS, and DHCP services within a domain. I have never actually taken the time to set one up at home, though that point may soon be approaching. I’ve been wanting to investigate the use of Ubuntu server as an alternative or supplement to Windows-based AD services, but part of me is skeptical that such a setup is even viable for workstation authentication and services. But I digress. The point I’m trying to make here is that I’ve always been in awe of Unix sysadmins, ever since I worked at an internet service provider back in the late 90’s and watched our systems guy pop in and out of terminal shells like a wizard. I’ve never felt adequate in that regard.

I made some good progress yesterday working on the WordPress project, and have started converting the client’s site over to the new theme. I’m going over the demo site, examining the Bakery build they’ve got set up, and recreating it using the client’s assets. This allows me to get a bit more familiar with the framework that the theme author is using, and hopefully glean some best practices at the same time. It’s a two steps forward, one step back process. There are some strange bugs that have popped up. Activating WooCommerce seems to bring the site down completely, as does changing the theme back to the original. Then at one point, while I was working on the new header, the previews stopped working completely and would only throw 404 errors. The changes still work on the actual site, so I had to make do while I made edits.

Usual best practice for WordPress development and git repos is to exclude the entire WordPress directory except for whatever theme and custom plugin you’re developing, but since we’re working on an entire site in this case, I’ve added the entire WordPress directory and associated SQL database files. The wp-content/uploads directory is mounted outside the container, along with plugins and themes. I haven’t pulled this repo on another machine yet, so I don’t know if it’s going to work. My main concern is how I’m grabbing the database. Managing PostgreSQL during my Django projects has always been a bit of a pain, as I never learned how to incorporate it into my source control. I’ll have to spend some time correcting this deficiency.
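
For comparison, the conventional theme-only .gitignore uses a whitelist pattern rooted at the WordPress directory, something like this (the theme name is a placeholder):

# Ignore everything in the WordPress root...
/*
!/wp-content/
/wp-content/*
!/wp-content/themes/
/wp-content/themes/*
# ...except the theme under active development
!/wp-content/themes/my-custom-theme/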

Here is a look at the Docker Compose file I am using for my development setup. The SQL mount /docker-entrypoint-initdb.d/backup_to_load.sql gets imported when the container is first created; I assume that it’s ignored when pulling the SQL data from source. We shall soon find out. Also, I haven’t solved the file permissions issues that happen when trying to edit things like the wp-config.php file. I’ll have to save that for a later time.

version: '3.8'
services:

  wordpress:
    container_name: 'local-wordpress'
    depends_on:
      - db
    image: 'wordpress:latest'
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wordpress_password
      WORDPRESS_DB_NAME: wordpress_db
    volumes:
      - "./Wordpress:/var/www/html"
      - "./plugins:/var/www/html/wp-content/plugins"
      - "./themes:/var/www/html/wp-content/themes"
      - "./uploads:/var/www/html/wp-content/uploads"

  db:
    container_name: 'local-wordpress-db'
    image: 'mysql:5.7'
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - './data/mysql:/var/lib/mysql'
      - './data/localhost.sql:/docker-entrypoint-initdb.d/backup_to_load.sql'
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wordpress_password

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
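
Bringing the stack up is the usual routine; once it’s running, the site answers on port 80 and Adminer on 8080 for poking at the database directly:

docker-compose up -d
# WordPress: http://localhost
# Adminer:   http://localhost:8080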

Digital Addiction

Reclaiming my brain from the pull of the Feed.

So I deleted Twitter off my phone yesterday. I really did it. It just took a chapter or two of Digital Minimalism to convince me that I needed a break.

Getting rid of Facebook on my phone about eighteen months ago was one of the healthiest things that I’ve ever done. It was such a time suck, and I spent way too much time on the platform arguing with people. On the one hand, it did lead to me writing quite a bit, and probably led to my political career, but between the toxic people that I had connections with on there and all of the privacy problems that were going on, it was just too much. I had to leave. Given the Cambridge Analytica scandal and all the other bad news about Zuckerberg and the way they manage things over there, I’ve had no desire to go back. I’ve logged on a few times to deal with some messages or check on some family members, but I don’t browse the feed at all.

I always considered Twitter a bit different, since I was curating my feed, and it wasn’t just random friend-of-friend connections. Just because someone wanted to follow me, I didn’t have to follow them, or vice versa. I still see Twitter as a source of news and information, and being able to remain pseudonymous was part of the main draw as well. Still, I spent way too much time on it, picking up my phone whenever I was idle: watching TV shows with the family, sitting out on the deck, or out somewhere waiting in public.

So I removed it Sunday morning and went about my day. The absence was felt immediately. I found my phone in my hand throughout the day, and found myself wondering why I was holding it. Then I realized that the habit was still there, but I had short-circuited it with the app gone. It happened several times during random moments, like waking from a dream. I took the kids to a nature park to get out for an hour or two, and felt the urge to pull my phone out while the kids were finishing their lunch. No need. I set the slip and slide up for the girls outside, and there was that habit again. Nothing to do. Watching a movie after dinner, sitting on the couch, I’d always be checking my feed. Instead, I worked on the Sunday crossword.

Today’s going to be interesting, since I don’t have the same kind of blocks set up on my workstations. There are tools out there that will whitelist or blacklist certain sites on a timer; I’ve heard of people using them to make sure they get their work done, but I never went that far with it. There’s lots of downtime during the day, when I’m waiting for a download or some sort of progress bar, when I’d pull up Twitter and browse the feed. That’s going to be the real test. I wonder if I can redirect that energy to something productive, like doing a lesson on LinkedIn Learning or FreeCodeAcademy, or doing one of the competitive coding challenge sites? I have been wanting to take a look at Rust…

I do have a project to finish that is going to take several weeks of deep work. I’m really going to have to delve into WordPress’s innards and figure out how the theme system works, then actually develop a design for a site. I had been attempting to figure out how this site’s current theme had been developed, but it’s such a mess, and I don’t know if I have it in me. All of the site’s functionality was just dumped into WordPress’s TwentySixteen theme, without even a child theme set up. And the dev hardcoded all of the scripts for Google Analytics and everything else directly in the template files. It’s got fifty-four plugins, and trying to figure out which ones are needed for the existing site is a slog.

Anyways. There was one moment yesterday when I desperately wished I still had Twitter on my phone. I was driving the kids to the aforementioned nature park, travelling down a two lane divided highway, when there was some sort of traffic slowdown. There was a car pulled off to the right just before an onramp. As I passed it I thought we were clear, but the cars on my left were still slowing up. There, up ahead, was a black man on a horse, just trotting his way down the highway. And there, the perfect tweet formed in my mind: “Is it legal to ride a horse on the parkway? Asking for a friend.”

Well, maybe not. But the next few days will be an interesting experiment to see what happens when I reclaim my brain. Will it unlock my creative superpowers, or have astonishing effects on my mental health and well-being? Probably nothing that dramatic. Being in the moment certainly won’t hurt, and redirecting that nervous energy somewhere else will most likely be helpful.

Here we go.