Private, internal blog for the family


Maintaining privacy for your kids by running a private WordPress instance in your home network

Well, I may have finally lost my mind. I quit Facebook over a year ago, but one of the things I do miss is the throwback posts that pop up in your feed with pictures and posts from one or five years ago. It’s great for those “oh look how little Elder is in that picture!” kinds of moments. I don’t feel comfortable sharing pictures of my kids on my normie feed anymore, but I still want to do more than just have a folder sitting on my computer somewhere that gets looked at once in a blue moon. Plus, Elder is getting more involved with using the computer, and I wanted to give her a chance to express her creativity without the risk of letting her have a YouTube account. So I did the only thing any sensible tech dad would do: I set up an internal WordPress site for the family to use.

Setting up an internal domain

I’m pretty proficient with Windows domain controllers, and I manage a lot of contoso.local domains that aren’t externally routable. I decided to do the same thing here, so that the site can only be accessed from our local network. That way we can easily reach it from any of our personal devices, and can potentially let friends and family look at it when they visit and join our network.

BIND is the go-to DNS server for Ubuntu, so I started by installing and configuring it. However, I quickly got lost in a maze of .conf files and StackExchange posts trying to get it working, so I dumped it and installed dnsmasq instead. Dnsmasq relies on simple hosts files instead of the more complicated zone files that BIND uses, which is more than enough for what I need at the house.

I set up my /etc/dnsmasq.conf file as follows, using this guide:

# Don't forward plain names or non-routed addresses
domain-needed
bogus-priv

# Use OpenDNS, not ISP's DNS router
server=208.67.222.222
server=208.67.220.220

# Replace second IP with your local interface
listen-address=127.0.0.1,192.168.1.123

expand-hosts
domain=dahifi.internal

Then I set up my /etc/hosts file with the records I need, pointing to the downstairs server and my development workstation.

127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6

192.168.1.123   dahifi.internal
192.168.1.123   homeboy.dahifi.internal   homeboy
192.168.1.102   oberyn.dahifi.internal    oberyn
192.168.1.123   elder.dahifi.internal   berkley

After saving the changes, I restarted dnsmasq with systemctl restart dnsmasq. From there I was able to validate the configuration on the server and on external machines using nslookup. Once I was comfortable that things were working, I added my internal server’s IP to my router’s DHCP DNS scope and refreshed the client leases on a couple of devices to make sure they picked it up.
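A couple of quick checks like these, pointing nslookup directly at the dnsmasq box, confirmed that the records resolve from other machines:

nslookup homeboy.dahifi.internal 192.168.1.123
nslookup oberyn.dahifi.internal 192.168.1.123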

Note about .local domains

I’ve never had issues with domains ending in .local on my Windows business networks, but Ubuntu may have a multicast DNS service called Avahi running, which hijacks any FQDN ending in .local. Interestingly, this service was missing from the actual Ubuntu Server install, which muddied my troubleshooting, since different machines behaved differently. The simplest way to get up and running was just to change our internal domain from dahifi.local to dahifi.internal. Any other non-routable TLD should work as well.
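If you want to check whether Avahi is running on a given box, a quick status check will tell you:

systemctl status avahi-daemon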

Additionally, Ubuntu ships with systemd-resolved, a name resolution manager. It runs a caching stub resolver at 127.0.0.53, which interfered with my troubleshooting as well: my internal domain kept getting routed to my ISP’s search page until I ran sudo service systemd-resolved restart to clear the cache.
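You can see what the stub resolver is doing, including which upstream servers it’s actually forwarding to, with resolvectl (or systemd-resolve --status on older releases):

resolvectl status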

Multisite Docker setup using Nginx Proxy

The SSD Nodes site has a nice write-up of how to run multiple websites with Docker and Nginx, which I was able to use to get our WordPress site up and running. I prefer putting everything in a docker-compose file. The only prerequisite is creating the network:

docker network create nginx-proxy

docker-compose.yml:

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

And a second file, blog-compose.yml, for the blog itself:

version: "3"

services:
   db_node_blog:
     image: mysql:5.7
     command: --default-authentication-plugin=mysql_native_password
     volumes:
       - ./blog_db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: root_password
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress_user
       MYSQL_PASSWORD: wordpress_user
     container_name: blog_wp_db

   wordpress:
     depends_on:
       - db_node_blog
     image: wordpress:latest
     volumes:
       - ./blog_wp_data:/var/www/html
     expose:
       - 80
     restart: always
     environment:
       VIRTUAL_HOST: dahifi.internal
       WORDPRESS_DB_HOST: db_node_blog:3306
       WORDPRESS_DB_USER: wordpress_user
       WORDPRESS_DB_PASSWORD: wp_password
     container_name: blog_wp
volumes:
    blog_db_data:
    blog_wp_data:


networks:
  default:
    external:
      name: nginx-proxy

You’ll notice that the site the blog is published to is identified by the VIRTUAL_HOST environment variable. In this case we’re still pointing at the “top level” domain, dahifi.internal, and not blog.dahifi.internal. This is due to issues we were having with subdomain resolution on the Nginx proxy, something we’ll have to work on later. Originally we had our internal GitLab instance running on this host at port 80, and had to take it down for Nginx to work. My next step is to make sure that subdomains work properly, and then reconfigure GitLab and this blog to run under something like git.dahifi.internal.
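In theory, once subdomain resolution is sorted, putting GitLab behind the proxy should just be a matter of joining its container to the nginx-proxy network and giving it its own VIRTUAL_HOST, plus a matching record on the dnsmasq side. Something like:

    environment:
      VIRTUAL_HOST: git.dahifi.internal
    expose:
      - 80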

Image uploads

One additional change I needed to make was to raise the default upload file size limit of 2M. Following along with this setup tutorial, I added a - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini line to the WordPress container’s volumes, then added the following to the uploads.ini file:

memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M
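For reference, the wordpress service’s volumes section in blog-compose.yml ends up looking like this:

  wordpress:
    # ...rest of the service as above...
    volumes:
      - ./blog_wp_data:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini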

Then I rebuilt the container with docker-compose down and up, making sure to specify my blog-compose.yml file.
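With the separate compose file, that works out to:

docker-compose -f blog-compose.yml down
docker-compose -f blog-compose.yml up -d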

I still encountered errors trying to upload. WordPress kept throwing:

Unexpected response from the server. The file may have been uploaded successfully. Check in the Media Library or reload the page.

I couldn’t find any errors in WordPress’s Apache instance, and PHP looked fine. I eventually found a message in the Nginx container log: client intended to send too large body. It seems that Nginx has its own limits. I added client_max_body_size 64M directly to the /etc/nginx/nginx.conf file, then reloaded it with service nginx reload. Problem solved! Not only can I upload files directly through the web interface, but I was also able to add the internal site to the WordPress app on my phone and upload images straight from it.
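The directive just needs to sit inside the http block of the proxy container’s /etc/nginx/nginx.conf:

http {
    # ...existing settings...
    client_max_body_size 64M;
}

One caveat: since that edit lives inside the running container, it will be lost if the proxy container is ever recreated. Mounting a small .conf file with the directive into /etc/nginx/conf.d/ would make it permanent.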

Elder is already working on writing a story in it now, and I’m looking forward to using this site as an internal bulletin board for the family. Let’s see what happens!

Git-ing it done


Roll your own GitLab

I had trouble falling asleep last night; Younger crawled into our bed just as I was dozing off and kept squirming, so I went and slept in her bed. It faces east, so I woke up at five and tried to go back to sleep. Then I heard Elder up, so I got up and started the day. She’s sitting across the room from me, looking up Valentine’s Day gift ideas for the boy in our quarantine bubble down the street. Her sister has been ribbing her about it for days now.

One of our Zombie, LLC clients wants help standing up an internal GitLab server. That got me thinking, so I went ahead and set up a GitLab Docker instance on my downstairs Ubuntu server. I figure it’s good practice, and “do the job you want” has always been good advice, so setting it up was worth the time. Plus it only took about fifteen minutes. The main problem I ran into was an SSH port conflict with the existing service on the host, and since it doesn’t appear that you can modify the config of an existing container without stopping the Docker daemon, I just deleted the container and started over. I’ll probably move SSH if I ever do a real deployment, but here at the house the HTTP functionality is enough.
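For anyone playing along, the run command was something close to this; the paths and the remapped SSH port are just my choices, not gospel (at home I ended up dropping the SSH mapping entirely):

docker run -d --name gitlab \
  --restart always \
  -p 80:80 -p 443:443 -p 2222:22 \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/logs:/var/log/gitlab \
  -v /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest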

There’s also the mail issue. I didn’t want to use the root account to set up my repos, but the workflow around new accounts wants to send an activation email. I tried installing sendmail on the host, but the password reset didn’t work, and I doubt it will without a publicly routable dynamic DNS entry back to the box or a proper SMTP service, which I don’t want to mess with right now. Thankfully I found a password change form in the admin interface that didn’t require knowing the old password, and got up and running.

I am nowhere near as strong with my Linux management skills as I am with Windows, where everything is pre-packaged and somewhat unified. I can stand up domain.local services lickety-split, and have a library of PowerShell scripts to set up AD, DNS, and DHCP services within a domain. I’ve never actually taken the time to set one up at home, though that point may soon be approaching. I’ve been wanting to investigate Ubuntu Server as an alternative or supplement to Windows-based AD services, but part of me is skeptical that such a setup is even viable for workstation authentication and services. But I digress. The point I’m trying to make is that I’ve been in awe of Unix sysadmins ever since I worked at an internet service provider back in the late 90s and watched our systems guy pop in and out of terminal shells like a wizard. I’ve never felt adequate in that regard.

I made some good progress yesterday on the WordPress project, and have started converting the client’s site over to the new theme. I’m going over the demo site, examining the Bakery build they’ve got set up, and recreating it using the client’s assets. This lets me get a bit more familiar with the framework the theme author is using, and hopefully glean some best practices at the same time. It’s a two-steps-forward, one-step-back process, and some strange bugs have popped up. Activating WooCommerce seems to bring the site down completely, as does changing the theme back to the original. Then at one point, while I was working on the new header, the previews stopped working entirely and would only throw 404 errors. The changes still showed up on the actual site, so I made do while editing.

The usual best practice for WordPress development is to exclude the entire WordPress directory from the git repo except for whatever theme and custom plugin you’re developing, but since we’re working on an entire site in this case, I’ve added the whole WordPress directory and the associated SQL database files. The wp-content/uploads directory is mounted outside the container, along with plugins and themes. I haven’t pulled this repo on another machine yet, so I don’t know if it’s going to work. My main concern is how I’m grabbing the database; managing PostgreSQL during my Django projects has always been a bit of a pain, as I never learned how to incorporate it into source control. I’ll have to spend some time correcting this deficiency.

Here is a look at the Docker Compose file I’m using for my development setup. The SQL dump mounted at /docker-entrypoint-initdb.d/backup_to_load.sql gets imported when the container is first created; the MySQL image only runs those init scripts against an empty data directory, so I assume it’s ignored once ./data/mysql is populated from source. We shall soon find out. Also, I haven’t solved the file permissions issues that happen when trying to edit things like the wp-config.php file. I’ll have to save that for a later time.

version: '3.8'
services:

  wordpress:
    container_name: 'local-wordpress'
    depends_on:
      - db
    image: 'wordpress:latest'
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wordpress_password
      WORDPRESS_DB_NAME: wordpress_db
    volumes:
      - "./Wordpress:/var/www/html"
      - "./plugins:/var/www/html/wp-content/plugins"
      - "./themes:/var/www/html/wp-content/themes"
      - "./uploads:/var/www/html/wp-content/uploads"

  db:
    container_name: 'local-wordpress-db'
    image: 'mysql:5.7'
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - './data/mysql:/var/lib/mysql'
      # imported automatically on first run, while /var/lib/mysql is still empty
      - './data/localhost.sql:/docker-entrypoint-initdb.d/backup_to_load.sql'
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wordpress_password

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Hewing bits and bytes

Why am I working so hard?

After publishing last night’s post, I made a little headway on one of my projects, figuring out how to mount a SQL dump into a MySQL Docker image so that it gets loaded automatically when the container spins up. Just one more little win toward accomplishing my task. Now I just need to tackle the way I have WordPress deployed, and I can begin working on the project for real. I’m taking my time with this. All of the learning and research I’m doing now isn’t on the client’s time, it’s mine, and it’s the kind of learning I love.

Getting a handle on Docker means I don’t have to run all this stuff on my local machines. I can start culling the packages I’ve loaded over the years for this project or that; things like Node dependencies, Ruby, and Postgres no longer have to bulk up my system. Pop, here’s a container. Pop, there it goes. I went through my staging server a few days ago and started cleaning shop, removing abandoned projects. Goodbye, rm -rf *pennykoin*, and so long.

I’m still reading Fluent Python, about a half hour before bed. I finally have a good grasp on decorators. My eyes glazed over on coroutines, but I think I’m ready to add threading to my value averager app. I’ve only got a couple of chapters left: one on asyncio, which I desperately need to master, and another on one of my favorite subjects, metaprogramming.

I’ve been reading Fluent Python for about twenty minutes right when I climb into bed. It’s on the iPad, and even with the brightness turned all the way down it’s still bad for rest, so I usually wind up reading a real book afterward. Right now it’s Digital Minimalism, and last night there was a section about Henry David Thoreau, starting with his time building his cabin at Walden Pond, before he wrote his book. Just how does one build a cabin using just an axe? Anyways, the point here, and one I never knew before, is that Walden is really about using time as the true unit of account. What use is earning a bunch more money if the cost in time to earn it is so high? And for what?

It’s not that I haven’t heard the idea of time as money before, or rather of trading time for money; it’s very prevalent in the things I read and hear. But realizing that Thoreau was writing about it some one hundred and fifty years ago makes me realize how little things have changed. I don’t know why I should be surprised; I’m sure Marcus Aurelius says similar things in his diaries. I think my point is that I wasn’t expecting to hear it. Here I was, trying to convince myself that I should delete Twitter off my phone for a month, and here’s Cal Newport, via Thoreau, asking “why are you working so hard, you sap?”

Thoreau didn’t have any children, though, so I guess I can say that’s part of the reason I grind, although it’s really not the only reason. I like figuring things out, and it just so happens that the things I’ve figured out how to do enable me to earn a comfortable living. Still, there’s some sort of drive to build something, a legacy, if you will, coupled with a mild regret that I should have more to show for the life I’ve lived these past forty-one years. One of my grandfathers built a house. All I have of the other is a stained glass lamp, sitting next to one of my daughters’ beds. That, and memories of model trains in a basement, and playing a flight simulator on an old Tandy PC back in the 80s.

And maybe that latter point is the crux of minimalism. In the end, it’s the memories that matter. Not all of us are going to write lasting works of fiction or build cathedrals that will be finished long after our deaths and stand for centuries. Today, all I can do is love those around me and tinker on my keyboard, changing the world around me bit by bit. Who knows, maybe Bitcoin will succeed, allowing me to leave generational wealth for my grandkids, directly or indirectly. Maybe one of my other projects will take off and grant me a minimum viable income so that I’m not forced to work another day in my life.

Maybe I’m being fatalistic; maybe this is just my monkey mind sowing doubt, preparing me for failure. I’m not sure, but it doesn’t feel like it. I think it’s just recognition that I’ve got too many things distracting me, things I need to let go of and remove from my life.

But right now, I hear the pitter patter of little feet upstairs, which means it’s time for me to enjoy my Sunday.

Baby slaps


Inch by inch, life’s a cinch

Well, I sure did step up my game. I guess I was anxious about writing a big long post for today’s Substack, so I wrote it last night. Someone put out a message on the company Slack yesterday about a suspicious email, and I almost fell for it. I figured it warranted an exposition, so I spent an hour or so last night writing it up after everyone went to bed. Writing at night, undisturbed, is quite different from my morning writing, when the kids are getting up and wandering into the room every five minutes. I’m not sure I like having a deadline every night like that, though, so we’ll keep it limited to Thursday nights. I think I’ll even put it on my calendar just to keep it routine. Done.

I didn’t make any progress on my WordPress client last night; instead, I’ve started reading through the Docker documentation so that I can figure out what the hell I’m doing with my setups. I know a couple of commands, but it feels like I’m an infant flailing my limbs trying to grab a rattle and hitting myself in the head instead. I don’t understand half of what I’m reading when I look at some of the image notes on Docker Hub, and trying to read through some of the discussions on GitHub issues is even worse. So I’m just reading through the best practices documentation to try to get a sense of how things work between Dockerfiles and Docker Compose, so that I can load a SQL backup automatically and access my PHP files both in my development environment and in the container’s web service.

And the Docker documentation is pretty interesting. It’s scattered with little goodies like links to Twelve-Factor Apps and here documents, which are something I didn’t even know about. And if there’s one place I need to step up, it’s my bash skills.
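Here documents in particular were new to me: you can feed a multi-line block straight into a command’s stdin, which turns out to be handy all over shell scripts and Dockerfiles. A trivial sketch:

# write a small file inline, no editor required
cat <<EOF > /tmp/demo.conf
server=208.67.222.222
domain=dahifi.internal
EOF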

Anyways, I get to keep this post short since today’s a double post day. I thought about writing something for an off day, but figured the ritual of writing morning pages was more important than taking a day off. I’m starting to gather ideas faster, and my writing output has stepped up quite a bit. So I’m going to wrap it up and make some changes to today’s big post before I blast it out to several hundred unknowing Substack subscribers.

Now, time to go grab that rattle by the horns.

Django on Docker development challenges

I finally had some time to do some deep work yesterday and got my university project’s Django instance up and running. It took way too long. The local settings for Cookiecutter Django work easily with a Docker setup, but deploying to a production instance took me by surprise. There were several issues I had to work through.

Cloud storage: I had inadvertently set up my project with the cloud storage settings for AWS. We’re not using AWS or Google Cloud Services for a CDN, since this is just a small prototype, and since we didn’t have AWS bucket credentials, the Django service wouldn’t start. I had to replay my settings file to recreate the project with cloud services set to none. I attempted to use a fork of CC Django that uses Nginx as the media server, but had other issues with it and decided we just won’t have media for this prototype.
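The replay itself is a cookiecutter feature: it re-runs the template with your previously entered answers, which are saved as JSON under ~/.cookiecutter_replay/ and can be edited before regenerating:

cookiecutter --replay gh:pydanny/cookiecutter-django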

Traefik: The production settings put Django behind a Traefik load balancer, which is configured to use Let’s Encrypt for certificate validation. Leaving this section blank causes Traefik to fail, so I commented out the SSL validation section of the configuration file. It currently throws a warning about a nonexistent validator, but this is the only way to get it to serve pages. I’ll register with Let’s Encrypt eventually, but I’m not sure I’ll be able to procure a cert for our web server given that it’s a host on our university’s CS domain.

Development environment issues: Perhaps the most frustrating problems I’m having are around the way our environment is set up. I work off campus, yet our resource host is only available inside the campus network. I’m using our CS GitLab server as a code repo, but I haven’t set up any CI jobs to deploy the code yet. In order to get a terminal on the server, I have to SSH to our public CS server, then SSH from there to the resource server. In order to view the Django website, I have to open a remote desktop (RDS) session. Not ideal, but I’ve yet to optimize the setup.
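A ProxyJump entry in ~/.ssh/config at least collapses the two hops into one command; the hostnames below are placeholders for our actual servers:

Host cs-jump
    HostName public-server.cs.example.edu
    User myuser

Host resource
    HostName resource-server.cs.example.edu
    User myuser
    ProxyJump cs-jump

After that, ssh resource handles the intermediate hop automatically.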

And troubleshooting these various problems on our production server comes with its own set of challenges. The git repo is synced to our Docker host, and the instance is deployed via docker-compose commands, so in order to update the code I have to cycle through down, build, and up commands to resync it. Hopefully I’ll be able to set up PyCharm’s Docker remote capabilities to edit the code directly within the Docker instance. We had planned to set up multiple containers in order to run a test server, but that’s going to be very difficult on a single host.
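The cycle looks roughly like this (cookiecutter-django names its production compose file production.yml):

docker-compose -f production.yml down
docker-compose -f production.yml build
docker-compose -f production.yml up -d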

I’ve had other minor issues with production settings not taking effect. It looks like the .env files aren’t loading properly, causing the default local settings to be imported. I had to change the defaults in manage.py, but I assume this may break our local setup.
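For context, the fallback in cookiecutter-django’s manage.py looks roughly like this; I swapped local for production, which is exactly the kind of change that will bite whoever runs the repo locally next:

# manage.py (excerpt): this default only kicks in when DJANGO_SETTINGS_MODULE is unset
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")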

When everything is shaped up, I’ll have one git repo that can be run locally, in test or production, with a CI job that will deploy commits to our Docker container. I’ve got a lot of work to do.