Private, internal blog for the family


Maintaining privacy for your kids by running a private WordPress instance in your home network

Well, I may have finally lost my mind. I quit Facebook over a year ago, but one of the things I do miss is the throwback posts that pop up in your feed with pictures and posts from one or five years ago. It’s great for those “oh look how little Elder is in that picture!” kinds of moments. I don’t feel comfortable sharing pictures of my kids on my normie feed anymore, but I still want to do more than just have a folder sitting on my computer somewhere that gets looked at once in a blue moon. Plus, Elder is getting more involved with using the computer, and I wanted to give her a chance to express her creativity without the risk of letting her have a YouTube account. So I did the only thing any sensible tech dad would do: I set up an internal WordPress site for the family to use.

Setting up internal domain

I’m pretty proficient with Windows domain controllers, and I manage a lot of contoso.local domains that aren’t externally routable. I decided I wanted to do the same thing at home, so that the site could be accessed only from our local network. That way we can easily access it from any of our personal devices, and can potentially allow friends and family to look at it when they visit and join our network.

BIND is the go-to DNS server for Ubuntu, so I started by installing and configuring it. However, I quickly got lost in a maze of .conf files and StackExchange posts trying to get it to work, so I dumped it and installed dnsmasq instead. Dnsmasq relies on simple hosts files instead of the more complicated zone files that BIND uses, which is more than enough for what I need at the house.
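For anyone following along, the swap itself is just a package change; a minimal sketch, assuming the stock Ubuntu repositories:

sudo apt remove bind9          # drop the BIND setup I gave up on
sudo apt install dnsmasq       # pulls in dnsmasq and its default config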

I set up my /etc/dnsmasq.conf file as follows, using this guide:

# Don't forward plain names or non-routed addresses
domain-needed
bogus-priv

# Use OpenDNS, not ISP's DNS router
server=208.67.222.222
server=208.67.220.220

# Replace second IP with your local interface
listen-address=127.0.0.1,192.168.1.123

expand-hosts
domain=dahifi.internal

Then I set up my /etc/hosts file with the records I need, pointing to the downstairs server and my development workstation.

127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6

192.168.1.123   dahifi.internal
192.168.1.123   homeboy.dahifi.internal   homeboy
192.168.1.102   oberyn.dahifi.internal    oberyn
192.168.1.123   elder.dahifi.internal   berkley

After saving changes, I needed to restart dnsmasq: systemctl restart dnsmasq. From there I was able to validate the configuration on the server and on external machines using nslookup. Once I was comfortable that things were working, I added my internal server’s IP to my router’s DHCP DNS scope and refreshed the client leases on a couple of devices to make sure they would work.
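For reference, the validation amounted to a few nslookup queries like the following (hostnames from the /etc/hosts file above; pointing the second argument at the dnsmasq server lets you test it before touching DHCP):

# Query the dnsmasq server directly
nslookup homeboy.dahifi.internal 192.168.1.123
nslookup oberyn.dahifi.internal 192.168.1.123

# After updating DHCP, query with whatever resolver the client picked up
nslookup dahifi.internal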

Note about .local domains

I’ve never had issues with domains ending in .local on my Windows business networks, but Ubuntu may have a multicast DNS service called Avahi running, which hijacks any FQDN ending in .local. Interestingly, this service was missing from the actual Ubuntu Server install, which complicated my troubleshooting. The simplest thing to do to get us up and running was just to change our internal domain from dahifi.local to dahifi.internal. Any other non-routable TLD should work as well.

Additionally, Ubuntu ships with systemd-resolved, a network name resolution service. It runs a caching stub resolver at 127.0.0.53, and it interfered with my troubleshooting as well. My internal domain kept getting routed to my ISP’s search page until I ran sudo service systemd-resolved restart to clear the cache.
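A few commands that helped me figure out what was actually answering queries; exact command names vary a bit between Ubuntu releases, so treat this as a rough checklist rather than gospel:

# Is Avahi (mDNS) running and grabbing .local names?
systemctl status avahi-daemon

# What is systemd-resolved actually using upstream?
systemd-resolve --status        # newer releases: resolvectl status
cat /etc/resolv.conf            # a stock install points at the 127.0.0.53 stub

# Clear resolved's cache when stale answers keep coming back
sudo systemctl restart systemd-resolved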

Multisite Docker setup using Nginx Proxy

The SSD Nodes site has a nice write-up of how to run multiple websites with Docker and Nginx that I was able to use to get our WordPress site up and running. I prefer putting everything in a docker-compose file. The only prerequisite is creating the network:

docker network create nginx-proxy

docker-compose.yml:

version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy
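With that saved, bringing the proxy up by itself is a one-liner (assuming the file above is saved as docker-compose.yml in the current directory):

docker-compose up -d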

And a second file, blog-compose.yml, for the blog itself:

version: "3"

services:
  db_node_blog:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - blog_db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress_user
      # must match WORDPRESS_DB_PASSWORD in the wordpress service below
      MYSQL_PASSWORD: wp_password
    container_name: blog_wp_db

  wordpress:
    depends_on:
      - db_node_blog
    image: wordpress:latest
    volumes:
      - blog_wp_data:/var/www/html
    expose:
      - 80
    restart: always
    environment:
      # nginx-proxy routes requests for this hostname to the container
      VIRTUAL_HOST: dahifi.internal
      WORDPRESS_DB_HOST: db_node_blog:3306
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wp_password
    container_name: blog_wp

volumes:
  blog_db_data:
  blog_wp_data:

networks:
  default:
    external:
      name: nginx-proxy

You’ll notice that the site the blog is published to is identified by the VIRTUAL_HOST environment variable. In this case we’re still pointing to the “top level” domain, dahifi.internal, and not blog.dahifi.internal. This is due to issues we were having with subdomain resolution on the Nginx proxy, and it’s something we’ll have to work on later. Originally, we had our internal GitLab instance running on this host on port 80, and had to take it down for Nginx to work. My next step is to make sure that subdomains work properly, and then reconfigure GitLab and this blog to run under something like git.dahifi.internal.
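Once subdomain resolution is sorted out, the plan looks roughly like the sketch below; the blog.dahifi.internal and git.dahifi.internal names are hypothetical until I actually get it working:

# /etc/hosts on the dnsmasq server
192.168.1.123   blog.dahifi.internal   homeboy
192.168.1.123   git.dahifi.internal    homeboy

# blog-compose.yml, wordpress service
    environment:
      VIRTUAL_HOST: blog.dahifi.internal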

Image uploads

One additional change that I needed to make was to raise the default 2M file upload limit. Following along with this setup tutorial, I added a - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini entry to the WordPress container’s volumes (see the compose snippet below), then added the following to the uploads.ini file:

memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M
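For reference, the volumes section of the wordpress service in blog-compose.yml ends up looking roughly like this after that change:

  wordpress:
    volumes:
      - blog_wp_data:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini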

Then I rebuilt the container with docker-compose down and up, making sure to specify my blog-compose.yml file.
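Since the blog stack lives in its own compose file, that means passing -f explicitly; roughly:

docker-compose -f blog-compose.yml down
docker-compose -f blog-compose.yml up -d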

I still encountered errors trying to upload; WordPress kept throwing:

Unexpected response from the server. The file may have been uploaded successfully. Check in the Media Library or reload the page.

I couldn’t find any errors in WordPress’s Apache instance, and PHP looked fine. I eventually found a message in the Nginx container log: client intended to send too large body. It seems that Nginx has its own limits. I added a client_max_body_size 64M directive directly to the /etc/nginx/nginx.conf file in the proxy container, then reloaded it with service nginx reload. Problem solved! Not only can I upload files directly through the web interface, but I was also able to add the internal site to the WordPress app running on my phone, and can upload images directly from my phone.
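For anyone hitting the same wall, the directive just needs to sit in the http block of the proxy container’s /etc/nginx/nginx.conf; something like:

http {
    # ...existing settings...
    client_max_body_size 64M;
}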

Elder is already working on writing a story in it now, and I’m looking forward to using this site as an internal bulletin board for the family. Let’s see what happens!

Alienware m15 slow resume fix with TLP

I’m happy to report that I finally got my new Alienware m15 properly configured. I got it through work via the Dell Outlet, and immediately wiped Windows off of it and installed Ubuntu. There were some issues. The unit shipped with a 256GB M.2 drive and a 1TB hybrid drive. The 1TB was set as the primary drive, so I had to mess with UEFI/Secure Boot settings to get Ubuntu installed. However, the main problem I had was with the power and sleep/hibernation functions.

The main annoyance was that opening the lid to resume the unit from sleep took upwards of half a minute to do anything. I tried a multitude of ACPI settings in GRUB, pored over logs, and updated to Ubuntu 19 with no fix, and had finally resigned myself to the issue.

This was especially frustrating since the unit this machine was ostensibly replacing, a seven-year-old Dell Latitude, had almost identical hardware and had no problems with power management, although it did have lockup issues related to running Windows in VMM….

This Alienware is quite the power hog, as the battery would only last about an hour on a full charge. Again, quite frustrating given that my Latitude can easily get three hours or more. I installed prime-select to disable the discrete GPU, but it didn’t seem to help. Then a few days ago, as I was typing with it on my lap, I decided to do something about the intense heat coming from the metal heat grate on the bottom that had been toasting my legs. So I installed the TLP advanced power management package. I saw a dramatic decrease in heat and fan activity, and was pleasantly surprised the next time I opened the unit and saw the lock screen almost immediately. Problem solved!
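In case anyone wants to try the same fix, installing TLP on Ubuntu is roughly:

sudo apt install tlp tlp-rdw    # tlp-rdw adds the optional radio device wizard
sudo tlp start                  # apply settings now instead of waiting for a reboot
sudo tlp-stat -s                # confirm TLP is enabled and running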

It’s only moderately improved the battery life though, maxing out around an hour and a half. I may experiment and pull the hybrid drive out to see if that is the culprit. I figure the 9th-gen i7 should be more efficient than my older one, and with the same screen size there isn’t any other component that could be sucking the battery down like this.

Ultimately the issue is Dell’s lack of support for Linux. If I were recommending a laptop for a *nix user, I would probably shy away from Dell. They have dabbled with Ubuntu support in the past, and may still do so for their enterprise server lines, but you’re pretty much on your own if you’re using a desktop or laptop. That said, I haven’t run into many problems with the few deployments I’ve done. My old, old Latitude that I’ve given to my oldest works great, but I would probably go with a brand that is dedicated to supporting Linux if I were going to buy something out of my own pocket.

Hopefully this post will help someone experiencing similar problems. If so, please drop a line in the comments to let me know. Thanks!

Windows VM on Ubuntu

I’ve been slowly converting to Ubuntu over the years. Neal Stephenson’s In the Beginning… Was the Command Line made Linux seem like all the rage when I read it years ago, but I had always been a slave to the GUI. Things started to change a bit when Microsoft started pushing PowerShell. My manager at the time said that it would “separate the men from the boys,” and I’ve been making a push to build out a library of PS scripts to use during Windows Server deployments and migrations.

I’ve been exposed to *nix plenty over the years. My first job after high school was at an ISP, and I remember watching in awe as the sysops guy would bash his way through things to disconnect hung modems or do this or that. I forget exactly when I started actually using it, but I remember setting up LAMP stacks back in the day to run PHP apps like WordPress or MediaWiki when I was working at the Fortune 500 firm. Cryptoassets led me further into that world: compiling wallets from source, deploying mining pools on AWS instances. Computer science courses opened me up to the world of sed and regex. I still haven’t gotten into emacs or vim; I’m not a sadist.

As someone who’s been supporting Windows operating systems for pretty much the past 20 years, one of the realities you live with is having to reinstall the operating system. I did so many during the time that I operated my service center that it was as natural as turning it off and back on again. Luckily I’ve managed to keep a few boxes up and running for several years now (knock on wood), but my primary work laptop hasn’t been so lucky. It’s five-plus years old now and has probably been redone three times. The last time, I went ahead and took the plunge and installed Ubuntu. I still run Windows in a VM since my job relies so much on it, but I’ve become comfortable enough in Ubuntu that it’s becoming my preferred OS.

One issue that I’ve been struggling with on this setup is that from time to time my system will halt. I might be in the VM working on something, or browsing Chrome on the host, and it will just lock up. Sometimes it seems to happen when I open a resource-heavy tab. I don’t know if it’s a resource contention issue between host and guest, but it’s been annoying, though not bad enough that I can’t just reboot and keep going.

Today has been a different story. 

Earlier I noticed that the system was starting to become unstable. Fans were whirring and Chrome was starting to hang intermittently, so I went ahead and restarted the guest OS. Only this time it wouldn’t come back up; it was stuck in automated system repair. I downloaded a boot disk and tried to mount the system, but it wouldn’t even get that far. Finally I said ‘screw it’, unmounted the disk and started creating a new one. That’s when I started getting into raw vs. vpc vs. qcow2, IDE vs. virtio, poring over CPU and RAM allocations. I spent hours trying to get the disk to come back up. I think it had something to do with the format I used when I originally set up the disk. It might have been a swap issue or something, but since I’m running it off of virtio now it seems more stable. Time will tell.

As for the original vhd, I eventually copied the data file off of the local file system onto an external drive, and was able to fire it up attached to another Win10 machine with no problem. I deleted the original on my laptop, copied the copy back, and was able to get it to spin back up. I think it may have something to do with the fixed allocation of the vpc file vs. the dynamic sizing of the qcow format.
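If anyone ends up in the same spot, checking and converting the image format is straightforward with qemu-img; the file names here are just placeholders for my actual disk image:

qemu-img info win10.vhd                                      # reports the detected format (vpc, raw, qcow2, ...)
qemu-img convert -p -f vpc -O qcow2 win10.vhd win10.qcow2    # copy the data into a dynamically sized qcow2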

Today has been a reminder to check backups on all of my systems. Thankfully CrashPlan has Linux support now, so I’m going to get that deployed ASAP.