Maintaining privacy for your kids by running a private WordPress instance in your home network
Well, I may have finally lost my mind. I quit Facebook over a year ago, but one of the things I do miss is the throwback posts that pop up in your feed with pictures and posts from one or five years ago. It’s great for those “oh look how little Elder is in that picture!” kinds of moments. I don’t feel comfortable sharing pictures of my kids on my normie feed anymore, but I still want to do more than just have a folder sitting on my computer somewhere that gets looked at once in a blue moon. Plus, Elder is getting more involved with using the computer, and I wanted to give her a chance to express her creativity without the risk of letting her have a YouTube account. So I did the only thing any sensible tech dad would do: I set up an internal WordPress site for the family to use.
Setting up an internal domain
I’m pretty proficient with Windows domain controllers, and I manage a lot of contoso.local domains that aren’t externally routable. I decided to take the same approach here, so that the site can only be accessed from our local network. That way we can easily reach it from any of our personal devices, and can potentially let friends and family browse it when they visit and join our network.
BIND is the go-to DNS server for Ubuntu, so I started by installing and configuring it. However, I quickly got lost in a maze of .conf files and StackExchange posts trying to get it to work, so I dumped it and installed dnsmasq instead. Dnsmasq relies on simple hosts files instead of the more complicated zone files that BIND uses, which is more than enough for what I need at the house.
I set up my /etc/dnsmasq.conf file as follows, using this guide:
```
# Don't forward plain names or non-routed addresses
# Use OpenDNS, not ISP's DNS router
# Replace second IP with your local interface
```
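Those comments map onto standard dnsmasq options. As a sketch, the full file would look something like this, assuming OpenDNS resolvers and this server’s LAN address (192.168.1.123, matching the hosts file below):

```
# Don't forward plain names or non-routed addresses
domain-needed
bogus-priv

# Use OpenDNS, not ISP's DNS router
server=208.67.222.222
server=208.67.220.220

# Replace second IP with your local interface
listen-address=127.0.0.1,192.168.1.123
```

With `listen-address` set this way, dnsmasq answers queries both locally and from the rest of the LAN.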
Then I set up my /etc/hosts file with the records I need, pointing to the downstairs server and my development workstation.
```
127.0.0.1     localhost.localdomain localhost
::1           localhost6.localdomain6 localhost6
192.168.1.123 homeboy.dahifi.internal homeboy
192.168.1.102 oberyn.dahifi.internal oberyn
192.168.1.123 elder.dahifi.internal berkley
```
After saving changes, I needed to restart dnsmasq with `systemctl restart dnsmasq`. From there I was able to validate the configuration on the server and on external machines using nslookup. Once I was comfortable that things were working, I added my internal server’s IP to my router’s DHCP DNS scope and refreshed the client leases on a couple of devices to make sure they picked it up.
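For example, a quick sanity check from another machine would look something like this (the second argument tells nslookup which DNS server to query, so this works before the router change takes effect):

```
nslookup homeboy.dahifi.internal 192.168.1.123
nslookup google.com 192.168.1.123
```

The first confirms the local records resolve; the second confirms forwarding to the upstream resolvers still works.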
Note about .local domains
I’ve never had issues with domains ending in .local on my Windows business networks, but Ubuntu may have a multicast DNS service called Avahi running, which hijacks any FQDN ending in .local. Interestingly, this service was missing from the actual Ubuntu server install, which interfered with my troubleshooting. The simplest way to get us up and running was just to change our internal domain from dahifi.local to dahifi.internal. Any other non-routable TLD should work as well.
Additionally, Ubuntu ships with systemd-resolved, a network name resolution service. It runs a caching stub resolver at 127.0.0.53, which interfered with my troubleshooting as well. My internal domain kept getting routed to my ISP’s search page until I ran `sudo service systemd-resolved restart` to clear the cache.
Multisite Docker setup using Nginx Proxy
The SSD Nodes site has a nice write-up on how to run multiple websites with Docker and Nginx, which I was able to use to get our WordPress site up and running. I prefer putting everything in docker-compose files. The only prerequisite is creating the network:
```
docker network create nginx-proxy
```
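The proxy itself runs from its own compose file. A sketch of that file, following the jwilder/nginx-proxy pattern the SSD Nodes guide uses (the file name and details here are my reconstruction, not the exact file):

```yaml
version: "3"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only socket mount lets the proxy watch container start/stop
      # events and generate vhost configs automatically
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always

networks:
  default:
    external:
      name: nginx-proxy
```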
And a second file, blog-compose.yml, for the blog itself:
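A sketch of what that file looks like, following the SSD Nodes guide (image tags, passwords, and volume names here are placeholders, not the real values):

```yaml
version: "3"

services:
  blog_db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme
    volumes:
      - blog_db_data:/var/lib/mysql

  blog:
    image: wordpress:latest
    restart: always
    depends_on:
      - blog_db
    environment:
      WORDPRESS_DB_HOST: blog_db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
      # nginx-proxy routes incoming requests by this hostname
      VIRTUAL_HOST: dahifi.internal
    volumes:
      - blog_data:/var/www/html

volumes:
  blog_db_data:
  blog_data:

networks:
  default:
    external:
      name: nginx-proxy
```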
You’ll notice that the site the blog is published to is identified by the VIRTUAL_HOST environment variable. In this case we’re still pointing at the “top level” domain, dahifi.internal, and not blog.dahifi.internal. This is due to issues we were having with subdomain resolution on the Nginx proxy, and is something we’ll have to work on later. Originally, our internal GitLab instance was running on this host on port 80, and we had to take it down for Nginx to work. My next step is to make sure subdomains work properly, and then reconfigure GitLab and this blog to run under names like git.dahifi.internal.
One additional change I needed to make was raising PHP’s default upload size limit of 2M. Following along with this setup tutorial, I added the line

```
- ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
```

to the WordPress container’s volumes, then added the following to the uploads.ini file:
```
memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M
```
Then I rebuilt the containers with `docker-compose -f blog-compose.yml down` followed by `docker-compose -f blog-compose.yml up -d`, making sure to specify my blog-compose.yml file each time.
I still encountered errors trying to upload. WordPress kept throwing:

```
Unexpected response from the server. The file may have been uploaded
successfully. Check in the Media Library or reload the page.
```
I couldn’t find any errors in WordPress’s Apache instance, and PHP looked fine. I eventually found a message in the Nginx container log: `client intended to send too large body`. It turns out Nginx has its own limits. I added `client_max_body_size 64M;` directly to the /etc/nginx/nginx.conf file, then reloaded it with `service nginx reload`. Problem solved! Not only can I upload files directly through the web interface, but I was also able to add the internal site to the WordPress app running on my phone, and can upload images directly from there.
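For reference, that directive belongs inside the `http` block of nginx.conf, matching the 64M limits set on the PHP side above:

```
http {
    # ... existing settings ...

    # allow request bodies (uploads) up to 64 MB, matching uploads.ini
    client_max_body_size 64M;
}
```

Note that this edit lives inside the Nginx proxy container, so it will be lost if that container is rebuilt; mounting a custom config file the same way uploads.ini is mounted for WordPress would make it permanent.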
Elder is already working on writing a story in it now, and I’m looking forward to using this site as an internal bulletin board for the family. Let’s see what happens!