WPStagecoach saved my life


Quit messing around with lesser staging processes and get the real deal.

I don’t mean to be too glowing or make this seem like some infomercial endorsement, but I really do think it saved me from having a heart attack the past couple of days. I’ve been using InfiniteWP to manage most of my stable of WordPress sites, and it’s served me well for managing updates and backups, and is even handy for migrating websites from one host to another. It’s well worth the $120 or so that I paid for it a few months ago. Its staging features, though, aren’t that great.

Part of the problem is that it only wants to install the staging site as a subfolder of the main site. It also makes a copy of the database in the production database, just under a different table prefix. I shouldn’t have to tell you why this is not great from a performance and quota standpoint. The other problem is that it doesn’t provide much information when things go wrong. Ideally, I want my staging sites on separate subdomains, but IWP just can’t do this, and the documentation is very mum about it. I have a support ticket open with them right now to figure out why I was unable to clone a particular client site, and to make sure that this paragraph is correct. What I can tell you is that I spent days trying to get a proper staging site set up for my client using IWP.

It’s not all their fault. I’m taking over a project that seems to have been abandoned by the original developer, and there were many problems with the site that may have contributed to the problems I’ve been having, as we shall see shortly. IWP has three staging options: on the original site, on my configured staging server, or via custom FTP. I was able to clone the site to my custom staging server, but the theme didn’t operate properly. I believe this may have been a problem with hotlinked theme assets, but I haven’t figured it out yet.

I spent days creating subdomains and updating DNS on the client site, and couldn’t figure out why IWP kept giving me “error: check your hostname” when I tried to update things. I figured it was a DNS propagation issue between the server hosting my IWP and the client’s host. I usually only work on sites I host directly, and this was the first time I actually had to use the staging features. I was getting very anxious. I had wasted several days, was already dealing with an irate client, and was starting to get a panicked feeling when working on the project.

So I decided to go another route and explore some other options. I read through several blog posts on WordPress staging sites, and one name that came up several times was WPStagecoach. And it was only $12 for a month, so I signed up for a trial and had the staging site up in less than an hour. No kidding.

The setup process was impressive. Getting the plugin installed and activated was pretty standard, and creating the staging site was very user friendly. It started off by scanning the site for large files, and found a backup archive, which it asked to exclude. Then it started creating a tar file of the site to move to staging, and showed me a status percentage as it did so. This was very much needed considering IWP had been “working” for hours without so much as a log update. After the tar process completed, I did get an error that the archive was missing files, and was asked whether I wanted to abort, retry, or “proceed fearlessly.” I retried, waited another five minutes, and got the same error, so I went ahead and pressed proceed. Another five minutes, and BAM. There was my staging site, and it looked perfect.

One thing that really impressed me was that after the staging site was created, I was given a list of errors that WPS had found, mainly places where the site’s URL was hardcoded in the theme templates. These are likely why I had the rendering issues on my previous staging attempt. So now I have a list of files that I need to target, as hardcoded URLs will play havoc with my development environment as well. This feature shows where WPStagecoach, as a specialized product, really shines.
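Hunting down those hardcoded references yourself comes down to a recursive grep. Here’s a toy sketch of the idea; the domain, directory name, and file contents are all invented for the demonstration, and in real life you’d point grep at wp-content/themes/your-theme:

```shell
set -e
# Stand-in fixture: a tiny "theme" directory where one file hardcodes the
# live domain. These files exist only so the grep below has something to find.
mkdir -p demo-theme
printf '<a href="https://example.com/contact">Contact</a>\n' > demo-theme/header.php
printf '<?php get_header(); ?>\n' > demo-theme/index.php

# List the theme files that still reference the production domain.
grep -rl "example.com" demo-theme
```

On the real site, swapping those literal URLs for calls like home_url() is the usual fix, so the theme stops caring which domain it’s served from.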

WPS hosts the staging site on their own servers, giving each site its own subdomain. I got ten with my account, which is way more than I’m going to need anytime soon. So now I can proceed with the next step on this project, which is getting our MemberPress module up and running. Then I’ll be able to see if pushing changes back to the live site is as easy as creating it in the first place. If my experience so far is any indication, it’ll be a cinch.

Git-ing it done


Roll your own GitLab

I had trouble falling asleep last night. Younger crawled into our bed just as I was dozing off and kept squirming, so I slept in her bed. It faces east, so I woke up at five and tried to go back to sleep. I heard Elder up, so I got up and started the day. She’s sitting across the room from me, looking up Valentine’s Day gift ideas for the boy in our quarantine bubble down the street. Her sister has been ribbing her about it for days now.

One of our Zombie, LLC clients wants help standing up an internal GitLab server. It got me thinking, so I went ahead and set up a GitLab Docker instance on my downstairs Ubuntu server. I figure it’s good practice: “do the job you want” has always been good advice, so setting it up was worth the time. Plus it only took about fifteen minutes. The main problem I ran into was an SSH conflict with the existing service on the host, and it doesn’t appear that you can change the port mappings on an existing container without stopping the Docker daemon, so I just deleted the container and started over. I’ll probably move SSH if I ever do a real deployment, but here at the house the HTTP functionality is enough.
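For anyone curious, the Compose service ends up looking roughly like this. Treat it as a hedged sketch rather than my exact file: the hostname and volume paths are assumptions, and the line that matters is publishing the container’s sshd on 2222 so it doesn’t collide with the host’s:

```yaml
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'gitlab.home.local'   # assumed hostname
    ports:
      - '80:80'
      - '443:443'
      - '2222:22'                   # host 2222 -> container sshd, dodging the conflict
    volumes:
      - './gitlab/config:/etc/gitlab'
      - './gitlab/logs:/var/log/gitlab'
      - './gitlab/data:/var/opt/gitlab'
```

With that in place, SSH clone URLs just need the port spelled out, e.g. ssh://git@gitlab.home.local:2222/group/repo.git.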

There’s also the mail issue. I didn’t want to use the root account to set up my repos, but the workflow around new accounts wants to send an activation email. I tried installing sendmail on the host, but the password reset didn’t work. I doubt it will work without a publicly routable dynamic DNS entry pointing back to it, or SMTP services, which I don’t want to mess with right now. Thankfully I found a password change form in the admin interface that didn’t require knowing the old password, and got up and running.

I am nowhere near as strong in my Linux management skills as I am with Windows, where everything is pre-packaged and somewhat unified. I can stand up domain.local services lickety-split, and have a library of PowerShell scripts to set up AD, DNS, and DHCP services within a domain. I have never actually taken the time to set one up at home, though that point may soon be approaching. I’ve been wanting to investigate using Ubuntu Server as an alternative or supplement to Windows-based AD services, but part of me is skeptical that such a setup is even viable for workstation authentication and services. But I digress. The point I’m trying to make is that I’ve been in awe of Unix sysadmins ever since I worked at an internet service provider back in the late ’90s and watched our systems guy pop in and out of terminal shells like a wizard. I’ve never felt adequate in that regard.

I made some good progress yesterday on the WordPress project, and have started converting the client’s site over to the new theme. I’m going over the demo site, examining the Bakery build they’ve got set up, and recreating it using the client’s assets. This lets me get a bit more familiar with the framework the theme author is using, and hopefully glean some best practices at the same time. It’s a two-steps-forward, one-step-back process, and some strange bugs have popped up. Activating WooCommerce seems to bring the site down completely, as does changing the theme back to the original. Then at one point, while I was working on the new header, the previews stopped working completely and would only throw 404 errors. The pages still work on the actual site, so I made do while I made edits.

The usual best practice for WordPress development and git repos is to exclude the entire WordPress directory except for whatever theme and custom plugin you’re developing, but since in this case we’re working on an entire site, I’ve added the whole WordPress directory and the associated SQL database files. The wp-content/uploads directory is mounted outside the container, along with plugins and themes. I haven’t pulled this repo on another machine yet, so I don’t know if it’s going to work. My main concern is how I’m grabbing the database. Managing PostgreSQL during my Django projects has always been a bit of a pain, as I never learned how to incorporate it into my source control. I’ll have to spend some time correcting this deficiency.
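For reference, the exclude-everything convention I’m skipping here usually looks something like the following .gitignore, with the theme and plugin names as placeholders:

```gitignore
# Ignore WordPress core; track only the custom theme and plugin being developed.
/*
!/wp-content/
/wp-content/*
!/wp-content/themes/
/wp-content/themes/*
!/wp-content/themes/my-custom-theme/
!/wp-content/plugins/
/wp-content/plugins/*
!/wp-content/plugins/my-custom-plugin/
```

The alternating pattern is deliberate: git can’t re-include a path whose parent directory is still ignored, so each level has to be un-ignored before its child can be.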

Here is a look at the Docker Compose file I am using for my development setup. The SQL mount /docker-entrypoint-initdb.d/backup_to_load.sql gets imported when the container is created; I assume that it’s ignored when pulling the SQL data from source. We shall soon find out. Also, I haven’t solved the file permission issues that happen when trying to edit things like the wp-config.php file. I’ll have to save that for a later time.

version: '3.8'
services:

  wordpress:
    container_name: 'local-wordpress'
    depends_on:
      - db
    image: 'wordpress:latest'
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wordpress_password
      WORDPRESS_DB_NAME: wordpress_db
    volumes:
      - "./Wordpress:/var/www/html"
      - "./plugins:/var/www/html/wp-content/plugins"
      - "./themes:/var/www/html/wp-content/themes"
      - "./uploads:/var/www/html/wp-content/uploads"

  db:
    container_name: 'local-wordpress-db'
    image: 'mysql:5.7'
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - './data/mysql:/var/lib/mysql'
      - './data/localhost.sql:/docker-entrypoint-initdb.d/backup_to_load.sql'
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wordpress_password

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Hewing bits and bytes

Why am I working so hard?

After publishing last night’s post, I made a little headway with one of my projects, figuring out how to mount a SQL dump into a MySQL Docker image so that it gets loaded automatically when the container spins up. Just one more little win toward accomplishing my task. Now I just need to tackle the way I have WordPress deployed, and I can begin working on the project for real. I’m taking my time with this. All of the learning and research I’m doing now isn’t on the client’s time, it’s mine, and it’s the kind of learning I love.

Being able to use Docker means I don’t have to run all this stuff on my local machines. I can start culling the packages that I’ve loaded in the past for this project or that; things like Node dependencies, Ruby, and Postgres no longer have to bulk up my system. Pop, here’s a container. Pop, there it goes. I went through my staging server a few days ago and started cleaning house, removing abandoned projects. Goodbye, rm -rf *pennykoin*, and so long.

I’m still reading Fluent Python, about a half hour before bed. I finally have a good grasp on decorators. I think my eyes glazed over on coroutines, but I think I’m ready to add threading to my value averager app. I’ve only got a couple of chapters left, on asyncio, which I desperately need to master, and another on one of my favorite subjects, metaprogramming.

I’ve been reading Fluent Python for about twenty minutes right when I climb into bed. It’s on the iPad, and even with the brightness turned all the way down, it’s still bad for rest, so I usually wind up reading a real book afterward. Right now it’s Digital Minimalism, and last night there was a section about Henry David Thoreau, starting with his time building his cabin at Walden Pond, before he wrote his book. Just how does one build a cabin using just an axe? Anyways, the point here, and one I never knew before, is that Walden is really about using time as the true unit of account. What use is earning a bunch more money if the cost in time to earn it is so high? And for what?

It’s not that I haven’t heard the idea of time as money before, or rather of trading time for money. It’s very prevalent in the things I read and hear. But realizing that Thoreau was writing about it some one hundred and fifty years ago makes me realize how little things have changed. I don’t know why I should be surprised; I’m sure Marcus Aurelius says similar things in his diaries. I think my point is that I wasn’t expecting to hear it. Here I was, trying to convince myself that I should delete Twitter off my phone for a month, and here’s Cal Newport, via Thoreau, asking “why are you working so hard, you sap?”

Thoreau didn’t have any children, though, so I guess I can say that’s part of the reason I grind, although it’s really not the only reason. I like figuring things out, and it just so happens that the things I’ve figured out how to do enable me to earn a comfortable living. Still, there’s some sort of drive to build something, a legacy, if you will, coupled with a mild regret that I should have more to show for this life I’ve lived these past forty-one years. One of my grandfathers built a house. All I have of another is a stained glass lamp, sitting next to one of my daughters’ beds. That, and memories of model trains in a basement, and playing a flight simulator on an old Tandy PC back in the ’80s.

And maybe that latter point is the crux of minimalism. In the end, it is the memories that matter. Not all of us are going to write lasting works of fiction or build cathedrals that will be finished long after our deaths and stand for centuries. Today, all I can do is love those around me, and tinker on my keyboard, changing the world around me, bit by bit. Who knows, maybe Bitcoin is going to succeed, allowing me to leave generational wealth for my grandkids, either directly or indirectly. Maybe one of my other projects will succeed and grant me a minimum viable income so that I’m not forced to work another day in my life.

Maybe I’m being fatalistic; maybe this is just my monkey mind sowing doubt, preparing me for failure. I’m not sure, but it doesn’t feel like it. I think it’s just recognition that I’ve got too many things distracting me, things that I need to let go of and remove from my life.

But right now, I hear the pitter patter of little feet upstairs, which means it’s time for me to enjoy my Sunday.

What is work?


Down one rabbit hole after the other

I spent most of yesterday digging into WordPress in a way that I really haven’t before: theme files. My current project has a customized version of the Twenty Seventeen theme, with lots of custom templates, fields, and functions that I need to move over to a new template. It’s taken me weeks to finally understand what the previous developer was doing, and there’s a fatal bug in the system somewhere, deleting post data, that I’m trying to uncover so I can clean things up. I figure my best course of action is to migrate everything to a staging site, start with a new theme, and go through the plugins one by one to rebuild the content on the site. There are multiple pages and types of posts with custom fields that need to be displayed properly. I’m not really looking forward to having to debug someone else’s stylesheets, though.

Doing this kind of development isn’t ideal even on a staging site, given that the WordPress native code editor isn’t really suited to real work. I haven’t done PHP work in over ten years, but I downloaded PhpStorm and got started setting up a development environment. I was hoping to set up some sort of Git workflow for the site, but I didn’t find any options that were production-ready, so I grabbed the files via FTP and quickly set to work.

WordPress has an official Docker image, so I set about configuring a Compose file for my local environment. There I ran into problems. I was trying to map my theme directory into the container, but ran into issues with file permissions, and I haven’t quite figured them out. I can change the permissions within the container to allow it to use the files, but then they’re locked on my development host. So that’s my challenge for today, and one that will no doubt lead down many more rabbit holes.

This is just an example of the kind of stuff I do that most people call work. Now, this doesn’t have anything to do with my regular day job responsibilities; it’s for a client. And even if it weren’t, it’s still the same type of activity I would be doing for fun anyway. Although if you asked my wife whether she thought I was having fun last night, she would have said that all the cursing and muttering I was doing under my breath indicated otherwise. This particular project is a challenge for me because it involves a level of technical expertise that I don’t have, and that I’m forced to pick up in order to understand the issue, and hopefully solve it! It’s this area, right outside my current capabilities, that puts me in the zone and makes time fly.

It’s a drive that has gotten me where I am today, and has served me very well. Unfortunately, it’s not something I find in my current day job, which is one of the main reasons I’m looking elsewhere these days. Part of the problem is that the company constantly hovers on the edge between sustainability and closure, and I have trouble reconciling that situation with my responsibility for it. Perhaps it’s that I don’t have any stake in the company other than my current minimum viable salary. It’s allowed me to pursue other projects, including school and political activities, but it has not offered me anything in the way of growth in several years. I am not in sync with my boss on the direction of the company, or even the type of customers we take on. The challenges are rote, and therefore not interesting to me. And they haven’t changed in years. Neither has my salary.

I’ve started reading Designing Your Life, by Bill Burnett and Dave Evans, and one of the first exercises asks readers to write a workview reflection, defining how work relates to their life, money, and others. This is my response to that, of course. Work has such a broad meaning to me. It’s not just your job; it’s also the things you do for your family and friends, chores around the house or the yard, spending time with family, and yes, helping your dad or whoever with their laptop from time to time. And one thing my dad taught me, that I’m trying to impress upon my girls, is that when there’s work to be done, you just have to suck it up and do it.

Work is also rewarding, and can be fun. That’s not to say it can’t be repetitive or stressful; the most panic-inducing, heart-attack moments I’ve had have been related to failures at work. But I’ve helped a lot of people, and it’s often fulfilling. That’s not to say I haven’t had horrible, dirty jobs that I took because I was unemployed and living on couches, but most of my jobs have been knowledge work, and pretty chill. These days work pays the bills, but it’s the work I do outside of work where I continue to learn and grow.

Hopefully my girls will be as lucky as I am, and be able to make a living doing what they love. Actually, it’s not luck; it’s by design. Obviously I am not where I want to be right now. Sure, my work life is probably better than that of ninety percent of the world’s population, and I have no room to complain about anything, but isn’t it the human condition to want more, to want to be more? And to me, that’s what work is: the drive to improve, to become better. Constant improvement. Refine, iterate, repeat, repeat, repeat.

Underpromise, but overdeliver

Yesterday we gave the final demo of our two-semester professional workforce development project. It did not go well. We had fifty minutes to present, but our demo only took about five. One of the professors, who had been receptive to our pitch last semester, was very disappointed. We basically failed to deliver. I got defensive, and tried not to make excuses, because she was totally correct.

During the last few weeks of the semester, we were more focused on what was right in front of us, namely technical issues, than on the big picture that we had promised last semester. As system architect, I was more focused on getting the architectural components up and running than on whatever particular use case this professor was expecting to see. So in that sense, yes, we failed.

There were a number of roadblocks to overcome. Our team was made up of six members, two of whom were complete dead weight. Another member had issues with her local development machine and was unable to contribute directly to the source code. This was fine, as this was a writing-intensive course with several written deliverables, including specifications and testing plans. So we split the work: I led the technical development and contributed to the written work as needed, but pretty much left management of the final written deliverables to others on the team.

We relied heavily on Cookiecutter Django for our base deployment. In the long run, this was probably the right call versus vanilla Django or another framework like Flask, but it hurt us in the short term. No one on the team besides me was familiar with it, though I don’t think another solution would have avoided that. We wound up spending an inordinate amount of time getting the others up to speed on deploying it via Docker and managing it through various IDEs. I spent a lot of time mentoring the two teammates who were assisting with actual code commits. And it had been so long since I had worked with Django that I had to relearn its model-view-template architecture all over again.

And this is where using Cookiecutter really slowed us down. The package implements several best practices on top of Django: overriding the default user model, implementing updated authentication forms, even deploying a Traefik load balancer in front of the production web server. Each of these was one more thing to learn before we could be productive.

In all, we spent less than forty-five days doing actual development work on the project, and that came after more than thirty days spent getting the framework up and running across local and production environments. The semester was focused more on the written deliverables and put off actual development until the last half. In retrospect, holding to this schedule was a mistake, and it was probably a bit of hubris on my part not to start work earlier.

We got an email from our instructor this morning:

“From a development (i.e., architecture, tool, collaboration, and project) standpoint your team has met all the requirements for the course. While your demo was less “interface oriented” than the other groups the evaluators referenced, the foundation of your prototype is more substantial. It was clear to me that there was a bias during the evaluation based on discussions that occurred […] last semester. Keep in mind that I have been equally critical of groups in the past (albeit in a less heavy-handed fashion). Your group as a whole should not worry about failing the course.”

So maybe I’ve been a little hard on myself, but I think it more likely that they were just as invested in our project as we were. Obviously there’s some university politics at play there.

So the question is: what comes next? We had hoped to pursue a grant to continue working on this project on behalf of the university post-graduation, but given the tepid response from the skeptical evaluator, I don’t see that happening without many additional changes. I’ve broached the subject with my teammates, and I’m not sure there’s any desire to continue. Maybe after the semester is over. I still have two more classes to complete, and others have a full course load. At least one of them has taken a job, so it may just come down to me and one other person.

All in all, I know the experience was a positive one for me. I know that learning GitLab, Docker and Django for app deployment will come in handy in future projects. And we’ll just have to see if any of the relationships with the team members will last past this semester. Any decision about the future of our project will be on hold for now.

Fearless refactoring

C++ is a much more complicated language than I ever imagined. I’d had a little bit of exposure to it earlier in college, and I hated it because of the amount of setup required to get it running. We were introduced to Code::Blocks and Eclipse, but both of them just seemed so clunky. Figuring out compiler options and makefiles, and trying to get programs to compile on my home Ubuntu development workstation, the school’s Windows RDS environment, and the professor’s autograder was just too much. So when I really started diving into Python, it was like coming up from being underwater too long and getting that breath of air.

Working on the Pennykoin CryptoNote codebase got me a bit more comfortable with it, though I still didn’t understand half of what I saw. Part of it was the semantics of the code itself; part of it was just trying to grasp the sheer size of the codebase. Eventually I was able to figure out what I was looking for and make the changes I needed to make. But I never really felt comfortable making those changes, and even less so publishing and releasing them. That’s because the Pennykoin codebase had no tests.

I’ve spent the last few days working on some matrix elimination code for my numerical methods class. During class, the professor would hastily write some large, procedural mess to demonstrate Gaussian elimination or Jacobi iteration, and not only did I struggle to understand what (and why) he was doing, but he often ran into problems of his own and we had to debug things during lecture, which I thought was wasteful of class time.

As I’d been on an Uncle Bob kick during that time, I decided to take a TDD approach to my code, and began what’s turned out to be a somewhat arduous process of abstracting and decoupling the professor’s examples into something that had test coverage and allowed me to follow DRY principles. Did I mention that our base matrix class had to use C-style arrays managed through pointers to pointers? Yes, it was a slog. Rather than being able to use iterators over standard library containers, every matrix operation involves nested for loops. I’ve gone mad trying to figure out what needs dereferencing, and spent far too long tracing strange stack exceptions. (Watch what happens when you have an endl at the end of a print function and then call another endl immediately after calling that function…)

I started out working on the Gaussian elimination function, then realized that I needed to pull my left-hand-side matrix member out as its own class. Before I did that, I tried to create my own vector class for the right-hand side. So I pulled that out, writing tests first. Then I started on my new matrix class. I ran into problems including a pointer array of my vector class. For reasons that I’ll not get into, I kept the C-style arrays. I slowly went through my existing test cases for the Gaussian class, making sure that I recreated the relevant ones in the matrix class. Input and output stream operators, standard array loaders (for the tests themselves), equality, inequality, and copy functions were copied or rewritten. After one last commit to assure myself that I had what I needed, I swapped out the double** lhs member for a matrix lhs, and replaced the code within the relevant Gaussian functions with calls to lhs.swapRows(). Then I ran the tests.

And it worked.

Uncle Bob talks about having that button of truth that you can hit to know that the code works, and how it changes the way you develop. I’m not sure if he uses the word fearless, but that’s how it feels. Once the test said OK, I erased the commented code. Commit. Don’t like the name of this function? Shift+F6, rename, test, commit. These two functions have different names, but do the same thing, with different parameter types? Give them the same name and trust the compiler to tell the difference. Test OK? Commit.

It’s quite amazing.

I spent several hours over the past few days working on adding an elementary matrix to the matrix elimination function, and I made various small changes to the code, adding what I needed (tests first!) and making small refactors to make the code clearer. I’ve had to step into the debugger a few times, but it’s going well. There’s still one large function block that I’ve been unable to break down because of some convoluted logic, but I’m hoping to tackle it today before moving on. And I’m confident that no matter what changes I make, I’ll know immediately whether they work or not.

Fair Open Source

Last night I had the pleasure of meeting Travis Oliphant, one of the primary creators of NumPy and a founder of Anaconda. He’s currently the CEO of OpenTeams, a company attempting to change the relationship between open source software and the companies that build on top of it. I found out about the lecture through an article I had read in Wired about technology’s free rider problem, and went to the event without knowing much of anything about Mr. Oliphant. I soon found out who he was, and was very grateful that I had come. I’ve spent a lot of time using NumPy, and I’ll admit I was a bit starstruck.

Travis’s lecture grew out of his experience working on NumPy. He basically gave up a tenure-track position at Brigham Young University to work on it, and had to find other ways to support his family for the two years he spent on the initial release. As has been noted elsewhere, much of the tech boom over the past twenty years has been built on top of the contributions of FOSS developers like Travis and others. He’s a big believer in profit, and thinks that the lack of financial incentives in the FOSS space has caused several problems, including developer burnout, which leads to a lack of proper maintenance of these projects. Many of these projects, like NumPy, have become crucially important to the scientific and business communities.

Travis Oliphant’s PyCon 2019 lightning talk about Quansight

Oliphant’s goal is to make open source sustainable. Quansight is a venture fund for companies that rely on OSS; one of the companies it has funded is a public benefit corporation called FairOSS, which hopes to support OSS developers through contributions from the companies that use their work. He’s doing something similar with OpenTeams, hoping to follow Red Hat’s model of supporting open source by providing support contracts for various projects.

These are all very worthy goals, and I was both impressed and inspired by his talk. It’s opened up some interesting career possibilities. I recently took my first developer payment through Gitcoin, and it was a bit of a rush. Getting paid to work on open source software seems like an awesome opportunity, and I’ll be keeping an eye on this space for potential post-graduate plans.

Becoming a Git-xpert

I have been trying to get a grip on the Pennykoin CLI code base for some time. One of the problems I’ve had is that the original developer had a lot of false starts and stops, leaving a lot of orphan branches like this:

Taken with GitKraken

If that wasn’t bad enough, at some point they decided to push the current code to a new repo, losing the entire starting commit history. Whether this was intentional or not, I can’t say. It’s made it very tricky to backtrack through the history of the code and figure out where bugs were introduced. So problem number one is linking these two repos together so that I have a complete history to search through.

Merging two repos

So we had two repos, which we’ll call pk_old and pk_new. I originally tried to merge them together using branches, but I either wound up with the old repo as the last commit, or with the new repo and none of the old history. I spent a lot of time going over my bash history file, playing with using my local directories as remote sources, and deleting and starting over. Then I found that there was indeed a common commit between these two repos, and that all I had to do was add the old remote with the --tags option to pull everything in.

mkdir pk_redux
cd pk_redux
git init
git remote add -f pk_new https://github.com/Pennykoin/Pennykoin.git --tags
git merge pk_new/master
git remote add -f pk_old https://github.com/Pennykoin/Pennykoin-old.git --tags

Now, I probably could have gotten away with just cloning the pk_new repo instead of initializing an empty directory and adding the remote, but the end result should be the same. A quick check of the tags between the two original repos and my new one showed that everything was there.

The link between the two repos

Phantom branches

One of the things that we have to do as part of our pk_redux, as we’re calling it, is set up new repos that we actually have control over. This time around, everything will be set up properly as part of governance, so that I’m not the only one with keys to the kingdom in case I go missing. I want to take advantage of GitLab’s integrated CI/CD, as we’ve talked about before, so I set up a new group and pkcli repo. I pushed the code base up and saw all the tags, but none of the branches were there.

The issue ultimately comes down to the fact that git branches are just pointers to specific commits in a repository’s history. Git pulls the commits down from a remote as part of a fetch, but only as remote-tracking refs; a push to a new remote only sends local branches. Only after I checked each branch out, creating local tracking branches in my repo, could I push them to the new remote origin.
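The behavior above can be demonstrated end to end. This is a hedged sketch using throwaway local repos driven from Python (all repo names and paths here are made up, not the Pennykoin ones); it needs git on the PATH:

```python
# Demo: branches fetched from a remote are only remote-tracking refs, and
# `git push --all` to a new remote sends local branches only. All repos
# here are temporary stand-ins.
import os
import subprocess
import tempfile

def git(args, cwd):
    """Run a git command and return its stdout, raising on failure."""
    return subprocess.run(["git"] + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

root = tempfile.mkdtemp()
old, new = os.path.join(root, "old.git"), os.path.join(root, "new.git")
for bare in (old, new):
    git(["init", "-q", "--bare", bare], cwd=root)

# Seed the "old" remote with a default branch plus a second branch.
seed = os.path.join(root, "seed")
git(["clone", "-q", old, seed], cwd=root)
git(["-c", "user.email=a@b", "-c", "user.name=tmp",
     "commit", "-q", "--allow-empty", "-m", "init"], cwd=seed)
main = git(["symbolic-ref", "--short", "HEAD"], cwd=seed).strip()
git(["branch", "feature"], cwd=seed)
git(["push", "-q", "origin", main, "feature"], cwd=seed)
git(["symbolic-ref", "HEAD", "refs/heads/" + main], cwd=old)

# A fresh clone checks out only the default branch, so `push --all`
# to the new remote sends just that one local branch.
mirror = os.path.join(root, "mirror")
git(["clone", "-q", old, mirror], cwd=root)
git(["remote", "add", "neworigin", new], cwd=mirror)
git(["push", "-q", "neworigin", "--all"], cwd=mirror)
print(git(["ls-remote", "--heads", new], cwd=mirror))  # default branch only

# Create a local tracking branch for the phantom branch, then push again.
git(["branch", "--track", "feature", "origin/feature"], cwd=mirror)
git(["push", "-q", "neworigin", "--all"], cwd=mirror)
print(git(["ls-remote", "--heads", new], cwd=mirror))  # now includes feature
```

The second `ls-remote` shows the feature branch only after the local tracking branch exists, which is exactly the phantom-branch behavior described above.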

Fixing Pennykoin

So now that I’ve got a handle on this repo, my next step is to hunt some bugs. I’ll probably have to do some more work to try to de-orphan some of the early commits in the repo history, because that will be instrumental in tracking down changes to the CryptoNote parameters. Those changes are the likely cause of the bootstrap issue that exists. My other priority is figuring out whether we can unlock the bugged coins. From there I’d like to implement a test suite, and make sure that there are proper branching workflows for code changes.

Frustrations

I’m a bit perturbed right now. I went back to a Django project I hadn’t worked on in two weeks and could not get my PyCharm interpreter working properly. I’d upgraded from the Community Edition to the Professional Edition during that time, which may or may not have had anything to do with it, but this failed session brings me to another source of frustration that I need to get off my chest.

There are three, maybe four, ways one might need to interact with a Django app in PyCharm. The first is the Python console itself. The second is the regular command terminal. Third would be the various run configurations one can set up. And fourth would be the Django console that PyCharm Pro enables. My issue is that each of these has its own environment variable settings! Maybe it’s just my inexperience showing through here, but I tend to use several of these at once when I’m working: a run configuration for the test server, the Django console for migrations and tests, and a terminal window that’s actually running the Django shell, so that I can muck around with code while I’m figuring things out.
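One hypothetical workaround for the per-tool environment problem is to keep the shared settings in a single .env file and load it at interpreter startup (e.g. at the top of manage.py or settings.py), so every entry point sees the same variables regardless of what each PyCharm tool window was configured with. Here is a minimal hand-rolled loader; packages like python-dotenv do the same job more thoroughly:

```python
# Minimal .env loader: reads KEY=VALUE lines into os.environ so that the
# run configuration, terminal, and consoles all share one set of variables.
import os

def load_dotenv(path=".env"):
    """Load KEY=VALUE lines into os.environ without clobbering existing vars."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # skip blank lines, comments, and anything that isn't KEY=VALUE
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Calling `load_dotenv()` as the first thing in manage.py means the variables travel with the project rather than with any one PyCharm console’s settings.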

I don’t know if I’m an idiot or what, but it just seems extremely ineffective, and I have got to be missing something.

Working alone

Last weekend I finally got around to reading Two Scoops of Django, and it was very interesting. I wish I had picked it up earlier. I think I first started really delving into the Django framework about three months ago, and I’ve really enjoyed tinkering around with the models and ORM. I’ve done a bit with the forms and views, but I’ve spent a lot more time trying to draft out data models for various projects and get a feel for how things work. I’ve fallen into my usual trap of getting too caught up in the tools to actually deliver anything yet, but I’ve got two projects that I am primarily working on, and I’ve been very disciplined about spending at least an hour each day on one of them.

Part of me thinks I should just focus on one to the exclusion of the other, just to plow through. “Starting is easy, finishing is hard,” as Jason Calacanis says. The other voice in my head is telling me that as long as I’m pushing forward on one of them or the other, it doesn’t matter, since the skills I’m learning on each will translate to the other. The last few days have felt like my wheels are spinning, though, as it seems I spent more time sharpening my ax than actually cutting down trees. I spent what feels like two whole days just trying to figure out how to set up cookiecutter-django the way I wanted it, another day or two trying to figure out why pipenv doesn’t work properly in PyCharm, and then another trying to figure out how to get Celery to work. Yesterday it was all about how to properly clone a 3rd-party Django app so that I can make some modifications to it. And I’ve spent hours trying to figure out how to do my tests, and what needs testing and what doesn’t. Endless hours on Medium reading everything I could find related to any of the above.

But as long as I can sit down and work on something, I tell myself I’m making progress and becoming an actual developer. I’ve talked about discipline previously, and that discipline is paying off with my day job as well, whether it’s Powershell scripts or more Python API wrappers. The hardest thing for me is the solitary nature of what I’m doing. Not having a team or partner on these projects is the hardest part, because it ultimately means I have no one to bounce ideas off of in real time. The best I can hope for is to dump something out on StackExchange and hope that someone gets back to me. Most of the time, just explaining the question clearly enough for someone else to understand it spurs the kind of subconscious creativity that leads to a solution.

There have been many false starts already, but I’m starting to get there.

Currently, with a fintech app I’m working on, I’m trying to determine how to expand a cryptocurrency wallet app designed for Bitcoin and other assets that use its RPC interface. The asset I’m working with is a fork of a privacy coin with the un-shielded send functionality disabled. So I’ve got to figure out the simplest way to update all the calls in this library so that they’ll use the shielded commands for this asset while retaining the existing commands for the legacy assets. So far, I’ve decided to try adding a boolean field to the currency model and an if-clause to the Celery tasks to choose between the two based on that boolean. While it’s simple, it requires modifying code in each of the various functions, which seems to violate one of the core principles of Django development: don’t repeat yourself (DRY). It seems to me there’s another way: I could add a decorator or something to each of these functions, maybe a strategy pattern, to do that bit of logic in a way that would be easier to implement, maybe even without having to fork the 3rd-party app in the first place.
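The strategy idea could look something like the sketch below. To be clear, every name here (TransparentStrategy, WalletRPC, z_sendmany, and so on) is illustrative and assumed, not the real third-party app’s API: the boolean on the currency model picks a strategy once, and the tasks route through it instead of each repeating the if-clause.

```python
# Hypothetical strategy-pattern sketch: each currency selects a strategy
# mapping logical operations to RPC method names, so the per-function
# if-clauses collapse into one decision point.

class TransparentStrategy:
    """Legacy Bitcoin-style RPC method names."""
    SEND = "sendtoaddress"
    BALANCE = "getbalance"

class ShieldedStrategy:
    """Shielded equivalents for the privacy-coin fork."""
    SEND = "z_sendmany"
    BALANCE = "z_getbalance"

def strategy_for(currency):
    # The single boolean field on the currency model decides, in one place.
    return ShieldedStrategy if getattr(currency, "shielded", False) else TransparentStrategy

class WalletRPC:
    """Thin wrapper the Celery tasks would call instead of the raw RPC client."""
    def __init__(self, currency, rpc_call):
        self.strategy = strategy_for(currency)
        self._rpc_call = rpc_call  # the underlying JSON-RPC transport

    def send(self, address, amount):
        return self._rpc_call(self.strategy.SEND, address, amount)

    def balance(self, address):
        return self._rpc_call(self.strategy.BALANCE, address)

# Quick demonstration with stub currency objects and a stub transport
# that just records which RPC method each call was routed to.
class Currency:
    def __init__(self, shielded):
        self.shielded = shielded

calls = []
def stub_rpc(method, *args):
    calls.append((method, args))
    return method

WalletRPC(Currency(shielded=True), stub_rpc).send("zs1...", 5)
WalletRPC(Currency(shielded=False), stub_rpc).send("1A2b...", 5)
print(calls)  # [('z_sendmany', ('zs1...', 5)), ('sendtoaddress', ('1A2b...', 5))]
```

Because the strategy lookup lives in one function, supporting a third asset family later would mean adding one strategy class rather than touching every task, which is closer to DRY than per-function conditionals.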

We shall see.