Testing SetProtocol on Kovan

We’ve been talking about building a TokenSet for weeks now, it seems. We believe that having the ability to manage a Set has many advantages, and we’re hoping to build one under the Homebrew.Finance banner. My primary use case for it is being able to manage “customer funds” (e.g. friends and family) in a non-custodial way, while pooling gas costs among the entire capital pool. It’s also a way for others to follow along with my strategy, what TokenSets calls “social trading”.

Deploying a Set is costly: over 3.2m gas from what we’ve seen. With gas at 100 gwei and ETH close to $2000, we’re talking about $500-800 just to mint a Set. That’s not something I want to do without understanding the intricacies of managing a Set, from creating it and managing the modules and assets to issuing and redeeming the tokens. And since there aren’t any published charts of gas costs for the various operations, let’s do some testing on the Kovan testnet and see what we can come up with, shall we?

Preparation

The TokenSet docs have a list of protocol contracts, both for Mainnet and Kovan. Our first task is to execute the create function on the SetTokenCreator contract. We can do this using Etherscan, but we need some setup first: some Kovan ETH via this faucet, and a list of ERC20 tokens that we can use. I started off with the Weenus ERC20 faucet tokens, but Balancer has Kovan faucets for popular tokens like WETH, DAI, USDC, WBTC and others.

Now, I originally did this test deployment using the Etherscan write interaction, but I’ve since discovered the wonderful seth, a “Metamask for the command line.” It’s much easier to use than writing a web3 script or wrestling with the Etherscan webpage. After a couple minutes setting it up, I was able to interact with Kovan using my dev Ethereum address. The hardest part was exporting and saving the private key from Metamask to a JSON keystore file using the MEW CX Chrome plugin. I also put the password in a text file, then configured the .sethrc file to unlock the account and use it via my Infura project URL. Needless to say, I don’t use this account for anything of value, and I don’t recommend you do this with production keys.

Creating the set

I literally spent hours trying to figure out how to call this transaction to create the set. Most of my time was spent trying to get things working on Etherscan, but I wasn’t quite sure how to encode the contract arguments. First, let’s take a look at the function call parameters. Per the documentation:

function create(
    address[] memory _components,
    int256[] memory _units,
    address[] memory _modules,
    address _manager,
    string memory _name,
    string memory _symbol
)
    external
    returns (address)

Most of this is pretty easy to understand: _components is an array of the tokens in the set, and _modules are the parts of the Set Protocol that the set needs to operate. _manager, _name and _symbol don’t need any explanation. But what about _units? The docs define it as “the notional amount of each component in the starting allocation”, but the word notional doesn’t really have a definition that makes sense to me in a programming context.

So let’s take a look at the SetProtocol UI to see how this works.

To start with, let’s create a set with a one-hundred percent allocation of USDC, and set the Set price to one dollar. I’m calling it the “One Dollar Set”, with the token symbol 1USD. We can grab the hex data from the Metamask confirmation prompt and throw it into this Ethereum input data decoder. For this contract you can get the ABI from the Etherscan page, but for unverified ones you may need to compile it yourself.

Here’s how we can do that programmatically using seth:

$ export CREATE_SIGNATURE="create(address[],int256[],address[],address,string,string)"
$ export ONE_USD="0xa949dc3e00000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000140000000000000000000000000d82cac867d8d08e880cd30c379e79d9e48876b8b00000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000a0b86991c6218b36c1d19d4a2e9eb0ce3606eb48000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000f7a7d0000000000000000000000000000000000000000000000000000000000000003000000000000000000000000d8ef3cace8b4907117a45b0b125c68560532f94d00000000000000000000000090f765f63e7dc5ae97d6c576bf693fb6af41c12900000000000000000000000008f866c74205617b6f3903ef481798eced10cdec000000000000000000000000000000000000000000000000000000000000000e4f6e6520446f6c6c61722053657400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000043155534400000000000000000000000000000000000000000000000000000000"
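$ # Assumption on my part: newer seth builds ship a --calldata-decode helper;
$ # if yours doesn't, the web decoder linked above does the same job.
$ seth --calldata-decode $CREATE_SIGNATURE $ONE_USD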

0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48  // USDC token address
1014397  // 'notional' amount
0xd8EF3cACe8b4907117a45B0b125c68560532F94D,0x90F765F63E7DC5aE97d6c576BF693FB6AF41C129,0x08f866c74205617B6F3903EF481798EcED10cDEC // tokenset modules
0xd82Cac867d8D08E880Cd30C379e79d9e48876b8b // my dev address
One Dollar Set // name
1USD //symbol

So let’s take a look at this notional amount. The documentation doesn’t really explain it, but I did some experimenting to see how the values come out.

Allocation | Starting Price | Hex | Decimal
100% USDC | $1 | f4b2e | 1,002,286
100% USDC | $10 | 98efce | 10,022,862
100% USDC | $100 | 5f5e100 | 100,000,000
100% USDC | $1000 | 3b9aca00 | 1,000,000,000
50/50 USDC/USDT | $1 | 7a120 | 500,000
50/50 USDC/USDT | $10 | 4c4b40 | 5,000,000

The single-asset one- and ten-dollar prices seem to be anomalies, I’m guessing due to precision errors or something. Let’s look at a more realistic mix: a Set with an allocation split between wBTC, wETH, USDC and the DPI token:

Starting Price | Token | Hex | Decimal
$10 | wBTC | 14c6 | 5,318
$10 | wETH | 524a299c0f0f1 | 1,447,655,666,413,809
$10 | USDC | 263447 | 2,503,751
$10 | DPI | 1658cfe35c0f54 | 6,290,099,383,570,260
$100 | wBTC | cfb8 | 53,176
$100 | wETH | 336757534418dd | 14,468,848,569,030,877
$100 | USDC | 17dd051 | 25,022,545
$100 | DPI | df5708b80c3c0c | 62,864,614,765,640,716

So what we’ve determined here is that the decimal numbers correspond to fractional amounts of the various tokens, expressed in each token’s own base units. For example, the value 5318 for wBTC corresponds to 0.00005318 BTC (wBTC has eight decimals), which with BTC at $48k is approximately $2.50 worth.

It seems that the TokenSet UI uses an oracle system to determine these weights, based on the allocation and opening price given. One could compute these manually, or copy the values from the TokenSet UI as I did here. Just remember that the notional value of each token you have in your set will determine the USD price. For practical reasons regarding issuance, you might want to consider starting with a single component, such as USDC, wETH or wBTC.
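To make the arithmetic concrete, here’s a back-of-the-envelope sketch in bash. The formula is my own inference from the tables above (unit ≈ target USD allocation ÷ token price, scaled up by the token’s decimals), not Set’s actual oracle logic, and the ETH price is an assumed $1,730:

unit() {
  # unit = (usd_target * 10^decimals) / token_price, truncated to an integer
  local usd=$1 price=$2 decimals=$3
  echo "scale=0; ($usd * 10^$decimals) / $price" | bc
}

unit 2.50 48000 8     # wBTC, 8 decimals  -> 5208 (the UI produced 5318)
unit 2.50 1730 18     # wETH, 18 decimals -> 1445086705202312 (UI: 1447655666413809)
unit 2.50 1 6         # USDC, 6 decimals  -> 2500000 (UI: 2503751)

The leftover differences look consistent with the oracle simply using slightly different live prices than my round numbers.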

Deploying our set

The TokenSet UI creates new Sets with the following modules: basic issuance, trade module, and streaming fee. The basic issuance module requires issuers to hold the correct ratio of the Set’s components and to approve each one for spending, which is very expensive. For this reason we want to use the NAV module instead, which allows users to deposit a single asset. However, the NAV module only supports issuance with assets covered by SetProtocol’s on-chain oracles, and these are limited. (More on this in our next post.)

Using Etherscan

One of the big problems I had trying to pass these parameters in via Etherscan was how to encode the values. Eventually, with some help from a Set team member, I figured out that arrays of addresses, such as those for the component assets and modules, need to be enclosed in quotes, with the “0x” prefix removed. The int256 array for the component amounts needs to be in hex form, with the 0x prefix intact. Here’s how it looks on Etherscan:
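Roughly like this, anyway (a reconstruction that instantiates the rules above with two of the Kovan token addresses from the seth script below; treat the exact quoting as approximate):

_components: ["aFF4481D10270F50f203E0763e2597776068CBc5","022E292b44B5a146F2e8ee36Ff44D3dd863C915c"]
_units: ["0x1","0x1"]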

You can see the result of this transaction on Etherscan. The total gas used was 3,308,683. That’s about 0.13 ETH at 40 gwei, but I doubt I’ll be able to get a tx through at that price given recent gas levels. Earlier testing on the UI actually showed a cost around three thousand dollars last night, when gas was closer to 400 gwei. Set creation isn’t a time-sensitive process like trading on Uniswap, though, so we can try to sneak by with a lower gas price if we want to.

Using seth

export ETH_FROM=$(seth accounts | head -n1 | awk '{ print $1 }')
export ZERO=0x0000000000000000000000000000000000000000000000000000000000000000

export WEENUS=0xaFF4481D10270F50f203E0763e2597776068CBc5
export YEENUS=0xc6fDe3FD2Cc2b173aEC24cc3f267cb3Cd78a26B7
export XEENUS=0x022E292b44B5a146F2e8ee36Ff44D3dd863C915c
export ZEENUS=0x1f9061B953bBa0E36BF50F21876132DcF276fC6e

export SET_TOKEN_CREATOR=0xB24F7367ee8efcB5EAbe4491B42fA222EC68d411
export NAV_ISSUANCE_MODULE=0x5dB52450a8C0eb5e0B777D4e08d7A93dA5a9c848
export STREAMING_FEE_MODULE=0xE038E59DEEC8657d105B6a3Fb5040b3a6189Dd51
export TRADE_MODULE=0xC93c8CDE0eDf4963ea1eea156099B285A945210a

export CREATE_SIGNATURE="create(address[],int256[],address[],address,string,string)"
export COMPONENTS=[$WEENUS,$XEENUS,$YEENUS,$ZEENUS]
export UNITS=["0x1","0x1","0x1","0x1"]
export MODULES=[$NAV_ISSUANCE_MODULE,$STREAMING_FEE_MODULE,$TRADE_MODULE]
export NAME='"TEST"'
export SYMBOL='"TEST24"'

seth send $SET_TOKEN_CREATOR $CREATE_SIGNATURE $COMPONENTS $UNITS $MODULES $ETH_FROM $NAME $SYMBOL --gas=4000000

seth-send: Published transaction with 772 bytes of calldata.
seth-send: 0xc12a5402ff37e114d8951fef756ab6985e6387b0e5d74e8c6ee5b83913077a86
seth-send: Waiting for transaction receipt...........
seth-send: Transaction included in block 23634932.

Here, we are using export to create bash variables to make our code more readable. We set our own address $ETH_FROM (as well as the zero address, for use later), then create vars for the components in our Set (WEENUS et al.), the Set creator contract, and the modules that we’ll include. We provide the signature for the create function, then bundle our parameters together before sending the transaction with a four million gas limit.

Note that we could use the seth estimate function to get a fairly accurate measure of the actual gas needed:

seth estimate $SET_TOKEN_CREATOR $CREATE_SIGNATURE $COMPONENTS $UNITS $MODULES $ETH_FROM $NAME $SYMBOL --gas=4000000 
3308707

This is exactly how much gas was used by our actual Set creation transaction.

Using web3

Jacob Abiola was gracious enough to create this SetTokenCreator-boilerplate repo for me, showing how to create a Set using a Javascript file and web3 via Infura and Metamask.

Where’s my Set?

If you look at the internal transactions for the set creation call that we just sent, you’ll see the create call spawns a brand-new contract. The one below, starting with 0x902d1ce…, is our new $TEST24 token on Kovan.
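As a quick sanity check, we can query the new token directly with seth. This is a sketch: substitute the full contract address from the internal transactions tab, and note that getComponents() comes from Set’s SetToken interface, if I’m reading the ABI correctly:

export SET_TOKEN=<full-address-from-etherscan>    # the contract created above
seth call $SET_TOKEN "symbol()(string)"           # should return TEST24
seth call $SET_TOKEN "getComponents()(address[])" # the four WEENUS-family tokens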

In our next post, we’ll take a look at some of the challenges around issuing tokens. The basic and NAV issuance modules are widely misunderstood, and we’ve had many requests from people asking how we’re dealing with them for the $MUG token.

WPStagecoach saved my life


Quit messing around with lesser staging processes and get the real deal.

I don’t mean to be too glowing or make this seem like some infomercial endorsement, but I really do think it saved me from having a heart attack the past couple days. I’ve been using InfiniteWP to manage most of my stable of WordPress sites, and it’s served me well for managing updates and backups; it’s even handy for migrating websites from one host to another, and well worth the $120 or so that I paid for it a few months ago. Its staging features, though, aren’t that great.

Part of the problem is that it only wants to install the staging site as a subfolder of the main site. It also makes a copy of the database inside the production database, just with a different table prefix. I shouldn’t have to tell you why this is not great from a performance and quota standpoint. The other problem is that it doesn’t provide much information when things go wrong. Ideally, I want my staging sites on separate subdomains, but IWP just can’t do this, and the documentation is mum about it. I have a support ticket open with them right now to figure out why I was unable to clone a particular client site, and to make sure that this paragraph is correct. What I can tell you is that I spent days trying to get a proper staging site set up for my client using IWP.

It’s not all their fault. I’m taking over a project that seems to have been abandoned by the original developer, and there were many problems with the site that may have contributed to the issues I’ve been having, as we shall see shortly. IWP has three staging options: on the original site, on my configured staging server, or custom FTP. I was able to clone the site to my custom staging server, but the theme didn’t operate properly. I believe this may have been a problem with hotlinked theme assets, but I haven’t figured it out yet.

I literally spent days creating subdomains and updating DNS on the client site, and couldn’t figure out why IWP kept giving me “error: check your hostname” when I tried to update things. I figured it was a DNS propagation error between the server hosting my IWP and the client’s host. I usually only work on sites I host directly, and this was the first time I actually had to use the staging features. I was getting very anxious: I had wasted several days, was already dealing with an irate client, and was starting to get a panicked feeling when working on the project.

So I decided to go another route and explore some other options. I read through several blog posts on WordPress staging sites, and one name that came up several times was WPStagecoach. At only $12 for a month, I signed up for a trial and had the staging site up in less than an hour. No kidding.

The setup process was impressive. Getting the plugin installed and activated was pretty standard, and creating the staging site was very user friendly. It started off by scanning the site for large files, found a backup archive, and asked to exclude it. Then it started creating a tar file of the site to move to staging, showing me a status percentage as it went. This was very much needed, considering IWP had been “working” for hours without so much as a log update. After the tar process completed, I did get an error that the archive was missing files, and was asked whether I wanted to abort, retry, or “proceed fearlessly.” I retried, waited another five minutes, and got the same error, so I went ahead and pressed proceed. Another five minutes, and BAM. There was my staging site, and it looked perfect.

One thing that really impressed me: after the creation of the staging site, I was given a list of errors that WPS had found, mainly places where the site’s URL was hardcoded in the theme templates. These are likely why I had the rendering issues on my previous staging attempt. So now I have a list of files that I need to target, as hardcoded URLs will play havoc with my development environment as well. This feature shows where WPStagecoach really shines as a specialized product.

WPS hosts the staging site on their own servers, giving each site its own subdomain. I got ten with my account, which is way more than I’m going to need anytime soon. So now I can proceed with the next step on this project, which is getting our MemberPress module up and running. Then I’ll be able to see if pushing changes back to the live site is as easy as creating it in the first place. If my experience so far is any indication, it’ll be a cinch.

Git-ing it done


Roll your own GitLab

I had trouble falling asleep last night. Younger crawled into our bed just as I was dozing off and kept squirming, so I slept in her bed. It faces east, so I woke up at five and tried to go back to sleep. I heard Elder up, so I got up and started the day. She’s sitting across the room from me, looking up Valentine’s Day gift ideas for the boy in our quarantine bubble down the street. Her sister has been ribbing her about it for days now.

One of our Zombie, LLC clients wants help standing up an internal GitLab server. It got me thinking, so I went ahead and set up a GitLab Docker instance on my downstairs Ubuntu server. I figure it’s good practice; “do the job you want” has always been good advice, so setting it up was worth the time. Plus it only took about fifteen minutes. The main problem I ran into was an SSH conflict with the existing service on the host, and since it appears that modifying the port mappings on an existing container can’t be done without stopping the Docker daemon, I just deleted the container and started over. I’ll probably move SSH if I ever do a real deployment, but here at the house the HTTP functionality is enough.
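For reference, here’s roughly the invocation I ended up with, a sketch based on GitLab’s documented Docker setup, with SSH published on 2222 to dodge the conflict; the hostname and volume paths are my own choices:

# GitLab CE with container port 22 remapped to host port 2222
docker run --detach \
  --hostname gitlab.example.home \
  --publish 80:80 --publish 443:443 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest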

There’s also the mail issue. I didn’t want to use the root account to set up my repos, but the workflow around new accounts wants to send an activation email. I tried installing sendmail on the host, but the password reset didn’t work; I doubt it will without a publicly routable DNS entry back to the server or proper SMTP services, which I don’t want to mess with right now. Thankfully, I found a password change form in the admin interface that didn’t require knowing the old password, and got up and running.

I am nowhere near as strong in my Linux management skills as I am with Windows, where everything is pre-packaged and somewhat unified. I can stand up domain.local services lickety-split, and have a library of PowerShell scripts to set up AD, DNS, and DHCP services within a domain. I have never actually taken the time to set one up at home, though that point may soon be approaching. I’ve been wanting to investigate Ubuntu server as an alternative or supplement to Windows-based AD services, but part of me is skeptical that such a setup is even viable for workstation authentication and services. But I digress. The point I’m trying to make is that I’ve been in awe of Unix sysadmins ever since I worked at an internet service provider back in the late 90s and watched our systems guy pop in and out of terminal shells like a wizard. I’ve never felt adequate in that regard.

I made some good progress yesterday on the WordPress project, and have started converting the client’s site over to the new theme. I’m going over the demo site, examining the Bakery build they’ve got set up, and recreating it using the client’s assets. This lets me get a bit more familiar with the framework the theme author is using, and hopefully glean some best practices at the same time. It’s a two-steps-forward, one-step-back process, and some strange bugs have popped up. Activating WooCommerce seems to bring the site down completely, as does changing the theme back to the original. At one point, while I was working on the new header, the previews stopped working entirely and would only throw 404 errors. The pages still worked on the actual site, so I made do while I edited.

The usual best practice for WordPress development in git is to exclude the entire WordPress directory except for whatever theme or custom plugin you’re developing, but since in this case we’re working on an entire site, I’ve added the whole WordPress directory and the associated SQL database dump. The wp-content/uploads directory is mounted outside the container, along with plugins and themes. I haven’t pulled this repo down on another machine yet, so I don’t know if it’s going to work. My main concern is how I’m grabbing the database. Managing PostgreSQL during my Django projects has always been a bit of a pain, as I never learned how to incorporate it into my source control. I’ll have to spend some time correcting this deficiency.
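For reference, the conventional theme-only layout looks something like the following .gitignore sketch (the theme name is hypothetical), using the usual ignore-everything-then-re-include idiom; again, not what I’m doing on this project:

# ignore everything, then re-include only the theme under development
/*
!/wp-content/
/wp-content/*
!/wp-content/themes/
/wp-content/themes/*
!/wp-content/themes/my-custom-theme/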

Here is a look at the Docker Compose file I am using for my development setup. The SQL dump mounted at /docker-entrypoint-initdb.d/backup_to_load.sql gets imported when the container is first created; I assume it’s ignored once the MySQL data directory is already populated from source control. We shall soon find out. Also, I haven’t solved the file permission issues that happen when trying to edit things like the wp-config.php file. I’ll have to save that for a later time.

version: '3.8'
services:

  wordpress:
    container_name: 'local-wordpress'
    depends_on:
      - db
    image: 'wordpress:latest'
    ports:
      - '80:80'
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wordpress_password
      WORDPRESS_DB_NAME: wordpress_db
    volumes:
      - "./Wordpress:/var/www/html"
      - "./plugins:/var/www/html/wp-content/plugins"
      - "./themes:/var/www/html/wp-content/themes"
      - "./uploads:/var/www/html/wp-content/uploads"

  db:
    container_name: 'local-wordpress-db'
    image: 'mysql:5.7'
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - './data/mysql:/var/lib/mysql'
      # the MySQL entrypoint only runs init scripts when /var/lib/mysql is empty
      - './data/localhost.sql:/docker-entrypoint-initdb.d/backup_to_load.sql'
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wordpress_password

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

Hewing bits and bytes

Why am I working so hard?

After publishing last night’s post, I made a little headway with one of my projects, figuring out how to mount a SQL dump into a MySQL Docker image so that it gets loaded automatically when the container spins up. Just one more little win toward accomplishing my task. Now I just need to tackle the way I have WordPress deployed, and I can begin working on the project for real. I’m taking my time with this. All of the learning and research I’m doing now isn’t on the client’s time; it’s mine, and it’s the kind of learning I love.

Being able to master Docker means I don’t have to run all this stuff on my local machines. I can start culling all of the packages that I’ve loaded in the past for this project or that; things like Node dependencies, Ruby, and Postgres no longer have to bulk up my system. Pop, here’s a container. Pop, there it goes. I went through my staging server a few days ago and started cleaning shop, removing abandoned projects. Goodbye, rm *pennykoin* -rf, and so long.

I’m still reading Fluent Python, about a half hour before bed. I finally have a good grasp on decorators. My eyes glazed over on coroutines, but I think I’m ready to add threading to my value averager app. I’ve only got a couple of chapters left: one on asyncio, which I desperately need to master, and another on one of my favorite subjects, metaprogramming.

I’ve been reading Fluent Python for about twenty minutes right when I climb into bed. It’s on the iPad, and even with the brightness turned all the way down it’s still bad for rest, so I usually wind up finishing the night with a real book. Right now it’s Digital Minimalism, and last night there was a section about Henry David Thoreau, starting with his time building his cabin at Walden Pond, before he wrote his book. Just how does one build a cabin using only an axe? Anyway, the point here, and one I never appreciated before, is that Walden is really about using time as the true unit of account. What use is earning a bunch more money if the cost in time to earn it is so high? And for what?

It’s not that I haven’t heard the idea of time as money before, or rather of trading time for money; it’s very prevalent in the things I read and hear. But realizing that Thoreau was writing about it some one hundred and fifty years ago shows how little things have changed. I don’t know why I should be surprised. I’m sure Marcus Aurelius says similar things in his journals. I think my point is that I wasn’t expecting to hear it. Here I was, trying to convince myself that I should delete Twitter off my phone for a month, and here’s Cal Newport, via Thoreau, asking “why are you working so hard, you sap?”

Thoreau didn’t have any children, though, so I guess I can say that’s part of the reason I grind, although it’s really not the only one. I like figuring things out, and it just so happens that the things I’ve figured out how to do enable me to earn a comfortable living. Still, there’s some sort of drive to build something, a legacy, if you will, coupled with a mild regret that I should have more to show for this life I’ve lived these past forty-one years. One of my grandfathers built a house. All I have of the other is a stained glass lamp, sitting next to one of my daughters’ beds. That and memories of model trains in a basement, and playing a flight simulator on an old Tandy PC back in the 80s.

And maybe that latter point is the crux of minimalism. In the end, it is the memories that matter. Not all of us are going to write lasting works of fiction or build cathedrals that will be finished long after our deaths and stand for centuries. Today, all I can do is love those around me and tinker on my keyboard, changing the world around me bit by bit. Who knows, maybe Bitcoin is going to succeed, allowing me to leave generational wealth for my grandkids, either directly or indirectly. Maybe one of my other projects will succeed and grant me a minimum viable income so that I’m not forced to work another day in my life.

Maybe I’m being fatalistic; maybe this is just my monkey mind sowing doubt, preparing me for failure. I’m not sure, but it doesn’t feel like it. I think it’s just recognition that I’ve got too many things distracting me, things that I need to let go of and remove from my life.

But right now, I hear the pitter patter of little feet upstairs, which means it’s time for me to enjoy my Sunday.

What is work?


Down one rabbit hole after the other

I spent most of yesterday digging into WordPress in a way that I really haven’t before: theme files. My current project has a customized version of the Twenty Seventeen theme, with lots of custom templates, fields, and functions that I need to move over to a new theme. It’s taken me weeks to finally understand what the previous developer was doing, and there’s a fatal bug somewhere in the system that is deleting post data, which I’m trying to uncover so I can clean things up. I figure my best course of action is to migrate everything to a staging site, start with a new theme, and go through the plugins one by one to rebuild the content on the site. There are multiple pages and types of posts with custom fields that need to be displayed properly. I’m not really looking forward to having to debug someone else’s stylesheets, though.

Doing this kind of development isn’t ideal even on a staging site, given that the WordPress native code editor isn’t really suited to real work. I haven’t done PHP work in over ten years, but I downloaded PHPStorm and got started setting up a development environment. I was hoping to set up some sort of Git workflow for the site, but I didn’t find any options that were production ready, so I grabbed the files via FTP and quickly set to work.

WordPress has an official Docker image, so I set about configuring a Compose file for my local environment. There I ran into problems. I was trying to map my theme directory into the container, but ran into issues with file permissions. I haven’t quite figured it out: I can change the permissions within the container to allow it to use the files, but then they’re locked on my development host. So that’s my challenge for today, and one that will no doubt lead down many more rabbit holes.
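One workaround I’m considering, as an untested sketch on my part: leave my host user as the owner of the bind-mounted directories, but share group access with the container’s www-data group (gid 33 in the official Debian-based WordPress image), so both sides can write:

# give the www-data group (gid 33 in the container) write access
sudo chown -R "$USER":33 ./themes ./plugins ./uploads
sudo chmod -R g+rw ./themes ./plugins ./uploads

Files the container creates would still come out owned by www-data, so this is at best a partial fix.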

This is just an example of the kind of stuff I do that most people call work. This doesn’t have anything to do with my regular day job responsibilities; it’s for a client. And even if it weren’t, it’s the same type of activity I would be doing for fun anyway. Although if you asked my wife whether she thought I was having fun last night, she would have said that all the cursing and muttering under my breath indicated otherwise. This particular project is a challenge for me because it involves a level of technical expertise that I don’t have, and that I’m forced to pick up in order to understand the issue — and hopefully solve it! It’s this area, right outside my current capabilities, that puts me in the zone and makes time fly.

It’s a drive that has gotten me where I am today, and has served me very well. Unfortunately, it’s not something I find in my current day job, and that is one of the main reasons why I’m looking elsewhere these days. Part of the problem is that the company constantly hovers on the edge between sustainability and closure, and I have trouble reconciling that situation with my responsibility for it. Perhaps it’s that I don’t have any stake in the company other than my current minimum viable salary. It’s allowed me to pursue other projects, including school and political activities, but hasn’t offered anything in the way of growth in several years. I am not in sync with my boss on the direction of the company, or even the type of customers we take on. The challenges are rote, and therefore not interesting to me. And they haven’t changed in years. Neither has my salary.

I’ve started reading Designing Your Life, by Bill Burnett and Dave Evans, and one of the first exercises asks readers to write a workview reflection, defining how work relates to their life, money, and others. This is my response to that, of course. Work has such a broad meaning to me. It’s not just your job; it’s also the things you do for your family and friends, chores around the house or the yard, spending time with family, and yes, helping your dad or whoever with their laptop from time to time. And one thing my dad taught me, that I’m trying to impress upon my girls, is that when there’s work to be done you just have to suck it up and do it.

Work is rewarding also, and can be fun. That’s not to say it can’t be repetitive or stressful; the most panic-inducing, heart-attack moments I’ve had have been related to failures at work. But I’ve helped a lot of people, and it’s often fulfilling. That’s not to say I haven’t had horrible, dirty jobs that I took because I was unemployed and living on couches, but most of my jobs have been knowledge work, and pretty chill. These days work pays the bills, but it’s the work I do outside of work where I continue to learn and grow.

Hopefully my girls will be as lucky as I am, and be able to make a living doing what they love. Actually, it’s not luck; it’s by design. Obviously I am not where I want to be right now. Sure, my work life is probably better than ninety percent of the world’s population, and I have no room to complain about anything, but isn’t it the human condition to want more, to want to be more? And to me, that’s what work is: the drive to improve, to become better. Constant improvement. Refine, iterate, repeat, repeat, repeat.

Underpromise, but overdeliver

Yesterday we gave the final demo of our two-semester professional workforce development project. It did not go well. We had fifty minutes to present, but our demo only took about five. One of the professors, who had been receptive to our pitch last semester, was very disappointed. We basically failed to deliver. I got defensive, but tried not to make excuses, because she was totally correct.

During the last few weeks of this semester, we were more focused on what was right in front of us, meaning the technical issues, than on the big picture we had promised last semester. As system architect, I was more focused on getting the architectural components up and running than on whatever particular use case this professor was expecting to see. So in that sense, yes, we failed.

There were a number of roadblocks we had to overcome. Our team was made up of six members, two of whom were complete dead weight. Another member had issues with her local development machine and was unable to contribute directly to the source code. That was fine, as this was a writing-intensive course with several written deliverables, including specifications and testing plans. So we split the work: I led the technical development and contributed to the written work as needed, but pretty much left management of the final written work to others on the team.

We relied heavily on Cookiecutter Django for our base deployment. In the long run, this was probably the way to go versus vanilla Django or another framework like Flask, but it hurt us in the short term. No one on the team besides me was familiar with it, though I don’t think we could have avoided that with another solution. We wound up spending an inordinate amount of time getting the others up to speed on deploying it via Docker and managing it through various IDEs. I spent a lot of time mentoring the two teammates who were assisting with actual code commits. And it had been so long since I had worked with Django that I had to relearn its model-view-template architecture all over again.

And this is where using Cookiecutter really slowed us down. The package implements several best practices on top of Django: overriding the default user model, implementing updated authentication forms, even deploying a Traefik load balancer in front of the production web server. Each of these was one more thing for the team to learn.

In all, we spent less than forty-five days doing actual development work on the project, and that came after more than thirty days spent just getting the framework up and running across local and production environments. The semester was front-loaded with written deliverables, putting off actual development until the last half. In retrospect, holding to this schedule was a mistake, and it was probably a bit of hubris on my part not to start work earlier.

We got an email from our instructor this morning:

“From a development (i.e., architecture, tool, collaboration, and project) standpoint your team has met all the requirements for the course. While your demo was less “interface oriented” than the other groups the evaluators referenced, the foundation of your prototype is more substantial. It was clear to me that there was a bias during the evaluation based on discussions that occurred […] last semester. Keep in mind that I have been equally critical of groups in the past (albeit in a less heavy-handed fashion). Your group as a whole should not worry about failing the course.”

So maybe I’ve been a little hard on myself, but I think it more likely that the evaluators were as invested in our project as we were. Obviously there’s some university politics at play there.

So the question is, what comes next? We had hoped to pursue a grant to continue working on this project on behalf of the university post-graduation, but given the tepid response from the skeptical evaluator, I don’t see that happening without many additional changes. I’ve broached the subject with my teammates, and I’m not sure there’s any desire to continue. Maybe after the semester is over. I still have two more classes to complete, and others have a full course load. At least one of them has taken a job, so it may just come down to me and one other person.

All in all, I know the experience was a positive one for me. Learning GitLab, Docker, and Django for app deployment will come in handy in future projects. And we’ll just have to see whether any of the relationships with my teammates last past this semester. Any decision about the future of the project is on hold for now.

Fearless refactoring

C++ is a much more complicated language than I ever imagined. I’d had a little bit of exposure to it earlier in college, and I hated it because of the amount of setup required to get anything running. We were introduced to Code::Blocks and Eclipse, but both of them just seemed so clunky. Figuring out compiler options and makefiles, and trying to get programs to compile on my home Ubuntu workstation, the school’s Windows RDS environment, and the professor’s autograder was just too much. So when I really started diving into Python, it was like coming up from being underwater too long and getting that first breath of air.

Working on the Pennykoin Cryptonote codebase got me a bit more comfortable with it, though I still didn’t understand half of what I saw. Half of that was the semantics of the language itself; the other half was just finding my way around a large codebase. Eventually I was able to figure out what I was looking for and make the changes that I needed to make. But I never really felt comfortable making those changes, and even less so publishing and releasing them. That’s because the Pennykoin codebase had no tests.

I’ve spent the last few days working on some matrix elimination code for my numerical methods class. During class, the professor would hastily write some large, procedural mess to demonstrate Gaussian elimination or Jacobi iteration, and not only did I struggle to understand what (and why) he was doing, but he often ran into problems of his own and we had to debug things during lecture, which I thought was a waste of class time.

As I’d been on an Uncle Bob kick during that time, I decided I would take a TDD approach to my code, and began what’s turned out to be a somewhat arduous process of abstracting and decoupling the professor’s examples into something that had test coverage and allowed me to follow DRY principles. Did I mention that our base matrix class had to use C-style arrays via pointers to pointers? Yes, it was a slog. Rather than using iterators over standard library containers, every matrix operation involves nested for loops. I’ve gone mad trying to figure out what needs dereferencing, and spent far too long tracing strange stack exceptions. (Watch what happens when you have an endl at the end of a print function and call another endl immediately after calling that function…)

I started out working on the Gaussian elimination function, then realized that I needed to pull my left-hand-side matrix member out as its own class. Before I did that, I tried to create my own vector class for the right-hand side. So I pulled that out, writing tests first. Then I started on my new matrix class. I ran into problems holding a pointer array of my vector class, and for reasons that I’ll not get into, I kept the C-style arrays. I slowly went through my existing test cases for the Gaussian class, making sure that I recreated the relevant ones in the matrix class. Input and output stream operators, standard array loaders (for the tests themselves), equality, inequality and copy functions were copied or rewritten. After one last commit to assure myself that I had what I needed, I swapped out the double** lhs member for a matrix lhs, and replaced the code within the relevant Gaussian functions with calls to lhs.swapRows(), commenting out the old lines. Then I ran the tests.

And it worked.

Uncle Bob talks about having that button of truth that you can hit to know that the code works, and how it changes the way you develop. I’m not sure if he uses the word fearless, but that’s how it feels. Once the tests said OK, I erased the commented code. Commit. Don’t like the name of this function? Shift+F6, rename, test, commit. These two functions have different names but do the same thing with different parameter types? Give them the same name and trust the compiler to tell the difference. Tests OK? Commit.

It’s quite amazing.

I spent several hours over the past few days adding an elementary matrix to the elimination function, making various small changes to the code, adding what I needed (tests first!) and making small refactors to keep the code clear. I’ve had to step into the debugger a few times, but it’s going well. There’s still one large function block that I’ve been unable to break down because of some convoluted logic, but I’m hoping to tackle it today before moving on. And I’m confident that no matter what changes I make, I’ll know immediately whether they work or not.

Fair Open Source

Last night I had the pleasure of meeting Travis Oliphant, one of the primary creators of Numpy and founder of Anaconda. He’s currently the CEO of OpenTeams, a company attempting to change the relationship between open source software and the companies that build on top of it. I found out about the lecture because of an article I had read in Wired about technology’s free rider problem, and went to the event without knowing much at all about Mr. Oliphant. I soon found out who he was and was very grateful that I had come. I’ve spent a lot of time using Numpy, and I’ll admit I was a bit starstruck.

Travis’s lecture grew out of his experience working on Numpy. He basically gave up a tenure-track position at Brigham Young University to work on it, and had to find other ways to support his family for the two years he spent on the initial release. As has been noted elsewhere, much of the tech boom of the past 20 years has been built on top of the contributions of FOSS developers like Travis. He’s a big believer in profit, and thinks the lack of financial incentives in the FOSS space has caused several problems, including developer burnout, leading to a lack of proper maintenance of these projects. Many of them, like Numpy, have become crucially important to the scientific and business communities.

Travis Oliphant’s PyCon 2019 Lightning Talk about Quansight

Oliphant’s goal is to make open source sustainable. Quansight is a venture fund for companies that rely on OSS; one of the companies they’ve funded is FairOSS, a public benefit corporation that hopes to support OSS developers through contributions from companies that use OSS. He’s also doing something very similar with OpenTeams, hoping to follow Red Hat’s model of supporting open source by providing support contracts for various projects.

These are all very worthy goals, and I was both impressed and inspired by his talk. It’s opened up some interesting career possibilities, too. I took my first developer payment through GitCoin recently, and it was a bit of a rush. Getting paid to work on open source software seems like an awesome opportunity, and I’ll be keeping an eye on this space for potential post-graduate plans.

Becoming a Git-xpert

I have been trying to get a grip on the Pennykoin CLI codebase for some time. One of the problems is that the original developer had a lot of false starts and stops, leaving a lot of orphan branches like this:

[Screenshot of the branch graph, taken with GitKraken]

If that wasn’t bad enough, at some point they decided to push the current code to a new repo, losing the entire early commit history. Whether this was intentional or not, I can’t say. It’s made it very tricky to backtrack through the history of the code and figure out where bugs were introduced. So problem number one is how to link these two repos together so that I have a complete history to search through.

Merging two repos

So we had two repos, which we’ll call pk_old and pk_new. I originally tried to merge the repos together using branches, but I either wound up with the old repo as the last commit, or with the new repo and none of the old history. I spent a lot of time going over my bash history file and playing with using my local directories as remote sources, deleting and starting over. Eventually I found out that there was indeed a common commit between these two repos, and that all I had to do was add the old remote with the --tags option to pull in everything.

mkdir pk_redux
cd pk_redux
git init
# pull in the new repo's history and tags first
git remote add -f pk_new https://github.com/Pennykoin/Pennykoin.git --tags
git merge pk_new/master
# then add the old repo; the shared commit ties the two histories together
git remote add -f pk_old https://github.com/Pennykoin/Pennykoin-old.git --tags

Now, I probably could have gotten away with just cloning the pk_new repo instead of initializing an empty directory and adding the remote, but the end result should be the same. A quick check of the tags between the two original repos and my new one showed that everything was there.

[Screenshot: the link between the two repos]

Phantom branches

One of the things that we have to do as part of our pk_redux, as we’re calling it, is set up new repos that we actually have control over. This time around, everything will be set up properly as part of governance, so that I’m not the only one with keys to the kingdom in case I go missing. I want to take advantage of GitLab’s integrated CI/CD, as we’ve talked about before, so I set up a new group and pkcli repo. I pushed the codebase up and saw all the tags, but none of the branches were there.

The issue ultimately comes down to the fact that git branches are just pointers to a specific commit in a repository’s history. Git will pull the commits down from a remote as part of a fetch, but they only show up as remote-tracking refs; local branch pointers aren’t created unless I check the branches out. Only after I created these tracking branches in my local repo could I push them to the new remote origin.
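In practice it looked something like this (a sketch; the branch and remote names here are hypothetical):

# create a local tracking branch for one remote branch, then push it
git checkout --track pk_old/some-orphan-branch
git push new-origin some-orphan-branch

# or recreate every remote branch locally in one pass, then push them all
for b in $(git branch -r | grep -v HEAD); do
  git branch --track "${b##*/}" "$b" 2>/dev/null
done
git push new-origin --all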

Fixing Pennykoin

So now that I’ve got a handle on this repo, my next step is to hunt some bugs. I’ll probably have to do some more work to de-orphan some of the early commits in the repo history, because that will be instrumental in tracking down changes to the Cryptonote parameters. Those changes are likely the cause of the bootstrap issue that exists. My other priority is figuring out whether we can unlock the bugged coins. From there I’d like to implement a test suite, and make sure that there are proper branching workflows for code changes.

Frustrations

I’m a bit perturbed right now. I went back to a Django project I hadn’t worked on in two weeks and could not get my Pycharm interpreter working properly. I’d updated from the Community Edition to the Professional Edition during that time, which may or may not have had something to do with it, but this failed session brings me to another source of frustration that I need to get off my chest.

There are three, maybe four, ways one might need to interact with a Django app in Pycharm. The first is the Python console itself. The second is the regular command terminal. Third are the various run configurations one can set up. And fourth is the Django console that Pycharm Pro enables. My issue is that each of these has its own environment variable settings! Maybe it’s just my inexperience showing through here, but I tend to use several of these at once when I’m working: a run configuration for the test server, the Django console for migrations and tests, and a terminal window running the Django shell so that I can muck around with code while I’m figuring things out.

I don’t know if I’m an idiot or what, but it just seems extremely inefficient, and I have got to be missing something.
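One partial workaround I’ve been meaning to try (an assumption on my part, not something I’ve verified across all four contexts): keep a single .env file, source it for anything launched from a shell, and let a plugin like EnvFile feed the same file to run configurations.

# export everything defined in .env, then launch from the same shell
set -a          # auto-export all variables defined while this is on
source .env     # one shared KEY=value file for every entry point
set +a
python manage.py shell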