Fearless refactoring

C++ is a much more complicated language than I ever imagined. I’d had a little bit of exposure to it earlier in college, and I hated it because of the amount of setup required to get it running. We were introduced to CodeBlocks and Eclipse, but both of them seemed so clunky. Figuring out compiler options and makefiles, and trying to get programs to compile on my home Ubuntu workstation, on the school’s Windows RDS environment, and on the professor’s autograder was just too much. So when I really started diving into Python, it was like coming up from being underwater too long and getting that breath of air.

Working on the Pennykoin CryptoNote codebase got me a bit more comfortable with it, though I still didn’t understand half of what I saw. Half of my confusion was the semantics of the code itself; the other half was just trying to find my way around such a large codebase. Eventually I was able to figure out what I was looking for and make the changes that I needed to make. But I never really felt comfortable making those changes, and even less so publishing and releasing them. That’s because the Pennykoin codebase had no tests.

I’ve spent the last few days working on some matrix elimination code for my numerical methods class. During class, the professor would hastily write some large procedural mess to demonstrate Gaussian elimination or Jacobi iteration, and not only did I struggle to understand what he was doing (and why), but he often ran into problems of his own, and we had to debug things during lecture, which I thought was a waste of class time.

As I’d been on an Uncle Bob kick during that time, I decided I would take a TDD approach to my code, and began what’s turned out to be a somewhat arduous process of abstracting and decoupling the professor’s examples into something that had test coverage and allowed me to follow DRY principles. Did I mention that our base matrix class had to use C-style arrays built on pointer pointers? Yes, it was a slog. Rather than being able to use iterators over standard library containers, every matrix operation involves nested for loops. I’ve gone mad trying to figure out what needs dereferencing, and spent far too long tracing strange stack exceptions. (Watch what happens when you have an endl at the end of a print function and then call another endl immediately after calling that function…)
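
For the uninitiated, here’s roughly what that representation involves. This is a minimal sketch of my own; the function names are mine, not the assignment’s:

#include <cstddef>

// Minimal sketch of a pointer-to-pointer matrix. Each row is its own heap
// allocation, so the row pointers can be shuffled independently.
double** makeMatrix(std::size_t n) {
    double** m = new double*[n];       // array of row pointers
    for (std::size_t i = 0; i < n; ++i)
        m[i] = new double[n]();        // value-initialized row of zeros
    return m;
}

// Every operation ends up as nested loops over the row pointers.
void scale(double** m, std::size_t n, double factor) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            m[i][j] *= factor;
}

// The one payoff of double**: swapping rows is just swapping two pointers.
void swapRows(double** m, std::size_t r1, std::size_t r2) {
    double* tmp = m[r1];
    m[r1] = m[r2];
    m[r2] = tmp;
}

void destroyMatrix(double** m, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        delete[] m[i];
    delete[] m;
}

That pointer swap is the whole reason this representation exists; it’s also where most of my dereferencing confusion came from.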

I started out working on the Gaussian elimination function, then realized that I needed to pull my left-hand-side matrix member out as its own class. Before I did that, I tried to create my own vector class for the right-hand side, so I pulled that out first, writing tests before code. Then I started on my new matrix class. I ran into problems trying to include a pointer array of my vector class, and for reasons that I’ll not get into, I kept the C-style arrays. I slowly went through my existing test cases for the Gaussian class, making sure that I recreated the relevant ones in the matrix class. Input and output stream operators, standard array loaders (for the tests themselves), and the equality, inequality, and copy functions were copied or rewritten. After one last commit to assure myself that I had what I needed, I swapped out the double** lhs member for matrix lhs, and replaced the code within the relevant Gaussian functions with calls to lhs.swapRows(), leaving the old lines commented out. Then I ran the tests.

And it worked.

Uncle Bob talks about having that button of truth that you can hit to know that the code works, and how it changes the way you develop. I’m not sure if he uses the word fearless, but that’s how it feels. Once the test said OK, I erased the commented code. Commit. Don’t like the name of this function? Shift+F6, rename, test, commit. These two functions have different names, but do the same thing, with different parameter types? Give them the same name and trust the compiler to tell the difference. Test OK? Commit.
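That overloading step, in miniature, looks something like this; illustrative signatures, not my actual class:

#include <vector>

// Two loaders that used to have different names. Same name now; the compiler
// picks the overload from the argument types at each call site.
void load(double* dest, const double* src, int n) {
    for (int i = 0; i < n; ++i)
        dest[i] = src[i];
}

void load(std::vector<double>& dest, const std::vector<double>& src) {
    dest.assign(src.begin(), src.end());
}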

It’s quite amazing.

I spent several hours over the past few days working on adding an elementary matrix to the matrix elimination function, and I made various small changes to the code, adding what I needed (tests first!) and making small refactors to make the code clearer. I’ve had to step into the debugger a few times, but it’s going well. There’s still one large function block that I’ve been unable to break down because of some convoluted logic, but I’m hoping to tackle it today before moving on. And I’m confident that no matter what changes I make, I’ll know immediately whether they work or not.
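
(If the term is unfamiliar: an elementary matrix is the identity with a single row operation recorded in it, and left-multiplying A by it applies that operation to A.) A rough sketch of the idea, using my own assumed names rather than my class’s actual API:

#include <cstddef>

// Start from the identity, then record exactly one row operation.
void makeIdentity(double** m, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            m[i][j] = (i == j) ? 1.0 : 0.0;
}

// E for "subtract factor * row d from row r": the identity plus one
// off-diagonal entry. Left-multiplying by E applies that elimination step.
void makeEliminationMatrix(double** e, std::size_t n,
                           std::size_t r, std::size_t d, double factor) {
    makeIdentity(e, n);
    e[r][d] = -factor;
}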

Django on Docker development challenges

I finally had some time to do some deep work yesterday, and got my university project’s Django instance up and running. It took way too long. The local settings for Cookiecutter Django work smoothly in a Docker setup, but deploying to a production instance took me by surprise. There were several issues I had to work through.

Cloud storage: I had inadvertently set up my project with the cloud storage settings for AWS. We’re not using AWS or Google Cloud Services for a CDN, since this is just a small prototype, and since we didn’t have AWS bucket credentials, the Django service wouldn’t start. I had to replay my settings file and recreate the project with cloud services set to none. I attempted to use a fork of Cookiecutter Django that uses Nginx as the media server, but had other issues with it and decided we just won’t have media for this prototype.

Traefik: The production settings put Django behind a Traefik load balancer, which is configured to use Let’s Encrypt for certificate validation. Leaving that section blank causes Traefik to fail, so I commented out the SSL validation section of the configuration file. It currently throws a warning about a nonexistent validator, but for now this is the only way to get it to serve pages. I’ll register with Let’s Encrypt eventually, but I’m not sure I’ll be able to procure a cert for our web server given that it’s a host on our university’s CS domain.

Development environment issues: Perhaps the most frustrating problems I’m having are around the way our environment is set up. I work off campus, yet our resource host is only available inside the campus network. I’m using our CS GitLab server as a code repo, but I haven’t set up any CI jobs to deploy the code yet. In order to get a terminal on the server, I have to SSH to our public CS server, then SSH to the resource server. In order to view the Django website, I have to open an RDS session. Not ideal, but I’ve yet to optimize the setup.

And troubleshooting these various problems with our production server comes with its own set of challenges. The git repo is synced to our Docker host, and the instance is deployed via docker-compose commands. In order to update the code, I have to cycle through down, build, and up commands to resync it. Hopefully I’ll be able to set up PyCharm’s Docker remote capabilities to edit the code directly within the Docker instance. We had planned to set up multiple containers in order to run a test server, but that’s going to be very difficult on a single host.

I’ve had other minor issues with production settings not taking effect. It looks like the .env files aren’t loading, causing the default local ones to be imported. I had to change the defaults in manage.py, but I assume this may break our local setup.

When everything is in shape, I’ll have one git repo that can be run locally, in test, or in production, with a CI job that deploys commits to our Docker container. I’ve got a lot of work to do.

Being a Technologist

Or, “Where’s my flying car?”

The Recode Decode podcast had this great interview with Peter Diamandis and Steven Kotler, authors of the book The Future Is Faster Than You Think. I’ve always considered myself somewhat of a futurist, and the concept of accelerating technological change has been on my mind for a long time. Future Shock might have been my first foray into the subject, and I followed that up with several books by Ray Kurzweil, including The Age of Spiritual Machines and The Singularity Is Near. The years have given me some skepticism that we’ll see the type of changes Kurzweil envisions, but 2040 does seem like a long way off from here. Given just what we’ve seen in the past couple of decades, I’m certain that we are going to see a continuing rapid transformation.

Diamandis is a colleague of Kurzweil; the two are co-founders of Singularity University, which aims to help business leaders understand the changes on the horizon. The confluence of new technologies is enabling things that were just not possible a decade ago: robotics, genomics, artificial intelligence, 3D printing, blockchain. Having an understanding of these trends is a huge competitive advantage.

Diamandis and Kotler were both on Impact Theory as well, and when the host asked them what technology they were most interested in seeing, the answer was flying cars. They’re pretty confident that we’ll see autonomous flying vehicles hit the market in the next two to four years, and it reminded me of a thesis I heard from ARK Invest’s Cathie Wood a while back. She’s betting that it will soon be cheaper, on a cost-per-mile basis, to take a flying taxi than to own a vehicle. She gets into a discussion about utilization rates: personal automobiles sit in the single digits, parked in our driveways most of the day, while autonomous cars would run at more like eighty percent.

Considering this bull case for the flying car market, I did some research to see which companies were at the forefront of this tech and, more importantly, which ones were available as publicly traded companies. I was unable to find many pure plays, as most companies making progress in the space are either startups or subdivisions of larger companies. The list below is all the companies I could find that are currently traded on the public equities markets.

  • Aston Martin
  • Amazon
  • Audi
  • Boeing
  • Borg Warner
  • Delphi
  • Airbus
  • EHang
  • Geely
  • Hyundai
  • Lear
  • Moog
  • Porsche
  • Rolls-Royce
  • Toyota
  • Tesla

Besides the larger firms like Tesla, Toyota, and Boeing, there are also some smaller car companies like Rolls-Royce and Aston Martin making plays in the space. Chinese car manufacturer Geely has acquired startup Terrafugia, which seems to be a leader in the space, and has also invested in Volocopter. I also added two parts suppliers to this list, including Borg Warner and Y. The firm that I’ve chosen to dip my toe into, however, is EHang, a Chinese drone manufacturer.

This is a straight gamble on my part, but I’m only taking a two percent position in my portfolio, and I’ll be averaging in daily over the next 90 days. The stock has only been trading since December, between eight and fourteen dollars.

I’m also opening a position in additive manufacturing firm ExOne, based purely on the fact that they’re in ARK Invest’s autonomous tech fund, and that their current price and chart fit my personal preferences as well. So starting today, I’ll be adding these two stocks to my value averaging program, along with Lending Club and MTLS, another 3D printing firm.

During the interview with Kara Swisher, Clayton M. Christensen’s name came up. Christensen, who passed away recently, is the author of The Innovator’s Dilemma, a book that has been mentioned by so many leaders over the years. I went to the library to pick up a copy, but found that I had unintentionally picked up the sequel, The Innovator’s Solution. Thankfully, copies of the former are easily found online, so I downloaded one to my iPad and started reading it last night. Once I finish these two, I plan on getting to The Future Is Faster Than You Think, after I read the two books written by x and y before that, Blank and Blank.

Firms I’m thinking about applying to

A recent Medium post on 2020 IPOs got me thinking about places that I’d like to work. Part of me has no desire to go back to work for a large company. I did four years with a Fortune 500 company, and while it was good for a while, the environment became toxic and I wound up self-destructing until they fired me. I haven’t had the best track record with my jobs up until my present position, to be honest. The place I’m at now isn’t ideal, but I guess I’d rather be a big fish in a small pond, so to speak.

Now, while I have no desire to go work for a retailer, or for an exploitative company like Instacart, if I were to go back to a large firm and trade my freedom for a hefty package, these are some of the ones I’d be interested in.

GitLab

One of the first companies on the list was GitLab. I’ve been a fan of theirs and have been using them over GitHub for the past few months. My university has an internal instance, and I’ve been using it a lot, figuring out how to use their CI/CD pipelines. They have a culture of radical transparency, with their entire company handbook up online. Their interview and selection criteria are there, along with job responsibilities and performance metrics. Based on the compensation calculator, it looks like even a basic support position would be a step up from where I’m at today. It seems really appealing.

Stripe

Stripe has been doing very well in the payments space. They’ve got no plans to go public, but have a crazy valuation. They’ve got a lot of remote technical opportunities that could be interesting. On the downside, they recently discontinued support for Bitcoin payments, although the CEO remains optimistic about cryptocurrency in general.

Square

Not on the IPO list, since they went public back in 2015. (Man, did I miss that one…) Another payments company, with several remote positions as well as jobs in Atlanta, Denver, and Austin. There are several front-end positions I could qualify for, even with my limited experience. And the Cash App does Bitcoin, so it seems like it may be a good fit.

Asana

I used to be an advocate for Asana, but stopped using their software in favor of Basecamp. I originally skipped over them, but I just took a look at their job board. Nothing remote. I have no desire to move to San Francisco, but if I wanted to move the family to Iceland, it might be worth considering. I like how they list their values on their job postings, as well as their “Day in the Life” feature on one of their engineers.

Robinhood

I’m not a customer — get IRAs already! — but I have been following them for some time and respect their efforts to make investing more accessible. Fractional-share investing is a really good idea. And they offer crypto trading as well. No remote jobs available, but Denver is starting to sound like a good place to live. Go Broncos!

TD Ameritrade

Not on the original list, but I’m adding it here after hearing Sunayna Tuteja, TD’s Head of Digital Assets and DLT, on the On The Brink podcast. She makes it seem like a really great place to work. A quick look at their job board, however, doesn’t turn up anything crypto-related. There are a couple of contract positions in Omaha and New Jersey, neither of which is a place I have any interest in moving to.

Templates, makefiles, and YAML, oh my!

When I first started programming, it was simple to just fire up an editor and start typing away. Scripts usually wound up as large procedural monstrosities. If I managed to get anything working, it was usually such a mess that it quickly became unmanageable. Nowadays, there’s so much setup that needs to be done before I can even get to work: setting up a git repo, a Python virtual environment, external repos, databases, my IDE. I suppose it must be indicative of the progress I’ve made as a programmer.

One of my final classes is a multi-semester group project. We spent last semester building out the design docs, and we’re spending the first few weeks of this one refining those docs individually before coming back together and deploying a prototype. I’m the old man on the team, about twice as old as the rest, and I’ve been doing this long enough to have very strong opinions about a lot of things, so I’ve been trying to guide the team toward those standards.

I’m not going to get into the use case for our app yet, but I convinced the team that we should use Django for the backend. Now, while we could use it for the front end as well, I figured that since Django was giving us most of what we needed for the core functionality, we could spend some resources trying out cutting-edge tech that would give the team some experience with GraphQL and React Native. I’ve got no idea whether that will make it into the final product. Even though we’ve got a team of six people and I’m handling all of the infrastructure stuff, I’m starting to wonder whether the others will be able to implement those new features in time.

I’ve got a few more passes to make through my individual paper, then I’ll start focusing on the prototype presentation. My professor commented during last week’s recitation that these applications did not have ‘cookie cutter’ approaches, and I almost laughed out loud, because we’re literally using Cookiecutter Django as the basis of our project. I’m debating whether I want to try a live demo of deploying one, or do it offline and record screenshots or something.

Being able to use something like Cookiecutter to set up a Python package, with all the unit testing, CI, and documentation wired up via make commands out of the box, is amazing once you understand what all of that stuff actually does. It can lead to a bit of choice paralysis at first, trying to figure out testing frameworks, code coverage tools, linters, and all that. I’m still getting there. But once you’ve settled on your tools, it makes rapid prototyping easy.

It’s almost maddening thinking about how many different ways there are to set up your workflow. I’m currently using Pipenv as my tool of choice, but I recently read about Poetry, which seems to be a step up in many ways. For now, though, I’m not chasing it down. Instead, I’m going to focus on delivering something using the tools I already have, instead of getting caught up in what’s new. It’s a lesson that becomes more and more relevant as I mature in my abilities.

Windows 10 Provisioning packages FTW

My company has spent the last few weeks trying to prep customers for the death of Windows 7, which officially happened earlier this month. I’ve been less than happy with the amount of buy-in we were able to get from clients, most of whom are either too broke or too stubborn to deal with the expense. We’ve had a few that are taking things seriously and upgrading their machines, so I’m preparing to do a lot of installs.

In the past, when I worked in the enterprise space, we would build images for the various models of desktops and laptops that we deployed. It made sense at that scale. My work in the SMB space doesn’t necessitate that type of operation, since deployments are sporadic and smaller in quantity. Plus, there’s so much churn in the OEM hardware market that it just doesn’t make sense.

As a managed service provider (MSP) with dozens of clients, I’ve been trying to standardize our operations as much as possible, but it doesn’t scale very well. We have a Remote Monitoring & Management (RMM) tool that we deploy to all our endpoints to install our remote access and security tools, but we wind up with a different installer for each site. We can create scripts to deploy some software, but it’s clunky and I don’t like using it. I’ve had some success deploying things like Chrome and Acrobat Reader, but it’s useless for some of the more obscure line-of-business vendors that haven’t packaged their installers for silent installation.

A majority of our clients are either on Windows Active Directory domains or Office 365, which uses Azure AD, so I’ve managed to write and collect a number of PowerShell scripts for common tasks, but even after seven years I haven’t reached the level of automation I’d like to be at. I’ve written about my attempts to integrate some of our platform APIs, but doing things at the user level is really difficult when you’re dealing with small sites of five or ten users.

Recently I extracted the installer from our RMM provider and found that the package is just another executable with a settings file. One of the lines in this settings file contains our client’s unique ID in their system, and I discovered I could use this as the basis for a universal installer. I wrote a PowerShell script to search and replace this string based on a hash map, and I even added a little popup selection box to run it. It wasn’t anything fancy, but it made my life just a little bit easier.

One of the things that’s always been a pain in the ass is dealing with the ‘out of the box experience’ (OOBE) that Windows presents the first time you turn a machine on. We’ve got a standard operating procedure for the default admin account name and password and for machine names, but for some reason it still gets screwed up. So I wrote another small script that I can run in Audit mode that imports an XML file to skip OOBE, create the account, and install the RMM tool. It makes life easier, but it’s still buggy.

Lately I’ve been playing with Windows Configuration Designer. It creates Provisioning Packages (PPKG), which can be used by end users to do a lot of these things. It’s got some useful features: device naming, AD or Azure AD enrollment, local admin account creation, and application installs. You can even specify WLAN authentication for wireless deployment. Unfortunately, it’s not a panacea, as debugging packages (especially application installs) is a pain. For one, WCD itself is buggy. The advanced editor started acting up when I changed the application install order, garbling friendly names and file names in the XML and finally throwing errors at compile time that forced me to start from scratch. And if the package installation fails, it can’t be run again.

I made the mistake of trying to wing the installation of some of the applications. WCD is really good with MSI packages, but you’d better have your command switches right for everything else. I kept running into issues with Acrobat Reader: apparently it was still throwing up the GUI, waiting for a Finish click, which caused the PPKG to stall. And restarting the machine and trying to run the PPKG again throws an obscure error message that isn’t well documented. And don’t even think about running the package in Audit mode; it won’t skip OOBE, and it seemed to undo the Azure AD join that I had done.

I wound up splitting the device and account setup into a separate package that I could rerun while I troubleshot the application installations through the main package. Eventually I started using a VM that I could restore from a snapshot, but it was only seven laptops that I needed to deploy. I finally had a working package by the time I got to the seventh!

I’m starting to see the larger picture: a program that can edit the XML to dynamically generate provisioning packages for all of our clients. Grabbing the Azure token might take more time to work out, but otherwise I just need to swap out a few variables for the device name and local admin, and individually select which applications to install.

One last thing about Azure AD: apparently, joining a Windows 10 device to Azure causes BitLocker to be enabled. Decryption keys are synced to the cloud, which is a nice feature.

Gaussian Elimination with TDD in C++, Part 1

I’m pretty pleased with myself. I managed to pull off an epic coding session Friday night and met the deadline on a school assignment. I almost used the word finished there, but I merely got it working. As Uncle Bob says, getting it working is the first step to completion; refactoring and cleaning it up is the next.

The purpose of the assignment was to implement a Gaussian elimination function in C++. The professor, an old Fortran/C++ veteran who had done a lot of scientific matrix work back in the day, wanted us to use pointer pointers for the matrix rows, to make swapping rows faster. He gave us the following specification of how the matrix would be represented in a data file:

3      // int N representing the size of the matrix A
1 1 1  // values of A[0], the first row
0 1 1  // A[1]
0 0 1  // A[N-1], the last row
1 1 1  // right-hand side b

The professor then went through the algorithm for solving such a matrix on the board. Later he showed us how to generate data files with solvable problems for testing, but we’ll skip over that for now.
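
For reference, here are the two stages of the algorithm as I understand them, in LaTeX notation. Forward elimination (with partial pivoting: before each pass, swap the diagonal row with the row below it that has the largest $|a_{rd}|$) zeroes everything below the diagonal:

\[
a_{rc} \leftarrow a_{rc} - \frac{a_{rd}}{a_{dd}}\,a_{dc},
\qquad
b_{r} \leftarrow b_{r} - \frac{a_{rd}}{a_{dd}}\,b_{d},
\qquad r > d
\]

Then back substitution recovers the solution from the last row up:

\[
x_{i} = \frac{b_{i} - \sum_{j=i+1}^{N-1} a_{ij}\,x_{j}}{a_{ii}}
\]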

The example that the professor did in class was a bit of a mess, so I went looking for something better. Rosetta Code has examples of Gaussian elimination in many different programming languages. The C version is pretty close to what we need, but even looking at the gauss_eliminate function there, we can see that it’s doing a lot and could be broken down into smaller functions.

#include <math.h>

/* Helper definitions the function depends on, reconstructed here to match
   the A() macro's usage: a flat row-major accessor and a row swap over
   both A and b. */
double *mat_elem(double *a, int y, int x, int n)
{
    return a + y * n + x;
}

void swap_row(double *a, double *b, int r1, int r2, int n)
{
    double tmp, *p1, *p2;
    int i;

    if (r1 == r2) return;
    for (i = 0; i < n; i++) {
        p1 = mat_elem(a, r1, i, n);
        p2 = mat_elem(a, r2, i, n);
        tmp = *p1, *p1 = *p2, *p2 = tmp;
    }
    tmp = b[r1], b[r1] = b[r2], b[r2] = tmp;
}

void gauss_eliminate(double *a, double *b, double *x, int n)
{
#define A(y, x) (*mat_elem(a, y, x, n))
    int j, col, row, max_row, dia;
    double max, tmp;

    for (dia = 0; dia < n; dia++) {
        max_row = dia, max = A(dia, dia);

        for (row = dia + 1; row < n; row++)
            if ((tmp = fabs(A(row, dia))) > max)
                max_row = row, max = tmp;

        swap_row(a, b, dia, max_row, n);

        for (row = dia + 1; row < n; row++) {
            tmp = A(row, dia) / A(dia, dia);
            for (col = dia + 1; col < n; col++)
                A(row, col) -= tmp * A(dia, col);
            A(row, dia) = 0;
            b[row] -= tmp * b[dia];
        }
    }
    for (row = n - 1; row >= 0; row--) {
        tmp = b[row];
        for (j = n - 1; j > row; j--)
            tmp -= x[j] * A(row, j);
        x[row] = tmp / A(row, row);
    }
#undef A
}
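
To make that concrete, here’s the direction I’d take it: the same flat-array algorithm, but with the pivot search, the elimination pass, and back substitution pulled out into named functions. This is my own sketch of intent (untested against the course harness), not the Rosetta authors’ code:

#include <cmath>
#include <cstddef>
#include <utility>

// Same flat row-major layout: a is n*n doubles, b and x are length n.
static double& at(double* a, std::size_t row, std::size_t col, std::size_t n) {
    return a[row * n + col];
}

// Partial pivoting: find the row at or below dia with the largest |a[row][dia]|.
static std::size_t findPivotRow(double* a, std::size_t dia, std::size_t n) {
    std::size_t maxRow = dia;
    double maxVal = std::fabs(at(a, dia, dia, n));
    for (std::size_t row = dia + 1; row < n; ++row) {
        double v = std::fabs(at(a, row, dia, n));
        if (v > maxVal) { maxVal = v; maxRow = row; }
    }
    return maxRow;
}

static void swapRows(double* a, double* b, std::size_t r1, std::size_t r2, std::size_t n) {
    if (r1 == r2) return;
    for (std::size_t col = 0; col < n; ++col)
        std::swap(at(a, r1, col, n), at(a, r2, col, n));
    std::swap(b[r1], b[r2]);
}

// Zero out column dia in every row below the diagonal.
static void eliminateBelow(double* a, double* b, std::size_t dia, std::size_t n) {
    for (std::size_t row = dia + 1; row < n; ++row) {
        double factor = at(a, row, dia, n) / at(a, dia, dia, n);
        for (std::size_t col = dia + 1; col < n; ++col)
            at(a, row, col, n) -= factor * at(a, dia, col, n);
        at(a, row, dia, n) = 0.0;
        b[row] -= factor * b[dia];
    }
}

// Solve the now upper-triangular system from the last row up.
static void backSubstitute(double* a, double* b, double* x, std::size_t n) {
    for (std::size_t i = n; i-- > 0; ) {
        double sum = b[i];
        for (std::size_t j = i + 1; j < n; ++j)
            sum -= x[j] * at(a, i, j, n);
        x[i] = sum / at(a, i, i, n);
    }
}

void gaussEliminate(double* a, double* b, double* x, std::size_t n) {
    for (std::size_t dia = 0; dia < n; ++dia) {
        swapRows(a, b, dia, findPivotRow(a, dia, n), n);
        eliminateBelow(a, b, dia, n);
    }
    backSubstitute(a, b, x, n);
}

Each piece is now small enough to get its own test, which matters for what comes next.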

My experience with C++ has been limited, mostly schoolwork with CodeBlocks and Eclipse; I prefer using JetBrains tools these days. And I’d never written tests in it, so after I set up a new repo, the first thing I did was spend some time figuring out Google Test before I wrote my first line of code. I started with making sure I could load files, then started writing output helpers, overloading the ostream operator and creating a print() function.
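
The output helpers came out shaped roughly like this; a trimmed-down sketch, not the real class:

#include <iostream>

// Sketch of the output helpers. The real class has more going on (and proper
// copy control, omitted here for brevity).
class Matrix {
public:
    explicit Matrix(int n) : n(n), data(new double*[n]) {
        for (int i = 0; i < n; ++i)
            data[i] = new double[n]();
    }
    ~Matrix() {
        for (int i = 0; i < n; ++i)
            delete[] data[i];
        delete[] data;
    }
    void print(std::ostream& os) const {
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j)
                os << data[i][j] << ' ';
            os << '\n';
        }
    }
    // operator<< just delegates to print(), so tests can stream a Matrix.
    friend std::ostream& operator<<(std::ostream& os, const Matrix& m) {
        m.print(os);
        return os;
    }
private:
    int n;
    double** data;
};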

Let me say: Test Driven Development is HARD. It requires a lot of thought up front about what it is that you are trying to do. I started off with a todo list:

- call GaussianElimination function
- read file from file system
- get size from file
- create matrix(size)
- load vector data from file
- create 2d vector array size N
- initialize matrix with values 

and started working through each of them, going through the red light/green light cycle: writing a test that would fail, then implementing the code that would make that test pass — and NOTHING MORE. Like any discipline, it’s hard. But the effect is amazing. Having a magic button that lets you change code and get [ PASSED ] back afterward is exhilarating.
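
To give a flavor of those first cycles, here’s roughly what the “get size from file” step looked like as a test. readMatrixSize is a stand-in name, not my real API, and the test links against gtest_main:

#include <sstream>
#include "gtest/gtest.h"

// Stand-in for the real loader: the first token of the data file is N.
int readMatrixSize(std::istream& in) {
    int n = 0;
    in >> n;
    return n;
}

TEST(MatrixLoader, ReadsSizeFromFirstLine) {
    // Same format as the assignment's data file: N, the rows of A, then b.
    std::istringstream file("3\n1 1 1\n0 1 1\n0 0 1\n1 1 1\n");
    EXPECT_EQ(3, readMatrixSize(file));
}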

I’ll admit that I cheated a bit as the deadline approached. I hadn’t implemented proper comparison operators for the Matrix class, so I was verifying everything by eyeball until the code worked and I could submit for credit. Even so, the process was a far cry from the way I usually operate, banging out piles of manually checked code.
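
For the record, the comparison I should have written up front is only a few lines. Something like this, as a free function over the raw representation, with a tolerance because exact double equality is fragile after rounds of elimination arithmetic:

#include <cmath>
#include <cstddef>

// Element-wise equality with a tolerance; a member operator== on the real
// class would delegate to something like this.
bool approxEqual(double** a, double** b, std::size_t n, double eps = 1e-9) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            if (std::fabs(a[i][j] - b[i][j]) > eps)
                return false;
    return true;
}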

I’ll share more in a future post.

Learning to fly

I’ve been on a bit of a kick on Robert C. Martin’s work lately. Martin, AKA “Uncle Bob”, is the author of several books on coding, including a couple of classics in the software development field. I’ve watched several of his lectures on YouTube recently, and I have been reading through Clean Code the last couple of days. It’s really making me realize how much garbage I’ve been writing lately, and I’m seized with an immense urge to go back and completely refactor everything that I’ve been working on the past few weeks.

Of course, having a robust suite of tests is absolutely necessary for any kind of refactoring, which is not something I’ve been terribly disciplined about recently. I’m proud to say that I am taking a strict TDD approach to my latest class assignment in C++, although it has slowed me down a great deal. The hardest part is determining how to write tests. Sure, I could go and write a massive 200-line function that takes input and performs the Gaussian elimination on it, but since this is part of a larger library that we’ll use for our final exam, I want to make the code more modular. For example, see the difference between this big 75-line single main function and this one. The latter could still be broken into smaller functions by Uncle Bob’s standards, but it’s a step in the right direction.

There were two reasons that I went back to school to finish my degree. The first was that I thought I needed a BS after my name in order to get my resume past the gatekeeping algorithms at some firms. I’ve since come to the realization that I have no desire to go to work at any large enterprise or other organization where this would be a factor — six figures be damned. The second was that I felt like I was running into roadblocks with my own development projects. They were basically huge, convoluted, procedural things. Even when I tried to adopt OOP principles, they were still a mess. I felt like I needed to go back to school and work through the curriculum to get where I needed to be.

I don’t think it’s quite worked out the way I wanted it to. Now, don’t get me wrong, I think earning a degree in ‘Computer Science’ has been valuable, but it’s not quite what I expected. One of the intro Unix classes really broke my mental block when it comes to working with Linux, and that’s a skill I have definitely appreciated. But I think the focus on Java and C++ is behind the times.

I recently had a conversation with one of my professors about my surprise that there hadn’t been any focus on software design patterns. (I’m still working my way through the Gang of Four.) He told me that there was a bit of disagreement within the department between those who wanted to focus on theory and those who wanted more actual engineering and development. So far, the balance of power lies with the theoretical side, which is why the focus is on the maths: big-O notation, data structures, and deterministic finite automata.

Even so, I’m still surprised that I feel like I’ve gotten more out of a couple of 30-year-old videos on Lisp than out of the classes I’m going $20K+ into debt for. All I wanted was to write better code, so that I can make programs do what I want them to do. The ideas I’ve had were beyond my grasp to complete, and I was looking for ways to increase my knowledge. I’m probably being unfair to the university, since some of the more business-end document writing (requirements, software specification documents, use cases, &c.) has already helped me in some of my professional interactions.

At the end of the day, it’s about sitting down with an IDE and writing those magic lines of code that make the computer do what I want.

Programmer Discipline

So my productivity has been shot to hell the last two days while I try to familiarize myself with and set up not one but two new programming environments: JavaScript for the CCXT/Safe.Trade library, and a C++ module I just got assigned for one of my classes.

I have a somewhat convoluted setup. I like to work from one of two machines: my desktop is for gaming and personal or school projects, and my laptop has a Windows VM that I use for my day job. I also have an Ubuntu server running a file share and other services. It’s got Docker running over SSH, but I was pounding my head today trying to figure out how to get IntelliJ to talk to it, so I could use the integrated run tools instead of the copy/paste garbage I’ve been dealing with as I try to catch up on 20 years of JavaScript changes and Node.

For one of my final classes I’ve got to implement Gaussian elimination in C++ as part of a larger library that will be part of my final grade. I said goodbye to CodeBlocks and Eclipse a while back, but I haven’t started a project in C++ in years; the only time I’ve looked at it at all has been for the Pennykoin updates. I’ve never spent the time to understand CMakeLists and linking, so I just spent a painful hour trying to get Google Test integrated with this new project. Because of course I’m going to write a test before I put down anything more complicated than ‘hello world’.

Of course I am.

I’ve spent the last week going over a series of videos on Clean Code by Uncle Bob, Robert C. Martin. It’s a good series that I really enjoyed. Martin is really good up on stage — and funny — and I was disappointed when I finished the last one and realized that there weren’t any more. There’s much more for sale on his CleanCoder site that I might dive into, but I want to read his Clean Code and Clean Architecture books first.

Highly recommended if you have several hours to spare.

I came to realize that the tests I wrote for the GBTC Estimator were too tightly coupled to the module code, and that the module code was coupled to the input source (IEX, via the Pandas DataReader class). So I’ve been trying to decouple things so that it works with a dataframe from my broker’s API. I’m taking some hints from a mocking talk I saw that made me realize I need to break out dependencies even more.

Safe.Trade integration with CCXT

Cryptocurrency exchange automation library

I’m still operating a two-year-old crypto mining rig here at the house. For the past couple of months I’ve had it mining Arrow, a Zcash clone that has all of the non-private transactions turned off. I’ve accumulated quite a bit of it, and found out this past week that it was listed on the Safe.Trade exchange. Me being me, I immediately went looking for the API docs to see what was available.

I have yet to sell any of the accumulated tokens since I turned the rig on, but I feel like I have enough of a stockpile that it’s time to start selling some of it for Bitcoin. So what I would like to do is write a program that will interface with my Arrow mining wallet, see how much has been deposited that day, and transfer that amount over to the exchange. From there, it would place a market order and transfer the proceeds to my Bitcoin hardware wallet.

Usually I would just open up PyCharm and start building an API wrapper for the exchange, but I’ve been using the excellent CCXT cryptoexchange library and wanted to try my hand at adding an exchange to it. The library is very well designed: exchanges are added via a single JavaScript file that maps authentication and API calls to CCXT’s unified specification. It seems simple enough, but I haven’t done JS development in fifteen years.

I managed to download the CCXT Docker image and run the tests, but figuring out how to do test driven development in Node is going to be a bit more than I had originally bargained for. I’m going to have to spend a few days figuring out how to set things up and get in the flow.

Of course, yesterday was also the first day of school, so it’s going to be interesting figuring out how to fit all this in. I’m also still doing work on the Value Average and GBTC Estimators, so I’ll have to balance all of that as well. Still, having a commit in the CCXT library would be a badge of honor, so I’m going to give it a shot.

We will keep you posted.