Washing C++ code

I spent most of the day hunched over my laptop, checking out git branches and trying to rebase commits to clean up the project I’m working on, but I haven’t had much success. I did, however, make some good progress with automated documentation and code review tools, as well as some Docker stuff.

I found an interesting presentation by a dev named Uilian Ries titled Creating C++ applications with Gitlab CI, which is exactly what I’m hoping to do. He mentions tools such as Cppcheck, Clang-Tidy, and Doxygen. I remember something about automated documentation generators from one of my CS classes a couple semesters ago, but let’s just say I really should have paid more attention.

Code dependencies for a wallet address creation test class in the Cryptonote codebase. Generated by the Doxygen automated documentation tool.
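Just to get a feel for these tools before trying to wire them into CI, the command-line invocations are simple enough. This is only a rough sketch; the src/ path here is a placeholder for the project’s actual source directory.

# Static analysis: report warnings and style issues, and fail on any finding.
cppcheck --enable=warning,style --error-exitcode=1 src/

# Lint a single file with clang-tidy (the project's compile flags go after the --).
clang-tidy src/main.cpp -- -Isrc -std=c++11

# Generate a default Doxygen config, then build the documentation from it.
doxygen -g Doxyfile
doxygen Doxyfile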

Uilian goes into a lot more during his presentation than I got into today, but I did start working on automating the build process using Docker. One of the problems I’ve run into with the original Cryptonote forks is that they were built for Ubuntu 16, using an older version of the Boost library. I haven’t quite figured out how to get the builds to work on Ubuntu 18, and keeping an older distro running somewhere isn’t really an effective use of time. I already had Docker set up on a home server, so I was able to spin up a container, clone my repo, install the build prereqs, and go to town.

docker run --name pk_redux -it ubuntu:16.04
root@5a1d66905643:/#
apt-get update 
apt-get install git
git clone https://gitlab.com/pk_redux/pkcli.git
cd pkcli
apt install screen make cmake build-essential libboost-all-dev pkg-config libssl-dev libzmq3-dev libunbound-dev libsodium-dev libminiupnpc-dev libreadline6-dev libldns-dev
make
./build/release/src/pkdaemon

My next step here is to save these commands into a Dockerfile or docker-compose file that I can start building off of, adding the code checks and documentation generators as needed. Once I’ve verified the syntax and worked out any bugs, I should be able to start adding things to the GitLab CI YAML files as well. This should help keep the project well-maintained and clean.
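As a rough first pass, I’m picturing a Dockerfile that is just a direct translation of the commands above (untested so far, and the package list will probably need adjusting once I actually run it):

# Direct translation of the interactive commands above into a Dockerfile.
FROM ubuntu:16.04

# Build prerequisites for the Cryptonote fork.
RUN apt-get update && apt-get install -y \
    git screen make cmake build-essential libboost-all-dev pkg-config \
    libssl-dev libzmq3-dev libunbound-dev libsodium-dev libminiupnpc-dev \
    libreadline6-dev libldns-dev

# Clone and build the project.
RUN git clone https://gitlab.com/pk_redux/pkcli.git /pkcli
WORKDIR /pkcli
RUN make

# Run the daemon by default.
CMD ["./build/release/src/pkdaemon"]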

I’ve been familiar with Docker for some time, but it’s been a while since I last messed around with it. It’s really exciting to be able to document everything and spin up containers without polluting my base system.

Becoming a Git-xpert

I have been trying to get a grip on the Pennykoin CLI code base for some time. One of the problems I’ve had is that the original developer had a lot of false starts and stops, and there are a lot of orphan branches like this:

Taken with GitKraken

If that wasn’t bad enough, at some point they decided to push the current code to a new repo, and lost the entire starting commit history. Whether this was intentional or not, I can’t say. It’s made it very tricky for me to backtrack through the history of the code and figure out where bugs were introduced. So problem number one that I’m dealing with is how to link these two repos together so that I have a complete history to search through.

Merging two repos

So we had two repos, which we’ll call pk_old and pk_new. I originally tried to merge the repos together using branches, but I either wound up with the old repo as the last commit, or with the new repo and none of the old history. I spent a lot of time going over my bash history file, playing with my local directories as remote sources, deleting and starting over. Then I found out that there was indeed a common commit between the two repos, and that all I had to do was add the old remote with the --tags option to pull in everything.

mkdir pk_redux
cd pk_redux
git init
git remote add -f pk_new https://github.com/Pennykoin/Pennykoin-old.git --tags
git merge pk_new/master
git remote add -f pk_old https://github.com/Pennykoin/Pennykoin-old.git --tags

Now, I probably could have gotten away with just cloning the pk_new repo instead of initializing an empty directory and adding the remote, but the end result should be the same. A quick check of the tags between the two original repos and my new one showed that everything was there.

The link between the two repos
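For the record, the comparison itself is just a handful of commands, something along these lines (remote names as above, and assuming both default branches are master):

# List the tags on each original remote and in the merged repo.
git ls-remote --tags pk_old
git ls-remote --tags pk_new
git tag -l

# Confirm that the two histories really do share a common ancestor commit.
git merge-base pk_old/master pk_new/master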

Phantom branches

One of the things that we have to do as part of pk_redux, as we’re calling it, is set up new repos that we actually have control over. This time around, everything will be set up properly as part of governance, so that I’m not the only one with keys to the kingdom in case I go missing. I want to take advantage of GitLab’s integrated CI/CD, as we’ve talked about before, so I set up a new group and a pkcli repo. I pushed the code base up and saw all the tags, but none of the branches were there.

The issue ultimately comes down to the fact that git branches are just pointers to a specific commit in a repository’s history. Git will pull the commits down from a remote as part of a fetch, but only as remote-tracking references; it doesn’t create local branches unless I explicitly check them out. Only after I created local tracking branches in my repo could I push them to the new origin.
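As a sketch, something like the following loop should take care of it, assuming origin now points at the new GitLab repo (branches that already exist locally will just produce a harmless error):

# Create a local tracking branch for every remote branch that was fetched,
# then push all branches and tags to the new origin.
for branch in $(git branch -r | grep -v HEAD); do
    git branch --track "${branch#*/}" "$branch"
done
git push origin --all
git push origin --tags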

Fixing Pennykoin

So now that I’ve got a handle on this repo, my next step is to hunt some bugs. I’ll probably have to do some more work to try to de-orphan some of the early commits in the repo history, because that will be instrumental in tracking down changes to the Cryptonote parameters. Those changes are likely the cause of the existing bootstrap issue. My other priority is figuring out whether we can unlock the bugged coins. From there I’d like to implement a test suite and make sure that there are proper branching workflows for code changes.

Continuous integration/deployment for static web sites

I spent most of my day yesterday setting up Jekyll on my local workstation. The group project that I’m working on at school requires us to publish a web page, but the course assignment only allows static pages, so we can’t run WordPress or another CMS. The pages have to be easily transferable to a new location for posterity once the term ends, it’s questionable whether PHP is even running on the web server, and getting a MySQL database is a special request.

Anyways, there’s only one other person on our team besides me who knows anything about HTML. One of my teammates threw a site together on Silex.me, a visual editor that lets you download static files to drop into the web server’s directory. Editing them isn’t much fun, though: it requires downloading the files from the web server, uploading them to Silex via Dropbox, and then uploading the files back to the web server. Not efficient, and there’s no history.

Now our CS department does have a local instance of GitLab set up, which opens up the possibility of adding some continuous integration/continuous deployment automation to the process. (No Pages, though. Alas.) After much debugging, we were able to create a job that copies the HTML files from our repo’s public directory to the web server’s secure_html location, where the web page is served from:

image: alpine:latest

before_script:
  - apk update && apk add sshpass openssh-client rsync bash

secure_html:
  stage: deploy
  script:
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$SSH_PRIVATE_KEY")'
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - rsync -auvz public/* user@hostname:/home/user/secure_html/
  artifacts:
    paths:
      - public
  only:
    - master

There are a couple of variables that we had to specify in our CI settings. SSH_PRIVATE_KEY is of course the key that we created and uploaded to the server to connect without a password. There should be no passphrase on the key itself, as we couldn’t find a way to provide it within the script, and it’s probably redundant anyways. The thing that caused us a lot of issues was figuring out that we needed SSH_HOST_KEY to prevent a host key verification error when running the rsync command.
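If anyone is trying to reproduce this, the host key value can be grabbed with ssh-keyscan and pasted into the CI variable; the hostname here is a placeholder:

# Print the server's public host key; the output goes into the SSH_HOST_KEY variable.
ssh-keyscan -t rsa hostname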

This way, whenever someone modifies the HTML files in the repo, the pipeline deploys those files to the web server without any intervention. It also gives us change history, which is crucial if someone messes up. Another benefit is that we have a record of commits from the various team members, so we can tell who is contributing.

Now, since HTML still requires a bit of finesse, we’ve been converting our site over to Jekyll, which should let us use Liquid templates and Markdown for our content. I was able to get a local development environment up and running and generate a basic template for our site, so the next step is adding a CI job that builds the site and then pushes the static files over to our web server. We’ll cover that later.
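The build job will probably look something like this rough sketch. The Ruby image and the Gemfile setup are assumptions on my part, the apk before_script above would need to move into the deploy job so it doesn’t run on the Ruby image, and the generated public directory would then be rsynced by the deploy job as before.

build_site:
  image: ruby:2.6          # assumption: any Ruby image with bundler available will do
  stage: build
  script:
    - bundle install
    - bundle exec jekyll build -d public   # write the generated site into public/
  artifacts:
    paths:
      - public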

Scaling a managed IT service provider

The company that I work at is coming up on seven years old this winter. We’re a small managed service provider with about four employees and 25 or so clients. We provide IT support and project implementation services for small professional and service companies. We’ve been stagnant, growth-wise, for the past three years or so, and my main focus, in addition to taking care of our clients, is refining our business processes so that we can scale to the next level. What we’ve been doing has brought us success, but it’s not enough to get us to where we want to be.

We’re part of a franchise system of independent operators all over the U.S. The home office is supposed to provide us with best practices and partner relationships, and the franchisees pool their purchasing power to get the best deals from those partners. That’s how it’s supposed to work, anyways. In practice, the home office provides new franchise owners with a vendor for this, a vendor for that, and so on, and basically leaves the franchisees to figure out how to implement it all themselves. It’s completely inefficient. I can’t even begin to tell you how much time we’ve spent managing our RMM and PSA tools, or how much of my day-to-day goes into getting these various systems (some of which don’t have any API for automation) to talk to each other.

Instead of pooling human resources, say to have a team of engineers who specialize in setting up firewalls, each location pretty much has its own team. We rely on outside NOC and helpdesk partners to deal with first-line issues, and the local teams are supposed to be escalation support. But providing information to these various entities can be very difficult (ITGlue has helped tremendously!), and having a remote helpdesk is very frustrating for customers who expect some sort of continuity.

Unfortunately we’re just not able to provide that level of service for what clients are willing to pay, especially the smaller ones. MSPs use per-month contract billing, with rates for servers, workstations, and other IT resources, but that usually just covers keeping things running remotely; on-site and project work is billed separately.

Things can really add up for clients, especially when they don’t follow our recommendations and shit goes south. Most of them are trying to weigh our cost against having their own in-house IT resource, but hardware, software, and staffing costs add up quickly. This is even more true when you consider regulatory and compliance requirements. It’s really hard.

And companies that skimp on these costs always pay for it. Always. I’ve had my fair share of ransomware breaches, but one that I saw this week really took the cake. A firm we had done business with in the past, and have been under a limited engagement with, had a really bad attack that took down their entire Windows domain: three servers, including AD, Exchange, SQL, file services, and a custom database application. We stopped doing business with them three years ago because it was always a challenge to justify what needed doing over there, and things were usually such a matter of urgency that we were forced into quick fixes just to keep them running. Then we would spend weeks pulling teeth to get paid. We finally said enough is enough and walked away.

So we got a call from them a few weeks ago. Turns out they had pissed off another MSP and needed help. They had been through several in-house IT people, but they still needed the RMM monitoring, AV, and patch management that we would provide. But because they were in a dispute with the old IT company, we weren’t able to get access to their backup and data continuity appliance.

Long story short, they got hit earlier this week and didn’t have backups for half their shit. I had convinced their in-house person that they really needed to get some sort of local backup, and thankfully they followed my advice. But it was really too little, and they’ve spent the last 72 hours trying to recover. And let me tell you, it was the most stress-free disaster recovery that I’ve ever dealt with. I’ve damn near had panic attacks and probably lost years off my life from the stress of dealing with my own share of these disasters. Sometimes they were self-inflicted, other times not. But since I wasn’t the one holding the bag, I was chill as fuck.

I saw the writing on the wall for MSPs some time ago. I don’t know whether it will take ten years or longer, but the business model is headed toward a race to the bottom. Our local market is already saturated with four or five decent competitors, and many more not-so-decent ones. Internal conversations about the future of our firm have centered a lot on compliance auditing for DOD/NIST, and the question we’re struggling with now is whether we want to be an MSP that does compliance, or a compliance firm that does MSP. My gut tells me to go where others aren’t, which is why I’m focusing my time on process automation and connecting applications via APIs.

I was able to add several things to our “no” list: things we’ve done in the past that have gotten us into trouble. That means setting boundaries for the businesses we deal with, and it will likely involve cutting some clients who aren’t growing with us or don’t see the value of the service we provide. It means converting our services into product offerings to differentiate ourselves from the competition. And it means automating our processes so we’re not making the same decisions over and over again.

Goodbye GitHub

A good version control system is critical to any software development project. I haven’t been serious enough in the field to have ever messed with Subversion, but git has been part of my daily workflow for a while now. GitHub has been instrumental in the advancement of open source software, and tons of projects out there still rely on it, so I’ll continue using it as much as I need to in order to participate in those projects. But moving forward, I’m using GitLab for all my new projects, and I’ll be recommending it to others as well.

I’ll admit that I’m a cheapskate and have never shelled out the $7/month to enable private repos on GitHub. And then Microsoft bought them. I haven’t noticed any changes as a result of that buyout, so I can’t say there’s anything that troubles me. I understand the ban on embargoed countries that they had to implement, but that bothers me more as a matter of imperialism than anything I hold GitHub responsible for. Their hands are tied.

I attempted to set up git servers on some of my local and hosted machines, but nothing beats the convenience of software as a service. However, when I started pitching what would hopefully be a commercial project, I wasn’t about to put things up in a public repo. I had originally started using Bitbucket as an alternative, but recent experiences from other people who have used it have been problematic. (Issues following their buyout by Atlassian left many users unable to access their team accounts…)

My university is currently running an (older) version of GitLab internally, and I’ve been working with it extensively today as part of a new group project. One of the things we’re doing is setting up project repos for our website, and eventually for the other deliverables that we’ll be generating as part of the course. I wanted to avoid using Google Drive, so I set something up to push our repo to the computer science department’s web servers. Unfortunately, they’re running a newer version of Kubernetes that is preventing the continuous integration runner from working, so we’re pretty much stuck for now. But it’s got me looking at options for static site generation, which is cool. The idea is to let people easily edit the repo pages using Markdown, and then have Jekyll or some similar package push the generated HTML to the project site. Today has mostly been about setting up scheduling tools and a Discord instance.

But hopping back to the public GitLab site, I’ve been pretty impressed with features like GitLab Pages and the rest of what’s built into the service. So for now, GitLab will be my go-to for all new code repos.

Software tools in volunteer organizations

America is nearing a generational shift, as more of the Boomer generation reaches retirement age and more Gen Xers and Millennials take on leadership roles in organizations. I’ve built a bit of a personal brand as someone who straddles the digital and analog worlds, and I’ve made a living helping organizations move between the two. As more of my focus shifts away from established businesses and toward startups, non-profits, campaigns, and community organizations, I’ve been spending a lot more time thinking about the tools and apps available in these spaces, how to select them, and how to get traction with them.

It’s not an easy problem to solve, and this post will likely have more questions than answers. All I can do is present a few use cases and share some thoughts.

The first political campaign that I became involved with was the 2016 Sanders campaign. I got involved at the state grassroots level; we didn’t have any official support from the campaign at all. In fact, the first direct involvement we had was during the petition-gathering process, when a campaign representative came in and met with all of the organizers who could make it to the state capitol, and we went over the process and requirements. Up until that point, all of us were operating across several dozen Facebook groups. There was one at the state level, plus regional and city groups. Communications required crossposting events and news to about a dozen of these groups at a time.

I think a few of us recommended email distribution lists. I found a service and volunteered to set up the various lists to hand over to individual admins; another person put up the $35 to get it started. We also rolled out a Slack instance for the state, and used that extensively throughout the campaign.

Now, one of the problems that I’ve seen over and over again with rolling out new tech is that there are always a certain number of individuals who are tech-averse. More accurately, they’re risk-averse. They may be more comfortable with email, text messages, or Facebook than with Slack, Signal, or Twitter. That’s just the nature of the beast. There also seems to be a sweet spot for the size of the organization and the number of people who participate in these apps. Generally speaking, if you can’t get more than two-thirds of a group’s membership to use a tool, then it will most likely not be useful as a group tool. This is more true the smaller the group is.

Now, there are exceptions for tools that are useful from an individual standpoint. Things like Google Drive or Trello are helpful even if you’re just using them for yourself. Other things may be impossible to roll out organically. I once recommended that our local Democratic party executive board start using Signal instead of group text messaging, and while most everyone agreed in spirit, the initiative never moved forward.

Ultimately, anything that is going to require people to install another app or create a new account is going to be met with pushback in any volunteer or service organization. Hell, that holds true for any organization. But for startups or existing orgs that are still living in the analog world, technology deployments can have a great effect on productivity.

Choosing from all of the options out there is another challenge. When I was in the enterprise space, the process usually involved gathering business requirements, doing an initial selection of vendors, putting together a request for proposals, compiling reports, and making recommendations to an executive board. This process is no doubt familiar to people in any large organization. In smaller orgs, the decision-making responsibility might fall to one technology ‘expert’, who has to make the recommendations, implement them, and ultimately provide support. That last task is the longest and most expensive of the three, and is where most of the frustration lies.

My daughter just joined the Girl Scouts, and I got an email several days ago saying that they were moving to Slack and Google Calendar instead of text messages and emails. I went to a leadership meeting earlier tonight and was talking with a couple of the parents while we waited for the building to open up. One of them said, “I downloaded so-and-so’s Slack app like he asked, but no one was in it.”

We’ll see if this time is any different.

Ending cash bail

One of the projects that we considered pitching for our ‘social benefit’ programming project was in the cash bail space. There are many arguments for abolishing cash bail, and there are organizations focused on posting bail for non-violent offenders. We wondered how we might increase participation in these types of programs using novel software solutions.

The arguments against the bail system are many. A 2013 study of pretrial detention in Kentucky showed a “direct link between how long low- and moderate-risk defendants are in pretrial detention and the chances that they will commit new crimes.” The hypothesis behind this is that “jail destabilizes lives that are often, and almost by definition, already unstable,” disrupting employment, housing, and family support.

Then there are stories like that of Kalief Browder, a 17-year-old who was accused of stealing a backpack. After refusing to plead guilty, he was given a thousand dollars bail and sent to Rikers Island when his family was unable to pay. He was held there for three years without trial, and was beaten and kept in solitary confinement for two of those years. Two years after his release, he hanged himself.

The current system punishes the poor. Unable to post bail, and faced with the possibility of weeks or even months of pretrial detention before their case is heard, many people choose to plead guilty just to get out. These perverse incentives can lead to disastrous consequences later in life for people who are legally innocent. And then there’s America’s $2 billion bail bond industry, which makes its money off the poor. (The U.S. and the Philippines are the only two countries in the world that allow commercial bail bonds.)

Thankfully, the tide is turning.

The Bail Project is a national revolving bail fund, launched as an expansion of the non-profit Bronx Freedom Fund. The goal of the fund is to pay bail for thousands of low-income Americans. Since bail is refunded when a person shows up for court, the money gets recycled and is made available to more individuals. According to their website, they’ve paid bail for over 6,300 people.

The efforts of the Bail Project and like-minded others seem to be having an effect. California abolished cash bail last summer. Google and Facebook have banned bail bond service ads. And nine of the 2020 Democratic presidential candidates support ending cash bail, including Biden, Sanders, and Warren.

So we thought about ways to help expedite this movement using technology. Perhaps more people would be interested in donating to a bail fund if there was more transparency. Could a blockchain system be used to allow people to see the individuals whose lives they were helping? Aside from the privacy concerns, of course. Some people might object to having their information made public in that way, but arrest and court records are already public… There are a host of ethical concerns that such a system might bring up.

There is research showing that people are more likely to donate to a cause if the ask is as specific as possible, and that the likelihood decreases as the size of the benefiting group increases. Simply put, people are more likely to donate to help feed a single child if they are shown that child’s name and picture, but when told about an entire nation experiencing famine, they will do nothing. I can’t find the source currently, but I have heard it from Sam Harris. If true, we may be able to increase participation in a bail fund if people are shown exactly who their money is going to help.

I can imagine some of the pushback already. There are racial and class dynamics that are bound to be raised against it, and it could turn into some sort of reality-show gamification if not handled delicately. There could also be negative consequences if someone is deemed “not worthy”: a Willie Horton moment, if you will.

So would there be a use case for a donation tracking system, even if the individual data were anonymized? For example, if you donate $5 to a bail fund, that money might go to help one person, but once the funds are recycled, half of the money could go to two different people. If people could be reminded that their donation had helped three unique individuals, would that cause them to contribute more? The effect could be even greater if released individuals were encouraged to pay back into the fund, either as an alternative to a standard bail bond fee or as a way of paying it forward. Even a small donation could have a non-linear effect.

We also mused about ways to automate the operation of such a fund using a smart contract, but ultimately, the on-ramps and off-ramps needed to operate such a system across a number of jails seemed like too much of a risk.

In all, most of what I came up with seems like a solution in search of a problem. I’m trying to improve on something that I have no experience with, and that could ultimately be completely unwanted by those affected. As interesting a thought experiment as this may be, I decided to pass on this project for now and see if something with a more readily apparent need would present itself.

I did not have to wait long before one would show up on my doorstep. More on that in a week.

Solving society’s problems

So part of my senior year at university involves a professional workforce development class. The class spans two semesters, with small teams developing and prototyping a project that aims to provide a solution to a “societal problem”. I posted my initial response to the assignment a week ago, and my professor’s response was that he didn’t really see how it met the class requirements, which are to provide a non-trivial solution that could actually be implemented by six or seven undergrads in a school year. No world-peace solutions, basically.

I’ve spent some time since then trying to narrow the field to something that could actually be implemented, and I have some interesting notes that may or may not be relevant from a computer science perspective, but are worth sharing regardless.

Decentralized autonomous organizations

Smart contracts have really changed things in a way that I don’t think will be readily apparent to most people for another five to ten years. The ability to programmatically define business logic and have it deployed to a permissionless, decentralized network is going to transform how organizations operate. I can’t find the quote, but I heard this idea about corporations as proto-artificial intelligence. They function according to their own rules, have inputs and outputs which follow from those rules, and ultimately take on a life of their own. The issues that we’ve had with corporations over the years, especially those of governance, growth mindset, and accountability, are ultimately ones that the AI field needs to deal with as intelligent agents become more capable.

DAOstack seems to be the most advanced toolkit available for bringing a DAO into the light of day. They have GitHub repos for working at the various levels needed to deploy custom smart contracts to Ethereum, use their existing framework, or interact with these contracts via a web front-end. These ‘web 3.0’ applications are called dapps, or decentralized apps. Most of the use cases they currently list involve voting rights or reputation management systems, usually as a mechanism for submitting, voting on, and funding proposals through the DAO. Effectively, DAOs function as a type of digital constitution for an organization, and there could be use cases for the management of political or even sovereign organizations.

I’ve been thinking of ways to incorporate existing organizations into a DAO, although I don’t think the tech is comprehensive enough yet to replace Robert’s Rules of Order, or robust enough to be deployed within the executive board of a traditional political organization. (Change management is hard enough as it is… I digress.)

Innovation and social change

I also spent a good deal of time brainstorming more generic ‘social good’ issues, and discovered The Workers Lab, which has two initiatives that I thought were worth noting. The first is what they call the Design Sprint for Social Change. Basically, the problem is that most workers don’t have the financial stability to deal with a $400 emergency. Their solution: provide up to a thousand dollars in direct assistance, no questions asked. They went straight into a design sprint:

Source: The Workers Lab website

The results of their pre-pilot were interesting (spoiler: getting money to people is hard!), and I look forward to seeing the results of the full-scale pilot that is apparently ongoing, but what really caught my eye was their Innovation Fund.

The fund awards $150,000 to projects that build power for workers, and has invested more than $2 million in over 20 projects. The application deadline for this fall was a few days ago, but we will be watching to see who joins the past winners as this year’s finalists.

Now, these aren’t software solutions that they’re funding, but rather organizational, purpose-driven initiatives. I kept digging, and that’s when I came across societal benefit corporations, which I will cover next.

Team Human

I don’t know exactly when I became aware of Douglas Rushkoff’s excellent podcast Team Human, but I’ve been hooked on it since I discovered it earlier this year. The book of the same name has been on my to-read list for months; I finally purchased a copy, and I am not disappointed. This is a very important book, and highly recommended.

Rushkoff is a ‘media theorist’ and has been covering technology since the ’80s. He was part of the cyberpunk Mondo 2000 scene back in the day, and has spent much of his time since critiquing the commercialization of the internet by business forces since the dot-com boom.

The podcast itself usually starts with one of Rushkoff’s monologues, often taken from sections of the book, followed by an interview with someone who is ‘playing for Team Human’. These are usually technologists and authors like Cory Doctorow and Clive Thompson, or climate activists such as Naomi Klein, David Wallace-Wells, and members of Extinction Rebellion.

You can get a really good sense of the book from the first 30 minutes of this episode, which comes from a speech Rushkoff gave at a recent event hosted by the startup accelerator Betaworks.

https://teamhuman.fm/episodes/ep-135-mary-gray/

This is where Rushkoff excels: bursting the bubble of venture capital and startup culture, which are more often interested in whether they can do something than whether they should. His main premise is that technology, once driven by the promise of connecting and empowering people and communities, is eventually corrupted by capitalism’s growth-driven profit model and turned against humans, ultimately exploiting and alienating us. Having run out of territory and other nations to extract value from, the system has turned us into the targets; we are now the fuel for these digital technologies.

Team Human covers a lot of ground in a short two hundred pages, and it makes a lot of simplifications that some people may find cherry-picked, but Rushkoff’s version of history, from the invention of finance, markets, and religion to the more recent advents of social media, machine learning, and big data, is very interesting, and it was as mind-opening for me as Zinn’s A People’s History was when I first read it years ago. There are also more than twenty pages of footnotes for those who want to dig deeper into particular subjects.

The book is short enough to be read through in a few hours, and Rushkoff repeats certain turns of phrase and statistics often enough in his interviews that I’ve started to assimilate them into my own thinking. (He coined the term ‘viral media’ and seems to be an expert on memetic propagation, so I’m sure this is no accident.)

The call to action in this book is to ‘find the others’, which is to say that to survive the challenges we face in the current age, we need to purposefully foster human connections with those around us, in our local communities. Rushkoff believes there is no substitute for the full-bandwidth experience of face-to-face human interaction, and that only by meeting with those we disagree with can we ‘recognise the humanity’ in people we may be ideologically opposed to, and come to some sort of agreement.

There is a lot to unpack in this short book. It is very broad, with room for exploration in each of its dozen chapters. It’s a mind-altering work, and one that is much needed in today’s divided public sphere. Rushkoff has intentionally refused to take the helm of any new organization under the Team Human banner, and instead encourages others to find the organizations that are already doing the work.

I’ve taken that advice, and I encourage others to do the same. This book is ultimately a mind-virus for the future of humanity: not a revolution, but a renaissance of pro-human values, a turn away from the extractive corporate tech firms that have transformed the world over the last few decades. It’s a cycle that has played out through written language, the printing press, radio, television, and the internet, and Rushkoff’s mission is to make sure that the inventors of the next world-altering technology have human values in mind when they create it.

The Most Human Human, by Brian Christian

I heard Brian Christian on a podcast in my feed earlier this month, talking about computer science, decision making, and other subjects. His more recent book, 2016’s Algorithms to Live By, wasn’t available at my local library, but this 2011 book about the Loebner Prize was, so I picked it up instead. I’ll admit it took me a while to get into it. I had other things on my reading list I was trying to work through at the same time, but once I got through those and was able to spend some more time with it, I did enjoy it.

The Most Human Human is a chronicle of Christian’s experience as a confederate in the Loebner Prize, a Turing Test competition where chatbot programs and human confederates compete against each other, each trying to convince a panel of judges that they are the most human. Christian tells the story of preparing for the competition, figuring out what he can do during each five-minute round of chatting to convince the judges that he is not a computer. Along the way, he covers the history of computer science, from Charles Babbage and Ada Lovelace, to the first chatbots like ELIZA, on to more advanced concepts like compression algorithms.

Christian has a dual degree in computer science and philosophy, as well as an MFA in poetry, and he puts all of this to use in the book. There’s a lot of discussion about art, music, and poetry, as one would expect, and lots of quotes to break up the various sections. He spends a good deal of time on chess, mainly as it relates to computer science and the Kasparov vs. Deep Blue matches. The book is informative without being jargony. I was fairly familiar with most of what he covers, but I was pleased that the book went more in depth on general computing concepts than just the details of the Loebner Prize competition, which was probably one of the less interesting parts of the book.

The book is only eight years old, but I fear it hasn’t aged well, due to the amount of discussion around the pick-up artist (PUA) scene, notably Neil Strauss, Mystery, and the techniques they employed, like negging and neuro-linguistic programming. I was involved in the PUA scene around the mid-aughts, and it’s notable how aggressively toxic that subculture became. It’s quite a distraction in this age of #MeToo and incels and the like. It’s also apparent that Christian really liked the Dave Matthews Band when he was writing this book, but who am I to judge.

One of the things I really liked is what Christian terms the anti-parliamentary debate, modeled as the antithesis of the Lincoln-Douglas debates that are typical of primary and secondary school debate clubs. Instead of an adversarial process, opposing sides have to come together to work on joint legislation, which they then present to the judges independently, each side explaining why the legislation supports their position. The collaborators are scored jointly, with individual scores based on joint participation across several rounds. It’s a shame the format hasn’t taken off.

For someone interested in artificial intelligence and machine learning, this book is a good read. It’s right in the sweet spot for general audiences and for people like myself who have more of a technical background: a mix of art and science, if you will. One of the takeaways that will stay with me is the ever-moving target that we humans present in these questions around synthetic minds. The goalposts keep moving. Each time a computer beats us at a particular task, that task is no longer seen as a creative endeavor. First checkers was solved, then we were beaten by computers at chess, and now Go and other games. But each of these defeats forces us to ask: what is it that separates us from the machines, what makes us uniquely human? Ultimately, Christian’s book is about his mission to discover those things, and that is what the reader is left thinking about afterward.