Choice overload

I don't seem to have issues finding flow. I guess I'm lucky like that. I don't have any problems delving into a problem for hours on end and really disappearing into it. I seem to have a different kind of problem: focus. I find myself with too many options and I'm not sure what to do next. I guess it borders on feeling overwhelmed, but I don't really feel that stress. The only thing that stresses me is the feeling of the immediate, the reacting. As someone who works in support, I call it firefighting: those situations where something is broken, a server is down, or some other critical application or service pulls me out of flow and forces me to go from what I want to do to what I have to do. Sometimes these situations are self-inflicted, but other times they're just disasters, like a virus on a client server that brings them to a standstill and requires my full, undivided attention until the issue is resolved. These are the only moments in my life when I'm truly stressed.

The other problem I struggle with is deciding what to do next. It's probably a prioritization issue, and I'm either ignoring my list of to-dos or otherwise procrastinating. I pulled out my copy of David Allen's Getting Things Done and skimmed through the first third. I read it years ago and tried to implement some of it as a best practice, but I've struggled to really find something that works. I started using index cards for my wife and me to keep track of things. We've tried Trello in the past, but she didn't really stick with it. I personally found Nirvana the most pleasant, but I quickly ran up against the limits of the free plan. Of course, when I do pay for something, I quickly forget about it, and once I'm reminded about the bill and cancel the service, I start using it again.

The golden age of apps that we currently live in presents more and more choice paralysis for me. My various clients have their own stacks, but it seems like every new startup has its own too, and I get bogged down in figuring out what works, what's new, how to manage it, and most importantly, how to get buy-in from the team. I could think of dozens of examples; the choices are everywhere. As someone whose professional calling is being the 'trusted advisor', I get lost in a sea of possibilities. It's hardest when you're planning for a new organization, or one that is still running on a lot of manual, paper-based processes.

In the past, it's been trivial to take a few simple steps to get an organization moving along. A few examples: creating mailing lists where the organizer was using personal email; setting up Google Apps for email and file sharing; registering a domain and setting up a basic website. But the longer I go on with this, the harder it's proving to nail my preferences down to a few apps. And depending on the organization, it's proving almost impossible to get buy-in from everyone. I have one volunteer group I'm part of that still insists on massive group chats. I recommend Signal, and things go nowhere. Trello or Asana for task management, and everything falls off after a few weeks. The more involved I get with an org, the more convoluted things get. Having different apps for time tracking, billing, CRM, issue logs, and so on just gets cumbersome. Having these apps talk to each other is nice, but trying to design a solution over and over again is sapping the energy out of me.

Which is how I found myself playing around with Basecamp last night. I need a system that I can use for my freelance work, one that will allow me to add clients as needed and collaborate with them and their clients. I'm not saying that it's the answer, but it does seem to do a lot of what I think is needed within a team. It doesn't do it all, and I'm not sure it's the best at all of them, but I'll settle for simplicity right now. What I need is fewer options, so that I can get back to work and stay in flow. I need something with which I can track tasks, assignments, time entries, billing, documentation, and source code if need be. There are just too many solutions out there, and at some level it seems like some sort of entrepreneurial masturbation to be messing around with all of this crap.

Doesn’t mean they’re not after you

I occasionally have to procure and set up new laptops and workstations for new employees at my various clients. As happened today. I drove into the office this morning to wait for FedEx to deliver a laptop that was ordered on Friday, and I didn't even have to unpack my gear: the delivery was waiting for me when I got to the reception desk. I grabbed it and immediately drove back home, without spending even five minutes at the office. Got a good hour of my podcasts in on the round trip, without a single slowdown. Not bad for a Monday.

I got home and went to unpack the laptop, and noticed that the box had been opened. Usually Dell has a small round plastic sticker that goes over the cardboard tab that locks the lid to the box, but it and the cardboard around it had been ripped clean off. Weird, I thought. I removed the laptop and opened it up, and there was a small piece of orange plastic tape crumpled and stuck to the case. Double weird. I went to turn the machine on. Dead. Usually they ship with enough charge to run for a few minutes. Triple weird.

Now I'm not usually paranoid, but this just rubbed me the wrong way this morning. Nothing looked damaged, so I plugged in the charger and told my boss about it. We had ordered this thing for an employee starting tomorrow, so if we needed to go through the trouble of returning it to our distributor, we needed to act fast. I called the distributor, and they said the box wouldn't have gone out opened (they don't accept opened returns), so it probably happened during shipping. The sales rep offered to send out a replacement and take this one back as a return, but I was torn about our timeline and said I would think about it and make a decision soon.

I was concerned about this mainly from an operational security perspective. What if the machine had been compromised? What if someone had opened it, installed malware, then sealed the box back up? What was the risk? What was the likelihood? I told my boss I was going to call the client and see what they wanted to do. As soon as I got the client on the phone and started to explain, I realized that this was a mistake. I told them that I didn't think there were any problems, but that I wanted full transparency. I probably rambled on for about five minutes before we agreed to let it go.

Since the machine didn’t show any obvious signs of tampering besides the box, I wasn’t worried about physical damage or anything like that. I was worried about the device being compromised. A rootkit, or other piece of malware that would lead to a security breach. This laptop had shipped with a smart-card reader, so it was obviously going to be used in a secure environment. What were the chances that the device had been compromised in the warehouse by an employee? What if my client was being targeted?

I went ahead and set the machine up, and made sure our antivirus deployed. Not that I have any illusions about any vendor solution out there catching a properly customized virus payload from a sufficiently determined adversary, if you will. One of our neighbors at my office is a team of ex-Special Forces personnel that specializes in security assessment and tactical training. Physical intrusion, surveillance, all that kind of government/military engagement stuff. I hear enough background chatter from them in the office next door to know that they have access to some crazy shit. And of course Twitter has been all about the DEFCON conference last week. My feed has been full of USB cables that contain all kinds of hidden components that will steal and exfiltrate your data. I've had enough proximity to this kind of spook-level stuff in my career to know that it's out there.

Eventually, what gave me enough peace of mind to forget about it was that I had seen the seal ripped off the box. If it was government or corporate espionage, they sure were sloppy. Must have just been something else, a more plausible, if unexplained, occurrence.

Nothing to worry about. Right?

IDEX staking

During the California gold rush of the 1800s, the ones who could be counted on to make the most money were the shops that sold the picks, shovels, and other mining equipment to the speculators. The same can be argued about the recent cycle in the cryptocurrency space. When Bitcoin took a dive off the all-time highs in December 2017, it seemed that the only ones making money were the graphics card and ASIC manufacturers. During the crypto winter we have just left, one of the biggest success stories was that of Binance, which went on to become one of the largest exchanges in the space. In less than a year, CZ and his team created the premier trading market, and the success of Binance Coin (BNB) was one of the few rays of hope in the space, increasing 10x while the rest of the market was tanking.

This reality of the market was not lost on traditional finance players, as infrastructure kept being built even throughout the winter. Coinbase and Gemini continued to expand their offerings and improve their platforms, ready to make millions in fees from both retail and institutional investors.

As an alternative to centralized exchanges, decentralized exchanges (DEXes) have held out the promise of preserving the decentralization, anonymity, and censorship resistance that the crypto-economy brought. Many of the more libertarian-minded have bemoaned the big money moving into the space, as centralization and regulation have given crypto the flavor of traditional finance.

So when it began to look like infrastructure plays were the best way to hold value during crypto winter, I began looking at ways to use my knowledge to set up staking and masternodes. Our first experiment was with XDNA, which uses a tiered staking system. We set up an AWS instance, fumbled around with the configuration, and eventually had a staking masternode running, which was earning us a nice passive return. Unfortunately, we made a mistake with the sizing of our instance, which ran up our expenses. And even worse, we had secured our stake before the bottom fell out of the market, meaning it lost 90% of its value. So while we are still holding that stake, along with the additional 25% in masternode earnings, we have shut the node down and will hang on to the coins in case XDNA ever does a 100x back to our entry price.

Our latest play, which we're currently evaluating, is IDEX. It's the staking token for the DEX of the same name, which is currently the most popular of its kind. The IDEX staking roadmap has an ambitious plan to eventually migrate all of the exchange hardware to this decentralized model, and the current tier 3 staking model for trade history is just the first step in that plan. We purchased our stake many months ago, before the staking software was released, and between then and now the market has taken quite a hit due to a number of factors. On top of the broader decline in altcoins, IDEX implemented know-your-customer (KYC) requirements, which caused a revolt among the community. We also made similar mistakes with hardware selection, and ran into some performance issues that we've since rectified. Perhaps the biggest issue, and the one that will likely doom our participation in this project, is that the current minimum stake, at current price and volume, is insufficient to generate a profit. Even though our hardware costs are down to nine dollars a month, the staking proceeds for the minimum 10,000 IDEX (about $200-300) are not enough to cover this cost. Based on our calculations, we would need to increase our stake by 3-4 times just to break even. Things are complicated by the fact that IDEX is only tradable against ETH.
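
Here is a rough sketch of that break-even math in Python. The monthly-yield figure is an assumption backed out of our own numbers (roughly $2-3 a month on the minimum stake); actual proceeds move around with trade volume and the IDEX price.

```python
# Rough break-even estimate for an IDEX staking node.
# All inputs are assumptions based on our own rough numbers, not quoted figures.

hardware_cost = 9.00        # USD per month for the cloud instance
min_stake = 10_000          # minimum IDEX required to stake
stake_value = 250.00        # USD, midpoint of the $200-300 range
monthly_yield = 2.50        # USD earned per month on the minimum stake (assumed)

# Assuming proceeds scale roughly linearly with stake size, the multiple of
# the minimum stake needed to cover hardware costs is:
break_even_multiple = hardware_cost / monthly_yield
extra_idex_needed = (break_even_multiple - 1) * min_stake
extra_cost_usd = extra_idex_needed * (stake_value / min_stake)

print(f"Need {break_even_multiple:.1f}x the minimum stake to break even")
print(f"That's another {extra_idex_needed:,.0f} IDEX (~${extra_cost_usd:,.0f}) at current prices")
```

And that is just break-even on hardware, before accounting for any further depreciation in the stake itself.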

So for the moment, we'll leave our node running and assess whether we want to add the increased exposure to IDEX. We remain bullish on DEXes in general, but operating a staking node at this juncture, given the risk of further price depreciation in ETH and the alt markets, makes this a tough play from a risk management perspective.

Pushing further into the e-commerce world

I took the plunge into the Shopify space today. I was introduced to someone who wants to start a line of clothing and wanted to use the platform to sell her wares. I don't have much experience with online marketplaces, other than eBay and Craigslist sporadically over the years. I had put together some stuff on CafePress years ago, mainly as a way to get a shirt made for myself, but that went dormant a long time ago. Most recently I set up a prototype storefront using WooCommerce and Printful as an experiment for a crypto project, but I haven't done anything with it since setting it up.

The technical side of all these platforms seems relatively straightforward; actually, it's pretty simple. The exercise instead becomes one of branding, marketing, and logistics. The just-in-time distributor models from Printful and other integrators make the barrier to entry so low that literally anyone can do it. It just takes the time and the will to do it.

I've been in sales my entire life, but I hate selling, and have a mental block against the soul-sucking standard fare that goes along with promotion these days. It's like the old Bill Hicks routine: "Anyone here work in marketing? Yes? Go kill yourself. Seriously." I suppose, though, that to be successful these days we've all become brands and have to manage our online identities. In the past we may have called them 'personas', but now it's all about branding. I really surprised myself on the phone with this young woman today, telling her, quite literally, to imagine her brand as a person, whether it was herself or an idealized version of herself: twenty-something, college-educated, married, earning x dollars a year, and driving a Mercedes. Find her voice and make sure every product fits that story. I was trying to describe what the splash page on the storefront should do and told her to find some pictures that tell the story of her brand. Twenty-year-old me would have thrown up in his mouth if he'd had to say it with a straight face.

Going back to Shopify for another moment: it only took a few minutes to set up a partner account and link it to the existing storefront. Another minute or two and I had my own development site up and running to play around with. Anyone who is familiar with WordPress shouldn't have any problem navigating Shopify's presentation and theme options. The Liquid templating language was actually familiar to me from my work with NationBuilder; I had no idea it was originally developed by Shopify.

Frustrations

I'm a bit perturbed right now. I went back to a Django project I hadn't worked on in two weeks and could not get my PyCharm interpreter working properly. I'd upgraded from the Community Edition to the Professional Edition during that time, which I'm not sure had anything to do with it, but this failed session brings me to another source of frustration that I need to get off my chest.

There are three, maybe four, ways that one might need to interact with a Django app in PyCharm. The first is the Python console itself. The second, the regular command terminal. The third would be the various run configurations that one can set up. And the fourth would be the Django console that PyCharm Pro enables. My issue is that each of these has its own environment variable settings! Maybe it's just my inexperience showing through here, but I tend to use several of these when I'm working. I have a run configuration for the test server, the Django console for migrations and tests, and a terminal window that's actually running the Django shell so that I can muck around with code while I'm figuring things out.

I don’t know if I’m an idiot or what, but it just seems extremely ineffective, and I have got to be missing something.
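
For what it's worth, the workaround I keep seeing recommended is to stop relying on PyCharm's per-configuration variables altogether and load everything from a single .env file at import time, so the run configurations, both consoles, and the terminal shell all end up with the same values. A minimal sketch using python-dotenv; the file location and variable names are just placeholders:

```python
# settings.py (or the top of manage.py) -- every entry point imports settings,
# so they all pick up the same environment variables from one .env file.
import os
from pathlib import Path

from dotenv import load_dotenv  # pip install python-dotenv

BASE_DIR = Path(__file__).resolve().parent.parent
load_dotenv(BASE_DIR / ".env")  # silently does nothing if the file is missing

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]               # placeholder names
DEBUG = os.environ.get("DJANGO_DEBUG", "False") == "True"
```

I haven't committed to this yet, but it would at least take the four separate settings panels out of the equation.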

Working alone

Last weekend I finally got around to reading Two Scoops of Django, and it was very interesting. I wish I had picked it up earlier. I think I first started really delving into the Django framework about three months ago, and I've really enjoyed tinkering around with the models and the ORM. I've done a bit with forms and views, but I've spent a lot more time trying to draft data models for various projects and get a feel for how things work. I've fallen into my usual trap of getting too caught up in the tools to actually deliver anything yet, but I've got two projects that I'm primarily working on, and I've been very disciplined about spending at least an hour each day on one of them.

Part of me thinks I should just focus on one to the exclusion of the other, just to plow through and finish. "Starting is easy, finishing is hard," as Jason Calacanis says. The other voice in my head is telling me that as long as I'm pushing forward on one or the other, it doesn't matter, since the skills I'm learning on each will translate to the other. The last few days have felt like my wheels are spinning, though, as I've spent more time sharpening my axe than actually cutting down trees. I spent what feels like two whole days just trying to figure out how to set up cookiecutter-django the way I wanted it, another day or two figuring out why pipenv doesn't work properly in PyCharm, and then another trying to get Celery to work. Yesterday it was all about how to properly clone a third-party Django app so that I can make some modifications to it. And I've spent hours trying to figure out how to do my tests, and what needs testing and what doesn't. Endless hours on Medium reading everything I could find related to any of the above.

But as long as I can sit down and work on something, I tell myself I'm making progress and becoming an actual developer. I've talked about discipline previously, and that discipline is paying off at my day job as well, whether it's PowerShell scripts or more Python API wrappers. The hardest thing about it for me is the solitary nature of what I'm doing. Not having a team or a partner on these projects is the hardest part, because it ultimately means that I have no one to bounce ideas off of in real time. The best I can hope for is to dump something out on StackExchange and hope that someone gets back to me. Most of the time, just explaining the question clearly enough for someone else to understand it spurs the kind of subconscious creativity that leads to a solution.

There’s been many false starts already, but I’m starting to get there.

Currently, with a fintech app I'm working on, I'm trying to determine how to extend a cryptocurrency wallet app designed for Bitcoin and other assets that use its RPC interface. The asset I'm working with is a fork of a privacy coin with the un-shielded send functionality disabled. So I've got to figure out the simplest way to update all the calls in this library so that they'll use the shielded commands for this asset while retaining the existing commands for the legacy assets. So far, I've decided to try adding a boolean field to the currency model and an if clause to the Celery tasks to choose between the two based on that flag. It requires modifying code in each of the various functions, and while it's simple, it seems to violate one of the core principles of Django: don't repeat yourself (DRY). It seems to me that there should be a way to add a decorator or something to each of these functions (maybe a strategy pattern) to handle that bit of logic in a way that would be easier to implement, maybe even without having to fork the third-party app in the first place.
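
To make the decorator idea concrete, here is roughly the shape I have in mind. Everything in this sketch is hypothetical: the uses_shielded_rpc field, the get_rpc_client() helper, and the method names are placeholders, and the shielded calls take different arguments in places that this glosses over.

```python
# Hypothetical sketch, not code from the actual app: declare the transparent
# and shielded RPC method names once per task, and let a boolean flag on the
# Currency model decide which one gets used.
import functools

def shield_aware(transparent, shielded):
    """Inject the right RPC method name based on the currency's flag."""
    def decorator(task_func):
        @functools.wraps(task_func)
        def wrapper(currency, *args, **kwargs):
            # uses_shielded_rpc would be the new BooleanField on the model
            kwargs["rpc_method"] = shielded if currency.uses_shielded_rpc else transparent
            return task_func(currency, *args, **kwargs)
        return wrapper
    return decorator

@shield_aware(transparent="get_balance", shielded="z_get_balance")
def refresh_balance(currency, rpc_method=None):
    client = currency.get_rpc_client()       # assumed helper on the model
    currency.balance = getattr(client, rpc_method)()
    currency.save(update_fields=["balance"])
```

It wouldn't eliminate the branching entirely, but it would keep it out of each task body, and it might mean I don't have to touch the third-party library at all.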

We shall see.

API obsession

I have been obsessed with APIs lately. Obsessed. Part of this stems from the interest in coding, of course, but part of it has come from a new focus on automating a lot of manual processes out of existence. I think I first really started messing around with them via crypto, of course, through the need to maintain price tracking sheets for my spec mining projects. I wanted to be able to keep track of the number of coins we were mining and the current price of those assets, and use that to calculate earnings and so forth. When I started tracking, I would manually get the prices from the exchange, paste them into a Google Drive doc, then copy my totals from one tab into a running monthly sheet. It quickly became tiresome, and when I found an add-on that someone had created to do lookups via CoinMarketCap (CMC), I became very interested in figuring out how it was done.

Eventually, I got interested in projects that weren't available via this CMC interface and had to start rolling my own. I was able to write Google Apps Script functions that could call the APIs of various exchanges and mining pools to give me exchange totals, prices, and mining payouts. I've added them to a hodge-podge collection of scripts that I maintain in a sheet, so I can keep track of the entire venture. I use them to plan trades and track positions afterward. Of course Google Sheets has its limitations, and most of my work is in Python, but the basic premise is the same: wrap an API request in a function wrapper, do something interesting with the result.
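
The pattern itself fits in a dozen lines of Python. Here is a minimal version of what those sheet scripts do; I'm using CoinGecko's free price endpoint here only because it needs no API key, but the CMC and exchange versions look the same:

```python
# Minimal example of the pattern: wrap an API request in a function,
# then do something interesting with the result.
import requests

def get_price(coin_id: str, vs_currency: str = "usd") -> float:
    """Return the current spot price for a coin from CoinGecko."""
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": coin_id, "vs_currencies": vs_currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[coin_id][vs_currency]

# The "do something interesting" part: value a small mining position.
holdings = {"bitcoin": 0.05, "ethereum": 1.2}
total = sum(get_price(coin) * amount for coin, amount in holdings.items())
print(f"Portfolio value: ${total:,.2f}")
```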

A lot of this also comes from my interest in automation. I've read the stories about people who have automated their jobs using Python, for example, and one of the fun things about APIs is that not only can you get information out of them, you can also send requests to them and make them do things for you. To stick with fintech for a bit longer, trade execution platforms are a perfect example of this. Being able to send orders to a trading platform through an API is what has enabled high-frequency trading and bots to take over the markets. But my main interest is a bit closer to home, or work, to be more precise.
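
As a toy illustration of the "make them do things" side (not something from my own scripts), a library like ccxt wraps most exchanges' order endpoints behind one interface, so placing an order is a single call:

```python
# Toy example only: the keys, symbol, size, and price are placeholders.
import ccxt

exchange = ccxt.binance({"apiKey": "YOUR_KEY", "secret": "YOUR_SECRET"})

# One call replaces the whole click-through order form on the exchange website.
order = exchange.create_limit_buy_order("ETH/USDT", 0.25, 150.00)
print(order["id"], order["status"])
```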

At my day job, we use several different systems to run our operation. The crux of it is a professional services automation (PSA) ticketing system and a remote monitoring and management (RMM) system. The two vendors that we use are integrated pretty well; there are several major players in the space, and most of them plug together without much trouble. The main issue is that the PSA requires a lot of manual setup and steps to do basic things like setting up new clients, configuring contracts, and maintaining inventory, all of which require multiple steps through its rather clunky UI. It's a pain. Even something as simple as closing a ticket requires 4-5 mouse clicks.

Using the PSA's API, I've begun to draft a collection of functions that will allow me to close a ticket with a simple close_ticket(ticketID) call. I've developed more complicated functions that will create contracts, add products to those contracts, and link assets from the RMM to those contracts. Right now I'm focused on standardizing operations across our clients, but there's a further opportunity to standardize operations between all of our franchise partners.
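
Since I'm not naming the PSA here, the base URL, endpoint path, and auth scheme below are stand-ins rather than the vendor's real API; the point is just the shape of the wrapper: one function call instead of 4-5 clicks.

```python
# Sketch of a PSA API wrapper with placeholder endpoints and auth.
import requests

PSA_BASE_URL = "https://psa.example.com/api/v1"   # placeholder
API_TOKEN = "..."                                  # loaded from the environment in practice

def _patch(path, payload):
    resp = requests.patch(
        f"{PSA_BASE_URL}{path}",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def close_ticket(ticket_id, note="Resolved"):
    """Close a ticket in one call instead of clicking through the UI."""
    return _patch(f"/tickets/{ticket_id}", {"status": "Closed", "resolution": note})
```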

But perhaps the most critical opportunity that I’m focusing on within my day job is eliminating failures caused by human error.

Hotel Alexa

I'm staying in a hotel tonight. It's not something I usually do, but there have been a couple of family events that have brought me and the crew out of town. I woke up this morning before dawn and found myself staring at a bright green light a few inches away from my face. It's probably just the thermostat or something, but it got me thinking: how long before Hilton and the other big hotel chains start putting voice assistants in rooms?

I probably wouldn’t be thinking this if the front desk had picked up the phone any of the dozen times I tried to call them last night, but it seems inevitable that there’s some company out there working on a bot for one of these chains: “Alexa, I need more towels.” “Alexa, how do I connect to wifi?” “Alexa, what channel is Disney channel?” “Alexa, have room service bring me dinner.”

Seems inevitable. I'm sure it will be a while before people are willing to accept these things in their rooms. Some, like myself, aren't comfortable having them in the house, let alone in rooms where they're sleeping or doing other more intimate activities. I assume that the hotels could hand them out at the front desk as an option: "Would you like a virtual assistant with your room, sir?" And then I'm sure it'd only be a matter of time before they became standard deployments, the way wifi repeaters seem to be everywhere.

Update: It seems I’m a bit late to the party. Amazon released their Alexa for Hospitality in June of 2018:

Guests will be able to do things like order room service, request a housekeeping visit, or adjust room controls (thermostat, blinds, lights, etc.) using an Echo in their room. They can also ask location-specific questions such as what time the hotel pool closes or where the fitness center is.

Some upscale Vegas hotel apparently pioneered Alexas in its rooms back in December 2016, and in October of last year, Marriott announced plans to run a trial in Charlotte hotels. One thing that we didn't anticipate when writing this was the response of hotel staff, who saw these devices as a threat to job security as far back as September. It was apparently among the concerns raised when Marriott employees went on strike last fall.

xkcd, "Listening": https://imgs.xkcd.com/comics/listening.png

Free JetBrains software for academic developers

So I'm pretty happy, because today I found out that JetBrains is offering free licenses for their entire software library to students and faculty members. I've been using PyCharm Community Edition for some time now, and I'm really glad to have access to the Professional version with all the plugins and Django support. I actually purchased a CLion license a year or so ago. They make really good software, and I encourage everyone to check it out.

Windows VM on Ubuntu

I've been slowly converting to Ubuntu over the years. Neal Stephenson's In the Beginning... Was the Command Line made Linux seem like all the rage when I read it years ago, but I had always been a slave to the GUI. Things started to change a bit when Microsoft started pushing PowerShell. My manager at the time said that it would "separate the men from the boys," and I've been making a push to build out a library of PS scripts to use during Windows Server deployments and migrations.

I've been exposed to *nix plenty over the years. My first job after high school was at an ISP, and I remember watching in awe as the sysops guy would bash his way through things to disconnect hung modems or do this or that. I forget exactly when I started actually using it, but I remember setting up LAMP stacks back in the day to run PHP apps like WordPress or MediaWiki when I was working at the Fortune 500 firm. Cryptoassets led me further down that path: compiling wallets from source, deploying mining pools on AWS instances. Computer science courses opened me up to the world of sed and regex. I still haven't gotten into emacs or vim, though. I'm not a masochist.

As someone who's been supporting Windows operating systems for pretty much the past 20 years, one of the realities you live with is having to reinstall the operating system. I did so many reinstalls during the time I operated my service center that it became as natural as turning it off and back on again. Luckily I've managed to keep a few boxes up and running for several years now (knock on wood), but my primary work laptop hasn't been so lucky. It's five-plus years old now and has probably been redone three times. The last time, I went ahead and took the plunge and installed Ubuntu. I still run Windows in a VM since my job relies so much on it, but I've become so comfortable with Ubuntu that it's becoming my preferred OS.

One issue that I've been struggling with on this setup is that from time to time my system will halt. I might be in the VM working on something, or browsing Chrome on the host, and it will just lock up. Sometimes it seems to happen when I open a resource-heavy tab. I don't know if it's a resource contention issue between host and guest, but it's been annoying, though not bad enough that I can't just reboot and keep going.

Today has been a different story. 

Earlier I noticed that the system was starting to become unstable. Fans were whirring and Chrome was starting to hang intermittently, so I went ahead and restarted the guest OS. Only this time it wouldn't come back up. It was stuck in automated system repair. I downloaded a boot disk and tried to mount the system. It wouldn't even get that far. Finally I said 'screw it', unmounted the disk, and started creating a new one. That's when I started getting into raw vs. vpc vs. qcow2, IDE vs. virtio, poring over CPU and RAM allocations. I spent hours trying to get the disk to come back up. I think it had something to do with the format I used when I originally set up the disk. It might have been a swap issue or something, but since I'm running it off virtio now, it seems more stable. Time will tell.

As for the original VHD, I eventually copied the data file off the local file system onto an external drive, and was able to fire it up attached to another Win10 machine with no problem. I deleted the original on my laptop, copied the copy back, and was able to get it to spin back up. I think it may have something to do with the fixed allocation of the vpc file vs. the dynamic sizing of the qcow2 format.

Today has been a reminder to check backups on all of my systems. Thankfully CrashPlan has Linux support now, so I'm going to get that deployed ASAP.