Learning to fly

I’ve been on a bit of a kick on Robert C. Martin’s work lately. Martin, AKA “Uncle Bob,” is the author of several books on coding, including a couple of classics in the software development field. I’ve watched several of his lectures on YouTube recently, and have been reading through Clean Code over the last couple of days. It’s really making me realize how garbage the things I’ve been writing lately are, and I’m seized by an immense urge to go back and completely refactor everything that I’ve been working on the past few weeks.

Of course, having a robust integration test suite is absolutely necessary for any kind of refactoring, which is not something I’ve been terribly disciplined about recently. I’m proud to say that I am taking a strict TDD approach to my latest class assignment in C++, although it has slowed me down a great deal. The hardest part is determining how to write tests. Sure, I could go and write a massive 200-line function that would take input and perform Gaussian elimination on it, but since this is part of a larger test suite that we’ll use for our final exams, I want to make the code more modular. For example, see the difference between this big 75-line single main function and this one. The latter could still be broken out into smaller functions, according to Uncle Bob, but it’s a step in the right direction.
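To illustrate the kind of modularity I’m after (in Python rather than the C++ of the assignment, and just a rough sketch, not the actual class code), each step of the elimination can live in its own small, testable function:

```python
def pivot_row(matrix, col):
    """Index of the row with the largest absolute value in `col`,
    searching at or below the diagonal (partial pivoting)."""
    return max(range(col, len(matrix)), key=lambda r: abs(matrix[r][col]))

def eliminate_below(matrix, col):
    """Zero out every entry below the diagonal in `col` by subtracting
    a multiple of the pivot row."""
    for r in range(col + 1, len(matrix)):
        factor = matrix[r][col] / matrix[col][col]
        matrix[r] = [a - factor * b for a, b in zip(matrix[r], matrix[col])]

def forward_eliminate(matrix):
    """Reduce an augmented matrix to row-echelon form, one column at a time."""
    for col in range(len(matrix)):
        p = pivot_row(matrix, col)
        matrix[col], matrix[p] = matrix[p], matrix[col]
        eliminate_below(matrix, col)
    return matrix

def back_substitute(matrix):
    """Solve the upper-triangular augmented system produced above."""
    n = len(matrix)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(matrix[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (matrix[i][n] - s) / matrix[i][i]
    return x
```

Each piece can be unit-tested on its own, which is exactly what the one-giant-main version makes impossible.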

There were two reasons that I went back to school to finish my degree. The first was that I thought I needed a BS after my name in order to get my resume past some of the gatekeeping algorithms at some firms. I’ve since come to the realization that I have no desire to go to work at any large enterprise or other organization where this would be a factor (six figures be damned). The second was that I felt like I was running into roadblocks with my own development projects. They were basically huge convoluted procedural things. Even when I tried to adopt OOP principles, they were still a mess. I felt like I needed to go back to school and work through the curriculum to get where I needed to be.

I don’t think it’s quite worked out the way I wanted it to. Now, don’t get me wrong, I think earning a degree in ‘Computer Science’ has been valuable, but it’s not quite what I expected. I think one of the intro Unix classes really broke my block when it comes to working with Linux, and that’s a skill that I have definitely appreciated. But I think the focus on Java and C++ is behind the times.

I recently had a conversation with one of my professors about why I was surprised that there hadn’t been any focus on software design patterns. (I’m still working my way through the Gang of Four.) He told me that there was a bit of disagreement within the department between those who wanted to focus on theory and those who wanted more actual engineering and development. So far, the balance of power lies with the theoretical side, which is why the focus is on the math, Big-O notation, data structures, and finite automata.

Even so, I’m still surprised that I feel like I’ve gotten more out of a couple of 30-year-old videos on Lisp than I have out of the classes that I’m going $20K+ in debt for. All I wanted to do was write better code, so that I can make programs do what I want them to do. The ideas I’ve had were beyond my ability to complete, and I was looking for ways to increase my knowledge. I’m probably being unfair to the university, since some of the more business-end document writing (requirements, software specification documents, use cases, etc.) has already helped me in some of my professional interactions.

At the end of the day, it’s about sitting down with an IDE and writing those magic lines of code that make the computer do what I want.

Programmer Discipline

So my productivity has been shot to hell the last two days while I try to familiarize myself with and set up not one but two new programming environments. I’ve got JavaScript for the CCXT/Safe.Trade library, and I just got assigned a C++ module for one of my classes.

I have a somewhat convoluted setup. I like to work from one of two machines. My desktop is for gaming and personal or school projects, and my laptop has a Windows VM that I use for my day job. I also have an Ubuntu server that I’m running a file share and other services on. It’s got Docker running over SSH, but I was pounding my head today trying to figure out how to get IntelliJ to talk to it so I could use the integrated run tools instead of the copy/paste garbage I’ve been dealing with as I try to catch up on 20 years of JavaScript changes and Node.

For one of my final classes I’ve got to implement Gaussian elimination in C++ as part of a larger library that will be part of my final grade. I said goodbye to Code::Blocks and Eclipse a while back, but I haven’t started a project in C++ in years. The only time I’ve looked at it at all has been for the PennyKoin updates. I’ve never spent the time to understand CMake lists and linking, so I just spent a painful hour trying to get GoogleTest integrated with this new project. Because of course I’m going to write a test before I put down anything more complicated than ‘hello world’.

Of course I am.

I’ve spent the last week going over a series of videos on Clean Code by Robert “Uncle Bob” Martin. It’s a good series that I really enjoyed. Martin is really good up on stage, and funny, and I was disappointed when I finished the last one and realized that there weren’t any more. There’s much more on his CleanCoder site for sale that I might dive into, but I want to read his Clean Code and Clean Architecture books first.

Highly recommended if you have several hours to spare.

I came to realize that the tests I wrote for the GBTC Estimator were too tightly coupled to the module code, and that the module code was coupled to the input source (IEX via the pandas DataReader class). So I’ve been trying to decouple it so that it works with a dataframe from my broker’s API. I’m taking some hints from a mocking talk I saw that made me realize that I need to break out dependencies even more.
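The idea, roughly, is plain old dependency injection: the module takes a price-fetching callable rather than constructing the DataReader itself, so tests can hand it a stub. The names here are hypothetical sketches, not the actual module code:

```python
def latest_close(fetch_prices, symbol):
    """Return the most recent close for `symbol`.

    `fetch_prices` is any callable returning (date, close) tuples. In
    production it might wrap pandas-datareader or a broker API; in a
    test it can be a plain function like `fake_feed` below.
    """
    rows = fetch_prices(symbol)
    return max(rows)[1]  # (date, close) pairs sort by date first

def fake_feed(symbol):
    # Test double standing in for the IEX/broker dependency.
    return [("2019-11-01", 9.10), ("2019-11-04", 9.42)]
```

With the data source injected, the same estimator code runs against IEX, the broker’s API, or canned test data without modification.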

Safe.Trade integration with CCXT

Cryptocurrency exchange automation library

I’m still operating a two-year-old crypto mining rig here at the house. For the past couple of months I’ve had it mining Arrow, a Zcash clone that has all of the non-private transactions turned off. I’ve accumulated quite a bit of it, and found out this past week that it was listed on the Safe.Trade exchange. Me being me, I immediately went looking for the API docs to see what was available.

I have yet to sell any of the accumulated tokens since I turned the rig on, but I feel like I have enough of a stockpile that it’s time for me to start selling some of it for Bitcoin. So what I would like to do is write a program that will interface with my Arrow mining wallet, see how much has been deposited that day, and transfer said amount over to the exchange. From there, it would place a market order and transfer the proceeds to my Bitcoin hardware wallet.

Usually I would just open up PyCharm and start building an API wrapper for the exchange, but I’ve been using the excellent CCXT crypto exchange library, and wanted to try my hand at adding an exchange to it. The library is very well designed: exchanges are added via a single JavaScript file that performs authentication and maps API calls to CCXT’s unified specifications. It seems simple enough, but I haven’t done JS development in fifteen years.

I managed to download the CCXT Docker image and run the tests, but figuring out how to do test driven development in Node is going to be a bit more than I had originally bargained for. I’m going to have to spend a few days figuring out how to set things up and get in the flow.

Of course, yesterday was also the first day of school, so it’s going to be interesting figuring out how to fit all this in. I’m also still doing work with the Value Average and GBTC Estimators, so I’ll have to balance doing all that as well. Still, having a commit in the CCXT library would be like a badge of honor, so I’m going to give it a shot.

We will keep you posted.

Estimating GBTC price from BTC after-hours activity

Grayscale Bitcoin Trust (GBTC) is a publicly traded investment product listed on the OTC markets. It’s a way for US investors to take a position in Bitcoin through brokerage and retirement accounts like IRAs. A lot of OG crypto types scoff at the prospect of purchasing such an asset, since you don’t actually control the BTC or the private keys, but for some this is an attractive option, or the only one. I’ve personally been taking positions in GBTC over the past three or so years through my retirement IRA. One of the most overlooked qualities of holding GBTC in an IRA is that all transactions are tax-free. I can take profits in my IRA at any time without worrying about tax liability, which is not something I can say for my actual crypto holdings.

There are two downsides to GBTC. The first is that Grayscale takes a two percent management fee; this isn’t a big deal to me because of the expected gains in a bull run. The other is that there is a premium on GBTC over the underlying asset value. Each share of GBTC represents 0.00096884 Bitcoin, but GBTC’s price is usually 10-30% higher than the value of the underlying asset.
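The premium is simple arithmetic. A quick sketch (the function name is mine, using the BTC-per-share figure quoted above):

```python
BTC_PER_SHARE = 0.00096884  # BTC represented by one GBTC share

def premium(gbtc_price, btc_price):
    """Premium of GBTC's market price over its underlying BTC value,
    as a fraction (0.25 == 25%)."""
    nav = btc_price * BTC_PER_SHARE  # net asset value per share
    return gbtc_price / nav - 1.0
```

For example, with BTC at $10,000 a share's underlying value is about $9.69, so a $12 GBTC price is roughly a 24% premium.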

One of the main differences between the equities and crypto markets is that crypto trades 24/7. Often, when BTC has made a big price movement, I’ve wondered what the corresponding change in the price of GBTC would be (and in my portfolio!). So I have written a small Python package to calculate this, which I call GBTC Estimator.

I have it set up to get public BTC prices from Gemini (via the excellent CCXT package). Right now it’s using IEX’s daily GBTC data (and requires an IEX API key), so it only has access to daily OHLCV (open, high, low, close, volume) data. We take the close price of GBTC and divide it by the price of BTC at the same time (4 PM EST) to come up with the actual BTC per share. This ratio is then multiplied by the current BTC price to come up with the estimated GBTC value.
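The calculation itself boils down to two one-liners; here’s a minimal sketch (function names are mine, not the package’s):

```python
def implied_btc_per_share(gbtc_close, btc_close):
    """BTC per GBTC share implied by the two 4 PM EST closing prices."""
    return gbtc_close / btc_close

def estimated_gbtc(gbtc_close, btc_close, btc_now):
    """Estimate GBTC's current value from the live BTC price, assuming
    the implied ratio from the last close still holds."""
    return implied_btc_per_share(gbtc_close, btc_close) * btc_now
```

Using the implied ratio rather than the official 0.00096884 figure bakes the premium into the estimate automatically.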

This current version is run from the command line and returns the estimated price as well as the difference from the last close in dollars and percentage. I have plans to put this up as a website that updates automatically, but first I think I’m going to do some backtesting to see how accurate this is. I think there may be some arbitrage opportunities to be found here. I’ve already started refactoring and will have more updates to follow.

Fair Open Source

Last night I had the pleasure of meeting Travis Oliphant, one of the primary creators of NumPy and the founder of Anaconda. He’s currently the CEO of OpenTeams, a company attempting to change the relationship between open source software and the companies that build on top of it. I found out about the lecture because of an article I had read in Wired about technology’s free rider problem, and went to the event without knowing much at all about Mr. Oliphant. I soon found out who he was and was very grateful that I had come. I’ve spent a lot of time using NumPy, and I’ll admit I was a bit starstruck.

Travis’s lecture grew out of his experience working on NumPy. He basically gave up a tenure track position at Brigham Young University to work on it, and had to find other ways to support his family for the two years that he was working on the initial release. As has been noted elsewhere, much of the tech boom of the past 20 years has been built on top of the contributions of FOSS developers like Travis and others. He’s a big believer in profit, and thinks that the lack of financial incentives in the FOSS space has caused several problems, including developer burnout, leading to a lack of proper maintenance of these projects. Many of these projects, like NumPy, have become crucially important to the scientific and business communities.

Travis Oliphant’s PyCon 2019 Lightning Talk about Quansight

Oliphant’s goal is to make open source sustainable. Quansight is a venture fund for companies that rely on OSS; one of the companies it has funded is a public benefit corporation called FairOSS, which hopes to support OSS developers through contributions from companies that use OSS. He’s also doing something very similar with OpenTeams, hoping to follow Red Hat’s model of supporting open source by providing support contracts for various projects.

These are all very worthy goals, and I was both impressed and inspired by his talk. It’s opened up some interesting career possibilities. I recently took my first developer payment through Gitcoin, and it was a bit of a rush. Getting paid to work on open source software seems like an awesome opportunity, and I’ll be keeping an eye on this for potential post-graduate plans.

Automating value average stock investing

I spent most of the winter break working on automating a value averaging algorithm that I wrote about several months ago. Back in October we started scaling into three positions that we identified based on our work with some predictions we did using Facebook’s Prophet earlier. My goal was to develop a protocol and work out any kinks in the process manually while I worked on building out code that would eventually take over. While I’m not ready to release the modules to the public yet, I have managed to get the general order calculation and order placement up and running.

To start, I set up a Google Sheet with the details of each position: start date, number of days to run, and the total amount to invest. I used Alexander Elder’s Two Percent Rule, as usual, to come up with this number; essentially, each position would be small enough that I wouldn’t need to set up stop losses. From there, the sheet would keep track of the number of business days (as a proxy for trading days) and compute the target position size for that day. I would update a cell with the current instrument price, and the sheet would compute whether my holding was above or below the target and calculate the buy or sell quantities accordingly.
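The spreadsheet math is simple enough to sketch in a few lines (hypothetical names; this is the logic, not the actual sheet formulas):

```python
def target_value(total_capital, total_days, days_elapsed):
    """Dollar value the position *should* have after `days_elapsed`
    business days, growing linearly toward the full allocation."""
    return total_capital * min(days_elapsed, total_days) / total_days

def shares_to_trade(target, held_shares, price):
    """Shares to buy (positive) or sell (negative) to bring the
    position's current value up or down to the target."""
    return (target - held_shares * price) / price
```

So a $1,000 position running over 100 days targets $250 after day 25; if I held 20 shares at $10, the sheet would tell me to buy 5 more.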

After market open, I would update the price for each stock and put in the orders for each position. This took a few minutes each day, and became part of my morning routine over the past two months or so. Ideally, this process should have only taken five minutes out of my day, but we ran into some challenges due to the decisions we made that required us to rework things and audit our order history several times.

The first of these was based on the type of orders we placed. I decided that I didn’t want to market-buy everything, and instead put in good-until-cancelled limit orders. When there was no spread between the bid and the ask, I would just match whichever end I was on, and if there was a spread I would put my order price one penny inside it. As a result, some orders would go unfilled, which required some overly complicated spreadsheet calculations to keep track of which orders were filled, what my actual number of shares was ‘supposed’ to be, and so on. I also started using a prorated target, based on the number of days with actual filled orders. This became a problem to track. Also, some days there were large spreads, and my buy orders were way lower than anything that would get filled. There were times when the price fell for a few days and picked up some of these, but keeping track of these filled/unfilled orders was a huge pain in the butt.
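The pricing rule I was following by hand looks roughly like this (a hedged sketch, assuming a one-penny tick):

```python
def limit_price(side, bid, ask, tick=0.01):
    """Pick a limit price: match our side of the book when there's
    effectively no spread, otherwise step one tick inside it."""
    if ask - bid < tick:  # no meaningful spread
        return bid if side == "buy" else ask
    return bid + tick if side == "buy" else ask - tick
```

Encoding the rule this way also makes the failure mode obvious: on a wide spread, `bid + tick` can sit far below any price that will actually fill.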

One of the reasons it took me so long to develop a working product was the challenge of existing Python support for my brokerage. The only feasible module I could find on PyPI had basic functionality and required a lot of work. It had no order-placing capabilities, so I had to write those. I also got lost working through Ameritrade’s non-compliant schema definitions, and I almost gave up hope entirely when I found out that they were being bought out. The module still needs a lot of improvements before it can be run in a completely automated manner, but more on that later.

So far I’ve got just under a thousand lines of code (with not as many tests as I should have written) that allow me to process a list of positions: tuples with stock ticker, days to run, start date, and total capital to invest. The code calculates the ideal target, gets the current value of the position, calculates the difference and the number of shares to buy or sell, and then places the order. I’m still manually keeping an eye on things and tracking my orders in the sheet as I’ve been doing, but there’s too much of a discrepancy between the Python algorithm and my spreadsheet. I don’t anticipate wading through my transaction history to try to program around all of the mistakes and adjustments that I made during the development process, so I’ll just have to live without the prorated targets for the time being.

I think the priority for the next few commits will be improving the brokerage module. Right now it requires ChromeDriver to generate the authentication tokens; this could be done using straight-up requests sessions. There’s also no error checking; session expiration is a common problem, and I had to write a function to refresh the session without reauthenticating. So the first priority will be getting the order placement calls and token handling improvements put in, with a PR back into the main module.

From there, I’d like to clean up the Quicktype-generated objects and get them moved over to the brokerage package where they belong. I don’t know whether most people are going to want to use Python objects instead of dictionaries, but I put enough work into it that I want it out there.

Lastly, I’ll need to figure out how to separate any of the broker-specific function calls from the value averaging functions. Right now it’s too intertwined to be used for anything other than my brokerage, so I’ll see about generalizing it in such a way that it can be used with TensorTrade or other algorithmic trading platforms.

I’m not sure how much of this I can get done over the spring. Classes for my final semester at school start next Monday, and it will be May before I’m done with classes. But I will keep posting updates.

Windows Feature Installation: The Referenced Assembly Could Not Be Found. Error: 0x80073701

My day-to-day involves a good deal of sysadmin work, mostly on Windows networks for small business customers. I ran into the above error on a Dell Windows Server 2016 machine when trying to uninstall Windows components (Hyper-V in this case). This post gave me the hint I needed to figure out the root cause: some missing language packs.

Now, the original post recommends reinstalling the OS, which is a huge non-starter for me in an environment with a single file/AD server. The long fix starts with uninstalling language packs using the lpksetup tool and then manually removing references to any missing packs under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\PackageDetect registry subkey. There are lots of them, literally thousands.

Just one of the 7600 registry values that need to be filtered.

I really needed to resolve this, so I spent an hour writing a PowerShell script to run through each subkey value and remove the ones that referenced a missing language pack. In my case it was a German edition, so we’re searching for the string ‘~de-DE~’ below:

# Match any value name referencing the missing German language pack
$string = "*~de-DE~*"

# Every package-detect subkey under Component Based Servicing
$keys = Get-ChildItem -Path "hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\PackageDetect"

foreach ($key in $keys) {
    # Enumerate value names via .psobject.properties, filter on the
    # language-pack string, and remove the matching values
    (Get-ItemProperty -Path $key.PSPath).psobject.properties |
        Where-Object { $_.name -like $string } |
        Remove-ItemProperty -Path $key.PSPath
}

It’s pretty simple, but it was frustrating because of the need to go through .psobject.properties. I went through a lot of iterative testing to make sure that I targeted the proper values. Hopefully this helps someone else avoid a reinstallation. After running this script I was able to remove the Hyper-V feature with no problems. I assume this error was caused by an aborted uninstallation of one of the language packs. I’m not sure how they got on there, but I assume it was part of a Dell image load or something.

Anyways, cheers!

Overwhelmed discovery

I’ve settled into a bit of a rhythm lately, spending my days working on Python code from home, continuing my project to integrate the various vendors that I use as part of my job via their APIs. In the evenings, after the girls have gone to bed, I’ve been working on various personal and school projects, or watching computer science related videos. From Friday to Saturday evening, however, we’ve been doing a tech Shabbat, so I’ve been trying to find ways to get the girls out of the house as much as possible. More on that another time.

I finished watching the MIT Lisp series a week or two ago. It’s hard to believe that it’s a freshman-level class; it’s fair to say it’s probably more challenging than anything I’ve taken at my current university other than automata theory. It covers so much, and Lisp (or Scheme, rather) has such a simple syntax that it is really interesting to see how everything can be built up from these simple data structures and procedures.

Since I finished those, I’ve been watching a lot of PyCon videos to try and level up my understanding of some of the more advanced programming idioms that Python has. I’ve finally wrapped my head around list and dictionary comprehensions, and have finally figured out what generators are good for. I’m going to be exploring async and threading next.
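For anyone in the same spot, the distinction is roughly this: a comprehension materializes the whole result at once, while a generator yields values lazily, which is what makes it good for large or unbounded streams:

```python
# Comprehension: builds the entire list in memory up front.
squares = [n * n for n in range(10)]

# Generator: produces one value at a time, on demand.
def running_total(numbers):
    total = 0
    for n in numbers:
        total += n
        yield total  # caller pulls values as it iterates
```

Nothing in `running_total` executes until something iterates over it, so it can sit at the front of a long processing pipeline without buffering anything.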

In the last post, I was talking about the Ameritrade API and the work I’ve been doing to wrap my head around it. Well, yesterday I logged into my account and saw a notice that they were being bought out by Charles Schwab. Since I’ve been rather frustrated with the whole debugging process, I’ve decided to step away for a few days and decide whether it makes sense to continue or not. I don’t get the sense that Schwab has a very open system, and I’m not really enthusiastic about starting from scratch. Maybe I can find a broker with an open, standards-compliant API that I can roll over to?

One of the main vendors that we use at work has a SOAP API that uses XML to transmit data. I spent several days trying to figure out how all the WSDL definitions work to see if I could use some metaprogramming to dynamically inspect the various entity attributes. Creating XML queries via chained method calls seems overly complicated, so I’ve been looking at how Django builds its SQL queries using QuerySet calls like Entity.objects.get(id='foo'). It’s much simpler for the caller, but it’s such a higher level of design that I’ve become overwhelmed.
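What I’m imagining is something like the following (a purely hypothetical sketch, nothing like the actual vendor code): chained calls that just accumulate filters, with the XML only generated at the end, the way Django defers SQL generation:

```python
class Query:
    """Django-QuerySet-style interface over a hypothetical XML backend."""

    def __init__(self, entity, filters=None):
        self.entity = entity
        self.filters = filters or {}

    def filter(self, **kwargs):
        # Return a new Query rather than mutating, like Django's QuerySets.
        return Query(self.entity, {**self.filters, **kwargs})

    def to_xml(self):
        # Stand-in for real WSDL-driven serialization.
        clauses = "".join(
            f"<{k}>{v}</{k}>" for k, v in sorted(self.filters.items())
        )
        return f"<query entity='{self.entity}'>{clauses}</query>"
```

The caller writes `Query("Ticket").filter(id="foo")` and never sees the XML; that separation is exactly what makes the Django style feel so much simpler.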

In general, that’s been the feeling lately: overwhelmed. It’s a different type of feeling than I have when I’m overcome with personal obligations and scheduling items. Now, it’s more that my programming ideas are getting to the point where they are hard to intellectually manage. The solution at this point seems to be to step back and work on something else, and give my brain time to work on the problem offline. An idea might pop into my head on its own, or a programming video might give me an idea on how to tackle a problem.

I’ve been trying to stay focused by limiting the number of things I allow myself to work on, whether software projects or learning a new piano piece. But sometimes you have to step back and apply your skills to other problems, which may reveal solutions in unexpected ways.

QuickType and Ameritrade’s API.

My life goal of automating my job out of existence continues unabated. I’ve been spending a lot of time dealing with the APIs of the various vendors we work with, and I’ve spent a lot of time poring over JSON responses. Most of these are multi-level structures, which usually leads to some clunky accessor code like object['element']['element']. I much prefer the more elegant dot notation of object.element.element, but getting from JSON to objects hasn’t been something I’ve wanted to spend much time on. Sure, there are a few ways to do this using standard Python, but Quicktype is by far the best solution out there.
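For quick cases, the standard library can fake the dot notation on its own. A minimal sketch (this is not what Quicktype generates; Quicktype emits real typed classes):

```python
import json
from types import SimpleNamespace

def to_object(raw):
    """Parse JSON into nested SimpleNamespace objects so fields can be
    accessed as attributes instead of dict keys."""
    return json.loads(raw, object_hook=lambda d: SimpleNamespace(**d))
```

This gives you `resp.account.type` instead of `resp['account']['type']`, but with none of the type checking that makes the generated classes worthwhile.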

I’ve been using the web-based version for the past few days to create an object library for Ameritrade’s API. Now, first off, I’m probably going overboard and violating YAGNI (“you ain’t gonna need it”) principles by trying to include everything that the API can return, but it’s been a good excuse to learn more about JSON schemas.

JSON schema with resultant Python code on right.

One of the things I wish I’d caught earlier is that the recommended workflow in Quicktype is to start with example JSON data and convert it to a JSON schema before going from that schema to your target language. I’d been trying to go straight from JSON to Python, and there were some problems. First off, the Ameritrade schema has a lot more types than I’ll need: there are two subclasses of securities account, and five different ones for the various instrument classes. I only need a small subset of that, but thankfully Quicktype automatically combines these together. Secondly, Ameritrade’s response summaries, both the schemas and the JSON examples, aren’t grouped together in a way that can be parsed efficiently. I spent countless hours trying to combine things into a schema that is properly referenced and would compile properly.

But boy, once it did. Quicktype does a great job of generating code that can process JSON into a Python object. There are handlers for all of the various data types, and Quicktype will actually type-check everything from ints to lists, dicts to unions (for handling Nones), and will serialize classes back out to JSON as well. Sub-object parsing works very well. And even if you don’t do Python, it has an impressive number of languages that it outputs to.

One problem stemming from my decision to use Ameritrade’s response summary JSON instead of their schema is that the example code uses 0 instead of 0.0 where a float would be applicable. This led to Quicktype generating its own schema using integer instead of the JSON Schema float equivalent, number. Additionally, Ameritrade doesn’t designate any properties as required, whereas Quicktype assumes everything in your example JSON is, which has led to a lot of failed tests.
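One possible workaround (a sketch of my own, not a Quicktype feature) is to post-process the inferred schema and widen every integer type to number before feeding it back in:

```python
def widen_integers(schema):
    """Recursively rewrite "type": "integer" to "type": "number" in a
    JSON schema, compensating for example data that used 0 where 0.0
    was meant."""
    if isinstance(schema, dict):
        return {
            k: ("number" if k == "type" and v == "integer"
                else widen_integers(v))
            for k, v in schema.items()
        }
    if isinstance(schema, list):
        return [widen_integers(v) for v in schema]
    return schema
```

Since number subsumes integer in JSON Schema, the widened schema still validates the original examples.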

Next, I’ll likely figure out how to run Quicktype locally via the CLI and set up some sort of build process to keep my object code in sync with my schema definitions. There’s been a lot of copypasta going on the past few days, and having it auto-update and run tests when the schema changes seems like a good pipeline opportunity. I’ve also got to spend some more time understanding how to tie together complex schemas. Ameritrade’s documentation isn’t up to standard, so figuring out how to break them up into separate JSON objects and reference them efficiently will be crucial if I’m going to finish converting the endpoints that I need for my project.

That said, Quicktype is a phenomenal tool, and one that I am probably going to use for other projects that interface with REST APIs.

Wired: September 2019

Three Years of Misery Inside Silicon Valley’s Happiest Company, by Nitasha Tiku: The cover story of this month’s issue is a really in-depth piece about the chaos that has been plaguing the internet search company Google. One of the things I love about Wired is their long-form reporting, and this article is several thousand words, about 14 pages of text in nine parts. Tiku details the leaks from inside the company that seemingly destroyed the company’s unspoken rule of ‘what happens at Google, stays in Google’. Social justice activists tore into the company’s missteps with regard to sexual harassment by managers and execs, and its betrayal of the “don’t be evil” motto through dealings with authoritarian China and the United States military.

The article really opens up a look at the inner culture of Google: how they had a fairly open culture, with C-level executives making themselves available for questioning from staffers, and with rank-and-file employees creating subcultures within the company.

The story starts in January 2017, as employees took to the streets after the Trump administration’s travel ban was declared. The company stood behind its employees and stood up for immigrant rights. Then in June, an engineer named James Damore released a 10-page memo ‘explaining’ why there weren’t more female engineers in the industry. Damore was reacting to efforts to promote female engineers within the company, and claimed that this was a bad idea since there were biological reasons why there aren’t more women in STEM fields.

The backlash to Damore’s memo, and his eventual dismissal, started a culture war with the company’s conservatives. Apparently, this minority within Google had been existing within their own corner of the company, but following this, some of them became emboldened and began to step up their opposition and trollishness. They doxxed several of the liberal organizers, thereby breaking the company’s sacred rule of non-disclosure.

This episode is just one of several that Tiku details. By the end of the piece, it’s clear that Google’s culture has been transformed, and that while their employees may still be sticking to their ‘don’t be evil’ motto, the executives of the company, driven by shareholder capitalist growth demands, have lost their way.

FAN-tastic Planet: When I was a teenager growing up in the mid-’90s, Wired was the coolest magazine on the planet. I felt that it offered up both a vision of the future and secret knowledge about where things were headed. Wired was essential fuel for the ideas that eventually led to my career in computers and programming. Now, having learned more about the nascent cyberpunk culture that Wired killed off in favor of the dot-com boom and bust, I wonder more about what could have been. I bring this up because I was almost shocked to read the intro to this special culture section in this issue.

“A person sitting at a computer – it was a mystical sight. Once,” it opens, before going further into something straight out of a Rushkoff monologue: “The users, we humans, were the almighty creator-gods of the dawning digital age, and the computers did our bidding. We were in charge. In today’s world, subject and object have switched places… Computers run the show now, and we are mere data subjects.” It’s almost like they took Rushkoff’s point about ‘figure and ground’ verbatim. Forgive me, dear reader, if his point isn’t original. But given his disdain for Wired’s entire raison d’être during the ’90s and aughts, I find it entirely ironic that they have this section: fan-fic-writing nerds; Netflix’s turn toward Choose-Your-Own-Adventure-style programming; social media influencers ‘connecting’ with fans over special, subscriber-only feeds; and the rise of a new generation of crossword-puzzle writers who are bringing new, diverse voices to another field traditionally dominated by white men. (This last one actually includes a crossword, and led to several days of puzzling on my part.)

Free Riders, by Zeynep Tufekci: The front pages of this issue have the standard Wired fare: gadgets and the latest tech. This time it’s smart writing tools and augmented-reality gizmos, a bit about the current NASA Mars rover being built, another extolling the joys of world-building video games, another bemoaning Facebook’s creepy dating feature. I wanted to end with a mention of Tufekci’s piece about the prevalence of commercial companies built on top of the free, open source tools that have been released to the internet. Not that this is a problem per se, but there have been many instances of packages that have been instrumental to the success of these companies, and to the industry in general, that have had security issues or depended on the unpaid efforts of some very overworked contributors. Tufekci details two pertinent examples: the Heartbleed bug, which affected the OpenSSL library used to protect almost all web traffic, and core-js, a JavaScript library widely used across the web.

In the latter case, the developer had been working on the library almost every day for five years without pay, and had solicited less than a hundred dollars in donations. While some might blame him for not taking advantage of his project’s popularity and leveraging it into a high-paying gig somewhere, the issue highlights a problem with the web’s altruistic origins, which have long since been abused by corporations. At least in the case of OpenSSL they were able to guilt some firms into providing more funding, but we’ve got a long way to go to figure out how to reward the open source programmers who have provided the tools the web is built on.