Windows Feature Installation The Referenced Assembly Could Not Be Found. Error: 0x80073701

My day-to-day involves a good deal of sysadmin work, mostly Windows networks for small-business customers. I ran into the above error on a Dell Server 2016 machine when trying to uninstall Windows components (Hyper-V in this case). This post gave me the hint I needed to figure out the root cause: some missing language packs.

Now, the original post recommends reinstalling the OS, which is a huge non-starter for me in an environment with a single file/AD server. The long fix starts with uninstalling language packs using the lpksetup tool and then manually removing references to any missing packs under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\PackageDetect registry subkeys. There are lots of them, literally thousands.

Just one of the 7600 registry values that need to be filtered.

I really needed to resolve this, so I spent an hour writing a PowerShell script to run through each subkey value and remove the ones that referenced a missing language pack. In my case it was a German edition, so we're searching for the string '~de-DE~' below:

# Match any value referencing the German language pack
$string = "*~de-DE~*"

# Enumerate every subkey under PackageDetect
$keys = Get-ChildItem -Path "hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\PackageDetect"

foreach ($key in $keys) {
    # Enumerate each key's values via .psobject.properties, filter on the
    # value name, and remove every match from the key
    (Get-ItemProperty -Path $key.PSPath).psobject.properties |
        Where-Object { $_.Name -like $string } |
        Remove-ItemProperty -Path $key.PSPath
}

It's pretty simple, but it was frustrating because of the need to go through .psobject.properties to enumerate each key's values. I went through a lot of iterative testing to make sure that I targeted the proper values. Hopefully this helps someone else avoid a reinstallation. After running this script I was able to remove the Hyper-V feature with no problems. I assume that this error was caused by an aborted uninstallation of one of the language packs. I'm not sure how they got on there in the first place, but I assume it was part of a Dell image load or something.

Anyways, cheers!

Overwhelmed discovery

I've settled into a bit of a rhythm lately, spending my days working on Python code from home, continuing my project to integrate the various vendors that I use as part of my job via their APIs. In the evenings, after the girls have gone to bed, I've been working on various personal and school projects, or watching computer science-related videos. From Friday to Saturday evening, however, we've been doing a tech Shabbat, so I've been trying to find ways to get the girls out of the house as much as possible. More on that another time.

I finished watching the MIT Lisp series a week or two ago. It's hard to believe that it's a freshman-level class, as it's fair to say it's probably more challenging than anything I've taken at my current university other than automata theory. It covers so much, and Lisp, or Scheme, rather, has such a simple syntax that it is really interesting to see how everything can be built up from such simple data structures and procedures.

Since I finished those, I've been watching a lot of PyCon videos to try and level up my understanding of some of the more advanced programming idioms that Python has. I've finally wrapped my head around list and dictionary comprehensions, and have finally figured out what generators are good for. I'm going to be exploring async and threading next.

In the last post, I was talking about the Ameritrade API and the work I've been doing to wrap my head around it. Well, yesterday I logged into my account and saw a notice that they were being bought out by Charles Schwab. Since I've been rather frustrated with the whole debugging process, I've decided to step away for a few days and decide whether it makes sense to continue or not. I don't get the sense that Schwab has a very open system, and I'm not really enthusiastic about starting from scratch. Maybe I can find a broker with an open, standards-compliant API that I can roll over to?

One of the main vendors that we use at work has a SOAP API that uses XML to transmit data. I spent several days trying to figure out how all the WSDL definitions work to see if I could use some metaprogramming to dynamically inspect the various entity attributes. Creating XML queries via chained method calls seems overly complicated, so I've been looking at how Django builds its SQL queries using QuerySet calls like Entity.objects.get(id='foo'). It's much simpler, but it's such a higher-level design problem that I've become overwhelmed. A rough sketch of the idea is below.
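To make the comparison concrete, here's a minimal toy sketch of what I'm picturing: a QuerySet-style manager that turns keyword filters into an XML query string. The entity and field names here are hypothetical stand-ins, not the vendor's actual WSDL types or query format.

import xml.etree.ElementTree as ET

class EntityManager:
    """Toy QuerySet-style manager that turns keyword filters into an XML query."""
    def __init__(self, entity_name):
        self.entity_name = entity_name

    def get(self, **filters):
        # Build <query entity="..."><field name="id">foo</field></query>
        root = ET.Element('query', attrib={'entity': self.entity_name})
        for name, value in filters.items():
            field = ET.SubElement(root, 'field', attrib={'name': name})
            field.text = str(value)
        return ET.tostring(root, encoding='unicode')

# Reads like Django's Entity.objects.get(id='foo'), emits the XML the API wants
print(EntityManager('Ticket').get(id='foo'))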

In general, that's been the feeling lately: overwhelmed. It's a different type of feeling than I have when I'm overcome with personal obligations and scheduling items. Now, it's more that my programming ideas are getting to the point where they're becoming hard to intellectually manage. The solution at this point seems to be to step back, work on something else, and give my brain time to work on the problem offline. An idea might pop into my head on its own, or a programming video might give me an idea on how to tackle a problem.

I've been trying to stay focused by limiting the number of things that I allow myself to work on, whether software projects or learning a new piano piece. But sometimes you have to step back and apply your skills to other problems that may reveal solutions in unexpected ways.

QuickType and Ameritrade's API

My life goal of automating my job out of existence continues unabated. I've been spending a lot of time dealing with the APIs of the various vendors that we work with, and I've spent a lot of time poring over JSON responses. Most of these are multi-level structures, which usually leads to some clunky accessor code like object['element']['element']. I'd much prefer the more elegant dot notation of object.element.element instead, but getting from JSON to objects hasn't been something I've wanted to spend much time on. Sure, there are a few options to do this using standard Python, but QuickType is by far the best solution out there.
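For reference, one of those standard-library options is passing an object_hook to json.loads that converts every JSON object into a SimpleNamespace. The payload below is a made-up, Ameritrade-flavored example, not an actual API response:

import json
from types import SimpleNamespace

payload = '{"securitiesAccount": {"accountId": "123", "currentBalances": {"liquidationValue": 1000.0}}}'

# The default parse gives nested dicts and bracket-chain access
data = json.loads(payload)
print(data['securitiesAccount']['currentBalances']['liquidationValue'])

# object_hook converts each JSON object into an attribute-accessible namespace
# (this works as long as every key is a valid Python identifier)
obj = json.loads(payload, object_hook=lambda d: SimpleNamespace(**d))
print(obj.securitiesAccount.currentBalances.liquidationValue)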

I've been using the web-based version for the past few days to create an object library for Ameritrade's API. Now, first off, I'm probably going overboard and violating YAGNI (you ain't gonna need it) principles by trying to include everything that the API can return, but it's been a good excuse to learn more about JSON schemas.

JSON schema with resultant Python code on right.

One of the things I wish I'd caught earlier is that the recommended workflow in Quicktype is to start with example JSON data and convert it to a JSON schema before going from that schema to your target language. I'd been trying to go straight from JSON to Python, and there were some problems. First off, the Ameritrade schema has a lot more types than I'll need: there are two subclasses of securities account, and five different ones for the various instrument classes. I only need a small subset of that, but thankfully Quicktype automatically combines these together. Secondly, Ameritrade's response summary, both the schema and the JSON examples, aren't grouped together in a way that can be parsed efficiently. I spent countless hours trying to combine things into a schema that is properly referenced and would compile.

But boy, once it did. Quicktype does a great job of generating code that can process JSON into a Python object. There are handlers for all of the various data types, and Quicktype will actually type check everything from ints to lists, dicts to unions (for handling Nones), and will process classes back out to JSON as well. Subobject parsing works very well. And even if you don't do Python, it outputs to an impressive number of languages.
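To give a flavor of the output, here's a hand-written sketch in the spirit of what Quicktype generates; the Instrument class and its fields are invented for illustration, and the real generated code carries many more helpers than this:

from dataclasses import dataclass
from typing import Any

def from_str(x: Any) -> str:
    # Quicktype-style helpers assert the runtime type before accepting a value
    assert isinstance(x, str)
    return x

def from_float(x: Any) -> float:
    assert isinstance(x, (float, int)) and not isinstance(x, bool)
    return float(x)

@dataclass
class Instrument:
    symbol: str
    last_price: float

    @staticmethod
    def from_dict(obj: Any) -> 'Instrument':
        assert isinstance(obj, dict)
        return Instrument(
            symbol=from_str(obj.get('symbol')),
            last_price=from_float(obj.get('lastPrice')),
        )

# Dot notation instead of bracket chains once the JSON is parsed
instrument = Instrument.from_dict({'symbol': 'AAPL', 'lastPrice': 230.09})
print(instrument.symbol, instrument.last_price)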

One problem stemming from my decision to use Ameritrade's response summary JSON code instead of their schema is that the example code uses 0 instead of 0.0 where a float would be applicable. This led to Quicktype generating its own schema using integers instead of the JSON schema float equivalent, number. Additionally, Ameritrade doesn't designate any properties as required, whereas Quicktype assumes everything in your example JSON is, which has led to a lot of failed tests.
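The failure mode is easy to reproduce with type-checking helpers like the ones sketched above: a field inferred as int from an example value of 0 will reject the floats the live API actually returns.

def from_int(x):
    # What gets generated when the example JSON shows a bare 0
    assert isinstance(x, int) and not isinstance(x, bool)
    return x

from_int(0)     # fine: matches the example data
from_int(42.8)  # AssertionError: the live API returns floats here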

Next, I'll likely figure out how to run Quicktype locally via the CLI and figure out some sort of build process to keep my object code in sync with my schema definitions. There's been a lot of copypasta going on the past few days, and having it auto-update and run tests when the schema changes seems like a good pipeline opportunity. I've also got to spend some more time understanding how to tie together complex schemas. Ameritrade's documentation isn't up to standard, so figuring out how to break them up into separate JSON objects and reference them efficiently will be crucial if I'm going to finish converting the endpoints that I need for my project.
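Something like the following might serve as a starting point for that pipeline. It's only a sketch: it assumes Quicktype is installed via npm, that its -s/-l/-o flags behave as documented, and the schema and test paths are hypothetical names from my project, not anything standard.

import subprocess

def regenerate_models(schema='ameritrade.schema.json', out='models.py'):
    # Regenerate the Python models from the JSON schema...
    subprocess.run(
        ['quicktype', '-s', 'schema', schema, '-l', 'python', '-o', out],
        check=True,
    )
    # ...then re-run the test suite against the fresh code
    subprocess.run(['python', '-m', 'pytest', 'tests/'], check=True)

regenerate_models()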

That said, Quicktype is a phenomenal tool, and one that I am probably going to use for other projects that interface with REST APIs.

Jacobin: War Is a Racket

It seems wholly appropriate to be covering this issue on Veterans Day. Both my parents were Army, and I've been living in an area with one of the largest populations of active-duty personnel in the country.

This issue came to my door looking like a mock-up of an old G.I. Joe action toy, the packaging decked out with images of our hero in the midst of battle. In this case, however, the included figurine is from long after the battle has ended. Our action hero is sporting non-regulation long hair and a beard, as well as a prosthetic leg and a cane, and several bottles of prescription medication litter his feet.

This issue pulls no punches, deflating the notions of 'service' and 'supporting our troops'. There's plenty in this hefty issue about ending American imperialism, but the standout for me is the re-framing of American military culture as a 'poverty draft':

“The military welfare state only makes an effective recruiting tool because the United States denies all of us the civilian safety net we deserve. The US working class is held hostage by a political and military elite that exploits our deprivation to fuel its endless wars, forcing workers to make a devil's bargain in pursuit of basic protections that should be available for all.”

This statement hit me with such a moment of realization that, reading it, I was almost embarrassed that I had not seen it before. It's a bit difficult to convey the way in which military culture permeates everything here, so it was a bit like the David Foster Wallace bit about a fish learning what water is for the first time.

There's a bit about activist opposition to ROTC programs in high schools that made me think about the recruiting emails and texts that I've been getting through my college email address. And there's a lot more in this issue, which is heftier than most of the others I've seen from Jacobin. They have a breakdown of the current 2020 Democratic presidential contenders (tl;dr: Biden, F; Warren, D-; Bernie, A-), infographic timelines of US military installations post-WWII, and some other interesting features.

But the short end of it is that they're right about the hold that US militarism has on the culture. From 'defense' spending to displays of patriotism at sporting events to the exploitation of veterans by for-profit colleges via the GI Bill, Americans have an unhealthy relationship with our armed forces. And while a good deal of this issue does talk about concrete steps that can be taken to turn the tide, it seems like it might take generations before we have a population willing to fight back against our military-industrial system. Providing Medicare for All and free college would do a lot to break this hold, but then again, this may be exactly why the powers that be are fighting so hard to stop them.

Wired: September 2019

Three Years of Misery Inside Silicon Valley's Happiest Company, by Nitasha Tiku: The cover story of this month's issue is a really in-depth piece about the chaos that has been plaguing the internet search giant, Google. One of the things I love about Wired is their long-form reporting, and this article runs several thousand words, about 14 pages of text in nine parts. Tiku details the leaks from inside the company that seemingly destroyed the company's unspoken rule of 'what happens at Google, stays at Google'. Social-justice activists called out the company's missteps with regard to sexual harassment by managers and execs, and its betrayal of the 'don't be evil' motto through dealings with authoritarian China and the United States military.

The article really opens up a look at Google's inner culture: how the company was historically quite open, with C-level executives making themselves available for questioning from staffers, and with rank-and-file employees creating subcultures within the company.

The story starts in January 2017, as employees took to the streets after the Trump administration's travel ban was declared. The company stood behind their employees and stood up for immigrant rights. Then in June, an engineer named James Damore released a 10-page memo 'explaining' why there weren't more female engineers in the industry. Damore was reacting to efforts to promote female engineers within the company, and claimed that this was a bad idea since there were biological reasons why there aren't more women in STEM fields.

The eventual backlash to Damore's memo, and his dismissal, started a culture war with the company's conservatives. Apparently, this minority within Google had been keeping to their own corner of the company, but following this episode some of them became emboldened and began to step up their opposition and trollishness. They doxed several of the liberal organizers, thereby breaking the company's sacred rule of non-disclosure.

This episode is just one of several that Tiku details. By the end of the piece, it's clear that Google's culture has been transformed, and that while its employees may still be sticking to the 'don't be evil' motto, the executives of the company, driven by shareholder demands for growth, have lost their way.

FAN-tastic Planet: When I was a teenager growing up in the mid-90s, Wired was the coolest magazine on the planet. I felt that it offered both a vision of the future and secret knowledge about where things were headed. Wired was essential fuel for the ideas that eventually led to my career in computers and programming. Now, having learned more about the nascent cyberpunk culture that Wired killed off in favor of the dot-com boom and bust, I wonder more about what could have been. I bring this up because I was almost shocked to read the intro to this special culture section in this issue.

“A person sitting at a computer – it was a mystical sight. Once,” it opens, before going further into something straight out of a Rushkoff monologue: “The users, we humans, were the almighty creator-gods of the dawning digital age, and the computers did our bidding. We were in charge. In today's world, subject and object have switched places… Computers run the show now, and we - mere data subjects.” It's almost as if they took Rushkoff's point about 'figure and ground' verbatim. Forgive me, dear reader, if I'm mistaken and his point isn't original. But given his disdain for Wired's entire raison d'être during the 90s and aughts, I find it entirely ironic that they have this section: fan-fic-writing nerds; Netflix's turn toward Choose-Your-Own-Adventure-style programming; social-media influencers 'connecting' with fans over special, subscriber-only feeds; and the rise of a new generation of crossword-puzzle writers who are bringing new, diverse voices to another field traditionally dominated by white men. (This last one actually includes a crossword, and led to several days of puzzling on my part.)

Free Riders, by Zeynep Tufekci: The front pages of this issue have the standard Wired fare, gadgets and the latest tech. This time it's smart writing tools and augmented-reality gizmos. A bit about the current NASA Mars rover being built, another extolling the joys of world-building video games, another bemoaning Facebook's creepy dating feature. I wanted to end with a mention of Tufekci's piece about the prevalence of commercial companies that are built on top of the free, open source tools that have been released to the internet. Not that this is a problem, per se, but there have been many instances of packages that have been instrumental to the success of these companies, and to the industry in general, that have had security issues or depended on the unpaid efforts of some very overworked contributors. Tufekci details two pertinent examples: the Heartbleed bug, which affected the OpenSSL library used to protect almost all web traffic, and core-js, a JavaScript library that much of the web depends on.

In the latter case, the developer had been working on the library almost every day for five years without pay, and had solicited less than a hundred dollars. While some might blame him for not taking advantage of his project's popularity and using it to leverage himself into a high-paying gig somewhere, the issue highlights a problem with the web's altruistic origins, which have long since been abused by corporations. At least in the case of OpenSSL, they were able to guilt some firms into providing more funding, but we've got a long way to go in figuring out how to reward the open source programmers who have provided the tools that the web is built on.

The Nation: Aug 26/Sept 2, 2019

We're seriously behind, both on the pile of periodicals that we have to read and on the ones that we've read but haven't covered. Letting them age a bit this way puts the coverage in perspective, and helps justify our procrastination.

News You Can Lose, by John Nichols: I have yet to watch any of the Democratic presidential debates. I gave up my binge-drinking, live-Tweeting event watching after Trump was elected president. Douglas Rushkoff has noted how the debate-as-television-spectacle gave us our current reality-show president, and I am in no mood to participate in the current round. That's not to say that I haven't post-watched some of the more 'gotcha' moments of the current crop: Biden's numerous flubs, Julian Castro's miscalculated attack on Biden, and of course, Elizabeth Warren's onstage murder of one of the other also-rans.

Nichols's column follows the Democratic debate in Detroit, hosted by Fox News. There was a union solidarity event a few hours before the debate at a General Motors transmission plant that was scheduled to be closed the day after. Nichols notes that Dems would have been smart to host the debate at the union hall, or at a church across the street from the actual debate's location, where Detroit's Democratic congressional delegation was attending in solidarity with families facing deportation. Either of these would have been a smart choice to help focus the debates on substantive policy issues.

But of course, that isn't the point. Spectacle is. Nichols's point is that progressives need to make more of an effort to wrest control of these debates back from the party and the networks, and he points to the People's Presidential Forum as an example. The Forum, which was to be hosted in October by the New England group Rights and Democracy, was cancelled because "not enough candidates could make this date work".

The American Workplace, by Bryce Covert: Workplace discrimination against pregnant women is rampant in America, especially against working-class women. My wife was able to save up weeks of leave for both of our children, but for most American women, finding or keeping a job while pregnant can be difficult, and employers use a variety of measures to screen out or dismiss these women. Pay discrimination, or more specifically the gender-based wage gap, and the Equal Rights Amendment are important political issues today, and Covert profiles several women whose lives have been affected by this issue and who are fighting back.

Without repeating the details of these cases, I should note that America is one of the only industrialized countries that does not provide paid leave for new mothers. And this fact has many secondary effects on the well-being of the child, including potential educational and economic ones. The profit-driven war on expectant mothers is a roadblock to economic mobility. We should grant American mothers the same privileges as the rest of the developed world, and allow them the time to bond with their newborns rather than forcing them to give up either that time or their careers.

Source: Pew Research Center

Vigilant Struggle, by Robert Greene II: Review of Stony the Road, by Henry Louis Gates. The new Watchmen HBO series premieres with scenes from what can only be described as a race riot: Tulsa, Oklahoma, 1921. A black couple with a young boy tries to escape violence in the city as white citizens indiscriminately shoot unarmed blacks while buildings burn around them. Later, explosive devices are dropped from airplanes onto a garage where others had been hiding out. The scene was so outlandish that I thought it was some sort of alternate history being built around the show’s background. It wasn’t until later that I discovered that the racially motivated destruction depicted in the show was based on actual events.

My own ignorance of the Tulsa race riot of almost a hundred years ago is further magnified by my ignorance of the history of Reconstruction following the Civil War. Henry Louis Gates has produced a new documentary series for PBS titled Reconstruction, and his book Stony the Road is a companion to it. According to Greene, Gates has attempted to expand the period defined by Reconstruction to encompass the war itself as well as the first couple decades of the twentieth century. Whether he includes the 1921 Tulsa riot in this definition remains to be seen.

Greene, a history professor at South Carolina's Claflin University, spends several thousand words on the subject of Reconstruction before getting into Gates's documentary, and notes that there were really two Reconstructions: one that was reconciliatory toward the vanquished South, and another, more radical Reconstruction that attempted to redefine the entire concept of American democracy and expand it to the formerly enslaved. He notes that the period was one of revanchist backlash and lynchings, and raises questions about just how successful these latter efforts at reform were.

Gates's documentary, he notes, provides a level of context to the African American experience, and is successful at detailing the evidence of continued aggression against the freed slaves: racist stereotypes in papers and books, minstrel shows, the founding of the Klan, Jim Crow. This evidence is held up as proof against claims of a modern post-racial America. Ultimately, Reconstruction is "not just about the rise and fall of black power in post-Civil War America, but the rise and fall of black equality in all spheres of American life, cultural, political and otherwise."

Confirmation bias

So I have proved once again that drinking sucks. I went over one hundred days, probably closer to 110, and have spent the past several days back in old habits. Nothing bad has happened, but nothing good has happened either. I’ve actually fallen off of several habits, dear reader, so I am hoping that coming back here and writing will help drive the demons out. I’m being melodramatic, so I guess I should explain.

I had settled into a bit of a routine: waking up early, meditating, drinking some tea, fasting, turning off screens at 10 PM and going to bed at a decent hour. I felt like I was getting a lot done, and I felt great. Then I guess I settled into a few bad habits that started a decline. I've been drinking way too many caffeinated drinks, and then started staying up too late. I justified it because I've been watching the MIT CS videos. But I wasn't getting good sleep, so more caffeine, and so on and so on.

Before I stopped drinking, I had bought a bottle of wine, and after I quit someone gave me a bottle of Scotch. The two bottles were sitting next to each other on a hutch in the dining room, next to the other drinking paraphernalia. I could see them every day, and as long as they were there they functioned as a sort of totem. I knew they were there, and I was proving something to myself by choosing not to drink them each day. And hopefully, days would pass when the thought would never even cross my mind.

Of course, I do not live alone. And I've never tried to impose my abstinence on my spouse. I may have even picked up something for her at the store a time or two. But on a particular day this last week, she had a 'really bad day' and wanted something to drink, and the only thing left was the wine and the scotch. After a bit of half-hearted protestation, I opened the wine. And I poured two glasses. Because I would be damned if I was going to let her drink my bottle of wine without me. And so it began.

The next day, the seal on the scotch was broken. And over the next few nights, finger after finger, I drank. And so on and so on, until I was buying and drinking an entire six-pack of IPA until well after midnight this morning. I wasn't hung over, but as I lay in bed this morning I determined that I would get back to the schedule. Wake. Meditate. Write. Be present.

This is my confession. And this is my reset.

The Nation Magazine: Aug 12/19, 2019

We are really behind on our periodicals, and have quite the stack building up on our bookshelf. We’re going to be catching up over the next few days with a flurry of reading and posting.

Go Not Abroad In Search of Monsters, by David Klion: The title of this article is taken from John Quincy Adams's 'Monsters to Destroy' speech. Quincy is the namesake of a new transpartisan think tank whose aim is to restrain America's foreign policy.

[The] Quincy Institute for Responsible Statecraft, which states that its mission is to “move US foreign policy away from endless war and toward vigorous diplomacy in the pursuit of international peace.”

This think tank is apparently the love child of a number of liberals and conservatives, including perennial boogeyman George Soros and arch-fiend David Koch. This is really promising. If the Quincy Institute can be effective at keeping the US out of the next international conflict, then I am all for it.

One notable takeaway that I hadn’t heard before is Quincy’s executive director’s definition of transpartisanship, as opposed to bipartisanship. Bipartisanship, she explains, implies that each side is giving up some of what they want in compromise. Transpartisanship means that both sides are “collaborating on issues they already are in agreement over.” It’s a definition that I will be stealing in the future.

Marie Newman vs. the Democratic Machine, by Rebecca Grant: If ever there was a subject near and dear to me, it's progressive challengers to the Democratic party establishment. I'm not ready to dox myself quite yet, but I was an organizer for the 2016 Sanders campaign, as well as staff on an unsuccessful congressional primary campaign. This article does well to highlight Newman's challenge to conservative Democratic representative Dan Lipinski, but I'm not sure there's more to take away from it.

Right-Wing Troika, by Bryce Covert: Review of State Capture, by Alexander Hertel-Fernandez. State-level politics is another game that I've been involved with, and Republican Scott Walker's successes in Wisconsin have been something of an interest of mine. In my case, it's been to lament that Democrats dropped the ball so spectacularly over the past decade and lost so many state legislative seats and governors' mansions. The GOP had a great strategy and implemented it brilliantly, with ALEC and other think tanks that helped push policy out in the states.

The left has a lot to learn from the conservative playbook, especially the Wisconsin model, and has a long way to go to catch up. Hopefully State Capture will help map the road to recovery.

Stock price forecasting using FB's Prophet: Part 3

In our previous posts (part 1, part 2) we showed how to get historical stock data from the Alpha Vantage API, use Pickle to cache it, and how to prep it in Pandas. Now we are ready to throw it into Prophet!

So, after loading our main.py file, we get ticker data by passing the stock symbol to our get_symbol function, which will check the cache and get daily data going back as far as is available via AlphaVantage.

>>> symbol = "ARKK"
>>> ticker = get_symbol(symbol)
./cache/ARKK_2019_10_19.pickle not found
{'1. Information': 'Daily Prices (open, high, low, close) and Volumes', '2. Symbol': 'ARKK', '3. Last Refreshed': '2019-10-18', '4. Output Size': 'Full size', '5. Time Zone': 'US/Eastern'}
{'1: Symbol': 'ARKK', '2: Indicator': 'Simple Moving Average (SMA)', '3: Last Refreshed': '2019-10-18', '4: Interval': 'daily', '5: Time Period': 60, '6: Series Type': 'close', '7: Time Zone': 'US/Eastern'}
{'1: Symbol': 'ARKK', '2: Indicator': 'Relative Strength Index (RSI)', '3: Last Refreshed': '2019-10-18', '4: Interval': 'daily', '5: Time Period': 60, '6: Series Type': 'close', '7: Time Zone': 'US/Eastern Time'}
./cache/ARKK_2019_10_19.pickle saved

Running Prophet

Now, we're not going to do anything here with the original code other than wrap it in a function that we can call again later. Our alpha_df_to_prophet_df() function renames our datetime index and close-price series data columns to the columns that Prophet expects. You can follow the original Medium post for an explanation of what's going on; we just want the fitted history and forecast dataframes in our return statement.

def prophet(ticker, fcast_time=360):
    # Rename the Alpha Vantage columns to the 'ds'/'y' pair Prophet expects
    ticker = alpha_df_to_prophet_df(ticker)
    df_prophet = Prophet(changepoint_prior_scale=0.15, daily_seasonality=True)
    df_prophet.fit(ticker)
    # Extend the dataframe fcast_time days into the future, then predict
    df_forecast = df_prophet.make_future_dataframe(periods=fcast_time, freq='D')
    df_forecast = df_prophet.predict(df_forecast)
    return df_prophet, df_forecast

>>> df_prophet, df_forecast = prophet(ticker)
Initial log joint probability = -11.1039
    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes 
      99       3671.96       0.11449       1846.88           1           1      120   
...
    3510       3840.64   3.79916e-06       20.3995   7.815e-08       0.001     4818  LS failed, Hessian reset 
    3534       3840.64   1.38592e-06       16.2122           1           1     4851   
Optimization terminated normally: 
  Convergence detected: relative gradient magnitude is below tolerance

The whole process runs within a minute. Even twenty years of Google daily data can be processed quickly.

The last thing we want to do is concat the forecast data back to the original ticker data and Pickle it back to our file system. We rename our index back to 'date', as it was before we modified it, then join it to the original Alpha Vantage data.

def concat(ticker, df_forecast):
    df = df_forecast.rename(columns={'ds': 'date'}).set_index('date')[['trend', 'yhat_lower', 'yhat_upper', 'yhat']]
    frames = [ticker, df]
    result = pd.concat(frames, axis=1)
    return result

Seeing the results

Since these are Pandas dataframes, we can use matplotlib to see the results, and Prophet also includes Plotly support. But as someone who looks at live charts in TradingView throughout the day, I’d like something more responsive. So we loaded the Bokeh library and created the following function to match.

ARKK plot using matplotlib. Static only.
ARKK plot in Plotly. Not great. UI is clunky and doesn’t work well in my dev VM browser.
from bokeh.plotting import figure, output_file, save

def prophet_bokeh(df_prophet, df_forecast):
    p = figure(x_axis_type='datetime')
    # Shade the yhat_lower/yhat_upper prediction band
    p.varea(y1='yhat_lower', y2='yhat_upper', x='ds', color='#0072B2', source=df_forecast, fill_alpha=0.2)
    # Plot the fitted history and the forecast line on top
    p.line(df_prophet.history['ds'].dt.to_pydatetime(), df_prophet.history['y'], legend="History", line_color="black")
    p.line(df_forecast.ds, df_forecast.yhat, legend="Forecast", line_color='#0072B2')
    save(p)

>>> output_file("./charts/{}.html".format(symbol), title=symbol)
>>> prophet_bokeh(df_prophet, df_forecast)
ARKK plot in Bokeh. Can easily zoom and pan. Lovely.

Putting it all together

Our ultimate goal here is to be able to process large batches of stocks, downloading the data from AV and processing it in Prophet in one go. For our initial run, we decided to start with the bundle of stocks in the ARK Innovation ETF. So we copied the holdings into a Python list, and created a couple of functions: one to process an individual stock, and another to process the list. Everything in the first function should be familiar except for two things. One, we added a check for the 'yhat' column to make sure that we didn't inadvertently reprocess any individual stocks while we were debugging. Two, we refactored out get_filename, which just adds the stock ticker plus today's date to a string. It's used in get_symbol during the Alpha Vantage call, as well as here when we save the Prophet-ized data back to the cache.

def process(symbol):
    ticker = get_symbol(symbol)
    if 'yhat' in ticker:
        print("DF exists, exiting")
        return
    df_prophet, df_forecast = prophet(ticker)
    output_file("./charts/{}.html".format(symbol), title=symbol)
    prophet_bokeh(df_prophet, df_forecast)
    result = concat(ticker, df_forecast)
    file = get_filename(symbol, CACHE_DIR) + '.pickle'
    pickle.dump(result, open(file, "wb"))
    return

Finally, our process_list function. We had a bit of a wrinkle at first: since we're using the free Alpha Vantage API, we're limited to 5 API calls per minute, and since we're making three in each get_symbol() call, we get an exception if we run through the loop more than once in sixty seconds. Now, I could have just gotten rid of the SMA and RSI calls, but I ultimately decided to calculate the duration of each loop and sleep until the minute was up. Obviously not the most elegant solution, but it works.

def process_list(symbol_list):
    for symbol in symbol_list:
        start = time.time()
        process(symbol)
        end = time.time()
        elapsed = end - start
        print("Finished processing {} in {}".format(symbol, elapsed))
        if elapsed > 60:
            # Already burned through a full rate-limit window; no need to wait
            continue
        elif elapsed < 1:
            # Cache hit, no API calls made, so no throttling needed
            continue
        else:
            # Sleep off the remainder of the sixty-second window
            print('Waiting...')
            time.sleep(60 - elapsed)
            continue

So from there we just pass our list of ARKK stocks, go for a bio-break, and when we come back we’ve got a cache of Pickled Pandas data and Bokeh plots for about thirty stocks.

Where do we go now

Now, I'm not putting too much faith in the results of the Prophet data; we didn't do any customization, and we just wanted to see what we could do with it. In the days since I started writing up this series, I've been thinking about ways to pick the winners out of the plots via a function call. So far I've come up with this discount function, which determines the discount of the current price of an asset relative to Prophet's yhat prediction band.

Continuing with ARKK:

def calculate_discount(current, minimum, maximum):
    return (current - minimum) * 100 / (maximum - minimum)

>>> result['discount'] = calculate_discount(result['4. close'], result['yhat_lower'], result['yhat_upper'])
>>> result.loc['20191016']
1. open           42.990000
2. high           43.080000
3. low            42.694000
4. close          42.800000
5. volume     188400.000000
SMA               44.409800
RSI               47.424600
trend             41.344573
yhat_lower        40.632873
yhat_upper        43.647911
yhat              42.122355
discount          71.877276
Name: 2019-10-16 00:00:00, dtype: float64

A negative number for the discount indicates that the current price is below the prediction band, and may be a buy. Likewise, anything over 100 is above the prediction range and is overpriced, according to the model. We did ultimately pick two of the ARKK holdings that were well below the prediction range and had a favorable long-term forecast, and we've started scaling in modestly while we see how things play out.

If we were more cautious, we'd do more backtesting, running limited time slices through Prophet and comparing forecast accuracy against the historical data. Additionally, we'd like to figure out a way to weigh our discount calculation against those accuracy measurements. A rough sketch of such a backtest follows.
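Here's a minimal sketch built from the functions we already have; the 90-day holdout and the mean-absolute-error scoring are arbitrary choices of mine, not anything from the original tutorial:

def backtest(ticker, holdout_days=90):
    # Hold out the last holdout_days rows and fit on the rest
    df = alpha_df_to_prophet_df(ticker)
    train, test = df[:-holdout_days], df[-holdout_days:]
    model = Prophet(changepoint_prior_scale=0.15, daily_seasonality=True)
    model.fit(train)
    # Pad the window: periods counts calendar days, the holdout is trading days
    future = model.make_future_dataframe(periods=holdout_days * 2, freq='D')
    forecast = model.predict(future).set_index('ds')
    # Inner join drops forecast dates with no actual close to compare against
    merged = test.set_index('ds').join(forecast[['yhat']], how='inner')
    # Score with mean absolute error between actual closes and predictions
    return (merged['y'] - merged['yhat']).abs().mean()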

There's much more to explore off of the original Medium post. We haven't even gotten into integrating Alpha Vantage's cryptoasset calls, nor have we done any of the validation and performance metrics that are part of the tutorial. It's likely that a part 4 and 5 of this series could follow. Ultimately, though, our interest is to get into actual machine learning models such as TensorFlow and see what we can come up with there. While we understand the danger of placing too much weight on trained models, I do think that there may be value in using these frameworks as screeners. Coupled with the value averaging algorithm that we discussed here previously, we may have a good strategy for long-term investing. And anything that I can quantify and remove the emotional factor from is good as well.


I’ve learned so much doing this small project. I’m not sure how much more we’ll do with Prophet per se, but the Alpha Vantage API is very useful, and I’m guessing that I’ll be doing a lot more with Bokeh in the future. During the last week I’ve also discovered a new Python project that aims to provide a unified framework for coupling various equity and crypto exchange APIs with pluggable ML components, and use them to execute various trading strategies. Watch this space for discussion on that soon.

Stock price forecasting using FB’s Prophet: Part 2

Facebook's Prophet module is a trend forecasting library for Python. We spent some time over the last week going over it via this awesome introduction on Medium, but decided to do some refactoring to make it more reusable. Previously, we set up our pipenv virtual environment, separated sensitive data from our source code using dotenv, and started working with Alpha Vantage's stock price and technical indicator API. In this post we'll save our fetched data using Pickle and do some dataframe manipulations in Pandas. Part 3 is also available now.

Pickling our API results

When we left off, we had just written our get_time_series function, to which we pass a method name like 'get_daily' and a symbol for the stock that we would like to retrieve. We also have our get_technical function that we can use to pull any of the dozens of indicators available through Alpha Vantage's API. Following the author's original example, we can load Apple's price history, simple moving average and RSI using the following calls:

symbol = 'AAPL'
ticker = get_time_series('get_daily', symbol, outputsize='full')
sma = get_technical('get_sma', symbol, time_period=60)
rsi = get_technical('get_rsi', symbol, time_period=60)

We've now got three dataframes. In the original piece, the author shows how you can export and import this dataframe using Pandas' .to_csv and read_csv functions. Saving the data is a good idea, especially during this stage of development, because it allows us to cache our data and reduce the number of API calls. (Alpha Vantage's free tier allows 5 calls per minute, 500 a day.) However, using CSV to save Pandas dataframes is not recommended, as you will lose index and column metadata. Python's Pickle module will serialize the data and preserve it whole.
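To illustrate the difference, a quick round trip through each format looks something like this, assuming the ticker dataframe from the calls above:

import pickle
import pandas as pd

# CSV flattens the DatetimeIndex to plain text on the way out, so the index
# and dtypes have to be rebuilt by hand after read_csv
ticker.to_csv('ticker.csv')
restored_csv = pd.read_csv('ticker.csv')

# Pickle serializes the dataframe whole: index, columns and dtypes intact
with open('ticker.pickle', 'wb') as f:
    pickle.dump(ticker, f)
with open('ticker.pickle', 'rb') as f:
    restored = pickle.load(f)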

For our implementation, we will create a get_symbol function, which will check a local cache folder for a copy of the ticker data and load it if found. Our file naming convention uses the symbol string plus today's date. Additionally, we concat our three dataframes into one using Pandas' concat function:

def get_symbol(symbol):
    CACHE_DIR = './cache'
    # check if cache exists
    symbol = symbol.upper()
    today = datetime.now().strftime("%Y_%m_%d")

    file = CACHE_DIR + '/' + symbol + '_' + today + '.pickle'
    if os.path.isfile(file):
        # load pickle
        print("{} Found".format(file))
        result = pickle.load(open(file, "rb"))
    else:
        # get data, save to pickle
        print("{} not found".format(file))
        ticker = get_time_series('get_daily', symbol, outputsize='full')
        sma = get_technical('get_sma', symbol, time_period=60)
        rsi = get_technical('get_rsi', symbol, time_period=60)

        frames = [ticker, sma, rsi]
        result = pd.concat(frames, axis=1)
        pickle.dump(result, open(file, "wb"))
        print("{} saved".format(file))
    return result

Charts!

The original author left out all his chart code, so I had to figure things out on my own. No worries.

import matplotlib.pyplot as plt

result = get_symbol("goog")
# Plot the close price, SMA and RSI series on a single axis
plt.plot(result.index, result['4. close'], result.index, result.SMA, result.index, result.RSI)
plt.show()
Google stock price (blue), 60-day moving average (orange) and RSI (green)

Since the RSI is such a small number relative to the stock price, let’s chart it separately.

plt.subplot(211, title='Price')
plt.plot(result.index, result['4. close'], result.index, result.SMA)
plt.subplot(212, title="RSI")
plt.plot(result.index, result.RSI)
plt.show()
Much better.

We saved both of these in a plot_ticker function for reuse in our library. Now, I am no expert on matplotlib, and have only done some basic stuff with Plotly in the past. I'm probably spoiled by looking at TradingView's wonderful chart tools and dynamic interface, so being able to drag and zoom around in the results is really important to me from a usability standpoint. So we'll leave matplotlib behind from here, and I'll show you how I used Bokeh in the next part.

Framing our data

We already showed how we concat our price, SMA and RSI data together earlier. Let’s take a look at our dataframe metadata. I want to show you the columns, the dtype of those columns, as well as that of the index. Tail is included just for illustration.

>>> ticker.columns
Index(['1. open', '2. high', '3. low', '4. close', '5. volume', 'SMA', 'RSI'], dtype='object')

>>> ticker.dtypes
1. open      float64
2. high      float64
3. low       float64
4. close     float64
5. volume    float64
SMA          float64
RSI          float64
dtype: object

>>> ticker.index
DatetimeIndex(['1999-10-18', '1999-10-19', '1999-10-20', '1999-10-21',
               '1999-10-22', '1999-10-25', '1999-10-26', '1999-10-27',
               '1999-10-28', '1999-10-29',

>>> ticker.tail()
            1. open  2. high  3. low  4. close   5. volume       SMA      RSI
date                                                                         
2019-10-09   227.03   227.79  225.64    227.03  18692600.0  212.0238  56.9637
2019-10-10   227.93   230.44  227.30    230.09  28253400.0  212.4695  57.8109

Now, we don't need all of this for Prophet. In fact, it only looks at two series: a datetime column, labeled 'ds', and the series data that you want to forecast, a float, labeled 'y'. In the original example, the author renames and recasts the data, but this is likely because of the metadata loss when importing from CSV, and isn't strictly needed. Additionally, we'd like to preserve our original dataframe as we test our procedure code, so we'll pass a copy.

def alpha_df_to_prophet_df(df):
    prophet_df = df.get('4. close')\
        .reset_index(level=0)\
        .rename(columns={'date': 'ds', '4. close': 'y'})

    # not needed since dtype is correct already
    # df['ds'] = pd.to_datetime(df['ds'])
    # df['y'] = df['y'].astype(float)
    return prophet_df

>>> alpha_df_to_prophet_df(ticker).tail()
             ds       y
5026 2019-10-09  227.03
5027 2019-10-10  230.09
5028 2019-10-11  236.21
5029 2019-10-14  235.87
5030 2019-10-15  235.32

In the first line of prophet_df = we're selecting only the '4. close' price column, which is returned with the original DatetimeIndex. We then reset the index, which turns the index into a 'date' column. Finally, we rename both columns accordingly.


And that’s it for today! Next time we will be ready to take a look at Prophet. We’ll process our data, use Bokeh to display it, and finally write a procedure which we can use to process data in bulk.