Morning pages

Yesterday I spent most of my time trying to migrate a production WordPress site to my development environment. Normally I use Infinite WordPress’s site migration tools, which handle moving all the files and updating the site-URL references in the database, but I don’t think they work when the site isn’t public. I’m doing a lot of hacks with my Docker setup, importing databases, messing with file permissions, and duplicating a lot of my work, since I like to sit downstairs at my desk during the day and upstairs at night. So part of my challenge is finding a setup that works well for me.

I might have to make some sacrifices. JetBrains IDEs don’t like to work over the network, so I’ve got to add a directory sync if I want to keep the files on my network server and work from both workstations. At least I can run Docker from the remote machine, but it’s not supported by the IDE, so I’ll have to figure out how to fit that into my workflow.

The girls were good. Elder did everything I asked her and earned her extra screen time. She did typing, piano, and two sessions of math on Khan Academy, and we managed to keep the house tidy. So that’s a big parenting win. She’s already up and on her laptop right now, ostensibly doing typing, but I don’t hear much of it going on over there. Maybe she’s doing math.

We had a bit of excitement yesterday when bitcoin went on a little bit of a tear. I noticed it shot up to touch ten thousand and got excited. Elder came over and said “it went UP,” excitedly. We watched the fight for a few minutes, before it dumped, and then went on a bike ride.

I moved my entire Ethereum stash over to BlockFi. There’s always a moment of horror after publishing a large transaction to the blockchain when the doubt sets in: wondering whether the address was copied correctly, or whether my opsec failed and some hacker swapped the receiving address while it was in my clipboard. Did I check the address? I usually do a small test transaction before sending over the big one, but it still makes me nervous. Especially after reading about the mining firm that said someone sent a $144 Eth transaction with $131 million in gas fees.

So now I have a fair chunk of my assets up on BlockFi. I haven’t touched but a fraction of my BTC; it’s just too much risk for me. I’ve got a roughly even split there between BTC, ETH, and USD stablecoins, and I’m considering whether to put more USD in. I’ve still got the girls’ BTC accounts, but I don’t want to mix them with mine, and I’m not yet sure whether I can open an account in their names or whether I’ll have to do what I did for Lending Club and make multiple accounts in my name.

Due to the coronavirus, the IRS is allowing 2019 IRA contributions up until July 15. I’m considering whether I want to do this or throw some more cash into BlockFi. My IRA is on fire right now: I calculated 70% realized gains off of this market rally, and my unrealized gains for the year are much higher than when I calculated them a couple of weeks ago. I’ve still got active value averaging positions in play, and I’m probably going to be short on cash before they complete, so I need some powder. I just don’t know whether I should sell some of my other positions or put more cash into play. All of my current plays are under risk-adjusted position sizes, but my long-term holdings are just sitting without any stops on them. With everyone going crazy on Robinhood these days, I should probably put some protections in place in case there’s another lockdown-related pullback.

Yesterday, a client’s laptop failed, and I’m waiting on a vendor to go out there and swap a motherboard or something. The drive is encrypted, and while I’m certain I have the keys, I felt a shot of adrenaline course through my body when I remembered that I neglected to reinstall a backup program on her machine after replacing it. So I know what I’m doing today. What I don’t know is what I’m posting tomorrow for my newsletter. This post has been the type of rambling morning pages post that’s of no use to anyone but myself, and which is not the type of quality content that I want to be sending out to my LinkedIn network, or to the email list which I just salvaged from an old CSV file.

I’m going to let that one stew in my head today.

Choose FI: Financial Independence

My wife and I don’t usually read many of the same books. Aside from some sci-fi and fantasy novels, our reading preferences don’t overlap much. She likes trashy novels, and I stick to political, business, and technology-based non-fiction. She recently discovered the FIRE movement, a group of people trying to build freedom from wage slavery through financial independence. She picked up ChooseFI a few months ago and has been listening to the related podcasts regularly. Since redefining work has been of great interest to me, I decided to make a trade with her: I would read this book and she could read one of mine. I’m looking forward to her upcoming review of The Future Is Faster Than You Think — as soon as I finish it.

ChooseFI is an introduction to the financial independence movement. The last two letters of FIRE stand for ‘retire early’, which the authors acknowledge is a bit of a misnomer, since many who achieve FIRE continue to work. The general idea is to lower expenses and save up enough money to fund your lifestyle from the returns on those savings. The first step is determining your magic number: multiply your annual expenses by twenty-five, and you have the amount needed to maintain that lifestyle on a four percent withdrawal rate from those savings. These concepts are given as the rules of twenty-five and four percent.

Determining this number and thinking about spending in terms of twenty-five times annual cost (or three hundred times, if you’re talking about monthly expenses) can produce a dramatic shift in mindset. A simple example: a few dollars a day on energy drinks or coffee might cost you fifteen hundred dollars a year just to purchase, but maintaining that level of spending from savings income will require a whopping thirty-seven thousand five hundred dollars in the bank. Another example: we’ve had a housekeeper come by our home twice a month, at $130 a visit. That’s over three grand a year in direct expenses, and over seventy-eight grand to maintain during retirement. Putting costs in this perspective creates a stark shift in priorities.
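The rules above fit in a few lines of Python (the dollar figures are just the examples from this chapter):

```python
# "Rule of 25": the savings needed to fund an expense forever from a
# 4% withdrawal rate is 25x its annual cost (300x its monthly cost).
def fi_number(annual_expense):
    return annual_expense * 25  # equivalently, annual_expense / 0.04

print(fi_number(1500))      # 37500: the daily-drink habit
print(fi_number(130 * 24))  # 78000: the twice-monthly housekeeper
```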

Of course, the goal of living FIRE isn’t to live life as an ascetic; it’s about prioritizing the things that one wants out of life. As a self-help book, ChooseFI does a good job of laying the groundwork for setting priorities, developing a growth mindset, and mapping out the path to get there. There are chapters on US tax savings, advice on college, career, and networking, and investment tips that focus on real estate and house hacking, index funds, and building a business.

As someone who’s taken a hands-on approach to managing my retirement and stock accounts much of my adult life, I found the chapter on index funds to be the weakest. I understand that for most people, picking a low-fee Vanguard index makes the most sense, but reading the following passage in the days following the worst daily drop in the S&P since the Great Depression struck me as ironic:

Buying an index fund means making a bet that the system will continue to grow and prosper. Some will argue that this will not always be the case. After all, societies and economies have collapsed in the past. While this is true, I’m not basing my plan around a worst-case scenario that may never come.

Choose FI, p. 229

Whoops.

To be fair, the authors have acknowledged the current pandemic and situation in the markets in their recent podcasts. From what I’ve heard, it sounds like they are exercising caution and urging listeners not to jump at the current fire sale prices. This is probably wise advice for most people.

The book is perhaps a bit longer than it needs to be, and I found myself skimming more as I neared the end. There are a lot of personal stories from the authors, and many of the chapters showcase people they have met or interviewed on their podcast. I suppose that’s to be expected in an introductory book like this. The chapters on tax strategies and real estate investing were of the most interest to me, and the authors point to other resources where readers can dig deeper.

That said, the authors deserve credit for the community they’ve built. I hesitate to use the term ‘media empire’, but I did feel a tinge of jealousy at points, reading about how they quit their day jobs to focus on the ChooseFI company, or about others who were able to retire at thirty-five on their real estate holdings.

My wife and I may have seen the light a bit later in life, but we both understand that our parents’ path of working a nine-to-five for fifty years is not the way for us. We’ve got a long way to go until we’re in the position where we’ll have the freedom to work when and where we please. If anything, ChooseFI has given us a shared perspective and an opportunity to discuss and define what that life would look like.

Quicktype and Ameritrade’s API

My life goal of automating my job out of existence continues unabated. I’ve been spending a lot of time dealing with the APIs of the various vendors that we work with, and much of that time poring over JSON responses. Most of these are multi-level structures, which usually leads to clunky accessor code like object['element']['element']. I’d much rather use the more elegant dot notation, object.element.element, but getting from JSON to objects hasn’t been something I’ve wanted to spend much time on. Sure, there are a few ways to do this using standard Python, but Quicktype is by far the best solution out there.

I’ve been using the web-based version for the past few days to create an object library for Ameritrade’s API. Now, I’m probably going overboard and violating YAGNI (you ain’t gonna need it) principles by trying to include everything that the API can return, but it’s been a good excuse to learn more about JSON schemas.

JSON schema with the resultant Python code on the right.

One thing I wish I’d caught earlier is that the recommended workflow in Quicktype is to start with example JSON data and convert it to a JSON schema before going from that schema to your target language. I’d been trying to go straight from JSON to Python, and there were some problems. First off, the Ameritrade schema has a lot more types than I’ll need: there are two subclasses of securities account, and five different ones for the various instrument classes. I only need a small subset of that, but thankfully Quicktype automatically combines these together. Secondly, Ameritrade’s response summaries, both the schemas and the JSON examples, aren’t grouped together in a way that can be parsed efficiently. I spent countless hours trying to combine things into a schema that was properly referenced and would compile.

But boy, once it did. Quicktype does a great job of generating code that parses JSON into Python objects. There are handlers for all of the various data types, and Quicktype will actually type check everything, from ints to lists, dicts to unions (for handling Nones), and will serialize classes back out to JSON as well. Sub-object parsing works very well. And even if you don’t do Python, it outputs to an impressive number of languages.
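To give a sense of the shape of that output, here’s a hand-written sketch in the style of Quicktype’s generated Python (the Position class and its field names are my own invention, not Ameritrade’s actual schema): every field passes through a small type-asserting helper, so malformed JSON fails loudly instead of producing a mistyped object.

```python
# Sketch in the style of Quicktype's generated Python (hypothetical types).
from dataclasses import dataclass
from typing import Any


def from_str(x: Any) -> str:
    assert isinstance(x, str)
    return x


def from_float(x: Any) -> float:
    # Accept ints too, since JSON examples often write 0 where 0.0 is meant
    assert isinstance(x, (int, float)) and not isinstance(x, bool)
    return float(x)


@dataclass
class Position:
    symbol: str
    market_value: float

    @staticmethod
    def from_dict(obj: Any) -> "Position":
        assert isinstance(obj, dict)
        return Position(
            symbol=from_str(obj["symbol"]),
            market_value=from_float(obj["marketValue"]),
        )


pos = Position.from_dict({"symbol": "AAPL", "marketValue": 1234})
print(pos.market_value)  # 1234.0, with dot access instead of dict lookups
```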

One problem stemming from my decision to use Ameritrade’s response summary JSON instead of their schema is that the example code uses 0 instead of 0.0 where a float would be applicable. This led to Quicktype generating its own schema using integers instead of the JSON Schema float equivalent, number. Additionally, Ameritrade doesn’t designate any properties as required, whereas Quicktype assumes everything in your example JSON is, which has led to a lot of failed tests.

Next, I’ll likely figure out how to run Quicktype locally via the CLI and set up some sort of build process to keep my object code in sync with my schema definitions. There’s been a lot of copypasta going on the past few days, and having it auto-update and run tests when the schema changes seems like a good pipeline opportunity. I’ve also got to spend some more time understanding how to tie together complex schemas. Ameritrade’s documentation isn’t up to standard, so figuring out how to break them up into separate JSON objects and reference them efficiently will be crucial if I’m going to finish converting the endpoints I need for my project.

That said, Quicktype is a phenomenal tool, and one that I am probably going to use for other projects that interface with REST APIs.

Stock price forecasting using FB’s Prophet: Part 3

In our previous posts (part 1, part 2) we showed how to get historical stock data from the Alpha Vantage API, use Pickle to cache it, and how to prep it in Pandas. Now we are ready to throw it into Prophet!

So, after loading our main.py file, we get ticker data by passing the stock symbol to our get_symbol function, which will check the cache and get daily data going back as far as is available via AlphaVantage.

>>> symbol = "ARKK"
>>> ticker = get_symbol(symbol)
./cache/ARKK_2019_10_19.pickle not found
{'1. Information': 'Daily Prices (open, high, low, close) and Volumes', '2. Symbol': 'ARKK', '3. Last Refreshed': '2019-10-18', '4. Output Size': 'Full size', '5. Time Zone': 'US/Eastern'}
{'1: Symbol': 'ARKK', '2: Indicator': 'Simple Moving Average (SMA)', '3: Last Refreshed': '2019-10-18', '4: Interval': 'daily', '5: Time Period': 60, '6: Series Type': 'close', '7: Time Zone': 'US/Eastern'}
{'1: Symbol': 'ARKK', '2: Indicator': 'Relative Strength Index (RSI)', '3: Last Refreshed': '2019-10-18', '4: Interval': 'daily', '5: Time Period': 60, '6: Series Type': 'close', '7: Time Zone': 'US/Eastern Time'}
./cache/ARKK_2019_10_19.pickle saved

Running Prophet

We’re not going to do anything to the original code here other than wrap it in a function that we can call again later. Our alpha_df_to_prophet_df() function renames the datetime index and close-price columns to the names Prophet expects. You can follow the original Medium post for an explanation of what’s going on; we just want the fitted history and forecast dataframes in our return statement.
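For reference, alpha_df_to_prophet_df() amounts to a rename; a plausible sketch, assuming the Alpha Vantage frame uses a ‘date’ index and a ‘4. close’ column as shown later in this post:

```python
import pandas as pd


def alpha_df_to_prophet_df(ticker):
    # Prophet wants exactly two columns: 'ds' (datestamp) and 'y' (value)
    df = ticker.reset_index().rename(columns={'date': 'ds', '4. close': 'y'})
    return df[['ds', 'y']]


# Tiny demo frame shaped like the Alpha Vantage data:
demo = pd.DataFrame({'4. close': [42.8, 43.1]},
                    index=pd.to_datetime(['2019-10-16', '2019-10-17']))
demo.index.name = 'date'
print(alpha_df_to_prophet_df(demo).columns.tolist())  # ['ds', 'y']
```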

def prophet(ticker, fcast_time=360):
    # Rename the index/close columns to the 'ds'/'y' names Prophet expects
    ticker = alpha_df_to_prophet_df(ticker)
    df_prophet = Prophet(changepoint_prior_scale=0.15, daily_seasonality=True)
    df_prophet.fit(ticker)
    # Extend the frame fcast_time days into the future, then predict over it
    df_forecast = df_prophet.make_future_dataframe(periods=fcast_time, freq='D')
    df_forecast = df_prophet.predict(df_forecast)
    return df_prophet, df_forecast

>>> df_prophet, df_forecast = prophet(ticker)
Initial log joint probability = -11.1039
    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes 
      99       3671.96       0.11449       1846.88           1           1      120   
...
    3510       3840.64   3.79916e-06       20.3995   7.815e-08       0.001     4818  LS failed, Hessian reset 
    3534       3840.64   1.38592e-06       16.2122           1           1     4851   
Optimization terminated normally: 
  Convergence detected: relative gradient magnitude is below tolerance

The whole process runs within a minute. Even twenty years of Google daily data can be processed quickly.

The last thing we want to do is concat the forecast data back onto the original ticker data and Pickle it back to our file system. We rename our index back to ‘date’, as it was before we modified it, then join it to the original Alpha Vantage data.

def concat(ticker, df_forecast):
    # Restore the 'date' index and keep only the forecast columns
    df = df_forecast.rename(columns={'ds': 'date'}).set_index('date')[['trend', 'yhat_lower', 'yhat_upper', 'yhat']]
    frames = [ticker, df]
    result = pd.concat(frames, axis=1)
    return result

Seeing the results

Since these are Pandas dataframes, we can use matplotlib to see the results, and Prophet also includes Plotly support. But as someone who looks at live charts in TradingView throughout the day, I’d like something more responsive. So we loaded the Bokeh library and created the following function to match.

ARKK plot using matplotlib. Static only.
ARKK plot in Plotly. Not great. UI is clunky and doesn’t work well in my dev VM browser.
# Requires: from bokeh.plotting import figure, output_file, save
def prophet_bokeh(df_prophet, df_forecast):
    p = figure(x_axis_type='datetime')
    # Shaded band between the lower and upper yhat bounds
    p.varea(y1='yhat_lower', y2='yhat_upper', x='ds', color='#0072B2', source=df_forecast, fill_alpha=0.2)
    p.line(df_prophet.history['ds'].dt.to_pydatetime(), df_prophet.history['y'], legend="History", line_color="black")
    p.line(df_forecast.ds, df_forecast.yhat, legend="Forecast", line_color='#0072B2')
    save(p)

>>> output_file("./charts/{}.html".format(symbol), title=symbol)
>>> prophet_bokeh(df_prophet, df_forecast)
ARKK plot in Bokeh. Can easily zoom and pan. Lovely.

Putting it all together

Our ultimate goal here is to be able to process large batches of stocks, downloading the data from AV and processing it in Prophet in one go. For our initial run, we decided to start with the bundle of stocks in the ARK Innovation ETF. So we copied the holdings into a Python list and created a couple of functions: one to process an individual stock, and another to process the list. Everything in the first function should be familiar except for two things. One, we added a check for the ‘yhat’ column to make sure that we didn’t inadvertently reprocess any individual stocks while we were debugging. Two, we refactored get_filename, which just adds the stock ticker plus today’s date to a string. It’s used in get_symbol during the Alpha Vantage call, as well as here when we save the Prophet-ized data back to the cache.

def process(symbol):
    ticker = get_symbol(symbol)
    if 'yhat' in ticker:
        print("DF exists, exiting")
        return
    df_prophet, df_forecast = prophet(ticker)
    output_file("./charts/{}.html".format(symbol), title=symbol)
    prophet_bokeh(df_prophet, df_forecast)
    result = concat(ticker, df_forecast)
    file = get_filename(symbol, CACHE_DIR) + '.pickle'
    pickle.dump(result, open(file, "wb"))
    return

Finally, our process_list function. We hit a bit of a wrinkle at first. Since we’re using the free AlphaVantage API, we’re limited to five API calls per minute, and since each get_symbol() call makes three of them, we get an exception if we run the loop more than once in sixty seconds. I could have just gotten rid of the SMA and RSI calls, but ultimately decided to calculate the duration of each loop and sleep until the minute was up. Obviously not the most elegant solution, but it works.

def process_list(symbol_list):
    for symbol in symbol_list:
        start = time.time()
        process(symbol)
        elapsed = time.time() - start
        print("Finished processing {} in {}".format(symbol, elapsed))
        # Cache hits (< 1s) make no API calls, and anything over a minute
        # has already satisfied the rate limit; otherwise sleep off the
        # remainder of the minute.
        if 1 <= elapsed <= 60:
            print('Waiting...')
            time.sleep(60 - elapsed)

So from there we just pass our list of ARKK stocks, go for a bio-break, and when we come back we’ve got a cache of Pickled Pandas data and Bokeh plots for about thirty stocks.

Where do we go now

Now, I’m not putting too much faith in the Prophet results; we didn’t do any customization, and we just wanted to see what we could do with it. In the days since I started writing up this series, I’ve been thinking about ways to rank the winners of the plots via a function call. So far I’ve come up with this discount function, which measures where the current price of an asset sits relative to Prophet’s yhat prediction band.

Continuing with ARKK:

def calculate_discount(current, minimum, maximum):
    return (current - minimum) * 100 / (maximum - minimum)

>>> result['discount'] = calculate_discount(result['4. close'], result['yhat_lower'], result['yhat_upper'])
>>> result.loc['20191016']
1. open           42.990000
2. high           43.080000
3. low            42.694000
4. close          42.800000
5. volume     188400.000000
SMA               44.409800
RSI               47.424600
trend             41.344573
yhat_lower        40.632873
yhat_upper        43.647911
yhat              42.122355
discount          71.877276
Name: 2019-10-16 00:00:00, dtype: float64

A negative discount indicates that the current price is below the prediction band and may be a buy; likewise, anything over 100 is above the prediction range and is overpriced, according to the model. We did ultimately pick two of the ARKK holdings that were well below the prediction range and showed a promising long-term forecast, and we’ve started scaling in modestly while we see how things play out.
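A few hypothetical prices against the band from the ARKK row above make the interpretation concrete (the band values are rounded from that output):

```python
def calculate_discount(current, minimum, maximum):
    # Position of the current price within [minimum, maximum], as a percentage
    return (current - minimum) * 100 / (maximum - minimum)

# Using the rounded yhat band from the ARKK row above (40.63 to 43.65):
print(calculate_discount(42.80, 40.63, 43.65))  # ~71.9: inside the band
print(calculate_discount(39.00, 40.63, 43.65))  # negative: below the band, possible buy
print(calculate_discount(45.00, 40.63, 43.65))  # over 100: above the band, overpriced
```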

If we were more cautious, we’d do more backtesting, running limited time slices through Prophet and comparing forecast accuracy against the historical data. Additionally, we’d like to figure out a way to weigh our discount calculation against the accuracy projections.

There’s much more to explore off of the original Medium post. We haven’t even gotten into integrating Alpha Vantage’s cryptoasset calls, nor have we done any of the validation and performance metrics that are part of the tutorial. Parts 4 and 5 of this series may well follow. Ultimately, though, our interest is in getting into actual machine learning frameworks such as TensorFlow and seeing what we can come up with there. While we understand the danger of placing too much weight on trained models, I do think there may be value in using these frameworks as screeners. Coupled with the value averaging algorithm that we discussed here previously, we may have a good strategy for long-term investing. And anything that I can quantify and remove the emotional factor from is good as well.


I’ve learned so much doing this small project. I’m not sure how much more we’ll do with Prophet per se, but the Alpha Vantage API is very useful, and I’m guessing that I’ll be doing a lot more with Bokeh in the future. During the last week I’ve also discovered a new Python project that aims to provide a unified framework for coupling various equity and crypto exchange APIs with pluggable ML components, and use them to execute various trading strategies. Watch this space for discussion on that soon.

Choosing when, and how to sell

Knowing when to sell is one of the problems I struggle with as a trader, both in equities and crypto markets. In the past, I’ve relied on what the Motley Fool has referred to as a ‘buy and hold forever’ strategy, and it’s worked out well for me with some of the bigger tech stocks. As someone who’s been focusing more on shorter time frames lately, I’ve been having less success. And after watching my crypto portfolio break six figures in the winter of 2017 before crashing almost ninety percent, I’ve been trying to find a way to ensure that I’m able to actually take some profits when my positions start taking those parabolic runs.

One of the interesting metrics around the USD price of Bitcoin is the Mayer Multiple: the current BTC price divided by its 200-day moving average. Trace Mayer determined that, historically speaking, the best long-term strategy was to accumulate BTC when the Mayer Multiple was under 2.4. For comparison, the last time we hit that level was late June of this year, when BTC hit $14K. I have traditionally dollar-cost averaged into BTC weekly, though I had to stop my contributions for a variety of market and personal reasons. One plan I’ve been considering is to sell a large share of my position when the MM hits 2.88, a number I came up with just by looking at the charts; it currently works out to about $26,200.
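The multiple itself is trivial to compute from a daily close series; a pandas sketch (the toy numbers here are mine, just to show the threshold check):

```python
import pandas as pd


def mayer_multiple(close: pd.Series) -> pd.Series:
    # Current price divided by its 200-day simple moving average
    return close / close.rolling(window=200).mean()


# Toy series: flat at $100 for 200 days, then a spike to $300.
close = pd.Series([100.0] * 200 + [300.0])
mm = mayer_multiple(close)
print(mm.iloc[199])        # 1.0: price sits exactly on its 200-day average
print(mm.iloc[200] > 2.4)  # True: past the historical 'stop accumulating' level
```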

So I was really interested in this strategy put together by former BlackRock portfolio manager Vishal Karir on how to take profits before the next bitcoin recession. I’m not going to rehash the entire piece here; suffice it to say it sets a static accumulation target, buys when the asset value is below that target, and sells when it’s above it. I wanted to do some backtesting with some of my assets and see what I came up with, so I wrote up the following code in a Google Colab doc so I could start playing around with it.

Instead of looking at BTC directly, I wanted to start with the Grayscale Bitcoin Trust, GBTC, so for this block we’re using Pandas’ wonderful datareader to pull quotes from AlphaVantage.

import os
from dateutil.parser import parse  # parse() is used in get_asset_data below
import pandas_datareader.data as web

!pip install ipywidgets
import ipywidgets as widgets
from ipywidgets import interact, interact_manual

os.environ['ALPHAVANTAGE_API_KEY'] = 'my_key'

f = ""

time_series = [
    ("Intraday Time Series", "av-intraday"),
    ("Daily Time Series", "av-daily"),
    ("Daily Time Series (Adjusted)", "av-daily-adjusted"),
    ("Weekly Time Series", "av-weekly"),
    ("Weekly Time Series (Adjusted)", "av-weekly-adjusted"),
    ("Monthly Time Series", "av-monthly"),
    ("Monthly Time Series (Adjusted)", "av-monthly-adjusted")
]


def get_asset_data(ticker, time_frame="av-weekly", start_date="2017-01-01", end_date="2019-09-30"):
  global f
  f = web.DataReader(ticker,
                     time_frame,
                     start=parse(start_date),
                     end=parse(end_date))
  return f

interact_manual(get_asset_data, ticker="", time_frame=time_series)

We’re using f as a global to store the stock dataframe. We’re also using ipywidgets so we can easily change the parameters of the data we want to run against.


The next cell allows us to pass this previous dataframe to our backtest function. I wanted to play with various parameters such as the amount of capital available, the max contribution, and the ratio of max contributions to max sell amount.

def run_karir_target(price_data, capital, contribute=100, sell_factor=2):

  def get_asset_value(price):
    return shares * price

  def buy(amount, price):
    # Positive amounts buy (capped at the weekly contribution);
    # negative amounts sell (capped at sell_factor * contribution).
    nonlocal shares, cash
    if amount > 0:
      if amount > contribute:
        amount = contribute
      action = "BUY"
    else:
      if amount < -sell:
        amount = -sell
      action = "SELL"

    num_shares = amount / price
    print("{} {} shares for {}".format(action, num_shares, amount))

    cash -= amount
    shares += num_shares

  f = price_data
  cash = capital
  shares = 0
  sell = sell_factor * contribute
  week = 1

  for i, j in f.iterrows():
    if cash < 0:
      print("Out of cash!")
      break
    price = j['open']
    print("Week {}: {}".format(week, i))

    # The accumulation target grows linearly: $contribute per week
    target = week * contribute
    value = get_asset_value(price)
    print("Target: {} Asset value = {}".format(target, value))

    # Buy the shortfall, or sell the excess
    difference = target - value
    buy(difference, price)

    new_total = get_asset_value(price)
    total = cash + new_total
    print("Total: {}, Cash: {}, Shares: {}, Asset Value: {}".format(total, cash, shares, new_total))
    print()
    week += 1

Now, there’s a lot I’ll do later to clean up this code, like making the params available via a widget as I did for the first function, but for now it works just fine. Here’s a test run with $100 contributed weekly from just after the beginning of the BTC bear market until now.

Weeks 1 and 2 of our backtest
The ‘bottom’.
We’re not including the entire run, but here’s where we end up.

So, not bad for a hypothetical run: an ending value of almost $4,700 on $3,769 invested, or about a 24% return. Now, we’re cherry-picking, of course. At the bottom of the market, we would have had over $7K deployed at break-even. But Karir’s strategy opens up a whole slew of possibilities, and may be a good rule of thumb for scaling out of positions. Since getting this code working, I’ve been looking at other investments that I’ve made in equities markets, and am starting to form some hypotheses that may help guide best practices for both entries and exits.

My next step, after cleaning up this code a bit, is to figure out a way to run some regression tests on the sell multiple. I think a dynamic variable may actually be more helpful in instances where the asset goes on a parabolic run. But the point here is that I have a framework that I can test my assumptions against.

It also shouldn’t be too hard to integrate Karir’s strategy for accumulating BTC, as well as altcoin pairings. It’s an exciting investment strategy, and one that can be automated via exchange and broker APIs. There may also be some variations we can deploy, using this kind of targeted portfolio value as a way to layer limit orders. For this simulation we simply used the open price of each period, but we could calculate the price the asset would have to reach for us to sell next week, and set some orders in the present period. If the high for that period hit that level, we could simulate the trade and see how it affects our gains.
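As a rough sketch of that limit-order idea (the function name and numbers here are hypothetical, not from Karir’s piece): since a sell triggers whenever the position’s value exceeds the week’s target, the limit price is just next week’s target divided by the current share count.

```python
def sell_trigger_price(shares, week, contribute=100):
    # The accumulation target grows by $contribute each week; any price
    # that pushes the position's value above it triggers a (capped) sell.
    target = week * contribute
    return target / shares

# Holding 10 shares going into week 52 of $100/week targets:
print(sell_trigger_price(10, 52))  # 520.0: a limit sell could sit here now
```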

I can’t wait to model this and put it to use.

Saying ‘no’

Probably one of the most important things I’ve learned recently is the power of saying ‘no’. I’ve usually been gung-ho and enthusiastic when it comes to work, and I guess you could say I’ve been eager to please in a lot of respects. Part of that may be because of self-esteem issues from when I was younger, maybe the need for validation or acceptance, or the need to be liked or loved or whatever. But now, I’m at the point in my life that I don’t feel the need to please everyone, and have started being a lot more discriminating in what I take on.

I’ve mentioned before that, as a technical person, I was always the first one people came to when they had problems with their computers. Now, don’t get me wrong, I made a great career out of fixing people’s stuff, mainly because I was always fixing my own and got good at it. But after twenty years, I’ve grown tired of the support calls and of spending my time on someone’s five-year-old workstation that can’t get Outlook 2016 to work right on Windows 7, or whatever. Or someone who wants to spend hours of my time finding the absolute cheapest laptop on the market because they’d rather spend the extra two hundred dollars on lotto tickets. (I’m looking at you, dad.)

As my skills have advanced to deal with larger networks, business problems and software development, I’ve come to recognize where my most unique skills are and where I can have the greatest impact. Everything else has got to go.

I picked up David Allen’s Getting Things Done a few weeks back and started rifling through it. He was on Tim Ferriss’s podcast recently, and hearing the two of them talk was great motivation. And then Craig Groeschel had a segment last week on ‘cutting the slack‘ that mentioned both of them by name, along with his own tips. I’ve definitely been building my own ‘no’ list, things that I just won’t do anymore, and I’ve been very clear with my boss that we should not do them any more. To quote Groeschel, you “grow with your ‘nos'”.

Now that my political candidate ‘career’ is over (for the foreseeable future), I’ve been able to focus on a lot of things that I had put on hold for several months during the campaign. I’ve spent more time with my family, caught up on house projects, and I can focus on finishing my degree. But I’ve been asked about filling a leadership position in two of my local parties. The idea appeals to me for several reasons, but I told the first one that I had to consider it, and turned down the second offer outright. My first thought was what it would mean to have a democratic socialist as the chair of the local Democratic party. It seems to align with what I want to accomplish, but I’m still on the fence about the effectiveness of traditional electoral politics at this point. I’ll have to save that discussion for another post, but the entire state party will be reorganizing this winter, and it seems like a big opportunity for DSA types to start gaining influence.

I’ve also been working with a blockchain project that I’ve been asked to take over. It’s not really that flattering, as the sole developer and originator of the project quit and I’m the only other person who’s looked at the code. I was asked to take over formally, and I had to say no, for a variety of reasons related to governance and technical debt — another post coming on that one as well, I’m sure. But even as I was saying no to the person asking, we were exploring the possibility of a new project built on the ashes of the old one. This new one would start fresh, with a proper governance model, and follow a more formal design and test-driven development process than the one that left the original in a crippled state.

In all, this is part of a broader process that I am engaging in with my wife to streamline our lives, reduce our clutter, and focus on what’s really important. We’ve decided that we are no longer buying into the American dream, and are finding ways to exit our salaried jobs, sell our big house, get rid of the mortgage and debt, and do what we do while seeing the world.

Our goal is to be FIRE: financially independent and retire early, and saying ‘no’ is how I’m going to get there.

Trade wars and crypto gainz

The last couple days have been pretty interesting in the cryptocurrency and equities markets. The Fed lowered interest rates for the first time since 2008, back before Bitcoin had been created, and CryptoTwitter has been abuzz about what it means. Most see it as a devaluing of the US dollar, and ultimately good for Bitcoin. The Trump tariff chaos is also being seen as ultimately good for Bitcoin, assuming that its ‘store of value’ thesis holds up and it’s used as a hedge against US markets tanking. That’s how I’ve been acting, anyways.

The US economy — at least as measured by GDP and the S&P; we’ll leave broader economic issues for another day — has been in a bull market for much longer than is typical, and prominent investors believe that we are overdue for a pullback. There are claims that the Administration fears a downturn before the next US election, and that the fight between Trump and the Federal Reserve over interest rates is because the President wants to give the economy a boost, in the hopes that any impact won’t be felt until after his re-election. With last week’s rate cut, it looks like he won that fight, but we’ll see if things play out like he wants.

The lower interest rate will have a devaluing effect on the dollar, and seems to have pissed off the Chinese, who have since devalued the Yuan in response. This sent Bitcoin prices flying as the Asian markets opened, and BTC continued climbing after the US markets opened and proceeded to tank about two percent. Whether the Bitcoin price increase is due to capital flight, Asian investors hedging their capital, or just speculators anticipating such a move remains to be seen.