Black hole

Yesterday was a decent day. I worked on our automation scripts that we’ll use to update our Serum indexer and front ends. I haven’t figured out quite how to fully automate it yet, but now I’m generating JSON and TypeScript files directly instead of using print statements to output them on the screen. I’ve got two more functions left to do and I’ll be happy. I’ll just need to copy the files into the respective repos and push them up to make the changes stick.

There are a bunch of hoops that I have to go through though. The JSON feed that we’re using doesn’t have all the markets that we want, so I have a prelude of sorts that I’m appending the dynamically generated content to. On top of that I have to modify the dynamic content as well to correct a perplexing design decision — the same symbol for two items. I’m trying not to get caught up in too many optimizations at this point, but I will need to prepare for some changes in the base currency that’s being used. That will be a challenge since it’s so far outside of what the reference implementation will do.
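The merge-and-dedupe step can be sketched like this; the prelude entries and symbols here are invented, not our real data:

```javascript
// Hypothetical sketch of the feed-merging step: prepend a hand-maintained
// prelude of markets missing from the JSON feed, then disambiguate any
// symbol that the upstream feed reuses. Names and addresses are made up.
const prelude = [
  { symbol: 'FOOD', address: 'PreludeMint1111111111111111111111111111111' },
];

function buildMarketList(feed) {
  const seen = new Set();
  return [...prelude, ...feed].map((m) => {
    let symbol = m.symbol;
    while (seen.has(symbol)) symbol += '2'; // crude fix for the duplicated symbol
    seen.add(symbol);
    return { ...m, symbol };
  });
}

const out = buildMarketList([
  { symbol: 'TOOL', address: 'MintAAA' },
  { symbol: 'TOOL', address: 'MintBBB' }, // upstream reuses this symbol
]);
console.log(out.map((m) => m.symbol)); // [ 'FOOD', 'TOOL', 'TOOL2' ]
```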

Right now the current markets are all tied to USDC, but Atlas Co. is planning on creating ATLAS ones as well. We currently have the markets identified only by the NFT assets, but when the ATLAS markets go live we’ll need to distinguish between the two. I already know how to update the tickers in our Serum indexer; I just need to update the Redis key rename script that I used previously and I can gangload the changes. Updating the market itself can be done relatively easily once the feed has been updated, but the question is how to build the best user experience for distinguishing between the two.
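The rename itself is mechanical; a hedged sketch, assuming the indexer currently keys markets by asset name alone (the real key scheme may differ):

```javascript
// Assumed key scheme for illustration only: keys like "market:FOOD" get
// tagged with their quote currency. The resulting map could then drive a
// redis-cli RENAME loop against the live database.
function renameForQuote(keys, quote) {
  const renames = {};
  for (const key of keys) {
    if (!key.includes('/')) renames[key] = `${key}/${quote}`; // tag with quote currency
  }
  return renames;
}

console.log(renameForQuote(['market:FOOD', 'market:FUEL'], 'USDC'));
// { 'market:FOOD': 'market:FOOD/USDC', 'market:FUEL': 'market:FUEL/USDC' }
```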

The current market list is already really crowded. We’ve got at least 78 items in the list already. Doubling everything would just be too much. Splitting the market lists with some sort of top-level selector makes more sense, but I’m already dreading the program logic required to do such a thing.
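One way to sketch the selector approach without doubling the visible list (market names are placeholders):

```javascript
// Keep a single combined list, tag every market with its quote currency,
// and let a top-level selector filter it in the UI.
const markets = [
  { name: 'FOOD/USDC', quote: 'USDC' },
  { name: 'FOOD/ATLAS', quote: 'ATLAS' },
  { name: 'FUEL/USDC', quote: 'USDC' },
];

const byQuote = (quote) => markets.filter((m) => m.quote === quote);

console.log(byQuote('ATLAS').map((m) => m.name)); // [ 'FOOD/ATLAS' ]
```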

I also need to update the DEX code to make sure that we can even take fees in another base currency. It’s currently coded for USDC and USDT, so I need to add some parameters for ATLAS as the base currency. It shouldn’t be too bad; Raydium currently has several markets for their token, so I know it’s not impossible.

I have a lot of things to add to the exchange project Kanban board, that’s for sure. I’m not sure how much I’ll prioritize these things as I have a couple other Solana-related projects that I’ll be looking at. One is to really delve into the RPC API and start looking at accounts and transactions to see if I can recreate history. The other is to explore the Metaplex monorepo and figure out how Candy Machine and Fair Launch work.

I’m really getting pulled in now.


Hopefully today can be a bit of a reset from yesterday, which just wasn’t that great of a day. I got some good sleep — Younger slept in her bed all night yay! — and didn’t waste an hour at the bus stop this morning. I meditated and have my tea, and am ready to sit down and start cranking out some work.

The question at this point is: on what? The exchange is up, but it’s not quite racking up the referral fees. We did the math: we need $3.2m of post orders to clear on there if we’re going to clear a profit on it, which could take a while. We don’t really have a marketing plan, so unless I’m going to spend all day pumping it in the official SA market channels, it’s not going to generate the needed traffic. There’s a list of things that we can improve on; my main concern is getting the cost down on our EC2 instance, but that doesn’t really matter if we still need to keep a $500/month RPC endpoint. I spun up a new instance last night and tested using ElastiCache instead of our self-hosted Redis, but I need to figure out how to migrate the production data set to it and switch everything over. Plus I still need to figure out how to automatically update the markets when Atlas Co. adds them.

I could spend a lot more time working on it. The charts are acting weird, they’re not showing gaps if there are no sales, and are showing strange volume activity. I’m not sure how accurate they are. And there are a lot of cosmetic enhancements that we could add by updating descriptions and making other things more user friendly, but I’m not sure how much time to spend toward that.

One thing that I do think would be helpful long term would be for me to work on pulling historical data from Solana using Quiknode’s new endpoint. I have one from a friend that I can use, but I’d likely be starting from first principles to connect to the API, pull and collate data. I’d probably want to do that using Python, which would pull me away from the JS work that I’ve been doing. Still, working on this could potentially solve several problems: rebuilding trade data for Serum markets and providing cost basis for wallet transactions, just to start. So even if I couldn’t figure out how to integrate it into the exchange, it could be beneficial for PnL calculations, which would help me as dao manager. I would like to be able to rebuild Serum market trade data though, and who knows what else we could figure out with on-chain data.
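Whatever language it ends up in, step one looks the same: ask an RPC node for an address’s transaction signatures. A sketch (in JS for consistency with the rest of the stack; the address and endpoint are placeholders):

```javascript
// Build the JSON-RPC request body for getSignaturesForAddress, a documented
// Solana RPC method. The account address below is a made-up placeholder.
function rpcRequest(method, params) {
  return { jsonrpc: '2.0', id: 1, method, params };
}

const body = rpcRequest('getSignaturesForAddress', [
  'MarketAddress1111111111111111111111111111', // placeholder account
  { limit: 1000 }, // page backwards by adding `before: <last signature>`
]);
console.log(JSON.stringify(body));
```

POST that to the RPC endpoint, then feed each returned signature to getTransaction and collate the decoded instructions.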

There’s a lot of administrative stuff that I need to do with wallet management. I alone control the keys for the ALPHA hotwallet and need to get a Shamir’s secret sharing scheme set up so that I don’t disappear and leave everyone hung out to dry. I’ve also got the wallet acquisition funds that need to be transferred over to the multisig vaults. I need to convert the privkey to a format that I can load in the CLI and transfer everything over. I’m delaying on that because of the cost of creating new token mints for each of the assets.

And to top it off, the most recent new shiny thing: Metaplex. I’ve been asked to assist in a new top secret NFT project that’s related to. Now this thing pushes all sorts of buttons for me, but it’s very complicated. It involves Solana contract code that I might have to deploy, new frameworks like Next.js, and deployment vendors. There’s new architecture involved with it as well, and the synergy with everything else is quite tempting. Man, is it tempting.

And we don’t even have the Star Atlas mini-game ready. There’s so much chatter in my various groups about how we’re going to build tooling (read: bots) for that, but I think I’m going to leave that to others for now. They’re trying to figure out how to use Selenium with the web3 library to keep fleets running round the clock. There’s only so much we know with that one, so we’ll probably put it on the back burner.

Well, writing this out has made me realize what’s the most important thing I can be doing right now. I need to get to it.


Today has gotten off to a bad start.

I went to bed on time and got a decent sleep. I woke up to get Younger ready for school and she gave me a hard time, but I got her down to the bus stop. Yesterday I got a text message from the school that the bus would be late, so I wound up taking her in. There was a similar message this morning but I decided to wait it out as it had only been about fifteen minutes late yesterday. Not so today.

There had been a story in the local newspaper this weekend about restaurants closing because they couldn’t find workers. I’ve been hearing anecdotes and news reports all over about it. People don’t want to work at bullshit jobs anymore. Partially because of the stimulus, COVID fears, and whatever other factors, the Great Resignation is making it hard to fill these jobs. And yes, let’s not forget that I myself have opted out to do ... well, what is it I’m doing now?

So I don’t know why I just decided to write a long Tweetstorm on this instead of here, but I felt like putting some thoughts down about this supply-chain and labor shortage. I didn’t want to say the word Weimar, but that’s what I’m thinking, of course.

Anyways, my day was made worse by the fact that I forgot Elder and I have a dentist appointment this morning and that I was supposed to drive Missus into the office. That’s in an hour; then I need to drop her off at school, try to get some work done in two hours, pick up Younger at the bus stop, come home, work for two hours, then pick up Missus and Elder.

Yea, I’m not getting a lot done today.


I’m feeling great this morning, mostly because I went to bed at a decent hour and slept in the big bed all by myself. Woke up at 5:30 because I told myself I wanted to go running this morning, but laid in bed for an hour anyways before getting Younger up for school. I had a message on my phone from the school that the bus would be late, so I drove her in and was back while people were still standing at the stop.

I decided I’d go for a run anyways. I was a little worried about my left foot, which had given me problems on my last run and was a bit off, so I tried to cushion it by hitting with more of a mid-strike. I had listened to someone on Profit Maximalist who ran a lot of hundred-mile races, and he’d mentioned that mid or fore-strikes were the way to go long, so we’ll see how it works out. I got back to an empty house, so I did my morning meditation and now here I am.

My brain is pretty much thinking about Star Atlas or Solana now. I was making some changes to the exchange code last night, talking with people about pulling data from Solana, figuring out how we’re going to automate the minigame for profit. Even while I was trying to be still this morning, my brain was just pounding along trying to figure out various things. The problem now isn’t lack of ideas, it’s figuring out which one is the most important and needs my immediate focus. That’s tough.

I finally finished the deck yesterday, spraying the last of it. It looks pretty good, but I can’t say I’m proud of the job. I’m just glad it’s done. Looking back, I’m not sure whether I should have done the project myself or paid someone to do it. I have no idea how many hours I spent on it. And while I might have saved some cash doing it myself, I’m not sure it was the best use of my time. I just have to think of it as exercise, or homesteading. I’ve still got some cleanup to do to get some stain off of the siding, but I’m ready to move on to my next home project, which will be cutting down a tree and some overgrown bushes in the front yard so Missus can redo the landscaping. I still want to build out the side yard for a bigger garden, but I’ve got a lot of stuff to cut down first.

It’s Monday, which means I need to do the weekly dao status report. That usually takes me till lunch, then I need to look at a UI bug on the exchange that has to do with a new design we pushed to dev last night. I’ve got lots of ideas about how to integrate some of the asset metadata into the site. I need to work on our Serum market indexer, it needs backing up, or balancing; I really want to get the database into ElastiCache. And there’s a whole list of stuff I need to look at with Solana with regard to historical data that I need to figure out, and long term I need to be able to write my own contracts. There’s a lot going on, but I feel like I can do anything at this point.


Well I will see how well I can write this morning given the early morning slumber party going on in the den right now. Yesterday was Eldest’s birthday, and instead of a birthday party we let her BFF stay over. The two of them and Younger have been up the last two hours having a dance party listening to songs, singing along loudly and stomping around the room. It’s cute, but very distracting.

I spent last night delving into the internals of Redis, trying to figure out how to clear out some market data that got entered incorrectly due to an error on my part. I got that done while I was trying to figure out how to get historical data for the Serum event queue. It’s a circular buffer, so once it clears out it seems the only way to get the data is via the Solana explorer, feeding it into the Serum JS library to replay and decode the instructions. I think anyways. I’m waiting for confirmation, but I have no idea whether such a tool exists or whether we’d have to build one from scratch.
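The circular-buffer behavior is easy to model; here’s a toy sketch of the shape of the problem, not the actual Serum queue layout:

```javascript
// Toy model of why the event queue is lossy: it's a fixed-size ring, so once
// the write head wraps around, the oldest events are overwritten and can only
// be recovered by replaying the original transactions.
class RingBuffer {
  constructor(size) {
    this.size = size;
    this.buf = new Array(size);
    this.head = 0;
  }
  push(ev) {
    this.buf[this.head % this.size] = ev; // overwrite the oldest slot
    this.head += 1;
  }
  events() {
    return this.buf.filter((e) => e !== undefined);
  }
}

const queue = new RingBuffer(3);
[1, 2, 3, 4, 5].forEach((e) => queue.push(e));
console.log(queue.events()); // events 1 and 2 are gone for good
```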

I went to bed reading over the Solana framework docs. I managed to skim through last night and learned a bit. I really don’t care much about most of it, but it did confirm some of my hunches about the ephemeral nature of Solana. A validator node cannot be expected to store a complete copy of the Solana blockchain the way a Bitcoin or Ethereum node does. Apparently they’re going to use Google’s BigTable as a storage medium for data after six months or so.

I woke up and immediately picked up my iPad and started reading the developer docs again, and took a look at the spl-token code again. It made a lot more sense this time; I guess I’ve started picking up some Rust skills after all. One thing that has finally made sense is that account data in Solana is opaque, meaning that it has to be interpreted by the program that it is intended for. Looking at the data in the Explorer or SolScan is only going to get you so far in figuring out what the hell is going on.

Coming from Python, I’ve been struggling with ways to make the NodeJS REPL work in a similar way. import statements aren’t really allowed in the REPL, although require calls are. Trying to load functions in real time and use them to debug live, on-chain data is a bit of a challenge. I made a couple of inroads with that last night, and was able to do a bit of playing around in the serum-history module to see exactly what it was doing, but it still confirmed my hunch that Serum event data was ephemeral.

I see a lot of opportunity here if we can build a module that can recreate event data from Serum marketplaces; tracking cost basis is an obvious example. I can tell you that tax season for Solana is going to be almost impossible for people that aren’t taking good notes.


So yesterday was a really proud day for me as we shipped StarAtlas.Exchange to the public. It was the culmination of over three weeks of work by SAIAdao/IA members and myself, and I couldn’t be more proud. I wanted to get it live before the town hall with the Star Atlas team, and was a bit worried that we were going to demo the site with some broken functionality, but we had a couple of fortunate breakthroughs and were able to ship the site completely working and bug free!

The idea came together over the past few weeks as I started putting some tooling in place for my own personal use. We had started off trying to track the price of SAIAdao’s NFT assets and discovered that there wasn’t any simple way to get the price data into our spreadsheet. So we stood up our own Serum price history module in an EC2 instance and used that API to feed into a Google Sheet in which we had been collecting asset market and mint addresses. I figured out that I could build some charts with it as well and that this would be valuable to some people, and by then we had figured that we would just roll our own Serum front end for it and make trade fees off of it.

I had to pick up a lot of stuff to get this up and running. I’d done some React tutorials a few weeks ago so that helped, and I’ve used EC2 before as well. Wrapping my head around how Solana and Serum work has been a huge challenge though, and the long build times of our codebase have made testing changes and troubleshooting very difficult.

Solana’s public RPC issues have also been very problematic. Half the time we couldn’t tell whether our code had a bug or if there were network issues. Settlement in particular wasn’t working during nearly the entire development process, and then just magically started working early yesterday morning. We’ve had inconsistencies with trade execution and wallet operations as well, but some careful testing yesterday told us that our problems were likely due to indexing slowness via the Project Serum API. Thankfully we got the hookup from GenesysGo yesterday and now have our own private API endpoint that we can use.

And let me just say how amazing AWS Amplify is. Webhooks on our GitHub repositories trigger automatic rebuilds which are deployed to CloudFront CDNs within minutes. It’s really amazing.

I’m really impressed how quickly things came together yesterday. We got the domain set up, images and branding updated, and got a lot of support from IA and even some members of the IA community. One of our dao members was able to bring it to Star Atlas’s CEO during the town hall and we got confirmation that our efforts aren’t going to be thwarted by the team.

I’ve been busy making improvements today as well, trying to shore up our backend infrastructure and get analytics running so we can tell who’s using the site. The good news is that we can see the referral fees rolling into our USDC wallet already. It’s going to take a lot more for us to reach profitability, but we’re in a good position to reap some serious rewards once the SA mini-game launches and people start earning in-game assets.

Mints and markets

I spent way too much time working yesterday. I was trying to get the exchange working, and had some irregularities with certain markets where the token mint wasn’t working properly. I about lost my damn mind.

The Serum web3 library has two files that are relevant: one contains market data such as the name and market ID, the other a mint list that maps symbols to token mints. It seems that the Serum program is able to query the base and quote currency mints from the market on-chain, and the exchange code then does a reverse lookup to display the names.
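A toy version of that reverse lookup, with made-up addresses standing in for the real mint list:

```javascript
// Mimics the { address, name } shape of the library's mint list. When a
// market's base mint isn't in the list, there's nothing to show but a
// placeholder — which is exactly the failure mode that bit us.
const TOKEN_MINTS = [
  { address: 'AtlasMint111111111111111111111111111111111', name: 'ATLAS' },
  { address: 'UsdcMint1111111111111111111111111111111111', name: 'USDC' },
];

function symbolForMint(mintAddress) {
  const hit = TOKEN_MINTS.find((t) => t.address === mintAddress);
  return hit ? hit.name : 'UNKNOWN';
}

console.log(symbolForMint('AtlasMint111111111111111111111111111111111')); // ATLAS
console.log(symbolForMint('SomeOtherMint11111111111111111111111111111')); // UNKNOWN
```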

But for some reason, our implementation displays UNKNOWN for the bases. So my mission yesterday was to figure this out. And I failed. Problem number one was that my local development system is too unstable to run the DEX via yarn start. It caused my IDE to become very slow, and the entire system would lock up after an hour or so. So I started doing live edits on the EC2 instance where we run the API. This was less than ideal because I was having to edit through an ssh console, and it was very inefficient. Now for starters, yarn isn’t watching the serum dependency where we’re making the changes, so I’d have to restart the app each time, which takes about five or more minutes. I started getting more and more frustrated because it didn’t seem like anything I was doing was having any effect.

It wasn’t.

The original dex code uses @project-serum/serum as a dependency. We had been making changes directly within the node_modules version of this to update our mints, but this doesn’t work for source control. So the other dev I’m working with pulled it out into a root folder and included it in package.json using a file: specifier. It worked great locally and in the Amplify build — mostly — so I thought all was well. The first problem I noticed was that Amplify had node_modules cached in the yml build file, and changes in our project path didn’t get updated. So I removed the cache and it was fine. But I missed something on the EC2 instance.

It was looking to the node_modules directory during the build process. Now I’m not sure about the internal workings of yarn, but on my local dev instance, restarting yarn picks up changes in the embedded module. Not so on the EC2 instance. So late last night I ripped out the node_modules folder and tried to rebuild the app. It didn’t find our local version.
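For reference, the wiring in question looks roughly like this package.json fragment (the local folder name `./serum` is an assumption; the real path may differ):

```json
{
  "dependencies": {
    "@project-serum/serum": "file:./serum"
  }
}
```

With a file: specifier, yarn classic copies the folder into node_modules at install time rather than symlinking it, which is exactly why a stale node_modules copy can shadow fresh local edits until you reinstall.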

So that’s where I am at this morning. I either need to spin up a new EC2 instance to do this work or reset the password on this old server running under my desk. My VM is just not cutting it for this work. I’ve got several discrepancies between instances of the UI that are running, and I’m not quite sure why. I did spend a good chunk of time yesterday experimenting with different RPC servers and CORS issues, as I wasn’t quite sure whether there were network instability issues or whether I was getting rate limited, but I was having all kinds of failed calls. Serum sends a huge number of RPC calls. A good number of them are orderbook updates, but there are a lot of them that have to do with spl-token accounts.

Solana is one complicated beast.

So now it’s back to work, with an awesome hangover, no less. There’s a Star Atlas town hall later today and I had really wanted to have this thing ready to go for it. I got the card validator all set, but I really need to eliminate any crashing bugs before I give out this URL.


I got so caught up with work last night that I didn’t even think about writing until right after my head hit the pillow. I was just too exhausted to get back up, and I figured it was better for me to start writing in the mornings when I have the energy. So I’ll start prioritizing this in the AM now, before I hop on the computer and start working.

Younger is sick, she was complaining about a sore throat and headache, but I knew something was really wrong when she fell asleep before dinner; she never takes naps. We’re keeping her home from school today, but Missus is WFH Thursdays and Fridays now, so I won’t have to manage it all by myself.

Of course the big news that I want to talk about is the progress that I made with setting up the exchange. Two days ago we were able to get the front end to build in AWS Amplify, but the charts weren’t working. Yesterday I figured out that it was because the front end was behind TLS but the data feed wasn’t, and browsers won’t display mixed content. We needed to get the data feed served over TLS as well, so I went with NGINX Unit.

I’ve never used it before and had to deal with a bunch of issues to make it work. I struggled to get it working on my dev box because of some mismatched Ubuntu sources, which cost me time. It’s controlled by pushing JSON config files to a control socket via curl, which is new to me. I also had to reconfigure node because v10 was still installed as the system default (I override it with nvm) and I had to suss out that the API calls were failing because of the older version. Anyways, I got it to work locally and then got it set up on our cloud server pretty quickly.
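For the record, a minimal Unit config for a Node app looks something like the fragment below; the paths and names here are placeholders, not our actual setup. It gets pushed with something like `curl -X PUT --data-binary @config.json --unix-socket /var/run/control.unit.sock http://localhost/config` (the socket path varies by install).

```json
{
  "listeners": {
    "*:8080": { "pass": "applications/api" }
  },
  "applications": {
    "api": {
      "type": "external",
      "working_directory": "/srv/serum-history",
      "executable": "index.js"
    }
  }
}
```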

Then I just needed the cert, so I went with Let’s Encrypt and their certbot. I had setup issues. We’re using Route53 for DNS, and you’re supposed to be able to set up certbot to do the DNS validation so that things will auto-renew. I couldn’t get it to work, gave up, and did it manually. Even that took longer than it should have because I mistyped startatlas in the request and couldn’t figure out why the validation wasn’t working. So I got the cert and loaded it without too much trouble, but the site was down. By this point it was getting late, and I had given up for the night when I realized that I had applied the cert to the listener on port 80. A small change and the API was secure, and the site was functional.
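The moral of the port-80 mistake, in Unit config terms: the certificate bundle has to hang off the TLS listener, not the plain HTTP one. Roughly, with the certificate name and app path as placeholders:

```json
{
  "listeners": {
    "*:80": { "pass": "applications/api" },
    "*:443": {
      "pass": "applications/api",
      "tls": { "certificate": "staratlas-bundle" }
    }
  }
}
```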

I also spent some time cleaning up the git repo. There’s a lot of, shall we say, experimentation or troubleshooting in those commits, and I didn’t want our idiocy immortalized in the index. So I reset some branches, cherry-picked some commits and got everything looking nice and tidy. I’m surprised it worked as well as it did. I actually deleted the dev branch from GitHub and pushed it back up with the pruned version, and CI/CD did the rest. I have yet to do the same with origin/master. That’s a bit more risky than I wanted to deal with last night.
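The sequence can be demonstrated in a scratch repo so nothing real is at risk; commit contents here are invented:

```shell
# Rebuild a tidy branch from cherry-picked commits, then swap it in for the
# messy one. Runs entirely inside a throwaway repo in a temp directory.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git checkout -qb dev
git config user.email demo@example.com
git config user.name demo

echo base > app.txt && git add app.txt && git commit -qm "base"
base=$(git rev-parse HEAD)
echo good >> app.txt && git commit -qam "good change"
good=$(git rev-parse HEAD)
echo junk > debug.log && git add debug.log && git commit -qm "debugging junk"

# Rebuild: branch from the known-good base and keep only the commits worth keeping.
git checkout -qb dev-clean "$base"
git cherry-pick "$good" >/dev/null
git branch -qD dev && git branch -m dev-clean dev
git log --oneline    # only "base" and "good change" remain
```

In the real repo the last step is `git push origin :dev` to delete the remote branch and `git push -u origin dev` to send the pruned one back up, which is what lets CI/CD do the rest.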

Earlier in the morning I had attempted to move our market indexer to Amplify, and spun up a micro ElastiCache instance to replace our Redis database. I couldn’t get it to build though, which is something I need to fix, since that should be cheaper to operate than our large EC2 instance. That’s one for the backlog.

So I’ve got a few more checks to do this morning, some cosmetic changes that we’ll need to do before we’re ready to get user feedback. Right now the site is just a vanilla Serum clone with a few changes, and we need to strip out a bunch of things and customize. I plan on really digging into the Serum code to figure out some bugs, and look through any other exchange implementations that I can find to figure out who else is having problems and what’s just us.

Git add --force

So I literally spent most of the last two days trying to add the TradingView charting_library into our serum-dex-ui. I couldn’t figure out why it didn’t build properly after cloning from Git.

The TV folders were listed in .gitignore, so I removed them and checked the dirs in, but it broke when our other dev checked it out. I saw some sort of git hook on commit that I thought might be doing something, so I pulled that out and started from scratch. Nothing. I tried doing a bunch of diff checks on the directories, because git status was showing nothing. Diff showed nothing too, but I had forgotten to use -r, so it wasn’t recursing into the subdirectories. That would have shown me that two directories in the folders I was checking in, dist and lib, were missing. This is because they were also listed in the .gitignore file. It was that simple.
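The whole two-day hunt reproduces in a few lines; the paths mimic the TradingView layout and everything runs in a throwaway directory:

```shell
# Subdirectories listed in .gitignore are silently skipped by `git add`,
# and only `git add --force` brings them in.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

printf 'charting_library/dist/\ncharting_library/lib/\n' > .gitignore
mkdir -p charting_library/dist charting_library/lib
echo 'bundle' > charting_library/dist/bundle.js

git add .gitignore charting_library
git ls-files charting_library        # empty: dist/ was skipped without a word

git add -f charting_library          # --force overrides .gitignore
git ls-files charting_library        # now charting_library/dist/bundle.js is tracked
```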

I don’t know whether to blame my oversight on stupidity or overwork, but solving it was 100% tenacity.

Looks like it’s official. Lord knows how we’re going to take custody of the private keys, but most of this will get liquidated for cash and/or go straight into the multisig. I was having anxiety last night about the assets that I’ve currently got. I’m nervous enough that I bought another Ledger today. Should have just bought a three pack to send out to the rest of the VCT. Hindsight 20/20.

Overall, a pretty good day.

Monday run

It really was a beautiful morning. Felt like a long jog because of a pain on the outside of my left knee. I’m not sure whether it was overuse from all the squatting and bending that I did last night, or if it was because of a change in stance or foot alignment.

So I went right to work writing up the latest weekly status report for SAIAdao. It took all day, but the news is good.

I spent too long this afternoon trying to onboard someone to help me with the Serum DEX front end. I had included the TradingView library in our repo because I couldn’t figure out how to authenticate to pull it down, but something about the act of checking it in prevents it from being cloned from scratch again. It fails to compile. After much experimentation, all I can do is wipe the library directories and copy from source. Not sure how to automate that, but I am probably going to hound the TV team to figure out why.

I didn’t get much else done today. We had the house power washed, and an easy dinner. Fought with the fam over chores. The kids want to fight me when I ask them to clear the table, and Missus goes straight to the bedroom and her Kindle as soon as she walks in, leaving me to make sure the kids get dinner. She said there’s no argument you can make that will make me think that you work harder than me, so I guess it’s one of those agree-to-disagree situations.

Anyways, I’m not sure if I’m going to do anything else tonight other than watch Last Week Tonight. I passed out at 9:30 last night I was so worn out from this weekend, and I think getting up at 5:30 in the morning is the way to go. Or maybe I’ll try to code some more if I can promise myself that I’ll go to bed on time.