Exploring Nature & Technology

Morning journal entry, January 17. It's 8:40. The girls just left, since Tuesday is the Mrs.'s work-from-home day. I took Younger in this morning. She wanted to ride her bike. It wasn't too bad; it was about 44 degrees or so. So I rode her in without too much trouble, made it back, meditated, and here I am.

So I've got about 20 minutes just to collect my thoughts. I had a lot on my mind during meditation today. I've got a busy day at work and not a lot of time to work on other things this morning, so I'm not going to get into that in this entry.

Yesterday we took our neighbor/cousins, the T's, to a state park. We packed a picnic and I got the bikes out. We took a walking trail and it was a lot harder than I thought it was going to be. We had walkie-talkies so we could stay together and not get lost. We found this beautiful little gorge where a large tree had fallen over and kind of made a bridge across the ravine. We played on it for a bit and then took a picture. Afterwards, we ate some more and went to the playground. I took the two little girls on a bike ride around the lake, which took us about half an hour. We had to push the bikes up the hill, but got to ride down, which was pretty fast and fun.

Mrs. was ready to leave so we packed it up and headed back. It was pretty fun, although I wish we would have stayed longer. I was surprisingly sore in my legs given that it wasn’t that big of a deal. I slept great last night.

I want to talk about Starseer this morning. There's a CLI tool that integrates with GPT that I want to try. We also took another look over the weekend at the Discord bot we have set up in our staging server, and it seems to be working right now. We're hopefully going to make some progress on that, but I really want to spend some time planning what I do in the AI/ML department. We're going to need a demo by the end of this sprint, so I need to figure out what I can show now and what we can create that we can actually ship and demo. Maybe we focus on the Whisper pipeline and integrate it with Discord to make recording easier: a channel in Discord that people can go into, get the help commands, and hit a transcribe command. As they start talking, it records their audio and provides them with a text document.
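
The transcribe flow above can be sketched with stand-in pieces. Nothing here is the real bot: `recorder` and `transcriber` are hypothetical hooks for the Discord voice capture and the Whisper call.

```python
# Stdlib-only sketch of the hypothetical !transcribe flow: capture
# audio, transcribe it, and hand back a text document to attach in
# Discord. `recorder` and `transcriber` are stand-ins for the voice
# capture and the Whisper model.
import io

def handle_transcribe(recorder, transcriber):
    audio_bytes = recorder()                # e.g. voice-channel capture
    text = transcriber(audio_bytes)         # e.g. whisper model.transcribe(...)
    doc = io.BytesIO(text.encode("utf-8"))  # what the bot would attach
    doc.name = "transcript.txt"
    return doc

# Fake recorder/transcriber just to show the shape of the pipeline:
doc = handle_transcribe(lambda: b"\x00\x01", lambda _: "hello standup")
```

The real command would wire `recorder` to the Discord voice client and `transcriber` to a loaded Whisper model.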

The house is a mess. We’ve just destroyed it after we got home last night.

I am trying to get the girls to do some more video stuff. We started recording one of their science experiment kits over the weekend and gave them cameras for the trip to the park. Just trying to figure out how we want to operate, like with all these PCs and Macs and iPhones and Androids. It’s kind of a mess.

I think I'm gonna break here and get to work on work stuff, because we've got 10 minutes till nine.

A Sunday Morning Reflection

It’s Sunday morning, January 15th, and not even eight o’clock yet. It’s very unusual for me to be up this early on a Sunday, let alone doing a journal entry, but I woke up pretty early this morning and just kind of laid in bed.

Saturday went well. My mom came down Friday, and after the last recording we had a celebration of Star Atlas's two-year anniversary. After my mom got here, we went to pick up the girls and then we ate and drank. We actually played Dungeons and Dragons Saturday night. It was a very, very condensed version. I didn't want to subject Nana and the Missus to 16-plus rounds of the two-hour Dungeons and Dragons adventure game, but we managed to just do the beholder 1-4 and it still took us an hour. It still proves my point that a Star Atlas RPG adventure is a good idea.

There was a free game called GameDec (short for Game Detective) on Epic Games. I started playing it, and it's a wonderful kind of isometric adventure game in Unreal Engine, I guess; you would call it 2.5D.

We did additional tests with Starseer and gpt-index over the weekend. gpt-index had a new release that adds a PDF reader, plus some other functionality; I think it does simple web pages now too. We managed to load up the master agreement between the union and the VA, and I was able to query it. It actually worked pretty well. It did use a lot of tokens, since it's a long document, but I think my API usage was still less than a dollar yesterday.
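
For a sanity check on that bill: at the $0.02-per-1K-token text-davinci-003 rate from early 2023 (an assumption about which model and pricing applied), even a token-hungry session stays cheap.

```python
# Back-of-envelope API cost, assuming the $0.02 / 1K-token
# text-davinci-003 rate that applied in early 2023.
def query_cost(tokens_used, usd_per_1k=0.02):
    return tokens_used / 1000 * usd_per_1k

# A long contract plus a few queries might burn ~40K tokens,
# which would come to roughly $0.80.
cost = query_cost(40_000)
```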

We need to focus on the command line interface and on building a GUI to manage some of this stuff. I'm still creating these indexes manually. I still haven't been able to compose a recursive index yet, so maybe I can work on that today, hopefully.

We went roller skating yesterday. We took our friends, whom I'll start referring to as my cousins because Dan and I have that kind of relationship, down to the roller skating rink and had fun. We were mainly out there trying to figure out how to do hockey turnarounds or whatever you call them, just trying to get comfortable spinning and stuff.

After that, Nana left about one o'clock. The kids went out to play with their cousins and we just kind of chilled. The house needs some cleaning up, but it's not too bad. I am gonna do some work today. I've had some thoughts about the indexing process and the summaries. One thing I did do, which I'm not sure I've talked about before, is make file documents from the indexer that keep the original path and the original hash of the doc. I really want to focus on indexing GitHub repos and file systems. If you're indexing a file folder, it should check to see whether it's in a GitHub repo, and if it is, it should also get the hash of the repo.
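
The path-plus-hash idea might look something like this. The `.git/HEAD` parsing is a simplification (a real tool would more likely shell out to `git rev-parse HEAD`), and the metadata shape is invented for illustration.

```python
# Sketch of per-file index metadata: original path, content hash, and,
# if the file lives in a git repo, the repo's current HEAD commit.
import hashlib
from pathlib import Path

def file_hash(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def repo_head(path):
    """Walk up from `path` looking for a .git dir; return HEAD commit or None."""
    p = Path(path).resolve()
    for parent in [p] + list(p.parents):
        git_dir = parent / ".git"
        if git_dir.is_dir():
            head = (git_dir / "HEAD").read_text().strip()
            if head.startswith("ref: "):
                ref = git_dir / head[5:]
                return ref.read_text().strip() if ref.exists() else None
            return head  # detached HEAD stores the commit hash directly
    return None

def doc_metadata(path):
    return {"path": str(path),
            "sha256": file_hash(path),
            "repo_head": repo_head(path)}
```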

I don't want to call git directly from the command line, obviously, but we should be able to get some sort of information: file hashes, whether a file has changed, and then maybe even some kind of file-watching system that automatically re-indexes data when it changes. But that use case might be a little much for what we're doing.
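
A polling version of the file-watching idea is just bookkeeping over a `{path: hash}` map; a real watcher would use inotify/FSEvents via something like the `watchdog` package, so this is only the comparison step.

```python
# Rescan a set of paths and report which files need re-indexing.
import hashlib
from pathlib import Path

def scan(paths):
    """Map each path to the sha256 of its current contents."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def changed(previous, current):
    """Paths that are new or whose content hash differs since last scan."""
    return [p for p, h in current.items() if previous.get(p) != h]
```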

The test with the union agreement proves some valid theses; we're gonna be alright. If we can create a user interface that's as user-friendly as ChatGPT, we should be able to create a grievance intake process where the website actually prompts the user for information, guides them with questions until we've refined the data the grievance form needs, and then examines the user's problem against the master agreement.
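
The guided-intake loop could have roughly this shape: keep asking until the fields the form needs are filled, then hand the refined record to a query against the master-agreement index. Field names and question text here are invented, not the real form.

```python
# Hypothetical field list driving the intake conversation.
QUESTIONS = {
    "member_name": "What is your name?",
    "incident_date": "When did the incident occur?",
    "description": "Describe the problem in your own words.",
}

def next_question(record):
    """Return the next question to ask, or None when the form is complete."""
    for field, question in QUESTIONS.items():
        if not record.get(field):
            return question
    return None
```

Once `next_question` returns `None`, the completed record would be formatted into a query against the indexed agreement.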

Yeah, I mean, we start there, but there's still a lot of work to do. I want to see if I can get these transcripts through some kind of process. The one thing I'm trying to figure out is how accurately we can regurgitate a document once a transcript goes over 2,000 or 4,000 tokens. If the transcript stays under 2,000 tokens, it's really easy to just keep it in the window. I don't think the indexer is gonna be the end-all-be-all for this type of stuff. There needs to be some other kind of workflow, some kind of task passing: here's an indexed file, it's been transcribed, run this query on it, run that on it, and then once that's done you can index it.
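
That task-passing idea can be sketched as an ordered list of steps a document moves through before it finally gets indexed. The step names and functions below are stand-ins, not the real pipeline.

```python
# A document moves through steps in order (transcribe -> query -> index),
# each step a plain function, and only gets indexed at the end.
def run_pipeline(doc, steps):
    """Apply each (name, fn) step to the document, recording what ran."""
    history = []
    for name, fn in steps:
        doc = fn(doc)
        history.append(name)
    return doc, history

steps = [
    ("transcribe", str.strip),               # stand-in for Whisper output cleanup
    ("summarize", lambda t: t[:40]),         # stand-in for a GPT query
    ("index", lambda t: {"text": t}),        # stand-in for the indexer
]
doc, history = run_pipeline("  raw whisper output  ", steps)
```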

Tomorrow is a holiday, and we're gonna try to go back up to the state park where I almost died from over-exercising. There's free admission to the parks, and the Missus wants to do something. I talked to the cousins about coming down with us, doing some bike riding and hiking, packing a cooler with sandwiches and snacks, and just hanging out at the park all day, which I think will be fun.

The girls are going to church here in about an hour and a half. I'll see if my brother wants to play some video games and we'll just chill till they get back. Maybe go to brunch with the Missus. We don't really have to do anything today, just relax and kind of chill. It should be chill.

Taking Breaks: A Reflection

Yesterday was productive, although I was a little bit impatient with myself. I imagined progress would be faster than it has been, considering I now have GPT index at my disposal. I got a little frustrated, so I stopped working and just let my brain do its thing.

I had to stay off the computer all afternoon because there's so much information and so much going on; I was reaching maximum cognitive load for the day, for lack of a better term. So I just needed to offload some of that during sleep. Hopefully Starseer will help with some of that today.

One good piece of work we did yesterday was getting the console running against an index. The way this thing is supposed to work in the future is that you'll install the source code, run the application, and basically start with a blank buffer waiting for its first command. We do need to do a little bit of prompt engineering, because the indexer has a default query that's something like: use the context provided, answer the question, do not use prior knowledge.

So the way it should be working now is that I turn it on, it loads all the previous console commands into a list vector, and it's basically just me giving commands to the computer. No responses are being recorded.

This morning I want it to recursively load its main file, which is just main.py at this point. I'll give it some directions, a little bit of prompting as to what Starseer is, plus directions to read the source code and provide the next step. I should probably provide it with unit tests, because the unit tests will actually teach it what to write. So basically the loop is going to be something like this: okay, here's the source code and unit tests, here's what we're building, or here's a particular user story; write a unit test that can be incorporated into the code, and after it does that, write the function. Then we'll just start iterating like that.
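
That test-first loop could be sketched like this. The prompt wording and function names are hypothetical, and `complete` stands in for whatever GPT call Starseer ends up using.

```python
# Test-first iteration loop: ask for a unit test for the story, then
# ask for the function that makes the new test pass.
def build_prompt(source, tests, story, step):
    return (
        f"Here is the current source code:\n{source}\n\n"
        f"Here are the existing unit tests:\n{tests}\n\n"
        f"User story: {story}\n\n"
        f"Task: {step}"
    )

def iterate(source, tests, story, complete):
    # Step 1: ask the model for a unit test covering the story.
    new_test = complete(build_prompt(
        source, tests, story, "Write a unit test for this story."))
    tests = tests + "\n" + new_test
    # Step 2: ask for the function that passes the updated tests.
    new_code = complete(build_prompt(
        source, tests, story, "Write the function that passes the tests."))
    return source + "\n" + new_code, tests
```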

A little personal anecdote: it was like 30 degrees this morning when I woke up, and when I went to take my youngest in, the car wouldn't start. I tried to charge it with the battery jumper cables, but it wouldn't turn over. So we rode our bikes to her school. It was super cold and my back tire was flat, but she was a trooper. Before all that, I woke up and meditated, though Berkeley was up before I started; she was trying to shop on Amazon for furniture for her spy school, and we argued a little about that. I'm still thinking a lot about Dave's funeral and kind of how I acted. I'm trying to put it out of my mind, as meditation would suggest. I don't feel good about breaking dry January and I don't feel good about driving home, so I consider myself lucky.

Got into a fight with my dad, apparently. That's probably what's bugging me the most: how to resolve that. So I sent a text message to my dad, my way of apologizing. We'll see what happens.

Exploring gpt-index

Good morning! It's Monday, January 9th, and I'm up nice and early this morning.

I attended the funeral for my childhood neighbor Dave on Saturday. It was a bittersweet experience: great to reconnect with people I hadn't seen in 20 years, but an emotional roller coaster. After the funeral, I broke my dry January and had a few drinks. Despite this, I still managed to get home safely.

Sunday was a busy day. The kids went to church and my wife and I did some cleaning around the house. I also worked on Starseer and we had the chance to play a Dungeons and Dragons adventure game with the kids. It was great to see them getting the hang of it and enjoying the game.

What I'm really excited about today is what I discovered over the weekend: the gpt-index repo. Since the release of ChatGPT, I've been trying to build a context management system, but gpt-index has pretty much got it covered. I've got to build a system that can index a directory and figure out how to arrange all the directories. I built the first test around the console version of this and I'm looking forward to seeing how it pans out. I'm also working on a module to convert the Dungeons and Dragons game into a Star Atlas-themed adventure. For now, I'm going to keep that information close to the chest.

Finally, I’m going to document the analytics code I used for the DAO, prepare it for public release, and also work on open sourcing the player profile. I’m also looking into incorporating a pause button into my whisper-mic fork so that I don’t have to kill it every time somebody walks in the room.

To kick off the day, I'm using Whisper and GPT to dictate and write this blog post. I'm not using the raw response today; this post was lightly edited from the GPT response to my 'blog helper' prompt in Playground.

Using Whisper to Summarize Meetings After a Hectic Day

Today has been an interesting day. I woke up at 4 am and couldn’t get back to sleep, so I got up at 6 and went for a run. Then I completed my power routine, meditated and didn’t eat anything till late in the morning.

The experiments with Whisper continued today. I recorded a meeting and fed it to GPT with a prelude based on the audience or the end goal. After curating the chunks, I threw them through the summarizer and had a quick TLDR.

Now I’m trying to figure out the best scaling solution to capture a conference room full of voice activity, feed it into contextualization or summarization, and chunk it down. I’m also looking for a good design to manage the prompts via command line.

I’m also working on abstracting the functions from my Discord bot and sharing them between libraries. And I’ve had a realization that to judge the relevance of a transcript, I need more context. So my current strategy is to crawl transcripts from whatever audio/video source, add a context layer, and break it into chunks of 2000 characters or less.
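
One way to do the context-layer-plus-chunks step: split on sentence-ish boundaries, pack greedily under the budget, and prepend the same context line to every chunk. The boundary heuristic, budget handling, and header format are all assumptions (and it assumes individual sentences fit under the budget).

```python
# Break a transcript into <= `budget`-character chunks, each carrying a
# context header so downstream summarization knows what it's looking at.
import re

def chunk_with_context(text, context, budget=2000):
    header = f"[context: {context}]\n"
    limit = budget - len(header)
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + 1 + len(s) > limit:
            chunks.append(header + current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(header + current)
    return chunks
```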

I’m exploring the concept of a semantic search, which is vectorization of a string based on a dictionary. This linear array of word weights can be plotted in a multi-dimensional space, and the semantic search looks for the nearest neighbors.
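
A toy version of that idea makes the geometry visible: turn each string into a word-count vector over a shared vocabulary, then rank neighbors by cosine similarity. Real systems use learned embeddings rather than raw word counts; this is just the mechanics.

```python
# Bag-of-words "semantic" search: vectorize strings over a shared
# vocabulary and return the nearest document by cosine similarity.
import math
from collections import Counter

def vectorize(text, vocab):
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest(query, docs):
    vocab = sorted({w for d in docs + [query] for w in d.lower().split()})
    qv = vectorize(query, vocab)
    return max(docs, key=lambda d: cosine(qv, vectorize(d, vocab)))
```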

It’s been a busy day and I’m having fun moving forward.

Voice First Experiments: Adventures in AI and Transcription

Okay, it is Wednesday, January 4th, and this is my second day trying to do voice first on my computer. It's been some interesting experiments. Yesterday, I tried to do a blog post just by speaking into my iPhone memo program. It wasn't quite the experience I was looking for. First off, getting the memo off the iPhone and onto the MacBook was a little bit difficult. The app itself on the MacBook isn't great either; the way the file storage works is not user-friendly at all.

So what I wound up doing was using the iPhone memo app, recording the audio on that, and then transferring the memo to an iCloud folder that synced to my MacBook, where I was able to run the file through Whisper. The transcription was actually pretty good. I tried running it through GPT a couple of times to pare it down or clean it up, but I wasn't happy with the results, mainly because my original post was about 3,000 characters, a little too long to get a real balance between the prompt and the result from GPT. So yeah, I didn't have too good of a time with that. I did generate a title out of it; it's really good at summarizing things, we all know that. I tried GPT through Playground, I tried ChatGPT, several things, and wasn't really happy. I get the sense that it wasn't quite as frictionless as just typing it out myself, which is kind of why I've always typed to begin with, but we'll try it again today and see how things go.

So hopefully we'll have some better luck today. One thing I have been playing around with is this thing called whisper-mic. It's a GitHub program that uses your computer's microphone to basically just listen. It's been running right now, all night actually; I didn't realize it, but we turned it on last night just to kind of capture things. It's kind of clunky, but it does what it's supposed to do if you're giving it good quality audio. If I'm speaking right in front of my MacBook, it seems like it's been doing a pretty good transcription, but while it was running in the background this morning and people were walking around the house a couple feet away across the room, it wasn't picking things up very well. I'm not sure how it's going to handle the coughing either, excuse me. I also noticed it didn't do a very good job with Elder's speech. She's 10, but it did not seem like it was picking up her words as well as it has for me. There's some testing to do there to figure out how that works.

I also want to play around with some of these multi-speaker models. I've seen demos where you specify the number of speakers in a clip, feed it into Whisper, and it's able to basically tell you speaker one, speaker two.

One thing I did have some success with yesterday was using this to grab information, to summarize a meeting basically. We had a quick standup yesterday. Our designer came in and gave us an update on his work over the break. He goes into work mode, and I managed to hit the record button in ClickUp while he was speaking. I did have a video, but I was not able to get it from there to where I wanted it very easily. I figured I could pull the video straight down from ClickUp, but it was in some kind of WebM wrapper and I was getting an error trying to pull it into FFmpeg. What I wound up doing was just letting it play while my computer's voice memo program recorded again, pulling the transcript out with Whisper, and then feeding it into GPT, asking it basically to pull out requirements for the next task and stuff like that. It did summarize it. Again, it was a very short two or three minutes, I guess, so the context was not overloading the ChatGPT buffer, which is one of the main problems we have with this, obviously.

I had some interesting conversations with my friend Chris about AI. He's in the machine vision industry, and he and I had a long discussion around dinnertime last night. There are some interesting developments at work that I can't really talk about, other than to say that I'm trying to build a department of AI, so we'll see if any of the founders bite at that. I'm going to put some of this information together and present it to them. Hopefully having more of a record of things will help, and it will be interesting to figure out how I can use these transcripts in a way that's helpful. Maybe some embeddings work: text embeddings, building some search around audio. Like, how expensive would it be to run whisper-mic 24/7, log everything it catches into a database, and just do semantic search on that? I think it would be pretty interesting to have your entire life stored in that way.
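
On the storage side of that question, a ballpark with made-up but plausible numbers suggests the transcripts themselves are tiny; the real cost would be the compute for continuous Whisper inference.

```python
# How much transcript text would always-on whisper-mic logging produce
# in a year? Assumes ~150 spoken words/minute, ~4 hours of actual
# speech captured per day, and ~6 bytes per word -- all guesses.
def yearly_transcript_bytes(words_per_min=150, active_hours_per_day=4,
                            avg_word_bytes=6):
    words_per_day = words_per_min * 60 * active_hours_per_day
    return words_per_day * avg_word_bytes * 365

# Under these assumptions: ~79 MB of raw text per year, which is
# trivial to store and run semantic search over.
size = yearly_transcript_bytes()
```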

Speaking of which, I have been trying something called Rewind AI. Basically the way it works is that it records a screenshot of your computer screen every X seconds, crawls all the text from your screen, and then loads it into some kind of database using a compression algorithm they have as proprietary. Then if you're looking for something, you can push Command-Shift and two swipes up on the mouse, and it will give you a search over anything you've seen, said, or heard. Well, heard, that's interesting. I'm going to have to experiment with that to see how it works, but it also has an interesting little history function where you can slide back and see what your stuff looked like.

So that's probably it for right now. I'm going to use the iPhone memo again today just to see how it compares to what Whisper is pulling out. That will be my next task, and yeah, that's it for now.

Experimenting with Voice First Blogging

I'm trying to do something new this year. Instead of just doing the same old bloggy blog stuff that I've been doing, I'm trying to mix it up a little bit. I want to try doing something called voice first, which basically means not being so much of a keyboard jockey and instead just using natural language. I think it's important to develop the skill of talking about things off the cuff in a natural, flowing way, similarly to how I can type. I have this mental block about being able to speak extemporaneously the same way that I write on my blog. So I don't know quite what I'm doing yet, but what I was attempting to do this morning was to use Whisper to transcribe audio.

Usually I have a special laptop that I use that's separate from my work laptop, and a special seat in the house where I sit when I type my blogs. I don't know why I want to do things differently this time, but I attempted to set up Whisper on my freshly wiped Windows 10 Alienware machine that I ran Linux on for years. What I wanted to do was run a Whisper program that would transcribe to a file using the computer's microphone, but the machine didn't have Python set up, and between that and dealing with PowerShell again, I just said no.

I've been a huge Windows person ever since Windows came out. I had a brief experience with Apple back in the Apple IIe days, which was pretty much my first introduction to computers and programming, but after getting sucked into being a Windows 95 power user and taking all those Microsoft courses and certifications, anytime I would touch a Mac it would be with a kind of revulsion, just because everything was so different and I didn't want to have to figure it out, basically, is what it boiled down to. So I just said no Macs, no Apple. But all that changed when the iPhone came out.
It's been a bit of a change recently, having iPads and all the Apple devices, because Microsoft phones never took off; they never had anything I even wanted to touch. I didn't want a Zune. So yeah, I avoided Macs despite being an iPhone user for 10 years now. I really have to hand it to my friend Mike Kane, who works with me at Star Atlas; he bought me a MacBook Pro, I guess as a recruitment bonus for getting him a job at Star Atlas. I appreciate that, and I fucking love my MacBook. I still have a gaming PC upstairs that I play all the Steam games on, and I'll do some web surfing and stuff like that, but I set a limit: I was not going to install Python on there, I wasn't even going to install any command line stuff. It was strictly a gaming PC, and I was going to leave it at that. And what happened was that the Alienware machine I reformatted was being used this weekend by the girls to shoot some video, so I decided to pick it up.

Anyway, I'm getting off on a tangent. The point is that we're moving to voice first. This is a memo on my phone right now; whether we're going to release this memo as an actual podcast, who knows, but I do want to practice with the transcription. Basically what I'm planning on doing is running this through Whisper on the MacBook and then passing it through GPT for editing. So you're hearing my voice one way or the other: speech to text, text to text, and then text to blog post, I guess. So what else are we planning on doing this year?
There's a lot that we have planned around GPT and AI systems in general. I've been playing Dungeons and Dragons Adventures with my kids, and I think this idea of a GPT DM is coming together quite nicely in my head. My main priority right now, as far as work for Star Atlas goes, is obviously to continue development on the DAO. We've got design working on that now; they're going to go crazy and we're going to have to bring it back down to Earth, which is kind of interesting for me, being the one who normally has his head in the clouds.

In the meantime, I'm going to be integrating GPT into my workflow as much as possible, making it interact with other APIs. Notion has their API, for example; how can we teach GPT to build its own programs, or facilitate me building programs, that interact with these various APIs? Hey, GPT, or as I've been calling it, Jarvis, after Tony Stark's AI from Iron Man. So basically, what can we build and what do we need to borrow? I'm going to be playing around with a bunch of tools, but my moonshot right now is using GPT to play moderator or facilitator for some sort of adventure game, where the goal is for the game to feel like people talking the way you would at a regular Dungeons and Dragons session. We want to provide context for the adventure, whether that's some kind of schema in a NoSQL database or, probably for the demo, just a flat file to keep it as simple as possible, just to show what a scenario would play out like, and then go from there. I think it's going to be very interesting.
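
A flat-file version of the GPT-DM context could be as simple as this sketch: the scenario lives in plain text, the session transcript accumulates, and each player line is folded into a prompt before calling the model. The prompt format and function names are hypothetical.

```python
# Build a dungeon-master prompt from a flat-file scenario plus the
# running session transcript; a GPT call would complete the "DM:" turn.
def dm_prompt(scenario, transcript, player_line):
    history = "\n".join(transcript + [f"Player: {player_line}"])
    return (
        "You are the dungeon master for this scenario:\n"
        f"{scenario}\n\nSession so far:\n{history}\nDM:"
    )

scenario = "A beholder guards the vault beneath the station."
prompt = dm_prompt(scenario, ["DM: You enter the vault."], "I draw my sword.")
```
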
I need to put a proposal together for the company to get a little bit of funding and a little bit of founder synchronization. I've been very lucky to just be able to do what I want, and I'd like things to continue that way. So it's time to have fun; that's pretty much my goal every day when I sit down at my desk, just to have fun. Obviously there's work to do, but I'm enjoying it, and I enjoy the people I work with and the opportunity that I have. Hashtag blessed, as much as it pains me to say that.

Anyway, the other thing we're going to do in 2023 is stay fit. I went mountain biking yesterday and overexerted myself; I thought it was a four-and-a-half, maybe six-mile ride, but my Whoop says I actually did nine and a quarter miles, which is quite a lot. We're probably going to do a video in that place.

So, without doxing myself again by saying my name: we do have an LLC that we're probably going to take advantage of in order to actually do some things with the girls. I think it's time for them to really get into the business and entrepreneurial spirit, so we're going to crank things up a little bit and start operating as an LLC. The Mrs. is doing all this stuff with the Southwest Companion Pass, trying to get, like, FIRE for travel and stuff like that. I'm going to leave the whole Southwest Christmas debacle aside for the moment. I think I still have a blog post to finish writing about that; we had a bit of an issue coming back from Costa Rica and got stuck in Houston for about four days over the break, but I'm not going to get into that right now. Basically I just think having the girls involved in creativity as a family, being able to take pictures and video as we go out on these little adventures, will get the girls familiar with video editing. How do you make videos? How do you edit videos?
How do you do production? What does production look like? Music production, video production, all types of media, basically. Those are all skills that I think are still handy despite what our AI overlords are doing with Stable Diffusion and stuff like that. I think integrating all this stuff together is going to be even easier now.

So: staying fit, and trying to ramp up some project work with the kids. We actually tried to get them to put a little thing together for New Year's. They found a song and were doing some dancing to it, and we took a couple little bits of video, but four days wasn't enough time to really put something solid together. So I think we'll have a January project: maybe go to the park I went to yesterday, take some video, run around having fun. I am thinking about getting a new iPhone just for the camera, and a GoPro for when I do biking stuff, but I'm trying not to go crazy with the money spending, considering how December went and the pay cuts that went across the company. So I'm trying to actually get rid of stuff and clean up some stuff before I go crazy, but I'll probably buy a $1,000 iPhone and a hangboard for the garage.

The girls have started converting the garage into a fitness center. My younger daughter got a punching bag, so we've got a hangboard, like a two-by-four we can hang from and try to do pull-ups on, a punching bag hanging from the ceiling, and mountain bikes and scooters galore. The girls have really been into roller skating and rollerblading, so we went on our first skating rink excursion a couple days ago; I think that was Saturday, maybe.
So that was fun, and I'm going to start wrapping this up, I guess, so I can actually get to work and do some of this cool stuff that I'm so excited about.

This is definitely a bit easier than typing, I don't know. I can definitely talk a lot faster than I type, so I'm able to dump that knowledge, or that experience, out of my brain into words. It comes easier to speak, though it is kind of weird talking to myself in this empty room with my phone in front of me. It's definitely a different experience than trying to type a blog post, which is a completely different thing altogether.

We'll see if I can set up some workflow. My ideal scenario here is basically a Siri on steroids that I can talk to, give commands to, and have actually do stuff in a smart way: open this, send an email to someone telling them blah, blah, blah. That's one particular example, but there are scenarios like: hey, I've got a project idea for the company, here it is, this is what we're going to do, write this up as a project brief. If I can get something that flows quickly from voice to GPT, I think it will be a game changer. I don't know if we're going to use ChatGPT or just the API, but we're going to figure something out. The end goal here for this project, and I say "again" like I've actually put it into good words before, is the demo video of the guy talking to the metahuman that receives his voice commands and responds based on them. We'll see if we can recreate that demo, because I think that's what the future is going to look like, and it's my turn to bring it into reality.