How much I'm paying for AI productivity software
This post is broken down into two parts:
- Which AI productivity tools am I currently using?
- Why does it currently feel hard to spend $1000+/month on AI to drastically increase one's productivity?
Which AI productivity tools am I currently using?
Let's get right to it. Here's what I'm currently using and how much I am paying:
- Superwhisper (or other new speech-to-text apps that use LLMs to rewrite transcripts). Under $8.49/month. You can choose between different STT models (each with its own speed/accuracy trade-off) and have an LLM rewrite the transcript based on a prompt you provide. You can also set up different "modes": for example, the model can turn your transcript into code instructions in a pre-defined format when you're in an IDE, or into a report when you're writing in Google Docs. There is also an iOS app.
- Cursor Pro ($20-30/month). I switch to API credits when the slow responses take too long. (You could also try Zed, another IDE. I've only used it a little, but Anthropic apparently uses it, and it has an exclusive "fast-edit" feature with the Anthropic models.)
- Claude.ai Pro ($20/month). You could consider getting two accounts or a Team account to worry less about hitting the token limit.
- ChatGPT Plus ($20/month). Again, you can get a second account to have more o1-preview responses in the chat.
- Aider (~$10/month max in API credits when used alongside Cursor Pro). An AI coding assistant that runs in the terminal. I use it together with Cursor, leaning on the strengths I feel each of them has.
- Google Colab Pro subscription ($9.99/month). You could get the Pro+ plan for $49.99/month.
- Google One AI Premium 2TB plan ($20/month). This comes with Gemini chat and other AI features. I also signed up to get the latest features early, like NotebookLM and Illuminate.
- v0 chat ($20/month). Used for creating Next.js websites quickly.
- jointakeoff.com ($22.99/month) for courses on using AI for development.
- I still have GitHub Copilot (along with Cursor's Copilot++) because I bought a long-term subscription.
- Grammarly ($12/month).
- Reader by ElevenLabs (free, for now). The best-quality TTS app out there right now.
Other things I'm considering paying for:
- Perplexity AI ($20/month). Like Google Search, but with more AI built into the search experience; the paid version uses a better model. I often find myself reaching for it over Google.
- Other AI-focused courses that help me best use AI for productivity (web dev or coding in general).
- Suno AI ($8/month). I might want to make music with it.
Apps others may be willing to pay for:
- Warp, an LLM-enabled terminal ($20/month). I don't use the free version enough to justify upgrading.
There are definitely ways to optimize my monthly spending to save a bit of cash, but I'm currently paying roughly $168/month.
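For the curious, that rough total can be sanity-checked with a quick sum. (I use the midpoint of the $20-30 Cursor range and my ~$10 Aider estimate; the dictionary keys are just labels, not official product names.)

```python
# Monthly AI subscription costs in USD, as listed above.
subscriptions = {
    "Superwhisper": 8.49,
    "Cursor Pro": 25.00,            # midpoint of the $20-30 range
    "Claude Pro": 20.00,
    "ChatGPT": 20.00,
    "Aider (API credits)": 10.00,   # rough monthly max
    "Colab Pro": 9.99,
    "Google One AI Premium": 20.00,
    "v0": 20.00,
    "jointakeoff.com": 22.99,
    "Grammarly": 12.00,
}

total = sum(subscriptions.values())
print(f"${total:.2f}/month")  # → $168.47/month
```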
That said, I am also utilizing research credits from Anthropic, which could range from $500 to $2000, depending on the month. In addition, I'm working on an "alignment research assistant" which will leverage LLMs, agents, API calls to various websites, and more. If successful, I could see this project absorbing hundreds of thousands in inference costs.
I am a technical AI alignment researcher who also works on augmenting alignment researchers and eventually automating more alignment research, so I'm biasing myself to overspend on products to make sure I'm aware of the bleeding-edge setup.
So I'm certainly paying more than the average person for AI productivity tools. Even so, I suspect I'm still paying less than I should for AI software. This leads me to ask: what should I be spending considerably more on? Why isn't it easy to know this? And if AI will increase productivity as much as I think it will, why hasn't it already?
How could I spend way more on AI?
As AI becomes increasingly powerful and entrepreneurs/developers figure out how to make better user interfaces and interconnected systems with AI, we'll be getting massive jumps in our ability to leverage AI for boosting productivity.
Of course, people already see this with ChatGPT. However, I expect most people will underpay for AI tools.
Someone asked this question:
Suppose I wanted to spend much more on intelligence (~$1000/month), what should I spend it on?
This is a good question. Even as someone who works in AI, and specifically on leveraging these tools for the safer development of AI, I don't know the obvious answer. One reason is that most people have not given much thought to how they would actually use intelligence and automation. Have you considered what you would do if you had three interns and an assistant? What if you had an intermediate-level software engineer?
Here's an insightful comment (slightly rewritten) by Gwern on the question, "If AI is so powerful, why hasn't it completely changed the world and increased GDP by several points yet?":
If you're struggling to find tasks for "artificial intelligence too cheap to meter," perhaps the real issue is identifying tasks for intelligence in general. Just because something is immensely useful doesn't mean you can immediately integrate it into your current routines; significant reorganization of your life and workflows may be necessary before any form of intelligence becomes beneficial.
There's an insightful post on this topic: The Great Data Integration Schlep. Many examples there illustrate that the problem isn't about AI versus employee or contractor; rather, organizations are often structured to resist improvements. Whether it's a data scientist or an AI attempting to access data, if an employee's career depends on that data remaining inaccessible, they may sabotage efforts to change. I refer to this phenomenon as "automation as a colonization wave": transformative technologies like steam power or the internet often take decades to have a massive impact because people are entrenched in local optima and may actively resist integrating the new paradigm. Sometimes, entirely new organizations must be built, and old ones phased out over time.
We have few "AI-shaped holes" of significant value because we've designed systems to mitigate the absence of AI. If there were organizations with natural LLM-shaped gaps that AI could fill to massively boost output, they would have been replaced long ago by ones adapted to human capabilities, since humans were the only option available. This explains why current LLM applications contribute minimally to GDP—they offer marginal improvements like better spellcheck or code generation, but don't usher in a new era of exponential economic growth.
One approach, if you're finding it hard to spend $1000/month effectively on AI, is to allocate that budget to natural intelligence instead—hire a remote worker, assistant, or intern. Such a person is a flexible, multimodal general intelligence capable of tool use and agency. By removing the variable of AI, you can focus on whether there are valuable tasks that an outsourced human could perform, which is analogous to the role an AI might play. If you can't find meaningful work for a hired human intelligence, it's unsurprising that you're struggling to identify compelling use cases for AI.
(If this concept is still unclear, try an experiment: act as your own remote worker. Send yourself emails with tasks, and respond as if you have amnesia, avoiding actions a remote worker couldn't perform, like directly editing files on your computer. Charge yourself an appropriate hourly rate, stopping once you reach a cumulative $1000.)
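If you want to actually run that experiment, a minimal ledger is enough. This sketch assumes an arbitrary $25/hour rate and made-up task names; substitute whatever you would genuinely pay an outsourced assistant.

```python
# A minimal ledger for the "be your own remote worker" experiment.
# HOURLY_RATE is an arbitrary assumption; BUDGET matches the $1000 cap above.
HOURLY_RATE = 25.0
BUDGET = 1000.0

tasks = []  # (description, hours) pairs, appended as you work


def log_task(description: str, hours: float) -> float:
    """Record a task and return the cumulative amount 'billed' so far."""
    tasks.append((description, hours))
    spent = sum(h for _, h in tasks) * HOURLY_RATE
    if spent >= BUDGET:
        print(f"Budget reached (${spent:.2f}): stop and review the task list.")
    return spent


# Hypothetical entries:
running = log_task("Summarize inbox and draft replies", 2.0)
running = log_task("Literature search on topic X", 4.0)
print(f"Billed so far: ${running:.2f}")  # → Billed so far: $150.00
```

Whatever ends up in `tasks` when you hit the budget is your answer to "what would I actually delegate?"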
If you discover that you can't effectively utilize a hired human intelligence, this sheds light on your difficulties with AI. Conversely, if you do find valuable tasks, you now have a clear set of projects to explore with AI services.
Of course, all of this is beside the fact that we're still early; we need a few more years to really see how powerful these AIs can become. I agree with Sam Altman (CEO of OpenAI) in his recent blog post:
This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.
Leveraging AI for productivity presents a massive opportunity over the next few years. In fact, I expect there will be companies that leverage AI automation internally in ways the rest of the market doesn't (I've considered doing this myself). These companies (consultancies, for example) will offer human-to-human interaction instead of an AI interface, and will charge a high premium for it. Their customers will compare the price to the rest of the market and find it reasonable, but only because the rest of the market still leans far too heavily on human intelligence (HI) relative to artificial intelligence: the HI companies will take significantly longer on the same project and cost much more.
Light spoiler for Pantheon ahead!
There's a TV show called Pantheon that covers an entire singularity in which humans can upload themselves to the cloud. One interesting point in the plot comes when one of the uploaded humans is told that they are still holding themselves back by working the way they did in their human body, and the character has a really difficult time grasping what that means. They simply couldn't imagine acting in the world in any way other than how they had in their past life; it just wasn't part of their ontology, of how they imagined the world.
Eventually, with enough effort, they figured out how to use their newly uploaded form in ways that gave them an exponential increase in productivity.
I think we'll experience several of these shifts in the coming decades, and those who can act on them early may benefit greatly.