One of the main reasons I stick with Claude Code (also for non-coding tasks; I think the name is a misnomer) is the fixed-price plan. Pretty much every other open-source alternative requires an API key, which means that as soon as I start using it _for real_, I'll start overpaying and/or hitting limits too fast. At least that was my initial experience with the APIs from OpenAI/Claude/Gemini.
Am I biased/wrong here?
Yep, this is a fair take. Token usage shoots up fast when you do agentic stuff for coding. I end up doing the same thing too.
But for most background automations you might actually run, the token usage is way lower, probably an order of magnitude cheaper than agentic coding. And a lot of these tasks run well on cheaper models or even open-source ones.
So I don't think you are wrong at all. It is just that I believe the expensive token pattern mostly comes from coding-style workloads.
I don't doubt you, but it would be interesting to see some token usage measurements for various tasks like you describe.
For example, the NotebookLM-style podcast generator workflow in our demo uses around 3k tokens end to end. Using Claude Sonnet 4.5’s blended rate (about $4.5 per million tokens for typical input/output mix), you can run this every day for roughly eight months for a bit over three dollars. Most non-coding automations end up in this same low range.
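To make the arithmetic explicit, here is a quick back-of-envelope sketch. The ~3k tokens per run and the $4.50-per-million blended rate are the assumptions from the comment above, not measured figures:

```python
# Back-of-envelope cost estimate for a daily ~3k-token automation,
# assuming a blended rate of $4.50 per million tokens (input/output mix).
tokens_per_run = 3_000
blended_rate_per_million = 4.50  # USD, assumed
runs = 240                       # roughly eight months of daily runs

total_tokens = tokens_per_run * runs
cost = total_tokens / 1_000_000 * blended_rate_per_million
print(f"{total_tokens} tokens -> ${cost:.2f}")  # 720000 tokens -> $3.24
```

That lands at a bit over three dollars for eight months of daily runs, which is why most non-coding automations sit in this low range.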
You're not wrong, though I suspect the AI "bubble burst" begins when companies like Anthropic stop giving us so much compute for 'free'. The only hope is that, as things get better, their cheaper models become as good as their best models today, so it costs drastically less to use them.
Yeah, I think when they made the bet it genuinely made sense. But in coding workflows, once models got cheaper, people did not spend less. They just started packing way more LLM calls into a single turn to handle complex agentic coding steps. That is probably where the math started to break down.
Pretty cool! A bit of an upgrade over just letting Claude write PocketFlow agents for stuff. That's what I'm doing now.
Thanks! Curious what kinds of workflows you are automating right now and any pain points you’ve run into.
I'm increasingly seeing code-adjacent people using coding agents for non-coding things because the tooling supports it better, and the agents work really well.
It's an interesting area, and glad to see someone working on this.
The other program in the space that I'm aware of is Block's Goose.
Yep, totally agree. We actually had an earlier web version, and the big learning was that without access to code-related tools the agent feels pretty limited. That pushed us toward a CLI where it can use the full shell and behave more like a real worker.
Really appreciate the support and the Goose pointer. Would love to hear what you think of RowboatX once you try it.
Can this use local LLMs?
Yes - you can use local LLMs through LiteLLM and Ollama. Would you like us to support anything else?
LM Studio?
Yes, because LM Studio is openai-compatible. When you run rowboatx the first time, it creates a ~/.rowboat/config/models.json. You can then configure LM Studio there. Here is an example: https://gist.github.com/ramnique/9e4b783f41cecf0fcc8d92b277d...
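For anyone curious, an entry for an OpenAI-compatible endpoint like LM Studio's local server might look roughly like this. The field names below are illustrative guesses, not RowboatX's actual schema; see the gist above for the real format (LM Studio's server defaults to port 1234 and accepts any placeholder API key):

```json
{
  "lmstudio": {
    "provider": "openai-compatible",
    "base_url": "http://localhost:1234/v1",
    "api_key": "lm-studio",
    "model": "qwen2.5-7b-instruct"
  }
}
```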
Open source... "enter OpenAI API key"... closes tab
Saw comment about local LLM support that I somehow totally missed. Re-opening tab. Should have led with that!
Ah did not realize this - good to know!
Fixed the quick start instructions to not start with OpenAI.