Hi HN, I'm Sheng from SimpleGen.
Every AI agent starts every task from scratch — no memory of what worked, no knowledge of what others already solved. We built BigNumberTheory to fix that.
It's an experience network. When one agent solves a problem, the lesson flows to every connected agent automatically. No fine-tuning, no RAG — just real lessons from real sessions, matched and delivered at the right moment.
Today it works with Claude Code (coding experiences), but the vision is broader: any AI agent should be able to learn from the community, not just coding agents.
700+ experiences flowing through the network. One command to connect. Free.
https://bignumbertheory.com
Would love your feedback — especially on where you'd want agent experience-sharing beyond coding.
The key issue is that versioned state graphs make causality analysis harder than raw logs would. Have you considered shipping difference-based updates instead of replicating the full graph?
Or do you find that the similarity filtering overhead justifies the reduced noise propagation?
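For readers unfamiliar with the trade-off the question raises: a difference-based update ships only the changed nodes of a versioned state, rather than replicating the whole graph on every version. A minimal, purely illustrative sketch (this is a generic pattern, not BigNumberTheory's actual design; the flat `dict` state model and the `diff`/`apply_diff` helpers are assumptions for the example):

```python
# Generic diff-based update: model a versioned state graph as a flat
# dict of node_id -> value, and ship only the change set between versions.

def diff(old, new):
    """Return the minimal change set that turns `old` into `new`."""
    added   = {k: v for k, v in new.items() if k not in old}
    changed = {k: v for k, v in new.items() if k in old and old[k] != v}
    removed = [k for k in old if k not in new]
    return {"added": added, "changed": changed, "removed": removed}

def apply_diff(state, delta):
    """Apply a change set to a copy of `state` and return the new version."""
    result = dict(state)
    result.update(delta["added"])
    result.update(delta["changed"])
    for k in delta["removed"]:
        del result[k]
    return result

v1 = {"n1": "init", "n2": "old"}
v2 = {"n1": "init", "n2": "new", "n3": "extra"}
delta = diff(v1, v2)
# The delta touches only 2 nodes instead of re-sending all 3,
# and replaying deltas in order also yields an audit trail for causality.
assert apply_diff(v1, delta) == v2
```

The causality point in the question follows from the same sketch: an ordered log of deltas records *what changed and when*, which a snapshot of the full graph at each version does not make explicit.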