duro
memory for ai agents
# every fact has a source. every decision gets validated. intelligence compounds.
// the_problem
your ai forgets everything
every conversation starts from zero. past mistakes get repeated. lessons learned disappear. there is no audit trail for what your agent knows.
[!] agent suggests 1000 req/min
[!] ships broken code
[!] production incident
[!] same mistake next month
// the_solution
duro remembers with receipts
structured memory with provenance. every fact has a source. every decision can be validated. intelligence compounds instead of resetting.
> agent suggests 1000 req/min
$ duro recalls: "we learned it's 100/min"
> agent self-corrects
[validated] decision with provenance
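a minimal sketch of what a fact-with-provenance record could look like. the field names here are illustrative assumptions, not duro's actual schema:

```python
from dataclasses import dataclass

# illustrative sketch only -- field names are assumptions, not duro's schema
@dataclass
class Fact:
    claim: str                 # the remembered statement
    source: str                # where it came from (the provenance)
    confidence: float          # 0.0 to 1.0
    validated: bool = False    # has an incident or review confirmed it?

rate_limit = Fact(
    claim="api rate limit is 100/min, not 1000",
    source="production incident 2026-02-10",
    confidence=0.95,
    validated=True,
)
```

the point of the shape: a fact without a `source` can't be audited, and a fact without `confidence` can't be ranked against a contradicting memory.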
// the_builders_compass
four pillars of compounding intelligence
01
memory/
"every fact has a source and confidence"
02
verification/
"every decision gets validated"
03
orchestration/
"every action is permissioned"
04
expertise/
"patterns become reusable skills"
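the four pillars compose into one flow: recall a fact, validate it, check permission, then act. a hypothetical sketch under those assumptions (every name below is illustrative, not duro's api):

```python
# hypothetical sketch of the four pillars as one flow -- all names illustrative
def recall(store, query):            # memory/: every fact has source + confidence
    hits = [f for f in store if query in f["claim"]]
    return max(hits, key=lambda f: f["confidence"], default=None)

def validate(fact):                  # verification/: every decision gets validated
    return fact is not None and fact["confidence"] >= 0.9

def permitted(action, policy):       # orchestration/: every action is permissioned
    return action in policy["allowed"]

def apply_rate_limit(store, policy): # expertise/: the pattern, reusable as a skill
    fact = recall(store, "rate limit")
    if validate(fact) and permitted("set_rate_limit", policy):
        return fact["claim"]
    return None

store = [{"claim": "api rate limit is 100/min, not 1000",
          "confidence": 0.95,
          "source": "production incident 2026-02-10"}]
policy = {"allowed": {"set_rate_limit"}}
result = apply_rate_limit(store, policy)
```

each gate can fail independently: a low-confidence fact is recalled but not trusted, and a trusted fact is still inert without permission to act on it.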
// see_it_in_action
the moment duro saves you
~/project
> what's our rate limit? i think it's 1000/min
$ duro_semantic_search("rate limit")
[decision found] score: 0.95
decision: api rate limit is 100/min, not 1000
status: [validated] confidence: 0.95
source: production incident 2026-02-10
> actually, the rate limit is 100/min. confirmed after incident.
// crisis averted. intelligence compounds.
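the exchange above, as the guard an agent could run before acting. only the name `duro_semantic_search` comes from the demo; its body here is a hard-coded stand-in, and the rest is an illustrative sketch, not duro's real interface:

```python
# illustrative self-correction check -- duro_semantic_search's name comes from
# the demo above; this stand-in returns the validated memory it found there
def duro_semantic_search(query):
    return {"decision": "api rate limit is 100/min, not 1000",
            "status": "validated", "confidence": 0.95,
            "source": "production incident 2026-02-10", "score": 0.95}

def check_before_acting(proposed_req_per_min):
    hit = duro_semantic_search("rate limit")
    if hit["status"] == "validated" and "100/min" in hit["decision"]:
        return 100   # self-correct: a validated memory beats a guess
    return proposed_req_per_min

corrected = check_before_acting(1000)   # the agent's guess from the demo
```

the agent's 1000/min guess goes in, the validated 100/min comes out, and the decision carries its source with it.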
// who_its_for
builders who care about correctness
teams that want learning to compound. those who refuse to ship the same mistake twice.
[ai developers]
[claude code users]
[teams building ai]
duro
start building with memory
free forever. local-first. open source.