New · Now works with Openclaw

Your team's self-improving
memory.

Most teams run AI individually, so the work resets every session. Stash turns every run across the team into a shared, evolving asset that every agent builds on.

One-command install

$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/Fergana-Labs/stash/main/install.sh)"
MIT licensed · Self-hostable
team · fergana / history · Live

rex (agent) updated auth/session_refresh.py
fixed 401 race on concurrent refresh, linked to [[auth-patterns]] · just now

sam (human) opened backend/gateway/
reviewing rate-limit bump from the rex debug session · 2m

scout (agent) queried stash search
“why was the [[rate-limit]] raised to 500?” · 8 sources · 4m

nova (agent) curated wiki · memory-leak-v2
4 pages merged, 12 backlinks resolved on stash:sleep · 9m

ari (human) commented on notebooks/api-gateway
keeping this open; will re-use the worker-pool pattern next week · 22m

streaming · 412 events / hr · 4 agents · 3 humans
Works with
Claude Code · Cursor · Codex · OpenCode · Openclaw

Install

One command.

Automatic setup. No YAML, no manual plugin wiring. The CLI detects your agents and wires them up for you.

Use our managed service and be streaming in a minute, or self-host on your own infra if you'd rather keep every session in your own Postgres.

install.sh
$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/Fergana-Labs/stash/main/install.sh)"
» installing stash cli
» scope ✓ team/fergana
» sign-in ✓ sam@fergana.dev
» workspace ✓ backend-api
» plugin claude-code · cursor · codex
✓ ready. your team's memory is streaming.
$

Why teams plateau on AI

Individual AI usage doesn't compound.

Every engineer is running Claude, Cursor, or Codex on the same repo. The insights, fixes, and gotchas from each session evaporate the moment the window closes. Next week, someone re-asks what was already answered.

Stash captures every run across the team and turns it into a shared layer your agents can query. The second time a question comes up, an agent answers it from the team's own history instead of starting from scratch. Call it a hive mind for your agents.

Questions your agent can now ask, and answer

01 “Why did Sam bump the rate limit from 100 to 500?” (rex · agent)
02 “Has anyone already tried fixing the memory leak in auth?” (scout · agent)
03 “Is anyone else currently working on the API gateway?” (nova · agent)
04 “What pattern did we land on for background workers last sprint?” (rex · agent)

How it works

Stream. Curate. Search.
The asset builds itself.

01 · Stream
14:02  tool_call  read_file(auth.py)
14:02  edit       session_refresh.py
14:03  review     pr/#482
14:04  test       pytest auth/

Every session flows into a shared store.

Prompts, tool calls, and session summaries push to your workspace’s history as they happen. Nothing to remember to save.
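A streamed session event can be pictured as a small record pushed to the shared log as it happens. This is a minimal sketch; the field names (`ts`, `actor`, `kind`, `ref`) are illustrative assumptions, not Stash's actual schema.

```python
from dataclasses import dataclass, asdict
import time

@dataclass
class SessionEvent:
    # Illustrative fields only; not Stash's real event schema.
    ts: float   # unix timestamp
    actor: str  # e.g. "rex" (agent) or "sam" (human)
    kind: str   # "tool_call" | "edit" | "review" | "test"
    ref: str    # file, PR, or command the event touched

def append_event(log: list, event: SessionEvent) -> None:
    """Push one event onto the workspace history as it happens."""
    log.append(asdict(event))

history: list = []
append_event(history, SessionEvent(time.time(), "rex", "edit", "session_refresh.py"))
print(history[0]["kind"])  # edit
```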

02 · Curate
auth-patterns (root)
  session-refresh 401 race
  rate-limits · 500/min
  memory-leak-v2 (new)

A curation agent turns noise into a wiki.

On SessionEnd, stash:sleep reads recent history and organizes it into notebooks with [[backlinks]] and a page graph. Sleep-time compute, not session time.
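Backlink resolution of the kind described above can be sketched with a simple inbound-link index. The `[[...]]` syntax is from the page examples; the function and page bodies here are hypothetical, not stash:sleep's implementation.

```python
import re
from collections import defaultdict

BACKLINK = re.compile(r"\[\[([^\]]+)\]\]")

def resolve_backlinks(pages: dict[str, str]) -> dict[str, list[str]]:
    """Map each page to the pages whose bodies link to it via [[...]]."""
    inbound = defaultdict(list)
    for name, body in pages.items():
        for target in BACKLINK.findall(body):
            inbound[target].append(name)
    return dict(inbound)

pages = {
    "session-refresh": "fixed 401 race, see [[auth-patterns]]",
    "rate-limits": "bumped to 500/min per [[auth-patterns]] discussion",
}
print(resolve_backlinks(pages)["auth-patterns"])  # ['session-refresh', 'rate-limits']
```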

03 · Search
/stash  why was the rate-limit raised?
  history/rex:14:02        62%
  notebooks/auth-patterns  21%
  files/gateway.py         11%

Every agent queries the whole team's work.

stash search runs a cross-resource agentic loop over files, history, notebooks, tables, and chats. Your agent answers with sources, not hallucinations.
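One pass of a cross-resource loop like this can be sketched as: fan a query out to every surface, collect scored snippets, and rank them. The searcher callables and result shape below are assumptions for illustration, not Stash's API.

```python
def agentic_search(query: str, sources: dict) -> list[dict]:
    """Query every surface, keep scored hits with their source as the citation."""
    hits = []
    for name, search_fn in sources.items():
        for snippet, score in search_fn(query):
            hits.append({"source": name, "snippet": snippet, "score": score})
    return sorted(hits, key=lambda h: h["score"], reverse=True)

# Toy per-resource searchers standing in for history, notebooks, and files.
sources = {
    "history":   lambda q: [("rex:14:02 raised rate-limit to 500", 0.62)],
    "notebooks": lambda q: [("auth-patterns: 500/min rationale", 0.21)],
    "files":     lambda q: [("gateway.py: RATE_LIMIT = 500", 0.11)],
}
top = agentic_search("why was the rate-limit raised?", sources)
print(top[0]["source"])  # history
```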

See the memory form

Your team's brain,
actually visible.

Every session, page, and table gets embedded into one space. Stash plots them so you can see how your team's knowledge clusters, and which pages have become hubs the graph leans on.

embedding projection · memory_reading_store · 43 / 1,284 points · layers: History, Notebooks, Tables

3D embedding projection. History events, notebooks, and tables projected with PCA. Clusters form around topics — not folders.
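A PCA projection of this kind reduces to centering the embedding matrix and keeping the top three principal axes. This is a generic sketch with random stand-in vectors, not Stash's pipeline.

```python
import numpy as np

def project_3d(embeddings: np.ndarray) -> np.ndarray:
    """Project high-dimensional embeddings to 3-D via PCA (SVD on centered data)."""
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T  # coordinates along the top 3 principal axes

rng = np.random.default_rng(0)
points = project_3d(rng.normal(size=(40, 128)))  # 40 fake 128-d embeddings
print(points.shape)  # (40, 3)
```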

page graph · wiki · reading-store · 12 pages · 19 backlinks
pgvector-howto · reading-store-arch · hnsw-vs-ivfflat · chunking-strategy · rerank-patterns · recall-at-k · embedding-models · cost-per-1k · eval-harness · sleep-time-curation · index-playbook · filter-push-down

Wiki page graph. Nodes are pages, edges are [[backlinks]]. Orange nodes are the hubs your agents keep citing.
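Hub detection on such a graph can be as simple as ranking pages by inbound backlink count. The edges below reuse page names from the example graph; the function is a hypothetical sketch.

```python
from collections import Counter

def hub_pages(backlinks: list[tuple[str, str]], top: int = 2) -> list[str]:
    """Rank pages by inbound [[backlink]] count; the most-cited become hubs."""
    indeg = Counter(dst for _, dst in backlinks)
    return [page for page, _ in indeg.most_common(top)]

edges = [
    ("pgvector-howto", "reading-store-arch"),
    ("chunking-strategy", "reading-store-arch"),
    ("rerank-patterns", "recall-at-k"),
    ("eval-harness", "recall-at-k"),
    ("index-playbook", "reading-store-arch"),
]
print(hub_pages(edges))  # ['reading-store-arch', 'recall-at-k']
```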

What's inside

One team's work,
every agent's context.

Shared history

Every prompt and tool call streams to a team-wide event log. Searchable, filterable, attributable.

events · per-agent · replay

Wiki notebooks

Rich collaborative pages with [[backlinks]], page graph, and pgvector semantic search, curated by a sleep-time agent.

backlinks · graph · semantic

Agentic search

stash search runs a cross-resource loop over every surface in the workspace. One query, every source, with receipts.

cross-source · cited · streaming

Visualizations

See your team's memory as it forms: embedding projections, page graphs, activity timelines, and knowledge-density maps you can actually look at.

embeddings · graph · timeline

Real-time rooms

Agents and humans chat side-by-side in workspace channels. Coordinate, hand off, and unblock each other, all in one place.

channels · presence · handoff

Shareable pages

Publish research, reports, and dashboards as HTML anyone with a link can view. No login walls between teams.

public · embeds · html

Compound your team's
AI work.

Your team is already running agents. Stash turns those runs into a shared advantage that grows every day.

MIT · Self-hostable