Eazo Agent Kit

Memory: GUMem

Your agent forgets too much.
We fixed that.

Most agents have goldfish memory — they forget the user across sessions. GUMem gives you a memory layer that captures chat + behavior + intent, with white-box inspectability and per-event audit. Every memory write reviewed. Every recall traced.

The choice

Why it exists

Without it: the hard way

Without it, every conversation starts cold. Users repeat their preferences, retell their history, re-explain their context. Worse, the agent never learns from behavior — clicks, searches, and tool calls vanish the moment the session ends. Your "intelligent assistant" is a stateless function.

With it: production-grade

GUMem gives the agent a real memory. Dual-track: conversation + behavior. Three layers of decay-aware compression: Facts → Summary → Recall. Every write inspectable, every recall traced to source. Your agent actually gets to know your user.

Memory sources

GUMem remembers what users say, and what they actually do.

Chat alone is not memory. GUMem captures the conversation, the behavior around it, and the tool outcomes that reveal intent. The result is context the agent can inspect, govern, and reuse across sessions.

2 memory tracks · 3 compression layers · session continuity

GUMem: white-box memory

Chat: messages
Browser: clicks + visits
Search: queries
Email: threads
CRM: accounts
Tools: calls + results

Recall workflow

A new session can start with the context the user already earned.

GUMem turns past messages and actions into inspectable memory objects, then retrieves the right mix of short-term and long-term context for the next answer.

New session

Can you compare this vendor against the one I liked last week?

No pasted context. No repeated briefing.
Memory retrieval
Facts

User prefers concise answers

Summary

Compares vendors and cares about deployment risk

Recall

Use concise context in the next answer

Agent answer

Context-aware response

Compared against Acme from last week. This vendor is cheaper, but weaker on audit export and deployment controls.

source: chat · source: clickstream
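The retrieval step above can be sketched in plain Python. This is a conceptual model, not the GUMem SDK (the real API lives in the docs); the `Memory` class and `format_context` helper here are hypothetical names for illustration.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    layer: str   # "facts" | "summary" | "recall"
    text: str
    source: str  # provenance tag, e.g. "chat" or "clickstream"

def format_context(memories: list[Memory]) -> str:
    """Group memories by layer and tag each line with its source,
    so the prompt context stays traceable back to the raw event."""
    lines = []
    for layer in ("facts", "summary", "recall"):
        picked = [m for m in memories if m.layer == layer]
        if picked:
            lines.append(f"[{layer}]")
            lines.extend(f"- {m.text} (source: {m.source})" for m in picked)
    return "\n".join(lines)

store = [
    Memory("facts", "User prefers concise answers", "chat"),
    Memory("summary", "Compares vendors and cares about deployment risk", "clickstream"),
]
context = format_context(store)
print(context)
```

Because every line carries its source tag, the agent's answer can cite where each piece of context came from.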

Memory governance

Memory that can be inspected before it influences the model.

GUMem makes memory reviewable by design. Sanitize before write, approve important updates, and trace every recall back to the raw event that created it.

Sanitize: clean before write

Strip secrets, normalize entities, and block unsafe memory entries.

Review: approve important updates

Keep high-impact propositions visible before they shape future answers.

Trace: every recall has provenance

Show the source event, confidence, and time decay behind each memory.
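A sanitize-before-write check can be as small as the sketch below. This is illustrative only, not the GUMem hook API: the `before_add` name mirrors the stage described on this page, and the secret patterns are placeholders you would extend for your own environment.

```python
import re

# Patterns we refuse to persist; illustrative only, extend for your own secrets.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # API-key-like tokens
    re.compile(r"\b\d{13,16}\b"),        # card-number-like digit runs
]

def before_add(entry: str):
    """Return a cleaned entry, or None to block the write entirely."""
    for pat in SECRET_PATTERNS:
        if pat.search(entry):
            return None  # unsafe: never reaches memory
    return entry.strip()

print(before_add("  prefers dark mode  "))     # kept, whitespace normalized
print(before_add("my key is sk-" + "a" * 24))  # blocked
```

Blocking returns `None` rather than a redacted string, so a risky entry never enters memory at all.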

How it works

Memory production line: WHAT HAPPENED → WHO THEY ARE → CONTEXT
Messages: conversation
Action Log: clicks · searches · tools
  1. Facts: raw events → summarized facts (prefers light · size 42 · runs at dawn)
  2. Summary: facts → inferred user profile (morning runner · cushion fan · brand-loyal)
  3. Recall: profile → formatted context (short / mid / long-term)

LLM: prompt with context
WebHooks: intercept at any stage. Sanitize, inject rules, sync to your audit pipeline.
before_add: sanitize before write
before_llm: inject rules into prompt
after_llm: sync to audit / CRM
  1. Capture: messages and action logs flow into GUMem as raw events.
  2. Facts: raw events distill into summarized facts (entities, preferences, time ranges).
  3. Summary: facts are inferred into a user profile (themes, traits, confidence scores).
  4. Recall: a query returns short-term, mid-term, and long-term context, formatted for the prompt.
  5. WebHooks at every stage let you govern, inject rules, or sync to CRM.
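The Facts → Summary → Recall production line can be sketched as a decay-aware pipeline. This is a minimal conceptual model, not GUMem internals: the `decayed_confidence` function, the 30-day half-life, and the 0.3 cutoff are all assumptions chosen for illustration.

```python
def decayed_confidence(confidence, age_days, half_life_days=30.0):
    """Older evidence counts less: exponential decay with a configurable half-life."""
    return confidence * 0.5 ** (age_days / half_life_days)

# Stage 1 - Facts: raw events become scored observations.
events = [
    {"text": "clicked size 42", "age_days": 2, "confidence": 0.9},
    {"text": "searched 'light trail shoes'", "age_days": 80, "confidence": 0.8},
]
facts = [
    {"text": e["text"], "score": decayed_confidence(e["confidence"], e["age_days"])}
    for e in events
]

# Stage 2 - Summary: keep only facts that survive decay.
profile = [f["text"] for f in facts if f["score"] > 0.3]

# Stage 3 - Recall: format the surviving profile for the prompt.
context = "Known about user: " + "; ".join(profile)
print(context)
```

The 80-day-old search falls below the cutoff and drops out of the profile, while the recent click survives: this is the sense in which the compression layers are decay-aware.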

What you can do

Three capabilities. One SDK call away.

Cross-session recall

Open a new session and the agent already knows your user. Short, mid, long-term context — formatted for the LLM, ready to drop into the system prompt.

See it in docs →

Dual-track memory

Conversation and behavior captured separately. The agent learns from clicks, filters, and tool calls — not just what users type into the chat box.

See it in docs →
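Dual-track capture can be pictured as two separate stores. A minimal sketch, assuming nothing about the real SDK: the `DualTrackMemory` class and its method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DualTrackMemory:
    """Conversation and behavior kept in separate tracks, so recall can
    weight what the user said differently from what the user did."""
    chat: list = field(default_factory=list)
    behavior: list = field(default_factory=list)

    def record_message(self, text):
        self.chat.append(text)

    def record_action(self, kind, detail):
        self.behavior.append(f"{kind}: {detail}")

mem = DualTrackMemory()
mem.record_message("I want something lightweight")
mem.record_action("click", "filter=weight_asc")
mem.record_action("search", "lightweight trail shoes")
print(len(mem.chat), len(mem.behavior))
```

Keeping the tracks separate is what lets a later compression stage notice when behavior (repeated weight filters) confirms or contradicts what was typed.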

WebHook governance

Three stages — sanitize before write, inject rules before the LLM, sync to your audit pipeline after. Govern memory before it ever touches the model.

See it in docs →
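The three-stage hook model can be sketched as a tiny dispatcher. This is not the GUMem WebHook API (see the docs for that); `register` and `emit` are hypothetical names, and the handlers below are toy examples of each stage.

```python
from collections import defaultdict

hooks = defaultdict(list)

def register(stage, fn):
    """Attach a handler to one of the three stages."""
    hooks[stage].append(fn)

def emit(stage, payload):
    """Run every handler registered for a stage, threading the payload through."""
    for fn in hooks[stage]:
        payload = fn(payload)
    return payload

# before_add: sanitize before the write lands in memory
register("before_add", lambda entry: entry.replace("SECRET", "[redacted]"))
# before_llm: inject house rules into the prompt
register("before_llm", lambda prompt: "Be concise.\n" + prompt)
# after_llm: sync the final answer to an audit sink (here: a plain list)
audit_log = []
register("after_llm", lambda answer: (audit_log.append(answer), answer)[1])

stored = emit("before_add", "token=SECRET")
prompt = emit("before_llm", "Compare vendors.")
answer = emit("after_llm", "Vendor A is cheaper.")
```

Because each handler returns the (possibly modified) payload, hooks compose: a second `before_add` handler would see the already-sanitized entry.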

Code samples and the full API live in the docs. This page tells you why; docs tell you how.