
The pattern

Use evolving memory when facts change over time and you need both the current truth and a historical trail. Good candidates:
  • Content strategy (what is working right now)
  • Active product priorities
  • Temporary experiments and their outcomes
  • Current customer state or lifecycle stage

Example: LinkedIn content strategy

A LinkedIn post generator that adapts to a changing strategy.

Step 1 — Save the initial strategy:
curl -X POST https://api.memcontext.in/api/memories \
  -H "Content-Type: application/json" \
  -H "X-API-Key: mc_your_key" \
  -d '{
    "content": "Long-form LinkedIn posts with personal stories are performing best right now",
    "category": "fact",
    "project": "linkedin-generator"
  }'
Step 2 — The strategy evolves; save a new memory:
curl -X POST https://api.memcontext.in/api/memories \
  -H "Content-Type: application/json" \
  -H "X-API-Key: mc_your_key" \
  -d '{
    "content": "Short hook-first posts are now outperforming long-form content on LinkedIn",
    "category": "fact",
    "project": "linkedin-generator"
  }'
MemContext automatically classifies the second memory as an update to the first, creating a version chain. Search now returns only the current truth.

Step 3 — Save a lesson:
curl -X POST https://api.memcontext.in/api/memories \
  -H "Content-Type: application/json" \
  -H "X-API-Key: mc_your_key" \
  -d '{
    "content": "Posts with a question hook in the first line get 3x more engagement",
    "category": "decision",
    "project": "linkedin-generator"
  }'
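The supersession behavior described above can be sketched in a few lines. This is an illustrative model only, not MemContext's implementation; all names here (`Memory`, `Store`, `save`, `search`, `history`) are hypothetical:

```python
# Illustrative sketch of a version chain with current-truth filtering.
# Not MemContext's code; every name here is hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Memory:
    content: str
    is_current: bool = True
    supersedes: Optional["Memory"] = None

class Store:
    def __init__(self) -> None:
        self._memories: list[Memory] = []

    def save(self, content: str, updates: Optional[Memory] = None) -> Memory:
        mem = Memory(content)
        if updates is not None:
            updates.is_current = False   # old version leaves search results
            mem.supersedes = updates     # ...but stays reachable via history
        self._memories.append(mem)
        return mem

    def search(self) -> list[str]:
        # Search returns only the current truth
        return [m.content for m in self._memories if m.is_current]

    def history(self, mem: Memory) -> list[str]:
        # Walk the version chain, newest first
        chain: list[str] = []
        node: Optional[Memory] = mem
        while node is not None:
            chain.append(node.content)
            node = node.supersedes
        return chain

store = Store()
v1 = store.save("Long-form posts with personal stories perform best")
v2 = store.save("Short hook-first posts now outperform long-form", updates=v1)
```

After the second save, `store.search()` returns only the hook-first strategy, while `store.history(v2)` still yields both versions, newest first.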

Using validUntil for temporary knowledge

For time-bounded information, set validUntil so it is automatically excluded from retrieval after it expires:
{
  "content": "Running a 2-week A/B test on carousel vs single-image posts",
  "category": "context",
  "project": "linkedin-generator",
  "validUntil": "2026-04-25T00:00:00Z"
}
After April 25, 2026, this memory no longer appears in search results.
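The expiry check can be sketched at retrieval time. The `validUntil` field name comes from the API above; everything else here is a hypothetical illustration, not MemContext's code:

```python
# Sketch of validUntil-based expiry filtering (illustrative only).
from datetime import datetime, timezone

def is_retrievable(memory: dict, now: datetime) -> bool:
    """Return False once the memory's validUntil timestamp has passed."""
    valid_until = memory.get("validUntil")
    if valid_until is None:
        return True  # no expiry set: always retrievable
    # Parse the ISO-8601 timestamp; a trailing "Z" means UTC
    expiry = datetime.fromisoformat(valid_until.replace("Z", "+00:00"))
    return now < expiry

memory = {
    "content": "Running a 2-week A/B test on carousel vs single-image posts",
    "validUntil": "2026-04-25T00:00:00Z",
}
```

Checked against April 24 the memory is retrievable; checked against April 26 it is filtered out.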
The full loop

  1. Save the current working strategy as a fact or decision
  2. Generate content by combining profile context and search results
  3. Observe outcomes
  4. Save lessons as new memories (decision or context)
  5. Submit feedback on memories that were useful or stale
  6. When strategy changes, save a new memory — the system handles supersession automatically
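The six steps above can be sketched end to end. The helper functions (`save_memory`, `search_memories`, `send_feedback`) are hypothetical local stubs standing in for the HTTP calls shown earlier, so the sketch is self-contained:

```python
# Self-contained sketch of the loop; these stubs are not MemContext's client API.
memories: list[dict] = []

def save_memory(content: str, category: str) -> dict:
    mem = {"content": content, "category": category, "isCurrent": True}
    memories.append(mem)
    return mem

def search_memories() -> list[dict]:
    # Step 2 pulls only current memories into the generation context
    return [m for m in memories if m["isCurrent"]]

def send_feedback(mem: dict, useful: bool) -> None:
    mem["feedback"] = "useful" if useful else "stale"

# 1. Save the current working strategy
strategy = save_memory("Short hook-first posts outperform long-form", "fact")
# 2. Generate content from the current truth (generation itself elided)
context = search_memories()
# 3-4. Observe the outcome, then save the lesson
lesson = save_memory("Question hooks get 3x more engagement", "decision")
# 5. Feed back which memory proved useful
send_feedback(strategy, useful=True)
# 6. When strategy changes, a new save supersedes the old one (handled server-side)
```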

Why this works

You do not need fine-tuning or retraining for this class of problem. You need:
  • Current truth (served by isCurrent filtering)
  • Historical trail (accessible via the history endpoint)
  • Retrieval that handles both semantic and exact-match queries (hybrid search)
  • A feedback loop (via memory_feedback)
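The hybrid-search point is worth unpacking: semantic similarity alone can miss exact terms, and keyword matching alone misses paraphrases. A toy blend of the two scores (illustrative only; this is not MemContext's ranking function, and the `alpha` weight is an assumption):

```python
# Toy hybrid ranking: blend a semantic score with exact keyword overlap.
def keyword_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query: str, docs: list[str],
                semantic_scores: list[float], alpha: float = 0.5) -> list[str]:
    # alpha weights the (precomputed) semantic score against keyword overlap
    scored = [
        (alpha * semantic_scores[i] + (1 - alpha) * keyword_score(query, d), d)
        for i, d in enumerate(docs)
    ]
    return [d for _, d in sorted(scored, key=lambda p: p[0], reverse=True)]

docs = [
    "Short hook-first posts now outperform long-form",
    "Posts with a question hook get 3x more engagement",
]
ranking = hybrid_rank("question hook engagement", docs, semantic_scores=[0.4, 0.9])
```

Here the exact-match overlap on "question hook engagement" lifts the second document to the top even before its higher semantic score is considered.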