Key Concepts in MemFuse

MemFuse provides a sophisticated memory architecture for AI agents, enabling human-like interactions and evolutionary learning. This document explains the core concepts that form the foundation of the framework.

What is a Memory?

In MemFuse, a memory is fundamentally owned by an AI agent. Beyond that ownership, a memory can be created in or attached to several optional contexts, offering extensive flexibility:

  • Agent-only context: Memories injected directly into an agent without interaction.
  • Conversation context: Episodic memories created during interactions between agents and users.
  • User context: Memories tied specifically to interactions or information from individual users.
  • Multi-user context: Shared memories representing common knowledge or interactions involving multiple users.
  • Agent-to-agent sharing: Memories shared directly between AI agents without a conversational intermediary, enabling efficient agent-to-agent communication.

This comprehensive setup allows agents to dynamically manage diverse forms of memory, creating a rich, layered context for meaningful AI interactions.
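
The sketch below illustrates one way these contexts could be modeled. The Memory class and its scope fields are assumptions made for illustration, not MemFuse's actual API:

from pydantic import BaseModel, Field
from typing import List, Optional
from uuid import UUID, uuid4
 
class Memory(BaseModel):
    """Illustrative sketch: a memory owned by an agent, with optional scopes."""
    id: UUID = Field(default_factory=uuid4)
    agent_id: str                            # every memory is owned by an agent
    content: str
    conversation_id: Optional[UUID] = None   # set for conversation-scoped memories
    user_ids: List[str] = []                 # zero, one, or several users
    shared_with_agent_ids: List[str] = []    # direct agent-to-agent sharing
 
# Agent-only context: injected directly, with no conversation or user attached
seed = Memory(agent_id="agent456", content="Answer concisely and cite sources.")
 
# Multi-user context: shared knowledge involving two users
shared = Memory(
    agent_id="agent456",
    content="The team agreed to ship the docs update on Friday.",
    user_ids=["user123", "user789"],
)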

Types of Memory

MemFuse deals with various kinds of memories:

Semantic Memory: Facts and Knowledge

MemFuse uses a Knowledge Graph to manage facts and knowledge; the graph updates itself as new facts are added and reconciles conflicts when contradictory facts appear. Semantic Memory also includes:

  • Profiles: Store what the agent remembers about individual users
  • Personalities: Represent the agent's personality, shaped by past system prompts and by what the agent itself has said

These components allow developers to create more human-like agents that evolve with the users they interact with. For example, if an agent receives a system prompt to act like Doraemon, its personality will adopt those characteristics. The agent then maintains that personality, which can only be updated gradually, unless the Personality is explicitly deleted.
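
As a rough sketch of the reconciliation idea (the Fact and KnowledgeGraphSketch classes below are hypothetical illustrations, not MemFuse's Knowledge Graph API), a newer fact about the same subject and attribute can supersede an older, conflicting one:

from pydantic import BaseModel, Field
from datetime import datetime
from typing import Dict
 
class Fact(BaseModel):
    """Hypothetical fact keyed by subject and attribute, e.g. ('Lex', 'favorite_editor')."""
    subject: str
    attribute: str
    value: str
    timestamp: datetime = Field(default_factory=datetime.now)
 
class KnowledgeGraphSketch(BaseModel):
    """Illustrative reconciliation: the most recent statement of a fact wins."""
    facts: Dict[str, Fact] = {}
 
    def add_fact(self, fact: Fact) -> None:
        key = f"{fact.subject}:{fact.attribute}"
        existing = self.facts.get(key)
        # Reconcile conflicts by keeping the newer statement
        if existing is None or fact.timestamp >= existing.timestamp:
            self.facts[key] = fact
 
kg = KnowledgeGraphSketch()
kg.add_fact(Fact(subject="Lex", attribute="favorite_editor", value="Vim"))
kg.add_fact(Fact(subject="Lex", attribute="favorite_editor", value="VS Code"))  # supersedes the earlier fact
assert kg.facts["Lex:favorite_editor"].value == "VS Code"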

Episodic Memory: Past Experiences

Episodic memory preserves successful interactions as learning examples that guide future behavior. Unlike semantic memory, which stores facts, episodic memory captures the full context of an interaction: the situation, the thought process that led to success, and why that approach worked. These memories help the agent learn from experience, adapting its responses based on what has worked before.

Reflective Memory: Higher-Level Understanding

Reflective memory synthesizes information from other memory types to create higher-level insights, patterns, and abstractions. It represents the agent's ability to contemplate its own knowledge and experiences, similar to how humans reflect on their past to form deeper understanding. Through reflective memory, the agent can:

  • Generate insights by connecting information across different memories
  • Recognize patterns in user behavior and preferences
  • Perform advanced reasoning about complex situations
  • Resolve conflicts between contradictory memories

This introspective capability allows agents to continuously evolve their understanding and provide more contextually appropriate responses.

Levels of Memory: Three-Level Memory Architecture

MemFuse employs a hierarchical three-level structure designed to efficiently handle context, knowledge, and insights:

Level 0 – Conversations (Verbatim Memory)

Stores the exact transcript of interactions between users and the AI, enabling immediate retrieval for reference and clarity.

from pydantic import BaseModel, Field
from datetime import datetime
from typing import Literal, Optional, List
from uuid import UUID, uuid4
 
class Message(BaseModel):
    """Represents a single message in a conversation."""
    id: UUID = Field(default_factory=uuid4)
    timestamp: datetime = Field(default_factory=datetime.now)
    sender_id: str
    sender_type: Literal["user", "agent"]
    content: str
 
class Conversation(BaseModel):
    """Level 0 - Verbatim transcript of a conversation."""
    conversation_id: UUID
    messages: List[Message]
 
    def retrieve_recent(self, n: int = 10) -> List[Message]:
        """Get the n most recent messages."""
        return self.messages[-n:]
 
# Example
chat = Conversation(
    conversation_id=UUID("f8d7e6c5-a4b3-2c1d-0e9f-8a7b6c5d4e3f"),
    messages=[
        Message(
            sender_id="user123",
            sender_type="user",
            content="How does the memory system in MemFuse work?"
        ),
        Message(
            sender_id="agent456",
            sender_type="agent",
            content="MemFuse employs a three-level memory architecture that includes Conversations, Semantic & Episodic Memories, and Reflective Memory."
        )
    ]
)
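
Continuing this example, the most recent message can be pulled back verbatim:

latest = chat.retrieve_recent(1)[0]
print(latest.sender_type, latest.content)
# agent MemFuse employs a three-level memory architecture ...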

Level 1 – Semantic & Episodic Memories

  • Semantic Memory: Stores structured facts, concepts, and general knowledge derived from past interactions, forming a knowledge base for quick, accurate reference.
  • Episodic Memory: Retains contextual recollections of events, interactions, and situations from past sessions, allowing the AI to recall specifics such as who, what, where, and when.

class Profile(BaseModel):
    """Semantic Memory - User profile information."""
    user_id: str
    name: str
    preferred_name: str
    communication_preferences: str
    interests: List[str]
    mentioned_entities: List[str] = []
    relationship_context: str = ""
 
class Personality(BaseModel):
    """Semantic Memory - Agent's personality traits."""
    agent_id: str
    character_traits: List[str]
    behavioral_tendencies: List[str]
    language_style: str
    response_patterns: List[str]
 
class KnowledgeNode(BaseModel):
    """Semantic Memory - A fact or piece of knowledge."""
    id: UUID = Field(default_factory=uuid4)
    fact: str
    confidence: float = 1.0
    source_message_id: Optional[UUID] = None
    related_entities: List[str] = []
    timestamp: datetime = Field(default_factory=datetime.now)
 
class Episode(BaseModel):
    """Episodic Memory - A memorable interaction or experience."""
    id: UUID = Field(default_factory=uuid4)
    timestamp: datetime = Field(default_factory=datetime.now)
    context: str = Field(..., description="The situation and relevant context")
    observations: str = Field(..., description="What was observed about the interaction")
    thought_process: str = Field(..., description="Agent's reasoning during the interaction")
    actions: str = Field(..., description="What the agent did in response")
    outcome: str = Field(..., description="The result and why it was successful or not")
    importance: float = 0.5  # How important this episode is (0.0 to 1.0)
 
# Examples
user_profile = Profile(
    user_id="user123",
    name="Alexander",
    preferred_name="Lex",
    communication_preferences="Casual, witty communication with relevant emojis",
    interests=["AI", "Machine Learning", "Memory systems"]
)
 
agent_personality = Personality(
    agent_id="agent456",
    character_traits=["helpful", "friendly", "knowledgeable"],
    behavioral_tendencies=["provides detailed explanations", "asks clarifying questions"],
    language_style="conversational yet precise",
    response_patterns=["acknowledges user concerns", "offers solutions"]
)
 
knowledge_fact = KnowledgeNode(
    fact="Lex prefers documentation with visual diagrams",
    related_entities=["Lex", "documentation", "diagrams"]
)
 
memory_episode = Episode(
    context="User asked about adding diagrams to the documentation",
    observations="User was enthusiastic about including Mermaid diagrams",
    thought_process="Adding visualizations will improve understanding of complex memory concepts",
    actions="Added three Mermaid diagrams to illustrate key memory concepts",
    outcome="User was pleased with the visual representation and requested further enhancements"
)
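
As one illustrative way to let such episodes guide future behavior (the ranking below is an assumption, not MemFuse's retrieval strategy), episodes can be ranked by importance when assembling context for a new request, reusing the Episode model above:

def select_episodes(episodes: List[Episode], k: int = 3) -> List[Episode]:
    """Illustrative ranking: surface the most important episodes first."""
    return sorted(episodes, key=lambda e: e.importance, reverse=True)[:k]
 
context_examples = select_episodes([memory_episode])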

Level 2 – Reflective Memory

Synthesizes information from Levels 0 and 1, creating higher-level insights, patterns, and abstractions. This enables advanced reasoning, conflict resolution among memories, and generation of informed, context-aware decisions. Reflective Memory represents the agent's ability to contemplate its own knowledge and experiences, similar to how humans reflect on their past to form deeper understanding.

class Insight(BaseModel):
    """A high-level insight derived from lower-level memories."""
    id: UUID = Field(default_factory=uuid4)
    timestamp: datetime = Field(default_factory=datetime.now)
    title: str
    description: str
    source_memories: List[UUID] = Field(..., description="IDs of memories that contributed to this insight")
    confidence: float = 0.8
    reasoning: str = Field(..., description="How this insight was derived")
 
class PatternRecognition(BaseModel):
    """A pattern identified across multiple memories."""
    id: UUID = Field(default_factory=uuid4)
    pattern_name: str
    description: str
    supporting_evidence: List[UUID] = Field(..., description="Memory IDs that support this pattern")
    implications: str = Field(..., description="What this pattern suggests for future interactions")
 
class ReflectiveMemory(BaseModel):
    """Level 2 - The agent's deep understanding derived from other memories."""
    insights: List[Insight] = []
    patterns: List[PatternRecognition] = []
 
    def retrieve_relevant(self, query: str) -> List[Insight]:
        """Retrieve insights relevant to a query (simplified example)."""
        # In a real implementation, this would use semantic search
        return [
            i for i in self.insights
            if query.lower() in i.title.lower() or query.lower() in i.description.lower()
        ]
 
# Example
reflective = ReflectiveMemory(
    insights=[
        Insight(
            title="Lex's Documentation Preferences",
            description="Lex consistently prefers documentation that balances visual elements with concise explanations",
            source_memories=[
                UUID("a1b2c3d4-e5f6-7a8b-9c0d-1e2f3a4b5c6d"),  # A chat message ID
                UUID("b2c3d4e5-f6a7-8b9c-0d1e-2f3a4b5c6d7")   # An episodic memory ID
            ],
            reasoning="Analysis of Lex's positive responses to visual documentation and requests for clarification suggests a preference for balanced visual-textual learning"
        )
    ],
    patterns=[
        PatternRecognition(
            pattern_name="Visual Learning Preference",
            description="Lex responds more positively to information presented with diagrams",
            supporting_evidence=[
                UUID("c3d4e5f6-a7b8-9c0d-1e2f-3a4b5c6d7e8"),
                UUID("d4e5f6a7-b8c9-0d1e-2f3a-4b5c6d7e8f9")
            ],
            implications="Future complex explanations should include visual elements to improve engagement"
        )
    ]
)
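
Querying the reflective layer then surfaces the matching insights:

matches = reflective.retrieve_relevant("documentation")
for insight in matches:
    print(f"{insight.title} (confidence: {insight.confidence})")
# Lex's Documentation Preferences (confidence: 0.8)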