
Installation

Install and set up the MemFuse Core Server and Python SDK

Installing MemFuse

MemFuse consists of two main components: the MemFuse Core Server (backend services) and the MemFuse Python SDK (client library). This guide covers installation and setup for both components.
MemFuse Core Server Setup

Prerequisites

  • Python 3.10 or higher
  • Poetry (recommended) or pip

Installation Steps

  1. Clone the repository:

    git clone https://github.com/memfuse/memfuse.git
    cd memfuse
  2. Install dependencies and run the server:

    Using Poetry (Recommended)

    poetry install
    poetry run memfuse-core

    Using pip

    pip install -e .
    python -m memfuse_core

The server will start and be available at http://localhost:8000 by default.

Python SDK Installation

To use MemFuse in your applications, install the Python SDK from PyPI:

pip install memfuse

Alternative Installation Methods

Using Poetry:

poetry add memfuse

Development Version:

For the latest features from the development branch:

pip install git+https://github.com/memfuse/memfuse-python.git

Verifying Installation

1. Check Server Status

Ensure your MemFuse server is running by visiting http://localhost:8000 in your browser or using curl:

curl http://localhost:8000/api/v1/health

2. Basic Connection Test

Test the complete setup with a simple example:

from memfuse import MemFuse
from memfuse.llm import OpenAI  # drop-in OpenAI client wrapper (not exercised in this test)
import os
 
# Initialize the MemFuse client
memfuse_client = MemFuse(
    # base_url=os.getenv("MEMFUSE_BASE_URL"),  # Defaults to http://localhost:8000
)
 
# Create a memory scope
memory = memfuse_client.init(
    user="alice",
    # agent="agent_default",
    # session=<randomly-generated-uuid>
)
 
print("MemFuse setup completed successfully!")

Configuration Options

Environment Variables

Configure your MemFuse server using environment variables:

  • MEMFUSE_BASE_URL: Server URL (defaults to http://localhost:8000)
  • OPENAI_API_KEY: Your OpenAI API key for LLM integration
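
For example, you can export these variables in your shell before starting the server or running your application. The key value below is a placeholder; substitute your own.

```shell
# Point the SDK at a non-default server location (optional; this is the default)
export MEMFUSE_BASE_URL="http://localhost:8000"

# Required for OpenAI-backed LLM calls through the MemFuse wrapper
export OPENAI_API_KEY="your-openai-api-key"  # placeholder; use your real key
```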

Troubleshooting

Common Issues

  1. Port conflicts: If port 8000 is already in use, configure the server to listen on a different port in your server settings, and point the client at it via MEMFUSE_BASE_URL

  2. Connection errors: Ensure the MemFuse server is running before initializing the client

  3. Import errors: Make sure you've installed the correct package:

    • Server: git clone https://github.com/memfuse/memfuse.git
    • Client SDK: pip install memfuse
  4. Permission issues: Ensure you have proper read/write permissions in the installation directory
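
For the connection-error case, one option is to poll the health endpoint until the server is ready before initializing the client. The sketch below uses only the Python standard library; the `/api/v1/health` path is the endpoint shown earlier, and the function name is illustrative.

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url: str, timeout: float = 10.0, interval: float = 0.5) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Server not up yet (connection refused/timed out); retry shortly.
            time.sleep(interval)
    return False


if __name__ == "__main__":
    if wait_for_server("http://localhost:8000/api/v1/health"):
        print("MemFuse server is up")
    else:
        print("MemFuse server did not respond in time")
```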

Getting Help

  • GitHub Discussions: Join conversations about roadmap, RFCs, and Q&A
  • GitHub Issues: Report bugs and request new features

Next Steps

Once installation is complete:

  1. Explore the Quickstart Guide to begin integrating MemFuse into your applications
  2. Check out Examples for sample implementations showing how to use MemFuse with various LLM providers (OpenAI, Anthropic, Gemini), run synchronous and asynchronous operations, and maintain continuous conversations with memory

Ready to build? MemFuse automatically handles memory storage and retrieval, so you can focus on creating amazing AI experiences! 🚀