Installation
Install and set up the MemFuse Core Server and Python SDK
Installing MemFuse
MemFuse consists of two main components: the MemFuse Core Server (backend services) and the MemFuse Python SDK (client library). This guide covers installation and setup for both components.
MemFuse Core Server Setup
Prerequisites
- Python 3.10 or higher
- Poetry (recommended) or pip
Installation Steps
1. Clone the repository:
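The repository URL used below is the one referenced later in this guide:

```shell
# Clone the MemFuse Core Server repository and enter it
git clone https://github.com/memfuse/memfuse.git
cd memfuse
```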
2. Install dependencies and run the server:
Using Poetry (Recommended)
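A typical Poetry workflow is sketched below. The launch command is an assumption, not confirmed by this guide; check the repository README for the exact entry point:

```shell
# Install all dependencies into a Poetry-managed virtual environment
poetry install

# Launch the server -- the entry-point name below is an assumption;
# consult the repository README for the exact command
poetry run memfuse-core
```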
Using pip
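With plain pip, the equivalent steps might look like the following; again, the launch command at the end is an assumption:

```shell
# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install the server and its dependencies in editable mode
pip install -e .

# Launch the server -- the module name is an assumption; check the README
python -m memfuse
```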
The server will start and be available at http://localhost:8000 by default.
Python SDK Installation
To use MemFuse in your applications, install the Python SDK from PyPI:
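Install the published package with pip:

```shell
pip install memfuse
```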
Alternative Installation Methods
Using Poetry:
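If your project is managed with Poetry, add the SDK as a dependency instead:

```shell
poetry add memfuse
```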
Development Version:
For the latest features from the development branch:
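Assuming the SDK lives in its own repository (the URL and branch below are assumptions; substitute the actual ones), you can install directly from Git:

```shell
# Repository URL and branch are assumptions -- replace with the SDK's actual repo
pip install git+https://github.com/memfuse/memfuse-python.git@main
```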
Verifying Installation
1. Check Server Status
Ensure your MemFuse server is running by visiting http://localhost:8000 in your browser or using curl:
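A basic reachability check with curl (the server root is used here; a dedicated health-check path, if one exists, is not documented in this guide):

```shell
# Request the server root; any HTTP response (even 404) means the server is up
curl -i http://localhost:8000
```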
2. Basic Connection Test
Test the complete setup with a simple example:
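The SDK's own client API is covered in the Quickstart Guide; as a minimal, dependency-free sketch, you can verify the server is reachable using only the Python standard library:

```python
import urllib.request
import urllib.error


def memfuse_reachable(base_url: str = "http://localhost:8000",
                      timeout: float = 5.0) -> bool:
    """Return True if an HTTP server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # Any response below 500 means the server is up and serving
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False


if memfuse_reachable():
    print("MemFuse server is up")
else:
    print("MemFuse server is not reachable -- is it running on port 8000?")
```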
Configuration Options
Environment Variables
Configure your MemFuse server using environment variables:
- MEMFUSE_BASE_URL: Server URL (defaults to http://localhost:8000)
- OPENAI_API_KEY: Your OpenAI API key for LLM integration
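In client code, these variables can be read with the documented default applied (a sketch using only the standard library):

```python
import os

# MEMFUSE_BASE_URL falls back to the documented default;
# OPENAI_API_KEY has no safe default, so it stays None when unset
base_url = os.environ.get("MEMFUSE_BASE_URL", "http://localhost:8000")
openai_key = os.environ.get("OPENAI_API_KEY")

if openai_key is None:
    print("Warning: OPENAI_API_KEY is not set; LLM integration will be unavailable")
print(f"Using MemFuse server at {base_url}")
```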
Troubleshooting
Common Issues
- Port conflicts: If port 8000 is already in use, configure a different port in your server settings.
- Connection errors: Ensure the MemFuse server is running before initializing the client.
- Import errors: Make sure you've installed the correct package:
  - Server: git clone https://github.com/memfuse/memfuse.git
  - Client SDK: pip install memfuse
- Permission issues: Ensure you have proper read/write permissions in the installation directory.
Getting Help
- GitHub Discussions: Join conversations about roadmap, RFCs, and Q&A
- GitHub Issues: Report bugs and request new features
Next Steps
Once installation is complete:
- Explore the Quickstart Guide to begin integrating MemFuse into your applications
- Check out Examples for sample implementations demonstrating how to use MemFuse with various LLM providers (OpenAI, Anthropic, Gemini), perform asynchronous and synchronous operations, and maintain continuous conversations with memory.
Ready to build? MemFuse automatically handles memory storage and retrieval, so you can focus on creating amazing AI experiences! 🚀