Quickstart to the API
Welcome to the Mpalo API Quickstart! This guide shows you how to get a memory-enhanced response from your chosen LLM with a single API call to our platform.
Prerequisites
Before you begin, make sure you have:
- An Mpalo account.
- An API key from an external LLM provider (like OpenAI, Google, Anthropic, etc.) or your own self-hosted model.
- A tool like cURL or Postman to make API calls.
Step 1: Get Your Mpalo API Key
Your Mpalo API key authenticates your requests to our platform.
- Log in to the Mpalo Workbench.
- Navigate to the 'API Keys' section.
- Generate a new key or copy an existing one. Keep this key secure.
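To keep the key out of your source code and shell history, you can load it from an environment variable. This is a minimal Python sketch; the variable name MPALO_API_KEY is just an illustrative convention, not an Mpalo requirement:

```python
import os

# Read the Mpalo API key from the environment instead of hardcoding it.
# (MPALO_API_KEY is an illustrative variable name, not an Mpalo requirement.)
api_key = os.environ.get("MPALO_API_KEY", "")

# These headers are what the curl examples below send.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```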
Step 2: Connect Your External LLM
This is where you tell Mpalo which LLM to use. You only have to do this once per key.
- In the Workbench, navigate to the 'Connections' or 'Integrations' section.
- Securely add the API key for your chosen external LLM (e.g., your OpenAI key). This key is encrypted and stored securely by Mpalo.
Step 3: Make a Memory-Enhanced Chat Completion Call
Now, let's have a conversation. Notice that you only make one call to Mpalo; we handle the rest. In the request body, "connection" specifies which of your connected LLMs to use, and "session_id" identifies the conversation whose memory Palo should maintain.
First, tell Palo something to remember:
curl -X POST https://api.mpalo.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MPALO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "connection": "openai",
    "model": "palo-lite",
    "messages": [
      {"role": "user", "content": "Please remember that my favorite color is blue."}
    ],
    "session_id": "user_conversation_123"
  }'
Mpalo will pass this to your connected LLM, and you'll get a standard response like: "Okay, I will remember that your favorite color is blue."
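If you prefer to make the same call from code, here is a minimal Python sketch using only the standard library. The endpoint and field names come from the curl example above; the helper names (build_payload, chat) are illustrative, not part of an official SDK:

```python
import json
from urllib import request

MPALO_URL = "https://api.mpalo.com/v1/chat/completions"

def build_payload(content, session_id, connection="openai", model="palo-lite"):
    # "connection" picks one of your connected LLMs; "session_id" ties the
    # call to a memory store, so reuse it across related calls.
    return {
        "connection": connection,
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "session_id": session_id,
    }

def chat(api_key, payload):
    # Send one memory-enhanced chat completion request to Mpalo.
    req = request.Request(
        MPALO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "Please remember that my favorite color is blue.",
    "user_conversation_123",
)
# result = chat("YOUR_MPALO_API_KEY", payload)
```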
Now, in a separate API call, ask a question that requires memory. Use the same "session_id" so Palo can access the stored memory:
curl -X POST https://api.mpalo.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MPALO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "connection": "openai",
    "model": "palo-lite",
    "messages": [
      {"role": "user", "content": "What is my favorite color?"}
    ],
    "session_id": "user_conversation_123"
  }'
Expected response:
Because Palo supplied the stored memory as context, the external LLM can now answer correctly:
{
"choices": [
{
"message": {
"role": "assistant",
"content": "You told me your favorite color is blue."
}
}
],
...
}
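If you are handling the response in code, the reply text sits at choices[0].message.content, as shown in the sample above. A minimal Python sketch:

```python
# Pull the assistant's reply out of the (abbreviated) response shown above.
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "You told me your favorite color is blue.",
            }
        }
    ],
}

reply = response["choices"][0]["message"]["content"]
print(reply)  # You told me your favorite color is blue.
```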
How It Works (The Mpalo Magic)
You made one simple API call to Mpalo. Behind the scenes, we did the heavy lifting:
- Palo processed your query using the memory associated with "session_id".
- We constructed a new, enhanced prompt with the relevant context.
- We securely used your connected OpenAI API key to send this enhanced prompt to OpenAI's API.
- We instantly returned their final, memory-aware response to you.
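The steps above can be sketched as a toy pipeline. Everything here is illustrative: an in-memory dict stands in for Palo's memory store, and a stub replaces the real provider call made with your connected API key:

```python
# Toy sketch of the Mpalo flow: look up session memory, build an enhanced
# prompt, and hand it to the connected LLM. All names are illustrative.
MEMORY = {}  # session_id -> list of remembered facts (stands in for Palo)

def remember(session_id, fact):
    # Store a fact under this session's memory.
    MEMORY.setdefault(session_id, []).append(fact)

def enhance_prompt(session_id, user_message):
    # Construct a new, enhanced prompt with the relevant context.
    context = "\n".join(MEMORY.get(session_id, []))
    return f"Known facts about this user:\n{context}\n\nUser: {user_message}"

def call_llm(prompt):
    # Stub for the call to the external provider's API.
    return f"(LLM answer based on: {prompt!r})"

remember("user_conversation_123", "The user's favorite color is blue.")
answer = call_llm(
    enhance_prompt("user_conversation_123", "What is my favorite color?")
)
```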
Next Steps
You've just seen how easy it is to add memory to any LLM. Now you can:
- Explore the different Palo Engines (Palo Bloom, DEEP) for more power.
- Dive deeper into specific endpoints in the full API Documentation.
- Check out Memory Templates for common use cases.