Quickstarts

Quickstart to the API

Welcome to the Mpalo API Quickstarts! This guide will show you how to get a memory-enhanced response from your chosen LLM with a single API call to our platform.

Prerequisites

Before you begin, make sure you have:

  • An Mpalo account.
  • An API key from an external LLM provider (like OpenAI, Google, Anthropic, etc.) or your own self-hosted model.
  • A tool like cURL or Postman to make API calls.

Step 1: Get Your Mpalo API Key

Your Mpalo API key authenticates your requests to our platform.

  1. Log in to the Mpalo Workbench.
  2. Navigate to the 'API Keys' section.
  3. Generate a new key or copy an existing one. Keep this key secure.
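A common pattern is to keep the key in an environment variable rather than hard-coding it into scripts. The variable name MPALO_API_KEY below is our suggestion for this guide, not something the platform requires:

```python
import os

# Read the key from an environment variable (the name is our convention,
# not mandated by Mpalo), falling back to a placeholder so the snippet runs.
api_key = os.environ.get("MPALO_API_KEY", "YOUR_MPALO_API_KEY")

# Standard headers for every Mpalo API request, matching the curl examples.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

print(headers["Authorization"])
```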

Step 2: Connect Your External LLM

This is where you tell Mpalo which LLM to use. You only have to do this once per key.

  1. In the Workbench, navigate to the 'Connections' or 'Integrations' section.
  2. Securely add the API key for your chosen external LLM (e.g., your OpenAI key). This key is encrypted and stored securely by Mpalo.

Step 3: Make a Memory-Enhanced Chat Completion Call

Now, let's have a conversation. Notice how you're only making one call to Mpalo. We handle the rest.

First, tell Palo something to remember. In the request body, the "connection" field specifies which of your connected LLMs should handle the call.


curl -X POST https://api.mpalo.com/v1/chat/completions \
     -H "Authorization: Bearer YOUR_MPALO_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
           "connection": "openai", # Specifies which of your connected LLMs to use
           "model": "palo-lite",
           "messages": [
             {"role": "user", "content": "Please remember that my favorite color is blue."}
           ],
           "session_id": "user_conversation_123"
         }'
        

Mpalo will pass this to your connected LLM, and you'll get a standard response like: "Okay, I will remember that your favorite color is blue."

Now, in a separate API call, ask a question that requires memory. Use the same "session_id" so Mpalo can retrieve what it stored:


curl -X POST https://api.mpalo.com/v1/chat/completions \
     -H "Authorization: Bearer YOUR_MPALO_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
           "connection": "openai",
           "model": "palo-lite",
           "messages": [
             {"role": "user", "content": "What is my favorite color?"}
           ],
           "session_id": "user_conversation_123" # Use the same session_id to access the memory
         }'
        

Expected Intelligent Response:

Because Palo provided the memory, the external LLM can now answer correctly.


{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "You told me your favorite color is blue."
      }
    }
  ],
  ...
}
        
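The two calls above differ only in the message content; the connection, model, and session_id stay the same. As a sketch, here is how you might assemble those request bodies in Python before sending them with your HTTP client of choice (the helper name build_request is ours, not part of any Mpalo SDK):

```python
import json

# Endpoint from the curl examples above.
MPALO_URL = "https://api.mpalo.com/v1/chat/completions"

def build_request(content, session_id, connection="openai", model="palo-lite"):
    """Assemble the JSON body for a memory-enhanced chat completion call."""
    return {
        "connection": connection,   # which of your connected LLMs to use
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "session_id": session_id,   # ties the call to a memory session
    }

# Call 1: store a fact.
store = build_request("Please remember that my favorite color is blue.",
                      "user_conversation_123")

# Call 2: recall it. Reusing the same session_id is what gives the LLM memory.
recall = build_request("What is my favorite color?", "user_conversation_123")

print(json.dumps(store, indent=2))
```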

How It Works (The Mpalo Magic)

You made one simple API call to Mpalo. Behind the scenes, we did the heavy lifting:

  1. Palo processed your query using the memory associated with the session_id you supplied.
  2. We constructed a new, enhanced prompt with the relevant context.
  3. We securely used your connected OpenAI API key to send this enhanced prompt to their API.
  4. We instantly returned their final, memory-aware response to you.
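Conceptually, those four steps look something like the following loop. This is an illustrative sketch of the flow, not Mpalo's actual implementation: memory_store and call_llm stand in for Mpalo's internal memory engine and your connected provider's API.

```python
def handle_chat_completion(request, memory_store, call_llm):
    """Illustrative sketch of a memory-enhanced completion flow."""
    session_id = request["session_id"]

    # 1. Fetch memories previously stored for this session.
    memories = memory_store.get(session_id, [])

    # 2. Build an enhanced prompt that prepends the relevant context.
    context = [{"role": "system", "content": m} for m in memories]
    enhanced_messages = context + request["messages"]

    # 3. Forward the enhanced prompt to the connected LLM (e.g. OpenAI).
    response = call_llm(request["connection"], request["model"],
                        enhanced_messages)

    # 4. Remember this turn's user messages, then return the response.
    for msg in request["messages"]:
        memory_store.setdefault(session_id, []).append(msg["content"])
    return response
```

Because memory_store and call_llm are passed in, you can substitute fakes for both and unit-test the flow without any network traffic.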

Next Steps

You've just seen how easy it is to add memory to any LLM. Now you can: