
Our Commitment to Trust & Ethics

At Mpalo, building trustworthy AI is not an afterthought—it's the foundation of everything we do. We are committed to transparency, fairness, user empowerment, and ethical innovation in AI memory.

Philosophy & Values: Guiding Our Innovation

Ethical AI Development

Mpalo's core philosophy is rooted in the belief that AI should augment human potential, not replace it, and that technology must be developed and deployed responsibly. Our values guide every decision we make, from model development to user interaction:

  • Human-Centricity: We design AI to understand, support, and empower users. Palo's memory is built to be intuitive and beneficial.
  • Transparency: We believe in clear communication about how our AI systems work, how data is used, and the capabilities and limitations of our technology.
  • Privacy by Design: Protecting user data is paramount. We integrate privacy considerations into the entire lifecycle of our products and services. You own and control your data.
  • Fairness & Equity: We are committed to mitigating bias in our AI models and ensuring our technology is accessible and beneficial to all users.
  • Accountability: We take responsibility for the impact of our technology and are committed to continuous improvement and ethical oversight.
  • Innovation with Responsibility: We pursue cutting-edge AI research while ensuring our advancements are aligned with ethical principles and societal well-being. This includes our commitment to the Palo Marketplace, fostering a fair ecosystem for creators.

Data Handling & Transparency: Your Data, Your Control

Transparency in data handling is crucial for trust. At Mpalo, we ensure you have clear information and control over your data when using Palo Bloom, our AI memory system. We do not train our general AI models on your private data or memories without your explicit consent.

Core Principles of Data Management

  • User Ownership & Control: You own the data you input into Palo and the memories it creates for you. You have tools to view, manage, and delete your memories.
  • Explicit Consent for Training: Mpalo does not use your personal or business data to train our core AI models unless you provide explicit, opt-in consent for specific programs.
  • Encryption: All data, including memories, is encrypted both in transit (using TLS/SSL) and at rest (using AES-256 or equivalent).
  • Bring Your Own Vector Store (BYOVS): For enhanced data governance, users (particularly Enterprise tier) can opt to use their own managed vector database, keeping memory data within their designated environment.
  • Private Data Spaces: For commercial applications, this feature (Beta, Pro/Enterprise) ensures secure sharding and isolation of end-user memories, even within Mpalo-managed infrastructure.
  • Document-to-Memory Pipeline: When using this pipeline, raw document content is processed for embedding based on your format string. The original document is not stored by Mpalo unless you choose an explicit storage option. Vectors are placed in your designated vector store (see the sketch after this list).
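To make the pipeline's guarantees concrete, below is a minimal sketch of that flow in Python, assuming a user-supplied format string and a user-designated vector store. All names in it (document_to_memories, fake_embed, InMemoryVectorStore) are illustrative stand-ins rather than Mpalo APIs, and the toy embedding merely stands in for a real embedding model; the point it demonstrates is that only vectors and minimal metadata are written to the store, not the raw document.

```python
# Minimal sketch of a document-to-memory flow, assuming a user-supplied
# format string and a user-designated vector store. All names here
# (document_to_memories, fake_embed, InMemoryVectorStore) are illustrative
# stand-ins, not Mpalo APIs.
from dataclasses import dataclass, field
import hashlib


@dataclass
class InMemoryVectorStore:
    """Stand-in for a user-designated vector store (e.g. one supplied via BYOVS)."""
    vectors: dict = field(default_factory=dict)

    def upsert(self, key: str, vector: list[float], metadata: dict) -> None:
        self.vectors[key] = (vector, metadata)


def fake_embed(text: str, dim: int = 8) -> list[float]:
    """Deterministic toy embedding; a real pipeline would call an embedding model."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]


def document_to_memories(doc_text: str, format_string: str, store: InMemoryVectorStore) -> int:
    """Chunk the document, render each chunk with the user's format string,
    embed it, and write only the vector plus minimal metadata to the store.
    The raw document is not retained here."""
    chunks = [p.strip() for p in doc_text.split("\n\n") if p.strip()]
    for i, chunk in enumerate(chunks):
        rendered = format_string.format(chunk=chunk, index=i)
        store.upsert(
            key=f"doc-chunk-{i}",
            vector=fake_embed(rendered),
            metadata={"index": i, "length": len(chunk)},  # no raw content stored
        )
    return len(chunks)


store = InMemoryVectorStore()
n = document_to_memories("First paragraph.\n\nSecond paragraph.", "Memory {index}: {chunk}", store)
print(f"Stored {n} memory vectors")
```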

Data Handling Across Our Palo Engine Models

| Feature / Model | Palo Mini | Palo Bloom | Palo DEEP | Palo DEEP-Research |
| --- | --- | --- | --- | --- |
| Data Ownership | User | User | User | User |
| Training on User Data | No (without explicit consent) | No (without explicit consent) | No (without explicit consent) | No (without explicit consent, potentially for specific research opt-ins) |
| Data Encryption (Transit & At Rest) | Yes (TLS/SSL, AES-256) | Yes (TLS/SSL, AES-256) | Yes (TLS/SSL, AES-256) | Yes (TLS/SSL, AES-256) |
| User Control (View, Delete Memories) | Full | Full | Full | Full |
| BYOVS Support | Limited (Enterprise) | Limited (Enterprise) | Yes (Pro/Enterprise) | Yes (Enterprise/Research Tier) |
| Private Data Spaces Support | Via Pro/Enterprise account | Via Pro/Enterprise account | Yes (Pro/Enterprise) | Yes (Enterprise/Research Tier) |
| Mode-Specific Handling | - | - | Personalization Mode: organic forgetting curves (sketched below); data primarily used for the individual user's context. Research Mode: prioritizes high-fidelity recall; data may be used for aggregated, anonymized insights if explicitly consented for research purposes. | Research Mode dominant: focus on maximum recall and consistency; data handling subject to specific research agreements and the highest privacy standards. |
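The "organic forgetting curves" used by Palo DEEP's Personalization Mode are not specified on this page; a common way to illustrate the general idea is an exponential (Ebbinghaus-style) retention weight applied at retrieval time. The sketch below reflects that assumption only, with an invented half_life_days parameter and cutoff, and is not Palo DEEP's actual retention formula.

```python
# Illustrative forgetting-curve weighting, assuming simple exponential decay.
# half_life_days and the 0.05 retrieval cutoff are invented parameters for
# illustration; they do not describe Palo DEEP's actual behavior.
import math


def retention_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Weight in [0, 1] that halves every half_life_days."""
    return 0.5 ** (age_days / half_life_days)


def rank_memories(memories: list[tuple[str, float, float]]) -> list[str]:
    """memories: (text, similarity_score, age_days).
    Combine semantic similarity with a decayed recency weight and drop
    memories whose weight has faded below a small threshold."""
    scored = [
        (text, sim * retention_weight(age))
        for text, sim, age in memories
        if retention_weight(age) > 0.05
    ]
    return [text for text, _ in sorted(scored, key=lambda x: x[1], reverse=True)]


print(rank_memories([("prefers dark mode", 0.9, 5), ("old shipping address", 0.8, 400)]))
```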

For detailed information on data processing, retention policies, and your rights, please refer to our Privacy Policy and Terms of Use. Our Safety & Security page provides further details on our security measures.

AI Safety & Bias Mitigation

Proactive Safety Measures

AI safety is integral to our development process. We employ a multi-layered approach to ensure Palo operates reliably and securely:

  • Rigorous Testing: Extensive testing for vulnerabilities, performance anomalies, and unexpected behaviors across diverse scenarios.
  • Content Moderation: Mechanisms to filter and manage harmful or inappropriate content that might be inadvertently stored or generated.
  • Secure Architecture: Building on a secure infrastructure with robust access controls and data protection measures. (See Safety & Security for details).
  • Anomaly Detection: (Primarily Enterprise Tier) Systems to monitor for unusual patterns in memory access or data interaction that could indicate misuse or a security threat (a simple illustration follows this list).
  • Fair Usage Policies: Clear guidelines to prevent abuse and ensure the platform is used responsibly by all.
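As a rough illustration of the anomaly-detection idea, the sketch below flags an hour of unusually heavy memory access against a user's own historical baseline using a simple 3-sigma rule. The function name, counters, and threshold are assumptions chosen for illustration, not Mpalo's monitoring implementation.

```python
# Minimal sketch of access-pattern anomaly detection, assuming per-user
# hourly counts of memory reads. The 3-sigma threshold is an illustrative
# choice, not Mpalo's monitoring policy.
from statistics import mean, stdev


def is_anomalous(hourly_access_counts: list[int], latest_count: int, sigma: float = 3.0) -> bool:
    """Flag the latest hour if it sits more than `sigma` standard deviations
    above the user's own historical baseline."""
    if len(hourly_access_counts) < 24:  # need a baseline before alerting
        return False
    baseline, spread = mean(hourly_access_counts), stdev(hourly_access_counts)
    return latest_count > baseline + sigma * max(spread, 1.0)


history = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14, 11, 13] * 2  # 24 hours of typical reads
print(is_anomalous(history, latest_count=180))  # True: sudden bulk read of memories
```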

Addressing Bias in AI Memory

AI models can inadvertently learn and perpetuate biases present in training data. While Palo's core function is memory rather than generative decision-making based on broad datasets, we take steps to mitigate potential biases in how memories are stored, prioritized, or recalled:

  • Focus on Episodic Data: Palo primarily remembers specific user interactions (episodic memory), reducing reliance on generalized world knowledge that might contain societal biases.
  • User-Controlled Input: Since users largely control the data Palo remembers for them, the initial source of bias is often the input data itself. We encourage users to be mindful of this.
  • Diverse Internal Datasets (for general capabilities): For any foundational capabilities that might draw on broader knowledge, we strive to use diverse and representative datasets and continually refine them.
  • Ongoing Research: We actively research techniques to identify and mitigate biases in memory retrieval and representation, especially for features that summarize or interpret memories.
  • Feedback Mechanisms: We provide channels for users to report instances where they perceive biased behavior, helping us refine our systems.

Our goal is to create an AI memory system that is as fair, objective, and reliable as possible for every user.

Ethical Oversight & Future Commitments

Mpalo is establishing an external Ethics Advisory Council and internal review processes to guide our development and address emerging ethical challenges. We are committed to ongoing dialogue with users, researchers, and policymakers to ensure Mpalo remains a force for good. This includes reinvesting a majority of profits into research focused on consumer-friendly, transparent, and ethical AI.

Our Mission & Reinvestment