Open source · Model-agnostic · Zero vendor lock-in

Run every AI. Lock in to none. One open-source API. Every agent. Every model. All working together on your data.

Claude, OpenAI, Gemini, LLaMA, OpenClaw — all talking to your files, sharing context, building on each other's work. Olympus is the open-source API layer that sits on your existing storage and gives every agent, person, and app a governed, permission-aware way to collaborate — without handing your data to any vendor's cloud.

Claude + GPT + Gemini + LLaMA — together · Your data never leaves your perimeter · Free & open source
olympus api — sitting on your existing storage
🤖
Claude 4.6 Opus
Writing /contracts/Q3_Acme_v2.pdf
WRITE
🧠
GPT-5.5
RAG on /research/market_study/
READ
💡
Gemini 3.1 Pro
Saving memory → /agents/gemini/state
PERSISTED
⚙️
Custom LLaMA agent
Resumed session from checkpoint
RESUMED
🔗
MCP Orchestrator
Routing tool calls via Olympus API
ACTIVE
Olympus API Layer
● Olympus API — on top of your existing storage
847
docs indexed
5
agents live
0 MB
data egressed
🧱
Your storage has no AI API
Your NAS, file servers, and drives are full of critical data — but no agent, no app, no AI model can safely touch it.
🧩
Every agent is an island
Claude can't see what GPT-5.5 wrote. People and agents work in separate silos. Nothing is shared. Nothing persists.
🔒
IT can't govern AI access
Shadow AI is filling gaps your approved tools can't cover. Nobody knows what data agents are touching or why.
⛓️
Locked to one vendor's stack
Whoever owns your AI data layer owns your strategy. Switching models means starting over. That's not a foundation — it's a trap.
The core insight

Every AI. One API.
Your data. Your rules.

You shouldn't have to choose between Claude, GPT-4o, and Gemini. You should run all of them — on your own data, governed by your own rules, with every agent able to see what the others built. Olympus is the open-source API layer that makes that possible, on whatever storage you already have.

  • 🔌
    Works on what you already have
    MacBook, Windows Server, Synology NAS, or enterprise storage from NetApp, Dell, HPE, or Pure Storage. Olympus wraps it all in one clean, secure API — no migration, no rip-and-replace, no new storage contracts.
  • 🤝
    Agents, people, and apps — all collaborating
    For the first time, your human team and your AI agents share the same access layer. Agents read and write through the same governed API as your people. Everything they produce is visible, versioned, and reviewable.
  • 🛡️
    Governed, permission-aware, auditable
    Olympus integrates with Active Directory and SSO. Every API call is permission-checked — agents only access what their user is authorized to see. Every action is logged. IT finally has visibility into what AI is doing with company data.
  • 🔄
    Swap any model. Keep everything.
    Because Olympus is the API layer, you swap Claude for GPT-5.5 tomorrow with a config change. Agent memory, work products, and RAG indexes stay exactly where they are. You own the intelligence layer.
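To make the "swap with a config change" point concrete, here is a minimal sketch in Python. Only the endpoint path and the payload fields (`directoryId`, `question`, `selectedModel`) come from the API example on this page; the helper name and the `localhost:3000` host are illustrative assumptions, not part of the product documentation.

```python
# Sketch: swapping models is a one-field change in the request body.
# Endpoint path and payload fields are taken from the Olympus API example
# on this page; the helper name and host are illustrative assumptions.
import json

OLYMPUS_URL = "http://localhost:3000/api/v1/genai/chat-on-directory"  # assumed host

def chat_on_directory(directory_id: str, question: str, model: str) -> dict:
    """Build the documented request payload for chat-on-directory."""
    return {
        "directoryId": directory_id,
        "question": question,
        "selectedModel": model,
    }

# Same agent logic, different vendor — only `selectedModel` changes.
claude_req = chat_on_directory("contracts_Q3", "Flag all renewal clauses",
                               "claude-3-5-sonnet@anthropic")
gpt_req = chat_on_directory("contracts_Q3", "Flag all renewal clauses", "gpt-4o")

assert claude_req.keys() == gpt_req.keys()  # identical shape across vendors
print(json.dumps(claude_req, indent=2))
```

Because the request shape is identical across vendors, agent memory and RAG indexes referenced by `directoryId` are untouched by the swap.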
Olympus.io — Multi-Agent Architecture
Any Agent · Any Model · Any Vendor: Claude · GPT-4o · Gemini · Mistral · Custom LLM
Olympus.io — Secure AI API Layer: sits on top of your existing storage · no migration · no new hardware
REST API · RAG · Agent Memory · SSO/AD · Audit Logs · MCP · Docker
Model-agnostic · Swap any LLM · Data never moves
Human Team: reviews · approves · directs
MCP Orchestrator: routes tool calls · manages tasks
Your existing storage — untouched, unmoved: MacBook · Windows · NetApp · Dell · HPE · Pure · SMB/NFS · S3
💡
Think of Olympus the way you think of Stripe. Stripe didn't replace banks — it put a clean, secure API on top of the existing financial system. Olympus does the same for your storage: your data stays exactly where it is, and every agent, person, and app gets a governed API to collaborate on it.
Why this matters now

The multi-agent era needs new infrastructure

Single-agent AI is already table stakes. The competitive advantage in 2026 is orchestrating teams of specialized agents that compound each other's work. That requires a substrate none of the model vendors provide.

01 — The access problem

AI API on any storage, instantly

You already have the data. The problem is no AI model, agent, or application has a safe, governed way to reach it. Olympus wraps your existing Linux server, Mac Mini, Windows Server, or enterprise NAS in a clean REST API — and every agent goes live on your real data in minutes, not months.

macbook to netapp — one api
02 — The collaboration problem

People and agents, finally working together

Today your human team and your AI agents operate in completely separate systems. Olympus gives them the same access layer — people browse and edit through familiar interfaces, agents read and write through the API, and everything is visible, versioned, and governed in one place.

human + agent · shared access
03 — The lock-in problem

Model-agnostic by design

Because Olympus is the API layer — not a model vendor's product — you run Claude Opus 4.7, GPT-5.5, Gemini, Qwen, Kimi, DeepSeek, or any custom model through the same interface. When a better model ships next quarter, you swap it in one line. Agent memory and RAG indexes stay exactly where they are.

any model · swap anytime
Vendor independence

Pick the best model
for every job.
Always.

The AI model market changes every 90 days. The enterprise that wins isn't the one that picked the right vendor — it's the one that built an API layer that works with all of them, on their own data, on their own terms.

Agents running on Olympus today
Claude 4.6 · GPT-5.5 · Gemini 3.5 · Mistral Large · LLaMA 4.0 · DeepSeek R4 · Your fine-tune + any model via Ollama
Every agent — regardless of vendor — calls the same Olympus API to access your files, run RAG on your documents, and persist state back to your own storage. The API is model-neutral by design. Switch models mid-project. Run Claude 4.6 and GPT-5.5 simultaneously on the same dataset.
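Running two models simultaneously on the same dataset can be sketched as a simple fan-out: the same question, the same `directoryId`, one request per model. The model IDs follow the `model@vendor` form used in this page's API examples; the `query` function here only builds the documented request body (a stand-in for the real HTTP POST), so the sketch is self-contained.

```python
# Sketch: fan the same question out to several models at once.
# Model IDs follow the "<model>@<vendor>" form from this page's API
# examples; query() is a stand-in for a real POST to the Olympus API.
from concurrent.futures import ThreadPoolExecutor

MODELS = ["claude-3-5-sonnet@anthropic", "gpt-4o", "llama3@ollama"]

def query(model: str) -> dict:
    # A real agent would POST this body to chat-on-directory;
    # here we just build it so the example runs offline.
    return {
        "directoryId": "research_market_study",
        "question": "Summarize pricing trends",
        "selectedModel": model,
    }

with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
    requests_out = list(pool.map(query, MODELS))

# Every request targets the same directory — only `selectedModel` differs.
```

The point of the design: because access is mediated by one API, concurrency across vendors is just concurrency across requests.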
Without Olympus

You're running Claude for contracts, GPT-4o for research, and a custom model for finance. They share nothing. No common API. No shared access to your files. Each starts every session cold. IT has no visibility. Shadow AI is filling the gaps.

With Olympus

Every agent — from any vendor — calls the Olympus API to access your existing storage. One governed access layer. RAG on demand. Permissions enforced. Swap Claude for GPT-5 tomorrow — the API, the memory, the indexes stay exactly as they are. Your data never left the building.

Real API — any agent connects in minutes
POST /api/v1/genai/chat-on-directory
  { "directoryId": "contracts_Q3",
    "question": "Flag all renewal clauses",
    "selectedModel": "claude-3-5-sonnet@anthropic" }
Works identically with gpt-4o, gemini-1.5, llama3@ollama
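The same call from Python's standard library might look like the sketch below. The endpoint path and JSON fields come from the example above; the `localhost:3000` host matches the Docker deploy command shown on this page, and actually sending the request of course requires a running Olympus container.

```python
# Sketch: the chat-on-directory call above, issued from Python's stdlib.
# Endpoint path and JSON fields come from the example on this page;
# localhost:3000 matches the Docker deploy command shown further down.
import json
import urllib.request

body = json.dumps({
    "directoryId": "contracts_Q3",
    "question": "Flag all renewal clauses",
    "selectedModel": "claude-3-5-sonnet@anthropic",
}).encode()

req = urllib.request.Request(
    "http://localhost:3000/api/v1/genai/chat-on-directory",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment against a running container
```

Swapping `selectedModel` to `gpt-4o`, `gemini-1.5`, or `llama3@ollama` leaves every other line unchanged.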
Competitive positioning

Box brings your data to the cloud.
Olympus brings AI to your data.

Cloud content platforms were built for human collaboration. Olympus was built for the world where agents need direct, governed, persistent access to your data — where it already lives.

Box / SharePoint / Google Drive

Your data must migrate to their cloud
Governance controlled by the vendor
No on-premises or air-gapped support
Inference runs on their infrastructure
Agent state is ephemeral — nothing persists
One AI vendor — their model, their terms
Expensive per-seat SaaS pricing
Closed source — no customization

Olympus.io

Data stays on your NAS — zero egress, ever
Your AD, your compliance boundary, your rules
On-prem, VPC, hybrid, or fully air-gapped
NVIDIA GPU inference on your own hardware
Agent state, memories & work products persist
Model-agnostic — Claude, GPT, Gemini, any LLM
Single container — no per-seat SaaS tax
Open source — audit, fork, extend
Open source · Free to deploy

Your storage.
AI-ready in minutes.

Olympus deploys as a single Docker container and connects to whatever storage you already have — a Mac Mini, a Windows file server, or enterprise NAS from NetApp, Dell, HPE, or Pure Storage. From that point on, every agent you deploy has a secure, governed, RAG-enabled API to all of your private data. Nothing moves. Everything connects.

terminal — deploy olympus
# 1. Pull and run — one container
docker run -d \
  -p 3000:3000 \
  -e NAS_HOST=your-nas.company.com \
  -e AD_LDAP_URL=ldap://dc.company.com \
  -e OLLAMA_HOST=http://gpu-server:11434 \
  olympusio/olympus:latest

# 2. Any agent connects immediately
curl -X POST localhost:3000/api/v1/genai/chat-on-directory \
  -d '{ "directoryId": "all_contracts",
        "question": "Flag renewal clauses >$500k",
        "selectedModel": "claude-4.5-sonnet@anthropic" }'

✓ Data stays put. Every agent. Every vendor. Governed.
Works with your existing storage: NetApp ONTAP · Dell PowerScale · HPE Alletra · Pure Storage · Amazon FSxN · Azure NetApp Files · SMB / NFS / NAS · S3-compatible
⚡ Open source. Free to deploy. Forever model-agnostic.

Run every AI.
Lock in to none.

Deploy Olympus once. Every agent you run — Claude, GPT, Gemini, Kimi, DeepSeek, Qwen, LLaMA, whatever ships next year — gets governed, collaborative access to your data. You own the layer. No vendor does.

Scroll to Top