RAMSES Organizational Memory

ROM captures the reasoning behind every decision in your organization.

Not transcripts. Not summaries. Reasoning.

The Problem We Solve

In every company, when a senior employee leaves, roughly 30% of institutional knowledge leaves too. Not just what they knew — that can be documented. What leaves is why decisions were made. The reasoning leaves.

The CFO leaves after 5 years. Someone new comes in. First week: "Why don't we offer monthly billing? Everyone does."

You know you had this conversation. A 3-hour meeting two years ago. Data analyzed. The CFO showed that monthly billing increases churn by 300%. Decision: annual only. It was correct, based on solid data.

But where is that logic? Which meeting? Who remembers the full chain?

Confluence — nothing. Email — maybe, but which of the 50,000? Slack — same.

Your organization either re-does the analysis (time, money, energy), or it makes the wrong decision because it no longer remembers why you chose what you chose.

It's not a documentation problem. Documentation is retroactive, selective, shallow. After a 3-hour meeting someone writes: "We chose annual billing for better retention." That's it. None of the 15 pros and cons. No enterprise exception. No note that monthly was considered but rejected over cash-flow concerns. It captures the result, not the reasoning.

And nobody documents in real time — you document at the end, from memory, tired. Half the nuance disappears.

But the problem is deeper. It's not just "someone left." It's how decisions work in complex organizations when information is diffuse — horizontally between departments and vertically from execution to leadership.

Real example: Customer satisfaction drops 15% in Q3. Sales says it's a product issue — feature X is broken. Product allocates 3 engineers, 2 months, fixes the feature. Satisfaction doesn't improve.

Why? Because if you had followed the full thread, you'd have seen the issue started when Finance changed the invoicing policy in Q2. Clients don't understand the new invoices. Support knows — 200 tickets. But this never reaches Product.

Product attacks the middle of the problem, not the root. This happens constantly. No one sees the full picture. Everyone sees their piece.

ROM fixes this by automatically capturing reasoning from everywhere and connecting the threads.

What ROM Is

ROM is RAMSES Organizational Memory — the platform that connects to all official sources of interaction in a company and captures the reasoning behind every decision.

We're not talking about transcripts or documents. We're talking about reasoning — why decision X was made, which alternatives were considered, what trade-offs were accepted, and what happened afterwards.

ROM connects to meetings, email, chat, ERP, CRM. It ingests each interaction with full context and stores it in RAMSES together with the associated reasoning. RAMSES optimizes retrieval over huge datasets, generating a history of interactions for each POST/ROLE in the organization — not the person, the post.

This is critical: when John (the CFO) leaves, his personal data is deleted. But the reasoning associated with the CFO role remains. Mary (the new CFO) sees the decisions of the post, not John's conversations. GDPR compliance by design.

ROM captures and classifies everything that matters in the organization, including interactions with ERP/CRM apps. But this is not surveillance — only the official sources the organization defines as its legitimate knowledge base. Official meetings, corporate email, ERP transactions. Not casual chat, not private conversations.

What We Sell vs What Is Emergent

✅ THE PRODUCT (what we promise and deliver)

1) RAMSES – Reasoning Memory

You ask: "Why did we decide X?" ROM answers: What (the decision), Why (complete reasoning), When (temporal context), Who (the post that decided).

This works from day 1. It's measurable, demonstrable, sellable.

2) Deep Integrations

Zoom, Gmail, Slack, Odoo, Salesforce, etc. Not Zapier-style surface integration ("connect the API"), but deep integration built for reasoning extraction. We understand the business logic inside each tool. We extract reasoning, not just "an event happened."

This is the clear product: reasoning memory + capture integrations.

⚠️ EMERGENT (we don't promise it, but it can appear)

Process Intelligence:

• Process mining (see what actually happens in the organization)

• Process redesign (identify what should change)

• Process understanding (why things work or don't)

• Good vs risky shortcuts

• Process gaps that cause errors

• Deviations that have real impact

This is NOT a sold product. It's value that emerges naturally from RAMSES. Once you capture reasoning connected in a graph, you CAN do process intelligence. But this requires:

• Time: 3–6+ months of capture for meaningful patterns

• Focus: the customer must pay attention to insights

• Specific resources: process manager, quality team, someone to act

• Joint effort: collaboration between us and the client for interpretation and implementation

Simple analogy: ROM = fitness tracker

The product: captures steps, sleep, heart rate accurately. It delivers correct data about what you did.

Getting fit = Emergent. You CAN get fit using the data. BUT it requires: your decision, your discipline, your consistent effort. The tracker doesn't make you fit. It gives you the information so you can become fit.

ROM = exactly the same:

The product: captures accurate reasoning from all sources

The emergent value: process intelligence IF the organization allocates resources and acts

The RAMSES Philosophy

RAMSES = Reasoning-Augmented Memory with Semantic Embedding System. Not a wrapper over GPT. It's a specific architecture for capturing and retrieving organizational reasoning.

Graph with reasoning nodes

Each node is not raw text — it's an LLM-extracted reasoning unit. A decision, a chain of thought, a context. Nodes connect: decision A led to decision B, which caused problem C. Or: process X, deviation Y, error Z. The graph preserves the entire thread.
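The node-and-edge structure described above can be sketched in a few lines. This is an illustrative toy, not the RAMSES implementation — `ReasoningNode`, `ReasoningGraph`, and the relation labels are hypothetical names; the point is that following typed edges recovers the whole thread from any starting decision.

```python
from dataclasses import dataclass


@dataclass
class ReasoningNode:
    node_id: str
    kind: str        # "decision" | "context" | "problem" | "process"
    summary: str     # the LLM-extracted reasoning unit, not raw text


class ReasoningGraph:
    def __init__(self) -> None:
        self.nodes: dict[str, ReasoningNode] = {}
        # adjacency: node_id -> [(relation, target_id)]
        self.edges: dict[str, list[tuple[str, str]]] = {}

    def add(self, node: ReasoningNode) -> None:
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, [])

    def link(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def thread(self, start: str) -> list[str]:
        """Follow outgoing links breadth-first to recover the full chain."""
        chain: list[str] = []
        frontier, seen = [start], set()
        while frontier:
            nid = frontier.pop(0)
            if nid in seen:
                continue
            seen.add(nid)
            chain.append(self.nodes[nid].summary)
            frontier.extend(dst for _, dst in self.edges.get(nid, []))
        return chain
```

With nodes linked as "decision A led to decision B, which caused problem C," `thread("A")` returns all three reasoning units in order.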

ASU – Atomic Semantic Units

From each reasoning piece, RAMSES uses LLMs to extract base concepts. "pricing decision," "vendor evaluation," "customer complaint." These enable granular semantic retrieval, not full-text matching. When someone asks "why are customers unhappy?", RAMSES finds not only "customer satisfaction" but also "churn," "complaints," "support tickets" — all semantically linked via ASUs.
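The ASU-based expansion can be illustrated with a toy index. In practice the semantic links would come from embeddings; here they're hard-coded so the mechanism is visible. All names (`ASU_LINKS`, `NODE_ASUS`, `retrieve`) are hypothetical — a sketch of the idea, assuming each reasoning node is tagged with its extracted ASUs.

```python
# Semantic links between atomic semantic units (in reality: embedding similarity).
ASU_LINKS: dict[str, set[str]] = {
    "customer satisfaction": {"churn", "complaints", "support tickets"},
    "churn": {"customer satisfaction"},
}

# Each reasoning node tagged with the ASUs the LLM extracted from it.
NODE_ASUS: dict[str, set[str]] = {
    "node-1": {"churn", "pricing decision"},
    "node-2": {"support tickets"},
    "node-3": {"vendor evaluation"},
}


def retrieve(query_asu: str) -> set[str]:
    """Expand the query ASU through semantic links, then match tagged nodes."""
    expanded = {query_asu} | ASU_LINKS.get(query_asu, set())
    return {nid for nid, asus in NODE_ASUS.items() if asus & expanded}
```

A query for "customer satisfaction" reaches the churn and support-ticket nodes even though neither contains the literal phrase — exactly what full-text matching misses.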

Metadata for intelligent retrieval

Context: when, who (post), about what. Connections: what decisions followed, what preceded. Impact: what happened after.

Many metadata fields are predefined (financial, technical, HR, legal), some are LLM-suggested for approval. When the LLM notices a new pattern — "remote work policy decisions" — it proposes a metadata category. Admin approves or rejects.
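The approval workflow above is simple enough to sketch directly. The category names and function signatures are illustrative, not ROM's API: the LLM can only propose; a category becomes usable metadata after an admin decision.

```python
# Predefined categories ship with the product; the LLM may only propose new ones.
APPROVED: set[str] = {"financial", "technical", "hr", "legal"}
PENDING: set[str] = set()


def propose_category(name: str) -> str:
    """An LLM-noticed pattern becomes a pending category until an admin decides."""
    if name in APPROVED:
        return "already approved"
    PENDING.add(name)
    return "pending review"


def review(name: str, approve: bool) -> None:
    """Admin approves or rejects a pending category."""
    PENDING.discard(name)
    if approve:
        APPROVED.add(name)
```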

E-E-A-T ranking

The graph follows E-E-A-T principles:

• Experience: how often this pattern appears in the organization

• Expertise: which post made the decision — CFO > intern for financial matters

• Authority: official decision vs brainstorming session

• Trust: did the decision work — outcome tracking

When retrieval finds 10 reasoning nodes on "pricing strategy," E-E-A-T decides which appear first. The CFO's decision that increased revenue outranks a brainstorming idea that led nowhere.
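The four E-E-A-T signals can be combined into a single ranking score. The weights and the post-expertise table below are purely illustrative assumptions, not ROM's actual ranking function — the sketch only shows how a CFO's official, successful decision would outrank an intern's brainstorming idea.

```python
# Illustrative expertise weights per post; a real system would scope these per domain.
POST_EXPERTISE: dict[str, float] = {"CFO": 1.0, "manager": 0.6, "intern": 0.2}


def eeat_score(frequency: int, post: str, official: bool, outcome_positive: bool) -> float:
    """Combine the four E-E-A-T signals into one ranking score (illustrative weights)."""
    experience = min(frequency / 10, 1.0)       # how often the pattern recurs
    expertise = POST_EXPERTISE.get(post, 0.4)   # which post made the decision
    authority = 1.0 if official else 0.5        # official decision vs brainstorming
    trust = 1.0 if outcome_positive else 0.3    # outcome tracking: did it work
    return 0.2 * experience + 0.3 * expertise + 0.2 * authority + 0.3 * trust
```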

Why this matters technically: This is not keyword search. You ask "why don't we offer monthly billing?" — keyword search finds docs with "monthly" and "billing." RAMSES returns the complete reasoning behind that decision, ranked by E-E-A-T, with all connections (what led to it, what followed, what alternatives were considered).

Real Challenges

Let's be honest — this isn't easy.

Challenge 1: reasoning extraction quality

RAMSES must extract correct reasoning, not keywords or summaries. If someone says in a meeting "we choose vendor A because they have 24/7 support and decent pricing, while vendor B is cheaper but its SLA is poor," the extraction must not be "we chose A" but: "we prioritized reliability (24/7 support + SLA) over cost; vendor B was 20% cheaper but had an insufficient SLA."

This needs advanced prompt engineering, fine-tuning, iteration with real data. Months of work. Not "throw a prompt into GPT and done." It's a craft, refined continuously.
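One ingredient of that craft is forcing the model toward structured reasoning rather than summaries. The prompt below is a simplified sketch of the idea, not ROM's production prompt — the field names (`decision`, `priorities`, `alternatives_rejected`, `trade_offs`) are assumptions chosen to mirror the vendor example above.

```python
EXTRACTION_PROMPT = """\
From the transcript below, extract the decision as structured reasoning,
not a summary. Return JSON with exactly these keys:
  decision              -- what was decided
  priorities            -- what was optimized for (e.g. reliability over cost)
  alternatives_rejected -- options considered and why they lost
  trade_offs            -- what was knowingly given up

Transcript:
{transcript}
"""


def build_prompt(transcript: str) -> str:
    """Fill the extraction template with a meeting transcript."""
    return EXTRACTION_PROMPT.format(transcript=transcript)
```

The real work is in iterating this against real transcripts until "we chose A" reliably becomes "we prioritized reliability over cost."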

Challenge 2: every tool is different

Slack has complex thread structure. Gmail has conversation chains. Odoo has business processes with state machines. Each requires deep understanding — not "call API" but understand the business logic. Months per tool for deep integration.

Challenge 3: LLM costs can explode

A heavy user can generate $50–80/month in LLM costs. You need smart batching, caching, and selective extraction. Otherwise the economics don't work for us or for the customer.
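Caching is the simplest of those levers: never pay the model twice for the same content. A minimal sketch, assuming content-addressed deduplication by hash — `ExtractionCache` and `_call_llm` are hypothetical names, and the real pipeline would add batching and selective extraction on top.

```python
import hashlib


class ExtractionCache:
    """Avoid re-running the LLM on content that was already extracted."""

    def __init__(self) -> None:
        self.store: dict[str, str] = {}
        self.llm_calls = 0  # track spend: each call costs real money

    def extract(self, text: str) -> str:
        key = hashlib.sha256(text.encode()).hexdigest()
        if key not in self.store:
            self.llm_calls += 1
            self.store[key] = self._call_llm(text)
        return self.store[key]

    def _call_llm(self, text: str) -> str:
        # Placeholder for the real model call.
        return f"reasoning({text[:30]})"
```

The same email quoted in five threads, or the same meeting recap forwarded twice, is extracted once.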

Challenge 4: GDPR and privacy by design

Not an afterthought. It's part of the architecture. POST-based storage, not person-based. Reasoning extraction, not conversation recording. Customer control over what's captured, excluded, or deleted. Right-to-be-forgotten done correctly means: delete John's personal data, keep the CFO role's reasoning.

We don't hide challenges. We acknowledge them — because that's what separates good execution from bullshit promises.

Pricing: Transparent and Fair

Simple model: fixed fee per user + LLM usage at real cost with reasonable markup.

Why transparent? Because LLM costs vary a lot. A heavy meeting user (20 meetings/month, 1h each) consumes far more than a light email user. If we set a flat fee, either we overcharge light users or undercharge heavy users and break the economics.

Transparency is fair for everyone, not just for us. The customer sees exactly what they consume. They can optimize usage if they want. They can allocate costs correctly internally (the department that uses more pays more).

Minimum commitment: $1,000/month. Not greed — infrastructure costs money. Below $1K, the economics don't work for us. But $1K is low enough that most companies over 50 people can try without approval committees.

Why Now

Timing is perfect for three reasons:

LLMs are good enough for reasoning extraction. Two years ago, impossible. AI could do keywords, summaries, sentiment — but not "why did we decide X." GPT-4 and Claude changed that. For the first time you can ask "why was this decision taken?" and AI can identify the reasoning. Not perfect, but good enough.

Integration ecosystems are mature. Stable APIs, OAuth standards, webhooks everywhere. You don't build every connector from scratch — you have solid foundations. The complexity lies in understanding, not basic connectivity.

Market awareness is high. Everyone talks about "AI memory," "organizational knowledge," "decision intelligence." Timing for category creation is perfect — people understand the problem, are looking for solutions, and are open to new approaches.

Get In Touch

If this resonates — if you recognize the problem and see ROM as infrastructure worth exploring — let's talk.

We're building this with design partners who understand that organizational memory is a long-term investment, not a quick-fix tool.

Email: admin@clotier.eu

No sales pitch. Just strategic conversation about whether ROM makes sense for your organization.