Overview
Decision Journal logs every decision informed by Condelo, capturing the supporting evidence, the alternatives considered, and the stated confidence level. Over time, it builds an organizational memory of decisions and their outcomes, enabling calibration, accountability, and learning.
This transforms Condelo from a tool that informs decisions into a system that tracks whether those decisions were good ones.
The Problem
- Organizations make thousands of decisions informed by intelligence tools but rarely track which of those decisions succeeded and why
- An estimated 47% of enterprise users report having made major decisions based on hallucinated or unreliable AI content, with no mechanism to catch or learn from these errors
- Regulated industries (finance, healthcare, defense) require auditable decision trails but build them manually
- Institutional memory of past decisions is lost when team members leave or simply as time passes
- No feedback loop exists between decision quality and the intelligence systems that informed them
How It Works
- Decision capture — Track user actions that constitute decisions: resolving signals, bookmarking inferences, completing scenario analyses, publishing stories (a minimal record schema is sketched after this list)
- Annotation — Prompt users to annotate decisions with rationale, expected outcome, and review date (optional but encouraged)
- Decision timeline — Maintain per-user and per-space decision histories with full evidence provenance
- Outcome assessment — At review dates, auto-generate outcome reports by re-running relevant retrievals and comparing current state to decision-time assumptions
- Calibration metrics — Track accuracy: how often did high-confidence decisions have good outcomes? Which evidence types are most predictive? (A scoring sketch follows the schema below.)
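To make the capture and annotation steps concrete, here is a minimal sketch of what a decision record might look like, written in Python. Every name in it (`DecisionRecord`, `evidence_refs`, `Confidence`, and so on) is an illustrative assumption, not an existing Condelo schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class Confidence(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


class Outcome(Enum):
    GOOD = "good"
    MIXED = "mixed"
    BAD = "bad"
    UNRESOLVED = "unresolved"


@dataclass
class DecisionRecord:
    """One journal entry: what was decided, on what evidence, and how it turned out."""
    decision_id: str
    space_id: str                       # investigation space the decision belongs to
    author_id: str
    summary: str                        # e.g. "Enter APAC market"
    rationale: str = ""                 # optional annotation
    expected_outcome: str = ""
    confidence: Confidence = Confidence.MEDIUM
    alternatives: list[str] = field(default_factory=list)
    evidence_refs: list[str] = field(default_factory=list)   # inference/signal/source IDs
    decided_on: date = field(default_factory=date.today)
    review_date: Optional[date] = None  # when the outcome assessment is due
    outcome: Outcome = Outcome.UNRESOLVED
```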
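Calibration could then be computed over resolved records. The sketch below uses a simple Brier-style score (the squared gap between stated confidence and realized outcome); both the confidence-to-probability mapping and the choice of metric are assumptions, since the spec does not fix a scoring method. It reuses the types from the schema sketch above.

```python
# Hypothetical mapping from confidence levels and outcomes to probabilities;
# a real engine would tune these or elicit numeric probabilities directly.
CONFIDENCE_PROB = {Confidence.LOW: 0.3, Confidence.MEDIUM: 0.6, Confidence.HIGH: 0.85}
OUTCOME_VALUE = {Outcome.GOOD: 1.0, Outcome.MIXED: 0.5, Outcome.BAD: 0.0}


def brier_score(decisions: list[DecisionRecord]) -> float:
    """Mean squared error between stated confidence and realized outcome (lower is better)."""
    resolved = [d for d in decisions if d.outcome is not Outcome.UNRESOLVED]
    if not resolved:
        raise ValueError("No resolved decisions to score yet")
    return sum(
        (CONFIDENCE_PROB[d.confidence] - OUTCOME_VALUE[d.outcome]) ** 2
        for d in resolved
    ) / len(resolved)
```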
User Story
A portfolio manager uses Condelo's Scenario Simulator to evaluate a market entry. The Decision Journal logs: "Decided to enter APAC market based on 3 inferences, 2 scenario results, and 5 source documents. Confidence: High. Alternatives considered: delay 6 months, partner instead of direct entry." Six months later, the system prompts: "This decision is due for review. Here's what's happened since." The manager reviews, marks the outcome, and their calibration score updates. Over time, the team learns which types of evidence and analysis consistently lead to good decisions.
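Continuing the hypothetical schema above, the journal entry from this story might be captured roughly as follows (all IDs and values are illustrative):

```python
entry = DecisionRecord(
    decision_id="dec-0142",
    space_id="apac-entry",
    author_id="pm-jlee",
    summary="Enter APAC market via direct entry",
    rationale="Scenario results show favorable demand; supplier risk acceptable",
    expected_outcome="Positive contribution margin within two quarters",
    confidence=Confidence.HIGH,
    alternatives=["Delay 6 months", "Partner instead of direct entry"],
    # 3 inferences, 2 scenario results, 5 source documents
    evidence_refs=["inf-101", "inf-102", "inf-103", "scn-21", "scn-22",
                   "doc-a", "doc-b", "doc-c", "doc-d", "doc-e"],
    review_date=date(2026, 6, 1),
)
```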
Complexity & Timeline
| Aspect | Detail |
|---|---|
| Complexity | Medium |
| Estimated Build | 3–4 weeks |
| Platform Dependencies | Inferences, Signals, Scenario Simulator, Explore (bookmarks/collections) |
| New Infrastructure | Decision schema, calibration scoring engine, outcome tracking UI |
Target Clients
- Personas: Portfolio Managers, Strategy Directors, Chief Risk Officers, Board Members, Compliance Officers
- Verticals: Financial Services, Private Equity, Healthcare, Defense, Government
- Pitch: "Know which decisions worked, which didn't, and why — build organizational judgment over time."
Revenue Potential
Strong governance and compliance value proposition justifies enterprise pricing. Particularly attractive to regulated industries where decision auditability is a legal requirement. Supports premium "governance tier" packaging. Creates a virtuous feedback loop: as decision tracking improves, the intelligence system itself gets better — increasing switching costs and long-term retention.
Feature Synergies
- Scenario Simulator — Decisions informed by scenario analysis are automatically logged with full context
- Source Trust Scoring — Decision outcomes feed back into source trust scores, creating a learning loop (a minimal update sketch follows this list)
- Collaborative Sensemaking — Team decisions made in shared investigation spaces get richer context and multi-perspective rationale
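As a rough illustration of the Source Trust Scoring loop, the sketch below (reusing the hypothetical types from the earlier schema) nudges each cited source's trust score toward a resolved decision's outcome. The function name, weighting, and starting score are assumptions, not an existing Condelo API.

```python
def update_trust_scores(
    trust: dict[str, float],        # source_id -> trust score in [0, 1]
    decision: DecisionRecord,
    learning_rate: float = 0.05,
) -> dict[str, float]:
    """Nudge trust scores of sources cited by a resolved decision toward its outcome."""
    if decision.outcome is Outcome.UNRESOLVED:
        return trust
    target = OUTCOME_VALUE[decision.outcome]
    updated = dict(trust)
    for source_id in decision.evidence_refs:
        current = updated.get(source_id, 0.5)   # unseen sources start neutral
        updated[source_id] = current + learning_rate * (target - current)
    return updated
```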
Risks & Open Questions
- Outcome attribution is inherently noisy — many factors influence results beyond the decision itself
- User adoption challenge: annotation friction may lead to sparse, low-quality decision records
- Calibration scoring requires sufficient decision volume to be statistically meaningful
- Privacy and blame concerns: decision journals could be used punitively rather than for learning