Framework Documentation — v0.1
A framework for building connected knowledge graphs and ontology overlays that represent a business as a navigable, toggleable, traceable intelligence system.
The Core Problem
A gap analysis answers one question on one day. A brand score captures one dimension. A competitive brief maps one moment in time. These outputs are valuable — but they're disconnected. The Layered Ontology Architecture solves this by making every intelligence artifact a layer in a connected, queryable, toggleable system.
The result: a single business can be viewed through multiple analytical lenses simultaneously — competitive position, value flow, brand strength, temporal execution gaps — all anchored to the same underlying knowledge graph.
The Engelbart Foundation
| Team | Role | In This System |
|---|---|---|
| Team A | Does the work | The client or project team executing their business plan. They operate inside the system but don't see its full structure. |
| Team B | Improves how A works | Our agent teams. Building K1–K5 knowledge graphs, O1–O4 ontology overlays, and BMC guidance that help Team A make better decisions. |
| Team C | Improves how B improves | Us refining the architecture itself. Every client engagement makes the system tighter. This document is Team C output. |
This document defines the system that Team B uses; each client engagement is a Team B instance. Long Zhu is the first complete Team B deployment.
Two Fundamental Types
Descriptive. Empirically derived from source material. Represents observed reality — entities, relationships, cluster structure, gaps. Queryable: InfraNodus returns factual answers about the graph's structure.
Prescriptive. Designed by the analyst. Defines what SHOULD be true, what categories exist, what constraints apply. Toggled on/off depending on the analytical task. Composable: multiple layers can be active simultaneously.
What It Enables
What entities exist? What connects to what?
Where are the disconnects? What should be connected but isn't?
How does value move? Where are flows broken?
How strong is the brand? Where is it vulnerable?
Who are the competitors? What do they have that we don't?
Which business model block does each entity belong to?
Market, social, environmental — which lens am I using?
Is this possible, planned, or proven?
How much should I trust this claim?
The Knowledge Layers
Each K-layer is a distinct InfraNodus graph per project. They stack on top of each other — K1 is always on, the rest are toggled depending on the analytical question. Together they form a complete intelligence picture.
K1: Business Intelligence. Entity landscape extracted from all source material: pitch decks, business plans, transcripts, web research. The foundation every other layer builds on.
"What entities exist? What connects to what?"
K2: Structural Gaps. Maps the absence of connections between clusters, where disconnects live. Built from K1 gap analysis output using InfraNodus bridge detection.
"What should be connected but isn't?"
K3: Value Flow. REA (Resource–Event–Agent) mapping of how value moves through the system. Three sub-graphs: agents, resources, planned flows. Verb-mode extraction preserves action semantics.
"How does value move? Where are flows broken?"
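The REA statement pattern behind K3 can be sketched as a minimal record. This is an illustrative data shape, not the actual extraction schema; the field names and example flow are assumptions.

```python
from dataclasses import dataclass

# Hypothetical REA flow record. Field names are illustrative, not the
# value-flow-ontology skill's actual schema.
@dataclass(frozen=True)
class Flow:
    resource: str   # what moves (capital, cards, attention)
    event: str      # the action, kept in verb mode ("allocates")
    provider: str   # agent giving the resource
    receiver: str   # agent receiving it

    def statement(self) -> str:
        # Statement pattern: Agent -> Event -> Resource -> Agent,
        # preserving action semantics for graph extraction.
        return f"{self.provider} {self.event} {self.resource} to {self.receiver}"

flow = Flow("seed capital", "allocates", "Long Zhu", "app development")
print(flow.statement())  # → Long Zhu allocates seed capital to app development
```

Keeping the event as a verb (rather than nominalizing it to "allocation") is what lets the flows graph preserve direction and causality.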
K4: Brand Strength. Feeds the five-dimension scoring framework (Awareness, Trust, Mission Alignment, Differentiation, Loyalty), built from SERP analysis and Edelman/Morning Consult benchmarks.
"How strong is the brand? Where is it vulnerable?"
K5: Competitive Intelligence. Market positioning relative to competitors: competitor entity landscape, differentiation gaps, SERP-derived structural analysis. Supports the Brand Power Score through the SBPI methodology.
"Who are the competitors? What do they have that we don't?"
The Ontology Layers
O-layers are not separate graphs — they are frameworks that classify, filter, and evaluate knowledge graph entities. They can be toggled on and off. Multiple layers can be active simultaneously.
O1: Business Model Canvas. Maps every KG entity to one or more of the nine BMC blocks. Enables channel tracing, resource impact modeling, and initiative tracing through Value Propositions → Customer Segments → Revenue Streams.
"Which block does this entity live in?"
O2: Perspective Lenses. Four analytical frames: Market, Social, Environmental, General. Each filters the same KG data through a different lens with a different consensus floor. Same entity, four readings.
"Which lens am I analyzing through?"
O3: Temporal. Tags every flow as Recipe (capability: could happen), Plan (commitment: will happen), or Observation (reality: did happen). The Plan–Observation gap is where blindspots live.
"Is this possible, planned, or proven?"
O4: Consensus Scoring. Five-tier authority hierarchy from Legal/Regulatory (0.9–1.0) down to Personal judgment (0.0–0.2). Governs how much weight each claim gets. Adjustable floor per analytical task.
"How much should I trust this claim?"
The Toggle System
To answer "Should Long Zhu invest in organized play?" — activate K1 (entities) + K3 (value flow: what does organized play require?) + K5 (competitive: what did Flesh and Blood and MetaZoo do?) + O1 (BMC: it's Key Activity + Channel) + O3 (temporal: Recipe-only, no Plan or Observation) + O4 floor = 0.7 (professional consensus: organized play is survival-critical). Result: a structured, traceable, evidence-graded answer.
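The toggle mechanics in that example reduce to set intersection plus a floor check. The sketch below is an assumed interface, not InfraNodus' API; the claims and scores are illustrative stand-ins for the organized-play evidence above.

```python
# Layers activated for the organized-play question, per the example above.
ACTIVE_LAYERS = {"K1", "K3", "K5", "O1", "O3"}
CONSENSUS_FLOOR = 0.7  # professional consensus and above

# Hypothetical claim records; layer tags and scores are illustrative.
claims = [
    {"text": "organized play is survival-critical for TCGs", "layers": {"K5"}, "consensus": 0.85},
    {"text": "organized play is a Key Activity + Channel",   "layers": {"O1"}, "consensus": 0.75},
    {"text": "organized play will launch in Y1",             "layers": {"O3"}, "consensus": 0.30},
]

def visible(claim):
    # A claim surfaces only if at least one of its layers is toggled on
    # AND it clears the consensus floor for this analytical task.
    return bool(claim["layers"] & ACTIVE_LAYERS) and claim["consensus"] >= CONSENSUS_FLOOR

evidence = [c["text"] for c in claims if visible(c)]
# The low-consensus Y1 launch claim is filtered out by the 0.7 floor.
print(evidence)
```

Note that lowering the floor to 0.3 would surface the third claim without changing the underlying data, which is the whole point of the toggle system.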
The BMC Guidance System
The BMC is not a one-time snapshot. When O1 is active, every entity in the knowledge graph carries a block label. This enables three capabilities:
Pick any entity. Follow the chain: Key Resource → Key Activity → Value Proposition → Channel → Customer Segment → Revenue Stream. If any link is missing, that's an actionable gap. The same chain can be traced in reverse to find upstream dependencies.
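Forward chain tracing with missing-link detection can be sketched as a simple graph walk. The block order follows the chain above; the overlay data and entity names are illustrative, not Long Zhu's actual graph.

```python
# Canonical BMC chain order, as stated in the text.
CHAIN = ["Key Resource", "Key Activity", "Value Proposition",
         "Channel", "Customer Segment", "Revenue Stream"]

# Hypothetical overlay: entity -> (BMC block, downstream entity or None).
overlay = {
    "AI gameplay engine": ("Key Resource", "game design"),
    "game design":        ("Key Activity", "bilingual gameplay"),
    "bilingual gameplay": ("Value Proposition", None),  # no Channel link yet
}

def trace(entity):
    """Follow the chain from an entity; report the first missing link."""
    path = [entity]
    while entity in overlay:
        block, nxt = overlay[entity]
        if nxt is None:
            expected = CHAIN[CHAIN.index(block) + 1]
            return path, f"gap: no {expected} downstream of {entity!r}"
        path.append(nxt)
        entity = nxt
    return path, None

path, gap = trace("AI gameplay engine")
```

Reversing the overlay's edges gives the upstream-dependency trace mentioned above with the same walk.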
For any resource: what happens downstream if it's added, removed, increased, decreased, or redirected? The BMC makes cause-and-effect visible across the entire canvas — not just within a single block.
Any active campaign, product launch, or sales push maps to a specific path through the BMC. Every element of the initiative — resources, activities, channels, segments — becomes a traceable node in the knowledge graph.
Instantiation Protocol
The prerequisites: a K1 graph exists (Business Intelligence KG), source documents are available, and a gap report exists. From there, the protocol runs in six phases — each producing artifacts that feed the next.
Phase 1: Value Flow Extraction. Extract agents, resources, and flows from source documents. Three named InfraNodus graphs: {project}-vf-agents (extractEntitiesOnly), {project}-vf-resources (extractEntitiesOnly), {project}-vf-flows-planned (none mode, preserving verb semantics). Statement patterns follow the REA Resource–Event–Agent structure.
Phase 2: BMC Overlay. For each entity in K3, assign BMC block(s). Query each block against the flows graph using retrieve_from_knowledge_base. Document in {project}/ontology/bmc-overlay.md. Identify gaps: blocks with few or no entities, and broken chains between blocks.
Phase 3: Temporal Tagging. Tag each flow as Recipe (capability mentioned but uncommitted), Plan (committed with a timeline), or Observation (evidence of actual execution). Run difference_between_texts between Plan and Observation to surface execution gaps. The gap is the blindspot.
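On flows that have already been tagged, the same execution gap surfaces as a set difference. The flow labels below are illustrative; the real phase runs difference_between_texts over graph text rather than this toy structure.

```python
# Hypothetical tagged flows: flow -> temporal layer.
flows = {
    "raise $1.1M seed":       "Plan",
    "350 cards developed":    "Plan",
    "patents filed":          "Observation",
    "concept testing":        "Observation",
    "organized play program": "Recipe",
}

planned  = {f for f, tag in flows.items() if tag == "Plan"}
observed = {f for f, tag in flows.items() if tag == "Observation"}

# Plans with no corresponding observation evidence are the blindspot.
blindspot = sorted(planned - observed)
print(blindspot)
```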
Phase 4: Channel Tracing. For each major initiative, map the full BMC chain from Key Resource through Revenue Stream. Identify missing links and single-point-of-failure nodes. Document traceable paths in {project}/ontology/channel-traces.md.
Phase 5: Resource Impact Modeling. For each key resource, trace downstream effects through the BMC chain. Model add/remove/increase/decrease/redirect scenarios. Connect to the temporal layer: is this resource Recipe-only, Planned, or Observed? Document in {project}/ontology/resource-impact.md.
Phase 6: Deliverables. Six artifacts: BMC Overlay Report, Channel Trace Document, Resource Impact Report, updated INDEX.md, editorial brief (4-tab site), and visualization hub (5 viewport pages). The viz hub is the browsable intelligence surface for the client.
The Temporal Ontology
Every flow in the system can be tagged with when it is true. This is the most diagnostic of the four ontology layers — the gap between what's planned and what's observed is where strategic blindspots live.
| Layer | VF Concept | Verb Form | What It Represents |
|---|---|---|---|
| Recipe | RecipeFlow, ProcessSpec | Infinitive — "to produce" | Capability space. What COULD happen. Untapped potential lives here. |
| Plan | Intent, Commitment | Future — "we WILL launch" | Strategy. What SHOULD happen. Commitments with timelines. |
| Observation | EconomicEvent, Claim | Past — "we DID deliver" | Reality. What DID happen. Evidence of actual execution. |
When the gap between Plan and Observation is large, confidence scores need a higher floor (0.7+). When the entire BMC operates in the Recipe and Plan layers with minimal Observation, you are analyzing a hypothesis, not a business. Long Zhu, the first complete instance, showed 87% Recipe/Plan and 13% Observation.
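That hypothesis check can be sketched numerically. The 20% threshold and the exact Recipe/Plan split below are illustrative choices, not fixed rules from the framework; only the 87/13 aggregate comes from the Long Zhu instance.

```python
from collections import Counter

def temporal_profile(tags):
    """Share of flows per temporal layer, plus a hypothesis-mode flag."""
    counts = Counter(tags)
    total = sum(counts.values())
    shares = {layer: counts[layer] / total
              for layer in ("Recipe", "Plan", "Observation")}
    # Heuristic threshold (our choice): under 20% Observation means the
    # model is a hypothesis, so raise the consensus floor to 0.7+.
    hypothesis_mode = shares["Observation"] < 0.20
    return shares, hypothesis_mode

# Long Zhu's split: 87% Recipe/Plan, 13% Observation. The 47/40 split
# between Recipe and Plan is invented for illustration.
tags = ["Recipe"] * 47 + ["Plan"] * 40 + ["Observation"] * 13
shares, hypothesis = temporal_profile(tags)
print(shares, hypothesis)
```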
The Consensus Scoring Ontology
| Tier | Score Range | Authority | Example |
|---|---|---|---|
| Legal / Regulatory | 0.9 – 1.0 | Binding | Patent filings, trademark status, regulatory requirements |
| Professional | 0.7 – 0.9 | Industry standard | TCG lifecycle patterns, market sizing methodologies |
| Emerging / Contested | 0.4 – 0.7 | Debatable | $15B market claim, educational efficacy claims |
| Organizational | 0.2 – 0.4 | Team agreement | Intake scoring rubric, brand power weights |
| Personal | 0.0 – 0.2 | Individual judgment | Creative direction, priority calls |
High-stakes decisions (investor pitch, strategic pivot) use a 0.7+ consensus floor — only professional-consensus and above. Exploratory analysis (brainstorming, discovery) uses 0.3+. The floor setting changes what's visible in the graph without changing the underlying data.
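The floor mechanic is a simple filter over scored claims. The tier ranges come from the table above; the claim list and helper names are hypothetical.

```python
# The five tiers from the table above, as (min, max) score ranges.
TIERS = {
    "Legal/Regulatory": (0.9, 1.0),
    "Professional":     (0.7, 0.9),
    "Emerging":         (0.4, 0.7),
    "Organizational":   (0.2, 0.4),
    "Personal":         (0.0, 0.2),
}

def tier_of(score):
    # First matching tier wins at shared boundaries (e.g. 0.9).
    for name, (lo, hi) in TIERS.items():
        if lo <= score <= hi:
            return name

# Hypothetical scored claims drawn from the table's example column.
claims = [
    ("patent filed",            0.95),
    ("TCG lifecycle pattern",   0.85),
    ("$15B market claim",       0.55),
    ("creative direction call", 0.10),
]

def apply_floor(claims, floor):
    """Same data, different visibility: only claims at/above the floor."""
    return [text for text, score in claims if score >= floor]

high_stakes = apply_floor(claims, 0.7)  # investor pitch, strategic pivot
exploratory = apply_floor(claims, 0.3)  # brainstorming, discovery
```

The underlying claim list never changes; only the floor does, which is what makes the setting safe to toggle per task.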
Toolchain
Every K-layer graph is a named InfraNodus project. The MCP connection makes these graphs queryable from agent workflows — gap analysis, bridge detection, cluster anatomy, and overlap comparison all run through the same toolchain.
| Skill | Role |
|---|---|
| value-flow-ontology | Phase 1 engine — VF graph creation, REA mapping, 16-dimension classification |
| intelligence-viewports | O2 activation — viewport composition, agent sequences, graph layer creation |
| ontology-management | O4 engine — consensus scoring, toggle system, bridge analysis |
| intelligence-brief | K1 pipeline — business intelligence KG, gap report, editorial + viz deliverables |
| competitive-intel | K5 pipeline — SBPI scoring, competitor landscape, market positioning |
| infranodus-expert | Tool layer — all 24+ InfraNodus MCP tool operations |
| infranodus-viz-designer | Visualization — D3.js force graphs, radar charts, bento dashboards |
First Instance
The Layered Ontology Architecture is built to compound. Each new client engagement adds another instance, refines the instantiation protocol, and identifies skill gaps that improve the system for Team C.
Trading Card Game startup with Chinese language learning mechanic. 7 ontology artifacts. Complete K1–K3 + O1–O3 instantiation.
Long Zhu (Lóng Zhū Dragon Master) is a TCG startup building bilingual gameplay — learning Chinese is the game mechanic, not an overlay. As the first complete instance of the Layered Ontology Architecture, it has a full K1–K3 knowledge graph stack, a BMC overlay showing the full business model, temporal analysis revealing that 87% of the BMC operates in Recipe and Plan layers only, and channel traces identifying three single points of failure.
Long Zhu — K1 Business Intelligence
150 nodes, 276 edges, 16 clusters, 0.87 modularity. Extracted from 18-page seed round deck, project plan workstream doc, and Gantt timeline.
Top 5 clusters carry 95% of betweenness centrality. The community-building cluster (9% influence, 1% BC) is the most disconnected high-influence cluster. Revenue forecast and player commitment clusters have zero bridging to anything else — claims floating without structural support.
Long Zhu — K3 Value Flow
Agents sub-graph: 137 nodes, 335 edges, 12 clusters, 0.77 modularity. The "person" node has the highest betweenness centrality (0.45). The business is deeply people-dependent.
| Agent | Role | Background |
|---|---|---|
| Kevin Mowrer | CEO + Game Design | Hasbro R&D, 20+ patents, Beast Wars, Dragon Booster |
| Limore Shur | Marketing + App Dev | Nike, Amazon, Best Buy, Target brand building |
| Steve Weinstein | Chief Creative | Mattel, Hasbro, Tonka product design |
| Julian Chan-Bevan | Creative Director | Netflix, Paramount, Universal brand strategy |
| Keith Bencher | Finance | — |
| Ben Mauceri | Legal / IP | — |
Resources sub-graph: 137 nodes, 319 edges, 12 clusters, 0.79 modularity. Funding allocation carries 47% of betweenness centrality. The business is capital-constrained; every resource traces back to seed money allocation.
Planned-flows sub-graph: 15 clusters, 0.69 modularity. meaningful_fun is the sole hub (bc: 0.55). Every value flow routes through the company; no distributed value creation exists.
Five primary value chains identified: Capital→Product→Revenue, Product→Distribution→Revenue, Product→Digital→Engagement, Marketing→Awareness→Conversion, Growth→Series A.
Long Zhu — Applied Analysis
Long Zhu / Meaningful Fun is a pre-launch educational TCG startup with an elite team (Hasbro, Nike, Netflix alumni), a novel product category, and a $1.1M seed round target. The layered ontology reveals the structural reality beneath the pitch.
Only 5 of 67 mapped entities have observation-layer evidence. The entire business model canvas operates on projected capabilities with near-zero validation. This is expected for a pre-launch startup. The critical question: which projections need validation before investors commit?
O1 BMC Overlay
67 entities mapped across 9 BMC blocks. Each entity tagged with its temporal status.
| BMC Block | Key Entities | Temporal Status |
|---|---|---|
| Value Propositions | Bilingual gameplay, Battle Story App, graphic novels | Recipe |
| Customer Segments | TCG players 11+, parents, after-school, Chinese heritage | Plan |
| Revenue Streams | Starter decks ($15), boosters ($5), app subscription, merch | Plan |
| Key Activities | Game design (✓), card production, app dev, organized play | Observation + Recipe |
| Key Resources | IP/patents (✓), team (4 veterans), AI gameplay engine | Observation + Recipe |
| Key Partnerships | Game stores, Gen Con, after-school networks, educators | Recipe |
O1 + O3 Combined
Long Zhu — O3 Temporal Layer
The only flows with real-world evidence:
| Observed Flow | Score |
|---|---|
| Kevin's franchise track record | 0.95 |
| Team industry experience | 0.90 |
| Patents filed | 0.85 |
| Game mechanics in development | 0.75 |
| Concept testing occurred | 0.60 |
The highest execution risk: items committed in the plan with timelines but zero evidence of execution.
| Planned Flow | What Would Make It Observation | Status |
|---|---|---|
| Raise $1.1M seed | Signed term sheets, money in bank | PLAN |
| 350 cards developed | Playable card set, playtesting data | PLAN |
| Battle Story App built | Working prototype, app store listing | PLAN |
| Distribution partnerships | Signed agreements with distributors | PLAN |
| Gen Con debut | Booth reserved, demos planned | PLAN |
| $150K Y1 revenue | Actual sales data | PLAN |
O1 + K3 Combined
Four structural breaks where the value flow chain is disconnected.
Channel Traces
Each go-to-market phase traced through the full BMC chain. Readiness = percentage of chain links that exist.
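The readiness metric can be computed directly as the fraction of required chain links that exist. The link set for the Pre-Release phase below is illustrative, not Long Zhu's actual trace data.

```python
# Required links in the full BMC chain, per the channel-trace definition.
CHAIN_LINKS = [
    "Key Resource→Key Activity",
    "Key Activity→Value Proposition",
    "Value Proposition→Channel",
    "Channel→Customer Segment",
    "Customer Segment→Revenue Stream",
]

def readiness(existing_links):
    """Readiness = percentage of required chain links that exist."""
    present = sum(1 for link in CHAIN_LINKS if link in existing_links)
    return round(100 * present / len(CHAIN_LINKS))

# Hypothetical Pre-Release phase: everything built except the final
# Customer Segment→Revenue Stream link (no sales yet).
pre_release = {
    "Key Resource→Key Activity",
    "Key Activity→Value Proposition",
    "Value Proposition→Channel",
    "Channel→Customer Segment",
}
print(readiness(pre_release))  # → 80
```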
The business can execute Pre-Release and partial Debut with current resources. Everything from Launch onward requires significant capability building that isn't yet in the plan.
K3 + O1 + O3 Combined
For each critical resource: what happens downstream through the BMC if removed vs. added?
| Finding | Temporal Status |
|---|---|
| One unnamed person assigned to build 3 major app features. No CTO. No prototype. | RECIPE |
| Zero allocation. Competitive scene mentioned in plan but unfunded. | RECIPE |
| Listed as HR need. Nobody on team has education credentials. Core VP depends on this. | RECIPE |
| Two-tranche SAFE. Not yet raised. Every downstream resource depends on this. | PLAN |
| CEO + game designer + BD + investor relations + demo presenter. Hub node (bc: 0.55). | OBSERVATION |
The entire Long Zhu BMC operates in Recipe and Plan layers. The only Observation-layer elements: Kevin's TCG track record (proven), patents filed (legal), and partial game concept testing. Every channel, partnership, customer relationship, and revenue stream is projected — none are observed. This is not a failure — it's a pre-seed reality — but the system makes it legible.
The BMC overlay revealed that organized play exists as Key Activity + Channel in the Recipe layer only — no Plan commitment, no Observation evidence — despite professional consensus (0.85 score) that organized play is survival-critical for competitive TCGs, as validated by the MetaZoo bankruptcy and Flesh and Blood success data.
What's Next
Both projects have K1 graphs built. FrameBright has 4 InfraNodus graphs and a complete editorial brief deployed. Fiserv has 5 graphs and brand power scored at 46/100. Both are candidates for full K1–K5 + O1–O4 instantiation.
FrameBright: 4 graphs, editorial deployed. Awaiting team materials for full VF extraction. FrameBright-negative-space graph: 10 clusters, 0.568 modularity.
Fiserv: 5 graphs, brand power 46/100 (Vulnerable). 26 strategic gaps identified. GTM roadmap in progress. Full O1–O4 instantiation would reveal BMC execution gaps.
The missing capability is the bmc-guidance-system orchestration skill — a single agent that takes a project with K1 complete and runs all six phases in sequence. Each engagement adds a refinement. This document (and the system it describes) is the Team C output from the Long Zhu instance.
Interactive Intelligence Viewports
Each viewport is a standalone HTML page with D3.js visualizations driven by the same underlying data. Select a viewport to explore the Long Zhu instance from a different analytical angle.