AI-Accelerated Entrepreneurship Practicum
A 14-week experiential program where MBA students build real AI systems using the Trinity Graph architecture — Social, Knowledge, and Generative — while simultaneously delivering for a real client partner and launching an entrepreneurial venture.
The Trinity Graph Architecture
Architecture at a glance: Social Graph (edges = relationships) · Knowledge Graph (properties = confidence) · Generative Graph (grounded output) — unified by an Awareness Layer that keeps the system grounded and compounding.
Dual-Track Model
- Client Track: Build a Trinity Graph system for a real Nashville artist, collective, or community
- Venture Track: Develop an original AI-powered startup using the same architecture
- Both tracks run in parallel throughout the 14 weeks
- Week 7: Midterm review of both tracks
- Week 13: Client delivery + Week 14: Investor Demo Day
Learning Arc
- Weeks 1–2: Social Graph foundations, stakeholder mapping
- Weeks 3–5: Knowledge Graph, ontologies, verification
- Weeks 6–7: Integration + Midterm checkpoint
- Weeks 8–9: Agentic workflows + -ity ontology
- Weeks 10–12: Business model, scaling, advanced patterns
- Weeks 13–14: Delivery, client handoff, Demo Day
Key Embedded Tools
- Week 1: Trinity Pod Formation App — forms balanced student teams
- Week 3: Vibe Coding App — radar chart brand mapping
- Week 7: Midterm Pitch Timer — 15-min countdown with phases
- Week 14: Demo Day Scoreboard — live scoring + confetti reveal
Technology Stack
- Neo4j Aura — all three graph layers
- Claude AI + OpenAI — generation & reasoning
- LangChain / CrewAI — agentic pipelines
- Cursor AI — AI-accelerated development
- Figma / Miro / Canva — visual deliverables
- Railway / Render — deployment
More than 2 unexcused absences will affect your final grade — each additional absence reduces your Participation & Reflection score by one letter grade. Live class sessions are not recorded. This is a practicum; in-class exercises cannot be replicated asynchronously. If you must miss class, notify Baxter Webb in advance and coordinate with your pod.
Submissions lose 10% per day late. No late submissions are accepted after 7 days — the assignment receives a zero at that point. No exceptions without prior written approval from Baxter Webb. The pace of this course mirrors a real startup environment: deadlines are real.
Pod work is collaborative by design — collaborate freely on all Pod Deliverables. Individual assignments must be solo — pod members may not co-author, co-edit, or substantively advise on individual work. For all work, cite every AI tool used using the standard disclosure format (see AI Usage Policy).
All AI tool usage must be disclosed. Using AI to complete individual assignments without disclosure is a violation of Vanderbilt's Honor Code and will be reported to the Academic Integrity board. This course takes a progressive stance on AI — it is a required tool — but transparency is non-negotiable. Undisclosed AI use is treated the same as plagiarism.
Client partners share sensitive organizational data in good faith. Everything discussed in client interviews and shared in client documents is confidential. Do not post client data publicly or discuss it in open forums. Venture Track work may be shared publicly only with full pod consent. Treat the client relationship like a professional consulting engagement.
TA Baxter Webb — baxter.webb@vanderbilt.edu · Tuesday & Thursday 3–5 pm (Central). All pod deliverable questions, grade inquiries, and week-to-week logistics go to Baxter first.
Professor Oliver Luckett — Oliver.Luckett@Vanderbilt.edu · By appointment only; email to schedule. Reserved for strategic, conceptual, or escalated matters.
AI is a required tool in this course — not a shortcut, not a cheat code, not optional. The question is never whether to use AI but how well you use it. All AI-generated content must be disclosed, attributed, and critically evaluated before it enters any deliverable.
- Research: use AI to explore a domain, surface prior work, generate questions, and build background knowledge. You verify and synthesize.
- Drafting: use AI to generate first drafts, code scaffolding, or outlines that you then substantively revise. You own the final form.
- Sparring: use AI to stress-test arguments, model scenarios, and pressure-check strategy. You make the call.
Students are expected to understand how the AI produced its output — not just accept it. If you use AI-generated code, you must be able to explain every line. If you use AI-generated prose, you must be able to defend every argument. "The AI said so" is not a defense.
All AI-assisted outputs in this course must meet the Trinity Graph standard: they must be traceable (what prompted this?), attributable (which tool, which model, which version?), and defensible (can you stand behind this in front of an expert?). This standard applies to code, writing, design, and strategy deliverables alike.
AI Tool: [tool name + version, e.g. "Claude 3.5 Sonnet"]
Prompt: [brief description of what you asked it to do]
Human contribution: [what you added, changed, verified, or rejected]
This course runs on two parallel tracks. In Week 2, each pod submits a brief Letter of Intent declaring their chosen track for the semester. The track selection shapes your deliverables, your client relationships, and your Demo Day narrative — but both tracks share the same Trinity Graph architecture and present together at Week 14.
Client Track and Venture Track pods present in the same format to a shared panel of external judges. Each pod receives a 10-minute live demo slot + 5 minutes of Q&A. There are no separate tracks at Demo Day — the stage belongs to the work.
Welcome to the Serendipity Engine · Team Formation
"The Serendipity Engine isn't magic — it's architecture. And you're going to build it."
Introduction to the Trinity Graph — three distinct graphs operating as one unified awareness system. The Social Graph answers WHO: nodes are people, edges are relationships, properties carry emotion and behavior. The Knowledge Graph answers WHAT: nodes are verified concepts, edges are ontological relationships, properties have confidence scores and sources. The Generative Graph answers WHAT IF: it uses RAG (Retrieval-Augmented Generation) to produce context-sensitive, grounded output. Together, they constitute the architecture of aware AI.
- Understand the Trinity Graph architecture: Social + Knowledge + Generative — three graphs, one awareness
- Form Trinity Pods with strategically complementary skill sets across Background and Primary Strength dimensions
- Select a Client Partner organization and define an Entrepreneurial Venture domain for the semester
Set up all three tools before next class. Create a Neo4j Aura free account and confirm you can log in. Download and activate Cursor AI. Set up Claude AI at claude.ai.
Each pod member writes a 2-page "Convergent Intelligence Vision" memo: What problem would you solve with aware AI? Who are the users? How would each of the three graph layers — Social (WHO), Knowledge (WHAT), Generative (WHAT IF) — make your solution smarter over time? Use Claude to pressure-test your thinking. Also: complete the Trinity Graph orientation on Brightspace and confirm your client partner assignment with Baxter Webb. Due: before Week 2 class.
AI Tool Landscape & Sprint Methodology
"You don't need to know how the engine works. You need to know how to drive."
The Social Graph is the WHO layer of the Trinity. Nodes are people and organizations; edges are typed relationships; properties carry emotional valence, influence weight, and directional power. Every relationship has a WEIGHT (how strong?) and a DIRECTION (who influences whom?). The Social Graph answers not just "who are these people?" but "how do they relate, and why does it matter to your application?"
- Design a social graph schema for a real client domain — nodes, edge types, property schema
- Conduct 5+ empathetic stakeholder interviews using structured discovery protocol
- Map stakeholder relationships in Neo4j Aura — minimum 20 nodes, typed edges
- Each pod draws their client organization's complete stakeholder map on whiteboard — include internal teams, external partners, end users, regulators
- Convert the whiteboard to Neo4j schema: identify node labels, relationship types, and 3+ properties per node type
- Write and run the discovery query: Who are the connectors? Who are the isolates? Who bridges otherwise disconnected groups?
- Debrief: What surprised you about the graph? What would the client never see just from their org chart?
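The discovery exercise above asks three questions of the graph: who connects, who is isolated, who bridges otherwise disconnected groups. A stdlib-Python sketch of the same analysis (in class you run this in Cypher; the bridge measure here is the same betweenness proxy as the cheat-sheet query, and all names are illustrative):

```python
from collections import defaultdict

def analyze_network(people, knows):
    """Classify people in a KNOWS network as connectors, isolates, or bridges.

    people: list of names; knows: list of (a, b) undirected relationship pairs.
    """
    adj = defaultdict(set)
    for a, b in knows:
        adj[a].add(b)
        adj[b].add(a)

    # Isolates: no relationships at all.
    isolates = [p for p in people if not adj[p]]
    # Connectors: highest-degree nodes first.
    connectors = sorted(adj, key=lambda p: len(adj[p]), reverse=True)
    # Bridges: q linking neighbor pairs (p, r) that don't know each other
    # directly — the same proxy used in the in-class Cypher query.
    bridge_score = defaultdict(int)
    for q in adj:
        neighbors = list(adj[q])
        for i, p in enumerate(neighbors):
            for r in neighbors[i + 1:]:
                if r not in adj[p]:
                    bridge_score[q] += 1
    return isolates, connectors, dict(bridge_score)
```

Running it on a toy chain A–B–C–D with E unconnected surfaces B and C as bridges and E as the isolate — the kind of person an org chart would never flag.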
📖 Session 2 Reference: "AI: Where We Are Today" — complete reference covering transformers, LLMs, LoRAs, diffusion models, the dangers, the possibilities, ethics, the legal landscape, and prompt engineering. Read before Session 3.
Conduct 5+ stakeholder interviews with your client organization and 10 customer discovery conversations for your venture. Use the in-class mapping tool to visualize who influences whom, who holds informal power, and who the hidden connectors are. Submit: (1) Stakeholder Map — visual relationship diagram + 1-page synthesis answering "What does this network reveal that an org chart cannot?" (2) Customer Discovery Summary — your top 3 validated pain points with direct quotes from real conversations. Due: before Week 3 class.
Emotional Intelligence & Vibe Coding
"Every brand has a soul. The -ity words are how we describe it."
The -ity Ontology is a system of 225+ abstract nouns that serve as semantic primitives for describing states of being, brand essence, and emotional quality. Words like Vitality, Synchronicity, Propensity, Authenticity — these are not adjectives but measurable coordinates in a semantic space. In the Trinity Graph, -ity words become edge properties, node attributes, and cross-graph bridge concepts. Every brand has a vibe; vibe coding makes that vibe precise.
- Apply the -ity ontology to real brands and products — select 5 core coordinates that define a brand's semantic signature
- Build "vibe coordinates" as graph properties that can be queried, matched, and compared in Neo4j
- Connect emotional intelligence to social graph design — users have vibe profiles too
Select a brand, choose exactly 5 -ity words that define its vibe, then map it to a radar chart. Compare two brands side-by-side.
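Treating the five -ity coordinates as a vector makes "compare two brands side-by-side" quantitative — cosine similarity is one natural measure. A sketch (brand names, coordinate choices, and values below are purely illustrative):

```python
import math

def vibe_similarity(brand_a, brand_b):
    """Cosine similarity between two brands' -ity coordinate dicts (0-1 values).

    Missing coordinates count as 0, so brands are comparable even when
    their five chosen -ity words only partially overlap.
    """
    words = set(brand_a) | set(brand_b)
    dot = sum(brand_a.get(w, 0.0) * brand_b.get(w, 0.0) for w in words)
    norm_a = math.sqrt(sum(v * v for v in brand_a.values()))
    norm_b = math.sqrt(sum(v * v for v in brand_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical vibe profiles — words and scores are for illustration only.
patagonia = {"authenticity": 0.95, "vitality": 0.8, "sustainability": 0.9,
             "community": 0.7, "simplicity": 0.6}
supreme = {"authenticity": 0.7, "exclusivity": 0.95, "vitality": 0.85,
           "audacity": 0.9, "scarcity": 0.8}
```

A low similarity between your venture and its competitors is exactly the "white space" the Week 3 homework asks you to find.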
Use the Vibe Coder to map your venture's 5 core -ity coordinates, then do the same for 2 direct competitors. Where is the white space? Where does your brand stand for something unique? Submit: (1) Three radar charts — yours vs. both competitors, (2) 1-page Brand Positioning Memo: "How does our -ity profile create differentiation that traditional positioning frameworks miss?" Use Claude to help you interpret the competitive gaps. Due: before Week 4 class.
Knowledge Representation & Ontologies
"Verified facts are the skeleton. Opinions are the flesh. Your job: tell the difference."
The Knowledge Graph is the WHAT layer of the Trinity. Every node is a verified concept or entity. Every edge is an ontological relationship (IS_A, HAS_PROPERTY, CAUSES, ENABLES). Every property has a confidence score (0–1) and a source citation. This is what separates a knowledge graph from a list of facts: provenance, confidence, and structured relationships. The KG enables the system to say not just "this is true" but "here's how confident we are, and here's the receipts."
- Design a domain ontology with proper taxonomy — identify entity types, relationship types, and property schemas
- Create 100+ RDF-style triples in Neo4j with confidence scoring and source citation on every triple
- Implement confidence scoring and source citation as first-class graph properties
- Each pod gets 10 minutes to define 20 RDF-style triples for their venture domain
- Each triple must specify: subject, predicate, object, confidence (0–1), and source URL
- Share with class: "What categories of knowledge does your application actually need?"
- Peer critique round: What's missing? What can't be verified? What confidence scores are too high?
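The per-triple requirements above — subject, predicate, object, confidence (0–1), source URL — can be enforced with a small data type before anything reaches Neo4j. A sketch, not a required format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """An RDF-style triple with the provenance fields required in class."""
    subject: str
    predicate: str   # e.g. IS_A, HAS_PROPERTY, CAUSES, ENABLES
    object: str
    confidence: float  # 0-1: how sure are we?
    source: str        # citable URL — every fact needs receipts

    def __post_init__(self):
        # Reject triples that don't meet the class standard.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be between 0 and 1")
        if not self.source.startswith(("http://", "https://")):
            raise ValueError("source must be a citable URL")
```

Validating at ingest time means the peer-critique question "what confidence scores are too high?" is about judgment, never about malformed data.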
Complete a Knowledge Architecture Document (template on Brightspace) for your client's domain. Identify the 20 most important facts your application needs to reliably know. For each, answer: What is it? Where does it come from? How confident are you (1–10)? What breaks if it's wrong? Group them into 5 categories that match your domain. This is strategic information architecture — no code required. Submit: completed Knowledge Architecture Document + 1-page "What We Don't Know" risk analysis highlighting your top 3 uncertainty gaps. Due: before Week 5 class.
Verification & Ground Truth
"LLMs hallucinate. Knowledge graphs remember. Use both wisely."
Provenance tracking is what separates aware AI from autocomplete. Every fact in your Knowledge Graph knows: where it came from, how confident we are, when it was last verified, and who verified it. Confidence degrades over time — a fact verified in 2020 has lower confidence today than one verified yesterday. This temporal dimension is what enables a system to say "I knew this, but I'm less sure now."
- Build an automated fact-checking pipeline: KG query → LLM query → discrepancy detection
- Implement confidence degradation over time — write a function that adjusts confidence based on verification age
- Compare KG facts vs. LLM-generated "facts" across 10 domain questions — quantify the hallucination rate
- Each pod takes their 100-triple KG from Week 4 and formulates 10 domain-specific questions it should answer
- Ask Claude or GPT-4 to answer the same 10 questions without access to the KG
- Record: Where does the LLM agree? Where does it disagree? Where does it invent new "facts" not in the KG?
- Discussion: What is the cost of hallucination in your specific domain? Healthcare? Finance? Legal? What's the risk profile?
Conduct an AI Reliability Audit for your domain. Use Claude and GPT to answer 10 domain-specific questions, then cross-reference their answers against your Knowledge Architecture Document from Week 4. Where does AI confidently state something that contradicts what you know to be true? Submit: 2-page "AI Reliability Analysis" covering: (1) where each AI tool was wrong, (2) the business consequences if a real user trusted that wrong answer, (3) how a verified Knowledge Graph would have prevented each error. This is your strategic argument for why grounded AI matters in your domain. Due: before Week 6 class.
Knowledge Graph + Social Graph Integration
"Context changes everything. WHO is asking changes WHAT matters."
Cross-graph entity linking is the mechanism that creates personalization. The same Concept node in the Knowledge Graph means different things to different Person nodes in the Social Graph. A CEO asking "What is our retention rate?" needs strategic context. A Customer Service rep asking the same question needs operational detail. Personalization emerges from the intersection of WHAT and WHO. This week we build that intersection.
- Implement entity linking between Social and Knowledge graphs — INTERESTED_IN, EXPERTISE_IN, RESPONSIBLE_FOR edges
- Build context-aware retrieval: same query + different user_id = different result set
- Demonstrate live: personalization emerges from graph intersection, not from hard-coded rules
- Build a simple query function: input = (question, user_id); output = personalized fact set
- user_id is looked up in Social Graph → returns context: role, expertise_level, interests, reporting_structure
- Same question → different Cypher traversal based on user context → different top-k facts returned
- LIVE DEMO: Ask "What is our retention rate?" as CEO vs. as Customer Service rep — show 3 distinct result sets
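One shape the query function above could take, with the Neo4j session injected as a callable so the personalization logic is testable without a live database. The Cypher follows the cheat-sheet schema (CARES_ABOUT, COVERS, confidence) — adjust labels and properties to your own graph:

```python
# Parameterized Cypher: the user's Social Graph context drives which
# Knowledge Graph facts come back.
PERSONALIZED_FACTS = """
MATCH (u:Person {id: $user_id})-[:CARES_ABOUT]->(t:Topic)
MATCH (c:Concept)-[:COVERS]->(t)
WHERE c.confidence > $min_confidence
RETURN c.name AS name, c.definition AS definition, c.confidence AS confidence
ORDER BY c.confidence DESC LIMIT $k
"""

def personalized_fact_set(question, user_id, run_query, k=5, min_confidence=0.8):
    """Same question + different user_id = different top-k fact set.

    run_query(cypher, params) abstracts the Neo4j session, so in tests a
    stub can stand in for the database.
    """
    facts = run_query(PERSONALIZED_FACTS, {
        "user_id": user_id,
        "min_confidence": min_confidence,
        "k": k,
    })
    return {"question": question, "user_id": user_id, "facts": facts}
```

With a real driver, `run_query` would wrap `session.run(...)`; the point of the sketch is that personalization lives in the traversal, not in hard-coded rules.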
Design a Persona Architecture for your application. Define 3 distinct user personas — different roles, expertise levels, goals, and fears. For each persona: write the 5 most important questions they'd ask your system and describe how the ideal answer would differ from what the other two personas should receive. Then record a 2-minute screen-capture demo using your AI tool showing how context changes the response to the same question. Submit: Persona Architecture Document + demo video link. Due: before Week 7 class.
★ Midterm — Knowledge Graphs, Ontology & Dual-Track Review
"Halfway there. Show us what you've built. Then we add the final dimension."
The Generative Graph is the WHAT IF layer. RAG (Retrieval-Augmented Generation) uses your Trinity Graph as the retrieval layer: user query → semantic search in Knowledge Graph → top-k relevant facts → fed to LLM as context → grounded, personalized response. The generation is not just an LLM talking — it's an LLM grounded in your data, responding to your users, constrained by your verified facts.
- Implement basic RAG pipeline using Neo4j as vector store — embed KG nodes, enable semantic search
- Present both client and venture projects in a single 15-minute slot — live demo of working Social + Knowledge integration
- Demonstrate Social + Knowledge integration — show personalized retrieval working live for the class
- 15-minute presentation: Client project status + venture concept + live demo of working Social + Knowledge integration
- Architecture diagram showing all 3 graph layers — even if Generative is still a placeholder/stub
- 1-page Business Model Canvas for your venture (Strategyzer format)
- Neo4j export showing social + knowledge graph with cross-graph entity links
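A minimal, backend-agnostic sketch of the RAG loop described above. The embed, search, and generate parameters are injected callables standing in for your embedding model, your Neo4j vector index, and your LLM — the function names and prompt wording are illustrative:

```python
def rag_answer(question, embed, search, generate, k=5):
    """Minimal RAG loop: embed query -> top-k facts -> grounded prompt -> LLM.

    embed(text) -> vector; search(vector, k) -> list of fact strings
    (e.g. a Neo4j vector-index query); generate(prompt) -> answer string.
    """
    query_vec = embed(question)
    facts = search(query_vec, k)
    context = "\n".join(f"- {fact}" for fact in facts)
    prompt = (
        "Answer using ONLY the verified facts below. "
        "If they are insufficient, say so.\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )
    # Returning the facts alongside the answer keeps the output traceable —
    # the Trinity standard: what grounded this response?
    return generate(prompt), facts
```

Keeping all three steps injectable is what lets the Generative layer stay a stub at midterm and become real later without rewriting the pipeline.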
Prepare your Midterm Package for both tracks: (1) Architecture Visual — a single-page diagram showing how all three graph layers connect in your application (Figma, Miro, or Canva — no code required), (2) Business Model Canvas for your venture (Strategyzer template, completed in full), (3) Client Status Memo — 1-page update to your client summarizing what you've learned, what you've built, and what comes next. Submit all three by midnight before midterm class. The 15-minute in-class presentation walks through these artifacts live with your client stakeholders in the room.
Agentic Workflows & Multi-Graph Fusion
"Single queries are questions. Agents are reasoning."
Trinity Convergence is the Convergence Layer: agents that query all three graphs in sequence. Social first (WHO is asking?) → Knowledge second (WHAT do we know relevant to them?) → Generative third (WHAT SHOULD WE SAY, given who they are and what we know?). This is the architecture of a system that doesn't just answer — it reasons. Every output is triple-grounded: in identity, in knowledge, in context.
- Build a 3-step reasoning agent using the Trinity architecture: Social → Knowledge → Generative pipeline
- Implement the full Social→Knowledge→Generative pipeline with real data and live queries
- Measure "awareness score" — quantify how much user context changes the output across 10 test cases
- Each pod designs their agent's decision tree on paper first — what does the agent check at each step?
- Implement in CrewAI or LangGraph: Agent must (1) identify user from Social Graph, (2) retrieve relevant facts from KG, (3) generate personalized response
- LIVE DEMO: Run same agent with 3 different users — show 3 different outputs and explain why they differ
- Measure: what % of the output changed based on user context? Is it meaningful change or just style?
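The three-step pipeline and the "awareness score" measurement might be sketched like this, with each graph layer injected as a callable — a testing scaffold, not the CrewAI/LangGraph implementation you build in class:

```python
def trinity_agent(question, user_id, social, knowledge, generative):
    """Social -> Knowledge -> Generative reasoning pipeline.

    social(user_id) -> user context dict; knowledge(question, context) ->
    fact list; generative(question, context, facts) -> response string.
    """
    context = social(user_id)                    # step 1: WHO is asking?
    facts = knowledge(question, context)         # step 2: WHAT do we know for them?
    return generative(question, context, facts)  # step 3: WHAT should we say?

def awareness_score(outputs):
    """Share of test cases whose output differs from a context-free baseline.

    outputs: list of (with_context, without_context) response pairs. A crude
    proxy — it counts changed outputs, not whether the change is meaningful,
    which the in-class debrief asks you to judge.
    """
    changed = sum(1 for with_ctx, without_ctx in outputs
                  if with_ctx != without_ctx)
    return changed / len(outputs) if outputs else 0.0
```

Run the same ten questions with and without the Social step to get the "what % of the output changed" number for the in-class measurement.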
Write an Agent Logic Map — a visual flowchart showing your AI agent's decision process for 5 real user scenarios. For each scenario trace: (1) What Social context does the agent use? (2) What verified facts does it retrieve? (3) What does it generate — and why is this different from a generic AI answer? Submit: Agent Logic Map (visual, Miro or Figma) + 2-page "Why Context Changes Everything" analysis demonstrating how the same question produces meaningfully different outputs for different users. Your AI tools should assist with both the mapping and the writing. Due: before Week 9 class.
The -ity Ontology in Practice
"Language is architecture. The words you choose become the structure you build."
The 225+ -ity terms are not decorative vocabulary — they are semantic bridges between graphs. Authenticity links the Social Graph (real identity, verified persona) to the Knowledge Graph (verified facts, confirmed sources) to the Generative Graph (genuine creative output that matches both). Every -ity word represents a cross-graph relationship type. Building with -ity vocabulary means building a system that has emotional and conceptual coherence across all three layers.
- Map your venture's core -ity vocabulary to all 3 graphs — which words live primarily in Social, Knowledge, or Generative?
- Build "awareness metrics" using -ity coordinates — quantify how "aligned" your system is with its intended vibe
- Design the semantic bridge layer specific to your domain — the cross-graph relationship schema
- Each pod picks their 10 most important -ity words for their specific domain
- Draw the cross-graph map: which -ity words appear in Social? Knowledge? Generative? All three?
- Build Cypher queries that traverse all 3 graphs using -ity as the bridge relationship
- Present: "Here are our 3 most powerful -ity bridges and what they unlock in our system"
Build your Semantic Architecture Document — your venture's conceptual vocabulary. Define your 15 most important -ity words (use the Trinity Vocabulary reference and Vibe Coder for inspiration). For each word document: which graph layer it primarily lives in, what behavior or feature it enables, how you would measure whether it's working, and what breaks when it's absent. Use Claude to stress-test each one. This document becomes the conceptual backbone of your investor pitch. Submit: completed Semantic Architecture Document. Due: before Week 10 class.
Business Model Design
"The graph is the moat. Every new user makes it smarter. Every new fact makes it more defensible."
The Trinity Graph creates data network effects — the most powerful and defensible moat in software. More users → richer Social Graph → better personalization → better experience → more users. More facts verified → higher knowledge confidence → better grounding → fewer hallucinations → more trust → more usage → more facts. And crucially: the graph compounds. A competitor starting today is not just behind on features — they are behind on years of relationships, verifications, and connections they can never fully replicate.
- Design a revenue model that leverages the Trinity Graph's compounding nature — show how monetization scales with graph density
- Build unit economics model: LTV, CAC, payback period, LTV/CAC ratio — grounded in real assumptions
- Identify what makes your graph defensible: data moats, switching costs, network effects, proprietary knowledge
- Each pod completes a full Strategyzer Business Model Canvas in 20 minutes — all 9 blocks, no gaps
- Build a simple 12-month model: if you acquire 100 users in month 1, what does month 12 look like for revenue, graph size, and graph quality?
- Present to class: "What is our data moat? What happens to our graph after 1 year vs. 3 years vs. 5 years?"
- Peer question: "What would it cost a well-funded competitor to replicate your graph in 6 months?"
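The unit-economics objective above uses standard SaaS formulas. A sketch with illustrative inputs — this uses the simple perpetuity model of LTV; the Brightspace template may model churn differently:

```python
def unit_economics(arpu_monthly, gross_margin, monthly_churn, cac):
    """Core SaaS unit economics: LTV, LTV/CAC ratio, and payback period.

    LTV = monthly contribution / monthly churn (perpetuity approximation);
    payback = CAC / monthly contribution.
    """
    monthly_contribution = arpu_monthly * gross_margin
    ltv = monthly_contribution / monthly_churn
    return {
        "ltv": ltv,
        "ltv_cac": ltv / cac,                      # healthy SaaS target: > 3
        "payback_months": cac / monthly_contribution,
    }
```

Example: $50 ARPU at 80% margin with 2% monthly churn and $400 CAC gives a $2,000 LTV, a 5.0 LTV/CAC ratio, and a 10-month payback — and the graph-compounding argument is that churn falls and margin rises as the graph gets denser.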
Build your Trinity Financial Model in Google Sheets (template on Brightspace). Include: 3-year P&L projection, unit economics (CAC, LTV, payback period), and a "Graph Compounding" section — a simple visualization showing how your system gets smarter and more defensible as users and data accumulate, and how that translates to revenue. Include base / bull / bear scenarios. Use Claude or Cursor to help build and check your formulas. Submit: Google Sheets link + 1-page narrative explaining your key assumptions and what would need to be true for the bull case to materialize. Due: before Week 11 class.
Scaling & Performance
"Beautiful architecture that can't handle load is a prototype, not a product."
Production Trinity Architecture requires three layers of optimization: caching strategies for repeated graph queries, async processing for expensive operations (embedding generation, large KG traversals), and graph partitioning for very large social graphs. The target is <200ms end-to-end response time for most queries. Awareness at scale means the system is still context-sensitive at 10,000 simultaneous users — not just at 10.
- Implement a caching layer (Redis or in-memory) for repeated graph queries — measure cache hit rate
- Optimize Neo4j queries using EXPLAIN/PROFILE — add indexes, rewrite traversals, eliminate Cartesian products
- Design async processing for expensive operations — what can be pre-computed, what must be real-time?
- Each pod runs EXPLAIN on their top 5 most frequent queries — identify DB Hits count before optimization
- Identify the slowest part of their Trinity pipeline (embedding? graph traversal? LLM call? API serialization?)
- Apply one optimization: add index, rewrite Cypher, add cache layer, or pre-compute embeddings
- Before/after: measure and present the query time improvement — what was the DB Hits reduction?
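For the caching objective, an in-memory stand-in for Redis that also tracks the cache hit rate you're asked to measure. A prototyping sketch — the TTL default is illustrative:

```python
import time

class QueryCache:
    """In-memory cache for repeated graph queries, with hit-rate tracking.

    ttl_seconds bounds staleness: after the TTL, the query re-runs so
    cached answers can't drift too far from the live graph.
    """
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (result, cached_at)
        self.hits = 0
        self.misses = 0

    def get_or_run(self, key, run):
        """Return the cached result for key, or call run() and cache it."""
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]
        self.misses += 1
        result = run()  # e.g. the actual Neo4j query
        self.store[key] = (result, now)
        return result

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Measure `hit_rate()` before and after adding the cache layer to your pipeline — it's the same before/after discipline as the EXPLAIN/PROFILE exercise.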
Write a Technical Partner Brief — a 3-page document designed for a prospective CTO or technical co-founder. Explain: (1) what your system does and how the three graph layers work together, (2) where the current performance constraints are and how you'd prioritize addressing them, (3) what infrastructure investment is required to scale from 100 to 10,000 users. This is a communication and prioritization exercise — translate what you've built into strategic language for a technical audience. Use Claude to help you think through the trade-offs clearly. Submit: Technical Partner Brief. Due: before Week 12 class.
Advanced Trinity Patterns
"Where convergence gets weird — and wonderful."
Emergent awareness — when all three graphs are properly integrated, the system exhibits behaviors that none of the individual graphs could produce alone. Digital twins mirror real-world entities in the KG. The -pathy layer adds emotional/health dimensions. And when two Trinity Graphs talk to each other — cross-domain synthesis emerges.
Run your fully integrated Trinity system with 3 brand-new user scenarios you haven't tried before. Document:
- What did the system do that you didn't explicitly program?
- Where did it surprise you — in a good way? In a bad way?
- What is the system "learning" over time as data accumulates?
- Is this the beginning of awareness? What's missing?
Then: attempt a cross-domain synthesis — connect your venture's Trinity Graph to a classmate's. What new emergent behaviors appear when the two graphs share edges?
Run an Integration Review with at least 2 people outside your pod — ideally people from your target user group. Have them interact with your system and observe without coaching. Document everything: What confused them? What delighted them? What did they want that wasn't there? Submit: (1) 2-page User Feedback Report with direct observations and quotes, (2) Revised Demo Script for Demo Day based on what you learned, (3) Updated one-page system overview diagram. Due: before Week 13 class.
Dual Delivery — Client Presentations
"Serve first. Pitch second. The best founders do both beautifully."
Authenticity in practice — the difference between a system built FOR users (client project) and a system built TO SELL (venture pitch). Both require genuine Trinity Graph insight, but the frame shifts entirely. One asks "what do they need?" The other asks "why should they believe?"
30 min per pod: 20 min live demo + 10 min Q&A from client stakeholders. University partners attend in person.
Peer feedback sessions. Baxter Webb facilitates. Every pod presents, every pod critiques. Ruthless but constructive.
- Working System — deployed URL or installable package, tested end-to-end
- User Documentation — written for a non-technical audience; could be a client's first-year employee
- Technical Documentation — schema, API endpoints, deployment instructions for future maintainers
- Training Session Plan — 60-minute onboarding plan the client can run themselves
- "What's Next" Roadmap — 3 features the client should build in the next 6 months, with rough effort estimates
Demo Day
"This is not a school project. This is a company. Show us."
Every great pitch follows the Omega Protocol arc: Grounding (the problem is real) → Awakening (something has changed that makes now the moment) → Friction (why hasn't this been solved?) → Connection (your Trinity Graph is the bridge) → Convergence (demo: all three graphs, live) → Omega (the vision, fully realized) → Return (the ask, grounded in reality).
- GitHub Repo — public, clean README, deployed demo linked
- Pitch Deck — max 12 slides, PDF submitted night before
- Financial Model — 3-year P&L, unit economics (CAC, LTV, payback)
- 1-Page Executive Summary — for judges to keep after Demo Day
- Live Application — must stay live for 30 days post-Demo Day
Trinity Vocabulary — The -ity Ontology
225+ abstract nouns as semantic primitives. Filter by graph layer.
Neo4j Cypher Cheat Sheet
The queries you'll use every week.
// Create a person node
CREATE (p:Person {id: "u001", name: "Alex Chen", role: "Student",
vitality: 0.8, curiosity: 0.9, authenticity: 0.7})
// Create a relationship with weight
MATCH (a:Person {id:"u001"}), (b:Person {id:"u002"})
CREATE (a)-[:KNOWS {strength: 0.85, context: "class", since: date()}]->(b)
// Find bridge nodes (high betweenness centrality proxy)
MATCH (p:Person)-[:KNOWS]->(q:Person)-[:KNOWS]->(r:Person)
WHERE NOT (p)-[:KNOWS]->(r) AND p <> r
RETURN q.name AS bridge, count(*) AS connections
ORDER BY connections DESC LIMIT 10
// Shortest path between two people
MATCH path = shortestPath(
(a:Person {name:"Alex"})-[:KNOWS*..6]-(b:Person {name:"Jordan"})
) RETURN path
// Create a verified concept
CREATE (c:Concept {name: "Churn Rate", domain: "SaaS",
definition: "% customers lost in a period",
confidence: 0.98, source: "https://a16z.com/...",
verified_at: datetime()})
// Link concepts with typed relationships
MATCH (a:Concept {name:"Churn Rate"}), (b:Concept {name:"LTV"})
CREATE (a)-[:INVERSELY_AFFECTS {weight: 0.9, citation: "Gupta 2004"}]->(b)
// Fact-check: find low-confidence claims
MATCH (c:Concept) WHERE c.confidence < 0.7
RETURN c.name, c.confidence, c.source ORDER BY c.confidence ASC
// Cross-graph: concepts relevant to a specific user role
MATCH (u:Person {role: "CEO"})-[:CARES_ABOUT]->(topic:Topic)
MATCH (c:Concept)-[:COVERS]->(topic)
WHERE c.confidence > 0.85
RETURN c.name, c.definition ORDER BY c.confidence DESC
// Log a generated response
CREATE (g:GeneratedContent {
id: randomUUID(), query: "What is our churn?",
user_id: "u001", model: "claude-3-5",
response: "Based on your 847 customers...",
ity_coords: ["clarity:0.9","authenticity:0.85"],
created_at: datetime()})
// Trinity convergence query: full pipeline
MATCH (u:Person {id: $userId})
MATCH (u)-[:CARES_ABOUT]->(t:Topic)
MATCH (c:Concept)-[:COVERS]->(t) WHERE c.confidence > 0.8
OPTIONAL MATCH (u)-[:PREVIOUSLY_ASKED]->(g:GeneratedContent)
RETURN u.name, u.vitality, u.curiosity,
collect(c.name)[..5] AS top_facts,
count(g) AS interaction_count
// Find concepts that bridge social + knowledge graphs
MATCH (u:Person)-[k:KNOWS]->(v:Person) WHERE k.strength > 0.7
MATCH (c:Concept) WHERE c.name IN u.ity_interests AND c.name IN v.ity_interests
RETURN c.name AS shared_interest, count(*) AS pair_count
ORDER BY pair_count DESC
Tool Comparison Matrix
Choosing the right tool for each layer of your Trinity Graph.
| Layer | Tool | Best For | Free Tier | Course Recommendation |
|---|---|---|---|---|
| Social Graph | Neo4j Aura | Property graphs, Cypher queries | ✅ 1GB free | ⭐ Primary |
| Social Graph | Gephi | Visualization, layout algorithms | ✅ Open source | Visualization only |
| Knowledge Graph | Neo4j + Ontology | Unified graph (recommended) | ✅ | ⭐ Primary |
| Knowledge Graph | Apache Jena Fuseki | RDF triples, SPARQL | ✅ Open source | Week 4 exploration |
| Generative AI | LangChain | RAG pipelines, chain composition | ✅ | ⭐ Primary |
| Generative AI | CrewAI | Multi-agent orchestration | ✅ | Week 8+ |
| Generative AI | LlamaIndex | Document ingestion, indexing | ✅ | Alternative to LangChain |
| LLM | Claude (Anthropic) | Analysis, long context, reasoning | ✅ Credits provided | ⭐ Primary |
| LLM | GPT-4o (OpenAI) | Code generation, multimodal | Limited free | Complement to Claude |
| Dev Tools | Cursor AI | AI-assisted coding, architecture | Limited free / $20/mo | ⭐ Strongly Recommended |
| Dev Tools | Railway / Render | Fast deployment, free hosting | ✅ | Deploy from Week 8 |
Course References
Required texts, supplementary readings, and module-by-module research papers.
All written work should use APA 7th Edition citation format. For AI-generated content used in your deliverables, include a disclosure statement: "[Tool name] was used to [specific task]. All outputs were reviewed, verified, and edited by the student authors." Academic integrity policy applies fully to AI-assisted work — the ideas must be yours, even when AI assists with execution.