Owen Graduate School of Management · MBA Practicum

AI-Accelerated Entrepreneurship Practicum

A 14-week experiential program where MBA students build real AI systems using the Trinity Graph architecture — Social, Knowledge, and Generative — while simultaneously delivering for a real client partner and launching an entrepreneurial venture.

14 Sessions · 3 Graph Types · 2 Tracks · 225+ -ity Concepts

The Trinity Graph Architecture

Social Graph (WHO)
Nodes: People · Edges: Relationships · Props: Emotion & Behavior
+
🧠 Knowledge Graph (WHAT)
Nodes: Concepts · Edges: Relationships · Props: Confidence
+
Generative Graph (WHAT IF)
RAG + Context · Grounded Output · Awareness Layer
=
🔺 Aware AI
Convergence · Personalized · Grounded · Compounding

Dual-Track Model

  • Client Track: Build a Trinity Graph system for a real Nashville artist, collective, or community
  • Venture Track: Develop an original AI-powered startup using the same architecture
  • Both tracks run in parallel throughout the 14 weeks
  • Week 7: Midterm review of both tracks
  • Week 13: Client delivery + Week 14: Investor Demo Day

Learning Arc

  • Weeks 1–2: Social Graph foundations, stakeholder mapping
  • Weeks 3–5: Knowledge Graph, ontologies, verification
  • Weeks 6–7: Integration + Midterm checkpoint
  • Weeks 8–9: Agentic workflows + -ity ontology
  • Weeks 10–12: Business model, scaling, advanced patterns
  • Weeks 13–14: Delivery, client handoff, Demo Day

Key Embedded Tools

  • Week 1: Trinity Pod Formation App — forms balanced student teams
  • Week 3: Vibe Coding App — radar chart brand mapping
  • Week 7: Midterm Pitch Timer — 15-min countdown with phases
  • Week 14: Demo Day Scoreboard — live scoring + confetti reveal

Technology Stack

  • Neo4j Aura — all three graph layers
  • Claude AI + OpenAI — generation & reasoning
  • LangChain / CrewAI — agentic pipelines
  • Cursor AI — AI-accelerated development
  • Figma / Miro / Canva — visual deliverables
  • Railway / Render — deployment
📊Grading Rubric
  • Participation & Reflection: 15%
  • Pod Deliverables: 25%
  • Individual Assignments: 25%
  • Client / Venture Project: 25%
  • Peer Evaluation: 10%

Participation & Reflection
15%

Weekly engagement in live sessions, quality of questions and contributions to class discussion, peer feedback provided in pod reviews, and weekly AI usage reflection log submitted via Brightspace.

✦ What distinguishes A work:

Student consistently asks questions that advance the entire class's thinking — not just their own. Reflection logs show genuine critical evaluation of AI outputs (not just usage confirmation). Peer feedback is specific, actionable, and generous.

Pod Deliverables
25%

Weekly team outputs submitted as a pod unit — schemas, working code, client research, design artifacts, and progress documentation. Graded on collaborative quality: does the whole exceed the sum of its parts? 12 milestone deliverables across the 14 sessions.

✦ What distinguishes A work:

Deliverables show a coherent pod voice — individual contributions are integrated, not just concatenated. The pod proactively addresses review feedback before the next checkpoint. Documentation reveals the reasoning process, not just the result.

Individual Assignments
25%

12 individual assignments distributed across the 14 sessions (Sessions 13 and 14 focus on final delivery; no individual assignments). Assignments must be completed solo — pod collaboration is not permitted on individual work. All AI tool usage must be cited using the standard disclosure format.

✦ What distinguishes A work:

Work shows original synthesis — ideas that could only come from this student's unique perspective. AI disclosures demonstrate a genuine feedback loop: the student challenged the AI's output, iterated, and improved it. Writing is precise and conceptually sharp, not padded.

Client / Venture Project
25%

Two graded milestones: Mid-term presentation (Session 7 — March 30) — architecture walkthrough and progress demo (15 min); and Final Demo Day (Session 14 — April 22) — live system demonstration with external judges. Both Client Track and Venture Track pods present in a shared format at Demo Day.

✦ What distinguishes A work:

The system actually works in front of an audience — no slide decks standing in for a missing demo. The pod can answer unexpected judge questions about design choices. The narrative clearly connects the technical architecture to real human value for users.

Peer Evaluation
10%

End-of-semester 360° review of all pod members, completed individually and confidentially. Evaluates: contribution level, reliability, communication quality, and growth over the course. Results are shared only with the individual being evaluated.

✦ What distinguishes A work:

Peers describe you as someone who raised the floor for the whole team — you made everyone else better. You shared knowledge proactively, absorbed feedback without defensiveness, and made the hard decisions when the pod was stuck.

📋Course Policies
ATTENDANCE

More than 2 unexcused absences will affect your final grade — each additional absence reduces your Participation & Reflection score by one letter grade. Live class sessions are not recorded. This is a practicum; in-class exercises cannot be replicated asynchronously. If you must miss class, notify Baxter Webb in advance and coordinate with your pod.

LATE WORK

Submissions lose 10% per day late. No late submissions are accepted after 7 days — the assignment receives a zero at that point. No exceptions without prior written approval from Baxter Webb. The pace of this course mirrors a real startup environment: deadlines are real.

COLLABORATION

Pod work is collaborative by design — use each other freely on all Pod Deliverables. Individual assignments must be solo — pod members may not co-author, co-edit, or substantively advise on individual work. For all work, cite every AI tool used using the standard disclosure format (see AI Usage Policy).

ACADEMIC INTEGRITY

All AI tool usage must be disclosed. Using AI to complete individual assignments without disclosure is a violation of Vanderbilt's Honor Code and will be reported to the Academic Integrity board. This course takes a progressive stance on AI — it is a required tool — but transparency is non-negotiable. Undisclosed AI use is treated the same as plagiarism.

CONFIDENTIALITY

Client partners share sensitive organizational data in good faith. Everything discussed in client interviews and shared in client documents is confidential. Do not post client data publicly or discuss it in open forums. Venture Track work may be shared publicly only with full pod consent. Treat the client relationship like a professional consulting engagement.

OFFICE HOURS

TA Baxter Webb · baxter.webb@vanderbilt.edu · Tuesday & Thursday 3–5 pm (Central). All pod deliverable questions, grade inquiries, and week-to-week logistics go to Baxter first.

Professor Oliver Luckett · Oliver.Luckett@Vanderbilt.edu · By appointment only; email to schedule. Reserved for strategic, conceptual, or escalated matters.

🤖AI Usage Policy · Required Tool
CORE STANCE

AI is a required tool in this course — not a shortcut, not a cheat code, not optional. The question is never whether to use AI but how well you use it. All AI-generated content must be disclosed, attributed, and critically evaluated before it enters any deliverable.

🔍
(a) Research Tool

Using AI to explore a domain, surface prior work, generate questions, and build background knowledge. You verify and synthesize.

✏️
(b) Drafting Assistant

Using AI to generate first drafts, code scaffolding, or outlines that you then substantively revise. You own the final form.

⚖️
(c) Decision Support

Using AI to stress-test arguments, model scenarios, and pressure-check strategy. You make the call.

UNDERSTAND THE OUTPUT

Students are expected to understand how the AI produced its output — not just accept it. If you use AI-generated code, you must be able to explain every line. If you use AI-generated prose, you must be able to defend every argument. "The AI said so" is not a defense.

THE TRINITY GRAPH STANDARD

All AI-assisted outputs in this course must meet the Trinity Graph standard: they must be traceable (what prompted this?), attributable (which tool, which model, which version?), and defensible (can you stand behind this in front of an expert?). This standard applies to code, writing, design, and strategy deliverables alike.

REQUIRED DISCLOSURE FORMAT
// Include this block at the end of any AI-assisted deliverable
AI Tool: [tool name + version, e.g. "Claude 3.5 Sonnet"]
Prompt: [brief description of what you asked it to do]
Human contribution: [what you added, changed, verified, or rejected]
One disclosure block per AI tool used. Multiple tools = multiple blocks. Undisclosed AI use = academic integrity violation.
🤝Client Partner Program

This course runs on two parallel tracks. In Week 2, each pod submits a brief Letter of Intent declaring their chosen track for the semester. The track selection shapes your deliverables, your client relationships, and your Demo Day narrative — but both tracks share the same Trinity Graph architecture and present together at Week 14.

📁 CLIENT TRACK
Nashville artist, collective, or community
  • Partner is a Nashville artist, collective, or community with a real creative challenge
  • Pod conducts discovery interviews with real stakeholders
  • Client Track deliverables are shared directly with the partner
  • Final handover package includes working system + documentation + staff training
  • Partner attends Demo Day presentation
🚀 VENTURE TRACK
Original AI-powered startup concept
  • Pod develops an original startup concept using Trinity Graph architecture
  • Students own their IP — the course and University make no IP claims
  • Pod self-selects target users and conducts primary market research
  • Mid-term pitch to professor + optional investor guest
  • Demo Day pitch to external judge panel in investor format
Past Client Partner Types
Healthcare Systems · University Departments · Nonprofits · Mid-Market Companies · Research Institutions · Student Affairs Offices
🎤 Demo Day (Week 14) — Both Tracks, Shared Stage

Client Track and Venture Track pods present in the same format to a shared panel of external judges. Each pod receives a 10-minute live demo slot + 5 minutes of Q&A. There are no separate tracks at Demo Day — the stage belongs to the work.

Wk 1
Mon, March 9

Welcome to the Serendipity Engine · Team Formation

"The Serendipity Engine isn't magic — it's architecture. And you're going to build it."
🔺IAM Concept: Trinity Graph

Introduction to the Trinity Graph — three distinct graphs operating as one unified awareness system. The Social Graph answers WHO: nodes are people, edges are relationships, properties carry emotion and behavior. The Knowledge Graph answers WHAT: nodes are verified concepts, edges are ontological relationships, properties have confidence scores and sources. The Generative Graph answers WHAT IF: it uses RAG (Retrieval-Augmented Generation) to produce context-sensitive, grounded output. Together, they constitute the architecture of aware AI.

🎯Learning Objectives
  1. Understand the Trinity Graph architecture: Social + Knowledge + Generative — three graphs, one awareness
  2. Form Trinity Pods with strategically complementary skill sets across Background and Primary Strength dimensions
  3. Select a Client Partner organization and define an Entrepreneurial Venture domain for the semester
In-Class Exercise: Trinity Pod Formation · Live App
🔺 Trinity Pod Formation System v1.0 · Owen MBA
🛠️Tools This Week
Neo4j Aura (free tier) Cursor AI Claude AI LinkedIn (for assignment)

Set up all three tools before next class. Create a Neo4j Aura free account and confirm you can log in. Download and activate Cursor AI. Set up Claude AI at claude.ai.

📋Assignment

Each pod member writes a 2-page "Convergent Intelligence Vision" memo: What problem would you solve with aware AI? Who are the users? How would each of the three graph layers — Social (WHO), Knowledge (WHAT), Generative (WHAT IF) — make your solution smarter over time? Use Claude to pressure-test your thinking. Also: complete the Trinity Graph orientation on Brightspace and confirm your client partner assignment with Baxter Webb. Due: before Week 2 class.

Wk 2
Wed, March 11

AI Tool Landscape & Sprint Methodology

"You don't need to know how the engine works. You need to know how to drive."
🔺IAM Concept: Social Graph

The Social Graph is the WHO layer of the Trinity. Nodes are people and organizations; edges are typed relationships; properties carry emotional valence, influence weight, and directional power. Every relationship has a WEIGHT (how strong?) and a DIRECTION (who influences whom?). The Social Graph answers not just "who are these people?" but "how do they relate, and why does it matter to your application?"
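To make the WEIGHT and DIRECTION idea concrete, here is a minimal Python sketch (not course-provided code; all names and numbers are hypothetical) that models directed, weighted relationships and surfaces the "connector" the discovery query looks for:

```python
# Hypothetical sketch: a weighted, directed social graph as plain Python
# tuples, mirroring the WEIGHT and DIRECTION properties described above.
from collections import defaultdict

# (source, target, relationship type, weight) -- invented example data
edges = [
    ("Ana", "Ben", "INFLUENCES", 0.7),
    ("Ana", "Cara", "INFLUENCES", 0.4),
    ("Ben", "Cara", "MENTORS", 0.9),
    ("Dev", "Ana", "REPORTS_TO", 1.0),
]

def degree_counts(edges):
    """Count total (in + out) connections per person."""
    degree = defaultdict(int)
    for src, dst, _type, _weight in edges:
        degree[src] += 1
        degree[dst] += 1
    return dict(degree)

def top_connector(edges):
    """The node with the most relationships: a candidate 'connector'."""
    degree = degree_counts(edges)
    return max(degree, key=degree.get)

print(top_connector(edges))  # Ana appears in 3 edges
```

The same question ("who bridges otherwise disconnected groups?") is asked against Neo4j in the in-class exercise; this toy version just shows that degree is a property of the edge list, not of any single node record.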

🎯Learning Objectives
  1. Design a social graph schema for a real client domain — nodes, edge types, property schema
  2. Conduct 5+ empathetic stakeholder interviews using structured discovery protocol
  3. Map stakeholder relationships in Neo4j Aura — minimum 20 nodes, typed edges
In-Class Exercise: Live Stakeholder Mapping
  • Each pod draws their client organization's complete stakeholder map on whiteboard — include internal teams, external partners, end users, regulators
  • Convert the whiteboard to Neo4j schema: identify node labels, relationship types, and 3+ properties per node type
  • Write and run the discovery query: Who are the connectors? Who are the isolates? Who bridges otherwise disconnected groups?
  • Debrief: What surprised you about the graph? What would the client never see just from their org chart?
// Stakeholder schema example
CREATE (p:Person {name: "Ana Reyes", role: "VP Product", influence: 0.85})
CREATE (p)-[:REPORTS_TO]->(ceo:Person {name: "CEO"})
CREATE (p)-[:INFLUENCES {strength: 0.7}]->(eng:Team)

// Find bridge nodes (Neo4j 5 COUNT {} syntax)
MATCH (n)
WHERE COUNT { (n)--() } > 3
RETURN n.name, COUNT { (n)--() } AS degree
ORDER BY degree DESC
🛠️Tools This Week
Neo4j Browser Gephi Otter.ai (interview recording) Notion (interview notes)
📋Assignment

📖 Session 2 Reference: AI: Where We Are Today — Complete reference covering transformers, LLMs, LoRAs, diffusion models, the dangers, the possibilities, ethics, legal landscape, and prompt engineering. Read before Session 3.

Conduct 5+ stakeholder interviews with your client organization and 10 customer discovery conversations for your venture. Use the in-class mapping tool to visualize who influences whom, who holds informal power, and who the hidden connectors are. Submit: (1) Stakeholder Map — visual relationship diagram + 1-page synthesis answering "What does this network reveal that an org chart cannot?" (2) Customer Discovery Summary — your top 3 validated pain points with direct quotes from real conversations. Due: before Week 3 class.

Wk 3
Mon, March 16

Emotional Intelligence & Vibe Coding

"Every brand has a soul. The -ity words are how we describe it."
🔺IAM Concept: -ity Ontology

The -ity Ontology is a system of 225+ abstract nouns that serve as semantic primitives for describing states of being, brand essence, and emotional quality. Words like Vitality, Synchronicity, Propensity, Authenticity — these are not adjectives but measurable coordinates in a semantic space. In the Trinity Graph, -ity words become edge properties, node attributes, and cross-graph bridge concepts. Every brand has a vibe; vibe coding makes that vibe precise.
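The claim that -ity words are "measurable coordinates in a semantic space" can be sketched in a few lines of Python (an illustration under assumptions: the five axes, brand names, and scores below are invented, and cosine similarity is one of several reasonable comparison metrics):

```python
# Hypothetical sketch: a brand's 5-word -ity signature as a vector, so two
# vibes can be compared numerically. Axes and scores are invented.
import math

ITY_AXES = ["vitality", "authenticity", "curiosity", "serenity", "audacity"]

brand_a = {"vitality": 0.9, "authenticity": 0.7, "curiosity": 0.8,
           "serenity": 0.2, "audacity": 0.6}
brand_b = {"vitality": 0.3, "authenticity": 0.9, "curiosity": 0.4,
           "serenity": 0.8, "audacity": 0.1}

def vibe_similarity(a, b):
    """Cosine similarity between two -ity coordinate vectors."""
    va = [a[k] for k in ITY_AXES]
    vb = [b[k] for k in ITY_AXES]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norm

print(round(vibe_similarity(brand_a, brand_b), 2))  # ~0.72: related but distinct vibes
```

This is the numeric counterpart of laying two radar charts over each other: a similarity near 1 means crowded positioning, and a lower score is the "white space" the Week 3 assignment asks pods to find.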

🎯Learning Objectives
  1. Apply the -ity ontology to real brands and products — select 5 core coordinates that define a brand's semantic signature
  2. Build "vibe coordinates" as graph properties that can be queried, matched, and compared in Neo4j
  3. Connect emotional intelligence to social graph design — users have vibe profiles too
In-Class Exercise: Vibe Coding App · Live App
🌐 Vibe Coder — Brand -ity Mapper v1.0 · Trinity Vocab

Select a brand, choose exactly 5 -ity words that define its vibe, then map it to a radar chart. Compare two brands side-by-side.

🛠️Tools This Week
Midjourney Claude (semantic analysis) Google Trends API Neo4j (graph properties)
📋Assignment

Use the Vibe Coder to map your venture's 5 core -ity coordinates, then do the same for 2 direct competitors. Where is the white space? Where does your brand stand for something unique? Submit: (1) Three radar charts — yours vs. both competitors, (2) 1-page Brand Positioning Memo: "How does our -ity profile create differentiation that traditional positioning frameworks miss?" Use Claude to help you interpret the competitive gaps. Due: before Week 4 class.

Wk 4
Wed, March 18

Knowledge Representation & Ontologies

"Verified facts are the skeleton. Opinions are the flesh. Your job: tell the difference."
🔺IAM Concept: Knowledge Graph

The Knowledge Graph is the WHAT layer of the Trinity. Every node is a verified concept or entity. Every edge is an ontological relationship (IS_A, HAS_PROPERTY, CAUSES, ENABLES). Every property has a confidence score (0–1) and a source citation. This is what separates a knowledge graph from a list of facts: provenance, confidence, and structured relationships. The KG enables the system to say not just "this is true" but "here's how confident we are, and here's the receipts."

🎯Learning Objectives
  1. Design a domain ontology with proper taxonomy — identify entity types, relationship types, and property schemas
  2. Create 100+ RDF-style triples in Neo4j with confidence scoring and source citation on every triple
  3. Implement confidence scoring and source citation as first-class graph properties
In-Class Exercise: Ontology Speed-Build
  • Each pod gets 10 minutes to define 20 RDF-style triples for their venture domain
  • Each triple must specify: subject, predicate, object, confidence (0–1), and source URL
  • Share with class: "What categories of knowledge does your application actually need?"
  • Peer critique round: What's missing? What can't be verified? What confidence scores are too high?
// Knowledge graph node with provenance
CREATE (c:Concept {
  name: "Customer Churn",
  confidence: 0.92,
  source: "https://harvard.edu/study-2023",
  verified_by: "team:knowledge-lead",
  verified_at: "2026-02-01"
})
CREATE (c)-[:CAUSES {confidence: 0.78}]->(:Concept {name: "Revenue Loss"})
CREATE (c)-[:IS_A]->(:Concept {name: "Business Metric"})
CREATE (c)-[:HAS_PROPERTY]->(:Metric {name: "churn rate", unit: "%"})
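The speed-build rule that every triple must carry confidence and a source can be enforced mechanically. A hedged Python sketch (an assumption, not a course-supplied validator; the example triples are invented):

```python
# Sketch: RDF-style triples as dicts, rejecting any triple that lacks
# the first-class provenance properties described above.
def validate_triple(triple):
    """Valid only if subject/predicate/object plus confidence (0-1) and a source exist."""
    required = {"subject", "predicate", "object", "confidence", "source"}
    if not required <= triple.keys():
        return False
    return 0.0 <= triple["confidence"] <= 1.0 and bool(triple["source"])

good = {"subject": "Customer Churn", "predicate": "CAUSES",
        "object": "Revenue Loss", "confidence": 0.78,
        "source": "https://example.com/study"}
bad = {"subject": "Customer Churn", "predicate": "CAUSES",
       "object": "Revenue Loss"}  # no confidence, no source

print(validate_triple(good), validate_triple(bad))  # True False
```

Running every pod triple through a check like this before loading it into Neo4j is what keeps "a knowledge graph" from quietly degrading into "a list of facts."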
🛠️Tools This Week
Protégé (ontology editor) Apache Jena Neo4j Aura Perplexity (fact-checking)
📋Assignment

Complete a Knowledge Architecture Document (template on Brightspace) for your client's domain. Identify the 20 most important facts your application needs to reliably know. For each, answer: What is it? Where does it come from? How confident are you (1–10)? What breaks if it's wrong? Group them into 5 categories that match your domain. This is strategic information architecture — no code required. Submit: completed Knowledge Architecture Document + 1-page "What We Don't Know" risk analysis highlighting your top 3 uncertainty gaps. Due: before Week 5 class.

Wk 5
Mon, March 23

Verification & Ground Truth

"LLMs hallucinate. Knowledge graphs remember. Use both wisely."
🔺IAM Concept: Provenance

Provenance tracking is what separates aware AI from autocomplete. Every fact in your Knowledge Graph knows: where it came from, how confident we are, when it was last verified, and who verified it. Confidence degrades over time — a fact verified in 2020 has lower confidence today than one verified yesterday. This temporal dimension is what enables a system to say "I knew this, but I'm less sure now."

🎯Learning Objectives
  1. Build an automated fact-checking pipeline: KG query → LLM query → discrepancy detection
  2. Implement confidence degradation over time — write a function that adjusts confidence based on verification age
  3. Compare KG facts vs. LLM-generated "facts" across 10 domain questions — quantify the hallucination rate
In-Class Exercise: Hallucination Hunt
  • Each pod takes their 100-triple KG from Week 4 and formulates 10 domain-specific questions it should answer
  • Ask Claude or GPT-4 to answer the same 10 questions without access to the KG
  • Record: Where does the LLM agree? Where does it disagree? Where does it invent new "facts" not in the KG?
  • Discussion: What is the cost of hallucination in your specific domain? Healthcare? Finance? Legal? What's the risk profile?
// Confidence degradation query
MATCH (c:Concept)
WHERE c.verified_at < '2025-01-01'
SET c.effective_confidence = c.confidence * 0.85
RETURN c.name, c.confidence, c.effective_confidence
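The Cypher above applies a flat 0.85 penalty to stale facts. One smooth alternative, sketched in Python (this exponential half-life formula is our assumption, not the course's mandated decay function):

```python
# Illustrative confidence decay: halve confidence for every half-life
# elapsed since the fact was last verified.
from datetime import date

def effective_confidence(confidence, verified_on, today, half_life_days=365):
    """Exponential decay of confidence with verification age."""
    age_days = (today - verified_on).days
    return confidence * 0.5 ** (age_days / half_life_days)

# A fact verified one year ago at 0.92 confidence, with a 1-year half-life:
c = effective_confidence(0.92, date(2025, 1, 1), date(2026, 1, 1))
print(round(c, 3))  # one full half-life -> 0.46
```

This is the mechanism behind "I knew this, but I'm less sure now": the stored confidence never changes, only the effective confidence computed from its age.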
🛠️Tools This Week
Claude API OpenAI API Neo4j Python Driver Python (pandas, difflib)
📋Assignment

Conduct an AI Reliability Audit for your domain. Use Claude and GPT to answer 10 domain-specific questions, then cross-reference their answers against your Knowledge Architecture Document from Week 4. Where does AI confidently state something that contradicts what you know to be true? Submit: 2-page "AI Reliability Analysis" covering: (1) where each AI tool was wrong, (2) the business consequences if a real user trusted that wrong answer, (3) how a verified Knowledge Graph would have prevented each error. This is your strategic argument for why grounded AI matters in your domain. Due: before Week 6 class.

Wk 6
Wed, March 25

Knowledge Graph + Social Graph Integration

"Context changes everything. WHO is asking changes WHAT matters."
🔺IAM Concept: Cross-Graph Linking

Cross-graph entity linking is the mechanism that creates personalization. The same Concept node in the Knowledge Graph means different things to different Person nodes in the Social Graph. A CEO asking "What is our retention rate?" needs strategic context. A Customer Service rep asking the same question needs operational detail. Personalization emerges from the intersection of WHAT and WHO. This week we build that intersection.

🎯Learning Objectives
  1. Implement entity linking between Social and Knowledge graphs — INTERESTED_IN, EXPERTISE_IN, RESPONSIBLE_FOR edges
  2. Build context-aware retrieval: same query + different user_id = different result set
  3. Demonstrate live: personalization emerges from graph intersection, not from hard-coded rules
In-Class Exercise: "Same Question, Different Person"
  • Build a simple query function: input = (question, user_id); output = personalized fact set
  • user_id is looked up in Social Graph → returns context: role, expertise_level, interests, reporting_structure
  • Same question → different Cypher traversal based on user context → different top-k facts returned
  • LIVE DEMO: Ask "What is our retention rate?" as CEO vs. as Customer Service rep — show 3 distinct result sets
// Context-aware retrieval across both graphs
MATCH (u:Person {id: $user_id})-[:HAS_ROLE]->(r:Role)
MATCH (r)-[:NEEDS]->(domain:Domain)
MATCH (k:Concept)-[:BELONGS_TO]->(domain)
WHERE k.name CONTAINS $query_term
RETURN k.name, k.confidence, r.name AS user_role
ORDER BY k.confidence DESC
LIMIT 10
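Before wiring the Cypher into an API, the same logic can be rehearsed with an in-memory stand-in. A minimal sketch (assumed, not the course codebase; users, roles, and facts are invented):

```python
# Toy "same question, different person" retrieval: the user's Social-Graph
# context decides which Knowledge-Graph facts come back.
USERS = {
    "ceo-01": {"role": "CEO", "needs": "strategic"},
    "csr-07": {"role": "Customer Service", "needs": "operational"},
}

FACTS = [
    {"text": "Retention is 84%, down 3 pts YoY", "level": "strategic", "confidence": 0.9},
    {"text": "Churn concentrates in month 2 of onboarding", "level": "strategic", "confidence": 0.8},
    {"text": "Offer the loyalty discount script on cancel calls", "level": "operational", "confidence": 0.85},
]

def retrieve(question, user_id, k=2):
    """Top-k facts filtered by the asking user's context level."""
    level = USERS[user_id]["needs"]
    hits = [f for f in FACTS if f["level"] == level]
    return sorted(hits, key=lambda f: f["confidence"], reverse=True)[:k]

q = "What is our retention rate?"
print([f["text"] for f in retrieve(q, "ceo-01")])  # strategic facts
print([f["text"] for f in retrieve(q, "csr-07")])  # operational fact
```

The point of the demo is visible even in this toy: personalization falls out of the graph intersection (user → role → needed domain), with no per-user rules hard-coded anywhere.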
🛠️Tools This Week
Neo4j Aura FastAPI / Flask Claude (generation layer) Postman (API testing)
📋Assignment

Design a Persona Architecture for your application. Define 3 distinct user personas — different roles, expertise levels, goals, and fears. For each persona: write the 5 most important questions they'd ask your system and describe how the ideal answer would differ from what the other two personas should receive. Then record a 2-minute screen-capture demo using your AI tool showing how context changes the response to the same question. Submit: Persona Architecture Document + demo video link. Due: before Week 7 class.

Wk 7
Mon, March 30

★ Midterm — Knowledge Graphs, Ontology & Dual-Track Review

"Halfway there. Show us what you've built. Then we add the final dimension."
🔺IAM Concept: Generative Graph

The Generative Graph is the WHAT IF layer. RAG (Retrieval-Augmented Generation) uses your Trinity Graph as the retrieval layer: user query → semantic search in Knowledge Graph → top-k relevant facts → fed to LLM as context → grounded, personalized response. The generation is not just using an LLM — it's using your LLM, trained on your data, responding to your users, grounded by your facts.
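The retrieval step of that pipeline can be sketched in a few lines. This is a deliberately toy version: word-overlap scoring stands in for the real embeddings (the course uses OpenAI embeddings with a Neo4j vector index), and the facts are invented:

```python
# Toy RAG retrieval: rank facts by similarity to the query, then join the
# top-k into a grounding context block for the LLM prompt.
def score(query, fact):
    """Jaccard word overlap as a crude stand-in for embedding similarity."""
    q, f = set(query.lower().split()), set(fact.lower().split())
    return len(q & f) / len(q | f)

def rag_context(query, facts, k=2):
    """Top-k facts by similarity, joined into one context string."""
    ranked = sorted(facts, key=lambda f: score(query, f), reverse=True)
    return "\n".join(ranked[:k])

facts = [
    "churn rate rose to 6% in Q3",
    "retention rate is 84% across all cohorts",
    "the brand vibe centers on authenticity",
]
ctx = rag_context("what is our retention rate", facts)
print(ctx)  # the retention fact ranks first
```

The returned block is prepended to the LLM prompt, so generation is constrained by retrieved, verified facts rather than by whatever the model happens to remember.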

🎯Learning Objectives
  1. Implement basic RAG pipeline using Neo4j as vector store — embed KG nodes, enable semantic search
  2. Present both client and venture projects (15 min each) — live demo of working Social + Knowledge integration
  3. Demonstrate Social + Knowledge integration — show personalized retrieval working live for the class
In-Class Exercise: Pitch Timer · Midterm
⏱️ Trinity Pitch Timer v1.0 · Midterm Review
📊Midterm Deliverables
  • 15-minute presentation: Client project status + venture concept + live demo of working Social + Knowledge integration
  • Architecture diagram showing all 3 graph layers — even if Generative is still a placeholder/stub
  • 1-page Business Model Canvas for your venture (Strategyzer format)
  • Neo4j export showing social + knowledge graph with cross-graph entity links
🛠️Tools This Week
LangChain LlamaIndex OpenAI Embeddings Neo4j Vector Index
📋Post-Midterm Assignment

Prepare your Midterm Package for both tracks: (1) Architecture Visual — a single-page diagram showing how all three graph layers connect in your application (Figma, Miro, or Canva — no code required), (2) Business Model Canvas for your venture (Strategyzer template, completed in full), (3) Client Status Memo — 1-page update to your client summarizing what you've learned, what you've built, and what comes next. Submit all three by midnight before midterm class. The 15-minute in-class presentation walks through these artifacts live with your client stakeholders in the room.

Wk 8
Wed, April 1

Agentic Workflows & Multi-Graph Fusion

"Single queries are questions. Agents are reasoning."
🔺IAM Concept: Trinity Convergence

Trinity Convergence is the Convergence Layer: agents that query all three graphs in sequence. Social first (WHO is asking?) → Knowledge second (WHAT do we know relevant to them?) → Generative third (WHAT SHOULD WE SAY, given who they are and what we know?). This is the architecture of a system that doesn't just answer — it reasons. Every output is triple-grounded: in identity, in knowledge, in context.
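The three-step sequence reads naturally as three functions composed into one pipeline. A hedged sketch (each graph is stubbed with invented data; in the real build these become Neo4j queries and an LLM call):

```python
# Sketch of the Social -> Knowledge -> Generative convergence pipeline,
# with each layer stubbed as a function over invented data.
def social_lookup(user_id):
    """Step 1 (WHO): resolve the asking user's context."""
    return {"ceo-01": {"role": "CEO"}, "csr-07": {"role": "Support"}}[user_id]

def knowledge_lookup(question, who):
    """Step 2 (WHAT): retrieve facts relevant to this user's role."""
    facts_by_role = {
        "CEO": ["retention is 84%, down 3 pts YoY"],
        "Support": ["use the save-offer script on cancel calls"],
    }
    return facts_by_role[who["role"]]

def generate(question, who, facts):
    """Step 3 (WHAT IF): stand-in for the grounded LLM call."""
    return f"For a {who['role']}: " + "; ".join(facts)

def trinity_answer(question, user_id):
    who = social_lookup(user_id)              # Social Graph
    facts = knowledge_lookup(question, who)   # Knowledge Graph
    return generate(question, who, facts)     # Generative layer

print(trinity_answer("How is retention?", "ceo-01"))
```

Swapping the stubs for real graph queries is the week's implementation work; the shape of the pipeline stays exactly this.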

🎯Learning Objectives
  1. Build a 3-step reasoning agent using the Trinity architecture: Social → Knowledge → Generative pipeline
  2. Implement the full Social→Knowledge→Generative pipeline with real data and live queries
  3. Measure "awareness score" — quantify how much user context changes the output across 10 test cases
In-Class Exercise: Agent Design Sprint
  • Each pod designs their agent's decision tree on paper first — what does the agent check at each step?
  • Implement in CrewAI or LangGraph: Agent must (1) identify user from Social Graph, (2) retrieve relevant facts from KG, (3) generate personalized response
  • LIVE DEMO: Run same agent with 3 different users — show 3 different outputs and explain why they differ
  • Measure: what % of the output changed based on user context? Is it meaningful change or just style?
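One hedged way to put a number on that last bullet (this definition of "awareness score" is our own illustration, not an official course metric): compare the outputs produced for two different users and report how much of the text differs.

```python
# Awareness score sketch: 1 minus the textual similarity of two outputs.
# 0.0 means user context changed nothing; higher means more personalization.
import difflib

def awareness_score(output_a, output_b):
    """Fraction of the output that differs between two user contexts."""
    sim = difflib.SequenceMatcher(None, output_a, output_b).ratio()
    return 1.0 - sim

generic = "Retention is 84 percent."
for_ceo = "Retention is 84 percent, down 3 points year over year."

print(round(awareness_score(generic, generic), 2))  # identical -> 0.0
print(round(awareness_score(generic, for_ceo), 2))  # context changed the output
```

A raw character diff can't distinguish meaningful change from stylistic change, which is exactly the follow-up question the exercise asks pods to debate.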
🛠️Tools This Week
CrewAI LangGraph ReAct pattern Neo4j Aura Railway / Render (deploy)
📋Assignment

Write an Agent Logic Map — a visual flowchart showing your AI agent's decision process for 5 real user scenarios. For each scenario trace: (1) What Social context does the agent use? (2) What verified facts does it retrieve? (3) What does it generate — and why is this different from a generic AI answer? Submit: Agent Logic Map (visual, Miro or Figma) + 2-page "Why Context Changes Everything" analysis demonstrating how the same question produces meaningfully different outputs for different users. Your AI tools should assist with both the mapping and the writing. Due: before Week 9 class.

Wk 9
Mon, April 6

The -ity Ontology in Practice

"Language is architecture. The words you choose become the structure you build."
🔺IAM Concept: Semantic Bridges

The 225+ -ity terms are not decorative vocabulary — they are semantic bridges between graphs. Authenticity links the Social Graph (real identity, verified persona) to the Knowledge Graph (verified facts, confirmed sources) to the Generative Graph (genuine creative output that matches both). Every -ity word represents a cross-graph relationship type. Building with -ity vocabulary means building a system that has emotional and conceptual coherence across all three layers.

🎯Learning Objectives
  1. Map your venture's core -ity vocabulary to all 3 graphs — which words live primarily in Social, Knowledge, or Generative?
  2. Build "awareness metrics" using -ity coordinates — quantify how "aligned" your system is with its intended vibe
  3. Design the semantic bridge layer specific to your domain — the cross-graph relationship schema
In-Class Exercise: -ity Architecture Workshop
  • Each pod picks their 10 most important -ity words for their specific domain
  • Draw the cross-graph map: which -ity words appear in Social? Knowledge? Generative? All three?
  • Build Cypher queries that traverse all 3 graphs using -ity as the bridge relationship
  • Present: "Here are our 3 most powerful -ity bridges and what they unlock in our system"
// -ity as cross-graph bridge
MATCH (u:User)-[:HAS_ITY {type: "curiosity"}]->(k:Concept)
MATCH (k)-[:GENERATES]->(c:Content)
RETURN u.name, k.name, c.title, c.type

// Map vibe profile across graphs
MATCH (u:User)-[r:HAS_ITY]->(ity:ItyNode)
RETURN u.name, collect(ity.name) AS vibe_profile
📋Assignment

Build your Semantic Architecture Document — your venture's conceptual vocabulary. Define your 15 most important -ity words (use the Trinity Vocabulary reference and Vibe Coder for inspiration). For each word document: which graph layer it primarily lives in, what behavior or feature it enables, how you would measure whether it's working, and what breaks when it's absent. Use Claude to stress-test each one. This document becomes the conceptual backbone of your investor pitch. Submit: completed Semantic Architecture Document. Due: before Week 10 class.

Wk 10
Wed, April 8

Business Model Design

"The graph is the moat. Every new user makes it smarter. Every new fact makes it more defensible."
🔺IAM Concept: Data Moats

The Trinity Graph creates data network effects — the most powerful and defensible moat in software. More users → richer Social Graph → better personalization → better experience → more users. More facts verified → higher knowledge confidence → better grounding → fewer hallucinations → more trust → more usage → more facts. And crucially: the graph compounds. A competitor starting today is not just behind on features — they are behind on years of relationships, verifications, and connections they can never fully replicate.

🎯Learning Objectives
  1. Design a revenue model that leverages the Trinity Graph's compounding nature — show how monetization scales with graph density
  2. Build unit economics model: LTV, CAC, payback period, LTV/CAC ratio — grounded in real assumptions
  3. Identify what makes your graph defensible: data moats, switching costs, network effects, proprietary knowledge
In-Class Exercise: Business Model Canvas + Unit Economics Sprint
  • Each pod completes a full Strategyzer Business Model Canvas in 20 minutes — all 9 blocks, no gaps
  • Build a simple 12-month model: if you acquire 100 users in month 1, what does month 12 look like for revenue, graph size, and graph quality?
  • Present to class: "What is our data moat? What happens to our graph after 1 year vs. 3 years vs. 5 years?"
  • Peer question: "What would it cost a well-funded competitor to replicate your graph in 6 months?"
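The unit-economics vocabulary in this week's objectives (LTV, CAC, payback) reduces to a few standard SaaS formulas. A minimal sketch with placeholder numbers (invented for illustration, not course-provided figures):

```python
# Unit economics sketch using standard SaaS formulas.
def ltv(arpu_monthly, gross_margin, monthly_churn):
    """Lifetime value: margin-adjusted monthly revenue over expected lifetime."""
    return arpu_monthly * gross_margin / monthly_churn

def payback_months(cac, arpu_monthly, gross_margin):
    """Months of margin-adjusted revenue needed to recover acquisition cost."""
    return cac / (arpu_monthly * gross_margin)

# Placeholder assumptions: $50 ARPU, 80% margin, 4% monthly churn, $400 CAC
arpu, margin, churn, cac = 50.0, 0.8, 0.04, 400.0
print(f"LTV = ${ltv(arpu, margin, churn):.0f}")              # $1000
print(f"LTV/CAC = {ltv(arpu, margin, churn) / cac:.1f}")     # 2.5
print(f"Payback = {payback_months(cac, arpu, margin):.0f} months")  # 10
```

The Trinity-specific twist, which no static formula captures, is that churn should fall as the graph gets denser, so the LTV input improves with graph age; that dependence is what the "Graph Compounding" section of the financial model is meant to make explicit.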
🛠️Tools This Week
Strategyzer BMC Google Sheets Cursor AI (financial modeling) Airtable
📋Assignment

Build your Trinity Financial Model in Google Sheets (template on Brightspace). Include: 3-year P&L projection, unit economics (CAC, LTV, payback period), and a "Graph Compounding" section — a simple visualization showing how your system gets smarter and more defensible as users and data accumulate, and how that translates to revenue. Include base / bull / bear scenarios. Use Claude or Cursor to help build and check your formulas. Submit: Google Sheets link + 1-page narrative explaining your key assumptions and what would need to be true for the bull case to materialize. Due: before Week 11 class.

Week 11
Mon, April 13

Scaling & Performance

"Beautiful architecture that can't handle load is a prototype, not a product."
🔺IAM Concept — Production Architecture

Production Trinity Architecture requires three layers of optimization: caching strategies for repeated graph queries, async processing for expensive operations (embedding generation, large KG traversals), and graph partitioning for very large social graphs. The target is <200ms end-to-end response time for most queries. Awareness at scale means the system is still context-sensitive at 10,000 simultaneous users — not just at 10.
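One way to prototype the caching layer before reaching for Redis is a small in-memory TTL cache with hit-rate tracking. Class and method names here are illustrative, not part of the course stack:

```python
# Sketch of an in-memory TTL cache for repeated graph queries, with the
# hit-rate measurement the Week 11 exercise asks for. In production you
# would likely swap the dict for Redis with EXPIRE-based TTLs.
import time

class QueryCache:
    def __init__(self, ttl_seconds=60):
        self.ttl, self.store = ttl_seconds, {}
        self.hits = self.misses = 0

    def get_or_compute(self, key, compute_fn):
        entry = self.store.get(key)
        if entry and time.time() - entry[1] < self.ttl:
            self.hits += 1              # fresh cached result
            return entry[0]
        self.misses += 1                # absent or expired: recompute
        value = compute_fn()
        self.store[key] = (value, time.time())
        return value

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = QueryCache(ttl_seconds=30)
slow_query = lambda: ["Alex", "Jordan"]   # stand-in for a Neo4j round trip
cache.get_or_compute("friends:u001", slow_query)  # miss -> runs the query
cache.get_or_compute("friends:u001", slow_query)  # hit  -> served from memory
print(f"hit rate: {cache.hit_rate():.0%}")
```

The design trade-off to discuss in class: a longer TTL raises the hit rate but serves staler graph data, which matters more for fast-moving Social Graph edges than for verified Knowledge Graph facts.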

🎯Learning Objectives
  1. Implement a caching layer (Redis or in-memory) for repeated graph queries — measure cache hit rate
  2. Optimize Neo4j queries using EXPLAIN/PROFILE — add indexes, rewrite traversals, eliminate Cartesian products
  3. Design async processing for expensive operations — what can be pre-computed, what must be real-time?
In-Class Exercise: Performance Optimization Sprint
  • Each pod runs EXPLAIN on their top 5 most frequent queries — identify DB Hits count before optimization
  • Identify the slowest part of their Trinity pipeline (embedding? graph traversal? LLM call? API serialization?)
  • Apply one optimization: add index, rewrite Cypher, add cache layer, or pre-compute embeddings
  • Before/after: measure and present the query time improvement — what was the DB Hits reduction?
// Add indexes for performance
CREATE INDEX person_id IF NOT EXISTS FOR (p:Person) ON (p.id);
CREATE INDEX concept_name IF NOT EXISTS FOR (c:Concept) ON (c.name);

// Profile a query
PROFILE MATCH (u:Person)-[:KNOWS*1..2]->(p:Person)
WHERE u.id = 'user-001'
RETURN p.name, p.role
🛠️Tools This Week
Neo4j EXPLAIN/PROFILE · Redis (caching) · k6 (load testing) · Grafana (monitoring)
📋Assignment

Write a Technical Partner Brief — a 3-page document designed for a prospective CTO or technical co-founder. Explain: (1) what your system does and how the three graph layers work together, (2) where the current performance constraints are and how you'd prioritize addressing them, (3) what infrastructure investment is required to scale from 100 to 10,000 users. This is a communication and prioritization exercise — translate what you've built into strategic language for a technical audience. Use Claude to help you think through the trade-offs clearly. Submit: Technical Partner Brief. Due: before Week 12 class.

Week 12
Wed, April 15

Advanced Trinity Patterns

Where convergence gets weird — and wonderful.

🔺IAM Concept

Emergent awareness — when all three graphs are properly integrated, the system exhibits behaviors that none of the individual graphs could produce alone. Digital twins mirror real-world entities in the KG. The -pathy layer adds emotional/health dimensions. And when two Trinity Graphs talk to each other — cross-domain synthesis emerges.

🎯Learning Objectives
  1. Implement the Digital Twin pattern — a KG node that mirrors a real-world entity and updates in real time
  2. Apply -pathy integration for health or emotional domains (empathy, synchrony, sensitivity)
  3. Design cross-domain synthesis — two Trinity Graphs exchanging information
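A minimal sketch of the Digital Twin pattern, assuming a simple event shape with a timestamp and a `changes` dict (both illustrative, not a course-specified schema):

```python
# Hedged Digital Twin sketch: a local object that mirrors a real-world
# entity and applies timestamped state updates. Entity fields and the
# event shape are invented for illustration.
from datetime import datetime, timezone

class DigitalTwin:
    def __init__(self, entity_id, state):
        self.entity_id = entity_id
        self.state = dict(state)
        self.last_updated = None
        self.history = []            # audit trail of every mirrored change

    def apply_event(self, event):
        """Mirror a real-world state change into the twin; reject stale events."""
        ts = event.get("timestamp") or datetime.now(timezone.utc).isoformat()
        # ISO-8601 strings in the same timezone compare chronologically
        if self.last_updated and ts < self.last_updated:
            return False             # out-of-order event: ignore
        self.state.update(event["changes"])
        self.last_updated = ts
        self.history.append(event)
        return True

venue = DigitalTwin("venue-001", {"capacity": 500, "occupancy": 0})
venue.apply_event({"timestamp": "2025-04-15T20:00:00+00:00",
                   "changes": {"occupancy": 342}})
print(venue.state["occupancy"])
```

In a full build, `apply_event` would also write the change back to the mirrored KG node (e.g., a Cypher `SET` on the matching `Concept` or `Entity`), keeping graph and twin in sync.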
In-Class Exercise: "What Emerges?"

Run your fully integrated Trinity system with 3 brand-new user scenarios you haven't tried before. Document:

  • What did the system do that you didn't explicitly program?
  • Where did it surprise you — in a good way? In a bad way?
  • What is the system "learning" over time as data accumulates?
  • Is this the beginning of awareness? What's missing?

Then: attempt a cross-domain synthesis — connect your venture's Trinity Graph to a classmate's. What new emergent behaviors appear when the two graphs share edges?
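The cross-domain step can be rehearsed on toy data before touching real systems: merge two pods' edge sets and count the concept pairs that only become reachable after the merge. The graph contents and bridge edge below are invented examples:

```python
# Toy cross-domain synthesis: which concept pairs become connected only
# once two pods' graphs share an edge? All graph data here is made up.
from collections import defaultdict, deque

def connected_pairs(edges):
    """All unordered node pairs joined by some path (BFS per component)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    pairs, seen = set(), set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                      # collect one connected component
            n = queue.popleft(); comp.append(n)
            for m in adj[n] - seen:
                seen.add(m); queue.append(m)
        pairs |= {frozenset((x, y)) for i, x in enumerate(comp) for y in comp[i + 1:]}
    return pairs

pod_a = {("Churn Rate", "LTV"), ("LTV", "Pricing")}
pod_b = {("Fan Engagement", "Merch Revenue")}
bridge = {("Pricing", "Fan Engagement")}  # the shared edge the merge creates

before = connected_pairs(pod_a) | connected_pairs(pod_b)
after = connected_pairs(pod_a | pod_b | bridge)
emergent = after - before
print(len(emergent), "new concept pairs become reachable")
```

A single shared edge makes every pair across the two components reachable at once, which is the graph-theoretic reason cross-domain synthesis feels disproportionately generative.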

🛠️Tools This Week
CrewAI (multi-agent) · LangGraph · Neo4j Federation · Kafka (event streaming)
📋Assignment — Final Integration Check

Run an Integration Review with at least 2 people outside your pod — ideally people from your target user group. Have them interact with your system and observe without coaching. Document everything: What confused them? What delighted them? What did they want that wasn't there? Submit: (1) 2-page User Feedback Report with direct observations and quotes, (2) Revised Demo Script for Demo Day based on what you learned, (3) Updated one-page system overview diagram. Due: before Week 13 class.

Week 13
Mon, April 20

Dual Delivery — Client Presentations

Serve first. Pitch second. The best founders do both beautifully.

🔺IAM Concept

Authenticity in practice — the difference between a system built FOR users (client project) and a system built TO SELL (venture pitch). Both require genuine Trinity Graph insight, but the frame shifts entirely. One asks "what do they need?" The other asks "why should they believe?"

🎯Learning Objectives
  1. Deliver a professional client presentation — non-technical audience, focus on value delivered
  2. Complete the full client handover package — system + docs + training plan
  3. Apply final polish to the investor pitch — tight narrative, live demo, defensible claims only
📅Week Structure
Mon–Wed: Client Presentations

30 min per pod: 20 min live demo + 10 min Q&A from client stakeholders. University partners attend in person.

Thu–Fri: Investor Pitch Rehearsal

Peer feedback sessions. Baxter Webb facilitates. Every pod presents, every pod critiques. Ruthless but constructive.

📦Client Deliverable Package
  • Working System — deployed URL or installable package, tested end-to-end
  • User Documentation — written for a non-technical audience; assume the reader is a client's first-year employee
  • Technical Documentation — schema, API endpoints, deployment instructions for future maintainers
  • Training Session Plan — 60-minute onboarding plan the client can run themselves
  • "What's Next" Roadmap — 3 features the client should build in the next 6 months, with rough effort estimates
Week 14
Wed, April 22

Demo Day

This is not a school project. This is a company. Show us.

🔺IAM Concept — The Omega Arc

Every great pitch follows the Omega Protocol arc: Grounding (the problem is real) → Awakening (something has changed that makes now the moment) → Friction (why hasn't this been solved?) → Connection (your Trinity Graph is the bridge) → Convergence (demo: all three graphs, live) → Omega (the vision, fully realized) → Return (the ask, grounded in reality).

🎯Learning Objectives
  1. Deliver a 15-minute investor pitch with a live Trinity Graph demo
  2. Show compounding intelligence — "here's what the system learned this semester"
  3. Defend your business model and data moat under live Q&A from external judges
🏆Demo Day Scoreboard — Live App
🎤Pitch Structure (15 Minutes)
  1. The Problem — 2 min. Show the gap. Make it visceral. Real story, real pain.
  2. The Solution — 3 min. Live Trinity Graph demo. All three layers. Real data.
  3. The Market — 2 min. TAM/SAM/SOM. Who pays, why, how much.
  4. The Moat — 2 min. Why your data graph is defensible. What happens after year 1 vs. year 3.
  5. Traction — 2 min. Client project results as proof. Real users, real feedback, real data.
  6. The Ask — 2 min. Specific amount, specific use of funds, specific milestones. Never vague.
📦Final Deliverables
  • GitHub Repo — public, clean README, deployed demo linked
  • Pitch Deck — max 12 slides, PDF submitted night before
  • Financial Model — 3-year P&L, unit economics (CAC, LTV, payback)
  • 1-Page Executive Summary — for judges to keep after Demo Day
  • Live Application — must stay live for 30 days post-Demo Day
Reference

Trinity Vocabulary — The -ity Ontology

225+ abstract nouns as semantic primitives, organized by graph layer.

Reference

Neo4j Cypher Cheat Sheet

The queries you'll use every week.

🔵Social Graph Queries
// Create a person node
CREATE (p:Person {id: "u001", name: "Alex Chen", role: "Student",
  vitality: 0.8, curiosity: 0.9, authenticity: 0.7})

// Create a relationship with weight
MATCH (a:Person {id:"u001"}), (b:Person {id:"u002"})
CREATE (a)-[:KNOWS {strength: 0.85, context: "class", since: date()}]->(b)

// Find bridge nodes (high betweenness centrality proxy)
MATCH (p:Person)-[:KNOWS]->(q:Person)-[:KNOWS]->(r:Person)
WHERE NOT (p)-[:KNOWS]->(r) AND p <> r
RETURN q.name AS bridge, count(*) AS connections
ORDER BY connections DESC LIMIT 10

// Shortest path between two people
MATCH path = shortestPath(
  (a:Person {name:"Alex"})-[:KNOWS*..6]-(b:Person {name:"Jordan"})
) RETURN path
🟢Knowledge Graph Queries
// Create a verified concept
CREATE (c:Concept {name: "Churn Rate", domain: "SaaS",
  definition: "% customers lost in a period",
  confidence: 0.98, source: "https://a16z.com/...",
  verified_at: datetime()})

// Link concepts with typed relationships
MATCH (a:Concept {name:"Churn Rate"}), (b:Concept {name:"LTV"})
CREATE (a)-[:INVERSELY_AFFECTS {weight: 0.9, citation: "Gupta 2004"}]->(b)

// Fact-check: find low-confidence claims
MATCH (c:Concept) WHERE c.confidence < 0.7
RETURN c.name, c.confidence, c.source ORDER BY c.confidence ASC

// Cross-graph: concepts relevant to a specific user role
MATCH (u:Person {role: "CEO"})-[:CARES_ABOUT]->(topic:Topic)
MATCH (c:Concept)-[:COVERS]->(topic)
WHERE c.confidence > 0.85
RETURN c.name, c.definition ORDER BY c.confidence DESC
🟡Generative Graph Queries
// Log a generated response
CREATE (g:GeneratedContent {
  id: randomUUID(), query: "What is our churn?",
  user_id: "u001", model: "claude-3-5",
  response: "Based on your 847 customers...",
  ity_coords: ["clarity:0.9","authenticity:0.85"],
  created_at: datetime()})

// Trinity convergence query: full pipeline
MATCH (u:Person {id: $userId})
MATCH (u)-[:CARES_ABOUT]->(t:Topic)
MATCH (c:Concept)-[:COVERS]->(t) WHERE c.confidence > 0.8
OPTIONAL MATCH (u)-[:PREVIOUSLY_ASKED]->(g:GeneratedContent)
RETURN u.name, u.vitality, u.curiosity,
       collect(c.name)[..5] AS top_facts,
       count(g) AS interaction_count

// Find concepts that bridge social + knowledge graphs
MATCH (u:Person)-[k:KNOWS]->(v:Person) WHERE k.strength > 0.7
MATCH (c:Concept) WHERE c.name IN u.ity_interests AND c.name IN v.ity_interests
RETURN c.name AS shared_interest, count(*) AS pair_count
ORDER BY pair_count DESC
Reference

Tool Comparison Matrix

Choosing the right tool for each layer of your Trinity Graph.

| Layer | Tool | Best For | Free Tier | Course Recommendation |
|---|---|---|---|---|
| Social Graph | Neo4j Aura | Property graphs, Cypher queries | ✅ 1GB free | ⭐ Primary |
| Social Graph | Gephi | Visualization, layout algorithms | ✅ Open source | Visualization only |
| Knowledge Graph | Neo4j + Ontology | Unified graph (recommended) | | ⭐ Primary |
| Knowledge Graph | Apache Jena Fuseki | RDF triples, SPARQL | ✅ Open source | Week 4 exploration |
| Generative AI | LangChain | RAG pipelines, chain composition | | ⭐ Primary |
| Generative AI | CrewAI | Multi-agent orchestration | | Week 8+ |
| Generative AI | LlamaIndex | Document ingestion, indexing | | Alternative to LangChain |
| LLM | Claude (Anthropic) | Analysis, long context, reasoning | ✅ Credits provided | ⭐ Primary |
| LLM | GPT-4o (OpenAI) | Code generation, multimodal | Limited free | Complement to Claude |
| Dev Tools | Cursor AI | AI-assisted coding, architecture | Limited free / $20/mo | ⭐ Strongly Recommended |
| Dev Tools | Railway / Render | Fast deployment, free hosting | | Deploy from Week 8 |
References

Course References

Required texts, supplementary readings, and module-by-module research papers.

📗Required Texts
📘
The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life
Oliver Luckett & Michael J. Casey — Grand Central Publishing, 2016
The foundational text for this course. Argues that social media networks behave like living biological organisms — evolving, mutating, spreading, dying — and that understanding this organic logic is the key to effective engagement at scale. Introduces the Social Graph as the substrate of all human digital behavior. Required reading: Ch. 1 (Week 1), Ch. 3–4 (Week 2), Ch. 7 (Week 9).
PRIMARY TEXT
📙
Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World
Marco Iansiti & Karim R. Lakhani — Harvard Business Review Press, 2020
Reframes the firm as an "AI factory" — a system that converts data into automated decisions at scale. Essential context for understanding why Trinity Graph applications have structural competitive advantages: they are AI factories that compound intelligence over time. Directly informs Week 10 business model discussions on data moats and network effects.
SUPPLEMENTARY — WEEKS 10–12
📕
Disciplined Entrepreneurship: 24 Steps to a Successful Startup (2nd Edition)
Bill Aulet — Wiley, 2024
The operational backbone for the venture track. Aulet's 24-step framework provides the scaffolding for customer discovery, market sizing, business model design, and go-to-market strategy. Students apply each step to their Trinity Graph application. Particularly relevant for Weeks 2 (customer discovery), 6 (beachhead market), and 10 (business model canvas).
SUPPLEMENTARY — VENTURE TRACK
🗂️
Course Case Pack
Compiled by Oliver Luckett — Available via Brightspace
Includes weekly readings, Trinity Graph technical documentation, API access instructions, and credits for generative and agentic AI resources (Claude, GPT-4o, Midjourney). Also includes the Inkwell IAM specification documents, -ity ontology reference (225+ terms), and S.A.V.E.S.U.C.C.E.S.S. framework guide. Updated weekly.
REQUIRED — ALL WEEKS
🔬Research Papers by Module
MODULE 1 — SOCIAL GRAPH FOUNDATIONS (WEEKS 1–3)
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393(6684), 440–442. doi:10.1038/30918
The foundational small-world network paper. Establishes why social graphs cluster locally but connect globally — the mathematical basis for viral spread in the Social Organism model.
Barabási, A. L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509–512. doi:10.1126/science.286.5439.509
Introduces scale-free networks and preferential attachment. Explains why social graphs are not random — popular nodes get more popular (the "rich get richer" structure behind follower counts and influence).
Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380. doi:10.1086/225469
Landmark paper proving that weak ties (acquaintances, not close friends) are the primary channel for novel information and job opportunities. Directly maps to the KNOWS edge weight properties in Week 2 graph design.
Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In Theories of Emotion (pp. 3–33). Academic Press.
The Wheel of Emotions — the psychological taxonomy underlying the -pathy layer. Used in Week 3 to map emotional properties onto social graph nodes as -ity coordinates.
MODULE 2 — KNOWLEDGE GRAPH FOUNDATIONS (WEEKS 4–6)
Ehrlinger, L., & Wöß, W. (2016). Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48(1–4), 2.
Establishes the formal definition of knowledge graphs as used throughout this course. Distinguishes between KGs (structured, verified, typed) and LLM "knowledge" (probabilistic, unverified, stateless).
Hogan, A., Blomqvist, E., Cochez, M., et al. (2021). Knowledge graphs. ACM Computing Surveys, 54(4), 1–37. doi:10.1145/3447772
The most comprehensive survey of knowledge graph techniques. Covers ontology design, entity linking, embedding, querying, and quality. The technical reference for the entire Module 2.
Singhal, A. (2012). Introducing the Knowledge Graph: Things, not strings. Google Official Blog. blog.google
Google's introduction of the Knowledge Graph. The "things not strings" framing is foundational to understanding why typed nodes (Concept, Entity, Person) are more powerful than keyword-indexed text.
Ji, S., Pan, S., Cambria, E., Marttinen, P., & Yu, P. S. (2021). A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33(2), 494–514. doi:10.1109/TNNLS.2021.3070843
Covers knowledge representation, embedding methods (TransE, RotatE), and reasoning over KGs. Directly relevant to Week 6's entity linking and cross-graph integration.
MODULE 3 — GENERATIVE AI & TRINITY CONVERGENCE (WEEKS 7–9)
Lewis, P., Perez, E., Piktus, A., et al. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. arxiv:2005.11401
The original RAG paper from Meta AI. Foundational for Week 7 — explains why grounding LLM generation in a retrieval corpus (your Knowledge Graph) reduces hallucination and improves factual accuracy.
Wei, J., Wang, X., Schuurmans, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824–24837. arxiv:2201.11903
Introduces chain-of-thought prompting — the technique underlying the Trinity Agent's step-by-step reasoning pipeline (Social context → Knowledge retrieval → Generative synthesis). Essential for Week 8 agent design.
Yao, S., Zhao, J., Yu, D., et al. (2022). ReAct: Synergizing reasoning and acting in language models. International Conference on Learning Representations (ICLR). arxiv:2210.03629
The ReAct (Reason + Act) paper — the pattern used in Week 8 agent workflows. Shows how LLMs can interleave reasoning traces with external tool calls (e.g., Neo4j queries), creating more reliable and interpretable agents.
Minsky, M. (1986). The Society of Mind. Simon & Schuster.
Minsky's theory that intelligence emerges from the interaction of many small, simple agents. The philosophical ancestor of the Trinity Graph's convergence architecture — and the conceptual bridge to the -ity ontology as semantic agents.
MODULE 4 — BUSINESS MODEL & ADVANCED APPLICATIONS (WEEKS 10–12)
Parker, G., Van Alstyne, M., & Choudary, S. P. (2016). Platform Revolution: How Networked Markets Are Transforming the Economy. W. W. Norton & Company.
Explains platform economics, network effects, and why data-generating platforms compound in value over time. Week 10's "graph as moat" argument is grounded in Parker et al.'s network effect taxonomy.
Andreessen, M. (2011). Why software is eating the world. The Wall Street Journal. wsj.com
The foundational argument that every industry will eventually be dominated by software companies. Updated context for 2025: "AI is eating software" — and Trinity Graph applications are the next layer of this trend.
Grigorescu, S., Trasnea, B., Cocias, T., & Macesanu, G. (2020). A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37(3), 362–386.
Used as a case study for Digital Twin architecture in Week 12 — specifically how real-time KG mirroring of physical systems enables predictive intelligence that static databases cannot.
Luckett, O. (2025). Words Matter: The -ity Ontology as Operating Vocabulary for Convergent Intelligence. Inkwell Labs White Paper.
Oliver's foundational white paper introducing the 225+ -ity term framework as semantic primitives for AI awareness systems. The theoretical backbone of the entire -ity vocabulary used throughout this course. Available in Course Case Pack.
MODULE 5 — LAUNCH (WEEKS 13–14)
Ries, E. (2011). The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.
The build-measure-learn loop is directly applicable to Trinity Graph iteration. Week 13 client delivery embodies the "validated learning" principle — the client project is your minimum viable product, proven in the real world.
Graham, P. (2012). Startup = growth. paulgraham.com. paulgraham.com
Graham's essential definition of a startup as "a company designed to grow fast." Context for Demo Day: judges are evaluating whether your Trinity Graph application has the structural capacity for exponential growth, not just linear improvement.
Thiel, P., & Masters, B. (2014). Zero to One: Notes on Startups, or How to Build the Future. Crown Business.
"Every moment in business happens only once." The philosophical complement to Aulet's operational framework — Thiel's question "what important truth do very few people agree with you on?" is the right framing for your Demo Day pitch opening.
CITATION FORMAT

All written work should use APA 7th Edition citation format. For AI-generated content used in your deliverables, include a disclosure statement: "[Tool name] was used to [specific task]. All outputs were reviewed, verified, and edited by the student authors." Academic integrity policy applies fully to AI-assisted work — the ideas must be yours, even when AI assists with execution.