A System for Delegation, Execution, and Verified Outcomes.
Preamble
This document is the canonical reference for the Alissa System. It defines what Alissa is, what it is not, and why those distinctions matter. Every design decision, every product choice, every line of code written for this project should trace back to the ideas in this document.
This is not a feature specification. It is not a product requirements document. It is a doctrine — a set of beliefs about how work actually functions in the real world, and the system design consequences of taking those beliefs seriously.
If you are an engineer, this document tells you why the system is shaped the way it is, so you never have to guess at the intent behind an abstraction. If you are a product manager, it tells you which features to build and, more importantly, which to refuse. If you are a designer, it tells you what the user is actually doing when they interact with Alissa — not clicking buttons, but making and honoring commitments. If you are an operator or investor, it tells you what makes this system fundamentally different from the hundreds of productivity tools that came before it.
Read this before you build anything.
I. The problem we are trying to solve
There is no shortage of tools that help people "get work done." To-do apps, project management platforms, Kanban boards, AI assistants — the market is saturated. And yet, anyone who has managed a team, run a household, or coordinated across more than two people knows the same truth:
Work still falls through the cracks.
Not because people are lazy. Not because the tools are ugly. But because every tool in this space makes the same structural mistake: it optimizes for capturing activity rather than ensuring outcomes.
Consider what these tools actually do:
To-do apps capture intentions. You write down "buy groceries" or "review the contract." The app stores your intention. It does not care whether the groceries were bought or the contract was reviewed. It has no opinion on the matter. You check the box, and the app believes you.
Project management tools track coordination. They let you organize cards into columns, assign names to rows, draw dependency arrows. But they are fundamentally passive. They record what you tell them. If you say a task is done, it's done. If you forget to update the board, the board lies.
AI tools execute isolated actions. They can draft an email, summarize a document, generate code. But they operate in a vacuum. They have no concept of who asked for the work, whether anyone accepted responsibility for it, or how the output connects to a larger objective. They are powerful hands attached to no body.
Each of these categories solves a real sub-problem. But none of them unify the five things that actually determine whether work gets completed in the real world:
Delegation — the act of asking someone to do something
Ownership — the clear, singular accountability for that work
Execution — the actual doing
Validation — the proof that it was done correctly
Visibility — the ability for stakeholders to see reality, not reports
Because no tool unifies these, organizations and individuals experience the same failure modes over and over:
Tasks are created but never completed, because creation is confused with commitment.
Work is assigned but never truly accepted, because assignment is confused with delegation.
Outcomes are claimed but never verified, because marking a checkbox is confused with proving a result.
Stakeholders are "kept in the loop" but not grounded in reality, because status updates are confused with observability.
Alissa exists to fix this — not by adding yet another feature to the productivity stack, but by rethinking the foundation.
Alissa is not a task manager. It is a system of record for commitments and outcomes.
That sentence is not marketing. It is a design constraint. Every time someone on this team proposes a feature, a flow, or a data model, it must be tested against that statement. If it makes Alissa a better task manager, that is not sufficient justification. If it makes Alissa a more reliable system of record for commitments and outcomes, it belongs.
II. The Core Thesis
Work is a commitment to produce a verifiable outcome, owned by an accountable actor, and executed within a shared system of visibility and coordination.
Read that again slowly. Every word carries weight.
Commitment: not an intention, not a wish, not a note to self. A commitment is a promise made by someone who has explicitly accepted responsibility. Until that acceptance happens, there is no work — there is only a request.
Verifiable outcome: not "I worked on it" or "I think it's done." An outcome that can be checked against a predefined standard. Did the flight get booked? Is there a confirmation number? Does the code pass the tests? Verifiability is what separates a system of record from a system of opinions.
Owned by an accountable actor: not "the team" or "whoever gets to it." One entity — whether human or machine — holds responsibility. That entity can be praised when work succeeds and held accountable when it doesn't. Without singular ownership, there is no accountability, and without accountability, any system degenerates into a shared notepad.
Shared system of visibility and coordination: work does not happen in a black box. Its status, its progress, its blockers — all of this is structurally visible to the people and systems that need to see it. Not because someone writes a status update, but because the system itself surfaces reality.
This definition is intentionally narrow because it excludes the things that break real-world execution:
Vague intentions ("we should probably look into that")
Unowned responsibilities ("someone needs to handle this")
Unverified completion ("yeah, I took care of it")
If it doesn't have an owner, it isn't work. If it can't be verified, it isn't done. If it wasn't accepted, it was never truly delegated. These are not edge cases we handle gracefully — they are states the system refuses to represent.
This single, deliberately narrow definition sits at the center of every decision in Alissa.
III. The Non-Negotiable Principles
The following six principles are not product choices. They are not preferences. They are structural constraints that preserve the integrity of the system. Removing any one of them doesn't just change Alissa — it collapses Alissa into something that already exists and already fails.
Each principle is presented with its rationale and its consequences — what the system gains by enforcing it, and what the system becomes if it's removed.
Principle 1: Work Must Be Verifiable
A task is not complete because someone says it's complete. A task is complete when its Definition of Done is satisfied and its validation checks pass.
This is perhaps the most important single idea in the system, because it is the one most commonly violated by every other tool. In every other platform, "done" is a button. You click it, the task disappears, and the system assumes everything went well. There is no mechanism for the system to disagree with you.
Think about what this means in practice. A manager assigns "prepare the quarterly report." The assignee spends two hours on it, gets it to a state they personally consider "good enough," and marks it complete. The manager sees a green checkmark. But the report is missing three sections, uses last quarter's numbers, and hasn't been reviewed by finance. The system says it's done. Reality says otherwise.
Alissa eliminates this gap by requiring that every task define two things at creation:
A Definition of Done (DoD) — a human-readable description of what "finished" actually looks like. Not the activity, but the outcome. Not "work on the report" but "quarterly report with current figures, all five sections complete, reviewed by finance."
Validation checks — concrete, inspectable criteria that can be evaluated. These might be automated (a test suite passes, a file is attached, a field is populated) or human-verified (a reviewer signs off). The point is that they exist before the work begins, and they are checked before the work is accepted as complete.
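The pairing of a Definition of Done with validation checks can be sketched in code. This is a minimal illustration, not the real schema; names like `ValidationCheck` and `is_complete` are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationCheck:
    """A single inspectable criterion: automated or human-verified."""
    description: str
    automated: bool
    passed: bool = False  # flipped by a test run or a reviewer sign-off, never by the assignee

@dataclass
class Task:
    title: str
    definition_of_done: str  # human-readable outcome, fixed at creation
    checks: list[ValidationCheck] = field(default_factory=list)

    def is_complete(self) -> bool:
        # "Done" is computed from evidence, not declared by a click.
        return bool(self.checks) and all(c.passed for c in self.checks)

task = Task(
    title="Prepare the quarterly report",
    definition_of_done="All five sections complete, current figures, reviewed by finance",
    checks=[
        ValidationCheck("All five sections present", automated=True),
        ValidationCheck("Figures match current quarter", automated=True),
        ValidationCheck("Finance sign-off recorded", automated=False),
    ],
)
assert not task.is_complete()  # nothing has been verified yet
for c in task.checks:
    c.passed = True
assert task.is_complete()      # complete only when every check passes
```

The design point is that `is_complete` is derived from checks that existed before execution began; there is no code path by which an assignee asserts completion directly.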
Why this matters beyond the obvious. Verifiability is not just about catching mistakes. It's about building trust — between humans, between humans and agents, and between the system and its users. When a stakeholder looks at Alissa and sees that a task is complete, they know that "complete" means something. It has been checked. Standards were met. The system is not lying to them.
Without verifiability, you get a different kind of system. You get a system where "done" is a social performance. Where standards drift because each person's definition of "good enough" quietly diverges from everyone else's. Where trust gradually erodes because no one can be sure that a green checkmark reflects reality. The system becomes, at best, a shared checklist — and at worst, a shared fiction.
Principle 2: Ownership Must Be Singular
Every task has exactly one primary assignee. Not two. Not "the team." One.
This principle generates more pushback than any other, because modern work culture worships collaboration. Shared ownership feels democratic. It feels inclusive. It also feels safe — if three people own a task, no single person can be blamed when it fails.
That safety is exactly the problem.
Singular ownership is not about removing collaboration. Tasks in Alissa can have contributors, reviewers, and observers. The work itself can be deeply collaborative. But accountability — the answer to the question "whose job is it to make sure this gets done?" — must resolve to a single actor.
Consider a household analogy. A couple decides "we need to plan the vacation." If both of them "own" it, what happens? Each assumes the other is handling the flights. Neither books the hotel because both think the other one is looking into it. The departure date arrives and nothing is ready — not because anyone was negligent, but because shared ownership creates diffusion of responsibility. The same dynamic plays out with devastating regularity in workplaces: a task is assigned to a team, everyone assumes someone else is on it, and the task quietly dies.
If everyone owns it, no one owns it.
That's not a slogan. It's an observable law of organizational behavior, and Alissa encodes it at the structural level. The system does not permit a task to exist without exactly one primary assignee. This is not a default that can be overridden — it is a constraint enforced by the data model.
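A sketch of what "enforced by the data model" can mean in practice (illustrative names, not the actual schema): the task type simply has no field in which shared primary ownership could be expressed.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Actor:
    name: str
    kind: str  # "human" or "agent"

@dataclass
class Task:
    title: str
    owner: Actor  # exactly one primary assignee, required by construction
    contributors: list[Actor] = field(default_factory=list)  # collaboration remains possible
    # Note what is absent: there is no `owners: list[Actor]` field.
    # Shared primary ownership is not a default to override; it is unrepresentable.

alice = Actor("Alice", "human")
bot = Actor("ResearchBot", "agent")
task = Task("Plan the vacation", owner=alice, contributors=[bot])
assert task.owner is alice
```

Constructing a `Task` without an `owner` fails before the object ever exists, which is the structural analogue of "if everyone owns it, no one owns it."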
What this gives you. A clear escalation path. When a task is late, you know exactly who to talk to. When a task succeeds, you know who delivered. When priorities conflict, the owner is the one who makes the call. There is never ambiguity about who is steering.
What you lose without it. Tasks that linger indefinitely. Coordination overhead that scales quadratically with the number of "co-owners." Status meetings that exist solely because no one knows who's actually doing the work. And eventually, a team culture where accountability is something everyone agrees is important but no one actually practices.
Principle 3: Delegation Is a Lifecycle, Not an Action
Delegation is not "assigning a task." It is a structured sequence: Request → Acceptance → Commitment → Execution → Validation.
Most systems treat delegation as a single atomic action. You create a task, you put someone's name on it, and you're done. The system records that person as the "assignee" and moves on. But anyone who has ever managed real work knows that putting someone's name on a task and actually delegating work to them are two profoundly different things.
Real-world delegation is messy. It involves negotiation ("I can do this, but not by Friday — can we push to next week?"). It involves clarification ("When you say 'fix the bug,' do you mean the login issue or the payment flow?"). It involves rejection ("I don't have bandwidth for this right now"). None of these are failure states — they are the natural mechanics of how humans coordinate.
Alissa models delegation as a lifecycle because ignoring any stage creates a specific, predictable failure:
Skip the request stage, and you get tasks that appear out of nowhere in someone's queue. The assignee has no context, no negotiation, no opportunity to push back. They either silently accept (and may silently resent) or silently ignore (and the task dies without anyone noticing).
Skip the acceptance stage, and you get phantom ownership. The system says Alice owns the task. But Alice never agreed to it. She may not even know about it. The system is lying about accountability, and everyone downstream is making plans based on that lie.
Skip the validation stage, and you're back to the checkbox problem from Principle 1 — "done" means nothing.
The full lifecycle in Alissa looks like this:
Request — An actor proposes work to another actor. "Can you book the flights for the client trip?" This is intent, not commitment.
Acceptance — The receiving actor reviews the request and explicitly accepts, declines, or negotiates. "Yes, I can do this by Thursday" or "I need more information about the budget."
Commitment — Upon acceptance, a task is created. This is the moment where intent becomes obligation. The actor has taken ownership.
Execution — The actor performs the work.
Validation — The outcome is checked against the Definition of Done and validation criteria. Only then is the task complete.
This lifecycle applies identically whether the actors are human, AI, or a mix of both. A manager requesting a bug fix from an engineer follows the same lifecycle as a user requesting a research summary from an AI agent. The structure does not change based on who's executing — it changes based on what's being asked and whether it has been truly accepted.
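The lifecycle can be sketched as an explicit state machine in which skipped stages are unrepresentable. Stage names here are illustrative, not the system's real vocabulary:

```python
from enum import Enum, auto

class Stage(Enum):
    REQUESTED = auto()   # intent, not commitment
    ACCEPTED = auto()    # explicit agreement by the receiving actor
    DECLINED = auto()    # rejection is a valid, first-class outcome
    COMMITTED = auto()   # a task now exists; intent has become obligation
    EXECUTING = auto()
    VALIDATED = auto()   # checked against the Definition of Done

# Legal transitions; anything not listed is rejected.
TRANSITIONS = {
    Stage.REQUESTED: {Stage.ACCEPTED, Stage.DECLINED},
    Stage.ACCEPTED: {Stage.COMMITTED},
    Stage.COMMITTED: {Stage.EXECUTING},
    Stage.EXECUTING: {Stage.VALIDATED},
}

def advance(current: Stage, nxt: Stage) -> Stage:
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt

# A request cannot jump straight to "validated"; every stage must be honored.
s = Stage.REQUESTED
for step in (Stage.ACCEPTED, Stage.COMMITTED, Stage.EXECUTING, Stage.VALIDATED):
    s = advance(s, step)
assert s is Stage.VALIDATED
```

Attempting `advance(Stage.REQUESTED, Stage.VALIDATED)` raises: the "one click from intent to done" shortcut that most tools offer does not exist in this model.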
Principle 4: Humans and Agents Are First-Class Equals
Alissa introduces a universal abstraction — the Actor — and treats humans and AI agents identically at the system level.
This is not a futuristic aspiration. It is a present-tense design decision. The future of work is already hybrid. Humans delegate to AI agents. AI agents request human review. Workflows cross the human–machine boundary constantly, and that boundary-crossing will only accelerate.
Most systems handle this by bolting AI capabilities onto a fundamentally human-centric architecture. The result is two parallel systems with duplicated logic, inconsistent workflows, and constant translation between "how things work for humans" and "how things work for agents." Every new agent capability requires new plumbing because agents were an afterthought.
Alissa takes the opposite approach. At the structural level, the system does not distinguish between human and agent actors. An Actor is defined by what it can do — receive work, accept or decline work, execute work, participate in validation — not by what it is. The task lifecycle, the delegation flow, the validation requirements: all of these are Actor-agnostic.
Consider a concrete example. Your team needs to fix a critical bug:
Scenario A: You send a task request to an engineer (human actor). They accept, investigate, fix the code, submit a pull request. The PR passes automated tests and code review. Validation checks pass. Task complete.
Scenario B: You send a task request to a coding agent (agent actor). It accepts, investigates, generates a fix, submits a pull request. The PR passes automated tests and a human reviewer approves. Validation checks pass. Task complete.
Same request. Same lifecycle. Same validation. Same data model. The system doesn't care which scenario played out. It cares that the work was requested, accepted, executed, and validated.
Why this matters. Because the alternative — maintaining separate systems or separate logic for human and agent work — creates a scaling problem that gets worse over time. Every new agent capability you add requires changes to the "agent path." Every new human workflow you design must be partially reimplemented for agents. And users are forced to think about who will execute the work before they can even express what the work is. Alissa removes that friction entirely.
The philosophical commitment here is important: the system must not care who executes. It must only care that execution is accountable and validated. This is what makes Alissa durable across the transition from human-dominated to hybrid to potentially agent-dominated workloads. The system doesn't need to change because the balance shifts. It was designed for the shift from day one.
Principle 5: Visibility Is a Core Property of Work
Work in Alissa is structurally visible. Not manually reported. Not optionally shared. Visible by default, by design.
There is a fundamental difference between informing someone about the state of work and making work inherently observable. The former depends on a human deciding to write an update, compose a message, or call a meeting. The latter depends on the system itself surfacing reality as a consequence of how work flows through it.
Most teams live in the first world. They hold weekly standups so managers can ask "where are we on X?" They send end-of-day emails summarizing progress. They update Slack channels with status reports that are outdated before they're read. All of this communication exists because the tools they use are opaque — the tools know things that the humans don't, and the only way to bridge that gap is manual reporting.
Manual reporting is lossy, expensive, and dishonest. Lossy because not everything gets reported. Expensive because reporting takes time that could be spent on execution. And dishonest because reports are curated — people naturally present a rosier picture than reality.
Alissa treats visibility as a structural property, not a communication behavior. Because every task has an owner, a status, a Definition of Done, and validation criteria, the system can show stakeholders the actual state of work at any moment without anyone lifting a finger to "report." A project dashboard in Alissa doesn't show you what people said about their work — it shows you what the work is.
If work cannot be seen, it cannot be trusted.
This principle has a direct design consequence: every entity in the system — tasks, requests, bodies of work, projects — must expose its state in a way that is queryable, observable, and up to date. There is no "private" work that escapes the system. There is no "I'll update the board later." The board is the work.
What breaks without this. Hidden work. Delayed detection of problems. A culture of reactive firefighting where issues are discovered only when deadlines pass. And, inevitably, a reliance on meetings and status reports to bridge the gap between what the system knows and what humans know — the very overhead Alissa is designed to eliminate.
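Because state lives in the tasks themselves, a dashboard can be a pure function over them rather than a report someone writes. A minimal sketch, with illustrative field names:

```python
from collections import Counter

# Each task already carries owner, status, and validation state by construction,
# so aggregate visibility is computed, never reported.
tasks = [
    {"title": "Book flights", "owner": "Alice", "status": "validated"},
    {"title": "Reserve hotel", "owner": "Bob", "status": "executing"},
    {"title": "Prepare documents", "owner": "ResearchBot", "status": "requested"},
]

def dashboard(tasks):
    by_status = Counter(t["status"] for t in tasks)
    awaiting = [t["title"] for t in tasks if t["status"] == "requested"]
    return {"by_status": dict(by_status), "awaiting_acceptance": awaiting}

view = dashboard(tasks)
assert view["by_status"]["validated"] == 1
assert view["awaiting_acceptance"] == ["Prepare documents"]
```

No actor in this sketch ever "posts a status update"; the view is recomputed from the work itself, which is the sense in which the board is the work.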
Principle 6: Simplicity at the Core, Depth When Needed
The system must be usable at three levels of complexity, and users must never be forced into a level they don't need.
This principle is about respect for the user. Most productivity systems fail in one of two ways: they force complexity on users who just need a simple task list, or they lack the depth required by users managing real projects with dependencies, timelines, and cross-functional coordination. The first failure drives people to sticky notes. The second drives them to spreadsheets.
Alissa is designed with three concentric rings of capability:
Personal (Most users, most of the time) — Tasks and Requests. Create your own tasks, send and receive requests. This is your daily productivity layer.
Structured (Power users, team leads) — Bodies of Work and Dependencies. Group tasks around goals, define relationships between tasks. This is your coordination layer.
Delivery (Advanced users, program managers) — Projects and Roadmaps. Orchestrate work across time with milestones, sequencing, and observability. This is your delivery layer.
The critical design constraint here is that each level must be genuinely useful on its own. A user who never creates a Body of Work or a Project must still find Alissa valuable. Their experience should feel simple, clean, and focused. And when — if — they grow into needing more structure, the system reveals new capabilities without invalidating what they've already built. Their tasks don't need to be reorganized. Their mental model doesn't need to be rebuilt. The system just gets deeper.
Think of it like a swimming pool with a gradual slope. You can wade in at knee depth and be perfectly comfortable. You can walk deeper as you gain confidence. The architecture of the pool doesn't change — it was always that deep. You just hadn't needed to go there yet.
IV. The Ontology — What Alissa Is Made Of
Alissa is built on a deliberately small number of primitives. Each one exists for a specific reason, and none should be added without the same level of justification. Complexity in an ontology is a debt that compounds — every new entity creates new relationships, new edge cases, and new cognitive load. The entities below represent the minimum viable set required to operationalize the thesis and principles described above.
They are organized across a universal foundation and four layers (Intake, Execution, Coordination, Delivery):
Actor (Universal) — The foundational abstraction for anything that can own and execute work
Task Request (Intake) — A proposal for work — intent, not commitment
Request Inbox (Intake) — A structured entry point for receiving work
Task (Execution) — The atomic, accountable unit of committed work
Body of Work (Coordination) — A contextual grouping of tasks around a shared goal
Project (Delivery) — Time-based orchestration of work toward a broader objective
Each builds on the ones before it. You cannot understand a Task without understanding Actors and Task Requests. You cannot understand a Project without understanding Bodies of Work and Tasks. The ontology is a stack, and we'll walk through it from the bottom up.
Actor — The Foundation of Execution
An Actor is any entity capable of:
Receiving work
Accepting or declining work
Executing work
Participating in validation
Today, an Actor is either a human or an AI agent. Tomorrow, it might also be a team, a service, or an autonomous system. The abstraction is deliberately broad because the system's integrity must not depend on what kind of thing is doing the work.
The key design decision. The system never assigns work to "users." It assigns work to Actors. This is not a semantic trick — it is a structural choice that permeates the entire data model and every interface built on top of it. When someone looks at a task assignment, they don't see "assigned to: Alice" — they see "assigned to: [Actor] Alice (human)." The distinction matters because it means the same assignment could read "[Actor] ResearchBot (agent)" without any change to the system's logic, lifecycle, or validation requirements.
This is what makes Principle 4 (human-agent equality) possible at the implementation level. The Actor abstraction is the mechanism through which that principle is enforced.
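One way to express the abstraction is a single interface that humans and agents implement identically, so that delegation logic never branches on which kind it received. A sketch with assumed method names:

```python
from abc import ABC, abstractmethod

class Actor(ABC):
    """Defined by capability, not by kind: receive, decide, execute, validate."""

    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def decide(self, request: str) -> bool:
        """Accept (True) or decline (False) a proposed piece of work."""

class Human(Actor):
    def decide(self, request: str) -> bool:
        return True  # stand-in for a real person's explicit choice

class Agent(Actor):
    def decide(self, request: str) -> bool:
        return True  # stand-in for an agent's capacity check

def delegate(request: str, actor: Actor) -> str:
    # One code path for both kinds: the system never inspects the actor's type.
    return f"accepted by {actor.name}" if actor.decide(request) else "declined"

assert delegate("fix the bug", Human("Alice")) == "accepted by Alice"
assert delegate("fix the bug", Agent("CodeBot")) == "accepted by CodeBot"
```

Swapping `Human("Alice")` for `Agent("CodeBot")` changes nothing about the lifecycle, which is the implementation-level meaning of human-agent equality.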
Task Request — The Origin of Work
A Task Request is a proposal for work. It represents intent — not commitment. This distinction is one of the most important ideas in the entire system.
Consider three scenarios:
A wife says to her partner: "Can you book the flights for our trip?"
A manager says to an engineer: "Can you look into that bug in the payment flow?"
An AI agent says to a human reviewer: "Can you approve this deployment?"
In every conventional system, all three of these would immediately become tasks. Name on it, status set to "to-do," sitting in someone's queue. But notice what actually happened: someone asked. They didn't assign. They didn't commit. They proposed.
Alissa takes this distinction seriously. A Task Request is not a Task. It exists in a different state, with a different lifecycle, and a different set of valid operations. A request can be accepted, declined, or negotiated. It can be clarified. It can expire. It can sit in an inbox waiting for attention without anyone pretending it's "in progress."
The critical rule. A Task cannot exist without acceptance. There is one exception: self-created tasks carry implicit acceptance — if you create a task for yourself, you have obviously accepted it. But in every delegated scenario, the path from request to task must pass through explicit acceptance.
Why this matters. Separating the request from the task accomplishes three things simultaneously:
It enforces consent. No one can have work dumped on them without their agreement. This is not just polite — it is structurally necessary. An actor who hasn't accepted a task has no reason to feel accountable for it, and the system's integrity depends on accountability being real.
It clarifies ownership. The moment of acceptance is the moment ownership transfers. Before that, the requester owns the intent. After that, the acceptor owns the commitment. There is no gray zone.
It prevents silent failure. In systems without this separation, tasks can be "assigned" and then ignored indefinitely. No one knows whether the assignee even saw the task, let alone agreed to do it. In Alissa, an unaccepted request is visibly unaccepted. It sits in the intake layer, demanding a decision. Silence is not treated as agreement.
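The critical rule and its one exception can be enforced at the point of construction. A sketch, with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class TaskRequest:
    description: str
    requester: str
    recipient: str
    accepted: bool = False  # flipped only by the recipient's explicit decision

@dataclass
class Task:
    description: str
    owner: str

def create_task(req: TaskRequest) -> Task:
    # Self-created work carries implicit acceptance; everything else requires it.
    if req.requester != req.recipient and not req.accepted:
        raise ValueError("a Task cannot exist without acceptance")
    return Task(req.description, owner=req.recipient)

req = TaskRequest("Book the flights", requester="Dana", recipient="Alice")
# Calling create_task(req) here would raise: Alice has not accepted yet.
req.accepted = True
task = create_task(req)
assert task.owner == "Alice"

# Self-delegation needs no explicit acceptance step.
own = create_task(TaskRequest("Draft notes", requester="Alice", recipient="Alice"))
assert own.owner == "Alice"
```

Ownership transfers exactly at acceptance: before the flag flips, the requester owns only an intent, and no `Task` can be minted from it.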
Request Inbox — The Work Intake Layer
A Request Inbox is an addressable entry point for incoming work. Think of it as a service interface — a structured channel through which one actor (or the outside world) can submit work proposals to another.
Here's a concrete example. Your brother is a freelance photographer. He's talented, but his "system" for managing client requests is a mess: some come by email, some by Instagram DM, some by text message, some by phone call at dinner. He misses requests. He double-books. He forgets to follow up.
In Alissa, your brother would expose a "Photography Requests" inbox. Clients submit requests through it. Each request has structure: what they want, when they want it, relevant details. Your brother reviews his inbox, accepts the requests he wants to take on (creating tasks in the process), declines the ones he can't, and negotiates the ones that need adjustment.
The Inbox is not a task list. It is not an execution entity. It is a boundary — the surface where unstructured external intent meets the structured internal system. Requests enter through the inbox. Tasks emerge on the other side.
This same pattern applies everywhere:
A coding agent can expose an inbox for development requests
A personal assistant (human or AI) can expose an inbox for scheduling requests
A service team can expose an inbox for support requests
An individual can expose a personal inbox for ad-hoc asks from collaborators
The inbox gives every actor a clean, structured way to receive work — and, critically, the right to decide what they accept.
Task — The Atomic Commitment
A Task is the heart of the system. It is a bounded, owned, verifiable unit of committed work. If the Actor is the foundation and the Task Request is the origin, the Task is the substance — the thing that actually gets done.
A valid Task must define three things:
Outcome — What does "done" look like? This is the Definition of Done (DoD). It must be specific enough that two reasonable people (or a person and a machine) would agree on whether it has been met.
Proof — How will completion be validated? What evidence must exist? What checks must pass? These are the validation criteria, and they must be defined before execution begins, not after.
Ownership — Who is accountable? Exactly one primary assignee. Contributors and reviewers may participate, but one Actor holds the line.
Consider a simple example: [Task] Book flight to NYC for client meeting
DoD: Round-trip flight booked for the correct dates, within budget, with preferred airline if available
Validation: Confirmation email attached to task, Booking ID present, dates match calendar invite
Owner: [Actor] Alice (human)
Without this structure, the task is just a line of text — "book flight to NYC" — and "done" means whatever the assignee decides it means. Maybe they booked a one-way flight. Maybe they found a flight but didn't book it because they weren't sure about the budget. Maybe they booked the wrong dates. Every ambiguity is a potential failure, and every failure traces back to vagueness at the point of definition.
This is why Alissa is not a to-do app. A to-do app stores the string "book flight to NYC." Alissa stores a commitment — a structured agreement between actors about what will be produced, how it will be verified, and who is responsible for making it happen.
Body of Work (BOW) — The Coherence Layer
A Body of Work (BOW) is an explicit grouping of related tasks organized around a shared goal. It is the layer where individual tasks gain context — where the question shifts from "what needs to be done?" to "what are we trying to accomplish?"
Example: [Body of Work] Client Travel Preparation
=> Book flights (Task)
=> Reserve hotel (Task)
=> Prepare travel documents (Task)
=> Send itinerary to client for approval (Task)
Each of those tasks makes sense individually. But the Body of Work provides the frame: these aren't four random tasks — they're four parts of a coherent effort with a shared goal (get the client to the meeting, comfortably and on time).
Why BOW exists. Without it, tasks remain fragments. A human looking at a flat list of 47 tasks cannot easily see which ones relate to each other, which ones serve the same objective, or which ones might need to be reconsidered if the goal changes. BOW provides that structure.
The critical distinction. A dependency graph is not a Body of Work until a goal is defined. You can have three tasks that depend on each other ("design mockup → implement feature → write tests") without any shared goal beyond "do them in order." A Body of Work requires intentionality: these tasks are grouped because they collectively serve a defined purpose.
This distinction prevents the common failure mode of over-mechanistic project management — where relationships between tasks are modeled exhaustively but the why behind those relationships is lost.
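The distinction can be made structural: a dependency edge is just a pair of tasks, while a Body of Work refuses to exist without a stated goal. A sketch with illustrative names:

```python
from dataclasses import dataclass, field

@dataclass
class BodyOfWork:
    goal: str  # required: the "why" behind the grouping
    tasks: list[str] = field(default_factory=list)
    dependencies: list[tuple[str, str]] = field(default_factory=list)

    def __post_init__(self):
        if not self.goal.strip():
            raise ValueError("a dependency graph without a goal is not a Body of Work")

bow = BodyOfWork(
    goal="Get the client to the meeting, comfortably and on time",
    tasks=["Book flights", "Reserve hotel", "Prepare travel documents"],
    dependencies=[("Book flights", "Prepare travel documents")],
)
assert bow.goal
```

The dependencies remain plain edges; what promotes the collection to a Body of Work is the goal the constructor insists on.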
Project — The Delivery System
A Project coordinates work across time toward a broader objective. It is the highest-level entity in the Alissa ontology, and it earns that position by introducing something no other entity provides: the dimension of time.
A Project adds:
Timeline — when work starts, when it must finish
Sequencing — what must happen before what
Milestones — meaningful checkpoints that indicate progress
Observability — a dashboard-level view of how the overall effort is progressing
Example Project: Client Onboarding (Q2 2026)
Spans multiple Bodies of Work (legal setup, technical integration, training)
Structured over 8 weeks
Milestones: Contract signed, Environment provisioned, First workflow deployed, Go-live
The critical rule: if it has no timeline, it is not a Project. An open-ended collection of work with no deadline, no milestones, and no sequencing constraints is a Body of Work — not a Project. This is not a pedantic distinction. It matters because the word "project" in most organizations has become so overloaded that it means everything and therefore nothing. In Alissa, the term has a precise definition, and that precision is what makes the entity useful.
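The rule can likewise be enforced by construction: without a real span of time, the Project type refuses to instantiate, and the work stays a Body of Work. A sketch with illustrative names and dates:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Project:
    objective: str
    start: date  # no timeline, no Project
    end: date
    milestones: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.end <= self.start:
            raise ValueError("a Project must span a real interval of time")

onboarding = Project(
    objective="Client Onboarding (Q2 2026)",
    start=date(2026, 4, 6),
    end=date(2026, 5, 29),  # roughly the 8-week span from the example
    milestones=["Contract signed", "Environment provisioned",
                "First workflow deployed", "Go-live"],
)
assert len(onboarding.milestones) == 4
```

An open-ended effort simply cannot be expressed as a `Project` here, which keeps the term precise rather than overloaded.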
V. The Delegation Flow — The Backbone of the System
All work in Alissa follows a single, universal flow:
1. Request — An actor proposes work to another actor
2. Acceptance — The receiving actor explicitly accepts
3. Task Created — The proposal becomes a commitment
4. Execution — The work is performed
5. Visibility — Progress is structurally observable throughout
6. Validation — The outcome is verified against predefined criteria
Why this matters. Most tools collapse this entire flow into a single action: "create task." One click, and you've gone from intent to (alleged) commitment to (alleged) execution readiness — skipping negotiation, acceptance, and every other stage where real-world delegation actually happens. Alissa refuses to do this because each stage exists to preserve something essential:
The Request stage preserves intent and negotiability
The Acceptance stage preserves consent and ownership clarity
The Task Creation stage preserves the distinction between proposal and commitment
The Execution stage preserves the actual doing of work
The Visibility stage preserves stakeholder trust
The Validation stage preserves outcome integrity
Remove any one of these, and you lose something that cannot be recovered downstream.
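The flow above can be sketched as a small state machine that refuses to skip stages. This is an illustrative sketch, not the real implementation; it treats visibility as a property of every stage rather than a stage of its own, and the stage names are assumptions:

```python
from enum import Enum, auto

class Stage(Enum):
    REQUEST = auto()       # intent, still negotiable
    ACCEPTED = auto()      # explicit consent recorded
    TASK_CREATED = auto()  # proposal becomes commitment
    EXECUTING = auto()     # the work is performed
    VALIDATING = auto()    # outcome checked against predefined criteria
    DONE = auto()          # verified outcome

# Each stage advances only to the next one; "create task" in one click
# (REQUEST -> TASK_CREATED) is exactly the jump this table forbids.
NEXT = {
    Stage.REQUEST: Stage.ACCEPTED,
    Stage.ACCEPTED: Stage.TASK_CREATED,
    Stage.TASK_CREATED: Stage.EXECUTING,
    Stage.EXECUTING: Stage.VALIDATING,
    Stage.VALIDATING: Stage.DONE,
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move to the next stage, rejecting any transition that skips a stage."""
    if NEXT.get(current) is not target:
        raise ValueError(f"cannot skip from {current.name} to {target.name}")
    return target

stage = Stage.REQUEST
stage = advance(stage, Stage.ACCEPTED)
stage = advance(stage, Stage.TASK_CREATED)
print(stage.name)  # TASK_CREATED
```

Encoding the flow as a transition table makes the doctrine's claim testable: collapsing the flow is not a UX shortcut the system can offer, because the shortcut is not a legal transition.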
VI. Human–Agent Collaboration in Practice
The Actor abstraction enables a spectrum of collaboration patterns that the system handles without any special-case logic:
Human → Agent → Human: A manager requests a code fix from an AI agent. The agent generates a fix and creates a review request back to a human. The human validates and approves.
Agent → Human → Agent: An AI monitoring system detects an anomaly and creates a task request for a human operator. The operator investigates, makes a decision, and assigns a remediation task to an agent.
Human + Agent (parallel): A Body of Work includes tasks owned by both humans and agents, executing in parallel. The project's observability layer doesn't distinguish between them — it surfaces progress from both equally.
The system does not define how execution happens. It does not tell an agent which algorithm to use or a human which process to follow. It defines three things and only three things:
Who owns it (ownership)
How it will be checked (validation)
How it connects to other work (coordination)
Everything else — process, methodology, tooling, technique — is the actor's domain. This separation is essential. The system is the railroad tracks, not the train. It defines where things can go and checks that they arrived. It does not drive.
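That separation can be read directly off a task record. In this hypothetical sketch (field names are assumptions, not the real schema), the record carries exactly the three things the system defines and deliberately has no field for process, methodology, or tooling:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """What the system defines: who owns it, how it is checked,
    how it connects to other work. Nothing about how it gets done."""
    id: str
    owner: str                        # ownership: exactly one accountable actor
    validation: tuple[str, ...]       # validation: criteria the outcome must meet
    depends_on: tuple[str, ...] = ()  # coordination: links to other work

fix = Task(
    id="T-102",
    owner="agent:code-fixer",         # human or agent, same record shape
    validation=("tests pass", "human review approved"),
    depends_on=("T-101",),
)

# Note what is absent: no "process", "algorithm", or "tool" field exists.
print(fix.owner, len(fix.validation))  # agent:code-fixer 2
```

The frozen dataclass is a design hint as much as a convenience: the commitment's shape is fixed by the system, while everything inside the execution stage belongs to the actor.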
VII. Observability — Derived, Not Declared
Observability in Alissa is an emergent property. It is not a feature that humans operate — it is a consequence of the system's structure.
Because every task has a status, a DoD, validation criteria, and an owner, the system can compute aggregate visibility at every level:
Task level: Is it in progress? Is it blocked? Is it validated?
Body of Work level: What percentage of component tasks are complete? Which are at risk?
Project level: Are milestones on track? What's the projected completion date? Where are the bottlenecks?
None of this requires anyone to write a status update. None of it depends on someone remembering to update a dashboard. The data exists because the work exists, and the observability derives from the data.
Observability must emerge from real execution, not manual reporting.
This is the standard. If a metric requires someone to manually input data, that metric is fragile. It will be wrong, late, or missing — probably all three. The only metrics the system should surface are ones it can compute from the structured execution data it already possesses.
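A rollup of this kind can be sketched in a few lines. The statuses and field names here are illustrative; the point is that the Body-of-Work view is computed from task-level state the system already holds, with no manual reporting step anywhere:

```python
from collections import Counter

# Illustrative task statuses for one Body of Work. In a real store these
# come from structured execution data, never from someone's status update.
statuses = {
    "T-1": "validated",
    "T-2": "validated",
    "T-3": "in_progress",
    "T-4": "blocked",
}

def rollup(statuses: dict[str, str]) -> dict[str, object]:
    """Derive Body-of-Work visibility purely from task-level state."""
    counts = Counter(statuses.values())
    total = len(statuses)
    return {
        "percent_complete": 100 * counts["validated"] // total,
        "at_risk": sorted(t for t, s in statuses.items() if s == "blocked"),
    }

print(rollup(statuses))  # {'percent_complete': 50, 'at_risk': ['T-4']}
```

Because the metric is a pure function of execution data, it can never be stale in the way a hand-maintained dashboard can: regenerating it is just recomputing the function.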
VIII. Edge Cases and Difficult Truths
No system design is complete without honest treatment of the cases that don't fit neatly.
Cancellation is not failure — it is adaptation. A task can be cancelled. A request can be withdrawn. A project can be shut down. None of these are system errors. They are natural events in any dynamic environment, and the system must handle them with the same rigor it applies to completion. A cancelled task should be visible as cancelled (not deleted), its reason recorded, and any downstream dependencies notified.
Dependency failure blocks downstream work — visibly. When a task that other tasks depend on fails or stalls, that failure must propagate visibly. The system must not hide the blast radius of a blocked task. Stakeholders must see, without digging, that task X is blocked because task Y is late.
Partial completion fails validation and remains incomplete. If a task's DoD requires three deliverables and only two are present, the task is not complete. There is no "partial credit" in a system of verified outcomes. This may feel rigid. It is. That rigidity is the price of trustworthiness.
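The no-partial-credit rule is simple enough to state as code. A minimal sketch, assuming a DoD is representable as a set of required deliverables (the deliverable names are made up for illustration):

```python
def validate(dod: set[str], delivered: set[str]) -> tuple[bool, set[str]]:
    """All-or-nothing validation: a task is complete only when every
    Definition-of-Done item is present. There is no partial credit."""
    missing = dod - delivered
    return (not missing, missing)

dod = {"report drafted", "report reviewed", "report published"}
ok, missing = validate(dod, {"report drafted", "report reviewed"})

print(ok)       # False: two of three deliverables is still incomplete
print(missing)  # {'report published'}
```

Returning the missing items alongside the verdict matters for observability: a failed validation should tell stakeholders exactly which part of the DoD is unmet, not merely that something is.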
IX. What We Explicitly Reject
Doctrine is defined as much by what it excludes as what it includes. The following are patterns, features, and philosophies that Alissa will never adopt:
Multi-owner tasks. Collaboration is welcome. Shared accountability is not. One owner, always. (See Principle 2.)
Implicit completion. No auto-closing tasks based on time, no "assumed done" after X days of inactivity. Completion requires validation. (See Principle 1.)
Hidden work. No private tasks that escape observability. No "draft" states that exist outside the system's visibility. If it's in Alissa, it's visible to the relevant actors. (See Principle 5.)
Timeline-less projects. If it has no deadline, no milestones, and no sequencing, it is not a project. Call it a Body of Work, a backlog, an initiative — but not a project. (See the Project ontology.)
Over-engineered hierarchies. Six primitives. Four layers. That's the ontology. We do not add entities because they might be useful. We add entities only when their absence makes the system unable to represent a commitment it must represent. Every addition must justify itself against the full weight of the complexity it introduces.
X. The Mental Model
If you remember nothing else from this document, remember this:
Work enters the system as requests, becomes commitments through acceptance, is executed by accountable actors, validated through evidence, and coordinated across time toward meaningful outcomes.
That is the Alissa System in one sentence. Every entity, every principle, every design decision described in this document exists to make that sentence true and to keep it true as the system scales.
XI. Closing — Why the Constraints Are the Point
It would be easy to read this document and feel that Alissa is restrictive. No multi-owner tasks. No implicit completion. No hidden work. Mandatory validation. Mandatory acceptance. These are constraints, and constraints feel like limitations.
But they are not limitations. They are structural guarantees.
A bridge doesn't feel limited because it has load-bearing specifications. An airplane doesn't feel restricted because it has mandatory pre-flight checklists. These constraints are what make those systems trustworthy. Remove them and you don't get more freedom — you get a bridge that might collapse and a plane that might not fly.
Alissa works the same way. The constraints are what make it a system of record rather than a system of opinion. They are what make the phrase "task complete" mean something. They are what allow a stakeholder to look at a dashboard and trust what they see. They are what make human-agent collaboration possible without separate logic. They are what make the system scale without losing integrity.
Without these constraints, Alissa becomes another task tool. There are hundreds of those. The world does not need another one.
With these constraints, Alissa becomes something different:
A system of record for real work.