Demo 2 docs

Technical notes and dataflow documentation

This page renders the local markdown files from `docs/` using `react-markdown`, so the project docs live inside the app instead of in a placeholder shell.

Basic Data Flow

Goal

The goal of this POC is to show how a validation UI can be built dynamically from agent-related configuration while keeping the generation logic explainable and reusable.

In practice, the reusable part of the UI comes from the tool contract, while the runtime values come from the agent execution.

Static Inputs

The static tool contract should only contain:

  • tool_signature
  • description
  • inputSchema

The agent definition provides:

  • agent role and system prompt
  • allowed tools
  • execution context

The static layer should not contain pre-written UI labels or presentation copy.
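The split between these two static layers can be sketched as TypeScript shapes. This is an illustrative sketch only: `tool_signature`, `description`, and `inputSchema` come from the contract listed above, while the remaining field names and the sample tool are assumptions.

```typescript
// Illustrative sketch: field names beyond tool_signature, description,
// and inputSchema are assumptions, and the sample tool is hypothetical.

// Static tool contract: structure only, no UI labels or presentation copy.
interface ToolContract {
  tool_signature: string;
  description: string;
  inputSchema: Record<string, unknown>; // e.g. a JSON Schema object
}

// Agent definition: role, prompt, allowed tools, execution context.
interface AgentDefinition {
  role: string;
  systemPrompt: string;
  allowedTools: string[]; // tool identifiers
  executionContext: Record<string, unknown>;
}

// A hypothetical tool contract, for illustration.
const sendInvoice: ToolContract = {
  tool_signature: "send_invoice(customerId: string, amount: number)",
  description: "Send an invoice to a customer.",
  inputSchema: {
    type: "object",
    properties: {
      customerId: { type: "string" },
      amount: { type: "number" },
    },
    required: ["customerId", "amount"],
  },
};
```

Note that nothing in `ToolContract` is user-facing copy; labels and help text only appear later, in the derived presentation layer.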

Runtime Inputs

At runtime, the agent receives a trigger or user input and proposes one or more tool calls.

Each proposed tool call provides:

  • toolId
  • formData
  • execution state

These runtime values are not part of the cached UI schema. They are injected later.

Data Flow

  1. The system reads the agent definition and the allowed tool definitions.
  2. For each tool, it reads the static contract: tool_signature, description, and inputSchema.
  3. A presentation layer is derived deterministically from that contract.
  4. AI rewrites this presentation layer into clearer human-readable labels and helper text.
  5. The resulting presentation layer is cached by tool-contract hash.
  6. At runtime, the agent proposes a tool call with formData.
  7. The cached presentation layer is merged with the runtime formData.
  8. The final validation UI is rendered from:
    • the tool contract
    • the cached presentation layer
    • the runtime values
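Steps 6 and 7 above amount to a simple merge: cached presentation metadata joined with runtime values at render time. The shapes and field names below are assumptions made for illustration, not the project's actual types.

```typescript
// Illustrative sketch of merging the cached presentation layer with
// runtime formData (steps 6-7). All shapes here are assumptions.

interface FieldPresentation {
  label: string; // AI-rewritten, cached
  help?: string; // AI-rewritten, cached
}

interface PresentationLayer {
  toolLabel: string;
  toolSummary: string;
  fields: Record<string, FieldPresentation>;
}

interface ProposedToolCall {
  toolId: string;
  formData: Record<string, unknown>; // runtime values, never cached
}

interface RenderedField {
  name: string;
  label: string;
  help?: string;
  value: unknown; // injected at runtime
}

// Merge: cached labels + runtime values -> fields the validation UI renders.
function buildValidationFields(
  presentation: PresentationLayer,
  call: ProposedToolCall,
): RenderedField[] {
  return Object.entries(presentation.fields).map(([name, p]) => ({
    name,
    label: p.label,
    help: p.help,
    value: call.formData[name], // runtime value injected here, not cached
  }));
}
```

Because the merge is a pure function of the cached layer and the proposed call, the same cached presentation can serve every execution of the same tool.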

Data Flow Diagram


Sequence Diagram


Cache Boundary

Cached:

  • tool label
  • tool summary
  • field labels
  • field help
  • presentation metadata derived from the tool contract

Not cached:

  • runtime formData
  • execution state
  • current UI state
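The cache boundary implies that the cache key is a function of the static contract alone. A minimal sketch of such a key, using a non-cryptographic FNV-1a hash for brevity; a real implementation would likely use a canonical JSON serialization and a stronger hash.

```typescript
// Sketch: derive the presentation-layer cache key from the static tool
// contract only. FNV-1a and plain JSON.stringify are simplifications;
// a production system would likely canonicalize keys and use a
// cryptographic hash.

interface ToolContract {
  tool_signature: string;
  description: string;
  inputSchema: Record<string, unknown>;
}

// 32-bit FNV-1a hash over a string, rendered as hex.
function fnv1a(text: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < text.length; i++) {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Only cached inputs participate in the key; runtime formData never does,
// so the same contract always hits the same cached presentation layer.
function presentationCacheKey(contract: ToolContract): string {
  return fnv1a(JSON.stringify(contract));
}
```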

Reading The Diagrams

  • The dataflow diagram shows the product logic from inquiry to validation, execution, and possible replanning.
  • The sequence diagram shows the technical interaction between the main system components used to build the validation UI.
  • Together they show both the business flow and the implementation flow.

Why This Fits the Task

This flow keeps the system aligned with the task requirements:

  • the UI is generated dynamically
  • the generation remains deterministic where structure already exists
  • AI is used for human-readable presentation enrichment
  • the reusable presentation layer can be cached
  • runtime values stay separate from the static contract

Business Analysis: Designing a Generic Validation UI for AI Workflows

1. Executive Summary

The assignment asks for a generic validation UI that can be dynamically generated from an agent definition and reused across AI workflows.

At first glance, this appears to be a UI generation problem. In practice, it is a product trust problem.

A validation interface is the layer that allows users to understand what an AI system is doing, supervise sensitive actions when needed, and feel confident enough to use the product in real business workflows.

The business challenge is therefore not only to standardize validation across heterogeneous workflows, but also to make that validation useful, lightweight, and trustworthy for end users.

The central thesis of this document is the following:

A good validation UI is the result of the right equilibrium between visibility and control.

Too little visibility creates a black-box effect and weakens trust. Too much control creates friction and reduces the value of automation. The product goal is to find the sweet spot.

2. Problem Context

Wonka’s assignment asks for a generalized validation interface capable of integrating with heterogeneous AI workflows.

At a high level, the expected system should:

  • read an agent definition
  • infer the relevant validation requirements
  • generate the UI dynamically
  • allow the user to supervise or approve workflow execution when needed

However, the business problem behind this requirement is broader than dynamic rendering.

Businesses do not adopt AI systems only because they are powerful. They adopt them when they are:

  • understandable
  • controllable
  • safe enough for operational use
  • lightweight enough to save time rather than create new friction

This is especially true for SMBs, which often seek automation to reduce repetitive work and improve operational efficiency. For them, a validation UI must not become a second manual workflow layered on top of the first one.

The real product question is therefore not simply:

How do we render validation dynamically?

It is:

How do we create a generic validation layer that makes AI workflows trustworthy and usable across business contexts?

3. Business Goal of the Validation UI

The validation UI should serve three business goals.

3.1 Reduce the black-box effect

Users need enough visibility into what the system understood, what it plans to do, and what it actually did.

Without that, the product feels opaque and unreliable.

3.2 Increase trust and confidence

If users cannot verify important outputs or supervise sensitive actions, they will hesitate to operationalize the workflow.

Trust is not a side effect. It is part of the product.

3.3 Provide the right amount of control

Users should feel that they can intervene where needed, especially for risky actions or business-critical outputs.

However, this must be done without turning the product into an over-engineered approval machine that destroys the value of automation.

4. Core Product Tension: Visibility vs Control

The key design tension is not between flexibility and standardization. It is between visibility and control.

  • Visibility answers: Can I understand what the AI is doing?
  • Control answers: Can I intervene when it matters?

A validation UI that over-optimizes one side at the expense of the other will fail.

Diagram

Reading the diagram

  • A: High visibility, low control. The system is transparent, but the user has limited ability to influence outcomes.
  • B: High visibility, high control. The system can be powerful and reassuring, but may become too complex or heavy for users seeking automation.
  • C: Low visibility, low control. This is the weakest zone. The system feels like a black box and offers little reassurance.
  • D: Low visibility, high control. The user is expected to intervene without enough context, which creates confusion rather than confidence.

The best product experience sits in a balanced zone where the user has enough context and enough intervention power without excess friction.

This leads to a first important implication: a generic validation interface should not maximize controls. It should optimize trust.

5. Wonka Operates Across a Spectrum of AI Workflows

A generic validation UI cannot be designed around a single workflow archetype.

Wonka operates across a spectrum of AI workflows that vary in structure, ambiguity, and autonomy. At one end are highly guided and constrained systems, where expected behavior is relatively explicit. At the other are more autonomous and open-ended agents, where interpretation, planning, and execution involve greater flexibility.

This has an important product implication: the validation layer must be standardized enough to remain reusable across deployments, while still supporting different levels of supervision depending on the workflow context.

The challenge is therefore not to build a different product for each workflow type, but to define a validation approach that remains coherent across the full operating spectrum.

Diagram: Wonka's wide operating spectrum

This spectrum is central to the business problem. A system that is too rigid will fail to support open-ended workflows. A system that is too loose will fail to reassure users in more constrained or business-critical contexts.

6. Emerging Functional Requirements

The business context above leads to the following functional requirements.

6.1 The system must provide sufficient visibility into workflow interpretation and execution

Users must be able to understand:

  • what the agent interpreted
  • what it intends to do
  • what actions or tools are involved
  • what result is being proposed or produced

Without this, the product creates a black-box effect.

6.2 The system must support validation before sensitive actions or outputs

The interface must allow users to review, confirm, or correct important actions and outputs when the business context requires it.

Some workflows can tolerate greater autonomy, while others require explicit human oversight.

6.3 The system must support different levels of supervision

Not every workflow requires the same validation depth. The product must support lightweight review in some cases and more explicit validation in others.

This follows directly from Wonka’s operating spectrum.

6.4 The system must remain generic across workflows

The validation layer must not depend on one specific use case or one fixed workflow shape.

Its value lies in cross-deployability and standardization.

6.5 The system must account for action permissions

The validation experience must reflect what the agent is allowed to do and under what conditions.

Trust depends not only on output quality, but also on operational boundaries.

6.6 The system must support meaningful user intervention

Users must be able to approve, reject, edit, or escalate when the system encounters ambiguity, risk, or business-critical decisions.

A validation UI is useful only if it allows meaningful intervention.

7. Emerging Non-Functional Requirements

In addition to the functional requirements above, the product should satisfy the following non-functional requirements.

7.1 Simplicity

The product must remain easy to understand and use, especially for SMB users seeking time savings rather than complex configuration.

7.2 Low friction

Validation must not become so heavy that it cancels out the value of automation.

7.3 Trustworthiness

The system must inspire confidence by making relevant information visible and surfacing uncertainty or risk clearly.

7.4 Modularity

The solution must support a broad range of workflows without requiring a custom-built interface for each one.

7.5 Extensibility

The platform must be able to accommodate additional workflow patterns, constraints, or validation needs over time.

7.6 Consistency

Users should encounter a coherent validation logic across workflows, even if the level of detail varies.

7.7 Safety

The system must reduce the risk of silent errors, invalid outputs, or unauthorized actions being executed without appropriate visibility or control.

7.8 Scalability

The validation approach must remain viable as the number of supported workflows and integrations increases.

8. Conclusion

The assignment can be interpreted as a dynamic UI generation problem, but that would miss the underlying business challenge.

A generic validation UI is the trust layer between AI autonomy and business operations.

To succeed, it must reduce the black-box effect, provide the right amount of control, remain reusable across heterogeneous workflows, and preserve the value of automation rather than weakening it.

This analysis leads to a clear conclusion: the product should be designed not only for genericity, but for trustworthy genericity.

The next step is therefore to translate these business insights into a technical design and prototype architecture capable of supporting this validation layer in practice.

Advanced Architecture - Bonus

1. Goal

As established in the business analysis, a trustworthy verification UI must strike the right balance between visibility and control.

Visibility allows users to understand how the system interprets their request and how the workflow evolves over time.

Control ensures that the agent operates within clearly defined boundaries and that users can intervene when actions may affect business-critical operations.

In distributed agentic systems, achieving this balance cannot be solved at the UI layer alone. The system must expose operational structures that make visibility and control possible.

This document describes the architectural foundations required to support such a verification interface.

1.0 Architectural Alternatives and Their Limitations

Full Human Approval Workflow

One possible approach would be to require explicit user approval before every agent action.


While this model maximizes control, it eliminates the primary value of automation. Workflows become slower than manual execution, and users experience approval fatigue, eventually approving actions without meaningful review.

This model sacrifices automation for control and therefore fails to achieve the desired balance.

Fully Autonomous Execution

The opposite approach would be to allow agents to execute all actions autonomously and only present the final result to the user.


While this preserves automation, it eliminates operational safety. The user cannot intervene before side-effecting operations occur, and failures may only become visible after the system has already modified external systems or business data.

This model sacrifices control for automation.

Controlled Autonomy

The architecture proposed in this document aims to resolve this tension through controlled autonomy.

Instead of requiring approval for every step or hiding execution entirely, the system:

  • enforces granular permission boundaries
  • exposes execution state transparently
  • introduces approval checkpoints only when necessary

This allows the system to preserve automation while providing the visibility and control required for trustworthy operation.


1.1 Enabling Control Through Granular Agent Permissions

Agentic workflows often require invoking multiple external tools. Some actions may simply retrieve information, while others may create, modify, or delete business data.

To maintain operational safety, the system must enforce explicit authorization boundaries over what an agent is allowed to do.

This permission layer serves two purposes.

First, it protects operations by defining what actions an agent can perform autonomously.

Second, it allows the verification interface to explain why a particular action is allowed, requires approval, or is blocked.

Without such a permission model, the verification UI cannot provide meaningful operational control.

Minimal Permission Model

The system must persist three core entities:

  • agent definitions
  • available tools
  • authorization rules linking agents and tools


This model allows the system to determine:

  • whether the agent can invoke a tool
  • whether an action requires approval
  • whether the action should be blocked

These signals feed directly into the verification interface.
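The three signals above can be sketched as one small decision function over the persisted authorization rules. The entity and field names here are assumptions; only the three outcomes (allowed, requires approval, blocked) come from the model described above.

```typescript
// Sketch of the authorization decision; entity and field names are
// assumptions. Only the three outcomes come from the permission model.

type Decision = "allowed" | "requires_approval" | "blocked";

// An authorization rule linking an agent to a tool.
interface AuthorizationRule {
  agentId: string;
  toolId: string;
  requiresApproval: boolean;
}

// No matching rule means the action is blocked by default.
function authorize(
  rules: AuthorizationRule[],
  agentId: string,
  toolId: string,
): Decision {
  const rule = rules.find((r) => r.agentId === agentId && r.toolId === toolId);
  if (!rule) return "blocked";
  return rule.requiresApproval ? "requires_approval" : "allowed";
}
```

The verification UI can render each decision directly: blocked actions are explained rather than silently dropped, and approval checkpoints become interactive steps.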

1.2 Enabling Visibility Through Execution State Persistence

Control alone is not sufficient.

Users must also be able to see how the agent interprets their request and how the workflow evolves during execution.

If the system only returns a final result, the verification UI cannot provide meaningful insight into what happened internally.

To support visibility, the system must persist structured execution artifacts.

At minimum, the runtime must store:

  • workflow-level state
  • action-level execution state
  • validation and transition events

These artifacts allow the verification interface to surface:

  • the execution plan
  • approval checkpoints
  • blocked actions
  • dependency chains
  • execution failures
  • compensation attempts
  • manual resolution requirements

Minimal Execution Model


This execution model provides the operational truth required for the verification UI.

Instead of displaying only a final result, the verification interface can show:

  • what the agent planned
  • what it is currently executing
  • what is waiting for approval
  • what failed
  • what compensation actions were attempted
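A minimal persisted execution model covering these artifacts could look like the following. The state names and fields are illustrative assumptions chosen to match the list above, not the system's actual schema.

```typescript
// Illustrative execution artifacts; state names and fields are assumptions.

type WorkflowState =
  | "planning"
  | "running"
  | "waiting_approval"
  | "failed"
  | "completed";

type ActionState =
  | "planned"
  | "waiting_approval"
  | "running"
  | "succeeded"
  | "failed"
  | "compensating"
  | "needs_manual_resolution";

interface ActionRecord {
  actionId: string;
  toolId: string;
  state: ActionState;
  dependsOn: string[]; // dependency chain surfaced in the UI
}

interface TransitionEvent {
  actionId: string;
  from: ActionState;
  to: ActionState;
  at: string; // ISO timestamp
}

interface WorkflowRecord {
  workflowId: string;
  state: WorkflowState;
  actions: ActionRecord[];
  events: TransitionEvent[]; // append-only trail for visibility
}

// The verification UI can query this record directly, e.g. to list
// approval checkpoints that are currently blocking execution.
function pendingApprovals(wf: WorkflowRecord): string[] {
  return wf.actions
    .filter((a) => a.state === "waiting_approval")
    .map((a) => a.actionId);
}
```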

1.3 Validation-Aware Execution Lifecycle

The final step is understanding how permissions and execution state interact during runtime.

Permission rules determine what actions are allowed.

Execution state reveals what the agent is actually doing.

Together they enable a verification UI capable of surfacing meaningful operational checkpoints.

This is particularly important because agentic workflows are distributed and asynchronous. A single user request may trigger multiple tool calls across independent systems.

In such environments, strict atomic transactions are rarely possible. Instead, the system must maintain workflow-level consistency through explicit execution tracking and compensation strategies.

Action Lifecycle

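One way to enforce such a lifecycle is an explicit transition table that rejects illegal state changes. The states and transitions below are an illustrative assumption, not the system's actual state machine.

```typescript
// Sketch of a validation-aware action lifecycle as a transition table.
// States and the allowed transitions are illustrative assumptions.

type State =
  | "planned"
  | "waiting_approval"
  | "running"
  | "succeeded"
  | "failed"
  | "compensating"
  | "needs_manual_resolution";

const transitions: Record<State, State[]> = {
  planned: ["waiting_approval", "running"], // checkpoint only when required
  waiting_approval: ["running", "failed"],  // approved, or rejected by the user
  running: ["succeeded", "failed"],
  failed: ["compensating", "needs_manual_resolution"],
  compensating: ["succeeded", "needs_manual_resolution"],
  succeeded: [],                            // terminal
  needs_manual_resolution: [],              // terminal until a human acts
};

// Guard used by the runtime before persisting a transition event.
function canTransition(from: State, to: State): boolean {
  return transitions[from].includes(to);
}
```

Persisting only transitions that pass this guard keeps the execution record coherent, which is what lets the verification UI trust what it displays.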

Why This Architecture Matters for Verification

This architecture enables the verification UI to expose the two key properties identified in the business analysis.

Visibility:

  • users can inspect execution state, dependencies, and failures.

Control:

  • users can intervene through permission boundaries and approval checkpoints.

Together these mechanisms transform the interface from a simple monitoring dashboard into an operational safety layer for agent-driven workflows.

Without these underlying architectural capabilities, a verification UI could only display the final output of the system rather than providing meaningful insight or control.