Intro
Our product has a lot of forms, which is true for most products once they accumulate enough entities and CRUD. HR has employee onboarding. Tasks has task creation. Campaigns, templates, flows, settings: every surface eventually grows a form.
At the same time, we were building a chat composer into the main app and integrating it with cards and workflows around it. The goal was straightforward: the assistant should always be nearby. You should be able to ask what you are looking at, navigate faster, and get help actually using the product instead of treating chat as a separate destination.
That gave us a product shape we liked: modern chat-and-cards interactions when they help, classic UI when you want to stay manual. Then forms exposed the gap. The assistant could explain the app, but the moment you had to actually do something inside a form, the experience snapped back to a completely disconnected flow.
Forms were the place where our assistant story stopped feeling true.
That was the real starting point for this work. Not "how do we add AI to forms?", but "how do we stop forms from breaking the assistant-first experience we were already building?"
The prototype
The first prototype was not a backend abstraction. It was a UI experiment. We took the tray composer and opened an assisted form next to it instead of throwing the user into a separate modal flow. The form still worked like a normal form. You could click, type, tab, validate, and submit it manually. But the chat box below it could now fill it for you.
That combination changed the feel of the interaction immediately. The assistant was no longer a detached helper sitting somewhere else in the product. It was now acting on the exact UI the user was already looking at.
The Add Employee flow made the idea obvious. You could still fill the fields yourself, but you could also drop in a PDF with employee details and let the assistant populate the form step by step. The same pattern translated cleanly to tasks: paste a customer complaint from the clipboard and let the assistant turn it into the right title, description, due date, and metadata.
Once we saw that working, the actual problem became clearer. We did not need "AI forms" as a special product surface. We needed normal forms that could accept input from two sources: the user directly, or the assistant acting with the same validation rules.
What we built
The real problem was not the model. It was our forms. Each one was a bespoke React implementation with its own fields, its own wiring, and its own little world of assumptions. That made the UI hard to reuse, and it made assistant support feel custom every time.
So the framework we actually needed was not "LLM support for forms". It was reusable form building blocks that already knew how to describe themselves. Once a field block could render itself, validate itself, and register itself as a client-side tool, assistant support stopped being a separate feature.
That shift changed the boundary completely. The browser already had the state, the validation, the visible UI, and the file objects. Sending all of that to the server just to send it back was upside down.
That pushed us toward a much simpler boundary: let the model call tools that run on the client, against the live form instance. The Vercel AI SDK already supports that model. If a tool is declared without an execute function, the call is streamed back to the client and resolved there. That was the hinge. Once we saw that, the architecture almost picked itself.
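To make the hinge concrete, here is what that distinction looks like with AI SDK 4-style option names (`parameters` rather than the newer `inputSchema`); the tool names and bodies below are illustrative, not our actual tools:

```ts
import { tool } from "ai"
import { z } from "zod"

// A tool with an execute function is resolved on the server, inside the route handler.
const lookupCustomer = tool({
  description: "Look up a customer record",
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => ({ matches: [], query }), // placeholder body
})

// A tool without execute is not resolved on the server at all: the AI SDK
// streams the call back to the browser, which decides how to handle it.
const set_fullName = tool({
  description: "Employee's legal full name",
  parameters: z.object({ value: z.string() }),
})
```

The client half of that loop shows up again below, inside the form hook.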
Technical details
The framework ended up with three parts: a zod schema that describes the form, a client tool manifest sent with the chat session, and a useAssistedForm hook that resolves model tool calls against the live react-hook-form instance.
The server now does one narrow job. When a request arrives with clientTools, buildClientToolMap turns that manifest into AI SDK tool definitions with no execute function. The call is then streamed back to the browser, where the form resolves it locally.
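A sketch of what buildClientToolMap amounts to, assuming the manifest entries carry a tool name, a description, and JSON-schema parameters (the manifest shape here is an assumption; `jsonSchema` is the AI SDK helper for wrapping raw JSON schema):

```ts
import { jsonSchema } from "ai"
import type { JSONSchema7 } from "json-schema"

// Assumed manifest shape: one entry per client-side tool, sent with the chat request.
type ClientToolManifestEntry = {
  name: string
  description: string
  parameters: JSONSchema7 // JSON schema derived from the zod field
}

// Turn the manifest into AI SDK tool definitions with no execute function,
// so every call is streamed back to the browser that declared it.
function buildClientToolMap(manifest: ClientToolManifestEntry[]) {
  return Object.fromEntries(
    manifest.map((entry) => [
      entry.name,
      {
        description: entry.description,
        parameters: jsonSchema(entry.parameters),
      },
    ])
  )
}

// In the chat route, the result is passed straight through:
// streamText({ model, messages, tools: buildClientToolMap(clientTools) })
```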
The key decision was treating the schema as the source of truth. It already contains the things the model actually needs: field names, value types, enum values, and a description of what each field means.
```ts
const EmployeeSchema = z.object({
  fullName: z.string().min(1).describe("Employee's legal full name"),
  email: z.string().email().describe("Work email address"),
  employmentType: z
    .enum(["full_time", "part_time", "contractor"])
    .describe("Employment classification"),
  startDate: z.coerce.date().describe("First day at the company"),
})
```

From there, the walker turns every describe() into a tool description, maps zod types into JSON schema parameters, and exposes one tool per field leaf.
```
set_fullName({ value: string })
set_email({ value: string })
set_employmentType({ value: "full_time" | "part_time" | "contractor" })
set_startDate({ value: string /* ISO date */ })
```

That is why the prompt can stay simple. It does not need to hardcode tools. It just tells the model to fill the form from what the user provided.
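For reference, a simplified version of that walk might look like the following, assuming a flat object schema and using the zod-to-json-schema package for the parameter mapping (the function name is illustrative; the real walker has to descend to field leaves rather than stopping at the top level):

```ts
import { z } from "zod"
import { zodToJsonSchema } from "zod-to-json-schema"

// Simplified walker: one client tool per top-level field of an object schema.
// Nested objects and arrays would need a recursive walk over the leaves.
function schemaToClientTools(schema: z.ZodObject<z.ZodRawShape>) {
  return Object.entries(schema.shape).map(([name, field]) => ({
    name: `set_${name}`,
    description: field.description ?? `Set the ${name} field`,
    parameters: zodToJsonSchema(z.object({ value: field })),
  }))
}

// schemaToClientTools(EmployeeSchema) yields entries like
// { name: "set_fullName", description: "Employee's legal full name", parameters: {...} }
```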
The UI stays declarative too. Form.AutoFields handles default rendering from the schema, while explicit JSX still wins when a specific field or step needs tighter control. The important part is that those reusable blocks also register themselves as tools, so rendering and assistant support come from the same source.
```tsx
function AddEmployeeComposerDialog() {
  const form = useAssistedForm({ schema: EmployeeSchema })

  return (
    <Wizard steps={["Personal", "Employment"]}>
      <Wizard.Step title="Personal">
        <Form form={form}>
          <Form.AutoFields only={["fullName", "email"]} />
        </Form>
      </Wizard.Step>
      <Wizard.Step title="Employment">
        <Form form={form}>
          <Form.AutoFields only={["employmentType", "startDate"]} />
        </Form>
      </Wizard.Step>
    </Wizard>
  )
}
```

Each step validates against the subset of the schema it owns. That matters because the assistant is not bypassing the form. Whether a value came from the keyboard or a tool call, it is checked by the same UI-level rules.
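The piece that closes the loop is how forwarded tool calls get resolved against the form. A condensed, illustrative sketch of the idea inside useAssistedForm, assuming AI SDK 4-style onToolCall semantics and react-hook-form with the zod resolver; manifest registration, wizard steps, and file handling are omitted:

```tsx
import { z } from "zod"
import { useChat } from "@ai-sdk/react"
import { useForm } from "react-hook-form"
import { zodResolver } from "@hookform/resolvers/zod"

// Condensed sketch: the real hook also registers the client tool manifest
// with the chat session and deals with steps, files, and richer errors.
function useAssistedForm({ schema }: { schema: z.ZodObject<z.ZodRawShape> }) {
  const form = useForm({ resolver: zodResolver(schema) })

  useChat({
    onToolCall: async ({ toolCall }) => {
      // Tool names follow the set_<field> convention produced by the walker.
      const field = toolCall.toolName.replace(/^set_/, "")
      const { value } = toolCall.args as { value: unknown }

      // Apply the value through the same path a keystroke takes, then run
      // the field's validation before reporting back to the model.
      form.setValue(field, value, { shouldDirty: true })
      const valid = await form.trigger(field)

      return valid
        ? { ok: true }
        : { ok: false, error: form.getFieldState(field).error?.message }
    },
  })

  return form
}
```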
To surface the feature, we extended the composer tray with registerable quick actions. Screens can register a pill that sends a prepared prompt to the model or runs a client-side handler.
```ts
useQuickAction({
  id: "hr.add-employee.fill",
  label: "Help me fill the form",
  prompt: "Help me fill out this Add Employee form.",
})
```

We kept the same assistant surface outside forms too, but that was a side effect of the architecture, not the point of it.
Results
The first migration target was Add Employee, a real form with a real backend mutation behind it. It ended up lighter than the original implementation while gaining assistant support it never had before.
The bigger payoff was deletion. Once the form blocks and schemas became reusable, assistant support no longer required a bespoke integration surface for every new form.
There was also a useful side effect: validation failures became product feedback. When a tool call fails schema validation, that usually means the prompt is weak, the field description is ambiguous, or the form itself is not clear enough.
The boundary stayed intact. The assistant helps with UI state, but submission and permissions still belong to the application.
Summary
Forms were the place where our assistant story felt least convincing. We had chat. We had cards. We had a clear idea that the assistant should stay close to the user. But the moment work moved into a form, that idea broke apart.
What fixed it was not a smarter prompt. It was a better boundary. Once we let the browser own form state, let the schema describe the UI, and let the model operate through client-side tools, assisted forms stopped being a special integration and started feeling like part of the product.
That is the real test for this kind of framework. When a new form appears, adding assistant support should feel boring. If it still needs its own backend tool, its own prompt contract, and its own wiring, the abstraction has not really moved the needle.
