Your head of sales has a process she runs before every major prospect meeting. She researches the company, gathers recent news, identifies key decision-makers, maps potential pain points, and drafts three different outreach angles tailored to different buyer personas. The process takes her two hours, and the results are consistently strong--but there's no template document to show for it. The methodology lives in her head.
Build From Document works brilliantly when you have a polished artifact to reverse-engineer. But when the process exists as institutional knowledge--a series of steps someone executes from experience--you need a different starting point.
Describe With AI is that starting point. Open a conversational interface, describe what you want the workflow to do, answer a few clarifying questions, and receive a production-grade AI Workflow ready to deploy. No document required. No technical configuration. Just a clear description of the outcome you need.
How the Conversation Works
Describe With AI follows a natural conversational flow. You describe your intent, the system asks targeted follow-up questions when needed, and then generates a complete workflow.
Start With What You Know
Open Describe With AI and type a description of the workflow you want. The more specific you are about three things--inputs, processing, and outputs--the better the result. But you don't need to get it perfect on the first try.
A description like "Create a workflow that analyzes contracts" is a starting point, but it leaves the system guessing about what kind of analysis you want, what format the output should take, and where the contracts come from. Something more specific produces better results:
Create a contract risk review workflow. Input: an uploaded contract PDF. Extract key terms--effective date, term length, liability caps, termination clauses--as structured data. Then identify risk areas and draft specific redline language with negotiation rationale for each flagged clause.
That description tells the system exactly what to extract, how to analyze it, and what the output should contain. The generated workflow will closely match your intent.
Clarifying Questions Fill the Gaps
When your description needs more detail, Describe With AI asks focused follow-up questions before building. These aren't generic questionnaires--they're targeted questions based on what the system identified as ambiguous or under-specified in your prompt.
For a contract review workflow, you might see questions like:
- Should the risk assessment use a scoring framework (high/medium/low) or narrative descriptions?
- Which jurisdiction's legal standards should guide the clause analysis?
- Should the redline suggestions include alternative language options or just recommended changes?
Your answers shape the generated workflow's logic and output format. This back-and-forth ensures the system builds what you actually need, not what it assumes from a vague description.
From Description to Deployable Workflow
Once the system has enough information--either from your initial description or after clarifying questions--it generates a complete AI Workflow. You'll see a confirmation with the option to review and open the workflow directly in the visual editor.
What you receive isn't a skeleton or rough outline. The designer produces fully engineered workflows with detailed prompts, intelligent model routing, built-in data validation, and variables pre-populated with the context you provided. These workflows are ready to run against real data immediately.
What a Single Prompt Produces
To understand what Describe With AI actually builds, consider a real example. A single prompt asking for a "blog-to-social content engine" produced a 16-step workflow that operates across multiple platforms and AI models.
The workflow starts by extracting key concepts, proof points, tensions, and quotable lines from a blog article. It then gathers real-time trend intelligence from X using web access and a model selected specifically for its live data capabilities. From there, it branches into parallel content streams:
- LinkedIn posts for both company and founder accounts
- a carousel with slide-by-slide outlines and design notes
- engagement boosters, including polls and comment strategies
- X posts for both company and founder accounts, with threads, link posts, and standalone proof posts
Every step that produces structured data is followed by a dedicated validation step that catches formatting issues, verifies expected fields exist, and corrects syntax--a best practice the designer applies automatically. The workflow finishes with a performance ranking step that evaluates all content variants by predicted engagement, then assembles everything into a single publication-ready package a social media manager can execute from directly.
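The validation pattern is easiest to picture as a small post-processing step between two workflow steps. Here's a minimal sketch in Python, assuming the upstream step returns JSON text; the field names and the fence-stripping behavior are illustrative, not the platform's actual implementation:

```python
import json

def validate_step_output(raw: str, required_fields: list[str]) -> dict:
    """Validate one step's structured output before it flows downstream.

    Mirrors the pattern described above: parse the JSON, verify the
    expected fields exist, and correct common syntax issues. Field
    names are illustrative, not a platform schema.
    """
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Models sometimes wrap JSON in a markdown fence; strip it off
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    data = json.loads(cleaned)
    missing = [f for f in required_fields if f not in data]
    if missing:
        raise ValueError(f"missing expected fields: {missing}")
    return data
```

Catching a malformed payload here, rather than three steps later, is what keeps a long pipeline debuggable.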
Multi-Model Intelligence Built In
The designer doesn't assign the same AI model to every step. It routes each task to the model best suited for the job. In the social content example, content creation steps use a model optimized for nuanced writing, while the trend intelligence and performance ranking steps use a model with real-time web access. This multi-model routing happens automatically based on what each step needs to accomplish.
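Conceptually, the routing is a lookup from task type to model. A sketch of the idea, where both the step types and the model identifiers are placeholders rather than the platform's actual names:

```python
# Hypothetical routing table: step types and model identifiers are
# placeholders, not the platform's actual names.
MODEL_ROUTES = {
    "content_creation": "writing-optimized-model",
    "trend_intelligence": "web-access-model",
    "performance_ranking": "web-access-model",
    "data_validation": "fast-structured-model",
}

def route_model(step_type: str, default: str = "general-model") -> str:
    """Return the model best suited to a step, with a sensible fallback."""
    return MODEL_ROUTES.get(step_type, default)
```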
Production-Grade Prompt Engineering
Each generated step contains a detailed, structured prompt--not a one-line instruction. A typical prompt includes a role definition, context about where this step sits in the workflow, explicit references to upstream variables, specific formatting requirements, and style constraints. The prompts reference variables from earlier steps using the correct syntax, maintain consistent output structures, and include guardrails against common failure modes like generic language or missing data.
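To make the shape of such a prompt concrete, here is an invented example in the spirit of the description above. The real prompts are generated by the designer, and the `{{variable}}` syntax is an assumption standing in for whatever the platform actually uses:

```python
# Illustrative only: real prompts are generated by the designer, and the
# {{variable}} syntax is an assumption, not the platform's own.
STEP_PROMPT = """\
Role: You are a senior social media strategist.
Context: This is step 4 of a blog-to-social pipeline; upstream steps
extracted key concepts and gathered live trend data.
Inputs: {{keyConcepts}}, {{trendReport}}, {{brandVoiceTone}}
Task: Draft three LinkedIn post variants for the company account.
Format: Return JSON with fields "variants" (list of strings) and
"rationale" (string). No markdown fences.
Constraints: Avoid generic openers; every claim must trace to an input.
"""

def render(template: str, values: dict[str, str]) -> str:
    """Minimal {{var}} substitution, showing upstream-variable references."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template
```

Note how each section (role, context, inputs, format, constraints) maps to one of the elements listed above.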
Variables With Context, Not Just Placeholders
The designer creates variables pre-populated with useful defaults based on your description. Instead of empty placeholders labeled "input1" and "input2," you get named variables like companyBackground, targetAudience, and brandVoiceTone--and when you provide that context during the conversation, the defaults are already filled in. When you run the workflow, the variables that change per run (like the blog article or URL) are ready for input, while the stable context (company positioning, brand voice) carries forward automatically.
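The split between stable context and per-run input can be sketched like this. The variable names follow the examples above, but the default values are invented stand-ins for the kind of context the design conversation captures:

```python
from dataclasses import dataclass

@dataclass
class WorkflowVariables:
    """Illustrative variable set; default values are invented stand-ins
    for context captured during the design conversation."""
    # Stable context: pre-populated, carries forward across runs
    companyBackground: str = "B2B SaaS platform for mid-market finance teams"
    targetAudience: str = "CFOs and controllers"
    brandVoiceTone: str = "confident, plain-spoken, no jargon"
    # Per-run input: supplied fresh on each execution
    blogArticleUrl: str = ""

def prepare_run(variables: WorkflowVariables, blog_url: str) -> WorkflowVariables:
    """Only the per-run input changes; stable context carries forward."""
    variables.blogArticleUrl = blog_url
    return variables
```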
What Good Prompts Look Like
The quality of your description directly shapes the quality of the generated workflow. Here are examples of effective prompts across different use cases, each demonstrating the specificity that produces strong results.
Competitive Intelligence
Build a competitor intelligence workflow. Inputs: company name, industry, and a list of competitor names. For each competitor, use web search to research their products, pricing, and market position. Then compare all competitors against our company and output strategic recommendations.
This prompt defines clear inputs (company name, industry, competitor list), specifies the processing method (web search per competitor), and describes the desired output (comparative analysis with strategic recommendations). The "for each competitor" phrasing naturally translates into a loop step with individual research iterations.
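The control flow that prompt implies can be sketched as an ordinary loop. `research` and `compare` stand in for the AI steps here, passed as plain functions so the structure is testable without any model calls:

```python
def run_competitor_workflow(company, industry, competitors, research, compare):
    """Sketch of the loop structure the competitor prompt implies.

    `research` and `compare` stand in for AI steps; they are passed in
    as plain functions so the control flow runs without model calls.
    """
    findings = []
    for name in competitors:  # the "for each competitor" loop step
        findings.append({"competitor": name, "profile": research(name, industry)})
    # Final synthesis step: compare all findings against your company
    return compare(company, findings)
```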
Expense Processing From a Site
Design an expense report workflow that reads from a receipts site. List employee folders, then for each employee find the matching month/year subfolder, list receipt files, and extract vendor, amount, date, and category from each receipt. Aggregate into a company-wide summary with per-employee breakdowns.
This prompt uses platform-specific concepts--Sites and folder structures--to describe a workflow that navigates existing data rather than requiring manual uploads. The system generates steps that use built-in tools to traverse the folder hierarchy and process each receipt systematically.
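The traversal logic is the interesting part of that workflow. A sketch using the local filesystem as a proxy, where the folder layout (`root/<employee>/<month_year>/*.pdf`) is an assumed stand-in for the Sites structure the prompt describes:

```python
from pathlib import Path

def collect_receipts(root: str, month_year: str) -> dict[str, list[Path]]:
    """Walk employee folders and gather receipt files for one period.

    The layout (root/<employee>/<month_year>/*.pdf) is an assumed
    stand-in for the Sites structure described above.
    """
    receipts = {}
    for employee_dir in sorted(Path(root).iterdir()):
        period_dir = employee_dir / month_year
        if employee_dir.is_dir() and period_dir.is_dir():
            receipts[employee_dir.name] = sorted(period_dir.glob("*.pdf"))
    return receipts
```

Per-receipt extraction and the company-wide rollup would then run over this map, one AI step per receipt.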
Meeting Follow-Through
Build a meeting summary workflow. Input: meeting transcript or notes file. Extract attendees, key decisions, and action items as structured data. Then loop through each action item and draft a suggested approach with a ready-to-send follow-up email for the assigned owner.
This prompt goes beyond extraction into actionable output. The meeting transcript becomes structured data, and each action item gets a drafted follow-up email. The generated workflow includes extraction, validation, iteration, and content generation--all wired together automatically.
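The iteration step has the same shape as the competitor loop: one drafting call per extracted item. A sketch, with `draft` standing in for the content-generation step and the item fields (owner, task, due) mirroring the extraction described above:

```python
def draft_follow_ups(action_items, draft):
    """Attach a drafted email to each extracted action item.

    `draft` stands in for the content-generation AI step; item fields
    (owner, task, due) mirror the extraction described above.
    """
    return [
        {**item, "follow_up_email": draft(item["owner"], item["task"], item.get("due"))}
        for item in action_items
    ]
```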
Financial Report Analysis
Design a financial report analyzer. Input: a quarterly financial report PDF. Extract revenue, expenses, margins, and key ratios as structured data. Calculate period-over-period trends and output an executive summary with flagged anomalies and recommended follow-ups.
Financial analysis benefits from specific field extraction. Naming the exact metrics you want--revenue, expenses, margins, key ratios--ensures the workflow's extraction step captures what matters. The trend calculation and anomaly flagging become distinct workflow steps with their own validation.
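The trend step reduces to simple arithmetic once extraction has done its work. A sketch, where the metric names follow the prompt above and the inputs would come from the PDF extraction step in the generated workflow:

```python
def period_over_period(current: dict, prior: dict) -> dict:
    """Percentage change per metric between two periods.

    Metric names follow the prompt above; in the generated workflow
    the inputs would come from the PDF extraction step.
    """
    trends = {}
    for metric, value in current.items():
        base = prior.get(metric)
        if base:  # skip metrics missing or zero in the prior period
            trends[metric] = round((value - base) / base * 100, 1)
    return trends
```

Anomaly flagging would then be a downstream step that applies thresholds (or an AI judgment) to these percentages.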
Document Comparison
Create a document comparison workflow. Inputs: two uploaded document files such as old and new policy versions. Extract the key sections from each, compare them side by side, and output a detailed diff report highlighting additions, removals, and material changes with impact assessment.
Comparison workflows require two inputs processed in parallel, then merged for analysis. The prompt makes this structure explicit, and the generated workflow handles the dual extraction, alignment, and synthesis.
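The merge step can be pictured as set operations over the extracted sections. A sketch, assuming the dual extraction steps each produce a map of section titles to text; the impact assessment would be a downstream AI step:

```python
def diff_sections(old: dict, new: dict) -> dict:
    """Classify section-level changes between two document versions.

    Inputs are {section_title: text} maps, as the dual extraction
    steps described above would produce.
    """
    return {
        "additions": sorted(set(new) - set(old)),
        "removals": sorted(set(old) - set(new)),
        "changes": sorted(k for k in set(old) & set(new) if old[k] != new[k]),
    }
```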
Vendor Due Diligence
Build a vendor due diligence workflow. Input: vendor name and an uploaded vendor questionnaire file. Extract vendor responses as structured data, use web search to verify claims and check for news or incidents, then produce a risk scorecard with recommendations and any flagged concerns.
This prompt combines private data analysis (the questionnaire) with external research (web search for verification). The generated workflow uses both document analysis and web tools to produce a comprehensive assessment grounded in your system of record and external sources.
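The final scorecard step is essentially an aggregation over verification results. A sketch, where `claims` maps each vendor claim to whether web research verified it; the thresholds are illustrative, not a recommended risk policy:

```python
def risk_scorecard(claims: dict[str, bool]) -> dict:
    """Turn claim-verification results into a simple risk scorecard.

    `claims` maps each vendor claim to whether web research verified
    it. Thresholds are illustrative, not a recommended risk policy.
    """
    flagged = sorted(claim for claim, verified in claims.items() if not verified)
    ratio = len(flagged) / len(claims) if claims else 0
    level = "high" if ratio > 0.5 else "medium" if flagged else "low"
    return {"risk_level": level, "flagged_concerns": flagged}
```

In the generated workflow, your actual risk policy would live in the scorecard step's prompt rather than in hard-coded thresholds.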
Fine-Tuning for Your Domain
The generated workflow opens directly in the visual AI Workflow editor where every step, prompt, and variable is visible and editable. While the workflows the designer produces are ready to run, you may want to fine-tune them for your specific domain.
Run against real data first. The fastest way to evaluate a generated workflow is to execute it against actual documents from your Sites. Review the output for accuracy, tone, and completeness. Most workflows produce strong results on the first run, but real data reveals where domain-specific adjustments would sharpen the output.
Add domain-specific terminology. If your financial analysis should use your industry's specific metrics or your contract review should prioritize clauses that matter in your jurisdiction, edit the relevant prompts to include that context. The prompt structure is already there--you're adding precision, not rebuilding.
Adjust model selection. The designer selects appropriate AI models for each step, but you may have preferences based on experience. If you've found that a particular model handles your industry's terminology more naturally, override the model on those specific steps.
Extend with additional steps. A vendor due diligence workflow might benefit from an extra step that checks the vendor against your internal approved vendor list. A content workflow might need a compliance review step before final output. The visual editor makes it straightforward to insert steps into the existing flow.
For deeper technical patterns like prompt isolation, JSON validation strategies, and loop architectures, the technical guide to building robust AI Workflows covers production-grade engineering in detail.
When to Use Describe With AI vs Build From Document
Clear Ideas offers two paths to creating AI Workflows, each suited to different starting points.
Use Describe With AI when:
- The process lives in someone's head and hasn't been documented yet
- You're creating a new workflow from scratch for a use case you haven't automated before
- You want to combine multiple data sources, platforms, or output formats in a single workflow
- You have a clear idea of inputs, processing, and outputs but no reference document
Use Build From Document when:
- You have a polished, finished document that represents the desired output
- You want the workflow to replicate the structure, tone, and analytical approach of an existing artifact
- The document contains institutional knowledge you want to preserve and automate
- Consistency with an established format matters more than starting fresh
Both paths produce the same type of AI Workflow that opens in the same visual editor. You can start with Describe With AI to build a complex multi-step pipeline, then use Build From Document to create a complementary workflow for a different process. The choice is about where your process knowledge currently lives--in a document or in someone's expertise.
Getting Started
Think of a process your team runs regularly that hasn't been formalized into a template document yet. Open Describe With AI and describe what the workflow should do. Be specific about what goes in, what processing happens, and what comes out. Include context about your audience, your standards, and the format you need.
Answer the clarifying questions, open the generated workflow, and run it against real data from your Sites. The workflow the designer produces will likely be closer to production-ready than you expect--detailed prompts, validation built in, intelligent model routing, and variables pre-populated with your context. Fine-tune for your domain, schedule it, and let the workflow handle the execution.
The AI Workflow Designer documentation provides detailed guidance, and the technical guide covers deeper patterns such as prompt isolation, variable design, and loop strategies. If you already have a polished document to start from, Build From Document reverse-engineers it into a workflow automatically.