Case study • Conversation design • Content ops tooling
Helping small content teams decide what to post next with a Voiceflow AI assistant
Small creative and content teams rarely struggle with a lack of ideas—they struggle with deciding what to prioritize and where it should live.
My role
Conversation design, prompt design, Voiceflow prototyping
Audience
Small creative/content teams (understaffed, multi-channel)
Tools
Voiceflow, LLM prompting, structured evaluation prompts
Outcome
Reduced “what do we post?” decision friction by turning messy idea dumps into a ranked plan.
The problem
Most small content teams don’t have a content shortage. They have a prioritization problem: too many ideas, too many channels, not enough time, and no clean way to make decisions quickly.
- Backlogs of drafts and half-finished ideas
- Limited bandwidth for strategy and planning
- Multiple channels with different audience expectations
- Decision-making becomes the bottleneck (not creativity)
What I built
The chatbot guides users through a structured decision flow—basically a lightweight strategy session in conversational form.
Key design decisions
1. Lower the friction to start
Users are explicitly told that messy input is welcome. This reduces hesitation, encourages brainstorming, and mirrors real creative workflows where ideas rarely start fully formed.
2. Gather context before generating output
The bot asks about goals, audience, channels, and tone before evaluating ideas. This structured context dramatically improves the relevance of the final output and prevents generic recommendations.
3. Position AI as a collaborator, not a replacement
The bot focuses on analysis and prioritization—tasks many creative teams find draining—while leaving ideation and execution to humans. This framing makes the tool feel supportive rather than threatening.
End to end, the assistant:
- Clarifies the team’s primary goal
- Defines the target audience
- Identifies available content channels
- Establishes tone and positioning
- Collects a “messy is fine” list of 5–10 content ideas
- Evaluates and compares each idea
- Recommends what to publish next
Process
The assistant works best when it behaves like a structured planning partner: gather context, accept messy input, then evaluate ideas against the team’s real constraints.
1) Intake
Collect goal, audience, channels, and tone before touching the idea backlog. Context first, always.
2) Evaluation
Score ideas against relevance, effort, and fit. Compare options instead of generating endless new ones.
3) Output
Return a ranked recommendation plus a simple next-step plan so the team can actually ship.
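The evaluation-and-output steps above can be sketched in miniature. In the live bot the LLM produces the scores; this illustrative snippet (field names and the scoring formula are my assumptions, not the bot's actual logic) only shows the ranking idea: reward relevance and channel fit, penalize effort, sort descending.

```javascript
// Illustrative sketch of "score ideas against relevance, effort, and fit".
// In the real assistant the LLM assigns scores; this just shows the ranking math.
function rankIdeas(ideas) {
  return ideas
    .map((idea) => ({
      ...idea,
      // Higher relevance and fit are better; higher effort is worse.
      score: idea.relevance + idea.fit - idea.effort,
    }))
    .sort((a, b) => b.score - a.score);
}

const ranked = rankIdeas([
  { name: "Behind-the-scenes reel", relevance: 3, fit: 3, effort: 1 },
  { name: "Customer Q&A thread", relevance: 5, fit: 4, effort: 2 },
]);
// ranked[0] is the recommended next post.
```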
Conversation flow
The diagram below documents the full conversation flow, including timeout handling, platform validation, fallback logic, a confirmation step, and edit routing — designed to guide users through a structured content planning session.
Behind the scenes
The Voiceflow canvas combines structured conversation logic with LLM-powered evaluation — and knowing where to use each was the core design challenge.
Structured logic (designed by me)
The intake flow — questions, capture blocks, platform validation, timeout handling, retry loops, and the confirmation step — is all hand-built logic. I used a JavaScript array to validate platform inputs rather than relying on the LLM to interpret them, which keeps that part of the flow predictable and testable.
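A validation check along those lines might look like the sketch below. The supported-platform list, function name, and tokenizing rules are illustrative assumptions, not the project's actual code; the point is that a fixed array plus string matching stays deterministic and testable where an LLM interpretation would not.

```javascript
// Hypothetical sketch of array-based platform validation (names are assumptions).
const SUPPORTED = ["instagram", "tiktok", "linkedin", "youtube", "newsletter"];

function validatePlatforms(rawInput) {
  // Split a free-text answer like "Instagram, TikTok and LinkedIn" into tokens.
  const tokens = rawInput
    .toLowerCase()
    .split(/[,\/\s]+/)
    .filter((t) => t && t !== "and");
  const valid = tokens.filter((t) => SUPPORTED.includes(t));
  const invalid = tokens.filter((t) => !SUPPORTED.includes(t));
  // ok only when every token matched — otherwise the flow can route to a retry.
  return { valid, invalid, ok: valid.length > 0 && invalid.length === 0 };
}
```

Because the check is pure logic, a failed match can deterministically trigger the retry loop instead of silently accepting a platform the bot cannot plan for.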
LLM behavior (prompt-governed)
The actual idea evaluation and ranking at the end of the flow is LLM-powered, governed by a structured prompt that uses the captured variables as context — goal, platform, audience, and tone all feed directly into the evaluation.
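A prompt assembled from those captured variables might be structured like this. The wording, variable names, and output instructions are a hedged reconstruction of the approach, not the production prompt:

```javascript
// Illustrative only: assembling captured intake variables into the evaluation
// prompt. Field names (goal, platforms, audience, tone, ideas) are assumptions.
function buildEvaluationPrompt({ goal, platforms, audience, tone, ideas }) {
  return [
    "You are a content strategist helping a small team decide what to post next.",
    `Goal: ${goal}`,
    `Platforms: ${platforms.join(", ")}`,
    `Audience: ${audience}`,
    `Tone: ${tone}`,
    "Ideas:",
    ...ideas.map((idea, i) => `${i + 1}. ${idea}`),
    "Score each idea on relevance, effort, and channel fit,",
    "then return a ranked list with a one-line rationale per idea.",
  ].join("\n");
}
```

Keeping the prompt template in code means every run gets the same structure, and only the context the user actually provided varies.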
Voiceflow canvas

What I learned
This project taught me to think about AI and creativity differently. Creative teams aren't short on ideas — they're short on time, clarity, and decision support. But that was just the beginning.
Building the conversation flow forced me to think beyond the happy path. Designing for timeouts, retries, fallback prompts, and graceful defaults taught me that good conversation design is mostly about what happens when things go wrong. The moments where a user hesitates, misunderstands, or gives unexpected input — those are where the real design work lives.
I also learned the importance of knowing when not to use an LLM. The intake flow — validation logic, retry loops, confirmation steps — is all structured, hand-built logic. Keeping that predictable and testable made the LLM-powered evaluation at the end more trustworthy, not less. The two approaches work better together when you're deliberate about the boundary between them.
Most importantly: I learned by doing. Voiceflow, Lucidchart, flow documentation, technical annotation — none of this was in my toolkit six months ago. This project is proof that the right problem will teach you what you need to know.
Try the bot
Demo
