Strategies and prompting
Overview
This section outlines a framework for using AI in software development, highlighting when to use structured workflows versus exploratory approaches. It also covers prompting patterns for feature planning, UI prototyping, and test generation, helping you choose the right strategy based on problem clarity and requirement maturity.
Strategies for AI-assisted development
This section presents two high-level strategies for AI-assisted development: test-driven development (TDD) with an outline strategy, and manual exploration. For task-specific workflows and prompt templates (feature planning, UI prototyping, test generation), see Prompting strategies.
Test-driven development and outline strategy
Use this strategy when the problem and solution are well defined, and you want AI to write implementations guided by tests. In this approach, you create tests first to define the expected outcome, then ask the AI agent to implement code that passes those tests.
When to use it
Best for: Problems with a clear path to resolution or understanding, solutions that you can break down into smaller chunks (functions, interfaces, or flows), and projects that require high test coverage.
How this strategy works
- Ask the AI agent to create tests based on your expected outcome, and confirm that the tests fail. Note: Add an explicit rule to your prompt stating that you are using test-driven development, to prevent the AI from creating mocks.
- Review the tests and refactor them if they do not match your needs.
- Commit the tests.
- Ask the AI agent to write code that makes the tests pass.
- Review and refactor the code as needed.
- Commit the changes.
Examples
- Configuration page — You are a React expert and ask AI to generate a configuration page with an input and a dropdown.
- CRUD endpoint — You are experienced in backend development and ask AI to create a generic CRUD endpoint.
Manual exploration strategy
Use this strategy when the problem is unclear or involves unfamiliar tools and you need to explore before moving into structured development. In this approach, you explore the problem manually before planning a solution and committing the results to reduce uncertainty. Once you establish clarity, you can provide the AI agent with more specific rules or prompts.
When to use it
Best for: Problems that involve uncertainty, unfamiliar tools, protocols, or behaviors, or projects that do not have a clear definition of success.
How this strategy works
We recommend using Plan Mode for this strategy if it is available in your agent.
- Ask the AI agent to read relevant information about the issue you want to resolve. Use non-coding modes to ensure it only gathers context.
- Create a plan with the AI agent to approach the issue.
- Verify that the AI agent does not make incorrect assumptions.
- Refine the plan before implementation. Refining the plan early gives you higher leverage, because improving the plan itself is more effective than revising generated code later.
- Ask the AI agent to implement the solution.
- Review and refactor the code as needed.
- Iterate the last two steps as required.
- Commit the changes.
Examples
- OAuth exploration — You need to learn how OAuth works, so you manually work with client IDs, secrets, and tokens, and learn to distinguish between authorization and authentication.
- Asynchronous workflow — You are handling a complex asynchronous workflow involving multiple systems. You first build a simple end-to-end thread manually, with AI support, and later introduce more automation.
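For the OAuth example, the first manual step might be assembling the authorization URL yourself to see which parameters the protocol actually requires. The endpoint, client ID, and redirect URI below are placeholders, not real values:

```javascript
// Build an OAuth 2.0 authorization-code request URL by hand.
// baseUrl, clientId, and redirectUri are hypothetical placeholders.
function buildAuthorizeUrl({ baseUrl, clientId, redirectUri, scope, state }) {
  const url = new URL(baseUrl);
  url.searchParams.set("response_type", "code"); // authorization-code flow
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("redirect_uri", redirectUri);
  url.searchParams.set("scope", scope);
  url.searchParams.set("state", state); // CSRF protection
  return url.toString();
}

const authorizeUrl = buildAuthorizeUrl({
  baseUrl: "https://auth.example.com/oauth/authorize",
  clientId: "my-client-id",
  redirectUri: "https://app.example.com/callback",
  scope: "contacts.read",
  state: "random-opaque-value",
});
```

Constructing this request manually makes the authorization step (obtaining consent) concrete, after which the token exchange (authentication of your client) is easier to reason about.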
Prompting strategies
Use the following prompting patterns to guide AI agents through planning, implementation, and refinement tasks.
Feature planning & implementation
Use this strategy to generate feature ideas and implementation plans from requirements or user stories.
When to use it
Best for: Known features, prototypes, learning frameworks, rapid iteration
Not ideal for: Complex business logic, security-sensitive features
How this strategy works
1. Ideation phase — Create a prompt to brainstorm approaches. Review available modes in your AI agent and select the mode that best fits the task. Prompt structure:
I'm planning a [feature type] for [target audience].
Context:
- [Technical implementation options]
- [Business goals, user needs, constraints]
Help me brainstorm different approaches and considerations.
Example:
I'm planning an integration to Braze for our Contentful space for marketing teams.
Context:
- One option is to show connected entries in the Page location of a Contentful app. Another option is to show them in the Sidebar location.
- Marketing teams need to sync page-level data directly with Braze, manage lead tracking efficiently, and ensure consistency between Contentful entries and Braze content blocks.
Help me brainstorm different approaches and considerations, and compare with existing apps such as the Bulk Edit app.
2. Implementation phase — Use the agent to create components and basic wiring. Prompt structure:
I want you to create a skeleton implementation for [feature detail] with the following structure:
Requirements:
- [Mention successful implementations for reference]
- [Detail common patterns and unique approaches]
- [Provide skeleton implementation with empty components and placeholders]
- [Other requirements]
Context: [detailed business context, constraints, goals]
Example:
I want you to create a skeleton implementation for a new Page location inside my Contentful app with a simple initial table that will show all the existing connected entries in Braze.
Requirements:
- Analyze successful implementations such as Bulk Edit app for reference
- Follow the TDD approach, first building the suite of tests and then the skeleton implementation.
- Keep it simple, as a starting point, with empty placeholder functions and empty components.
- The table should initially have the following columns: the entry display name, status, total of fields, and last updated date
Context:
- This is going to be a new app inside Contentful platform to integrate with Braze
- Each field from each entry will be a content block in Braze
- Our goal is to be able to see which entries have fields that are connected to a content block in Braze
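A skeleton of the kind this prompt asks for might start with empty placeholders that only shape the table data. The entry fields and column names below follow the example requirements, but the exact data shape is an assumption:

```javascript
// Placeholder: a real implementation would fetch connected entries from Braze.
function fetchConnectedEntries() {
  return []; // empty until the integration is wired up
}

// Map an entry to the four table columns named in the requirements:
// display name, status, total of fields, and last updated date.
function toTableRow(entry) {
  return {
    displayName: entry.displayName,
    status: entry.status,
    totalFields: entry.fields.length,
    lastUpdated: entry.updatedAt,
  };
}

const rows = fetchConnectedEntries().map(toTableRow);
```

Empty placeholders like `fetchConnectedEntries` keep the first review focused on structure rather than integration details.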
3. Refining phase — Refine, validate, and adapt AI outputs to fit your specific context and constraints. Implement changes incrementally, in small, reversible batches, while following Recommendations. Have experts review the outputs against product requirements and quality standards, then customize the suggestions to align with your business needs.
UI Prototyping
AI can generate UI code from screenshots or UI descriptions. Use this approach to quickly prototype single screens and convert designs into functional code.
When to use it
Best for: Single-screen prototypes, design-to-code conversion, rapid UI iteration, component generation.
Not ideal for: Complex app architectures, full application development, advanced interactions.
How this strategy works
1. Prepare the design - Make sure to:
- Capture a clear screenshot of the design.
- Ensure that all text is readable and colors are visible.
- Include any interactive states, such as hover and focus, in separate screenshots.
2. Generate a single screen with a prompt - Use the following prompt structure to generate code:
Create UI code for this [screen type] based on the provided [Mockup screenshot or UI description].
Requirements:
- [Component structure]
- [Styling approach, such as CSS modules or styled-components]
- [Responsive behavior]
- [Interactive elements]
Context: [project constraints, design system, integration needs]
Example:
Create UI code for this configuration screen based on the provided mockup screenshot.
Requirements:
- Form validation and state management
- Contentful Forma 36 for styling
- No responsiveness required
- Add the Multiselect component from Forma 36 as an initial skeleton component (no fetch implementation)
Context:
- This is the configuration screen for a new Contentful app
- Follow our design system color palette
- Match the provided mockup design exactly
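Behind such a configuration screen, the form-validation requirement might reduce to a small pure function that the generated component calls. The field names here (`apiKey`, `spaceId`) are hypothetical, not taken from a real design:

```javascript
// Validate the configuration form's state; returns a map of field errors.
// An empty result object means the form is valid.
function validateConfig(values) {
  const errors = {};
  if (!values.apiKey || values.apiKey.trim() === "") {
    errors.apiKey = "API key is required";
  }
  if (!/^[a-z0-9]+$/.test(values.spaceId || "")) {
    errors.spaceId = "Space ID must be lowercase alphanumeric";
  }
  return errors;
}

const errors = validateConfig({ apiKey: "", spaceId: "abc123" });
```

Keeping validation out of the component makes the AI-generated UI easier to review: the markup and the rules can be checked independently.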
3. Review and iterate - Use the following prompt to refine components:
Refine this UI component for [feature] with:
- Accessibility improvements
- Performance optimizations
- Integration with [specific systems]
Context: [current project state and requirements]
Test generation
AI can generate unit and integration tests from code or specifications. This helps developers save time and improve coverage, but results should be reviewed carefully to ensure accuracy and reliability.
When to use it
Best for: Unit tests, simple integration tests, and simple mocking.
Not ideal for: End-to-end tests and complex integration tests.
Unit testing
1. Define prompt requirements - Include the following in your prompt:
- Function signature - Provide the full function signature and a concise description of expected behavior, including input and output types.
- Happy-path scenarios - Request tests that validate normal or expected inputs and outputs.
- Edge cases and error conditions - Include boundary conditions, invalid inputs, and expected error handling.
2. Implement prompt structure - Use this template as a starting point:
Generate unit tests for the following function using Vitest:
function calculateTotal(items) {
  // Implementation details...
}
Requirements:
- Test happy path with valid inputs.
- Test edge cases (empty array, zero tax rate).
- Test error conditions (invalid inputs).
- Use descriptive test names.
See the rules file.
Integration testing
1. Define prompt requirements - Include the following in your prompt:
- Components - Specify which components interact and their roles.
- External dependencies - Identify external services, APIs, or third-party SDKs to mock.
- Data flow and state management - Describe how data moves between components and how state should change.
- User workflows - Define end-to-end user scenarios to verify.
2. Implement prompt structure - Use the example below and replace the component names with your app's exact identifiers:
Generate integration tests for the connected fields workflow in the page location.
Components involved:
- ConnectedEntriesTable
- PageLocation
- ConnectedFieldsModal
- SDKNotifier
Test scenarios needed:
- Disconnect single field - Disconnect one field from an entry and validate UI and API calls.
- Bulk disconnect - Disconnect all fields from an entry and validate bulk behavior.
- Field validation and errors - Validate field validation and error handling behavior.
- UI state updates - Confirm UI state updates when fields connect or disconnect.
- Entry removal - Confirm entry removal when all fields disconnect.
- Bulk selection operations - Validate field selection and bulk operations.
See the rules file.