Recommendations for AI app building
These guidelines support the development of Contentful apps with AI. They provide practical strategies for working with AI assistants, balancing automation with human oversight, and ensuring code quality.
Select the right AI models and tools
Be proactive in exploring and selecting AI models and tools that best fit your needs. Most AI development agents include an auto-select model feature that automatically chooses the most suitable model for your task — a good default for general use. If you require more control or specialized capabilities, you can manually select a specific model to optimize performance for particular coding or content-generation tasks. Different models excel in different areas, so thoughtful selection improves productivity, efficiency, and overall solution quality.
Provide AI with the right context
Providing the right context is essential for getting accurate and useful results from an AI assistant. By combining clear prompting, comprehensive repository context, and the right tool configuration, you enable the AI assistant to act as an effective development partner—improving accuracy, reducing duplication, and strengthening your overall foundation for development.
Clear prompting
Be clear and specific with the instructions you provide. General prompts often lead to vague or incomplete results because the assistant fills in gaps by making assumptions. Instead, include details about the functionality you need, the format you expect, and any constraints that apply. When possible, provide examples. The more precise your instructions, the more reliable the output.
You can also use meta prompting, a technique where you ask the AI to generate the optimal prompt for a specific coding or engineering task instead of writing it directly.
For example, instead of prompting:
"Write a TypeScript function to validate user input"
you could ask:
"Write an effective prompt that would instruct an AI to generate clean, type-safe TypeScript code for validating user input, handling edge cases, and including basic unit tests."
This technique helps engineers clarify intent, define constraints like performance, typing, or code style, and leverage the model’s understanding of effective prompt design. The resulting meta-generated prompt can then be reused in code generation workflows, documentation systems, or developer automation pipelines to produce more consistent, maintainable results.
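As an illustration, a prompt produced this way might yield code along the following lines. This is a sketch, not a definitive implementation; the specific validation rules (minimum username length, the email pattern) are assumptions chosen for the example.

```typescript
// Sketch of the kind of type-safe validation such a prompt might produce.
// The rules below (username length, email shape) are example assumptions.
interface UserInput {
  username: string;
  email: string;
}

interface ValidationResult {
  valid: boolean;
  errors: string[];
}

function validateUserInput(input: UserInput): ValidationResult {
  const errors: string[] = [];

  // Edge case: reject empty or whitespace-only usernames.
  if (input.username.trim().length < 3) {
    errors.push("username must be at least 3 characters");
  }

  // A deliberately simple email check; real apps need stricter validation.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    errors.push("email is not valid");
  }

  return { valid: errors.length === 0, errors };
}
```

Collecting all errors in an array, rather than returning on the first failure, makes the result easier to surface in a form UI.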
Repository context
When building a new app, download the Contentful Apps repository, add your app there and give your AI assistant access to all apps within the repository. This allows it to reuse existing components instead of reimplementing new ones. You can reinforce this by adding rules that prioritize reuse and consistency across applications. This not only saves time but also ensures the AI follows established best practices from other projects.
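A rules file for this purpose might look like the sketch below. The file name varies by assistant (for example, `.cursorrules` or `AGENTS.md`), and the rule wording here is an illustrative assumption rather than a prescribed set:

```text
# Illustrative assistant rules; file name varies by tool (e.g. .cursorrules, AGENTS.md)
- Before writing a new component, search the other apps in this repository for an existing one to reuse.
- Match the folder structure, naming conventions, and shared utilities used by neighboring apps.
- Prefer extending a shared component over duplicating it.
```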
Tool and MCP server configuration
To make your AI assistant more capable, connect it to the right tools and MCP (Model Context Protocol) servers. MCP servers extend the assistant’s functionality by giving it structured access to external systems, APIs, or project data. Configure your environment so the AI can securely interact with relevant APIs, databases, or development utilities through these connections. Only enable tools that are necessary for the current task or workspace—this minimizes noise and improves reliability.
For example, connecting to the Contentful MCP server lets the AI assistant interact directly with your Contentful space to read entries, manage content types, or validate schema changes. This gives the assistant real-time awareness of your content model, enabling it to generate accurate code, automate tasks, and ensure consistency across your projects.
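As a sketch, many MCP-capable assistants are configured with a JSON block like the one below. The package name, command, and environment variable names shown here are assumptions; check your assistant's and Contentful's documentation for the actual values:

```json
{
  "mcpServers": {
    "contentful": {
      "command": "npx",
      "args": ["-y", "@contentful/mcp-server"],
      "env": {
        "CONTENTFUL_MANAGEMENT_TOKEN": "<your-cma-token>",
        "CONTENTFUL_SPACE_ID": "<your-space-id>"
      }
    }
  }
}
```

Keep tokens out of version control; most tools let you reference environment variables instead of inlining secrets.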
Validate AI outputs
Review every AI suggestion thoroughly to ensure you understand what you are building. If you cannot read the code or grasp the underlying architecture, the result is difficult to maintain. AI can’t close knowledge gaps on its own, and even if the generated code appears correct, you might not know why it works. This makes extending or debugging the solution harder.
AI is most effective when you understand the problem, the solution, and the patterns you can generalize. Build this knowledge by studying the problem and strengthening your technical skills, rather than treating AI as a black box.
To keep reviews manageable, avoid large prompts that generate code across many files at once. Instead, work in small increments: focused changes reduce reviewer fatigue and make it easier to catch mistakes before they compound.
Commit and test philosophy
Commit changes whenever you reach a stable working point. Many AI assistants provide restore points that help roll back experiments. Consider adhering to the following best practices:
- Revert early — Catch errors early by reverting or redirecting when AI diverges from expectations.
- Write tests promptly — Write or finalize tests as soon as functionality is stable. When confidence is high, ask AI to generate tests before implementation.
- Document while fresh — Add documentation or comments while the context is fresh to maintain clarity throughout the development process.
Strategies for AI-assisted development
This article presents two strategies for AI-assisted development: test-driven development (TDD) with an outline strategy, and manual exploration.
Test-driven development and outline strategy
Use this strategy when the problem and solution are well defined, and you want AI to write implementations guided by tests. In this approach, you create tests first to define the expected outcome, then ask the AI agent to implement code that passes those tests.
How this strategy works
- Ask the AI agent to create tests based on your expected outcome. Confirm that the tests fail. Note: Add an explicit rule in your prompt that you are using test-driven development to prevent the AI from creating mocks.
- Review the tests and refactor them if they do not match your needs.
- Commit the tests.
- Ask the AI agent to write code that makes the tests pass.
- Review and refactor the code as needed.
- Commit the changes.
When to use this strategy
- Clear problem — You have a clear understanding of the problem at hand.
- High-confidence solution — You can confidently break the system into functions, interfaces, or flows.
- Test-first iteration — You want AI to generate implementations and iterate on failing tests.
Examples
- Configuration page — You are a React expert and ask AI to generate a configuration page with an input and a dropdown.
- CRUD endpoint — You are experienced in backend development and ask AI to create a generic CRUD endpoint.
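For the second example, the output you might ask for could resemble this sketch. The `CrudStore` name and shape are assumptions; a real endpoint would expose these operations through your HTTP framework and persist data rather than keeping it in memory.

```typescript
// A minimal generic CRUD layer; names and shapes are illustrative assumptions.
class CrudStore<T extends { id: string }> {
  private items = new Map<string, T>();

  create(item: T): T {
    this.items.set(item.id, item);
    return item;
  }

  read(id: string): T | undefined {
    return this.items.get(id);
  }

  update(id: string, patch: Partial<T>): T | undefined {
    const existing = this.items.get(id);
    if (!existing) return undefined;
    // Spread the patch over the existing item, but never let it change the id.
    const updated = { ...existing, ...patch, id };
    this.items.set(id, updated);
    return updated;
  }

  delete(id: string): boolean {
    return this.items.delete(id);
  }

  list(): T[] {
    return [...this.items.values()];
  }
}
```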
Manual exploration strategy
Use this strategy when the problem is unclear or involves unfamiliar tools, and you need to explore before moving into structured development. In this approach, you explore the problem manually before planning a solution and committing the results.
How this strategy works
- Ask the AI agent to read relevant information about the issue you want to resolve. Use non-coding modes to ensure it only gathers context.
- Create a plan with the AI agent to approach the issue.
- Verify that the AI agent does not make incorrect assumptions.
- Ask the AI agent to implement the solution.
- Review and refactor the code as needed.
- Iterate the last two steps as required.
- Commit the changes.
When to use this strategy
- Uncertainty — The problem involves ambiguity or unknown implementations.
- Unfamiliar tools — You are working with unfamiliar tools, protocols, or behaviors.
- Undefined success — You have not yet established a clear definition of success in your project.
This approach allows you to study the problem, reduce uncertainty, and build foundational understanding. Once you establish clarity, you can provide the AI agent with more specific rules or prompts.
Examples
- OAuth exploration — You need to learn how OAuth works, so you manually work through client IDs, secrets, and token exchange, and learn to distinguish authorization from authentication.
- Asynchronous workflow — You are handling a complex asynchronous workflow involving multiple systems. You first build a simple end-to-end thread manually, with AI support, and later introduce more automation.
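In the OAuth example, one of the manual steps might look like the sketch below: building an authorization request by hand to see what each parameter does. The endpoint, client ID, and scope are placeholder assumptions, not a real provider configuration.

```typescript
// Building an OAuth 2.0 authorization URL by hand to understand the flow.
// All values passed in are placeholders for this exploration.
function buildAuthorizationUrl(
  authEndpoint: string,
  clientId: string,
  redirectUri: string,
  state: string
): string {
  const params = new URLSearchParams({
    response_type: "code",     // authorization code flow
    client_id: clientId,       // identifies the app (not a secret)
    redirect_uri: redirectUri, // where the provider sends the user back
    scope: "read",             // the access being requested
    state,                     // CSRF protection, verified on return
  });
  return `${authEndpoint}?${params.toString()}`;
}
```

Writing this out manually makes the authorization step concrete before you ask an AI agent to automate the full flow.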