AI Use Cases
Table of contents
- Table of contents
- Recommended AI use cases
- When not to use AI
- AI as a brainstorming partner
- Ask mode vs agent mode
- Feature planning use case
- Component Scaffolding use case
- UI Prototyping use case
- Test generation use case
Recommended AI use cases
This section covers the most common AI use cases when building apps. We recommend using AI for the following tasks:
- Feature planning — Generate ideas for features based on requirements or user stories.
- Component scaffolding — Create components from comments or short descriptions.
- UI prototyping — Generate code from Figma images, polish CSS styles, or adjust layouts to enhance the user experience.
- Test generation — Write tests from well-defined specifications or acceptance criteria.
When not to use AI
We don’t recommend using AI for the following tasks:
- Unknown logic — Implementing logic that is completely unknown to you.
- Abstract questions — Answering abstract questions that lack sufficient context.
- Full architecture — Fully architecting an application from scratch.
- Large-scale refactors — Large-scale refactors without human supervision.
- Sensitive configuration — Configuration changes that require explicit review and compliance, such as setting up API keys or credentials.
- Product decisions — Making product decisions.
- New app scaffolding — Creating the scaffolding for a new application. The Create Contentful App CLI already handles app scaffolding in a way that aligns with what the App Framework expects.
AI as a brainstorming partner
This documentation helps teams use AI for ideation, competitive analysis, and strategic planning. AI acts as an intelligent brainstorming partner that enhances creative thinking and provides structured analysis to transform complex requirements into actionable feature plans.
Ask mode vs agent mode
Ask mode — interactive brainstorming
Ask mode is like having a conversation with an expert — you ask questions, receive immediate answers, and follow up as needed. It is ideal for exploring ideas, getting quick insights, and having back-and-forth discussions. You remain in control of the conversation and can steer it in any direction.
Agent mode — autonomous research
Agent mode can run autonomous tasks and return compiled outputs. It is suitable when you need deep analysis, competitive research, or complete documentation. Use agent mode for multi-step research workflows that collect, synthesize, and summarize information.
Feature planning use case
1. Ideation phase
Use ask mode for brainstorming
Prompt structure:
I'm planning a [feature type] for [target audience].
Context:
- [Technical implementation options]
- [Business goals, user needs, constraints]
Help me brainstorm different approaches and considerations.
Example:
I'm planning an integration with Braze for our Contentful space for marketing teams.
Context:
- One option is to show connected entries in the Page location of a Contentful app. Another option is to show them in the Sidebar location.
- Marketing teams need to sync page-level data directly with Braze, manage lead tracking efficiently, and ensure consistency between Contentful entries and Braze content blocks.
Help me brainstorm different approaches and considerations, and compare with existing apps such as the Bulk Edit app.
2. Implementation phase
Use agent mode for research and implementation
For detailed implementation guidance, including prompt structures and examples, see the Component Scaffolding use case section. This phase covers how to ask AI to generate skeleton implementations, empty components, and basic project structure.
3. Refining phase
Human review and manual actions
After receiving AI-generated research and implementation recommendations, humans must refine, validate, and adapt the AI outputs to your specific context and constraints.
Key actions for this phase are:
- Take incremental steps — Implement changes in small, reversible batches to reduce risk and make rollbacks easier.
- Follow AI best practices — Consult the Recommendations for AI app building docs and apply prompt hygiene, safety checks, and verification steps.
- Review and validate recommendations — Have subject-matter experts verify AI outputs against product requirements and quality standards.
- Customize suggestions — Adapt AI-generated suggestions to your specific business requirements and constraints.
Component Scaffolding use case
AI generates skeleton implementations, empty components, and basic project structure, allowing developers to focus on business logic rather than boilerplate setup.
When to use skeleton generation
Best for: known features, prototypes, learning frameworks, rapid iteration.
Not recommended for: complex business logic, security-sensitive features.
Skeleton generation process
1. Using agent mode for initial component scaffolding
Prompt structure:
I want you to create a skeleton implementation for [feature detail] with the following structure:
Requirements:
- [Mention successful implementations for reference]
- [Detail common patterns and unique approaches]
- [Provide skeleton implementation with empty components and placeholders]
- [Other requirements]
Context: [detailed business context, constraints, goals]
Example:
I want you to create a skeleton implementation for a new Page location inside my Contentful app with a simple initial table that will show all the existing connected entries in Braze.
Requirements:
- Analyze successful implementations such as Bulk Edit app for reference
- Follow a TDD approach: first build the suite of tests, then the skeleton implementation.
- Keep it simple as a starting point, with empty placeholder functions and empty components.
- The table should initially have the following columns: entry display name, status, total number of fields, and last updated date.
Context:
- This is going to be a new app inside the Contentful platform to integrate with Braze
- Each field from each entry will be a content block in Braze
- Our goal is to be able to see which entries have fields that are connected to a content block in Braze
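A skeleton generated from a prompt like this might look roughly as follows. This is a minimal sketch: the names (`ConnectedEntry`, `getColumns`, `fetchConnectedEntries`) are hypothetical placeholders, not the app's real API.

```typescript
// Hypothetical skeleton for the Page location table; all names are illustrative.
interface ConnectedEntry {
  displayName: string; // entry display name
  status: string;      // e.g. 'published' or 'draft'
  fieldCount: number;  // total fields connected to Braze content blocks
  updatedAt: string;   // last updated date (ISO string)
}

// Column headers for the initial table described in the prompt.
function getColumns(): string[] {
  return ['Display name', 'Status', 'Fields', 'Last updated'];
}

// Placeholder: data fetching is intentionally left unimplemented at this stage.
async function fetchConnectedEntries(): Promise<ConnectedEntry[]> {
  // TODO: query the app backend or Braze for connected entries.
  return [];
}
```

The empty `fetchConnectedEntries` and TODO comment give the developer an obvious place to add the real business logic later.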
2. Integration wiring
Prompt:
Wire up skeleton components for [feature] with:
- Navigation between components
- Error handling structure
- Loading states
Context: [current project state]
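The loading and error structure such a prompt asks for can be sketched in a framework-agnostic way. The `LoadState` type and function names below are illustrative assumptions, not a prescribed API:

```typescript
// Hypothetical discriminated union covering the three UI states a wired
// skeleton typically needs: loading, error, and ready-with-data.
type LoadState<T> =
  | { status: 'loading' }
  | { status: 'error'; message: string }
  | { status: 'ready'; data: T };

// Wrap an async fetch in a LoadState so components can render each branch.
async function toLoadState<T>(fetcher: () => Promise<T>): Promise<LoadState<T>> {
  try {
    return { status: 'ready', data: await fetcher() };
  } catch (e) {
    return { status: 'error', message: e instanceof Error ? e.message : String(e) };
  }
}
```

Components can then switch on `status` to show a spinner, an error note, or the data, which keeps error handling consistent across the skeleton.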
Best practices
- Start simple — Basic structure first; add complexity incrementally.
- Include placeholders — TODO comments, mock data, clear interfaces.
- Follow conventions — Consistent naming, framework best practices.
- Plan for growth — Extensible structure, configuration options.
UI Prototyping use case
AI generates UI code from Figma screenshots or UI descriptions. Developers can use this to quickly prototype single screens and convert designs into functional code. Work on one screen at a time to improve results and keep implementations manageable.
When to use UI prototyping
Best for: Single-screen prototypes, design-to-code conversion, rapid UI iteration, component generation.
Not recommended for: Complex app architectures, full application development, advanced interactions.
UI prototyping process
1. Prepare the design
- Take a clear screenshot of your mockup design.
- Ensure that all text is readable and colors are visible.
- Include any interactive states, such as hover and focus, in separate screenshots.
2. Generate a single screen in agent mode
Use the following prompt structure to generate code:
Create UI code for this [screen type] based on the provided [Mockup screenshot or UI description].
Requirements:
- [Component structure]
- [Styling approach, such as CSS modules or styled-components]
- [Responsive behavior]
- [Interactive elements]
Context: [project constraints, design system, integration needs]
Example:
Create UI code for this configuration screen based on the provided mockup screenshot.
Requirements:
- Form validation and state management
- Contentful Forma 36 for styling
- No responsiveness required
- Add the Multiselect component from Forma 36 as an initial skeleton component (no fetch implementation)
Context:
- This is the configuration screen for a new Contentful app
- Follow our design system color palette
- Match the provided mockup design exactly
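The form validation requested in the example can start as a small plain helper before any UI library is wired in. The shape below is a sketch; `ConfigFormState` and its fields are hypothetical, based on the configuration screen described above:

```typescript
// Hypothetical form state for the configuration screen; field names are assumed.
interface ConfigFormState {
  apiKey: string;
  selectedContentTypes: string[];
}

// Return a list of human-readable errors; an empty list means the form is valid.
function validateConfig(form: ConfigFormState): string[] {
  const errors: string[] = [];
  if (form.apiKey.trim() === '') {
    errors.push('API key is required.');
  }
  if (form.selectedContentTypes.length === 0) {
    errors.push('Select at least one content type.');
  }
  return errors;
}
```

Keeping validation as a pure function like this makes it straightforward to unit test independently of the rendered components.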
3. Review and iterate
Use the following prompt to refine components:
Refine this UI component for [feature] with:
- Accessibility improvements
- Performance optimizations
- Integration with [specific systems]
Context: [current project state and requirements]
Test generation use case
Generate tests with AI
AI can generate unit and integration tests from code or specifications. This helps developers save time and improve coverage, but results should be reviewed carefully to ensure accuracy and reliability.
When to use AI testing
Best for: unit tests, simple integration tests, simple mocking.
Not recommended for: end-to-end tests and complex integration tests.
Dependencies and development patterns
When you request test generation, you must specify the following:
- Testing framework and version - Specify the framework (for example, Jest, Mocha, Vitest), the exact version, and any required plugins or extensions.
- Assertion libraries - Specify the assertion library (for example, Chai or built-in Jest assertions) and any custom assertion helpers.
- Mocking libraries - Specify mocking tools (for example, Jest mocks, Sinon, MSW, Nock) and the recommended approach for API and database mocking.
- Test utilities - Specify utilities such as React Testing Library, Enzyme, custom test helpers, and test data factories.
- Test structure and development patterns - Specify preferred patterns (for example, Arrange-Act-Assert, TDD) and file organization.
- Examples - Provide a unit test example and an integration test example.
Generating tests as a starting point
We recommend treating AI-generated tests as starting points rather than final solutions. Review them thoroughly for coverage and accuracy.
Common issues to fix
- Incomplete mocking - Add missing dependencies and realistic mock responses.
- Missing edge cases - Add boundary conditions and expected error scenarios.
- Context understanding - AI lacks deep understanding of your domain logic.
AI testing process
Unit test generation
Prompt specifications
Specify the prompt requirements for unit test generation, including at minimum the following:
- Function signature - Provide the full function signature and a concise description of expected behavior, including input and output types.
- Happy-path scenarios - Request tests that validate normal or expected inputs and outputs.
- Edge cases and error conditions - Include boundary conditions, invalid inputs, and expected error handling.
Example prompt structure
Use this template as a starting point:
```
Generate unit tests for the following function using Vitest:

function calculateTotal(items) {
  // Implementation details...
}

Requirements:
- Test happy path with valid inputs.
- Test edge cases (empty array, zero tax rate).
- Test error conditions (invalid inputs).
- Use descriptive test names.

See the rules file.
```
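The template deliberately elides the function body. For illustration only, assume a simple implementation like the one below; the signature, `LineItem` type, and behavior are assumptions, not the real function:

```typescript
// Hypothetical implementation of calculateTotal, assumed only so the
// requested tests (happy path, edge cases, error conditions) have a target.
interface LineItem {
  price: number;
  quantity: number;
}

function calculateTotal(items: LineItem[], taxRate: number = 0): number {
  if (!Array.isArray(items)) {
    throw new TypeError('items must be an array');
  }
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  return subtotal * (1 + taxRate);
}
```

Against this assumed behavior, the generated tests should cover the happy path (valid items plus tax), the edge cases from the template (empty array totals zero, zero tax rate returns the subtotal), and the error condition (non-array input throws).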
Integration test generation
Prompt specifications
Specify the prompt requirements for integration tests, including at minimum the following:
- Components - Specify which components interact and their roles.
- External dependencies - Identify external services, APIs, or third-party SDKs to mock.
- Data flow and state management - Describe how data moves between components and how state should change.
- User workflows - Define end-to-end user scenarios to verify.
Example prompt structure
Use the example below and replace the component names with your app's exact identifiers:
```
Generate integration tests for the connected fields workflow in the page location.

Components involved:
- ConnectedEntriesTable
- PageLocation
- ConnectedFieldsModal
- SDKNotifier

Test scenarios needed:
- Disconnect single field — Disconnect one field from an entry and validate UI and API calls.
- Bulk disconnect — Disconnect all fields from an entry and validate bulk behavior.
- Field validation and errors — Validate field validation and error handling behavior.
- UI state updates — Confirm UI state updates when fields connect or disconnect.
- Entry removal — Confirm entry removal when all fields disconnect.
- Bulk selection operations — Validate field selection and bulk operations.

See the rules file.
```
Agents rules example for testing
The following is a canonical set of rules that teams can adopt as a starting point.
# Unit testing rules example
## Testing framework and libraries
- Use Vitest as the primary testing framework.
- Use TypeScript for all test files.
- Use descriptive test names that explain the behavior being tested.
- Use React Testing Library.
## Test structure and development patterns
- Group related tests using `describe` blocks.
- Use `beforeEach` and `afterEach` hooks for setup and cleanup.
- Create test data factories for complex objects.
- Mock external dependencies using `vi.fn()` and `vi.mock()`.
- Follow Arrange-Act-Assert pattern for all tests.
- Use the TDD approach: create the suite with all use cases first and then the implementation.
## Mocking guidelines
- Mock external API calls and services.
- Use `vi.spyOn()` for spying on methods.
- Create mock implementations for complex dependencies.
## Test utilities
- Create factory functions with sensible defaults.
- Allow parameter overrides for specific test cases.
- Use TypeScript interfaces for type safety.
- Include both valid and invalid test data.
# Integration testing rules example
## Test scope and boundaries
- Test interactions between multiple components or services.
- Test data flow across system boundaries.
- Test external API integrations and third-party services.
## Integration test patterns
- Test complete user workflows from start to finish.
- Test error handling and rollback scenarios.
- Test performance under realistic data loads.
- Test concurrent operations and race conditions.
- Test system resilience and recovery.
You can also provide concrete testing examples to give the agent a clear, limited scope as a starting point, which helps avoid hallucinations.
This is an example of a unit test:
```tsx
it('selects and deselects content types, showing and removing pills', async () => {
  render(<TestWrapper />);
  const user = userEvent.setup();

  // Wait for content types to be loaded
  const blogPostOption = await screen.findByText('Blog Post');
  await user.click(blogPostOption);
  expect(await screen.findByLabelText('Close')).toBeInTheDocument();

  const closeButton = screen.getByLabelText('Close');
  await user.click(closeButton);
  await waitFor(() => {
    expect(screen.queryByLabelText('Close')).toBeNull();
  });
});
```
This is an example of a simple integration test:
```tsx
it('removes entry from config and table when all fields are disconnected', async () => {
  mockGetEntryConnectedFields.mockResolvedValue([
    { fieldId: 'title', moduleName: 'mod1', updatedAt: '2024-05-01T10:00:00Z' },
    { fieldId: 'description', moduleName: 'mod2', updatedAt: '2024-05-01T10:00:00Z' },
  ]);
  mockRemoveEntryConnectedFields.mockResolvedValue({});
  mockGetConnectedFields.mockResolvedValueOnce({
    'entry-id': [
      { fieldId: 'title', moduleName: 'mod1', updatedAt: '2024-05-01T10:00:00Z' },
      { fieldId: 'description', moduleName: 'mod2', updatedAt: '2024-05-01T10:00:00Z' },
    ],
  });
  // After disconnect: entry is removed
  mockGetConnectedFields.mockResolvedValueOnce({});
  mockCma.entry.getMany = vi.fn().mockResolvedValue({
    items: [
      {
        sys: {
          id: 'entry-id',
          contentType: { sys: { id: 'Fruits' } },
          updatedAt: new Date().toISOString(),
          publishedAt: new Date().toISOString(),
        },
        fields: {
          title: { 'en-US': 'Banana' },
          description: { 'en-US': 'Description value' },
        },
      },
    ],
  });
  mockCma.contentType.get = vi.fn().mockResolvedValue({
    displayField: 'title',
    sys: { id: 'Fruits' },
    fields: [
      { id: 'title', name: 'Title', type: 'Text' },
      { id: 'description', name: 'Description', type: 'Text' },
    ],
  });

  render(<Page />);

  // Table should show the entry
  expect(await screen.findByText('Banana')).toBeInTheDocument();
  expect(screen.getByText('2')).toBeInTheDocument();

  const btn = await screen.findByRole('button', { name: /Manage fields/i });
  fireEvent.click(btn);
  await screen.findByRole('dialog');

  const selectAllCheckbox = screen.getByTestId('select-all-fields') as HTMLInputElement;
  fireEvent.click(selectAllCheckbox);

  const disconnectBtn = screen.getByRole('button', { name: /Disconnect/i });
  fireEvent.click(disconnectBtn);

  // After disconnect, entry should be removed from config and table, and notification shown
  await waitFor(() => {
    expect(mockRemoveEntryConnectedFields).toHaveBeenCalledWith('entry-id');
    expect(mockSdk.notifier.success).toHaveBeenCalledWith('2 fields disconnected successfully.');
    expect(screen.queryByText('Banana')).not.toBeInTheDocument();
    expect(
      screen.getByText(
        'No connected content. Sync entry fields from the entry page sidebar to get started.'
      )
    ).toBeInTheDocument();
  });
});
```