Acceptance Criteria are an effective way to bake quality into User Stories and Features by asking ourselves: “How is this actually going to behave?” But writing useful acceptance criteria takes effort, so the practice is often skipped or not fully utilized.
ChatGPT can make this process easier. However, the quality of the results depends heavily on how you phrase your prompt.
Our First Shot at it:
Response:
Chart Accessibility
- The system must display a chart visualizing the average grades for each meeting within a specified timeframe.
Timeframe Selection
- The user must be able to select or input a timeframe (e.g., weekly, monthly, quarterly) for which the average grades are to be displayed.
Data Accuracy
- The chart must accurately reflect the average grade of each meeting within the selected timeframe.
Trend Identification
- The chart must include visual indicators (e.g., line or bar graph trends) to help identify improving, stable, or deteriorating grade trends over time.
Meeting Details
- Upon selecting a data point on the chart, detailed information about that specific meeting (e.g., date, average grade, number of participants) should be displayed.
User Guidance
- The system should provide tooltips or a help section explaining how to interpret the chart and its trends.
Error Handling
- In cases of no data available for the selected timeframe, the system should display a meaningful message, such as “No data available for the selected period.”
Not bad!
My tip:
Provide more guidance: attach your Definition of Done, and be more specific about the desired format.
A Better Prompt:
Response:
| Index | Acceptance Criteria in Gherkin Syntax |
| --- | --- |
| 1 | Scenario: Basic usage |
| 2 | Scenario: Live update |
| 3 | Scenario: No data |
| 4 | Scenario: Responsive UI |
| 5 | Scenario: Basic explanation (tooltip) |
| 6 | Scenario: Fast response time |
| 7 | Scenario: Analytics event recorded |
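To illustrate, one of these scenarios might be fleshed out in Gherkin roughly like this (a hypothetical sketch — the steps for your story will differ):

```gherkin
Scenario: No data
  Given I am logged in
  And no meetings were graded in the selected timeframe
  When I open the average-grades chart
  Then I see the message "No data available for the selected period."
  And no chart is rendered
```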
This list is more comprehensive and more concrete.
Here’s the Definition of Done I provided:
Definition of Done for User Stories
A User Story is Done only when:
- Design is responsive (it should look good on mobile: elements stack vertically and fit a narrow page)
- It handles edge cases (no data, not logged in, single data point…)
- It does not reduce automated tests coverage
- It works on main browsers: Chrome, Safari, Firefox, Edge, on versions that are up to 1 year old
- It is instrumented for Google Analytics (usage, errors): I can see an event for each of these
- It was added to the “what’s new” page, and feature requesters were notified that it is live
- Response time: Page/data response time should be less than 1 second 95% of the time
- Cost: Should not cost more than $0.01 per 1000 page views
- Tests run successfully on staging
- It is deployed to production
- Sanity tests run successfully on production
TL;DR
To create high-quality acceptance criteria using ChatGPT, provide:
- Your Definition of Done
- The desired format
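Putting it together, a prompt along these lines works well (the wording here is illustrative, not the exact prompt used above):

```text
Write acceptance criteria for the user story below.
Format: a table with two columns, "Index" and "Acceptance Criteria in Gherkin Syntax".
Make sure the scenarios cover everything in our Definition of Done, pasted below.

User story: <your user story>
Definition of Done: <your Definition of Done>
```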