Providing clear and constructive feedback is essential for maintaining a fair platform and helping participants understand why their submission was not accepted. Proper rejections play an important role in our ecosystem: they help build accurate participant reputation scores based on study performance, ensuring the system can prioritize high-quality participants for future study invitations.
When to Provide Feedback
Feedback is required whenever you reject a submission. The feedback field appears immediately after you click "Reject," and you cannot complete the rejection without providing a reason.
What Makes Good Feedback
Be Specific
Instead of "Bad data," write "Failed multiple attention checks on pages 3 and 5."
Specific Examples
- Poor: "Low quality responses"
- Better: "Responses were too brief and did not address the questions asked"
- Best: "Questions 2-4 received one-word answers that did not demonstrate engagement with the material"
Be Professional
Maintain a respectful tone in all feedback. Remember that participants are real people who may have had legitimate issues.
Reference the Rules
If a participant timed out or submitted the wrong code, state that clearly with reference to the study requirements.
Common Feedback Templates
Timeout Issues
"Your session exceeded the maximum time limit of [X] minutes. The study required completion within this timeframe."
Completion Code Issues
"The completion code you provided did not match our system. Please ensure you copy the code exactly as shown at the end of the study."
Quality Issues
"Your responses did not meet the minimum quality standards outlined in the instructions. Specifically: [detailed explanation]."
Attention Check Failures
"You did not pass the required attention checks embedded in the study. These checks ensure data quality and are mentioned in the study instructions."
No Show (LIVE Studies)
"You did not join the live session within the required 10-minute window after the scheduled start time."
Using Pre-Built Feedback Categories
Our platform provides pre-built feedback options such as "Failed attention check" and "Detection of AI use." When you identify one of these specific behaviors, select the relevant category. While we have implemented extensive measures to ensure participant compliance with our policies, some issues can still occur, and using these standardized categories helps the platform track patterns and maintain data quality across the ecosystem.
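If your team scripts any part of its review workflow, one way to avoid mistyped categories is to model the pre-built options as a fixed set, as in the hypothetical sketch below. Only the two category names quoted above come from this guide; the RejectionCategory enum and build_rejection helper are assumptions for illustration, not a platform API.

```python
from enum import Enum

# Hypothetical sketch: model the pre-built rejection categories as a fixed
# set so scripted reviews cannot mistype them. Only the two names quoted in
# this guide are listed; the set is illustrative, not exhaustive.

class RejectionCategory(Enum):
    FAILED_ATTENTION_CHECK = "Failed attention check"
    AI_USE_DETECTED = "Detection of AI use"

def build_rejection(category: RejectionCategory, feedback: str) -> dict:
    """Pair a standardized category with free-text feedback for a rejection."""
    return {"category": category.value, "feedback": feedback}

# Example usage:
print(build_rejection(
    RejectionCategory.FAILED_ATTENTION_CHECK,
    "Failed multiple attention checks on pages 3 and 5.",
))
```

Pairing the standardized category with specific free-text details keeps the rejection both machine-trackable and useful to the participant.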
Impact of Good Feedback
- Platform trust: Transparent feedback builds confidence in the review process
- Participant improvement: Clear guidance helps participants succeed in future studies
- Reputation system: Accurate feedback contributes to participant reputation scores, helping our system invite high-performing participants to future studies
- Dispute resolution: Detailed feedback reduces participant complaints and disputes
- Quality maintenance: Consistent feedback standards improve overall data quality