Flagging submissions helps maintain data quality and keeps participant reputation scores accurate. When you identify abnormal behaviors such as AI usage or failed attention checks, flagging these submissions contributes to a healthier research ecosystem.
How Flagging Impacts the Ecosystem
Our reputation system prioritizes participants with high-quality performance records when inviting them to new studies. By accurately flagging problematic submissions, you help:
- Build accurate participant reputation scores based on actual study performance
- Ensure high-quality participants receive priority access to future studies
- Create a feedback loop that improves overall data quality across the platform
- Protect other researchers from encountering the same issues with problematic participants
Ways to Flag Abnormal Behavior
1. Automatic Detection via Rejection Feedback
When you reject a submission and include clear reasoning in your feedback, our system automatically detects specific behavior patterns and flags the submission accordingly:
- AI Detection: Including terms like "AI use," "ChatGPT," or "detection of AI" in your rejection feedback will automatically flag the submission for AI usage
- Attention Check Failures: Mentioning "failed attention check" or "attention check" in your rejection will flag the submission for low focus
The system scans your rejection feedback text and applies appropriate flags that affect the participant's reputation badges. Learn more about providing effective feedback in Providing Feedback for Rejected Submissions.
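To make the matching behavior concrete, here is a minimal sketch of how keyword-based flag detection could work. The keyword lists and the detect_flags function are illustrative assumptions built from the examples above, not the platform's published implementation.

```python
# Illustrative sketch: the platform's actual matching rules are not
# published, so the keywords and function names here are assumptions
# drawn from the examples in this article.

# Map each flag type to example phrases from the article.
FLAG_KEYWORDS = {
    "ai_detection": ["ai use", "chatgpt", "detection of ai"],
    "low_focus": ["failed attention check", "attention check"],
}

def detect_flags(feedback: str) -> set[str]:
    """Return the flag types whose keywords appear in the feedback text."""
    text = feedback.lower()
    return {
        flag
        for flag, keywords in FLAG_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    }

# A rejection mentioning an attention check triggers the low-focus flag.
print(detect_flags("Rejected: participant failed the attention check on Q3."))
# -> {'low_focus'}
```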
2. Automatic Detection via Payment Memos
When you award a bonus or adjust a payment, you can add a memo to document your reasoning. As with rejection feedback, the system detects relevant keywords in memos:
- Memos mentioning AI usage patterns will trigger AI detection flags
- Memos referencing attention check issues will trigger low focus flags
This allows you to flag concerns even when approving a submission with payment adjustments. See How to Award a Bonus Payment for details on adding memos.
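Continuing the sketch above, memo text would pass through the same kind of keyword matching (again an assumption about the mechanism, not a documented API):

```python
# A memo noting suspected AI usage triggers the AI-detection flag,
# even though the submission itself is approved.
memo = "Approved with reduced bonus; several answers appear ChatGPT-generated."
print(detect_flags(memo))
# -> {'ai_detection'}
```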
3. Manual Flagging on the Submissions Table
You can manually flag or unflag submissions directly from the Submissions Dashboard:
1. Navigate to your study's Submissions Dashboard
2. Locate the submission you want to flag
3. Use the flag action in the submission row to mark or unmark specific behaviors
4. Choose the appropriate flag type (AI detection or attention check failure)
Manual flagging gives you precise control when automatic detection doesn't capture the issue or when you need to correct a previous flag.
Feature Rollout Timeline
The manual flagging feature on the submissions table is expected to roll out by the end of February 2026. The reputation system is already active and working to prioritize high-quality participants for study invitations.
Building the Reputation System Together
We've collaborated with researchers across the platform to establish initial reputation scores for new and existing participants. This foundation helps the system identify and prioritize quality participants from the start.
Your continued participation in flagging problematic submissions strengthens this system for everyone. By working together, we create a research community that rewards quality work and helps all researchers access better data.
Best Practices for Flagging
- Be accurate: Only flag submissions when you have clear evidence of the behavior
- Be specific in feedback: Use clear terminology in rejection feedback so the system can properly categorize the issue
- Document reasoning: When using memos, explain what you observed to create a record for future reference
- Flag consistently: Apply the same standards across all submissions to ensure fair reputation tracking
Join the Community
We invite you to be part of building a stronger research ecosystem. Your flagging actions directly contribute to maintaining high data quality standards and ensuring that dedicated participants are rewarded with priority access to studies.
Learn more about how the reputation system works from the participant perspective in our Participant Reputation System Guide.