Content Moderation Workflow
1. How We Review Content
Automated Scanning
All content uploaded to our platforms is scanned automatically using AI tools and filters. We check for the following categories (an illustrative sketch of this step follows the list):
Sexual content
Child safety violations
Violence and graphic content
Hate speech
Terrorism or extremism
Spam or scams
Impersonation or fraud
Politically or religiously provocative content
Harmful links (malware, phishing)
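This policy does not prescribe a particular implementation; as a rough illustration only, the scan step can be thought of as running a set of per-category checks over each upload. The category names and checks below are hypothetical placeholders, not our production filters.

```python
from typing import Callable

# Hypothetical per-category checks; in a real pipeline these would be ML
# classifiers or keyword/URL filters. Each returns True if the content is flagged.
CATEGORY_CHECKS: dict[str, Callable[[str], bool]] = {
    "spam_or_scam": lambda text: "free money" in text.lower(),
    "harmful_link": lambda text: "phishing.example" in text.lower(),
    # ...checks for the other categories listed above would be registered here
}

def scan_content(text: str) -> list[str]:
    """Run every category check and return the categories that flagged the content."""
    return [name for name, check in CATEGORY_CHECKS.items() if check(text)]

print(scan_content("Click here for FREE MONEY!"))  # ['spam_or_scam']
```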
Initial Classification
Safe Content: Published immediately
Borderline Content: Visibility may be limited and the content is sent for human review
High-Risk Content: Temporarily hidden and sent for urgent review
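As an illustration of this routing only (the single risk score and the thresholds below are assumptions, not how our systems are actually configured), the three tiers map to three actions: publish, limit and queue for review, or hide and queue for urgent review.

```python
from enum import Enum

class Tier(Enum):
    SAFE = "safe"
    BORDERLINE = "borderline"
    HIGH_RISK = "high_risk"

# Hypothetical thresholds on a 0.0-1.0 risk score from the automated scan.
BORDERLINE_THRESHOLD = 0.4
HIGH_RISK_THRESHOLD = 0.8

def classify(risk_score: float) -> Tier:
    """Map an automated risk score to one of the three tiers."""
    if risk_score >= HIGH_RISK_THRESHOLD:
        return Tier.HIGH_RISK
    if risk_score >= BORDERLINE_THRESHOLD:
        return Tier.BORDERLINE
    return Tier.SAFE

def route(post_id: str, risk_score: float) -> str:
    """Apply the routing rules: publish, limit + review, or hide + urgent review."""
    tier = classify(risk_score)
    if tier is Tier.SAFE:
        return f"{post_id}: published immediately"
    if tier is Tier.BORDERLINE:
        return f"{post_id}: visibility limited, queued for human review"
    return f"{post_id}: temporarily hidden, queued for urgent review"
```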
Human Moderation Review
Moderators evaluate:
Context and intent
Severity of the content
Applicable local laws and regulations (Tanzania: TCRA regulations, the Cybercrimes Act, and the Electronic and Postal Communications Act (EPOCA))
Community Guidelines
Decision Outcomes
Approve: Content is published
Restrict: Apply age restrictions, limited distribution, or warning labels
Remove: Content violating laws or guidelines is removed; user is notified
Escalate: Serious cases (child safety, terrorism, other serious crimes) are reported to law enforcement or the internal legal team
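For illustration, these four outcomes can be modeled as a closed set of decisions with follow-up steps. The action strings below are placeholders only, not actual system operations.

```python
from enum import Enum, auto

class Decision(Enum):
    APPROVE = auto()
    RESTRICT = auto()
    REMOVE = auto()
    ESCALATE = auto()

def follow_up_actions(post_id: str, decision: Decision, reason: str) -> list[str]:
    """Return the follow-up steps implied by each moderation decision."""
    if decision is Decision.APPROVE:
        return [f"publish {post_id}"]
    if decision is Decision.RESTRICT:
        return [f"apply age restriction or warning label to {post_id}",
                f"limit distribution of {post_id}"]
    if decision is Decision.REMOVE:
        return [f"remove {post_id}: {reason}",
                f"notify the uploader of {post_id}"]
    # ESCALATE: serious cases go to the legal team and, where required, law enforcement
    return [f"forward {post_id} to the internal legal team",
            f"report {post_id} to law enforcement where required: {reason}"]
```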
User Notification & Appeal Users receive explanations for moderation actions and can submit appeals. Senior moderators review appeals and make the final decision.
2. Rules for Users
Safety First: Do not post illegal, harmful, or threatening content
Respect Others: No harassment, bullying, or hate speech
Child Protection: Do not post content involving minors in any harmful or sexualized way
Intellectual Property: Do not upload copyrighted or pirated content without permission
Responsible Use: Avoid spam, scams, or misuse of platform features
Paroter Technologies Company reserves the right to remove content, restrict features, suspend accounts, or report violations to the authorities when necessary.
