CSAE Policy
1. Introduction - Definitions
Child Sexual Abuse and Exploitation (CSAE) refers to any activity involving the exploitation of minors for sexual purposes, including but not limited to the production, distribution, and possession of abusive material, grooming, coercion, and trafficking.
POPPINS offers users the possibility to (i) place an order for an item or a location and/or (ii) publish a listing so that it can be ordered and loaned by another user. Listings are made available for loan by listers and then ordered by clients. The client's ability to place an order and the lister's ability to publish a listing and receive payment for the order constitute the services offered by POPPINS.
This CSAE Policy outlines POPPINS’ (the “Platform”) commitment to ensuring a safe, secure, and trusted environment where users can engage in loans or rentals while respecting global child protection standards.
2. Scope
This policy covers all services provided by POPPINS, including user interactions, messaging, listings, and user-generated content (UGC). This policy applies to all users, partners, and service providers interacting with or through the Platform.
3. Policy objectives and means
In order to keep the Platform safe for children, POPPINS does not allow content (including conversations) that sexually exploits or endangers children.
To do so, POPPINS uses technologies on the basis of Regulation (EU) 2021/1232, which provides a derogation from certain confidentiality obligations of Directive 2002/58/EC.
These technologies are used only for the following purposes:
- Prevent: by-design features and safeguards.
- Detect: AI-powered and human-moderated systems that identify harmful content.
- Respond: immediate action on CSAE-related violations, including legal reporting.
4. Design Features and Safeguards
POPPINS has implemented the following “by-design” features and safeguards:
4.1. User Age Verification: All users must be at least 18 years old, or have reached the age of majority under the laws applicable to their situation, before accessing the Platform’s features.
4.2. Communication Controls
- Restricted Direct Messaging: Direct messages are monitored by AI for suspicious content and behaviours.
- Chat Moderation: AI-driven real-time filtering of specific harmful keywords.
- User Blocking and Reporting: Users can block others, close conversations, and report abusive behaviour through a customer care/user feedback form.
4.3. Content Moderation and Monitoring
- AI-Powered Moderation: Automated monitoring systems detect CSAE-related content in listings, messages, and uploads.
- Human Moderation: Moderators review flagged content and reported incidents.
- Behavioural Detection: AI flags unusual behaviours such as persistent unwanted messages or repeated suspicious activity.
4.4. Reporting and Escalation
- In-App Reporting Tools: Users can report content, messages, or behaviour through integrated reporting tools.
- Content Removal Policy: Inappropriate content is immediately removed after human review.
- Law Enforcement Notification: Confirmed CSAE cases are reported to relevant authorities, including law enforcement organisations. All incidents are managed per local and international child protection laws.
5. Legal Compliance and Accountability
This policy complies with Regulation (EU) 2021/1232, which governs the detection, removal, and reporting of CSAE-related content in digital services.
Violations of CSAE-related rules will result in account suspension, permanent bans, and legal action.
6. Review and Updates
Policy Review: This policy is reviewed annually or upon significant Platform changes.
Policy Updates: Changes are made available to all users.
Effective Date: [Date]
Contact for CSAE-Related Issues: [Contact Information]