Mioly - Child Sexual Abuse and Exploitation (CSAE) Prevention Standards
Last updated: August 20, 2025
Nanning Mioli Network Technology Co., Ltd. (hereinafter referred to as "we") operates the "Mioly" application (hereinafter referred to as "this App") and is committed to creating a safe and positive social environment for users. Protecting minors from online child sexual abuse and exploitation (OCSEA) and any other form of harm is our primary responsibility and a non-negotiable bottom line. This standard is formulated on the basis of industry best practices and outlines the comprehensive, strict measures and policies we have adopted to prevent OCSEA.
1. Solemn Stance and Public Prohibitive Standards
We adopt the strictest possible stance and explicitly prohibit, in our Privacy Policy, any form of child sexual abuse and exploitation (CSAE) content or behavior. This includes but is not limited to:
- Creating, uploading, distributing, accessing, or promoting Child Sexual Abuse Material (CSAM).
- Sexualizing, grooming, harassing, or exploiting minors in any form.
- Attempting to establish inappropriate relationships with minors or soliciting their private information.
- Any other behavior that may endanger the safety and health of minors.
If any user is found to have violated this policy, we will immediately take action in accordance with our internal guidelines, including permanently banning the account, deleting all related content, and reporting to the competent authorities.
2. Internal Child Safety Guidelines and Enforcement Framework
We have developed detailed internal child safety principles and enforcement guidelines so that our moderation and security teams can coordinate to identify OCSEA effectively and act against it. These guidelines include:
- Clear Definitions: Precise delineation of behaviors such as CSAE, CSAM, and online grooming.
- Graded Disposal Process: Stipulating corresponding disposal measures (from content deletion to permanent account ban) for content and behaviors of different risk levels.
- Moderator Training: All content moderators must undergo professional training on child safety identification and handling.
3. Multi-layered Protection, Detection, and Handling Measures
3.1 Proactive Technical Detection
We employ technical solutions that comply with industry standards to proactively detect OCSEA on our platform:
- Hash Matching (e.g., PhotoDNA): Comparing all user-uploaded images against known CSAM hash databases to intercept illegal content.
- AI Image/Video Classifiers: Utilizing artificial intelligence to identify suspicious new content not yet included in hash databases.
- Text Classifiers and Keyword Filtering: Monitoring text content (bottle content, chat messages) to identify patterns and keywords associated with grooming, harassment, and other behaviors.
Technical detection alone does not replace human judgment: content flagged by these tools is escalated for additional manual review, and a professional moderation team operating 24/7 makes the final determination. A simplified, illustrative sketch of the hash-matching step described above follows.
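For illustration only, the minimal sketch below shows the general shape of a hash-matching check. The `KNOWN_CSAM_HASHES` set and the `needs_manual_review` helper are hypothetical placeholders, not our actual implementation; production systems rely on licensed perceptual-hash technology (such as PhotoDNA) and vetted industry hash databases rather than a plain cryptographic hash.

```python
import hashlib

# Hypothetical placeholder for a vetted hash database supplied by an
# authorized industry body; real deployments use perceptual hashes
# (e.g., PhotoDNA) rather than SHA-256 so that near-duplicates also match.
KNOWN_CSAM_HASHES: set[str] = set()

def needs_manual_review(image_bytes: bytes) -> bool:
    """Return True when an upload matches a known hash and must be
    blocked and escalated to the 24/7 moderation team."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_CSAM_HASHES
```

Uploads that do not match a known hash still pass through the AI classifiers described above and, where flagged, human moderation.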
3.2 User Reporting Mechanism
We provide reporting channels that comply with legal requirements and are open to all users:
- Prominent reporting entry points are available in the chat interface, on user profile pages, and in the bottle interface.
- The reporting options explicitly include a "Minors Violation" category, providing users with a direct way to report potential child abuse behavior.
- We encourage users to use this option and to provide additional information through attached descriptions and screenshots.
4. Incident Response: Reporting, Action, and Law Enforcement Collaboration
4.1 Mandatory Reporting Obligations
When we obtain actual knowledge of, or have reason to believe in the existence of, Child Sexual Abuse Material (CSAM):
- We pledge to strictly comply with Chinese laws and regulations and immediately report to competent authorities such as the Cyberspace Administration of China.
- Simultaneously, as a registered Electronic Service Provider (ESP), we will file international reports with the National Center for Missing & Exploited Children (NCMEC) through its CyberTipline reporting API or reporting forms, and will comply with its data retention requirements.
- For other regions globally, we will follow the guidance of the INHOPE hotline network to report to the relevant national hotline or law enforcement channels.
4.2 Platform Violation Disposal
For confirmed OCSEA incidents, we will immediately execute platform penalties, including but not limited to:
- Immediate and permanent deletion of violating content.
- Immediate termination of the violator's account and prevention of re-registration using the same device.
- Recording related activities to assist law enforcement investigations.
4.3 Responding to Law Enforcement Requests
We have established specialized procedures for responding to lawful information requests from law enforcement agencies worldwide. In investigations that follow NCMEC reports, we will actively cooperate with law enforcement, providing necessary information within the limits permitted by law to help protect victims and hold perpetrators accountable.
5. Contact Us (Dedicated Channel for Child Safety Incidents)
We have established dedicated contacts and channels for receiving reports on child safety incidents, inquiries, and notifications from Google Play or law enforcement agencies.
- Preferred Method: Please use the reporting function within the App, as it is the fastest way.
- External Reports/Law Enforcement Inquiries: Please contact us via the following dedicated email address:
safety@miaolegeli.com
A designated member of our security team is responsible for handling these requests and can discuss our review and law enforcement response procedures.