
What DeepSeek-R1’s privacy policy reveals about its AI systems deserves close attention and the utmost caution in its use. It’s not about technological prowess or outperforming OpenAI’s o1 or others on benchmarks for mathematics, coding, or general knowledge, topics that are already widely discussed.
It’s about its capability to exhibit artificial integrity over intelligence.
First, the opacity of DeepSeek’s internal mechanisms (the ‘Inner’) makes it a model better suited to user exploitation than to user empowerment.
DeepSeek’s privacy policy outlines the types of data it collects but fails to clarify how this data is processed internally. User inputs like chat history and uploaded files are collected to “train and improve services,” yet no mention is made of anonymization or safeguards for sensitive data.
There’s no clear documentation on whether user data is directly used to update AI models either. Terms like “hashed emails” and “mobile identifiers” lend an appearance of rigor without delivering meaningful transparency, leaving users uncertain about what the collection of their data actually implies.
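To see why “hashed emails” offer weaker protection than the term suggests, consider a minimal sketch in Python (hypothetical code and addresses, not DeepSeek’s actual pipeline, assuming the common SHA-256 approach): because hashing is deterministic, anyone holding a list of candidate addresses can re-identify a “hashed” email simply by hashing the candidates and comparing.

    import hashlib

    def hash_email(email: str) -> str:
        # Deterministic SHA-256 digest, the usual meaning of a "hashed email"
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    # A platform shares this "anonymized" token with partners
    shared_token = hash_email("alice@example.com")

    # Any partner with a candidate list (purchased, scraped, or leaked)
    # can reverse the token by hashing candidates and comparing
    for candidate in ["bob@example.com", "alice@example.com"]:
        if hash_email(candidate) == shared_token:
            print("Re-identified:", candidate)  # prints alice@example.com

Hashing, in other words, is pseudonymization, not anonymization, and a vague term in a privacy policy can make that distinction invisible to users.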
Overall, DeepSeek collects extensive data (e.g., keystroke patterns, device IDs) but does not justify why such granular detail is necessary to provide its service. It does take care to state that user data is retained “as long as necessary,” yet offers no specific retention periods or guarantees, exposing user data to prolonged vulnerabilities, including misuse, breaches, and unauthorized access.
Its reliance on tracking mechanisms (such as cookies) demonstrates a fundamental trade-off: Users can “disable cookies,” but the policy warns that doing so limits functionality, subtly coercing users into sharing data for basic service use. Moreover, by tying essential functions like logins or account continuity to data collection practices, DeepSeek blurs the line between informed consent and forced compliance.
Surprisingly, its policy mentions no mechanism to prevent bias in how the system processes user inputs or generates responses, and says nothing about explainability in how AI outputs are produced, leaving users in the dark about the logic behind decisions or recommendations.
And last, by relying on internal reviews of user inputs to enforce “terms of service,” DeepSeek places the onus of ethical behavior on the users, not the system itself.
Second, DeepSeek’s promises of innovation should not excuse its lapses on critical external matters (the ‘Outer’) that threaten societal structures.
DeepSeek stores personal information on servers located in the People’s Republic of China, and its privacy policy acknowledges cross-border data transfers.
While the policy refers to legal compliance in general terms, it makes no explicit commitment to major global privacy frameworks such as the GDPR (Europe) or the CCPA (California), raising concerns about how user data from jurisdictions with stringent protections is treated.
Given the regulatory environment in China, where data localization and governmental access are significant concerns, storing sensitive personal data on Chinese servers introduces potential geopolitical vulnerabilities: users from regions with strict data protection laws may unknowingly subject themselves to a less protective regime, undermining their privacy rights.
DeepSeek openly admits to sharing user data with advertising and analytics partners to monetize its platform, enabling those partners to target users based on granular data, including activity outside the platform.
And, as is typical of such a model, there is little (if any) transparency about how users are compensated, or even informed. Not to mention that the data collected can be used to perpetuate existing inequalities, such as targeting vulnerable populations with manipulative advertising. Indeed, as algorithms shape what users see and consume, they indirectly influence societal behaviors, values, and trends, often in ways that prioritize profit over well-being.
The privacy policy also allows DeepSeek to share user data during corporate transactions, such as mergers, acquisitions, or sales, leaving that data open to further exploitation for which users have effectively signed a blank check.
And it is worth noting the absence of independent audits or external validation, which means users must rely on DeepSeek’s self-regulation—a risky proposition for any AI system.
Third, by failing to address vulnerabilities in relationships (the ‘Inter’), DeepSeek risks turning from a mediator into a predator.
DeepSeek’s policy positions user participation as contingent on significant data sharing.
For instance, while users can disable cookies, they are warned that this will result in diminished functionality, effectively coercing them into sharing data for a “seamless” experience.
Though users can delete their data, the policy offers little clarity on the consequences for long-term service use, creating an imbalance in the relationship between the platform and its users.
Moreover, its handling of user input, such as chat history and uploaded files, raises significant concerns about how the platform mediates human-AI relationships. Indeed, user-provided data is treated as a resource for the platform’s benefit (e.g., model training), without clear opt-out options for individuals who do not want their data used in this way.
While DeepSeek states users can exercise rights like data deletion or access, the process is buried under layers of verification.
Also, the platform’s privacy notice provides no assurances that the AI’s responses or outputs are rooted in integrity-led principles, leaving users uncertain about the trustworthiness of interactions.
Equally concerning, DeepSeek’s handling of dependent relationships, such as minors or emotionally vulnerable users, highlights critical oversights in its intermediation mechanisms.
While the policy acknowledges parental consent for users under 18, it lacks robust safeguards to prevent data misuse or exploitation of younger users. There is no mention of how DeepSeek’s systems detect or handle users in distress, such as those discussing mental health or other sensitive issues, creating a risk of emotional harm.
Lastly, regular updates to the privacy policy are mentioned, but there is no clear process for users to track changes that could significantly affect their privacy.
Redefining what we ask of AI, demanding artificial integrity over intelligence, is what guarantees that AI’s performance is directed toward serving what matters most: humanity. Without this, economic value comes at the expense of societal well-being and, therefore, of individual lives.
AI needs performance that does not come at the cost of excessive energy, water, and terrestrial resources, nor lead to the concentration of economic power in the hands of a few.
AI also needs to be ingrained with integrity, not just from an external standpoint, but primarily in its core functioning. Without this, artificially created intelligence can veer into harmful territory at a societal level, beyond what any developer could manage with a rollback.
On the former, let’s hope the promise of models such as DeepSeek-R1 opens groundbreaking avenues; on the latter, and most importantly, let’s ensure that innovation empowers humans with machines, not the other way around: artificial integrity over intelligence.