OpenAI’s Unacceptable “Privacy by Pressure” Approach

We’ve seen this playbook before, and we should not allow it anymore

Luiza Jarovsky
7 min read · Apr 26



OpenAI is not just any AI company. Its current reported valuation is $29 billion, its AI-based tools are being integrated into non-AI products used on a daily basis by millions of people through its partnership with Microsoft, and ChatGPT, its AI-based chatbot, was the fastest-growing consumer application ever, reaching 100 million monthly active users two months after launch.

Unlike other fast-growing and innovative startups, OpenAI did not start as a college undergrad adventure. It has been under development since 2015, led by experienced co-founders; it has been through six funding rounds, and it “consolidated” its market position after the deal with Microsoft and the integration of its technology into well-established products.

Despite the various layers of assessment it had to undergo, especially to receive billions of dollars across six funding rounds and to partner with Microsoft (the second most valuable company in 2023), OpenAI does not seem to care much about data protection or privacy rights and principles. It seems to follow what I have coined a “privacy by pressure” approach: acting only when something goes wrong, when there is a public backlash, or when it is legally told to do so. Here are some examples:

A few weeks ago, there was a privacy incident involving ChatGPT in which people’s chat histories were exposed to other users.

After this incident, the most obvious change I noticed was a new warning displayed before users could access ChatGPT.

It is good that they took one (small) step toward more transparency. But only that, and only after an incident? Do they have nothing else to tell people, or to embed into the design of their product, so that we can have better privacy assurances and transparency?

Perhaps as a result of this lack of proactivity in its privacy compliance approach, data protection authorities have recently started to act.

The Italian Data Protection Authority (“Garante per la Protezione dei Dati Personali”) imposed an immediate temporary limitation on the processing of Italian users’ data by OpenAI. It is still unclear how OpenAI will comply with the Italian DPA’s request; some say it is next to impossible. Most importantly, why were the issues raised by the Italian DPA not openly clarified to all users in advance?

Another privacy issue involving ChatGPT is reputational harm, which I have addressed in this newsletter before.

Recently, ChatGPT accused a law professor of sexual harassment, and its supposed evidence was an article that was never written.

As a further illustration, in a less damaging context, I asked ChatGPT to tell me more about myself, giving it my LinkedIn profile link. The information it gave me was 100% false.

I have argued that if ChatGPT has a high rate of hallucinations, then prompts about individuals should return empty results. Showing wrong information about people can lead to various types of harm, including reputational harm.

This week, Germany followed Italy and started scrutinizing OpenAI more closely, including its specific measures to comply with the GDPR. According to Marit Hansen, privacy commissioner for the northern state of Schleswig-Holstein, German authorities sent OpenAI a questionnaire that must be answered by June 11.

One day later (yesterday), the CEO of OpenAI, Sam Altman, announced “new ways to manage your data in ChatGPT”:

Clicking on the link he shared reveals more details about this announcement:

  1. The ability to turn off chat history. When chat history is disabled, conversations will not be used to train and improve OpenAI’s models.
  2. A ChatGPT Business subscription, under which end users’ data won’t be used to train OpenAI’s models by default.
  3. The ability to export your data and understand what information ChatGPT stores.

After reading the announcement, it becomes clear that these new features address basic GDPR rules (data minimization, data protection by design and by default, the right of access, and so on). Why were these features not available from the beginning? Was any privacy professional involved in the product’s development?

Furthermore, why do they not mention privacy rights, not even once? Why not link to their own privacy policy? Why not offer more information about their privacy assurances? Why not be proactively more transparent?

As a reminder, privacy by design, the framework developed by Dr. Ann Cavoukian, has seven main principles:

1. Proactive, not Reactive

2. Privacy as the Default Setting

3. Privacy Embedded into Design

4. Full Functionality — Positive-Sum, not Zero-Sum

5. End-to-End Security — Full Lifecycle Protection

6. Visibility and Transparency

7. Respect for User Privacy — Keep it User-Centric

The examples above show that OpenAI is definitely not applying privacy by design.

In my view, data protection authorities should not accept this. Companies collecting and processing personal data, especially at such a large scale and with such a broad and novel risk spectrum, should be required to be much more careful with privacy and data protection rights, principles, and rules.

Indeed, we have seen this playbook before.

Social networks, especially in their early years, launched products and features that did not have users’ privacy in mind. In some cases there were media-covered scandals; in others, public decisions or warnings. After enforcement or an official warning, companies apologized and promised not to do it again. Some of them seemed to learn quickly and adopted a much stronger and more accountable privacy strategy, which endures to this day. Others, so far, seem not to care much and simply allocate part of their annual budgets to paying privacy fines. Some use privacy-beautifying language and hire hundreds of lawyers to make sure their privacy policies are enough to escape ongoing enforcement trends, but not much beyond that.

Privacy experts and researchers, however, can see through this “privacy makeup.” We can spot dark patterns in privacy, bad privacy UX, unfair practices, and disrespect for privacy by design and people’s rights — which are still largely not enforced by data protection authorities. The public can also see these cracks when there is news about privacy enforcement or privacy scandals — which happens almost daily.

Having said that, I am still astonished that, in 2023, when we have:

  • comprehensive privacy laws with a global impact
  • privacy on the news every day
  • ongoing privacy reports and guidelines
  • privacy thriving as a research field
  • hundreds of thousands of privacy professionals
  • privacy publications and newsletters, such as the one you are reading
  • and so on

an AI company with the size, resources, and global influence of OpenAI can effectively avoid implementing privacy by design and instead adopt a “privacy by pressure” framework.

Do we want to replay the “social media privacy disaster” song again, but with AI-powered harm potential? Are we waiting for a Cambridge Analytica-style AI scandal? I hope not.

💡 Go beyond basic privacy knowledge and expand your career opportunities: join the waitlist for my new courses on Privacy & AI and Privacy UX and get a 20% discount when they launch.

🎤 Upcoming events

Tomorrow, I will discuss with Prof. Nita Farahany her new book “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” as well as issues related to the protection of cognitive liberty and privacy in the context of current AI and Neurotechnology challenges.

Prof. Farahany is a leader and pioneer in the field of neuroscience ethics. More than 800 people have already confirmed for this live session; sign up here and bring your questions.

To watch our previous events (the latest one was with Dr. Ann Cavoukian on Privacy by Design), check out my YouTube channel.

🎧 Podcast

In the second episode of The Privacy Whisperer Podcast, I spoke with Gal Ringel, the CEO of Mine, about:

  • his journey as an entrepreneur and his transition from cybersecurity to privacy;
  • how Mine expanded from B2C to B2B;
  • the advantages of data mapping in a privacy compliance strategy;
  • AI, security, the evolution of the privacy industry, and more.

If you are involved with privacy, do not miss this thought-provoking conversation; listen now.

And who should be my next podcast guest? Write to me and let me know.

🔁 Trending on social media

Follow me on Twitter and on LinkedIn for daily privacy content.

📌 Privacy & data protection jobs

We have gathered various links from job search platforms and privacy-related organizations on our Privacy Careers page. We are constantly adding new links, so bookmark it and check for new openings. Wishing you the best of luck!

Before you go:

  • If you think your network will benefit from this post, share it and invite them to subscribe to The Privacy Whisperer.
  • Check out our podcast, Twitter, LinkedIn & YouTube.
  • Go beyond basic privacy knowledge and expand your career opportunities: join the waitlist for my new courses on Privacy & AI and Privacy UX and get a 20% off coupon when they launch.

See you next week. All the best, Luiza Jarovsky



Luiza Jarovsky

Co-Founder at Implement Privacy, author of The Privacy Whisperer newsletter & podcast, Ph.D. researcher, Latina, immigrant, polyglot, mother of 3