Leading tech companies are in a race to market and improve artificial intelligence (AI) products, leaving users in the United States to puzzle out how much of their personal data could be extracted to train AI tools.
Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn have all rolled out AI app features that have the capability to draw on users’ public profiles or emails. Google and LinkedIn offer users ways to opt out of the AI features, while Meta’s AI tool provides no means for its users to say “no, thanks.”
“Gmail just flipped a dangerous switch on October 10, 2025 and 99% of Gmail users have no idea,” a November 8 Instagram post said.
Posts warned that the platforms’ AI tool rollouts make most private information available for tech company harvesting. “Every conversation, every photo, every voice message, fed into AI and used for profit,” a November 9 X video about Meta said.
Starting December 16th, Meta will start feeding all your data to AI. This video is an instructional on how to turn these settings off. pic.twitter.com/BUJagaMr5b
— David Wolfe (@DavidWolfe) November 9, 2025
Technology companies are rarely fully transparent when it comes to the personal data they collect and what they use it for, Krystyna Sikora, a research analyst for the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.
“Unsurprisingly, this lack of transparency can create significant confusion that in turn can lead to fearmongering and the spread of false information about what is and is not permissible,” Sikora said.
The best – if tedious – way for people to know and protect their privacy rights is to read the terms and conditions, since they often explicitly outline how the data will be used and whether it will be shared with third parties, Sikora said. The US doesn’t have any comprehensive federal laws on data privacy for technology companies.
Here’s what we learned about how each platform’s AI is handling your data:
Social media claim: “Starting December 16th Meta will start reading your DMs, every conversation, every photo, every voice message fed into AI and used for profit.” – November 9 X post with 1.6 million views as of November 19.
The facts: Meta announced a new policy to take effect December 16, but that policy alone does not result in your direct messages, photos and voice messages being fed into its AI tool. The policy involves how Meta will customize users’ content and advertisements based on how they interact with Meta AI.
For example, if a person interacts with Meta’s AI chatbot about hiking, Meta might start showing that person recommendations for hiking groups or hiking boots.
But that doesn’t mean your data isn’t being used for AI purposes. Although Meta doesn’t use people’s private messages in Instagram, WhatsApp or Messenger to train its AI, it does collect user content that is set to “public” mode. This can include photos, posts, comments and reels. If a user’s Meta AI conversations involve religious views, sexual orientation and racial or ethnic origin, Meta says the system is designed to avoid turning these interactions into ads. If users ask questions of Meta AI using its voice feature, Meta says the AI tool will use the microphone only when users give permission.
There is a caveat: The tech company says its AI might use information about people who don’t have Meta product accounts if their information appears in other users’ public posts. For example, if a Meta user mentions a non-user in a public image caption, that photo and caption could be used to train Meta AI.
Can you opt out? No. If you are using Meta platforms in these ways – making any of your posts public and using the chatbot – your data could be used by Meta AI. There is no way to deactivate Meta AI in Instagram, Facebook or Threads. WhatsApp users can deactivate the option to talk with Meta AI in their chats, but this option is available only per chat, meaning that you must deactivate the option in each chat’s advanced privacy settings.
The X post inaccurately advised people to submit this form to opt out. But the form is a way for users to report when Meta’s AI supplies an answer that contains someone’s personal information.
David Evan Harris, who teaches AI ethics at the University of California, Berkeley, told PolitiFact that because the US has no federal regulations about privacy and AI training, people have no standardized legal right to opt out of AI training in the way that people in countries such as Switzerland, the United Kingdom and South Korea do.
Even when social media platforms provide opt-out options for US customers, it’s often hard to find the settings to do so, Harris said.
Deleting your Meta accounts does not eliminate the possibility of Meta AI using your past public data, Meta’s spokesperson said.
Social media claim: “Did you know Google just gave its AI access to read every email in your Gmail – even your attachments?” – November 8 Instagram post with more than 146,000 likes as of November 19.
The facts: Google has a host of products that interact with private data in different ways. Google announced on November 5 that its AI product, Gemini Deep Research, can connect to users’ other Google products, including Gmail, Drive and Chat. But, as Forbes reported, users must first give permission to use the tool.
Users who want to allow Gemini Deep Research to have access to private information across products can choose which data sources to use, including Google search, Gmail, Drive and Google Chat.
There are other ways Google collects people’s data:
- Through searches and prompts in Gemini apps, including its mobile app, Gemini in Chrome or Gemini in another web browser
- Any video or photo uploads that the user entered into Gemini
- Through interactions with apps such as YouTube and Spotify, if users give permission
- Through message and phone call apps, including phone logs and message logs, if users give permission.
A Google spokesperson told PolitiFact the company doesn’t use this information to train AI when registered users are under age 13.
Google can also access people’s data when they have smart features activated in their Gmail and Google Workspace settings (which are automatically on in the US), which gives Google consent to draw on email content and personal activity data to help users write emails or suggest Google Calendar events. With optional paid subscriptions, users can access additional AI features, including in-app Gemini summaries.
Turning off Gmail’s smart features can stop Google’s AI from accessing Gmail, but it doesn’t stop access to the Gemini app, which users can either download or access in a browser.
A California lawsuit accuses Gemini of spying on users’ private communications. The lawsuit says an October policy change gives Gemini default access to private content such as emails and attachments in people’s Gmail, Chat and Meet. Before October, users had to manually allow Gemini to access the private content; now, users must go into their privacy settings to disable it. The lawsuit claims the Google policy update violates California’s 1967 Invasion of Privacy Act, a law that prohibits unauthorized wiretapping and recording confidential communications without consent.
Can you opt out? If people don’t want their conversations used to train Google AI, they can use “temporary” chats or chat without signing into their Gemini accounts. Doing that means Gemini can’t save a person’s chat history, a Google spokesperson said. Otherwise, opting out of having Google’s AI in Gmail, Drive and Meet requires turning off smart features in settings.
Social media claim: Starting November 3, “LinkedIn will begin using your data to train AI.” – November 2 Instagram post with more than 18,000 likes as of November 19.
The facts: LinkedIn, owned by Microsoft, announced on its website that starting November 3, it will use some US members’ data to train content-generating AI models.
The data the AI collects includes details from people’s profiles and public content that users post.
The training does not draw on information from people’s private messages, LinkedIn said.
LinkedIn also said, aside from the AI data access, that Microsoft started receiving information about LinkedIn members – such as profile data, feed activity and ad engagement – as of November 3 in order to target users with personalized ads.
Can you opt out? Yes. Autumn Cobb, a LinkedIn spokesperson, confirmed to PolitiFact that members can opt out if they don’t want their content used for AI training purposes. They can also opt out of receiving targeted, personalized ads.
To stop your data from being used for training purposes, go to data privacy, click on the option that says “Data for Generative AI Improvement” and then turn off the feature that says “use my data for training content creation AI models.”
And to opt out of personalized ads, go to advertising data in settings, and turn off ads on LinkedIn and the option that says “data sharing with our affiliates and select partners”.
