
What 'No Data Leaves Your Device' Actually Means (And How to Verify It)

May 7, 2026 · 9 min read

The Phrase Is Doing a Lot of Work

"No data leaves your device" has become the most-quoted line in privacy marketing. You see it on landing pages for note-taking apps, on the App Store privacy summaries of mental-health products, and in the pitch decks of any startup that wants to look serious about user trust. The phrase is genuinely meaningful. It is also frequently misleading, because it can describe at least six different architectures, only some of which actually keep your data local in the strong sense most readers assume.

The alternative to "no data leaves your device" is the default architecture of the modern web: you type something, it travels to a server, the server does something with it, and you trust that the company running it behaves well, doesn't get breached, and doesn't pivot its business model in two years. That arrangement is fine for plenty of use cases. But for a journal entry, a therapy note, or a clipboard full of medical symptoms typed into a chatbot, the question of what actually happens to that text is not academic.

This post is a working field guide. The goal is to leave you able to read a privacy claim, classify it into one of the categories below, and verify it in about ten minutes using tools you already have.

Six Things "No Data Leaves Your Device" Can Mean in 2026

Before you can verify a claim, you have to figure out which claim is actually being made. The same phrase covers wildly different architectures. Here are the six you will encounter most often, ordered roughly from strongest to weakest.

1. Pure Client-Side Computation

Everything happens in the browser or on the device, with no network calls during operation. The app loads once (HTML, JavaScript, any model weights), and after that the only network traffic is whatever the developer explicitly asks for, which can be zero. A JPEG-to-PNG converter running in WebAssembly, or a markdown-to-PDF tool that processes the file in-memory, fits here. This is the strong form of the claim, and it is verifiable: you can watch the network and see no requests fire.
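As a concrete sketch of the shape (the function below is hypothetical, but any pure client-side tool reduces to it): all input and output stay in memory, and there is simply no code path that constructs a request.

```javascript
// A pure client-side transform: text in, text out, no network code path.
// Illustrative only; a real tool (say, a WASM image converter) has the
// same shape with heavier internals.
function stripMarkdown(text) {
  return text
    .replace(/\[([^\]]*)\]\([^)]*\)/g, "$1") // [label](url) -> label
    .replace(/[*_`#>]/g, "")                 // drop common markdown punctuation
    .trim();
}
```

With DevTools open, running a function like this produces zero entries in the Network tab, which is exactly the observable property the strong claim promises.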

2. End-to-End Encrypted with the Server Seeing Only Ciphertext

Data leaves the device, but it is encrypted with a key the server does not have. The server stores and routes ciphertext; only your device (or another you authorize) can decrypt it. Signal-style messaging is the canonical example. The server still sees metadata (who is talking to whom, when, message sizes), and a compromised client can still leak plaintext. But the content itself cannot be read without breaking the cryptography.

3. Local-First with Optional Sync

The data lives on your device as the source of truth. Sync, if it exists, is optional and ideally zero-knowledge. The phrase "local-first" was popularized by an Ink & Switch essay. Strong version: you can use the app indefinitely with the network off, and turning sync on does not require trusting the vendor with plaintext. Weak version: a vendor calls itself "local-first" because the app caches data offline, while quietly streaming everything to a server in the background.

4. Federated Learning

Your raw data stays on the device, but model updates derived from it leave. Google popularized this with a 2017 paper describing how Gboard learns from typing patterns without sending the typed text to Google. The device computes a gradient update and sends that. The boundary is fuzzier than most people realize: model updates can encode information about the data that produced them, and there is an active research literature on reconstructing training data from gradients and model updates. "Your data doesn't leave" is technically true; "no information about your data leaves" is not.
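A toy version of the boundary, with a one-weight linear model (entirely illustrative):

```javascript
// Federated-learning boundary in miniature: the raw points stay local,
// and only a gradient derived from them is uploaded. One-weight linear
// model fitting y ~ w * x; purely illustrative.
function localGradient(w, points) {
  let g = 0;
  for (const { x, y } of points) g += 2 * (w * x - y) * x; // d/dw of (w*x - y)^2
  return g / points.length; // this number is what leaves the device
}

const privateData = [{ x: 1, y: 2 }, { x: 2, y: 4 }]; // never uploaded
const update = localGradient(1.0, privateData);       // -5: the uploaded signal
```

The update is not the data, but it is a function of the data, which is why gradient-inversion attacks are possible in principle.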

5. Zero-Retention Server Inference

The data leaves your device, gets processed on a server, and is then discarded. Some on-device LLM products use this as a fallback: the local model handles most queries, and the hard ones get sent to a larger server-side model under a contractual zero-retention policy. That is a meaningful promise from a serious vendor, but it is entirely a function of the vendor's behavior and the legal regime they operate in. There is no cryptographic guarantee. You are trusting a policy.

6. The Bullshit Version: "Your Data Is Anonymized"

Your data leaves the device, sits on a server, and the vendor applies some transformation they call "anonymization." In practice this usually means stripping a name field while leaving timestamps, geolocations, device identifiers, and behavior patterns intact. Research going back to the Netflix Prize and AOL search log deanonymizations shows that weak anonymization is routinely reversible. If a claim rests on "anonymized" without a clear technical definition (differential privacy with a stated epsilon, k-anonymity with a stated k), treat it as marketing language.
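The failure is easy to demonstrate with a toy linkage attack: strip the name, keep the quasi-identifiers, and join against any public dataset that shares them. Every record below is fabricated.

```javascript
// Toy linkage attack on "anonymized" records: the name field is gone,
// but zip code + birth year re-link rows to a public dataset.
// All records here are made up for illustration.
const anonymized = [
  { zip: "02139", birthYear: 1984, diagnosis: "anxiety" },
  { zip: "94110", birthYear: 1990, diagnosis: "insomnia" },
];
const voterRoll = [{ name: "A. Smith", zip: "02139", birthYear: 1984 }];

const reidentified = anonymized.flatMap(row => {
  const hit = voterRoll.find(p => p.zip === row.zip && p.birthYear === row.birthYear);
  return hit ? [{ name: hit.name, diagnosis: row.diagnosis }] : [];
});
// reidentified -> [{ name: "A. Smith", diagnosis: "anxiety" }]
```

This is the mechanism behind the Netflix Prize and AOL results. The defense is a stated technical guarantee (differential privacy, k-anonymity), not the absence of a name column.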

How to Actually Verify a Claim

Once you know which claim is on the table, verification is mostly mechanical. None of these techniques require being a security researcher. They take a few minutes each.

Open the Network Tab

For any web app, open your browser's developer tools (Cmd-Option-I on macOS, F12 on Windows and Linux), switch to the Network tab, and clear it. Then use the app: type a journal entry, run a calculation, upload a file. Watch what fires. Mozilla's Firefox DevTools docs are a thorough reference; Chrome's DevTools docs are the equivalent for Chromium browsers. You are looking for the absence of requests, not the presence. If you type fifty words into an app that claims local-only computation and see zero outbound POST or PUT requests, the claim is consistent with what you observed. If you see a stream of requests to api.example.com/log every keystroke, the claim is false regardless of the marketing copy.
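A complementary check from the console: the browser's Performance API records every resource the page has fetched, so after a few minutes of normal use you can list the cross-origin hosts the page contacted. The helper below is a sketch; in DevTools, call it with performance.getEntriesByType("resource") and location.origin.

```javascript
// List the cross-origin hosts a page has contacted, using entries from
// the browser's Performance API. Factored into a pure helper so the
// logic works outside a browser too.
function outboundHosts(entries, pageOrigin) {
  const hosts = new Set();
  for (const entry of entries) {
    const url = new URL(entry.name, pageOrigin);
    if (url.origin !== pageOrigin) hosts.add(url.host);
  }
  return [...hosts].sort();
}

// In a DevTools console, after using the app for a few minutes:
//   outboundHosts(performance.getEntriesByType("resource"), location.origin)
// A strong local-only claim should produce an empty or near-empty list.
```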

Read the Privacy Policy for Telemetry Disclosure

A serious product will disclose its telemetry explicitly. Look for words like "crash reporting," "error tracking," "analytics," "diagnostic data," and named services like Sentry or Crashlytics. The presence of one does not by itself break a "no data leaves your device" claim, but it qualifies it. The honest version reads: "no user content leaves your device, but we do collect crash logs that may incidentally include error context." If the policy is silent on telemetry but you are seeing background traffic, something is off.

Look at the Source Code If It Is Open

For open-source projects, you do not have to take anyone's word. Search the repository for fetch, XMLHttpRequest, axios, http., and the equivalents in whatever language the app uses. You do not need to read every line; just confirm the call sites match the claim. A "fully local" app with a fetch to a third-party analytics endpoint deserves a hard question.
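A few lines of script make that search repeatable. The patterns below are a starting set, not exhaustive; adapt them to the project's language and HTTP stack.

```javascript
// Flag lines in a source file that can reach the network.
// Starting-point patterns only; extend for the project's HTTP libraries.
const NETWORK_PATTERNS = [/\bfetch\s*\(/, /XMLHttpRequest/, /\baxios\b/, /\bhttp\./];

function findCallSites(source) {
  return source.split("\n").flatMap((line, i) =>
    NETWORK_PATTERNS.some(p => p.test(line))
      ? [{ line: i + 1, text: line.trim() }]
      : []
  );
}

// Feed it each file's contents, e.g. fs.readFileSync(path, "utf8"), and
// confirm every hit matches what the marketing claims.
```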

Read the App Store and Play Store Data Labels

Apple rolled out App Privacy details on the App Store in December 2020, requiring developers to declare what data is collected and how it is linked to the user. Google followed with the Data Safety section on the Play Store in 2022. Both are mandatory disclosures, which means a developer who claims "no data leaves your device" while the App Store label says "Identifiers, Usage Data, Diagnostics linked to you" is contradicting themselves on a public, regulated surface. The labels are imperfect and the developer is responsible for accuracy, but they are a useful cross-check. Apple documents this on developer.apple.com; Google on support.google.com.

Check Independent Privacy Reviews

The Electronic Frontier Foundation, Mozilla's "Privacy Not Included" buyer's guide, and rigorous tech-press reviews routinely audit privacy claims and find the gap between marketing copy and observable behavior. If a product has been around for a year or two and no one has looked at it, that itself is information.

Common Failure Modes That Break the Claim Without the Developer Lying

Plenty of products that genuinely intend to keep data local end up shipping it off the device through paths the developer did not fully think through. These are not malice; they are the default behavior of modern operating systems and developer tools, and they are easy to miss.

  • Crash reporters that ship stack traces with PII. A user types their address into a buggy form, the crash reporter packages variable state into the trace, and the address ends up in Sentry. The developer never wrote a line that sends user data anywhere; the crash pipeline does it for them.
  • Cloud-synced clipboards. Windows Cloud Clipboard syncs clipboard contents through Microsoft's servers so you can paste on another device; macOS Universal Clipboard transfers them directly between nearby Apple devices over the Continuity channel. Either way, if your "fully local" app touches the clipboard, that data may leave the device regardless of what the app does.
  • OS-level analytics the developer does not control. The OS itself collects diagnostic data about app usage, sometimes with content fragments. An app developer cannot truthfully say "no data about you leaves your device" if the OS is shipping app-launch metadata daily.
  • Third-party scripts that bleed into the app surface. A web app that includes Google Analytics, Facebook Pixel, Hotjar, and a chat widget on the same page where you type your journal entry is making a claim its bundle does not support, even if its own server never logs your text.
  • Auto-updaters that phone home with usage signals. Many include a payload identifying the install, version, and sometimes a coarse usage telemetry blob. Usually disclosed but rarely highlighted.

How We Apply This at Coherence Daddy

One concrete example, kept brief. Optimize Me, our self-help app, is built local-first in the strong sense: notes, journals, and personal data live in the browser's local storage, and the app is functional with the network disconnected after the initial load. Sync, where it exists, is optional and uses end-to-end encryption with a key the server does not see. The architecture exists because the content people put into a self-help app is exactly where the cost of a server breach or a policy change is unacceptable, and because we wanted the privacy claim to be verifiable rather than promised. The verification techniques in this post work on Optimize Me, and they should work on any product making a similar claim. Our content policy documents the editorial side of the same posture.

What "Local-First" Probably Cannot Mean

A few things are sometimes claimed under the local-first banner that the architecture cannot really deliver on.

It cannot mean the app is immune to the device being compromised. If your laptop is running malware, no amount of local-first architecture will save the plaintext sitting in browser storage. Local-first shifts the threat model from "the server is the threat" to "the device is the threat," which is a meaningful improvement but not a complete defense.

It cannot mean you have no responsibility for backups. If your data lives on one device and that device dies, the data is gone. Zero-knowledge encrypted sync lets you have backup without giving up local-first guarantees, but the responsibility is yours.

It cannot mean the developer has no relationship to your data. The developer still ships code that runs on your device, and a malicious update could exfiltrate everything in local storage in a single release. Open-source code, reproducible builds, and signed releases are partial defenses; complete trust elimination is not possible in the current software supply chain.

The Verification Checklist

Run this on any app claiming "no data leaves your device." It takes about ten minutes and will resolve most claims one way or the other.

  • Open the browser's Network tab (or the equivalent OS-level network monitor for native apps), use the app for a few minutes, and watch what requests fire. Strong claims should show near-zero outbound traffic during normal use.
  • Read the privacy policy specifically for the words "telemetry," "analytics," "crash reporting," and "diagnostic data." Note any qualifications to the headline claim.
  • If the project is open source, search the codebase for outbound network calls and confirm they match what the marketing claims.
  • Read the App Store App Privacy details and the Play Store Data Safety section if the app is mobile. Cross-check them against the website's claim.
  • Identify which of the six categories above the claim actually fits. A strong "no data leaves" claim should be either pure client-side or end-to-end encrypted with a verifiable key model.
  • Look for one independent privacy review (EFF, Mozilla's Privacy Not Included, a serious tech-press audit). If the product is more than a year old and no one has looked at it, that itself is information.
  • Check whether the third-party scripts on the page where you actually use the app match the privacy stance. A privacy-respecting app should not be running ad pixels and session-replay scripts on its main interface.

None of this requires being a privacy specialist. It requires spending ten minutes verifying a claim before trusting it with content you would not want leaked. In 2026, that is the minimum viable level of consumer privacy literacy, and it is well within reach.
