Identification: How Few Data Points It Takes
The research on re-identification consistently demonstrates that very few data points are needed to identify a specific person, even in datasets that have been stripped of names and other direct identifiers.
Three attributes — gender, date of birth, and zip code — are sufficient to uniquely identify 87% of the U.S. population. This was demonstrated by Latanya Sweeney (then at Carnegie Mellon, later at Harvard) using 1990 census data [12]. She proved the point by cross-referencing anonymized Massachusetts hospital discharge records with publicly available voter registration rolls to identify the then-governor’s medical records [13].
Four spatio-temporal data points identify 95% of individuals. Researchers at MIT and the Université catholique de Louvain analyzed 1.5 million cellphone users over 15 months and found that just four location-time observations — with fairly low resolution — were enough to uniquely distinguish 95% of them [13].
Fifteen demographic attributes identify 99.98% of Americans. A 2019 study published in Nature Communications by de Montjoye et al. developed a generative model that estimated the re-identification likelihood even in heavily incomplete datasets. They found that 15 demographic attributes would render virtually all Americans uniquely identifiable [14].
Six movie ratings identify 84-99% of users. Narayanan and Shmatikov at the University of Texas de-anonymized Netflix user data (released for a machine learning competition) by cross-referencing it with public IMDB ratings. Six ratings of obscure movies yielded 84% identification; adding approximate timestamps pushed that to 99% [15].
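These results are easy to reproduce in miniature. The following sketch runs a uniqueness audit in the spirit of Sweeney's: it generates a synthetic "de-identified" table and counts how many records are unique on the quasi-identifier triple of gender, date of birth, and ZIP code. All data and parameters are invented for illustration.

```python
# Synthetic uniqueness audit: how many records in a "de-identified" table
# are unique on (gender, date of birth, ZIP)? All data here is randomly
# generated; the point is the combinatorics, not the specific numbers.
import random
from collections import Counter
from datetime import date, timedelta

random.seed(0)

def random_record():
    dob = date(1950, 1, 1) + timedelta(days=random.randrange(50 * 365))
    zip_code = f"{random.randrange(100):05d}"  # 100 hypothetical ZIP codes
    return (random.choice("MF"), dob.isoformat(), zip_code)

# a town of 10,000 people spread over 100 ZIP codes
population = [random_record() for _ in range(10_000)]
counts = Counter(population)

unique = sum(1 for rec in population if counts[rec] == 1)
print(f"{unique / len(population):.1%} unique on (gender, DOB, ZIP) alone")
# With ~3.6 million possible triples and only 10,000 people, nearly every
# record is unique -- no name is required to single it out.
```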
The practical implication is that “anonymized” commercial datasets — the kind sold by data brokers, shared with researchers, or released by companies — are not anonymous in any meaningful sense. The data broker Experian sold Alteryx access to a dataset containing 248 attributes per household for 120 million American households [14]. By the standards of the research above, every one of those households is re-identifiable.
How the Data Is Generated
Participating in modern society generates data about you as a condition of participation. You cannot hold a job, rent an apartment, drive a car, receive medical care, attend school, use a bank, buy groceries with a card, or carry a phone without creating records held by entities you do not control. This is not a side effect of technology; it is a structural feature of how commerce, government, and institutions operate. The question is not whether data about you exists, but how much of it there is, who holds it, and what barriers exist between that data and anyone who wants it.
Financial Transaction Data
Financial records represent one of the most detailed and complete portraits of a person’s life. Your bank and credit card records reveal where you shop, eat, drink, worship, seek medical care, travel, stay overnight, and donate money. They reveal your income, debts, spending patterns, and financial relationships.
What exists: Banks hold transaction histories, account balances, wire transfer records, deposit details, and loan applications. Credit card issuers and payment processors hold every purchase — merchant name, category, amount, location, date, and time. Credit bureaus (Equifax, Experian, TransUnion) aggregate all credit accounts, payment histories, credit inquiries, collections, and public records (bankruptcies, liens, judgments) into credit reports.
Who holds it and why: Financial institutions are required by the Bank Secrecy Act (BSA) to maintain extensive records and to file Suspicious Activity Reports (SARs) with the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) when they detect suspicious transactions [28]. SARs contain personally identifiable information and are filed without the customer’s knowledge [28]. A House Judiciary Committee report from December 2024 documented that in the aftermath of January 6, 2021, the FBI coordinated with FinCEN to encourage financial institutions across the country to search their data and file SARs on hundreds of Americans, using terms like “MAGA” and “TRUMP” as search criteria [28].
Law enforcement access: The Right to Financial Privacy Act (RFPA) of 1978 nominally requires the government to provide advance notice before obtaining financial records, but the Patriot Act created exceptions allowing the FBI to obtain financial records via National Security Letters without court approval for counterintelligence purposes [16]. The 2003 expansion of “financial institution” under the NSL statute now covers not just banks but also casinos, insurance companies, auto dealerships, credit unions, real estate companies, and travel agencies [16]. For ordinary criminal investigations, law enforcement can obtain bank records with a subpoena, court order, or warrant, depending on the type of record. The third-party doctrine as established in United States v. Miller (1976) holds that bank customers have no reasonable expectation of privacy in records they share with their bank [23]. Carpenter did not overrule Miller, and lower courts have specifically found that Carpenter does not extend to location-revealing bank records [29].
The Data You Provide
Beyond what is collected passively, individuals actively generate enormous volumes of data through voluntary interactions:
Retail loyalty programs. When you sign up for Walmart+, Target Circle, CVS ExtraCare, or any grocery store rewards card, you are exchanging a detailed, timestamped, itemized record of every purchase for a discount. This data is owned by the retailer and is generally subject to the third-party doctrine — meaning law enforcement can obtain it with a subpoena.
Online ordering and delivery. Your Pizza Hut order history, your DoorDash delivery addresses, your Amazon purchase history — all of these are held by the company and constitute business records. They reveal dietary habits, household composition, gift recipients, and (via delivery addresses) your whereabouts at specific times.
Smart home devices. Ring doorbell footage is stored on Amazon’s servers by default (unless end-to-end encryption is enabled) [31]. Ring’s law enforcement guidelines state that the company complies with warrants and subpoenas via the Amazon Law Enforcement Request Tracker (ALERT) portal [31]. Ring distinguishes between content (video) and non-content (metadata) — it does not produce content in response to subpoenas, but may produce non-content information [31]. However, Ring claims the right to disclose video without user consent in emergencies involving “danger of death or serious physical injury,” with Amazon making the determination of what qualifies as an emergency unilaterally [32]. In 2025, Ring reversed its 2024 decision to discontinue police access and partnered with Axon to reinstate a “Request for Assistance” feature, through which users can opt in to share footage with local police [33]. Amazon disclosed in 2022 that more than 2,000 police departments had used the Neighbors app to request footage [34].
Vehicle data. Modern cars with telematics systems (OnStar, Ford SYNC, Toyota Connected Services) continuously transmit location, speed, diagnostic, and driving behavior data. When you buy a car, the dealership collects your SSN, income, employer, address, and credit application data. The dealership is now classified as a “financial institution” under the expanded NSL definition [16].
Social media and voluntary disclosure. Everything posted to Facebook, Instagram, X/Twitter, TikTok, etc. is held by the platform and available to law enforcement via the SCA framework. Content of communications stored less than 180 days requires a warrant; non-content metadata (who you communicated with, when, from where) can be obtained with a §2703(d) court order [17].
The Legal Framework: Data Access Is Not Binary
The most common misconception about personal data is that it is either protected or it is not. In practice, data access operates on a gradient. The relevant question is never “can they get my data?” but rather: who is asking, under what authority, at what legal threshold, and through which of the available channels?
A local detective investigating a burglary, an FBI counterintelligence agent pursuing a national security case, a DHS analyst with a purchase order and a data broker account, and a private individual with $20 and access to a people-search website can all obtain information about the same person. What differs is the scope of what they can access, the legal process (if any) required, and the practical barriers in their way. The legal thresholds separating these actors range from “probable cause reviewed by a judge” at the most protective end, down to “a credit card” at the least.
This gradient has consequences. It means that a person’s medical records, which enjoy relatively strong statutory protection under HIPAA, coexist in the same ecosystem as their retail purchase history, which has no specific federal statutory protection at all. It means that the same location data the government would need a warrant to obtain from a carrier (under Carpenter) can be purchased from a data broker with no legal process whatsoever. It means that the question “is my data private?” has a different answer depending on which data, held by whom, sought by whom, and under what legal theory.
The sections that follow lay out the specific legal thresholds, the sector-specific protections and their exceptions, and the structural gap — the data broker loophole — that allows the entire framework to be circumvented through purchase.
Protected Data
A handful of federal statutes create heightened protections for specific categories of data. These are the exceptions to the general rule of weak protection. Each, however, contains law enforcement exceptions.
Health data (HIPAA). The Health Insurance Portability and Accountability Act protects “protected health information” (PHI) held by covered entities (healthcare providers, health plans, healthcare clearinghouses) and their business associates. HIPAA permits disclosure to law enforcement pursuant to a court order, warrant, or subpoena; to report certain types of wounds or injuries as required by state law; when the covered entity believes disclosure is necessary to prevent a serious and imminent threat; and to identify or locate a suspect, fugitive, or missing person [24]. HIPAA does not cover health data collected by wellness apps, fitness trackers, or other consumer technologies that are not covered entities [25]. Your Fitbit heart rate data, your period-tracking app data, and your 23andMe genetic data are not PHI under HIPAA.
Education records (FERPA). The Family Educational Rights and Privacy Act protects student records held by educational institutions receiving federal funding. FERPA permits disclosure to law enforcement units of the educational institution, in compliance with a judicial order or subpoena (with notice to the student in most cases), and in connection with a health or safety emergency [26]. FERPA explicitly exempts certain “law enforcement unit records” maintained by campus police from its protections entirely [26].
Financial data (GLBA). The Gramm-Leach-Bliley Act requires financial institutions to disclose their data-sharing practices and give consumers the option to opt out of some third-party sharing. GLBA defines nonpublic personal information (NPI) to include SSNs, credit history, income data, account numbers, addresses, and phone numbers [27]. However, GLBA’s protections are primarily about disclosure practices and security safeguards, not about law enforcement access, which is governed instead by the RFPA and the NSL statutes.
Credit reports (FCRA). The Fair Credit Reporting Act governs credit bureau activities and restricts who can access credit reports. Reports may be provided in response to a court order, and may be provided to the FBI without a court order via National Security Letters for counterintelligence purposes [16].
The pattern across all of these: Every one of these “protective” statutes contains exceptions for law enforcement access, typically via subpoena, court order, or warrant. For national security purposes, the threshold drops further — the FBI can access financial and credit data via NSLs with no judicial involvement at all.
What Requires a Subpoena
The legal thresholds for law enforcement access exist on a spectrum. Understanding the tiers is essential to understanding how little stands between your data and any level of government:
No legal process required:
- Anything you post publicly on social media
- Data law enforcement purchases from data brokers (the data broker loophole) [18]
- Information voluntarily provided by the data holder (e.g., a neighbor who hands over their Ring footage, a retailer who voluntarily shares records)
- SARs already filed with FinCEN under the Bank Secrecy Act [28]
- Open-source intelligence (OSINT) — anything available on the public internet
Administrative subpoena (no court involvement):
- Basic subscriber information from ISPs and phone companies: name, address, billing records, phone number, types of service, length of service [17]
- National Security Letters can compel financial records, credit reports, and telecom transactional records with no judicial approval [16]
- Grand jury subpoenas for business records (retail purchase histories, loyalty card data, order histories, hotel records, auto purchase records) — a grand jury subpoena is issued by the prosecutor, not by a judge, though it is issued in connection with a grand jury proceeding
Court order (§2703(d) — “specific and articulable facts” standard, below probable cause):
- Transactional email/internet records (who communicated with whom, when, IP addresses)
- Call detail records and metadata beyond basic subscriber info
- Historical cell-site location information for less than seven days (per Carpenter’s threshold)
Search warrant (probable cause, reviewed by judge or magistrate):
- Content of communications stored 180 days or less (recent emails, messages, voicemails)
- Historical CSLI of seven or more days (Carpenter) [19]
- Real-time wiretaps (Title III)
- Ring video content, per Ring’s stated policy [31]
The critical gap: The data broker loophole means that for any data collected by apps, websites, or devices and sold to brokers — which includes precise location data, browsing history, purchase data, and app usage data — any government entity can simply buy it. No subpoena, no court order, no warrant. DHS, ICE, CBP, the FBI, the DEA, and the Secret Service have all done this [18][20]. The House passed the Fourth Amendment Is Not For Sale Act in 2024 by a vote of 219-199, but the Senate did not act [21]. As of March 2026, the loophole remains open.
Layers of Illusory Protection
Several mechanisms that appear to protect personal data provide substantially less protection than most people assume:
“Anonymization.” As documented above, anonymized datasets are re-identifiable with remarkably few data points. The regulatory framework — both the U.S. system and to some extent GDPR — still treats “de-identified” data as falling outside privacy protections. But the research consistently shows that de-identification as practiced is a legal fiction, not a technical fact. Sweeney demonstrated this in 1997 [12]; Narayanan and Shmatikov demonstrated it in 2008 [15]; de Montjoye et al. demonstrated it in 2019 [14]. The problem has gotten worse, not better, as datasets grow richer. The data broker Experian sold a “de-identified” dataset with 248 attributes per household — by any re-identification model, that is effectively identified data being sold as anonymous [14].
Terms of service “consent.” Every app, platform, and service requires you to accept its terms of service and privacy policy before use. These agreements, which virtually no one reads, typically authorize the collection, storage, and sharing of data in broad terms. The legal system treats clicking “I agree” as consent. But as the UIC Law Review has argued, Fourth Amendment consent requires voluntary action based on the totality of circumstances, not the passive, uninformed acceptance that characterizes ToS agreements [22]. The problem is compounded by the fact that participation in modern society often requires using these services — you cannot meaningfully “choose” not to have a bank account, email address, or phone.
Privacy policies. Companies publish privacy policies that describe their data practices. These policies are unilaterally modifiable, often vaguely worded, and provide no contractual commitment to the user. Ring’s privacy policy changed multiple times between 2022 and 2025, with the company alternately restricting and then reinstating law enforcement access to footage [33][34]. A privacy policy is a description of current practice, not a guarantee of future behavior.
Opt-out mechanisms. GLBA gives consumers the right to opt out of some third-party sharing by financial institutions [27]. But the opt-out applies only to certain categories of sharing and does not apply to law enforcement access. Similarly, carrier privacy settings on your phone do not prevent the carrier from tracking which tower serves your device — that data is inherent to the network’s operation and cannot be opted out of [37].
Encryption. End-to-end encryption (E2EE) is the one mechanism that can provide genuine protection — if the provider cannot decrypt the data, it cannot comply with a warrant for the content. Ring offers optional E2EE, but it is disabled by default and most users never enable it [36]. Apple’s iCloud Advanced Data Protection offers E2EE for cloud-stored data, but it must be manually enabled. For most users of most services, data is stored in a form accessible to the provider and therefore accessible via legal process.
The “narrowness” of Carpenter. The Supreme Court’s 2018 decision is frequently cited as establishing that digital-age data deserves Fourth Amendment protection. But the Court explicitly stated its decision was “narrow” and did not express views on real-time CSLI, tower dumps, security cameras, other business records that might reveal location information, or collection techniques involving foreign affairs or national security [19]. Lower courts have taken the hint and consistently cabined Carpenter, declining to extend its warrant requirement to fixed video surveillance, bank records revealing location, online shopping histories, and subscriber information [29]. The result is that Carpenter protects one specific category of data (seven or more days of historical CSLI) while leaving the vast majority of digital data under the weak pre-Carpenter framework.
Federal privacy law inertia. The last major federal statute governing access to electronic communications is the Electronic Communications Privacy Act of 1986, written before the commercial internet existed [17]. Multiple proposals for comprehensive federal privacy legislation have failed. The American Data Privacy and Protection Act (ADPPA) advanced further than any predecessor in 2022 but did not pass. As of 2026, the U.S. has no comprehensive federal data privacy law. The patchwork of sector-specific laws (HIPAA, FERPA, GLBA, FCRA, COPPA, VPPA) leaves enormous categories of data — retail purchases, app data, IoT device data, smart home data, vehicle telematics — with no specific federal statutory protection at all.
ISP Surveillance: The Service Provider Chain, VPNs, and Data Brokering
The Service Provider as Chokepoint
Every byte of data you transmit passes through at least one service provider. In most cases, it passes through several. The provider occupies a unique position in the surveillance landscape: unlike an app or a website, which sees only the data you give it, the ISP sees all of your traffic — every domain you visit, every connection you make, the timing, duration, and volume of every session. The ISP is not one node among many in your data ecosystem. It is the pipe through which everything else flows.
This analysis covers the full service provider chain: cellular carriers (AT&T, Verizon, T-Mobile), home broadband providers (Comcast/Xfinity, Charter/Spectrum, Cox, CenturyLink/Lumen, fiber providers), and business ISPs (often the same carriers’ enterprise tiers, plus dedicated providers like Cogent and Zayo). The surveillance capabilities are structurally identical across all of these. A cellular carrier sees your mobile traffic the same way Comcast sees your home traffic. The legal framework governing what they can do with it is the same.
Without Encryption (HTTP)
For unencrypted traffic — increasingly rare but not extinct — the ISP can see everything: the full URL, the content of the page, form data, search queries, uploaded files, and the content of any communication. This is visible through basic packet inspection and requires no special tools.
With Standard Encryption (HTTPS/TLS)
The majority of web traffic now uses HTTPS. With HTTPS, the ISP cannot see the specific page content, the path within a URL, form submissions, or the body of communications. However, HTTPS does not hide the following [54][55]:
Domain names. The ISP sees every domain you connect to. It knows you visited pornhub.com, plannedparenthood.org, alcoholicsanonymous.org, or the website of a divorce attorney. It does not see which specific pages you viewed on those sites, but the domain alone is often sufficient to draw inferences about the nature of the visit.
DNS queries. By default, your device sends DNS lookup requests through your ISP’s DNS servers. Each lookup reveals which domain you intend to visit before you connect. DNS is the internet’s phonebook, and by default, your ISP reads every entry you look up [54][55]. Encrypted DNS (DNS over HTTPS or DNS over TLS) can close this gap, but it is not enabled by default on most devices and most users do not know it exists.
IP addresses of destination servers. Even if DNS is encrypted, the ISP sees the destination IP address of every connection. For sites hosted on dedicated IPs, this identifies the site. For sites on shared hosting or CDNs, it is less specific but still informative.
Connection metadata. The ISP sees when you connect, how long each session lasts, how much data you transfer, and the pattern of your activity over time [54][55]. This metadata is extraordinarily revealing. A connection to a known streaming IP at 10 PM transferring 3 GB is almost certainly video. Short, frequent connections to a messaging service’s IP reveal communication patterns. Timing correlations between your activity and events in the physical world (you searched for a specific address, then drove there) link online and offline behavior.
Server Name Indication (SNI). During the TLS handshake, the client typically sends the server name in plaintext (the SNI field), allowing the ISP to see which specific domain you’re connecting to on a shared-IP server. Encrypted Client Hello (ECH) is designed to close this gap, but adoption is still limited [56].
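The SNI leak is directly observable. The sketch below, using only the Python standard library, points a TLS client at a throwaway local socket, captures the raw ClientHello it emits, and confirms the hostname appears in it as literal plaintext bytes. The hostname is illustrative and nothing outside the local machine is contacted.

```python
# Capture the ClientHello a real TLS client sends and show that the SNI
# hostname travels in plaintext. Runs entirely against a local socket.
import socket
import ssl
import threading

captured = {}

def capture_client_hello(server: socket.socket) -> None:
    conn, _ = server.accept()
    captured["hello"] = conn.recv(4096)  # first TLS record: the ClientHello
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
thread = threading.Thread(target=capture_client_hello, args=(server,))
thread.start()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
raw = socket.create_connection(server.getsockname(), timeout=1)
try:
    # the handshake cannot complete; we only need the client's first flight
    ctx.wrap_socket(raw, server_hostname="plannedparenthood.org")
except OSError:
    pass
finally:
    raw.close()
thread.join()
server.close()

# Absent Encrypted Client Hello, the domain is readable by any on-path
# observer, the ISP included.
print(b"plannedparenthood.org" in captured["hello"])  # True
```

A plaintext DNS query leaks the same hostname even earlier, before any connection is made.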
Mobile Carriers
Cellular carriers see all of the above, plus they hold the cell-site location information (CSLI) and call detail records analyzed in the earlier SIM network tracking section. A mobile carrier therefore has a combined view of your internet activity and your physical movements — a uniquely comprehensive portrait that no home broadband ISP possesses.
Business ISPs
Enterprise ISPs provide the same transport layer and have the same visibility. Additionally, businesses often deploy their own monitoring infrastructure (firewalls, proxy servers, DLP systems) that inspects traffic at the organizational boundary. Employees on corporate networks should assume that all traffic not routed through a personal VPN is visible to their employer’s IT department, in addition to the ISP.
The 2017 Regulatory Collapse
In October 2016, the FCC under Chairman Tom Wheeler adopted rules requiring ISPs to obtain opt-in consumer consent before using or sharing browsing data, app usage history, location data, and other sensitive information with third parties [57]. These rules were scheduled to take effect in 2017.
They never took effect. On March 23, 2017, the Senate voted 50-48 to repeal the rules via S.J.Res. 34 under the Congressional Review Act. The House followed on March 28, 215-205. President Trump signed the repeal on April 3, 2017 [57][58]. Critically, repeal under the Congressional Review Act not only nullified the existing rules but barred the FCC from adopting substantially similar rules in the future [58][59].
The repeal created a regulatory vacuum. The FCC, which had jurisdiction over ISPs as telecommunications carriers, was barred from reimposing privacy rules. The FTC, which regulates privacy for most other industries, had been found by the Ninth Circuit to lack jurisdiction over common carriers (a classification the FCC had applied to ISPs under its 2015 Open Internet Order). As Senator Brian Schatz stated at the time, the result was that “neither the FCC nor the FTC will have clear authority when it comes to how Internet service providers protect consumers’ data privacy and security” [57].
The telecom industry’s lobbying arm, CTIA, argued during this period that “web browsing and app usage history are not ‘sensitive information’” [57]. Privacy attorney Dallas Harris noted the opposite: ISPs can infer where you bank, your political views, and your sexual orientation from browsing patterns alone [57].
What ISPs Do With the Data
ISPs monetize user data through several channels:
Direct advertising partnerships. ISPs build behavioral profiles from browsing data and sell access to targeted advertising. AT&T, Verizon, and Comcast have all operated or invested in advertising platforms that leverage subscriber data.
Data aggregation and sale. ISPs sell summarized or “anonymized” browsing data to data brokers and analytics firms [55][60]. As the earlier section on re-identification demonstrated, “anonymized” datasets with sufficient attributes are trivially re-identifiable. An ISP dataset containing timestamped domain visits linked to household-level identifiers is functionally identified data sold as anonymous data.
“Supercookies” and persistent tracking. Verizon was caught in 2014-2015 injecting unique identifier headers (UIDH, dubbed “supercookies”) into all of its mobile subscribers’ HTTP traffic. These identifiers could not be deleted by the user and were visible to every website the user visited, allowing third-party advertisers to track Verizon subscribers across the web even after they cleared cookies. Verizon was fined $1.35 million by the FCC and required to obtain opt-in consent before sharing UIDH data [61]. A toy sketch of this injection mechanism appears after this list.
Carrier data to location aggregators. As documented in the earlier analysis, AT&T, Verizon, T-Mobile, and Sprint sold real-time subscriber location data to aggregators like LocationSmart and Zumigo, resulting in the FCC’s $200 million fine in 2024 [30]. This is the cellular-specific dimension of ISP data brokering: the carrier’s unique ability to geolocate subscribers was commercialized and sold downstream to law enforcement, bounty hunters, and others without subscriber consent.
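To make the supercookie mechanism concrete, here is a minimal sketch of in-path header injection on plaintext HTTP. The header name X-UIDH matches the public reporting on the Verizon case; deriving the identifier from a hash of the account ID is an assumption made for the sketch.

```python
# Toy header-injection tracker: an in-path middlebox stamps every
# plaintext HTTP request with a persistent per-subscriber identifier.
# Hashing the account ID is illustrative; the real derivation was not public.
import hashlib

def inject_uidh(request_headers: dict, subscriber_account: str) -> dict:
    uidh = hashlib.sha256(subscriber_account.encode()).hexdigest()[:24]
    # the same value rides on every request to every site and cannot be
    # cleared by the user the way a browser cookie can
    return {**request_headers, "X-UIDH": uidh}

print(inject_uidh({"Host": "example.com"}, "acct-0042"))
```

Because the identifier is added below the browser, clearing cookies or using private browsing has no effect; only HTTPS keeps the middlebox from reading and modifying the request.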
The Structural Asymmetry
Unlike a website or app, which a user can choose not to use, an ISP is a mandatory intermediary. In much of the United States, consumers have one or two broadband options. The FCC’s own data has repeatedly shown limited competition in fixed broadband markets. You cannot opt out of having an ISP without opting out of the internet.
This makes the ISP privacy problem structurally different from the app privacy problem. When Facebook tracks you, you can (in theory) stop using Facebook. When your ISP tracks you, your only options are to encrypt everything (VPN, encrypted DNS), to accept the surveillance, or to disconnect.
What a VPN Does
A VPN creates an encrypted tunnel between your device and a VPN server. All traffic passes through this tunnel before reaching its destination. From the ISP’s perspective, instead of seeing hundreds of connections to different domains throughout the day, it sees a single persistent connection to one IP address (the VPN server), with all content encrypted [62][63].
With a properly configured VPN, the ISP cannot see:
- Which domains or websites you visit
- DNS queries (if the VPN routes DNS through the tunnel)
- The content of any communication
- Which services or apps you are using
- Specific pages, searches, or downloads [62][63]
What the ISP Can Still See With a VPN
A VPN is not invisibility. The ISP retains visibility into [62][63][64]:
That you are using a VPN. The ISP sees encrypted traffic flowing to a known VPN provider’s IP address. Deep Packet Inspection (DPI) can identify VPN protocol signatures even without decrypting the content [56][64]. Some VPN providers offer “obfuscation” or “stealth” modes that disguise VPN traffic as ordinary HTTPS traffic, reducing detectability, but this is an arms race [56].
Connection timing. The ISP sees when you connect and disconnect from the VPN, session duration, and your connection schedule over days and weeks. This creates a behavioral pattern even without content visibility [63][64].
Data volume. The ISP sees how much data you upload and download through the VPN tunnel. Sudden spikes suggest large downloads or video streaming. The ISP cannot determine what you are streaming, but can infer that you are streaming based on bandwidth patterns [62][63]. A toy version of this inference is sketched after this list.
The VPN server’s IP address. This identifies which VPN service you use, and the approximate geographic location of the server you’ve chosen [62][63].
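Even this residual metadata supports inference. The sketch below applies invented thresholds to the duration and volume figures an ISP sees for VPN sessions; the records and cutoffs are hypothetical, but the style of reasoning is the one described above.

```python
# Toy activity inference from nothing but session duration and volume.
# Thresholds are invented for illustration, not drawn from real ISP tooling.
def guess_activity(duration_s: int, megabytes: float) -> str:
    mbps = megabytes * 8 / duration_s  # average throughput in megabits/s
    if mbps > 3 and duration_s > 600:
        return "likely video streaming"
    if megabytes > 500 and duration_s < 600:
        return "likely bulk download"
    if mbps < 0.1:
        return "likely idle or messaging"
    return "interactive browsing"

# one evening of VPN sessions as the ISP sees them: (start, seconds, MB)
sessions = [("19:02", 5400, 3100.0), ("21:15", 120, 800.0), ("22:40", 1800, 9.0)]
for start, dur, mb in sessions:
    print(start, guess_activity(dur, mb))
```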
The Trust Transfer Problem
A VPN does not eliminate surveillance. It transfers the surveillance point from your ISP to your VPN provider. Your VPN provider occupies exactly the same position your ISP previously did: it sees every connection, every domain, every DNS query, and all metadata [65]. The critical question becomes whether your VPN provider is more trustworthy than your ISP.
The VPN industry addresses this with “no-logs” policies — promises that the provider does not record user activity. Some providers submit to independent audits of their no-logs claims. But no-logs policies are contractual commitments, not technical guarantees. A VPN provider that claims to keep no logs may be compelled by legal process (warrant, court order, NSL) to begin logging a specific user’s traffic going forward. The user would not know this has happened. In jurisdictions with mandatory data retention laws, VPN providers may be legally required to retain certain records regardless of their stated policy.
Several VPN providers have been tested by real-world legal demands. In some cases, providers headquartered in privacy-friendly jurisdictions (Panama, the British Virgin Islands, Switzerland) have been able to respond to law enforcement requests by truthfully stating they have no records to provide. In other cases, providers that claimed no-logs policies were found to have maintained logs when subpoenaed.
What VPNs Cannot Address
Traffic analysis. Even with content encrypted and destinations hidden, the timing and volume of traffic flowing through a VPN can be analyzed statistically. If an adversary can observe both the user’s connection to the VPN and the VPN’s connection to a destination (a “global passive adversary”), correlation attacks can match traffic patterns to deanonymize the user. This is not a typical ISP’s capability, but it is within the reach of nation-state intelligence agencies [66]. A toy version of such a correlation attack is sketched after this list.
Endpoints outside the tunnel. A VPN only protects data in transit between your device and the VPN server. Once traffic exits the VPN server and reaches its destination, it is subject to the same tracking by websites, apps, cookies, and browser fingerprinting that exists without a VPN. A VPN changes your apparent IP address but does not prevent websites from identifying you through logged-in accounts, browser fingerprints, or persistent cookies.
DNS leaks and misconfigurations. If the VPN is improperly configured, DNS queries may “leak” outside the encrypted tunnel and be visible to the ISP. IPv6 leaks can similarly expose traffic if the VPN only tunnels IPv4. WebRTC leaks in browsers can reveal the user’s real IP address. These are implementation failures, not fundamental limitations, but they affect many users who assume a VPN provides complete coverage [64].
The VPN app itself. The VPN application runs on your device and requires trust. A malicious or compromised VPN app could log activity, inject tracking, or exfiltrate data. Free VPN services are particularly suspect — the operating costs of running a VPN service are substantial, and if the user is not paying, the business model likely involves monetizing user data. Research has repeatedly found free VPN apps that track users, inject ads, or contain malware.
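To illustrate the traffic-analysis point: below is a minimal sketch of a timing-correlation attack on synthetic data. The observer bins traffic volume into one-second windows on the entry side (each user's ISP) and the exit side (leaving the VPN), then matches the series. It requires Python 3.10+ for statistics.correlation; all traffic patterns are fabricated.

```python
# Toy timing-correlation attack: correlate per-second volume series seen
# entering the VPN (one per user) against a flow seen leaving it. The
# user whose pattern best matches the exit flow is the likely source.
import random
from statistics import correlation  # Python 3.10+

random.seed(1)
WINDOWS = 300  # five minutes of one-second bins

def burst_pattern() -> list[int]:
    # bursty traffic: mostly silence, occasional small and large bursts
    return [random.choice([0, 0, 0, 1, 5, 20]) for _ in range(WINDOWS)]

# entry-side volume series for three VPN users, observed at their ISPs
users = {name: burst_pattern() for name in ["alice", "bob", "carol"]}
# exit-side flow leaving the VPN: bob's traffic plus a little jitter
exit_flow = [v + random.choice([0, 1]) for v in users["bob"]]

scores = {name: correlation(series, exit_flow) for name, series in users.items()}
print(max(scores, key=scores.get))  # 'bob', by a wide margin
```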
The Full Chain: How ISP Data Reaches Adversaries
Putting the pieces together, ISP data reaches parties who can use it against the subscriber’s interests through several distinct channels:
Direct law enforcement access. Under the Stored Communications Act, law enforcement can obtain subscriber information with an administrative subpoena, transactional records with a §2703(d) court order, and content with a warrant. ISPs are legally required to comply and are immunized from civil liability for doing so [17]. The FBI can obtain telecom transactional records via National Security Letters without any judicial involvement [16].
Data broker purchases. ISPs sell “anonymized” browsing data to data brokers, who aggregate it with data from other sources and sell it onward. Government agencies then purchase this aggregated data, bypassing the warrant requirements that would apply if they sought the same data directly from the ISP [18]. This is the same data broker loophole documented in the prior analysis, applied to ISP data specifically.
Carrier location data sales. Cellular carriers sold real-time location data through aggregators like LocationSmart, which reached law enforcement (via Securus), bounty hunters, and others [30]. The carriers were fined $200 million in 2024, but the fundamental capability — the carrier’s ability to geolocate any subscriber in real time — remains inherent to the network.
Compelled and voluntary cooperation. Beyond formal legal process, ISPs may cooperate with government requests informally. The NSA’s PRISM and upstream collection programs, revealed by Edward Snowden in 2013, demonstrated that major ISPs and technology companies had provided the intelligence community with access to communications infrastructure. The legal authorities underlying these programs (Section 702 of FISA, Executive Order 12333) remain in effect.
Data breaches. ISP databases containing subscriber records, browsing histories, and billing information are targets for hackers. A breach of ISP records exposes not a single service’s data but the full breadth of a subscriber’s internet activity.
What This Means in Practice
The ISP surveillance problem has a specific structural character that distinguishes it from other privacy threats:
You cannot opt out. Using the internet requires an ISP. The ISP sees everything unless you take affirmative steps to encrypt.
The default is total visibility. Without a VPN, without encrypted DNS, without any protective measure, the ISP sees every domain you visit, every DNS query, every connection time, and every byte of data. This is the starting point for every subscriber who does nothing.
The 2017 repeal removed the only federal rule that would have required consent. The FCC cannot reimpose similar rules. The FTC’s authority is uncertain. No replacement legislation has been enacted. As of 2026, there is no federal law requiring ISPs to obtain subscriber consent before selling browsing data [57][59].
A VPN helps significantly but is not a complete solution. It blinds the ISP to your destinations and content, but the ISP still sees connection metadata and data volumes, and you have transferred trust to the VPN provider. The VPN provider is subject to legal compulsion in whatever jurisdiction it operates.
Cellular carriers are the most dangerous ISPs. They combine internet traffic surveillance with real-time physical location tracking — a capability no fixed broadband ISP has. Post-Carpenter, seven or more days of historical CSLI requires a warrant, but real-time location, shorter durations, and tower dumps remain in legal gray zones [19].
Cellular Network Surveillance
Tracking Mechanisms
This analysis examines the distinct ways that the cellular/SIM network infrastructure itself can be used to track individuals. Each mechanism exploits a different layer of the network architecture, from core signaling protocols to the SIM card hardware to the commercial relationships between carriers and data brokers.
1. SS7 Protocol Exploitation
Signaling System No. 7 (SS7) is a set of telephony signaling protocols developed in the 1970s that underpins call routing, SMS delivery, and subscriber authentication across the global public switched telephone network [67]. SS7 was designed as a closed, trusted system between cooperating carriers. It was never built with authentication or access control between network operators [68].
The core tracking mechanism works as follows: SS7 allows any operator on the network to query another operator’s Home Location Register (HLR) and Visitor Location Register (VLR) databases to determine which cell tower a subscriber’s phone is currently connected to [69]. This was designed for legitimate roaming and billing purposes, but because the protocol lacks authentication, anyone who gains access to the SS7 network can issue these queries for any subscriber worldwide.
SS7 vulnerabilities were publicly reported as early as 2008 and demonstrated by German security researchers in 2014, who showed tracking was possible with approximately 70% success rates [67]. In 2017, the German mobile operator O2 Telefónica confirmed that SS7 vulnerabilities had been exploited to bypass two-factor authentication and drain bank accounts [67]. As recently as late 2024, Enea’s Threat Intelligence Unit detected a surveillance vendor in the Middle East exploiting a novel SS7 bypass technique that manipulated the TCAP (Transaction Capabilities Application Part) layer using obscure “extended tag encoding” to evade SS7 firewalls [70][71]. The attack could locate a subscriber to the nearest cell tower, which in dense urban areas narrows to a few hundred meters [70].
SS7 tracking is not limited to 2G/3G networks. The successor protocol for 4G networks, Diameter, inherits many of the same architectural vulnerabilities because LTE networks frequently interwork with SS7 for fallback services, and the trust model between operators persists [72]. The GSMA estimated in 2021 that 30% of mobile connections still used 2G/3G access [68], and SS7 tracking will remain viable as long as these networks operate.
The U.S. Department of Homeland Security confirmed as early as 2017 that China, Iran, Israel, and Russia had all exploited SS7 to surveil U.S. mobile subscribers [70]. The FCC began publicly addressing SS7 security in 2024, requesting information from carriers about incidents and defenses [73].
2. Cell-Site Simulators (IMSI Catchers / Stingrays)
Cell-site simulators (CSS), also known as IMSI catchers or by the Harris Corporation brand name “Stingray,” are devices that impersonate legitimate cell towers to force nearby phones to connect to them [74]. They exploit the design feature in cellular protocols whereby mobile devices connect to whichever tower presents the strongest signal.
There are two categories. Passive IMSI catchers intercept cellular transmissions from the air without transmitting, analogous to an FM radio receiver. Active cell-site simulators broadcast signals stronger than nearby legitimate towers, forcing phones to connect and reveal their IMSI (International Mobile Subscriber Identity) numbers, IMEI (device identifiers), and location [74].
Once a target phone connects, the operator can determine its precise location via signal strength measurements. If the target IMSI is known, the operator screens incoming connections against it. If the target is unknown, the simulator collects identifiers from every phone in range, then the operator cross-references this with visual surveillance to isolate a specific individual [75].
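The winnowing step in the unknown-target case is plain set arithmetic, as the toy example below shows with fabricated identifiers: intersect the IMSI sets captured at each time and place the target was visually confirmed present.

```python
# Toy target isolation: each set holds the IMSIs a simulator captured at
# one time/place where the target was seen. Identifiers are fabricated.
captures = [
    {"310150111111111", "310150222222222", "310150333333333"},
    {"310150222222222", "310150333333333", "310150444444444"},
    {"310150222222222", "310150555555555"},
]
candidates = set.intersection(*captures)
print(candidates)  # only the target's IMSI survives three captures
```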
Modern CSS can also force protocol downgrades. Because 4G/LTE devices have stronger authentication, some IMSI catchers reject tracking area update requests, forcing the target phone to fall back to less-secure 2G, where encryption can be defeated and communications intercepted [76]. Harris Corporation products in this category include the StingRay, Hailstorm, ArrowHead, AmberJack, and KingFish (a hand-carried version) [74]. These devices can be mounted in vehicles or on aircraft, helicopters, and drones [77].
In the United States, CSS have been deployed by the FBI, U.S. Marshals Service, ICE, DHS, and the Secret Service, often without warrants and sometimes without disclosing their use to courts [74][78]. A 2023 Congressional Oversight Committee report found that ICE, DHS, and the Secret Service had all repeatedly used CSS without following their own policies [74]. The EFF released an open-source detection tool called Rayhunter in 2025, which runs on an inexpensive mobile hotspot and monitors for indicators of CSS activity [79].
The international dimension is significant. Between February 2015 and April 2016, over 12 companies in the UK were authorized to export IMSI catcher devices to Saudi Arabia, the UAE, and Turkey [77]. CSS have been documented in use in Canada, Ireland, and numerous other countries [77].
3. Cell Tower Triangulation and Cell-Site Location Information (CSLI)
Whenever a phone is powered on, it connects to nearby cell towers and generates time-stamped cell-site location information (CSLI) that carriers store for billing and network management purposes [19]. This happens continuously, whether or not the user is making a call, and constitutes a persistent location record.
Carriers can locate a phone using several methods: single-tower identification (placing the phone within the tower’s coverage area, which ranges from a few blocks in urban areas to over 20 square miles rurally); triangulation using signal strength from multiple towers; and GPS-assisted pinging, which can locate a phone to within 5-10 feet [80].
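Triangulation reduces to solving for the point consistent with several distance estimates. A minimal sketch with invented coordinates: subtracting the first circle equation from the others linearizes the problem into a small linear system.

```python
# Toy trilateration from three towers (all numbers invented). For towers
# at (xi, yi) with ranges di, subtracting the first circle equation from
# the others yields a 2x2 linear system in the handset position (x, y).
towers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # tower (x, y), meters
dists = [500.0, 670.82, 806.23]                      # estimated ranges

(x1, y1), d1 = towers[0], dists[0]
A, b = [], []
for (xi, yi), di in zip(towers[1:], dists[1:]):
    A.append([2 * (xi - x1), 2 * (yi - y1)])
    b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)

# solve the 2x2 system by Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = (b[0] * A[1][1] - A[0][1] * b[1]) / det
y = (A[0][0] * b[1] - b[0] * A[1][0]) / det
print(f"estimated position: ({x:.0f} m, {y:.0f} m)")  # ~(400 m, 300 m)
```

Real deployments add measurement noise and the reliability caveats below, but the underlying geometry is this simple.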
CSLI comes in two forms. Historical CSLI reconstructs past movements from stored records. Prospective (real-time) CSLI tracks current location, sometimes by “pinging” the phone to force it to report its position [80].
There are important reliability caveats. The assumption that a phone connects to the nearest tower is not always accurate. Carrier algorithms consider network congestion, tower capacity, geography, weather, and other factors when assigning connections [81]. This has led to wrongful convictions, as in the case of Lisa Marie Roberts in Oregon, who pled guilty to manslaughter in 2004 partly based on cell tower evidence that an appellate court later found scientifically unreliable [81].
In Carpenter v. United States, 585 U.S. 296 (2018), the Supreme Court ruled 5-4 that obtaining seven or more days of historical CSLI constitutes a Fourth Amendment search requiring a warrant based on probable cause [19]. The government had obtained 12,898 location points over 127 days for the defendant — an average of 101 data points per day — without a warrant [82]. Chief Justice Roberts wrote that CSLI provides the government with something akin to an ankle monitor attached to the phone user [83]. The decision was narrow, however, and explicitly did not address real-time CSLI, tower dumps, or national security contexts [19].
4. Carrier Data Sales to Aggregators and Brokers
The major U.S. carriers — AT&T, Verizon, T-Mobile, and Sprint — sold real-time subscriber location data to data aggregation companies, principally LocationSmart and Zumigo, who in turn resold it downstream to a wide variety of customers [84][85].
This came to public attention in 2018 when the New York Times reported that Securus Technologies, a prison communications company, had been providing a location-finding service to law enforcement that could locate any phone on the major U.S. networks [84]. A former sheriff in Missouri used the service to track a judge and other law enforcement officers without warrants [85]. Securus obtained its data through 3CInteractive, which sourced it from LocationSmart [86].
A Carnegie Mellon University researcher, Robert Xiao, then discovered that LocationSmart’s public demo website had an API vulnerability that allowed anyone to bypass authentication and geolocate any phone on AT&T, Sprint, T-Mobile, or Verizon — using nothing but a phone number [84]. Subsequent reporting by Motherboard found that LocationSmart was also selling data through a company called CerCareOne to bounty hunters and bail bondsmen [87].
Verizon disclosed that approximately 75 companies had been obtaining its customer location data through LocationSmart and Zumigo [85]. All four carriers announced they would terminate their aggregator relationships in mid-2018, but the FCC found they continued selling data for nearly a year afterward [30]. In a separate case, a Deputy U.S. Marshal, Adrian Pena, was charged with using the Securus service between 2016 and 2017 to track personal acquaintances and their spouses, uploading fabricated legal documents to do so [88].
In April 2024, the FCC fined the carriers a combined approximately $200 million: T-Mobile $80 million, AT&T $57 million, Verizon $47 million, and Sprint $12 million [30]. The carriers stated they intend to appeal [30].
The fundamental structural problem, as Brian Krebs noted, is that even with phone-level location and privacy settings disabled, a carrier must still track which tower serves a phone for the network to function — and there is no way for a subscriber to opt out of this [37].
5. SIM Card Software Exploitation (Simjacker)
In 2019, AdaptiveMobile Security disclosed a vulnerability dubbed Simjacker that attacks the SIM card itself [89]. The exploit targets the S@T Browser (SIMalliance Toolbox Browser), a legacy application embedded on SIM cards since the early 2000s that was originally designed to enable carrier menu services. Despite not being updated since 2009, it remains installed on SIM cards used by operators in at least 29 countries across the Americas, West Africa, Europe, and the Middle East [89][90].
The attack works by sending a specially crafted binary SMS message to a target phone. The message contains SIM Toolkit (STK) instructions that are passed to and executed by the S@T Browser on the SIM card. The code instructs the SIM to query the handset for its IMEI and location, then exfiltrate this information via a second SMS to the attacker’s number [89][91]. The target user receives no notification of the incoming attack SMS, the data query, or the outgoing data message — nothing appears in any inbox or outbox [91].
AdaptiveMobile reported that a private surveillance company — which they assessed was working with governments — had been actively exploiting Simjacker since at least late 2018, tracking the location of thousands of individuals, primarily in Mexico, Colombia, and Peru [89][92]. Some targets were queried hundreds of times per week [91]. The vulnerability is device-agnostic: phones from Apple, Samsung, Google, Huawei, Motorola, ZTE, and even IoT devices with SIM cards were successfully targeted [90].
EU-CERT assessed that up to one billion devices globally could be affected [90]. Unlike SS7 attacks, which require access to the signaling network, Simjacker requires only a phone number and can be executed using a $10 GSM modem [92].
6. SIM Swapping
SIM swapping (also called SIM hijacking) is an attack in which a fraudster convinces a mobile carrier to transfer a victim’s phone number to a SIM card under the attacker’s control [93]. This is not primarily a tracking mechanism, but it enables tracking and broader surveillance by giving the attacker control over all calls and SMS messages directed to the victim’s number.
Once in control, the attacker can intercept SMS-based two-factor authentication codes, reset passwords for email, banking, and cloud accounts, and monitor all incoming communications [93]. For espionage and surveillance purposes, a SIM swap gives the attacker the ability to monitor the victim’s communications, track their location through services tied to their phone number, and gather information for blackmail or manipulation [94].
The FBI received 1,600 complaints about SIM swapping in 2021, with victims losing $68 million — a dramatic increase from $12 million in losses during the entire 2018-2020 period [93]. Reports to the UK National Fraud Database rose over 1,000% from 2023 to 2024 [93]. The 2019 SIM swap attack on then-Twitter CEO Jack Dorsey’s account demonstrated the technique’s viability against high-profile targets [93].
Carriers have begun implementing countermeasures such as SIM Protection locks (Verizon) and port-out freezes, but the rollout of eSIM technology has opened new attack vectors, since attackers who compromise a carrier account can initiate over-the-air profile downloads to a new device without physical access to any SIM card [95].
Summary of Tracking Dimensions
| Mechanism | Access Required | Precision | User Awareness | Scale |
|---|---|---|---|---|
| SS7 exploitation | SS7 network access | Cell-tower level | None | Global |
| Cell-site simulators | Physical proximity + device | Sub-tower level | None | Local radius |
| CSLI (carrier records) | Legal process or carrier cooperation | Tower to GPS level | None | Per-subscriber |
| Carrier data sales | Commercial relationship | Tower level or better | None | All subscribers |
| Simjacker | Phone number + GSM modem | Cell-tower level | None | Per-SIM vulnerability |
| SIM swapping | Social engineering of carrier | N/A (enables other attacks) | Victim loses service | Per-target |
A common thread across all six mechanisms: the user has no technical ability to prevent the tracking (short of powering off the phone or using a Faraday bag), and in most cases receives no indication that tracking is occurring.
The Physical Surveillance Network
Flock Safety: Surveillance as a Service
Flock Safety is a private company, founded in 2017, that has built the largest automated license plate reader (ALPR) network in the United States. As of 2025, it claims to operate in over 5,000 communities across 49 states, deploying nearly 90,000 cameras that perform over 20 billion vehicle scans per month [38][39]. In September 2025, Flock raised $275 million at a $7.5 billion valuation [40]. It is not a government program. It is a private company selling surveillance as a subscription service.
How It Works
Flock’s core product is the Falcon, a solar-powered, LTE-connected camera mounted on a pole, typically at intersections and neighborhood entry points. The camera photographs the rear of every passing vehicle and uses computer vision to read the license plate, then transmits the plate number, timestamp, location, and a photograph to Flock’s cloud servers via the cellular network [38]. Beyond the plate number, Flock’s software captures what the company calls a “Vehicle Fingerprint” — the make, model, color, and distinguishing features of each vehicle (bumper stickers, roof racks, aftermarket modifications), allowing identification even when plates are obscured or missing [40][41].
The data feeds into a cloud platform accessible to Flock’s customers: police departments, sheriff’s offices, homeowner associations, apartment complexes, businesses, and schools. Officers can search the database by plate number, partial plate, vehicle description, or time/location range. They can set “hot lists” that trigger real-time alerts when a flagged plate passes any camera in the network. And critically, they can share data across the entire network [39].
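The hot-list mechanic reduces to a membership check on every read, with retention regardless of outcome. The sketch below is a toy model; the schema and field names are invented, not Flock's actual API.

```python
# Toy ALPR hot list: every read is retained and searchable; a read that
# matches a flagged plate also fires a real-time alert. Invented schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str
    camera_id: str
    seen_at: datetime

HOT_LIST = {"ABC1234"}
DATABASE: list[PlateRead] = []

def ingest(read: PlateRead) -> None:
    DATABASE.append(read)  # stored whether or not anything matches
    if read.plate in HOT_LIST:
        print(f"ALERT: {read.plate} passed {read.camera_id} at {read.seen_at}")

ingest(PlateRead("XYZ9876", "cam-03", datetime.now()))  # silently retained
ingest(PlateRead("ABC1234", "cam-17", datetime.now()))  # fires an alert
```

The retention step, not the alert, is what makes the network searchable across agencies after the fact.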
The Data Sharing Architecture
The data sharing model is the core of the problem. Police departments that contract with Flock can choose to share their data with no other agencies, with specific named agencies, with all agencies in their state, or with every agency in the entire nationwide Flock network [39]. The ACLU of Massachusetts documented over 450,000 searches of the nationwide database in a single 30-day period in the summer of 2025, conducted by agencies from across the country [39].
This means a camera installed by a suburban HOA in Georgia can generate data searchable by a police department in Massachusetts, a sheriff’s office in Texas, or — as documented extensively — federal immigration enforcement agencies. The University of Washington’s Center for Human Rights found that at least eight Washington state law enforcement agencies enabled direct sharing with U.S. Border Patrol in 2025, and that Border Patrol had “back door” access to at least ten additional agencies that had not explicitly authorized sharing [42]. The EFF’s analysis of more than 12 million searches logged by 3,900 agencies between December 2024 and October 2025 found hundreds of searches related to political protests — the 50501 protests, Hands Off protests, and No Kings protests — as well as targeted surveillance of animal rights activists and racially discriminatory searches targeting Romani people [43].
The Product Expansion
Flock is no longer just a license plate reader company. Its product line now includes:
Flock Nova — Announced in May 2025, Nova is described as a “public safety data platform” that supplements ALPR data with information from public records and commercially available data to track specific individuals without a warrant. As of May 2025, it was already in use by law enforcement in an Early Access program. The EFF described it as a “dystopian panopticon.” After 404 Media reported that Nova would also incorporate data from data breaches, Flock removed that feature [38].
Flock Raven — A gunshot detection system, similar to ShotSpotter, that uses microphones to detect and locate gunfire. In October 2025, Flock announced Raven would also begin listening for “human distress” — effectively positioning police-monitored microphones in public spaces that alert on screaming. After public backlash, Flock altered its marketing language but the capability remains [38][40].
Condor PTZ cameras — Pan-tilt-zoom surveillance cameras with AI-powered tracking of people, vehicles, and objects. Flock announced that police will be able to obtain not just still photos but live video feeds and 15-second clips, searchable using natural language AI queries [41].
Drone as First Responder — Autonomous drones that can be dispatched to 911 calls before officers arrive, providing aerial surveillance [38].
Municipal Resistance
Not all communities have accepted Flock. Denver’s City Council unanimously rejected a $666,000 Flock contract extension. Eureka, California voted down a plan for 21 cameras. Gig Harbor, Washington rejected a proposal for 10 cameras. Berkeley, California has engaged in sustained public debate [40]. The EFF and ACLU of Northern California filed a lawsuit against San Jose in November 2025, challenging warrantless searches of millions of ALPR records [43]. The Institute for Justice filed a federal lawsuit against Norfolk, Virginia, arguing that the city’s Flock deployment violates the Fourth Amendment [38]. Virginia’s Court of Appeals ruled in October 2025 that license plate readers do not require warrants, but the federal case continues [40].
The Ring Camera Surveillance Network
Amazon’s Ring represents the privatization of neighborhood surveillance. An estimated 10 million Americans have Ring cameras [40]. These cameras, primarily doorbell cameras but also floodlights, stick-up cameras, and indoor cameras, record continuously and store footage on Amazon’s cloud servers.
The Law Enforcement Partnership
Ring’s relationship with law enforcement has oscillated. At its peak, more than 2,000 police departments had access to the “Request for Assistance” feature in Ring’s Neighbors app, through which officers could request footage from Ring users in the vicinity of an incident [35]. In January 2024, Ring shut down the Request for Assistance tool following years of criticism from privacy advocates [35]. In 2025, Ring reversed course: founder Jamie Siminoff returned to the company in April, and in October, Flock Safety and Ring announced a partnership through which agencies using Flock’s Nova platform could request footage from Ring users [38][40]. The partnership routes requests through Axon’s secure platform, and users must opt in to share [33]. After Amazon ran a Super Bowl LX advertisement promoting the integration that drew public backlash, the companies canceled the planned integration before it launched — Ring stated that “no Ring customer videos were ever sent to Flock Safety” [38].
Regardless of any voluntary sharing program, Ring complies with warrants and subpoenas through the Amazon Law Enforcement Request Tracker (ALERT) portal. Ring distinguishes between content (video) and non-content (metadata); it does not produce video content in response to subpoenas, but may produce non-content information. Ring claims the right to share footage without user consent in emergencies involving “danger of death or serious physical injury,” with Amazon making the determination unilaterally [31].
The Scale Problem
The aggregate effect of millions of privately owned cameras, many of which are positioned to capture public sidewalks and streets, is a surveillance infrastructure that no government could have built on its own. Residents install Ring cameras voluntarily, pay for the hardware and subscription themselves, and — through Neighbors app posts, voluntary sharing with police, and compliance with legal process — create a surveillance network that covers residential streets at a density no municipal camera program has ever achieved. The footage is stored on Amazon’s servers, accessible to Amazon, to law enforcement via legal process, and to anyone the user chooses to share it with.
License Plate Readers: The Broader Ecosystem
Flock is the dominant player, but not the only one. The ALPR ecosystem includes:
Vigilant Solutions (now part of Motorola Solutions) — Maintains its own national database and has contracts with agencies nationwide, including ICE. The Massachusetts State Police contracts with Vigilant in addition to Flock. ICE agents have direct access to query the Vigilant database [39].
Motorola Solutions / Avigilon — Motorola acquired Avigilon and operates ALPR cameras alongside its broader law enforcement technology portfolio.
Mobile ALPRs — In addition to fixed cameras, police vehicles mount ALPR cameras that scan plates while driving. These mobile units are not mapped by DeFlock and add a layer of surveillance that is, by its mobile nature, effectively impossible to map or avoid.
Toll road and parking operators — Toll systems use ALPR for billing. Parking enforcement companies use ALPR to detect violations. These operators hold location data tied to vehicles that may be accessible via subpoena.
The Institute for Justice’s Plate Privacy project documented ALPR cameras located near an abortion clinic, a halfway house, an immigration attorney’s office, a church, a gun range, and a mosque, illustrating the sensitivity of the location data these cameras inherently capture [44]. A Kansas town used license plate readers to investigate a man who wrote an op-ed critical of the local government [41].
ALPRs can scan up to 2,000 plates per minute, and cities with readers routinely capture thousands of different vehicles each month [44]. Data retention policies vary from 30 days (Flock’s default) to years, depending on the agency. Some agencies store data indefinitely.
DeFlock: Counter-Surveillance
DeFlock is an open-source project created by Will Freeman, a software engineer, after he noticed Flock cameras proliferating during a drive from Washington state to Alabama. “I saw these creepy-looking cameras with solar panels on top,” Freeman told 404 Media. “I took a picture, searched online, and found Flock’s website” [45].
DeFlock (deflock.org) uses OpenStreetMap to allow contributors to plot the locations of ALPR cameras. As of early 2026, the project has mapped more than 16,000 individual camera locations, more than a third of which are Flock devices [46]. Contributors can note the direction each camera points, revealing deployment strategies — for example, Freeman found that all cameras in downtown Huntsville, Alabama are pointed outward, focused on detecting vehicles entering the downtown core rather than leaving it [45].
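Because DeFlock stores its data as ordinary OpenStreetMap features, anyone can reproduce this kind of analysis programmatically. The sketch below queries the public Overpass API for ALPR camera nodes in a bounding box; the tags used (man_made=surveillance, surveillance:type=ALPR) and the direction key follow common OSM tagging conventions and are assumptions here rather than a documented DeFlock schema, and the Huntsville bounding box is approximate.

```python
# Minimal sketch: fetch ALPR camera nodes from OpenStreetMap via Overpass.
# Tag names and the bounding box are assumptions for illustration.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

QUERY = """
[out:json][timeout:60];
node["man_made"="surveillance"]["surveillance:type"="ALPR"]
  (34.68,-86.65,34.78,-86.50);  // rough bounding box around Huntsville, AL
out body;
"""

resp = requests.post(OVERPASS_URL, data={"data": QUERY}, timeout=90)
resp.raise_for_status()

for node in resp.json().get("elements", []):
    tags = node.get("tags", {})
    # The "direction" tag (compass bearing) is what reveals inward- vs.
    # outward-facing deployment patterns like the one Freeman observed.
    print(f"lat={node['lat']:.5f} lon={node['lon']:.5f} "
          f"direction={tags.get('direction', '?')} "
          f"operator={tags.get('operator', 'unknown')}")
```

Grouping the direction values by neighborhood is enough to reproduce the inward-versus-outward analysis described above.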
Freeman has stated he eventually wants to offer navigation routing that avoids known ALPR cameras, although the density of cameras in some areas may make this infeasible — a fact that itself supports the Fourth Amendment arguments being made in the Norfolk lawsuit [45].
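Camera-avoiding navigation is, at bottom, weighted shortest-path routing. The sketch below is a toy model on a hand-built graph with an arbitrary penalty constant, both invented here for illustration (DeFlock has not published a router):

```python
# Toy model of ALPR-avoiding routing: edges that pass a mapped camera
# carry a heavy extra cost, so the shortest weighted path detours around them.
import networkx as nx

PENALTY = 1000.0  # arbitrary cost added to any edge watched by a camera

G = nx.Graph()
# (from, to, base travel cost, passes a mapped camera?)
EDGES = [
    ("A", "B", 1.0, False),
    ("B", "D", 1.0, True),   # the direct route, but surveilled
    ("A", "C", 1.5, False),
    ("C", "D", 1.5, False),  # the longer, unwatched detour
]
for u, v, cost, watched in EDGES:
    G.add_edge(u, v, weight=cost + (PENALTY if watched else 0.0))

print(nx.shortest_path(G, "A", "D", weight="weight"))  # ['A', 'C', 'D']
```

The infeasibility Freeman anticipates falls out of the same model: once every path between origin and destination crosses a camera, no weighting scheme can produce an unobserved route. That saturation point is what makes the avoidance feature impractical in dense deployments, and it is the same saturation the Fourth Amendment argument turns on.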
Flock Safety responded to DeFlock by sending Freeman a cease-and-desist letter claiming the project dilutes its trademark. The EFF represented Freeman and rejected the demand, pointing out that the project falls squarely within First Amendment protections [46]. Bruce Schneier, the security researcher, featured DeFlock on his blog, noting that the project only maps fixed cameras; mobile ALPRs on police vehicles remain unmapped [47].
Security Vulnerabilities and Hacks
The security of Flock’s infrastructure was the subject of multiple independent investigations in 2025, all of which revealed serious vulnerabilities.
The Gaines White Paper (November 2025)
Cybersecurity researcher Jon Gaines published a white paper documenting 51 distinct security findings in Flock hardware and software, including 22 with assigned CVE identifiers and 8 more pending [48]. The findings included:
- Physical compromise in under 30 seconds. A simple button sequence on the back of a Flock camera can initiate unauthorized wireless access [48][49]. YouTuber Benn Jordan demonstrated this in a widely viewed video titled “We Hacked Flock Safety Cameras in Under 30 Seconds” [49].
- Hard-coded WiFi network names. Flock cameras store hard-coded WiFi SSIDs that they automatically connect to when LTE is unavailable. An attacker need only create a wireless access point matching one of these names to intercept traffic and credentials via a man-in-the-middle attack [48] (a defensive check for this condition is sketched after this list).
- Cleartext credential transmission. Credentials were transmitted without encryption in some configurations [48].
- Obsolete operating system. The devices run Android Things 8 or 8.1, an operating system Google discontinued in 2021 with hundreds of known unpatched vulnerabilities [48][49].
- Exposed USB ports and unsecured internal storage. Physical access to the camera — which is mounted on a public-facing pole — allows data extraction and tampering [49].
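The hard-coded-SSID finding implies a simple defensive check: any nearby access point advertising one of those fallback names is a candidate evil twin. A minimal sketch, assuming a Linux host with NetworkManager’s nmcli available; the SSID list is a hypothetical placeholder, not the actual names documented in the white paper:

```python
# Defensive sketch: flag nearby access points whose SSID matches a list of
# names known to be hard-coded into camera firmware. A rogue AP using one of
# those names is the precondition for the man-in-the-middle attack above.
import subprocess

# Hypothetical placeholders, NOT the real SSIDs from the Gaines white paper.
HARDCODED_SSIDS = {"example-fallback-1", "example-fallback-2"}

def nearby_ssids() -> set[str]:
    """List SSIDs currently visible to this machine via NetworkManager."""
    out = subprocess.run(
        ["nmcli", "-t", "-f", "SSID", "dev", "wifi", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}

for ssid in nearby_ssids() & HARDCODED_SSIDS:
    print(f"WARNING: access point advertising hard-coded fallback SSID: {ssid}")
```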
Flock acknowledged the findings, registered the vulnerabilities in the CVE database, and stated that none resulted in a confirmed breach of customer data [50]. The company characterized the vulnerabilities as “theoretical” and requiring physical access, though critics noted that publicly mounted cameras are inherently physically accessible [48].
The Condor Camera Exposure (Late 2025)
404 Media found at least 60 Flock Condor PTZ cameras streaming live to the internet without any authentication — no password, no encryption [51]. Journalists were able to view live footage and control the cameras’ pan, tilt, and zoom functions. The exposed streams included footage from playgrounds, emergency response scenes, and high-traffic intersections in locations including Cedar Rapids, Iowa and Douglas County, Colorado [51]. Flock described the exposure as a “configuration error” during beta testing [51].
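For an operator, this class of exposure is trivial to self-test: an unauthenticated request against the device’s stream endpoint should fail. A minimal sketch, with a hypothetical placeholder URL, intended only for devices you own or are authorized to test:

```python
# Minimal self-audit: does a camera endpoint serve content with no credentials?
# STREAM_URL is a hypothetical placeholder for a device you are authorized to test.
import requests

STREAM_URL = "http://camera.example.local/stream"

resp = requests.get(STREAM_URL, timeout=10, stream=True)
if resp.status_code == 200:
    print("Endpoint answered without authentication: exposed.")
elif resp.status_code in (401, 403):
    print("Endpoint requires credentials.")
else:
    print(f"Unexpected response: {resp.status_code}")
```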
Benn Jordan’s subsequent investigation, titled “This Flock Camera Leak Is Like Netflix for Stalkers,” highlighted that the exposed feeds included archived footage spanning up to 30 days [49].
Stolen Police Credentials
TechCrunch reported in November 2025 that stolen police login credentials were being used to access Flock’s camera network. In one documented case, the DEA used a local Palos Heights, Illinois police officer’s password — without the officer’s knowledge — to search Flock cameras for an individual suspected of an “immigration violation” [52]. Flock did not require multi-factor authentication for customers until November 2024, and as of November 2025, 3% of its law enforcement customers still had not enabled MFA [52].
Flock’s Response
Flock CEO Garrett Langley characterized the scrutiny as a “coordinated attack” by “activist groups who want to defund the police, weaken public safety and normalize lawlessness” [53]. The company maintains that its cloud platform has never experienced a data breach and that no customer data has been compromised [50]. Privacy advocates and security researchers dispute this framing, noting that the documented vulnerabilities — physical compromise, exposed live feeds, credential theft, obsolete OS, cleartext transmission — represent systemic security failures in infrastructure that tracks millions of vehicles daily [48].
The Aggregate Picture
These systems — Flock’s 90,000 ALPR cameras, Ring’s estimated 10 million doorbell cameras, Vigilant’s national database, municipal surveillance camera networks, mobile ALPRs on patrol cars, toll and parking systems — are converging into a unified physical surveillance infrastructure. The data flows between them: Flock shares with police, police share with federal agencies, Ring footage can be requested through Flock’s platform or via legal process, and the data broker loophole allows government purchase of commercially aggregated location data.
The result is that driving a car in the United States in 2026 means having your movements recorded, timestamped, and stored in databases accessible to thousands of law enforcement agencies, with search audit logs that show hundreds of thousands of warrantless queries per month. Walking past a Ring camera means your image is stored on Amazon’s servers. Entering a neighborhood with a Flock deployment means your vehicle is photographed, identified, and cataloged in a database that federal immigration agents, local police, and potentially anyone who compromises the poorly secured system can search.
The legal framework has not kept pace. Virginia’s Court of Appeals has ruled that ALPR surveillance does not require a warrant. The federal government argues that Americans cannot reasonably expect privacy on public roads [40]. The Fourth Amendment challenge in Norfolk remains pending. The mosaic theory — that while a single observation on a public road is not a search, the aggregation of movements over time constitutes one — has not been definitively applied to ALPR data by the Supreme Court. Carpenter established the principle for cell-site location data but explicitly declined to address other technologies [38].
Meanwhile, the cameras keep multiplying. They are being installed by HOAs, businesses, and school districts as well as police departments, creating a network that no single government entity controls but that government can access. The cost is borne by private parties. The data flows to the state. And the security of the entire system has been shown to be vulnerable to compromise by anyone with 30 seconds of physical access and a basic understanding of the devices’ architecture.
Doxxing
What It Isn’t
The word “doxxing” has expanded far beyond its original meaning and is now routinely misapplied. Clearing away what doxxing is not matters, because the misuse of the term obscures the actual harm and muddies the legal and ethical analysis.
Doxxing is not journalism. When a reporter identifies the person behind a pseudonymous account that has a public following and public influence, that is reporting. The Washington Post’s 2022 exposé identifying the person behind the Libs of TikTok account was widely called “doxxing” by the account’s supporters [1]. It was not. Identifying people who exercise public influence — even under pseudonyms — is a core function of journalism. The same applies to Newsweek’s 2014 attempt to identify the creator of Bitcoin [1] and the New York Times’ 2020 reporting on the identity behind the Slate Star Codex blog [1]. These may be debatable editorial decisions, but they are acts of journalism, not doxxing. The conflation of the two is itself a tactic: it uses the moral weight of the word “doxxing” to delegitimize accountability reporting.
Doxxing is not the publication of information that is already public. When Harvard students signed a public letter and their names — which they had voluntarily attached to a public document — were displayed on a billboard truck in Harvard Square, this was widely called “doxxing” [2]. It was not. Amplifying information that someone voluntarily made public, even if the amplification is hostile, is not doxxing. It may be harassing, intimidating, or in poor taste, but the information was never private.
Doxxing is not accountability for public officials. When constituents post the office phone number and publicly listed address of an elected official, or when activists post public information about school board members who made public votes on public policy, those are acts of political speech [2]. Anti-doxxing laws that sweep this kind of activity into criminal liability face serious First Amendment problems. The Foundation for Individual Rights and Expression (FIRE) has noted that many proposed anti-doxxing laws are overbroad and could cover whistleblower activity and speech on matters of public concern [2].
Doxxing is not the same as embarrassment. If someone posts something under their real name on a public forum and it later goes viral to their embarrassment, they have not been doxxed. They published the information themselves. The discomfort of having your own public statements find an unintended audience is not a privacy violation.
What It Is
Doxxing — from “dropping documents” — is the deliberate aggregation and publication of a person’s private identifying information, without their consent, for the purpose of enabling harassment, intimidation, retaliation, or harm [1][3].
The term originated in 1990s hacker culture as a weapon in rivalries between hackers operating under pseudonyms. “Dropping docs” on a rival meant stripping away their anonymity by publishing their real name, address, and identity — destroying the separation between their online persona and their physical person [1][3]. It was, from its inception, understood as an act of aggression. The point was never simply to inform; it was to make someone vulnerable.
What distinguishes doxxing from other forms of information disclosure is the combination of three elements:
Aggregation. Doxxing is rarely about a single piece of information. It is the compilation of information from multiple sources — property records, voter registration databases, social media profiles, data broker sites, corporate registries, court filings, reverse phone lookups, and sometimes social engineering — into a dossier that connects a person’s identity to their physical location and personal life [4][5]. Any one of these data points might be individually public. The act of doxxing is assembling them into a package that makes the target locatable and contactable by strangers.
Publication with hostile intent. The assembled information is published — on social media, forums, imageboards, or dedicated “doxx” sites — with the implicit or explicit invitation for others to act on it. This distinguishes doxxing from, say, a skip-tracing firm assembling the same information for a client. The publication is directed at an audience that is expected to harass, threaten, or confront the target [3][6]. The National Association of Attorneys General describes the evolution from “isolated conduct to a more coordinated form of digital persecution,” in which doxxing increasingly leverages algorithmic amplification to reach hostile audiences at scale [6].
Targeting the private person, not the public role. Doxxing targets the private life — the home address, the family members, the children’s school, the daily routine — of someone who may or may not have a public role. Even when public figures are doxxed, the information released is characteristically not about their public conduct but about their private physical whereabouts. The publication of a judge’s home address after an unfavorable ruling is not commentary on the ruling; it is a signal to people who might show up at the house [7].
The mechanism works because of the data ecosystem documented throughout this analysis. The data broker ecosystem, the people-search industry (Whitepages, Spokeo, BeenVerified, and hundreds of others), public records databases, voter registration files, property records, and the residue of every online interaction create a substrate of information that is trivially aggregable. ProPublica’s reporting describes the toolset: doxxers use property records, tax documents, voter registration databases, social media, real estate websites, and sometimes physical surveillance [5]. The people-search industry makes this even easier — a few dollars and a name yields a current address, phone number, previous addresses, known associates, and family members [8].
Doxxing’s downstream consequences include identity theft, physical confrontation, and swatting — the practice of filing a false emergency report (active shooter, hostage situation) at the target’s home address to provoke an armed law enforcement response [6]. Swatting has resulted in deaths. The connection between doxxing and swatting is direct: the doxx provides the address, and the swat weaponizes it.
Who Can and Cannot Be Doxxed
The short answer is that effectively everyone can be doxxed. The longer answer has to do with the gradient of difficulty and the nature of the available infrastructure.
Almost no one is immune. If you have a phone number, a residential address, a driver’s license, a voter registration, a property deed, a utility account, or any social media presence, the raw material for a doxx exists. The people-search and data broker industry has made the aggregation step largely unnecessary for casual doxxers — the dossier already exists in pre-assembled form, available for a small fee or often for free [8]. The earlier section on re-identification showed that 15 demographic attributes are sufficient to identify 99.98% of Americans [14], and data brokers routinely hold 248 or more attributes per household [14]. The data exists. The question is only whether someone is motivated to compile and publish it.
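The combinatorics behind those figures is worth making concrete. The sketch below is a deliberately crude model that assumes independent, uniformly distributed attributes (real demographics are neither, and the study cited above [14] uses a far more careful generative model), but it shows why uniqueness arrives so fast as attributes accumulate:

```python
# Toy model: with k independent attributes, each uniform over c values, in a
# population of n people, what fraction of people have a unique combination?
# Crude by design; see de Montjoye et al. [14] for the rigorous treatment.

def fraction_unique(k: int, c: int, n: int) -> float:
    combos = c ** k  # size of the combined attribute space
    # Probability that none of the other n-1 people share your combination.
    return (1 - 1 / combos) ** (n - 1)

US_POPULATION = 330_000_000

for k in (6, 10, 15):
    share = fraction_unique(k, c=10, n=US_POPULATION)  # ~10 values per attribute
    print(f"{k:>2} attributes: {share:.6f} of the population is unique")
```

With only ten values per attribute, six attributes leave everyone hidden in a crowd, ten make most people unique, and fifteen make essentially everyone unique across the entire U.S. population, which is the shape of the published 99.98% result.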
People who are easiest to doxx:
Anyone with a normal public footprint. The typical person who has a social media account, a voter registration, a property record, and an employment history listed on LinkedIn is fully doxxable in minutes using free or cheap people-search tools. They have never had reason to scrub their data from broker sites, and most do not know those sites exist.
Public-facing professionals. Journalists, educators, healthcare providers, judges, and elected officials are particularly vulnerable because their professional identities are public by necessity. Their names appear in bylines, court records, school directories, and government databases. Journalists covering controversial topics face especially high risk — the CUNY Graduate School of Journalism maintains a dedicated guide on counter-doxxing specifically because the threat is so routine [9]. Federal judges, court employees, and jurors received more than 4,500 threats and inappropriate communications in 2021 alone, and a data removal service reported a 20% increase in removal requests from judicial officers and federal employees since January 2025, coinciding with escalating political attacks on judges handling ICE and DOGE-related cases [7].
People caught in viral moments. Misidentification is a persistent and devastating feature of doxxing. Kyle Quinn, an engineering professor at the University of Arkansas, was misidentified as a participant in the 2017 Charlottesville white nationalist rally because someone at the rally wore an “Arkansas engineering” shirt. His photo, home address, and employer were published, forcing him and his wife to flee their home [5][10]. Sunil Tripathi, a Brown University student, was misidentified as the Boston Marathon bomber by Reddit users and others in 2013; he had in fact died by suicide before the misidentification occurred, and the doxxing subjected his grieving family to a torrent of abuse [10].
People in domestic violence or stalking situations. For someone fleeing an abusive partner, a single data broker listing that reveals a current address can be life-threatening. Some states maintain address confidentiality programs (Florida, for example, conceals voter registration for participants in its program for domestic violence victims [5]), but these programs require the person to know they exist and to affirmatively enroll — and they cover only voter registration, not the hundreds of other data broker and public record sources that may expose an address.
People who are harder to doxx:
People with significant operational security practices. Individuals who systematically opt out of data broker sites, use PO boxes or registered agent addresses for all public records, maintain strict social media compartmentalization, register property through LLCs or trusts, and use separate devices and identities for different spheres of activity can make doxxing substantially more difficult. This is not the same as making it impossible. It requires continuous effort — data broker sites re-populate, new brokers emerge, and a single slip (a package delivered to a real address, a photo with identifiable metadata, a friend tagging a location) can undo months of work. One analysis estimated that manual data broker removal takes 100-200 hours and is approximately 40-60% effective as a temporary measure [11]. A sketch of the re-check loop this implies appears after this list.
People with institutional protection. Some categories of people have legal or institutional mechanisms that reduce (but do not eliminate) their exposure. Federal judges gained additional protections after the murder of Judge Esther Salas’s son by a litigant who found her home address online — the Daniel Anderl Judicial Security and Privacy Act, signed in 2022, requires data brokers to remove personal information of federal judges and their families upon request [7]. Law enforcement officers in some states have similar protections. These are narrow, person-category-specific carve-outs that do not extend to the general public.
Wealthy individuals who invest in privacy services. Commercial data removal services (DeleteMe, Incogni, and others) will continuously monitor and submit opt-out requests to data broker sites on a client’s behalf. These services cost money and require ongoing subscriptions because the data reappears. They reduce exposure but cannot eliminate it — the underlying public records (property, court, voter) still exist, and not all data brokers honor removal requests [8].
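Both the manual and the commercial approaches reduce to the same monitoring loop: periodically re-check each broker and re-file an opt-out when a listing reappears. A minimal sketch of the re-check step, using a hypothetical results-page URL and a plain substring match (real people-search sites paginate, render with JavaScript, and rate-limit, so this is illustrative only):

```python
# Re-check whether a name has reappeared on a people-search results page.
# SEARCH_URL and FULL_NAME are hypothetical placeholders; real sites differ
# and may require a headless browser. This illustrates the loop, not a product.
import requests

FULL_NAME = "Jane Q. Example"
SEARCH_URL = "https://people-search.example.com/results?q=Jane+Q+Example"

resp = requests.get(SEARCH_URL, timeout=15)
resp.raise_for_status()

if FULL_NAME.lower() in resp.text.lower():
    print("Listing appears to have re-populated; file a new opt-out request.")
else:
    print("No listing found this cycle; schedule the next re-check.")
```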
People who cannot be meaningfully protected under current conditions:
The honest answer is that the current infrastructure makes doxxing a low-skill, low-cost attack available to anyone with a name and an internet connection, and there is no reliable defense available to ordinary people. The people-search industry exists specifically to make personal information findable. It is legal. It is profitable. And it provides the raw material for every doxxing attack. The data broker ecosystem documented earlier — the same ecosystem that enables warrantless government surveillance via purchase — also enables any private individual to assemble a targeting package on any other private individual for the cost of a fast-food meal.
The structural reality is this: participating in society generates the data. The data broker industry aggregates it. People-search sites make it searchable. And the legal system provides no comprehensive remedy. There is no federal anti-doxxing statute. The aggregation and publication of publicly available information is generally legal under the First Amendment [2]. Existing laws against stalking, harassment, and true threats can apply when doxxing is part of a broader pattern of criminal conduct, but the doxxing act itself — the assembly and publication — sits in a legal gray zone in most U.S. jurisdictions [2][6]. A handful of states have enacted targeted legislation since 2023, and jurisdictions like the Netherlands (effective January 2024), Hong Kong (up to five years’ imprisonment), and Australia (criminalized December 2024) have moved further [1]. But in the United States, the tension between anti-doxxing legislation and First Amendment protections remains unresolved.
The result is an asymmetry: doxxing is easy, cheap, and largely legal to commit, and difficult, expensive, and largely impossible to undo.
Biometrics
Browser and Device Fingerprinting
Bibliography
Doxxing & Online Harassment
[1] Wikipedia. “Doxing.” [Snippet] https://en.wikipedia.org/wiki/Doxing
[2] Foundation for Individual Rights and Expression. “Is doxxing illegal? Doxxing, Free Speech, and the First Amendment.” [Snippet] https://www.fire.org/research-learn/doxxing-free-speech-and-first-amendment
[3] Britannica. “Doxing | Meaning, Law, & History.” [Snippet] https://www.britannica.com/topic/doxing
[4] Kaspersky. “What is Doxing? Definition and Explanation.” [Snippet] https://usa.kaspersky.com/resource-center/definitions/what-is-doxing
[5] ProPublica. “So What the Hell Is Doxxing?” [Snippet] https://www.propublica.org/article/so-what-the-hell-is-doxxing
[6] National Association of Attorneys General. “The Escalating Threats of Doxxing and Swatting: An Analysis of Recent Developments and Legal Responses.” August 2025. [Snippet] https://www.naag.org/attorney-general-journal/the-escalating-threats-of-doxxing-and-swatting-an-analysis-of-recent-developments-and-legal-responses/
[7] Newsweek. “Doxing on Rise After Info Leaks About Judge in DOGE Case: Privacy Expert.” February 2025. [Snippet] https://www.newsweek.com/doxing-rise-after-info-leaks-about-judge-doge-case-privacy-expert-2032954
[8] JoinDeleteMe. “Why Digital Privacy Protection Matters for Public Figures.” 2026. [Snippet] https://joindeleteme.com/blog/digital-privacy-protection/
[9] CUNY Craig Newmark Graduate School of Journalism. “Dealing with Doxxing.” [Snippet] https://researchguides.journalism.cuny.edu/doxxing
[10] AVG. “What Is Doxxing: Is It Illegal and How to Prevent It.” [Snippet] https://www.avg.com/en/signal/what-is-doxxing
[11] DisappearMe.AI. “Doxxing History: Origins, Etymology, 1990s to 2025.” December 2025. [Snippet] https://disappearme.ai/blog/complete-history-doxxing-1990s-hacker-culture-2025-epidemic-origins-evolution-impact-prevention
Data Re-identification & Anonymization
[12] EPIC. “Re-identification.” [Snippet] https://archive.epic.org/privacy/re-identification.html
[13] Wikipedia. “Data re-identification.” [Snippet] https://en.wikipedia.org/wiki/Data_re-identification
[14] de Montjoye, Y-A., et al. “Estimating the success of re-identifications in incomplete datasets using generative models.” Nature Communications 10, 3069 (2019). [Snippet] https://www.nature.com/articles/s41467-019-10933-3
[15] Lubarsky, Boris. “Re-Identification of ‘Anonymized Data.’” Georgetown Law Technology Review (2017). [Snippet] https://georgetownlawtechreview.org/re-identification-of-anonymized-data/GLTR-04-2017/
Privacy Law & Government Access
[16] EPIC. “National Security Letters.” [Snippet] https://archive.epic.org/privacy/nsl/
[17] Congressional Research Service. “Overview of Governmental Action Under the Stored Communications Act.” [Snippet] https://www.congress.gov/crs-product/LSB10801
[18] Brennan Center for Justice. “Closing the Data Broker Loophole.” [Snippet] https://www.brennancenter.org/our-work/research-reports/closing-data-broker-loophole
[19] Wikipedia. “Carpenter v. United States.” [Snippet] https://en.wikipedia.org/wiki/Carpenter_v._United_States
[20] ACLU. “DHS is Circumventing Constitution by Buying Data It Would Normally Need a Warrant to Access.” January 2026. [Snippet] https://www.aclu.org/news/privacy-technology/dhs-is-circumventing-constitution-by-buying-data-it-would-normally-need-a-warrant-to-access
[21] Project on Government Oversight. “Fact Sheet: Closing the Data Broker Loophole.” [Snippet] https://www.pogo.org/fact-sheets/fact-sheet-closing-the-data-broker-loophole
[22] UIC Law Review. “The Fourth Amendment, the Third-Party Doctrine, and Cloud-Stored Data.” [Snippet] https://lawreview.law.uic.edu/news-stories/the-fourth-amendment-the-third-party-doctrine-and-cloud-stored-data-do-terms-of-service-undermine-our-privacy-expectations-in-the-digital-age/
[23] Wikipedia. “Third-party doctrine.” [Snippet] https://en.wikipedia.org/wiki/Third-party_doctrine
Sector-Specific Privacy Protections
[24] Secureframe. “HIPAA Exceptions.” [Snippet] https://secureframe.com/hub/hipaa/exceptions
[25] The Data Privacy Group. “Understanding Consumer Data Privacy Laws in the US.” 2024. [Snippet] https://thedataprivacygroup.com/blog/understanding-consumer-data-privacy-laws-in-the-us/
[26] archTIS/Spirion. “FERPA vs. HIPAA: Understanding the Key Differences.” 2024. [Snippet] https://www.spirion.com/solutions/compliance/ferpa-vs-hipaa-key-differences
[27] Total HIPAA. “GLBA & HIPAA: How They Overlap.” 2023. [Snippet] https://www.totalhipaa.com/hipaa-and-glba/
Financial Surveillance
[28] House Judiciary Committee. “Financial Surveillance in the United States.” December 2024. [Snippet] https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2024-12/2024-12-05-Financial-Surveillance-in-the-United-States.pdf
[29] Yale Law Journal. “Fourth Amendment Reasonableness After Carpenter.” 2019. [Snippet] https://yalelawjournal.org/forum/fourth-amendment-reasonableness-after-carpenter
[30] FCC. “FCC Fines AT&T, Sprint, T-Mobile, and Verizon Nearly $200 Million for Illegally Sharing Access to Customers’ Location Data.” April 29, 2024. [Snippet] https://docs.fcc.gov/public/attachments/DOC-402213A1.pdf
Smart Home Surveillance
[31] Ring. “Learn About Ring Law Enforcement Guidelines.” [Snippet] https://ring.com/support/articles/oi8t6/Learn-About-Ring-Law-Enforcement-Guidelines
[32] Kurtz & Blum. “Ring Doorbell Lawsuit Concerns.” December 2025. [Snippet] https://kurtzandblum.com/blog/ring-doorbell-lawsuit-concerns-what-amazon-can-really-share-with-police/
[33] Black Enterprise. “Ring Changes Course, Will Allow Law Enforcement Access To Personal Camera Footage.” July 2025. [Snippet] https://www.blackenterprise.com/ring-police-access-personal-camera-footage/
[34] Ifrah Law. “Ding Dong – The Police Want Access to Your Doorbell Footage.” February 2024. [Snippet] https://www.ifrahlaw.com/ftc-beat/ding-dong-the-police-want-access-to-your-doorbell-footage-can-they-get-it/
[35] NPR. “Ring will no longer allow police to request users’ doorbell camera footage.” January 2024. [Snippet] https://www.npr.org/2024/01/25/1226942087/ring-will-no-longer-allow-police-to-request-users-doorbell-camera-footage
[36] Consumer Reports. “Can Federal Law Enforcement Access Your Ring Doorbell Videos?” February 2026. [Snippet] https://www.consumerreports.org/electronics/personal-information/can-federal-law-enforcement-access-your-ring-doorbell-videos-a4894322123/
[37] Marketplace/Krebs. “Why privacy settings can’t keep your location secret.” May 2018. [Snippet] https://www.marketplace.org/story/2018/05/22/why-privacy-settings-cant-keep-your-location-secret
License Plate Readers & Physical Surveillance
[38] Wikipedia. “Flock Safety.” [Snippet] https://en.wikipedia.org/wiki/Flock_Safety
[39] ACLU of Massachusetts. “Flock Gives Law Enforcement All Over the Country Access to Your Location.” October 2025. [Snippet] https://data.aclum.org/2025/10/07/flock-gives-law-enforcement-all-over-the-country-access-to-your-location/
[40] State of Surveillance. “Flock Safety: The $7.5 Billion Surveillance Network Tracking Your Car.” December 2025. [Snippet] https://stateofsurveillance.org/articles/surveillance/flock-safety-surveillance-network/
[41] ACLU. “Flock’s Aggressive Expansions Go Far Beyond Simple Driver Surveillance.” October 2025. [Snippet] https://www.aclu.org/news/privacy-technology/flock-roundup
[42] University of Washington Center for Human Rights. “Leaving the Door Wide Open: Flock Surveillance Systems Expose Washington Data to Immigration Enforcement.” October 2025. [Snippet] https://jsis.washington.edu/humanrights/2025/10/21/leaving-the-door-wide-open/
[43] EFF. “EFF’s Investigations Expose Flock Safety’s Surveillance Abuses: 2025 in Review.” December 2025. [Snippet] https://www.eff.org/deeplinks/2025/12/effs-investigations-expose-flock-safetys-surveillance-abuses-2025-review
[44] Plate Privacy / Institute for Justice. “Home.” [Snippet] https://plateprivacy.com/
[45] 404 Media. “The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World.” November 2024. [Snippet] https://www.404media.co/the-open-source-project-deflock-is-mapping-license-plate-surveillance-cameras-all-over-the-world/
[46] EFF. “Anti-Surveillance Mapmaker Refuses Flock Safety’s Cease and Desist Demand.” February 2025. [Snippet] https://www.eff.org/deeplinks/2025/02/anti-surveillance-mapmaker-refuses-flock-safetys-cease-and-desist-demand
[47] Schneier on Security. “Mapping License Plate Scanners in the US.” November 2024. [Snippet] https://www.schneier.com/blog/archives/2024/11/mapping-license-plate-scanners-in-the-us.html
Flock Safety Security Vulnerabilities
[48] Info by Matt Cole. “Critical Security Vulnerabilities Exposed in Flock Safety Surveillance Cameras.” December 2025. [Snippet] https://infobymattcole.com/index.php/2025/12/11/critical-security-vulnerabilities-exposed-in-flock-safety-surveillance-cameras-a-comprehensive-analysis-of-the-2025-research-findings/
[49] The Maverick Times. “Exposed Critical Security Vulnerabilities In Flock Safety Cameras In 2025.” December 2025. [Snippet] https://themavericktimesnews.com/2025/12/26/exposed-critical-security-vulnerabilities-in-flock-safety-cameras-in-2025/
[50] Flock Safety. “Has Flock Been Hacked?” [Snippet] https://www.flocksafety.com/blog/has-flock-been-hacked
[51] WebProNews. “Flock Safety AI Cameras Exposed: Privacy Breaches and Surveillance Fears.” December 2025. [Snippet] https://www.webpronews.com/flock-safety-ai-cameras-exposed-privacy-breaches-and-surveillance-fears/
[52] TechCrunch. “Lawmakers say stolen police logins are exposing Flock surveillance cameras to hackers.” November 2025. [Snippet] https://techcrunch.com/2025/11/03/lawmakers-say-stolen-police-logins-are-exposing-flock-surveillance-cameras-to-hackers/
[53] WFLX. “Flock Safety exposed live police camera feeds in internet data breach, company says.” January 2026. [Snippet] https://www.wflx.com/2026/01/09/flock-safety-exposed-live-police-camera-feeds-internet-data-breach-company-says/
ISP Surveillance & Privacy
[54] BroadbandNow. “ISP Tracking: What Your Internet Provider Can See.” 2025. [Snippet] https://broadbandnow.com/guides/what-your-isp-knows-about-your-data-use
[55] Incogni Blog. “Can your internet provider see your search history?” 2025. [Snippet] https://blog.incogni.com/can-internet-service-provider-see-history/
[56] ExpressVPN Blog. “Deep packet inspection (DPI): How it works and why it matters.” December 2025. [Snippet] https://www.expressvpn.com/blog/deep-packet-inspection/
[57] Wikipedia. “2017 Broadband Consumer Privacy Proposal repeal.” [Snippet] https://en.wikipedia.org/wiki/2017_Broadband_Consumer_Privacy_Proposal_repeal
[58] TechCrunch. “Congress just voted to let internet providers sell your browsing history.” March 28, 2017. [Snippet] https://techcrunch.com/2017/03/28/house-vote-sj-34-isp-regulations-fcc/
[59] Harvard Journal of Law & Technology. “Congress Rolls Back FCC Broadband ISP Privacy Rules.” April 2017. [Snippet] https://jolt.law.harvard.edu/digest/congress-rolls-back-fcc-broadband-isp-privacy-rules
[60] TheTechieGuy. “Understanding How Your ISP Monitors Your Online Activity.” 2025. [Snippet] https://thetechieguy.com/understanding-how-your-isp-monitors-your-online-activity/
[61] Verizon UIDH/“supercookie” FCC settlement, March 2016. [Unverified — from training data]
VPN Technology
[62] NordVPN Blog. “Can my ISP see that I am using a VPN?” 2026. [Snippet] https://nordvpn.com/blog/can-isp-see-vpn/
[63] How-To Geek. “What Your ISP Still Knows About You, Even With a VPN.” October 2025. [Snippet] https://www.howtogeek.com/isp-knows-vpn-use/
[64] PortalsVPN. “Everything Your ISP Can See When You’re Using a VPN.” September 2025. [Snippet] https://www.portalsvpn.com/blog/everything-isp-can-see-while-using-vpn/
[65] Yahoo Tech / VPN Metadata Explained. March 2026. [Snippet] https://tech.yahoo.com/vpn/articles/vpn-metadata-explained-provider-cant-154338205.html
[66] Factually.co. “What metadata does ISP see when using Tor.” November 2025. [Snippet] https://factually.co/fact-checks/technology/what-metadata-does-isp-see-when-using-tor-019383
Cellular Network Surveillance: SS7 & Protocol Exploitation
[67] Wikipedia. “Signalling System No. 7.” [Snippet] https://en.wikipedia.org/wiki/Signalling_System_No._7
[68] TechTarget. “What is SS7 Attack?” [Snippet] https://www.techtarget.com/whatis/definition/SS7-attack
[69] Forensic Focus. “Cell Phone Tracking and SS7.” September 2023. [Snippet] https://www.forensicfocus.com/podcast/cell-phone-tracking-and-ss7-hacking-security-vulnerabilities-to-save-lives/
[70] TechCrunch. “A surveillance vendor was caught exploiting a new SS7 attack to track people’s phone locations.” July 18, 2025. [Snippet] https://techcrunch.com/2025/07/18/a-surveillance-vendor-was-caught-exploiting-a-new-ss7-attack-to-track-peoples-phone-locations/
[71] GBHackers. “Surveillance Firm Exploits SS7 Flaw to Track User Locations.” July 21, 2025. [Snippet] https://gbhackers.com/surveillance-firm-exploits-ss7-flaw/
[72] P1 Security. “Location Tracking Attacks in Mobile Networks: SS7, Diameter, and 5G Security Risks.” December 2025. [Snippet] https://www.p1sec.com/blog/location-tracking-attacks-how-adversaries-exploit-mobile-networks-to-follow-you
[73] The Register. “FCC finally set to do something about SS7 vulnerabilities.” April 2, 2024. [Snippet] https://www.theregister.com/2024/04/02/fcc_ss7_security/
Cellular Network Surveillance: Cell-Site Simulators
[74] Electronic Frontier Foundation. “Cell-Site Simulators / IMSI Catchers.” [Snippet] https://sls.eff.org/technologies/cell-site-simulators-imsi-catchers
[75] Cato Institute. “Stingray: A New Frontier in Police Surveillance.” [Snippet] https://www.cato.org/policy-analysis/stingray-new-frontier-police-surveillance
[76] GoDark Bags. “How IMSI Catchers, Like Stingrays, Track Your Location.” [Snippet] https://godarkbags.com/blogs/post/imsi-catchers
[77] Wikipedia. “Stingray phone tracker.” [Snippet] https://en.wikipedia.org/wiki/Stingray_phone_tracker
[78] Project on Government Oversight. “Issue Brief: The Cell-Site Simulator Warrant Act.” [Snippet] https://www.pogo.org/fact-sheets/issue-brief-the-cell-site-simulator-warrant-act
[79] Electronic Frontier Foundation. “Meet Rayhunter: A New Open Source Tool from EFF to Detect Cellular Spying.” March 2025. [Snippet] https://www.eff.org/deeplinks/2025/03/meet-rayhunter-new-open-source-tool-eff-detect-cellular-spying
Cellular Network Surveillance: Cell-Site Location Information
[80] UC Berkeley Law. “Cell Phone Location Tracking.” [Snippet] https://www.law.berkeley.edu/wp-content/uploads/2015/04/2016-06-07_Cell-Tracking-Primer_Final.pdf
[81] Forensic Resources. “Using cell tower data to track a suspect’s location.” 2014. [Snippet] https://forensicresources.org/2014/using-cell-tower-data-to-track-a-suspects-location/
[82] ACLU. “Carpenter v. United States.” [Snippet] https://www.aclu.org/cases/carpenter-v-united-states
[83] SCOTUSblog. “Opinion analysis: Court holds that police will generally need a warrant for sustained cellphone location information.” June 2018. [Snippet] https://www.scotusblog.com/2018/06/opinion-analysis-court-holds-that-police-will-generally-need-a-warrant-for-cellphone-location-information/
Cellular Network Surveillance: Carrier Data Sales
[84] Krebs on Security. “Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers.” May 2018. [Snippet] https://krebsonsecurity.com/2018/05/tracking-firm-locationsmart-leaked-location-data-for-customers-of-all-major-u-s-mobile-carriers-in-real-time-via-its-web-site/
[85] CBS News/AP. “Mobile Phone Carriers Say They’ll Stop Selling Your Location Data To Data Brokers.” June 2018. [Snippet] https://www.cbsnews.com/sacramento/news/mobile-phone-tracking-data/
[86] CPO Magazine. “Can Mobile Carriers Be Trusted with Location Data?” May 2019. [Snippet] https://www.cpomagazine.com/data-privacy/can-mobile-carriers-be-trusted-with-location-data/
[87] Light Reading. “US Wireless Operators Have (Mostly) Stopped Selling Customer Location Data.” [Snippet] https://www.lightreading.com/regulatory-politics/us-wireless-operators-have-mostly-stopped-selling-customer-location-data
[88] Vice/Motherboard. “US Marshal Charged for Using Cop Phone Location Tool to Track People He Knew.” July 2024. [Snippet] https://www.vice.com/en/article/us-marshal-securus-phone-location-tracked/
Cellular Network Surveillance: SIM Card Exploits
[89] Wikipedia. “Simjacker.” [Snippet] https://en.wikipedia.org/wiki/Simjacker
[90] CERT-EU. “Security Advisory 2019-020: Simjacker Vulnerability Impacting up to 1 Billion Phone Users.” [Snippet] https://cert.europa.eu/publications/security-advisories/2019-020/
[91] Kaspersky Blog. “Simjacker opens SIM cards to spying.” November 2019. [Snippet] https://www.kaspersky.com/blog/simjacker-sim-espionage/28832/
[92] SecurityWeek. “Simjacker: SIM Card Attack Used to Spy on Mobile Phone Users.” [Snippet] https://www.securityweek.com/simjacker-sim-card-attack-used-spy-mobile-phone-users/
Cellular Network Surveillance: SIM Swapping
[93] Wikipedia. “SIM swap attack.” [Snippet] https://en.wikipedia.org/wiki/SIM_swap_scam
[94] Montgomery County Police Dept. “SIM swapping.” [Snippet] https://www.montgomerycountymd.gov/pol/fraud/sim-swapping.html
[95] Specops Software. “SIM-swap fraud: Scam prevention guide.” November 2025. [Snippet] https://specopssoft.com/blog/sim-swap-fraud-prevention-guide-2025/