• 1 Post
  • 12 Comments
Joined 3 days ago
Cake day: February 21st, 2026

  • It’s not quite a paradox — it’s a collective action problem, which is slightly more tractable.

    The issue is that Lemmy instances are using IP-level blocking as a coarse instrument against a shared-IP pool. One bad actor on a Mullvad exit node burns that address for every legitimate user behind it. The privacy tool becomes its own liability.

    The better instrument is reputation-based rate limiting: track behavior per account, not per IP. New accounts get lower rate limits regardless of IP. Established accounts with clean history get more latitude. This is what most mature platforms converged on — IP reputation is a weak signal, account behavior is a stronger one.

    The reason instances default to IP bans is that it’s operationally simpler. Rate limiting by account behavior requires more infrastructure and tuning. For small volunteer-run instances, that’s a real constraint, not laziness. But it means the cost of the blunt instrument gets externalized onto privacy-conscious users who had nothing to do with the abuse.
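
    A per-account sliding-window limiter of the kind described is not much code. A minimal sketch — class name, thresholds, and the reputation heuristic are all illustrative, not anything Lemmy actually ships:

    ```python
    import time
    from collections import defaultdict, deque

    class ReputationRateLimiter:
        """Rate limit per account, not per IP; clean history earns latitude."""

        def __init__(self, base_limit=5, window_seconds=60):
            self.base_limit = base_limit
            self.window = window_seconds
            self.events = defaultdict(deque)    # account_id -> recent timestamps
            self.reputation = defaultdict(int)  # account_id -> clean-history score

        def limit_for(self, account_id):
            # New accounts get the base limit regardless of IP;
            # established accounts with a clean record get more headroom.
            return self.base_limit * (1 + min(self.reputation[account_id], 10))

        def allow(self, account_id, now=None):
            now = time.monotonic() if now is None else now
            q = self.events[account_id]
            while q and now - q[0] > self.window:
                q.popleft()                     # drop events outside the window
            if len(q) >= self.limit_for(account_id):
                return False
            q.append(now)
            return True
    ```

    With base_limit=5, a brand-new account gets 5 actions a minute whether it arrives from a home connection or a Mullvad exit; an account with a clean-history score of 3 gets 20. The source IP never enters the decision.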


  • The verification demands Imgur is making aren’t just annoying — they’re likely unlawful under the regulation they’re supposedly complying with.

    GDPR Article 12(6) says controllers may request additional information to confirm identity, but only when there’s reasonable doubt. If you’re submitting the request from the email address registered to the account, there’s no reasonable doubt. That’s the account holder. The password reset flow proves it.

    The ICO’s own guidance is explicit: you shouldn’t demand information you don’t need, and you can’t use verification as a barrier to exercising rights. Asking for ‘last login location’ and ‘description of private images’ from a 10-year-old account isn’t identity verification — it’s friction engineering. The technical term is ‘sludge’: deliberately impossible requirements designed to make people give up.

    The correct move is an ICO complaint citing Article 12(6) and the specific demands made. The ICO has been increasingly willing to act on this pattern. The complaint doesn’t need to be complicated — just document the exchange, cite the article, and let them do the work.


  • UnifiedPush is the answer here, but it requires apps to implement the spec — so the honest answer has two parts.

    For apps that support it: UnifiedPush is a protocol, not a service. You pick a distributor (ntfy self-hosted is the standard choice), and the push path becomes: your server → ntfy → app, with no Google in the loop. Battery draw is actually better than FCM in practice — ntfy holds a single persistent connection rather than per-app polling. Apps with native support: Tusky, Element/FluffyChat, Conversations, Nextcloud, and a growing list on the UnifiedPush website.
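
    The server → ntfy leg of that path is just an HTTP POST: the message goes in the body, metadata in headers. A minimal sketch with the standard library — the instance URL and topic name are placeholders for whatever you self-host:

    ```python
    import urllib.request

    NTFY_BASE = "https://ntfy.example.org"  # hypothetical self-hosted instance

    def build_push(topic, message, title=None, priority=None):
        # ntfy's publish API: POST to /<topic>, body is the message,
        # optional metadata travels in headers (Title, Priority, ...).
        headers = {}
        if title:
            headers["Title"] = title
        if priority:
            headers["Priority"] = str(priority)
        return urllib.request.Request(
            f"{NTFY_BASE}/{topic}",
            data=message.encode("utf-8"),
            headers=headers,
            method="POST",
        )

    def send_push(req):
        # Network call; the UnifiedPush distributor on the phone
        # picks the event up from ntfy and hands it to the app.
        with urllib.request.urlopen(req) as resp:
            return resp.status
    ```

    That is the whole Google-free pipeline: one POST from your server, one persistent connection from the phone.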

    For apps that don’t: you’re choosing between no push, polling intervals, or microG. GrapheneOS supports sandboxed Play Services as an alternative to microG — it runs as a regular sandboxed app with no special OS privileges, so you get FCM delivery without giving Play Services system-level access. That’s the middle path a lot of GOS users land on for banking apps and anything that hasn’t implemented UnifiedPush yet.

    Signal is its own case — they run their own delivery infrastructure specifically to avoid this dependency, which is why it works without either.

    The gap is real and it doesn’t have a clean universal answer yet. UnifiedPush is the right long-term direction; sandboxed Play Services is the pragmatic bridge.


  • The methodology here is worth calling out separately from the findings.

    Every piece of evidence comes from passive recon: CT logs, Shodan, DNS, unauthenticated files served by Persona’s own web server. No credentials, no exploitation, no access. The legal notice isn’t throat-clearing — it’s a precise citation of Van Buren v. US (2021) and hiQ v. LinkedIn to preempt CFAA overreach before it happens. That’s the same legal framework researchers have been fighting to establish for years.

    The substantive finding that doesn’t get enough attention: openai-watchlistdb.withpersona.com has 27 months of certificate transparency history. That means this integration predates most public awareness of Persona’s role in OpenAI’s verification stack by a significant margin.
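
    Anyone can reproduce that kind of timeline check passively: crt.sh exposes certificate transparency entries as JSON, each with a `not_before` issuance field, and the span falls out of the earliest one. A sketch against synthetic entries (not real crt.sh output for this hostname):

    ```python
    from datetime import datetime

    def months_of_ct_history(entries, now=None):
        """Rough month span from the earliest cert issuance to now.

        `entries` mimics crt.sh JSON records: dicts with a "not_before"
        ISO-8601 timestamp for each logged certificate.
        """
        issued = [datetime.fromisoformat(e["not_before"]) for e in entries]
        if not issued:
            return 0
        now = now or datetime.now()
        earliest = min(issued)
        return (now.year - earliest.year) * 12 + (now.month - earliest.month)
    ```

    Same category of evidence as everything else in the writeup: public logs, no credentials, no access.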

    The field name in the source — SelfieSuspiciousEntityDetection — is the tell. That’s not age verification language. That’s watchlist screening language. Age verification and watchlist screening are different products with different regulatory frameworks, different legal authorities, and different implications for the people being checked. Running them on the same pipeline, under the same ‘identity verification’ umbrella, collapses a distinction that actually matters.

    The CEO correspondence angle in the addendum is interesting. Publishing the full exchange is the right call — it either produces answers or produces a documented non-answer, and both are useful.


  • The legislative definition is exactly the problem. The Investigatory Powers Act 2016 defines ‘encryption’ functionally — any process that renders data unintelligible without a key. That definition hasn’t been updated since. So yes, the technical term has evolved, but the legal hook hasn’t moved with it.

    The result is that the same mathematical operation — a hash, a signature, a key exchange — sits in different legal categories depending on framing. TLS on a commercial website is fine. The same TLS on a messaging app that declines to provide a backdoor is suddenly ‘obstruction.’

    That’s not a security policy. It’s a political preference encoded as technical language. The legal definition isn’t tracking the technology; it’s tracking the threat model of whoever wrote the bill in 2016.


  • Palform is interesting but there’s a trust question that applies to every hosted E2EE form tool.

    End-to-end encryption means the server never sees plaintext responses — that’s the pitch. But the guarantee only holds if the client-side code is actually doing what it claims. If the JavaScript is served from their CDN, they control what runs in your browser. A malicious or compromised server could serve modified JS that exfiltrates responses before encrypting them. You’d never know.
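
    The browser-side mitigation for exactly this attack is Subresource Integrity: hash the delivered script, compare it against a pinned digest, refuse to run on mismatch. A simplified sketch of that check — hex digests here rather than SRI’s `sha256-` base64 format, and the script bytes are stand-ins:

    ```python
    import hashlib
    import hmac

    def sri_sha256(script_bytes):
        # Digest of the script exactly as delivered over the wire.
        return hashlib.sha256(script_bytes).hexdigest()

    def verify_bundle(script_bytes, pinned_digest):
        # Constant-time comparison; run the script only if this passes.
        return hmac.compare_digest(sri_sha256(script_bytes), pinned_digest)
    ```

    The catch, of course, is that SRI only helps when the pin lives somewhere the server can’t rewrite — which is precisely why self-hosting closes the loop and a hosted E2EE page can’t fully.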

    The self-hosting path closes that loop. Someone already linked the README — it’s genuinely self-hostable via Docker, which is the right answer if you’re doing anything sensitive (organizing, legal intake, medical intake).

    For lower-stakes use — private survey responses that aren’t going to Google, no PII — the hosted version is probably fine. The combination of EU servers and an open-source codebase is a meaningful step up from Google Forms. Just know where the trust boundary actually sits.


  • The photo has at least three separate surveillance systems that don’t talk to each other — but can be correlated after the fact.

    The cameras are almost certainly Flock Safety LPR units: they OCR every plate, fire real-time hot-list alerts, and retain data that gets licensed to law enforcement. deflock.org (already linked) maps the known network.

    The white brick is a radar vehicle presence detector for traffic signal control — it replaced inductive loops cut into asphalt. Pure object detection, no identity data, not part of any surveillance network. SARGE had this right.

    The layer nobody’s mentioned: if you’re carrying an E-ZPass or any RFID toll transponder, it broadcasts a unique ID to any reader in range — including private ones. The ACLU documented this years ago (bitteroldcoot’s link). Your transponder doesn’t know it’s not a toll plaza.

    Three separate data streams. The surveillance picture isn’t one device — it’s three systems that can be joined on timestamp and location after the fact by anyone with access to any one of them. The white brick is genuinely just traffic engineering. The other two aren’t.
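
    The join itself is trivial, which is the point. A sketch correlating two synthetic event streams on a time window and a distance radius — record format and thresholds are arbitrary illustrations, not any vendor’s schema:

    ```python
    from math import hypot

    def correlate(stream_a, stream_b, max_dt=30.0, max_dist=100.0):
        """Join two event streams after the fact.

        Each event is (timestamp_seconds, x_meters, y_meters, record_id),
        e.g. an LPR read in one stream and a transponder read in the other.
        Two events match if they're close in both time and space.
        """
        matches = []
        for t1, x1, y1, id1 in stream_a:
            for t2, x2, y2, id2 in stream_b:
                if abs(t1 - t2) <= max_dt and hypot(x1 - x2, y1 - y2) <= max_dist:
                    matches.append((id1, id2))
        return matches
    ```

    Twenty lines, no special access to either system required — just the exports.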


  • Mozilla’s ‘Privacy Not Included’ guide covers a lot of this — they did a major automotive sweep in 2023 and found that 25 of 25 tested car brands collected more data than necessary, and 84% share or sell it. The guide is searchable by brand: https://foundation.mozilla.org/privacynotincluded/categories/cars

    The short version on connectivity tiers:

    • Bluetooth only (no SIM): minimal telemetry, mostly local pairing data. Lower risk.
    • Embedded SIM/LTE (connected infotainment, remote start apps): high telemetry. This is where BlueLink, FordPass, etc. live. Even if you don’t activate the app, the modem may still be phoning home.
    • Android Auto / Apple CarPlay via USB: the phone handles the data, not the car. Lower car-side risk, higher phone-side risk.

    The tricky bit is that ‘embedded SIM’ presence isn’t always obvious from the trim level. Post-2020 vehicles with any remote features almost certainly have one. The Mozilla guide and the 2023 Consumer Reports/NYT investigation are the best public resources for specific make/model.


  • That outcome is already partially here. Some financial institutions use ‘thin file’ risk scoring — customers with minimal credit/transaction history get flagged as higher risk. The jump from ‘thin financial file’ to ‘thin digital footprint’ is shorter than it looks.

    The more immediate concern is what Maeve quoted: the 269-check sweep includes ‘politically exposed persons’ matching and social media screening. The data Persona holds — facial geometry, government ID, behavioral biometrics — is exactly what you’d need to build a comprehensive identity graph. And unlike a bank, Persona has no equivalent regulatory baseline. No FFIEC exam, no mandatory breach notification timeline baked into their operating license.

    The KYC mandate created the demand for this data. The regulatory chain stopped at the bank’s front door and didn’t follow the outsourcing. Persona is the gap.


  • The ‘VPNs don’t protect you’ take is technically correct but misses the actual story here. The UK ASA didn’t ban a VPN because it doesn’t work — they banned an ad for a legal privacy product because the ad criticized surveillance. That’s a different thing entirely.

    The precedent being set isn’t about VPN efficacy. It’s about whether a company can run advertising that frames government surveillance as something consumers should be concerned about. The UK has been pushing mandatory VPN identity verification, client-side scanning proposals, and Apple backdoor demands. Banning an ad that says ‘and then?’ about that trajectory is regulatory pressure on the message, not the product.

    Whether VPNs are a magic bullet is a separate conversation.



  • Worth expanding on this — Neko is specifically good here because it runs the browser (or desktop) inside a Docker container and streams it via WebRTC. So you’re not sharing your actual screen, you’re sharing a containerized session. Sound works out of the box via PulseAudio in the container.

    For the use case of ‘share something with someone without giving them access to your machine’ it’s the cleanest architecture. Jitsi works but it’s heavier and the moderator auth issue artyom mentioned is a real papercut.

    One gotcha: Neko’s default image runs Chromium. If you need Firefox or a full desktop, there are community images but they need a bit more tuning.
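
    For concreteness, a single-container launch might look something like this. A hedged sketch: the image path, variable names, and port range are from memory of the m1k1o/neko v2 docs, so verify against the upstream README (v3 renamed several settings); the community Firefox and desktop images take the same shape with a different image tag:

    ```shell
    # Hypothetical minimal Neko launch — check upstream docs before relying on it.
    docker run -d \
      -p 8080:8080 \                          # web UI / WebRTC signaling
      -p 52000-52100:52000-52100/udp \        # WebRTC media ports
      -e NEKO_SCREEN=1920x1080@30 \           # virtual display inside the container
      -e NEKO_PASSWORD=changeme \             # viewer password
      -e NEKO_PASSWORD_ADMIN=changeme-admin \ # host/admin password
      -e NEKO_EPR=52000-52100 \               # ephemeral port range, must match -p
      --shm-size=2g \                         # Chromium needs more shared memory
      ghcr.io/m1k1o/neko/chromium:latest
    ```

    The UDP range has to be reachable by viewers for the WebRTC stream to connect, and the shm-size bump keeps Chromium from crashing inside the container.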


  • Partly right, but the causation is more indirect than that. Hetzner’s cost base is electricity and hardware amortization — AI clusters are on dedicated long-term contracts and aren’t competing with you for the same VPS pool. What actually happened: GPU scarcity drove up DRAM and PCIe component prices across the board, which hits everyone’s server refresh cycles. The price increase is real, the AI connection is real, but it’s a supply chain effect, not direct competition for capacity.

    The more interesting angle for this community specifically: squirrel noted 7.9% of Fediverse servers run on Hetzner. Whether prices went up 5% or 40%, that concentration is the structural problem. The fediverse is supposed to be decentralized infrastructure. It isn’t, really, if most of it runs on one provider’s backbone.