Open-Source Wallets' First Real Audit: Lessons for Verifiers

Three bugs were found in the EU age verification app within 24 hours of launch. That is exactly how open-source public infrastructure should work, and it is how verifiers should work too.

eIDAS Pro Team
April 18, 2026
9 min read

The headline and the reality

"EU age verification app can be hacked in 2 minutes" was the headline that trended across Reddit and tech press between 15 and 17 April 2026. More than 6,500 upvotes across r/europe, r/privacy, r/technology, and r/BuyFromEU framed the story as a failure — the EU ships a privacy tool, a researcher breaks it, critics pile on.

We read it differently. The disclosure cycle that produced this headline is exactly what public digital infrastructure is supposed to do. It is the best argument we have seen this year for why critical government-adjacent software has to be open source, and why every verifier integrating with EUDI Wallet should take the same path.

For the technical analysis of what Paul Moore actually found, see our detailed breakdown of the EU age verification app disclosure. This post is the argument one level up.

How the cycle actually played out

Day 0 — The European Commission announces the age verification app is technically ready. Von der Leyen stresses that the code is fully open source and that anyone can inspect it.

Day 1 — An independent security researcher (Paul Moore) reads the public Android repository, identifies three design flaws in local state management, and publishes a short demonstration on X.

Day 2 — Reddit and technical press amplify the story. The privacy and security communities dig into the source code themselves. Other researchers reproduce the findings, clarify the caveats, and separate FUD from legitimate concerns.

Day 3 and onwards — The fix cycle begins. The EU reference team now has public issues filed against its repositories. Fixes will land in the main branch, be reviewed publicly, and ship in the next release.

That is a functioning audit loop, and it took three days. Closed-source software does not move that fast, does not surface flaws so publicly, and does not let the community build trust incrementally by reading what is being fixed and why.

What a closed wallet disclosure looks like instead

Compare the EUDI Wallet cycle with how identity verification vendor breaches have historically unfolded. Browse any of the high-profile incidents in the verification space over the last few years and a pattern emerges:

  • The vendor discovers or suspects an issue internally.
  • Weeks or months pass before any external party knows.
  • Remediation begins behind closed doors with no public detail on what is being fixed.
  • The disclosure, if it happens, is either a regulatory filing after the fact or a press statement minimising the scope.
  • End users and integrators have no way to verify whether the fix addresses the root cause, because the source is not public.

This is the counterfactual. A closed-source EU age verification app would have shipped with the same three flaws Paul Moore found. Nobody outside the development team would know. The flaws would sit unexploited (or quietly exploited by well-resourced adversaries) until a public breach forced disclosure. The fix cycle would be slower. Trust in the system would be lower.

The specific claims that held up

The technical reaction on Reddit is worth paying attention to because it did not simply agree with the researcher. It agreed with parts of his claim, disagreed with parts of the framing, and added observations he missed.

Valid critique, correctly reported. Three design flaws in local state management on a rooted, physically compromised device. These are real and will be fixed.

Framing pushback from the technical audience. Multiple top-voted comments across subreddits pointed out that "rooted device with physical access" is a substantially narrower threat model than "hacked in 2 minutes" implies. The comment with 832 upvotes on r/europe simply observed: "this is how open-source software is supposed to work."

Additional concerns surfaced by the community. The requirement to rely on platform attestation (which forces iOS or Android), the dependency on Google Play Services in the Android build, and the absence of a path for libre clients — all raised by community members reading the same source code. These are more serious strategic issues than the local-state bugs, and they would have been invisible in a closed implementation.

Informed technical praise. The Hungarian developer thread on r/programmingHungary carried the most technically literate discussion we saw: correct identification of the underlying OpenID for Verifiable Credentials stack and its OpenID4VP selective disclosure flow, acknowledgement that the privacy architecture is one of the better designs in the age verification space, and simultaneous critique of the Google Play Services dependency. That kind of nuanced public evaluation simply cannot happen against closed code.

Why this matters for verifiers specifically

If you are building a verifier that will consume EUDI Wallet attestations in production, the disclosure cycle you just watched is the cycle your customers will judge you on. Merchants and end users will not read your source code. They will read about breaches, fixes, and how quickly they happened. An open-source verifier SDK that fixes bugs in public inherits the trust-building properties of the wallet ecosystem it participates in.

We built OpenEUDI on this principle. Every line of the verifier logic is MIT-licensed and publicly auditable. When somebody finds a flaw, they can file it, we fix it publicly, and every integrator sees the fix. When somebody wants to understand how we handle a tricky edge case in OpenID4VP, they can read the code instead of trusting our marketing. The OpenEUDI SDK quickstart tutorial walks through the same implementation a security researcher would audit.
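As a concrete illustration of the kind of thing an auditor would read for, here is a minimal sketch of how a verifier might construct an OpenID4VP authorization request that asks the wallet for a single derived claim and nothing else. The function name and the age_over_18 path are illustrative, not OpenEUDI's actual API; the request parameters (response_type, client_id, nonce, presentation_definition) are taken from the OpenID4VP specification.

```python
import json
import secrets
from urllib.parse import urlencode

def build_age_verification_request(client_id: str, response_uri: str) -> str:
    """Sketch of an OpenID4VP authorization request asking the wallet to
    disclose only a derived boolean (age_over_18), nothing else."""
    nonce = secrets.token_urlsafe(32)  # binds the presentation to this request
    presentation_definition = {
        "id": "age-over-18-check",
        "input_descriptors": [{
            "id": "eudi_pid_age",
            "constraints": {
                "limit_disclosure": "required",  # wallet must not over-share
                "fields": [{"path": ["$.age_over_18"]}],
            },
        }],
    }
    params = {
        "response_type": "vp_token",
        "response_mode": "direct_post",
        "client_id": client_id,
        "response_uri": response_uri,
        "nonce": nonce,
        "presentation_definition": json.dumps(presentation_definition),
    }
    return "openid4vp://?" + urlencode(params)
```

A reviewer auditing a verifier would check exactly this surface: that limit_disclosure is required, that the field list asks for the derived boolean rather than a birthdate, and that the nonce is fresh per request.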

The sovereignty argument

There is a second argument that has nothing to do with bugs. The EU has spent years articulating a case for digital sovereignty: European citizens' identity data, European institutions' cryptographic keys, European regulators' audit rights should not depend on software whose source nobody outside a foreign vendor can read.

The only honest conclusion from this position is that all EU-mandated identity infrastructure has to be open source. Not open source as a marketing phrase, but openly licensed, reproducibly buildable, and community-auditable code. Anything less leaves a sovereignty gap that no amount of certification paperwork can fill.

The EU has taken that conclusion seriously with the wallet and the age verification app. Verifier infrastructure should take it just as seriously. A verifier that trusts open-source wallet attestations and runs closed-source code itself is inconsistent at best and strategically fragile at worst.

What "open source" should mean for a verifier

To be clear, "open source" is a necessary but not sufficient condition. An open-source verifier should also:

Be auditable, not just readable. Code with no test coverage or opaque dependency chains is technically readable but not practically auditable. The ENISA certification scheme draft captures this distinction in the context of wallets, and the same standard applies to verifiers.
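Reproducible builds are one concrete way to make "auditable" checkable rather than rhetorical: anyone can rebuild the artifact from source and compare hashes with the published release. A minimal sketch of that comparison (the file paths are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large artifacts never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def builds_match(official_artifact: str, local_rebuild: str) -> bool:
    """True only when an independent rebuild is byte-identical to the release."""
    return sha256_of(official_artifact) == sha256_of(local_rebuild)
```

In practice reproducibility also requires pinned toolchains and deterministic build flags; the hash comparison is just the final, anyone-can-run-it step.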

Fix bugs in public. A private fix to a public repository is half the value. Issue trackers, public pull requests, and transparent changelogs are what make the audit loop work.

Accept contributions. Allowing external researchers to land fixes keeps the maintainer honest and the community invested.

Keep a permissive licence. MIT, Apache 2.0, or a similarly permissive licence. Copyleft licences are philosophically defensible but practically limit who will integrate with your code.

Lessons for the verifier side

The April 2026 disclosure leaves three durable lessons for anyone building on top of the EUDI Wallet ecosystem.

Use auditable code. Whether you adopt our OpenEUDI SDK, another open implementation, or build your own, choose a stack that security researchers can read. If your verifier ships in closed source, you are accepting the failure mode described earlier: slower fixes, lower trust, no path for external contributions.

Let the ecosystem do part of your security review for you. Reading what researchers find in related open-source projects — the wallet, the age verification app, the reference verifiers — is one of the highest-leverage security activities available to a verifier team. You get findings before they become incidents.

Separate privacy architecture from implementation bugs. The critics who say "the EU age verification app has bugs therefore the whole privacy-preserving approach is broken" are wrong. The cryptographic design is sound. The bugs are in local state management, not in the protocol. See Privacy-first age verification with OpenEUDI for how to build on the same privacy-preserving primitives.
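The distinction shows up directly in verifier code. A verifier built on selective disclosure only ever receives the derived boolean, so a local state management bug in the wallet cannot leak a birthdate that was never transmitted. A simplified sketch of the claim-handling step (the claim names mirror the age_over_18 attribute discussed above; the function itself is illustrative, not a real SDK call):

```python
def extract_age_claim(disclosed_claims: dict) -> bool:
    """Accept only the derived boolean; reject presentations that over-share
    raw attributes the verifier never asked for."""
    if "birth_date" in disclosed_claims:
        raise ValueError("over-disclosure: raw birthdate must not be presented")
    value = disclosed_claims.get("age_over_18")
    if not isinstance(value, bool):
        raise ValueError("missing or malformed age_over_18 claim")
    return value
```

The protocol guarantees the verifier can operate on this minimal input; the April bugs lived in how the wallet stored state locally, not in what crosses this boundary.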

The quiet win

If there is one sentence to take away from this entire episode, it is one the r/europe top commenter wrote: "this is how open-source software is supposed to work." That is not a defence of the bugs. It is a description of the system that finds and fixes them. That system is better than the alternative, by a margin that grows larger every time a closed-source vendor has a breach and handles it the old way.

The EU has committed to building its identity infrastructure in the open. Verifier builders should make the same commitment. For the current state of that infrastructure across the 27 member states, see our April 2026 rollout tracker, and for the regulatory framework underneath it, see our Implementing Regulation 2026/798 explainer.


OpenEUDI is MIT-licensed, auditable, and free. For production verification with managed WRPAC certificates and a fully open-source codebase, see eIDAS Pro's managed plans.
