SNAP and EBT integrity is the program where architecture beats analytics
Retailer-trafficking detection, EBT card-skim defense, and claimant-side eligibility integrity all share an architectural shape that most state SNAP agencies have not built. Buying a fraud product is not the same as building the integrity surface.
By Payton Jonson · May 8, 2026
A state SNAP agency carries three distinct fraud surfaces, and they do not share much code. Retailer trafficking — the exchange of EBT benefits for cash, alcohol, or controlled substances — is one surface. EBT card skim and account-takeover, where benefits are siphoned by criminal organizations who acquired card data through skimmers or breaches, is another. Claimant-side eligibility integrity — household composition misrepresentation, undisclosed income, intentional program violations — is the third.
Most state SNAP agencies have a model for one of these. Almost none have an integrated architecture across all three.
Why model-centric SNAP defenses fail
The agencies that licensed a fraud-detection model in 2022 or 2023 report largely the same experience. The model catches the pattern it was trained on. The next ring varies its inputs. The model degrades. The vendor offers a model update. The cycle continues.
There are two structural reasons for this. First, SNAP fraud is adversarial. The threat model is criminal organizations with millions of dollars of monthly throughput optimizing against detection. Second, the signal that distinguishes a fraudulent transaction or claim from a legitimate one is not concentrated in any one feature. It is distributed across many weakly-correlated signals — retailer transaction-mix shape, time-of-day patterns, card-test behavior, household-composition coherence, undisclosed-income graph signals — that no single model is positioned to consume.
The agencies that hold up best in USDA OIG reviews are not the ones with the most accurate model. They are the ones with multiple layers of defense, each catching a different segment of the attack distribution, and a graph layer joining the signals across claims, transactions, and retailers.
The architecture
Five layers. Each is independently shippable. Each catches a different segment of the attack distribution. Together they form a defense the next variant of attack cannot trivially work around.
Layer one — retailer transaction-pattern fingerprint. Per-retailer signals: transaction-amount distribution, transaction-time distribution, EBT-to-total-sales ratio, basket size, sale-vs-purchase mix. A grocery retailer should not have 90% of transactions in round-dollar amounts of exactly $50 or $100; that is the trafficking fingerprint. This layer is mature, available from multiple vendors, and the first slice we recommend.
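The round-dollar signal above can be sketched in a few lines. This is an illustrative toy, not a production rule: the retailer IDs, the cents-denominated amounts, and the 0.6 flagging threshold are all assumptions for demonstration.

```python
from collections import defaultdict

# $50 and $100 exactly, in cents — the classic trafficking denominations.
ROUND_DOLLAR_CENTS = {5000, 10000}

def round_dollar_share(transactions):
    """Per-retailer share of transactions at exactly $50 or $100.

    transactions: iterable of (retailer_id, amount_in_cents).
    """
    totals = defaultdict(int)
    round_hits = defaultdict(int)
    for retailer_id, amount in transactions:
        totals[retailer_id] += 1
        if amount in ROUND_DOLLAR_CENTS:
            round_hits[retailer_id] += 1
    return {r: round_hits[r] / totals[r] for r in totals}

def flag_retailers(transactions, threshold=0.6):
    """Retailers whose round-dollar share exceeds the (assumed) threshold."""
    return {r for r, s in round_dollar_share(transactions).items()
            if s >= threshold}

txns = [
    ("groceryA", 2317), ("groceryA", 4589), ("groceryA", 10000),
    ("suspectB", 10000), ("suspectB", 5000),
    ("suspectB", 10000), ("suspectB", 3125),
]
print(flag_retailers(txns))  # {'suspectB'}
```

A real deployment would compute this over a rolling window and combine it with the other per-retailer signals (EBT-to-total ratio, time-of-day shape) rather than flag on one feature alone.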
Layer two — claimant card-behavior fingerprint. Per-card transaction patterns: terminals used, dwell time, device fingerprint of the terminal, behavioral cadence. Card-skim and account-takeover rings produce distinguishable signatures here — concentrated geographic spend in a window, terminal sets the legitimate cardholder has never used. This layer detects the criminal-acquisition fraud that has cost states hundreds of millions in 2024–2025.
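One card-behavior signal from this layer — recent spend concentrated at terminals the cardholder has never used — can be sketched as follows. The terminal IDs, the 24-hour window, and the timestamps are illustrative assumptions, not values from any state system.

```python
from datetime import datetime, timedelta

def novel_terminal_share(history_terminals, recent_events, window_hours=24):
    """Share of events in the trailing window at never-before-seen terminals.

    history_terminals: set of terminal IDs this card has used historically.
    recent_events: list of (timestamp, terminal_id).
    """
    if not recent_events:
        return 0.0
    cutoff = max(ts for ts, _ in recent_events) - timedelta(hours=window_hours)
    windowed = [(ts, t) for ts, t in recent_events if ts >= cutoff]
    novel = [t for _, t in windowed if t not in history_terminals]
    return len(novel) / len(windowed)

history = {"T-001", "T-002", "T-003"}  # terminals this card normally uses
events = [
    (datetime(2026, 5, 1, 22, 0), "T-900"),  # unfamiliar terminal
    (datetime(2026, 5, 1, 22, 5), "T-901"),  # unfamiliar terminal
    (datetime(2026, 5, 1, 22, 9), "T-902"),  # unfamiliar terminal
    (datetime(2026, 5, 1, 23, 0), "T-001"),  # known terminal
]
share = novel_terminal_share(history, events)
print(f"novel-terminal share: {share:.2f}")  # 0.75
```

A skim ring draining a compromised card typically pushes this share toward 1.0 in a tight geographic and temporal cluster, which is what makes it separable from a legitimate cardholder trying a new store.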
Layer three — claimant-side graph. Household identity graph: same address, same bank account, same emergency contact, same employer-of-record, same prior case history. The trafficking layer catches the retailer; the graph catches the ring buying from the retailer. Without this layer, ring detection is not tractable.
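The graph join this layer depends on is, at its core, connected components over shared identity attributes. A minimal union-find sketch, with hypothetical claim records and attribute keys, shows the shape:

```python
from collections import defaultdict

class UnionFind:
    """Minimal disjoint-set structure with path halving."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def ring_components(claims):
    """Group claims sharing any identity attribute into components.

    claims: list of (claim_id, {attr_key: value}).
    Returns components with more than one member.
    """
    uf = UnionFind()
    seen = {}  # (attr_key, value) -> first claim_id carrying that value
    for claim_id, attrs in claims:
        uf.find(claim_id)
        for key, value in attrs.items():
            if (key, value) in seen:
                uf.union(claim_id, seen[(key, value)])
            else:
                seen[(key, value)] = claim_id
    groups = defaultdict(set)
    for claim_id, _ in claims:
        groups[uf.find(claim_id)].add(claim_id)
    return [g for g in groups.values() if len(g) > 1]

claims = [
    ("C1", {"address": "12 Elm", "bank": "111"}),
    ("C2", {"address": "99 Oak", "bank": "111"}),  # shares bank with C1
    ("C3", {"address": "99 Oak", "bank": "222"}),  # shares address with C2
    ("C4", {"address": "7 Pine", "bank": "333"}),  # isolated
]
print(ring_components(claims))  # one component: {'C1', 'C2', 'C3'}
```

Note the point the article makes: none of this is algorithmically hard. The hard part is standing up the cross-claim join in systems built to process one application at a time.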
Layer four — claimant-side eligibility coherence. Income reporting against IRS, state tax, and unemployment-insurance data. Household composition against birth records, school enrollment, and shared-residence signals. This layer is the highest-value claimant-integrity surface, and the highest-political-risk — it must be implemented with adverse-action notice quality that survives due-process challenge.
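A cross-source income coherence check, reduced to its simplest form, looks like the sketch below. The source names, figures, and 15% tolerance are assumptions; a real check also handles pay-period alignment, reporting lag between sources, and missing data before anything reaches an adverse-action queue.

```python
def income_discrepancy(reported_annual, source_amounts, tolerance=0.15):
    """Return sources whose figure exceeds the claimant-reported annual
    income by more than `tolerance` (relative), with the dollar gap."""
    flags = {}
    for source, amount in source_amounts.items():
        if amount > reported_annual * (1 + tolerance):
            flags[source] = amount - reported_annual
    return flags

reported = 14_000  # claimant-reported annual income (illustrative)
sources = {
    "state_wage_records": 14_800,       # within tolerance: no flag
    "ui_benefits_plus_wages": 21_500,   # well above reported: flag
}
print(income_discrepancy(reported, sources))
# {'ui_benefits_plus_wages': 7500}
```

The tolerance parameter matters more than it looks: set it too tight and the disqualification queue fills with rounding noise and timing artifacts, which is exactly the notice-quality failure discussed below.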
Layer five — generated-text linguistic features. Free-text portions of applications and recertifications increasingly carry the linguistic signature of generated text, and mature LLM-detection tooling picks it up. New in the last 18 months. Low cost to implement. Raises the floor on synthetic-application fraud.
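One deliberately simple signal from this layer is near-duplicate free-text answers across applications — synthetic rings tend to reuse templates. The sketch below measures Jaccard similarity over word shingles; real generated-text detection uses trained classifiers, and the example answers and 0.8 threshold here are invented for illustration.

```python
def shingles(text, k=3):
    """Set of k-word shingles from a free-text answer."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_duplicates(answers, threshold=0.8):
    """Pairs of application IDs whose free-text answers nearly match.

    answers: {app_id: free_text}.
    """
    items = [(app_id, shingles(text)) for app_id, text in answers.items()]
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if jaccard(items[i][1], items[j][1]) >= threshold:
                pairs.append((items[i][0], items[j][0]))
    return pairs

answers = {
    "A-1": "I recently lost my job and my household income has dropped",
    "A-2": "I recently lost my job and my household income has dropped",
    "A-3": "My hours were cut at the restaurant where I work nights",
}
print(near_duplicates(answers))  # [('A-1', 'A-2')]
```

The pairwise loop is quadratic; at state scale this would run over minhash or locality-sensitive-hashing buckets instead, but the signal is the same.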
What stops most agencies
Three reasons in our experience.
The graph layer requires data integration that is not procurement-shaped. Layer three depends on joining records across many claimants and transactions. Most SNAP systems are configured to process applications and transactions one at a time. Standing up the join is straightforward but cuts across procurement boundaries — the integration is typically nobody's deliverable.
Cross-program data sharing is politically difficult. Layer four ideally pulls signals from the state Medicaid agency, the state tax agency, the state UI agency, and federal data partners. The data-sharing agreements take 12 to 18 months to negotiate. They are worth it; they are not a procurement-cycle deliverable.
Adverse-action notice quality is an underrated bottleneck. When detection is working, the disqualification queue and the over-issuance queue fill with legitimate claimants who got bad letters. If the notice does not explain what to do, the legitimate claimant gives up and the agency loses both the recovery and the relationship. Defense architecture without notice-quality work fails the OIG review.
Where Vardr fits
Multi-layer fraud architecture is one of our four published service lines and the area where our methodology most directly attacks the failure mode of single-model deployment. We pair the Reference Architecture with the Modernization Readiness Assessment to identify which of the five layers an agency can stand up first — typically layers one and two, because they are vendor-buyable; layers three through five are where Vardr concentrates the engagement.
The work is methodology, principal time, and integration. The model is the smallest of the five purchases. The agencies that understand that order are the agencies that recover their pandemic-era exposure and stay ahead of the next variant.
If this resonates with a program you're working on, we'd be glad to talk.