My Heart Has TEEEETH

A Mental Model for Designing Anti-Scam Interventions

The scam economy is growing fast, and both businesses and consumers are being inundated with a wide variety of scam types coming at them through every mechanism - the phone, face-to-face, and of course…the internet.

The regulatory response is still TBD, but regardless, fraud fighters everywhere are trying to get a handle on how to stop these tricky loss events before the money moves.

A Scam by Any Other Name…

My rough working definition of scams (as differentiated from “normal” fraud) is that in most fraud abuse cases, the attacker is focused on getting credentials out of innocent people, while in scams, the attacker is focused on getting innocent people to send them money directly. The outcome is largely the same for the attacker, but they skip the messy step of stealing credentials and impersonating the innocent person.

(For the record, this is an imperfect definition of fraud because it ignores first-party fraud, but for the purposes of differentiating scams from fraud, I think it’s a good working shortcut.)

So a big part of what we talk about when we talk about scams is how difficult they are to detect, and how difficult they are to prevent.

  • It is difficult to detect because so much of transactional fraud/abuse detection is based on identity and identity signals. When it’s the legitimate, authorized user making the moves, the activity flies by much of the detection logic. We become reliant on “something new is happening” signals and outlier-ness in the behavior. Those patterns are still detectable, but typically much harder to catch.

  • Worse, it’s difficult to prevent because – even if accurately detected – there are rarely interventions (i.e., adaptations of the user experience) that will stop a legitimate user from executing a transaction they wish to make.

    • Step-up authentications aren’t relevant, and

    • The warnings don’t seem to be working. Butterbars and interstitials reminding users that “if you accidentally send this transaction to the wrong person, it can’t be reversed” and “don’t get tricked by fraudsters” (paraphrasing, here) do essentially nothing. Why? Because the vast majority of users who enter a payment or money movement flow are committed to the transaction. They intend to do it.

Even if we delve into my favorite user-centric design method in security/privacy, opinionated design, it’s a really difficult situation for our detection capabilities and our traditionally designed user flows to address: a legitimate user deeply attached to following through on an event/flow that we are confident is unsafe. So, I’d like to contribute a little mental model I’ve been working on to the canon on designing for scam detection/prevention. (If you’re unfamiliar with the concept of Opinionated Design in user-facing security and privacy contexts, TL;DR it’s roughly: empower the user to make choices, but ensure the default is the safest experience.)

The best model we have for thinking about scams is the scams classifier model (an extension and refinement of the fraud classifier model), developed by the Fed as part of the Payments Improvement efforts/community. If you take a quick look (ScamClassifier SM Model and below), you will see that the scam types are defined by the story or persona taken on by the scammer. This makes sense, as the purpose of this model (discussed here: Defined Scams to Fight Scams - FedPayments Improvement) is to allow banks to classify incidents by type and improve reporting and data sharing - which they will be able to do.

However, it doesn’t exactly give the folks designing the user flows a lot to work with as they consider how to design scam-resistant flows. So I want to offer a complementary view/model of looking at scams - from the point of view of the customer getting scammed. I call it the TEEEETH model, and it’s essentially a “level of attachment to the outcome” view. The purpose of the TEEEETH model is not to classify scams; it’s to inform the design of anti-scam interventions, and improve the likelihood we can convert a detected scam event into a prevented scam event.

The TEEEETH model (high level)

And to take a bite out of this type of crime, we need to sharpen our TEEEETH for sure. Here we go - TEEEETH is (in escalating levels of incoming customer attachment to the transaction outcome):

  • Too good to be true: The customer has been offered a great “deal” (usually on a purchase or investment), but the odds are good that the goods are odd, counterfeit, or non-existent. The customer wants to believe the deal is true, but would ultimately not complete the transaction if they understood the deal is a scam. Time pressure might be high (FOMO) but urgency is typically lower.

  • (Engineered Entry): To be honest, I’m not sure about this one in the TEEEETH model, as this is straight-up social engineering, like phishing or Business Email Compromise-type schemes where the scammer “slips into” the guise of an existing customer or service provider. A sense of time pressure might be in play if there’s an existing transactional due date (invoice date, closing on a real estate investment), and the urgency is medium/high as all parties are interested in closing the deal. The customer is committed to completing the (legitimate) transaction, but would not complete the scam version if they had additional information.

  • Emergency: The customer needs to move money fast because they or someone they know is (purportedly) in danger. They are committed to resolving the emergency, but would not complete the scam if they had additional information. Time pressure and urgency are high.

  • Extortion: The customer needs to move the money fast because they are being threatened. Like “Emergency”, there’s time pressure and a real sense of urgency. However, the customer likely understands they are being threatened, they are just unsure of how to resolve the threat of extortion. They would not complete the scam if they had a different means of resolution.

  • Entanglement: This is a long-game con and the customer is confident and committed to pursuing the transaction. This is the world of romance scams and people in high-control groups: they believe in the “story” full force.

  • …& THen what?: I think with a better understanding of the customer’s intent and frame of mind, we have a better shot at designing a useful intervention. So while the “…& Then what?” is still TBD, this can be a fruitful area of discussion for teams wanting to win hearts and minds in a transaction flow.
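For teams that want to reason about TEEEETH in code - say, to attach a detected scam event to an intervention strategy - the escalating attachment levels above can be encoded directly. This is a hypothetical sketch, not part of the model itself; the attribute values simply restate the descriptions above.

```python
# Hypothetical sketch: encoding the TEEEETH attachment levels so that
# detection/intervention logic can compare and look them up.
from dataclasses import dataclass
from enum import IntEnum

class Attachment(IntEnum):
    """Escalating customer attachment to the transaction outcome."""
    TOO_GOOD_TO_BE_TRUE = 1
    ENGINEERED_ENTRY = 2
    EMERGENCY = 3
    EXTORTION = 4
    ENTANGLEMENT = 5

@dataclass
class ScamProfile:
    time_pressure: str  # "low" | "medium" | "high"
    urgency: str
    stops_if: str       # what would make the customer abandon the transaction

TEEEETH = {
    Attachment.TOO_GOOD_TO_BE_TRUE: ScamProfile(
        "high", "low", "understands the deal is a scam"),
    Attachment.ENGINEERED_ENTRY: ScamProfile(
        "medium", "medium/high", "gets additional information"),
    Attachment.EMERGENCY: ScamProfile(
        "high", "high", "gets additional information"),
    Attachment.EXTORTION: ScamProfile(
        "high", "high", "finds a different means of resolution"),
    Attachment.ENTANGLEMENT: ScamProfile(
        "low", "high", "breaks belief in the long-game story"),
}
```

Because `Attachment` is an ordered enum, an intervention engine could, for example, route anything at or above `EMERGENCY` to a higher-friction flow.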

Here I’ve laid out how TEEEETH might be overlaid on the Fed’s scam classifier model typology, based on the details of scams I’m hearing about. As I look at this overlay, I am thinking that the Fed scam types could also be turned into proxies for scammer personas (and elaborated on), which also might be useful for teams designing both educational materials and user flows.

TEEEETH: A mental model for designing scam-combatting UX (Cartomancy Labs)

The threat modeling part is fun, but the real creative challenge comes with solution design. Here are some early thoughts on how we might get started with TEEEETH’s “and then what”:

  • Too Good to be True: Most interventions are already set up to combat “Too Good to Be True” by reminding the user that their movement of money is “one way” and should not be used for purchases.

    • Red flags to remind folks of: 

      • How well do they know the recipient? 

      • Is the recipient pressuring them to use an unknown or non-standard way of paying? 

      • Are the funds going straight to the intended destination, or is there some kind of unusual intermediary?

  • (Engineered Entry): Many banks, retailers, and online services providers are already combatting phishing and social engineering, which is why this threat is moving to less defended transaction flows - like the accounts payable teams of smaller businesses, or finding their way into legal and real estate service flows. (Or this head-scratcher - stealing unpublished manuscripts, artfully discussed by the amazing hosts of the Scamfluencer podcast)

    • Red flags to remind folks of: 

      • Was this request or information inbound and unexpected?

      • Without clicking on anything sent by whoever’s making the request, who can you call to confirm it?

  • Emergency: Emergency scams are best interrupted by getting out-of-band information, so reminding users to take a moment to confirm (and breathe) before sending funds is helpful. 

    • Red flags to remind folks of:

      • Was information about this emergency inbound and unexpected?

      • If the emergency is related to someone known to you (injury, money issue, arrest, kidnapping), have you tried to contact them to confirm details (at a known phone number, separate from the interaction with the emergency reporter)?

      • Without clicking on anything sent by whoever’s reporting the emergency to you, who can you call / what website can you visit to confirm the situation you’re trying to resolve?

  • Extortion: Extortion scams are best interrupted by providing the customer/user with resources and support to address the threat. Diagnosing the extortion attempt is less complicated than identifying the right resources that can help.

    • Red flags to remind folks of:

      • How likely is it that the extortion threat is “real” (how likely is it that the scammer actually has pictures, incriminating info, etc.)?

      • Extortion is rarely a one-and-done; it may be preferable to stop cooperating now, before the threat escalates.

      • What resources do you need to resolve the extortion threat?

  • Entanglement: Once entangled, customers/victims are both unlikely to respond to interventions - and also unlikely to report the scam-related losses as fraud - until the situation has spiraled beyond all control. Note: Entangled customers/victims may also be converted by scammers into money mules, shifting them from victims to colluders as far as the scam architecture is concerned. Interventions likely need to happen early to be useful (hard to do from a financial services perspective, since the “scam” often starts over in face-to-face interactions, social media, or communication apps).

    • Red flags to remind folks of:

      • How did you meet this person/entity - did they find you via a text, social media, online forum, or dating app? 

      • Have you met the recipient face-to-face, or are interactions largely via social media, text, and video chat?

      • If the recipient is a group/organization, do you know what the funds will be used for, or is this a required set of dues/tithing that you didn’t expect when initially interacting with the group/organization?

      • If the recipient is a group/organization focused on charitable works, investing, or any business/money-making initiatives, have you been able to verify their structure and activities via independent resources, or is all interaction with them conducted directly via text and voice chat?
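One way to operationalize the red-flag lists above is to select intervention prompts by detected TEEEETH category instead of showing a single generic warning to everyone. Here is a minimal, hypothetical sketch - the category names and prompt wording are illustrative, abbreviated from the lists above, not a finished intervention library.

```python
# Hypothetical sketch: choosing red-flag prompts by detected TEEEETH
# category rather than showing one generic warning to every user.

RED_FLAG_PROMPTS = {
    "too_good_to_be_true": [
        "How well do you know the recipient?",
        "Are you being pressured to pay in an unusual or non-standard way?",
        "Are the funds going straight to the intended destination?",
    ],
    "engineered_entry": [
        "Was this request inbound and unexpected?",
        "Who can you call (not via anything you were sent) to confirm?",
    ],
    "emergency": [
        "Was information about this emergency inbound and unexpected?",
        "Have you confirmed with the person at a known phone number?",
    ],
    "extortion": [
        "How likely is it that the threat is real?",
        "What resources could help you resolve the threat?",
    ],
    "entanglement": [
        "How did you meet this person or group?",
        "Have you ever met the recipient face-to-face?",
    ],
}

def intervention_prompts(category: str) -> list[str]:
    """Return red-flag prompts for a detected scam category.

    Falls back to a generic irreversibility reminder when the
    category is unknown or undetected.
    """
    return RED_FLAG_PROMPTS.get(
        category,
        ["Remember: this payment cannot be reversed once sent."],
    )
```

The fallback matters: when detection can’t place the event in a TEEEETH category, the flow degrades to the familiar (if weak) generic warning rather than showing nothing.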

I’d love to hear your thoughts on TEEEETH - what’s missing, or what would you like to see expanded upon? Scams took up a lot of my brain waves in 2024 and that’s only going to increase in 2025. Regardless of the regulatory environment, I expect banks to be wrestling with the right way to interrupt these scams and the rest of us to be looking for more methods to avoid getting tricked by the growing scam economy.

If you are a bank, FinTech, or online services provider trying to figure out how to get ahead of scams, give us a call at Cartomancy Labs. In addition to working on kickstarting fraud programs and revamping cyber/fraud tech stacks, we’re starting to offer lighter-weight abuse case modeling and solution design workshops to help teams get a jump on emerging problems in the abuse space - like scams. 

For more on working through some user-centric considerations when it comes to tackling fraud/abuse and working at Layer 8, check out the latest post in the Fraud Frameworks & Foundations series or my 2018 presentation at LocoMocoSec (Building Better Defenses: Engineering for the Human Factor).