Futurecast | Fakes, Crypto Scammery, & the Cost of Dropping the Ball on T&S

#0001 Trust And Cyber Online 🌼

hello world [what’s up]

Hey Cartomancers! Today is the day, the day you receive the first official issue of the Cartomancy Labs Futurecast. Of course, there have been two beta-test issues already (thanks, beta testers!), but now we can consider ourselves and this little experiment Actually Launched. Yay!

I’ve been working on some really cool projects in the fraud detection space, and woo! Some things are changing super-fast, while others feel as fresh as they did in 2004, when I saw them the first time around. There are so many things we could talk about via these lab notes that I’ll just start rambling, but do reply, ask questions, or make requests if there are any topics you want me to dig into.

In today’s notes:

  • Noodling in the Lab

    • Sorta-deep dive into counterfeit

  • News nuggets 

    • Big attacks on big systems have big impacts

    • What a world we’re in where dropping investment in T&S and content moderation is putting us all (and information integrity) at risk

    • Deep fakes + crypto - does it blend? [spoiler: yes], and 

    • Are you into a long-distance situationship, or is this giving pig-butchering vibes?

  • 2024 predictions 

    • Another round-up, but this time we go fraud-ways

a noodle from the lab [what we’re researching]

Okay, so I thought it would be fun to do a little research into counterfeit and the wild world of fakes, because fakes are a casual cousin to fraud and scams — and also a place where a number of folks who are not criminals might recognize themselves or their friends. For example, your aunt who has a rack of fake designer purses, or a buddy with an outspoken willingness to buy knockoff workout wear that fits and feels like the real thing. What I was not prepared for, really, was the breadth of the problem. Luxury brands sit at one end of the spectrum, but then there’s Faux-Zempic (“fake” drugs) and counterfeit foods (“fake” food), and the rabbit hole keeps going: counterfeits work a lot like deep fakes and identity theft, just pointed in a different direction. A proper overview or deep dive would take a series of articles, so let’s maybe consider this an imperfect intro.

Bills & Cards & Txns, Y’all

In the payments world, “counterfeit” is a fraud problem when we’re referring to a card (or to currency itself) that has been copied or cloned. It’s a classic fraud problem in the world of unauthorized payments, i.e. “you stole my card” or “you stole my account, which had financial instruments attached to it”.

But when we start describing counterfeit goods (like fake designer purses), the payments world doesn’t “see” the issue, because it isn’t a problem with the money movement (it’s not an unauthorized transaction) so much as a problem with the exchange of goods: the funds are (usually) good, but the goods are bad. In that way, counterfeit is a problem a lot like money laundering: many of the transactions associated with the problem are never disputed, and it’s those disputes that give us a thread to pull to systematically understand bad-actor patterns.
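If you like to see that thread-pulling in code, here’s a minimal sketch of the idea (the transaction log, field names, and 1% threshold are all hypothetical, not anything a real payments platform uses) showing why dispute-rate monitoring surfaces stolen-card fraud but stays quiet about a counterfeit seller whose buyers never complain:

```python
from dataclasses import dataclass

@dataclass
class MerchantStats:
    merchant_id: str
    transactions: int  # settled transactions in the review window
    disputes: int      # "unauthorized" disputes / chargebacks in the same window

def flag_by_dispute_rate(merchants: list[MerchantStats], threshold: float = 0.01) -> list[str]:
    """Flag merchants whose dispute rate exceeds a (made-up) threshold.

    This is the classic thread to pull for unauthorized payments ("you stole
    my card"): victims dispute the charges, so the signal shows up. A seller
    of counterfeit goods whose buyers happily keep the fakes generates almost
    no disputes, so this check never fires for them.
    """
    return [
        m.merchant_id
        for m in merchants
        if m.transactions and m.disputes / m.transactions > threshold
    ]

if __name__ == "__main__":
    window = [
        MerchantStats("stolen-card-reseller", transactions=500, disputes=40),  # 8% dispute rate
        MerchantStats("knockoff-purse-shop", transactions=2000, disputes=3),   # 0.15%: flies under the radar
    ]
    print(flag_by_dispute_rate(window))  # -> ['stolen-card-reseller']
```

Which is why, as with money laundering, the counterfeit-goods problem tends to surface through other signals (IP/brand complaints, takedown requests) rather than through the dispute stream.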

Everything is Fake

Regardless of how the problem is classified, it’s definitely impactful - and growing. In their State of the Fake Report (2023), product verification firm Entrupy shared that roughly $2.8T in counterfeited goods is confiscated annually (that’s Trillion with a “T”), and that US consumers are spending $100B per year on IP-infringing goods. And it’s certainly not a US-only problem: worldwide spend on fake products is estimated at $600B/year. An earlier report from the OECD suggests that, even with all of those confiscations, international trade in counterfeit/pirated goods might account for as much as 2.5% of world trade ($464B).

training data [what’s news]

Economics, Impact of Cyber Attacks on Financial Services: An interesting piece from the FT, Cyber attacks reveal fragility of financial markets, describes an environment where cyber attacks are going up (ransomware hits on financial services rose 64% in 2023, per Sophos) and the risks are escalating along with them. “Lloyd’s of London warned that a significant cyber attack on a global payments system could cost the world economy $3.5tn”, and for the UK market, a Bank of England survey found that respondents now rank cyber attacks as the number one systemic risk to the financial system.

Economics, Impact of Financial Crime: It’s that time of year when big ideas and big numbers are coming at us live out of the World Economic Forum proceedings, like these numbers from Nasdaq CEO: Financial crime is now a multitrillion-dollar epidemic. Nasdaq’s research shows more than $3 trillion USD of illicit funds moved through the global financial system, including drug trafficking ($782.9B), human trafficking ($346.7B) and terrorist financing ($11.5B) – and don’t forget scams and fraud, which created $485.6B of losses. Nasdaq CEO Adena Friedman hopes the exchange can help bridge the corporate/law-enforcement connection needed to identify and address financial crime. As part of this strategy, Nasdaq bought the firm Verafin for $2.75B to help smaller banks address money laundering and fraud.

Trust & Safety Investment, Platform Integrity: When we talk about tech layoffs, we often focus on the employees who have lost their jobs, while we know the companies themselves will be rewarded financially by a stock market that loves cost cutting. When developers and marketing folk are cut, we logically know that expansion slows. But what happens when companies cut Trust & Safety programs? 

Turns out: 

  • Compliance costs can go up (hello, fines for spreading disinformation, hate speech, & not protecting kids or privacy - Meta was recently fined $1.3B by the EU),

  • Good users get tired of picking through garbage content (drops in core engagement metrics like DAU - web traffic to X is down 14% globally),

  • And when user metrics drop, so will revenue (advertising dollars dry up).

The regulatory fines come fast and the faithful users depart slowly, but either way, draining investment from T&S seems to be a sure-fire way to kill the viability of the platform. More here: The Unbearably High Cost of Cutting Trust & Safety Corners

Deep Fakes, Crypto, Phishing: AI-generated content has found a new killer use case – generating scammy promotional content to trick crypto folk into opening their wallets. After an initial, legitimate promo from Solana went out (airdropped LFG tokens), a fake video soon followed: a deep-faked Solana co-founder Anatoly Yakovenko describing a “historic day” for Solana and offering a giveaway, which showed up in YouTube videos and Twitter ads. Folks claiming the promotion from the deep-faked video were phished and their wallets drained. Takedowns of this type of fraudulent content are the responsibility of content moderation teams (see above about funding for T&S teams), but the deep fakery makes it all the more difficult for detection teams to find. Look for modifications of IP/brand “takedown” requests at major platforms as these types of fakes move from crypto-influencers to celebrities, and as crypto itself tries to “grow up” and become a REAL financial instrument (i.e. an ETF).

Crypto, Account Hijacking: And speaking of REAL financial instruments and crypto scams, the SEC’s Twitter account was hacked and used to post fake news. While the immediate sarcasm muscle might twitch thinking about SEC guidance on cybersecurity, account hijackings are generally the domain of anti-abuse/anti-fraud teams (see the bit about T&S investment above) and product design: what happens to trust on these major information platforms when account security controls are delegated entirely to the users? Let’s also expect to see more account hijacking + deep fakes in the months and years to come, not just to push superficial phishing scams, but to supercharge pump & dump scams, as fraudsters figure out that they can move markets, not just phish.

Deep Fakes, Authentication, Identity Verification: Speaking of deep fakes, the internet has gone wild over threads on Reddit in r/StableDiffusion by user u/_harsh_, who made several posts showing how ML/AI-assisted identity verification techniques used online (including at places like banks, which may be required to conduct stringent KYC) can be tricked using engineered images/video. Here’s a good LI thread discussion on the topic from the author of “3 Key Concerns in Generative AI that KYC Platforms Must Address”. Here’s an article showing a researcher’s progress through different fakery scenarios.  

Romance Scams, Scamming Scammers: I always love hearing or reading tales of folks social engineering the social engineers who contact them. Reverse out a 419 scam? Awesome. Get a little unhelpful with a fake help desk call? Good on you. So this article about Becky Holmes’s adventures leading on romance scammers caught my eye: I conned the romance scammers (with hilarious results). That article is, unfortunately, paywalled, but she’s just written her experience into a book, Keanu Reeves Is Not in Love with You: The Murky World of Online Romance [pre-order via Bookshop], and here’s a link to a related opinion piece she wrote for the Guardian. TL;DR: Long-con scams (sometimes called “pig butchering” scams) are making tons of money for fraudsters who are willing to wait for a payout. According to the Guardian, in 2021 the FBI received complaints relating to crypto romance scams in the US that resulted in $429m in losses.

2024 Predictions, Payment Fraud: In our last set of lab notes, we got into 2024 predictions from the cyber side of the house; it’s only fair to let the fraud folk play, too:

  • It’s Fraud At Full Throttle – Our Predictions for 2024: Frank McKenna (of Frank On Fraud), Mary Ann Miller (Prove), and Karrisse Hendrik (Chargelytics) predict even more insider (“Inny”) action, scams driving dispute volumes & losses, deep fakes, synthetic identities, commercial/real-estate scams getting uncloaked, impacts of crypto but also additional pressure on traditional financial instruments like checks, and growth across the whole fraud supply chain: KYC, data breaches, phishing, extortionists, money mules, etc.

  • Fraudsters Emerge From Dark Web With New Tactics for 2024: M&T Bank's Karen Boyer discusses the fraud supply chain, identity exploits, and AI/deepfake trends.

  • Payments Fraud: What to Expect in 2024 – Peter Tapling calls out fraud industry drivers like scams, getting ahead of real-time payments, and where we might see generative AI & deep fakes play an increasing role in fraud playbooks.

  • The state of Appsec in 2024 ← Okay, Appsec is very much not fraud/T&S, but Adam Shostack and I are often very philosophically aligned when it comes to risk. And I wanted the opportunity to draw your attention to this quote:

    • One of the things that makes liability bad is we don’t have a quantified understanding of what’s going wrong, in the sense of root causes and contributing factors to the problems, and that makes it impossible to pass laws which effectively and narrowly target just those root causes. Instead we get broad laws which require things like security awareness training and insanely short-trigger provisions about telling regulators. We get requirements to change passwords every 90 days baked into regulations.

find more cartomancy [what’s happening]

Security Insights with Gunnar Peterson Ep. 6: Allison Miller | Forter


ttyl [what’s next]

Thanks for reading to the end of this set of lab notes. I’m thrilled to have some fellow travelers mapping out where we’ve been, philosophizing about where we want to be, and building the paths to get us where we’re going.

See you next time on the Futurecast!

Allison