Futurecast | Doctored Docs for Docs, Detecting Deepfakes, and Evolving Product Security

#0007 Trust And Cyber Online 🌮

hello world [what’s up]

OlĂĄ Cartomancers!

Hope you are doing well. Spring has officially sprung, with a slightly early Spring Equinox this week - anyone getting psyched for the eclipse coming up? My planetary-obsessed siblings are all gearing up to be somewhere in the path of a TOTAL ECLIPSE OF THE SUN. '80s babies, it's running in your head now, right? Forever.

Let’s get into it:

  • Noodling in the Lab

    • For real finally sorta getting started on the Big Credit Card Fraud article

  • News nuggets 

    • Doctored Docs for Docs: Fakes make their way into doctor’s offices and pharmacies

    • Digging into Deepfake Detection: Finally, detection tech brings some AI-detection into the deepfake battle

    • Evolving Product Security (+ AI): Getting ahead of harms; we can't rely on the tech to figure out policy implementation for us

a noodle from the lab [what we’re working on]

Yeppers, I spent most of my time this last week thinking about how we can teach AI to do better when it's so hard for us to express it ourselves (via policy, or in tech - see the last news nugget for a bit more on that). BUT I did get started on the first dusty rough-cut draft of The Big Credit Card Fraud Resource by going stream-of-consciousness on how credit cards work. Sorry, not sorry - I am a former economist…I looooooooooooove payments networks. Add a comment or hit “reply” to this newsletter if you have questions you’d like me to get to, or recommendations for additional details to include. (this is so fun, y’all)

training data [what’s news]

🗨️ Doctored Docs for Docs: Q: Doc Doc. Who’s there? A: I have no freaking idea. Both Frank on Fraud and 404 Media (shout-out to you, kindred spirits) have been reporting on this new (?) way to conduct fraud in the healthcare space, and it’s spicy. We know healthcare data systems are targeted for cyberattack, but there are simpler scams out there. 

404 Media goes deep on How Hackers Dox Doctors to Order Mountains of Oxy and Adderall - roughly, by stealing doctors' identities and credentials, fraudsters can use legit prescriber identities to sign up for electronic prescription portals. Of course, some of us still get tear-off, handwritten prescriptions, but lots of doctors' offices send those scrips digitally - theoretically more secure! If the participants are strongly authenticated, of course. But we know where this goes, and this is a handy-dandy target for phishers. Listen to the podcast for more details on Spotify: How Hackers Steal Mountains of Oxy and Adderall.

Faking scripts is one thing (a variant of an ATO or BEC), but how about fake patients? Over here we're seeing an interesting extension of Credit Protection Number (CPN) Schemes. CPNs look like an SSN/EIN (format-wise). The base scam: a fraudster sells a "clean" CPN to a consumer who needs a clean credit file created – often, though, those CPNs are actually stolen SSNs. Frank on Fraud recently shared that synthetic - or Premade CPN's - Are The New Scam. Multiple sellers are cropping up, making promises like increased privacy, credit repair, and a clean credit report to get more access to financial services. Of course, these are likely just stolen SSNs being sold as valid, working "synthetics" (although pair this scam with some of the synthetic ID fraud rings and things will get dicey). AND some of those "clean" profiles could also be added to healthcare benefit profiles - apparently that's what happened in the case of a Georgia woman who knew she was doing something a lil' bit wrong, but thought she was paying to be added to a legit benefits profile (full video below - not cool to be pulled out of a dentist's office, but interesting how the story plays out).
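
Since a CPN is format-identical to an SSN, here's a tiny illustrative check (the rules are simplified and the number is made up) showing why a format-only validator can't tell a "clean" CPN apart from someone's stolen SSN:

```python
import re

# Simplified SSA format rules: area can't be 000, 666, or 9xx; group can't be 00; serial can't be 0000.
SSN_PATTERN = re.compile(r"^(?!000|666|9\d{2})\d{3}-(?!00)\d{2}-(?!0000)\d{4}$")

def passes_format_check(number: str) -> bool:
    """Format-only validation -- it says nothing about whose identity the digits really belong to."""
    return bool(SSN_PATTERN.match(number))

print(passes_format_check("219-09-9999"))  # True, whether it's sold as a "CPN" or is a stolen SSN
```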

Prediction: Pill mills turbocharge by “laundering” scripts through multiple stolen prescriber IDs, cutting out the need to find doctors willing to play fast and loose with their prescription pads.

🗨️ Digging into Deepfake Detection: To be honest, I'm sorry it's not better news, but given the flood of deepfakes coming at us from all angles, it's nice to know that folks are working on the problem of detecting them. Besides all of the Photoshop detectives working out whether Kate Middleton is filtering her official royal snaps, deepfake detection is being used to identify real-world attempts to leverage manipulated media. For example, Paul Vann (of IdentifAI) posted on LinkedIn recently re: a Coinbase Deepfake Ad; here's a video (below) showing the IdentifAI team explaining how they identified the faked YouTube video ad, which appeared to feature Coinbase CEO Brian Armstrong.

Battling deepfakes is an issue with bipartisan support here in the US, with lawmakers encouraging both funding and new legislation for deepfake detection (NSF and DARPA grants). But the government won't be alone in encouraging this; VCs can sense the opportunity to get into this space early, with Israeli deepfake detection startup Clarity raising $16 million in seed funding. And how does this tech work? Well, Clarity's approach is to recognize patterns common in the creation of deepfakes, and the company also offers a watermark to designate authentic content.
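
Clarity hasn't published its implementation details, so treat this as a generic sketch of the "designate authentic content" half of the idea: a keyed provenance tag the creator attaches at publish time that anyone downstream can verify. Real watermarking embeds the signal in the media itself; this minimal version uses a detached HMAC, and the key and names are invented:

```python
import hmac
import hashlib

SIGNING_KEY = b"creator-held-secret"  # hypothetical key held by the original publisher

def provenance_tag(media_bytes: bytes) -> str:
    """Tag media at creation time so it can later be checked as authentic."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def is_authentic(media_bytes: bytes, tag: str) -> bool:
    """True only if the bytes match what the creator originally tagged."""
    return hmac.compare_digest(provenance_tag(media_bytes), tag)

original = b"original video bytes"
tag = provenance_tag(original)
print(is_authentic(original, tag))            # True
print(is_authentic(b"deepfaked bytes", tag))  # False
```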

Other teams are looking in other directions. For example, Pindrop recently dropped deets on their new product, Pulse, for Audio Deepfake Detection. The authentication and voice security vanguard explains How Their Approach to Deepfake Detection Works - basically they are using AI to find AI (patterns, naturally), plus a critical element - Liveness Detection. Note: Pindrop claims its deepfake detection technology has a 99% success rate, well above the human success rate (humans can detect deepfake speech only 73% of the time). Since we are seeing deepfakes crop up in online forums, ads, and online meetings, it is helpful to know these detection techniques are developing and being applied – for example, CNN reports that Steve Kramer confessed to creating the recent Biden audio deepfakes, which were subsequently flagged by Pindrop.
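
Pindrop's actual Pulse models are proprietary, but "using AI to find AI" generally boils down to turning audio into features and scoring them with a classifier trained on labeled real vs. synthetic speech. This is only a shape-of-the-pipeline sketch - the feature choice, classifier hook, and liveness note are my placeholders, not Pindrop's method:

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_features(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Summarize a clip as per-band log-energy statistics (a stand-in for real acoustic features)."""
    _, _, spec = spectrogram(audio, fs=sample_rate, nperseg=512)
    log_spec = np.log1p(spec)
    return np.concatenate([log_spec.mean(axis=1), log_spec.std(axis=1)])

def deepfake_probability(audio: np.ndarray, sample_rate: int, model) -> float:
    """Score a clip with any classifier trained on labeled real vs. machine-generated speech."""
    feats = spectral_features(audio, sample_rate).reshape(1, -1)
    return float(model.predict_proba(feats)[0, 1])

# Liveness is a separate dimension: e.g., prompt the speaker with a fresh challenge phrase
# and verify the response arrives with natural timing, which pre-generated audio can't easily fake.
```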

More good leads to chase:

🗨️ Evolving Product Security + AI: Built-in versus Bolted-On Later - that's the clarion cry of modern Product Security. Whether it's Privacy by Design, Safety by Design, or Security by Design, the key to REALLY making Defense in Depth work is building layers INTO the product or system, not slapping them on later. So says HBR, so say we all. (Cybersecurity Needs to Be Part of Your Product's Design from the Start)

It was nice to see that Bluesky's press release about opening the app to everyone (Big Things on the Horizon: Bluesky Opens App to Everyone) included a section about prioritizing Safety, and that they made a big splash when Bluesky snags former Twitter/X Trust & Safety exec, Aaron Rodericks. These are good moves to make, and good to make them early. (Still curious how they will factor ads into their business model: Bluesky CEO Jay Graber Says She Won't 'Enshittify the Network With Ads')

I have been thinking a lot about how we translate these "by design" needs into technologies like AI, and so Robert Hansen's post on LinkedIn (With regard to Google Gemini, we might as well rebrand it as Google Homogeny...) really left me thinking, both about 1) who we trust with making decisions about ethics in these deeply embedded technologies, and 2) how they will implement those ethics/policies effectively.

Specifically, I'm thinking about how much I like the idea of Opinionated Design, pioneered by (funny enough) privacy, security, and UX researchers from Google. (Disclosure: I, too, am a Xoogler.) This approach suggests that product designers can frame product features so that they default to the safest outcome. When dealing with SSL warnings, the safest default is relatively clear to both the designer and the end users - with the results of LLMs it might not be as clear.
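
To make "default to the safest outcome" concrete, here's a toy sketch (the function and parameter names are mine, not from the Google researchers): the safe behavior costs the caller nothing, and the risky path requires explicitly naming the risk.

```python
import ssl
import urllib.request

def fetch_url(url: str, allow_invalid_tls: bool = False) -> bytes:
    """Fetch a URL; certificate verification is the default, opting out is explicit and noisy."""
    context = ssl.create_default_context()  # the opinionated, safe default
    if allow_invalid_tls:
        # The risky path still exists, but the caller has to spell out what they are giving up.
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(url, context=context) as resp:
        return resp.read()

# Usage: fetch_url("https://example.com") verifies certificates with zero extra effort;
# fetch_url(url, allow_invalid_tls=True) makes the unsafe choice visible in code review.
```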

The makers of AI products (just like in recommendation algorithms, just like in ads, just like in search) face the following - how do you train and deploy at scale?

  • Do we assume that training purely on the attributes being optimized (will they “like”, will they “click”) is the way to go? No, we clearly know that doesn’t work. We’ve seen bots go bad because they are a product of reinforcement learning - both the good and the bad can be reinforced (see the sketch after this list).

  • So, what’s the alternative - decide which results and which information are correct? No, that doesn’t work either, because there’s no arbiter of truth and no universal set of norms.
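
Here's the sketch promised above - a toy epsilon-greedy loop (the items and click-through rates are invented) that optimizes clicks alone and happily converges on whatever draws the most clicks, with no notion of whether that content is good or harmful:

```python
import random

random.seed(7)
click_rates = {"useful_answer": 0.10, "outrage_bait": 0.30}  # hypothetical CTRs
estimates = {item: 0.0 for item in click_rates}
counts = {item: 0 for item in click_rates}

for _ in range(10_000):
    if random.random() < 0.1:                         # explore occasionally
        item = random.choice(list(click_rates))
    else:                                             # otherwise exploit the best estimate so far
        item = max(estimates, key=estimates.get)
    clicked = random.random() < click_rates[item]     # simulated user feedback
    counts[item] += 1
    estimates[item] += (clicked - estimates[item]) / counts[item]

print(counts)  # "outrage_bait" dominates -- the objective, not the content quality, decided
```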

Is it any secret that these recommendation systems are developed and deployed in a tech-forward way - optimizing for the variables used to evaluate performance and popularity - and that policy programs then go in and execute brute-force tweaks? You might not notice it as a user, but problematic search terms lead to specific, curated answers (and might also not be monetized). Some of those specific search terms might have been part of a larger policy effort, or a response to a public snafu, or to an angry advertiser that didn’t want their brand associated with…whatever the term is, or the results were.
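
For a concrete picture of those brute-force tweaks, here's a toy override layer (the terms, messages, and ranker hook are all invented for illustration) that intercepts flagged queries before the normal ranking pipeline answers and marks them as non-monetizable:

```python
# Hypothetical policy override table, consulted before the normal ranker runs.
POLICY_OVERRIDES = {
    "flagged query example": {
        "response": "Here are vetted resources on this topic.",
        "monetize": False,  # suppress ads next to sensitive results
    },
}

def serve(query: str, rank_results) -> dict:
    """Return curated results for flagged terms; otherwise defer to the normal ranker."""
    override = POLICY_OVERRIDES.get(query.lower().strip())
    if override is not None:
        return {"results": [override["response"]], "show_ads": override["monetize"]}
    return {"results": rank_results(query), "show_ads": True}

# Example with a stand-in ranker that just echoes the query:
print(serve("flagged query example", lambda q: [f"organic result for {q}"]))
print(serve("ordinary query", lambda q: [f"organic result for {q}"]))
```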

Did the growth of recommendations, search, and ads take longer than what we’re seeing with AI, so that such mistakes had a smaller blast radius and teams had time to develop processes to force-tweak the results to be less harmful? Because mistakes made with AI seem to be heard ’round the world, and then the responses to those mistakes (roll-backs, over-corrections) also get lambasted. I’m glad for the transparency (a rising transparency tide lifts all floaty boats), but it’s kind of hard to watch these teams moving SO FAST, at this scale, with all of the implications of missteps casting such a long shadow.

I don’t know what the right answer is. I’m not agreeing or disagreeing with Hansen’s post. I’m just acknowledging that the question he raises is important, but not easy to answer: if AI needs to be safe, reliable, and equitable, there’s a balance to be struck, since we cannot rely on the tech and the vagaries of reinforcement learning alone. So who decides what’s balanced? And how do they implement those decisions in a way that doesn’t make problems worse?

Let me know what you think.

find more cartomancy [what’s out there]

coming soon

▶️ On April 25th, join me at improve 2024 (featuring Fraud Fight Club). I’ll be discussing SCAMS: Defining, Measuring, and Combatting at 11am with fellow fraud experts David Kerman (Chase), Mike Timoney (FRB Boston), and Ian Mitchell (The Knoble / Mission Omega). See you there!

on demand

I was delighted to spend some time discussing cybersecurity career paths, leadership development, and industry trends while reconnecting with my friend and colleague Sandra Liu (if you haven't seen what she's working on over on YouTube, I encourage you to check out her projects). In this interview, we cover cybersecurity career and industry topics, including:

  • 🤝 What do hiring managers look for when hiring candidates for a job?

  • 💻 What cybersecurity skills are most relevant?

  • 💭 What are the biggest challenges facing organizations today?

ttyl [what’s next]

Thanks for reading to the end of this set of lab notes. I’m thrilled to have some fellow travelers mapping out where we’ve been, philosophizing about where we want to be, and building the paths to get us where we’re going.

If you’ve read to the end and you find this content helpful, I’d love feedback. My news feed is full of leads, but my personal algorithm loves learning about what interests the community, so that I can focus in on what will be most useful. Just hit reply and your comments will come whizzing into my inbox. (It’s also a good way to find me if you are interested in working with me or with Cartomancy Labs).

See you next time on the Futurecast!

Allison

@selenakyle