Futurecast | Doctored Docs for Docs, Detecting Deepfakes, and Evolving Product Security
#0007 Trust And Cyber Online 🔮
hello world [what's up]
Olá Cartomancers!
Hope you are doing well. Spring has officially sprung, with a slightly early Spring Equinox earlier this week - anyone getting psyched for the eclipse coming up? My planetary-obsessed siblings are all gearing up to be somewhere in the path of a TOTAL ECLIPSE OF THE SUN. 80's babies, it's running in your head now, right? Forever.
Let's get into it:
Noodling in the Lab
For real finally sorta getting started on the Big Credit Card Fraud article
News nuggets
Doctored Docs for Docs: Fakes make their way into doctors' offices and pharmacies
Digging into Deepfake Detection: Finally, detection tech brings some AI firepower to the deepfake battle
Evolving Product Security (+ AI): Getting ahead of harms, can't rely on the tech to figure out policy implementation for us
a noodle from the lab [what we're working on]
Yeppers, I spent most of my time this last week thinking about how we can teach AI to do better when it's so hard for us to express it ourselves (via policy, or in tech - see the last news nugget for a bit more on that). BUT I did get started on a first dusty rough-cut draft of The Big Credit Card Fraud Resource by going stream-of-consciousness on how credit cards work. Sorry, not sorry - I am a former economist…I looooooooooooove payments networks. Add a comment or hit 'reply' to this newsletter if you have questions you'd like me to get to, or recommendations for additional details to include. (this is so fun, y'all)
training data [what's news]
🗨️ Doctored Docs for Docs: Q: Doc Doc. Who's there? A: I have no freaking idea. Both Frank on Fraud and 404 Media (shout-out to you, kindred spirits) have been reporting on this new (?) way to conduct fraud in the healthcare space, and it's spicy. We know healthcare data systems are targeted for cyberattack, but there are simpler scams out there.
404 Media goes deep on How Hackers Dox Doctors to Order Mountains of Oxy and Adderall - roughly: by stealing a doctor's identity and credentials, fraudsters can use legit doctor identities to sign up for electronic prescription portals. Of course - some of us still get tear-off, handwritten prescriptions, but lots of doctors' offices send those scrips digitally - theoretically more secure! If the participants are strongly authenticated, of course. But we know where this goes, and this is a handy-dandy target for phishers. Listen to the podcast for more details - Spotify: How Hackers Steal Mountains of Oxy and Adderall
Faking scripts is one thing (a variant of an ATO or BEC), but how about fake patients? Over here we're seeing an interesting extension of Credit Protection Number (CPN) schemes. CPNs look like an SSN/EIN (format-wise). The base scam: a fraudster sells a "clean" CPN to a consumer who needs a clean credit file created - often, though, those CPNs are actually stolen SSNs. Frank on Fraud recently shared that synthetic - or Premade - CPNs Are The New Scam. Multiple sellers are cropping up, making promises like increased privacy, credit repair, and a clean credit report to get more access to financial services. Of course, these are likely just stolen SSNs being sold as valid, working "synthetics" (although pair this scam with some of the synthetic ID fraud rings and things will get dicey). AND some of those "clean" profiles can apparently also be added to healthcare benefit profiles, as in the case of a Georgia woman who knew she was doing something a lil' bit wrong, but thought she was paying to be added to a legit benefits profile (full video below - not cool to be pulled out of a dentist's office, but interesting how the story plays out).
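For the technically curious: the whole reason this scam works is that a CPN is indistinguishable from an SSN on format alone. A quick, purely illustrative sketch (not from any real verification system) of why a format-only check can't catch the swap:

```python
import re

# Format-only validation - the kind a lax onboarding flow might rely on.
# A purchased "CPN" is just nine digits packaged like an SSN, so it sails
# through; only verification against the issuer (e.g., the SSA's consent-based
# SSN verification service) could tell a legit SSN, a stolen one, and a
# made-up "CPN" apart.
SSN_FORMAT = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def passes_format_check(number: str) -> bool:
    return bool(SSN_FORMAT.match(number))

print(passes_format_check("123-45-6789"))   # placeholder SSN: True
print(passes_format_check("987-65-4321"))   # "clean CPN" from a seller: also True
```

Point being: the format check is table stakes - it's issuer verification and linkage to a real person that actually matters.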
Prediction: Pill mills turbocharge by "laundering" scripts through multiple stolen prescriber IDs, cutting out the need to find doctors willing to play fast and loose with their prescription pads.
🗨️ Digging into Deepfake Detection: To be honest, I'm sorry it's not better news, but given the flood of deepfakes coming at us from all angles, it's nice to know that folks are working on the detection problem. Besides all of the Photoshop detectives working out whether Kate Middleton is filtering her official royal snaps, deepfake detection is being used to identify real-world attempts to leverage manipulated media. For example, Paul Vann (of IdentifAI) posted on LinkedIn recently re: a Coinbase deepfake ad - here's a video (below) showing the IdentifAI team explaining how they identified the faked YouTube video ad, seemingly featuring Coinbase CEO Brian Armstrong.
Battling deepfakes is an issue with bipartisan support here in the US, with lawmakers encouraging both funding and new legislation for deepfake detection (NSF and DARPA grants). But the government won't be alone in encouraging this - VCs can sense the opportunity to get into this space early, with Israeli deepfake detection startup Clarity raising $16 million in seed funding. And how does this tech work? Well, Clarity's approach recognizes patterns common in the creation of deepfakes and offers a watermark to designate authentic content.
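Clarity hasn't published implementation details, so take this as my guess at the general shape of the "watermark for authentic content" idea: attach a verifiable provenance tag at publish time so anyone downstream can check that the media is unaltered. A minimal sketch (using an HMAC to keep it short - real provenance schemes like C2PA use public-key signatures and richer metadata):

```python
import hashlib
import hmac

# Hypothetical provenance tag: the publisher signs the media bytes with a key,
# and downstream platforms verify the tag to confirm the content is unaltered
# and came from that publisher.
PUBLISHER_KEY = b"demo-key-not-for-production"

def watermark(media_bytes: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def is_authentic(media_bytes: bytes, tag: str) -> bool:
    return hmac.compare_digest(watermark(media_bytes), tag)

original = b"frame data from the genuine ad"
tag = watermark(original)
print(is_authentic(original, tag))                  # True
print(is_authentic(b"deepfaked frame data", tag))   # False - the tag won't match
```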
Other teams are looking in other directions. For example, Pindrop recently dropped deets on their new product, Pulse, for Audio Deepfake Detection. The authentication and voice security vanguard explains How Their Approach to Deepfake Detection Works - basically, they are using AI to find AI (patterns, naturally) plus a critical element: Liveness Detection. Note: Pindrop claims its deepfake detection technology has a 99% success rate, well above the human success rate (humans can detect deepfake speech only 73% of the time). Since we are seeing deepfakes crop up in online forums, ads, and online meetings, it is helpful to know these detective techniques are developing and being applied - for example, CNN reports that Steve Kramer confessed to creating the recent Biden audio deepfakes, which were subsequently flagged by Pindrop.
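If you want a feel for what "using AI to find AI" means in practice, here's a toy sketch - simulated features, not Pindrop's actual pipeline - that trains a classifier to separate human speech from synthetic speech using per-clip acoustic-style features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated per-clip feature vectors (think spectral flatness, pitch jitter,
# pause statistics). Real systems extract these from audio; the distributions
# here are invented so the example runs standalone.
n = 2000
human = rng.normal(loc=0.0, scale=1.0, size=(n, 8))
synthetic = rng.normal(loc=0.4, scale=0.8, size=(n, 8))
X = np.vstack([human, synthetic])
y = np.array([0] * n + [1] * n)   # 0 = human speech, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"toy detector accuracy: {clf.score(X_test, y_test):.1%}")
```

Real detectors obviously work on real audio features (and layer liveness challenges on top), but the pattern-classifier core looks a lot like this.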
🗨️ Evolving Product Security + AI: Built-in versus Bolted-On Later - that's the clarion cry of modern Product Security, and so whether it's Privacy by Design, Safety by Design, or Security by Design - the key to REALLY making Defense in Depth work is building layers INTO the product or system, not slapping them on later. So says HBR, so say we all. (Cybersecurity Needs to Be Part of Your Product's Design from the Start)
It was nice to see that Bluesky's press release about opening the app to everyone (Big Things on the Horizon: Bluesky Opens App to Everyone) included a section about prioritizing Safety, and they made a big splash with the headline Bluesky snags former Twitter/X Trust & Safety exec, Aaron Rodericks. These are good moves to make, and good to make them early. (Still curious how they will factor ads into their business model: Bluesky CEO Jay Graber Says She Won't "Enshittify the Network With Ads")
I have been thinking a lot about how we translate these "by design" needs into technologies like AI, and so Robert Hansen's post on LinkedIn (With regard to Google Gemini, we might as well rebrand it as Google Homogeny...) really left me thinking, both about 1) who we trust with making decisions about ethics in these deeply embedded technologies, and 2) how they will implement ethics/policies effectively.
Specifically, I'm thinking about how much I like the idea of Opinionated Design, pioneered (funny enough) by privacy, security, and UX researchers from Google. (Disclosure: I, too, am a Xoogler.) The approach suggests that product designers can frame product features to default to the safest outcome. When dealing with SSL warnings, the safe default is probably clear to both the designer and the end user - with LLM outputs, it might not be.
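To make that SSL example concrete, here's a rough sketch of opinionated design in code - field names are mine, not from any actual browser - where the safe action is the one-click default and the unsafe path gets deliberate friction:

```python
from dataclasses import dataclass, field

@dataclass
class WarningInterstitial:
    # The safe choice is the prominent, one-click action; the unsafe path is
    # tucked behind "Advanced" and slowed down with a countdown.
    message: str
    primary_action: str = "Back to safety"
    advanced_actions: list[str] = field(default_factory=lambda: [
        "Show certificate details",
        "Proceed anyway (unsafe)",
    ])
    countdown_before_proceed_s: int = 3

ssl_warning = WarningInterstitial(
    message="Your connection is not private (certificate name mismatch)."
)
print(ssl_warning.primary_action)   # the path of least resistance is the safe one
```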
The makers of AI products (just like in recommendation algorithms, just like in ads, just like in search) face the following - how do you train and deploy at scale?
Do we assume that training purely on the attributes being optimized (will they "like", will they "click") is the way to go? No, we clearly know that doesn't work. We've seen bots go bad because they are a product of reinforcement learning: both the good and the bad can be reinforced.
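Here's a tiny simulation of that failure mode (all numbers made up): an epsilon-greedy bandit rewarded only on clicks will pour nearly all of its impressions into whichever arm clicks best - in this toy case, the clickbait.

```python
import random

random.seed(1)

# Assumed click-through rates for three pieces of content (illustrative only).
ctr = {"helpful_guide": 0.04, "ok_listicle": 0.06, "outrage_clickbait": 0.12}
impressions = {k: 0 for k in ctr}
clicks = {k: 0 for k in ctr}

def pick(epsilon: float = 0.1) -> str:
    # Epsilon-greedy: mostly exploit whichever arm has the best observed CTR.
    if random.random() < epsilon:
        return random.choice(list(ctr))
    # Optimistic value for untried arms so each gets sampled at least once.
    return max(ctr, key=lambda k: clicks[k] / impressions[k] if impressions[k] else 1.0)

for _ in range(50_000):
    arm = pick()
    impressions[arm] += 1
    clicks[arm] += random.random() < ctr[arm]   # the only reward is "did they click?"

print({k: round(v / 50_000, 3) for k, v in impressions.items()})
# The clickbait arm ends up with the overwhelming share of impressions,
# because clicks are the only signal being reinforced.
```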
So, what's the alternative - decide what results and what information is correct? No, that doesn't work either. Because there's no arbiter of truth, and no universal set of norms.
Is it a secret of these recommendation systems that they are developed and deployed in a tech-forward way, optimizing for those variables that are used to evaluate performance and popularity…and then policy programs go in and execute brute-force tweaks? You might not notice it as a user, but problematic search terms lead to specific answers (and might also not be monetized). Some of those specific search terms might have been part of a larger policy effort, or a response to a public snafu, or an angry advertiser that didn't want their brand associated with…whatever the term is, or whatever the results were.
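Hypothetically, that "brute force tweak" layer can be as simple as this - a post-ranking override table that swaps in curated results and turns off ads for specific flagged terms (names and structure invented for illustration):

```python
# A policy layer that intercepts flagged queries after ranking, serving a
# curated response and disabling monetization on them.
CURATED_OVERRIDES = {
    "miracle cure for cancer": {"results": ["curated health-authority panel"], "ads": False},
    "are the election results rigged": {"results": ["authoritative election info"], "ads": False},
}

def serve(query: str, ranked_results: list[str]) -> dict:
    override = CURATED_OVERRIDES.get(query.lower().strip())
    if override:
        # Policy wins over the ranking model for this exact term.
        return override
    return {"results": ranked_results, "ads": True}

print(serve("Are the election results rigged", ["blog post", "forum thread"]))
print(serve("weekend hiking trails", ["trail guide", "park site"]))
```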
Did the growth of recommendations, search, and ads take longer than what we're seeing with AI, so that mistakes had a smaller blast radius and teams had time to develop processes to force-tweak the results to be less harmful? Because mistakes made with AI seem to be heard 'round the world, and then the responses to those mistakes (roll-backs, over-corrections) also get lambasted. I'm glad for the transparency (a rising transparency tide lifts all floaty boats), but it's kind of hard to watch these teams moving SO FAST, at this scale, with all of the implications of missteps casting such a long shadow.
I don't know what the right answer is. I'm not agreeing or disagreeing with Hansen's post. I'm just acknowledging that the question he raises is important, but not easy to answer: if AI needs to be safe, reliable, and equitable, there's a balance to be struck, since we cannot rely on the tech and the vagaries of reinforcement learning alone. So who decides what's balanced? And how do they implement those decisions in a way that doesn't make the problems worse?
Let me know what you think.
find more cartomancy [what's out there]
coming soon
▶️ On April 25th, join me at improve 2024 (featuring Fraud Fight Club). I'll be discussing SCAMS: Defining, Measuring, and Combatting at 11am with fellow fraud experts David Kerman (Chase), Mike Timoney (FRB Boston), and Ian Mitchell (The Knoble / Mission Omega). See you there!
on demand
I was delighted to spend some time discussing cybersecurity career paths, leadership development, and industry trends while reconnecting with my friend and colleague Sandra Liu (if you haven't seen what she's working on over on YouTube, I encourage you to check out her projects). In this interview, we cover cybersecurity career and industry topics including:
🤔 What do hiring managers look for when hiring candidates for a job?
💻 What cybersecurity skills are most relevant?
🌐 What are the biggest challenges facing organizations today?
ttyl [what's next]
Thanks for reading to the end of this set of lab notes. I'm thrilled to have some fellow travelers mapping out where we've been, philosophizing about where we want to be, and building the paths to get us where we're going.
If you've read to the end and you find this content helpful, I'd love feedback. My news feed is full of leads, but my personal algorithm loves learning about what interests the community, so that I can focus in on what will be most useful. Just hit reply and your comments will come whizzing into my inbox. (It's also a good way to find me if you are interested in working with me or with Cartomancy Labs).
See you next time on the Futurecast!
Allison
@selenakyle