Every Convenience Is a Control Surface
What a chatbot's refusal taught me about the systems I used to build.
So, I’m deep in my newsletter workflow — feeding ChatGPT my articles, getting editorial suggestions, the whole thing — and at some point, I ask it to generate an image of me. Which it can do. Cool, right? Except every single time, it asks me to upload a photo of my face as a reference.
Every. Single. Time.
(I know. I know. Just keep reading.)
So finally, I get annoyed enough to just ask: why can’t you use the photo I already gave you?
And it says, more or less: because that would be stalking.
Oh.
Oh.
You know that feeling when you’ve been low-grade irritated at something for weeks, and then the actual reason lands, and your whole body does this weird recalibration from “this is broken” to “oh god, this is the only thing keeping it from being terrifying”?
That’s what happened to me. Right there at my desk, having what I can only describe as an involuntary ethics realization.
Because if ChatGPT could remember my face across sessions, we’re not talking about a helpful little upgrade. We’re talking about a system that recognizes you instantly, tracks how you’ve aged, reconstructs you from memory, and quietly builds a face database out of what you thought were just casual conversations. No cameras on poles. No guys in uniforms. Just a friendly chatbot that “remembers.”
That’s not an assistant. That’s biometric surveillance with good UX. And you signed up for it because you were too lazy to re-upload a photo.
(Me. I’m the you in that sentence.)
Here’s the wild part: this isn’t even hard to build. Technologically, we’re already there. The only thing standing between “helpful AI” and “soft authoritarian data collection apparatus” is restraint. Policy. Architecture. Actual human beings making deliberate choices. And that’s assuming those human beings keep making the right choices, which is a big assumption.
Which I say with complete awareness that I used to be one of those human beings.
I worked at Yahoo! (yes, the exclamation point is mandatory, and no, I don’t make the rules). Back when that sentence didn’t come with a footnote and the yodel hadn’t yet been replaced with a sad trombone. Back when Yahoo! was the internet for a lot of people — the homepage, the email, the news, the search. I was on the Yahoo! Photos team, building interfaces that millions of people used every day, and I can tell you with total certainty that not once did anyone in any meeting say, “hey, we’re designing a control surface here.”
We talked about users. Engagement. Features. Clicks. Time on page. We talked about removing friction like it was categorically, always, without exception, a good thing. Friction was the enemy. Friction meant the metrics went down. Friction meant someone left.
Nobody frames it as control when you’re building. You frame it as service. You’re helping people. You’re optimizing.
And that word — optimizing — covers a remarkable number of sins. When you optimize for engagement at scale, you stop helping individuals and start shaping behavior. You decide what’s easy and what’s hard, what’s visible and what’s buried, what people do without thinking versus what requires actual effort. Those are enormous choices. They were disguised as product decisions.
I didn’t fully understand that then.
I do now. Unfortunately.
Once you see it, you can’t stop seeing it. All that “annoying” friction in AI tools? A lot of it is the only thing holding the ceiling up.
Take memory. I’ve had hundreds of conversations with ChatGPT. If it remembered every single one, it would have a fairly detailed portrait of me — every doubt, every financial stress, every half-formed 2am thought about whether I should just quit the tech industry and go work at Trader Joe’s. (The health insurance is apparently great.) Over time, that portrait doesn’t just describe you. It predicts you. Nudges you. Shapes what you see and how you feel about your options.
This is called scope creep, and I’ve watched it happen from the inside. Features built for one purpose, quietly repurposed for another. Data collected for personalization becomes data available for ad targeting. Turns out architecture doesn’t care about your original intent. It just enables whatever comes next.
So, AI memory is lossy. Kept incomplete, kept sessionless, kept from becoming a portrait. Not purely by accident. Not because the engineers couldn’t do better — they absolutely could — but because “better” is the wrong word for what total recall would actually become.
Or take refusals. I once asked an AI to help me find information about someone. Completely benign — I was vetting a potential client. The AI hedged. Wouldn’t do it.
My intent was fine. But the person typing that exact same request with completely different intentions? Looking up an ex. A witness. Someone they saw on the bus.
Same request. Completely different universe of outcomes.
You can’t build a door that only opens for good people. When you’re building at scale, you learn fast that you can’t design for intent — you can only design for capability. “What can the system do?” not “what do you hope people will do with it?” The gap between those questions is where the damage lives.
Here’s the one that actually gets me, though. Why can’t AI just give you definitive medical or legal advice?
Let’s say, hypothetically, you’re dealing with Medi-Cal. You hypothetically have your mom’s coverage renewal sitting on the kitchen table. You’ve already been on hold for forty-five minutes. You just want someone to tell you what to do. So you type the whole situation into the chatbot, off the cuff, desperate.
And the AI says: here are some general guidelines, but you should consult a professional.
Which feels like being handed a pamphlet when you’re actively drowning. Hypothetically.
But if the AI said, “do exactly this” and you got denied — who’s responsible? Not the AI. Not the company. You. You’re the one who followed the advice. You’re the one who missed the appeal deadline because a chatbot told you it would be fine.
It’s called automation bias. The tendency to over-trust machines even when something feels off. There’s real research on this — it shows up in cockpits, in clinical settings, in places where the stakes are high enough that someone actually studied it. And when things go wrong, the verdict is always roughly the same: the human should have known better. The human had the final call. The human is responsible.
Cool. Super helpful. Tell that to the denial letter.
The hedging isn’t a bug. It’s the system refusing to take responsibility it genuinely cannot bear.
(Still absolutely maddening when you’re sitting at that kitchen table, though. Hypothetically.)
Every convenience is a potential control surface.
If a system can help you, it can steer you. If it can remember, it can profile. If it can act, it can override. If it can predict, it can manipulate.
And the distance between “help” and “control” isn’t some vast canyon you’d notice yourself falling into. It’s a setting. A UI toggle. A policy change at a company you’ve never heard of, made by people who will never know your name.
I know because I’ve been one of those people. Not maliciously. Not even carelessly, most of the time. Just... quickly. Under deadline. Shipping the feature because it tested well and the metrics supported it and another sprint starts Monday.
That’s how it happens. Not with villains. With sprints.
I’ve spent the last few years in waiting rooms. Filing paperwork. Watching my parents get processed by systems designed for efficiency, not for them. And there’s something genuinely clarifying about going from the person who builds the machine to the person who gets processed by it. You feel — viscerally, in your body — how “streamlined” is just another way of saying “we removed the parts where a human might have actually helped you.”
So now, when I think about AI that could “just remember faces,” I think less about how convenient that would be. I think about insurance companies. Immigration offices. Benefits systems. Law enforcement. Places that already treat people like data points, suddenly equipped with a tool that’s exceptionally good at turning people into data points.
And then I think about that annoying little friction. The photo upload. Every. Single. Time.
You know what? Keep it inconvenient.
I spent years removing friction for a living. I know what happens when you build doors that are too easy to open.
All of this existential reckoning. All of it. For a photo that looks like this:

[image: the AI-generated portrait in question]