Using AI Without Surrendering Your Judgment
Two positions dominate the conversation about AI right now, and both of them are wrong.
The first is the hype version: AI is transformative, AI changes everything, AI will write your emails and run your meetings and basically think for you if you let it. The people selling this version have something to sell. The second position is the reflexive rejection you'll find in a lot of independent web and fediverse spaces: AI is theft, AI is slop, AI is what's killing the things we care about online. That position is more sympathetic, but it's also not quite right.
The nuanced take is harder to hold because it requires you to actually think about what you're doing instead of just picking a side. Here it is: AI is a tool. Treat it like one.
That sounds obvious until you watch how most people actually use it.
A tool is something you pick up with a specific job in mind. You know what the tool does well and what it doesn't. You stay in charge of the work. The tool doesn't decide what gets made or whether it's good. You do. When people talk about AI as an "oracle" or a "copilot" or anything that implies it's navigating and you're just along for the ride, they've already lost the thread. The question isn't whether the output is impressive. The question is whether you're still the one making judgments.
Augmentation and replacement are not the same thing, and the distinction matters more than people want to admit. Using a calculator doesn't make you worse at math; it offloads the arithmetic so you can think about the problem. That's augmentation. Asking an AI to write your thinking for you, then publishing it because it sounds pretty good, is replacement. You've handed off the actual cognitive work. Whether the output is technically competent is beside the point. You didn't do the thinking, and it shows in ways that are hard to articulate but easy to feel when you're reading it.
Intentional workflows are the practical expression of all this. They're also what separates people who are using AI well from people who are just using AI. An intentional workflow starts with knowing what you're actually trying to do. Maybe you're using a local model to summarize a long document before you read it, so you know where to focus. Maybe you're using it to check your reasoning or pressure-test an argument. Maybe it's drafting boilerplate so you can spend your time on the parts that require judgment. In each of those cases, you've defined the job, you've limited the scope, and you're evaluating the output rather than just accepting it.
That's different from opening a chat window and asking it what to think.
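The first of those workflows is concrete enough to sketch. Here's a minimal version of summarize-before-you-read, assuming a local Ollama instance is serving a model on its default port; the endpoint is Ollama's actual API, but the model name, prompt wording, and filename are illustrative choices, not recommendations.

```python
# Minimal sketch: ask a locally running model for a map of a long document.
# Assumes Ollama (https://ollama.com) is running with a model already pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def summarize(text: str, model: str = "llama3") -> str:
    """Return a short skim-first summary of `text` from a local model."""
    prompt = (
        "Summarize the following document in five bullet points so a "
        "reader knows where to focus:\n\n" + text
    )
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


if __name__ == "__main__":
    with open("long_document.txt", encoding="utf-8") as f:
        print(summarize(f.read()))
```

The shape is the point: the job is narrow, the output is a map of the document rather than a substitute for reading it, and the judgment about what matters stays with you.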
The local and private AI question is worth its own consideration, especially for people who already care about infrastructure independence. Running a model locally isn't always practical, but when it is, it changes the relationship. You're not sending your data to a company whose interests don't align with yours. You're not training their next model with your queries. You're using a tool on your own hardware, and the output stays on your machine. For a lot of use cases, a local model is good enough, and good-enough-plus-private beats great-but-surveilled.
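If you want to see how small that footprint can get, here's an equally minimal sketch using llama-cpp-python, which runs the model in-process rather than through a server. It's one option among several, and the GGUF path and prompt below are placeholders, not recommendations.

```python
# Fully offline sketch: the model runs inside this process, so nothing leaves
# the machine. Assumes llama-cpp-python is installed and a GGUF model file
# has already been downloaded; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf", verbose=False)

output = llm(
    "Summarize in one sentence: the fediverse is a network of "
    "independently run, interoperable social servers.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```

Nothing in that snippet touches the network; the query and the output live and die on your own disk.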
The generated content versus authored content distinction is where things get philosophically interesting, and where a lot of the legitimate criticism of AI actually lives. There's a real difference between content that a person wrote and content that a model generated, even if you can't always tell from the outside. The difference isn't just aesthetic. Authored content carries a point of view that was formed through the actual process of writing it. The thinking and the writing happen together, and each one shapes the other. Generated content skips that process. The output can be fluent and well-structured and completely hollow because no one did the thinking that writing is supposed to do.
This is why "but you can't tell the difference" is not the defense people think it is. The absence of detectable seams doesn't mean the thing was made with the same care.
Preserving discernment is probably the most important practice here, and also the one that erodes the fastest if you're not paying attention. Discernment is the capacity to tell good from bad, useful from useless, true from plausible-sounding. It's a skill, which means it atrophies when you stop using it. If you outsource your judgment to a model consistently enough, you start losing the ability to evaluate the output at all. You become dependent not because the tool is addictive but because you've quietly let a capability lapse.
The people doing this well treat AI outputs the way a good editor treats a first draft: as raw material that needs their judgment applied to it, not as a finished product to be accepted or rejected wholesale. They're harder to impress and quicker to rewrite. They know what they're looking for because they've thought about it before they asked.
That's the posture worth developing. Not reflexive rejection, not credulous acceptance. Just clear eyes about what the tool does, what you're asking it to do, and who's responsible for the result.
That last part is always you.