Published · 5 min read

The Brief: Building for LLMs Without Losing the Humans

A practitioner's memo on the question every community team is now being asked — and what leadership keeps getting wrong about the answer.

GEO · AI-Readiness · Community Design

The situation: AI assistants are now the first place millions of people go for product answers. Leadership has noticed. And somewhere in a conference room near you, someone has already asked the question that's quietly threatening community budgets everywhere:

"Can't the AI just answer that?"

This brief is for the community professional who has to answer that question — and who wants to answer it with something more useful than defensiveness.

What Leadership Is Actually Saying

When a VP asks "can't AI answer that?", they're not being malicious. They're pattern-matching from a world where AI has genuinely replaced things that used to require humans — customer service scripts, FAQ pages, basic support triage.

The mistake is assuming community is the same kind of thing.

It isn't.

An AI assistant can summarize your knowledge base. It can answer "how do I reset my password" at a scale and speed no community team can match. For that category of question, yes — AI can answer that, and probably should.

But here's what AI can't do: it can't tell the difference between Pandora Music and Pandora Jewelry.

Ask an LLM a question about Pandora's premium subscription features and there's a real chance it returns information about a Danish jewelry brand. Not because the model is broken — but because without proper trust signals, structured markup, and verified content architecture, the model has no reliable way to know which Pandora you mean.

That's not a hypothetical edge case. That's the actual state of AI-mediated discovery for any brand that shares a name, a category, or a keyword with another entity in the training corpus. And it's a problem that no amount of AI investment fixes on its own.

The fix lives in your community.

What LLMs Actually Look For

Most community teams approaching GEO for the first time assume it works like SEO — optimize the title, add some keywords, earn some backlinks. That mental model is wrong, and it leads to wasted effort.

LLMs don't rank pages. They assess trustworthiness. The signals they weight are fundamentally different from what Google weights, and understanding those signals is the foundational skill of AI-era community management.

Here's what actually matters:

Verified attribution. Who said this? A staff member's answer carries different trust weight than an anonymous reply. A post marked [Accepted Solution] is a different content object than an open thread. A superuser with 500 accepted solutions signals something different than a first-time poster. Make these distinctions explicit in your markup — don't assume the model will infer them from context.
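
As a hedged sketch, here is roughly what making those distinctions explicit could look like using schema.org's QAPage vocabulary. The question, answer text, username, and counts are invented for illustration:

```html
<!-- Illustrative only: a solved thread marked up so the accepted answer
     and its author's staff role are explicit rather than inferred. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "How do I restore my offline stations in the Pandora app?",
    "answerCount": 3,
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Sign out of the app, sign back in, and re-sync your offline stations.",
      "upvoteCount": 42,
      "author": {
        "@type": "Person",
        "name": "JaneM",
        "jobTitle": "Community Moderator"
      }
    }
  }
}
</script>
```

The point is not this exact vocabulary. The point is that the accepted status and the author's role live in the markup, where a model can read them, instead of only in the page's visual styling.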

Structural clarity. LLMs extract meaning from structure. A board called "Pandora Music App — Solved Issues" is less ambiguous than a board called "Help." A thread that leads with the direct answer before supporting detail is more citable than a thread that buries the resolution three pages in. Every taxonomy decision you make is a signal to the model about how to categorize and weight your content.
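
A rough sketch of what answer-first structure might look like in a thread template. The board name echoes the example above; the class names and answer copy are placeholders, not a prescribed schema:

```html
<!-- Illustrative thread layout: the resolution leads, discussion follows. -->
<article>
  <h1>Pandora Music App — Solved Issues</h1>
  <h2>Premium playback stops when the screen locks</h2>

  <section class="accepted-solution">
    <!-- Direct answer first: the span a model is most likely to extract -->
    <p>Exclude the Pandora app from battery optimization in your
       device settings, then restart playback.</p>
  </section>

  <section class="discussion">
    <!-- Supporting replies, confirmations, and edge cases follow -->
  </section>
</article>
```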

Declared context. This is what llms.txt is for. A structured file at your domain root that explicitly tells models: this is the Pandora music streaming service, these are our verified knowledge boards, these are our staff roles, this is the authoritative content. You're not hoping the model figures it out. You're telling it directly.
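
For reference, a minimal example in the shape the llmstxt.org proposal describes (an H1 title, a blockquote summary, then H2 sections of links). The URLs and board names here are placeholders:

```markdown
# Pandora

> Pandora is a music streaming service. This community covers the Pandora
> music app and subscription plans. It is not affiliated with Pandora Jewelry.

## Verified Knowledge

- [Solved Issues — Pandora Music App](https://community.example.com/solved): accepted solutions reviewed by staff
- [How-To Guides](https://community.example.com/guides): articles authored by the community team

## Optional

- [General Discussion](https://community.example.com/discussion): open threads, not staff-verified
```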

The brands that win AI-mediated discovery won't necessarily be the ones with the most content. They'll be the ones whose content is the easiest for a model to trust.

What AI Can't Do — And Why That Matters More Than Ever

Here's the argument leadership needs to hear, stated plainly: AI chat and brand community are not substitutes. They're different layers of the same trust infrastructure.

AI is good at synthesizing documented knowledge at scale. Community is good at producing the undocumented, relational, emotionally contextual knowledge that documentation can never capture.

Some of our most engaged Pandora community members don't come back to ask product questions. They come back to wish our community team members a happy birthday.

An AI response can never replicate that. Not because it can't generate birthday greetings — it absolutely can — but because the member isn't coming for a birthday greeting. They're coming because they have a relationship with a specific human being who has shown up for them consistently over time. That relationship was built by a moderator, not a model.

Human moderation is the line I will not cross in the name of GEO.

Not because it's sentimental. Because it's strategic. A moderator brings something to a community that is structurally unavailable to an AI: institutional memory, emotional context, and specific relationships with specific people. Remove that layer and you don't have a leaner, more efficient community. You have a knowledge base with a comment section.

And a knowledge base with a comment section doesn't get cited. It gets ignored — by users and by models.

What Leadership Should Be Asking

Not "can AI answer that?" — but "what role should AI play in how our community gets discovered, and what should humans own?"

The answer, in brief:

Let AI mediate discovery. Structure your community so models can find it, trust it, and cite it. Invest in llms.txt, semantic markup, and verified knowledge architecture. Make your content machine-legible. Win the citation layer.

Let humans own the relationship. Keep your moderators. Keep your superuser program. Protect the spaces where members come not for answers but for belonging. These are not inefficiencies to be automated away — they are the trust signals that make your community worth citing in the first place.

The model cites the community because humans built something worth trusting. The moment you remove the humans to optimize for the model, you've undermined the very thing the model was citing.

The Three-Line Brief

If you need to bring this to a leadership meeting and you have two minutes:

1. AI and community serve different needs. AI answers known questions at scale. Community surfaces unknown questions, emotional context, and relational knowledge that no knowledge base captures.

2. Community is how AI finds your brand's authoritative voice. Without a structured, verified, machine-legible community corpus, models guess. And sometimes they guess Pandora Jewelry when you meant Pandora Music.

3. The humans are the product. Optimize the structure for machines. Protect the relationships for humans. Automate the taxonomy. Keep the moderators. Do both on purpose.

The brands that get this right won't just have better AI results. They'll have communities that are trusted by both the people who join them and the models that cite them. That's the goal. That's the brief.

Want help bringing this brief to your leadership team?

Open to advising and consulting on GEO strategy, llms.txt implementation, and community platform architecture.

Get in touch →