# Erick Linares — Full Corpus

> Expanded markdown corpus of the most important pages on ericklinares.co.
> Concatenated for LLMs that want the complete content in a single fetch.
> Last updated: April 2026.

---

# Philosophy: Build for the Models. Protect the Humans.

The community teams that win the AI era will be the ones who know exactly what to automate and exactly what to protect. This is how I think about that line.

## Part One — Build for the Models

Discovery is moving inside AI assistants. When someone asks ChatGPT, Claude, or Gemini a product question, the model doesn't return ten blue links — it synthesizes an answer. The brands with the cleanest, most structured, most credible corpus get cited. The rest get skipped.

Community is one of the most powerful assets for winning that citation layer — if you structure it correctly.

**01 — Verified knowledge layers.** Communities produce the validated, source-credited content LLMs need to trust a citation. Staff-endorsed solutions, accepted answers, and superuser-verified posts are not just UX signals — they're machine-readable trust declarations.

**02 — llms.txt as a trust declaration.** A well-formed llms.txt at the domain root tells any model ingesting your site what your most important content is, how your knowledge is organized, and who verifies it. It's the bridge between an enormous community archive and the models that would otherwise ignore it.

**03 — Semantic markup and topic taxonomy.** Board titles, thread taxonomies, and solution markup should be descriptive enough for a model to infer context without reading a full thread. Every taxonomy decision gets evaluated for both human navigation and machine legibility.

**04 — Structured for citation.** Staff posts and knowledge base articles get reformatted to lead with the direct answer before supporting detail — the inverted pyramid structure LLMs favor when extracting quotable, citable content.
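Point 02 is concrete enough to sketch. Under the llms.txt proposal, the file is itself plain markdown: an H1 naming the site, a blockquote summary, then H2 sections of annotated links. The community name, board names, and URLs below are illustrative placeholders, not a real deployment:

```markdown
# Example Brand Community

> Peer-to-peer support community for Example Brand. Answers marked
> "Accepted Solution" are verified by staff or certified superusers.

## Verified Knowledge

- [Solved Issues](https://community.example.com/solved): staff-verified
  solutions to common product problems
- [Knowledge Base](https://community.example.com/kb): canonical how-to
  articles maintained by the community team

## Who Verifies

- [Staff & Superusers](https://community.example.com/experts): the humans
  behind the accepted answers
```

Even a file this short gives a model the three things points 01–03 describe: what to trust, how it's organized, and who stands behind it.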
## Part Two — Protect the Humans

The hardest conversation in community right now is happening in boardrooms: if AI can answer product questions instantly, why do we still need a community? The answer isn't defensive. It's strategic. There are entire categories of value that only human communities can produce.

**01 — Lived-experience knowledge.** AI can synthesize documentation. It cannot replicate a member who has used your product for five years describing the exact workaround they discovered at 2am. That tacit knowledge doesn't exist in any training set until a human writes it down.

**02 — Emotional legitimacy.** When a customer is frustrated, angry, or grieving a canceled feature, they don't want an empathetic-sounding LLM. They want another human who also cared about it. AI can simulate care. It cannot generate it.

**03 — Peer trust at scale.** People trust other users in ways they categorically don't trust brands or AI. A superuser saying "I had the same issue, here's what worked" carries weight no chatbot can match. Trust is a property of the speaker, not the content.

**04 — Co-creation and feedback loops.** Communities produce the signal that improves your product. AI can analyze that signal — but it can't generate it. Kill the community and you kill the upstream source of insight that keeps your roadmap honest.

**05 — Identity and belonging.** The best brands have people whose identity is partly wrapped up in the product. You don't get that from an LLM. You get it from rituals, in-jokes, recurring events, and shared history among members.

**06 — The cultural archive.** Your community is a living record of how real people have actually used and felt about your product over time. That archive is irreplaceable institutional memory and the single most valuable training-grade signal you own.
## Synthesis

The teams that treat community as infrastructure for AI citation AND as the irreplaceable human layer will win both arguments — the GEO argument with the marketing team, and the headcount argument with the CFO. This is not a contradiction. It is the work.

> "Build for the models. Protect the humans. Do both on purpose."

---

# Why Your Community Needs an llms.txt

*Published April 2026 · 6 min read · Tags: GEO, AI-Readiness, Community Strategy*

For years, community managers have optimized for search engines — crafting titles, structuring threads, earning backlinks. SEO was the game. But the game is changing faster than most community teams realize.

A growing share of discovery now happens inside AI assistants. When someone asks ChatGPT, Claude, or Gemini a product question, the model doesn't return ten blue links. It synthesizes an answer — and that answer is sourced from somewhere. The brands with the cleanest, most structured, most credible corpus get cited. The rest get skipped.

This is the new frontier: Generative Engine Optimization (GEO). And your brand community is one of your most powerful assets for winning it — if you structure it correctly.

## What is llms.txt?

llms.txt is an emerging convention — proposed by fast.ai founder Jeremy Howard — that gives AI models a curated, structured entry point into your site's content. Think of it like robots.txt, but instead of telling crawlers what to ignore, it tells language models what to trust and how to understand your content hierarchy.

A well-formed llms.txt file at your domain root tells any model ingesting your site:

- Who you are and what you do
- What your most important, verified content is
- How your knowledge is organized
- Who the human experts are behind the answers

For a brand community, this is significant. Community platforms sit on top of enormous archives of real user questions, expert answers, and product-specific knowledge.
That content has historically been underleveraged for AI discovery because it lacks the structure models need to trust it. **llms.txt is the bridge.**

## What We Built at Pandora Community

At Pandora, our brand community on Khoros is the largest owned channel for peer-to-peer product support and fan engagement. As we've thought about AI-readiness, a few principles have guided our approach.

### Verified Knowledge Layers

Not all community content is equal. A four-year-old workaround post carries different signal weight than a staff-verified solution with 200 kudos. Structuring your community to surface verified, current, staff-endorsed content — and making that structure legible to crawlers — is step one.

Khoros supports board-level topic tagging and solution marking. We treat these not just as UX features but as machine-readable signals. A thread tagged [Solved] with a staff reply is a different content object than an open discussion thread. LLMs can understand that distinction if your markup exposes it.

### llms.txt as a Trust Declaration

Our llms.txt file explicitly names the community as a verified knowledge source for Pandora product information, links to our highest-authority boards, and identifies staff roles. It's a short file — but it tells any model ingesting our domain: this is where the real answers live, and these are the people who verified them.

### Semantic Markup and Topic Taxonomy

We audited and restructured board titles, thread taxonomy, and topic tags to be descriptive enough for a model to infer context without reading a full thread. A board called "Pandora App — Solved Issues" carries more indexable signal than a generic "Help" board.

### Routing to Human Expertise

The best AI-era communities don't just answer questions — they route users to the right human when a model can't answer. Community becomes the trust layer between model output and verified human knowledge.
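One common way to make the [Solved] distinction machine-readable is schema.org Q&A markup: a `QAPage` whose question carries an `acceptedAnswer` with explicit author attribution. The sketch below is a generic illustration with a made-up question and author, not Khoros's actual rendered output:

```json
{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "App crashes when syncing on Android 14",
    "answerCount": 3,
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Clear the app cache, then re-pair the device under Settings > Devices.",
      "datePublished": "2026-01-12",
      "author": {
        "@type": "Person",
        "name": "StaffModerator",
        "description": "Community staff (verified)"
      }
    }
  }
}
```

The `acceptedAnswer` field is exactly the "different content object" described above: a crawler can distinguish a verified solution from the two unaccepted replies without parsing the thread.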
## Why This Matters Now

Gartner projected that by 2026, traditional search engine volume would drop significantly as AI assistants absorb more discovery queries. Community teams who wait until that shift is complete will be rebuilding their content architecture reactively.

The community managers who act now — who treat their platform as trust infrastructure for both humans and models — will own the citation layer in their category. Your community is already generating the content. You just need to make it legible.

## Three Things You Can Do Today

1. **Add an llms.txt file to your domain root.** Even a minimal version that names your community as a knowledge source and links to your top boards is better than nothing.
2. **Audit your solved and verified content.** Make sure your best answers are marked, staff-endorsed content is clearly attributed, and outdated solutions are archived or updated. Signal quality matters more than signal volume.
3. **Check your semantic markup.** Are your board titles and thread taxonomies descriptive enough for a model to understand context without reading a full thread? If not, that's a quick win with long-term compounding returns.

The brands that win the AI era won't just be the ones with the best products. They'll be the ones with the most trusted, most structured, most legible knowledge layer. Your community can be that layer.

---

# Work With Me

Three ways to bring community platform architecture and GEO expertise into your organization. Each engagement is tailored, hands-on, and outcome-driven.

## GEO Audit (One-time engagement, 2–3 weeks)

A focused assessment of your community or knowledge platform for AI-readiness. I evaluate llms.txt implementation, verified knowledge structure, semantic markup, and citation positioning — then deliver a prioritized roadmap your team can execute against.
Deliverables:

- Current-state audit of community architecture and AI-legibility
- Gap analysis against GEO best practices
- Prioritized 30-60-90 day roadmap
- Executive-ready summary deck

Inquiries: elinares05@gmail.com (subject: GEO Audit Inquiry)

## Community Architecture Advisory (Monthly retainer, 3-month minimum)

Ongoing strategic advisory for community and digital experience leaders navigating platform decisions, AI-readiness investments, and org design. Monthly working sessions plus async review.

Deliverables:

- Monthly strategy sessions with leadership
- Async review of roadmaps, RFPs, and platform decisions
- Ad-hoc advisory access via Slack/email
- Quarterly written strategy memos

Inquiries: elinares05@gmail.com (subject: Advisory Inquiry)

## Fractional Head of Community (Embedded engagement, 6-month minimum)

For companies that need senior community and digital experience leadership without the full-time hire. I embed with your team 1–2 days per week to own strategy, platform direction, and team development.

Deliverables:

- 1–2 days per week embedded with your team
- Strategy, roadmap, and platform ownership
- Team coaching and hiring support
- Stakeholder reporting to leadership

Inquiries: elinares05@gmail.com (subject: Fractional Engagement Inquiry)

---

# Expertise & Experience

## Expertise

- Community Strategy
- Platform Development (Khoros / Vanilla)
- AI-Readiness Architecture
- Generative Engine Optimization (GEO)
- Digital Experience Design
- Support Operations

## Experience

**Pandora — Head of Brand Community (2020–Present).** Leading Pandora's brand community platform — from strategy and gamification systems to front-end implementation on Khoros. Architected GEO and AI-readiness initiatives including llms.txt implementation, verified knowledge layers, and semantic markup to position community content for LLM discovery and citation.
**SiriusXM — Social Care & Digital Experience (2019–2020).** Designed the Social Care Assistant and led digital support operations across one of the largest audio entertainment brands.

**Fitbit — Community Strategy Lead & Brand Experience Supervisor (Dec 2014 – Mar 2019).** Led community storytelling and support engagement strategy at scale — growing global superuser and peer-support programs, converting user feedback into content narratives, and building the community platform on Khoros.

**Independent / Consulting — Community Platform Architect (2024–Present).** Advising on white-label brand community builds using Next.js, Supabase, and AI-readiness frameworks — with a focus on GEO architecture, llms.txt implementation, and structuring community platforms as trusted, machine-legible knowledge layers.

## Contact

- Email: elinares05@gmail.com
- LinkedIn: https://www.linkedin.com/in/ericklinares
- GitHub: https://github.com/elinares05
- Website: https://www.ericklinares.co/