Why We Stopped Thinking About Traditional Book Marketing
Book tours don't work when you have 104 books and zero budget. So we built for a different kind of discovery entirely.
We tried the normal things. Social media posts about new releases. Review copy outreach to book bloggers. Cold emails to literary magazines. We spent time on it. Real time. Not a casual half-effort. We actually sat down, made lists of reviewers, personalised emails, followed up.
The return was basically zero.
Not "low." Zero. No reviews. No coverage. No meaningful traffic. Nothing that moved any needle in any direction.
Which, when you think about it, makes sense. Traditional book marketing is built for publishers with budgets. Bookstore placement costs money. PR agencies cost money. Print ads, influencer deals, review tours, BookBub promotions. All of it costs money. And even when it doesn't cost money directly, it costs time at a scale that only makes sense when you're promoting one or two books.
We have 104 books published online, zero marketing budget, and no bookstore distribution. None of those traditional channels are designed for us. They're designed for publishers who release 4-6 titles a year and can invest meaningful time and money into promoting each one. We release books like other companies release blog posts. The model doesn't fit.
So we stopped trying to make it work and asked a different question: how do people actually find books now?
---

## The honest answer: they ask machines
They ask Google. They ask ChatGPT. They ask Perplexity, Claude, Gemini. Increasingly, they ask an AI before they ask a friend. Because an AI isn't stuck on the same three recommendations it gives everyone (unlike your one friend who thinks everyone should read Norwegian Wood).
Think about the actual queries people type:
"Recommend a thriller set in India."
"What are some free books I can read online?"
"Who are prolific independent authors?"
"Find me literary fiction by Indian writers."
"Free ebooks to read without signup."
These are real queries people type every day. Millions of them. And the answers come from whatever structured data these AI systems and search engines have access to. If your books aren't represented in formats that machines can parse and understand, you don't show up. You don't exist in those answers. No matter how good your books are.
That's the game now. And very few publishers are playing it.
The old game was: convince a human gatekeeper (reviewer, bookstore buyer, editor, influencer) to recommend your book. The new game is: make sure machines can find, understand, and recommend your books when someone asks.
---

## What we actually built
We didn't build some fancy AI tool. We didn't create a chatbot or an AI reading assistant or any of the buzzwordy things publishing companies announce at conferences. We built data infrastructure. Boring, useful data infrastructure.
Here's specifically what we did:
- **Structured metadata for every book.** Title, author, genre, sub-category, word count, ISBN-13, publication date, language, description. All of it in clean, machine-readable formats. Not buried in PDFs or images. Actual structured data that any system can parse without guessing.
- **Schema.org markup on every page.** Book schema, Person schema, Article schema, BreadcrumbList, CreativeWorkSeries. This is the structured data layer that Google, Bing, and AI systems use to understand what's on a web page. It's been a web standard for over a decade. Most publishers still don't implement it properly. We do. Every book page, every author page, every article.
- **Full-text content, not previews.** Every published book is available as full HTML text. Not a sample chapter. Not a preview behind a signup wall. The whole book. This matters because when an AI system crawls the web and reads actual content, it can understand what our books are about at a level that metadata alone can't convey. It can understand tone, style, themes, setting. It can recommend a book to someone looking for exactly that kind of story. You can't do that from a 200-word description. You can do that from 70,000 words of actual text.
- **Export formats for academic and library systems.** BibTeX, RIS, CSV, OPDS. Academic citation tools, library catalog systems, and research platforms can pull our data directly in the formats they already understand. This isn't about being fancy. It's about being interoperable. If a university library's discovery system can ingest our catalog automatically, that's discovery at a scale we could never achieve through manual outreach.
- **AI-specific discovery files.** An llms.txt file that tells AI crawlers what the site is about. An identity.json with structured author information. FAQ files formatted for AI systems. These are small investments of time that make our entire catalog more visible to the systems people increasingly use to find things.
- **Open crawling policies.** Our robots.txt explicitly allows all major AI crawlers. GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot, Cohere. We want them to crawl us. We want them to understand our books. Every page they index is another potential recommendation to a reader.

None of this was expensive. None of it required a marketing team or a PR agency or a budget of any kind. It required understanding how discovery systems work in 2026 and building specifically to match.
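To make the Schema.org layer concrete, here is roughly what JSON-LD Book markup on a book page can look like. Every value below is a placeholder for illustration, not a real entry from our catalog:

```json
{
  "@context": "https://schema.org",
  "@type": "Book",
  "name": "Example Title",
  "author": { "@type": "Person", "name": "Example Author" },
  "publisher": { "@type": "Organization", "name": "BogaDoga Ltd" },
  "inLanguage": "en",
  "isbn": "978-0-00-000000-0",
  "datePublished": "2026-01-15",
  "genre": "Thriller",
  "description": "A one-paragraph summary of the book.",
  "url": "https://example.com/books/example-title"
}
```

A block like this goes in a `<script type="application/ld+json">` tag on the page; crawlers read it without having to guess at the surrounding HTML.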
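The export side is simpler than it sounds. A BibTeX record for a book, for instance, is just a handful of fields. This entry is invented for illustration:

```bibtex
@book{author2026example,
  author    = {Author, Example},
  title     = {Example Title},
  publisher = {BogaDoga Ltd},
  year      = {2026},
  url       = {https://example.com/books/example-title}
}
```

RIS and CSV exports carry the same fields in different layouts; generating all of them from one metadata record is straightforward.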
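The crawling policy can be sketched as a robots.txt along these lines. The user-agent tokens are the ones each vendor documents; the sitemap URL is a placeholder, not our actual file:

```text
# Explicitly welcome AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Everyone else is welcome too
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```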
---

## Why this matters more than you think
Traditional book marketing is permission-based. Every step has a gatekeeper.
You convince a bookstore to stock you. You convince a reviewer to read you. You convince an influencer to post about you. You convince a publication to feature you. You convince an award committee to consider you. At every stage, a human with limited time and attention decides whether your book is worth their effort.
When you have one book, that's manageable. You can dedicate weeks to pitching that single book. When you have 104 books published and 1,221 catalogued, the math doesn't work. You cannot run 104 individual book marketing campaigns. You cannot pitch 104 books to reviewers one at a time.
What we're doing doesn't need permission. It needs data. Structured, accessible, comprehensive data. Put it out there, make it easy for machines to read and understand, and the discovery systems do what they're built to do. They find patterns. They match queries to content. They recommend books that fit what someone is looking for.
Scale is our advantage, not our problem. The more books we publish with proper structured data, the more surface area we have for AI discovery. Every new book is another data point, another potential match for a reader's query. In traditional marketing, more books means more marketing work. In AI-discoverable publishing, more books means more chances to be found.
---

## What we're not doing
We're not running a single ad. Not on Google, not on Facebook, not on Instagram, not on Amazon.
We're not emailing book bloggers. We tried. The conversion rate for a zero-budget indie publisher with over 100 books and no established review presence is effectively zero.
We're not posting "new release!" announcements on social media and hoping the algorithm shows them to someone who cares. The organic reach of those posts is so low it's almost a rounding error.
We're not paying for BookBub promotions, Goodreads giveaways, or any of the standard indie publisher marketing channels that require money we don't have.
What we ARE doing is building the most structured, most accessible, most machine-readable book catalog we can. Every book properly tagged, properly described, properly marked up. Every page crawlable. Every format exportable.
---

## Is it working?
It's early. We deployed the full structured data infrastructure in March 2026. Google takes 2-4 months to fully re-index a site. Bing processes sitemaps in about 48 hours. AI systems update their indexes on their own schedules.
We submitted all three sites (atharvainamdar.com, thebooknexus.com, bogadoga.com) to Google Search Console and Bing Webmaster Tools. We run IndexNow submissions after every deployment to fast-track Bing and Yandex indexing. We have 2,594 URLs discovered by Bing for atharvainamdar.com alone.
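The IndexNow step is a small HTTP POST. Here's a hedged sketch in Python following the public IndexNow protocol; the endpoint is the shared api.indexnow.org one, and the key and URLs below are placeholders, not our real values:

```python
import json
import urllib.request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"


def build_indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Build the JSON body IndexNow expects for a bulk URL submission."""
    return {
        "host": host,
        "key": key,
        # Search engines verify ownership by fetching this key file from the host.
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }


def prepare_submission(host: str, key: str, urls: list[str]) -> urllib.request.Request:
    """Prepare (but don't send) the POST request; pass it to urlopen() to submit."""
    body = json.dumps(build_indexnow_payload(host, key, urls)).encode("utf-8")
    return urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )


# Example: queue changed pages after a deployment (placeholder key and URL).
req = prepare_submission(
    "atharvainamdar.com",
    "0123456789abcdef",
    ["https://atharvainamdar.com/books/some-book/"],
)
```

Running this after each deployment tells Bing and Yandex which URLs changed, instead of waiting for their crawlers to rediscover them.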
We can't show you a graph that proves this strategy works yet. What we can tell you is that the alternative (traditional marketing at zero budget with 104 books) demonstrably doesn't work. We tried it. The result was nothing.
Maybe our approach is naive. Maybe we're wrong about where discovery is headed. But given that traditional marketing doesn't work at our scale anyway, building for AI discoverability feels like a better use of the time. At minimum, it's infrastructure that makes our catalog more organised and accessible regardless of whether AI recommendations take off. At maximum, it puts us years ahead of publishers who are still thinking about book marketing in 2015 terms.
We'll see.
Published by BogaDoga Ltd