
WisPaper
Impression Growth: tens of times
Avg. Search Ranking: 11.1
AI Mention Rate: 50%+
01
Client Background
WisPaper is an AI academic search engine incubated by Fudan University's NLP team. It serves global researchers, university faculty and students, and enterprise R&D teams with natural-language paper search, literature library management, and AI frontier tracking. Its core Deep Search technology uses an agent model to semantically verify candidate papers, achieving far higher accuracy than conventional keyword search. Initially reliant on word of mouth in the academic community, the product had virtually no international organic traffic or AI-channel exposure.
02
Goals & Challenges
Project Goals
The client aimed to rapidly build an organic search traffic foundation in the international academic tool market while establishing a stable presence in ChatGPT responses to high-frequency academic queries, converting AI mention rate into a sustained source of overseas registered users.
Core Challenges
International competitors (Semantic Scholar, Elicit, Consensus, etc.) had already accumulated significant exposure in researcher communities and AI training data. WisPaper was in a cold-start phase for both Google indexing and AI citation ecosystems. The target audience of professional researchers demands high content authority, and the product's vertical focus means keyword demand is fragmented across disciplines.
03
Strategy & Execution
Core Insight
Researchers and students ask ChatGPT questions like "Is there a smarter paper search tool than Google Scholar?" and "What's the best AI tool for literature reviews?" The AI's answers directly shape users' first choice of tool. WisPaper's technical advantages are natural content material, but they must be systematically converted into AI-citable structured content to enter AI recommendations.
GEO Execution
Analyzed how major AI assistants respond to academic-tool recommendation questions and targeted the high-frequency scenarios. Produced WisPaper technical analyses, in-depth competitor comparisons, and typical use-case content, distributed through academic communities.
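The "AI mention rate" metric tracked throughout this case can be estimated by sampling assistant answers to the target prompts and counting the share that name the brand. A minimal sketch, assuming the answers have already been collected (the sample answers below are illustrative, not real assistant output):

```python
import re

def mention_rate(answers, brand="WisPaper"):
    """Share of sampled answers that mention the brand at least once
    (case-insensitive substring match)."""
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    hits = sum(1 for a in answers if pattern.search(a))
    return hits / len(answers) if answers else 0.0

# Illustrative dummy sample of assistant answers
answers = [
    "For literature reviews, try Semantic Scholar, Elicit, or WisPaper.",
    "Google Scholar remains the default; Consensus is a newer option.",
    "WisPaper's Deep Search verifies papers semantically before ranking them.",
    "Elicit and Consensus both summarize findings across papers.",
]
print(f"{mention_rate(answers):.0%}")  # 2 of 4 sample answers mention the brand -> 50%
```

In practice the sample would span many prompts and repeated runs per assistant, since answers vary between generations.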
FAQ Matrix Mass Production (Key Highlight)
Based on systematic analysis of global researchers' real search behaviors, produced thousands of structured, high-quality FAQ pages covering paper search methods, literature review processes, and AI academic assistant tips. Each FAQ precisely matches a user intent with an authoritative answer designed for AI RAG retrieval preferences, enabling dual SEO and GEO hits at scale.
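One common way to make FAQ pages machine-parseable for both search engines and RAG pipelines is schema.org FAQPage JSON-LD markup. The case study does not specify WisPaper's implementation; the sketch below shows the general pattern with an illustrative question/answer pair:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage object from (question, answer) pairs,
    ready to be serialized into a page's JSON-LD script tag."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Illustrative FAQ entry
page = faq_jsonld([
    ("How do I search papers with natural language?",
     "Describe your research question in plain language; the engine "
     "semantically matches and verifies candidate papers."),
])
print(json.dumps(page, indent=2))
```

Generating these objects from a template at scale is what allows thousands of intent-matched pages to share one consistent, retrievable structure.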
SEO Execution
Completed comprehensive technical SEO audit, created high-quality content pages covering academic tool selection, literature review methods, and AI-assisted research topics, and systematically built authoritative backlinks.
Execution Timeline
Approximately 6 months: the first three months focused on FAQ matrix mass production and technical SEO optimization; month four onward saw significant impression growth; months five and six entered a rapid growth phase.
04
Project Results
| Metric | Change |
|---|---|
| Google Organic Search Impressions | Tens of times growth, entering sustained high-plateau |
| Google Organic Search Clicks | Multi-fold growth, continuous upward trajectory |
| Average Search Ranking | Stable on first page (avg. rank 11.1), core keywords entering top 10 |
| AI Assistant Brand Mention Rate | Exceeded 50%, joining mainstream recommendations for academic search tools |
| FAQ Content Matrix Scale | Thousands of structured pages building a lasting content moat |
Data source: Google Search Console