PKM vs RAG vs Wiki vs Memory Systems Explained Clearly
A map of modern knowledge systems
PKM, RAG, wikis, and AI memory systems are often discussed as if they solve the same problem. They do not. They all deal with knowledge, but they operate at different layers:
- PKM helps humans think.
- Wikis help groups preserve shared knowledge.
- RAG helps machines retrieve external knowledge.
- Memory systems help AI agents persist context over time.
Confusing these systems leads to bad architecture.
You get wikis full of personal scratch notes, RAG systems without a source of truth, memory layers pretending to be databases, and PKM tools overloaded with automation they were never designed to handle.
A better model is to see them as different parts of a knowledge systems spectrum.

This article compares PKM, RAG, wikis, and AI memory systems by structure, retrieval, ownership, evolution, and real-world use cases.
The short version
| System | Primary user | Main purpose | Best for |
|---|---|---|---|
| PKM | Individual | Develop personal knowledge | Thinking, learning, synthesis |
| Wiki | Team or public group | Maintain shared knowledge | Documentation, policies, reference |
| RAG | Machine system | Retrieve context for generation | AI answers over external data |
| AI memory | AI agent | Persist context over time | Long-running agents and personalization |
The most important distinction is this:
PKM and wikis structure knowledge. RAG retrieves knowledge. Memory systems evolve agent context.
That is the core mental model.
Why these systems are confused
They overlap in visible behavior.
All of them can:
- store notes
- retrieve information
- answer questions
- organize references
- connect ideas
But they differ in intent.
A PKM system is not just a private wiki. A wiki is not just a RAG database. A RAG pipeline is not an AI memory. An AI memory system is not a replacement for structured documentation.
The confusion comes from treating “knowledge” as one thing.
In practice, knowledge has multiple layers:
- Capture
- Structure
- Retrieval
- Interpretation
- Reuse
- Evolution
Different systems optimize different stages.
The four paradigms
1. PKM
PKM stands for personal knowledge management.
It is the practice of capturing, organizing, connecting, and using knowledge for personal work.
Typical PKM systems include:
- Obsidian
- Logseq
- Notion
- plain Markdown folders
- Zettelkasten systems
- second brain systems
PKM is human-driven.
The goal is not just storage. The goal is better thinking.
What PKM is good at
PKM works well for:
- learning a new domain
- developing original ideas
- connecting notes over time
- writing articles or books
- tracking personal research
- building a second brain
A good PKM system is messy in a useful way. It supports unfinished thoughts, partial ideas, private context, and evolving concepts.
This is why PKM is not the same as documentation.
Documentation wants clarity. PKM tolerates ambiguity.
PKM failure modes
PKM often fails when it becomes:
- a dumping ground
- a folder taxonomy project
- a productivity aesthetic
- a tool optimization hobby
- a private archive nobody uses
The main risk is collection without synthesis.
If you only save information, you do not have a knowledge system. You have a personal landfill.
Opinionated take
PKM should optimize for reuse, not capture.
Capturing everything feels productive, but it creates debt. The real value appears when notes become connected, rewritten, compressed, and used in output.
2. Wiki
A wiki is a structured knowledge base designed for shared reference.
Typical wiki systems include:
- DokuWiki
- MediaWiki
- Confluence
- BookStack
- Git-based documentation sites
- internal company knowledge bases
A wiki is usually more formal than PKM.
It should answer:
What do we know, and where is the current version?
What wikis are good at
Wikis work well for:
- team documentation
- operational runbooks
- product knowledge
- policy documents
- technical reference
- onboarding material
- stable domain knowledge
A wiki is a social contract.
It says:
This page is the place where this knowledge lives.
That makes ownership and maintenance critical.
Wiki failure modes
Wikis often fail because they become stale.
Common problems:
- no page owners
- outdated screenshots
- duplicate pages
- unclear canonical versions
- too much hierarchy
- no maintenance rhythm
A wiki with old information is worse than no wiki, because it creates false confidence.
Opinionated take
A wiki should be boring.
That is a compliment.
A good wiki is not where ideas are born. It is where stable knowledge is preserved after it becomes useful to others.
3. RAG
RAG stands for retrieval-augmented generation.
It is an AI architecture where a system retrieves relevant external information before asking a language model to generate an answer.
A basic RAG pipeline usually has:
- Documents
- Chunking
- Embeddings or search index
- Retrieval
- Optional reranking
- Prompt assembly
- LLM generation
RAG is machine-driven.
The goal is not to create knowledge. The goal is to give a model relevant context at query time.
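That pipeline can be sketched end to end in a few lines. This is a toy illustration under stated assumptions, not a real framework: token overlap stands in for embeddings, reranking is skipped, and every function name below is made up.

```python
# Toy RAG pipeline: chunk -> score -> retrieve -> assemble prompt.
# Token overlap stands in for real embeddings; all names are illustrative.

def chunk(text, size=40):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Crude relevance: fraction of query terms that appear in the passage."""
    q = set(query.lower().split())
    return len(q & set(passage.lower().split())) / max(len(q), 1)

def retrieve(query, chunks, k=2):
    """Return the top-k chunks by relevance score."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def assemble_prompt(query, context_chunks):
    """Prompt assembly: retrieved context plus the user question."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Deploy the site with the release script after tests pass. "
        "Rollback uses the previous build artifact."]
chunks = [c for d in docs for c in chunk(d)]
question = "How do I deploy the site?"
prompt = assemble_prompt(question, retrieve(question, chunks))
```

A production system would replace `score` with a vector index and add reranking, but the shape of the pipeline is the same.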
What RAG is good at
RAG works well for:
- question answering over documents
- internal search assistants
- support bots
- technical documentation assistants
- compliance lookup
- research over large corpora
- connecting LLMs to updated information
RAG is especially useful when the model cannot or should not memorize the information.
RAG failure modes
RAG often fails when teams treat it as magic search.
Common problems:
- bad chunking
- weak retrieval
- noisy context
- missing metadata
- no source of truth
- stale documents
- weak evaluation
- no human feedback loop
RAG does not fix bad knowledge management.
If the underlying content is fragmented, outdated, or contradictory, the RAG system will surface that mess with confidence.
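One inexpensive defense against stale documents and missing metadata is to carry provenance with every chunk and filter before retrieval. A minimal sketch, assuming illustrative field names (`canonical`, `reviewed`) rather than any particular framework:

```python
# Sketch: chunks carry metadata so stale or non-canonical sources are
# filtered out before any similarity search. Field names are assumptions.
from datetime import date

chunks = [
    {"text": "Deploy with the release script.", "source": "wiki/deploy",
     "canonical": True,  "reviewed": date(2024, 11, 1)},
    {"text": "Old deploy steps using FTP.",     "source": "blog/2019",
     "canonical": False, "reviewed": date(2019, 3, 5)},
]

def eligible(chunk, max_age_days=365, today=date(2025, 1, 1)):
    """Keep only canonical chunks reviewed within the allowed window."""
    age = (today - chunk["reviewed"]).days
    return chunk["canonical"] and age <= max_age_days

index = [c for c in chunks if eligible(c)]  # only the wiki chunk survives
```

The filter runs at index time, so retrieval quality problems become visible as exclusions rather than as confident wrong answers.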
Opinionated take
RAG is not a knowledge strategy.
RAG is an access strategy.
It helps machines access knowledge, but it does not decide what knowledge is valid, maintained, canonical, or useful.
4. AI memory systems
AI memory systems give agents persistent context beyond a single prompt or conversation.
They may store:
- user preferences
- past decisions
- long-term facts
- task history
- summaries
- reflections
- extracted entities
- episodic memories
- semantic memories
Examples and related ideas include:
- MemGPT-style memory tiers
- long-term agent memory
- episodic memory
- semantic memory
- vector memory
- profile memory
- tool state memory
- reflective agents
AI memory is agent-driven.
The goal is continuity.
What AI memory is good at
AI memory systems work well for:
- personal assistants
- long-running coding agents
- research agents
- customer support agents
- tutoring systems
- workflow automation
- persistent companions
- multi-session task execution
Memory matters when the system must behave as if it remembers.
AI memory failure modes
Memory systems are dangerous when unmanaged.
Common problems:
- remembering wrong facts
- storing too much
- privacy risk
- stale preferences
- poor memory ranking
- memory poisoning
- no forgetting mechanism
- confusing memory with truth
A memory system needs governance.
It should answer:
- What should be remembered?
- Who approved it?
- How long should it live?
- When should it be forgotten?
- How is it corrected?
Opinionated take
AI memory is not just long context.
Long context lets a model see more at once. Memory decides what survives across time.
Those are different problems.
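The difference can be made concrete. Below is a minimal memory store that decides what survives between sessions; the capacity limit and salience scores are illustrative assumptions standing in for a real ranking and retention policy.

```python
# Sketch: a memory store that decides what survives across sessions,
# unlike a context window that vanishes after one call.
# Salience values and the eviction rule are illustrative assumptions.
import time

class MemoryStore:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []  # each item: (salience, timestamp, fact)

    def remember(self, fact, salience):
        """Store a fact; evict the least salient when over capacity."""
        self.items.append((salience, time.time(), fact))
        self.items.sort(reverse=True)            # highest salience first
        self.items = self.items[:self.capacity]  # forgetting mechanism

    def recall(self, k=2):
        """Return the k most salient facts for the next session."""
        return [fact for _, _, fact in self.items[:k]]

m = MemoryStore()
m.remember("user prefers minimal dependencies", salience=0.9)
m.remember("weather was rainy on Tuesday", salience=0.1)
m.remember("tests run with `go test ./...`", salience=0.8)
m.remember("project uses Go", salience=0.95)
```

The low-salience weather fact is evicted; the project conventions survive into the next session. A long context window, by contrast, keeps nothing once the call ends.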
Core differences table
| Dimension | PKM | Wiki | RAG | AI memory |
|---|---|---|---|---|
| Primary user | Individual | Team or public group | AI system | AI agent |
| Main function | Thinking | Shared reference | Query-time retrieval | Persistent context |
| Knowledge state | Evolving | Stabilized | Retrieved | Adaptive |
| Structure | Flexible | Explicit | Index-based | Learned or extracted |
| Retrieval style | Human search and linking | Navigation and search | Semantic or hybrid retrieval | Relevance plus salience |
| Ownership | Personal | Page or team owners | System maintainers | Agent or user controlled |
| Time horizon | Long-term personal | Long-term shared | Query time | Multi-session |
| Best output | Insight | Reliable reference | Grounded answer | Continuity |
| Main risk | Hoarding | Staleness | Bad retrieval | Bad memory |
| Good metric | Reuse in thinking | Trust and freshness | Answer quality | Helpful continuity |
Structure vs retrieval vs evolution
The simplest way to understand these systems is to compare what they optimize. The architectural implications of that distinction are explored in depth in Retrieval vs Representation in Knowledge Systems.
PKM optimizes personal evolution
PKM is about how your understanding changes.
You collect material, rewrite it, connect it, and turn it into something useful.
The output is often:
- a better mental model
- a written article
- a decision
- a research direction
- a reusable insight
PKM is not primarily about fast lookup. It is about long-term sensemaking.
Wikis optimize shared structure
Wikis are about stable knowledge.
They ask:
- What is the current answer?
- Who owns it?
- Where should people go?
- What should be updated?
A wiki works when people trust it.
RAG optimizes machine retrieval
RAG is about retrieving the right context at the right time.
It asks:
- What documents are relevant?
- Which chunks should be used?
- How much context fits?
- What should the model cite?
RAG works when retrieval quality is high and the source corpus is trustworthy.
AI memory optimizes continuity
Memory systems are about persistence across sessions.
They ask:
- What should the agent remember?
- What should be forgotten?
- Which memory matters now?
- How should memory change behavior?
Memory works when it improves future behavior without polluting the agent with stale or incorrect context.
When to use PKM
Use PKM when the knowledge is personal, unfinished, or exploratory.
Good scenarios:
- learning distributed systems
- planning articles
- researching LLM architecture
- collecting book notes
- building a second brain
- tracking personal experiments
Use PKM when you are still thinking.
Example
You are learning about RAG evaluation.
You collect:
- articles
- benchmark notes
- diagrams
- implementation ideas
- failures from your own experiments
This belongs in PKM first.
Later, once the knowledge stabilizes, you may publish an article or turn it into documentation.
When to use a wiki
Use a wiki when knowledge must be shared and maintained.
Good scenarios:
- team onboarding
- API documentation
- operational runbooks
- architecture decision records
- product knowledge
- deployment instructions
- support procedures
Use a wiki when others need a reliable answer.
Example
Your team has one correct way to deploy a Hugo site to S3 and CloudFront.
That does not belong only in someone’s private notes.
It belongs in a wiki or documentation system with clear ownership.
When to use RAG
Use RAG when an AI system needs access to external knowledge at query time.
Good scenarios:
- chatbot over documentation
- search assistant over internal docs
- support assistant over help articles
- legal or compliance assistant
- research over large document sets
- developer assistant over code docs
Use RAG when the problem is:
The model needs information that lives outside its weights.
Example
You have hundreds of technical articles and want an assistant to answer questions using them.
RAG is a good fit.
But only if the documents are clean enough to retrieve from.
When to use AI memory
Use AI memory when an agent needs continuity.
Good scenarios:
- coding agents that remember project conventions
- personal assistants that remember preferences
- research agents that continue long investigations
- tutoring agents that remember student progress
- support agents that remember prior interactions
- autonomous agents that track goals
Use memory when the system must improve across time.
Example
A coding agent should remember:
- the project uses Go
- tests run with a specific command
- the user prefers minimal dependencies
- database migrations follow a convention
That is not just retrieval. It is persistent operating context.
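Such operating context can be stored as a small structured record and replayed at the start of every session. A sketch with illustrative field names and values:

```python
# Sketch: persistent operating context for a coding agent, injected at
# the start of every session. The structure and keys are assumptions.
project_memory = {
    "language": "Go",
    "test_command": "go test ./...",
    "dependencies": "prefer minimal dependencies",
    "migrations": "follow the numbered-SQL convention",
}

def session_preamble(memory):
    """Turn remembered conventions into a system-prompt preamble."""
    lines = [f"- {key}: {value}" for key, value in memory.items()]
    return "Project conventions to follow:\n" + "\n".join(lines)
```

The record persists between sessions, so the agent does not have to rediscover the conventions by searching the repository each time.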
How these systems combine
The most useful systems are hybrids.
A mature knowledge architecture might look like this:
- PKM for personal exploration
- Wiki for stable shared knowledge
- RAG for machine access
- AI memory for long-running agent continuity
Each layer has a job.
Pattern 1. PKM to wiki
This is the human knowledge pipeline.
Flow:
- Capture notes privately
- Connect ideas
- Distill insights
- Publish stable knowledge
- Maintain as shared reference
This is how personal research becomes organizational knowledge.
Example
You research self-hosted knowledge tools in Obsidian.
After testing DokuWiki, Nextcloud, and static Markdown systems, you write a stable guide in your site or team wiki.
PKM created the insight. The wiki preserves the result.
Pattern 2. Wiki to RAG
This is the machine access pipeline.
Flow:
- Maintain canonical wiki pages
- Index them
- Retrieve relevant sections
- Generate grounded answers
- Link back to sources
This is one of the cleanest RAG patterns.
The wiki remains the source of truth. RAG becomes the access layer.
Example
A support bot answers questions using a product wiki.
The bot should not replace the wiki. It should cite and route users back to the canonical pages.
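Citation can be enforced mechanically: never return an answer without its canonical page attached. A sketch, with a made-up wiki URL and page title:

```python
# Sketch: a wiki-backed answer that routes users back to canonical
# pages instead of replacing them. Titles and URLs are made up.
def grounded_answer(answer_text, sources):
    """Attach canonical wiki links to every generated answer."""
    citations = "\n".join(f"See: {s['title']} <{s['url']}>" for s in sources)
    return f"{answer_text}\n\n{citations}"

reply = grounded_answer(
    "Deploy by running the release pipeline after tests pass.",
    [{"title": "Deployment runbook", "url": "https://wiki.example.com/deploy"}],
)
```

If retrieval produces no sources, the honest behavior is to refuse rather than answer without a citation.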
Pattern 3. RAG plus memory
This is the agent continuity pipeline.
Flow:
- RAG retrieves external facts
- Memory stores user or task context
- The agent combines both
- Future behavior improves
RAG answers:
What does the knowledge base say?
Memory answers:
What matters about this user, project, or task?
Example
A coding agent uses RAG to retrieve framework docs.
It uses memory to remember that your project avoids ORMs, prefers sqlc, and uses structured logging.
Those are different knowledge types.
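Keeping the two knowledge types in separate prompt sections makes the distinction explicit to the model. A sketch with illustrative inputs and labels:

```python
# Sketch: combining query-time retrieval with persistent memory in one
# prompt. The section labels and inputs are illustrative assumptions.
def build_agent_prompt(question, retrieved_docs, memories):
    """RAG supplies external facts; memory supplies project context."""
    docs = "\n".join(f"[doc] {d}" for d in retrieved_docs)
    mems = "\n".join(f"[memory] {m}" for m in memories)
    return (f"Knowledge base:\n{docs}\n\n"
            f"Persistent context:\n{mems}\n\n"
            f"Question: {question}")

prompt = build_agent_prompt(
    "Add a database query helper",
    ["The framework exposes a Query interface in package db."],
    ["project avoids ORMs", "project prefers sqlc"],
)
```

The `[doc]` lines change with every query; the `[memory]` lines persist across sessions. That asymmetry is the whole point of the pattern.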
Pattern 4. PKM plus AI assistant
This is the hybrid thinking pipeline.
Flow:
- Human captures notes
- AI summarizes and suggests links
- Human edits and validates
- Knowledge becomes more structured
- Some pages graduate to wiki or publication
The AI augments the PKM system, but it should not own the truth.
Example
An AI assistant can suggest connections between notes about RAG, memory systems, and LLM Wiki.
But the human decides which connections are meaningful.
Common architecture mistakes
Mistake 1. Treating RAG as a wiki
RAG is not a knowledge base.
It does not automatically create a canonical structure. It retrieves from whatever exists.
If the source documents are bad, RAG becomes a confident interface to bad knowledge.
Mistake 2. Treating memory as a database
AI memory is selective context, not general storage.
A database stores records. Memory changes behavior.
If you need exact facts, use a database or knowledge base. If you need continuity, use memory.
Mistake 3. Treating PKM as documentation
PKM can be messy.
Documentation should not be.
Private notes can contain half-formed ideas. Shared documentation should contain stable, maintained knowledge.
Mistake 4. Treating a wiki as a thinking tool
A wiki can support thinking, but it is not ideal for early exploration.
If every early thought must become a polished page, people stop writing.
Use PKM for rough thinking. Use wikis for durable knowledge.
Mistake 5. Treating long context as memory
Long context is not memory.
It only helps while the context is present.
Memory persists, selects, updates, and sometimes forgets.
Decision guide
Use this simple decision model.
If the knowledge is private and evolving
Use PKM.
If the knowledge is shared and stable
Use a wiki.
If an AI needs to answer from external documents
Use RAG.
If an agent needs continuity over time
Use memory.
If you need all four
Build a layered system.
Do not force one tool to do every job.
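The guide above compresses into a small lookup. The trait encoding (`owner`, `state`) is an illustrative simplification, not a formal taxonomy:

```python
# Sketch: the decision guide as a lookup table. Trait names are an
# illustrative encoding of the four questions above.
def choose_system(owner, state):
    """owner: 'human' or 'ai'; state: what the knowledge needs to do."""
    table = {
        ("human", "evolving"):   "PKM",
        ("human", "stable"):     "wiki",
        ("ai",    "query-time"): "RAG",
        ("ai",    "continuity"): "memory",
    }
    return table.get((owner, state), "build a layered system")
```

The fallback is deliberate: anything that does not fit one cell cleanly is a sign you need layers, not a bigger tool.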
The knowledge systems spectrum
These systems form a spectrum from human thinking to AI continuity.
| Layer | System | Role |
|---|---|---|
| Human thought | PKM | Explore and synthesize |
| Shared structure | Wiki | Preserve and maintain |
| Machine access | RAG | Retrieve and generate |
| Agent continuity | Memory | Persist and adapt |
The direction matters.
Knowledge often starts as personal thought, becomes shared structure, is indexed for machine retrieval, and then becomes part of persistent agent behavior.
That is the modern knowledge stack.
Where LLM Wiki fits
LLM Wiki-style systems sit between wiki and AI architecture.
They are not classic RAG.
Instead of retrieving chunks only at query time, they attempt to pre-structure knowledge into pages, summaries, entities, and links.
That makes them closer to compiled knowledge systems.
A useful placement:
| System | Position |
|---|---|
| Wiki | Human maintained structured knowledge |
| RAG | Query-time machine retrieval |
| LLM Wiki | Ingest-time machine-structured knowledge |
| Memory | Agent persistent context |
This is why LLM Wiki belongs near knowledge systems architecture, not inside ordinary RAG.
Practical examples
Example 1. Personal technical blog
A technical blogger might use:
- PKM for research notes
- Hugo site as published knowledge
- internal linking as wiki-like structure
- RAG later for site search
- AI memory for writing assistant preferences
This is a strong architecture.
It keeps human judgment at the center while still allowing AI support.
Example 2. Engineering team
An engineering team might use:
- PKM for individual learning
- wiki for standards and runbooks
- RAG assistant for internal docs
- memory for coding agents working inside repositories
The wiki should remain canonical.
The RAG assistant should not invent process. The memory layer should remember project preferences, not replace architecture decisions.
Example 3. AI research workflow
A researcher might use:
- PKM for paper notes
- wiki for stable summaries
- RAG for literature search
- memory for long-running research agents
This works because each layer handles a different time scale.
Security and governance
Knowledge systems become risky when they store sensitive or stale information.
PKM governance
Questions:
- What should stay private?
- What should be published?
- What should be deleted?
Wiki governance
Questions:
- Who owns each page?
- When was it last reviewed?
- What is canonical?
RAG governance
Questions:
- Which sources are indexed?
- Are answers cited?
- How is retrieval evaluated?
- What content is excluded?
Memory governance
Questions:
- What is remembered?
- Can users inspect memory?
- Can users delete memory?
- How are wrong memories corrected?
Memory needs the strictest governance because it can silently influence future behavior.
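Those governance questions map directly onto a record schema. A sketch in which `approved_by`, `ttl_days`, and the helper functions are all assumptions about what a governed store might look like:

```python
# Sketch: a governed memory record. Every entry carries provenance and
# a lifetime; users can inspect or delete it. Field names are assumptions.
from datetime import date, timedelta

memories = [
    {"id": 1, "fact": "user prefers dark mode", "approved_by": "user",
     "created": date(2025, 1, 1), "ttl_days": 90},
]

def inspect(store):
    """Let the user see exactly what is remembered about them."""
    return [(m["id"], m["fact"]) for m in store]

def forget(store, memory_id):
    """User-initiated deletion of a specific memory."""
    return [m for m in store if m["id"] != memory_id]

def expire(store, today):
    """Automatic forgetting once a memory outlives its TTL."""
    return [m for m in store
            if today <= m["created"] + timedelta(days=m["ttl_days"])]
```

Inspection answers "what is remembered", `forget` answers "can it be deleted", and `expire` answers "how long should it live". Correction is just `forget` followed by a new approved entry.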
SEO and content strategy note
If you run a technical site, this distinction is not only architectural. It is also editorial.
You can map content like this:
- PKM pages explain human knowledge practices.
- Wiki pages explain structured knowledge systems.
- RAG pages explain retrieval engineering.
- Memory pages explain persistent AI behavior.
- Architecture pages compare and connect the paradigms.
This gives your site a clean authority mesh instead of a pile of loosely related AI articles.
Final conclusion
PKM, RAG, wikis, and AI memory systems are not competitors.
They are different answers to different questions.
PKM asks:
How do I think better over time?
A wiki asks:
What do we know, and where is the trusted version?
RAG asks:
What external context should the model use right now?
AI memory asks:
What should this agent remember for the future?
Once you separate those questions, the architecture becomes obvious.
Use PKM for thinking. Use wikis for shared truth. Use RAG for retrieval. Use memory for continuity.
The future is not one knowledge system that replaces all others.
The future is layered knowledge architecture. For tools, methods, and self-hosted platforms across the full knowledge management spectrum, the cluster pillar maps the territory.
Sources and further reading
- https://cloud.google.com/use-cases/retrieval-augmented-generation
- https://aws.amazon.com/what-is/retrieval-augmented-generation/
- https://www.ibm.com/think/topics/retrieval-augmented-generation
- https://www.ibm.com/think/topics/knowledge-management
- https://arxiv.org/abs/2310.08560
- https://research.memgpt.ai/
- https://zettelkasten.de/posts/building-a-second-brain-and-zettelkasten/