When to use this

You want to persist cached conversation sessions into the knowledge graph so the Q&A history becomes part of the searchable graph. This is useful when you want session data to survive cache clearing or to be queryable alongside other graph data. Before you start:
  • Complete Quickstart to understand basic operations
  • Ensure you have LLM Providers configured
  • Have an existing knowledge graph (add → cognify completed)
  • Caching must be enabled and at least one session must exist (created by prior cognee.search() calls with a session_id)

Code in Action

import asyncio
import cognee
from cognee import SearchType
from cognee.memify_pipelines.persist_sessions_in_knowledge_graph import (
    persist_sessions_in_knowledge_graph_pipeline,
)
from cognee.modules.users.methods import get_default_user

async def main():
    await cognee.add(
        ["Alice moved to Paris in 2010. She works as a software engineer."],
        dataset_name="session_demo",
    )
    await cognee.cognify(datasets=["session_demo"])

    # Build session history with searches
    await cognee.search(
        query_type=SearchType.GRAPH_COMPLETION,
        query_text="Where does Alice live?",
        session_id="demo_session",
    )
    await cognee.search(
        query_type=SearchType.GRAPH_COMPLETION,
        query_text="What does she do for work?",
        session_id="demo_session",
    )

    # Persist the session into the graph
    user = await get_default_user()
    await persist_sessions_in_knowledge_graph_pipeline(
        user=user,
        session_ids=["demo_session"],
        dataset="session_demo",
    )

asyncio.run(main())

What Just Happened

  1. Add + Cognify — builds a knowledge graph from your text.
  2. Search with session_id — runs two searches that accumulate Q&A history in the session cache under "demo_session".
  3. get_default_user() — retrieves the authenticated user. This pipeline requires a User object with write access.
  4. persist_sessions_in_knowledge_graph_pipeline(user, session_ids, dataset) — reads the cached session data and writes it into the knowledge graph.

What Changed in Your Graph

  • New nodes are created from the session Q&A history, grouped under the user_sessions_from_cache node set.
  • The session data is processed through cognee.add and cognee.cognify internally, so entities and relationships from the session content become part of the graph.
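This grouping can be pictured as a label attached to each created node; a toy sketch in plain Python (the dict layout is illustrative, not cognee's internal representation):

```python
# Toy graph nodes. In cognee, nodes created by this pipeline carry the
# "user_sessions_from_cache" node set so they can be identified later.
nodes = [
    {"name": "Alice", "node_sets": []},
    {"name": "demo_session Q&A", "node_sets": ["user_sessions_from_cache"]},
]

# Filter for nodes that came from persisted sessions.
session_nodes = [n for n in nodes if "user_sessions_from_cache" in n["node_sets"]]
print([n["name"] for n in session_nodes])
```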

Parameters

  • user (User, required) — authenticated user with write access. Obtain via await get_default_user().
  • session_ids (Optional[List[str]]) — list of session IDs to persist. If None, no sessions are extracted.
  • dataset (str, default: "main_dataset") — the dataset to write session data into.
  • run_in_background (bool, default: False) — run the pipeline asynchronously and return immediately.
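The difference between the two run_in_background modes can be understood through plain asyncio; a minimal sketch with a stand-in coroutine (persist_pipeline here is hypothetical, not the real pipeline):

```python
import asyncio

async def persist_pipeline(session_ids):
    # Stand-in for the real pipeline: pretend to persist each session.
    await asyncio.sleep(0.01)
    return {"persisted": session_ids}

async def main():
    # run_in_background=False: await the result before continuing.
    result = await persist_pipeline(["demo_session"])

    # run_in_background=True: schedule the work and return immediately.
    task = asyncio.create_task(persist_pipeline(["demo_session"]))
    started_in_background = not task.done()  # still running at this point
    await task  # later, wait for completion if needed
    return result, started_in_background

result, started_in_background = asyncio.run(main())
print(result["persisted"], started_in_background)
```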

Pipeline Tasks

Two tasks run in sequence:
  1. extract_user_sessions — reads Q&A data from the SessionManager for the specified session_ids.
  2. cognify_session — calls cognee.add and cognee.cognify on the extracted session data, writing the results into the graph under the user_sessions_from_cache node set.
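The two-task flow can be sketched with in-memory stand-ins (the cache layout and helper names below are illustrative, not cognee's actual implementation):

```python
# Hypothetical in-memory session cache, keyed by session_id.
session_cache = {
    "demo_session": [
        {"question": "Where does Alice live?", "answer": "Alice lives in Paris."},
        {"question": "What does she do for work?", "answer": "She is a software engineer."},
    ],
}

def extract_user_sessions(session_ids):
    # Task 1 (sketch): read cached Q&A turns for the requested sessions.
    return {sid: session_cache.get(sid, []) for sid in session_ids or []}

def cognify_session(sessions):
    # Task 2 (sketch): flatten each session into text that an
    # add/cognify step could ingest, tagged with the node set.
    docs = []
    for sid, turns in sessions.items():
        body = "\n".join(f"Q: {t['question']}\nA: {t['answer']}" for t in turns)
        docs.append({"session_id": sid, "node_set": "user_sessions_from_cache", "text": body})
    return docs

docs = cognify_session(extract_user_sessions(["demo_session"]))
print(docs[0]["node_set"], len(docs))
```

Passing session_ids=None yields an empty extraction, which matches the documented behavior that no sessions are extracted in that case.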

Troubleshooting

  • No sessions found — caching must be enabled, and searches with a session_id must have been run first. See Sessions and Caching.
  • Error: no graph data found — run cognee.add() and cognee.cognify() before calling this pipeline.
  • LLM errors — verify that your LLM provider is configured. See LLM Providers.
  • Permission errors — the user must have write access to the target dataset. See Permissions.

Memify Concept

Understand how memify pipelines work

Sessions Guide

Learn how sessions and caching work in Cognee

Search

Query the enriched graph with specialized search types