
What is the add operation

The .add operation is how you bring content into Cognee. It takes your files, directories, or raw text, normalizes them into plain text, and records them into a dataset that Cognee can later expand into vectors and graphs with Cognify.
  • Ingestion-only: no embeddings, no graph yet
  • Flexible input: raw text, local files, directories, S3 URIs, or any Docling-supported format
  • Normalized storage: everything is turned into text and stored consistently
  • Deduplicated: Cognee uses content hashes to avoid duplicates
  • Dataset-first: everything you add goes into a dataset
    • Datasets are how Cognee keeps different collections organized (e.g. “research-papers”, “customer-reports”)
    • Each dataset has its own ID, owner, and permissions for access control
    • You can read more about them below
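
A minimal first call looks like this (a sketch: the dataset name is illustrative, and the bare await assumes an async context, as in the examples further down this page):

import cognee

# Raw text goes straight in; the dataset is created if it doesn’t exist
await cognee.add(
    "Knowledge graphs connect entities extracted from text.",
    dataset_name="research-papers",
)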

Where add fits

  • First step before you run Cognify
  • Use it to create a dataset from scratch, or append new data over time (see the sketch below)
  • Ideal for both local experiments and programmatic ingestion from storage (e.g. S3)
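
For example, building a dataset incrementally is just repeated calls (a sketch; the directory paths are placeholders):

import cognee

# Create the dataset with an initial batch of files …
await cognee.add("/papers/2023/", dataset_name="research-papers")

# … then append more later; content hashing skips anything already ingested
await cognee.add("/papers/2024/", dataset_name="research-papers")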

What happens under the hood

  1. Expand your input
    • Directories are walked, S3 paths are expanded, raw text is passed through
    • Result: a flat list of items (files, text, handles)
  2. Ingest and register
    • Files are saved into Cognee’s storage and converted to text
    • Cognee computes a stable content hash to prevent duplicates
    • Each item becomes a record in the database and is attached to your dataset
    • Text extraction: Converts various file formats into plain text
    • Metadata preservation: Keeps file-system metadata like name, extension, MIME type, file size, and content hash — not arbitrary user-defined fields
    • Content normalization: Ensures consistent text encoding and formatting
  3. Return a summary
    • You get a pipeline run info object that tells you where everything went and which dataset is ready for the next step
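
Concretely (a sketch: we simply print the returned object rather than assume its exact fields):

import cognee

run_info = await cognee.add("Some text to ingest.", dataset_name="notes")

# The returned pipeline run info summarizes where the data went
print(run_info)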

After add finishes

After .add completes, your data is ready for the next stage:
  • Files are safely stored in Cognee’s storage system with metadata preserved
  • Database records track each ingested item and link it to your dataset
  • Dataset is prepared for transformation with Cognify — which will chunk, embed, and connect everything
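
The handoff to the next stage looks like this (a sketch; it assumes cognify accepts the dataset name via its datasets parameter, as described in the Cognify docs):

import cognee

# Step 1 — ingestion only: no embeddings, no graph yet
await cognee.add("Text about graph databases.", dataset_name="demo")

# Step 2 — expand the dataset into chunks, embeddings, and a graph
await cognee.cognify(datasets=["demo"])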

Further details

  • Mix and match inputs in one call (see the example below): ["some text", "/path/to/file.pdf", "s3://bucket/data.csv"]
  • Works with directories (recursively), S3 prefixes, and file handles
  • Local and cloud sources are normalized into the same format
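
For example, that mixed list can go into a single call (a sketch; the S3 input assumes your storage credentials are configured):

import cognee

# Heterogeneous sources, one dataset
await cognee.add(
    ["some text", "/path/to/file.pdf", "s3://bucket/data.csv"],
    dataset_name="mixed-sources",
)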

Cognee also integrates with dlt to ingest structured relational data directly into the knowledge graph (see the sketch after this list):
  • dlt resources: Pass @dlt.resource() decorated generators directly to cognee.add()
  • CSV files: .csv files are auto-detected and ingested via dlt
  • Database connections: Pass a connection string (postgresql://..., sqlite:///...) to ingest tables directly
  • Foreign key relationships become graph edges automatically
  • Structured data bypasses LLM extraction — the graph is built deterministically from the schema
  • See the full dlt integration guide for details
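
A minimal sketch of the dlt path (the resource below is a toy generator; see the dlt integration guide for real tables and connection strings):

import dlt
import cognee

# A dlt resource yielding rows as dicts, as any dlt resource does
@dlt.resource(name="customers")
def customers():
    yield {"id": 1, "name": "Alice"}
    yield {"id": 2, "name": "Bob"}

# Pass the decorated resource directly to add, per the integration above
await cognee.add(customers, dataset_name="crm")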

Supported formats

  • Text: .txt, .md, .csv, .json, …
  • PDF: .pdf
  • Images: common formats like .png, .jpg, .gif, .webp, …
  • Audio: .mp3, .wav, .flac, …
  • Office docs: .docx, .pptx, .xlsx, …
  • Docling: Cognee can also ingest the DoclingDocument format; any format Docling supports as input can be converted and then passed on to Cognee’s add
  • Cognee automatically selects the best loader for each format. You can learn more about how this works in the Loaders section.

Datasets

  • A dataset is your “knowledge base” — a grouping of related data that makes sense together
  • Datasets are first-class objects in Cognee’s database with their own ID, name, owner, and permissions
  • They provide scope: .add writes into a dataset, Cognify processes per-dataset
  • Think of them as separate shelves in your library — e.g., a “research-papers” dataset and a “customer-reports” dataset
  • If you name a dataset that doesn’t exist, Cognee creates it for you; if you don’t specify, a default one is used
  • More detail: Datasets
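
Both behaviors in one sketch:

import cognee

# No dataset_name: the default dataset is used
await cognee.add("A quick note.")

# Named dataset: created automatically on first use
await cognee.add("A paper abstract …", dataset_name="research-papers")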

Users and ownership

  • Every dataset and data item belongs to a user
  • If you don’t pass a user, Cognee creates/uses a default one
  • Ownership controls who can later read, write, or share that dataset

NodeSets

  • Optional labels to group or tag data on ingestion
  • Example: node_set=["AI", "FinTech"]
  • Useful later when you want to focus on subgraphs
  • More detail: NodeSets

Attaching metadata

cognee.add() automatically preserves only file-system metadata: name, MIME type, extension, and content hash. If you need to associate extra information with ingested data, three mechanisms are available:

1. node_set — categorical tags applied to a whole batch

Pass a list of string tags to mark every item in that add() call:
import cognee

await cognee.add(
    "Quarterly earnings report Q4 2024.",
    node_set=["finance", "Q4-2024"]
)
Tags flow into the knowledge graph as NodeSet nodes connected with belongs_to_set edges, and can be used to scope searches later — see NodeSets.

2. DataItem — per-item string label

Wrap individual data items in DataItem to attach a single string label to each one. The label is stored in the relational database alongside the ingested record.
import cognee
from cognee.tasks.ingestion.data_item import DataItem

await cognee.add(
    DataItem(data="/path/to/report.pdf", label="q4-earnings-report")
)

# Mix labelled and plain items in one call
await cognee.add([
    DataItem(data="Contract text …", label="contract-2024"),
    DataItem(data="Meeting notes …", label="meeting-2024-03"),
])
3. dataset_name — logical grouping

Separate collections of data into named datasets to keep different knowledge domains apart:
await cognee.add("Legal contract text.", dataset_name="legal-docs")
await cognee.add("Product spec text.",   dataset_name="product-specs")

Limitation: arbitrary key-value metadata (e.g. {"source": "CRM", "author": "Alice"}) cannot currently be attached via add(). If rich metadata is important for your use case, consider encoding it as part of the text content itself, or combine datasets (via dataset_name) and NodeSets (via node_set) to represent the dimensions you care about.
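
A sketch of that workaround, folding the would-be metadata into the text and tags (the field names are illustrative):

import cognee

# Encode “metadata” in the content itself …
content = "source: CRM\nauthor: Alice\n\nCustomer call summary: …"

# … and use dataset_name + node_set for the dimensions you filter on
await cognee.add(content, dataset_name="crm-notes", node_set=["CRM"])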

Next steps

  • Cognify: expand data into chunks, embeddings, and graphs
  • DataPoints: the units you’ll see after Cognify
  • Building Blocks: learn about the Tasks and Pipelines behind Add