The official meilisearch-importer is a high-performance CLI tool for bulk importing large datasets into Meilisearch. It handles millions of documents with automatic retry logic and progress tracking.

Features

  • Import CSV, NDJSON, and JSON (array of objects) files
  • Handle datasets from thousands to 40+ million documents
  • Automatic retry logic for failed batches
  • Real-time progress tracking with ETA
  • Configurable batch sizes for performance tuning

Prerequisites

  • A Meilisearch instance (Cloud or self-hosted)
  • One of:
    • Rust/Cargo installed (for building from source)
    • Pre-built binary from releases

Installation

cargo install meilisearch-importer

Basic usage

Set your environment variables:
export MEILISEARCH_URL="https://your-instance.meilisearch.io"
export MEILISEARCH_API_KEY="your_api_key"
Then import a CSV file:
meilisearch-importer \
  --url "${MEILISEARCH_URL}" \
  --api-key "${MEILISEARCH_API_KEY}" \
  --index movies \
  --file movies.csv

Supported formats

CSV

meilisearch-importer --index products --file products.csv
CSV files must have a header row. The importer automatically detects column types.
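Since the importer requires a header row, it can be worth peeking at the first line before starting a long run. A minimal check with standard shell tools (using the products.csv example from this guide):

```shell
# Print the header row of the CSV, one column name per line.
head -n 1 products.csv | tr ',' '\n'
```

Empty or unexpected output here usually means the file is missing its header.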

NDJSON (Newline-delimited JSON)

meilisearch-importer --index products --file products.ndjson
Each line must be a valid JSON object:
{"id": 1, "title": "Product A", "price": 29.99}
{"id": 2, "title": "Product B", "price": 39.99}
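One way to catch malformed lines before a long import is to parse the whole file with jq (assuming it is installed); `jq empty` reads every value and exits non-zero on the first syntax error:

```shell
# Validate that every line of the NDJSON file parses as JSON.
jq empty products.ndjson && echo "all lines are valid JSON"
```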

JSON array

meilisearch-importer --index products --file products.json
File must contain an array of objects:
[
  {"id": 1, "title": "Product A", "price": 29.99},
  {"id": 2, "title": "Product B", "price": 39.99}
]
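If your data starts out as a JSON array but the file is very large, converting it to NDJSON first can be convenient, since NDJSON can be processed line by line. A sketch using jq (assuming it is installed):

```shell
# Emit each element of the top-level array as one compact JSON object per line.
jq -c '.[]' products.json > products.ndjson
```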

Configuration options

Option         Description           Default
--url          Meilisearch URL       http://localhost:7700
--api-key      Meilisearch API key   None
--index        Target index name     Required
--file         Input file path       Required
--batch-size   Documents per batch   1000
--primary-key  Primary key field     Auto-detected

Performance tuning

Batch size

Adjust batch size based on your document size and network:
# Smaller documents: larger batches
meilisearch-importer --index logs --file logs.ndjson --batch-size 5000

# Larger documents: smaller batches
meilisearch-importer --index articles --file articles.json --batch-size 100
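As a rough starting point for choosing a batch size, you can estimate the average document size in an NDJSON file with awk; the threshold between "smaller" and "larger" documents is a judgment call, not a rule from this tool:

```shell
# Average bytes per document (line length plus newline), truncated to an integer.
awk '{ total += length($0) + 1 } END { printf "%d\n", total / NR }' products.ndjson
```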

Primary key

Specify the primary key if auto-detection fails:
meilisearch-importer --index products --file products.csv --primary-key product_id
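Before overriding the primary key, it can help to confirm the candidate column is actually unique. A sketch with standard tools, assuming the key is the first CSV column:

```shell
# Print any duplicated values in the first column (skipping the header row).
# Empty output means the column is unique and safe to use as a primary key.
cut -d',' -f1 products.csv | tail -n +2 | sort | uniq -d
```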

Example: Import a large dataset

Import 10 million products with progress tracking:
meilisearch-importer \
  --url "https://ms-xxx.meilisearch.io" \
  --api-key "your_master_key" \
  --index products \
  --file products.ndjson \
  --batch-size 2000
Output:
Importing products.ndjson to index 'products'...
[████████████████████░░░░░░░░░░░░░░░░░░░░] 52% (5.2M/10M) ETA: 12m 34s

After import

Verify your import:
curl "${MEILISEARCH_URL}/indexes/products/stats" \
  -H "Authorization: Bearer ${MEILISEARCH_API_KEY}"
Test a search:
curl "${MEILISEARCH_URL}/indexes/products/search" \
  -H "Authorization: Bearer ${MEILISEARCH_API_KEY}" \
  -d '{"q": "test"}'

Next steps

  • Configure settings: set up searchable and filterable attributes
  • Debug performance: identify and fix indexing bottlenecks
