
JSON vs NDJSON

JSON and NDJSON (Newline-Delimited JSON) both use JSON syntax, but they have very different data models. A JSON file holds one value — typically an array of objects. An NDJSON file holds one independent JSON object per line. That one structural difference determines which format is appropriate for streaming, logging, and large-dataset scenarios.

What is JSON?

A standard JSON file contains a single JSON value — most commonly an array of objects (e.g. [{...}, {...}, {...}]). The entire file must be parsed at once because the array is a single top-level structure. This makes standard JSON impractical for very large datasets: you must read the entire file into memory before processing any records.
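The whole-document requirement is easy to see in code. A minimal Python sketch (the sample data is illustrative; `json.loads` behaves the same as `json.load` on a file):

```python
import json

# A standard JSON file is a single top-level value -- here, an array.
text = '[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]'

# json.loads parses the entire document at once; no record is
# available until the whole array has been read into memory.
records = json.loads(text)

print(len(records))        # 2
print(records[0]["name"])  # a
```

For a small payload this is exactly what you want; the cost only appears when `text` is gigabytes long.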

JSON is the right format for APIs (request/response bodies), configuration, and datasets that are small enough to load at once. Most databases, tools, and languages have excellent JSON support and expect the standard single-document format.

What is NDJSON?

NDJSON (also called JSONL or JSON Lines) stores one complete JSON object on each line, with no separating commas and no wrapping array. Each line is independently valid JSON. This makes it trivial to stream: you process one line at a time without ever loading the full file into memory.
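Because each line is a complete JSON value, a plain line loop is all the parser you need. A minimal sketch (the log-style records are illustrative; `io.StringIO` stands in for an open file handle):

```python
import io
import json

# NDJSON: one complete JSON object per line, no commas, no wrapping array.
ndjson = '{"id": 1, "level": "info"}\n{"id": 2, "level": "error"}\n'

# Each line parses independently, so the data can be consumed as a
# stream; only one record is in memory at a time.
errors = []
for line in io.StringIO(ndjson):
    record = json.loads(line)
    if record["level"] == "error":
        errors.append(record["id"])

print(errors)  # [2]
```

The same loop works unchanged on a multi-gigabyte file, since nothing ever holds more than one line.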

NDJSON is the natural output format for log aggregators (Fluentd, Logstash), Kafka consumers, database change-data-capture streams, and bulk API endpoints. Elasticsearch's bulk API requires NDJSON. It is also the format used by BigQuery table exports and many data pipeline outputs because it supports append-only writes — you add a new record by appending a line.

JSON vs NDJSON: Key Differences

Feature | JSON | NDJSON
Structure | Single JSON value (usually an array) | One JSON object per line
Memory requirement | Full file must be parsed at once | Can be streamed line by line
File append | Requires rewriting the full array | Append a line; no rewrite needed
Partial read | Not possible without full parse | Read any range of lines
API compatibility | Universal (the default for REST APIs) | Elasticsearch bulk, BigQuery, log systems
Streaming | Poor; not streamable | Excellent; designed for streaming
Human readability | Easier with pretty-print | One object per line; dense but scannable

When to use JSON

  • Sending or receiving API request/response bodies
  • Datasets small enough to fit comfortably in memory
  • Configuration files or structured data documents
  • Tools or systems that expect a standard JSON array

When to use NDJSON

  • Log files and event streams (each event is one line)
  • Large datasets that must be processed record-by-record without full memory load
  • Kafka, Kinesis, or Pub/Sub consumer outputs
  • Elasticsearch bulk indexing, BigQuery streaming inserts
  • Appending records incrementally — no need to rewrite the file
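The last point is worth seeing concretely: adding a record is one write in append mode, with no parse or rewrite of the existing data. A minimal sketch (the file name and `append_event` helper are illustrative):

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "events.ndjson")

def append_event(path, event):
    # One serialized object plus a newline; existing contents are
    # never read, parsed, or rewritten.
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

append_event(path, {"event": "login", "user": "alice"})
append_event(path, {"event": "logout", "user": "alice"})

with open(path) as f:
    print(sum(1 for _ in f))  # 2
```

Doing the same with a standard JSON array would mean parsing the file, appending to the in-memory list, and rewriting the whole document.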

Convert between JSON and NDJSON

Convert files instantly in your browser — no upload, no account, no server.
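The conversion itself is mechanical. A sketch of both directions, assuming the shapes described above (a top-level array for JSON, one object per line for NDJSON):

```python
import json

def json_to_ndjson(text):
    """Convert a JSON array of objects to NDJSON (one object per line)."""
    return "".join(json.dumps(obj) + "\n" for obj in json.loads(text))

def ndjson_to_json(text):
    """Convert NDJSON back to a single JSON array."""
    objs = [json.loads(line) for line in text.splitlines() if line.strip()]
    return json.dumps(objs)

nd = json_to_ndjson('[{"a": 1}, {"a": 2}]')
print(nd)                  # two lines: {"a": 1} then {"a": 2}
print(ndjson_to_json(nd))  # [{"a": 1}, {"a": 2}]
```

Note that `json_to_ndjson` still parses the full array (it has to; the input is one document), so for very large inputs a streaming JSON parser would be needed on that side.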
