JavaScript CSV parsing is well-served by mature libraries. papaparse handles 90% of cases with a one-line API. csv-parse is more flexible but more verbose. The API option exists for when you don't want to ship a parsing library at all (small bundles, edge runtimes, or unpredictable input formats).

Method 1: papaparse (works in browser and Node)

papaparse is the most popular CSV parser in the JS ecosystem. It auto-detects delimiters, handles quoted fields, supports streaming, and runs in both browsers and Node.

npm install papaparse

import Papa from "papaparse";
import fs from "node:fs";

function csvToJson(csvPath, jsonPath) {
  const csvText = fs.readFileSync(csvPath, "utf8");
  const result = Papa.parse(csvText, {
    header: true,         // first row becomes object keys
    dynamicTyping: true,  // numbers become numbers, true/false become booleans
    skipEmptyLines: true,
  });
  fs.writeFileSync(jsonPath, JSON.stringify(result.data, null, 2));
}

csvToJson("users.csv", "users.json");

For browser use with a file input:

document.querySelector("input[type=file]").addEventListener("change", (e) => {
  Papa.parse(e.target.files[0], {
    header: true,
    dynamicTyping: true,
    complete: (result) => {
      console.log(result.data);  // array of objects
      const blob = new Blob([JSON.stringify(result.data, null, 2)], { type: "application/json" });
      // trigger download:
      const url = URL.createObjectURL(blob);
      const a = Object.assign(document.createElement("a"), { href: url, download: "out.json" });
      a.click();
      URL.revokeObjectURL(url);  // free the blob URL once the download has started
    },
  });
});

papaparse handles quoted fields with embedded commas, newlines inside cells, and Windows/Mac/Unix line endings transparently. dynamicTyping coerces numeric strings to numbers — disable it if you need to preserve leading zeros.
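When only some columns need to stay strings, papaparse also accepts dynamicTyping as a per-field function (or an object keyed by column) rather than a blanket boolean. A minimal sketch with hypothetical column names:

import fs from "node:fs";
import Papa from "papaparse";

const csvText = fs.readFileSync("users.csv", "utf8");

// Columns that must stay strings; these names are hypothetical.
const STRING_COLUMNS = new Set(["zip_code", "phone", "account_id"]);

const result = Papa.parse(csvText, {
  header: true,
  skipEmptyLines: true,
  // papaparse invokes this once per column name; returning false
  // leaves that column's values as strings
  dynamicTyping: (field) => !STRING_COLUMNS.has(field),
});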

Method 2: csv-parse (Node, streaming for huge files)

csv-parse from csv.js.org is more flexible than papaparse and supports true streaming — useful when CSVs are too big to fit in memory.

npm install csv-parse

import { parse } from "csv-parse";
import fs from "node:fs";

function csvToJson(csvPath, jsonPath) {
  return new Promise((resolve, reject) => {
    const records = [];
    const parser = fs
      .createReadStream(csvPath)
      .pipe(parse({
        columns: true,          // first row becomes object keys
        cast: true,             // type inference
        skip_empty_lines: true,
      }));

    parser.on("data", (record) => records.push(record));
    parser.on("end", () => {
      fs.writeFileSync(jsonPath, JSON.stringify(records, null, 2));
      resolve();
    });
    parser.on("error", reject);  // reject so callers can catch failures
  });
}

await csvToJson("big_users.csv", "big_users.json");

For files larger than RAM, write each record to JSONL incrementally instead of buffering:

const out = fs.createWriteStream(jsonlPath);
parser.on("data", (record) => out.write(JSON.stringify(record) + "\n"));
parser.on("end", () => out.end());

Method 3: ChangeThisFile API (no library, handles messy input)

If you don't want to ship a parser at all, or if your inputs are unpredictable (third-party CSV exports, customer uploads), the API handles it. Free tier gives 1,000 conversions/month.

import fs from "node:fs";

// Node 18+ ships fetch, FormData, and Blob as globals, so no extra dependencies are needed.
const API_KEY = "sk_test_your_key_here"; // read this from an environment variable in production

async function csvToJson(csvPath, jsonPath) {
  const buffer = fs.readFileSync(csvPath);
  const form = new FormData();
  form.append("file", new Blob([buffer]), "input.csv");
  form.append("source", "csv");
  form.append("target", "json");

  const response = await fetch("https://changethisfile.com/v1/convert", {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}` },
    body: form,
  });

  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  fs.writeFileSync(jsonPath, await response.text());
}

await csvToJson("messy_export.csv", "clean.json");

The API auto-detects delimiters (comma, semicolon, tab, pipe), handles multiple encodings (UTF-8, Windows-1252, Latin-1), and produces clean JSON even when input rows are inconsistent.
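Because this path adds a network dependency, transient failures (rate limits, gateway hiccups) are worth retrying. A minimal sketch; the set of retryable status codes is an assumption, not documented API behavior:

// Retry transient failures with exponential backoff. Which status
// codes count as retryable is an assumption about the API; adjust
// to match the real error contract.
async function fetchWithRetry(url, options, attempts = 3) {
  for (let attempt = 0; attempt < attempts; attempt++) {
    const response = await fetch(url, options);
    if (response.ok) return response;
    const retryable = [429, 500, 502, 503].includes(response.status);
    if (!retryable || attempt === attempts - 1) {
      throw new Error(`HTTP ${response.status}`);
    }
    // back off 1s, 2s, 4s between attempts
    await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
  }
}

A FormData body is re-serialized on each request, so passing the same options object to every attempt is safe.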

When to use each

Approach            Best for                                            Tradeoff
papaparse           Default for most projects; works in browsers       Loads everything into memory
csv-parse           Huge files, Node streaming pipelines                More verbose; practical only in Node
ChangeThisFile API  Unpredictable input, edge runtimes, no bundle hit   Per-call cost, network dependency

For typical web apps and Node services, papaparse is the right default. For multi-GB CSVs, switch to csv-parse with streaming. For Cloudflare Workers and other edge runtimes where you want to skip parser dependencies, use the API.

Production tips

  • Disable dynamicTyping for ID columns. Auto-coerced "007" becomes 7 and you lose the leading zero. For columns where strings matter (zip codes, phone numbers, account IDs), pass dynamicTyping: false, or use the per-field function form shown in Method 1 to keep just those columns as strings.
  • Handle the BOM. Excel-exported CSVs start with a UTF-8 byte-order mark. papaparse strips it automatically; with raw csv-parse, set bom: true.
  • Validate row shape. CSV doesn't enforce that every row has the same number of fields. Check something like result.data.every(r => Object.keys(r).length === expectedFields) before trusting the output; see the sketch after this list.
  • Use Web Workers for big files in browsers. Parsing a 50MB CSV blocks the main thread for several seconds. papaparse supports worker: true to offload parsing.
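
Here is a sketch of that row-shape check for papaparse output parsed with header: true; validateRows is a hypothetical helper, not a papaparse API:

// Throws if any row's field count differs from the header's.
// Assumes the result came from Papa.parse(csv, { header: true }).
function validateRows(result) {
  const expected = result.meta.fields.length; // column names from the header row
  const bad = result.data.filter(
    // papaparse stores surplus columns under __parsed_extra and omits
    // missing trailing fields, so both cases show up as a key-count mismatch
    (row) => Object.keys(row).length !== expected
  );
  if (bad.length > 0) {
    throw new Error(`${bad.length} rows have an unexpected field count`);
  }
  return result.data;
}

papaparse also records mismatched rows in result.errors (codes TooFewFields and TooManyFields), so inspecting that array is a lighter-weight alternative.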

papaparse handles the vast majority of CSV-to-JSON cases cleanly. csv-parse is the right tool when you outgrow papaparse on huge files. The API is the right tool when you don't want a parser at all. Free tier gives 1,000 conversions/month — try it without committing.