The csv crate is the standard Rust CSV library: fast, correct, and cleanly integrated with serde. Combined with serde_json, CSV-to-JSON conversion in Rust is straightforward. For known schemas, use typed structs with #[derive(Deserialize, Serialize)]. For dynamic data where you don't know the columns ahead of time, deserialize each row into a HashMap<String, String> and convert it to a serde_json::Value.
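
For instance, here is the dynamic HashMap route in miniature (a sketch, assuming a data.csv with headers and the csv and serde_json crates as dependencies; the snippet belongs inside a function that returns a Result):

let mut rdr = csv::Reader::from_path("data.csv")?;
// Each row deserializes into a HashMap keyed by header name
for result in rdr.deserialize::<std::collections::HashMap<String, String>>() {
    let row = result?;
    let json = serde_json::to_value(&row)?; // Value::Object with string fields
    println!("{}", json);
}

Both routes are shown in full below.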

Method 1: csv + serde_json (idiomatic Rust)

The csv and serde_json crates are the standard tools. Typed structs give the best ergonomics for known schemas.

# Cargo.toml
[dependencies]
csv = "1"
serde = { version = "1", features = ["derive"] }
serde_json = "1"

use csv::ReaderBuilder;
use serde::{Deserialize, Serialize};
use std::error::Error;
use std::fs;
use std::io::BufWriter;

// For KNOWN schema — strongly typed
#[derive(Debug, Deserialize, Serialize)]
struct User {
    id: u32,
    name: String,
    email: String,
    age: Option<u32>,  // Option<u32> handles missing/empty fields
}

fn typed_csv_to_json(in_path: &str, out_path: &str) -> Result<usize, Box<dyn Error>> {
    let mut rdr = ReaderBuilder::new()
        .has_headers(true)
        .trim(csv::Trim::All)
        .from_path(in_path)?;

    let records: Vec<User> = rdr.deserialize()
        .collect::<Result<_, _>>()?;

    let out = fs::File::create(out_path)?;
    serde_json::to_writer_pretty(BufWriter::new(out), &records)?;
    Ok(records.len())
}
// Additional imports for the dynamic version
use serde_json::{Map, Value};

// For UNKNOWN/DYNAMIC schema
fn dynamic_csv_to_json(in_path: &str, out_path: &str) -> Result<usize, Box<dyn Error>> {
    let mut rdr = ReaderBuilder::new()
        .has_headers(true)
        .trim(csv::Trim::All)
        .from_path(in_path)?;

    let headers: Vec<String> = rdr.headers()?
        .iter()
        .map(|s| s.to_string())
        .collect();

    let mut rows: Vec<Value> = Vec::new();
    for result in rdr.records() {
        let record = result?;
        let mut obj = Map::new();
        for (header, field) in headers.iter().zip(record.iter()) {
            // Try to parse as number, fall back to string
            let val: Value = if let Ok(n) = field.parse::<i64>() {
                Value::Number(n.into())
            } else if let Ok(f) = field.parse::<f64>() {
                serde_json::Number::from_f64(f)
                    .map(Value::Number)
                    .unwrap_or_else(|| Value::String(field.to_string()))
            } else if field.eq_ignore_ascii_case("true") {
                Value::Bool(true)
            } else if field.eq_ignore_ascii_case("false") {
                Value::Bool(false)
            } else if field.is_empty() {
                Value::Null
            } else {
                Value::String(field.to_string())
            };
            obj.insert(header.clone(), val);
        }
        rows.push(Value::Object(obj));
    }

    let out = fs::File::create(out_path)?;
    serde_json::to_writer_pretty(BufWriter::new(out), &rows)?;
    Ok(rows.len())
}

fn main() -> Result<(), Box<dyn Error>> {
    let count = dynamic_csv_to_json("data.csv", "output.json")?;
    println!("Converted {} rows", count);
    Ok(())
}
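
To exercise the typed path instead, point main at typed_csv_to_json (assuming data.csv has id, name, email, and age columns matching the User struct):

fn main() -> Result<(), Box<dyn Error>> {
    let count = typed_csv_to_json("data.csv", "output.json")?;
    println!("Converted {} typed rows", count);
    Ok(())
}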

Method 2: Streaming JSON Lines (constant memory, large files)

For files with millions of rows, writing JSON Lines (one JSON object per line) keeps memory constant regardless of file size.

use csv::ReaderBuilder;
use serde_json::{Map, Value};
use std::error::Error;
use std::fs;
use std::io::{BufWriter, Write};

fn csv_to_jsonlines(in_path: &str, out_path: &str) -> Result<usize, Box<dyn Error>> {
    let mut rdr = ReaderBuilder::new()
        .has_headers(true)
        .trim(csv::Trim::All)
        .from_path(in_path)?;

    let headers: Vec<String> = rdr.headers()?
        .iter()
        .map(|s| s.to_string())
        .collect();

    let out = fs::File::create(out_path)?;
    let mut writer = BufWriter::new(out);
    let mut count = 0usize;

    for result in rdr.records() {
        let record = result?;
        let mut obj = Map::new();
        for (header, field) in headers.iter().zip(record.iter()) {
        let val = if let Ok(n) = field.parse::<f64>() {
                serde_json::Number::from_f64(n)
                    .map(Value::Number)
                    .unwrap_or_else(|| Value::String(field.to_string()))
            } else {
                Value::String(field.to_string())
            };
            obj.insert(header.clone(), val);
        }
        let line = serde_json::to_string(&Value::Object(obj))?;
        writeln!(writer, "{}", line)?;
        count += 1;
    }

    writer.flush()?;
    Ok(count)
}

fn main() -> Result<(), Box<dyn Error>> {
    let count = csv_to_jsonlines("large.csv", "output.jsonl")?;
    println!("Streamed {} records", count);
    Ok(())
}

JSON Lines is ideal for large datasets: each line is a complete valid JSON object, so you can process the output file line by line without loading it all into memory.
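
As an illustration, a downstream consumer can read the output back one record at a time (a sketch; the path and the println are assumptions):

use serde_json::Value;
use std::error::Error;
use std::io::{BufRead, BufReader};

fn process_jsonl(path: &str) -> Result<(), Box<dyn Error>> {
    let file = std::fs::File::open(path)?;
    for line in BufReader::new(file).lines() {
        // Each line is one complete JSON object
        let record: Value = serde_json::from_str(&line?)?;
        println!("{}", record);
    }
    Ok(())
}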

Method 3: ChangeThisFile API (reqwest, no csv crate)

The API converts CSV to JSON server-side. The source format is auto-detected from the filename, so you only pass target=json. The free tier allows 1,000 conversions per month, no card needed.

# Cargo.toml
[dependencies]
reqwest = { version = "0.12", features = ["multipart", "blocking"] }

use reqwest::blocking::{Client, multipart};
use std::fs;
use std::path::Path;

const API_KEY: &str = "ctf_sk_your_key_here";

fn csv_to_json_api(in_path: &str, out_path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .timeout(std::time::Duration::from_secs(60))
        .build()?;

    let file_bytes = fs::read(in_path)?;
    let filename = Path::new(in_path)
        .file_name()
        .unwrap()
        .to_string_lossy()
        .to_string();

    let form = multipart::Form::new()
        .part(
            "file",
            multipart::Part::bytes(file_bytes)
                .file_name(filename)
                .mime_str("text/csv")?,
        )
        .text("target", "json");

    let resp = client
        .post("https://changethisfile.com/v1/convert")
        .header("Authorization", format!("Bearer {}", API_KEY))
        .multipart(form)
        .send()?;

    if !resp.status().is_success() {
        return Err(format!("API error: {}", resp.status()).into());
    }

    fs::write(out_path, resp.bytes()?)?;
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    csv_to_json_api("data.csv", "output.json")?;
    println!("Done");
    Ok(())
}

When to use each

  • csv + serde_json (typed): best for known schemas, type safety, and compile-time checks. Tradeoff: the schema must be defined in code, and Cargo.toml deps are required.
  • Streaming JSON Lines: best for files with millions of rows; memory stays constant. Tradeoff: the output is JSON Lines, not a JSON array, so downstream consumers must handle it.
  • ChangeThisFile API: best for quick one-off scripts with no crate deps. Tradeoff: requires a network call, and the free tier caps files at 25MB.

Production tips

  • Use to_writer instead of to_string for large output. serde_json::to_writer writes directly to a file handle. to_string allocates the entire JSON string in memory — avoid it for large datasets.
  • Wrap File with BufWriter. serde_json's writer makes many small write() calls. BufWriter batches them into larger syscalls — significant performance difference for large files.
  • Handle BOM in UTF-8 CSV files. Some Windows tools export UTF-8 with a BOM, and the csv crate doesn't strip it automatically. Strip it manually before handing bytes to the reader (see the sketch after this list): let content = fs::read(in_path)?; let slice = content.strip_prefix(b"\xEF\xBB\xBF").unwrap_or(&content);
  • Use csv::ReaderBuilder::flexible(true) for ragged CSVs. If rows have different numbers of columns, the csv crate errors by default; flexible(true) allows variable-length rows.
  • For typed structs, use #[serde(rename = "Column Name")] for headers with spaces. CSV headers often contain spaces or special characters; map them to clean Rust field names with serde's rename attribute (example after this list).
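
Putting the BOM and ragged-row tips together, a minimal sketch (the function name and return type are illustrative):

use csv::ReaderBuilder;
use std::fs;

fn read_records(path: &str) -> Result<Vec<csv::StringRecord>, Box<dyn std::error::Error>> {
    let content = fs::read(path)?;
    // Strip the UTF-8 BOM (EF BB BF) if present; the csv crate won't
    let slice = content.strip_prefix(b"\xEF\xBB\xBF").unwrap_or(&content);
    let mut rdr = ReaderBuilder::new()
        .has_headers(true)
        .flexible(true) // tolerate rows with varying column counts
        .from_reader(slice);
    Ok(rdr.records().collect::<Result<_, _>>()?)
}

And the rename attribute on a typed struct (the header names here are hypothetical):

#[derive(Debug, serde::Deserialize)]
struct Order {
    #[serde(rename = "Order ID")] // maps the CSV header "Order ID" to order_id
    order_id: u64,
    #[serde(rename = "Customer Name")]
    customer_name: String,
}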

The csv + serde_json combination is the idiomatic Rust approach: fast, memory-safe, and, with typed structs, checked at compile time. For files too large to fit in memory, stream to JSON Lines. For one-off scripts without crate dependencies, the ChangeThisFile free tier covers 1,000 conversions per month.