With autocommit enabled (the default), MySQL treats every single-row INSERT as a full transaction commit — parse, plan, execute, fsync. At scale, that per-row overhead dominates. Here’s how to avoid it.
Multi-Row INSERT
The simplest optimization: batch multiple rows into one statement.
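A minimal sketch, assuming a hypothetical users(name, email) table:

```sql
-- One statement, three rows: parsed, planned, and committed once
INSERT INTO users (name, email) VALUES
  ('Alice', 'alice@example.com'),
  ('Bob',   'bob@example.com'),
  ('Carol', 'carol@example.com');
```

Very large multi-row statements can exceed max_allowed_packet, so keep each statement to a bounded number of rows.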
Transaction Batching
Wrapping many INSERTs in a single transaction avoids per-statement commit overhead.
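A sketch of the pattern, again with a hypothetical users table:

```sql
-- Suppress the implicit per-statement commit for the duration of the batch
START TRANSACTION;
INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');
INSERT INTO users (name, email) VALUES ('Bob', 'bob@example.com');
-- ...many more single-row INSERTs...
COMMIT;  -- one durable commit for the whole batch
```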
LOAD DATA INFILE
For large CSV imports, LOAD DATA INFILE is the fastest native MySQL option — significantly faster than batched INSERTs because it bypasses SQL parsing for each row.
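A sketch, assuming a CSV with a header row and the same hypothetical users table; the path must be readable by the server (see the secure_file_priv note below):

```sql
LOAD DATA INFILE '/var/lib/mysql-files/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES        -- skip the CSV header row
(name, email);
```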
secure_file_priv restricts which directories the server will read files from with LOAD DATA INFILE; check its value with SHOW VARIABLES LIKE 'secure_file_priv'.
LOAD DATA LOCAL INFILE requires local_infile = ON on the server and the --local-infile flag enabled on the client.
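For files that live on the client machine rather than the server, the LOCAL variant looks like this (table name and path are illustrative):

```sql
-- Server side (requires appropriate privileges):
SET GLOBAL local_infile = 1;

-- Then, from a client started with --local-infile=1:
LOAD DATA LOCAL INFILE '/home/app/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(name, email);
```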
INSERT … SELECT
For copying data between tables (ETL patterns, staging tables), INSERT...SELECT is faster than reading rows into the application and inserting them back.
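A sketch with hypothetical orders and archive_orders tables:

```sql
-- Copies rows entirely server-side; no client round-trips
INSERT INTO archive_orders (id, customer_id, total, created_at)
SELECT id, customer_id, total, created_at
FROM orders
WHERE created_at < '2024-01-01';
```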
Disabling Indexes During Bulk Load
For MyISAM tables, disabling non-unique index updates during a bulk insert speeds things up significantly. For InnoDB, setting unique_checks = 0 skips per-row uniqueness validation during the load; disabling unique_checks on data that actually has duplicates produces a corrupt index, so only use it on data you have already validated.
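A sketch of both variants; imports is a hypothetical table name, and the InnoDB settings assume the data has already been validated:

```sql
-- MyISAM: defer non-unique index maintenance until after the load
ALTER TABLE imports DISABLE KEYS;
-- ...bulk INSERT / LOAD DATA here...
ALTER TABLE imports ENABLE KEYS;  -- rebuilds the indexes in one pass

-- InnoDB: skip per-row checks (only safe on pre-validated data!)
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ...bulk load here...
SET unique_checks = 1;
SET foreign_key_checks = 1;
```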
Performance Comparison
| Method | Relative speed | Best for |
|---|---|---|
| Single-row INSERT (auto-commit) | Slowest | Small volumes, interactive |
| Single-row INSERT (batched in transaction) | 2–5× faster | Moderate volumes |
| Multi-row INSERT | 5–20× faster | Most bulk loads |
| LOAD DATA INFILE | Fastest (often 10–100× faster than single-row) | Large CSV imports |
| INSERT…SELECT | Fast (server-side copy) | Table-to-table copies |
Frequently Asked Questions
How large should each INSERT batch be?
Start at 500–1000 rows and benchmark. Larger batches reduce per-commit overhead but increase memory usage and transaction duration (which holds locks longer). If a batch fails, the whole batch rolls back — smaller batches mean less rollback work. Most applications find the optimal batch size between 500 and 5000 rows.
Can I use LOAD DATA INFILE with ON DUPLICATE KEY UPDATE?
LOAD DATA supports REPLACE or IGNORE for duplicate handling, but not ON DUPLICATE KEY UPDATE. For upsert semantics during bulk load, load into a staging table, then use INSERT...SELECT...ON DUPLICATE KEY UPDATE from the staging table to the target.
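A sketch of the staging-table pattern, assuming hypothetical users_staging and users tables where users.name has a UNIQUE index:

```sql
-- 1. Fast load into an empty staging table
LOAD DATA INFILE '/var/lib/mysql-files/users.csv'
INTO TABLE users_staging
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(name, email);

-- 2. Upsert from staging into the target
INSERT INTO users (name, email)
SELECT name, email FROM users_staging
ON DUPLICATE KEY UPDATE email = VALUES(email);
```

Note that VALUES() in ON DUPLICATE KEY UPDATE is deprecated as of MySQL 8.0.20 in favor of row aliases (INSERT ... AS new ... UPDATE email = new.email).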
Does bulk inserting bypass triggers?
No. Triggers fire for every row affected, including those loaded by LOAD DATA INFILE and INSERT...SELECT. If your trigger does significant work, it will slow bulk loads proportionally. For pure data migrations, you may want to temporarily disable triggers.
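MySQL has no DISABLE TRIGGER statement, so "temporarily disable" means drop and recreate; the trigger name here is hypothetical:

```sql
SHOW CREATE TRIGGER users_audit_insert;   -- save the definition first
DROP TRIGGER IF EXISTS users_audit_insert;
-- ...run the bulk load...
-- then re-run the saved CREATE TRIGGER statement
```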
Troubleshooting
| Problem | Solution |
|---|---|
| ERROR 1290: The MySQL server is running with the --secure-file-priv option | Move the file to the directory shown by SHOW VARIABLES LIKE 'secure_file_priv', or use LOAD DATA LOCAL INFILE |
| ERROR 3948: Loading local data is disabled | Enable with SET GLOBAL local_infile = 1 on the server and --local-infile=1 on the client |
| Bulk insert still slow after batching | Check if autocommit is ON — each statement commits individually; wrap in explicit BEGIN...COMMIT blocks |
| Duplicate key errors on known-clean data after disabling unique_checks | unique_checks = 0 does not validate uniqueness during load and corrupts the index if duplicates exist — always validate first |
| LOAD DATA misparses rows | Check FIELDS TERMINATED BY, ENCLOSED BY, and LINES TERMINATED BY match your actual file format; use SHOW WARNINGS after the load |

