
The encoding/csv package in the standard library provides functionality for reading and writing CSV files.

Reading records from CSV file

Let’s read stock quotes from a CSV file:

buf := bytes.NewBufferString(csvData)

r := csv.NewReader(buf)
var record []string
nRecords := 0
var err error
for {
	record, err = r.Read()
	if err != nil {
		if err == io.EOF {
			err = nil
		}
		break
	}
	nRecords++
	if nRecords < 5 {
		fmt.Printf("Record: %#v\n", record)
	}
}
if err != nil {
	log.Fatalf("r.Read() failed with '%s'\n", err)
}
fmt.Printf("Read %d records\n", nRecords)

Record: []string{"date", "open", "high", "low", "close", "volume", "Name"}
Record: []string{"2013-02-08", "15.07", "15.12", "14.63", "14.75", "8407500", "AAL"}
Record: []string{"2013-02-11", "14.89", "15.01", "14.26", "14.46", "8882000", "AAL"}
Record: []string{"2013-02-12", "14.45", "14.51", "14.1", "14.27", "8126000", "AAL"}
Read 5 records

Following Go best practices, the CSV reader operates on the io.Reader interface, which allows it to work on files, network connections, bytes in memory etc.

The Read() method reads one CSV line at a time and returns a []string slice with all the fields in that line, and an error.

A returned io.EOF error signifies successfully reaching the end of the file.

Reading all records from CSV file

Instead of calling Read() in a loop, we could read all records in one call:

r := csv.NewReader(f)
records, err := r.ReadAll()
if err != nil {
    log.Fatalf("r.ReadAll() failed with '%s'\n", err)
}
// records is [][]string
fmt.Printf("Read %d records\n", len(records))

This time we don’t have to special-case io.EOF because ReadAll() handles it for us.

Reading all records at once is simpler but uses more memory, especially for large CSV files.

Writing records to CSV file

Let’s now write simplified stock quotes to a CSV file:

func writeCSV() error {
	f, err := os.Create("stocks_tmp.csv")
	if err != nil {
		return err
	}

	w := csv.NewWriter(f)
	records := [][]string{
		{"date", "price", "name"},
		{"2013-02-08", "15,07", "GOOG"},
		{"2013-02-09", "15,09", "GOOG"},
	}
	for _, rec := range records {
		err = w.Write(rec)
		if err != nil {
			return err
		}
	}

	// csv.Writer might buffer writes for performance so we must
	// Flush to ensure all data has been written to the underlying
	// writer
	w.Flush()

	// Flush doesn't return an error. If it failed to write, we
	// can get the error with Error()
	err = w.Error()
	if err != nil {
		return err
	}

	// Close might also fail due to flushing out buffered writes
	err = f.Close()
	return err
}

Error handling here is not trivial.

We need to remember to call Flush() at the end of writing, check whether Flush() failed with Error() and also check that Close() didn’t fail.

The need to check Close() errors is why we didn’t use a simpler defer f.Close(). Correctness and robustness sometimes require more code.

Values that had , in them were quoted because comma is used as the field separator.

In production code we would also delete the CSV file in case of errors. There’s no need to keep a corrupt file around.

Writing all records to CSV file

Just like we can read all records at once, we can write multiple records at once:

w := csv.NewWriter(f)
err = w.WriteAll(records)
if err != nil {
    return err
}
