Practical Persistence in Go: SQL Databases
This is the first in a series of tutorials about persisting data in Go web applications.
In this post we'll be looking at SQL databases. I'll explain the basics of the
database/sql package, walk through building a working application, and explore a couple of options for cleanly structuring your code.
Before we get started you'll need to
go get one of the drivers for the database/sql package.
In this post I'll be using Postgres and the excellent pq driver. But all the code in this tutorial is (nearly) exactly the same for any other driver or database – including MySQL and SQLite. I'll point out the very few Postgres-specific bits as we go.
Let's build a simple Bookstore application, which carries out CRUD operations on a books table.
If you'd like to follow along, you'll need to create a new
bookstore database and scaffold it with the following:
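Something like the following will do – the exact column types are up to you, but keep the NOT NULL constraints (the Go code later relies on them), and feel free to swap in your own sample rows:

```sql
CREATE TABLE books (
  isbn    char(14) NOT NULL,
  title   varchar(255) NOT NULL,
  author  varchar(255) NOT NULL,
  price   decimal(5,2) NOT NULL
);

INSERT INTO books (isbn, title, author, price) VALUES
  ('978-1503261969', 'Emma', 'Jayne Austen', 9.44),
  ('978-1505255607', 'The Time Machine', 'H. G. Wells', 5.99),
  ('978-1503379640', 'The Prince', 'Niccolò Machiavelli', 6.99);

ALTER TABLE books ADD PRIMARY KEY (isbn);
```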
Once that's done, head over to your Go workspace and create a new
bookstore package directory and a main.go file.
Let's start with some code that executes a
SELECT * FROM books query and then prints the results to stdout.
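Here's a sketch of that program – save it as main.go, and treat the connection string as a placeholder to adjust for your own Postgres setup:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // registers the Postgres driver with database/sql
)

type Book struct {
	isbn   string
	title  string
	author string
	price  float32
}

func main() {
	// sql.Open initialises the connection pool; it doesn't connect yet.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
	if err != nil {
		log.Fatal(err)
	}

	rows, err := db.Query("SELECT * FROM books")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	bks := make([]*Book, 0)
	for rows.Next() {
		bk := new(Book)
		err := rows.Scan(&bk.isbn, &bk.title, &bk.author, &bk.price)
		if err != nil {
			log.Fatal(err)
		}
		bks = append(bks, bk)
	}
	if err = rows.Err(); err != nil {
		log.Fatal(err)
	}

	for _, bk := range bks {
		fmt.Printf("%s, %s, %s, £%.2f\n", bk.isbn, bk.title, bk.author, bk.price)
	}
}
```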
There's a lot going on here. We'll step through this bit-by-bit.
The first interesting thing is the way that we import the driver. We don't use anything in the
pq package directly, which means that the Go compiler will raise an error if we try to import it normally. But we need the pq package's
init() function to run so that our driver can register itself with
database/sql. We get around this by aliasing the package name to the blank identifier. This means
pq.init() still gets executed, but the alias is harmlessly discarded (and our code runs error-free). This approach is standard for most of Go's SQL drivers.
Next we define a
Book type – with the struct fields and their types aligning to our
books table. For completeness I should point out that we've only been able to use the
string and float32 types safely because we set
NOT NULL constraints on the columns in our table. If the table contained nullable fields we would need to use the
sql.NullString and sql.NullFloat64 types instead – see this Gist for a working example. Generally it's easiest to avoid nullable fields altogether if you can, which is what we've done here.
In the main() function we initialise a new
sql.DB object by calling
sql.Open(). We pass in the name of our driver (in this case
"postgres") and the connection string (you'll need to check your driver documentation for the correct format). It's worth emphasising that the
sql.DB object it returns is not a database connection – it's an abstraction representing a pool of underlying connections. You can change the maximum number of open and idle connections in the pool with the
db.SetMaxOpenConns() and db.SetMaxIdleConns() methods respectively. A final thing to note is that
sql.DB is safe for concurrent access, which is very convenient if you're using it in a web application (like we will shortly).
From there we follow a standard pattern that you'll see often:
We fetch a resultset from the
books table using the
DB.Query() method and assign it to a
rows variable. Then we
defer rows.Close() to ensure the resultset is properly closed before the parent function returns. Closing a resultset properly is really important. As long as a resultset is open it will keep the underlying database connection open – which in turn means the connection is not available to the pool. So if something goes wrong and the resultset isn't closed it can rapidly lead to all the connections in your pool being used up. Another gotcha (which caught me out when I first began) is that the defer statement should come after you check for an error from
DB.Query. Otherwise, if
DB.Query() returns an error, you'll get a panic trying to close a nil resultset.
We then use
rows.Next() to iterate through the rows in the resultset. This preps the first (and then each subsequent) row to be acted on by the
rows.Scan() method. Note that if iteration over all of the rows completes then the resultset automatically closes itself and frees-up the connection.
We use the
rows.Scan() method to copy the values from each field in the row to a new
Book object that we created. We then check for any errors that occurred during Scan, and append the new
Book to the bks slice we created earlier.
When the rows.Next() loop has finished we call
rows.Err(). This returns any error that was encountered during the iteration. It's important to call this – don't just assume that we completed a successful iteration over the whole resultset.
If the bks slice has been filled successfully, we loop through it and print the information about each book to stdout.
If you run the code you should get the following output:
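With the sample rows I scaffolded earlier, that looks something like this (your output will match whatever rows you inserted):

```
$ go run main.go
978-1503261969, Emma, Jayne Austen, £9.44
978-1505255607, The Time Machine, H. G. Wells, £5.99
978-1503379640, The Prince, Niccolò Machiavelli, £6.99
```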
Using in a web application
Let's start to morph our code into a RESTful-ish web application with 3 routes:
- GET /books – List all books in the store
- GET /books/show – Show a specific book by its ISBN
- POST /books/create – Add a new book to the store
We've just written all the core logic we need for the
GET /books route. Let's adapt it into a
booksIndex() HTTP handler for our web application.
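Here's one way the full file might look now (again, treat the connection string as a placeholder):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	_ "github.com/lib/pq" // registers the Postgres driver with database/sql
)

type Book struct {
	isbn   string
	title  string
	author string
	price  float32
}

var db *sql.DB

func init() {
	var err error
	// Set up the connection pool once, at startup.
	db, err = sql.Open("postgres", "postgres://user:pass@localhost/bookstore")
	if err != nil {
		log.Fatal(err)
	}

	// sql.Open doesn't actually connect, so verify the database is reachable.
	if err = db.Ping(); err != nil {
		log.Fatal(err)
	}
}

func main() {
	http.HandleFunc("/books", booksIndex)
	http.ListenAndServe(":3000", nil)
}

func booksIndex(w http.ResponseWriter, r *http.Request) {
	if r.Method != "GET" {
		http.Error(w, http.StatusText(405), 405)
		return
	}

	rows, err := db.Query("SELECT * FROM books")
	if err != nil {
		http.Error(w, http.StatusText(500), 500)
		return
	}
	defer rows.Close()

	bks := make([]*Book, 0)
	for rows.Next() {
		bk := new(Book)
		err := rows.Scan(&bk.isbn, &bk.title, &bk.author, &bk.price)
		if err != nil {
			http.Error(w, http.StatusText(500), 500)
			return
		}
		bks = append(bks, bk)
	}
	if err = rows.Err(); err != nil {
		http.Error(w, http.StatusText(500), 500)
		return
	}

	for _, bk := range bks {
		fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.isbn, bk.title, bk.author, bk.price)
	}
}
```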
So how is this different?
We use the
init() function to set up our connection pool and assign it to the global variable
db. We're using a global variable to store the connection pool because it's an easy way of making it available to our HTTP handlers – but it's by no means the only way. Because
sql.Open() doesn't actually check a connection, we also call
DB.Ping() to make sure that everything works OK on startup.
In the booksIndex handler we return a
405 Method Not Allowed response for any non-GET request. Then we have our data access logic. This is exactly the same as the earlier example, except that we're now returning proper HTTP errors instead of exiting the program. Lastly we write the books' details as plain text to the http.ResponseWriter.
Run the application and then make a request:
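For example, using curl (I'm assuming the server is listening on port 3000):

```
$ curl -i localhost:3000/books
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8

978-1503261969, Emma, Jayne Austen, £9.44
978-1505255607, The Time Machine, H. G. Wells, £5.99
978-1503379640, The Prince, Niccolò Machiavelli, £6.99
```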
Querying a single row
For the GET /books/show route we want to retrieve a single book based on its ISBN, with the ISBN being passed in the query string – for example: /books/show?isbn=978-1505255607
We'll create a new
bookShow() handler for this:
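Here's a sketch of the handler – it assumes the Book type and db connection pool from earlier, plus a route registered in main() with http.HandleFunc("/books/show", bookShow):

```go
func bookShow(w http.ResponseWriter, r *http.Request) {
	if r.Method != "GET" {
		http.Error(w, http.StatusText(405), 405)
		return
	}

	// Fetch the isbn parameter from the query string.
	isbn := r.FormValue("isbn")
	if isbn == "" {
		http.Error(w, http.StatusText(400), 400)
		return
	}

	// $1 is a Postgres placeholder parameter; MySQL would use ? instead.
	row := db.QueryRow("SELECT * FROM books WHERE isbn = $1", isbn)

	bk := new(Book)
	err := row.Scan(&bk.isbn, &bk.title, &bk.author, &bk.price)
	if err == sql.ErrNoRows {
		http.Error(w, http.StatusText(404), 404)
		return
	} else if err != nil {
		http.Error(w, http.StatusText(500), 500)
		return
	}

	fmt.Fprintf(w, "%s, %s, %s, £%.2f\n", bk.isbn, bk.title, bk.author, bk.price)
}
```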
Once again the handler starts by checking that it's dealing with a GET request.
We then use the
Request.FormValue() method to fetch the ISBN value from the request query string. This returns an empty string if there's no parameter found, so we check for that and issue a
400 Bad Request response if it's missing.
Now we get to the interesting bit:
DB.QueryRow(). This method is similar to
DB.Query, except that it fetches a single row instead of multiple rows.
Because we need to include untrusted input (the
isbn variable) in our SQL query we take advantage of placeholder parameters, passing in the value of our placeholder as the second argument to
DB.QueryRow() like so:
db.QueryRow("SELECT * FROM books WHERE isbn = $1", isbn)
Behind the scenes,
db.QueryRow() (and also
db.Query() and db.Exec()) work by creating a new prepared statement on the database, and subsequently executing that prepared statement using the placeholder parameters provided. This means that all three methods are safe from SQL injection when used correctly. From Wikipedia:
Prepared statements are resilient against SQL injection, because parameter values, which are transmitted later using a different protocol, need not be correctly escaped. If the original statement template is not derived from external input, injection cannot occur.
The placeholder parameter syntax differs depending on your database. Postgres uses the
$N notation, but MySQL, SQL Server and others use the
? character as a placeholder.
OK, let's get back to our code.
After we've got a row from
DB.QueryRow() we use
row.Scan() to copy the values into a new
Book object. Note how any errors from
DB.QueryRow() are deferred and not surfaced until we call row.Scan().
If our query returned no rows, our call to
row.Scan() will return an error of the type
sql.ErrNoRows. We check for that error type specifically and return a
404 Not Found response if that's the case. We then handle all other errors by returning a
500 Internal Server Error.
If everything went OK, we write the book details to the http.ResponseWriter.
Give it a try:
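For example, using an ISBN that exists in your books table:

```
$ curl -i localhost:3000/books/show?isbn=978-1505255607
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8

978-1505255607, The Time Machine, H. G. Wells, £5.99
```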
If you play around with the ISBN value, or issue a malformed request you should see that you get the appropriate error responses.
Executing a statement
For our final
POST /books/create route we'll make a new
booksCreate() handler and use
DB.Exec() to execute an
INSERT statement. You can take the same approach for an UPDATE,
DELETE, or any other action that doesn't return rows.
Here's the code:
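A sketch of the handler – this assumes the same Book type and db pool as before, a strconv import, and http.HandleFunc("/books/create", booksCreate) registered in main():

```go
func booksCreate(w http.ResponseWriter, r *http.Request) {
	if r.Method != "POST" {
		http.Error(w, http.StatusText(405), 405)
		return
	}

	// Fetch and validate the request parameters.
	isbn := r.FormValue("isbn")
	title := r.FormValue("title")
	author := r.FormValue("author")
	if isbn == "" || title == "" || author == "" {
		http.Error(w, http.StatusText(400), 400)
		return
	}
	price, err := strconv.ParseFloat(r.FormValue("price"), 32)
	if err != nil {
		http.Error(w, http.StatusText(400), 400)
		return
	}

	result, err := db.Exec("INSERT INTO books VALUES($1, $2, $3, $4)", isbn, title, author, price)
	if err != nil {
		http.Error(w, http.StatusText(500), 500)
		return
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		http.Error(w, http.StatusText(500), 500)
		return
	}

	fmt.Fprintf(w, "Book %s created successfully (%d row affected)\n", isbn, rowsAffected)
}
```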
Hopefully this is starting to feel familiar now.
In the booksCreate() handler we check we're dealing with a POST request, and then fetch the request parameters using
request.FormValue(). We verify that all the necessary parameters exist, and in the case of
price use the
strconv.ParseFloat() function to convert the parameter from a string into a float.
We then carry out the insert using
db.Exec(), passing our new book details as parameters just like we did in the previous example. Note that
DB.Exec(), like DB.Query() and DB.QueryRow(), is a variadic function, which means you can pass in as many parameters as you need.
The db.Exec() method returns an object satisfying the
sql.Result interface, which you can either use (like we are here) or discard with the blank identifier.
The sql.Result interface guarantees two methods:
LastInsertId() – which is often used to return the value of a new auto-increment id, and
RowsAffected() – which contains the number of rows that the statement affected. In this code we're picking up the latter, and then using it in our plain text confirmation message.
It's worth noting that not all drivers support the
LastInsertId() and RowsAffected() methods, and calling them may return an error. For example, pq doesn't support
LastInsertId() – if you need that functionality you'll have to take an approach like this one.
Let's try out the
/books/create route, passing our parameters in the POST body:
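For example, with a made-up book:

```
$ curl -i -X POST -d "isbn=978-1470184841&title=Metamorphosis&author=Franz Kafka&price=5.90" localhost:3000/books/create
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8

Book 978-1470184841 created successfully (1 row affected)
```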
At the moment all our database access logic is mixed in with our HTTP handlers. It's probably a good idea to refactor this for easier maintainability and DRYness as our application grows.
But this tutorial is already pretty long, so I'll explore some of the options for refactoring our code in the next post – Practical Persistence in Go: Organising Database Access.
The sqlx package by Jason Moiron provides some additions to the standard
database/sql functionality, including support for named placeholder parameters and automatic marshalling of rows into structs.
If you're looking for something more ORM-ish, you might like to consider gorp by James Cooper.
The null package can help make managing nullable values easier, if that's something you need to do a lot of.
Lastly, I found the tutorials at go-database-sql.org to be clear and helpful. Especially worth reading is the surprises and limitations section.