Go 1.25 introduced a new http.CrossOriginProtection middleware to the standard library, and it got me wondering: Have we finally reached the point where CSRF attacks can be prevented without relying on a token-based check (like double-submit cookies)? Can we build secure web applications without bringing in third-party packages like justinas/nosurf or gorilla/csrf?

And I think the answer now may be a cautious "yes", so long as a few important conditions are met.

If you want to skip the explanations and just see what those conditions are, you can jump straight to the list of conditions in the final section of this post.
The http.CrossOriginProtection middleware
The new http.CrossOriginProtection middleware works by checking the values in a request's Sec-Fetch-Site and Origin headers to determine where the request is coming from. It will automatically reject any non-safe request (that is, any request with a method other than GET, HEAD or OPTIONS) that is not from the same origin, and send the client a 403 Forbidden response.
The http.CrossOriginProtection middleware has some limitations, which we'll discuss in a moment, but it is robust and simple to use, and a great addition to the standard library.
At its simplest, you can use it like this:
package main

import (
    "fmt"
    "log/slog"
    "net/http"
    "os"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", home)

    slog.Info("starting server on :4000")

    // Wrap the mux with the cross-origin protection middleware by calling the
    // Handler method on a new http.CrossOriginProtection instance.
    err := http.ListenAndServe(":4000", http.NewCrossOriginProtection().Handler(mux))
    if err != nil {
        slog.Error(err.Error())
        os.Exit(1)
    }
}

func home(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "Hello!")
}
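If you want to check the rejection behavior for yourself, a quick test along the following lines should do it. This is my own rough sketch (it lives in a main_test.go file alongside the example above and reuses its home handler): it sends a simulated cross-origin POST and checks that the response is 403 Forbidden.

package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestCrossOriginPOSTIsRejected(t *testing.T) {
    // Wrap the home handler with the cross-origin protection middleware and
    // spin up a test server.
    ts := httptest.NewServer(http.NewCrossOriginProtection().Handler(http.HandlerFunc(home)))
    defer ts.Close()

    // Simulate a non-safe, cross-origin browser request by setting the
    // Sec-Fetch-Site header.
    req, err := http.NewRequest(http.MethodPost, ts.URL, nil)
    if err != nil {
        t.Fatal(err)
    }
    req.Header.Set("Sec-Fetch-Site", "cross-site")

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        t.Fatal(err)
    }
    defer res.Body.Close()

    if res.StatusCode != http.StatusForbidden {
        t.Errorf("got status %d; want %d", res.StatusCode, http.StatusForbidden)
    }
}

Changing the Sec-Fetch-Site value to same-origin (or removing the header entirely) should result in a 200 OK response instead.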
If you want, it's also possible to configure the behavior of http.CrossOriginProtection. Configuration options include being able to add trusted origins (from which cross-origin requests are allowed), and the ability to use a custom handler for rejected requests instead of the default 403 Forbidden response.

When I've wanted to customize the behavior, I've been using a pattern like this:
package main

import (
    "fmt"
    "log/slog"
    "net/http"
    "os"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", home)

    slog.Info("starting server on :4000")

    err := http.ListenAndServe(":4000", preventCSRF(mux))
    if err != nil {
        slog.Error(err.Error())
        os.Exit(1)
    }
}

func preventCSRF(next http.Handler) http.Handler {
    cop := http.NewCrossOriginProtection()

    // Allow cross-origin requests from this trusted origin.
    cop.AddTrustedOrigin("https://foo.example.com")

    // Send a custom response instead of the default 403 Forbidden when a
    // request is rejected.
    cop.SetDenyHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusBadRequest)
        w.Write([]byte("CSRF check failed"))
    }))

    return cop.Handler(next)
}

func home(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "Hello!")
}
Limitations
The big limitation of http.CrossOriginProtection is that it is only effective at blocking requests from modern browsers. Your application will still be vulnerable to CSRF attacks coming from older (generally pre-2020) browsers which do not include at least one of the Sec-Fetch-Site or Origin headers in requests.

Right now, browser support for the Sec-Fetch-Site header is at 92%, and for Origin it is 95%. So, in general, relying on http.CrossOriginProtection is not sufficient as your only protection against CSRF.

It's also important to note that the Sec-Fetch-Site header is only sent when your application has a "trustworthy origin", which basically means that your application needs to be using HTTPS in production for http.CrossOriginProtection to work to its full potential.
You should also be aware that when no Sec-Fetch-Site header is present in a request, the middleware falls back to comparing the Origin and Host headers, and the Host header does not include the scheme. This limitation means that http.CrossOriginProtection will wrongly allow cross-origin requests from http://{host} to https://{host} when there is no Sec-Fetch-Site header present but there is an Origin header. To mitigate this risk, you should ideally configure your application to use HTTP Strict Transport Security (HSTS).
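Setting an HSTS header is straightforward. As a minimal sketch, you could add a middleware function like the one below to your application; the max-age and includeSubDomains values here are illustrative rather than a recommendation, and browsers will only honor the header when it arrives over HTTPS.

// setHSTS instructs browsers to only connect to this host over HTTPS in the
// future. The header is ignored by browsers unless it is received over a
// secure connection.
func setHSTS(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Two years, applied to subdomains too. Illustrative values only.
        w.Header().Set("Strict-Transport-Security", "max-age=63072000; includeSubDomains")
        next.ServeHTTP(w, r)
    })
}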
Enforcing TLS 1.3
Looking into this got me wondering... What if you're already planning to use HTTPS and enforce TLS 1.3 as the minimum supported TLS version? Could you be confident that all web browsers which support TLS 1.3 also support either the Sec-Fetch-Site or Origin header?

As far as I can tell from the MDN compatibility data and tables from Can I Use, the answer is "yes" for (almost) all major browsers.
If you enforce TLS 1.3 as the minimum version:

- Older browsers which don't support TLS 1.3 simply won't be able to connect to your application.
- For the modern major browsers that do support TLS 1.3 and can connect, you can be confident that at least one of the Sec-Fetch-Site or Origin headers is supported, and therefore http.CrossOriginProtection will work effectively.
The only exception to this I can see is Firefox v60-69 (2018-2019), which did not support the Sec-Fetch-Site header and did not send the Origin header for POST requests. This means that http.CrossOriginProtection will not work effectively to block requests originating from those browsers. Can I Use puts usage of Firefox v60-69 at 0%, so the risk here appears very low, but there are probably some computers somewhere in the world still running it.
Also, we only have this information for the major browsers — Chrome/Chromium, Firefox, Edge, Safari, Opera and Internet Explorer. But of course, other browsers exist. Most of them are forks of Chromium or Firefox and therefore will likely be OK, but there's no guarantee here and it is hard to quantify the risk.
So if you use HTTPS and enforce TLS 1.3, it's a huge step forward in making sure that http.CrossOriginProtection can work effectively. However, there remains a non-zero risk that comes from Firefox v60-69 and non-major browsers, so you may want to add some defense-in-depth and utilize SameSite cookies too.
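For reference, enforcing TLS 1.3 as the minimum version is a small change to your server configuration. Here's a minimal sketch, reusing the preventCSRF middleware and home handler from the example earlier; the ./tls/cert.pem and ./tls/key.pem paths are placeholders for your own certificate and key.

package main

import (
    "crypto/tls"
    "log/slog"
    "net/http"
    "os"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", home)

    srv := &http.Server{
        Addr:    ":4000",
        Handler: preventCSRF(mux),
        // Refuse connections from clients that can't negotiate TLS 1.3.
        TLSConfig: &tls.Config{MinVersion: tls.VersionTLS13},
    }

    slog.Info("starting server on :4000")

    // The certificate and key paths are placeholders.
    err := srv.ListenAndServeTLS("./tls/cert.pem", "./tls/key.pem")
    if err != nil {
        slog.Error(err.Error())
        os.Exit(1)
    }
}

Browsers and other clients that can't negotiate TLS 1.3 will fail during the handshake and never reach your handlers.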
We'll talk more about SameSite cookies in a moment, but first we need to take a quick detour and discuss the difference between the terms origin and site.
Cross-site vs cross-origin
In the world of web specifications and web browsers, cross-site and cross-origin are subtly different things, and in a security context like this it's important to understand the difference and be exact about what we mean.
I'll quickly explain.
Two websites have the same origin if they share the exact same scheme, hostname, and port (if present). So https://example.com and https://www.example.com are not the same origin because the hostnames (example.com and www.example.com) are different. A request between them would be cross-origin.
Two websites are 'same site' if they share the same scheme and registrable domain. So https://example.com, https://www.example.com and https://login.admin.example.com are all considered to be the same site because the scheme (https) and registrable domain (example.com) are the same. A request between these would not be considered to be cross-site, but it would be cross-origin.
So what are the points that I'm building up to here?

- Go's http.CrossOriginProtection middleware is accurately and appropriately named. It blocks cross-origin requests. It's more strict than it would be if it only blocked cross-site requests, because it also blocks requests from other origins under the same site (i.e. registrable domain).
- This is useful because it helps to prevent a situation where your janky-not-been-updated-in-the-last-decade WordPress blog at https://blog.example.com is compromised and used to launch a request forgery attack at your important https://admin.example.com website.
- When most people (myself included) casually talk about "CSRF attacks", what we are referring to most of the time is actually cross-origin request forgery, not just cross-site request forgery. It's a shame that CSRF is the commonly used and known acronym to describe this family of attacks, because most of the time CORF would be more accurate and appropriate. But hey! That's the messy world we live in.

For the rest of this post though, I'll use the term CORF instead of CSRF when that is exactly what I mean.
SameSite cookies
The SameSite cookie attribute has generally been supported by web browsers since 2017, and by Go since v1.11. If you set the SameSite=Lax or SameSite=Strict attribute on a cookie, that cookie will only be included in requests to the same site that set it (with Lax, the cookie is still sent on top-level navigations which use a safe method). In turn, that prevents cross-site request forgery attacks (but not cross-origin attacks from within the same site).
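In Go, this just means setting the SameSite field when you create a cookie. A quick sketch, where the cookie name and value are placeholders and w is your http.ResponseWriter:

// Issue a session cookie which will not be sent on cross-site requests.
http.SetCookie(w, &http.Cookie{
    Name:     "session_id", // placeholder name
    Value:    sessionToken, // placeholder value
    Path:     "/",
    Secure:   true,
    HttpOnly: true,
    SameSite: http.SameSiteLaxMode, // or http.SameSiteStrictMode
})

The Secure and HttpOnly attributes aren't strictly part of the CSRF story, but they're sensible defaults for a session cookie.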
There is some good news here: all major browsers that support TLS 1.3 also fully support SameSite cookies, with no exceptions that I can see. So if you enforce TLS 1.3, you can be confident that all the major browsers using your application will respect the SameSite attribute.

This means that by using SameSite=Lax or SameSite=Strict on your cookies, you cover off the risk of cross-site request forgeries from Firefox v60-69 that we talked about earlier.
Putting it all together
If you combine using HTTPS, enforcing TLS 1.3 as the minimum version, using SameSite=Lax or SameSite=Strict cookies appropriately, and using the http.CrossOriginProtection middleware in your application, then as far as I can see there are only two unmitigated CSRF/CORF risks from major browsers:

- CORF attacks from within the same site (i.e. from another subdomain under your registrable domain) in Firefox v60-69.
- CORF attacks from an HTTP version of your origin, from browsers that do not support the Sec-Fetch-Site header.
For the first of these risks, if you don't have any other websites under your registrable domain, or you're confident that the websites are secure and uncompromised, then this might be a risk that you're willing to accept given the extremely low usage of Firefox v60-69.
For the second, if you don't support HTTP on your origin at all (including redirects) then this isn't something you need to worry about. Otherwise, you can mitigate the risk by including an HSTS header on your HTTPS responses.
At the start of this article, I said that not using a token-based CSRF check might be OK under certain conditions. So let's run through what those are:

- Your application uses HTTPS and enforces TLS 1.3 as the minimum version. You accept that users with older browsers will not be able to connect to your application at all.
- You follow good practice and never change important application state in response to requests with the safe methods GET, HEAD, OPTIONS or TRACE.
- You use both the http.CrossOriginProtection middleware and SameSite=Lax or SameSite=Strict cookies. It's important to still use SameSite cookies for general defense in depth, but more specifically to mitigate CSRF attacks from Firefox v60-69.
- Because of the unmitigated risk of a same-site CORF attack from Firefox v60-69, you either don't have any other websites under your registrable domain, or you're confident that they're secure and uncompromised.
- There is either no HTTP version of your application origin at all, or you include an HSTS header on your HTTPS responses.
- Finally, you are willing to accept the difficult-to-quantify risk of CSRF/CORF attacks from non-major browsers that support TLS 1.3 but don't support the Origin header, Sec-Fetch-Site header or SameSite cookies. Does any such browser exist? I don't know, and I'm not sure there's a way to answer that question with 100% confidence. So you'll need to do your own risk assessment here, and it's a risk that you probably only want to accept if your application is a low-value target and the impact of a successful CSRF/CORF attack is both isolated and minor.
If you enjoyed this post...
You might like to check out my other Go tutorials on this site, or if you're after something more structured, my books Let's Go and Let's Go Further cover how to build complete, production-ready web applications and APIs with Go.