People argue a lot about which API style or technology is faster. To settle the debate, I created a basic Next.js app with separate endpoints for classic REST, GraphQL, and tRPC, and then… I ran some benchmarks against it. In this article, I'll show the numbers and tell you what to do if you worry about your application's performance. So which is faster: REST, GraphQL, or tRPC?
The first step was to create a basic Next.js app. Why Next.js and not some other framework? Because it's simple to set up and pretty popular. I'm not a huge fan, but whatever.
The next step was to create an example data payload that would be returned as the response for each API type. I managed that thanks to some online tool (sorry, don't remember which one). The payload is around 1.6 kB.
Now all I needed to do was create three endpoints, one each for REST, GraphQL, and tRPC, that return the stubbed data as efficiently as possible.
The REST endpoint returns the data wrapped in a plain NextResponse. I do realize that the chosen implementation is one of the factors when it comes to performance and will affect the end result, so keep this in mind.
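As a rough sketch (the payload fields below are hypothetical; the real ~1.6 kB fixture lives in the repo), the handler boils down to serializing an in-memory object. I'm using the standard Response here, which NextResponse extends:

```typescript
// Hypothetical stand-in for the real ~1.6 kB stub payload
const stub = {
  id: 1,
  name: "Example",
  items: [{ id: 1, title: "First" }],
};

// Sketch of the REST route handler; in the app this would be
// `return NextResponse.json(stub)` inside an API route
export function GET(): Response {
  // No I/O: just serialize the in-memory stub and send it back
  return new Response(JSON.stringify(stub), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```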
The GraphQL endpoint uses Apollo Server (version 3, to be exact), so that's what we are testing here.
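For a sense of the shape (field names hypothetical; the real schema is in the repo), the resolver just hands the stub back, and these pieces would be passed to new ApolloServer({ typeDefs, resolvers }):

```typescript
// Hypothetical stand-in for the stub payload
const stub = { id: 1, name: "Example", items: [{ id: 1, title: "First" }] };

// Schema definition as SDL; in the app, typeDefs and resolvers
// are handed to Apollo Server 3
export const typeDefs = /* GraphQL */ `
  type Item {
    id: Int!
    title: String!
  }
  type Payload {
    id: Int!
    name: String!
    items: [Item!]!
  }
  type Query {
    payload: Payload!
  }
`;

export const resolvers = {
  Query: {
    // No database: the resolver returns the in-memory stub directly
    payload: () => stub,
  },
};
```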
The tRPC endpoint comes from the tRPC package and leverages its Next.js integration.
The only data source is stubbed data, no database or external connections are made.
Since serving HTTP requests also takes time, I created a control endpoint that always just returns HTTP 200.
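The control endpoint is about as close to a no-op as an HTTP handler gets. A sketch (again using the standard Response that NextResponse extends):

```typescript
// Control endpoint sketch: no payload, no serialization, just a 200
export function GET(): Response {
  return new Response(null, { status: 200 });
}
```

Whatever this endpoint costs per request is pure framework and HTTP overhead, which is exactly what makes it a useful baseline.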
All the code is available in this GitHub repository.
Let’s go through how I tested the application. First off, I built the app and ran it via the next start command.
All benchmarks were conducted on my local environment on MacBook Pro (M1 Pro).
As a benchmark tool I chose Hey, a nice ab alternative I’ve grown fond of recently. Concurrency was set to 1, and the goal was to run as many HTTP requests as possible in 5 seconds. The results are the average requests per second for each technology.
Take a look at this graph to see how it went:
Obviously, our control endpoint was the fastest, fetching over 9000 rps. Second was pure REST at around 4300 rps, closely followed by tRPC at around 3900 rps. GraphQL came last at around 2500 rps. As you can see, I also included an alternative GraphQL query that returned only a small subset of the fields; it was pretty quick at nearly 6400 rps. Of course, we would have to consider that cheating, but it was interesting to test the impact of a shorter input and output for GraphQL.
Before we discuss the results, let’s recap the commands used for this benchmark, so you can check it for yourself (or check what I actually did).
Control:
hey -z 5s -c 1 -H "Content-Type: application/json" "http://localhost:3000/api/hello"
REST:
hey -z 5s -c 1 -H "Content-Type: application/json" "http://localhost:3000/api/rest"
tRPC:
hey -z 5s -c 1 -H "Content-Type: application/json" "http://localhost:3000/api/trpcTest?batch=1&input=%7B%7D"
GraphQL - full:
hey -z 5s -c 1 -H "Content-Type: application/json" -m POST -D data.txt "http://localhost:3000/api/graphql"
GraphQL - some fields:
hey -z 5s -c 1 -H "Content-Type: application/json" -m POST -D data2.txt "http://localhost:3000/api/graphql"
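A note on the tRPC URL: batch=1 opts into tRPC's HTTP batching format, and input=%7B%7D is just a URL-encoded empty JSON object:

```typescript
// The tRPC query string carries the procedure input as URL-encoded JSON
const input = decodeURIComponent("%7B%7D");
console.log(input); // "{}"

// Parsed, it is simply an empty input object for the procedure
const parsed = JSON.parse(input);
console.log(Object.keys(parsed).length); // 0
```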
Why it doesn’t matter (and what does)
Looking only at the relative numbers, you could say that GraphQL was about 40% slower than REST, but the big conclusion from this little benchmark is that the difference between tRPC, REST, and GraphQL is negligible.
Each technology managed to parse the request, serialize the response, and return it in under 1 ms. The “slowest” was GraphQL at 0.4 ms per request, of which roughly 0.1 ms is unavoidable overhead anyway (that’s what the control endpoint costs). This means that as soon as you touch a database, you are dealing with times at least an order of magnitude higher.
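The per-request figures follow directly from the rps numbers; at concurrency 1, latency is simply the inverse of throughput:

```typescript
// At concurrency 1: milliseconds per request = 1000 / requests per second
const msPerRequest = (rps: number): number => 1000 / rps;

console.log(msPerRequest(9000).toFixed(2)); // "0.11" (control)
console.log(msPerRequest(4300).toFixed(2)); // "0.23" (REST)
console.log(msPerRequest(2500).toFixed(2)); // "0.40" (GraphQL)
```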
Therefore, real-world application performance does not depend on the API “style”. It depends on the internal implementation: retrieving and processing data. GraphQL, tRPC, and REST are all capable of delivering world-class performance. That’s why you are safe to assume that other factors will matter more to system speed in the long run.