A Performance Analysis of Python WSGI Servers: Part 2

In Part 1 of this series, we introduced WSGI and the six most popular WSGI web servers. In this post, we’ll show you the results of our performance benchmark of those servers. There are many production-grade WSGI servers, and we were curious how well they performed, so we constructed a benchmark to test six of the most popular.

What About CGI and mod_python?

Before WSGI existed, the two primary methods of serving a Python web application were CGI and mod_python. Both have fallen out of favor to WSGI: CGI applications are slower because they spawn a new process for each request, and mod_python, while it integrates with Python directly and improves performance over CGI, is only available for Apache and is no longer actively developed.
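For context, WSGI itself is just a calling convention (PEP 3333), and every server in this benchmark hosts the same kind of callable. A minimal application looks something like this (a generic example, not the exact app used in our tests):

```python
# A minimal WSGI application (PEP 3333). Any WSGI server -- Bjoern,
# CherryPy, uWSGI, and so on -- can host a callable with this signature.
def application(environ, start_response):
    body = b"Hello, world!"
    status = "200 OK"
    headers = [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ]
    start_response(status, headers)
    return [body]


if __name__ == "__main__":
    # Quick sanity check with the reference server from the standard library.
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, application).serve_forever()
```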

The Contestants

Due to time constraints, we limited this study to six WSGI servers. We tried to include servers that claimed to be fast but haven’t been prominently featured in benchmarks. Unfortunately, this meant that there were many excellent choices we simply didn’t have time to test. All the code for this project is posted on GitHub, and we’ll try to update the project with additional servers in the future.

The Benchmark

To make the test as clean as possible, we created a Docker container to isolate the tested server from the rest of the system. In addition to sandboxing the WSGI server, this ensured that every run started with a clean slate.
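As a rough illustration of the approach (not the project’s actual test harness), each run can be driven from Python with the Docker SDK so every server starts from the same clean image and is removed afterwards. The image name and port below are placeholders:

```python
import docker

# Hypothetical sketch: start a fresh container for one benchmark run,
# then tear it down so the next run begins with a clean slate.
client = docker.from_env()

container = client.containers.run(
    "wsgi-benchmark:bjoern",      # placeholder image name, one per server
    detach=True,
    ports={"8080/tcp": 8080},     # expose the WSGI server to the load generator
    cpuset_cpus="0,1",            # pin to the two CPU cores used in the test
)

try:
    pass  # run the load test against localhost:8080 here
finally:
    container.remove(force=True)  # ensure no state leaks into the next run
```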

Server

Testing

Metrics

Results

All the raw performance metrics have been included in the project’s repository, and a summary CSV is provided. If you are more of a visual person, the CSV file has been graphed in a Google document.

Requests Served

This graph shows the average number of requests served; the higher the numbers, the better.

WINNER: Bjoern

Bjoern

In the number of sustained requests served, Bjoern is the obvious winner. However, its numbers are so much higher than those of its competitors that we are a bit skeptical. We are not sure if Bjoern is really that mind-numbingly fast or if there is an error in the test. At first, we were testing the servers in alphabetical order, and we thought Bjoern might be gaining an unfair advantage. However, even after randomizing the server execution order and retesting, the results remained the same.

uWSGI

We were disappointed by uWSGI’s poor results; we expected it to be one of the top performers. While testing, we noticed uWSGI printing logging information to the screen, and we initially attributed its lack of performance to this extra work. However, even after adding the --disable-logging option, it was still the slowest performer.
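For reference, the option in question is passed on the uWSGI command line. A rough sketch of the kind of invocation we mean (the exact flags and file names in our repository may differ) is:

```python
import subprocess

# Hypothetical invocation: serve the test app over HTTP with logging disabled.
# "app.py" and the port are placeholders; the flags are standard uWSGI options.
cmd = [
    "uwsgi",
    "--http", ":8080",        # serve HTTP directly, no reverse proxy in front
    "--wsgi-file", "app.py",  # the WSGI application under test
    "--processes", "2",       # one worker per CPU core used in the benchmark
    "--disable-logging",      # the option discussed above
]
server = subprocess.Popen(cmd)
```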

As mentioned in uWSGI’s introduction, it is usually paired with a reverse proxy, such as Nginx. However, we are not sure this could account for such a large difference.

Latency

Latency is the amount of time elapsed between a request and its response. Lower numbers are better.

WINNER: CherryPy

RAM Usage

This compares the memory requirements and “lightness” of each server. Lower numbers are better.

WINNERS: Bjoern and Meinheld

Errors

For a web server, an error occurs when it drops a connection, aborts it, or times out. Lower is better.

For each server, we calculated the ratio of total requests to the number of errors:
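As a small illustration of the calculation (the numbers below are placeholders, not measured values):

```python
# Per-server figure: total requests issued divided by the number of errors
# (dropped, aborted, or timed-out requests). Placeholder numbers only.
total_requests = 1_000_000
errors = 1_500

requests_per_error = total_requests / errors
print(f"{requests_per_error:.0f} requests served per error")
```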

WINNER: CherryPy

CPU Usage

High CPU usage is neither good nor bad as long as a server performs well, but it does yield some interesting insights into how each server works. Since two CPU cores were used, the maximum possible usage is 200 percent.
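To make the 200 percent ceiling concrete: per-process CPU percentages are measured relative to a single core, so a server whose master and worker processes keep both test cores busy reports close to 200. A hedged sketch of how such a reading can be taken with psutil (not necessarily how we collected ours):

```python
import time
import psutil

# Hypothetical sampling sketch: sum CPU usage across a server's master process
# and its workers; on a two-core machine the total can approach 200 percent.
def sample_cpu(pid: int, interval: float = 1.0) -> float:
    parent = psutil.Process(pid)
    procs = [parent] + parent.children(recursive=True)
    for p in procs:
        p.cpu_percent(None)  # first call primes the per-process counter
    time.sleep(interval)
    return sum(p.cpu_percent(None) for p in procs)
```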

WINNER: None, since this is more of an observation in behavior than a comparison in performance.

Conclusion

The benchmark’s results surprised us in a couple of ways. First, we were blown away by Bjoern’s performance; however, we were also a bit suspicious of the gap between it and the next highest performer. We need to investigate this further and would love to hear your thoughts if you have any insight into our approach. Second, we were sorely disappointed with uWSGI. Either we misconfigured it or the version we installed has some major bugs; we’d love to open this up for discussion as well.

To summarize, here are some general insights that can be gleaned from the results of each server:

Sources used for research and inspiration, but not linked within the article:

* http://nichol.as/benchmark-of-python-web-servers

* https://docs.python.org/2/howto/webservers.html
