Using Nginx as a Reverse Proxy to IIS
Adrian Singer, 11-04-2010
We were recently approached by a client whose legacy Content Management System, running on Microsoft IIS, was becoming painfully slow and hurting their business.
The system was not keeping up with their traffic.
Typically, in a situation like this, we would recommend re-architecting the application, piece by piece, replacing IIS with LAMP and optimizing database access.
In this case, the client was low on budget and didn't want to make too many changes. They were looking for a quick fix.
Following a careful review of their .asp application, it became clear we were dealing with a chaotic, buggy system and that we would have to cut deep if we wanted to optimize the existing code.
So we decided to go with a different approach.
Keep everything as is and use Nginx to reverse-proxy all incoming requests.
What is a Reverse Proxy?
A reverse proxy is a web server that handles all incoming requests from end users, caching, load balancing, and communicating with your back-end primary servers as necessary.
IIS is slow. Nginx is super fast.
If we can't rewrite the code, let's have Nginx handle all traffic, connect to IIS internally and then cache the response from IIS, so that future requests can be fulfilled without ever hitting IIS.
The idea is to go from a million users downloading an image from IIS to those same users downloading everything directly from Nginx. Nginx is faster, lightweight, and scales easily.
Why Nginx?
Whenever we set up reverse proxies, one of our favorite options is Squid.
Squid has been around for a long time, is very easy to set up, and provides a good reverse-proxy caching solution.
In this case, however, incoming requests required further logic before they could be routed to IIS. Nginx is just as fast and offers greater flexibility by letting us use PHP.
Setting up
We provisioned a new dedicated server for the client and installed Nginx with PHP-FPM.
We then analyzed all the requests the IIS system was handling. They were all HTTP GET requests with varying parameters, and IIS served several vhosts, so we had to properly handle both http://DomainA.com/dosomething?a=b and http://DomainB.com/dosomething?a=b.
Finally, we configured Nginx to rewrite all requests for files that did not exist to a notfound.php script:
location /
{
    # Existing directories are served as-is.
    if (-d $request_filename)
    {
        break;
    }
    # Anything not cached locally yet is handed to notfound.php,
    # which fetches it from IIS and saves a copy for next time.
    if (!-f $request_filename)
    {
        rewrite ^(.*)$ /notfound.php?$1 last;
    }
}
In notfound.php, we would connect to IIS to retrieve the image, static page, or dynamic content, then save it locally.
The IIS system served different content based on the user's IP address and origin, so we had to take that into account when saving file names (/us/google/welcome.gif vs. /canada/yahoo/welcome.gif).
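The post doesn't include the script itself, but a minimal sketch of what notfound.php might look like follows. The internal address iis.internal, the /var/cache/iis layout, and the lookup_country() helper are all placeholders for this example, not the production code:

<?php
// notfound.php -- illustrative sketch of the cache-miss handler.
// Nginx rewrote the miss to /notfound.php?$1 and appended the original
// query args, so QUERY_STRING looks like "/dosomething&a=b"; restore it.
$uri  = preg_replace('/&/', '?', $_SERVER['QUERY_STRING'], 1);
$host = $_SERVER['HTTP_HOST'];   // DomainA.com vs DomainB.com

// Placeholder for the GeoIP / origin detection the post mentions.
function lookup_country($ip)
{
    return 'us';                 // real code would map the IP to a country
}
$country = lookup_country($_SERVER['REMOTE_ADDR']);

// Cache key in the spirit of /us/google/welcome.gif; the query string is
// folded in via a hash so each parameter combination gets its own file.
$cache = '/var/cache/iis/' . $country . '/' . $host . '/' . md5($uri);

if (!file_exists($cache)) {
    // First time we see this combination: fetch it from IIS internally.
    $ctx  = stream_context_create(array(
        'http' => array('header' => "Host: $host\r\n", 'timeout' => 30),
    ));
    $body = file_get_contents('http://iis.internal' . $uri, false, $ctx);
    if ($body === false) {
        header('HTTP/1.1 502 Bad Gateway');
        exit;
    }
    @mkdir(dirname($cache), 0755, true);
    file_put_contents($cache, $body);
}

// Serve the saved copy; later requests never touch IIS. (A real version
// would also capture and replay the Content-Type header.)
readfile($cache);

Presumably, plain static files were saved under the document root so Nginx's -f check would serve the next hit directly; for parameterized URLs, a script-side cache like the one above is one way to keep repeat combinations off IIS.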
Going live
After testing everything locally, we had the client update their DNS, sending all traffic to Nginx instead of IIS.
The impact on performance has been very noticeable.
IIS CPU utilization went down from 70% to below 5% at all times, and Nginx was barely breaking a sweat, handling the majority of requests locally and reverting to IIS only when presented with a combination of parameters it had never seen before.
We later developed a simple way to "expire" content on Nginx so that whenever the client updated the IIS Content Management System, the changes would propagate properly.
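The expiry mechanism isn't shown in the post, but with the cache layout sketched above it could be as small as deleting the affected files so the next request falls through to IIS again. A hypothetical hook, assuming the same /var/cache/iis layout:

// expire.php -- hypothetical expiry hook, assuming the cache layout from
// the notfound.php sketch above. One cached copy may exist per country
// folder, so sweep them all; the next request will re-fetch from IIS.
function expire_url($host, $uri)
{
    foreach ((array) glob('/var/cache/iis/*/' . $host . '/' . md5($uri)) as $file) {
        unlink($file);
    }
}

// e.g. called after an editor saves a page in the CMS:
expire_url('DomainA.com', '/dosomething?a=b');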
There is one aspect of this solution that is still lacking and worth mentioning. In the event of a sudden burst of requests with never-before-seen parameters, the current implementation forwards all of them to IIS until the files are created locally. A better approach would be to queue requests for new content, so that IIS is hit no more than once per item during such a burst.
Implementing a RabbitMQ/Cassandra-backed queue for new requests would be the next step here.
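A full RabbitMQ/Cassandra queue is one way to get there. A lighter-weight sketch of the same idea wraps the IIS fetch in a file lock, so only the first request for a new parameter combination contacts IIS while concurrent ones wait for the cached copy. This is hypothetical, not what was deployed:

// Hypothetical coalescing wrapper around the IIS fetch in notfound.php.
// Only the first request for a given cache file talks to IIS; concurrent
// requests for the same file block on the lock, then find the file ready.
function fetch_once($cache, $url)
{
    $fp = fopen($cache . '.lock', 'c');
    flock($fp, LOCK_EX);                 // serialize fetches per cache key
    if (!file_exists($cache)) {          // re-check: a waiter may arrive late
        $body = file_get_contents($url);
        if ($body !== false) {
            file_put_contents($cache, $body);
        }
    }
    flock($fp, LOCK_UN);
    fclose($fp);
}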
In Conclusion
SPI engineers came up with a quick fix that didn't involve any changes to the original application and made a huge impact on throughput and on the number of concurrent connections the service can handle.
If you're dealing with massive traffic and you're not using Nginx yet, you owe it to yourself to take it for a spin.
Comments

Udi, 11-08-2010
Instead of bothering to set up Nginx, you can simply add this line to IIS:
<%@OutputCache Duration="60" VaryByParam="none" %>
It will tell IIS to cache all pages in memory for 60 seconds.
Adrian Singer, 11-08-2010
With this directive, IIS will not recompute each page on every request, but will instead serve it from an in-memory cache for up to 60 seconds. It will definitely help, and it's worth testing whether the impact is sufficient for your current traffic requirements.
But understand that all traffic will still be hitting the IIS machine. This is "scaling up" vs. "scaling out".
In other words, you are making a single machine capable of handling more traffic. Ultimately, even with in-memory caching in IIS, you will hit a ceiling on how much that machine can handle and will need a way to scale out, meaning a way to linearly throw more machines at the problem so you can effectively handle an unlimited amount of traffic.
Louis Galipeau, 12-30-2010
Cool. Though from what I've been reading recently, if statements within a location directive can be evil:
http://wiki.nginx.org/Pitfalls#Using_If
Have you had a chance to look at the try_files directive?
Possibly a configuration that looked something like this would work better:
try_files $uri $uri/ @proxy;

location @proxy {
    rewrite ^ /notfound.php?$request_uri last;
}
OR
try_files $uri $uri/;
error_page 404 /notfound.php?$request_uri;