LAMP forms the base of most web applications. As the load on a server increases, the bottlenecks in the underlying infrastructure become more apparent in the form of slow responses to user requests.
To overcome this slow response, the first choice of most people is to add more hardware resources (in the case of AWS, moving to a bigger instance type). This will definitely increase performance, but it will also cost you more money. The web server and the database eat most of the resources; the most commonly used web server is Apache and the most common database is MySQL, so if we can optimize these two we can improve overall performance.
Apache optimization techniques can often provide significant speed boosts even when other acceleration techniques, such as a CDN, are already in use. mod_pagespeed is a module from Google for the Apache HTTP Server that can improve the page load times of your website; you can read more about it here. If you want to deploy a PHP app on the AWS cloud, it is better to use some kind of caching mechanism, which has already been discussed on our blog.
We once came into a situation where we had to use a micro instance for a web server receiving fewer than 500 hits a day.
When the site started running live, we felt disappointed: when accessing the website, it would sometimes pause for several seconds before serving the requested page. It took hours to figure out what was going on. Finally we ran the top command and quickly discovered that when the site was being accessed by a certain number of users the CPU would spike, but the spike was not the typical user or system CPU. To test what was happening on the server we used the Apache benchmark tool ab and ran the following command against the server:
#ab -n 100 -c 10 http://mywebserver.com/
This shows how fast our web server can handle 100 requests, with a maximum of 10 requests running concurrently. In the meantime we were monitoring the output of top on the web server.
For further investigation we started with sar, the Linux command to collect, report, or save system activity information:
#sar 1
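By default sar reports CPU utilization; on an EC2 instance the column worth watching is %steal, the percentage of time the hypervisor withheld CPU cycles from the VM. A minimal check (sar -u selects CPU statistics explicitly; 1 5 takes five one-second samples):
#sar -u 1 5
A consistently high %steal while %user stays low is the throttling signature described below.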
According to Amazon's documentation, "Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available".
If you use 100% CPU for more than a few minutes, Amazon will "steal" CPU time from the instance, meaning that they throttle your instance. This can last as long as five minutes, then you get a few seconds at 100% again, and then the restrictions are back. This will affect your website, making it slow and even timing out requests. Stolen time basically means the physical hardware is busy and the hypervisor cannot give the VM the amount of CPU cycles it wants.
The real tuning is required on the prefork MPM. This is where we can tell Apache how many processes it may generate. The default values are high and cannot be handled by a micro instance. Suppose you get 10 concurrent requests for a PHP page that needs around 64MB of RAM per request (you have to make sure that the PHP memory_limit is above that value). That is around 640MB of RAM on a micro instance with 613MB of RAM. And this is the case with only 10 connections; Apache is configured to allow 256 clients by default. We need to scale this down, normally to 10-12 MaxClients. In our case even that is a huge number, because 10-12 concurrent connections would use all our memory. If you want to be really cautious, make sure that your maximum memory usage stays below 613MB. Something like a 64M PHP memory limit and 8 MaxClients keeps you under the limit with space to spare; this helps ensure that our MySQL process still has memory when the server is under load.
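A prefork configuration along these lines, in apache2.conf, might look like the following (the numbers reflect the assumptions above, not universal values):
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 2
MaxSpareServers 4
MaxClients 8
MaxRequestsPerChild 1000
</IfModule>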
MaxClients is an important tuning parameter for the performance of the Apache web server. We can calculate its value for a t1.micro instance.
Theoretically,
MaxClients = (Total Memory – Operating System Memory – MySQL Memory) / Size Per Apache Process
A t1.micro gives us a server with 613MB of total memory. Suppose we are using RDS instead of a local MySQL server, so the MySQL term drops to zero.
Stop Apache and run:
#ps aux | awk '{sum += $6} END {print sum / 1024}'
This prints the amount of resident memory, in MB, used by processes other than Apache (the sixth column of ps aux is the RSS in KB).
Suppose we get a value of around 30.
From the top command we can check the average amount of memory that each Apache process uses; suppose it is 60MB.
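If you prefer a single command over eyeballing top, something like this works with Apache running under load (a sketch; it assumes the processes are named apache2, as on Ubuntu):
#ps -C apache2 -o rss= | awk '{sum += $1; n++} END {if (n) print sum / n / 1024}'
This averages the RSS of all apache2 processes and prints the result in MB.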
MaxClients = (613 – 30) / 60 = 9.71 ≈ 10
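The same arithmetic on the command line, as a quick sanity check (the numbers are the measurements assumed above):
#awk 'BEGIN {print (613 - 30) / 60}'
which prints 9.71667, so about 10 clients at most.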
Micro instances are awesome, especially when cost is a major concern; however, they are not right for every application. A simple website with only a few hundred hits a day will do just fine, since it only needs CPU in short bursts.
For servers that serve dynamic content, a better approach is to employ a reverse proxy. This can be done with Apache's mod_proxy or Squid. The main advantages of this configuration are content caching, load balancing, and so on. The easy method is to use mod_proxy and the ProxyPass directive to pass content to another server. mod_proxy supports a degree of caching that can offer a significant performance boost, but another advantage is that since the proxy server and the web server are likely to have a very fast interconnect, the web server can quickly serve up large content, freeing up an Apache process, while the proxy slowly feeds the content out to clients.
If you are using Ubuntu, you can enable the modules with:
#a2enmod proxy
#a2enmod proxy_http
and add the following to apache2.conf:
ProxyPass / http://192.168.1.46/
ProxyPassReverse / http://192.168.1.46/
The ProxyPassReverse directive captures the responses from the backend web server and rewrites the URL so the response appears to come directly from the proxy, hiding the identity and location of the backend. This is good security practice, since an attacker won't be able to learn the IP of our actual web server.
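In practice these directives usually live inside a virtual host rather than the global config; a minimal sketch (the backend IP is from the example above, the ServerName is illustrative):
<VirtualHost *:80>
ServerName www.example.com
ProxyRequests Off
ProxyPass / http://192.168.1.46/
ProxyPassReverse / http://192.168.1.46/
</VirtualHost>
ProxyRequests Off is worth keeping explicit: it ensures the server acts only as a reverse proxy and never as an open forward proxy.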
Caching with Apache2 is another important consideration. We can configure Apache to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files, such as images, CSS, and JS files, to a date in the future so that these files will be cached by your visitors' browsers. This saves bandwidth and makes the website appear faster; if a user visits your site a second time, static files will be fetched from the browser cache.
#a2enmod expires
Then edit /etc/apache2/sites-available/default and add:
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/gif "access plus 4 weeks"
ExpiresByType image/jpeg "access plus 4 weeks"
ExpiresByType text/css "access plus 4 weeks"
ExpiresByType application/javascript "access plus 4 weeks"
</IfModule>
This tells browsers to cache images, CSS, and JS files for four weeks.
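You can verify the headers are being set with a quick request for any static file (the URL here is illustrative):
#curl -sI http://mywebserver.com/logo.gif | grep -i -E 'expires|cache-control'
Both an Expires date four weeks out and a matching Cache-Control max-age should show up in the response.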
If your server requires a large number of read/write operations, you might consider provisioned IOPS EBS volumes. This is really effective if you run the database server on EC2 instances. We can use iostat on the command line to take a look at reads/sec and writes/sec, and we can also use CloudWatch metrics to determine read and write operations.
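For example (iostat is part of the sysstat package; -d shows device utilization, -x adds extended statistics, and 5 repeats every five seconds):
#iostat -dx 5
The r/s and w/s columns give the read and write operations per second, which you can compare against the IOPS you have provisioned.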
Once we move to the security side of Apache, our major concern is DDoS attacks. If a server is under a DDoS attack, it is quite difficult to detect the attack before the damage is done. Attack packets usually have spoofed source IP addresses, which makes them harder to trace back to their real source. The limit on the number of simultaneous requests that will be served by Apache is decided by the MaxClients directive and is set to a safe limit by default; any connection attempts over this limit will normally be queued up.
If you want to protect Apache against DoS and DDoS attacks, use the mod_evasive module. This module is designed specifically as a remedy for Apache DoS attacks. It allows you to specify a maximum number of requests executed by the same IP address; if the limit is reached, the IP address is blacklisted for the time period you specify.
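A starting configuration might look like this (these directives come from the module's documentation; the thresholds are illustrative and should be tuned to your traffic):
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
</IfModule>
Here an IP requesting the same page more than twice in one second, or more than 50 objects site-wide in one second, gets blocked for 10 seconds, with the block renewing as long as the abusive requests continue.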