This blog post should help you set up some basic security measures on your stand-alone webserver. It focuses on a typical LAMP stack and open source security solutions, but should in principle apply to other Linux web server and database setups as well. I'm going to give a rather high-level overview, since the details of each step depend on your specific distribution and configuration. Most of the tools and software mentioned here allow for complex adjustments, but they already increase your security when run with only slight modifications to the default configuration.
I highly recommend that you do more exhaustive research on your own on each of the mentioned steps, especially when you are going to host sensitive customer data and personally identifiable information on your webserver.
Here are some basic steps to achieve a minimal security level on the server side (web apps are going to be covered in a later post):
- Disable unnecessary services, accounts and users
- Check file permissions and separate processes, services and users
- Secure existing services
- Install a host based firewall
- Set up a web application firewall
- Baseline your system and monitor changes with an HIDS
- Regularly install security updates and consider auto-updating
- Create regular backups and be able to restore them
1. Disable unnecessary services and users.
After you set up your server, check what is running. Get a list of listening ports, check what you need, and disable what you don’t need.
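For example, with ss from the iproute2 suite (add -p to see the owning processes; the output will of course differ on your system):

```shell
# List all listening TCP sockets with numeric port numbers:
ss -tln
```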
In this example sshd, mysqld, vsftpd, apache2 and dhclient are running, while only apache2, sshd and vsftpd listen for external connections. MySQL is not listening for remote connections, which is fine, since our web host is self-contained and neither a database server nor part of a cluster. We might want to disable plain FTP and opt for FTPS or SFTP later.
2. Check file permissions and separate processes, services and users
Needless to say, you should keep your user directories separate and set user and group rights accordingly. This applies to local users as well as users who have access (solely) through (S)FTP(S) or SSH. If you are running PHP, use separate users for different applications (or groups of applications) - see the PHP section for more info. In general, you should stick to the principle of least privilege and provide each user, service and application only with the absolute minimum of privileges it needs to perform its tasks properly.
This also applies to cron jobs and custom scripts. A common flaw on Linux systems is scripts or executables that run as root but are world writable, or that read configuration files that are world writable. When attackers get a local account on your system, e.g. a webshell through a compromised web application, they can easily escalate their privileges and gain complete control. You don't want that, so it is important to only run as root what absolutely must run as root. Check that your config files are adequately protected as well.
There are some scripts which check linux servers for weak privileges. Use them, but don’t rely on them exclusively.
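A quick manual check for world-writable files might look like this (the directories are examples; scan whatever runs with elevated privileges on your host):

```shell
# Find world-writable regular files - a classic local privilege escalation vector.
# Errors for unreadable paths are discarded and the exit status ignored on purpose.
find /etc /usr/local/bin -xdev -type f -perm -0002 2>/dev/null || true
```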
Here is also an excellent article on examining your system from an attacker’s point of view: https://blog.g0tmi1k.com/2011/08/basic-linux-privilege-escalation/
3. Secure existing services
These are services that you are probably going to need on a standalone webserver:
- SSH
- FTP (better: SFTP or FTPS)
- Apache 2.4 (or nginx or some other server software)
- a database
Let’s go into more detail:
a) SSH
SSH is pretty secure by itself, but there are still a few things you can do to increase its security. First of all, you should enable public key authentication and disable password authentication. Create a key pair with 4096 bits (for RSA) and protect the private key with a strong passphrase. You can generate the pair on your Linux or Mac client with ssh-keygen. For Windows, there are tools like puttygen that serve the same purpose.
Copy the public key from the client to the .ssh directory on the server:
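For example (the key type and target host are placeholders; ssh-copy-id ships with OpenSSH):

```shell
ssh-keygen -t rsa -b 4096                          # prompts for a passphrase
ssh-copy-id -i ~/.ssh/id_rsa.pub user@your-server  # appends the key to authorized_keys
```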
If you cannot use ssh-copy-id, just paste the public key into `~/.ssh/authorized_keys` on the server.
Test your new key-based login. If it works, disable password-based login in the SSH config file (usually /etc/ssh/sshd_config).
You should not allow the root user to log in directly, but instead allow a normal user account to switch to root via the 'su' command and a separate password. So, for SSH set:
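A minimal sketch of the relevant directives in /etc/ssh/sshd_config (defaults vary by distribution):

```
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
```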
Restart your SSH server after you have made configuration changes. Again, check the steps for your specific distribution; your default settings may vary, and more or less configuration might be needed. Depending on your needs, you might also consider jailing SSH users to their home directories.
Besides securing SSH itself, you should limit authentication attempts by installing a tool such as fail2ban. Fail2ban bans a user's IP for a certain amount of time after a specified number of failed login attempts. Since it is quite unlikely that someone will successfully brute-force your key, you could leave this out, but there may be times when you enable password-based login to give someone quick access to your server, and then fail2ban protects you. It also saves you some resources even with public key login, since it blocks resource-wasting brute-forcing attempts. So if you notice a lot of brute-forcing in your logs (e.g. /var/log/auth.log), the few minutes needed to set up fail2ban may be well spent.
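A minimal jail configuration might look like this (the values are illustrative; put your overrides in /etc/fail2ban/jail.local):

```
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 600
```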
To summarize, when running SSH:
- Enable public key authentication and disable password-based authentication
- Disable root login
- Consider jailing users to their home directories
- Limit brute-forcing attempts with fail2ban or similar tools
b) FTP
If you can, avoid using plain FTP. By default, it encrypts neither authentication nor any of its transfers. Instead, use SFTP or FTPS. Admittedly, these are a little more complex to set up, but as always, the additional security benefits may well be worth it. Unfortunately, not all clients support SFTP or FTPS connections, so you might have to live with what can be used in your environment.
SFTP is, strictly speaking, not FTP but an FTP-like protocol that operates over SSH and as such has the advantage of being tunneled through SSH on port 22. It piggybacks on a well-established protocol and does not require you to open another port if you already use SSH.
FTPS, on the other hand, is FTP encrypted with SSL/TLS, usually using ports 989 (data transfer) and 990 (commands). FTPS needs an SSL/TLS certificate, ideally from a trusted certificate authority. In general, SFTP is to be preferred over FTPS, but what you end up using depends on your requirements and possibilities.
Here is a more extensive comparison of both protocols.
When setting up any kind of FTP-like service, think about what you need in terms of directory rights and file access. You should also disable anonymous logins and jail users to their specific home directories. Users added just for FTP login probably don't need a shell, so set their shell to /bin/false (or whatever corresponds to "no shell" on your system). In general, for securing an FTP service of any kind you should:
- Disable anonymous login and any possible default logins
- Chroot users to their home directories
- Disable shell access for users that should only log in via FTP
- Adjust your firewall settings accordingly
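If you happen to use vsftpd, the corresponding options are roughly (a sketch, not a complete configuration):

```
# /etc/vsftpd.conf
anonymous_enable=NO
local_enable=YES
chroot_local_user=YES
ssl_enable=YES
```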
More detailed information on how to setup SFTP can be found here: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/SFTP
c) Apache (HTTP and HTTPS)
If it is not done automatically for you during setup, create a dedicated user for Apache and don't run Apache as root. There is no need to do that (besides the initial Apache process, which has to open the ports). Just create a dedicated user and group that Apache can run as. You can check which user Apache is running as with ps:
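For example (the process name "apache2" and the www-data user reflect a Debian-style setup):

```shell
ps aux | grep '[a]pache2'
```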
As you will notice, only the first process runs as root, while all the other processes run as www-data. Make sure that the Apache user only has access to the files and directories it needs and does not serve confidential configuration files to the public.
By default, Apache is pretty verbose when it comes to talking about itself, and you should limit that verbosity to the outside world. You can see what is exposed by examining the response headers of a simple HTML page.
As you might notice, Apache exposes its version number and distribution (Ubuntu) in its headers. This information might turn out useful to an attacker, so turn off Apache's exact version disclosure in its configuration. While you're at it, you may also want to disable ETags:
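On Debian/Ubuntu these settings usually live in /etc/apache2/conf-enabled/security.conf (paths vary by distribution); a minimal sketch:

```
ServerTokens Prod
ServerSignature Off
FileETag None
```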
You should disable directory listing on your virtual hosts by adding `Options -Indexes` to your virtual host configuration.
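In a virtual host, that might look like this (the path is an example):

```
<Directory /var/www/html>
    Options -Indexes
</Directory>
```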
Furthermore, there may be Apache modules installed that you don't need. As with other services, it is wise to disable what you don't need - avoid offering, for example, WebDAV to your visitors when you don't intend to. You can find enabled modules in /etc/apache2/mods-enabled or list them with `apachectl -M`.
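On Debian/Ubuntu, listing and disabling modules might look like this (WebDAV as an example):

```shell
apachectl -M              # list loaded modules
a2dismod dav dav_fs       # disable the WebDAV modules
systemctl restart apache2
```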
To summarize, for Apache:
- Make sure that Apache does not run as a privileged user and check file and directory permissions
- Prevent information leakage
- Disable directory listing
- Check for unneeded modules and disable them.
Note that this is just the beginning. There are many more things you could and should consider to harden your Apache installation appropriately. Here is a rather extensive list of security parameters you might want to consider: https://geekflare.com/apache-web-server-hardening-security/
You should also consider installing a web application firewall. More on that in step 5.
d) PHP or other scripting languages
When you use a scripting language, you should apply hardening settings for that specific language. In the case of PHP, you should at least turn off the public display of error messages and decrease the verbosity of its output.
To make PHP hide its version info and stop displaying errors, set:
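A sketch of the relevant php.ini directives (the file's location varies by distribution and PHP version):

```
expose_php = Off
display_errors = Off
log_errors = On
```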
Depending on your specific webapp(s), there are more things to consider, e.g. disabling remote file inclusion to prevent RFI attacks:
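For example, if none of your applications need to open or include remote URLs:

```
allow_url_fopen = Off
allow_url_include = Off
```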
An extensive PHP configuration list for hardening PHP can be found on the OWASP pages: https://www.owasp.org/index.php/PHP_Configuration_Cheat_Sheet
Besides hardening PHP itself, try to separate your different webapps as far as possible. Check if you can use FastCGI with mod_proxy and PHP-FPM to start a different pool of workers for each webapp under a different user and group. This way, one application will not be affected that easily when another application on the same host gets hacked. (Note: PHP-FPM is not limited to Apache but can also be used in combination with nginx.)
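A minimal sketch of one PHP-FPM pool per application (pool name, user and socket path are examples):

```
[shopapp]
user = shopapp
group = shopapp
listen = /run/php/shopapp.sock
listen.owner = www-data
listen.group = www-data
```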
e) Databases
When using a database - whether it is MySQL or any other kind of database - make sure to disable all default accounts and do not allow remote connections when they are not needed. Do not expose the database port through your firewall (see 4) if your database is only supposed to run locally. As with Apache, you should not run your database as root. Drop any test data that comes preinstalled, and you might also want to change the default port for local connections.
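On MySQL/MariaDB, the bundled mysql_secure_installation script walks you through most of this interactively (removing anonymous users, the test database and remote root login):

```shell
mysql_secure_installation
```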
Check the MySQL documentation for common security issues and how to mitigate them, or look into your specific database's documentation. For MongoDB, for example, beware of the famous no-credentials default login.
4. Install a host based firewall
It’s easy to get a basic configuration of iptables up and running. Writing firewall rules, on the other hand, can get quite complex. On a webserver you typically want at least to block pings and implement a default deny policy, which means that all ports except those explicitly allowed get blocked.
On Linux servers there may be more comfortable solutions for your specific distribution than using iptables directly. On an Ubuntu server, as well as on some other distributions, you can use ufw as a simple frontend for setting up basic iptables rules. As with all firewall setups, be careful not to lock yourself out by blocking your own SSH connection.
The following allows access via ssh (port 22), http (port 80) and https (port 443) and enables the firewall.
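With ufw, this might look like the following (run as root):

```shell
ufw default deny incoming
ufw allow 22/tcp    # SSH
ufw allow 80/tcp    # HTTP
ufw allow 443/tcp   # HTTPS
ufw enable
```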
ufw generates a set of iptables rules that you might examine in detail with
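For example:

```shell
ufw status verbose
iptables -L -n -v   # the raw rules ufw generated
```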
You should also disable your server’s default response to ping / ICMP requests. By default, your server will almost certainly be discoverable by ping and therefore also respond to automated scanning tools, which makes it a target for script kiddies. Blocking ICMP requests needs just a little work on ufw’s config files, so you should definitely do it.
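With ufw, ICMP handling lives in /etc/ufw/before.rules; changing the echo-request rule from ACCEPT to DROP does the trick (a sketch; keep an active session open in case you lock yourself out):

```
# /etc/ufw/before.rules
-A ufw-before-input -p icmp --icmp-type echo-request -j DROP
```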
Other distributions provide similar tools for firewall-management, e.g. firewall-cmd on RedHat.
5. Use a web application firewall
A web application firewall works specifically at the web application level and helps you monitor and prevent attacks that might otherwise exploit flaws in your web application. It analyzes incoming requests for patterns and logs and / or blocks these requests if it considers them malicious.
A good and common choice for protecting a webapp behind Apache, nginx or similar is ModSecurity. ModSecurity allows you to analyze incoming and outgoing requests. It serves as a web application firewall and, to a certain degree, a data loss prevention system. It has all the pros and cons of a host-based IDS / IPS, but above all it is a much cheaper solution than buying a physical or virtual web application firewall appliance. Since it runs as an Apache module, it does affect the performance of your webserver, though. You may mitigate ModSecurity’s performance impact by using a caching proxy like Varnish, or just live with it.
The ModSecurity documentation recommends running ModSecurity in monitoring mode for a while, without actually blocking suspicious requests, to figure out which rules you need to adapt for your specific web application, so you avoid accidentally locking your customers out.
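The switch between the two modes is a single directive in modsecurity.conf:

```
SecRuleEngine DetectionOnly   # log only; change to "On" to actually block
```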
6. Baseline your system with an HIDS and monitor for changes
When you are done setting up your system, create an initial baseline to detect any unauthorized changes to central configuration files. Host-based intrusion detection systems (HIDSs) like OSSEC, AIDE or Tiger will help you do that and notify you of any malicious changes. They do not prevent attacks, but help you preserve the integrity of your system, usually by storing hashes of configuration files in a database.
Part of baselining your system is also getting an initial overview of running (network) services (see point 1 of this post). You might take notes about memory consumption during normal operations, since memory spikes may indicate malicious activity.
Besides making use of an HIDS, you should check your system regularly for (hard-to-detect) rootkits with the help of a rootkit checker like chkrootkit. If your web application allows file uploads, or if you allow uploads through SSH and FTP, you might want to consider running an antivirus scanner like ClamAV.
7. Install security updates and consider auto-updating
Regularly install security-relevant updates on your webserver and consider automating this process. Most Linux distributions offer appropriate tools that are easy to set up, e.g. unattended-upgrades or cron-apt for Debian derivatives and yum-cron for Red Hat derivatives. Of course, every update has a chance of interfering with running services on your production system, and an Apache update may cause an automatic restart of Apache. Whether you want to automate these updates is up to you; I would strongly recommend doing so, since I have never run into trouble with automatic security updates on production servers.
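On Debian/Ubuntu, for example, the unattended-upgrades package is enabled with two lines (a minimal sketch):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```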
8. Create regular backups and be able to restore them!
Even the most secure server is going to lose data sooner or later. The question is not if, but when this will happen. The cause may be a simple hardware failure, a damaged RAID array, a human mistake on the command line or similar things that are not the fault of an external attacker. Better prepare for the case that your server suddenly becomes inaccessible and always store a copy of your data elsewhere. (Note: depending on your database, it may not be enough to do a simple VM snapshot, since the IDs might get messed up.)
You can start simple with an rsync script, rsnapshot or some other solution. Of course, you should store your backups in a separate place that is not easily accessible from your webserver, so that if your webserver gets compromised, your backups are outside of the attacker’s reach.
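A minimal rsync sketch (user, host and paths are examples; run it from cron or a systemd timer):

```shell
rsync -az --delete /var/www/  backup@backuphost:/backups/www/
rsync -az /var/backups/db/    backup@backuphost:/backups/db/
```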
While all these things will definitely take some time, they should be considered the minimum for a production system where you store confidential user data and personally identifiable information. Of course, they do not (fully) protect you from badly written web applications or sloppy configurations, nor from serious zero-day attacks or social engineering. But they do raise the bar for many of the attackers and automated scanning tools out there.