Doing Denyhosts a bit better

At my $dayjob we have been using denyhosts to help protect our systems from the potential bad guys and botnets trying to break in. Apparently the latest craze is getting rooted unix systems incorporated into botnets using some combination of iframes, nginx, and the usual array of standard hacking methods. One attack vector is brute-forcing ssh, which isn't new, but it has been an increasing threat. As proof of the increasing sophistication of attackers, yesterday I noticed an incoming attack using common Dutch first names. The goal, obviously, is to find a legitimate user with a weak password.

There are a few ways to protect yourself against this vector, some easier than others, and using a mix of them helps even more. The usual advice applies: make usernames non-obvious, enforce strong password policies, and all the other niceties that manage to start flamewars so easily. You can also disable remote root login with password authentication, or block root completely unless you're already logged in. One step further is to enforce sudo-only access.
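
In sshd_config that can be something like the following. This is a generic sketch rather than our exact config, and the ssh-users group is an assumption:

    # /etc/ssh/sshd_config -- one possible hardening pass (sketch, not our config)
    # No root logins over ssh at all; "without-password" would allow key-only root.
    PermitRootLogin no
    # Keys only; drop this if password logins have to stay for some users.
    PasswordAuthentication no
    # Assumes you maintain a group of accounts allowed in over ssh.
    AllowGroups ssh-users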

One step we also take is using denyhosts to blacklist IP addresses that are clearly showing the behavior of an attacker. Examples of this behavior are logging in with a non-existent username, logging in with an obviously bad username such as 'www-data' or 'oracle', logging in as more than one or two users, too many bad password attempts, logging in as root, and so on. Denyhosts handles the heuristics for us nicely, but we wanted to take this a step further. The admins before me put together a nice script where one server converts the output from denyhosts and passes it along to our puppet configuration, which in turn deploys a blacklist across the entire network. This does two things: it protects servers that have not yet been touched by a given attacker, while denyhosts running on each machine still provides an initial level of blocking.
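
A rough sketch of that collection step might look something like this; the paths are placeholders rather than our real setup, and it assumes denyhosts writes entries in the usual hosts.deny 'daemon: client' form:

    #!/usr/bin/env python3
    # Sketch only: gather the addresses denyhosts has denied so far into a
    # flat file that puppet can ship to every node.

    HOSTS_DENY = "/etc/hosts.deny"
    OUTFILE = "/var/lib/blacklist/denied_hosts.txt"   # hypothetical drop point for puppet

    def denied_entries(path=HOSTS_DENY):
        """Yield the client part of every active entry, e.g. 'sshd: 203.0.113.45'."""
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                if ":" in line:
                    yield line.split(":", 1)[1].strip()

    def main():
        # Whitelist filtering deliberately left out here; see the check further down.
        entries = sorted(set(denied_entries()))
        with open(OUTFILE, "w") as out:
            out.write("\n".join(entries) + "\n")

    if __name__ == "__main__":
        main()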

In this process, we discovered a few problems with how denyhosts behaves in terms of network performance. One problem is that processing the output from denyhosts can be pretty heavy on a system and can involve a huge number of DNS queries. When I tried to tweak the script to block more things, it overloaded the system, took out our LDAP server, users couldn't authenticate, and denyhosts started systematically blocking our own network. While certain critical paths were whitelisted, we still ended up with a screwed-up network for a few hours, and had one hell of a morning. One local best practice we have now is to run processes like these on a separate VM dedicated to the task. The further we can offload the processing from critical network space, the less likely we are to overload the network. The VMs also run on a hypervisor that's dedicated to our failover services, which are only activated when the primary network goes down.
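
The other sanity check worth having is cheap: refuse to blacklist anything inside our own address space before it ever reaches puppet. A minimal sketch, with placeholder ranges standing in for the real networks:

    #!/usr/bin/env python3
    # Sketch only: never let our own ranges end up on the blacklist.
    import ipaddress

    WHITELIST = [
        ipaddress.ip_network("10.0.0.0/8"),     # placeholder for the internal network
        ipaddress.ip_network("192.0.2.0/24"),   # placeholder for e.g. the LDAP/auth segment
    ]

    def is_whitelisted(addr):
        """True if addr is an IP literal inside any whitelisted network."""
        try:
            ip = ipaddress.ip_address(addr)
        except ValueError:
            # Not an IP literal (probably a hostname); let the caller decide.
            return False
        return any(ip in net for net in WHITELIST)

    if __name__ == "__main__":
        for candidate in ("10.1.2.3", "203.0.113.45"):
            print(candidate, "whitelisted" if is_whitelisted(candidate) else "ok to block")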

The second problem is that denyhosts will put hostnames as well as IP addresses in /etc/hosts.deny. Once we had the setup running for a bit, we realized that network traffic was just a bit laggier than normal. Ultimately, blacklisting at the firewall level and then running a nearly identical blacklist in /etc/hosts.deny means every connection gets filtered twice, which costs more than it needs to. For now I cleaned the denyhosts entries out of /etc/hosts.deny, and in the future we'll have to look at a lower-impact way of getting more machines protected faster.
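
One lower-impact direction is probably to block at the firewall only and keep hostnames out of the distributed list entirely, so nothing downstream has to resolve anything to enforce it. A rough sketch, with made-up paths and a plain iptables rule per address:

    #!/usr/bin/env python3
    # Sketch only: turn the distributed blacklist into firewall rules,
    # silently skipping anything that isn't an IP literal.
    import ipaddress

    BLACKLIST = "/var/lib/blacklist/denied_hosts.txt"   # hypothetical file from the collector

    def ip_literals(entries):
        for entry in entries:
            try:
                ipaddress.ip_address(entry)
            except ValueError:
                continue          # hostname or junk; leave it out
            yield entry

    def main():
        with open(BLACKLIST) as f:
            entries = [line.strip() for line in f if line.strip()]
        # Print the commands rather than running them, so they can be reviewed
        # or handed off to puppet.
        for ip in ip_literals(entries):
            print("iptables -A INPUT -s {} -p tcp --dport 22 -j DROP".format(ip))

    if __name__ == "__main__":
        main()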

One idea I have for the future is to make the system event-driven rather than cron-driven. When denyhosts decides it needs to block a host, it would call something that sends a message across the network to the server that does all the processing, sanitizing, whitelist checking, and so on, so that machines can respond right away while still protecting the whole network (a rough sketch of the glue is below). Realistically, what we should be doing is blacklisting the entire intarwebz except for our external webservers and having our users work from a VPN, but that's a story for another day. All in all it was a fun exercise in thinking practically about the load we're putting on our servers before we deploy it.
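
For the curious: if memory serves, denyhosts already has a hook for this, a PLUGIN_DENY setting in denyhosts.cfg that runs an external command with the newly denied host as its argument. The glue could then be as small as this sketch (the collector URL is made up):

    #!/usr/bin/env python3
    # Sketch only: denyhosts plugin that reports a newly denied host to a
    # central collector, which would do the sanitizing and whitelist checks.
    import json
    import sys
    import urllib.request

    COLLECTOR_URL = "http://blacklist-collector.internal:8080/deny"   # hypothetical endpoint

    def report(host):
        payload = json.dumps({"host": host}).encode("utf-8")
        req = urllib.request.Request(
            COLLECTOR_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        # Short timeout so a dead collector never holds up denyhosts itself.
        urllib.request.urlopen(req, timeout=5)

    if __name__ == "__main__":
        if len(sys.argv) > 1:
            report(sys.argv[1])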
