# 2019-01-20 fail2ban to watch over my sites

I’m hosting my sites on a tiny server. At the same time, it’s all dynamic web apps and wikis and CGI scripts that take CPU and memory resources. Not a problem if humans are browsing the site. But if people try to download the entire site using automated tools that don’t wait between requests (leeches), or that don’t check the meta information included in HTTP headers and HTML tags, then they overload my sites and get lost in the maze of links: page history, recent changes, old page revisions... you can keep downloading forever if you’re not careful.

Enter fail2ban. This tool watches log files for regular expressions (filters) and if it finds matches, it adds the offending IP numbers to the firewall. You then tell it which filters to apply to which log files and how many hits you’ll allow (a jail).

When writing the rules, I just need to be careful: it’s OK to download a lot of static files. I just don’t want leeches or spammers (trying to brute-force the questions I sometimes ask before people get to edit their first page on my sites).

Here’s my setup:

## alex-apache.conf

This is for the Apache web server with virtual hosts. The comment shows an example entry.

Notice the `ignoreregex` to make sure that some of the apps and directories don’t count.

Note that only newer versions of `fail2ban` will be able to match IPv6 hosts.

```
# Author: Alex Schroeder <alex@gnu.org>
[Definition]
# ANY match in the logfile counts!
# communitywiki.org:443 000.000.000.000 - - [24/Aug/2018:16:59:55 +0200] "GET /wiki/BannedHosts HTTP/1.1" 200 7180 "https://communitywiki.org/wiki/BannedHosts" "Pcore-HTTP/v0.44.0"
failregex = ^[^:]+:[0-9]+ <HOST>
# Except cgit, css files, images...
# alexschroeder.ch:443 0:0:0:0:0:0:0:0 - - [28/Aug/2018:09:14:39 +0200] "GET /cgit/bitlbee-mastodon/objects/9b/ff0c237ace5569aa348f6b12b3c2f95e07fd0d HTTP/1.1" 200 3308 "-" "git/2.18.0"
ignoreregex = ^[^"]*"GET /(robots.txt |favicon.ico |[^/ ]+.(css|js) |cgit/|css/|fonts/|pics/|1pdc/|gallery/|static/|munin/|osr/|indie/|face/|traveller/|hex-describe/|text-mapper/)
```
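To check a filter before deploying it, fail2ban ships with `fail2ban-regex`. A minimal sketch, assuming the filter was saved under `/etc/fail2ban/filter.d/` and the usual Debian Apache log location; the same kind of check works for all the filters on this page:

```
# Report how many log lines match failregex
# (and how many lines ignoreregex skips)
fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/alex-apache.conf
```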

## alex-gopher.conf

Yeah, I also make the wiki available via gopher...

```
# Author: Alex Schroeder <alex@gnu.org>
[Init]
# 2018/08/25-09:08:55 CONNECT TCP Peer: "[000.000.000.000]:56281" Local: "[000.000.000.000]:70"
datepattern = ^%%Y/%%m/%%d-%%H:%%M:%%S
[Definition]
# ANY match in the logfile counts!
failregex = CONNECT TCP Peer: "\[<HOST>\]:\d+"
```

## alex.conf

Now I need to tell `fail2ban` which log files to watch and which filters to use.

Note how I assume a human will basically click a link every 2s. Bursts are OK, but 20 hits in 40s is the limit.

Notice that the third jail just reuses the filter of the second jail.

```
[alex-apache]
enabled = true
port = http,https
logpath = %(apache_access_log)s
findtime = 40
maxretry = 20

[alex-gopher]
enabled = true
port = 70
logpath = /home/alex/farm/gopher-server.log
findtime = 40
maxretry = 20

[alex-gopher-ssl]
enabled = true
filter = alex-gopher
port = 7443
logpath = /home/alex/farm/gopher-server-ssl.log
findtime = 40
maxretry = 20
```
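Once the filters and jails are in place, fail2ban has to pick them up. A quick sketch, assuming a standard installation:

```
# Reload the configuration
fail2ban-client reload
# List all jails to confirm the new ones are active
fail2ban-client status
```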

#Administration #Web #fail2ban

## Comments

(Please contact me if you want to remove your comment.)

⁂

Blimey. I don’t remember the last time anyone did anything gophery I noticed.

– Blue Tyson 2019-01-30 10:47 UTC

=> http://cosmicheroes.space/blog Blue Tyson

---

The #gopher discussion is alive and well on Mastodon... 🙂

=> https://octodon.social/tags/gopher #gopher

– Alex Schroeder 2019-01-30 13:11 UTC

---

Something to review when I have a bit of time: Web Server Security by @infosechandbook.

=> https://infosec-handbook.eu/categories/web-server-security/ Web Server Security
=> https://mastodon.at/users/infosechandbook @infosechandbook

– Alex Schroeder 2019-02-01 18:11 UTC

---

OK, I added a meta rule: If people get banned a few times, I want to ban them for longer periods! (But see below! There is a better solution.)

This is `filter.d/alex-fail2ban.conf`:

```
# Author: Alex Schroeder <alex@gnu.org>
[Init]
# 2019-07-07 06:45:45,663 fail2ban.actions [459]: NOTICE [alex-apache] Ban 187.236.231.123
datepattern = ^%%Y-%%m-%%d %%H:%%M:%%S
[Definition]
failregex = NOTICE .* Ban <HOST>
```

And my jail in `jail.d/alex.conf` gets a new section that uses this filter:

```
[alex-fail2ban]
enabled = true
# all ports
logpath = /var/log/fail2ban.log
# ban repeated offenders for 6h: if you get banned three times in an
# hour, you're banned for 6h
bantime = 6h
findtime = 1h
maxretry = 3
```
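To see whether the jail really picked up the longer ban time, `fail2ban-client` can query individual settings. A sketch; the values are what I’d expect, in seconds:

```
fail2ban-client get alex-fail2ban bantime   # should print 21600, i.e. 6h
fail2ban-client get alex-fail2ban maxretry  # should print 3
```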

– Alex Schroeder 2019-07-10 10:36 UTC

---

Oh, and if you’re curious, here’s my `fail2ban` cheat sheet. Remember, `fail2ban` keeps its own list of banned IPs, separate from the rest of your firewall rules!

```
# Get all the jails
fail2ban-client status

# List banned IPs in a jail
fail2ban-client status alex-apache

# Unban an IP
fail2ban-client unban 127.0.0.1
```
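The bare `unban` command only exists in newer releases of `fail2ban`; on older versions, the jail has to be named explicitly. A sketch of the equivalent call:

```
# Unban an IP from one specific jail (older fail2ban versions)
fail2ban-client set alex-apache unbanip 127.0.0.1
```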

– Alex Schroeder 2019-07-10 10:38 UTC

---

=> fail2ban_to_watch_over_my_sites.png Munin reports nothing strange

As you can see, Munin is picking up the new rule, but apparently all the bans are due to Apache logs.

I’m quite certain that my SSH bans are zero because I’m running SSH on a non-standard port... 😇 I know, some people disapprove. But I say: everything else being the same, running it on a separate port simply reduces the number of drive-by attacks. In other words, if you’re not being targeted specifically but *incidentally*, then having moved to a non-standard port helps.

– Alex Schroeder 2019-07-10 13:06 UTC

---

@MrManor recently told me that fail2ban watching its own logs is already in `jail.conf`: look for `[recidive]`.

=> https://fosstodon.org/users/MrManor @MrManor

This is what it has:

```
[recidive]
logpath = /var/log/fail2ban.log
banaction = %(banaction_allports)s
bantime = 1w
findtime = 1d
```

There’s also a warning regarding `fail2ban.conf`, saying I must change `dbpurgeage` so that old entries stay in the database long enough for `recidive` to find repeat offenders. No problem: 648000 seconds is 7.5 days, a bit longer than the one week ban time.

```
dbpurgeage = 648000
```

All I need to do is enable it in `jail.d/alex.conf` by writing:

```
[recidive]
enabled = true
```
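After a reload, the built-in jail should show up like any other (a quick sketch):

```
fail2ban-client reload
fail2ban-client status recidive
```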

Now the file `filter.d/alex-fail2ban.conf` and the section `[alex-fail2ban]` in my `jail.d/alex.conf` are both unnecessary.

– Alex Schroeder 2019-07-30 06:25 UTC

---

These days I no longer check for Gopher using fail2ban because I’m using a different solution for Phoebe (my Gemini wiki, which also serves Gopher).

> When I look at my Gemini logs, I see that plenty of requests come from Amazon hosts. I take that as a sign of autonomous agents. I might sound like a fool on the Butlerian Jihad, but if I need to block entire networks, then I will. – 2020-12-25 Defending against crawlers

=> 2020-12-25_Defending_against_crawlers 2020-12-25 Defending against crawlers

– Alex 2021-08-22 11:26 UTC

---

**2024-07-15**. I recently got an email from @reidrac@mastodon.sdf.org, talking about firewall rules instead of using fail2ban. It works well for SSH but not for protocols with multiplexing (such as HTTP/1.1 and up):

> This works when you know the protocol and port and there is no multiplexing, so tracking new connections is meaningful when a failed attempt requires a new one. – Juan

This is the example from the original mail:

```
$ cat /etc/iptables/rules.v4
# Generated by iptables-save v1.4.21 on Tue Feb 16 15:42:27 2016
*filter
:INPUT ACCEPT [729:53717]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [494:129905]
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --mask 255.255.255.255 --rsource
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 6 --rttl --name SSH --mask 255.255.255.255 --rsource -j LOG --log-prefix "SSH_brute_force "
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 6 --rttl --name SSH --mask 255.255.255.255 --rsource -j DROP
COMMIT
# Completed on Tue Feb 16 15:42:27 2016
```
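For completeness, a sketch of how such a file gets applied: on Debian the `iptables-persistent` package loads it at boot, and it can be loaded by hand as root:

```
# Load the saved rules into the running kernel
iptables-restore < /etc/iptables/rules.v4
```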
