Post: [NGINX] Prevent & Block Layer-7 DDoS
10-18-2015, 08:40 PM #1
Octolus
I defeated!
I don't know how many members here are interested in learning this, or even run their own servers, but here are some tips if you have issues with Layer-7 attacks. This is what I've found works best.

Sometimes CloudFlare or other services aren't enough. There are booters like 'YouBoot' that bypass CloudFlare's 'Attack Mode'. I've had great success blocking attacks from up to 800 Windows bots.




Limiting Requests
This can be an issue if you don't do it correctly: limiting requests can block normal users from accessing your website. Therefore, I always limit requests to PHP documents only.
You can do this with something as simple as the following.

Find location ~ \.php$ { and add limit_req zone=one burst=5; in your nginx.conf, or wherever your setup keeps its PHP location block. (I include the PHP handling from a separate file called php.conf; see the sketch below.)

Now add limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s; inside your http { block.

This limits each client to a maximum of 5 requests per second, which is usually more than a user needs for PHP documents. If a client makes more requests than that, I'd consider it a DDoS.
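Putting the two directives together, a minimal sketch of the relevant parts of nginx.conf might look like this (the server_name and the php.conf include are placeholders for whatever your own setup uses):

http {
    # 10 MB shared zone keyed on the client IP, allowing 5 requests/second
    limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;

    server {
        listen 80;
        server_name example.com;

        location ~ \.php$ {
            # permit short bursts of up to 5 queued requests, then reject
            limit_req zone=one burst=5;
            include php.conf;   # placeholder for your PHP/FastCGI settings
        }
    }
}

Rejected requests get a 503 by default and are written to the error log, which the Fail2Ban section below relies on.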




Disable Error & Access Logs
This can be both good and bad: good when you are under attack, bad if someone hacks your website and you want to figure out what they did. When you are under a Layer-7 attack, your logs grow massive. From my own experience, you can cut your CPU usage by over 50% by disabling the access and error logs.
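A minimal sketch of how that might look in the http { } (or server { }) block. Note that error_log can't simply be switched off in many nginx versions, so pointing it at /dev/null at the crit level is the usual workaround. Also keep in mind that the Fail2Ban method below reads the error log, so don't disable it if you rely on that jail:

    access_log off;              # stop writing access logs entirely
    error_log /dev/null crit;    # discard everything below crit severity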




Block bad user-agents
To block bad user-agents, simply add this to your server { block.

Empty User-Agents (Usually Joomla Attacks):
if ($http_user_agent = "") { return 444; }
if ($http_user_agent = " ") { return 444; }
if ($http_user_agent = "-") { return 444; }

WordPress, Joomla, GHP User-Agents:
if ($http_user_agent ~* "PHP|curl|Wget|HTTrack|Nmap|Verifying|PingBack|Pingdom|Joomla|Wordpress") { return 444; }

It returns 444, nginx's special non-standard code that closes the connection immediately without sending any response. This uses barely any resources.

There are some downsides:
- Facebook seems to use an empty user-agent when it parses your website.
- People connecting to APIs might use empty user-agents.
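If you'd rather avoid a pile of if blocks, the same checks can be expressed with a map in the http { } block; a sketch of the idea (assumed equivalent in behavior, test before relying on it):

http {
    # 1 = block, 0 = allow; ~* makes the regex case-insensitive
    map $http_user_agent $block_ua {
        default 0;
        ""      1;
        " "     1;
        "-"     1;
        "~*(PHP|curl|Wget|HTTrack|Nmap|Verifying|PingBack|Pingdom|Joomla|Wordpress)" 1;
    }

    server {
        if ($block_ua) { return 444; }
    }
}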




Fail2Ban & Limit Requests
A perfect combination against botnet attacks, though it requires some processing power and RAM. I take no credit for this method; I found it elsewhere and tweaked it a little.

In /etc/fail2ban, you'll need to create a file called "jail.local" if it's not there already. Then add this content to it:
[nginx-req-limit]
enabled = true
filter = nginx-req-limit
port = all
action = iptables-allports
logpath = /home/nginx/domains/google.com/error.log
findtime = 1200
bantime = 172800
maxretry = 3


Pretty self-explanatory: it scans error.log using the nginx-req-limit filter. If a client trips the request limit three times (maxretry) within 1200 seconds (findtime), that IP gets banned for 172800 seconds, i.e. 48 hours.

Now create a new file in /etc/fail2ban/filter.d/ called nginx-req-limit.conf with this content:
# Fail2Ban configuration file
#
# supports: ngx_http_limit_req_module module

[Definition]

failregex = limiting requests, excess:.* by zone.*client: <HOST>

# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
#
ignoreregex =


This regex matches the lines nginx writes to the error log when a client exceeds the request limit.

Once done, restart fail2ban (service fail2ban restart).

You can run the following command to see whether you have banned any IPs: fail2ban-client status nginx-req-limit
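You can also check that the filter actually matches your log lines with fail2ban-regex, which ships with Fail2Ban (using the paths from the jail above):

fail2ban-regex /home/nginx/domains/google.com/error.log /etc/fail2ban/filter.d/nginx-req-limit.conf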

A few caveats:
- You'll have to make sure the error.log path is correct.
- You'll have to make sure the failed requests actually get logged; you might want to change 444 to 403 on the bad user-agents so those requests get logged too.
- This works against botnet attacks, but it can also ban real visitors who have 25 tabs open at once, or 25 tabs open on a page with an AJAX shoutbox.




Cookie-Check Module
This is an excellent module for filtering out bad traffic without needing CloudFlare. It is an nginx module that performs a simple cookie check to see whether the client can store cookies. In most botnet and booter attacks, the bots can't store cookies, and if they can't, this module denies them access to the site.

There are two ways to do it:
- Set the cookie through headers.
- Set the cookie through a static HTML document.

The most secure option is setting the cookie through headers, since that mode lets you encrypt the cookie and prevent cookie spoofing. The second option is the lightest, since it serves from a static document.
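For the general idea, here is a very rough sketch of a cookie check in plain nginx config. This is not the module, just an illustration: the cookie name ddos_check is made up, there is no encryption so it's trivially spoofable, and real users with cookies disabled would get stuck in a redirect loop:

server {
    location / {
        # no check cookie yet: hand one out and redirect back;
        # bots that can't store cookies loop here and never
        # reach the real content
        if ($cookie_ddos_check = "") {
            add_header Set-Cookie "ddos_check=1; Max-Age=600";
            return 302 $scheme://$host$request_uri;
        }
        # ...normal content handling...
    }
}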
Last edited by Octolus ; 10-18-2015 at 08:55 PM.

The following 12 users say thank you to Octolus for this useful post:

Algebra, HamoodDev, Norway-_-1999, ParadoxSPRX, Rath, seb5594, Passion, TheRichSlut, TheFreakyClown, Trefad, Tustin
07-08-2017, 07:27 PM #11
Algebra
mov eax, 69
Originally posted by Octolus View Post
…
Is this outdated or not?
