Fighting against spammers: my experiences

Starting in June this year, I began to receive about 20 application error notifications every day via email from our church’s web site, which I developed and have been maintaining ever since, all saying that “A potentially dangerous Request.Form value was detected from the client (txtAddress=”. Like many other applications involving membership, the site has a registration form for church members to register so as to receive member-only services, such as receiving devotionals/newsletters via email, receiving event notifications, and so on. The registration form has a multi-line text box for members to type their mailing address. The reason for using a multi-line text box for the address is that we have many ex-members outside the US, and we don’t care much about the format of the address. It was this address text box that triggered ASP.NET to throw the above application error, warning that someone had tried to put some dangerous code in that text box.

My first thought was that it was a “SQL injection attack”, and my question was, “What has been put in that text box?” ASP.NET is gracious and cautious enough to strip the dangerous content out of the error message for you, so that you won’t accidentally be attacked while viewing it. But I needed to know what had been typed in the text box so that I would know how to defend against it, so I first changed my exception handler so that it encodes the exception message before sending the error out via email. Right after I redeployed the application, I got another application error like this: “A potentially dangerous Request.Form value was detected from the client (txtAddress=http://”. To my relief, it was not a SQL injection attack, but spammers trying to abuse our registration form to post spam links.
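The encoding change can be sketched roughly like this (a sketch only, not the site’s actual handler; `SendErrorEmail` is a hypothetical helper standing in for whatever mailer the application uses):

```csharp
using System;
using System.Web;

// Global.asax.cs fragment (sketch): HTML-encode the exception message so
// any dangerous markup inside it is displayed as text in the notification
// email instead of being rendered. SendErrorEmail is a hypothetical helper.
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    string safeMessage = HttpUtility.HtmlEncode(ex.Message);
    SendErrorEmail("Application error", safeMessage);
}
```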

The above application error occurred about once an hour, which made me believe it was a human being, not a robot or script, that was trying to post the spam links, because once per hour is way too slow for a robot or a script. Besides, I had implemented a CAPTCHA on the registration page, which should stop robots and scripts from automatically posting spam data.

The first thing I did against the spammers was to block their IPs. I rewrote my exception handler so that every time an exception is caught, it checks the exception message, and if the message contains “A potentially dangerous Request.Form…”, the handler adds the remote IP address to a blocked-IP list. The application checks the user’s IP before it loads the registration page; if the IP is on the blocked list, it redirects the user back to the default page.
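The two pieces might look something like this (a sketch under those assumptions, not the site’s actual code; `BlockedIps` is a hypothetical wrapper around the blocked-IP store):

```csharp
// Sketch of the exception-handler side: flag the sender's IP whenever the
// "potentially dangerous Request.Form" error shows up.
void OnApplicationError(Exception ex, HttpContext context)
{
    if (ex.Message.Contains("A potentially dangerous Request.Form"))
    {
        // BlockedIps is a hypothetical store (e.g. a database table).
        BlockedIps.Add(context.Request.UserHostAddress);
    }
}

// Sketch of the registration-page side: turn blocked IPs away.
protected void Page_Load(object sender, EventArgs e)
{
    if (BlockedIps.Contains(Request.UserHostAddress))
    {
        Response.Redirect("Default.aspx");
    }
}
```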

Unfortunately, this feature didn’t help much, and I still got about 12 to 15 error notifications every day. The reason, I guess, was that the spammers were using proxy servers and their IPs were changing all the time, which made the IP blocking ineffective.

Since I couldn’t block those spammers, what I could do was discourage them from abusing my registration form and hope they would eventually give up. So I made the following two changes to my application:

1. Users can’t access the registration page directly. They have to visit the default page at least once before they can access the registration page. This is done by creating a session variable on the default page and checking for that variable in the Page_Load event handler of the registration page.

2. Added the following line of code at the very top of the submit button’s event handler:

System.Threading.Thread.Sleep(10000);

This line of code pauses the application for 10 seconds for every user, including the spammers. Normal users can probably bear the 10-second delay, but spammers can’t.

Note: this line of code can also be used to defend web applications against SQL injection attacks and brute-force attacks, because attackers usually use scripts to send tons of requests to web applications automatically, analyze the responses, and repeat the process. Letting your application pause for a few seconds will not bother your users much, but it will hurt the automated scripts very much. For example, suppose an attacker launches a brute-force attack on a login form with a script that can send 100 requests per second. From a one-hour attack, the attacker gets 3600 × 100 = 360,000 responses to analyze. But if you let the login form pause for 3 seconds when submitted, then even a script that keeps 100 requests in flight will only get 3600 × 100 / 3 = 120,000 responses in the same hour. The attacker will most likely give up on your site and move on to other, more “efficient” sites. Remember, the line of code above will NOT prevent attacks; it only discourages attackers from attacking your application.
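The session check in the first change above might look like this (a sketch only; the session key name `VisitedDefault` is made up for illustration):

```csharp
// Default.aspx.cs (sketch): remember that the visitor came through the
// default page. "VisitedDefault" is a made-up key name.
protected void Page_Load(object sender, EventArgs e)
{
    Session["VisitedDefault"] = true;
}

// Register.aspx.cs (sketch): bounce visitors who try to open the
// registration page directly, before they have seen the default page.
protected void Page_Load(object sender, EventArgs e)
{
    if (Session["VisitedDefault"] == null)
    {
        Response.Redirect("Default.aspx");
    }
}
```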

After implementing these two features, the number of application error notifications dropped to about 6 per day. Still annoying, so I needed to take some other approach.

After spending hours searching the Internet, I found that spammers can actually use proxy tools to bypass client-side validation and inject data into the Request stream by changing the view state value. So the best way to protect yourself is to encrypt the view state of your ASP.NET application.

Milan Negovan has written an excellent article about ASP.NET view state on his web site. In it, he explains nearly everything about ASP.NET view state, good and bad. He also created a tool called <machineKey> Generator, which generates a complete machineKey section that can be pasted into web.config to encrypt the view state. So I used his tool to generate a machineKey with the 3DES algorithm and pasted the machineKey section into my web.config. The number of application error notifications has dropped to about one per week ever since. My great thanks to Milan Negovan!
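The resulting web.config section has roughly this shape (placeholder values shown; generate real random keys with the tool rather than copying these, and note that the exact attribute names vary slightly between ASP.NET versions):

```xml
<system.web>
  <!-- Placeholder keys: paste the generated hex strings here. -->
  <machineKey
      validationKey="...generated validation key..."
      decryptionKey="...generated decryption key..."
      validation="SHA1"
      decryption="3DES" />
</system.web>
```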

I sincerely hope my experience will help you in some way if you are facing the same problem.


Milan Negovan
12 years ago

Jeffrey, I’m glad I could help. 🙂 No doubt, fighting spammers is a complex social problem. It takes various measures to counter them; locking view state is one of them.

Banning IPs is not hopeless. It’s just that there are so many zombie computers and relaying proxies of all kinds out there. Still, it’s worth it to automate IP blacklisting.

I’ve commented on this in a recent post by Rick Strahl and provided two links to something I wrote on this subject.

Hope this helps!

12 years ago

Milan, thank you very much for your comments. Those two articles are very helpful. Currently I blacklist bad IPs by storing them in my database, so I can manually remove one if a mistake happens, or manually add one if I need to, all through my web interface. However, storing banned IPs in a text file is also a good idea, as it doesn’t “waste” database space. I will definitely check out your HttpModule and try to implement it in my application. Your site is very helpful, and I have added you to my blogroll.
