Securing a Contact Form - javascript

I have a client whose website I created with WordPress. It has a contact form created with Contact Form 7. This client is a subsidiary of a larger organization whose IT department runs scans on their subdomains. They asked my client to protect Contact Form 7 from malicious scripts or take it down.
When I asked for an example of what they tested, my client informed me that they run tests to see whether a script can be inserted into an input field (e.g. <script>alert('hello');</script>) or passed as a URL query string (e.g. www.mydomain.com/contact?<script>alert('hello');</script>).
With the query string, the contact form sets the action to: action="/?scriptalert('hello');/script#wpcf7-f1-p6-o1". My first question would be: will this harm anything, since the "<" and ">" have been removed from the string?
If so, is there anything I can add to remove the possibility of running scripts in this contact form?

HTML Encoding is one way to prevent any HTML/JS from taking effect. It's a good idea to encode any user-supplied value before displaying it in the page.
See http://ca3.php.net/manual/en/function.htmlentities.php
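For example, here is a minimal sketch of that kind of encoding in PHP; the $_GET['name'] field is purely illustrative and not part of Contact Form 7 itself:
<?php
// Encode a user-supplied value before echoing it into the page.
// htmlentities() converts <, >, & and (with ENT_QUOTES) quotes into HTML entities,
// so <script>alert('hello');</script> is displayed as text rather than executed.
$name = $_GET['name'] ?? '';
$safe = htmlentities($name, ENT_QUOTES, 'UTF-8');

echo '<p>Hello, ' . $safe . '</p>';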

Related

How to find or create a questionnaire that prohibits duplicate answers?

I need help.
We need to find a questionnaire tool in which the answer to a question (for example, a telephone number) may not be the same for two users.
For example:
Enter a phone number:
The first user enters a phone number, 123456789,
and completes the survey.
The second user starts the questionnaire and enters the same phone number, 123456789.
He receives an error message or a request to enter a different answer.
Alternatively, is there any easy way to implement this with PHP or JavaScript?
Maybe it is possible to implement with the SurveyMonkey API.
I would be glad of any help or advice.
Google Forms writes to a spreadsheet database, allows regular expression validation on fields, and supports attached scripts. Set the field to match your required format, then validate the response with a script.
Your requirement is persistent storage. You will either need a database, such as MySQL, accessed with PHP, or a third-party application such as SurveyMonkey. However, I can't tell you the limitations of SurveyMonkey; you will have to check their documentation.
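If you go the MySQL + PHP route, a minimal sketch might look like this. It assumes a table named responses with a UNIQUE index on its phone column; the table, column, and credentials are made up for illustration:
<?php
// Connect; PDO exceptions let us catch the duplicate-key error below.
$pdo = new PDO('mysql:host=localhost;dbname=survey', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$phone = trim($_POST['phone'] ?? '');

try {
    $stmt = $pdo->prepare('INSERT INTO responses (phone) VALUES (?)');
    $stmt->execute([$phone]);
    echo 'Thank you, your answer has been recorded.';
} catch (PDOException $e) {
    // SQLSTATE 23000 = integrity constraint violation, i.e. the phone number already exists.
    if ($e->getCode() == 23000) {
        echo 'This phone number has already been used. Please enter a different one.';
    } else {
        throw $e;
    }
}
The UNIQUE index does the real work here: the database, not the application, guarantees no two users can save the same phone number.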
I realize that this is late and this thread is limited, but for others looking for the same answer: here is how I did exactly the same thing for one of my coworkers using Examinare Framework and Examinare Survey Tool.
First, create an HTML form that sends the phone number to a PHP file.
Inside that PHP file, call the Examinare PHP Wrapper and check whether a recipient has this phone number on their recipient profile.
https://developer.examinare.com/apidocs/listrecipientsbygroup/
A tip: loop through the results.
If the recipient does not exist, create the recipient. Make sure you add a fake or real email to prevent getting an error.
https://developer.examinare.com/apidocs/addrecipient/
Mark the recipient as active in the survey.
https://developer.examinare.com/apidocs/markrecipientstosurvey/
Redirect to the survey.
If you want the recipient to return to a certain path, add a redirect_url variable to the URL with a valid URL.
Hope it helps you and future seekers.

Redirect using POST and get the success of the redirect

Say I have a simple form on my website with three fields: name, password and email.
I have to get this information from the users and keep it in my database.
Then redirect to another website and send all this information using POST.
I also have to know whether the user was successfully redirected to that site (HTTP status 200).
Here's how I'm doing it:
For Point 1, I'm simply submitting the form.
After the data has been successfully saved in my database, I'm rendering the following form with hidden fields. This gets submitted and the user gets redirected to anotherwebsite.com:
<form id="form_id" action="https://www.anotherwebsite.com/form" method="POST">
    <input type="hidden" name="name" value="$name">
    <input type="hidden" name="password" value="$password">
    <input type="hidden" name="email" value="$email">
</form>
<script> document.getElementById('form_id').submit(); </script>
Problems:
I don't think my strategy to achieve points 1 and 2 is correct. I need a better solution. Submitting the form, then rendering a page with hidden fields and submitting it again to redirect to another site just doesn't feel right.
I have no clue to achieve the 3rd point.
Based on your question you might try this approach:
Create a form with name, password and email fields in an HTML file.
Submit the form to your server.
On the server side, get the data (including the target URL in a variable) and save it to the database.
Then redirect to the given website (using the variable you stored in the previous step).
You can easily check the status (200 or any error) using any server-side scripting language.
If you are sending the user to another website, the only way to know that the user was successfully redirected is for that website to notify you in some manner. Once the user leaves your page (and that's what a redirect is - it tells the browser "leave this URI and go to this URI instead"), the scripts on that page stop running, so they can't collect any further information.
If you just need to know that the information was submitted successfully, your script could POST the data in the background, wait for a 200 response, then redirect after the information has been submitted. But that may not meet your requirements, since you still won't know if the redirect succeeded.
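As a rough sketch of that idea done server-side with PHP and cURL (the URL is the placeholder from the question): the server relays the POST, checks for a 200, and only then redirects the browser. Note that this means the other site sees the POST coming from your server rather than the user's browser, which may or may not satisfy the requirement.
<?php
$fields = [
    'name'     => $_POST['name'] ?? '',
    'password' => $_POST['password'] ?? '',
    'email'    => $_POST['email'] ?? '',
];

// POST the data to the other site and capture the HTTP status code.
$ch = curl_init('https://www.anotherwebsite.com/form');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 200) {
    // The other site accepted the data; now send the user on their way.
    header('Location: https://www.anotherwebsite.com/form');
    exit;
}

echo 'The other site did not accept the submission (HTTP ' . $status . ').';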
Another possibility which does allow you to know whether the page on the other site loaded correctly would be to open it in a new browser window/tab instead of redirecting. This is the only way to keep your page active (and, thus, your scripts able to run) while loading another page. However, it introduces other issues, like what to do with the original page. (Leave it open in the background (likely to confuse the user) or close itself after seeing that the new URI has loaded (could cause undesirable visual artifacts as one window/tab opens and then the original one closes; destroys browser history)?)
If at all possible, having the final destination site notify you when the transaction completes is almost certainly the best way to go.
To achieve point 3 you need to use cookies if you are actually trying to implement a login-cum-members-area system. Otherwise, you simply need a redirect inside a conditional statement:
use CGI;
my $cgi = CGI->new;
# redirect when whatever condition you care about holds
if ($condition) { print $cgi->redirect('https://www.examplesite.com/file.html') }
For a general way of doing points 1-2, you can look at the tutorial here:
http://practicalperl5.blogspot.com/

Simple HTML form into SQL DB using PHP, getting hammered by bots

I have a very simple HTML form which is for pre-registering for my car show. Unfortunately it has attracted the attention of spammers because there's an "address" field which they use to inject their spam URLs into.
I've added javascript form validation which says if the address field contains any slashes (like "http://") then it pops up a box telling spammers to go away.
I've added htaccess that I thought was supposed to stop users from being able to hit the PHP file which is used to submit the form into the DB without coming from my domain first.
I had recaptcha, but they were able to get around that as well so I removed it since it wasn't effective.
I know one flaw is that I can browse directly to my PHP file and it will insert a blank row into the database - how can I prevent this as well?
Does anyone have a good site or steps to take to stop these bots from hitting my form?
ReCaptcha, if well configured, should have solved your problem. There's no easy way to "go around that".
I've added htaccess that I thought was supposed to stop users from
being able to hit the PHP file which is used to submit the form into
the DB without coming from my domain first.
That's probably your problem. The bots are probably just calling the registration script directly with the right parameters. One way to get around that would be to display a hidden input field on your form, populate it with some random value, and check that you get the same value back when the form is submitted.
But again ReCaptcha should work... if it doesn't you should ask a specific question about that.
First of all, validate the data sent from the form: check that it is valid, not empty, etc. If you are using a framework, it will have validation classes (use them); otherwise write some of your own.
Second, put the captcha back and don't send any data to the server if the captcha isn't valid.
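A minimal sketch of that kind of server-side validation in PHP; the field names follow the question's description (an "address" field abused for spam links), and everything else is illustrative:
<?php
// Reject a direct hit on the script with no form data (the "blank row" problem).
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    http_response_code(405);
    exit('Invalid request.');
}

$name    = trim($_POST['name'] ?? '');
$address = trim($_POST['address'] ?? '');
$errors  = [];

if ($name === '') {
    $errors[] = 'Name is required.';
}

// URLs do not belong in a street address; treat them as spam.
if (preg_match('#https?://|www\.#i', $address)) {
    $errors[] = 'Please enter a street address, not a link.';
}

if ($errors) {
    foreach ($errors as $error) {
        echo '<p>' . htmlspecialchars($error) . '</p>';
    }
    exit; // nothing is inserted into the database
}

// Only now insert the row (use a prepared statement here).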

Preventing bot form submission

I'm trying to figure out a good way to prevent bots from submitting my form, while keeping the process simple. I've read several great ideas, but I thought about adding a confirm option when the form is submitted. The user clicks submit and a Javascript confirm prompt pops up which requires user interaction.
Would this prevent bots, or could a bot figure this out too easily? Below is the code and a JSFiddle to demonstrate my idea:
JSFIDDLE
$('button').click(function () {
    if (Confirm()) {
        alert('Form submitted');
        /* perform a $.post() to php */
    }
    else {
        alert('Form not submitted');
    }
});

function Confirm() {
    var _question = confirm('Are you sure about this?');
    var _response = (_question) ? true : false;
    return _response;
}
This is one problem that a lot of people have encountered. As user166390 points out in the comments, the bot can just submit information directly to the server, bypassing the javascript (see simple utilities like cURL and Postman). Many bots are capable of consuming and interacting with the javascript now. Hari krishnan points out the use of captcha, the most prevalent and successful of which (to my knowledge) is reCaptcha. But captchas have their problems and are discouraged by the World Wide Web Consortium, mostly for reasons of ineffectiveness and inaccessibility.
And lest we forget, an attacker can always deploy human intelligence to defeat a captcha. There are stories of attackers paying people to crack captchas for spamming purposes without the workers realizing they're participating in illegal activities. Amazon offers a service called Mechanical Turk that tackles things like this. Amazon would strenuously object if you were to use their service for malicious purposes, and it has the downside of costing money and creating a paper trail. However, there are other, erhm, providers out there who would harbor no such objections.
So what can you do?
My favorite mechanism is a hidden checkbox. Make it have a label like 'Do you agree to the terms and conditions of using our services?' perhaps even with a link to some serious looking terms. But you default it to unchecked and hide it through css: position it off page, put it in a container with a zero height or zero width, position a div over top of it with a higher z-index. Roll your own mechanism here and be creative.
The secret is that no human will see the checkbox, but most bots fill forms by inspecting the page and manipulating it directly, not through actual vision. Therefore, any form that comes in with that checkbox value set allows you to know it wasn't filled by a human. This technique is called a bot trap. The rule of thumb for the type of auto-form filling bots is that if a human has to intercede to overcome an individual site, then they've lost all the money (in the form of their time) they would have made by spreading their spam advertisements.
(The previous rule of thumb assumes you're protecting a forum or comment form. If actual money or personal information is on the line, then you need more security than just one heuristic. This is still security through obscurity, it just turns out that obscurity is enough to protect you from casual, scripted attacks. Don't deceive yourself into thinking this secures your website against all attacks.)
The other half of the secret is keeping it. Do not alter the response in any way if the box is checked. Show the same confirmation, thank you, or whatever message or page afterwards. That will prevent the bot from knowing it has been rejected.
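A minimal sketch of such a bot trap in a PHP page; the field name agree_terms and the thankyou.php include are made up for illustration:
<?php
// Honeypot: the checkbox below is hidden from humans with CSS, but form-filling
// bots that tick every box will check it. If it comes back checked, discard the
// submission while showing the same thank-you page so the bot learns nothing.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (!empty($_POST['agree_terms'])) {
        include 'thankyou.php'; // placeholder for whatever you normally show
        exit;
    }
    // ... process the genuine submission here ...
}
?>
<form method="post">
    <!-- real form fields go here -->
    <div style="position:absolute; left:-9999px;" aria-hidden="true">
        <label>
            <input type="checkbox" name="agree_terms" value="1">
            Do you agree to the terms and conditions of using our services?
        </label>
    </div>
    <button type="submit">Send</button>
</form>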
I am also a fan of the timing method. You have to implement it entirely on the server side. Track the time the page was served in a persistent way (essentially the session) and compare it against the time the form submission comes in. This prevents forgery or even letting the bot know it's being timed - if you make the served time a part of the form or javascript, then you've let them know you're on to them, inviting a more sophisticated approach.
Again though, just silently discard the request while serving the same thank you page (or introduce a delay in responding to the spam form, if you want to be vindictive - this may not keep them from overwhelming your server and it may even let them overwhelm you faster, by keeping more connections open longer. At that point, you need a hardware solution, a firewall on a load balancer setup).
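A minimal sketch of that timing check in PHP, using the session as the persistent store; the 3-second threshold is arbitrary and thankyou.php is again just a placeholder:
<?php
session_start();

if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    // Serving the form: remember when, server-side only (nothing goes into the markup).
    $_SESSION['form_served_at'] = time();
} else {
    $served = $_SESSION['form_served_at'] ?? 0;
    if ($served === 0 || (time() - $served) < 3) {
        // Filled in faster than any human could (or the form was never served):
        // silently discard while showing the normal thank-you page.
        include 'thankyou.php';
        exit;
    }
    // ... process the genuine submission ...
}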
There are a lot of resources out there about delaying server responses to slow down attackers, frequently in the form of brute-force password attempts. This IT Security question looks like a good starting point.
Update regarding Captchas
I had been thinking about updating this question for a while regarding the topic of computer vision and form submission. An article surfaced recently that pointed me to this blog post by Steve Hickson, a computer vision enthusiast. Snapchat (apparently some social media platform? I've never used it, feeling older every day...) launched a new captcha-like system where you have to identify pictures (cartoons, really) which contain a ghost. Steve proved that this doesn't verify squat about the submitter, because in typical fashion, computers are better and faster at identifying this simple type of image.
It's not hard to imagine extending a similar approach to other Captcha types. I did a search and found these links interesting as well:
Is reCaptcha broken?
Practical, non-image based Captchas
If we know CAPTCHA can be beat, why are we still using them?
Is there a true alternative to using CAPTCHA images?
How a trio of Hackers brought Google's reCaptcha to its knees - extra interesting because it is about the audio Captchas.
Oh, and we'd hardly be complete without an obligatory XKCD comic.
Today I successfully stopped a continuous spamming of my form. This method might not always work of course, but it was simple and worked well for this particular case.
I did the following:
I set the action property of the form to mustusejavascript.asp which just shows a message that the submission did not work and that the visitor must have javascript enabled.
I set the form's onsubmit property to a javascript function that sets the action property of the form to the real receiving page, like receivemessage.asp
The bot in question apparently does not handle javascript so I no longer see any spam from it. And for a human (who has javascript turned on) it works without any inconvenience or extra interaction at all. If the visitor has javascript turned off, he will get a clear message about that if he makes a submission.
Your code would not prevent bot submission, but it's not because of how your code is written. The typical bot out there will more likely do an external/automated POST request to the URL (action attribute). Typical bots aren't rendering HTML, CSS, or JavaScript. They are reading the HTML and acting upon it, so any client-side logic will not be executed. For example, cURLing a URL will get the markup without loading or evaluating any JavaScript. One could create a simple script that looks for <form> and then does a cURL POST to that URL with the matching keys.
With that in mind, a server-side solution to prevent bot submission is necessary. Captcha + CSRF should suffice. (http://en.wikipedia.org/wiki/Cross-site_request_forgery)
No, really, are you still thinking that Captcha or reCaptcha are safe?
Bots nowadays are smart and can easily recognise letters on images using OCR tools (search for it to understand).
I say the best way to protect yourself from automatic form submission is adding a hidden hash, generated (and stored in the session on your server for the current client) every time you display the form.
That's all: when the bot or any zombie submits the form, you check whether the given hash equals the hash stored in the session ;)
For more info, read about CSRF!
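A minimal sketch of that hidden-hash (CSRF token) check in PHP; the field and session key names are illustrative:
<?php
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject anything whose token doesn't match what we stored for this visitor.
    if (!isset($_POST['token'], $_SESSION['form_token'])
        || !hash_equals($_SESSION['form_token'], $_POST['token'])) {
        http_response_code(403);
        exit('Invalid form submission.');
    }
    // ... process the form ...
}

// Generate and store a fresh token every time the form is displayed.
$_SESSION['form_token'] = bin2hex(random_bytes(32));
?>
<form method="post" action="">
    <input type="hidden" name="token" value="<?php echo $_SESSION['form_token']; ?>">
    <!-- other form fields -->
</form>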
You could simply add a captcha to your form. Since captchas are different each time and are rendered as images, bots cannot decode them. This is one of the most widely used protections on websites...
You cannot achieve your goal with JavaScript, because a client can parse your JavaScript and bypass your methods. You have to do validation on the server side, for example via captchas. The main idea is that you store a secret on the server side and validate the form submitted from the client against that secret.
You could also measure the registration time; there is no need to allow an eternity to fill in a few text boxes!
I ran across a form input validation that prevented programmatic input from registering.
My initial tactic was to grab the element and set it to the option I wanted. I triggered focus on the input fields and simulated clicks on each element to get the drop-downs to show up, then set the value, firing the events for changing values. But when I tried to click save, the inputs were not registered as having changed.
;failed automation attempt because window doesnt register changes.
;$iUse = _IEGetObjById($nIE,"InternalUseOnly_id")
;_IEAction($iUse,"focus")
;_IEAction($iUse,"click")
;_IEFormElementOptionSelect($iUse,1,1,"byIndex")
;$iEdit = _IEGetObjById($nIE,"canEdit_id")
;_IEAction($iEdit,"focus")
;_IEAction($iEdit,"click")
;_IEFormElementOptionSelect($iEdit,1,1,"byIndex")
;$iTalent = _IEGetObjById($nIE,"TalentReleaseFile_id")
;_IEAction($iTalent,"focus")
;_IEAction($iTalent,"click")
;_IEFormElementOptionSelect($iTalent,2,1,"byIndex")
;Sleep(1000)
;_IEAction(_IETagNameGetCollection($nIE,"button",1),"click")
This caused me to rethink how input could be entered, by directly manipulating the mouse's actions to simulate more mouse-driven selection behavior. Needless to say, I won't have to manually upload images one by one to update product images for companies. I used a Windows filename with a number before letters so my script sits at the end of the directory, and when the image upload window pops up I use Active Accessibility to get the SysListView from the window and select the second element, which is a picture (the first element is a folder), or the first element returned by a files-only FindFirstFile call. I use the name to search for the item in a database of items, access those items and update a few attributes after the image upload, then I move the file from that folder to another folder so it doesn't get processed again, move on to the next first file in the list, and loop until the script name is found at the end of the update.
Just sharing how a lowly data entry person saves time, and fights all these evil form validation checks.
Regards.
This is a very short version that hasn't failed since it was implemented on my sites 4 years ago, with variations added as needed over time. It can be built up with all the variables and if/else statements that you require.
function spamChk() {
    var ent1 = document.MyForm.Email.value;
    var str1 = ent1.toLowerCase();
    if (str1.includes("noreply")) {
        document.MyForm.reset();
    }
}

<input type="text" name="Email" oninput="spamChk()">
I had actually come here today to find out how to redirect particular spam bot IP addresses to H E L L .. just for fun
Great ideas.
I removed reCaptcha a while back, converted my contactform.html to contactform.asp and added this to the top (obviously with some code in between to fulfil a few functions like sending the mail, verifying the form was filled out completely, etc.).
<%
If Request.Form("Text") = "8" Then
    ' process the submission (send the mail, write to the DB, etc.)
Else
    Response.Redirect "https://www.google.com"
End If
%>
On the form I stuck a basic text field with the name "Text", so it just looks like any other field and doesn't indicate what it's for at all. I then stuck some text two lines above it, in red, that states: enter what 2 + 6 equals in the box below to submit your request.

Sharepoint: access subsite page contact from workflow

I'm trying to create a workflow that will send an email to the users in the contact field for the page that the initial link was followed from.
In other words, a user clicks a link on page ../top/sub/pages/page1.aspx which takes them to a form here: ..top/lists/feedback/newform.aspx. Once they submit the form on the top-level page, it starts a workflow (at ..top/lists/feedback/) which will email the users in the metadata for the referrer page (../top/sub/pages/page1.aspx) and finish by deleting the feedback item.
My problem lies in trying to email the correct user. I have tried to make a work-flow on the sub-site, but it seems like the work-flow has ZERO access outside of its directory.
My next idea would be to try and send the user as a parameter (as part of the form) using a script, but I'm unsure of how to access the information I need.
How would I access the page's contact user? Am I even on the right track?
P.S. I don't have access to the server and therefore am unable to use Visual Studio.
There is probably some dirty workaround:
Add a hidden field to your top/lists/feedback/ list.
Add a delegate control that stores the contact of the page the user came from, e.g. in the user's session (or somewhere in SharePoint, or in a DB), and place the control on the pages under ../top/sub/pages/ (or place it everywhere but make it work only on those pages).
Add an item event receiver on the ..top/lists/feedback/ list that grabs what the delegate control saved and inserts it into the hidden field of the item created via ..top/lists/feedback/newform.aspx.
Use the hidden field in the workflow.
I hope you can either accept this or come up with your own idea from reading my answer.
Good luck!
