Spam prevention for my contact form - JavaScript

I've been doing a lot of research about spam-prevention methods; I do not want to resort to using CAPTCHA.
The form typically sends an email to the user and the webmaster with the contents of the form.
The first thing I've done is remove the contents of the form from the email sent to the user and simply send a confirmation message.
I have also added a row for the person's 'title' and hidden the row using CSS; if that field is filled in, the submission completes without sending any emails.
I'd like to add a couple of other techniques:
Check the time to complete submission - do not send emails if under 5 seconds.
Pass through an unique ID - do not send emails if no match
The problem is that website pages are cached, so directly setting a session variable is useless. I'm considering using Ajax to hit a CFC and set the variable, but that would require JavaScript.
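The idea would look something like this in PHP (shown only for illustration; in my case it would be a CFC): an endpoint the page hits via Ajax after it loads, so the session still gets fresh values even when the page itself was served from cache.

<?php
// set-form-session.php - hypothetical Ajax endpoint; names are made up
session_start();
$_SESSION['form_rendered_at'] = time();            // for the 5-second check
$_SESSION['form_id'] = bin2hex(random_bytes(16));  // the unique ID to match
header('Content-Type: application/json');
echo json_encode(['id' => $_SESSION['form_id']]);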
Should I restrict submissions to only those with JavaScript enabled? Or are there any alternative suggestions?
Thanks

Daniel,
I have a similar spam-detection approach that has been in place since last year. I can share what I have seen.
Session based tests:
Checking the time it takes someone to fill out the form and checking that the user comes from the right page have been very reliable checks, though somewhat fiddly to implement. In your case, requiring users to have modern, JavaScript-enabled browsers might be your best option, and it seems to be becoming a more accepted practice, as far as I can tell.
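In PHP terms, those two session-based checks amount to something like the sketch below (invented names and thresholds, not our production code):

<?php
// Run when the submission arrives. Assumes the form page stored a
// timestamp in the session when it was rendered.
session_start();

$renderedAt = $_SESSION['form_rendered_at'] ?? null;

// "Came from the right page": the value only exists if our page set it.
$validSession = ($renderedAt !== null);

// "Dubious time elapse": humans need more than a few seconds.
$tooFast = $validSession && (time() - $renderedAt) < 5;

if (!$validSession || $tooFast) {
    // Treat as spam: show the normal confirmation but send no email.
}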
Content based tests:
Another two fairly helpful practices are to check that form fields contain different values and that no more than a specified number of URLs have been entered. Spammers almost always seem to stick the same trash URL into every field. However, these checks aren't nearly as good as session-based checks.
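For illustration, the two content checks could look roughly like this (thresholds are examples only):

<?php
// True when every non-empty field holds the same trimmed value.
function fieldsAllIdentical(array $fields): bool
{
    $values = array_filter(array_map('trim', $fields));
    return count($values) > 1 && count(array_unique($values)) === 1;
}

// True when more than $max URLs appear across all fields.
function tooManyUrls(array $fields, int $max = 3): bool
{
    return preg_match_all('#https?://#i', implode(' ', $fields)) > $max;
}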
Our spam-detection heuristic has a few other checks, in addition to the ones above:
Basic regex injection tests - bare-bones, but I can share if you are interested
Spam Content - pretty useless - a simple library constructed mostly by hand
Banned IP Address - also pretty useless..
Some numbers from our heuristic over the last year or so (the figure in parentheses is how many submissions failed that test and no other; a submission can fail more than one test):
Total failed tests = 83,356
Failed Injection Test = 54 (0)
Failed Too Many URLs In Input Test = 18,935 (2,396)
Failed Spam Content Test = 3,673 (46)
Failed Hidden Field Tampering Test = 60,295 (1,479)
Failed Dubious Time Elapse Test = 64,430 (17,126)
Failed Invalid Session Test = 28,706 (140)
Failed Fields Contain Same Values Test = 167 (49)
Failed Banned IP Address Test (not implemented) = 0 (0)
I don't want to post too many details about exactly what our criteria are, but if you are interested I'd be happy to share code.
-Ben

I suggest you take a look at http://cfformprotect.riaforge.org/ as it works well for me.

Related

How to find or create a questionnaire that prohibits duplicate answers?

I need help.
We need a questionnaire in which the answer to a question (for example, a telephone number) may not be the same for two users.
For example:
Enter a phone number:
The first user enters a phone number: 123456789
and completes the survey.
The second user starts the questionnaire and enters the same phone number, 123456789.
He receives an error message or a request to enter a different answer.
Is there any easy implementation of this using PHP or JavaScript?
Maybe it is possible to implement it with the SurveyMonkey API.
I would be glad of any help or advice.
Google Forms writes to a spreadsheet database and allows regular-expression fields and attached scripts. Set the field to match your required format, then validate the response with a script.
Your requirement is persistent storage. You will either need a database, such as MySQL, accessed with PHP, or a third-party application such as SurveyMonkey. However, I can't tell you the limitations of SurveyMonkey; you will have to check their documentation.
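A minimal PHP/MySQL sketch of that approach (table and column names invented); note that a UNIQUE index on the column also guards against two users submitting the same number at the same instant:

<?php
// Assumes: CREATE TABLE responses (phone VARCHAR(32) NOT NULL UNIQUE, ...)
$pdo = new PDO('mysql:host=localhost;dbname=survey', 'user', 'pass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$stmt = $pdo->prepare('SELECT 1 FROM responses WHERE phone = ?');
$stmt->execute([$_POST['phone']]);

if ($stmt->fetch()) {
    echo 'This phone number has already been used. Please enter a different one.';
} else {
    $pdo->prepare('INSERT INTO responses (phone) VALUES (?)')
        ->execute([$_POST['phone']]);
    // ...continue with the rest of the questionnaire...
}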
I realize this is late and the thread is quiet, but for others looking for the same answer: here is how I did exactly this for one of my coworkers using the Examinare Framework and the Examinare Survey Tool.
First, create an HTML form that sends the phone number to a PHP file.
Inside that PHP file, call the Examinare PHP wrapper and check whether a recipient already has this phone number on their recipient profile:
https://developer.examinare.com/apidocs/listrecipientsbygroup/
Tip: loop through the results.
If the recipient does not exist, create the recipient. Make sure you add a fake or real email address to avoid getting an error:
https://developer.examinare.com/apidocs/addrecipient/
Mark the recipient as active in the survey:
https://developer.examinare.com/apidocs/markrecipientstosurvey/
Redirect to the survey.
If you want the recipient to return to a certain path afterwards, add a redirect_url variable to the URL with a valid URL.
Hope it helps you and future seekers.

How to secure Ajax link requests?

Imagine the following scenario: a user wants to register on a webpage and fills in a form. While he is filling in the form, jQuery keeps checking against a regular expression whether the fields are valid, etc.
Since the email is the primary key the user will log in with after registering, the email field needs to be checked with Ajax to let the user know whether that email is already registered. I want to check it with Ajax to avoid submitting the full form, emptying it, refreshing the page, etc.
So, when the user has ended filling the email field, the Ajax request is sent to the server, something like the next link:
example.com/check.php?email=abcdefg#gmail.com
When check.php receives the email, it asks the database whether it exists and returns a message like "User already exists" if it does, or null if it does not.
The question is: if someone digs through my .js and finds links similar to that, they could use them to send a large number of requests to find out whether random emails exist. This could lead to heavy load on the database or, in the worst cases, even crashes and private-information leaks.
Someone could do a huge for loop to check emails like:
// Getting the responses of the following links
example.com/check.php?email=aaaaaaa#gmail.com // Returns null
example.com/check.php?email=aaaaaab#gmail.com // Returns null
example.com/check.php?email=aaaaaac#gmail.com // Returns null
example.com/check.php?email=aaaaaad#gmail.com // Returns User already exists
EDIT: Since I accepted the answer below, I kept investigating and found a way to avoid this behaviour. The following code is for Java, but the logic can be applied to any other server-side language.
Before making ANY Ajax request to the server, I request a token from the server. The token looks like fmf5p81m6e56n4va3nkfu2ns8n and is made by a simple method; it could be more complex, but this is good to go.
// Requires java.math.BigInteger and java.security.SecureRandom.
public String getToken() {
    return new BigInteger(130, new SecureRandom()).toString(32);
}
When the token is requested, the server returns not only the token but also a small script, so that if someone opens the URL directly in the browser (or inspects the element), the script runs and the token is cleared. The servlet returns something like this:
_html += "<head>"
        + "<script>"
        + "window.onload = function(){\n"
        + "    document.body.innerHTML = \"\";\n"
        + "    window.location.href = 'http://mywebsite.com';\n"
        + "};\n"
        + "</script>"
        + "</head>"
        + "<body>"
        + "[" + token + "]"
        + "</body>"
        + "</html>";
This first empties the body and then navigates back to wherever we want. JavaScript/jQuery will, however, receive the entire response as a string, from which I simply extract the text between [ and ]. The token is only valid for the next request, so every Ajax request gets its own unique token; on the second request, the token that was just used is deleted.
After I get the token, I append it as a parameter to whatever link I request, something like this:
ajaxRequestObjet = $.ajax({
    url: "http://localhost:8084/mywebsite.com/servlet", // local Tomcat server
    method: "POST",
    data: "type=AJAX&page=some-article&token=fmf5p81m6e56n4va3nkfu2ns8n"
});
This method works fine against someone who inspects the website manually and tries to use the links, but what about Java/PHP/IIS servers that do this automatically?
For this, check the request header! Something like this:
boolean isAjax = "XMLHttpRequest".equals(request.getHeader("X-Requested-With"));
It will be true if and only if the X-Requested-With header is present....
There is one last thing to keep in mind. Make sure the 'Access-Control-Allow-Origin' header is NOT present in your app, so that JavaScript NOT hosted on your server won't get access to the server's resources. If this header does not exist, Chrome will return this:
XMLHttpRequest cannot load http://localhost:8084/mywebsite.com/servlet. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost' is therefore not allowed access.
The Java server was running in Tomcat and I had a separate Apache server for these tests; this is the small HTML page on Apache that produced the error above:
<html>
<head>
    <script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
    <script>
        ajaxRequestObjet = $.ajax({
            url: "http://localhost:8084/mywebsite.com/servlet",
            method: "POST",
            data: "type=AJAX&page=Token"
        });
        ajaxRequestObjet.done(function (msg) {
            alert(msg);
        });
    </script>
</head>
<body>
</body>
</html>
While you cannot control this 100%... there are a few options..
Try using the same methods that people use with Captcha scripts..
Basically, when the user loads the form/page, you generate a random string/id in their PHP session and store it. When they send the Ajax requests, have your check also require that string/id to be appended before allowing a check to be performed; else return a 500 header or something..
Using this approach with sessions, you could set an allowed limit of checks (say 5), and once the user has tried more than 5 checks, they are required to reload the page or perform a human check (eg Captcha), which then resets their count. You could even allow a total of, say, 30 within 1 hour per IP or something.
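As a sketch, the limit could be kept in the PHP session like this (names and figures follow the example above):

<?php
// check.php - count this session's lookups before doing the real work
session_start();

$_SESSION['email_checks'] = ($_SESSION['email_checks'] ?? 0) + 1;

if ($_SESSION['email_checks'] > 5) {
    // Over the limit: make them reload / pass a captcha, which resets the count.
    http_response_code(429);
    exit;
}
// ...string/id check and database lookup go here...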
Also use smart events to trigger the Ajax check, eg field/tab change or a button press, or when a valid email is detected - though, say, .com.au would trigger it twice.
Basically this way, even if someone sniffed your JS files and tried to automate the email checker, it would require them to find a way to append the string/id that you generate, and it also limits the number of requests they can perform.
Beyond this, there is not too much more you can do easily.. But there are still a few other ideas.
Most of them would work around using a PHP session/cookie. Say, for example, they check and find 3 email addresses; then again you set that as a limit and force a manual submission or a human check.
See how the above suggestion goes for you; any questions, do feel free to ask. It may take me a day or two to reply, as it's the weekend. Also research how Captcha scripts work, as there is plenty of source code for them - they work on the same idea.
Time Delays will simply look bad / make your site appear slow / bug the user with waiting for a response.
You need to limit the number of lookups per session / IP address.. Otherwise there is always a way to get past these checks.. Basically, once they hit a limit, force the user/IP/session to wait a few minutes/hours and verify them with a Captcha script so it cannot be scripted...
Javascript Security / Hiding The Source
While you cannot truly do this, you can do certain things, e.g. generate the JS using a PHP page with a JS header, so <script src='myjscode.php'></script>, which allows PHP to check for a valid session.. This stops external requests to an extent.. But it is mostly useful for making the JS available only behind a membership/login..
Multiple Checks / If Possible In This Case
Depending on your approach: is this for a user to check whether they already have an account? If so, you could combine the email check with something like their name/country/age/dob, so they would need to enter two or three correct matching values before being able to get a check/response from the Ajax call.
Maybe not in your case, but just thought would add this as well.
The JavaScript code on your website is executed on the computer of the user, so there is no way you could stop him from digging through your code. Even if you use a code obfuscator (for example, https://www.javascriptobfuscator.com/), the hacker could debug your application and record all requests sent to the server.
Everything security-relevant has to happen on the server. You could limit the amount of requests from a specific IP address.
You could protect against brute force attacks with something similar to CSRF tokens:
Assign a server-generated ID to every client session. Each request to check.php should include this ID.
check.php should reject requests that do not include an ID, or include an ID that the server did not generate (to prevent attacks with spoofed IDs). It should also rate limit on ID - if a given ID has made a request in (say) the last second, or a given ID makes more than n requests in a 10 second interval, it should return an error response. This protects against requests from a single session arriving from several IP addresses.
You should also rate limit by IP address to prevent brute-forcing by opening a large number of web application sessions.
There isn't much you can do to prevent an attacker looking up a single, or small number, of specific email addresses - it's an inherent risk with this type of validation.
One approach to resolve this problem could be this:
Suppose you have an Ajax request calling your server to receive a response for a particular user or client. You can have a table in your database with a unique token (or hash value) for every user, which is checked every time that user makes an Ajax request to the server. If the token value matches the value in the user's request, he is a genuine user. You can also record his number of requests in the table to ensure he is making legitimate requests. I acknowledge that this may slow down your app's performance, but it is a safe option to consider. Note: you need to output the token on your HTML page so it can be sent with the Ajax request.
Please comment to know more. I have been using this approach and there have been no problems so far.
Example:
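(A minimal PHP sketch of the idea, with an invented ajax_tokens table; the real schema and limits are up to you:)

<?php
// check.php - verify the per-user token before doing the real lookup
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare(
    'SELECT request_count FROM ajax_tokens WHERE token = ? AND session_id = ?');
$stmt->execute([$_POST['token'] ?? '', session_id()]);
$row = $stmt->fetch();

if (!$row || $row['request_count'] > 100) {
    http_response_code(403); // unknown token, or too many requests
    exit;
}
$pdo->prepare('UPDATE ajax_tokens SET request_count = request_count + 1 WHERE token = ?')
    ->execute([$_POST['token']]);
// ...the email lookup happens here...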
This type of attack can be treated the same as any other brute force attack, where the only effective solution is to use a Captcha. But of course, Captchas are a detriment to UX, so you have to consider if the additional security is worth it, especially for an attack that is very unlikely to happen anyway. That said, you may want to use a Captcha on your registration form anyway, to prevent bots from creating accounts.
This sort of attack has a huge cost for little reward for the attacker. There are billions of possible email addresses to test for. It could only be worth going to great lengths such as this, if the site in question was particularly sensitive, such as some kind of adult site, where the attacker hopes to blackmail users that he finds.
CloudFlare
Not as good as a Captcha solution but the brute force attack might be detected and prevented by CloudFlare's DDoS system. Also, CF can force Tor users to solve a Captcha before accessing your site, which would prevent an attacker from using Tor as a vehicle for the attack.
IP Rate Limiting
Rate limiting on an IP basis has problems because if an attacker decided to undertake a task as huge as this, he will likely be using a Botnet or some other system of multiple machines to launch the attack.
Consider a large organisation such as a university, where all users share the public IP. One of the users launches an attack on your site; you block his IP, and in the process block everyone else. This countermeasure could actually be used to launch a DoS attack.
Session ID/CSRF Token
Definitely not a solution on its own, because the attacker simply needs to make a request to the page first to obtain the token. It's an additional request to make, but only an inconvenience for the attacker.
First of all: I'd URL-encode the mail address: 'example.com/check.php?email=' . urlencode('abcdefg#gmail.com')
As to your question: when check.php is called, you can
check, if the user's session and his IP have sent a request during the last seconds
if not, write the user's session, the user's IP plus the current timestamp to a helper-table plus to a cookie and hit your DB
if yes, block the request
But I'm afraid this won't protect you from fraud, because everyone can check your JavaScript, and if someone wants to exploit this, he will find ways.
check.php should, depending on the setup, either only be accessible internally or verify where the connection is made from. Take a look at this previous question - I hope it might be what you're looking for: how to verify the requesting server in php?
You could use a CSRF token and exit early from your script if you detect that no token, or an invalid one, was sent. Almost all (if not all) PHP frameworks come with support for this.
Also check this question from the security community: https://security.stackexchange.com/questions/23371/csrf-protection-with-custom-headers-and-without-validating-token

PHP Double-Click Dilemma

We have a problem with users double-clicking on buttons within our application to proceed from screen to screen.
We have implemented ( onclick="this.disabled=true" ) on our buttons, but we are convinced that it is not always sufficient to stop the fast-fingered double-click.
A simple example :-
Screen A has four input fields and a proceed button. When the proceed button is pressed, control is passed to server-side routine to validate info, set some session vars and call screen B.
What appears to happen occasionally is :-
On the first click, the server-side routine is called and begins validating info and setting session vars. The second click takes control and again calls the server-side routine, which again begins validating info and setting session vars -> for us, the session vars are already set, and this highlights the problem.
We have looked at tokens but don't think they will solve our problem.
We think that since every PHP application must be vulnerable to this double-click issue, there has to be a standard method for resolving it, but we have yet to find one.
If you have resolved this issue then we would be grateful if you would like to give us some insights into how we might overcome the problem.
* Thanks for the replies. Loic and Brian Nickel - hard to separate, as both are going for the token method, via timestamp or GUID. We will have to go back and take another look at tokens. After discussion, our preferred solution would be the GUID token concept.
Since a double click will basically submit the same form twice, you can check the timestamp between the two submits.
I'll take the example of stackoverflow because this site is awesome.
Let's say I vote this question up; server side, if my POST request is valid, then it will be treated and saved.
Then, server side, before treating a request, they check whether this same form hasn't been posted in the last few seconds (don't they?).
Anyway, my point is: give your forms a name, and when one is validated, put a timestamp in your user's session so you can refuse further posts of the same form within a defined amount of time.
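A rough PHP version of that idea (form name and time window are just examples):

<?php
session_start();

$form = 'checkout';                            // the form's name
$last = $_SESSION['last_submit'][$form] ?? 0;

if (time() - $last < 5) {
    exit('Duplicate submission ignored.');     // or redirect to the result page
}
$_SESSION['last_submit'][$form] = time();
// ...validate and process the form as usual...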
Best of luck.
This is a very common problem with a fairly standard solution. Whenever you generate your form, you should generate a unique token like a GUID and stick it in SQL, redis, memcached, the session, or any short term persistent store you have. Stick it in a hidden field. You should be doing one token for each generated form.
When the form gets submitted, atomically check for and remove the token from the store. If it's there the form was submitted for the first time. If not, it's a duplicate.
For bonus points, instead of showing an error on the second submission, you can store the token with the successful result data and use it to render the same success page as you would have if they clicked once.
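A minimal PHP sketch of the token flow, using the session as the short-term store (SQL, redis, or memcached work the same way; PHP locks the session per request, so the check-and-remove is atomic per session):

<?php
session_start();

// When rendering the form: issue a one-time token.
$token = bin2hex(random_bytes(16));            // stands in for a GUID
$_SESSION['form_tokens'][$token] = true;
echo '<input type="hidden" name="token" value="' . $token . '">';

// When handling the submission: check for and remove the token in one step.
$posted = $_POST['token'] ?? '';
if (isset($_SESSION['form_tokens'][$posted])) {
    unset($_SESSION['form_tokens'][$posted]);  // first submission: process it
} else {
    // duplicate: render the same success page as the first submission
}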
1) Put a div (or other element) that is invisible to the eye (opacity: 0.01) at a higher z-index on top of the button
2) Once clicked (mousedown), remove the div
or:
1) Remove the click event handler once it has been clicked

Remove server-side validation and do full-blown client-side validation?

Is it recommended to perform all the necessary input validation on the client side? I want to reduce the processing load on the server (meaning less duplicated validation, so that the programmer may focus only on business logic).
Example:
On the client side, there's an 'Age' input text field (JavaScript will not allow the form to be submitted unless it's within the valid range)
On the server side, there's no more validation of the 'Age'
// instead of validating the age again
int age = Integer.parseInt(request.getParameter("age"));
// check if the age is valid
if (age >= 0) { /* ... */ }
We can instead proceed directly to
int age = Integer.parseInt(request.getParameter("age"));
because we are very sure that it is valid.
To accommodate browsers with JavaScript disabled, we check first: if JavaScript is enabled, proceed to the application; otherwise, block the application (just like Facebook does).
Is my theory / concept acceptable?
If you need to enforce certain input patterns, you cannot rely on data that comes from the client. Folks can disable JavaScript, or simply bypass your validation completely and send whatever data they want. Most casual users will never do this, but the data is coming from the client either way, so the server must still treat it as untrusted.
In short, it depends.
For most of my applications, I have client-side validation and only worry server-side about things that can cause an error condition. For example, if I have a form that sends an e-mail to someone, I will have JavaScript that checks for a valid "to:" e-mail address and alerts the user. Server side, if that e-mail address isn't valid or isn't present, I will simply throw an error rather than writing code to let the user nicely know something has gone wrong. For the message body, I'll validate client-side whether or not it has one, but server side I won't really care. Again, what you do depends on your needs.
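As a sketch, the server-side half of that e-mail example could be as small as this (PHP for illustration; the question's own snippets are Java):

<?php
// Only the error conditions are handled here; the client did the nice UX.
$to = trim($_POST['to'] ?? '');

if (!filter_var($to, FILTER_VALIDATE_EMAIL)) {
    exit('Invalid or missing e-mail address.');
}
$body = $_POST['body'] ?? '';   // required client-side; accepted as-is here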
I believe in validation EVERYWHERE
I like to have client side validation for:
required fields populated
minimum size fields (like zip)
Regex for proper character types in proper locations (eg social security numbers, new passwords, etc)
privilege enforcement (users can only see and do what their role should be allowed to)
Server Side validation for:
all client side requirements
entity associations (child-parent relationships are legit)
changes or requests are Role-Authorized
User Entry is the enemy! Users will find a way to break your site willingly or un-willingly. Things will fall through the cracks. So I strongly endorse double and triple checking.
I would believe in client-side validation to save server processing, were it NOT for my fear that this kind of thinking would make exceptions more prominent.
Overall, the reasons why I value rich Client side validation are:
one more level of checking data integrity (as well as server side)
guiding users; intelligent client-side validation helps users give correct answers more efficiently and quickly
a better experience; if users don't have to wait for the round trip to the server and back to see their errors, their user experience should be better.
If security / data integrity is a concern, I would advise against this. While it'll be enough to prevent Joe Smith from entering unwanted data, you'll leave your system open to serious data manipulation from people who understand how the web works.
Let's say you have a voting system like on StackOverflow where anytime a user votes, an AJAX call is made. While the JS validation may prevent a person from using the displayed HTML to cast multiple votes on the same question or answer, it will not prevent a user from going into their browser's console and manually submitting POST or GET requests to get around the JS validation. Before you know it, you'll see Lloyd Banks with 100k reputation after answering just a few questions

Preventing bot form submission

I'm trying to figure out a good way to prevent bots from submitting my form, while keeping the process simple. I've read several great ideas, but I thought about adding a confirm option when the form is submitted. The user clicks submit and a Javascript confirm prompt pops up which requires user interaction.
Would this prevent bots, or could a bot figure this out too easily? Below is the code and a JSFiddle to demonstrate my idea:
JSFIDDLE
$('button').click(function () {
    if (Confirm()) {
        alert('Form submitted');
        /* perform a $.post() to php */
    } else {
        alert('Form not submitted');
    }
});

function Confirm() {
    var _question = confirm('Are you sure about this?');
    var _response = (_question) ? true : false;
    return _response;
}
This is one problem that a lot of people have encountered. As user166390 points out in the comments, the bot can just submit information directly to the server, bypassing the JavaScript (see simple utilities like cURL and Postman). Many bots are capable of consuming and interacting with JavaScript now. Hari krishnan points out the use of captchas, the most prevalent and successful of which (to my knowledge) is reCaptcha. But captchas have their problems and are discouraged by the World Wide Web Consortium (W3C), mostly for reasons of ineffectiveness and inaccessibility.
And lest we forget, an attacker can always deploy human intelligence to defeat a captcha. There are stories of attackers paying people to crack captchas for spamming purposes without the workers realizing they're participating in illegal activities. Amazon offers a service called Mechanical Turk that tackles things like this. Amazon would strenuously object if you were to use their service for malicious purposes, and it has the downside of costing money and creating a paper trail. However, there are other, erhm, less scrupulous providers out there who would harbor no such objections.
So what can you do?
My favorite mechanism is a hidden checkbox. Give it a label like 'Do you agree to the terms and conditions of using our services?', perhaps even with a link to some serious-looking terms. But you default it to unchecked and hide it through CSS: position it off the page, put it in a container with zero height or zero width, position a div over the top of it with a higher z-index. Roll your own mechanism here and be creative.
The secret is that no human will see the checkbox, but most bots fill forms by inspecting the page and manipulating it directly, not through actual vision. Therefore, any form that comes in with that checkbox value set allows you to know it wasn't filled by a human. This technique is called a bot trap. The rule of thumb for the type of auto-form filling bots is that if a human has to intercede to overcome an individual site, then they've lost all the money (in the form of their time) they would have made by spreading their spam advertisements.
(The previous rule of thumb assumes you're protecting a forum or comment form. If actual money or personal information is on the line, then you need more security than just one heuristic. This is still security through obscurity, it just turns out that obscurity is enough to protect you from casual, scripted attacks. Don't deceive yourself into thinking this secures your website against all attacks.)
The other half of the secret is keeping it. Do not alter the response in any way if the box is checked. Show the same confirmation, thank you, or whatever message or page afterwards. That will prevent the bot from knowing it has been rejected.
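A bare-bones version of the checkbox trap (all names invented; hide it with whatever CSS trick suits your page):

<?php
// Handle the submission: a checked box means a bot filled in the form.
if (!empty($_POST['accept_terms'])) {
    include 'thank-you.php';   // same response as a success - don't tip it off
    exit;
}
?>
<!-- The trap: hidden from humans by CSS, still present in the DOM. -->
<div style="position:absolute; left:-9999px;">
    <label>Do you agree to the terms and conditions of using our services?
        <input type="checkbox" name="accept_terms" value="1">
    </label>
</div>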
I am also a fan of the timing method. You have to implement it entirely on the server side. Track the time the page was served in a persistent way (essentially the session) and compare it against the time the form submission comes in. This prevents forgery or even letting the bot know it's being timed - if you make the served time a part of the form or javascript, then you've let them know you're on to them, inviting a more sophisticated approach.
Again though, just silently discard the request while serving the same thank you page (or introduce a delay in responding to the spam form, if you want to be vindictive - this may not keep them from overwhelming your server and it may even let them overwhelm you faster, by keeping more connections open longer. At that point, you need a hardware solution, a firewall on a load balancer setup).
There are a lot of resources out there about delaying server responses to slow down attackers, frequently in the form of brute-force password attempts. This IT Security question looks like a good starting point.
Update regarding Captchas
I had been thinking about updating this question for a while regarding the topic of computer vision and form submission. An article surfaced recently that pointed me to this blog post by Steve Hickson, a computer vision enthusiast. Snapchat (apparently some social media platform? I've never used it, feeling older every day...) launched a new captcha-like system where you have to identify pictures (cartoons, really) which contain a ghost. Steve proved that this doesn't verify squat about the submitter, because in typical fashion, computers are better and faster at identifying this simple type of image.
It's not hard to imagine extending a similar approach to other Captcha types. I did a search and found these links interesting as well:
Is reCaptcha broken?
Practical, non-image based Captchas
If we know CAPTCHA can be beat, why are we still using them?
Is there a true alternative to using CAPTCHA images?
How a trio of Hackers brought Google's reCaptcha to its knees - extra interesting because it is about the audio Captchas.
Oh, and we'd hardly be complete without an obligatory XKCD comic.
Today I successfully stopped continuous spamming of my form. This method might not always work, of course, but it was simple and worked well for this particular case.
I did the following:
I set the action property of the form to mustusejavascript.asp, which just shows a message that the submission did not work and that the visitor must have JavaScript enabled.
I set the form's onsubmit property to a JavaScript function that sets the action property of the form to the real receiving page, like receivemessage.asp.
The bot in question apparently does not handle JavaScript, so I no longer see any spam from it. And for a human (who has JavaScript turned on) it works without any inconvenience or extra interaction at all. If the visitor has JavaScript turned off, he will get a clear message about that if he makes a submission.
Your code would not prevent bot submission, but that's not because of how your code is written. The typical bot out there will more likely make an external/automated POST request to the URL (the action attribute). Typical bots do not render HTML, CSS, or JavaScript; they read the HTML and act upon it, so any client-side logic will not be executed. For example, cURLing a URL will get the markup without loading or evaluating any JavaScript. One could create a simple script that looks for a <form> and then does a cURL POST to that URL with the matching keys.
With that in mind, a server-side solution is necessary to prevent bot submissions. Captcha + CSRF should suffice. (http://en.wikipedia.org/wiki/Cross-site_request_forgery)
No, really, are you still thinking that Captcha or reCaptcha are safe?
Bots nowadays are smart and can easily recognise the letters on images using OCR tools (search for it to understand).
I say the best way to protect yourself from automated form submission is to add a hidden hash, generated (and stored in the current client's session on your server) every time you display the form.
That's all: when a bot or any zombie submits the form, you check whether the given hash equals the hash stored in the session ;)
For more info, read about CSRF!
You could simply add a captcha to your form. Since captchas are different every time and are images, bots cannot easily decode them. This is one of the most widely used protections on all websites...
You cannot achieve your goal with JavaScript, because a client can parse your JavaScript and bypass your methods. You have to do validation on the server side, via captchas. The main idea is that you store a secret on the server side and validate the form submitted from the client against that secret.
You could measure the time taken to complete the registration; nobody needs an eternity to fill in a few text boxes!
I ran across a form input validation that prevented programmatic input from registering.
My initial tactic was to grab the element and set it to the option I wanted. I triggered focus on the input fields and simulated clicks on each element to get the drop-downs to show up, and then set the value, firing the change events. But when I tried to click save, the inputs were not registered as having changed.
;failed automation attempt (AutoIt) because the window doesn't register the changes
;$iUse = _IEGetObjById($nIE,"InternalUseOnly_id")
;_IEAction($iUse,"focus")
;_IEAction($iUse,"click")
;_IEFormElementOptionSelect($iUse,1,1,"byIndex")
;$iEdit = _IEGetObjById($nIE,"canEdit_id")
;_IEAction($iEdit,"focus")
;_IEAction($iEdit,"click")
;_IEFormElementOptionSelect($iEdit,1,1,"byIndex")
;$iTalent = _IEGetObjById($nIE,"TalentReleaseFile_id")
;_IEAction($iTalent,"focus")
;_IEAction($iTalent,"click")
;_IEFormElementOptionSelect($iTalent,2,1,"byIndex")
;Sleep(1000)
;_IEAction(_IETagNameGetCollection($nIE,"button",1),"click")
This caused me to rethink how input could be entered, by directly manipulating the mouse's actions to simulate more mouse-like selection behavior. Needless to say, I won't have to manually upload images one by one to update product images for companies.
I used Windows' numbers-before-letters ordering to keep my script at the end of the directory, and when the image-upload window pops up I use Active Accessibility to get the SysListView from the window and select the second element, which is a picture (the first element is a folder; or it's the first element in a FindFirstFile return that only lists files).
I use the name to search for the item in a database of items, then access those items and update a few attributes after the images upload. Then I move the file from that folder to another folder so it doesn't get processed again, and move on to the next first file in the list, looping until the script's name is found at the end of the update.
Just sharing how a lowly data entry person saves time, and fights all these evil form validation checks.
Regards.
This is a very short version that hasn't failed since it was implemented on my sites 4 years ago, with variances added as needed over time. It can be built up with all the variables and if/else statements that you require.
// Assumes the form is declared as <form name="MyForm">.
function spamChk() {
    var ent1 = document.MyForm.Email.value;
    var str1 = ent1.toLowerCase();
    if (str1.includes("noreply")) {
        document.MyForm.reset();
    }
}

<input type="text" name="Email" oninput="spamChk()">
I had actually come here today to find out how to redirect particular spam bot IP addresses to H E L L .. just for fun
Great ideas.
I removed reCaptcha a while back, converted my contactform.html to contactform.asp, and added this to the top (obviously with some code in between to fulfil a few functions like sendmail, verifying the form is filled out completely, etc.).
<%
If Request.Form("Text") = "8" Then
    ' dothis (process the form, send the mail, etc.)
Else
    Response.Redirect "http://www.google.com"
End If
%>
On the form I stuck a basic text field with the name Text, so it just looks like any other field, not specifying what it's for at all. I then stuck some text two lines above it, in red, stating: enter what 2 + 6 = in the box below to submit your request.
