I never had any problems with the "Delete" button in any browser, but I had two users complain about it: they click the button and nothing happens. One of the users had no problems with the button in the past. The code seems OK and no errors were ever generated. All the other buttons work for these users too. Any idea why this could be happening?
It's also possible that these two users have security applications performing deep inspection on their network traffic and blocking HTTP DELETE requests.
It's possible that the Rails controller action can't delete the record but doesn't raise an exception; instead it returns some error code that the JavaScript isn't prepared to handle, so nothing visible happens when the response comes back.
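If that's the case, wiring up both callbacks makes the failure at least visible. A minimal sketch, assuming jQuery and a hypothetical /items/1 endpoint:

$.ajax({
    url: '/items/1',            // hypothetical endpoint
    type: 'DELETE',
    success: function (data) {
        $('#item-1').remove();  // hypothetical row id
    },
    error: function (xhr, status, err) {
        // Fires on 4xx/5xx responses and on network-level failures,
        // e.g. a security appliance silently dropping the DELETE.
        console.log('Delete failed:', xhr.status, status, err);
        alert('Sorry, the item could not be deleted.');
    }
});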
I am trying to save a user's form changes to the server with AJAX when the tab/window is closed.
This is a similar question:
Intercept page exit event
I am using this code:
$(window).bind('beforeunload', beforeUnload);
...and it seems to work fine except for when using IE11.
It seems that when the user-verification alert pops up in IE11, every piece of JavaScript that was previously running gets halted (and my data is not sent over the wire).
So, if the user chooses to leave, everything is gone.
Has anybody made it work on that browser?
Is it possible?
EDIT:
I see now that it works sometimes and fails at others.
When it fails, it starts the AJAX call (it hits the breakpoint at that point) but never gets into the success/fail callbacks... (and I see nothing being sent when watching in Fiddler)
In summary: it first hits the AJAX call breakpoint, it then displays the confirmation dialog, and when you choose to leave the page nothing gets sent... :(
Turns out that this method of saving data is not reliable and was working only by pure luck.
Also, synchronous XHR is marked as deprecated by now, so it's not a good idea to use that either.
So, I guess the web is not meant to work this way...
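For what it's worth, the API that later became the usual answer to this problem is navigator.sendBeacon(), which queues the request so the browser can finish sending it after the page is gone. A minimal sketch, assuming a hypothetical /save endpoint and a collectFormData() helper of your own; note it does not help with IE11, which never got sendBeacon:

window.addEventListener('pagehide', function () {
    // collectFormData() is a stand-in for however you serialize the form.
    var payload = new Blob([JSON.stringify(collectFormData())],
                           { type: 'application/json' });
    // Queued by the browser and sent even as the page unloads.
    navigator.sendBeacon('/save', payload);
});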
I'm trying to figure out what is going on here. I've been at it for hours now and can't seem to get a grip on why this is happening.
I'm making a few AJAX calls, and I keep getting this error back only in Firefox (version 21) on Mac OS X.
Here is the error:
"[Exception... "A parameter or an operation is not supported by the underlying object"
code: "15" nsresult: "0x8053000f (InvalidAccessError)" location:
"https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js Line: 6"
I'm making a CORS call, so I set up my AJAX like so:
$.ajaxSetup({
    crossDomain: true,
    xhrFields: {
        withCredentials: true
    }
});
And continue making calls from there. Basically, does anyone out there have ANY experience with this error? I see some posts online, but they all seem to deal with cross-domain CSS, which I'm not using.
Okay, so after hours of testing (and great discussion from #Dave and #danronmoon), I've finally figured out what's going on.
The CORS (Cross-Origin Resource Sharing) calls I was making were set to 'async: false' (which I did not include in my original post, as I thought it was inconsequential). This seems to operate fine in all browsers except Firefox, where jQuery will bark at you and your AJAX call will fail.
Thank you all for your help and I hope this helps someone else!
Since this is the first DuckDuckGo result for InvalidAccessError: A parameter or an operation is not supported by the underlying object, I will add another source for this.
If you run into this error when doing iframe/window actions, you're probably being blocked by the iframe's sandbox attribute (see https://html.spec.whatwg.org/multipage/iframe-embed-object.html#attr-iframe-sandbox ), even when on the same origin.
In my case, an iframe was trying to do window.top.location.href = ... after a successful form submission. The allow-top-navigation sandbox option is mandatory for that.
Funny thing: this sandbox option is not required to reload the top browsing context; it's only required for navigating it.
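For illustration, a minimal sketch of the embedding side (the src is hypothetical):

<!-- Without allow-top-navigation, setting window.top.location.href
     from inside the frame fails with the error discussed here. -->
<iframe src="/checkout-form"
        sandbox="allow-scripts allow-forms allow-same-origin allow-top-navigation">
</iframe>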
For me, I was using WebSockets and called WebSocket.close(1001). The browser doesn't like that status code: changing it to 1000, or not specifying a code at all (default 1005), works just fine.
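That matches the spec: close() called from a script only accepts 1000 or a code in the 3000-4999 range. A tiny sketch (the URL is made up):

var ws = new WebSocket('wss://example.test/socket'); // hypothetical endpoint
// ws.close(1001) throws InvalidAccessError: 1001 ("going away") is
// reserved for the browser/server, not application code.
ws.close(1000, 'done'); // 1000 = normal closure; omitting the code also works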
This is the real solution, by Diogo Cardoso: the XHR object or its parent seems to lack a toString() method.
CORS synchronous requests not working in Firefox
Yes, it is a CORS problem caused by using AJAX. But as user320550 asks, what if you NEED to use 'async: false'? I found that using 'withCredentials: false' as a workaround fixes the issue on Firefox and doesn't affect other browsers.
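A sketch of that workaround, assuming the same kind of setup as the question (the endpoint is hypothetical):

$.ajax({
    url: 'https://api.example.test/data', // hypothetical endpoint
    async: false,                // kept, since the use case requires it
    crossDomain: true,
    xhrFields: {
        withCredentials: false   // Firefox rejects sync XHR with credentials
    },
    success: function (data) {
        // handle the response
    }
});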
Just want to add a somewhat nasty intermittent variant of Xenos's answer. As he mentioned, you can get this problem if you try to navigate the window by setting window.top.location.href = ... from within a sandboxed iframe, and it can be prevented if your iframe has the allow-top-navigation option set.
But you might also find your iframe has the more restrictive allow-top-navigation-by-user-activation option. This will allow navigation, but only in response to a user action such as clicking a link or a button. For example, it will be allowed within a form submit event handler, but you can't just trigger it at an arbitrary point in time, such as from a setTimeout() callback with a long delay.
This can be problematic if you are (for example) using AJAX form submission before performing a redirect. The browser needs to decide if the navigation is in response to a user action or not. It does this by only allowing the navigation if it is considered to have happened within an acceptable time period of the user interaction. The HTML standard refers to this as transient activation.
The bottom line is that if your AJAX call is too slow, or if your user has a poor network connection, the navigation will fail. How slow is too slow? I have only tested Firefox, but it appears to allow 5 seconds before it considers the user interaction to have expired.
Possible solutions:
Ask whoever is responsible for the iframe options to upgrade to the blanket allow-top-navigation option
Don't perform async work such as AJAX requests in between user actions and top navigation. For example, use old-school POST form submission directly to the back-end, rather than using an AJAX request
Make sure your responses are as fast as possible. Catch any errors, and prompt the user to click something to trigger the navigation manually. For example:
async function submitForm() {
    await doPotentiallySlowAsyncFormSubmit()
    try {
        window.top.location.href = ...
    } catch (e) {
        // Show message to user, e.g. "Form submitted, click here to go to the next step"
    }
}
I'm trying to figure out a good way to prevent bots from submitting my form, while keeping the process simple. I've read several great ideas, but I thought about adding a confirm option when the form is submitted. The user clicks submit and a Javascript confirm prompt pops up which requires user interaction.
Would this prevent bots or could a bot figure this out too easy? Below is the code and JSFIddle to demonstrate my idea:
JSFIDDLE
$('button').click(function () {
    if (Confirm()) {
        alert('Form submitted');
        /* perform a $.post() to php */
    } else {
        alert('Form not submitted');
    }
});

function Confirm() {
    var _question = confirm('Are you sure about this?');
    var _response = (_question) ? true : false;
    return _response;
}
This is one problem that a lot of people have encountered. As user166390 points out in the comments, a bot can just submit information directly to the server, bypassing the JavaScript (see simple utilities like cURL and Postman). Many bots are capable of consuming and interacting with JavaScript now. Hari krishnan points out the use of captcha, the most prevalent and successful of which (to my knowledge) is reCaptcha. But captchas have their problems and are discouraged by the World Wide Web Consortium (W3C), mostly for reasons of ineffectiveness and inaccessibility.
And lest we forget, an attacker can always deploy human intelligence to defeat a captcha. There are stories of attackers paying people to crack captchas for spamming purposes without the workers realizing they're participating in illegal activities. Amazon offers a service called Mechanical Turk that tackles things like this. Amazon would strenuously object if you used their service for malicious purposes, and it has the downside of costing money and creating a paper trail. However, there are other, erhm, providers out there who would harbor no such objections.
So what can you do?
My favorite mechanism is a hidden checkbox. Make it have a label like 'Do you agree to the terms and conditions of using our services?' perhaps even with a link to some serious looking terms. But you default it to unchecked and hide it through css: position it off page, put it in a container with a zero height or zero width, position a div over top of it with a higher z-index. Roll your own mechanism here and be creative.
The secret is that no human will see the checkbox, but most bots fill forms by inspecting the page and manipulating it directly, not through actual vision. Therefore, any form that comes in with that checkbox value set allows you to know it wasn't filled by a human. This technique is called a bot trap. The rule of thumb for the type of auto-form filling bots is that if a human has to intercede to overcome an individual site, then they've lost all the money (in the form of their time) they would have made by spreading their spam advertisements.
(The previous rule of thumb assumes you're protecting a forum or comment form. If actual money or personal information is on the line, then you need more security than just one heuristic. This is still security through obscurity, it just turns out that obscurity is enough to protect you from casual, scripted attacks. Don't deceive yourself into thinking this secures your website against all attacks.)
The other half of the secret is keeping it. Do not alter the response in any way if the box is checked. Show the same confirmation, thank you, or whatever message or page afterwards. That will prevent the bot from knowing it has been rejected.
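A minimal sketch of such a bot trap (field names, class names, and endpoint are arbitrary):

<form method="post" action="/comments">
    <!-- Visually hidden from humans; bots that fill every field will check it. -->
    <div class="terms-trap">
        <label>
            <input type="checkbox" name="agree_terms" value="1">
            Do you agree to the terms and conditions of using our services?
        </label>
    </div>
    <textarea name="comment"></textarea>
    <button type="submit">Post</button>
</form>

<style>
    .terms-trap {
        position: absolute;
        left: -9999px; /* off-screen, but still in the DOM for bots to find */
    }
</style>

On the server, silently discard any submission where agree_terms is set, while returning the normal thank-you page.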
I am also a fan of the timing method. You have to implement it entirely on the server side. Track the time the page was served in a persistent way (essentially the session) and compare it against the time the form submission comes in. This prevents forgery or even letting the bot know it's being timed - if you make the served time a part of the form or javascript, then you've let them know you're on to them, inviting a more sophisticated approach.
Again though, just silently discard the request while serving the same thank-you page (or introduce a delay in responding to the spam form, if you want to be vindictive; this may not keep them from overwhelming your server, and may even let them overwhelm you faster by keeping more connections open longer. At that point, you need a hardware solution: a firewall on a load-balancer setup).
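A rough server-side sketch of the timing check, here in Express-style JavaScript with session middleware assumed (route names and the threshold are illustrative):

// Assumes express-session (or similar) is already configured on `app`.
app.get('/contact', function (req, res) {
    req.session.formServedAt = Date.now(); // tracked server-side only
    res.render('contact');
});

app.post('/contact', function (req, res) {
    var elapsed = Date.now() - (req.session.formServedAt || 0);
    if (elapsed < 3000) {
        // Almost certainly a bot: no human fills the form in under 3 seconds.
        // Serve the same thank-you page so the bot learns nothing.
        return res.render('thanks');
    }
    saveSubmission(req.body); // stand-in for real processing
    res.render('thanks');
});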
There are a lot of resources out there about delaying server responses to slow down attackers, frequently in the form of brute-force password attempts. This IT Security question looks like a good starting point.
Update regarding Captchas
I had been thinking about updating this question for a while regarding the topic of computer vision and form submission. An article surfaced recently that pointed me to this blog post by Steve Hickson, a computer vision enthusiast. Snapchat (apparently some social media platform? I've never used it, feeling older every day...) launched a new captcha-like system where you have to identify pictures (cartoons, really) which contain a ghost. Steve proved that this doesn't verify squat about the submitter, because in typical fashion, computers are better and faster at identifying this simple type of image.
It's not hard to imagine extending a similar approach to other Captcha types. I did a search and found these links interesting as well:
Is reCaptcha broken?
Practical, non-image based Captchas
If we know CAPTCHA can be beat, why are we still using them?
Is there a true alternative to using CAPTCHA images?
How a trio of Hackers brought Google's reCaptcha to its knees - extra interesting because it is about the audio Captchas.
Oh, and we'd hardly be complete without an obligatory XKCD comic.
Today I successfully stopped a continuous spamming of my form. This method might not always work of course, but it was simple and worked well for this particular case.
I did the following:
I set the action property of the form to mustusejavascript.asp which just shows a message that the submission did not work and that the visitor must have javascript enabled.
I set the form's onsubmit property to a javascript function that sets the action property of the form to the real receiving page, like receivemessage.asp
The bot in question apparently does not handle javascript so I no longer see any spam from it. And for a human (who has javascript turned on) it works without any inconvenience or extra interaction at all. If the visitor has javascript turned off, he will get a clear message about that if he makes a submission.
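A sketch of that setup, using the file names from the description:

<!-- The action only points at the real endpoint once JavaScript runs. -->
<form name="contact" method="post" action="mustusejavascript.asp"
      onsubmit="this.action = 'receivemessage.asp';">
    <input type="text" name="message">
    <button type="submit">Send</button>
</form>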
Your code would not prevent bot submission, but that's not because of how your code is written. The typical bot will more likely make an external/automated POST request to the URL (the form's action attribute). Typical bots aren't rendering HTML, CSS, or JavaScript; they read the HTML and act upon it, so any client-side logic will not be executed. For example, cURLing a URL fetches the markup without loading or evaluating any JavaScript. One could create a simple script that looks for a <form> tag and then does a cURL POST to that URL with the matching keys.
With that in mind, a server-side solution to prevent bot submission is necessary. Captcha + CSRF should suffice. (http://en.wikipedia.org/wiki/Cross-site_request_forgery)
No, really, do you still think Captcha or reCaptcha are safe?
Bots nowadays are smart and can easily recognize the letters in images using OCR tools (search for it to understand).
I say the best way to protect yourself from automated form submission is to add a hidden generated hash (stored in the session on your server for the current client) every time you display the form.
That's all: when a bot or any zombie submits the form, you check whether the given hash equals the hash stored in the session ;)
For more info, read about CSRF!
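A sketch of that hash scheme in Express-style JavaScript (the crypto details are illustrative; any unguessable per-session token works):

var crypto = require('crypto');

// Assumes express-session (or similar) is already configured on `app`.
app.get('/form', function (req, res) {
    // Fresh unguessable token, stored in the session and embedded in the form.
    var token = crypto.randomBytes(16).toString('hex');
    req.session.formToken = token;
    res.render('form', { token: token }); // rendered as a hidden input
});

app.post('/form', function (req, res) {
    if (req.body.token !== req.session.formToken) {
        return res.status(403).send('Invalid submission');
    }
    // process the genuine submission here
});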
You could simply add a captcha to your form. Since captchas vary and are rendered as images, bots cannot easily decode them. This is one of the most widely used protections on all websites...
You cannot achieve your goal with JavaScript alone, because a client can parse your JavaScript and bypass your methods. You have to do validation on the server side, e.g. via captchas. The main idea is that you store a secret on the server side and validate the form submitted from the client against that secret.
You could also measure the time the registration took; a bot needs no time at all to fill in the text boxes!
I ran across a form input validation that prevented programmatic input from registering.
My initial tactic was to grab each element and set it to the option I wanted. I triggered focus on the input fields and simulated clicks on each element to get the dropdowns to show up, then set the value, firing the events for changing values. But when I tried to click save, the inputs were not registered as having changed.
; failed automation attempt because the window doesn't register the changes.
;$iUse = _IEGetObjById($nIE,"InternalUseOnly_id")
;_IEAction($iUse,"focus")
;_IEAction($iUse,"click")
;_IEFormElementOptionSelect($iUse,1,1,"byIndex")
;$iEdit = _IEGetObjById($nIE,"canEdit_id")
;_IEAction($iEdit,"focus")
;_IEAction($iEdit,"click")
;_IEFormElementOptionSelect($iEdit,1,1,"byIndex")
;$iTalent = _IEGetObjById($nIE,"TalentReleaseFile_id")
;_IEAction($iTalent,"focus")
;_IEAction($iTalent,"click")
;_IEFormElementOptionSelect($iTalent,2,1,"byIndex")
;Sleep(1000)
;_IEAction(_IETagNameGetCollection($nIE,"button",1),"click")
This caused me to rethink how input could be entered, by directly manipulating the mouse's actions to simulate more mouse-like selection behavior. Needless to say, I won't have to manually upload images one by one to update product images for companies. I used numbers before letters in the file names so that my script sorts to the end of the directory. When the image upload window pops up, I use Active Accessibility to get the SysListView from the window and select the second element, which is a picture (the first element is a folder), or the first element in a files-only FindFirstFile call. I use the name to look the item up in a database of items, then access the item and update a few attributes after uploading the image; then I move the file from that folder to another folder so it doesn't get processed again, move on to the next first file in the list, and loop until the script's name is found at the end of the update.
Just sharing how a lowly data entry person saves time, and fights all these evil form validation checks.
Regards.
This is a very short version that hasn't failed since it was implemented on my sites four years ago, with variances added as needed over time. It can be built up with all the variables and if/else statements that you require.
function spamChk() {
    var ent1 = document.MyForm.Email.value;
    var str1 = ent1.toLowerCase();
    if (str1.includes("noreply")) {
        document.MyForm.reset();
    }
}

<input type="text" name="Email" oninput="spamChk()">
I had actually come here today to find out how to redirect particular spam bot IP addresses to H E L L .. just for fun
Great ideas.
I removed reCaptcha a while back, converted my contactform.html to contactform.asp, and added this to the top (obviously with some code in between to fulfil a few functions like sending mail, verifying the form was filled out completely, etc.).
<%
' Comparing as a string avoids a type mismatch on non-numeric input.
If Trim(Request.Form("Text")) = "8" Then
    ' process the submission (send mail, etc.)
Else
    Response.Redirect "https://www.google.com"
End If
%>
On the form I stuck a basic text field with the name Text, so it just looks like anything, not specifying what it's for at all. Two lines above it, in red, I added some text stating: enter what 2 + 6 equals in the box below to submit your request.
I ran into a scenario where I hit unexpected behavior only in the IE8 browser. IE9 and Firefox work fine. The behavior went like this:
User populated a form
On purpose, the user leaves a mandatory field blank
The user clicks the "Submit" button and the browser sends a POST request
Expected behavior: an error message is shown along with the data that was already provided; only the mandatory field should be left blank, as we did not provide anything in step 2. But instead I get the error message with the previous data lost, i.e. the form is empty.
And note this only happens in IE8. Any suggestions?
I am going to answer this question myself. Here's what happened in my scenario: it was a double-click problem. But I only clicked the button once, so how did that happen? A programmer who had worked on this project was handling the form submit and triggered another submit using JavaScript. But then why did this work in Firefox and IE9+?
I used Fiddler to dig into this, and noticed that in IE8 two requests are sent to the server, while IE9 and Firefox handle this scenario correctly (i.e. they detect the double click) and send only one POST request instead of two.
Technologies used: Spring Framework 2.0, JSP, HTML, JavaScript
Why the data is lost also has to do with the server: Spring modifies the session attributes while processing requests (to be specific, a formObject is temporarily removed and re-added). When another request arrives at the same time, it goes through a different pipeline (handleInvalidSubmit), which ends up creating a new formObject and thus destroying the old data.
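For anyone hitting the same thing, a common client-side guard is to swallow any submit after the first. A sketch, independent of the Spring specifics (the form name is hypothetical):

var submitted = false;
document.forms['myForm'].onsubmit = function () {
    if (submitted) {
        return false;  // swallow the duplicate submit
    }
    submitted = true;
    return true;       // let the first POST through
};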
Hope this will help others :)
When using the geolocation API's navigator.geolocation.getCurrentPosition(), how do you deal with a negative response?
The docs say the second callback function is called when there is an error. However, when the user chooses not to reveal their location by cancelling the request, that function is never fired.
It seems that getCurrentPosition() waits for an answer indefinitely (at least in Firefox 4).
How can I know when the user presses Cancel (or No, etc.)?
Any ideas?
See edit below
You are correct, the error handler should fire when a user denies the location request. The error object passed into the error handler should contain an error code and message letting you know the user denied the request. However, I'm not seeing this in FF4 when selecting the option Not Now from the location request dialogue.
In Chrome, the API/callbacks work exactly as expected, but in Chrome there is no 3rd option.
EDIT
Ahhh okay I found a little quirk in the behavior of this in FF4. In normal mode (not private browsing), the user will be presented 3 options:
Always share
Never share
Not Now
Never share triggers the error handler correctly, but Not Now does not.
What does this mean and how to handle it?
Well, it looks like if the user hits Not Now, you aren't going to get a response. Therefore, I would set a timeout that checks a flag set by one of the handlers. If the flag is not set (meaning the handlers didn't fire in the allotted time), you can do one of two things:
Assume that the user denied the request (even though the denial was temporary)
You can ask the user for permission again (via the same call) and the user will be presented with the dialog again.
Option 2 is probably bad usability (and annoying), so it is probably best to assume they denied temporarily and ask them again (politely!) the next time they visit the site.
I created a JsFiddle to play around with this API:
http://jsfiddle.net/7yYpn/11/
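A sketch of the flag-plus-timeout idea described above (the 10-second window is arbitrary):

var answered = false;

function onPosition(pos) {
    answered = true;
    // use pos.coords.latitude / pos.coords.longitude
}

function onError(err) {
    answered = true; // fires for "Never share", but not for "Not Now" in FF4
}

navigator.geolocation.getCurrentPosition(onPosition, onError);

setTimeout(function () {
    if (!answered) {
        // Neither callback fired: treat it as a temporary denial ("Not Now").
    }
}, 10000);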
I don't think it's a bug, but an intentional choice to make it difficult to build websites with undesirable behavior (as the top answer implied, asking again when someone has already said no is rather annoying).
The difference between "Not Now" and "Never" is that with "Not Now", the programmer of the website KNOWS that if the request were sent again, the user would see an actual prompt, so the site could "force" the user's hand: either accept, or have data withheld until the user agrees.
Decent and respectful programmers would use such information to provide a better service (and to avoid waiting for things that won't happen), but the truth is that there are enough spammers out there to overwhelm the end user.
(And there is no need to even TRY sending the request again if it has been answered with "Never", because the user can't be pestered in the same manner, and if the site becomes sluggish and unresponsive the user will just close it.)
P.S. Oh, and SERIOUS programmers might actually take a rejection as an actual rejection and store that choice somewhere, despite the fact that "Not Now" is not intended as an ABSOLUTE rejection but rather as "I have decided not to take any definite stand as of yet". So if the server learns of a "Not Now" and takes it as a "no", there might NEVER be another request sent, despite the person WANTING to be able to reconsider at a later date.