I have a page that allows users to watch a YouTube video and automatically receive a reward as soon as the video finishes playing. This is done with the YouTube JS API:
pseudocode:
function videoStoppedPlaying() {
    requestRewardFromServer(); // currently uses an XMLHttpRequest
}
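For reference, a minimal sketch of how such a callback is typically wired up with the IFrame Player API (the element ID and video ID below are placeholders):
var player;

function onYouTubeIframeAPIReady() {
    player = new YT.Player('player', { // 'player' is a placeholder element ID
        videoId: 'VIDEO_ID',           // placeholder
        events: { onStateChange: onPlayerStateChange }
    });
}

function onPlayerStateChange(event) {
    if (event.data === YT.PlayerState.ENDED) {
        videoStoppedPlaying(); // fires once the video finishes
    }
}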
The problem with this approach is that one could just open the browser console and manually call requestRewardFromServer().
I am already obfuscating the code, but that is like putting a bandage on a hole in a boat; it does not solve the problem.
Edit: So far the only solution that comes close is using timestamps. Even though this is not the ideal solution, I will take the advice to heart and try to further obfuscate the JS code.
Any suggestions?
The only way to verify that the user is actually watching the video, besides comparing end time minus start time against the length of the video, is to check their progress as they watch. I suppose you have a function that can tell you at what minute of the video (or what percentage) the user is. So you could periodically inform the server about that via JavaScript: for example, every 5 seconds or every 5% of the video, the client sends an XMLHttpRequest to the server with the current position. The server then checks that it received all the requests for that client in the corresponding order (or almost all of them; maybe the user rewatched a part, so you will have to work out an appropriate algorithm).
It is not easy, it requires some work, and it is not quite 100% 'bulletproof'; anything that relies on JavaScript can still be manipulated locally.
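A rough sketch of that periodic reporting (assuming the YT.Player instance is called player; videoId and the /video-progress endpoint are made up):
setInterval(function () {
    // Report how far into the video the viewer is, every 5 seconds.
    var position = player.getCurrentTime(); // seconds into the video
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/video-progress');
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('videoId=' + encodeURIComponent(videoId) + '&position=' + position);
}, 5000);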
When granting the reward, store the YouTube video ID so the user is rewarded only once per video. Of course you have to keep track of the valid video IDs, or else the client could pass any string.
To prevent users from getting the reward without watching the video, you could add a timer so that the reward cannot be claimed before the video's running time has elapsed.
Below are my thoughts on this:
I feel you need to dive into the server side for this.
Set a session variable on the server, such as video_length = some_length.
When the video starts, send an AJAX request to the server and set another session variable, such as video_started = some_time.
Now, when you call requestRewardFromServer(), compare the current time with video_started. The elapsed time needs to be greater than or equal to video_length.
If it satisfies the condition, reward them; otherwise don't. You might say this doesn't guarantee that the full video was watched. True, but at least the person trying to spoof it has to wait that long.
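A minimal sketch of that comparison as plain functions (the session object and names are assumptions; the answer itself assumes PHP session variables):
function videoStarted(session, videoLengthSeconds) {
    session.video_started = Date.now();
    session.video_length = videoLengthSeconds * 1000;
}

function mayReward(session) {
    // Only reward if at least the video's running time has passed since it started.
    return session.video_started !== undefined &&
           Date.now() - session.video_started >= session.video_length;
}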
We have a news website where we cache a complete article page.
There are 4 areas that need to continue to be dynamic on that page:
View Counter: We add +1 to view_counts of that article when page loads.
Header: In the header of the website we check whether session->id exists; if it does, we display "Welcome [Name], My Profile / Logout", and if not we show "Register / Login".
Comments: We display the comments made for that article.
Track User Behavior: We track every single action made by users on the site
Now the only way we could think of doing this is through AJAX calls:
$('#usercheck').load(<?php echo "'" . base_url() . "ajax/check_header'"; ?>);
And so on.
This is creating a massive load on CPU, but what would be the right/alternative way of approaching this?
Please see attached:
First of all, you do not have to use AJAX for every piece of dynamic content; in the case of comments especially, you may as well load them via an iframe.
That way, you are not relying on Javascript to make the request.
It may even work for the counter.
However, your problem is neither JavaScript nor the database server, based on what I can see from your graph. It seems to me you have some heavy PHP controllers; maybe you are loading a heavy framework just to have $session->id checked.
Further, what do you mean by "we track every single action"? How do you track them? Are you sending an AJAX request for every little thing, or are you debouncing them with JS and only sending one every 30 seconds or so?
My advice is that you consider the size of the PHP code you are calling and slim it down as much as you can, even to zero if it seems feasible (by leveraging localStorage to keep track of your user session after the first login), and maybe load the counter and the comments in alternative ways.
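As an illustration, a sketch of rendering the header from localStorage instead of an AJAX call (it assumes you save a display name at login; the key name is made up):
var userName = localStorage.getItem('displayName'); // saved once at login
var header = document.getElementById('usercheck');
if (userName) {
    header.textContent = 'Welcome ' + userName + ' | My Profile / Logout';
} else {
    header.textContent = 'Register / Login';
}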
For example, I infer you are only checking the counter once per page load, ignoring subsequent loads by other users while the current user is reading the article, so your counter may happen to be out of date once in a while, depending on your traffic.
I'm going to explain it better: your page has n views, so when I load it, you look up n and then display n+1 to me. While I'm reading, the same page gets requested and viewed x times by other users. Your counter on the server has surely been updated to n+x, but the counter on my page still says "n views".
So, what's the point in being picky and showing n+1 to me and then never updating it, thus being off by x?
So, first of all, the counter controller should be as slim as possible. And what if you loaded it within an iframe that auto-updates without AJAX?
How to refresh an iframe not using javascript?
That would keep the counter up to date; you may render it with PHP just once per page view and then statically serve the resulting HTML file.
I have a logout function that sets the user offline in my DB (MySQL), but if the user just closes the browser, my DB still shows the user as online even though they're not. How can I manage this? How can I set the user offline without them pressing the logout button? Cheers in advance!
PS: Yes, I'm using SESSION.
You can do it in the following way:
1) Send an AJAX request to the server every 5 seconds to update the user's last-seen time.
2) Wherever you want to show someone as offline, just get the records whose last update is more than 5 seconds old.
Hi, the only reliable way is to set an interval that calls the server and logs the time in a database:
var timeout = 15000; // milliseconds
setInterval(function () {
    $.post('yoursite/keepalive');
}, timeout);
Then, on the server side, you need a simple database table with the user id and a timestamp of the last time keepalive was called; you just get the current time and their id (from the session) and save that. Then you can check whether it has been more than, say, 20 seconds since the last update, in which case you know they are gone (it should be updated every 15 seconds). Obviously you would need to have this interval on every page of your site to accurately track a user.
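A rough sketch of that server-side bookkeeping as plain functions (the in-memory object stands in for the database table; the thresholds follow the answer):
var lastSeen = {}; // userId -> timestamp; stands in for the keepalive table

function keepalive(userId) {
    lastSeen[userId] = Date.now(); // called on every POST to yoursite/keepalive
}

function isOnline(userId) {
    // Updated every 15 seconds, so more than 20 seconds of silence means they are gone.
    return lastSeen[userId] !== undefined && Date.now() - lastSeen[userId] < 20000;
}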
Things such as checking the session time, or the unload event, are not accurate enough.
Unload is fired when any page is closed. So, for example,
a user has two of your pages open and closes one of them. The other page is already loaded, so there is no traffic between client and server, and no way to know that page is still open.
Session time has a similar problem: say someone is reading a long post on your page, needs to use the facilities, and leaves the page open. 30 minutes go by; they come back and continue reading the post for another 10 minutes. Maybe the session has expired, maybe it hasn't; the fact remains they are still looking at your site, and you have no way to know it.
An interval will continue as long as the page is open and there are no JavaScript issues. A disadvantage is that it will also keep their session updated (you can get around this by sending the user id along with the AJAX call and not using the session, but that has other complications). Because you have that 15-second update, you can check at any time whether it has been more than 15 seconds. Say you want to display a list of online users to your other forum users: you just query that table for everyone with a recent timestamp, easy peasy.
As for the length of the interval, you have to strike a balance between performance (network traffic) and how granular you need the information to be: if it's OK to only know whether they logged off within the last minute, use that; if you can wait 5 minutes to find out, even better, etc.
Really, the crux of the problem is how the server and a client communicate. Right now there is no two-way communication, like there is when you're on the phone. It's more like a walkie-talkie, where you have to say 10-4 and let go of the button for the other guy to talk. Essentially, a client makes a request, that request is fulfilled by the server, and that is the end of the communication and the state. State across subsequent requests is maintained by using the session, so the next request uses that session to 'remember' the client; other than that, there is no communication between client and server. There is no way to know they hung up the phone, for example, except to ask them if they are still there. (This is an oversimplification, because you can't send a request from the server to ask; it's more that they have to keep telling you they are still there, unless you use node.js or something like that.)
As @David has mentioned, you could track this based on last activity; for that you would just need to know when the session was last updated. One of the easiest ways is to move the session into a database handler via http://php.net/manual/en/function.session-set-save-handler.php - that way you can look up when they were last active.
Using this vs. AJAX really depends on what you need to know and how accurately. The content of your pages also weighs in. If your site makes requests frequently anyway, the session approach is better because you save on network traffic, for example. However, if you have long posts someone could be reading for 20-30 minutes and you want to know more frequently than that, use AJAX.
You can do it in many ways:
Launch an AJAX call on the onbeforeunload JavaScript event. Prompting for a confirmation ("Window is closing, are you sure? YES/NO") should give you enough time to set the flag in the DB; just be sure to unset your flag if the user clicks "NO".
Check the session time: add a variable to your PHP $_SESSION that is updated on every user event. If it becomes older than a preset threshold (e.g. 5 minutes), you can safely assume the user is gone.
Example for onbeforeunload
function myConfirmation() {
    return 'Are you sure you want to quit?';
}
window.onbeforeunload = myConfirmation;
You can try the javascript beforeunload event:
window.onbeforeunload = function() {
    // Some AJAX request to logout.php or whatever script handles the logout
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'logout.php', false); // synchronous, so it finishes before the page unloads
    xhr.send();
};
It will trigger when the user attempts to close the current window.
Watch out though: the event is triggered even if the user closes just a single tab (your page) while the browser and other tabs stay open. So if several tabs of your website are open and one of them is closed, you'll still get the user logged out, which may not be what you want, so you'll probably have to find a way around that.
I'm trying to figure out a good way to prevent bots from submitting my form, while keeping the process simple. I've read several great ideas, but I thought about adding a confirm option when the form is submitted. The user clicks submit and a Javascript confirm prompt pops up which requires user interaction.
Would this prevent bots, or could a bot figure this out too easily? Below is the code and a JSFiddle to demonstrate my idea:
JSFIDDLE
$('button').click(function () {
    if (Confirm()) {
        alert('Form submitted');
        /* perform a $.post() to php */
    } else {
        alert('Form not submitted');
    }
});

function Confirm() {
    var _question = confirm('Are you sure about this?');
    var _response = (_question) ? true : false;
    return _response;
}
This is one problem that a lot of people have encountered. As user166390 points out in the comments, the bot can just submit information directly to the server, bypassing the JavaScript (see simple utilities like cURL and Postman). Many bots are capable of consuming and interacting with JavaScript now. Hari krishnan points out the use of captchas, the most prevalent and successful of which (to my knowledge) is reCaptcha. But captchas have their problems and are discouraged by the World Wide Web Consortium (W3C), mostly for reasons of ineffectiveness and inaccessibility.
And lest we forget, an attacker can always deploy human intelligence to defeat a captcha. There are stories of attackers paying people to crack captchas for spamming purposes without the workers realizing they're participating in illegal activities. Amazon offers a service called Mechanical Turk that tackles things like this. Amazon would strenuously object if you were to use their service for malicious purposes, and it has the downside of costing money and creating a paper trail. However, there are more, erhm, providers out there who would harbor no such objections.
So what can you do?
My favorite mechanism is a hidden checkbox. Give it a label like 'Do you agree to the terms and conditions of using our services?', perhaps even with a link to some serious-looking terms. But you default it to unchecked and hide it through CSS: position it off-page, put it in a container with zero height or zero width, position a div on top of it with a higher z-index. Roll your own mechanism here and be creative.
The secret is that no human will see the checkbox, but most bots fill forms by inspecting the page and manipulating it directly, not through actual vision. Therefore, any form that comes in with that checkbox value set allows you to know it wasn't filled by a human. This technique is called a bot trap. The rule of thumb for the type of auto-form filling bots is that if a human has to intercede to overcome an individual site, then they've lost all the money (in the form of their time) they would have made by spreading their spam advertisements.
(The previous rule of thumb assumes you're protecting a forum or comment form. If actual money or personal information is on the line, then you need more security than just one heuristic. This is still security through obscurity, it just turns out that obscurity is enough to protect you from casual, scripted attacks. Don't deceive yourself into thinking this secures your website against all attacks.)
The other half of the secret is keeping it. Do not alter the response in any way if the box is checked. Show the same confirmation, thank you, or whatever message or page afterwards. That will prevent the bot from knowing it has been rejected.
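A hypothetical server-side check for that trap, sketched in Express-style JavaScript purely for illustration (the accept_terms field name and the saveComment helper are made up; the checkbox itself is ordinary HTML hidden with CSS as described above):
var express = require('express');
var app = express();
app.use(express.urlencoded({ extended: false })); // parse ordinary form posts

app.post('/comment', function (req, res) {
    if (req.body.accept_terms) {
        // No human ever sees the checkbox, so a checked value means a bot filled the form.
        // Silently discard, but return the exact same thank-you page so the bot can't tell.
        return res.send('Thanks for your comment!');
    }
    saveComment(req.body); // assumed helper that stores the legitimate submission
    res.send('Thanks for your comment!');
});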
I am also a fan of the timing method. You have to implement it entirely on the server side. Track the time the page was served in a persistent way (essentially the session) and compare it against the time the form submission comes in. This prevents forgery or even letting the bot know it's being timed - if you make the served time a part of the form or javascript, then you've let them know you're on to them, inviting a more sophisticated approach.
Again though, just silently discard the request while serving the same thank you page (or introduce a delay in responding to the spam form, if you want to be vindictive - this may not keep them from overwhelming your server and it may even let them overwhelm you faster, by keeping more connections open longer. At that point, you need a hardware solution, a firewall on a load balancer setup).
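A minimal sketch of that timing check as plain functions (the session object, names, and the 3-second threshold are assumptions):
var MIN_FILL_TIME_MS = 3000; // humans rarely fill a form in under 3 seconds

function recordFormServed(session) {
    session.formServedAt = Date.now(); // stored server-side when the form page is rendered
}

function looksLikeBot(session) {
    if (session.formServedAt === undefined) return true;          // never requested the form
    return Date.now() - session.formServedAt < MIN_FILL_TIME_MS;  // submitted implausibly fast
}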
There are a lot of resources out there about delaying server responses to slow down attackers, frequently in the form of brute-force password attempts. This IT Security question looks like a good starting point.
Update regarding Captchas
I had been thinking about updating this question for a while regarding the topic of computer vision and form submission. An article surfaced recently that pointed me to this blog post by Steve Hickson, a computer vision enthusiast. Snapchat (apparently some social media platform? I've never used it, feeling older every day...) launched a new captcha-like system where you have to identify pictures (cartoons, really) which contain a ghost. Steve proved that this doesn't verify squat about the submitter, because in typical fashion, computers are better and faster at identifying this simple type of image.
It's not hard to imagine extending a similar approach to other Captcha types. I did a search and found these links interesting as well:
Is reCaptcha broken?
Practical, non-image based Captchas
If we know CAPTCHA can be beat, why are we still using them?
Is there a true alternative to using CAPTCHA images?
How a trio of Hackers brought Google's reCaptcha to its knees - extra interesting because it is about the audio Captchas.
Oh, and we'd hardly be complete without an obligatory XKCD comic.
Today I successfully stopped a continuous spamming of my form. This method might not always work of course, but it was simple and worked well for this particular case.
I did the following:
I set the action property of the form to mustusejavascript.asp, which just shows a message that the submission did not work and that the visitor must have JavaScript enabled.
I set the form's onsubmit property to a javascript function that sets the action property of the form to the real receiving page, like receivemessage.asp
The bot in question apparently does not handle javascript so I no longer see any spam from it. And for a human (who has javascript turned on) it works without any inconvenience or extra interaction at all. If the visitor has javascript turned off, he will get a clear message about that if he makes a submission.
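A minimal sketch of that action swap (the form id is an assumption; the page names come from the answer):
// In the HTML, the form's action attribute points at mustusejavascript.asp.
document.getElementById('contact-form').onsubmit = function () {
    this.action = 'receivemessage.asp'; // swap in the real receiving page only when JS runs
    return true;
};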
Your code would not prevent bot submissions, but that's not because of how your code is written. The typical bot out there will more likely make an external/automated POST request to the URL (the form's action attribute). Typical bots aren't rendering HTML, CSS, or JavaScript; they read the HTML and act upon it, so any client-side logic will not be executed. For example, cURLing a URL will get the markup without loading or evaluating any JavaScript. One could create a simple script that looks for a <form> and then does a cURL POST to that URL with the matching keys.
With that in mind, a server-side solution to prevent bot submission is necessary. Captcha + CSRF should suffice. (http://en.wikipedia.org/wiki/Cross-site_request_forgery)
No, really, are you still thinking that Captcha or reCaptcha are safe?
Bots nowadays are smart and can easily recognise letters in images using OCR tools (search for it to understand).
I say the best way to protect yourself from automatic form submission is to add a hidden hash, generated (and stored in the current client's session on your server) every time you display the form.
That's all: when the bot or any zombie submits the form, you check whether the given hash equals the hash stored in the session ;)
For more info, read about CSRF!
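A rough sketch of that per-form hash using Node's crypto module (names are made up; in PHP the same idea works with $_SESSION and random_bytes()):
var crypto = require('crypto');

function issueFormToken(session) {
    session.formToken = crypto.randomBytes(16).toString('hex'); // stored server-side
    return session.formToken;                                   // embed in a hidden input
}

function isValidSubmission(session, submittedToken) {
    return Boolean(session.formToken) && submittedToken === session.formToken;
}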
You could simply add a captcha to your form. Since captchas are different each time and rendered as images, bots cannot easily decode them. This is one of the most widely used security measures on websites...
You cannot achieve your goal with JavaScript alone, because a client can parse your JavaScript and bypass your methods. You have to do validation on the server side, e.g. via captchas. The main idea is that you store a secret on the server side and validate the form submitted from the client against that secret.
You could also measure the time taken to register; nobody needs an eternity to fill in a few text boxes!
I ran across a form input validation that prevented programmatic input from registering.
My initial tactic was to grab the element and set it to the option I wanted. I triggered focus on the input fields and simulated clicks on each element to get the dropdowns to show up, then set the value, firing the events for changing values. But when I tried to click save, the inputs were not registered as having changed.
;failed automation attempt because window doesnt register changes.
;$iUse = _IEGetObjById($nIE,"InternalUseOnly_id")
;_IEAction($iUse,"focus")
;_IEAction($iUse,"click")
;_IEFormElementOptionSelect($iUse,1,1,"byIndex")
;$iEdit = _IEGetObjById($nIE,"canEdit_id")
;_IEAction($iEdit,"focus")
;_IEAction($iEdit,"click")
;_IEFormElementOptionSelect($iEdit,1,1,"byIndex")
;$iTalent = _IEGetObjById($nIE,"TalentReleaseFile_id")
;_IEAction($iTalent,"focus")
;_IEAction($iTalent,"click")
;_IEFormElementOptionSelect($iTalent,2,1,"byIndex")
;Sleep(1000)
;_IEAction(_IETagNameGetCollection($nIE,"button",1),"click")
This caused me to rethink how input could be entered, by directly manipulating the mouse's actions to simulate more mouse-like selection behavior. Needless to say, I won't have to manually upload images one by one to update product images for companies. Windows sorts numbers before letters, so my script sits at the end of the directory listing; when the image upload window pops up, I have to use Active Accessibility to get the SysListView from the window and select the 2nd element, which is a picture (the 1st element is a folder), or the first element from a FindFirstFile call that returns only files. I use the name to look the item up in a database of items, then access those items and update a few attributes after uploading the images. Then I move the file from that folder to another folder so it doesn't get processed again, move on to the next first file in the list, and loop until the script name is found at the end of the update.
Just sharing how a lowly data entry person saves time, and fights all these evil form validation checks.
Regards.
This is a very short version that hasn't failed since it was implemented on my sites 4 years ago, with variations added as needed over time. It can be built up with all the variables and if/else statements that you require:
function spamChk() { // assumes the surrounding form is <form name="MyForm">
    var ent1 = document.MyForm.Email.value;
    var str1 = ent1.toLowerCase();
    if (str1.includes("noreply")) {
        document.MyForm.reset();
    }
}
<input type="text" name="Email" oninput="spamChk()">
I had actually come here today to find out how to redirect particular spam bot IP addresses to H E L L .. just for fun
Great ideas.
I removed reCaptcha a while back, converted my contactform.html to contactform.asp, and added this to the top (obviously with some code in between to fulfil a few functions like sending the mail, verifying the form was filled out completely, etc.):
<%
If Request.Form("Text") = "8" Then
    ' dothis: process the submission (send the mail, etc.)
Else
    ' send them to google.com
    Response.Redirect "http://www.google.com"
End If
%>
On the form I stuck a basic text field with the name Text, so it just looks like any other field without specifying what it's for at all. Two lines above it I then stuck some text in red that says to enter what 2 + 6 equals in the box below to submit your request.
In the past, when I've covered events, I've used a meta-refresh with a 5 minute timer to refresh the page so people have the latest updates.
Realizing that this may not be the perfect way to do it (doesn't always work in IE, interrupts a person's flow, restarts things for people with screen readers, etc.), I'm wondering if there's any other way to handle this situation.
Is it possible to have something like ajax check every few minutes if the html file on the server is newer and have it print a message saying "Update info available, click here to refresh"?
If that's crazy, how about a javascript that just counts down from 5 minutes and just suggests a refresh.
If anyone could point me to tutorials or code snippets I'd appreciate. I just play a programmer on TV. :-)
Actually, your thought on a timed Ajax test is an excellent idea. I'm not sure that is exactly what StackOverflow uses, but it checks periodically to see if other answers have been posted and shows the user, on an interval, if there are updates.
I think this is ideal for these reasons:
It's unobtrusive - the reader can easily ignore the update if they don't care
It won't waste bandwidth - no reloading unless the user chooses to
It's informative - the user knows there's an update and can choose to act on it.
My take on how: have the AJAX script send the latest post id to a script that checks for new posts. That script can query your database to see whether there are any new posts and how many there are, and return that number. If there are new posts, show some (non-modal) message including the number of updates, and let the user decide what to do about it.
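A minimal sketch of that polling (the /updates endpoint, response shape, and element id are assumptions):
var latestPostId = 123; // id of the newest post already rendered on the page

setInterval(function () {
    $.getJSON('/updates', { since: latestPostId }, function (data) {
        if (data.count > 0) {
            $('#update-notice')
                .text(data.count + ' update(s) available - click to refresh')
                .show();
        }
    });
}, 300000); // check every 5 minutes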
setInterval(function () {
    if (confirm("It's been 5 minutes. Would you like to refresh?")) {
        window.location.reload(true);
        // Or instead of refreshing the page you could make an ajax call and determine if a newer page exists. If one does, then reload.
    }
}, 300000);
You can use the setInterval function in JavaScript.
Here's a sample:
setInterval(refreshFunction, milliseconds);
You call it passing the function that actually refreshes the page as the first param and the number of milliseconds between refreshes as the second param (300000 for 5 minutes). A third lang parameter also exists but is optional (and an old IE-only extension).
If the user is interacting with the scores and clicking on things, it would be a little rude to just refresh the page on them. I think something like a notification that the page has been updated would be ideal.
I would use jQuery and make an AJAX call to the file on the server (or to something that returns the updated data). If it's newer, then throw up a Growl message.
Gritter - jQuery Growl System
Demo of what a Growl is using Gritter
A Growl message would come up, possibly with whatever was changed (new scores), and then an option within that message to refresh and view the new results.
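A hypothetical sketch combining the polling with a Gritter notification (the endpoint and response shape are assumptions; $.gritter.add is Gritter's own API):
var lastSeenUpdate = 0;

setInterval(function () {
    $.getJSON('/scores/latest', function (data) {
        if (data.updatedAt > lastSeenUpdate) {
            lastSeenUpdate = data.updatedAt;
            $.gritter.add({
                title: 'Scores updated',
                text: 'New results are available - refresh to view them.'
            });
        }
    });
}, 60000); // check once a minute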
jQuery Ajax information