Choosing "unavailable" pickup point in online shop [closed] - javascript

I just discovered a bug in an online shop (call it "N"). The bug is that you can change the HTML code (thanks to "inspect element") and make a previously unavailable pickup point available. As a consequence, I was able to order some items, pay, and even get a confirmation of my order. My question is: how can an owner prevent something like this?
P.S. During ordering I stayed on one web page; there was no redirect to another page or refresh of the current one until payment.
P.P.S. I just want to mention that I'm a total newbie at these "magic" things, so perhaps you can recommend books/web pages etc. where I can read more about "server responses".

As you found out, editing the HTML code of a site and/or modifying the data sent to or from your browser is indeed not too difficult. That's part of how a browser is designed and intended to work, so you'll have to deal with this kind of "hacking" on the server side.
Here's a very superficial (and not complete) list of things to keep in mind when setting up your server and backend application:
Every request from outside ("the client") is potentially malicious or tampered with. → Make sure you use server-side validation for "everything". This may refer to:
Input fields (length, value, format, ...)
Data formats (e.g. correct JSON/XML structure)
User authentication and authorization
Your business rules (this is, I think, the decisive one in your example: everything else was probably valid, but the server side did not re-check the availability of the pickup point you injected; see the sketch after this list)
Thus, never rely on client-side validation (typically JavaScript / TypeScript) alone! You can use it for a better user experience, but the real "hard" validation must take place on the server side.
Depending on the criticality of your site and the sensitivity of the data it handles, consider adding more security by using a Web Application Firewall (WAF), rate limiting, log monitoring and other techniques to identify and block suspicious traffic.
Keep your server software (the operating system with all its libraries, the application server (like Apache / Nginx / WildFly / ...) and the software your site consists of (like a Spring / PHP / Angular / ... application)) up to date. There are tools like Dependabot that help you automate this process. Outdated software and libraries might have known bugs an attacker can exploit.
Try to use standard software, frameworks and mechanisms wherever possible. Modern web frameworks like Spring Boot, Laravel, ... are well maintained, and security issues are found and fixed early. They also have validation and fraud-detection mechanisms built in already; you just have to make use of them. On the other hand, if you try to code your own authorization framework (for example), you'll most likely overlook something and leave a security gap.
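To make the business-rule item above concrete, here is a minimal sketch of re-checking pickup-point availability on the server, assuming a Node.js/Express backend; the route name, the isPickupPointAvailable helper and the payload shape are hypothetical, not taken from the shop in question.

    // Minimal sketch: re-validate the chosen pickup point on the server.
    // Assumes Express; the availability lookup and names are illustrative only.
    const express = require('express');
    const app = express();
    app.use(express.json());

    // Hypothetical lookup against the shop's own data store.
    async function isPickupPointAvailable(pickupPointId) {
      // e.g. SELECT available FROM pickup_points WHERE id = $1
      return false; // placeholder: replace with a real database lookup
    }

    app.post('/checkout', async (req, res) => {
      const { pickupPointId } = req.body;

      // Never trust what the browser claims: re-check the business rule here.
      if (!(await isPickupPointAvailable(pickupPointId))) {
        return res.status(400).json({ error: 'Pickup point is not available' });
      }

      // ...continue with payment / order creation only after validation passes.
      res.json({ status: 'ok' });
    });

    app.listen(3000);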

Related

Somebody help me answer: why do we use script in ASP.NET? [closed]

When I use ASP.NET to code a website, ASP.NET on the server calls SQL Server (via ADO.NET, LINQ or Entity Framework), gets the data back and sends it to the client side. I use controls such as GridView to show the data. I also do web optimization (SQL Server: stored procedures, indexes, partitioning => faster data access; website: simple UI, not too many effects).
But why, on the client side, when the server returns data to the client, do so many people use scripts (such as JavaScript, jQuery, Node.js, AngularJS, Bootstrap, React, google o/i...) to show it on the web page?
So, is it slower or faster when we use GridView?
And users (people on the client side) can stop scripts in their browser; browser makers allow or offer users the option to disable scripts on the client side. So why do we use them (*.js) when the user can stop the script?
Even when people use ASP.NET (the new 2013 version) on the server, they also use scripts there. So is ASP.NET + script faster or slower than using only ASP.NET?
Please help me answer this.
(I apologize; my English is not good.)
Thank you so much.
In the early years of Web development, JavaScript on the client side provided considerable enhancement of the client's "user experience" that static HTML delivered from the server did not. This includes such things as enabling or disabling certain interface features based on user input, or showing or hiding certain regions of a display based on user input or on a combination of other pieces of data.
As web development evolved, the need for even more robust client-side interaction with back-end web servers became evident, and the "frameworks" you mentioned all work in various ways to improve the design, responsiveness, and behavior of a web-based application in ways beyond just enabling or disabling a button. This amounts to complex data binding, callbacks to web services, reducing server round-trips, and creating rich client interfaces, to name only a few.
They're all tools, each with their own role, each working to make web applications a bit more robust than those of the generation before them.
If I understand your question right, the answer comes down to speed and preference.
Firstly, if you disable client-side JavaScript, your ASP.NET controls aren't really going to work anyway. You'll find few places that still disable it, so it's not really a concern people have anymore.
Secondly, it comes down to where you want to focus development effort and what kind of developers you have. If you have a lot of people used to working backend (C#) and want to stay there, then using asp.net controls and the like make development easier.
If you have javascript developers or people who want to use it, then you have more options that allow you to more decouple your server-side code from your front-end code. This can work out well for maintenance purposes.
The real point is that if you can use AJAX (http://www.w3schools.com/ajax/default.asp) within your web application, you can make it a lot more responsive. ASP.NET controls can often cause your page to refresh and trigger unnecessary server-side computing to get the data and re-render the entire page (or a partial page with ASP.NET MVC). Using newer technologies like Angular and the others you listed, you can focus data computation and network traffic only on what's important.
For example, if you need a table to change what data is loaded, you can make an ajax request JUST for the data you need to load and then just render that portion on the client.
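As an illustration, here is a minimal sketch of that kind of targeted AJAX request using the browser's fetch API; the /api/table-data endpoint, the table markup and renderRows() are made up for the example.

    // Sketch: fetch only the data the table needs, then render just that part client-side.
    // The endpoint URL, table id and renderRows() are hypothetical placeholders.
    async function reloadTable(page) {
      const response = await fetch('/api/table-data?page=' + page);
      if (!response.ok) throw new Error('Request failed: ' + response.status);
      const rows = await response.json();
      renderRows(rows); // update only the table body, not the whole page
    }

    function renderRows(rows) {
      // Assumes the API returns already trusted/escaped values.
      const tbody = document.querySelector('#my-table tbody');
      tbody.innerHTML = rows
        .map(r => '<tr><td>' + r.name + '</td><td>' + r.price + '</td></tr>')
        .join('');
    }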
First of all, every "script" you mentioned (jQuery, AngularJS, Bootstrap, React) is a library written in JavaScript, except Node, which isn't even front-end. And I'm not sure what you meant by "google o/i". JavaScript is currently the only language that works in all browsers.
The initial purpose of JavaScript was to check form values before sending data to the server. It quickly evolved past that, although its usage was throttled by browser adoption, which is still a problem today.
Nowadays we can use JavaScript to render whole web pages. When the page first opens, it can help with rendering, meaning the server doesn't have to do all the work and can just send plain data, usually as JSON. It's also used to add content to the page later, without reloading it (AJAX). The best-known examples are real-time chat systems, like the one on Facebook. This greatly improves the user experience; I can't imagine how terrible it would be if the whole page reloaded to display a single new message.
A user can disable JavaScript in their browser, which usually means the page won't work unless there is a fallback design for such cases, but I don't know why anyone would disable it. To be honest, most regular users probably don't even know this can be done or where the setting to disable JavaScript is.

Third-party cookie method becoming obsolete to track users - what is next? [closed]

Nowadays, the people and companies behind browser development are taking privacy seriously. They try to implement new security measures or simply change default browser behaviours that have been around for a long time and are now considered harmful to privacy.
One example of this is third-party cookies. While IE requires a P3P policy to be sent when setting a cookie from a third-party domain, other browsers block these cookies by default, or encourage the user to activate such a blocking option.
Also, if we think about extensions that help to prevent tracking (AdBlock, Ghostery...), it is getting more and more difficult to track users (whether for legitimate reasons or not).
As a developer, I found that there are some workarounds, such as ETag, although as you may know already, there are ways to prevent this type of tracking. Local Storage, available in most modern browsers (the ones that support HTML5 + enabled JS), is another way to accomplish this.
I would like to ask which method you find better and why. I feel that Local Storage could be the best replacement for third-party cookies, as it stores persistent data (it is not cleared when the browser is closed) and it works in the vast majority of browsers, though still a smaller percentage than cookies. A LocalStorage-with-fallback-to-cookies approach seems best to me, but I would like to hear more opinions.
localstorage isn't getting the same heat as cookies simply because it's a "newer" technology. Give it time and I guarantee you it will end up being blocked/removed the same way cookies are being blocked/removed.
So far first-party cookies are relatively safe, though ultimately scripts like GA still make requests to the GA server, and as you said, there are many plugins/extensions/addons that block them.
But IMO the future will be in server-side tracking solutions. For example, when you go to a web page, that's a request to the server, and lots of basic info can already be grabbed from it. The JavaScript library would then send (AJAX) requests to the same server, not to the third-party tracking server. All of this data would then be forwarded to the third-party tracking vendor (e.g. GA, Adobe Analytics, etc.) by a server-side script.
Many tracking scripts offer server-side solutions already, but they're little more than an API with (often) vague documentation, since going this route isn't as popular. So I think there will be a lot of development to more easily handle payloads from the client and make server-side requests, making it almost as easy to implement as the current JS version.
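A rough sketch of that first-party collection pattern, assuming an Express server and Node 18+ (for the global fetch); the /collect route, the vendor URL and the payload shape are invented for illustration.

    // Rough sketch: collect analytics hits first-party, then forward them server-side.
    // Assumes Express and Node 18+ global fetch; vendor URL and payload are hypothetical.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/collect', async (req, res) => {
      const hit = {
        ...req.body,                       // whatever the page's JS sent via AJAX
        ip: req.ip,                        // basic info the server already knows
        userAgent: req.get('user-agent'),
      };

      // Forward to the third-party vendor from the server, not from the browser.
      await fetch('https://vendor.example.com/ingest', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(hit),
      });

      res.status(204).end();
    });

    app.listen(3000);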
The main sticking point is tying the info to a single visitor. That's the most important part of tracking cookies: a visitor ID that ties all the activity together. Thing is, the alternatives (using combinations of IP and header info) aren't that far behind the accuracy of cookies once you account for cookies being blocked, so it's not a complete loss to not rely on cookies in the first place. But I think this will also have the effect of more and more websites enforcing a login before a visitor does anything meaningful on their site. This allows them to use your login ID as the visitor ID and would actually stand to increase accuracy.
But overall, it's more important to look at the trends in the numbers, not the actual numbers, and from that point of view it's even less of a big deal. Unfortunately, a lot of people forget or don't understand this point.

Why do we need both client side and server side validation? [closed]

One argument for using both client side validation (JavaScript) and server side validation using a validator is that if the client browser does not support JavaScript or JavaScript has been turned off deliberately, then client side validation is rendered useless.
My question is how good is this argument in practice? In theory it makes sense, but in practice, if JavaScript is disabled in the browser, then most website features will not even work. The user probably cannot even load the page without JavaScript, let alone submit a form.
Client-side validation just keeps the client from going "but I filled this all in and it didn't tell me anything!". It's not actually mandatory, and in reality client-side validation is a fairly new thing (read: 5 years old or less). In practice, all it does is let your client (with JS enabled) know whether the form is okay before the page reloads.
If AJAX is in the game, it is different: it allows you to save bandwidth as well as to provide the user with feedback before submission.
Finally, if you're building strictly client-side, peer-to-peer exchange apps (think games), you'll want client-side validation to keep the clients from cheating.
Server-side validation is also crucial due to the fact that client-side validation can be completely bypassed by turning off JavaScript. In a way, JS-driven validation is a convenience and an aesthetic/cosmetic improvement and should not be relied upon. Furthermore, it is trivial to edit the source of a page locally in order to disable or bypass even the most complex of JS validation.
What could a user do if you do not server-side validate? Anything, depending on how you use their data. You could be allowing users to drop entire databases (or worse, leak them), modify anything they like (or worse, read anything they like. Directory traversal flaws are extremely common entrance points for naughty people), and elevate their privileges at will. Do you want to run this risk? Not validating user input is like trusting people and not installing locks on your house.
Validation should always be performed server-side - you can never trust client-side validation.
Client-side validation is always about providing a better user experience (UX), so the user doesn't have to submit and reload a page simply because a value in a form isn't valid; it makes things more dynamic.
As you don't even need a browser to make requests, regardless of whether your website relies on JS to work properly, you will need server-side validation and must sanitize all user input if you care about not having your databases pwned.
It is then up to you whether you want to provide a UI with dynamic client-side validation hints or not.
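As a small illustration of the UX side only, here is a sketch of inline client-side feedback (the element IDs and the email rule are made up); the server must still re-validate the same value on submit.

    // Sketch: instant client-side feedback only; the server re-checks on submit.
    // Element IDs and the validation rule are illustrative.
    const emailInput = document.querySelector('#email');
    const hint = document.querySelector('#email-hint');

    emailInput.addEventListener('input', () => {
      const looksValid = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(emailInput.value);
      hint.textContent = looksValid ? '' : 'Please enter a valid email address';
    });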
Always protect your inputs on the server. It's not always about users having JavaScript disabled but also that they could break the server.
For example, if a site has a JavaScript max-length check on an <input>, a user could disable that check and thereby send more data than your server and/or database is expecting. This could overload the server with a large POST that occupies a server thread for a long time, or it could expose a weakness in the database, for example by violating a database constraint and potentially exposing details about the persistence layer. Worse, if there is no constraint, a user might be able to perform injection attacks.
Another example is someone using an external HTTP tool to send requests to your server, completely bypassing any JavaScript. I use the Advanced REST Client for Chrome all the time in development for testing JSON APIs.
Client-side validation through JavaScript is just a way of giving the person using the site quicker feedback about their interaction with it. In traditional client-server communication it should not be the only validation, for the reasons outlined above; a minimal server-side sketch follows.
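To make the max-length example concrete, here is a minimal server-side sketch, assuming Express and the node-postgres (pg) client; the route, table and column names are invented, and the 500-character limit is just an example.

    // Minimal sketch: enforce the same max-length rule on the server and use a
    // parameterized query so oversized or malicious input never reaches the SQL text.
    // Assumes Express and node-postgres (pg); table/column names are invented.
    const express = require('express');
    const { Pool } = require('pg');

    const app = express();
    const pool = new Pool(); // connection settings come from environment variables
    app.use(express.json());

    app.post('/comments', async (req, res) => {
      const text = String(req.body.text || '');

      // Mirror the client-side maxlength check; the client check alone is bypassable.
      if (text.length === 0 || text.length > 500) {
        return res.status(400).json({ error: 'Comment must be 1-500 characters' });
      }

      // Parameterized query: the value is passed separately from the SQL string.
      await pool.query('INSERT INTO comments (body) VALUES ($1)', [text]);
      res.status(201).end();
    });

    app.listen(3000);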
If a user has disabled JavaScript, that is their own problem; they alone decided to disable it, for whatever reason. Still, when you build a website you must always keep in mind that it should work for users with and without JavaScript. Validation on both sides is needed for a number of reasons, some of which are:
The user has disabled JavaScript
A malicious user has removed the JavaScript on purpose in order to exploit the system
With JavaScript validation you reduce the data traffic between the website and the client
And of course, with server validation you make sure once and for all that the data is correct
It is possible to have a website that uses both JavaScript and "older" technologies and is valid for every user and every browser.
Client-side validation is a solution for highly interactive forms with on-the-fly field validation, but it will not prevent an ill-intentioned user from injecting and posting invalidly formatted data to the server. It is important that your server-side script validates everything the user is doing; otherwise you will expose your site to SQL injection attacks, XSS attacks, users doing things they are not supposed to, etc.

How might I block an IP address using JavaScript or jQuery? [closed]

I use the www.Spruz.com network for my website. I am not looking to use PHP or SSL blocking methods, because as things stand I can't. I can't seem to find JavaScript or jQuery code that could hide a DIV element or redirect without having to use PHP or SSL methods. I am getting hit by foreign spammers/advertisers who are hitting up my messengers and so on. I need to block a few IPs, but I am lost. How might I achieve my goal?
I suggest you use server-side scripting for this. I think you are misunderstanding what JavaScript actually does. JavaScript is a client-side scripting language, which means it runs on the client machine. So although you can hide the div, simply changing the CSS properties will reveal everything (a regular user wouldn't do that, but you can't say anything about a malicious user).
Ideally you'd get access to your PHP or whatever server side language so you could run IP blocks from there. If you can't, you can try the solution here to get the IP via JavaScript:
How to get client's IP address using javascript only?
But the best solution is probably to rebuild whatever form is being abused with either reCAPTCHA or some other spam blocking script.
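For completeness, this is roughly what such a client-side approach looks like, assuming a public IP-lookup service such as api.ipify.org and a made-up #messenger element; as the answers stress, it only hides a widget cosmetically and is trivially bypassed, so treat it as a sketch rather than a real defence.

    // Sketch only: cosmetic client-side "blocking". A spammer can simply disable
    // this script, so it is no substitute for server-side blocking.
    const blockedIps = ['203.0.113.5', '198.51.100.7']; // example addresses

    fetch('https://api.ipify.org?format=json')
      .then(r => r.json())
      .then(({ ip }) => {
        if (blockedIps.includes(ip)) {
          const box = document.querySelector('#messenger');
          if (box) box.style.display = 'none'; // only hides the widget visually
        }
      })
      .catch(() => { /* lookup failed; do nothing */ });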
Well, before trying to figure this out, just for grins I fed "spruz web blocking" into Google. The very first hit was http://my.spruz.com/forums/?page=post&id=9B8BE09C-1D18-4C9D-8510-C1D3035BDA44&lastp=1 ... it looks like it explains in detail exactly how to do what you want on the server side. I haven't actually tested it or checked in detail whether they implemented it in what I would consider the correct way, but it seems worth a shot.
As others have expressed, client-side blocking won't do more than slightly annoy serious spammers. The logic is to a) send them the requested pages then b) have the pages recognize they're inside a hostile environment and self-destruct. Once the spammers already have the pages (a), they can take a quick snapshot before the pages self-destruct (b), and do whatever they wish. (In fact they'd likely use something like 'wget' to fetch the raw page and skip page execution and display altogether no matter what you do.) Correctly-coded server-side solutions, on the other hand, never send the pages to that IP address in the first place, so the only thing they can do is try to fool you by pretending to be somebody else.
(There's a pretty standard way to do this sort of thing pretty easily with an 'apache' web server, but I see that's not relevant on spruz.)

Javascript Distributed Computing [closed]

Why aren't there any Javascript distributed computing frameworks / projects? The idea seems absolutely awesome to me because:
The Client is the Browser
Iteration can be done with AJAX
Webmasters could help projects by linking the respective Javascript
Millions or even billions of users would help DC projects without even noticing
Please share your views on this subject.
EDIT: Also, what kind of problems do you think would be suitable for JSDC?
GIMPS for instance would be impossible to implement.
I think Web Workers will soon be used to create distributed computing frameworks; there are some early attempts at this concept. Non-blocking code execution could have been done before using setTimeout, but it made little sense, as most browser vendors only recently focused on optimizing their JS engines. Now we have faster code execution and new features, so running some tasks in the background without the user noticing, as we browse the web, is probably just a matter of months ;) A minimal sketch of the Web Worker part is below.
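A minimal sketch of the Web Worker part, with a made-up unit of work; a real framework would also fetch tasks from, and report results to, a coordinating server.

    // main.js - sketch: run a chunk of work off the main thread via a Web Worker.
    const worker = new Worker('worker.js');
    worker.onmessage = (event) => {
      console.log('partial result:', event.data);
      // a real framework would POST this back to a coordinating server here
    };
    worker.postMessage({ start: 0, end: 1e7 }); // made-up work unit

    // worker.js - the page stays responsive while this runs in the background.
    onmessage = (event) => {
      const { start, end } = event.data;
      let sum = 0;
      for (let i = start; i < end; i++) sum += Math.sqrt(i); // placeholder computation
      postMessage(sum);
    };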
There is something to be said for 'user rights' here. It sounds like you're describing a situation where the webmaster of Foo.com includes the script for, say, Folding@home on their site. As a result, all visitors to Foo.com have some fraction of their CPU "donated" to Folding@home until they navigate away from Foo.com. Without some sort of disclaimer or opt-in, I would consider that a form of malware and avoid visiting any site that did that.
That's not to say you couldn't build a system that asked for confirmation or permission, but there is definite potential for abuse.
I have pondered this myself in the context of item recommendation.
First, there is no problem with speed! JIT compiled javascript can be as fast as unoptimized C, especially for numeric code.
The bigger problem is that running javascript in the background will slow down the browser and therefore users may not like your website because it runs slowly.
There is obviously an issue of security, how can you verify the results?
And privacy, can you ensure sensitive data isn't compromised?
On top of this, it's quite a difficult thing to do. Can the number of visits you receive justify the effort that you'll have to put into it? It would be better if you could run the code transparently on either the server or client-side. Compiling other languages to javascript can help here.
In summary, the reason that it's not widespread is because developers' time is more valuable than server time. The risk of losing user data and the inconvenience to users outweighs the potential gains.
The first thing that comes to my mind is security. Almost all distributed protocols that I know of use encryption, which is how they mitigate security risks. This subject is not that new, though:
http://www.igvita.com/2009/03/03/collaborative-map-reduce-in-the-browser/
Also, Wuala is a distributed system that is implemented using a Java applet.
I know of pluraprocessing.com doing something similar. I'm not sure if it's exactly JavaScript, but they run Java through the browser and it runs entirely in memory with strict security.
They have a grid of 50,000 computers on which they have successfully run applications, even things like web crawling (80legs).
I think we can verify results for some kinds of problems.
Let's say we have n items and need to sort them. We give the job to worker-1, and worker-1 gives us back the result. We can verify it in O(n) time, while producing the result takes at least O(n*log(n)) time. We should also consider how large the n items are (a network-speed concern).
Another example: f(x) = 12345 and the function f is given; the goal is to find a value of x. We can test a worker's result simply by substituting it for x. Problems whose results are not easily verifiable are, I think, difficult to hand out to someone else. A small sketch of the sorting check is below.
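A small sketch of that check-the-result idea, verifying a worker's claimed sort in roughly O(n); the input is assumed to be known to the coordinator.

    // Sketch: verify a worker's claimed sort result in roughly O(n).
    // Checks ordering and that the result is a permutation of the input (via counts).
    function isValidSort(input, result) {
      if (input.length !== result.length) return false;
      const counts = new Map();
      for (const x of input) counts.set(x, (counts.get(x) || 0) + 1);
      for (let i = 0; i < result.length; i++) {
        if (i > 0 && result[i] < result[i - 1]) return false; // not sorted
        const c = counts.get(result[i]);
        if (!c) return false;                                 // element not in input
        counts.set(result[i], c - 1);
      }
      return true;
    }

    console.log(isValidSort([3, 1, 2], [1, 2, 3])); // true
    console.log(isValidSort([3, 1, 2], [1, 2, 4])); // false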
The whole idea of JavaScript distributed computing has a number of disadvantages:
single point of failure - there is no direct way to communicate between nodes
natural failure of nodes - every node only works as long as the browser stays open
no guarantee that a message sent will ever be received - a consequence of nodes failing naturally
no guarantee that a message received was ever actually sent - because some hacker can interpose
annoying load on client side
ethical problems
while there is only one (but very tempting) advantage:
easy and free access to millions of nodes - almost every device has a JS-supporting browser nowadays
However, the biggest problem is the correlation between scalability and annoyance. Let's say you offer some attractive web service and run computations on the client side. The more people you use for computing, the more people are annoyed; the more people are annoyed, the fewer people use your service. Well, you can limit the annoyance (the computing), limit scalability, or aim for something in between.
Consider Google, for example. If Google ran computations on the client side, some people would start using Bing. How many? That depends on the annoyance level.
The only hope for JavaScript distributed computing may be multimedia services. As long as they already consume lots of CPU, nobody will notice any additional load.
I think the no. 1 problem is JavaScript's inefficiency at computation. It just wouldn't be worth it, because an application in pure C/C++ would be 100 times faster.
I found a question similar to this a while back, so I built a thingy that does this. It uses web workers and fetches scripts dynamically (but no Eval!). Web workers sandbox the scripts so they cannot access the window or the DOM. You can see the code here, and the main website here
The library has a consent popup on first load, so the user knows what's going on in the background.
