My ad blocker just blocked an unknown Google Tag Manager request initiated from my vendor's chunk.
Is it common practice to have tracking in dependencies, and what kind of data is it possible to extract from my website using Google Tag Manager?
Should I even bother?
Having some sort of tracking in libraries isn't common practice, but it isn't unheard of. I've seen it less since GDPR and similar laws were introduced, however. But that kind of tracking is usually very specific: maybe some usage stats are sent to an endpoint, or maybe Google Analytics is embedded, or something similar.
Google Tag Manager, on the other hand, can be used to inject almost anything into your site. They could inject a crypto miner, they could take all information from the current (possibly logged-in) page and send it to wherever, they could take actions on behalf of the user, redirect users to another page, etc. Basically this is a backdoor into your site that might look harmless now but might do something completely different tomorrow, so I really wouldn't trust it.
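To make that concrete, here is a simplified version of the standard GTM loader snippet (the container ID is a placeholder). A single script tag fetches a remotely configured container, and whatever tags are set up in that container run with full access to the page:

<script>
  // Simplified GTM loader: one tag pulls in a remotely configured container.
  (function (w, d, s, l, i) {
    w[l] = w[l] || [];
    w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
    var f = d.getElementsByTagName(s)[0],
        j = d.createElement(s);
    j.async = true;
    j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i + '&l=' + l;
    f.parentNode.insertBefore(j, f);
  })(window, document, 'script', 'dataLayer', 'GTM-XXXXXXX');
</script>

The container's contents are controlled by whoever owns the GTM account, not by you, and can change at any time without any change to your code.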
I'm creating a personal website and learning to implement a service worker that takes advantage of the Push API, so that I can notify myself, and anyone else looking at the site, that I've updated the website whenever I deploy. I figured out that the PushSubscription provides you with a capability URL, and that I would need to store that somewhere in order to push a message to that URL.
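Roughly what that step looks like in client-side JavaScript, as a minimal sketch (vapidPublicKey and the /save-subscription endpoint are placeholders I'm assuming here):

// Subscribe to push and ship the capability URL to a backend.
navigator.serviceWorker.register('/sw.js')
  .then(function (reg) {
    return reg.pushManager.subscribe({
      userVisibleOnly: true,
      applicationServerKey: vapidPublicKey // your VAPID public key as a Uint8Array (placeholder)
    });
  })
  .then(function (sub) {
    // sub.endpoint is the capability URL in question.
    return fetch('/save-subscription', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(sub)
    });
  });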
That's where my question comes in. Do I need a privacy policy if I'm going to be storing these capability URLs in a database? If so, can I somehow only allow my website to store my own capability URL instead of storing others'?
Your question would likely work better at law.stackexchange, though let me say this: I work on privacy policies as a service (at iubenda), and one of the things you learn is that privacy policy requirements are usually triggered a lot sooner than most people think.
So in theory, in many jurisdictions privacy policies are a requirement whenever your site is in any way commercial (this could be a personal site as well, if it has any connection to your job) and processes any personal data (which could be as little as the IP address that gets automatically processed for connection purposes). Or, take what most people do on any site they make - Google Analytics - as an example: Google requires you to have a compliant privacy policy (check under 7).
Therefore let me give you this rule of thumb: having a privacy policy is rarely a bad idea from a requirement standpoint and it doesn't usually start with push notifications.
Also, you might feel like reading up a bit on it when curiosity strikes:
California
UK
I've read about Firebase and it looks awesome for what I want to do.
I've read about authentication and how, based on rules, certain logged-in users are authorized to do different things. All good.
However, I'm unsure about another type of security: how do I make sure that only my own site (using client-side JavaScript) can talk to my Firebase backend? I'm asking because afaik there's no way to prevent anyone from looking up my Firebase endpoint in the client-side code (the URL pointing to my specific Firebase backend) and then using it for god knows what.
This is especially worrisome in situations in which I want to open up writes to the anonymous user role. (e.g: some analytics perhaps)
Any help in clearing my mind on this much appreciated.
Update (May 2021): Thanks to the new feature called Firebase App Check, it is now actually possible to limit calls to your backend service to only those requests coming from iOS, Android and Web apps that are registered in your Firebase project.
You'll typically want to combine this with the user authentication based security that Kato describes below, so that you have another shield against abusive users that do use your app.
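For reference, wiring up App Check in a web app is only a few lines with the v9 SDK. A sketch, assuming reCAPTCHA v3 as the attestation provider (the config object and site key are placeholders):

import { initializeApp } from "firebase/app";
import { initializeAppCheck, ReCaptchaV3Provider } from "firebase/app-check";

const app = initializeApp({ /* your Firebase config */ });

// Backend requests without a valid attestation token are rejected.
initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider("your-recaptcha-v3-site-key"), // placeholder
  isTokenAutoRefreshEnabled: true
});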
In my opinion, this isn't so much a question about Firebase's security as a general discussion of the internet architecture as it stands today. Since the web is an open platform, you can't prevent anyone from visiting a URL (including to your Firebase) any more than you can prevent someone from driving past your house in the real world. If you could, a visitor could still lie about the site of origin and there is no way to stop this either.
Secure your data with authentication. Use the Authorized Domains in Forge to prevent CSRF. Put security rules in place to prevent users from doing things they should not. Most data writes you would use a server to prevent can be accomplished with security rules alone.
This is actually one of the finer qualities of Firebase and API services in general. The client is completely isolated and thus easily replaced or extended. As long as you can prove you're allowed in, and follow the rules, where you call in from is unimportant.
Regarding anonymous access, if you could make them visit only from your site, that still won't stop malicious writes (I can open my JavaScript debugger and write as many times as I want while sitting on your site). Instead, place tight security rules on the format, content, and length of data writable by anonymous users, or save yourself some time and find an existing service to handle your analytics for you, like the ubiquitous Google Analytics.
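As an illustration, "tight rules" for an anonymous analytics path could look something like this in Realtime Database security rules (a sketch; the analytics path and the length limit are made up for the example):

{
  "rules": {
    "analytics": {
      "$event": {
        // Create-only: an existing entry can't be overwritten.
        ".write": "!data.exists()",
        // Only short, well-formed string entries are accepted.
        ".validate": "newData.isString() && newData.val().length < 200"
      }
    }
  }
}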
You can, of course, use a server as an intermediary as you would with any data store. This is useful for some advanced kinds of logic that can't be enforced by security rules or trusted to an authenticated user (like advanced game mechanics). But even if you hide Firebase (or any database or service) behind a server to prevent access, the server will still have an API and still face all the same challenges of identifying clients' origins, as long as it's on the web.
Another alternative to anonymous access is to use custom login, which would allow a server to create its own Firebase access tokens (a user wouldn't necessarily have to authenticate for this; the signing of the tokens is completely up to you). This is advantageous because, if the anonymous user misbehaves, the access token can then be revoked (by storing a value in Firebase which is used by the security rules to enforce access).
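With today's tooling, the server side of that is a one-liner against the Admin SDK. A sketch of the modern equivalent of custom login (the uid scheme is arbitrary):

const admin = require("firebase-admin");
admin.initializeApp();

// Mint a token server-side; the client signs in with signInWithCustomToken(token).
admin.auth().createCustomToken("anon-" + Date.now())
  .then(function (token) {
    // Deliver the token to the client however you like (e.g. in the page payload).
  });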
UPDATE
Firebase now has anonymous authentication built into simple login, no need to use custom login for common use cases here.
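In the current SDK that is a single call (sketch):

import { getAuth, signInAnonymously } from "firebase/auth";

// Creates a throwaway user that security rules see as auth != null.
signInAnonymously(getAuth());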
In my site I have:
...
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
...
The script above is the Google script to load up other resources dynamically.
(eg Google charts API)
This works 99.99% of the time.
However, I just got a client whose company, for some reason, restricts access to google.com.
As a consequence of this my website simply threw a JavaScript error.
Now I know how to handle that: I can check whether window.google exists. But my question is:
"What's the standard way to deal with this?"
In other words, if you embed third-party JavaScript, how best do you deal with their JS not being available?
NOTE: VERY IMPORTANT
You can not host the chart code locally or on an intranet.
SEE FAQ from Google: https://developers.google.com/chart/interactive/faq#localdownload
Can I download and host the chart code locally, or on an intranet?
Sorry; our terms of service do not allow you to download and save or host the google.load or google.visualization code.
There is no real alternative. Due to Google's terms of service, you cannot use the Google API without access to google.com. Your options:
Check the connection to Google and inform the user that the function is not available.
Develop your own or use a non-Google API. You can still use Google when it is available.
The solution is that your client's company review their content filtering policies. Google are quite clear in their previous answer concerning offline access:
…your computer must have live access to http://www.google.com/jsapi in order to use charts.
You are using a third-party solution according to their terms and conditions, which naturally imposes limits on how that solution may be used by your clients. You need to stand firm or find a more liberally-licensed solution. (At any rate, you are more likely to succeed at convincing your client's IT department than petitioning Google to change their TOS.)
For the more general case of third party JS APIs that may not load but for which you are allowed to keep a local copy on your server, see this question.
You can try it like this:
Instead of using the direct link to the Google libraries you want to use, use a link which points to your server:
<script type="text/javascript" src="https://www.myserver.com/jsapi"></script>
When your server gets an incoming request to this URL, your server now makes a request to Google to get the API and sends the response to the client.
That means you do not install the API locally or on a server, and you always get the most recent version directly from Google. People also do not need access to Google (as in the company you mentioned) and can therefore still use your service.
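A minimal sketch of such a pass-through proxy, in Node with Express (the /jsapi route mirrors the URL above; this assumes Node 18+ for the built-in fetch):

const express = require("express");
const app = express();

// Relay the client's request to Google and hand the response back.
app.get("/jsapi", async (req, res) => {
  const upstream = await fetch("https://www.google.com/jsapi");
  res.type("application/javascript").send(await upstream.text());
});

app.listen(3000); // put this behind your TLS-terminating web server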
Use Firebug or the Chrome Dev Tools to inspect your HTML source once the chart scripts are loaded. Access the scripts in your browser and save them locally, then serve them from your own server. This isn't recommended, of course, but if you don't have any other choice...
For example, checking the code of one of the pages I use it on, the core script for the Google Charts library is located at:
https://www.google.com/uds/api/visualization/1.0/3d781368978b51b3ca00a01566dccf40/format+en,default,corechart.I.js
Use the JavaScript window.onload event to check whether the API has loaded or not; if not, load it from your server.
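Something like this (a sketch; the fallback URL is a placeholder for wherever you mirror or proxy the script):

window.onload = function () {
  if (typeof window.google === "undefined") {
    // The Google loader never arrived; fall back to your own server.
    var s = document.createElement("script");
    s.src = "https://www.myserver.com/jsapi"; // placeholder fallback URL
    document.head.appendChild(s);
  }
};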
You already know how to check whether or not your library has been loaded (by checking the object). If it fails, here is what you can do within the given constraints:
Keep checking the object with a timer while trying to download the library, displaying a message for the user (see the sketch after this list)
In case the first one fails, you have two ways again:
Stopping your application and displaying an error: "Application error... try later"
Or downloading a different library as a fallback
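A sketch of that polling approach (drawCharts and showError are placeholders for your own rendering and error-display code):

// Poll for the library, give up after roughly 10 seconds.
var tries = 0;
var timer = setInterval(function () {
  if (window.google && window.google.visualization) {
    clearInterval(timer);
    drawCharts(); // placeholder: render your charts
  } else if (++tries > 20) {
    clearInterval(timer);
    showError("Application error... try later"); // placeholder: or load a fallback library
  }
}, 500);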
Are you progressively enhancing or gracefully degrading the page? If so, what do you display to users without JavaScript for this chart? A table? A list? That is what you should leave in the page, and only start changing it once Google's JS is available. Either that, or find an alternative library like raphaeljs that lets you keep all your code within your project.
IF (BIG IF) you are not worried about the interactivity of the Google Charts and just want to display them for the user to see - maybe adding your own JavaScript, but not depending on the Google JavaScript at all - you can turn the Google charts into an image that you can display to the user.
Also this requires access to install a command line tool on the server.
http://code.google.com/p/wkhtmltopdf/ is a command line tool that will generate an image from an html page. If you build a simple page that only shows the chart you want and point the wkhtmltoimage tool at the local html file it will load the Google Charts javascript and generate the chart then generate an image out of the results.
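The basic invocation is a one-liner (chart.html and chart.png are placeholder file names):

wkhtmltoimage chart.html chart.png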
YES I understand this is VERY kludgy and is adding a big tool for a small problem but with the browser restriction and the Google Terms of Service this will solve most of the problem.
You can try going straight to Google, and if that fails (if Google is restricted), you can bounce the request off of your server, which forwards the request to Google using cURL. If that doesn't work, then Google is most likely down. This should cover the issue that you described in your question, but there isn't really a fix for Google itself actually going down. It should, however, give your application a way around domain restrictions, because the request will be routed to your server rather than straight to Google.

I use this architecture for all requests so that I don't have AJAX requests routed to random servers. It allows me to control what interacts with my front end using my back end. There are other benefits to this, especially if you are using something like AngularJS with NodeJS, because you can decouple a lot of your third-party libraries. That, however, is beyond the scope of your question!
Basically, it works like this (pseudo code):
If(!Browser->Google->Browser){
return Browser->MyServer->Google->MyServer->Browser;
}
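In real client-side code, that first leg could look like this (a sketch; /proxy/jsapi is a made-up endpoint on your server that fetches google.com/jsapi with cURL and relays it):

var s = document.createElement("script");
s.src = "https://www.google.com/jsapi";
s.onerror = function () {
  // Direct route blocked; bounce the request through your own server instead.
  var fallback = document.createElement("script");
  fallback.src = "/proxy/jsapi"; // placeholder server-side relay
  document.head.appendChild(fallback);
};
document.head.appendChild(s);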
An answer has been accepted already, but I would still like to add one aspect, elaborating on the comment I made above.
It has been accepted that the Google Server is the only place from where the API can be loaded. We don't know whether the client's IT manager will re-think their content policy, they might have good reasons for that.
Given a non-100% availability of all the components along the path between a user browser and the Google API, sooner or later a user will end up in an error situation; statistically this is unavoidable.
What is not acceptable (and avoidable) for a user is to receive an "unspecific" JS error making him/her believe there's a bug on the page. So my solution would be to trap the failure loading the Google API and display a message "Third party components temporarily unavailable - Please try later".
This will demonstrate to the user that
we know what's going on
there's nothing we can do about it now
but it's not totally unexpected and still somehow under control
I am using AJAX on my website, and in order to use the AJAX I have to write the name of the file, for example:
id = "123";
$.getJSON(jquerygetevent.php?id=" + id, function(json)
{
//do something
});
How can I protect the URL? I don't want people to see it and use it...
That is a limitation of using client-side scripts: there is no real way to hide the URL from the user. There are many ways to make it less readable (minification etc.), but in the end an end-user can still view the code.
Hi Ron, and welcome to the internet. To quote Wikipedia on the internet's origins:
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
Because of these origins, and because of the way the protocols surrounding HTTP resource identification (like URLs) were designed, there's not really any way to prevent this. Had the internet been developed as a commercial venture initially (think AOL), then they might have been able to get away with preventing the browser from showing the URL to the user.
So long as people can "view source" they can see the URLs in the page that you're referring them to visit. The best you can do is to obfuscate the links using javascript, but at best that's merely an annoyance. What can be decoded for the user can be decoded for a bot.
Welcome to the internet, may your stay be a long one!
I think the underlying issue is why you want to hide the URL. As everyone has noted, there is no way to conceal the actual resolved URL: once the request is triggered, Firebug gives you everything you need to know.
However, is the purpose to prevent a user from re-using the URL? Perhaps you can generate one-time, session-relative URLs that can only be used within the given HTTP session. If you cut/paste such a URL to someone else, they would be unable to use it. You could also set it to expire if they tried to refresh. This is done all the time.
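A sketch of that pattern, in Node/Express with express-session rather than PHP purely for illustration (route names and the token scheme are made up):

const crypto = require("crypto");
// assumes an Express app with express-session middleware already configured

app.get("/page", (req, res) => {
  // Issue a fresh token each time the page is rendered.
  req.session.ajaxToken = crypto.randomBytes(16).toString("hex");
  res.render("page", { token: req.session.ajaxToken }); // embed it in the AJAX URL
});

app.get("/data", (req, res) => {
  if (req.query.token !== req.session.ajaxToken) return res.sendStatus(403);
  delete req.session.ajaxToken; // single use: the URL dies after one call
  res.json({ ok: true });
});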
Is the purpose to prevent the user from hacking your URL by providing a different query parameter? Well, you should be handling that on the server side anyways, checking if the user is authorized. Even before activating the link, the user can use a tool like FireBug to edit your client side code as much as they want. I've done this several times to live sites when they're not functioning the way I want :)
UPDATE: A HORRIBLE hack would be to drop an invisible Java applet on the page. Applets can also trigger requests and interact with JavaScript. Any logic could be included in the applet code, which would be invisible to the user. This, however, introduces additional browser-compatibility issues, etc., but it can be done. I'm not sure if this would show up in Firebug. A user could still monitor outgoing traffic, but it might be less obvious. It would be better to make your server side more robust.
Why not put some form of security on your php script instead, check a session variable or something like that?
EDIT is response to comment:
I think you've maybe got the cart before the horse somehow. URLs are by nature public addresses for resources. If the resource shouldn't be publicly consumable except in specific instances (i.e. from within your page), then it's a question of defining and implementing security for the resource. In your case, if you only want the resource called once, then why not place a single-use access key into the calling page? Then the resource will only be delivered when the page is refreshed. I'm unsure as to why you'd want to do this, though. Does the resource expose sensitive information? Is it perhaps very heavy on the server to run the script? And if the resource should only be used to render the page once, rather than to update it once it's rendered, would it perhaps be better to implement it server-side?
You can "protect" (hide) anything on the client only by encrypting/encoding it into a format that's complicated for a real human to read.
Okay, this just feels plain nasty, but I've been directed to do it, and just wanted to run it past some people who actually have a clue, so they can point out all the massive holes in it.....so here goes.....
We've got this legacy site & a new public beta-test one. Apparently it's super cereal that moving from one to the other is seamless, so in a manner of speaking, we need a single signon solution.
As we're not allowed to put any serious development into the legacy site (it's also in old-school ASP, a language I don't care to learn), I can't do a proper single sign-on solution, so I proposed the following: on login, the legacy site performs an AJAX post to the login controller of the new beta site, logging the user in there; it then simply proceeds with the login on the legacy site as normal. This may not be acceptable, as there's code to prevent a user from being logged on twice; I'm not sure if it's been written to apply across sites.
The other idea I had was to pass a salted hash of the user's details across with their username when they try to access the second site. If the hash matches the details of the user, then access is granted (a sketch of what that handoff might look like is below). This would need ASP development, obviously, as generating the hash on the client side would only serve to enhance the idiocy even further.
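Purely to illustrate the idea (in Node rather than classic ASP, and using an HMAC instead of a plain salted hash, since a keyed hash is what you actually want here; all names are made up):

const crypto = require("crypto");
const SECRET = process.env.SSO_SECRET; // shared secret known to both sites

// Site A: sign the handoff when the user crosses over.
function signHandoff(username) {
  const ts = Date.now();
  const sig = crypto.createHmac("sha256", SECRET)
    .update(username + "|" + ts).digest("hex");
  return { username: username, ts: ts, sig: sig };
}

// Site B: verify the signature and reject stale links.
function verifyHandoff(h) {
  if (Date.now() - h.ts > 60000) return false; // expires after a minute
  const expected = crypto.createHmac("sha256", SECRET)
    .update(h.username + "|" + h.ts).digest("hex");
  if (h.sig.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(h.sig), Buffer.from(expected));
}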
Does anyone have any thoughts?
The old ASP site must have some concept of a session if it requires a logon. You will, at a minimum, need to understand how to provide the session information to the legacy site and splice some code in to keep it copacetic if both sites need to be kept up indefinitely.
"Classic" ASP isn't so bad if you can read/write VB6, VBA, VBScript or VB.net. It probably won't be difficult to graft session initialization provided the code is half way decent.
Consider creating a common logon page for both sites + either an automatic redirect based on either the requested URL (I'm guessing the old and new sites have distinct URLs) or cookies passed with the request (the old site, if it used cookies, could identify a legacy user). This common logon page could initialize session on both the legacy site (only if required by user type) and on the new site. This will allow you to keep your new logon process unencumbered by the legacy process while maintaining the old as long as required.
Bear in mind that your first approach (AJAX request from one site to the other) won't work if the sites are on different domains, because of javascript security restrictions.
You might be able to work around this by using a hidden iframe for the post like this, but it's getting a little hacky.