I have a web application which has a login page.
In the source code (specifically in the <head>), I can see the third-party JavaScript libraries used and the paths to these libraries, sometimes with the version of the library.
I can even access the code of these libraries without authentication.
Is that a security risk?
For example:
<script type="text/javascript" src="/****/js/ui/js/jquery-ui-1.2.2.custom.min.js"></script>
<script type="text/javascript" src="/*****/dwr/interface/AjaxService.js"></script>
If yes, how do I mitigate it?
Yes, there are two threats you need to mitigate:
First, the authenticity of the library. This can be achieved with SRI (Subresource Integrity), which lets the browser verify the fetched file against a cryptographic hash - see this great post by Scott Helme.
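For example, a minimal sketch of an SRI-protected script tag (the CDN URL is just an example, and the integrity value is a placeholder you would generate for the exact file you serve):
<!-- integrity value below is a placeholder; generate the real sha384 hash for the file you ship -->
<script src="https://code.jquery.com/jquery-3.3.1.min.js"
        integrity="sha384-REPLACE_WITH_REAL_HASH"
        crossorigin="anonymous"></script>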
Second, you want to check the library itself for known vulnerabilities. I'm not sure how it can be done when you add the libraries that way - but there are tools you can use, like Snyk, to test whether the library has known security issues. For example, here are Snyk's results for the jQuery version you're using. See here to find out more on the issue.
Hope this helps you out :)
Yes, loading libraries that way has some issues.
An attacker who compromises the library's server can serve you modified library code.
First, I recommend downloading the library (or, even better, adding it to your bundle via package.json) and including all libraries from your own server, not a third party.
Every time you download a library you can verify its checksum to make sure it has not been modified.
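A minimal sketch of computing such a checksum in the SRI format, assuming Node.js (the file name is taken from the question and is only an example); compare the output against the value published by the library's authors:
// compute a sha384 digest in the same base64 format SRI uses
const crypto = require('crypto');
const fs = require('fs');
const file = fs.readFileSync('jquery-ui-1.2.2.custom.min.js');
console.log('sha384-' + crypto.createHash('sha384').update(file).digest('base64'));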
This will save you from some issues, but your own address can be hijacked by an attacker too (he can redirect users to his host instead of yours when they resolve your address).
So, to be safer, it's better to ship the HTML and JS as one file without cross-links.
This can be achieved using webpack bundling.
Then an attacker can only compromise the whole app, not a single library, which is harder.
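A minimal sketch of such a webpack setup (webpack 4+ assumed; the entry and output paths are illustrative):
// webpack.config.js - bundle your code and its libraries into a single file served from your own host
const path = require('path');
module.exports = {
  mode: 'production',
  entry: './src/index.js',   // your app code, which imports the libraries from node_modules
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',   // one self-contained file, no cross-links to third parties
  },
};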
EDIT
(However, the single-file option is only good for a small project. For a bigger project you should keep separate files for performance and accept a bit more risk.)
You can also check the code you have (on your server or in package.json) using Snyk, which maintains an open-source database of vulnerabilities.
EDIT
One more layer of protection is CSP headers. They let you restrict where each kind of content (styles, scripts, images, etc.) may be loaded from to a specific list of sources, which can prevent some kinds of XSS. It is highly recommended to always use all applicable CSP directives. Some risk always remains, though: a trusted source can be compromised, and even DNS can be compromised.
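For instance, a sketch of setting such a header, assuming a Node/Express server (the allowed CDN is just an example):
const express = require('express');
const app = express();
app.use((req, res, next) => {
  // only allow scripts from our own origin and one explicitly trusted CDN
  res.setHeader('Content-Security-Policy',
    "default-src 'self'; script-src 'self' https://ajax.googleapis.com");
  next();
});
app.listen(3000);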
My latest update to a Firefox addon has been rejected because I've used a custom jquery-ui (generated by their site with just the widgets I wanted) and it fails their checksum check.
Your add-on includes a JavaScript library file that doesn't match our checksums for known release versions. We require all add-ons to use unmodified release versions, obtained directly from the developer's website.
We accept JQuery/JQuery-UI libraries downloaded from 'ajax.googleapis.com', 'jquery.com' or 'jqueryui.com'; and used without any modification. (file-name change does not matter) I'm sorry, but we cannot accept modified, re-configured or customized libraries.
Fair enough, I could just download the full one and resubmit, but I was wondering if it is possible to link to one instead?
If I try this:
contentScriptFile: [
  self.data.url("https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"),
  self.data.url("https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.3/jquery-ui.min.js"),
  self.data.url("api.js")
],
I get an error at runtime telling me that content scripts must be local. Both Google and the API docs are proving elusive in giving me an answer.
Does anyone know if this is possible and how?
Cheers
Rich
self.data.url("https://...")
It seems like you haven't read the documentation on data.url()
It clearly states that
The data.url() method returns a resource:// url that points at an embedded data file.
Which means you cannot link to an external resource.
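So the libraries have to be shipped inside the add-on's data directory and referenced by their local names, something like this (the file names assume you have copied the libraries into data/):
contentScriptFile: [
  self.data.url("jquery.min.js"),     // local copy inside the add-on's data/ folder
  self.data.url("jquery-ui.min.js"),
  self.data.url("api.js")
],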
Does anyone know if this is possible and how?
No. contentScriptFile runs with (slightly) elevated privileges compared to regular web content; that's why you are not allowed to load scripts from sources that might change and could theoretically inject malicious code in the future.
If you want to rely on external libraries and keep them up to date you could simply write a little build script that always downloads the newest version when building a new XPI.
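A rough sketch of such a step (Node.js assumed; the URL and output path are illustrative, and it is shown here with a pinned version - point it at whatever release you trust):
// fetch-jquery.js - run before packaging the XPI to refresh the bundled copy
const https = require('https');
const fs = require('fs');
const out = fs.createWriteStream('data/jquery.min.js');
https.get('https://code.jquery.com/jquery-2.1.3.min.js', (res) => {
  res.pipe(out);
  out.on('finish', () => console.log('jquery.min.js refreshed'));
});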
In principle you could just load the script via privileged XHR and then pass it as string, but that's probably not gonna pass AMO review.
And a piece of personal opinion: since you're targeting a specific browser, you don't really need jQuery for its cross-browser logic; modern web APIs provide so many convenience methods that you can get pretty far with just vanilla ES6 JavaScript and state-of-the-art DOM APIs. Iterators and arrow functions also make bulk operations fairly concise.
I noticed that some programmers use two ways of including a .js file.
1- this way, where you must have the .js file locally:
<script src="lib/jquery.js" type="text/javascript"></script>
2- and this way, where you don't need the local .js file:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" type="text/javascript"></script>
and I want to know which way is better to use.
The first option is using local files; the second option is using a CDN.
A CDN is a group of fast servers hosting commonly used files. It is really useful for saving bandwidth and speeding up the download of your site.
However, as was mentioned, you will have problems if the end user doesn't have access to the internet.
Basically, if you expect your application to always be executed online, a CDN is a great option. If you are developing an app that could be executed offline (like a CRM for a company), then it would be better to serve local files.
If the CDN is down, your website will be broken. But it is more likely that your website will be down than the CDN.
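A common way to hedge against that is to fall back to a local copy when the CDN fails (the local path is just an example):
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
  // if the CDN failed, window.jQuery is undefined, so load the local copy instead
  window.jQuery || document.write('<script src="lib/jquery.js"><\/script>');
</script>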
Depends.
Method #1 means you have a local copy of the file -- you don't need to rely on an existing path to the internet (from an intranet behind a firewall, spotty internet service, etc). You take care of any caching, and making sure the file exists.
Method #2 may give you a fast planet-wide content-delivery-network (CDN).
I have, and will continue to use both methods... but #2 is easier.
Today I found foreign JavaScript on my homepage along with a backlink to a website I don't recognize (the backlink is not visible when viewing my homepage; they have positioned it somehow so that it is hidden, but search engines still find it).
I was wondering how my Joomla website managed to become compromised? Is there any possibility you can think of? How can I protect my website from this attack in the future?
First of all, which version of Joomla are you using?
There is a known possibility that Joomla 1.5.23 (or a similar version) gets hacked, with a bad script appended to all of your .js files, or a rewrite rule added to your .htaccess file.
The best way to prevent the problem is to update your Joomla version and change your admin and FTP passwords.
There could be a number of reasons, a few things to check:
Are you on a shared server? Is it secure?
Has someone compromised your password?
Is your version of Joomla up to date?
Are you running any other PHP apps on your web server? Are they secure?
Just because Joomla appears to have been affected doesn't mean that it was necessarily the entry point for the compromise - check everything. Make sure you keep your software up to date. Disable anything you don't require to run your website. Use .htaccess to protect files and folders. Make sure your own computer is as secure as possible and patched and up to date. Make sure you are using the latest version of PHP.
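As a minimal sketch of the .htaccess protection mentioned above (Apache 2.2 syntax shown; Apache 2.4 uses "Require all denied" instead, and the file name is only an example):
# deny direct HTTP access to a sensitive file
<Files "configuration.php">
  Order allow,deny
  Deny from all
</Files>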
Good luck.
I don't know HOW, but if you want to eliminate it, it is probably in the index.php file - check there:
website root/templates/yourtemplate/index.php
Installed Joomla extensions (plugins, modules, components and templates) may also contain files which are very unsafe and may perform dangerous file activity like updating, renaming, deleting and creating files on your site.
So my suggestion is to read the Joomla forum and manage your file permissions accordingly.
Let's say I write a jQuery plugin and add it to my repository (Mercurial in my case). It's a single file, say jquery.plugin.js. I'm using BitBucket to manage this repository, and one of its features is a Downloads page. So, I add jquery.plugin.js as one of the downloads.
Now I want to make available a minified version of my plugin, but I'm not sure what the best practice is. I know that it should be available on the Downloads page as jquery.plugin.min.js, but should I also version control it each time I update it to reflect the unminified version?
The most obvious problem I see with version controlling the minified version is that I might forget to update it each time I make a change to the unminified version.
So, should I version control the minified file?
No, you should not need to keep generated minimized versions under source control.
We have had problems when adding generated files into source control (TFS), because of the way TFS sets local files to be read-only. Tools that generate files as part of the build process then have write access problems (this is probably not a problem with other version control systems).
But importantly, all the:
tools
scripts
source code
resources
third party libraries
and anything else you need to build, test and deploy your product should be under version control.
You should be able to check out a specific version from source control (by tag or revision number or the equivalent) and recreate the software exactly as it was at that point in time. Even on a 'fresh' machine.
The build should not be dependent on anything which is not under source control.
Scripts: build-scripts whether ant, make, MSBuild command files or whatever you are using, and any deployment scripts you may have need to be under version control - not just on the build machine.
Tools: this means the compilers, minimizers, test frameworks - everything you need for your build, test and deployment scripts to work - should be under source control. You need the exact versions of those tools to be available to recreate the build at a given point in time.
The book 'Continuous Delivery' taught me this lesson - I highly recommend it.
Although I believe this is a great idea - and stick to it as best as possible - there are some areas where I am not 100% sure. For example the operating system, the Java JDK, and the Continuous Integration tool (we are using Jenkins).
Do you practice Continuous Integration? It's a good way to test that you have all the above under control. If you have to do any manual installation on the Continuous Integration machine before it can build the software, something is probably wrong.
My simple rule of thumb:
Can this be automatically generated during a build process?
If yes, then it is a resource, not a source file. Do not check it in.
If no, then it is a source file. Check it in.
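For example, a sketch of wiring the minification into the build so the .min.js file never has to be committed (UglifyJS is used here purely as an example minifier; any equivalent tool works):
{
  "scripts": {
    "build": "uglifyjs jquery.plugin.js -c -m -o jquery.plugin.min.js"
  },
  "devDependencies": {
    "uglify-js": "^3.4.0"
  }
}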
Here are the Sensible Rules for Repositories™ that I use for myself:
If a blob needs to be distributed as part of the source package in order to build it, use it, or test it from within the source tree, it should be under version control.
If an asset can be regenerated on demand from versioned sources, do that instead. If you can (GNU) make it, (Ruby) rake it, or just plain fake it, don't commit it to your repository.
You can split the difference with versioned symlinks, maintenance scripts, submodules, externals definitions, and so forth, but the results are generally unsatisfactory and error prone. Use them when you have to, and avoid them when you can.
This is definitely a situation where your mileage may vary, but the three Sensible Rules work well for me.
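In the Mercurial setup from the question, that means regenerating jquery.plugin.min.js at build time and keeping it out of the repository, e.g. via .hgignore:
# .hgignore - don't track the generated minified file
syntax: glob
jquery.plugin.min.js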
I've inherited a high-traffic site that loads some Ext javascript files and I'm trying to trim some bandwidth usage.
Are Ext libraries necessary for development only, or are they required for the finished site? I've never used Ext (Ext JS - Client-side JavaScript Framework).
The site loads ext-base.js (35K), ext-all-debug.js (950K), expander.js, exteditor.js. It appears that expander.js and exteditor.js have some site specific code, so they should stay?
But what about ext-base.js and ext-all-debug.js? Am I reading this correctly - are base and debugging libraries necessary for a live site?
Simply consult the documentation the previous developers have written for you. :P
To actually answer your question: you will more than likely want to keep all of the files available. You might, however, want to change ext-all-debug.js to ext-all.js, since the debug file contains non-minified JavaScript.
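For example (the directory paths are illustrative; keep whatever paths your site already uses):
<script type="text/javascript" src="/js/ext/ext-base.js"></script>
<!-- was ext-all-debug.js (~950K of unminified code); the production build is much smaller -->
<script type="text/javascript" src="/js/ext/ext-all.js"></script>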
The previous posters are correct that if the site is actually using ExtJS, then you will need to keep the references to ExtJS. Assuming you do need to keep them, replacing ext-all-debug.js with ext-all.js will save some bandwidth. Additionally, consider using one of the CDNs available now. For instance, using Google's CDN, you will save not only your own bandwidth but also your clients' bandwidth, and decrease page load times.
ExtJS files are available to be hosted on the Cachefly CDN: Ext CDN – Custom Builds, Compression, and Fast Performance.
Hosting the files remotely should remove the load for at least those files.
As to which you can safely remove, you need a JavaScript developer to work on documenting what's truly necessary to your application.
As to what ExtJS is, it's a JavaScript library and framework - a la jQuery, YUI, MooTools, PrototypeJS, etc. So indeed, it can be critical to your site if your site relies on JavaScript to work.
I don't know much about Ext, but I think it's safe to assume that expander.js and exteditor.js depend on ext-base.js and ext-all-debug.js. As such, removing the latter two will break the site's functionality.
The only thing I'd change would be to switch from the debug version of ext-all to the production one (which is most probably called ext-all.js; you should be able to load it from the same place the debug version is located, or from the Ext site).
One option would be to condense all of those files into one file (it would be larger, but it would reduce the overhead of multiple HTTP requests). Also verify that the server is sending the ETag and Expires headers, so that the browser can cache as much of it as possible...
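As a sketch of the caching side, assuming Apache with mod_expires enabled (the lifetime is just an example):
ExpiresActive On
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType text/javascript "access plus 1 month"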