Per-page localStorage? - javascript

In HTML5, is it possible to create a localStorage that is accessible only to a single web page?
I am currently experimenting with possibilities of writing self-contained single-page applications, and whether it is possible for users to host them themselves, e.g. on their Dropbox (which has some basic webhosting capabilities for public files) or by running a minimal webserver on localhost.
A user may then start such HTML applications from various sources on their local server / Dropbox, or be asked to open one from another user's Dropbox.
Since all these pages would be served from the same origin (currently https://dl.dropboxusercontent.com), they would all share a single localStorage, which may both interfere with functionality if names clash and leak data. For example, such a page may want to store the authentication token for accessing the user's Dropbox account in localStorage, but then any other such "app" would be able to steal the token.
I have to say here that I am new to HTML5, and may very well be stretching the intended scope of usage, as I keep running into limitations due to basic web security concepts like the same-origin policy – especially when opening an HTML file from a local drive through a file:// URI.
The core intent is allowing users to host their own custom apps in a manner that works across their mobile and desktop devices, by utilizing their existing webservice subscriptions for both hosting and data synchronization rather than moving their data to yet another service.

As stated here, localStorage is scoped by protocol, domain and port, nothing else.
Because of this, even prefixing each localStorage key with a unique page token (e.g. localStorage.setItem('page1.' + key, value)) wouldn't prevent another page from reading that data, so there is no simple way to avoid the information leak.

You could use a unique page identifier (or even the URL) as a key for encrypting the stored data. In theory.
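A minimal sketch of that idea using the Web Crypto API; the key is derived from the page's path, which any other page on the same origin can reproduce, so treat this as obfuscation rather than real isolation:

// Sketch only: derive an AES-GCM key from the page path and encrypt values before storing them.
async function pageKey() {
  const material = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(location.pathname), 'PBKDF2', false, ['deriveKey']);
  return crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt: new TextEncoder().encode('per-page-demo'), iterations: 100000, hash: 'SHA-256' },
    material, { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);
}
async function setItemEncrypted(key, value) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per value
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, await pageKey(), new TextEncoder().encode(value));
  localStorage.setItem(key, JSON.stringify({ iv: Array.from(iv), data: Array.from(new Uint8Array(ciphertext)) }));
}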

Related

How to store credentials in an Outlook Add-in

I'm looking for the correct, secure way to store credentials for a third party API in an Outlook add-in. This overview of the different storage options only says not to store credentials in Settings, but not where to put them, so I assumed the RoamingSettings would be okay. Then I ran into this page with information about RoamingSettings, where it says that is not the right location either.
The question then becomes: What is the right place? Should I build my own storage solution and store/encrypt the credentials in a file or cookie? That does not feel very secure either, since we are talking about what is basically a web app running in an Iframe.
I assume you cannot implement another authorization scheme (token based, cookies etc.) for your API and you are stuck with Basic Authentication and its issues. If you are using ASP.NET, with all the samples available it could be very easy to add another authentication scheme that is more adapted to web clients (such as Office web add-ins).
Having said that, in my view your best option is to use HTML5 storage (or cookie storage if the browser doesn't implement it) to store your credentials.
The fact that the app is iframed is not really a big deal. Those storages (HTML5: sessionStorage/localStorage) rely on domain separation, which means that the storage slots where you put the credentials will not be visible to other apps, even those living in the parent frame.
You may also consider serving the web add-ins and the APIs from the same domain. They are both web applications!
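For instance, a minimal sketch (the key name and helper functions are illustrative, not an Office API; assumes the add-in obtains an API token after the user signs in):

// Hypothetical helpers for storing the token in the add-in's origin-scoped storage.
function saveToken(token) {
  // sessionStorage is cleared when the session ends; switch to localStorage
  // only if the credentials must survive across sessions.
  sessionStorage.setItem('myAddin.apiToken', token);
}
function readToken() {
  return sessionStorage.getItem('myAddin.apiToken');
}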
You can do what Outlook itself does for its POP3/SMTP/IMAP4 passwords - use the CredRead / CredWrite Windows API functions. The data can only be decrypted under the local Windows account used to encrypt it, so it cannot be taken to a different machine and decrypted.
I don't think you can access these functions from JavaScript. This is for an OWA addin, not the Outlook application, is it?

Opening an instance of Microsoft Word from Javascript

I am building an application that will allow users to open a Word document through a web page. This web application will open the Word document using the local Word instance on the machine.
I have two working solutions.
Using ActiveX (Only on IE)
Since the application is an intranet application, I am using PsTools in a web service to remotely open a Word instance on remote machines.
The second architecture is what I am following right now. It is based on a web service which receives machine names through a Javascript/jQuery call. Later, in the web method, I use PsTools to remotely execute an MS Word instance on the remote machine.
Both architectures work, but both have limitations. ActiveX only works on IE and also requires changes to network policy to allow it. PsTools works great, but I can't get the path of Word.exe and can only assume that it would always be at \\machinename\C$\Program Files(x86)\.....
We might make this application public as well and in that case our solution with PsTools will not work anymore.
I was just wondering if there is any other more suitable/cross browser way to open local word instance through web application ?
The document has to be modified on a remote location; one option would be to let the user download the document, modify it and upload it to the server, but this is out of the question since we are replacing a thick client and want to keep the user experience the same.
I am building an application that will allow users to open a Word document through a web page.
If it is an Intranet scenario, then you could use application protocol with Office URI schemes for links to the documents which will then open in the locally installed client.
The Office URI schema is like this:
<scheme-name>:<command-name>"|"<command-argument-descriptor>"|"<command-argument>
For Word specifically, an example would be:
<a href='ms-word:ofe|u|https://example.com/example.docx'>Edit</a>
Here, ms-word: is the scheme, the ofe command stands for open-for-edit, u is the command-argument-descriptor indicating that a URI follows, and the final part is the URI of the document itself. There are other commands like ofv (open-for-view) and nft (new-from-template), and also other command-argument-descriptors like s for save.
Here is the complete reference: https://msdn.microsoft.com/en-us/library/office/dn906146.aspx
The protocols are registered with Windows when the Office client is installed.
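If such links need to be generated from script, a rough sketch (the document URL below is a placeholder):

// Hypothetical sketch: build an Office URI link on the client.
var docUrl = 'https://example.com/example.docx'; // placeholder document URL
var link = document.createElement('a');
link.href = 'ms-word:ofe|u|' + docUrl; // ofe = open for edit, u = a URI follows
link.textContent = 'Edit in Word';
document.body.appendChild(link);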
You could enable WebDAV easily on your IIS server. The WebDAV client is built into Windows on the client side.
You can also use components like FFWinPlugin Plug-in which is part of the SharePoint Foundation, or OpenDocuments Control which is an ActiveX control installed along with the Office client.
We might make this application public as well
I would discourage you from doing that unless your company owns or deals with services like OneDrive or Office.com. This can quickly get tricky, as mentioned in the other answer. Moreover, enforcing a proprietary client on the general public is not a good idea anyway. Further, even Microsoft's own solutions do not work reliably across browsers and work best with IE only (even Edge has problems with this), which would mean forcing a specific browser on the general public. Not a good idea.
However, if you really need to, then it would be better if you could use some of the solutions already built around WebDAV. Alfresco ECM (enterprise content management) is one example of public offering which uses WebDAV similar to your use-case.
There is another one by IT Hit, and a live demo is here: http://www.ajaxbrowser.com. They also have a basic tutorial on how to set up your own WebDAV server along the same lines as your use-case. You will need to find their documentation.
When you say "We might make this application public as well", what kind of scale are you talking about? Just a couple of folks from a team, or a real web application that needs to deal with edit conflicts, transactions, locking, performance, etc.? Even the intranet solution you mentioned will likely become a headache as soon as 2-3 people start to edit the same document.
For this type of document sharing, you basically have two options:
Significant investment in a rich web UI that behaves similarly to MS Word, with back-end services that will store the info in a scalable data store and provide simultaneous edits and document downloads, or
Integrating with a third party vendor API or white-label provider that offers similar capabilities for a fee. E.g. Box.com APIs, HyperOffice, FirePad, etc.
This would be a super-simple problem to solve if you could convert the document in question into some type of form. There are probably a hundred different services that offer embedded forms functionality with excellent reporting and database management. If a document in Word format is needed, then your app would just convert the stored data to a .doc/.docx document for users to download at will.
Whatever direction you go in, try to get away from the current PsTools-based setup. It's a rinky-dink house of cards and, as @Matt-Burland mentions, likely to cause a security disaster pretty soon.

How to secure EmberJS or any Javascript MVC framework?

I'm looking forward to start using Ember.js in a real life project, but as someone coming from a Java background, I always care about Security.
And when I tell my fellow Java developers that I'm starting to use a JavaScript MVC, they say it's not secure enough: since JavaScript is all client-side, there is always a way to hack around your JavaScript code and find backdoors to your services and APIs.
So is there any good practice that can help prevent these kinds of attacks, or at least make them less effective?
There is always a way to hack around your JavaScript code and find backdoors to your services and APIs.
JavaScript presents few new problems for services and APIs that are already exposed to the web. Your server/service shouldn't blindly trust requests from the web, so using javascript doesn't alter your security posture, but it can lull people into a false sense of security by making them think they control the user-agent.
The basic rule of client/server security is still the same: don't trust the user-agent ; place the server and the client on different sides of a trust boundary.
A trust boundary can be thought of as line drawn through a program. On one side of the line, data is untrusted. On the other side of the line, data is assumed to be trustworthy. The purpose of validation logic is to allow data to safely cross the trust boundary--to move from untrusted to trusted.
Validate everything that the server receives from the client, and, because XSS vulnerabilities are common and allow clients to access information sent to the client, send as little sensitive information to the client as possible.
When you do need to round-trip data from the server to the client and back, you can use a variety of techniques.
Cryptographically sign the data so that the server can verify that it was not tampered with.
Store the data in a side-table and only send an opaque, unguessable identifier.
When you do need to send sensitive data to the client, you also have a variety of strategies:
Store the data in HTTP-only cookies which get attached to requests but which are not readable by JavaScript
Divide your client into iframes on separate origins, and sequester the sensitive information in an iframe that is especially carefully reviewed, and has minimal excess functionality.
Signing
Signing solves the problem of verifying that data you received from an untrusted source is data that you previously verified. It does not solve the problem of eavesdropping (for that you need encryption) or of a client that decides not to return data, or that decides to substitute different data signed with the same key.
Cryptographic signing of "pass-through" data is explained well by the Django docs which also outline how their APIs can be used.
The golden rule of Web application security is to never trust data from untrusted sources. Sometimes it can be useful to pass data through an untrusted medium. Cryptographically signed values can be passed through an untrusted channel safe in the knowledge that any tampering will be detected.
Django provides both a low-level API for signing values and a high-level API for setting and reading signed cookies, one of the most common uses of signing in Web applications.
You may also find signing useful for the following:
Generating “recover my account” URLs for sending to users who have lost their password.
Ensuring data stored in hidden form fields has not been tampered with.
Generating one-time secret URLs for allowing temporary access to a protected resource, for example a downloadable file that a user has paid for.
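As a minimal illustration of the pattern (not the Django API), a server can HMAC-sign a value before handing it to the client and verify the signature when it comes back; this sketch uses Node's crypto module and a hypothetical server-side secret:

const crypto = require('crypto');
const SECRET = process.env.SIGNING_SECRET; // kept on the server, never sent to the client

// Append an HMAC so tampering can be detected when the value comes back.
function sign(value) {
  const mac = crypto.createHmac('sha256', SECRET).update(value).digest('hex');
  return value + '.' + mac;
}

function verify(signed) {
  const i = signed.lastIndexOf('.');
  const value = signed.slice(0, i);
  const mac = Buffer.from(signed.slice(i + 1), 'hex');
  const expected = crypto.createHmac('sha256', SECRET).update(value).digest();
  if (mac.length !== expected.length || !crypto.timingSafeEqual(mac, expected)) {
    throw new Error('tampered or invalid signature');
  }
  return value;
}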
Opaque Identifiers
Opaque identifiers and side tables solve the same problem as signing, but they require server-side storage and require that the machine that stored the data has access to the same DB as the machine that receives the identifier back.
Side tables with opaque, unguessable identifiers can be easily understood by looking at this diagram:
Server DB Table
+--------------------+---------------------+
| Opaque Primary Key | Sensitive Info |
+--------------------+---------------------+
| ZXu4288a37b29AA084 | The king is a fink! |
| ... | ... |
+--------------------+---------------------+
You generate the key using a random or secure pseudo-random number generator and send the key to the client.
If the client has the key, all they know is that they possess a random number, and possibly that it is the same as some other random number they received from you, but cannot derive content from it.
If they tamper (send back a different key) then that will not be in your table (with very high likelihood) so you will have detected tampering.
If you send multiple keys to the same misbehaving client, they can of course substitute one for the other.
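A sketch of generating such an identifier (the surrounding table/storage layer is assumed, not shown):

const crypto = require('crypto');

// 128 bits from a CSPRNG: infeasible to guess or enumerate.
function newOpaqueId() {
  return crypto.randomBytes(16).toString('hex');
}

const id = newOpaqueId();
// Store { id -> sensitive info } server-side and send only `id` to the client.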
HTTP-only cookies
When you're sending a per-user-agent secret that you don't want to accidentally leak to other user-agents, HTTP-only cookies are a good option.
If the HttpOnly flag (optional) is included in the HTTP response header, the cookie cannot be accessed through client side script (again if the browser supports this flag). As a result, even if a cross-site scripting (XSS) flaw exists, and a user accidentally accesses a link that exploits this flaw, the browser (primarily Internet Explorer) will not reveal the cookie to a third party.
HttpOnly cookies are widely supported on modern browsers.
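For example, a server can set such a cookie in a response header (plain Node sketch; the cookie name and value are placeholders):

const http = require('http');

http.createServer((req, res) => {
  // HttpOnly keeps the value out of document.cookie; Secure restricts it to HTTPS.
  res.setHeader('Set-Cookie', 'sessionId=abc123; HttpOnly; Secure; SameSite=Strict; Path=/');
  res.end('ok');
}).listen(8080);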
Sequestered iframes
Dividing your application into multiple iframes is the best current way to allow your rich client to manipulate sensitive data while minimizing the risk of accidental leakage.
The basic idea is to have small programs (secure kernels) within a larger program so that your security sensitive code can be more carefully reviewed than the program as a whole. This is the same way that qmail's cooperating suspicious processes worked.
Security Pattern: Compartmentalization [VM02]
Problem
A security failure in one part of a system allows another part of the system to be exploited.
Solution
Put each part in a separate security domain. Even when the security of one part is compromised, the other parts remain secure.
"Cross-frame communication the HTML5 way" explains how iframes can communicate. Since they're on different domains, the security sensitive iframe knows that code on other domains can only communicate with it through these narrow channels.
HTML allows you to embed one web page inside another, in the iframe element. They remain essentially separated. The container web site is only allowed to talk to its web server, and the iframe is only allowed to talk to its originating server. Furthermore, because they have different origins, the browser disallows any contact between the two frames. That includes function calls and variable accesses.
But what if you want to get some data in between the two separate windows? For example, a zwibbler document might be a megabyte long when converted to a string. I want the containing web page to be able to get a copy of that string when it wants, so it can save it. Also, it should be able to access the saved PDF, PNG, or SVG image that the user produces. HTML5 provides a restricted way to communicate between different frames of the same window, called window.postMessage().
Now the security-sensitive frame can use standard verification techniques to vet data from the (possibly-compromised) less-sensitive frame.
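A minimal sketch of that exchange, assuming the sensitive frame is served from https://secure.example and the containing page from https://app.example (both placeholder origins):

// In the containing (less trusted) page:
const frame = document.getElementById('secureFrame');
frame.contentWindow.postMessage(
  JSON.stringify({ type: 'save', doc: 'serialized document' }),
  'https://secure.example'); // only deliver to the expected origin

// In the security-sensitive iframe:
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://app.example') return; // reject messages from unknown origins
  const msg = JSON.parse(event.data); // parse the string, never eval it
  if (msg.type === 'save') {
    // validate msg.doc before acting on it
  }
});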
You can have a large group of programmers working efficiently at producing a great application, while a much smaller group works on making sure that the sensitive data is properly handled.
Some downsides:
Starting up an application is more complex because there is no single "load" event.
Iframe communication requires passing strings that need to be parsed, so the security-sensitive frame still needs to use secure parsing methods (no eval of the string from postMessage.)
Iframes are rectangular, so if the security-sensitive frame needs to present a UI, then it might not easily fit neatly into the larger application's UI.

activate/enable javascript plugins based on user options

I'm building a JS client for a set of REST WebServices. The client will be delivered as an embeddable iframe, which should load JS scripts based on user options (license profile, user admin options, etc.)
I wonder what's the most effective and efficient pattern to do that.
Right now I have a single "bootstrap" script which includes the other scripts. I could create the bootstrap script code dynamically (server-side) to make it load only the set of scripts required by the user configuration. However, those scripts would still be publicly available, even if the services are not enabled for certain users... IMHO that's not a good solution.
On the other hand, how do you control access to static JavaScript files in a public folder?
I want to avoid serving the JavaScript code through my application code. It would be an expensive overhead for the application!
Mmm... I'm a bit confused...
Giovanni
In general, if you wish to control access to a resource based on a business requirement (licenses, user profiles etc) you have no choice but to route all requests for that resource through your application.
However, since you are sending the files to the client, there is no guarantee that anyone in possession of the scripts is currently authorized or authenticated (the license may have expired, etc.). As such, you cannot infer that a request to your web services is valid based on the fact that a consumer knows how to call your web service.
There should be very little to lose in making the scripts publicly accessible (since there should be nothing in them that you wish to keep secret). So, in answer to your question, I would suggest authenticating and authorizing any requests at the web service level and allowing the JavaScript files to be publicly accessible.
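A rough sketch of that split, using Express (the requireAuth middleware and routes are illustrative):

const express = require('express');
const app = express();

// The plugin scripts are served publicly; they contain no secrets.
app.use('/js', express.static('public/js'));

// Every web service call is checked against the user's current license/profile.
function requireAuth(req, res, next) {
  if (!req.headers.authorization) return res.sendStatus(401); // or a session/license check
  next();
}
app.use('/api', requireAuth);
app.get('/api/reports', (req, res) => res.json({ ok: true }));

app.listen(3000);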

How do I get maximum offline storage from within a web site?

I'm trying to develop a cross platform application, where the most obvious route would be a web site with JavaScript, but then I lose the cosy comforts I'm used to using in my C# desktop apps, like file system access etc. How do I go about accessing similar services from within the browser?
E.g. I don't need to access anything I don't create, so actual file system access is just a luxury. I can use whatever the browser offers for offline storage, but have no clue how to do this.
Depending what you need to store you could possibly use a cookie.
For larger storage this is what the upcoming HTML5 client-side storage methods address, but we're not quite there yet.
Security concerns prevent browsers from getting real access to storing things client-side for the most part, though.
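For the cookie route, a small sketch (each cookie holds only a few kilobytes, so this only suits small values; the function names are illustrative):

// Save a small value for 30 days.
function saveSetting(name, value) {
  const expires = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toUTCString();
  document.cookie = name + '=' + encodeURIComponent(value) + '; expires=' + expires + '; path=/';
}

function readSetting(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}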
Have a look at Google Gears.
For simple file manipulation, why not use a well established protocol like FTP?
It's accessible both from browsers and from code.
If you're encountering firewall/security/permissions concerns, you can always use HTTP.
It all depends on how you access your storage.
