How can I protect fonts in a node-webkit app?

I would like to create a Node-Webkit app without redistributing the font files themselves. I have thought of a few approaches. I am considering the model used by hosted font solutions, where the font file is served from a temporary URL.
I have a way to protect the font file: convert the font to base64 and assign it to a local variable inside a closure in a JavaScript library. That JavaScript file is compiled to a binary and cannot be read by the end user.
Setting the base64 value on a style property would expose the font as a base64 string in the DOM. What I would like to do instead is create a temporary route to the font file, rendered from the private base64 value, and remove the route once it has been accessed. I can see how to achieve this in a Node.js app, but I'm new to Node-Webkit and don't see any documentation on routing.
It seems like the hosted font solutions allow one-time access to the font files so users cannot download them. Does Node-Webkit have the capability to perform routing?
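To make the idea concrete, here is a rough sketch of the one-time route I have in mind, assuming node-webkit exposes the usual Node context (fontBase64, the token and the port handling are all illustrative):

// Serve the decoded font once on a random path, then close the route.
var http = require('http');

var fontBase64 = '...'; // the private base64 value hidden in the compiled JS
var token = Math.random().toString(36).slice(2); // unguessable path segment

var server = http.createServer(function (req, res) {
  if (req.url === '/font/' + token) {
    res.writeHead(200, { 'Content-Type': 'application/font-woff' });
    res.end(Buffer.from(fontBase64, 'base64'));
    server.close(); // tear the route down after the first (and only) access
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(0, '127.0.0.1', function () {
  // the @font-face src would point at this address
  console.log('http://127.0.0.1:' + server.address().port + '/font/' + token);
});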

First things first: welcome to the internet, you can't prevent people from using the data you send them. Either you don't send them data, or you accept the fact that once the data is on their computer, they can do with it whatever they like (if you want people to be able to connect to your content through a browser, the browser needs to download and decode the content, which means that you cannot stop distribution, and in fact you are a distributor in this scenario).
Even if you tie the font loading to a session identifier (e.g. your user has to hit the page URL first, which sets a cookie value that is then checked, together with the IP for which it was set, when they try to download the webfont), they only need to download your font once to have full access to it and do with it what they want. It will either live in their browser's cache directory, or it is accessible via JavaScript (by mining document.styleSheets, for instance), which means it is trivially converted to real data and saved to disk (e.g. a window.open with a binary mimetype causes browsers to pop up a save-to-file dialog).
There, I just downloaded your fonts despite your best efforts: if you can send me the data, and the technology we've chosen for that exchange is HTTP(S), I will be able to access that data, no matter how much you further restrict how or when I can get that data. I just need to get it once.
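To make that concrete: a handful of lines of client-side JavaScript are enough to locate every webfont a page declares (a sketch, assuming same-origin stylesheets; cross-origin ones block access to cssRules):

// Walk every stylesheet and print the source of each @font-face rule.
for (var i = 0; i < document.styleSheets.length; i++) {
  var rules = document.styleSheets[i].cssRules || [];
  for (var j = 0; j < rules.length; j++) {
    if (rules[j] instanceof CSSFontFaceRule) {
      // e.g. url("fonts/MyFont.woff"), one XHR/fetch away from the raw bytes
      console.log(rules[j].style.getPropertyValue('src'));
    }
  }
}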
So: don't focus your efforts on the how or when. Take it as a given that your users will have access to your font(s), even if only once, and instead focus your efforts on what they can do with your font(s) once they do, because that's far more important. There are several things you can do to make sure that what you distribute is mostly useless outside your content. For instance:
1. Don't use full fonts, use subsets, so that your users only get partial fonts containing strictly those glyphs that are needed to render your own content. This severely limits what others can do. You can take this as far as you like, serving dedicated subset fonts per page, or even per section of a page.
2. Set the fsType flag for your font(s) to disallow installation. That way people will get your font(s), but they can't use them further except on the web.
3. Make sure to properly mark the font(s) license in the font itself, so that if people do use your fonts, you have legal recourse and can sue them for circumventing your license outside the "personal use" context.
However, if you also want to take advantage of caching, you don't want to do (1), and (2) and (3) are enough to give you a legal basis to go after people who use your font(s).
Bottom line: preventing users from "getting" your data is a waste of time. It's the internet, your users getting your data is entirely the point of the technology. Instead focus on making sure that what they get is useful only within the context of your content.
After all, if Typekit can make this work, so can you. (Which suggests an additional recommendation: don't roll your own solution if you can make use of an established existing one. Are the fonts available through Typekit or the like? Use that instead and save yourself the trouble of reinventing the wheel.)

You could encrypt the font and then decrypt it with Node's native methods, for that matter:
http://lollyrock.com/articles/nodejs-encryption/
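As a rough sketch (assuming the font was encrypted ahead of time with AES-256-CBC, and that the key and IV are baked into your compiled JavaScript; all names here are illustrative):

var crypto = require('crypto');
var fs = require('fs');

// Decrypt the shipped font file and return it as base64, ready to embed
// in a data: URI inside an @font-face rule.
function decryptFont(encryptedPath, key, iv) {
  var encrypted = fs.readFileSync(encryptedPath);
  var decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
  return Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('base64');
}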
Also, you can zip all your assets into one file and rename it to "package.nw", and the node-webkit executable will run it (I know this is not a reliable security measure). On top of that, you can merge that file with the nw.exe file, so you end up with a single executable; regular users will not be able to see your files, and your package.json is somewhat protected, which keeps users from changing the configuration or seeing your files/assets.
https://github.com/nwjs/nw.js/wiki/How-to-package-and-distribute-your-apps#step-2a-put-your-app-with-nw-executable

Related

Reading and using data from user files in Javascript (web application), without uploading them

I'd like to have a way for a webpage (generated dynamically by my server) to read all the files in a specific user folder, manipulate them using JavaScript within the web browser, and use them to show the user some results (specific correlations between the data, depending on the context, and sometimes some graphs drawn from these correlations).
Communication with the server about these data is neither required nor desired. Since all the needed manipulations can be done via JavaScript and the files can be huge, for now I absolutely don't want their content uploaded to the server. Therefore there are no security risks (at least none that I can see).
Server side, I'm only interested in saving the name of the folder, so that the user (who is registered) doesn't need to select the files one by one, or select them again every time a new page is dynamically created.
So far, the only candidate solutions I have been able to gather are the Chrome FileSystem API (but I'd prefer a general solution, not one dependent on a specific browser) or an extension the user would have to install to use this feature when visiting the website (which, to me, is maybe even worse than relying on a specific browser).
So I wonder if there is a way to implement this feature using only pure JavaScript and HTML5, using neither extensions nor browser-dependent solutions.
For security reasons, JavaScript running in the browser is not allowed to access the filesystem directly. You can certainly access it using Node's fs module, but that runs on the server side.
Another way: if you let the user pick files using the <input type="file"> tag, you can use the File API to fetch their contents. But I think that is not quite what you are looking for.
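For completeness, a minimal sketch of that File API route (the selector and the analysis step are just placeholders):

// Read a user-selected file entirely in the browser; nothing is uploaded.
document.querySelector('input[type="file"]').addEventListener('change', function (e) {
  var file = e.target.files[0];
  var reader = new FileReader();
  reader.onload = function () {
    var contents = reader.result; // the file's text, ready for client-side analysis
    // ...compute correlations, draw graphs, etc. ...
  };
  reader.readAsText(file);
});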
Recommended reading: https://en.wikipedia.org/wiki/JavaScript#Security

Is it possible to drag-and-drop a local file from a file manager to a browser and get the file's uri?

Context:
I'm a TiddlyWiki (an offline non-linear personal notebook) user, and I would like to improve the workflow of image attaching. The basic way to attach an image to a TW is to write stuff like this:
[img[image-url-either-absolute-or-relative]]
in the edit area. The issue is, if I have 20 images I'd like to attach, I have to extract 20 URLs, paste them, and surround them with the [img[...]] wrappers. My idea is to write a plugin that would allow me to drag-and-drop the 20 files from a file manager into the editing area and get 20 URLs (ideally already wrapped in the basic [img[...]] syntax or something similar).
Is this possible?
Getting a URL (or URI, whatever) of a local file isn't a usual operation for web applications, and for security reasons it seems to be forbidden (at least by default). Still, is there any way to implement this, provided that the user accepts any security warnings?
Maybe a workaround?
If there's a possibility for a workaround (maybe using AutoHotKey or something else), I'll be glad to hear it (keep in mind that the goal is to improve the workflow, so a minimum of additional clicking/keypressing is desirable).
Currently, I would love to implement this for Windows 7 + Opera 12.17, but of course the more general the solution is, the better (in the end, I'll share it with others if it turns out to be that useful). And yes, currently I'm talking about TW Classic, not TW5.
The approach for doing it with a copy operation has two phases:
First, you use the HTML5 support for drag-and-drop upload to get the file into the browser.
As the page I linked demonstrates, the HTML5 File API provides you with the filename (without a path), the mimetype (possibly guessed), the size in bytes, and the contents. (The API spec also shows a field for the last modified date.)
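A minimal sketch of that first phase might look like this (editArea stands in for whatever element receives the drop; it is not a real TiddlyWiki API):

// Accept dropped files and read their metadata/contents via the File API.
editArea.addEventListener('dragover', function (e) { e.preventDefault(); });
editArea.addEventListener('drop', function (e) {
  e.preventDefault();
  Array.prototype.forEach.call(e.dataTransfer.files, function (file) {
    console.log(file.name, file.type, file.size); // note: no local path is exposed
    var reader = new FileReader();
    reader.onload = function () {
      // reader.result now holds the image bytes for phase two (writing a copy to disk)
    };
    reader.readAsArrayBuffer(file);
  });
});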
The second step is then to get the file back onto disk in a known location relative to the TiddlyWiki file and insert a relative URL at the current cursor location.
TiddlyWiki has a collection of different saver backends it uses, so you basically want to hook in and use the same API TW Classic uses to write the backup file and optional RSS ChangeLog.
Note: This may or may not yet be available in TW5. I remember seeing re-adding backup support as still on the roadmap and, judging by savers/tiddlyfox.js, it seems to currently hard-code the path of the TiddlyWiki itself. However, saving the images within the wiki as binary data tiddlers is always an option.
As for doing it without making a copy...
I get a pretty strong impression that it's not possible to do it perfectly for security reasons, but here are some ideas:
If you are willing to either manually type the path or guarantee that the files will always be in a known folder, you could use the File API to retrieve the filename only.
You could write a script or plugin which implements this workflow:
Copy the file in Explorer
Position the cursor where you want the URL
Press a global hotkey
As someone who moved to Linux years ago, I'd use Python+PyGTK+python-xlib but AutoHotKey is GPLed and should allow you to do it on Windows:
It can bind a global hotkey
It can read the clipboard
If you can't find or write an AHK clone of Python's os.path.relpath and urllib.pathname2url, you can call an external script to do the work.
It can fake keypresses, allowing it to auto-type into your TiddlyWiki's edit window.

How to make an HTML file auto-savable?

I need to create a single HTML file where a person can input text in text fields, then click a button and save the file itself, so they won't lose their changes. The idea is similar to what a WYSIWYG editor does to HTML documents, but I need it implemented in the document itself.
Where do I start? I can't find anything like that on Google; perhaps I'm searching for the wrong terms.
I need something that uses HTML + JavaScript, with no server-side scripting.
JavaScript alone does not have the ability to modify files on your file system; all browsers enforce this for (good) security reasons. You will not be able to make changes to the HTML document itself (although, according to the comment by Sean below, you might be able to produce a new copy of the document).
You might try using cookies to store the input values (write them automatically and load them when the document opens). There are various jQuery plugins available to aid in reading and writing cookies.
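A rough sketch of the idea, using localStorage rather than cookies (the same pattern works with document.cookie for small values; it assumes each field has an id):

// Restore saved values on load, and save every field as the user types.
var fields = document.querySelectorAll('input[type="text"], textarea');
Array.prototype.forEach.call(fields, function (field) {
  var saved = localStorage.getItem('autosave:' + field.id);
  if (saved !== null) field.value = saved;
  field.addEventListener('input', function () {
    localStorage.setItem('autosave:' + field.id, field.value);
  });
});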
In business or enterprise systems this is usually done with a database, which would require server-side scripting.
I think most of these answers are incorrect. Using the FileSystem API, content is only saved to a sandboxed, hidden folder; the user has no control over where it is saved.
As suggested by Sean Vieira, using TiddlyWiki is a good solution.
However, if you want to customise it, you can make a Flash/JS bridge in which the Flash SWF saves the actual content.

If the name of the uploaded image contains .php.jpg

I preg_replace anything like .php.jpg, .php.png, .js.jpg, or .js.png in the name of the uploaded file (images) for security purposes, and would like to know if there are any other extensions I should also consider replacing before moving the file to the corresponding folder after the upload is complete.
Looking at the filename alone is really not a safe way to prevent rogue code/executables being uploaded.
Depending on the type of files you are accepting for upload there are better ways to play it safe.
As a general rule never upload any files to anywhere publicly accessible from the web until you know 100% they're not anything dubious.
If you are allowing image uploads, use a server-side technology such as GD or ImageMagick to re-save the file before using it. If these tools can't load a valid image from what has been uploaded (catch the errors so you know...), either drop the file or quarantine it until you can investigate manually.
In any case, never store the file under the original file name as uploaded, even with extensions swapped out or replaced.
Search the site for upload security for some more detailed tips - this question does come up reasonably regularly.
Depending on what you need to do with the files, you could encrypt them or simply gzip them. That way you save space on the server, and if someone uploads script files they can't be run via an HTTP request. The other thing I would do is check whether there is any public access to the files, and/or set up an .htaccess file.
Short answer: yes, there are many other executable extensions you should consider blocking if you are going to use a blacklist approach, such as 'phtml', 'java', 'perl', 'py', 'asp', and 'go', and you should also consider 'exe', 'bat', and others that could deliver malware.
But I would only use a blacklist if you are going to completely block moving such files (or possibly even block the upload itself, as a kind of form validation, before you get to the more complicated parts), not if you are going to accept the upload and change the names on moving. No matter what, you don't want some unknown Java or Go or Python file sitting on your server, regardless of what it is called, and even if you are not on Apache (which lets you put the executable extension in places other than the last segment of the name). Why allow that upload to go forward at all? It can be simpler (and you'll see people recommend this) to just not allow names with three or more segments, but there are plenty of legitimate uses of such names, for example for language identification. This approach alone is not going to keep you safe from executable code, however; it needs to be part of a larger set of things you do to make file uploading as safe as possible (for example, you also want to check image files for embedded code).

What's a good way to organize external JavaScript file(s)?

In an ASP.NET web application with a lot of HTML pages, a lot of inline JavaScript functions are accumulating. What is a good plan for organizing them into external files? Most of the functions are particular to the page for which they are written, but a few are relevant to the entire application.
A single file could get quite large. With C#, etc., I usually divide the files at least into one containing the general functions and classes, so that I can use the same file for other applications, and one for functions and classes particular to this application. I don't think that a large file would be good for performance in a web application, however.
What is the thinking in this regard?
You probably want each page to have its page-specific JavaScript in one place, and then all the shared JavaScript in a large file. If you SRC the large file, then your users' browsers will cache the JavaScript code on the first load, and the file size won't be an issue. If you're particularly worried about it, you can pack/minify your JavaScript source into a "distributable" form and save a few kilobytes.
Single file is large but is cached. Too many small files mean more requests to the server. It's a balancing act. Use tools like Firebug and YSlow to measure your performance and figure out what is best for your application.
There is some per-request overhead, so in total you will improve performance by combining it all into a single file. It may, however, slow down load times on the first page a user visits, and it may result in useless traffic if some users never need certain parts of your JS.
The first of these problems isn't as serious as it sounds, though. If you have something like a signup page that everyone visits first and spends some time on (filling out a form, etc.), the page will be displayed once the HTML has loaded, and the JS can load in the background while the user is busy with the form anyway.
I would organize the JS into different files during development, i.e. one for general stuff and one per model, then combine them into a single file in the build process. You should also do compression at this point.
UPDATE: I explain this a bit more in depth in a blog post.
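As a toy illustration of that build step (the file names are made up, and a real setup would also pipe the result through a minifier):

// Concatenate the development files into one bundle for production.
var fs = require('fs');

var files = ['js/general.js', 'js/users.js', 'js/orders.js'];
var bundle = files.map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join('\n;\n'); // the stray semicolons guard against files missing a trailing one

fs.writeFileSync('js/bundle.js', bundle);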
Assuming you mean .aspx pages when you indicate "HTML pages," here is what I do:
Let's say I have a page named foo.aspx and I have JavaScript specific to it. I name the .js file foo.aspx.js. Then I use something like this in a base page class (i.e. all of my pages inherit from this class):
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);

    string possiblePageSpecificJavaScriptFile = string.Format("{0}.js", this.TemplateControl.AppRelativeVirtualPath);

    if (File.Exists(Server.MapPath(possiblePageSpecificJavaScriptFile)))
    {
        string absolutePath = possiblePageSpecificJavaScriptFile.Replace("~", Request.ApplicationPath);
        absolutePath = string.Format("/{0}", absolutePath.TrimStart('/'));
        Page.ClientScript.RegisterClientScriptInclude(absolutePath, absolutePath);
    }
}
So, for each page in my application, this will look for a *.aspx.js file that matches the name of the page (in our example, foo.aspx.js) and place, within the rendered page, a script tag referencing it. (The code after base.OnLoad(e); would best be extracted into its own method; I am simply trying to keep this as short as possible!)
To complete this, I have a registry hack that causes any *.aspx.js file to collapse underneath its *.aspx page in Visual Studio's Solution Explorer (i.e. it nests under the page, just like the *.aspx.cs file does). The registry hack differs depending on the version of Visual Studio you are using. Here are a couple that I use with Windows XP (I don't know if they differ for Vista, because I don't use Vista): copy each one into a text file, rename it with a .reg extension, then execute the file:
Visual Studio 2005
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\8.0\Projects\{E24C65DC-7377-472b-9ABA-BC803B73C61A}\RelatedFiles\.aspx\.js]
#=""
Visual Studio 2008
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Projects\{E24C65DC-7377-472b-9ABA-BC803B73C61A}\RelatedFiles\.aspx\.js]
#=""
You will probably need to reboot your machine before these take effect. Also, the nesting will only take place for newly added .js files; any existing files already named *.aspx.js can be nested either by re-adding them to the project or by manually modifying the .csproj file's XML.
Anyway, that is how I do things and it really helps to keep things organized. For JavaScript files containing commonly-used JavaScript, I keep those in a root-level folder called JavaScript and also have some code in my base page class that adds those references. That should be simple enough to figure out. Hope this helps someone.
It also depends on the life of a user session. If a user is likely to visit multiple pages and spend a long time on the site, a single large file can be worth the initial load, since it is cached. If it's more likely the user will come from Google and just hit a single page, then it would be better to have individual files per page.
Use "namespacing" together with a folder-structure:
(Screenshot of the suggested folder structure: http://www.roosteronacid.com/js.jpg)
All you have to do is include Base.js, since that file sets up all the namespaces, plus the .js file(s) (the classes) you want to use on a given page.
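A tiny sketch of what such a Base.js might contain (the names are invented for illustration):

// Base.js: create the root object and a helper for carving out namespaces.
var MyApp = window.MyApp || {};

MyApp.namespace = function (path) {
  var current = MyApp;
  path.split('.').forEach(function (part) {
    current[part] = current[part] || {};
    current = current[part];
  });
  return current;
};

// A page-specific file then hangs its code off the shared structure:
MyApp.namespace('pages.Default').init = function () { /* page-specific setup */ };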
As far as page-specific scripts goes, I normally name the script according to the ASPX/HTML pages:
Default.aspx
Default.aspx.js
I would recommend that if you split your JS into separate files, you do not use lots of <script> tags to include them, as that will hurt page-load performance. Instead, use server-side includes to inline them before they leave the server.
