I am trying to add SRI (subresource integrity) for an external JS file on our website. My problem is that the hash from srihash.org is not accepted by either Firefox or Google Chrome. The hash from https://report-uri.io/home/sri_hash also differs and is not accepted, and hashes from other online generators are rejected as well. I've noticed that the hash Chrome generates is completely different from what even "official" SRI generators produce. Is this a known problem, and how do I fix it? I have no clue where Chrome gets this different hash from.
I tried using different SRI generators, hashing the file with SHA-256 and encoding the result to base64, and copying the code from the file and hashing it in an online hash generator; in the end, nothing comes close to the hash that gets accepted.
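For reference, here is a minimal Node.js sketch of what I understand the generators to be doing (the file name is a placeholder). As far as I can tell, the digest has to be taken over the exact bytes the server delivers to the browser, not over a local copy that may differ:

// Compute an SRI integrity value: base64-encoded SHA-384 digest of the file bytes.
const crypto = require('crypto');
const fs = require('fs');

const fileBytes = fs.readFileSync('external-script.js'); // placeholder path
const digest = crypto.createHash('sha384').update(fileBytes).digest('base64');

console.log(`integrity="sha384-${digest}"`);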
We are using pdf.js (1.3.91) and pdf.worker.js. Our security team is now asking me whether I use the SHA-1 algorithm in my own or my third-party code, but I cannot answer that. So my questions here are:
Did PDF, and therefore Mozilla's pdf.js, ever use the SHA-1 algorithm (e.g. for password hashing)? If yes,
did pdf.js remove it? If yes, in what version?
More context would help. I guess the OP's security concern is about the weakness of SHA-1 as described at https://shattered.io/static/shattered.pdf
Did PDF, and therefore Mozilla's pdf.js, ever use the SHA-1 algorithm (e.g. for password hashing)?
The goal of any PDF viewer has been to display a PDF no matter how broken and unsafe it is (and Adobe Reader set a very "high" bar here). So it's very unlikely any reader will remove the SHA-1 algorithm for consumers by default.
If yes, did pdf.js remove it?
There is no evidence of a SHA-1 implementation in PDF.js, looking at https://github.com/mozilla/pdf.js/blob/a8c87f8019aed3e9fcc5a7c2733ea3b8aa33e59a/src/core/crypto.js. Per PDF 32000, SHA-1 is used only for signature checking, and https://github.com/mozilla/pdf.js/issues/1076 still looks open.
So, no: SHA-1 has not made it into PDF.js yet.
I would like to create a Node-Webkit app but avoid redistributing the font files. I have thought of a few approaches. I am considering the model used by hosted font solutions, where the font file is served from a temporary URL.
I have a way to encrypt the font file: you could convert the font to base64 and assign it to a local variable in a JavaScript library, inside a closure. The JavaScript file is compiled to binary and cannot be read by the end user.
Setting the base64 value on a style property would potentially expose the font as a base64 value in the DOM. What I would like to do is create a temporary route to the font file, render the font from the private base64 value, then remove the route once it is accessed. I can see how to achieve this in a Node.js app, but I'm new to Node-Webkit and don't see any documentation on routing.
It seems like hosted font solutions allow one-time access to the font files so users cannot download them. So does Node-Webkit have the capability to perform routing?
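To make the idea concrete, here is a minimal sketch of the one-time route using Node's built-in http module, which I understand is available inside Node-Webkit (the token, port, MIME type, and base64 string are all placeholders):

// One-time font route: serve the decoded font once, then forget the token.
const http = require('http');

const fontBase64 = '...'; // placeholder: the private base64-encoded font
const tokens = new Set(['one-time-token']); // placeholder token

http.createServer((req, res) => {
  const token = req.url.slice(1); // e.g. GET /one-time-token
  if (tokens.has(token)) {
    tokens.delete(token); // the route disappears after the first access
    res.writeHead(200, { 'Content-Type': 'font/woff' });
    res.end(Buffer.from(fontBase64, 'base64'));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8123); // placeholder port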
First things first: welcome to the internet, you can't prevent people from using the data you send them. Either you don't send them data, or you accept the fact that once the data is on their computer, they can do with it whatever they like (if you want people to be able to connect to your content through a browser, the browser needs to download and decode the content, which means that you cannot stop distribution, and in fact you are a distributor in this scenario).
Even if you tie the font loading to a session identifier (e.g. your user has to hit the page URL first, which sets a cookie value, which is then checked, in combination with the IP for which the cookie was initially set, when they try to download the webfont) they only need to download your font once to trivially have access to it and do with it what they want. It'll either live in their browser's cache directory, or it's accessible via JavaScript (by mining document.styleSheets, for instance), which means it's trivially converted to real data and saved to disk (e.g. a window.open with a binary mimetype causes browsers to pop up a save-to-file dialog).
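To illustrate, a rough sketch of that mining, assuming the fonts are declared via @font-face in same-origin stylesheets:

// Walk every same-origin stylesheet and pull out @font-face sources.
for (const sheet of document.styleSheets) {
  let rules;
  try {
    rules = sheet.cssRules; // throws for cross-origin stylesheets
  } catch (e) {
    continue;
  }
  for (const rule of rules) {
    if (rule instanceof CSSFontFaceRule) {
      // The src descriptor holds the font URL or data: URI.
      console.log(rule.style.getPropertyValue('src'));
    }
  }
}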
There, I just downloaded your fonts despite your best efforts: if you can send me the data, and the technology we've chosen for that exchange is HTTP(S), I will be able to access that data, no matter how much you further restrict how or when I can get that data. I just need to get it once.
So: don't focus your efforts on the how or when. Take the given that your users will have access to your font(s), even if only once, and instead focus your efforts on what they can do with your font(s) once they do, because that's far more important. There are several things you can do to make sure that what you distribute is mostly useless outside your content. For instance:
1. Don't use full fonts; use subsets, so that your users only get partial fonts containing strictly those glyphs that are needed to render your own content. This severely limits what others can do. You can take this as far as you like, serving dedicated subset fonts per page, or even per section of a page.
2. Set the fsType flag for your font(s) to disallow installation. That way people will get your font(s), but they can't use them further except on the web.
3. Make sure to properly mark the font(s) license in the font itself, so that if people do use your fonts, you have legal recourse and can sue them for circumventing your license outside the "personal use" context.
However, if you also want to take advantage of caching, you don't want to do (1); in that case, (2) and (3) are enough to give you a legal basis to go after people who use your font(s).
Bottom line: preventing users from "getting" your data is a waste of time. It's the internet, your users getting your data is entirely the point of the technology. Instead focus on making sure that what they get is useful only within the context of your content.
After all, if TypeKit can make this work, so can you. (Which would be an additional recommendation: don't roll your own solution if you can make use of an established existing solution. Are the fonts available through Typekit or the like? Use them instead and save yourself the trouble of reinventing the wheel)
You could encrypt it and then decrypt it with Node's native methods, for that matter:
http://lollyrock.com/articles/nodejs-encryption/
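A minimal sketch of that idea with Node's built-in crypto module (the algorithm, key handling, and file names are placeholders; note the key still has to ship with the app somewhere, so this only raises the bar):

const crypto = require('crypto');
const fs = require('fs');

const algorithm = 'aes-256-cbc';
const key = crypto.randomBytes(32); // placeholder: in practice, a key bundled with the app
const iv = crypto.randomBytes(16);

// Encrypt the font once, e.g. at build time.
const cipher = crypto.createCipheriv(algorithm, key, iv);
const encrypted = Buffer.concat([cipher.update(fs.readFileSync('font.woff')), cipher.final()]);
fs.writeFileSync('font.enc', encrypted);

// Decrypt it at runtime, just before serving or embedding it.
const decipher = crypto.createDecipheriv(algorithm, key, iv);
const font = Buffer.concat([decipher.update(fs.readFileSync('font.enc')), decipher.final()]);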
Also, you can zip all your assets into one file, rename it to "package.nw", and the Chrome executable will run it (I know this is not a reliable security measure). On top of that, you can merge that file with the nw.exe file (the wiki below documents this; on Windows it is a binary concatenation along the lines of copy /b nw.exe+package.nw app.exe), so you end up with a single file that is the executable. Regular users will not be able to see your files, and your package.json is somewhat protected, preventing users from changing the configuration and seeing your files/assets.
https://github.com/nwjs/nw.js/wiki/How-to-package-and-distribute-your-apps#step-2a-put-your-app-with-nw-executable
Context:
I'm a TiddlyWiki (an offline, non-linear personal notebook) user, and I would like to improve the workflow of attaching images. The basic way to attach an image to a TW is to write something like this:
[img[image-url-either-absolute-or-relative]]
in the edit area. The issue is, if I have 20 images I'd like to attach, I have to extract 20 URLs and paste them (and surround them with the [img[...]] wrappers). My idea is to write a plugin that would allow me to drag-and-drop the 20 files from a file manager into the editing area and get 20 URLs (ideally already wrapped in the basic [img[...]] syntax or some other).
Is this possible?
Getting a URL (or URI, whatever) of a local file isn't a usual operation for web applications, and for security reasons it seems to be forbidden (at least by default). Still, is there any way to implement this, provided that the user accepts any security warnings?
Maybe a workaround?
If there's a possibility of a workaround (maybe using AutoHotKey or something else), I'll be glad to hear it (keep in mind that the goal is to improve the workflow, so a minimum of additional clicking/keypressing is desirable).
Currently, I would love to implement this for Windows 7 + Opera 12.17, but of course the more general the solution is, the better (in the end, I'll share it with others if it's that useful). And yes, currently I'm talking about TW Classic, not TW5.
The approach for doing it with a copy operation has two phases:
First, you use the HTML5 support for drag-and-drop upload to get the file into the browser.
As the page I linked demonstrates, the HTML5 File API provides you with the filename (without a path), the mimetype (possibly guessed), the size in bytes, and the contents. (The API spec also defines a field for the last-modified date.)
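A minimal sketch of that first phase (the element ID and the phase-two hook are assumptions for illustration):

// Phase 1: accept dropped files and read their metadata and contents.
const editArea = document.getElementById('edit-area'); // placeholder element

editArea.addEventListener('dragover', (e) => e.preventDefault());

editArea.addEventListener('drop', (e) => {
  e.preventDefault();
  for (const file of e.dataTransfer.files) {
    console.log(file.name, file.type, file.size); // filename (no path), mimetype, bytes
    const reader = new FileReader();
    reader.onload = () => {
      // reader.result now holds the file contents as an ArrayBuffer.
      saveNextToWiki(file.name, reader.result); // hypothetical phase-2 hook
    };
    reader.readAsArrayBuffer(file);
  }
});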
The second step is then to get the file back onto disk in a known location relative to the TiddlyWiki file and insert a relative URL at the current cursor location.
TiddlyWiki has a collection of different saver backends, so you basically want to hook in and use the same API TW Classic uses to write the backup file and the optional RSS ChangeLog.
Note: this may or may not be available in TW5 yet. I remember seeing re-adding backup support still listed on the roadmap and, judging by savers/tiddlyfox.js, it currently seems to hard-code the path of the TiddlyWiki itself. However, saving the images inside the wiki as binary data tiddlers is always an option.
As for doing it without making a copy...
I get a pretty strong impression that it's not possible to do it perfectly for security reasons, but here are some ideas:
If you are willing to either type the path manually or guarantee that the files will always be in a known folder, you could use the File API to retrieve just the filename.
You could write a script or plugin which implements this workflow:
Copy the file in Explorer
Position the cursor where you want the URL
Press a global hotkey
As someone who moved to Linux years ago, I'd use Python+PyGTK+python-xlib, but AutoHotKey is GPLed and should allow you to do it on Windows:
It can bind a global hotkey
It can read the clipboard
If you can't find or write an AHK clone of Python's os.path.relpath and urllib.pathname2url, you can call an external script to do the work.
It can fake keypresses, allowing it to auto-type into your TiddlyWiki's edit window.
I would like to implement an in-browser Microsoft Word document merge feature that converts the merged document into PDF and offers it to the user for download. I would like this process to be supported in Google Chrome and Firefox. Here is how I would like it to work:
1. Client-side JavaScript obtains the Word template document in docx format, either from a server or by asking the user for a file upload (which it can then read using the FileReader API).
2. The JavaScript uses its local data structures (e.g., data lists it has obtained via Ajax) to expand the template into a document. It can do this either directly, by unzipping the docx file and processing its contents, or using DOCx.js. The template expansion is just a matter of substituting template variables with values obtained from the local data structures (see the sketch after this list).
3. The JavaScript then converts the expanded template into PDF.
4. The JavaScript offers the PDF file to the user for download, e.g., using Downloadify.
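To make step 2 concrete, here is a minimal sketch of the direct approach, assuming the JSZip library and a {{placeholder}} convention for template variables (both assumptions on my part):

// Expand {{placeholders}} inside word/document.xml of a .docx (which is a zip archive).
async function expandTemplate(docxArrayBuffer, values) {
  const zip = await JSZip.loadAsync(docxArrayBuffer);
  let xml = await zip.file('word/document.xml').async('string');

  // Replace each {{name}} with the corresponding value from the local data.
  xml = xml.replace(/\{\{(\w+)\}\}/g, (match, name) => values[name] ?? match);

  zip.file('word/document.xml', xml);
  return zip.generateAsync({ type: 'blob' }); // the expanded .docx
}

(In practice, Word sometimes splits a placeholder across XML runs, so a robust implementation would need to normalize the runs first.)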
The difficulty I am having is in step 3. My understanding (based on all the Googling I have done so far) is that I have the following options:
1. Require that the local machine is a Windows machine, and invoke Word on it to convert to PDF. This can be done with a little bit of scripting using WScript.Shell, and it looks doable with Internet Explorer (a rough sketch follows this list). But based on what I have read, it doesn't look like I can call WScript.Shell from within either Chrome or Firefox, because of their security constraints.
2. I am open to trying Silverlight to do the conversion, but I have not found enough documentation on how to do this. Ideally, if I used Silverlight, I would like to write the Silverlight code in JavaScript, because (a) I don't know much C#, and (b) I think it would be much easier in JavaScript.
3. Create a web service that converts a given docx file to a PDF file, and invoke that service via Ajax. I would rather not do this, if possible, for a few reasons: (a) I tried using docx4java (I am a reasonably skilled Java programmer), but the conversion process is far too slow, and it does not preserve document content very well; and (b) I would like to avoid a call out to the network, to avoid security issues. It does seem possible to write a little service on a Windows server for doing the conversion, and if there is no other good option, I might go that route.
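Here is a rough sketch of what I mean by option 1 (IE-only; it assumes Word 2007+ is installed and that ActiveX is permitted; the paths are placeholders):

// IE-only: drive Word through COM automation to save a .docx as PDF.
var word = new ActiveXObject('Word.Application');
var doc = word.Documents.Open('C:\\temp\\merged.docx'); // placeholder path
var wdFormatPDF = 17; // Word's SaveAs constant for PDF output
doc.SaveAs('C:\\temp\\merged.pdf', wdFormatPDF); // placeholder path
doc.Close(false); // close without saving further changes
word.Quit();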
If I have been unclear about anything, please let me know. I would appreciate your ideas and feedback.
I love command line tools.
Upload the doc to your server and use LibreOffice to convert it to PDF via the command line:
soffice.exe --headless --convert-to pdf --outdir E:\Docs\Out E:\Docs\In\a.doc
You can display a progress bar to the user and, when the conversion is complete, give them the option to download the resulting PDF.
For more info on LibreOffice's command-line parameters, go here.
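Server-side, the call could look something like this minimal Node.js sketch (the paths are placeholders, and soffice is assumed to be on the PATH):

// Convert an uploaded .doc/.docx to PDF by shelling out to LibreOffice.
const { execFile } = require('child_process');

function convertToPdf(inputPath, outDir, callback) {
  execFile(
    'soffice',
    ['--headless', '--convert-to', 'pdf', '--outdir', outDir, inputPath],
    (err) => callback(err) // the PDF lands in outDir with the same basename
  );
}

// Usage (placeholder paths):
convertToPdf('E:\\Docs\\In\\a.doc', 'E:\\Docs\\Out', (err) => {
  if (err) throw err;
  console.log('Conversion finished');
});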
Done.
Very old question now, but for anyone who stumbles across this: WebAssembly (wasm) now makes this sort of approach possible.
We've just released https://www.npmjs.com/package/@nativedocuments/docx-wasm, which can perform the conversion locally.
So, I've got to implement a drag-and-drop operation from Mac's Mail application into a website that I'm working on, and after reading Apple's documentation and dissecting a few HTML5 demos, I'm fairly well stuck.
I've already got the site properly processing .EML files, so everything would be great if I could populate a file input with the email's location. (Though, since it's a promised file, it [apparently] doesn't quite exist yet.)
I can get the list of contents, sure, but I'm at a loss as to how to get the file from the promised-file URL, or the (apparent) .EML file from dyn.ah62d4rv4gu8yc6durvwwa3xmrvw1gkdusm1044pxqyuha2pxsvw0e55bsmwca7d3sbwu.
Why not simply use AppleScript to export the message(s) you need out of Mail.app? There is an "archive" function in the AppleScript dictionary for Mail.app which will export, into an external file, pretty much all the fields you might want (identified in a pseudo-printf-like syntax). Then you can slurp in the contents of that file however you want. It's probably the shortest distance between two points, and far more future-proof than trying to reverse engineer Mail's mailbox format.