If the name of the uploaded image contains .php.jpg

I preg_replace anything like .php.jpg, .php.png, .js.jpg or .js.png in the names of uploaded image files for security purposes. Are there any other extensions I should also consider replacing before moving the file to its destination folder once the upload is complete?

Looking at the filename alone is really not a safe way to prevent rogue code/executables being uploaded.
Depending on the type of files you are accepting for upload there are better ways to play it safe.
As a general rule never upload any files to anywhere publicly accessible from the web until you know 100% they're not anything dubious.
If you are allowing image uploads, use a server-side technology such as GD or ImageMagick to re-save the file before using it. If these tools can't load a valid image from what has been uploaded (catch the errors so you know...), either drop the file or quarantine it until you can investigate manually.
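Re-saving requires a server-side imaging library, but a lighter complementary first filter is to check that the uploaded bytes actually begin with a known image signature. A minimal sketch (the signature list is illustrative, not exhaustive, and a file can still hide code in metadata, so this is a first filter only, not a replacement for re-saving):

```javascript
// Complementary check (not a replacement for re-saving with GD/ImageMagick):
// verify that the uploaded bytes start with a known image signature.
const SIGNATURES = {
  png:  [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a],
  jpeg: [0xff, 0xd8, 0xff],
  gif:  [0x47, 0x49, 0x46, 0x38], // "GIF8"
};

function sniffImageType(buf) {
  for (const [type, sig] of Object.entries(SIGNATURES)) {
    if (sig.every((byte, i) => buf[i] === byte)) return type;
  }
  return null; // unknown -> drop or quarantine
}
```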
In any case never store the file under the original file name as uploaded even with extensions swapped out / replaced.
Search the site for upload security for some more detailed tips - this question does come up reasonably regularly.

Depending on what you need to do with the files, you could encrypt them or just gzip them. That saves space on the server, and if someone uploads script files they can't be run via an HTTP request. I would also check whether there is any public access to the files and/or set up an .htaccess file.

Short answer: yes, there are many other executable extensions you should consider blocking if you are going to use a blacklist approach, such as phtml, java, perl, py, asp and go, and you should also consider exe, bat and others that could deliver malware.
But I would only use a blacklist if you are going to completely block moving such files (or possibly even reject them during upload as a kind of form validation before you get to the more complicated parts), not if you are going to upload them and change the names on moving. No matter what, you don't want some unknown Java or Go or Python file sitting on your server, regardless of what it is called, and even if you are not on Apache (which lets the executable extension appear in segments other than the last one). Why allow that upload to go forward?
It can be simpler (and you'll see people recommend this) to disallow names with three or more extension segments altogether, but there are plenty of legitimate uses of such names, for example language identification. This approach alone is not going to keep you safe from executable code, however. It needs to be part of a larger set of measures to make file uploading as safe as possible (for example, you also want to check image files for embedded code).
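A minimal sketch of such a blacklist check, rejecting a blacklisted extension in any dot-separated segment of the name rather than only the last one (the extension list here is illustrative, not exhaustive):

```javascript
// Sketch: reject uploads whose names contain a blacklisted executable
// extension in ANY dot-separated segment, not just the last one, since
// misconfigured servers can execute "shell.php.jpg".
const BLOCKED = new Set([
  'php', 'phtml', 'php3', 'php4', 'php5',
  'js', 'java', 'jsp', 'perl', 'pl', 'py', 'asp', 'aspx',
  'go', 'cgi', 'sh', 'exe', 'bat', 'com',
]);

function isSuspiciousName(filename) {
  // Every segment after the first is a candidate extension.
  const parts = filename.toLowerCase().split('.');
  return parts.slice(1).some((seg) => BLOCKED.has(seg));
}
```

As the answer says, this belongs with the other measures (re-saving, renaming, quarantining), never on its own.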

Related

How can I protect fonts on a node webkit app?

I would like to create a Node-Webkit app but avoid redistributing the font files. I have thought of a few approaches. I am considering the model used by hosted font solutions where a temporary URL containing the font file is hosted.
I have a way to obscure the font file: you could convert the font to base64 and assign it to a local variable in a JavaScript library with a closure. The JavaScript file is compiled to binary and cannot be read by the end user.
Setting the base64 value to a style property would potentially expose the font as a base64 value to the DOM. What I would like to do is create a temporary route to the font file that I render from the private base64 value then remove the route once it is accessed. I can see how to achieve this as a node.js app but I'm new to Node-Webkit and don't see any documentation on router.
It seems like the hosted font solutions allow a one-time access to the font files so users cannot download the files. So does Node-Webkit have the capability to perform routing?
First things first: welcome to the internet, you can't prevent people from using the data you send them. Either you don't send them data, or you accept the fact that once the data is on their computer, they can do with it whatever they like (if you want people to be able to connect to your content through a browser, the browser needs to download and decode the content, which means that you cannot stop distribution, and in fact you are a distributor in this scenario).
Even if you tie the font loading to a session identifier (e.g. your user has to hit the page URL first, which sets a cookie value, which is then checked when they try to download the webfont in combination with the IP for which the cookie was initially set) they only need to download your font once to trivially have access to it and do with it what they want. It'll either live in their browser's cache directory, or it's accessible via JavaScript (by mining document.stylesheets for instance) which means it's trivially converted to real data and saved to disk (e.g. a window.open with a binary mimetype causes browsers to pop up a save-to-file dialog).
There, I just downloaded your fonts despite your best efforts: if you can send me the data, and the technology we've chosen for that exchange is HTTP(S), I will be able to access that data, no matter how much you further restrict how or when I can get that data. I just need to get it once.
So: don't focus your efforts on the how or when. Take the given that your users will have access to your font(s), even if only once, and instead focus your efforts on what they can do with your font(s) once they do, because that's far more important. There are several things you can do to make sure that what you distribute is mostly useless outside your content. For instance:
Don't use full fonts, use subsets, so that your users only get partial fonts containing strictly those glyphs that are needed to render your own content. This severely limits what others can do. You can take this as far as you like, serving dedicated subset fonts per page, or even per section of a page.
Set the fsType flag for your font(s) to disallow installation. That way people will get your font(s), but they can't use them further except on the web.
Make sure to properly mark the font(s) license in the font itself, so that if people do use your fonts, you have legal recourse and can sue them for circumventing your license outside the "personal use" context.
However, if you also want to take advantage of caching, you don't want to do (1), and (2) and (3) are enough to give you a legal basis to go after people who use your font(s).
Bottom line: preventing users from "getting" your data is a waste of time. It's the internet, your users getting your data is entirely the point of the technology. Instead focus on making sure that what they get is useful only within the context of your content.
After all, if TypeKit can make this work, so can you. (Which would be an additional recommendation: don't roll your own solution if you can make use of an established existing solution. Are the fonts available through Typekit or the like? Use them instead and save yourself the trouble of reinventing the wheel)
You could encrypt it and then decrypt it with node native methods for that matter:
http://lollyrock.com/articles/nodejs-encryption/
You can also zip all your assets into one file, rename it to "package.nw", and the Chrome executable will run it (I know this is not a reliable security measure). Additionally, you can merge that file with the nw.exe file, so you end up with a single file that is the executable. Regular users will then not be able to see your files, and your package.json file is somewhat protected, preventing users from changing the configuration or seeing your files/assets.
https://github.com/nwjs/nw.js/wiki/How-to-package-and-distribute-your-apps#step-2a-put-your-app-with-nw-executable

Is it possible to drag-and-drop a local file from a file manager to a browser and get the file's uri?

Context:
I'm a TiddlyWiki (an offline non-linear personal notebook) user, and I would like to improve the workflow of image attaching. The basic way to attach an image to a TW is to write stuff like this:
[img[image-url-either-absolute-or-relative]]
in the edit area. The issue is, if I have 20 images I'd like to attach, I have to extract 20 URLs and paste them (and surround them with the [img[...]] wrappers). My idea is to write a plugin that would let me drag-and-drop the 20 files from a file manager into the editing area and get 20 URLs (ideally wrapped with the basic [img[...]] syntax or similar).
Is this possible?:
Getting a URL (or URI, whatever) of a local file isn't a usual operation for web applications, and for security reasons it seems to be forbidden (at least by default). Still, is there any way to implement this, provided that the user will accept any security warnings?
Maybe a workaround?
If there's a possibility for a workaround (maybe using AutoHotKey or something else), I'll be glad to hear it (keep in mind that the goal is to improve the workflow, so a minimum of additional clicking/keypressing is desirable).
Currently I would love to implement this for Windows 7 + Opera 12.17, but of course the more general the solution, the better (in the end, I'll share it with others if it's that useful). And yes, I'm currently talking about TW Classic, not TW5.
The approach for doing it with a copy operation has two phases:
First, you use the HTML5 support for drag-and-drop upload to get the file into the browser.
As the page I linked demonstrates, the HTML5 File API provides you with the filename (without a path), mimetype (possibly guessed), size in bytes, and contents. (The API spec also shows a field for last modified date)
The second step is then to get the file back onto disk in a known location relative to the TiddlyWiki file and insert a relative URL at the current cursor location.
TiddlyWiki has a collection of different saver backends it uses, so you basically want to hook in and use the same API TW Classic uses to write the backup file and optional RSS ChangeLog.
Note: This may or may not yet be available in TW5. I remember seeing re-adding backup support as still on the roadmap and, judging by savers/tiddlyfox.js, it seems to currently hard-code the path of the TiddlyWiki itself. However, saving them within the Wiki as binary data tiddlers is always an option.
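A sketch of that first phase: a drop handler (shown as comments, since it needs a DOM) feeding a pure helper that produces the [img[...]] markup. The folder name and element names are assumptions for illustration:

```javascript
// Pure helper: turn dropped file names into TW Classic [img[...]] markup,
// assuming the files will be saved into `folder` next to the wiki.
function toImgMarkup(filenames, folder) {
  return filenames.map((name) => `[img[${folder}/${name}]]`).join('\n');
}

// Browser wiring (requires a DOM; dropZone/editArea are hypothetical):
// dropZone.addEventListener('dragover', (e) => e.preventDefault());
// dropZone.addEventListener('drop', (e) => {
//   e.preventDefault();
//   const names = [...e.dataTransfer.files].map((f) => f.name);
//   editArea.value += toImgMarkup(names, 'images');
// });
```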
As for doing it without making a copy...
I get a pretty strong impression that it's not possible to do it perfectly for security reasons, but here are some ideas:
If you are willing to either manually type the path or guarantee that the files will always be in a known folder, you could use the File API to retrieve the filename only.
You could write a script or plugin which implements this workflow:
Copy the file in Explorer
Position the cursor where you want the URL
Press a global hotkey
As someone who moved to Linux years ago, I'd use Python+PyGTK+python-xlib but AutoHotKey is GPLed and should allow you to do it on Windows:
It can bind a global hotkey
It can read the clipboard
If you can't find or write an AHK clone of Python's os.path.relpath and urllib.pathname2url, you can call an external script to do the work.
It can fake keypresses, allowing it to auto-type into your TiddlyWiki's edit window.

Is it possible to increase the maximum upload size on WordPress?

Basically, I'd like to find a way to increase the maximum upload size. I'm only talking about WordPress because it happens to be the platform I'm working on, but what I need is a way around the PHP settings on the server, since WordPress seems to use those settings as defaults.
Note that I'm only looking at uploading pictures, no other sort of files.
I have found several pages on google explaining either how to edit wordpress setting (in my case this is not something that would work), or changing the php setting on the server side, and I simply cannot do that.
I have thought of a few ways to reach this goal but don't know where to start, as I'm not that good at picture processing. I was thinking of evaluating the picture size and dividing it into several files (is that something doable in JS?), then uploading them to the server and assembling them there in PHP (that I can do).
Can you tell me what you think about it? I'm not asking anybody to do the work, I'm just looking for a hint, or for somebody to tell me where to start in JS or whatever other language I need to use on the front end.
Thanks a lot.
Yes, it is certainly possible to split a file into chunks via JS and to send these chunks individually.
Declare a file input field:
<input type="file" id="input">
Access the first file (there could be more if using the multiple attribute):
var selected_file = document.getElementById('input').files[0];
You now have a File object which also inherits the Blob interface. You can therefore call Blob.slice in order to split the data into chunks.
Upload those chunks via AJAX
Combine the chunks again on the server side
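The slicing step can be sketched like this (the chunk size is arbitrary; each resulting Blob would be the body of one AJAX request):

```javascript
// Split a Blob into fixed-size chunks with Blob.slice.
// Blob.slice is cheap: it creates views, not copies.
function sliceBlob(blob, chunkSize) {
  const chunks = [];
  for (let start = 0; start < blob.size; start += chunkSize) {
    chunks.push(blob.slice(start, start + chunkSize));
  }
  return chunks;
}
```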
Further readings:
https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications
https://developer.mozilla.org/en-US/docs/Web/API/File
https://developer.mozilla.org/en-US/docs/Web/API/Blob
The restriction is at the server level.
You MUST adjust your server configuration so that the higher limits trickle down to WordPress.
If you set WordPress configuration to a max of 40MB, and Apache (or other web server software) still has a 2MB limit, you won't ever be able to post more than 2MB.
One option would be to use a Flash-based upload tool which can perform some pre-processing to downsize the image before uploading.
If you really want to get fancy, you could develop your own Flash-based upload tool to chunk uploads into 2MB parts (or whatever your server's maximum post size is) and some server-side script to re-assemble the pieces, just like the internet does with packets.

Send Information inside a .png image then extract it through javascript?

After searching around in Google for a while I have not had any luck or guidance in my question.
I want to load up a website using JavaScript/Ajax in order to reduce the number of requests the client makes to the server. My goal is to embed/encode data within an image so that the client only needs to request this image through an Ajax call; the image would then be decoded to recover the JS, CSS, and other files needed, which would be inserted into the DOM.
If I can get the above to work then I could have a lot of flexibility on how my webapp is loaded and be able to notify the user how close the webapp is to being ready for viewing.
Currently my problem is that I cannot find how I would encode the data within an image.
Even if this is not the way to be going about serving up a webapp my curiosity is getting the best of me and I would just really like to do this.
Any guidance or pointers would be greatly appreciated!
Also: I am learning Python, so if you know of a Python module that I could play with, that would be cool. Currently I'm playing with the pypng module to see if this can be done.
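Independent of any particular PNG library, the core embedding idea the question is after is least-significant-bit steganography over raw pixel bytes. A toy sketch (a real implementation would operate on the decoded pixel data of an actual image):

```javascript
// Hide one payload byte across eight pixel bytes by overwriting each
// pixel byte's least significant bit (changes each byte by at most 1).
function embed(pixels, payload) {
  const out = Uint8Array.from(pixels);
  payload.forEach((byte, i) => {
    for (let bit = 0; bit < 8; bit++) {
      const j = i * 8 + bit;
      out[j] = (out[j] & 0xfe) | ((byte >> bit) & 1);
    }
  });
  return out;
}

// Recover byteCount payload bytes from the pixel bytes' LSBs.
function extract(pixels, byteCount) {
  const payload = new Uint8Array(byteCount);
  for (let i = 0; i < byteCount; i++) {
    for (let bit = 0; bit < 8; bit++) {
      payload[i] |= (pixels[i * 8 + bit] & 1) << bit;
    }
  }
  return payload;
}
```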
To be frank: don't do that.
The brightest minds on earth use other methods to keep the number of requests and response times down. The most common technique for minimizing the number of requests is called bundling. In short, you just copy and paste all the JS files after each other into one big JS file, and all the CSS files into one big CSS file. This way you need to download only two files, one JS and one CSS. Doing better than that is usually not worth the trouble.
To further keep response times down you usually minify your js and css files. This is a process where all white space, comments, etc are removed and internal variable names are made as short as possible.
Finally you can serve both js and css files as gziped files to further reduce the file size to transfer.
There are many tools out there that do both bundling and minification for you. Google and pick one that suits your other tooling.

min.js to clear source

As far as I know, the min version of a .js (JavaScript) file is obtained by removing unnecessary blank spaces and comments, in order to reduce the file size.
My questions are:
1. How can I convert a min.js file into a clear, easily readable .js file?
2. Besides size (and speed), are there any other advantages of the min.js file?
3. Can .js files be encrypted?
4. Can JS be infected? I think the answer is yes, so the question is: how do I protect .js files from infection?
Only the first question is most important and I'm looking for help on it.
TY
To convert a minified file into an editable source, simply open any IDE that supports auto-formatting and auto-format it. I use NetBeans to do this.
If you do client-side caching of the minified file, the client needs to process fewer bytes. Size and speed are the main advantages of a minified file, and they are already great advantages in preparing for a future that requires a great load of data transfer. It also saves you some bandwidth on your server, and therefore money.
I don't see the need of encryption. See How to disable or encrypt "View Source" for my site
JavaScript files cannot be edited unless it is done on the server. The security of your JavaScript files depends on 1) your server protection and 2) your data protection. Data should not be exploitable. Of course, since JavaScript is executed on the client side, it would be meaningless for the client user to attack him/herself; however, Twitter has shown multiple JavaScript exploits. You need to constantly test and check your code against XSS, CSRF and other attacks. That is to say: if your JavaScript file has a loophole, it was the developer, you, who created it.
Multiple minifiers exist that are also able to compress JS; see http://dean.edwards.name/weblog/2007/04/packer3 for one of the most used. Others exist too; see also the JSMin library: http://www.crockford.com/javascript/jsmin.html
The main advantage is the size gain. You should also aggregate your JS files when you have multiple JS files, this also saves a lot of I/O (less HTTP requests) between the server and the client. This is probably more important than minifying.
I can't answer you about encryption. Client security will mainly depend on its browser.
EDIT: OK, my first answer was not for the first question; I merged both into 2.
