My Blazor Server app uses a canvas element to manipulate an image that is loaded into an img element via a file drop zone; the img serves as a reference for the canvas and holds the information that is worked on via JS. I'm using Excubo's Canvas wrapper to work in C# as much as possible, but I don't think this error is related to it, as it also happens with pure JS. Since this is more of a logic problem, I don't think sharing code would be much help, but here is a crude visual representation of what I mean; perhaps it will be more useful in conveying the problem.
The end goal is to invoke a JS function that loads the image src into a secondary work canvas and, using the data present in the main canvas, manipulates pixel information accordingly for both. After processing the data, I convert it back with toDataURL and set the img.src attribute.
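Roughly, the JS function has this shape (element ids and the per-pixel logic are placeholders, not my real code):
// Rough sketch only; ids and the per-pixel work are placeholders
function processImage(imgId, mainCanvasId) {
    const img = document.getElementById(imgId);
    const main = document.getElementById(mainCanvasId);
    const mainPixels = main.getContext('2d').getImageData(0, 0, main.width, main.height);

    // Secondary work canvas mirroring the reference image
    const work = document.createElement('canvas');
    work.width = img.naturalWidth;
    work.height = img.naturalHeight;
    const ctx = work.getContext('2d');
    ctx.drawImage(img, 0, 0);

    // Manipulate pixel information using the data from the main canvas
    const pixels = ctx.getImageData(0, 0, work.width, work.height);
    // ... per-pixel work combining pixels with mainPixels here ...
    ctx.putImageData(pixels, 0, 0);

    // Returning this base64 string to C# is what triggers the disconnect
    const dataUrl = work.toDataURL('image/png');
    img.src = dataUrl;
    return dataUrl;
}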
The problem happens when returning the base64 data from that function, particularly with large images. The server disconnects immediately, and I'm not sure whether there is a way to extend the allowed wait time or this is just an implementation problem on my part.
It's also worth mentioning that I extended the maximum file size for the drop zone's InputFile component to 30MB, but this happens with files of 3MB (and maybe even lower, but I haven't been able to pin down the threshold yet).
Perhaps there's another way to achieve what I'm aiming for, but I'm not sure how I would reference the img src attribute in C# without JS. Client/server-side logic isn't a concern because this app is supposed to run locally on the server machine only, so they are essentially the same target.
Found a solution that works for my case. Here is the reference for the fix.
The default MaximumReceiveMessageSize is too low for my needs; I increased it to 30MB and it now works without disconnecting from the server. I also tried increasing ApplicationMaxBufferSize and TransportMaxBufferSize in the MapBlazorHub pipeline configuration, but that didn't help. Increasing the JS interop timeout in the AddServerSideBlazor service options was also unfruitful.
Here is the relevant Program.cs code I'm using now:
var maxBufferSize = 30 * 1024 * 1024;
builder.Services.AddServerSideBlazor().AddHubOptions(opt => { opt.MaximumReceiveMessageSize = maxBufferSize; });
I have an offline Electron + React desktop app which uses no server to store data. Here is the situation:
A user can create some text data and add any number of images to it. When they save the data, a ZIP with the following structure is created:
myFile.zip
├ main.json // Text data
└ images
├ 00.jpg
├ 01.jpg
├ 02.jpg
└ ...
If they want to edit the data, they can open the saved ZIP with the app.
I use ADM-ZIP to read the ZIP content. Once it is open, I send the JSON data to Redux, since I need it for the interface display. The images, however, are not displayed by the component that reads the archive and can appear on multiple "pages" of the app (so in different components). I display them with an Image component that takes a name prop and returns an img tag. My problem is how to get their src from the archive.
So here is my question: what could be the most efficient way to get the data of the images contained in the ZIP given the above situation?
What I am currently doing:
When the user opens their file, I extract the ZIP into a temp folder and get the image paths from there. The temp folder is only deleted when the user closes the file, not when they close the app (I don't want to change this behaviour).
// When opening the archive
const archive = new AdmZip('myFile.zip');
archive.extractAllTo(tempFolderPath, true);
// Image component
const imageSrc = `${tempFolderPath}/images/${this.props.name}`;
Problem: if the ZIP contains a lot of images, the user must have enough disk space for them to be extracted properly. Since the temp folder is not necessarily deleted when the app is closed, that disk space stays used until they close the file.
What I tried:
Storing the ZIP data in memory
I tried saving the result of the opened ZIP in a prop and then reading my image data from it when I need it:
// When opening the archive
const archive = new AdmZip('myFile.zip');
this.props.saveArchive(archive); // Save in redux
// Image component
const imageData = this.props.archive.readFile(this.props.name);
const imageSrc = URL.createObjectURL(new Blob([imageData], {type: "image/jpg"}));
With this process, the images loading time is acceptable.
Problem: with a large archive, keeping the archive data in memory might be bad for performance (I think?), and I guess there is a limit on how much I can store there.
Opening the ZIP again with ADM-ZIP every time I have to display an image
// Image component
const archive = new AdmZip('myFile.zip');
const imageData = archive.readFile(this.props.name);
const imageSrc = URL.createObjectURL(new Blob([imageData], {type: "image/jpg"}));
Problem: bad performance; extremely slow when there are multiple images on the same page.
Storing the images Buffer/Blob in IndexedDB
Problem: it's still stored on the disk and the size is much bigger than the extracted files.
Using a ZIP that doesn't compress the data
Problem: I didn't see any difference compared to a compressed ZIP.
What I considered:
Instead of a ZIP, I tried to find a file type that could act as a non-compressed archive and be read like a directory, but I couldn't find anything like that. I also considered creating a custom file format with vanilla Node.js, but I'm afraid I don't know the File System API well enough to do what I want.
I'm out of ideas, so any suggestions are welcome. Maybe I missed the most obvious way to do it... Or maybe what I'm currently doing is not that bad and I'm overthinking the problem.
What you mean by "most efficient" is not exactly clear, so I'll make some assumptions and answer according to them.
Cache Solution
Basically what you have already done. Loading it all at once (extracting to a temporary folder) is pretty efficient because whenever you need to reuse something, the most expensive task doesn't have to be done again. It's common practice for heavy applications to load assets/modules/etc. at startup.
Note: since you consider the lack of disk space a problem, if you want to stick with this approach it's desirable to handle that case programmatically and alert the user, because running very low on free storage is critical.
TL;DR - Expensive at first, but faster later thanks to asset reuse.
Lazy Loading
Another very common concept, which consists of "load as you need, only what you need". This is efficient because it ensures your application loads only the minimum needed to run and then loads things on demand as the user requires them.
Note: it is pretty much what you did in your second attempt.
TL;DR - Faster at startup, slower during runtime.
"Smart" Loading
This is not a real name, but it describes what I mean well enough. Basically, focus on understanding the goal of your project and mix both of the previous solutions according to your context, so you can achieve the best overall performance in your application while reducing the trade-offs of each approach.
Ex:
Lazy loading images per view/page while keeping an in-memory cache with a limited size (sketched below)
Loading images in the background while the user navigates
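A rough sketch of such a size-limited in-memory cache of image object URLs, reusing the ADM-ZIP archive from your second attempt (the maximum size and the eviction policy are arbitrary choices for illustration):
// Size-limited in-memory cache of object URLs (illustrative sketch)
const MAX_ENTRIES = 50;
const cache = new Map(); // insertion-ordered, so the oldest entry comes first

function getImageSrc(archive, name) {
  if (cache.has(name)) return cache.get(name);

  const data = archive.readFile(`images/${name}`); // read a single entry from the ZIP
  const src = URL.createObjectURL(new Blob([data], { type: 'image/jpeg' }));

  if (cache.size >= MAX_ENTRIES) {
    const oldest = cache.keys().next().value;
    URL.revokeObjectURL(cache.get(oldest)); // release the blob's memory
    cache.delete(oldest);
  }
  cache.set(name, src);
  return src;
}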
Now, regardless of your final decision, the following considerations should not be ignored:
Memory will always be faster to write/read than disk
Partial unzipping (extracting only specific files) is possible in many packages (including ADM-ZIP) and is always faster than unzipping everything, especially if the ZIP file is huge (see the sketch after this list).
Using IndexedDB or a file-based database like SQLite gives good overall results for a huge number of files and suits a "smart" approach, since querying and organizing the data is easier through them.
Always keep in mind the reason for every application design choice; the better you understand why you are doing something, the better your final result will be.
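To illustrate the partial-unzipping point, a small sketch with ADM-ZIP (entry names follow your archive layout, and tempFolderPath is the same variable as in your first approach):
// Pull out a single entry instead of extracting the whole archive
const AdmZip = require('adm-zip');
const archive = new AdmZip('myFile.zip');

// Either read one entry straight into memory...
const buf = archive.readFile('images/00.jpg');

// ...or extract just that entry to disk
// (entry name, target directory, keep entry path, overwrite)
archive.extractEntryTo('images/00.jpg', tempFolderPath, false, true);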
With point 4 above in mind: in my opinion, in this case you really did overthink it a little, but that's not a bad thing, and it's admirable to have this kind of concern about doing things the best possible way. I do this often even when it's not necessary, and it's good for self-improvement.
Well, I wrote a lot :P
I would be very pleased if any of this helps you somehow.
Good luck!
TL;DR - There is no single closed answer to your question; it depends a lot on the context of your application. Some of your attempts are already pretty good, but putting some effort into understanding the usage context so you can handle image loading "smartly" will surely pay off.
The issue with using ZIP
Technically, streaming the content would be the most efficient way, but streaming compressed content from inside a ZIP is hard and inefficient by design, since the data is fragmented and the central directory that indexes it has to be parsed before you can reach an individual entry.
A much better format for your specific use case is 7-Zip: it allows you to stream out a single file and does not require you to read the whole archive to get the information about that single file.
The cost of using 7-Zip
You will have to ship a 7-Zip binary with your program; luckily it's very small, ~2.5MB.
How to use 7-Zip
const { spawn } = require('child_process')
// Extract all images from the archive, or adjust the glob for just the ones you need
// 'x' means extract while keeping the full paths
// (optionally, instead of 'x') 'e' means extract flat
// Any additional parameter after the archive path acts as a filter
const extractProcess = spawn('path/to/7zipExecutable', ['x', 'path/to/7zArchive', '*.png'])
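spawn returns a ChildProcess rather than the images themselves, so you have to wait for it to finish before touching the extracted files. A minimal sketch (note that 7-Zip writes into the current working directory unless you pass an -o<outputDir> switch):
extractProcess.on('close', (code) => {
  if (code !== 0) {
    console.error('7-Zip exited with code', code)
    return
  }
  // The filtered images are now on disk and can be read or referenced from here
})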
Additional considerations
Either you use the Buffer data and display it inside the HTML via Base64, or you extract into a cache folder that gets cleared as part of the extraction process; it depends on how granular you want to be.
Keep in mind that you should use an appropriate folder for that: under Windows this would be %localappdata%/yourappname/cache, while under Linux and macOS I would place it in ~/.yourappname/cache.
Conclusion
Streaming data is the most efficient (lowest memory/data footprint)
By using a native executable you can compress/decompress much faster than via pure JS
The compression/decompression happens in another process, which keeps your application from being blocked in execution and rendering
ASAR as an Alternative
Since you mentioned you considered using another format that is not compressed, you should give the asar file format a try. It is a built-in format in Electron that can be accessed via regular file paths through fs, with no extraction step.
For example, if you have a ~/.myapp/save0001.asar which contains an img folder with multiple images in it, you would access them simply via ~/.myapp/save0001.asar/img/x.png
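A minimal sketch of reading an image out of such an archive with Electron's patched fs (paths follow the example above and are otherwise hypothetical):
const fs = require('fs')
const os = require('os')
const path = require('path')

// Electron patches fs so the .asar file can be addressed like a directory
const imgPath = path.join(os.homedir(), '.myapp', 'save0001.asar', 'img', 'x.png')
const imageData = fs.readFileSync(imgPath)
// In a renderer you could then turn this Buffer into an object URL, as in your second attempt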
When I try to load a very large file using the appropriate loaders provided with the library, the tab my website runs in crashes. I have tried using the Worker class, but it doesn't seem to work. Here's what happens:
In the main javascript file I have:
var worker = new Worker('loader.js');
When the user selects one of the available models, I check the extension and pass the file URL/path to the worker (in this instance a PCD file):
worker.postMessage({fileType: "pcd", file: file});
Now the loader.js has the appropriate includes that are necessary to make it work:
importScripts('js/libs/three.js/three.js');
importScripts('js/libs/three.js/PCDLoader.js');
and in its onmessage handler it uses the appropriate loader depending on the file extension:
var loader = new THREE.PCDLoader();
loader.load(file, function (mesh) {
    postMessage({points: mesh.geometry.attributes.position.array, colors: mesh.geometry.attributes.color.array});
});
The data is passed back successfully to the main script, which adds it to the scene. At least for small files; large ones, like I said, take too long and the browser decides there was an error. Now, I thought the Worker class was supposed to work asynchronously, so what's the deal here?
Currently, three.js's loaders rely on strings and arrays of strings to parse data from a file. They don't split files into pieces, which leads to excessive memory usage that browsers immediately interrupt. Loading a 64 MB file spikes to over 1 GB of memory used during load (which then results in an error).
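One thing that helps on the worker side, although it does not fix the loader's own peak memory usage, is posting the typed arrays back as transferables so their buffers are moved instead of copied. A sketch of the worker callback, assuming the same attributes as above:
loader.load(file, function (mesh) {
    var positions = mesh.geometry.attributes.position.array;
    var colors = mesh.geometry.attributes.color.array;
    // Listing the underlying ArrayBuffers as transferables moves them to the
    // main thread instead of structured-cloning (copying) them
    postMessage({points: positions, colors: colors}, [positions.buffer, colors.buffer]);
});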
I use pywebkit and HTML5 to develop a desktop map server.
Map tiles are stored in an SQLite database.
So when I set an HTML img's src, I have two options.
One is to read the image from the database and base64-encode it, then set the img's src to "data:image/png;base64,<base64-encoded string>".
The other is to read the image from the database and save it to disk, then set the img's src to a URL pointing at the local folder.
My question is: which way is better?
I am mostly concerned about rendering speed. Which one is faster for the browser to render images?
<img src="http://mysite.com/images/myimage.jpg" /> actually uses the http URI scheme,
while <img src="data:image/png;base64,efT....." /> is a data URI. This way the image is inlined in the HTML and there is no extra HTTP request; however, embedded images can't be cached between different page loads in most cases. So the choice really comes down to which way is better and more convenient for you, without overlooking what Ian suggested :)
Now, moving on to browser compatibility:
Data URIs don't work in IE 5-7, but are supported in IE 8. Source
Page Size:
base64 encoding increases the page size as well, which is another thing to consider.
It largely depends on how the application is going to run and on other details, such as the running environment.
Saving the image to disk has the advantage that you avoid re-encoding the image every time you need it (which can also be avoided by adding a column to your DB with the precomputed base64 string).
Summary:
1. Use the first option, but cache the encoded string in the database or in a server variable.
2. Use the second option and cache the file names.
But then there is the question of how fast your secondary storage is. You might want to avoid using the hard disk, even with caching, because it's slow, for example, or because you have limited binary storage (and more DB storage).
I'd suggest you add specifics regarding what you are worried about.
My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want users to download the other 90% every week. You would split it into at least 2 JS files: jQuery + plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and the same goes for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit of splitting the script into separate files is that you don't serve users content, and the bandwidth that goes with it, that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less information each web page contains, the quicker it will be downloaded by the end user.
If you only include the JS files required for each page, your web site is likely to be more efficient and streamlined.
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and improve the response time (across many visits).
See the Yahoo best practices for other tips.
I would pretty much concur with what bigmattyh said: it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages where not much else is going on, so that when a user lands on an otherwise quite heavy page that needs them, they should already be cached. Use this with caution, but it can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment the page is ready, or can you load some of them after the page has loaded by inserting <script> elements into the page (see the sketch below)?
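For that last point, a minimal sketch of loading a script after the page is ready (the file name is hypothetical):
// Load a non-critical plugin after the page has finished loading
var s = document.createElement('script');
s.src = '/js/jquery.autocomplete.min.js'; // hypothetical path
s.onload = function () {
    // safe to initialize the plugin here
};
document.body.appendChild(s);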
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the styles in it too. That is, I convert all my vendor CSS into minified CSS, then convert that to JavaScript and include it in the vendor.js file. That's after it's been Sass-transformed, too.
Because my vendor stuff does not update often, updates in production are pretty rare. When it does update, I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle all of this. The main plugins that make this possible are listed below (a stripped-down task sketch follows the list):
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
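For illustration only, a stripped-down task using a couple of these plugins (the input paths are hypothetical, not the actual build):
// gulpfile.js (sketch)
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('vendor', function () {
    return gulp.src([
            'node_modules/jquery/dist/jquery.js', // hypothetical inputs
            'node_modules/knockout/build/output/knockout-latest.js'
        ])
        .pipe(concat('vendor.js'))
        .pipe(uglify())
        .pipe(gulp.dest('dst'));
});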
Now, this also includes my images, because I use Sass and I have a Sass function that compiles images into data URLs in my CSS sheet.
// Builds the custom functions object passed to node-sass
function sassFunctions(options) {
  options = options || {};
  options.base = options.base || process.cwd();
  var fs = require('fs');
  var path = require('path');
  var types = require('node-sass').types;
  var funcs = {};

  // inline-image('foo.png') -> url(data:image/png;base64,...)
  funcs['inline-image($file)'] = function (file, done) {
    var filePath = path.resolve(options.base, file.getValue());
    var ext = filePath.split('.').pop();
    fs.readFile(filePath, function (err, data) {
      if (err) return done(err);
      // data is already a Buffer, so encode it straight to base64
      var dataUrl = 'url(data:image/' + ext + ';base64,' + data.toString('base64') + ')';
      done(types.String(dataUrl));
    });
  };
  return funcs;
}
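For context, this functions object is passed to node-sass when compiling, roughly like this (file names are placeholders):
// Hooking the custom function into a node-sass render call (sketch)
var sass = require('node-sass');

sass.render({
    file: 'src/app.scss', // hypothetical entry point
    functions: sassFunctions({ base: 'src/images' })
}, function (err, result) {
    if (err) throw err;
    // result.css now contains the inlined data URLs
});

// ...and used from Sass like:
//   .logo { background-image: inline-image('logo.png'); }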
So my app.css has all of my application's images in the CSS, and I can add an image to any chunk of styles I want. Typically I create unique classes for the images, and I just give that class to anything I want to carry that image. I avoid using img tags completely.
Additionally, using the html-to-js plugin, I compile all of my HTML into the JS file as a template object keyed by the path to the HTML files, e.g. 'html/templates/header.html', and then using something like Knockout I can data-bind that HTML to an element, or to multiple elements.
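Conceptually, the compiled template module ends up shaped something like the following (illustrative only; the exact output depends on the plugin's options):
// Rough shape of the compiled templates object
var templates = {
    'html/templates/header.html': '<header>Site header</header>',
    'html/templates/footer.html': '<footer>Site footer</footer>'
};

// A component can then pull its markup from that object
ko.components.register('xyz-header', {
    template: templates['html/templates/header.html'],
    viewModel: function (params) { /* bindings for the header */ }
});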
The end result is I can end up with an entire web application that spins up off one "index.html" that doesn't have anything in it but this:
<html>
<head>
    <script src="dst/vendor.js"></script>
    <link rel="stylesheet" href="dst/app.css" />
    <script src="dst/app.js"></script>
</head>
<body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
        ko.applyBindings(document.getElementById("body"));
    </script>
</body>
</html>
This kicks off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, .NET Core MVC, MVC in general, or any of that stuff. It's just basic HTML managed with a build system like Gulp, and everything it needs data-wise comes from REST APIs.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (Python APIs built to index content coming back from the REST APIs)
So I never have any HTML coming back from a server (just static files, which are crazy fast), and there are only 3 files to cache besides index.html itself. Web servers support default documents (index.html), so you'll just see "blah.com" in the URL, plus any query strings or hash fragments used to maintain state (routing, bookmarkable URLs, etc.).
Crazy quick; it all comes down to the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things: you have Google crawl your APIs, not your physical website, and you tell Google how to get to your website from each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API to walk through all of your products and index them. In each result, your API returns a URL that tells Google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC Thing", they see the listing, and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this; it's the future, imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, PHP, Java, ASPX Web Forms, or whatever; you get rid of those entire stacks. All you need is a way to write REST APIs (WebApi2, Java Spring, etc.).
This separates web development into UI engineering, back-end engineering, and design, and creates a clean separation between them. You can have a UX team building the entire application and an architecture team doing all the REST API work; no need for full-stack devs this way.
Security isn't a concern either, because you can pass credentials on AJAX requests, and if your stuff is all on the same domain you can just set your authentication cookie on the root domain and presto: automatic, seamless SSO with all your REST APIs.
Not to mention how much simpler server farm setup is. Load-balancing needs are a lot lower and traffic capacity is a lot higher. It's way easier to cluster REST API servers behind a load balancer than entire websites.
Just set up one nginx reverse proxy server to serve your index.html and direct API requests to one of 4 REST API servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your SQL boxes (replicated) just get load-balanced from the 4 REST API servers (all using SSDs if possible)
Sql Box 1
Sql Box 2
All of your servers can be on an internal network with no public IPs; just make the reverse proxy server public, with all requests coming in to it.
You can load balance the reverse proxy servers with round-robin DNS.
This also means you only need one SSL cert, since it's a single public domain.
If you're using Google Compute Engine for search and seo, that's out in the cloud so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
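A quick sketch of such a concatenation script in Node (file names and order are placeholders):
// concat.js - naive concatenation before minification (sketch)
const fs = require('fs');

const files = ['js/jquery.js', 'js/jquery.validate.js', 'js/app.js']; // hypothetical, order matters
const bundled = files.map(f => fs.readFileSync(f, 'utf8')).join('\n;\n');
fs.writeFileSync('dist/all.js', bundled);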
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
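For example, something along these lines (the selector and options are hypothetical):
// Only pay for the autocomplete setup on pages that actually have the field
if ($('#search-box').length) {
    $('#search-box').autocomplete({ source: '/api/suggestions' });
}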
I tried breaking my JS into multiple files and ran into a problem. I had a login form, the code for which (AJAX submission, etc.) I put in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since these elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment sections of code that fall into the same functional group, e.g. keep the login code separate and clearly commented as the login code.