Hey! I recently built a small framework to base my small webpages and projects on.
Today I'm reading a lot of articles about tricks to speed things up, and I'm interested in improving XHR speed.
I've been reading that some file extensions are usually cached by default and others aren't.
I use a special filename.ff extension in my framework to know which files I want to fetch when accessing a resource.
As a live example:
https://bugs.stringmanolo.ga/#projects/fastframework is downloaded from https://github.com/StringManolo/bugWriteups/blob/master/projects/fastframework/fastframework.ff via XHR when you click the fastframework link on this page: https://bugs.stringmanolo.ga/#projects
My question is:
If I change the extension from fastframework.ff to fastframework.ff.js, will the file be cached by the browser and therefore download faster? Will it also work offline? Or is it already cached, so changing the framework code to use .ff.js won't make any difference at all?
I finally solved it in a better way using service workers and the Cache API.
I'm leaving the code I used here, in case it's helpful to someone in the future.
ff.js (ff is an object created as ff = {}):
/*** Cache Service Worker Code */
ff.cache = {};
ff.cache.resources = [];

// Registers the service worker and pre-caches ff.cache.resources.
// localStorage.cacheTTL stores the expiry timestamp; once it passes,
// the cache is deleted so it gets rebuilt on the next page load.
ff.cache.start = function(swName, ttl) {
  const storedTTL = +localStorage.cacheTTL;
  if (storedTTL) {
    // Cache already populated: drop it if the TTL has expired.
    if (new Date().getTime() > storedTTL) {
      localStorage.cacheTTL = 0;
      caches.delete("cachev1");
    }
  } else {
    // First load (or expired cache): register the worker and pre-cache everything.
    navigator.serviceWorker.register(swName, { scope: "./" })
      .then(function(reg) {
        caches.open("cachev1")
          .then(function(cache) {
            cache.addAll(ff.cache.resources)
              .then(function() {
                localStorage.cacheTTL = new Date().getTime() + +ttl;
              });
          });
      })
      .catch(function(err) {
        console.error("Service worker registration failed:", err);
      });
  }
};

// Deletes the whole cache manually.
ff.cache.clean = function() {
  caches.delete("cachev1");
};
/* End Cache Service Worker Code ***/
cache.js (this is the service worker that intercepts the requests):
self.addEventListener('fetch', (e) => {
  // Cache-first: serve the cached response if we have one, otherwise hit the network.
  e.respondWith(
    caches.match(e.request).then((response) => response || fetch(e.request))
  );
});
main.js (this is the main file included in the index.html file):
ff.cache.resources = [
"./logs/dev/historylogs.ff",
"./blogEntries/xss/xss1.ff",
"./blogEntries/xss/w3schoolsxss1.ff",
"./blogEntries/csrf/w3schoolscsrf1.ff",
"./projects/fastframework/fastframework.ff",
"./projects/jex/jex.ff",
"./ff.js",
"./main.js",
"./main.css",
"./index.html",
"./resources/w3schoolspayload.png",
"./resources/w3schoolsxsslanscape.png",
"./resources/w3schoolsxss.png"];
ff.cache.start("./cache.js", 604800000);
/* 604800000 milliseconds equals 1 week */
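If you ever need to force a refresh before the TTL expires (for example right after publishing new content), the helpers above are enough. A minimal sketch using them (the function name is just an example):

// Drop the cached resources and re-cache them on the next load.
// Uses ff.cache.clean() and the localStorage.cacheTTL flag defined in ff.js above.
function forceCacheRefresh() {
  ff.cache.clean();          // deletes the "cachev1" cache
  localStorage.cacheTTL = 0; // marks the TTL as expired
  location.reload();         // the next ff.cache.start() call re-registers and re-caches
}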
You can test it live at https://bugs.stringmanolo.ga/index.html. The site is hosted from the GitHub repo in case you need to see more code.
Related
I am trying to speed up an upload, so I tried different solutions on both the back end and the front end:
1) I uploaded a tar file (already compressed).
2) I tried chunked upload (sequential): if the response is a success, the next API call is triggered. On the back-end side, the content is appended to the same file.
3) I tried chunked upload in parallel: at a single time I make 50 requests to upload the chunk content (I know browsers only handle about 6 requests at a time). On the back-end side, we store all the chunk files separately and, after receiving the final request, append all those chunks into a single file.
But what I observed is that I am not seeing much difference between these cases.
Following is my service file
export class largeGeneUpload {
chromosomeFile: any;
options: any;
chunkSize = 1200000;
activeConnections = 0;
threadsQuantity = 50;
totalChunkCount = 0;
chunksPosition = 0;
failedChunks = [];
sendNext() {
if (this.activeConnections >= this.threadsQuantity) {
return;
}
if (this.chunksPosition === this.totalChunkCount) {
console.log('all chunks are done');
return;
}
const i = this.chunksPosition;
const url = 'gene/human';
const chunkIndex = i;
const start = chunkIndex * this.chunkSize;
const end = Math.min(start + this.chunkSize, this.chromosomeFile.size);
const currentchunkSize = this.chunkSize * i;
const chunkData = this.chromosomeFile.webkitSlice ? this.chromosomeFile.webkitSlice(start, end) : this.chromosomeFile.slice(start, end);
const fd = new FormData();
const binar = new File([chunkData], this.chromosomeFile.upload.filename);
console.log(binar);
fd.append('file', binar);
fd.append('dzuuid', this.chromosomeFile.upload.uuid);
fd.append('dzchunkindex', chunkIndex.toString());
fd.append('dztotalfilesize', this.chromosomeFile.upload.total);
fd.append('dzchunksize', this.chunkSize.toString());
fd.append('dztotalchunkcount', this.chromosomeFile.upload.totalChunkCount);
fd.append('isCancel', 'false');
fd.append('dzchunkbyteoffset', currentchunkSize.toString());
this.chunksPosition += 1;
this.activeConnections += 1;
this.apiDataService.uploadChunk(url, fd)
.then(() => {
this.activeConnections -= 1;
this.sendNext();
})
.catch((error) => {
this.activeConnections -= 1;
console.log('error here');
// chunksQueue.push(chunkId);
});
this.sendNext();
}
uploadChunk(resrc: string, item) {
return new Promise((resolve, reject) => {
this._http.post(this.baseApiUrl + resrc, item, {
headers: this.headers,
withCredentials: true
}).subscribe(r => {
console.log(r);
resolve();
}, err => {
console.log('err', err);
reject();
});
});
}
But the thing is, if I upload the same file to Google Drive it does not take much time.
For example, with a 700 MB file: uploading it to Google Drive took 3 minutes, but uploading the same 700 MB file with my Angular code to our back-end server took 7 minutes.
How do I improve the performance of the file upload?
Forgive me, it may seem like a silly answer, but this depends on your hosting infrastructure.
A lot of variables can cause this, but from your description it has nothing to do with your front-end code. Splitting the upload into chunks is not going to help, because browsers already have their own optimized algorithm for uploading files. The most likely culprit is your back-end server or the connection from your client to the server.
You say that Google Drive is fast, but you should also know that Google has a very widespread global infrastructure with top-of-the-line cloud servers. If you are using, for example, a 2-euro-per-month fixed-price hosting provider, you cannot expect the same processing and network power as Google.
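As a quick check, you could time a single plain upload and compare it with your chunked approach; if the single request is just as slow, the bottleneck is the connection or the server, not the front-end code. A minimal sketch (the /upload endpoint is a placeholder):

// Upload a File/Blob in one request and log the effective throughput.
// "/upload" is a placeholder endpoint; replace it with your real API route.
function measureUpload(file) {
  const xhr = new XMLHttpRequest();
  const start = Date.now();
  xhr.upload.onprogress = (e) => {
    const seconds = (Date.now() - start) / 1000;
    console.log(`${(e.loaded / 1024 / 1024).toFixed(1)} MB sent, ` +
                `${(e.loaded / 1024 / 1024 / seconds).toFixed(2)} MB/s`);
  };
  xhr.onload = () => console.log('Done in', (Date.now() - start) / 1000, 's');
  const fd = new FormData();
  fd.append('file', file);
  xhr.open('POST', '/upload');
  xhr.send(fd);
}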
I'm building a web app that uses EvaporateJS to upload large files to Amazon S3 using Multipart Uploads. I noticed an issue where every time a new chunk was started the browser would freeze for ~2 seconds. I want the user to be able to continue to use my app while the upload is in progress, and this freezing makes that a bad experience.
I used Chrome's Timeline to look into what was causing this and found that it was SparkMD5's hashing. So I've moved the entire upload process into a Worker, which I thought would fix the issue.
Well the issue is now fixed in Edge and Firefox, but Chrome still has the exact same problem.
Here's a screenshot of my Timeline:
As you can see, during the freezes my main thread is doing basically nothing, with <8ms of JavaScript running during that time. All the work is occurring in my Worker thread, and even that is only running for ~600ms or so, not the 1386ms that my frame takes.
I'm really not sure what's causing the issue. Are there any gotchas with Workers that I should be aware of?
Here's the code for my Worker:
var window = self; // For Worker-unaware scripts
// Shim to make Evaporate work in a Worker
var document = {
createElement: function() {
var href = undefined;
var elm = {
set href(url) {
var obj = new URL(url);
elm.protocol = obj.protocol;
elm.hostname = obj.hostname;
elm.pathname = obj.pathname;
elm.port = obj.port;
elm.search = obj.search;
elm.hash = obj.hash;
elm.host = obj.host;
href = url;
},
get href() {
return href;
},
protocol: undefined,
hostname: undefined,
pathname: undefined,
port: undefined,
search: undefined,
hash: undefined,
host: undefined
};
return elm;
}
};
importScripts("/lib/sha256/sha256.min.js");
importScripts("/lib/spark-md5/spark-md5.min.js");
importScripts("/lib/url-parse/url-parse.js");
importScripts("/lib/xmldom/xmldom.js");
importScripts("/lib/evaporate/evaporate.js");
DOMParser = self.xmldom.DOMParser;
var defaultConfig = {
computeContentMd5: true,
cryptoMd5Method: function (data) { return btoa(SparkMD5.ArrayBuffer.hash(data, true)); },
cryptoHexEncodedHash256: sha256,
awsSignatureVersion: "4",
awsRegion: undefined,
aws_url: "https://s3-ap-southeast-2.amazonaws.com",
aws_key: undefined,
customAuthMethod: function(signParams, signHeaders, stringToSign, timestamp, awsRequest) {
return new Promise(function(resolve, reject) {
var signingRequestId = currentSigningRequestId++;
postMessage(["signingRequest", signingRequestId, signParams.videoId, timestamp, awsRequest.signer.canonicalRequest()]);
queuedSigningRequests[signingRequestId] = function(signature) {
queuedSigningRequests[signingRequestId] = undefined;
if(signature) {
resolve(signature);
} else {
reject();
}
}
});
},
//logging: false,
bucket: undefined,
allowS3ExistenceOptimization: false,
maxConcurrentParts: 5
}
var currentSigningRequestId = 0;
var queuedSigningRequests = [];
var evaporateInstance; // holds the Evaporate instance so pause/resume/cancel can reach it
var filekey = undefined;
onmessage = function(e) {
var messageType = e.data[0];
switch(messageType) {
case "init":
var globalConfig = {};
for(var k in defaultConfig) {
globalConfig[k] = defaultConfig[k];
}
for(var k in e.data[1]) {
globalConfig[k] = e.data[1][k];
}
var uploadConfig = e.data[2];
Evaporate.create(globalConfig).then(function(evaporate) {
evaporateInstance = evaporate;
filekey = globalConfig.bucket + "/" + uploadConfig.name;
uploadConfig.progress = function(p, stats) {
postMessage(["progress", p, stats]);
};
uploadConfig.complete = function(xhr, awsObjectKey, stats) {
postMessage(["complete", xhr, awsObjectKey, stats]);
}
uploadConfig.info = function(msg) {
postMessage(["info", msg]);
}
uploadConfig.warn = function(msg) {
postMessage(["warn", msg]);
}
uploadConfig.error = function(msg) {
postMessage(["error", msg]);
}
evaporateInstance.add(uploadConfig);
});
break;
case "pause":
evaporateInstance.pause(filekey);
break;
case "resume":
evaporateInstance.resume(filekey);
break;
case "cancel":
evaporateInstance.cancel(filekey);
break;
case "signature":
var signingRequestId = e.data[1];
var signature = e.data[2];
queuedSigningRequests[signingRequestId](signature);
break;
}
}
Note that it relies on the calling thread to provide it with the AWS public key, bucket name, region, object key and the input File object, all of which are provided in the 'init' message. When it needs something signed, it sends a 'signingRequest' message to the parent thread, which is expected to provide the signature in a 'signature' message once it has been fetched from my API's signing endpoint.
I can't give a very good example or analyze what you are doing with only the Worker code, but I strongly suspect that the issue has to do either with reading the chunk on the main thread or with some unexpected processing you are doing on the chunk on the main thread. Maybe post the main-thread code that calls postMessage to the Worker?
If I were debugging it right now, I'd try moving your FileReader operations into the Worker. If you don't mind the Worker blocking while it loads a chunk, you could also use FileReaderSync.
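For illustration, a self-contained sketch of what that could look like inside a Worker (the chunk boundaries and message shape are assumptions for illustration, not EvaporateJS's actual API):

// Inside a Worker: read a slice of the File synchronously with FileReaderSync.
// FileReaderSync only exists in Workers, which is the point: the blocking read
// happens off the main thread.
onmessage = function(msg) {
  var file = msg.data.file;    // File object posted from the main thread
  var start = msg.data.start;  // chunk boundaries chosen by the caller
  var end = msg.data.end;
  var reader = new FileReaderSync();
  var buffer = reader.readAsArrayBuffer(file.slice(start, end));
  // ... hash/sign/upload the buffer here ...
  // Listing the buffer in the transfer list (second argument) moves it instead of copying it.
  postMessage(["chunkRead", buffer], [buffer]);
};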
Post-comments update
Does generating the presigned URL require hashing the file content + metadata + a key? Hashing file content is going to take O(n) in the size of the chunk, and it's possible, if the hash is the first operation that reads from the Blob, that the loading of the file content is deferred until the hashing starts. Unless you are compelled to keep the signing in the main thread (you don't trust the worker with key material?), that would be another good thing to bring into the worker.
If moving the signing into the Worker is too much, you could have the worker do something to force the Blob to be read and/or pass the ArrayBuffer (or Uint8Array, or what have you) of file content back to the main thread for signing; this would ensure that reading the chunk does not occur on the main thread.
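A sketch of the main-thread side of that hand-off, assuming a hypothetical chunk-reader Worker like the one sketched above ("chunk-reader.js" and computeSignature() are placeholders standing in for your existing code):

// Main thread: delegate the blocking read to the Worker, sign when the bytes come back.
var chunkReader = new Worker("chunk-reader.js");

function signChunk(file, start, end) {
  return new Promise(function(resolve) {
    chunkReader.onmessage = function(msg) {
      // msg.data[1] is the ArrayBuffer that was read inside the Worker,
      // so the main thread never blocks on file I/O.
      resolve(computeSignature(msg.data[1]));
    };
    chunkReader.postMessage({ file: file, start: start, end: end });
  });
}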
What cache strategies are you using? I read the Offline Cookbook, and the simplest strategy is to cache static content and leave out the API calls.
The strategy goes something like this:
Check if the request is already in the cache
If not, add the request/response pair to the cache
Return the response
How do I update the cache if files have changed on the server side? Currently the clients always get the cached results.
Here is my cache strategy's code:
// You will need this polyfill, at least on Chrome 41 and older.
importScripts("serviceworker-cache-polyfill.js");
var VERSION = 1;
var CACHES = {
common: "common-cache" + VERSION
};
// an array of file locations we want to cache
var filesToCache = [
"font-cache.html",
"script.js",
];
var neededFiles = [
"index.html"
];
var errorResponse = function() {
  // The Response init argument must be an object, not a bare status number.
  return new Response([
    "<h2>Failed to get file</h2>",
    "<p>Could not retrieve response from cache</p>"
  ].join("\n"), {
    status: 500,
    headers: { "Content-Type": "text/html" }
  });
};
var networkFetch = function(request) {
  return fetch(request).then(function(response) {
    // Cache a clone so the body can still be returned to the page.
    var copy = response.clone();
    caches.open(CACHES["common"]).then(function(cache) {
      cache.put(request, copy);
    });
    return response;
  }).catch(function() {
    console.error("Network fetch failed");
    return errorResponse();
  });
};
this.addEventListener("install", function(evt) {
evt.waitUntil(
caches.open(CACHES["common"]).then(function(cache) {
// Cache before
cache.addAll(filesToCache);
return cache.addAll(neededFiles);
})
);
});
this.addEventListener("activate", function(event) {
var expectedCacheNames = Object.keys(CACHES).map(function(key) {
return CACHES[key];
});
console.log("Activate the worker");
// Active worker won't be treated as activated until promise resolves successfully.
event.waitUntil(
caches.keys().then(function(cacheNames) {
return Promise.all(
cacheNames.map(function(cacheName) {
if (expectedCacheNames.indexOf(cacheName) === -1) {
console.log(
"Deleting out of date cache:",
cacheName);
return caches.delete(cacheName);
}
})
);
})
);
});
self.addEventListener("fetch", function(event) {
console.log("Handling fetch event for", event.request.url);
event.respondWith(
// Opens Cache objects
caches.open(CACHES["common"]).then(function(cache) {
return cache.match(event.request).then(function(
response) {
if (response) {
console.log("Found response in cache", response);
return response;
} else {
return networkFetch(event.request);
}
}).catch(function(error) {
// Handles exceptions that arise from match() or fetch().
console.error(
" Error in fetch handler:",
error);
return errorResponse();
});
})
);
});
You may want to look at Jeff Posnick's great solution, sw-precache.
The strategy used there is:
Gulp generates the Service Worker file with checksums (a minimal gulp sketch is included below)
The Service Worker is registered (with its own checksum)
If files were added or updated, the SW file changes
On the next visit, the SW sees that its checksum differs, so it registers itself once again with the updated files
You can automate this flow with your backend in any way you want :)
He describes it much better in this article.
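For reference, a minimal gulp task sketch along those lines (the file globs are placeholders; see the sw-precache README for the full set of options):

// gulpfile.js
var gulp = require('gulp');
var swPrecache = require('sw-precache');

gulp.task('generate-service-worker', function(callback) {
  swPrecache.write('service-worker.js', {
    // Every file matched here gets a checksum baked into service-worker.js,
    // so any change to these files produces a byte-different worker.
    staticFileGlobs: ['index.html', 'script.js', 'font-cache.html'],
    stripPrefix: ''
  }, callback);
});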
This is the code I use to cache. It fetches the resource, caches it, and serves it, falling back to the cache when the network fails.
this.addEventListener("fetch", function(event) {
event.respondWith(
fetch(event.request).then(function(response) {
return caches.open("1").then(function(cache) {
return cache.put(event.request, response.clone()).then(function() {
return response
})
})
}).catch(function() {
return caches.match(event.request)
})
)
})
You have to change your Service Worker file. According to Introduction to Service Worker:
When the user navigates to your site, the browser tries to redownload the script file that defined the service worker in the background. If there is even a byte's difference in the service worker file compared to what it currently has, it considers it 'new'.
So even if you only need to change static resources, you'll have to update your service worker file so that a new service worker is registered that updates the cache. (You'll also want to make sure to delete any previous caches in your activate handler.) @Karol Klepacki's answer suggests a way to automate this.
Alternatively, you could implement logic in your service worker itself to periodically check cached resources for changes and update the entries appropriately.
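A minimal sketch of the first approach: bump a version constant whenever the static files change, which both makes the worker file byte-different and tells the activate handler which old caches to drop (the cache name follows the naming in the question; the rest is illustrative):

// service-worker.js
// Bumping CACHE_VERSION changes this file byte-for-byte, so the browser
// installs a new service worker, and the activate handler drops old caches.
var CACHE_VERSION = 2; // bump on every static-content change
var CACHE_NAME = "common-cache" + CACHE_VERSION;

self.addEventListener("activate", function(event) {
  event.waitUntil(
    caches.keys().then(function(names) {
      return Promise.all(
        names.filter(function(name) { return name !== CACHE_NAME; })
             .map(function(name) { return caches.delete(name); })
      );
    })
  );
});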
The documentation of the sitemap node.js module does not explain what cacheTime is. Why is it needed to generate a sitemap? What is its purpose?
The cacheTime is how long the sitemap.js module will wait before regenerating the sitemap.xml file from the list of urls given to it.
I.e., on the first request, a sitemap.xml file is generated and placed in the cache. Subsequent requests read the sitemap from the cache until it expires and is regenerated.
I agree it could be clearer, but the source code makes it pretty clear.
According to the source code at sitemap.js, line 136:
// sitemap cache
this.cacheEnable = false;
this.cache = '';
if (cacheTime > 0) {
this.cacheEnable = true;
this.cacheCleanerId = setInterval(function (self) {
self.clearCache();
}, cacheTime, this);
}
and line 187:
Sitemap.prototype.toString = function () {
var self = this
, xml = [ '<?xml version="1.0" encoding="UTF-8"?>',
'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'];
if (this.cacheEnable && this.cache) {
return this.cache;
}
// TODO: if size > limit: create sitemapindex
this.urls.forEach( function (elem, index) {
// SitemapItem
var smi = elem;
Specifically:
if (this.cacheEnable && this.cache) {
return this.cache;
}
And the clear cache operation has a setInterval on it equal to the cacheTime parameter given.
Note the implication that your sitemap could become out of date if your urls change and your cacheTime has not triggered a clearing of the sitemap cache.
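For reference, a rough usage sketch (assuming the createSitemap API of the module version quoted above; the hostname and URLs are placeholders):

var sm = require('sitemap');

// cacheTime: how long (in ms) toString() keeps returning the cached XML
// before the url list is serialized again.
var sitemap = sm.createSitemap({
  hostname: 'https://example.com',
  cacheTime: 600000, // 10 minutes
  urls: [
    { url: '/page-1/', changefreq: 'daily', priority: 0.8 },
    { url: '/page-2/' }
  ]
});

console.log(sitemap.toString()); // generated now and served from cache for the next 10 minutes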
I have a problem in my PhoneGap app. I would like to write a 15 MB file. If I try, the OS allocates more and more memory and the app crashes without a message. I can reproduce this on Android and BlackBerry tablets. Is there a way to implement the writing more efficiently?
Best regards
fe.createWriter(
(fw: any) => {
fw.onwriteend = (e) => {
fw.onwriteend = (e) => {
callback();
}
fw.write(data);
}
// write BOM (dead for now)
fw.write("");
},
(error: any) => {
alert("FileWriter Failed: " + error.code);
});
It's TypeScript, I hope JS developers won't struggle with this ;)
I found the answer.
Crash reason:
PhoneGap's FileWriter.write cannot handle a buffer that is too big (I don't know the exact size). I think the issue is due to PhoneGap transferring the data to iOS through a URL scheme; somehow it crashes when the "URL" is too long.
How to fix it: write a small block at a time, code below:
function gotFileWriter(writer) {
function writeFinish() {
// ... your done code here...
}
var written = 0;
var BLOCK_SIZE = 1*1024*1024; // write 1M every time of write
function writeNext(cbFinish) {
var sz = Math.min(BLOCK_SIZE, data.byteLength - written);
var sub = data.slice(written, written+sz);
writer.write(sub);
written += sz;
writer.onwrite = function(evt) {
if (written < data.byteLength)
writeNext(cbFinish);
else
cbFinish();
};
}
writeNext(writeFinish);
}
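For context, a minimal sketch of how this could be wired up with the Cordova File API ("myfile.bin" is a placeholder name, and data is assumed to be an ArrayBuffer you already have in scope):

// Obtain a FileEntry and hand its writer to gotFileWriter() above.
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, function(fs) {
  fs.root.getFile("myfile.bin", { create: true }, function(fileEntry) {
    fileEntry.createWriter(gotFileWriter, function(err) {
      alert("createWriter failed: " + err.code);
    });
  });
});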
UPDATE Aug 12, 2014:
In my experience, the performance of saving files through the Cordova FileSystem is not good, especially for large files (>5 MB) on a phone; it takes a few seconds. If you are downloading a file from a server to local disk, you may want a more "efficient and direct" way: try the cordova-plugin-file-transfer plugin.
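A minimal sketch of that plugin's download API, in case it helps (the URL and target path are placeholders):

// Download straight to disk with cordova-plugin-file-transfer,
// bypassing the JavaScript-side FileWriter entirely.
var fileTransfer = new FileTransfer();
var uri = encodeURI("https://example.com/big-file.zip");       // placeholder URL
var target = cordova.file.dataDirectory + "big-file.zip";      // placeholder path (needs cordova-plugin-file)

fileTransfer.download(uri, target,
  function(entry) { console.log("saved to " + entry.toURL()); },
  function(error) { console.error("download failed", error); }
);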
@Imskull's answer is the correct one. I just want to add the version for a Blob (make sure it is a Blob and not an ArrayBuffer), updated from the one above. What I also added is a line to make sure I am appending to the end of the file. It is more than enough to stop your app from crashing (on iOS mainly :P).
function gotFileWriter(writer) {
function writeFinish() {
// ... your done code here...
}
var written = 0;
var BLOCK_SIZE = 1*1024*1024; // write 1M every time of write
function writeNext(cbFinish) {
writer.onwrite = function(evt) {
if (written < data.size)
writeNext(cbFinish);
else
cbFinish();
};
if (written) writer.seek(writer.length);
writer.write(data.slice(written, written + Math.min(BLOCK_SIZE, data.size - written)));
written += Math.min(BLOCK_SIZE, data.size - written);
}
writeNext(writeFinish);
}