I'm using Chrome v67 and FileSaver.js.
This code works in FF & Edge but not in Chrome.
var ajaxSettings = {
    url: "my/api/method",
    data: JSON.stringify(myItems),
    success: function (data) {
        if (data) {
            var dateToAppend = moment().format("YYYYMMDD-HHmmss");
            createPdf("pdfDoc_" + dateToAppend + ".pdf", data[0]);
            saveFile("txtDoc_" + dateToAppend + ".txt", data[1]);
            //sleep(2000).then(() => {
            //    saveFile("txtDoc_" + dateToAppend + ".txt", data[1]);
            //});
        }
    }
};
//function sleep(time) {
//    return new Promise((resolve) => setTimeout(resolve, time));
//}
function createPdf(fileName, pdfBytes) {
    if (navigator.appVersion.indexOf("MSIE 9") > -1) {
        window.open("data:application/pdf;base64," + encodeURIComponent(pdfBytes));
    }
    else {
        saveFile(fileName, pdfBytes);
    }
}
function saveFile(fileName, data) {
    var decodedByte64 = atob(data);
    var byteVals = new Array(decodedByte64.length);
    for (var i = 0; i < decodedByte64.length; i++) {
        byteVals[i] = decodedByte64.charCodeAt(i);
    }
    var byte8bitArray = new Uint8Array(byteVals);
    var blob = new Blob([byte8bitArray]);
    saveAs(blob, fileName); // FileSaver.js
}
The result of calling the API is an array with 2 byte arrays in it.
The byte arrays are the documents.
If I run this code as a user would, the first document gets "downloaded" but the second does not. Attempting it a second time without refreshing the page results in no documents being "downloaded" at all. "Download" is in quotes because the data has already been downloaded; what I'm really trying to do is generate the documents from the byte arrays.
This is the strange bit: if I open the console, place a breakpoint on the "saveFile" call, and hit continue as soon as the debugger lands on it, then all is well with the world and both documents get downloaded.
I initially thought it was a timing issue, so I added a 2-second delay to see if that was it, but it wasn't. The only thing I've managed to get working is the breakpoint, which I'm obviously not going to be able to convince the users to start using, no matter how much I want them to.
Any help or pointers are much appreciated.
You probably saw a message at the top left corner of your browser stating:
https://yoursite.com wants to
Download multiple files
[Block] [Allow]
If you don't have it anymore, it's probably because you clicked on "Block".
You can manage this restriction in chrome://settings/content/automaticDownloads.
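Separately from the permission prompt, the base64-to-bytes step inside saveFile can be factored into a small standalone helper for reuse; a sketch (base64ToBytes is a hypothetical name, and atob is global in browsers and in Node 16+). In the browser you would then wrap the result in a Blob and hand it to FileSaver's saveAs():

```javascript
// Decode a base64 payload into raw bytes, as saveFile does above.
function base64ToBytes(b64) {
    var decoded = atob(b64);
    var bytes = new Uint8Array(decoded.length);
    for (var i = 0; i < decoded.length; i++) {
        bytes[i] = decoded.charCodeAt(i);
    }
    return bytes;
}
```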
On iOS I'm trying to upload videos to my own website, and to do so I use the FileReader.readAsArrayBuffer method with a chunkSize of 1048576.
I upload chunks when the onprogress event fires, and all of this works perfectly except for bigger files. When trying to upload a file of 1.33 GB, I get an out-of-memory exception when calling the readAsArrayBuffer method.
I guess this is because it's trying to reserve memory for the complete file, which isn't necessary. Is there a way to read a binary chunk from a file without reserving memory for the complete file? Or are there other solutions?
Thanks!
I fixed it myself today by changing the plugin code, this is the original code:
FileReader.prototype.readAsArrayBuffer = function (file) {
    if (initRead(this, file)) {
        return this._realReader.readAsArrayBuffer(file);
    }
    var totalSize = file.end - file.start;
    readSuccessCallback.bind(this)('readAsArrayBuffer', null, file.start, totalSize, function (r) {
        var resultArray = (this._progress === 0 ? new Uint8Array(totalSize) : new Uint8Array(this._result));
        resultArray.set(new Uint8Array(r), this._progress);
        this._result = resultArray.buffer;
    }.bind(this));
};
Since progress is always 0 at the start, it always reserves the entire file size. I added a property READ_CHUNKED (because I still have other existing code that uses this method and expects it to work as it did, so I have to check that everything else keeps working) and changed the above to this:
FileReader.prototype.readAsArrayBuffer = function (file) {
    if (initRead(this, file)) {
        return this._realReader.readAsArrayBuffer(file);
    }
    var totalSize = file.end - file.start;
    readSuccessCallback.bind(this)('readAsArrayBuffer', null, file.start, totalSize, function (r) {
        var resultArray;
        if (!this.READ_CHUNKED) {
            resultArray = new Uint8Array(totalSize);
            resultArray.set(new Uint8Array(r), this._progress);
        } else {
            var newSize = FileReader.READ_CHUNK_SIZE;
            if ((totalSize - this._progress) < FileReader.READ_CHUNK_SIZE) {
                newSize = (totalSize - this._progress);
            }
            resultArray = new Uint8Array(newSize);
            resultArray.set(new Uint8Array(r), 0);
        }
        this._result = resultArray.buffer;
    }.bind(this));
};
When the READ_CHUNKED property is true, it only returns the chunks and doesn't reserve memory for the entire file and when it is false it works like it used to.
I've never used GitHub (except for pulling code), so I'm not uploading this for now; I might look into it in the near future.
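The chunk-size arithmetic in the READ_CHUNKED branch above can be isolated into a tiny helper; a sketch (nextChunkSize is a hypothetical name, not part of the plugin):

```javascript
// Each chunk is chunkSize bytes, except the final one, which is whatever
// remains between the current progress offset and the total size.
function nextChunkSize(totalSize, progress, chunkSize) {
    return Math.min(chunkSize, totalSize - progress);
}
```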
I'm working on a Chrome extension that stores a temporary playlist from items in SoundCloud, to perform several actions on it later.
So... I know Chrome Storage is an object and "can't" be ordered per se, but I really need that order in some feasible way.
I tried storing the objects in an array and saving that array to storage after pushing each new element onto the end of it. That was the perfect workaround until, with 27 objects in it, Chrome told me I had reached the memory limit (and I'm going to need to store more elements than that).
Storing each element as a separate object allows me virtually any amount of them (I think 50 MB, which is enough for sure), but the get method returns elements in whatever order it wants (obviously, being an object).
Objects are stored with timestamp keys, but that still doesn't work at all.
Is there a "light" way to do this?
The code is not final, and I'm thinking of appending elements directly to a new window, leaving storage calls for other stuff and moving to "lighter" code, but I'd first like to know whether this is somehow possible.
CODE - popup.js (here is where order is not persistent)
function appendTracks(){
    chrome.storage.sync.get(null, function (storageObject) {
        //TODO check if is song
        $.each(storageObject, function (key, trackData) {
            trackContainer(trackData["permalink"]);
        });
    });
}
function trackContainer(trackPermalink){
    console.log(trackPermalink);
    var trackWidget;
    $.getJSON(
        'http://soundcloud.com/oembed' +
        '?format=json' +
        '&url=' + trackPermalink + '&visual=false'
    ).done(function (embedData) {
        trackWidget = embedData.html;
        $("#mainPlayList").append(trackWidget);
    });
    console.log(trackWidget);
    return trackWidget;
}
CODE - main.js (Storage Setter with timestamp as key)
function downloadClick(a) {
    playListName = "";
    var b = this;
    $.ajax({
        url: a.data.reqUrl
    }).done(function (a) {
        var time = Date.now();
        var timeStamp = Math.round(time / 1000);
        var key = timeStamp.toString() + "queueSc";
        var trackData = {
            "trackId": a["id"],
            "trackTitle": a["title"],
            "thumbnail": a["artwork_url"],
            "streamUrl": a["stream_url"],
            "permalink": a["permalink_url"],
            "duration": a["duration"],
            "genre": a["genre"]
        };
        trackData[key] = "key";
        getStorage(null, function (storageObject) {
            if (!isTrackInList(trackData, storageObject)) {
                setStorage({
                    [key]: trackData
                });
            } else {
                console.log("ya esta en la lista"); // "it's already in the list"
            }
        });
    })
}
function isTrackInList(trackData, storageObject){
    var isInList = false;
    $.each(storageObject, function (key, value) {
        if (trackData["trackId"] == value["trackId"]) {
            isInList = true;
        }
    });
    return isInList;
}
I think it's important to say that other than the order issue there isn't any problem with it; everything runs fine, although there are things that could certainly be more "elegant".
Thanks in advance, hope you can help!
The problem is that you are exceeding QUOTA_BYTES_PER_ITEM, i.e. the storage limit you are allowed per object. With chrome.storage.sync you are limited to 8,192 bytes per item. chrome.storage.local has no per-item limit, only an overall quota.
Note that using chrome.storage.local makes your data local to that machine, so it is not synced across browsers on different machines.
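If you want to stay on chrome.storage.sync, you can estimate up front whether an item will fit. Chrome measures an item's size as the length of its key plus the length of its JSON-serialized value; a sketch (fitsInSyncItem is a hypothetical helper, and 8,192 is the documented per-item limit at the time of writing):

```javascript
// chrome.storage.sync rejects any single item whose key length plus
// JSON.stringify'd value length exceeds QUOTA_BYTES_PER_ITEM.
var QUOTA_BYTES_PER_ITEM = 8192;

function fitsInSyncItem(key, value) {
    return key.length + JSON.stringify(value).length <= QUOTA_BYTES_PER_ITEM;
}
```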
Thanks to EyuelDK. In the end a synchronous (async: false) call was needed; I will try to solve that next.
CODE - popup.js
function appendTracks(){
    chrome.storage.local.get("souncloudQueue", function (storageObject) {
        var length = storageObject["souncloudQueue"].length;
        for (var i = 0; i < length; i++) {
            var trackPermalink = storageObject["souncloudQueue"][i];
            console.log(i, trackPermalink);
            $("#mainPlayList").append(trackContainer(trackPermalink));
        }
    });
}
function trackContainer(trackPermalink){
    console.log(trackPermalink);
    var trackWidget;
    $.ajax({
        url: 'http://soundcloud.com/oembed' +
            '?format=json' +
            '&url=' + trackPermalink,
        dataType: 'json',
        async: false,
        success: function (data) {
            trackWidget = data.html;
        }
    });
    console.log(trackWidget);
    return trackWidget;
}
CODE - main.js
function downloadClick(a) {
    var trackUrl = a.data.reqUrl;
    console.log(trackUrl);
    getStorage("souncloudQueue", function (callback) {
        console.log(callback["souncloudQueue"]);
        var tempArray = callback["souncloudQueue"];
        tempArray.push(trackUrl);
        setStorage({ "souncloudQueue": tempArray }, function () {
        });
    })
}
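As a side note, the async: false request in trackContainer blocks the UI and is deprecated in browsers. Since the order now lives in the stored array, you can keep that order while still fetching in parallel; a sketch using Promise.all (buildPlaylist and fetchWidget are hypothetical names, where fetchWidget would wrap the oEmbed request and resolve with the widget HTML):

```javascript
// Promise.all resolves its results in the order of the input array,
// regardless of which request finishes first, so playlist order is kept.
function buildPlaylist(permalinks, fetchWidget) {
    return Promise.all(permalinks.map(fetchWidget));
}
```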
I'm working on a CSV-parsing web application, which collects data and then uses it to draw a plot graph. So far it works nicely, but unfortunately it takes some time to parse the CSV files with Papa Parse, even though they are only about 3 MB.
So it would be nice to have some kind of progress shown while "papa" is working. I could go for the cheap hidden div showing "I'm working", but would prefer to use <progress>.
Unfortunately the bar only gets updated AFTER papa has finished its work. So I tried to get into web workers and use a worker file to calculate progress, also setting worker: true in Papa Parse's configuration. Still to no avail.
The configuration used (with step function) is as follows:
var papaConfig = {
    header: true,
    dynamicTyping: true,
    worker: true,
    step: function (row) {
        if (gotHeaders == false) {
            for (k in row.data[0]) {
                if (k != "Time" && k != "Date" && k != " Time" && k != " ") {
                    header.push(k);
                    var obj = {};
                    obj.label = k;
                    obj.data = [];
                    flotData.push(obj);
                    gotHeaders = true;
                }
            }
        }
        tempDate = row.data[0]["Date"];
        tempTime = row.data[0][" Time"];
        var tD = tempDate.split(".");
        var tT = tempTime.split(":");
        tT[0] = tT[0].replace(" ", "");
        dateTime = new Date(tD[2], tD[1] - 1, tD[0], tT[0], tT[1], tT[2]);
        var encoded = $.toJSON(row.data[0]);
        for (j = 0; j < header.length; j++) {
            var value = $.evalJSON(encoded)[header[j]];
            flotData[j].data.push([dateTime, value]);
        }
        w.postMessage({ state: row.meta.cursor, size: size });
    },
    complete: Done,
}
Worker configuration on the main site:
var w = new Worker("js/workers.js");
w.onmessage = function (event) {
    $("#progBar").val(event.data);
};
and the called worker is:
onmessage = function (e) {
    var progress = e.data.state;
    var size = e.data.size;
    var newPercent = Math.round(progress / size * 100);
    postMessage(newPercent);
}
The progress bar is updated, but only after the CSV file is parsed and the site is set up with the data, so the worker is called but the answer is handled only after parsing. Papa Parse seems to be called in a worker too, or so it appears from the calls in the browser's debugging tools, but the site is still unresponsive until all the data shows up.
Can anyone point me to what I have done wrong, or where to adjust the code to get a working progress bar? I guess this would also deepen my understanding of web workers.
You could use the FileReader API to read the file as text, split the string by "\n" and then count the length of the returned array. This is then your size variable for the calculation of percentage.
You can then pass the file string to Papa (you do not need to reread directly from the file) and pass the number of rows (the size variable) to your worker. (I am unfamiliar with workers and so am unsure how you do this.)
Obviously this only accurately works if there are no embedded line breaks inside the csv file (e.g. where a string is spread over several lines with line breaks) as these will count as extra rows, so you will not make it to 100%. Not a fatal error, but may look strange to the user if it always seems to finish before 100%.
Here is some sample code to give you ideas.
var size = 0;

function loadFile() {
    var files = document.getElementById("file").files; // load file from file input
    var file = files[0];
    var reader = new FileReader();
    reader.readAsText(file);
    reader.onload = function (event) {
        var csv = event.target.result; // the string version of your csv
        var csvArray = csv.split("\n");
        size = csvArray.length;
        console.log(size); // the number of rows in your file
        Papa.parse(csv, papaConfig); // send the csv string to Papa for parsing
    };
}
I haven't used Papa Parse with workers before, but a few things pop up after playing with it for a bit:
It does not seem to expect you to interact directly with the worker
It expects you to either want the entire final result, or the individual items
Using a web worker makes providing a JS Fiddle infeasible, but here's some HTML that demonstrates the second point:
<html>
<head>
    <script src="papaparse.js"></script>
</head>
<body>
    <div id="step"></div>
    <div id="result"></div>
    <script type="application/javascript">
        var papaConfig = {
            header: true,
            worker: true,
            step: function (row) {
                var stepDiv = document.getElementById('step');
                stepDiv.appendChild(document.createTextNode('Step received: ' + JSON.stringify(row)));
                stepDiv.appendChild(document.createElement('hr'));
            },
            complete: function (result) {
                var resultDiv = document.getElementById('result');
                resultDiv.appendChild(document.createElement('hr'));
                resultDiv.appendChild(document.createTextNode('Complete received: ' + JSON.stringify(result)));
                resultDiv.appendChild(document.createElement('hr'));
            }
        };
        var data = 'Column 1,Column 2,Column 3,Column 4 \n\
1-1,1-2,1-3,1-4 \n\
2-1,2-2,2-3,2-4 \n\
3-1,3-2,3-3,3-4 \n\
4,5,6,7';
        Papa.parse(data, papaConfig);
    </script>
</body>
</html>
If you run this locally, you'll see you get a line for each of the four rows of the CSV data, but the call to the complete callback gets undefined. Something like:
Step received: {"data":[{"Column 1":"1-1",...
Step received: {"data":[{"Column 1":"2-1",...
Step received: {"data":[{"Column 1":"3-1",...
Step received: {"data":[{"Column 1":"4","...
Complete received: undefined
However if you remove or comment out the step function, you will get a single line for all four results:
Complete received: {"data":[{"Column 1":"1-1",...
Note also that Papa Parse uses a streaming concept to support the step callback regardless of using a worker or not. This means you won't know how many items you are parsing directly, so calculating the percent complete is not possible unless you can find the length of items separately.
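One workaround for the unknown total: the question's own step callback already reads row.meta.cursor (the number of bytes consumed so far). Combined with the File object's size property, that yields a byte-based percentage without counting rows first; a sketch (percentComplete is a hypothetical helper):

```javascript
// Byte-based progress: cursor would be row.meta.cursor from the step
// callback, totalBytes would be file.size from the file input.
function percentComplete(cursor, totalBytes) {
    if (!totalBytes) { return 0; }
    return Math.min(100, Math.round((cursor / totalBytes) * 100));
}
```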
I'm developing a small Chrome extension that would allow me to save some records to chrome.storage and then display them.
I've managed to make the set and get process work as I wanted (kinda), but now I'd like to add a duplicate check before saving any record, and I'm quite stuck trying to find a nice and clean solution.
That's what I came up for now:
var storage = chrome.storage.sync;

function saveRecord(record) {
    var duplicate = false;
    var recordName = record.name;
    storage.get('records', function (data) {
        var records = data.records;
        console.log('im here');
        for (var i = 0; i < records.length; i++) {
            var Record = records[i];
            if (Record.name === recordName) {
                duplicate = true;
                break;
            } else {
                console.log(record);
            }
        }
        if (duplicate) {
            console.log('this record is already there!');
        } else {
            arrayWithRecords.push(record);
            storage.set({ bands: arrayWithRecords }, function () {
                console.log('saved ' + record.name);
            });
        }
    });
}
I'm basically iterating over the array containing the records and checking whether the name property already exists. The problem is that it breaks the basic set and get functionality: in fact, when saving, it correctly logs 'im here' and the relevant record object, but it doesn't set the value. Plus, after a while (generally after trying to list the bands with a basic storage.get call) it returns this error:
Error in response to storage.get: TypeError: Cannot read property
'name' of null
I'm guessing this is due to the async nature of the set and get and my incompetence working with it, but I can't get my head around it in order to find a better alternative. Ideas?
Thanks in advance.
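For what it's worth, the duplicate check itself can be written as a pure function that skips null entries, which would sidestep the "Cannot read property 'name' of null" error above; a sketch (hasDuplicate is a hypothetical helper, and it does not address the separate mismatch in the code between getting 'records' and setting 'bands'):

```javascript
// Pure duplicate check over the stored records array. Null and undefined
// entries are skipped instead of throwing when .name is accessed.
function hasDuplicate(records, name) {
    return (records || []).some(function (r) {
        return r != null && r.name === name;
    });
}
```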
I'm developing a PhoneGap app using a lot of JavaScript. I'm currently debugging it using the Safari Developer Tools; in particular I'm focused on some buttons that seem a bit laggy on the device.
So I've added some console.timeEnd() calls to better understand where the code slows down, but the "problem" is that when I open the console the code starts running faster, without lag; if I close it again, the lag is back.
Maybe my question is silly, but I can't figure it out.
Thanks
EDIT: Added the code
function scriviNumeroTastiera(tasto) {
    console.time('Funzione ScriviNumeroTastiera');
    contenutoInput = document.getElementById('artInserito').value;
    if ($('#cursoreImg').css('display') == 'none') {
        //$('#cursoreImg').show();
    }
    else if (tasto == 'cancella') {
        //alert(contenutoInput.length);
        if (contenutoInput.length == 0) {
        }
        else {
            indicePerTaglioStringa = (contenutoInput.length) - 1;
            contenutoInput = contenutoInput.substr(0, indicePerTaglioStringa);
            $('#artInserito').val(contenutoInput);
            //alert('tastoCanc');
            margineAttualeImg = $('#cursoreImg').css('margin-left');
            indicePerTaglioStringa = margineAttualeImg.indexOf('p');
            margineAttualeImg = margineAttualeImg.substr(0, indicePerTaglioStringa);
            margineAggiornato = parseInt(margineAttualeImg) - 20;
            $('#cursoreImg').css('margin-left', margineAggiornato + 'px');
        }
    }
    else {
        //contenutoInput = document.getElementById('artInserito').value;
        contenutoAggiornato = contenutoInput + tasto;
        margineAttualeImg = $('#cursoreImg').css('margin-left');
        indicePerTaglioStringa = margineAttualeImg.indexOf('p');
        margineAttualeImg = margineAttualeImg.substr(0, indicePerTaglioStringa);
        margineAggiornato = parseInt(margineAttualeImg) + 20;
        $('#cursoreImg').css('margin-left', margineAggiornato + 'px');
    }
    console.timeEnd('Funzione ScriviNumeroTastiera');
}
The code is a bit crappy, but it's just a beginning ;)
This could happen because PhoneGap/Cordova creates its own console object (in cordova.js), which gets overwritten when you open the Safari console (Safari's implementation might be faster than PhoneGap's, which could be why things speed up).
So, one way to measure the time properly, without opening the console, is to fall back to the good old alert. First add this code anywhere in your app:
var TIMER = {
    start: function (name, reset) {
        if (!name) { return; }
        var time = new Date().getTime();
        if (!TIMER.stimeCounters) { TIMER.stimeCounters = {}; }
        var key = "KEY" + name.toString();
        if (!reset && TIMER.stimeCounters[key]) { return; }
        TIMER.stimeCounters[key] = time;
    },
    end: function (name) {
        var time = new Date().getTime();
        if (!TIMER.stimeCounters) { return; }
        var key = "KEY" + name.toString();
        var timeCounter = TIMER.stimeCounters[key];
        var diff;
        if (timeCounter) {
            diff = time - timeCounter;
            var label = name + ": " + diff + "ms";
            console.info(label);
            delete TIMER.stimeCounters[key];
        }
        return diff;
    }
};
(This just mimics the console.time and console.timeEnd methods, but it returns the value so we can alert it).
Then, instead of calling:
console.time('Funzione ScriviNumeroTastiera');
you'd call:
TIMER.start('Funzione ScriviNumeroTastiera');
and instead of calling:
console.timeEnd('Funzione ScriviNumeroTastiera');
you'd call:
var timeScriviNumeroTastiera = TIMER.end('Funzione ScriviNumeroTastiera');
alert('Elapsed time: ' + timeScriviNumeroTastiera);
This gives you the proper elapsed time without opening the console, so it measures the real time in the PhoneGap app.
Hope this helps.
Cheers
This really isn't something you would normally expect - opening the console should not speed up anything. If anything, it will make things slower because of additional debugging hooks and status display. However, I've had a case like that myself. The reason turned out to be very simple: opening the console makes the displayed portion of the website smaller and the code efficiency was largely dependent on the viewport size. So if I am right, making the browser window smaller should have the same effect as opening the console.