I'm a bit confused about the use of LocalForage.
I just want to save and retrieve an image from LocalForage; here is how I do it:
var lcl_images = [];

function preload() {
    localforage.setDriver(localforage.LOCALSTORAGE).then(function() {
        lcl_images[0] = { key: 'lclstorage_1', value: 'http://www.superga.com/tcnimg/S/02/S009TE0/XBS009TE0___949______.jpg' };
        lcl_images[1] = { key: 'lclstorage_2', value: 'https://mir-s3-cdn-cf.behance.net/project_modules/max_1200/2150fb35419617.56f6327b44e47.gif' };
        for (var i = lcl_images.length - 1; i >= 0; i--) {
            // setItem's callback receives (err, value), so log the value it reports
            localforage.setItem(lcl_images[i].key, lcl_images[i].value, function(err, storedValue) {
                console.log('Saved: ' + storedValue);
            });
        }
    });
}
function use_preloaded_image() {
    lcl_images.forEach(function(img) {
        // forEach keeps `img` bound per iteration, so each callback targets the right <img>
        localforage.getItem(img.key, function(err, readValue) {
            console.log('Read: ', readValue);
            document.getElementById(img.key).src = readValue;
        });
    });
}
The point is, I don't know if this is the right way to use images: I store them as strings (the URLs), and localForage is supposed to handle the stringify/parse itself.
This example works fine (when I go offline and refresh the page, the images in localStorage are still visible), but I've seen some people using Image objects or Blobs (which I had never heard of, and a quick search didn't tell me much), while others do an XHR request to download the image and then save it.
I can only imagine that the XHR request is needed to check whether the image has actually been downloaded. Is that right?
Secondly: some say that storing Blobs is better. Why?
Please help me, thanks.
EDIT (Multiple Promises)
When I retrieve the images I must bind each of them to an <img>. But I don't know how to do that with promises: when using Promise.all() I can't use a for loop, so I have no index to bind each image src to its <img> id.
var promises = [];
// Iterate forward so that promises[i] lines up with lcl_images[i]
for (var i = 0; i < lcl_images.length; i++) {
    promises.push(localforage.getItem(lcl_images[i].key));
}
Promise
    .all(promises)
    .then(function(value) {
        console.log(value);
    })
    .catch(function(error) {
        console.log(error);
    });
// This loop runs before the promises resolve, and `value` is not in scope here:
for (var i = lcl_images.length - 1; i >= 0; i--) {
    document.getElementById(lcl_images[i].key).src = value;
}
Promise
    .all(promises)
    .then(function(value) {
        console.log(value);
        document.getElementById(lcl_images[i].key).src = value;
        // I can't get the index i, no way to set <img> src for id attribute
    })
    .catch(function(error) {
        console.log(error);
    });
FIX Promise.all().then()
Promise
    .all(promises)
    .then(function(value) {
        console.log(value);
        // Promise.all preserves order, so value[i] matches lcl_images[i]
        for (var i = 0; i < lcl_images.length; i++) {
            document.getElementById(lcl_images[i].key).src = value[i];
        }
    })
Blobs are better because IndexedDB can optimize their storage on disk. In essence it stores them directly as file handles. (In cases where Blobs aren't supported or IDB isn't supported, LocalForage will convert to base64.)
To download an image as a Blob, the easiest way is to use the fetch() API (or a fetch polyfill) and response.blob() (see the MDN docs). Another way is to use XMLHttpRequest and set xhr.responseType = 'blob'. Example:
fetch('http://example.com/myimage.gif').then(function (response) {
    return response.blob();
}).then(function (blob) {
    console.log("yay I have a blob", blob);
}).catch(console.log.bind(console));
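From there you can hand the Blob straight to LocalForage, and on retrieval display it via an object URL. A minimal sketch, where the key name and <img> id are just examples:

// Assumes an <img id="myimage"> in the page; the key name is arbitrary
fetch('http://example.com/myimage.gif').then(function (response) {
    return response.blob();
}).then(function (blob) {
    return localforage.setItem('myimage', blob);
}).then(function () {
    return localforage.getItem('myimage');
}).then(function (blob) {
    // Turn the stored Blob back into something an <img> can display
    document.getElementById('myimage').src = URL.createObjectURL(blob);
}).catch(console.log.bind(console));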
Also, another thing you should know: it looks like you are making a classic Promise mistake in your code, which is to fire off multiple Promises inside a loop without collecting them. You probably want Promise.all() instead. Please read We have a problem with promises for more details.
For working with Blobs, you can check out blob-util, which has a tutorial, or you can read the PouchDB guide to attachments. (PouchDB uses Blobs similarly to LocalForage, so the advice is similar.)
Related
First of all, I'm a complete newbie at making Chrome extensions. In one part of the extension I will receive different URLs, and I want to store the text of each web page to process later, resulting in an array of boolean values, each associated with its URL. Schematically it would be something like this:
var result = [];

function process(text) {
    if (/* something */) result.push(true);
    else result.push(false);
}

function main() {
    for (i...) {
        url = given[i];
        text = getHTMLText(url);
        process(text);
    }
    final(); // when the loop finishes, call another function that uses the global variable: result
}
I have problems with the main function. First I tried a synchronous XMLHttpRequest; although it works, it's very slow, and Chrome always warns that synchronous XMLHttpRequest is deprecated.
for (var i = 0; i < urls.length; i++) {
    url = urls[i];
    var req = new XMLHttpRequest();
    req.open('GET', url, false);
    req.send(null);
    if (req.status == 200) detecting(req.responseText);
}
Another solution I found was to use fetch(url), but I don't fully understand the code I found. The returned text is correct, but the process function gives different results on each page refresh.
for (var i = 0; i < urls.length; i++) {
    url = urls[i];
    fetch(url).then(function(response) {
        response.text().then(function(text) {
            detecting(text);
        });
    });
}
Another problem (due to the little knowledge I have of fetch()) is that I can't store the text outside of the fetch(): every console.log gives undefined, which greatly complicates processing the text.
I have seen that maybe it can be done through Chrome's extension APIs, but I can't see how.
The algorithm shown in your main pseudocode can be implemented easily by using async/await and Promise.all, without a for loop:
(async () => {
    const results = await Promise.all(urls.map(processUrl));
    console.log(results);
    // further processing must also be inside this IIFE
})();

async function processUrl(url) {
    try {
        const text = await (await fetch(url)).text();
        return {url, text, status: detecting(text)};
    } catch (error) {
        return {url, error};
    }
}
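Note that Promise.all preserves input order, so results[i] always corresponds to urls[i]. And because processUrl catches its own errors, a single failing URL yields an {url, error} entry instead of rejecting the whole batch.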
I have created an export-to-CSV extension in Tableau and embedded it into a dashboard so users can download the data.
However, there is one condition: before the download I need to set a filter to some value using applyFilterAsync, and afterwards reset that filter with the same applyFilterAsync, passing 'filtername' and 'value' parameters with FilterUpdateType.ADD to add and REMOVE to remove.
This is not working in the case of sets, range filters, dimensions, and so on.
I need your help to resolve this issue.
Clearing the filters:
for (var i = 0; i < worksheets.length; i++) {
    var sheet = worksheets[i];
    if (sheetList.indexOf(sheet.name) > -1) {
        sheet.getFiltersAsync();
        sheet.clearFilterAsync('IN/OUT(DownloadSet)');
        console.log('Filter Cleared');
    }
}
Apply the filter after download:
sheet.applyFilterAsync('IN/OUT(DownloadSet)','In',tableau.FilterUpdateType.Replace);
Please help me resolve this issue.
Thanks.
Since these are async functions, they return a promise. Your code should look like:
worksheets.forEach(function(sheet) {
    if (sheetList.indexOf(sheet.name) > -1) {
        // forEach keeps `sheet` bound per iteration, even after the async calls resolve
        sheet.getFiltersAsync()
            .then(function() {
                return sheet.clearFilterAsync('IN/OUT(DownloadSet)');
            })
            .then(function() {
                console.log('Filter Cleared');
            });
    }
});
Same with the applyFilterAsync:
sheet.applyFilterAsync('IN/OUT(DownloadSet)', ['In'], tableau.FilterUpdateType.Replace)
    .then(function() {
        // do something
    });
Without seeing more context/errors this is probably what is causing your issues.
EDIT: Set filters automatically evaluate to the 'In' value. I'm looking into whether you have the right syntax for a set filter.
UPDATE: This should be the right asynchronous call. For applyFilterAsync you need to pass an array of strings:
sheet.applyFilterAsync('IN/OUT(DownloadSet)', ['In'], tableau.FilterUpdateType.Replace)
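If the goal is "filter, download, restore", the calls also need to be chained so each step waits for the previous one. A rough sketch, where triggerDownload() is a hypothetical stand-in for your extension's CSV export (assumed to return a promise):

// Hypothetical sequence: apply the set filter, export, then restore the filter
sheet.applyFilterAsync('IN/OUT(DownloadSet)', ['In'], tableau.FilterUpdateType.Replace)
    .then(function() {
        return triggerDownload(); // your export logic; assumed promise-returning
    })
    .then(function() {
        return sheet.clearFilterAsync('IN/OUT(DownloadSet)');
    })
    .then(function() {
        console.log('Filter restored after download');
    });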
I was wondering what the issue with the bottom loop is, or whether I'm somehow going through the last JSON wrong when I try to log it to the console. The arrays are declared above the given code, and the first two loops work fine. I'm trying to return goals, but really I want an efficient way to return all of the stats.
d3.json('https://statsapi.web.nhl.com/api/v1/teams', function(data) {
    for (var i = 0; i < 31; i++) {
        teamID.push(data.teams[i].id);
    }
});
console.log(teamID);

// request roster json data from the API and loop through the roster to get all player IDs
// and append them to the playerList array
d3.json('https://statsapi.web.nhl.com/api/v1/teams/1/?expand=team.roster', function(data) {
    for (var i = 0; i < data.teams[0].roster.roster.length; i++) {
        playerList.push(data.teams[0].roster.roster[i].person.id);
    }
});
console.log(playerList);

// request player stat json data from the API and loop through all players to get all stats
// and append them to an array
var playerStats = [];
for (var i = 0; i < playerList.length; i++) {
    d3.json('https://statsapi.web.nhl.com/api/v1/people/' + playerList[i] + '/stats/?stats=statsSingleSeason&season=20172018', function(data) {
        console.log(data.stats[0].splits[0].stat.goals);
    });
}
// console.log(playerStats);
Your final loop is probably running before the HTTP responses have come back from the API. Since you are using callbacks rather than promises to get the data, you will need to nest the dependent calls inside those callbacks. Here is the best I can do without actually seeing your full code:
d3.json('https://statsapi.web.nhl.com/api/v1/teams', function(teamResponse) {
    var teamIds = teamResponse.teams.filter((team, i) => i < 31)
        .map((team) => team.id);
    // I use the functional approach above because I think it is cleaner than loops.
    // for (i=0; i < 31; i++) {
    //     teamID.push(data.teams[i].id);
    // }
    d3.json('https://statsapi.web.nhl.com/api/v1/teams/1/?expand=team.roster', function(rosterResponse) {
        var playerIdList = rosterResponse.teams[0].roster.roster
            .map((roster) => roster.person.id);
        // Swap this out for the functional method above.
        // for (i=0; i < data.teams[0].roster.roster.length; i++) {
        //     playerList.push(data.teams[0].roster.roster[i].person.id);
        // }
        for (var i = 0; i < playerIdList.length; i++) {
            d3.json('https://statsapi.web.nhl.com/api/v1/people/' + playerIdList[i] + '/stats/?stats=statsSingleSeason&season=20172018', function(data) {
                console.log(data.stats[0].splits[0].stat.goals);
            });
        }
    });
});
Promises (and Promise.all) are not supported at all in Internet Explorer (they are in Edge) and in some older versions of other browsers; neither are arrow functions. I assume that when you need to support older browsers you can use Babel (with webpack) or know how to write ES5.
d3.json returns a promise (in D3 v5, which uses d3-fetch; in v4 it takes a callback), so you can leave out the callback and use promises:
Promise.all([
    d3.json('https://statsapi.web.nhl.com/api/v1/teams'),
    d3.json('https://statsapi.web.nhl.com/api/v1/teams/1/?expand=team.roster')
])
.then(([teams, playerData]) => {
    const playerList = playerData.teams[0].roster.roster.map(
        player => player.person.id
    );
    return Promise.all(
        playerList.map(playerID =>
            d3.json(`https://statsapi.web.nhl.com/api/v1/people/${playerID}/stats/?stats=statsSingleSeason&season=20172018`)
        )
    ).then(
        (playerStats) => [teams, playerData, playerStats]
    );
})
.then(([teams, playerData, playerStats]) => {
    console.log("teams:", teams);
    console.log("playerData:", playerData);
    console.log("playerStats:", playerStats);
})
.catch(err => console.warn("Something went wrong:", err));
I did not comment on how the code works, so please let me know if you have specific questions about it. I suggest reading this if you don't know why promises are used, and googling "mdn promise all" if you want to know what Promise.all does.
So I'm making a little scraper for learning purposes; in the end it should produce a tree-like structure of the pages on the website.
I've been banging my head trying to get the requests right. This is more or less what I have:
var request = require('request');

function scanPage(url) {
    // request the page at the given url:
    request.get(url, function(err, res, body) {
        var pageObject = {};
        /* [... jQuery mumbo-jumbo to
           1. Fill the page object with information and
           2. Get the links on that page and store them into arrayOfLinks
        */
        var arrayOfLinks = ['url1', 'url2', 'url3'];
        for (var i = 0; i < arrayOfLinks.length; i++) {
            pageObj[arrayOfLinks[i]] = scanPage[arrayOfLinks[i]];
        }
    });
    return pageObj;
}
I know this code is wrong on many levels, but it should give you an idea of what I'm trying to do.
How should I modify it to make it work? (Without the use of promises, if possible.)
(You can assume the website has a tree-like structure, so every page only has links to pages further down the tree, hence the recursive approach.)
I know that you'd rather not use promises for whatever reason (and I can't ask why in the comments because I'm new), but I believe promises are the best way to achieve this.
Here's a solution using promises that answers your question, but might not be exactly what you need:
var request = require('request');
var Promise = require('bluebird');
var get = Promise.promisify(request.get);

var maxConnections = 1; // maximum number of concurrent connections

function scanPage(url) {
    // request the page at the given url:
    return get(url).then((res) => {
        var body = res.body;
        /* [... jQuery mumbo-jumbo to
           1. Fill the page object with information and
           2. Get the links on that page and store them into arrayOfLinks
        */
        var arrayOfLinks = ['url1', 'url2', 'url3'];
        return Promise.map(arrayOfLinks, scanPage, { concurrency: maxConnections })
            .then(results => {
                var res = {};
                for (var i = 0; i < results.length; i++)
                    res[arrayOfLinks[i]] = results[i];
                return res;
            });
    });
}

scanPage("http://example.com/").then((res) => {
    // do whatever with res
});
Edit: Thanks to Bergi's comment, rewrote the code to avoid the Promise constructor antipattern.
Edit: Rewrote in a much better way. By using Bluebird's concurrency option, you can easily limit the number of simultaneous connections.
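With maxConnections set to 1, Promise.map visits the links one at a time, so the crawl is effectively depth-first; raising it lets several pages download in parallel while still capping the number of open connections.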
In a Chrome extension I'm using the HTML5 FileSystem API.
I'm retrieving a list of entries in a folder.
var entries = [];
var metadata = [];

listFiles(folder);

function listFiles(fs) {
    var dirReader = fs.createReader();
    entries = [];

    // Call reader.readEntries() until no more results are returned.
    var readEntries = function () {
        dirReader.readEntries(function (results) {
            if (!results.length) {
                addMeta(entries);
            } else {
                console.log(results);
                entries = entries.concat(toArray(results));
                readEntries();
            }
        });
    };
    readEntries(); // Start reading dirs.
}
The FileEntry object does not contain metadata, and I need the last modified date. I'm able to retrieve a metadata object:
function addMeta(entries) {
    for (var i = 0; i < entries.length; i++) {
        entries[i].getMetadata(function (metadata) {
            console.log(entries);
            console.log(metadata);
        });
    }
}
The problem is that I get the metadata in a callback.
How can I join the two objects, making sure the right match is made?
The simplified result I'm looking for is:
[
    ["fileName1", "modifyDate1"],
    ["fileName2", "modifyDate2"]
]
To get lastModifiedDate, you don't need to use getMetadata: as per the description of this question, you can just use the File object from entry.file(), which carries file.lastModifiedDate (file() takes another callback).
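A quick sketch of that approach, where fileEntry stands for one of your entries:

// file() hands its callback a File object, which carries lastModifiedDate
fileEntry.file(function (file) {
    console.log(fileEntry.name, file.lastModifiedDate);
});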
To "join the two object making sure the right match is made", because of Closures, you could use the following code to get the right results. (Assuming the data structure is [[entry, metadata]] as you mentioned)
var ans = [];

function addMeta(entries) {
    for (var i = 0; i < entries.length; i++) {
        (function (entry) {
            entry.getMetadata(function (metadata) {
                ans.push([entry, metadata]);
            });
        })(entries[i]);
    }
}
If what you want is to wait for all the asynchronous callbacks to finish, see this answer for more details: basically you can adjust your code to use Promises, or use other approaches such as setInterval or a counter that tracks how many callbacks remain.
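For instance, a Promise-based sketch of addMeta, using entry.file() for the date and producing the [fileName, modifyDate] pairs from your question:

function addMeta(entries) {
    // Wrap each entry's asynchronous file() call in a Promise
    var tasks = entries.map(function (entry) {
        return new Promise(function (resolve, reject) {
            entry.file(function (file) {
                resolve([entry.name, file.lastModifiedDate]);
            }, reject);
        });
    });
    // Promise.all preserves order, so pairs[i] matches entries[i]
    return Promise.all(tasks).then(function (pairs) {
        console.log(pairs); // [["fileName1", modifyDate1], ["fileName2", modifyDate2]]
        return pairs;
    });
}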
I suggest having a look at the Promise-based bro-fs implementation of the HTML FileSystem API.
To read all entries with metadata you can do something like this:
fs.readdir('dir')
    .then(entries => {
        const tasks = entries.map(entry => fs.stat(entry.fullPath));
        return Promise.all(tasks);
    })
    .then(results => console.log(results));