I'm trying to call an API to retrieve a list of data that I use to build image source URLs. For each data entry, I call the image URL and do some image processing with Parse Image. The problem is that the inner httpRequest is never triggered, and I can't figure out why. Since Parse.com isn't that popular, I could hardly find any solution for a case like this.
Here's my code:
Parse.Cloud.httpRequest({
    url: api_url,
    method: 'POST'
}).then(function(data){
    data = JSON.parse(data.text);
    var entries = data.entries;
    for (var i = 0; i < 100; i++) {
        entry = entries[i][1];
        var image_url = composeImgURL(entry);
        Parse.Cloud.httpRequest({
            url: image_url,
            success: function(httpResponse){
                var image = new Image();
                image.setData(httpResponse.buffer);
                var size = Math.min(image.width(), image.height());
                if (size > 300) {
                    var ratio = 300 / size.toFixed(2);
                    image.scale({
                        ratio: ratio.toFixed(2)
                    });
                }
                image.setFormat("JPEG");
                var base64 = image.data().toString("base64");
                var cropped = new Parse.File("thumbnail.jpg", { base64: base64 });
                cropped.save();
                entries[i][1]['url'] = cropped.url;
            },
            error: function(httpResponse){
                console.log(httpResponse);
            }
        });
    }
    return entries;
}).then(function(entries){
    response.success(entries);
});
Your Current Code
Your code reaches the return entries statement before any of your 100 HTTP requests have completed. As soon as that happens, response.success(entries); is called and your cloud function exits.
Parallel Promises
For this situation, what you want to use are parallel promises. The docs are here:
https://parse.com/docs/js_guide#promises-parallel
There is a relevant question/answer on Parse answers here: https://www.parse.com/questions/parallel-promises-with-parsepromisewhen
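As a rough sketch (untested, and assuming your entries really look like the entries[i][1] pairs in your question), you could collect one promise per image and wait for all of them with Parse.Promise.when before calling response.success. The image work is factored into makeThumbnail, a hypothetical helper wrapping your existing image code that must return a promise resolving with the saved Parse.File:

Parse.Cloud.httpRequest({ url: api_url, method: 'POST' }).then(function(data) {
    var entries = JSON.parse(data.text).entries;

    // One promise per entry; each resolves once its thumbnail is saved.
    var promises = entries.map(function(pair) {
        var entry = pair[1];
        return Parse.Cloud.httpRequest({ url: composeImgURL(entry) })
            .then(function(httpResponse) {
                // makeThumbnail is a hypothetical helper wrapping your
                // image-processing code; it returns a promise that
                // resolves with the saved Parse.File.
                return makeThumbnail(httpResponse.buffer);
            })
            .then(function(savedFile) {
                entry['url'] = savedFile.url();
            });
    });

    // Parse.Promise.when resolves only after every promise in the array has.
    return Parse.Promise.when(promises).then(function() {
        return entries;
    });
}).then(function(entries) {
    response.success(entries);
}, function(error) {
    response.error(error);
});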
Future Problems
The above will work for simple functionality; however, you are doing two more things inside your for loop which will require more time:
cropped.save(); is also asynchronous: you should wait for this call to return.
Cropping 100 images inside a cloud function probably won't work. A cloud function only runs for approximately 15 seconds. If crops take too long, some will occur and some won't. You could do something such as queuing images for cropping and performing those crops within a scheduled Job.
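A rough sketch of that queue-and-job idea (the CropQueue class name is made up for illustration): the cloud function only saves a queue row per image, and a background job, which gets far more than 15 seconds, works through the queue in batches:

// Inside the cloud function: just queue the work instead of doing it.
var CropQueue = Parse.Object.extend("CropQueue"); // hypothetical class
var item = new CropQueue();
item.set("imageUrl", image_url);
item.save();

// A background job you schedule from the dashboard to drain the queue.
Parse.Cloud.job("cropQueuedImages", function(request, status) {
    var query = new Parse.Query("CropQueue");
    query.limit(50); // crop one batch per run
    query.find().then(function(items) {
        // ... fetch, crop, and save a thumbnail for each item,
        // then destroy the queue row ...
    }).then(function() {
        status.success("Batch cropped.");
    }, function(error) {
        status.error(error.message);
    });
});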
Related
I am trying to upload a list of images. I have the images stored in an array (called images).
I have the previews displayed on the screen.
What I want to do is upload them sequentially, and as they complete their upload, I want to set a flag. When this flag is set (thanks to the power of Knockout), the image disappears from the list.
However, I think due to the async nature of the post command, I'm not achieving the desired results.
Below is what I am trying to do:
for (var i = 0; i < self.images().length; i++) {
    var photo = self.images()[i];
    var thisItem = photo;
    var object = JSON.stringify({
        Image: thisItem.Image,
        AlbumID: albumId,
        Filesize: thisItem.Filesize,
        Filetype: thisItem.Filetype,
        Description: thisItem.Description,
        UniqueID: thisItem.UniqueID
    });
    var uri = "/api/Photo/Upload";
    $.post({
        url: uri,
        contentType: "application/json"
    }, object).done(function(data) {
        if (data.IsSuccess) {
            photo.Status(1);
        }
    }).fail(function() {
        // Handle here
    }).always(function() {
        remainingImages = remainingImages - 1;
        if (remainingImages == 0) self.isUploading(false);
    });
}
self.isUploading(false);
But I think what's happening is that the for loop ends before all the posts have received a reply, because only one image is removed.
I tried with an async: false ajax post, but that locked up the screen, and then they all disappeared.
I thought the 'done' handler would only execute once the post completed, but it seems the whole loop just ends once the post commands have been sent, and then I never get the done.
How can I achieve what I'm trying to do... Set each image's status once the post gets a reply?
Your first problem is that you are losing the reference you think you have to each photo object: the loop finishes before your AJAX calls return, so when they do return, photo is a reference to the last item in self.images().
What we need to do to solve this is to create a new scope for each iteration of the loop and each of those scopes will have its own reference to a particular photo. JavaScript has function scopes, so we can achieve our goal by passing each photo to a function. I will use an Immediately Invoked Function Expression (IIFE) as an example:
for (var i = 0; i < self.images().length; i++) {
    var photo = self.images()[i];
    (function (thisItem) {
        /* Everything else from within your for loop goes here. */
        /* Note: the done handler must reference `thisItem`, not `photo`. */
    })(photo);
}
Note that you should remove self.isUploading(false); from the last line. This should not be set to false until all of the POST requests have returned.
I have created a functioning fiddle that you can see here.
However, this solution will not perform the POST requests "sequentially". I am not sure why you would want to wait for one POST to return before sending the next as this will only increase the time the user must wait. But for the sake of completeness, I will explain how to do it.
To fire a POST after the previous POST returns you will need to remove the for loop. You will need a way to call the next POST in the always handler of the previous POST. This is a good candidate for a recursive function.
In my solution, I use an index to track which item from images was last POSTed. I recursively call the function to perform the POST on the next item in images until we have POSTed all items in images. The following code replaces the for loop:
(function postNextImage (index) {
    var photo = self.images()[index];
    var thisItem = photo;
    var object = JSON.stringify({
        Image: thisItem.Image,
        AlbumID: albumId,
        Filesize: thisItem.Filesize,
        Filetype: thisItem.Filetype,
        Description: thisItem.Description,
        UniqueID: thisItem.UniqueID
    });
    var uri = "/api/Photo/Upload";
    $.post({
        url: uri,
        contentType: "application/json"
    }, object)
        .done(function (data) {
            if (data.IsSuccess) {
                thisItem.Status(1);
            }
        })
        .fail(function () {
            // Handle here
        })
        .always(function () {
            if (index < (self.images().length - 1)) {
                index += 1;
                postNextImage(index);
            } else {
                self.isUploading(false);
            }
        });
})(0);
I have created a fiddle of this solution also, and it can be found here.
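For completeness, here is a hedged sketch of the parallel variant: fire all the POSTs at once and use $.when to flip isUploading only after every request settles. It is untested, assumes the same albumId and self.images() setup as above, and uses $.ajax rather than $.post so the content type and body are passed unambiguously:

var requests = self.images().map(function (thisItem) {
    var object = JSON.stringify({
        Image: thisItem.Image,
        AlbumID: albumId,
        Filesize: thisItem.Filesize,
        Filetype: thisItem.Filetype,
        Description: thisItem.Description,
        UniqueID: thisItem.UniqueID
    });
    return $.ajax({
        type: "POST",
        url: "/api/Photo/Upload",
        contentType: "application/json",
        data: object
    }).done(function (data) {
        // thisItem is captured per call here, so no IIFE is needed.
        if (data.IsSuccess) {
            thisItem.Status(1);
        }
    });
});

// The master promise from $.when settles once all requests resolve.
$.when.apply($, requests).always(function () {
    self.isUploading(false);
});

One caveat with this design: $.when rejects as soon as any single request fails, so on failure isUploading is cleared immediately rather than after the remaining requests finish.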
I'm working on a CSV-parsing web application, which collects data and then uses it to draw a plot graph. So far it works nicely, but unfortunately it takes some time to parse the CSV files with Papa Parse, even though they are only about 3 MB.
So it would be nice to show some kind of progress while "papa" is working. I could go for the cheap hidden div showing "I'm working", but would prefer to use <progress>.
Unfortunately the bar only gets updated AFTER papa has finished its work. So I tried to get into web workers and use a worker file to calculate progress, also setting worker: true in Papa Parse's configuration. Still to no avail.
The configuration used (with step function) is as follows:
var papaConfig = {
    header: true,
    dynamicTyping: true,
    worker: true,
    step: function (row) {
        if (gotHeaders == false) {
            for (k in row.data[0]) {
                if (k != "Time" && k != "Date" && k != " Time" && k != " ") {
                    header.push(k);
                    var obj = {};
                    obj.label = k;
                    obj.data = [];
                    flotData.push(obj);
                    gotHeaders = true;
                }
            }
        }
        tempDate = row.data[0]["Date"];
        tempTime = row.data[0][" Time"];
        var tD = tempDate.split(".");
        var tT = tempTime.split(":");
        tT[0] = tT[0].replace(" ", "");
        dateTime = new Date(tD[2], tD[1] - 1, tD[0], tT[0], tT[1], tT[2]);
        var encoded = $.toJSON(row.data[0]);
        for (j = 0; j < header.length; j++) {
            var value = $.evalJSON(encoded)[header[j]];
            flotData[j].data.push([dateTime, value]);
        }
        w.postMessage({ state: row.meta.cursor, size: size });
    },
    complete: Done,
}
Worker configuration on the main site:
var w = new Worker("js/workers.js");
w.onmessage = function (event) {
    $("#progBar").val(event.data);
};
and the called worker is:
onmessage = function(e) {
    var progress = e.data.state;
    var size = e.data.size;
    var newPercent = Math.round(progress / size * 100);
    postMessage(newPercent);
}
The progress bar is updated, but only after the CSV file is parsed and the site is set up with the data, so the worker is called, but its answer is only handled after parsing. Papa Parse does seem to run in a worker, or so it appears from the calls in the browser's debugging tools, but the site is still unresponsive until all the data shows up.
Can anyone point me to what I have done wrong, or where to adjust the code, to get a working progress bar? I guess this would also deepen my understanding of web workers.
You could use the FileReader API to read the file as text, split the string on "\n", and then count the length of the resulting array. That gives you the size variable for the percentage calculation.
You can then pass the file string to Papa (you do not need to reread directly from the file) and pass the number of rows (the size variable) to your worker. (I am unfamiliar with workers and so am unsure how you do this.)
Obviously this only accurately works if there are no embedded line breaks inside the csv file (e.g. where a string is spread over several lines with line breaks) as these will count as extra rows, so you will not make it to 100%. Not a fatal error, but may look strange to the user if it always seems to finish before 100%.
Here is some sample code to give you ideas.
var size = 0;

function loadFile() {
    var files = document.getElementById("file").files; // load file from file input
    var file = files[0];
    var reader = new FileReader();
    reader.readAsText(file);
    reader.onload = function(event) {
        var csv = event.target.result; // the string version of your csv
        var csvArray = csv.split("\n");
        size = csvArray.length;
        console.log(size); // the number of rows in your file
        Papa.parse(csv, papaConfig); // send the csv string to Papa for parsing
    };
}
I haven't used Papa Parse with workers before, but a few things pop up after playing with it for a bit:
It does not seem to expect you to interact directly with the worker
It expects you to either want the entire final result, or the individual items
Using a web worker makes providing a JS Fiddle infeasible, but here's some HTML that demonstrates the second point:
<html>
<head>
    <script src="papaparse.js"></script>
</head>
<body>
    <div id="step">
    </div>
    <div id="result">
    </div>
    <script type="application/javascript">
        var papaConfig = {
            header: true,
            worker: true,
            step: function (row) {
                var stepDiv = document.getElementById('step');
                stepDiv.appendChild(document.createTextNode('Step received: ' + JSON.stringify(row)));
                stepDiv.appendChild(document.createElement('hr'));
            },
            complete: function (result) {
                var resultDiv = document.getElementById('result');
                resultDiv.appendChild(document.createElement('hr'));
                resultDiv.appendChild(document.createTextNode('Complete received: ' + JSON.stringify(result)));
                resultDiv.appendChild(document.createElement('hr'));
            }
        };

        var data = 'Column 1,Column 2,Column 3,Column 4 \n\
1-1,1-2,1-3,1-4 \n\
2-1,2-2,2-3,2-4 \n\
3-1,3-2,3-3,3-4 \n\
4,5,6,7';

        Papa.parse(data, papaConfig);
    </script>
</body>
</html>
If you run this locally, you'll see you get a line for each of the four rows of the CSV data, but the call to the complete callback gets undefined. Something like:
Step received: {"data":[{"Column 1":"1-1",...
Step received: {"data":[{"Column 1":"2-1",...
Step received: {"data":[{"Column 1":"3-1",...
Step received: {"data":[{"Column 1":"4","...
Complete received: undefined
However if you remove or comment out the step function, you will get a single line for all four results:
Complete received: {"data":[{"Column 1":"1-1",...
Note also that Papa Parse uses a streaming concept to support the step callback regardless of using a worker or not. This means you won't know how many items you are parsing directly, so calculating the percent complete is not possible unless you can find the length of items separately.
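One further idea, as a minimal sketch (untested, and it assumes meta.cursor is populated in worker mode): even with worker: true, your step callback runs back on the main thread as chunks are posted over, so you may not need the hand-rolled worker at all. You can drive the <progress> element straight from step, comparing meta.cursor (the parser's character index into the input) against the File's size:

var file = document.getElementById("file").files[0];

Papa.parse(file, {
    header: true,
    worker: true,          // parsing happens off the main thread...
    step: function (row) {
        // ...but this callback runs on the main thread, so DOM updates
        // are fine here. cursor vs size is only approximate (characters
        // vs bytes), but close enough for a progress bar.
        var percent = Math.round(row.meta.cursor / file.size * 100);
        $("#progBar").val(percent);
        // ... collect your flot data here as before ...
    },
    complete: Done
});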
I don't even know how to properly ask this question, but I have concerns about the performance (mostly memory consumption) of the following code. I am anticipating that this code will consume a lot of memory because of map on a large set and a lot of 'hanging' functions that wait for an external service. Are my concerns justified here? What would be a better approach?
var list = fs.readFileSync('./mailinglist.txt', 'utf8') // say 1.000.000 records
    .split("\n")
    .map( processEntry );

var processEntry = function _processEntry(i){
    i = i.split('\t');
    getEmailBody( function(emailBody, name){
        var msg = {
            "message" : emailBody,
            "name" : i[0]
        }
        request(msg, function reqCb(err, result){
            ...
        });
    }); // getEmailBody
}

var getEmailBody = function _getEmailBody(obj, cb){
    // read email template from file;
    // v() returns the correct form for person's name with web-based service
    v(obj.name, function(v){
        cb(obj, v)
    });
}
If you're worried about submitting a million http requests in a very short time span (which you probably should be), you'll have to set up a buffer of some kind.
One simple way to do it:
var lines = fs.readFileSync('./mailinglist.txt', 'utf8').split("\n");
var entryIdx = 0;
var done = false;

var processNextEntry = function () {
    if (entryIdx < lines.length) {
        processEntry(lines[entryIdx++]);
    } else {
        done = true;
    }
};

var processEntry = function _processEntry(i){
    i = i.split('\t');
    getEmailBody( function(emailBody, name){
        var msg = {
            "message" : emailBody,
            "name" : name
        }
        request(msg, function reqCb(err, result){
            // ...
            !done && processNextEntry();
        });
    }); // getEmailBody
}

// getEmailBody didn't change

// you set the ball rolling by calling processNextEntry n times,
// where n is a sensible number of http requests to have pending at once.
for (var i = 0; i < 10; i++) processNextEntry();
Edit: according to this blog post, Node has an internal queue system; by default it only allows 5 simultaneous requests per host. But you can still use this method to avoid filling that internal queue with a million items if you're worried about memory consumption.
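For reference, the queue being described is the HTTP agent's socket pool, and it is tunable (the 5-socket default applies to Node versions of that era; later versions changed it):

var http = require('http');

// Raise the per-host cap on concurrent sockets (default was 5 back then).
http.globalAgent.maxSockets = 10;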
Firstly, I would advise against using readFileSync, and instead favour the async equivalent. Blocking on IO operations should be avoided, as reading from a disk is very expensive, and whilst that's the sole purpose of your code now, I would consider how that might change in the future; arbitrarily wasting clock cycles is never a good idea.
For large data files I would read them in defined chunks and process them. If you can come up with some scheme, either sentinels to distinguish data blocks within the file, or padding to boundaries, then process the file piece by piece.
This is just rough, untested off the top of my head, but something like:
var fs = require("fs");

function doMyCoolWork(startByteIndex, endByteIndex){
    fs.open("path to your text file", 'r', function(err, fd) {
        var chunkSize = endByteIndex - startByteIndex;
        var buffer = new Buffer(chunkSize);

        // Read chunkSize bytes starting at startByteIndex.
        fs.read(fd, buffer, 0, chunkSize, startByteIndex, function(err, byteCount) {
            var data = buffer.toString('utf-8', 0, byteCount);
            // process your data here

            if(stillWorkToDo){
                // recurse into the next chunk
                doMyCoolWork(endByteIndex, endByteIndex + 100);
            }
        });
    });
}
Or look into one of the stream library functions for similar functionality.
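As a minimal sketch of that streaming route (untested): fs.createReadStream plus the readline module hands you the file line by line without ever holding the whole thing in memory, which fits a one-entry-per-line mailing list nicely:

var fs = require("fs");
var readline = require("readline");

var rl = readline.createInterface({
    input: fs.createReadStream("./mailinglist.txt")
});

rl.on("line", function (line) {
    // one entry at a time; hand it to processEntry (or a buffered
    // variant like processNextEntry above) here
});

rl.on("close", function () {
    // whole file consumed
});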
HTH
ps. JavaScript and Node work extremely well with async and eventing. Using sync is an antipattern in my opinion, and likely to make the code a headache in future.
I am just getting started with coding for FirefoxOS and am trying to get a list of files in a directory.
The idea is to find the name of each file and add it to an array (which works), but I want to return the populated array, and this is where I come unstuck. It seems that the array gets populated during the function (as I can get it to spit out file names from it), but when I return it to another function it appears to be empty.
Here is the function in question:
function getImageFromDevice (){
    var imageHolder = new Array();
    var pics = navigator.getDeviceStorage('pictures');

    // Let's browse all the images available
    var cursor = pics.enumerate();
    var imageList = new Array();
    var count = 0;

    cursor.onsuccess = function () {
        var file = this.result;
        console.log("File found: " + file.name);
        count = count + 1;

        // Once we found a file we check if there are other results
        if (!this.done) {
            imageHolder[count] = file.name;
            // Then we move to the next result, which calls the cursor's
            // success with the next file as result.
            this.continue();
        }
        console.log("file in array: " + imageHolder[count]);
        // this shows the filename
    }

    cursor.onerror = function () {
        console.warn("No file found: " + this.error);
    }

    return imageHolder;
}
Thanks for your help!
Enumerating over pictures is an asynchronous call. Essentially what is happening in your code is this:
You are initiating an empty array.
You are telling Firefox OS to look for pictures on the device.
Then, in cursor.onsuccess, you are telling Firefox OS to append to the array you have created WHEN it gets the file back. The important thing here is that this does not happen right away; it happens at some point in the future.
Then you are returning the empty array you have created. It's empty because the onsuccess function hasn't actually happened yet.
At some point in the future the onsuccess function will be called. One way to wait until the array is fully populated would be to add a check after:
if (!this.done) {
    imageHolder[count] = file.name;
    this.continue();
}
else {
    // do something with the fully populated array
}
But then of course your code has to go inside the getImageFromDevice function. You can also pass a callback function into the getImageFromDevice function.
See Getting a better understanding of callback functions in JavaScript
The problem is with the asynchronous nature of the calls you are using.
You are returning (and probably using) the value of imageHolder while it's still empty: calls to the onsuccess function are deferred, so they happen later in time, whereas your function returns immediately with the (yet empty) imageHolder value.
You should be doing in this case something along those lines:
function getImageFromDevice (callback){
    ...
    cursor.onsuccess = function () {
        ...
        if (!this.done) {
            // next picture
            imageHolder[count] = file.name;
            this.continue();
        } else {
            // no more pictures, return with the results
            console.log("operation finished:");
            callback(imageHolder);
        }
    }
}
Or use Promises in your code to accomplish the same.
Use the above like so, e.g.:
getImageFromDevice(function(result) {
    console.log(result.length + " pictures found!");
});
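And if you would rather take the Promise route mentioned above, here is a minimal sketch (assuming the runtime provides native Promises or a polyfill):

function getImageFromDevice() {
    return new Promise(function (resolve, reject) {
        var imageHolder = [];
        var cursor = navigator.getDeviceStorage('pictures').enumerate();

        cursor.onsuccess = function () {
            if (!this.done) {
                imageHolder.push(this.result.name);
                this.continue();
            } else {
                // no more results: settle the promise with the full array
                resolve(imageHolder);
            }
        };
        cursor.onerror = function () {
            reject(this.error);
        };
    });
}

getImageFromDevice().then(function (result) {
    console.log(result.length + " pictures found!");
});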
I'm trying to grab all the URLs of my Facebook photos.
I first load the "albums" array with the album id's.
Then I loop through the albums and load the "pictures" array with the photos URLs.
(I see this in Chrome's JS debugger).
But when the code gets to the last statement ("return pictures"), "pictures" is empty.
How should I fix this?
I sense that I should use a closure, but I'm not entirely sure how best to do that.
Thanks.
function getMyPhotos() {
    FB.api('/me/albums', function(response) {
        var data = response.data;
        var albums = [];
        var link;
        var pictures = [];

        // get selected albums id's
        $.each(data, function(key, value) {
            if ((value.name == 'Wall Photos')) {
                albums.push(value.id);
            }
        });

        console.log('albums');
        console.log(albums);

        // get the photos from those albums
        $.each(albums, function(key, value) {
            FB.api('/' + value + '/photos', function(resp) {
                $.each(resp.data, function(k, val) {
                    link = val.images[3].source;
                    pictures.push(link);
                });
            });
        });

        console.log('pictures');
        console.log(pictures);

        return pictures;
    });
}
You're thinking about your problem procedurally. However, this logic fails anytime you work with asynchronous requests. I expect what you originally tried to do looked something like this:
var pictures = getMyPhotos();
for (var i = 0; i < pictures.length; i++) {
    // do something with each picture
}
But that doesn't work, since the value of 'pictures' is actually undefined (undefined being the default return value of any function without an explicit return statement, which is what your getMyPhotos is).
Instead, you want to do something like this:
function getMyPhotos(callback) {
    FB.api('/me/albums', function (response) {
        // process response data to get a list of pictures, as you have
        // already shown in your example

        // instead of 'returning' pictures,
        // we just call the method that should handle the result
        callback(pictures);
    });
}
// This is the function that actually does the work with your pictures
function oncePhotosReceived(pictures){
    for (var i = 0; i < pictures.length; i++) {
        // do something with each picture
    }
};
// Request the picture data, and give it oncePhotosReceived as a callback.
// This basically lets you say 'hey, once I get my data back, call this function'
getMyPhotos(oncePhotosReceived);
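One wrinkle the sketch above glosses over: your pictures array is filled by several nested FB.api calls (one per album), so the callback must only fire after all of them have answered. A hedged sketch with a simple pending counter, reusing the field names from your code:

function getMyPhotos(callback) {
    FB.api('/me/albums', function (response) {
        var pictures = [];
        var albums = [];

        $.each(response.data, function (key, value) {
            if (value.name == 'Wall Photos') {
                albums.push(value.id);
            }
        });

        var pending = albums.length;
        $.each(albums, function (key, value) {
            FB.api('/' + value + '/photos', function (resp) {
                $.each(resp.data, function (k, val) {
                    pictures.push(val.images[3].source);
                });
                // only fire the callback once the last album has answered
                if (--pending === 0) {
                    callback(pictures);
                }
            });
        });
    });
}

If albums can come back empty, you would also want to call callback(pictures) immediately in that case, since no inner responses would ever arrive.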
I highly recommend you scrounge around SO for more questions/answers about AJAX callbacks and asynchronous JavaScript programming.
EDIT:
If you want to keep the result of the FB api call handy for other code to use, you can set the return value onto a 'global' variable in the window:
function getMyPhotos() {
    FB.api('/me/albums', function (response) {
        // process response data to get a list of pictures, as you have
        // already shown in your example

        // instead of 'returning' pictures,
        // we stash the result on a global
        window.pictures = pictures;
    });
}
You can now use the global variable 'pictures' (or, explicitly using window.pictures) anywhere you want. The catch, of course, being that you have to call getMyPhotos first, and wait for the response to complete before they are available. No need for localStorage.
As mentioned in the comments, asynchronous code is like Hotel California: you can check out any time you like, but you can never leave.
Have you noticed how FB.api does not return a value,
//This is NOT how it works:
var result = FB.api('me/albums')
but instead receives a continuation function and passes its results on to it?
FB.api('me/albums', function(result){
Turns out you need to have a similar arrangement for your getMyPhotos function:
function getMyPhotos(onPhotos){
    // fetches the photos and calls onPhotos with the
    // result when done
    FB.api('my/pictures', function(response){
        var pictures = // yada yada
        onPhotos(pictures);
    });
}
Of course, the continuation-passing style is contagious so you now need to call
getMyPhotos(function(pictures){
instead of
var pictures = getMyPhotos();