Meteor: wait until file is generated

I do a Meteor.call() synchronously (without callbacks), which downloads from a location and generates a file on the server successfully, and then Meteor.Router.to('/file/generated.just.now');
However, sometimes the file takes a few extra seconds to generate, and I redirect to the file before it exists.
I've tried to use Futures and Fibers, but I'm not sure whether this can achieve blocking (waiting until the file has finished being written):
if (Meteor.isServer) {
  var request = Npm.require('request');
  var fs = Npm.require('fs');
  var Future = Npm.require('fibers/future'), wait = Future.wait;
  var Fiber = Npm.require('fibers');

  var result = function () {
    downloadAndSaveFile(content.pdf, filename).wait();
  }.future();

  function downloadAndSaveFile(fileUrl, fileName) {
    var future = new Future();
    // write streams emit 'finish' (not 'closed') once all data is flushed
    request(fileUrl).pipe(fs.createWriteStream(getPath() + fileName)).on('finish', function () {
      future.return();
    });
    return future;
  }
}

Meteor's router .to function is client-side only, used to invoke the routing callbacks. It doesn't tell the browser to physically redirect; it only swaps out the DOM to reflect the new page according to your templates and routes.
If you want to redirect, you should use
window.location = 'newurl';
or create a link for the user to click from the .call callback.
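A minimal sketch of that flow, assuming a server method named 'generateFile' that returns the generated file's path (both names are illustrative). The Meteor object below is a tiny stand-in so the pattern is self-contained; in a real app the global Meteor provides call:

```javascript
// Stand-in for the real Meteor global; in an app, Meteor.call performs
// the server round-trip and invokes the callback when the method returns.
var Meteor = {
  call: function (name, cb) {
    // pretend the server generated the file and returned its path
    cb(null, '/file/generated.just.now');
  }
};

var currentLocation = null; // stand-in for window.location

// Redirect only once the server confirms the file exists.
Meteor.call('generateFile', function (err, path) {
  if (err) throw err;
  currentLocation = path; // in the browser: window.location = path;
});
```

Because the callback fires only after the method has finished writing the file, the redirect can no longer race the file generation.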

How do I visit a list of urls one after the other?

I've been trying to create an automatic way to load URL after URL sequentially and save the resource found at each URL to a folder.
JDownloader can't seem to notice the resource at the URL, so I've tried various JavaScript options.
<script>
  var i = 100;
  function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
  async function demo() {
    while (i < 330) {
      window.location = "https://ia601708.us.archive.org/BookReader/BookReaderImages.php?zip=/10/items/sexualsuicide00gild/sexualsuicide00gild_jp2.zip&file=sexualsuicide00gild_jp2/sexualsuicide00gild_0"+i+".jp2&scale=1&rotate=0";
      console.log('Taking a break...');
      await sleep(5000);
      console.log('Two seconds later');
      i++;
    }
  }
  demo();
</script>
and
<script>
  var i = 100;
  while (i < 330) {
    window.location = "https://ia601708.us.archive.org/BookReader/BookReaderImages.php?zip=/10/items/sexualsuicide00gild/sexualsuicide00gild_jp2.zip&file=sexualsuicide00gild_jp2/sexualsuicide00gild_0"+i+".jp2&scale=1&rotate=0";
    $(window).bind("load", function() {
      i++;
    });
  }
</script>
I thought I'd be able to just loop url requests, iterate the url names by one, load the resource, then automatically load the next url in the sequence and then just later save the accumulated cache. But no, all the loops I've tried just freeze my browser. I'm surprised such a simple task is so difficult.
You will need to use fetch() and then parse the HTML response using DOMParser or use XMLHttpRequest to get the DOM object for the pages you're scraping. Then you can use query selectors to find the next URL you want to scrape and save the current one (or any external reference from it) as file blobs.
Depending on the target host, you might run into CORS restrictions that would prevent you from accessing the response contents. For this and other reasons it's more common to use Node.js for writing scrapers, because they're not restricted by CORS policies and you have direct access to the file system for storage.
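A minimal Node.js sketch of that approach, assuming Node 18+ so fetch is available globally. The URL pattern and file names are illustrative; fetchImpl and save are injectable so the sequential loop can be exercised without touching the network or disk:

```javascript
// Download pages one at a time: each request is awaited before the next
// one starts, which is what the window.location loop could never do.
async function downloadRange(makeUrl, from, to, fetchImpl, save) {
  for (var i = from; i < to; i++) {
    var res = await fetchImpl(makeUrl(i));          // wait for this page
    if (!res.ok) throw new Error('HTTP error for ' + makeUrl(i));
    var buf = Buffer.from(await res.arrayBuffer()); // binary payload
    save(i, buf);                                   // e.g. fs.writeFileSync
  }
}
```

With the real fetch and fs, a call might look like downloadRange(i => baseUrl + i + suffix, 100, 330, fetch, (i, buf) => fs.writeFileSync('page_' + i + '.jp2', buf)).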

JS Web worker share the same scripts in multiple workers

I have multiple inline workers that I need to run. My problem is that every instance needs to share the same scripts that are imported. But on every instance that I created, it redownloads the scripts.
function workerFn() {
  importScripts('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.js');
  // Each time this function will do something else
  postMessage(_.head([1, 2, 3]));
}
var code = workerFn.toString();
code = code.substring(code.indexOf("{") + 1, code.lastIndexOf("}"));
var blob = new Blob([code], { type: "application/javascript" });
var worker = new Worker(URL.createObjectURL(blob));
var worker2 = new Worker(URL.createObjectURL(blob));
worker.onmessage = function (m) {};
worker2.onmessage = function (m) {};
So, in the above example, lodash will download twice. How can I prevent this?
Short answer:
Unfortunately, you cannot. It's up to the browser whether it downloads the scripts or not. I assume this does not even happen in every browser.
Another short answer:
The only way to avoid the browser re-downloading the script is to either a) have it compiled into your code (e.g. with webpack) or b) download it via AJAX, assuming the CDN allows cross-origin requests.
Neither of these is something I would recommend for production code.
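If you do go the AJAX route, the idea is to fetch the library source once and embed it at the top of each worker blob, so importScripts never runs. A sketch under that assumption; the helper below is pure string manipulation, and the browser-only Blob/Worker step is described after it:

```javascript
// Combine a library's source with a worker function's body so the
// resulting blob needs no importScripts() call at all.
function buildWorkerSource(librarySource, workerFn) {
  var body = workerFn.toString();
  var inner = body.substring(body.indexOf('{') + 1, body.lastIndexOf('}'));
  return librarySource + '\n' + inner; // library first, then worker code
}
```

In the browser you would fetch(cdnUrl).then(r => r.text()), pass the text to buildWorkerSource, wrap the result in a Blob, and create both workers from one URL.createObjectURL(blob); the library is then downloaded exactly once, provided the CDN sends CORS headers (the big public CDNs generally do).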

How to make browser save a remote file and get informed by a callback with Javascript/jQuery

I have encountered a problem with my JavaScript/jQuery code.
I want to make a piece of code which fulfills the following requirements:
1. Make the browser save my remote binary file, let's say http://192.168.0.100/system/diagdata
2. Since preparing the file on the server side takes some time (usually around 40s), I need a callback to let me know when the data is ready to download (the file itself is very small, so let's ignore the actual transmission time), so that I can display some kind of loading page telling the user the download is in progress.
At first, I made a piece of code like this without a callback:
var elemIF = document.createElement("iframe");
elemIF.src = 'http://192.168.0.100/system/diagdata';
elemIF.style.display = "none";
document.body.appendChild(elemIF);
It works well (but without a callback).
Then, in order to make a callback possible, I added some code like this:
var deferred = jQuery.Deferred();
var elemIF = document.createElement("iframe");
elemIF.src = 'http://192.168.0.100/system/diagdata';
elemIF.style.display = "none";
document.body.appendChild(elemIF);
elemIF.defer = 'defer';
if (window.ActiveXObject) { // IE
  elemIF.onreadystatechange = function () {
    if (elemIF.readyState == 'loaded'
        || elemIF.readyState == 'complete') {
    }
  };
} else { // Chrome, Safari, Firefox
  elemIF.onload = function () {
    alert("onload");
  };
  elemIF.onerror = function (e) {
    alert("onerror");
  };
}
deferred.promise();
After I ran this piece of code, "onload" was called, but the browser did not save the file "diagdata"; instead it tried to load it and reported a parsing error.
Does anyone have an alternative solution that not only makes the browser save the binary file but also provides a callback to report when the data is ready?
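One hedged alternative, assuming the server endpoint simply blocks until the file is ready (as the ~40s delay suggests): request the file with fetch(), whose promise settles only when the response arrives, then hand the bytes to the browser as a download. fetchImpl is injectable here so the flow can be tested; the actual save step is browser-only and described below:

```javascript
// Resolve with the binary payload once the server has finished preparing
// it; the caller can show a loading indicator while the promise is pending.
async function downloadWhenReady(url, fetchImpl) {
  var res = await fetchImpl(url);   // settles when the data is ready
  if (!res.ok) throw new Error('HTTP ' + res.status);
  return res.blob();                // binary file contents
}
```

In the browser: show the loading page, call downloadWhenReady('http://192.168.0.100/system/diagdata', fetch), and in the .then handler hide the loading page, create an object URL from the blob, and click a temporary <a download="diagdata"> element to save it.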

Execute an external .js file using selenium webdriver

How can we execute an external .js file using Selenium WebDriver in Java? I found the reference "Call external javascript functions from java code", but its invoke function only accepts a function inside that file; I want to execute the whole file at once.
It's as simple as this to run an external JavaScript from your server upon the client:
// Assume Guava, but whatever you use to load files...
String externalJS = Files.toString( new File("external.js"), Charset.forName("utf-8"));
// Execute, assume no arguments, and no value to return
Object ignore = ((JavascriptExecutor) driver).executeScript(externalJS);
The link you provided isn't useful, because it's about executing JavaScript upon the server (within the Java VM) rather than upon the browser/device client.
If rather than executing, you're interested in injecting JavaScript into the page for other scripts etc. to interact with (i.e. rather than a one-off execution), see this question.
Here is Node.js code that loads an external JS file and executes a function within it:
var fs = require('fs');
var webdriver = require('selenium-webdriver'),
    By = webdriver.By,
    until = webdriver.until;

var driver = new webdriver.Builder()
    .forBrowser('phantomjs')
    .build();

var axeSource = fs.readFileSync('lib/axe.js', 'utf8');

driver.get('http://www.google.com/ncr');
driver.executeScript(axeSource)
    .then(function () {
        driver.switchTo().defaultContent();
        driver.executeAsyncScript(function () {
            var callback = arguments[arguments.length - 1];
            window.axe.a11yCheck(document, null, function (results) {
                callback(results);
            });
        }).then(function (str) {
            var viola = processResults(str);
            console.log(viola);
        });
    });
driver.quit();

Why is this node.js code blocking?

I tried to play a bit with node.js and wrote the following code (it doesn't make sense, but that doesn't matter):
var http = require("http"),
    sys = require("sys");

sys.puts("Starting...");

var gRes = null;
var cnt = 0;

var srv = http.createServer(function (req, res) {
  res.writeHeader(200, {"Content-Type": "text/plain"});
  gRes = res;
  setTimeout(output, 1000);
  cnt = 0;
}).listen(81);

function output() {
  gRes.write("Hello World!");
  cnt++;
  if (cnt < 10)
    setTimeout(output, 1000);
  else
    gRes.end();
}
I know that there are some bad things in it (like using gRes globally), but my question is: why does this code block a second request until the first has completed?
If I open the URL, it starts writing "Hello World" ten times. But if I open it simultaneously in a second tab, one tab waits to connect until the other tab has finished writing "Hello World" ten times.
I found nothing which could explain this behaviour.
Surely it's your overwriting of the gRes and cnt variables being used by the first request that's doing it?
[EDIT actually, Chrome won't send two at once, as Shadow Wizard said, but the code as is is seriously broken because each new request will reset the counter, and outstanding requests will never get closed].
Instead of using a global, wrap your output function as a closure within the createServer callback. Then it'll have access to the local res variable at all times.
This code works for me:
var http = require("http"),
    sys = require("sys");

sys.puts("Starting...");

var srv = http.createServer(function (req, res) {
  res.writeHeader(200, {"Content-Type": "text/plain"});
  var cnt = 0;
  var output = function () {
    res.write("Hello World!\n");
    if (++cnt < 10) {
      setTimeout(output, 1000);
    } else {
      res.end();
    }
  };
  output();
}).listen(81);
Note however that the browser won't render anything until the connection has closed because the relevant headers that tell it to display as it's downloading aren't there. I tested the above using telnet.
I'm not familiar with node.js, but I am familiar with server-side languages in general: when a browser sends a request to the server, the server creates a session for that request, and any additional requests from the same browser (within the session lifetime) are treated as the same session.
Probably by design, and for good reason, requests from the same session are handled sequentially, one after the other: only after the server finishes handling one request will it start handling the next.
