JS Web Workers: share the same scripts across multiple workers - javascript

I have multiple inline workers that I need to run. My problem is that every instance needs to share the same imported scripts, but every instance I create re-downloads them.
function workerFn() {
    importScripts('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.js');
    // Each time this function will do something else
    postMessage(_.head([1, 2, 3]));
}

var code = workerFn.toString();
code = code.substring(code.indexOf("{") + 1, code.lastIndexOf("}"));

var blob = new Blob([code], {type: "application/javascript"});
var worker = new Worker(URL.createObjectURL(blob));
var worker2 = new Worker(URL.createObjectURL(blob));
worker.onmessage = function (m) {};
worker2.onmessage = function (m) {};
So, in the above example, lodash will be downloaded twice. How can I prevent this?

Short answer:
Unfortunately, you cannot. It's up to the browser whether it re-downloads the scripts or serves them from its cache, and I suspect the re-download does not even happen in every browser.
Another short answer:
The only way to avoid the browser re-downloading the script would be to either a) have it bundled into your code (e.g. with webpack) or b) download it once via AJAX, assuming the CDN allows cross-origin requests.
Neither of these is something I would recommend for production code.
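For what it's worth, here is a minimal sketch of option (b), assuming the CDN sends permissive CORS headers (an illustration, not the answerer's code): fetch the library source once, inline it into the worker body, and reuse a single blob URL for both workers.
fetch('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.js')
    .then(function (res) { return res.text(); })
    .then(function (lodashSrc) {
        // inline the library source so importScripts() is no longer needed in each worker
        var body = lodashSrc + ';\npostMessage(_.head([1, 2, 3]));';
        var blobUrl = URL.createObjectURL(new Blob([body], {type: "application/javascript"}));
        var workerA = new Worker(blobUrl);
        var workerB = new Worker(blobUrl); // same blob, no second request for lodash
        workerA.onmessage = workerB.onmessage = function (m) { console.log(m.data); };
    });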

Related

Google Apps Script does not fully execute for other users after V8 was introduced

I wrote a script (with a lot of assistance from the good folks here) that copies a folder (and contents) on Google Drive using Google Sheets Scripts.
It worked fine for a long time, but then I enabled the V8 engine (it is disabled now). The problem is, it still works for me (and maybe two other users) but does not work for everyone else. I'm not a programmer, but I learned enough to help me automate some tasks in Excel/Sheets.
So far, I've tried rechecking all the permissions, creating a brand new sheet, assigning new owners, removing triggers, and learning more about V8. But none of that has helped, because I can't even figure out what the problem is.
I would appreciate any leads. TIA
PS: We're using shared drives and the Source/Target folder are accessible to all users.
Here's the script:
function onClick() {
    ss.getRange("B2:B8").clearContent();
}

function start() {
    var sourceFolder = ss.getRange("B19").getValue(); // Change every month
    var targetFolder = ss.getRange("B22").getValue();
    var source = DriveApp.getFoldersByName(sourceFolder); // Grab the folder we're going to copy
    var parentFolder = DriveApp.getFolderById(ss.getRange("B11").getValue()); // Destination for the new folder.
    var target = parentFolder.createFolder(targetFolder);
    if (source.hasNext()) {
        copyFolder(source.next(), target);
    }
}

function copyFolder(source, target) {
    var folders = source.getFolders();
    var files = source.getFiles();
    var prefix = ss.getRange("B23").getValue();
    while (files.hasNext()) {
        var file = files.next();
        file.makeCopy(file.getName(), target).setName(prefix + file.getName());
    }
    while (folders.hasNext()) {
        var subFolder = folders.next();
        var folderName = subFolder.getName();
        var targetFolder = target.createFolder(folderName);
        copyFolder(subFolder, targetFolder);
        var newFolderUrl = target.getUrl();
        SpreadsheetApp.getActiveSheet().getRange('B8').setValue(newFolderUrl);
    }
    //file.setName(prefix + file.getName());
}
Since you are not getting any logs for the users the script doesn't work for, the issue is most likely that your functions are not being executed properly, or at all.
One likely cause is that the script is not attached to the spreadsheet because the ss variable is never declared. Try declaring ss inside the start function. Another possible cause of the behavior you describe is passing the wrong values to your functions; you can check that with console.log() and verify the variables hold what you expect.
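As a minimal sketch of that declaration (the sheet name 'Sheet1' is a placeholder, not something from the asker's file):
function start() {
    var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1'); // declare ss here instead of relying on a global
    console.log(ss.getRange("B19").getValue()); // sanity-check that the value is what you expect
    // ... rest of start() as shown above
}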
Moreover, since you share this script with multiple users, you might want to look into Editor add-ons. They can make sharing easier, as the users will only need to install the add-on.
Reference
Apps Script Troubleshooting;
Editor add-ons.
Through some trial and error, I've found that all someone had to do to run this script successfully was to open the SourceFolder once. It may be because the script looks the folder up by name.
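If the by-name lookup is indeed the fragile part, a hypothetical alternative (not what the asker actually did) is to look the source folder up by its ID, which does not depend on each user having opened the folder first; the cell "B20" holding that ID is purely an assumption for illustration:
function startById() {
    var ss = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
    var source = DriveApp.getFolderById(ss.getRange("B20").getValue()); // folder ID instead of folder name
    var parentFolder = DriveApp.getFolderById(ss.getRange("B11").getValue());
    var target = parentFolder.createFolder(ss.getRange("B22").getValue());
    copyFolder(source, target); // reuses copyFolder() from the question
}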

Meteor: wait until file is generated

I do a Meteor.call() synchronously (without callbacks), which downloads from a location and generates a file on the server successfully, and then Meteor.Router.to('/file/generated.just.now');
However, sometimes the file takes a few extra seconds to generate and I redirect to the file before it exists.
I've tried to use Futures and Fibers, but I'm not sure whether this can achieve blocking (waiting until the file has finished being written):
if (Meteor.isServer) {
    var request = Npm.require('request');
    var fs = Npm.require('fs');
    var Future = Npm.require('fibers/future'), wait = Future.wait;
    Fiber = Npm.require('fibers');

    var result = function () {
        downloadAndSaveFile(content.pdf, filename).wait();
    }.future();

    function downloadAndSaveFile(fileUrl, fileName) {
        var future = new Future;
        request(fileUrl).pipe(fs.createWriteStream(getPath() + fileName)).on('closed', function () {
            future.return();
        });
        return future;
    }
}
Meteor's router .to function is client-side only and is used to invoke the routing callbacks. It doesn't tell the browser to physically redirect; it only swaps out the DOM to reflect the new page according to your templates and routes.
If you want to redirect you should use
window.location = 'newurl';
Or provide a link for the user to click, created from the .call callback.
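For example, a minimal client-side sketch, assuming the server method is called 'generateFile' (a hypothetical name) and only returns once the file exists on disk:
Meteor.call('generateFile', content, filename, function (error, result) {
    if (!error) {
        window.location = '/file/generated.just.now'; // redirect only after the server is done
    }
});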

JavaScript: Writing an output file within a limited & secure scenario

I would like to add a function in my javascript to write to a text file in the local directory where the javascript file is located. This means I'm not looking for some insecure way of accessing the user's file system in any way. All I care about is extracting the user's input into an html page that is accessed by my javascript then using that input as data externally. I just need a simple text file. This user input isn't actually text by the way, but rather a bunch of actions using my online game's components that the underlying javascript turns into a text string (so this particular string is what I want to save, not really even anything direct from the user).
I don't want to write to a user's file system, but rather, the file where the javascript (and html) code is located (a folder hosted on a server). Is there any simple way to get some file I/O going?
I know JavaScript has a FileReader; is there any way to get it to do this in reverse, like a FileWriter? Google Closure looks like it has a FileWriter, but it doesn't seem to quite work, and I can't find any decent examples of how to get it to do this.
If this requires a different language, is there any way I can just get the relevant snippet and insert this into my Javascript file?
(the folder is hosted on a Linux system if that helps)
ADDENDUM: Elias Van Ootegem's solution below is excellent and I would highly recommend looking into it as it's a great example of client-server interaction and getting your system to provide you the data you're looking to extract. Workers are pretty interesting.
But for those of you looking at this post with the similar question that I initially had about JavaScript I/O, I found one other workaround depending on your case. My team's project site made use of a database service, MongoDB, that stored some of the user's interaction data if the user had hit a "Save" button. MongoDB, and other online database systems, provide a "dumping" function/script that you can call from your local machine/server to put that data into an output file (I was able to put the JSON data into a text file). From that output, you can write a parser to extract and format the data you want, since databases like MongoDB are pretty clear about the format the text will be in (very structured and organized). I wrote a parser in C (with a few libraries I had written to extend the language) to do what I needed, but the idea is pretty generalizable to other programming/scripting languages.
I did look at leaving cookies as an option as well, and made use of a test program to try it out (it works too!). However, one tradeoff of leaving cookies on a user's local system is that those cookies are generally meant to hold small amounts of data (usually things like username, date created, and expiration date of the cookie) and are dependent on the user's local machine. Further, while you can extract the data in those cookies from JavaScript, you are back to the initial problem: the data still exists on the web, not in an output file on your server's file system. If you need to extract data and want some guarantee that it will exist on your machine, use Elias Van Ootegem's solution.
JavaScript code that is running client-side cannot access the server's file system, let alone write a file there. People often point out that, if client-side JS had such IO capabilities, it would be rather insecure... just imagine how dangerous that would be.
What you could do is simply build your string using a Worker that, on closing, returns the full data string, which is then sent to the server in an AJAX call.
The server-side script (Perl, PHP, .NET, Ruby...) can receive this data, parse it and then write the file to disk as you want it to.
All in all, not very hard, but quite an interesting project anyway. Oh, and when using a worker, seeing as it's an online game and everything, perhaps a setInterval to send (a part of) the data every 5000ms might not be a bad idea, either.
As requested - some basic code snippets.
A simple AJAX-setup function:
function getAjax(url, method, callback)
{
    var ret;
    method = method || 'POST';
    url = url || 'default.php';
    callback = callback || success; //assuming you have a default function called "success"
    try
    {
        ret = new XMLHttpRequest();
    }
    catch (error)
    {
        try
        {
            ret = new ActiveXObject('Msxml2.XMLHTTP');
        }
        catch (error)
        {
            try
            {
                ret = new ActiveXObject('Microsoft.XMLHTTP');
            }
            catch (error)
            {
                throw new Error('no Ajax support?');
            }
        }
    }
    ret.open(method, url, true);
    ret.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    ret.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    ret.onreadystatechange = callback;
    return ret;
}
var getRequest = getAjax('script.php?some=Get&params=inURL', 'GET');
getRequest.send(null);

var postRequest = getAjax('script.php', 'POST', function()
{//passing an anonymous function here, but this could just as well have been a named function reference, obviously...
    if (this.readyState === 4 && this.status === 200)
    {
        console.log('Post request complete, answer was: ' + this.response);
    }
});
postRequest.send('foo=bar'); //set different headers to post JSON.stringified data
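To illustrate that last comment, here is a hedged sketch of posting JSON.stringified data with a plain XMLHttpRequest (getAjax() above already sets a form-encoded Content-type, so the JSON variant is easier to write directly; 'script.php' is just the same placeholder endpoint):
var jsonRequest = new XMLHttpRequest();
jsonRequest.open('POST', 'script.php', true);
jsonRequest.setRequestHeader('Content-type', 'application/json');
jsonRequest.onreadystatechange = function()
{
    if (this.readyState === 4 && this.status === 200)
    {
        console.log('JSON post complete, answer was: ' + this.response);
    }
};
jsonRequest.send(JSON.stringify({foo: 'bar'}));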
Here's a good place to read up on whatever you don't get from the code above. This is pretty much a copy-paste bit of code, but if you find yourself wanting to learn just a bit more, here's a great place to do just that.
WebWorkers
Now these are pretty new, so using them does mean not being able to support older browsers (you could support them by using event listeners to send each morsel of data to the server, but a worker allows you to bundle, pre-process and structure the data without blocking the "normal" flow of your script). Workers are often presented as a means to sort-of multi-thread JavaScript code. Here's a good intro to them.
Basically, you'll need to add something like this to your script:
var worker = new Worker('preprocess.js'); //or whatever you've called the worker
worker.addEventListener('message', function(e)
{
    var xhr = getAjax('script.php', 'post'); //using default callback
    xhr.send('data=' + e.data);
    //worker.postMessage(null); //clear state
}, false);
Your worker, then, could start off like so:
var time, txt = '';
//entry point:
onmessage = function(e)
{
    if (e.data === null)
    {
        clearInterval(time);
        txt = '';
        return;
    }
    if (txt === '' && !time)
    {
        time = setInterval(function()
        {
            postMessage(txt);
        }, 5000); //set postMessage to be called every 5 seconds
    }
    txt += e.data; //add new text to the current string...
};
Server-side, things couldn't be easier:
if ($_POST && $_POST['data'])
{
    $file = $_SESSION['filename'] ? $_SESSION['filename'] : 'File'.session_id();
    $fh = fopen($file, 'a+');
    fwrite($fh, $_POST['data']);
    fclose($fh);
}
echo 'ok';
Now all of this code is a bit crude, and most of it cannot be used in its current form, but it should be enough to get you started. If you don't know what something is, google it.
But do keep in mind that, when it comes to JS, MDN is easily the best reference out there, and as far as PHP goes, their own site (php.net/{functionName}) is pretty ugly, but does contain a lot of info, too...

How can I list websites on IIS7, from script, without using IIS6 compat pack (WMI veneer)

On IIS6, I can use WMI to list available websites, like this:
var iis = GetObject("winmgmts://localhost/root/MicrosoftIISv2");
var query = "SELECT * FROM IIsWebServerSetting";

// get the list of virtual servers
var results = iis.ExecQuery(query);
for (var e = new Enumerator(results); !e.atEnd(); e.moveNext()) {
    var site = e.item();
    // site.Name                   // W3SVC/1, W3SVC/12378398, etc
    // site.Name.substr(6)         // 1, 12378398, etc
    // site.ServerComment          // "Default Web Site", "Site2", etc
    // site.ServerBindings(0).Port // 80, 8080, etc
}
I know I can run this script on IIS7, if I have previously installed the IIS6 Compatibility Pack.
Is it possible to get the list of WebSites without requiring the compatibility pack as a pre-requisite?
I know I can run AppCmd to do this from the command line:
\Windows\system32\inetsrv\appcmd list sites
But... can I run that from a custom action in an MSI?
And... if not, how can I do the equivalent thing (list websites on IIS7) from javascript?
EDIT
Here's how I tried running the command from within JavaScript.
function GetWebSites_IIS7()
{
    var ParseOneLine = function(oneLine) {
        ...a bunch of regex parsing here....
    };

    LogMessage("GetWebSites_IIS7() ENTER");

    var shell = new ActiveXObject("WScript.Shell");
    var windir = shell.Environment("system")("windir");
    // aka Session.Property("%WINDIR%")
    var appcmd = windir + "\\system32\\inetsrv\\appcmd.exe list sites";
    var oExec = shell.Exec(appcmd);
    var sites = [];
    while (!oExec.StdOut.AtEndOfStream) {
        var oneLine = oExec.StdOut.ReadLine();
        var line = ParseOneLine(oneLine);
        LogMessage("  site: " + line.name);
        sites.push(line);
    }
    return sites;
}
This works, but it briefly pops a visible console window, which then disappears. Doesn't look very polished. I think I can avoid the console window by using shell.Run() instead of shell.Exec(). But shell.Run() doesn't give access to the stdout, so I would have to redirect the output to a temporary file, then read the output. I haven't tried that yet. That may introduce some security issues; I'll have to see.
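In case it helps, here is a hedged sketch of that shell.Run() approach (untested, and the temp-file name is an arbitrary choice): run appcmd with a hidden window, redirect stdout to a temporary file, then read the file back with FileSystemObject.
function GetWebSites_IIS7_NoConsole()
{
    var shell = new ActiveXObject("WScript.Shell");
    var fso = new ActiveXObject("Scripting.FileSystemObject");
    var windir = shell.ExpandEnvironmentStrings("%WINDIR%");
    var tmpFile = fso.GetSpecialFolder(2).Path + "\\appcmd-sites.txt"; // 2 = TemporaryFolder
    // The doubled quotes around the whole command keep cmd.exe from stripping the inner quotes.
    var cmd = 'cmd /c ""' + windir + '\\system32\\inetsrv\\appcmd.exe" list sites > "' + tmpFile + '""';
    shell.Run(cmd, 0, true); // 0 = hidden window, true = wait for the command to finish
    var stream = fso.OpenTextFile(tmpFile, 1); // 1 = ForReading
    var lines = [];
    while (!stream.AtEndOfStream) {
        lines.push(stream.ReadLine()); // each line still needs the same regex parsing as ParseOneLine above
    }
    stream.Close();
    fso.DeleteFile(tmpFile);
    return lines;
}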
Related:
Where and how should my CustomAction create and read a temporary file?
Yes, you can run appcmd from the custom action the same way you run any custom action that launches an exe. First, author DirectorySearch/FileSearch elements to find the full path to the executable. Next, add a custom action with an ExeCommand attribute. You're probably trying to get feedback from a user, so leave it immediate. Also, think about using QuietExec so that you don't show a console window to your users.
By the way, if my guess is correct, you're trying to do something like this. Hope this helps.

Using JavaScript to perform a GET request without AJAX

Out of curiosity, I'm wondering about the best (easiest, fastest, shortest, etc; make your pick) way to perform a GET request in JavaScript without using AJAX or any external libraries.
It must work cross-browser, and it's not allowed to distort the hosting web page visually or affect its functionality in any way.
I don't care about headers in the request, just the url-part. I also don't care about the result of the request. I just want the server to do something as a side effect when it receives this request, so firing it is all that matters. If your solution requires the servers to return something in particular, that's ok as well.
I'll post my own suggestion as a possible answer, but I would love it if someone could find a better way!
Have you tried using an Image object? Something like:
var req = new Image();
req.onload = function() {
    // Probably not required if you're only interested in
    // making the request and don't need a callback function
};
req.src = 'http://example.com/foo/bar';
function GET(url) {
    var head = document.getElementsByTagName('head')[0];
    var n = document.createElement('script');
    n.src = url;
    n.type = 'text/javascript';
    n.onload = function() { // this is not really mandatory, but removes the tag when finished.
        head.removeChild(n);
    };
    head.appendChild(n);
}
I would go with Pekka's idea and use a hidden iframe; the advantage is that no further parsing of the response is done. With an image, the browser will try to parse the result as an image; with a dynamically created script tag, the browser will try to parse the result as JavaScript code. An iframe is "hit and run": the browser doesn't care what's in there.
Changing your own solution a bit:
function GET(url) {
    var oFrame = document.getElementById("MyAjaxFrame");
    if (!oFrame) {
        oFrame = document.createElement("iframe");
        oFrame.style.display = "none";
        oFrame.id = "MyAjaxFrame";
        document.body.appendChild(oFrame);
    }
    oFrame.src = url;
}
