Loading a file from Wasm? - javascript

I am using Rust and WebAssembly. I am getting the error message "operation not supported on wasm yet", so one of two things is happening, and I was curious if anyone knew which: either my file path is not correct and this is the least helpful error message, or wasm doesn't support loading files.
use wasm_bindgen::prelude::*;
use std::fs::File;

#[wasm_bindgen]
pub fn file() {
    let mut data: Vec<u8> = Vec::new();
    // I would load the png with the same path in my javascript.
    let opened = File::open("./png/A_SingleCell.png");
    match opened {
        Ok(_) => log(&format!("opened {}", "worked")),
        Err(e) => log(&format!("{}", e)),
    }
    // .read_to_end(&mut data)
    // .unwrap();
}
#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_namespace = console)]
    fn log(msg: &str);
}
and the JavaScript call is simply file().
Is there a different directory path I need to use to get the png, or can you truly not load a file?
Edit
Adding my index.js to show that webpack has already loaded the png.
import { memory } from "break-game/break_game_bg";
import A from './png/A_SingleCell.png';
import { alloc, fill, decode, file } from "break-game";
file();

Loading a file from the filesystem doesn't work with the Rust standard library and the wasm32-unknown-unknown target. This error is, consequently, an expected runtime error for using File::open.
The Rust standard library currently provides a uniform API surface area for all targets, regardless of whether the target actually supports the functionality or not. It turns out that almost all platforms implement basically all of the stable surface area of the standard library, but the wasm32-unknown-unknown target in particular is a bit of an odd one out. On this wasm target the standard library notably has no way to implement functions in modules like std::net or std::fs, so the functions unconditionally return an error. What you're seeing here is that File::open unconditionally returns an error on the wasm32-unknown-unknown target.
Speaking specifically about the wasm32-unknown-unknown target, this target is used to represent the "base layer of compatibility for Rust and wasm". On this target the standard library can only assume the WebAssembly instruction set, nothing else. Because WebAssembly provides no means of doing I/O or loading files, it means that these stubs are left to return errors in the standard library.
Note that an alternative way for us to provide the standard library on the wasm32-unknown-unknown target is to simply not provide these functions at all, causing a compile-time error whenever they are used. We chose not to take this route, for better or worse, to remain consistent across targets in Rust. It's hoped that something like the proposed portability lint can change the calculus of this story, but Rust isn't quite there yet!
In the meantime, though, you probably still want to get this done! The wasm-bindgen project has a few guides about the wasm32-unknown-unknown target which may help you make some progress here:
Which crates will work off the shelf with WebAssembly?
How to add WebAssembly support to a general purpose crate
Notably the web platform doesn't currently provide the ability to load files from the filesystem, so there's no great way to implement File::open as an API in Rust for the wasm32-unknown-unknown target, even if JS is used. If you're using Node.js, though, I'd recommend reading about JS interop as well as checking out the wasm-bindgen guide for importing Node.js functions to implement this.
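In the browser case a common workaround is to do the I/O on the JS side and hand the bytes to wasm. Here is a rough sketch only: load_png is a hypothetical wasm-bindgen export taking &[u8] (not something defined in the code above), and webpack is assumed to resolve the png import to a URL as in your index.js:
import A from './png/A_SingleCell.png'; // webpack resolves this to a URL for the asset
import { load_png } from "break-game";  // hypothetical export: pub fn load_png(bytes: &[u8])

// Fetch the asset over HTTP, then pass the raw bytes into wasm.
fetch(A)
    .then(response => response.arrayBuffer())
    .then(buffer => load_png(new Uint8Array(buffer)));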

Related

Check to see if dynamic import is a module in JavaScript/TypeScript

I am working on a ScriptManager class for a project that was created many years ago. The original code read scripts from a database, and these scripts are different depending on the customer and installation (the application is a desktop app that uses Chrome Embedded Framework to display web pages). The code would read custom JavaScript code and eval() it, which of course is highly undesirable.
I am replacing this code with a ScriptManager class that can support dynamically inserted code, and the ScriptManager is capable of loading code as a module using JavaScript's dynamic import() command, or loading code as pure script by creating a script tag dynamically in the document.
My problem is that there are many different possible custom code blocks in the database, and not all are modules; some will be pure script until those can be converted to modules at a later time. My code can handle this as described above, but I need a way to detect if the script code from the database is a module, so I can either use the import() command or insert a script tag if it is not.
I am solving this temporarily by making sure any module script code has "export const isModule = true", and checking this after calling import(). This works, but code that is pure script still results in a module object, just with no exports in it. If possible I don't want the other developers to have to remember to add isModule = true to any modules they develop in the future.
Is there a way to check that code is a module without having to do complex analysis of the code to check if there are exports in it? Since import() still returns an object and throws no errors if there are no exports, I don't know how to detect this.
UPDATE: Here are some examples of how this is intended to work:
// Not real code, pretend that function gets the string of the script.
let code = getSomeCodeFromTheDatabase();
// Save the code for later loading.
let filename = 'some-filename.js';
saveCodeToFile(code, filename);
// Attempt to dynamically import the script as a module.
let module = await import(filename);
// If it is NOT a module, load it instead as a script tag.
// This is where I need to be able to detect if the code is
// a module or pure script.
if (!module.isModule) {
let scriptTag = document.createElement('script');
scriptTag.src = filename;
document.head.appendChild(scriptTag);
}
If you look at "How can I tell if a particular module is a CommonJS module or an ES6 module?" you will see I answered a similar question.
So the thing is, Sarah: modules are defined by the way that they resolve. The fact that module types resolve differently is not only what makes them incompatible with one another, it is also why we name them differently. Originally, transpilers like Babel & TypeScript were invented because of differences between ECMA-262 specifications, and the desire to support people who didn't have the updated specifications, as well as supporting the newer features for those who did.
Today transpilers are still being used, conceptually, in much the same way. They still help us maintain a single code base while supporting older specifications and newer features at the same time, but they also have the added benefit of being able to generate multiple different builds from a single code base. They use this to support different module types. In Node.js, the primary module type is CJS, but the future lies in ESM modules, so package maintainers have opted to dual-build their projects. They use the TypeScript compiler (a.k.a. transpiler) to emit a CJS build and an ESM build.
Now this is where things get complicated, because in this situation you cannot tell just by looking at a module whether it is CJS or ESM: you absolutely have to inspect its code, and check whether it has more than one tsconfig.json file (because it would need at least two to maintain a bi-modular build, which is becoming increasingly common).
My Suggestion to You:
Use well-documented packages. Good packages should be documented well, and those packages should state in their README.md document what type of package/module they are, and whether the package supports more than one module type. If in doubt you can either come and ask here, or better yet, ask the maintainer by creating an issue asking them to add that information to their README.md document.
You can check that there are no exports after the import. In Chrome, import() adds an empty default for a non-module.
function isNotModule(module) {
    return (!Object.keys(module).length) ||
        (!!module.default && typeof module.default === 'object' && !Object.keys(module.default).length);
}

import('./test.js')
    .then((module) => {
        console.log('./test.js', isNotModule(module));
    });
Maybe it's better to check the source code via a regex to see whether it contains an export,
something like this:
const reg = /([^\w]|^)export((\s+)\w|(\s*\{))/
reg.test(source)

Dependency tracking modules from a concatenated library

What I've got
A large (proprietary unfortunately) JS library, the many small modules that get rolled up into it during the build process, the accompanying source map, and over 300 examples that use the built version of the library.
The goal
A form of dependency tracking, I guess? I need to be able to modify one of the small modules, rebuild the large file, and then only re-verify the examples that were affected by this change. Note: I don't care whether this requires static analysis or if I have to run all examples through a headless browser to extract something; I'm fine as long as it can be automated.
What I've tried so far
I've read answers to questions like this and tried pre-existing tools like Madge, but none of them seem to work for my case. Madge in particular is great for telling me which of the modules depend on which modules, but that's not what I'm looking for. Most solutions online are based on the assumption that you're already using something like require.js or similar on which they can piggy-back, but in my case the library is just a giant blob.
My current approach is instrumenting the built version of the library by appending to every line something like neededModules["the_file_this_line_comes_from.module.js"] = true, similar to how code coverage tools do it. However, that fails because of several parts like this:
Points.prototype = Object.assign( Object.create( Info.prototype ), {
    plot: ( function () {
        var static = new Background();
        return function plot( line, physics ) {
            <code>
        };
    }() ),
    copy: function () {
        return new this.constructor( this.info, this.history ).copy( this );
    }
} );
The copy function is tracked/skipped just fine, but because the plot function is an IIFE (right?), the line var static = new Background(); always gets executed, even if there is absolutely no connection to the Points module.
What I'm looking for
Either some help with my current approach and its problems with IIFEs, or a different solution altogether. I've seen that Facebook's Jest offers dependency tracking; maybe someone has experience with that one, or there's some way to incorporate the source map?
Again, as long as it's automatable and finishes in let's say < 5 min, I'm totally fine with it no matter if it's static analysis or just some hacky script or whatever :)
Thanks!

How does es6 module loading work

I've been reading about ES6 module loaders and I just don't quite understand how they work, and I am hoping someone can enlighten me.
In the practical workflows link above they have an example like this
System.import('app/app').then(function(app) {
    // app is now the Module object with exports as getters
});
No problem with that - I get it. But then I see stuff like this
var $ = require('jquery');
and get really confused. What happens if at the time of this call jquery has not yet been transferred to the browser? Does the thread just spin? Does the browser parse your script behind-the-scenes and reform it into a callback like RequireJs does? Is what it does configurable? Are there specific limitations?
Can someone give me a rundown?
The ES6 Module Loader will fetch the source, determine dependencies, and wait until those dependencies have loaded before executing the module. So by the time the require executes, the dependency is already sitting there waiting to be executed.
When loading CommonJS through an ES6 module loader, we rely on statically parsing out the require statements from the source, and only executing the source once those requires have loaded.
In this way we can support CommonJS in the browser loaded dynamically. Circular references are treated identically to the way they are handled in Node.
The regular expressions that parse out the require calls are actually pretty reliable and quick, while taking into account comments and surrounding tokens. See https://github.com/systemjs/systemjs/blob/master/lib/extension-cjs.js#L10 for the one used by SystemJS.
There is one remaining limitation with this approach and that is dynamic and conditional CommonJS requires like if (condition) require('some' + 'name') aren't detected properly. This is a necessary cost though to make CommonJS behave as a fully asynchronous module format in the browser.
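As a rough illustration of the idea (this is a simplified sketch, not the actual SystemJS expression linked above, which also handles comments and surrounding tokens), the loader scans the source for static require('...') calls before executing it:
// Simplified sketch: collect static require('...') dependency names from source text.
var requirePattern = /require\s*\(\s*['"]([^'"]+)['"]\s*\)/g;

var source = "var $ = require('jquery');\nvar _ = require('lodash');";
var deps = [];
var match;
while ((match = requirePattern.exec(source)) !== null) {
    deps.push(match[1]);
}
console.log(deps); // ['jquery', 'lodash'] -> load these before executing the module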

TypeScript use typescript-require shared files

I use TypeScript to write my JavaScript files with object-oriented programming.
I want to use the node module https://npmjs.org/package/typescript-require to require my .ts files from other files.
I want to share my files between the server and the client side (browser), and that's very important. Note that the folder /shared/ doesn't mean shared between client and server but between the game server and the web server; I use pomelo.js as the framework, which is why.
For the moment I'm not using the typescript-require library successfully.
I do it like this:
shared/lib/message.js
var Message = require('./../classes/Message');

module.exports = {
    getNewInstance: function(message, data, status){
        console.log(requireTs); // Global typescript-require instance
        console.log(Message);
        return new Message(message, data, status);
    }
};
This file needs Message.js to create new instances.
shared/classes/Message.ts
class Message {
    // Big stuff
}

try {
    module.exports = Message;
} catch(e) {}
At the end of the file I add this try/catch to add the class to module.exports if it exists. (It works, but it's not really a good way to do it; I would like to do better.)
If I load the file from the browser, module.exports won't exist.
So, what I did above works. Now, to try to use the typescript-require module, I change a few things:
shared/lib/message.js
var Message = requireTs('./../classes/Message.ts');
I use requireTs instead of require; it's a global var. Note that I'm pointing at the .ts file.
shared/classes/Message.ts
export class Message{
// Big stuff
}
// remove the compatibility script at the end
Now, if I try it like this and take a look at the server console, I get that requireTs is an object and Message is undefined in shared/lib/message.js.
I get the same if I don't use the export keyword in Message.ts. Even if I use my little script at the end, I always get an error.
But there is more: I have another class named ValidatorMessage.ts which extends Message.ts, and it doesn't work if I use the export keyword...
Did I do something wrong? I tried several other things but nothing works; it looks like typescript-require is not able to require .ts files.
Thank you for your help.
Looking at the typescript-require library, I see it hasn't been updated for 9 months. As it includes the lib.d.ts typing central to TypeScript (and the node.d.ts typing), and as these have progressed greatly in the past 9 months (along with needed changes due to language updates), it's probably not compatible with the latest TypeScript releases (just my assumption, I may be wrong).
Sharing modules between Node and the browser is not easy with TypeScript, as they both use very different module systems (CommonJS in Node, and typically something like RequireJS in the browser). TypeScript emits code for one or the other, depending on the --module switch given. (Note: There is a Universal Module Definition (UMD) pattern some folks use, but TypeScript doesn't support this directly).
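For reference, a hand-written UMD wrapper looks roughly like this (a sketch only, reusing the Message class from the question; as noted, TypeScript doesn't emit this pattern directly):
// Minimal UMD-style wrapper: detect the module system at runtime.
(function (root, factory) {
    if (typeof module === "object" && module.exports) {
        module.exports = factory();          // CommonJS / Node
    } else if (typeof define === "function" && define.amd) {
        define(factory);                     // AMD / RequireJS
    } else {
        root.Message = factory();            // plain browser global
    }
}(this, function () {
    function Message() { /* big stuff */ }
    return Message;
}));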
What goals exactly are you trying to achieve? I may be able to offer some guidance.
I am doing the same and keep having issues whichever way I try to do things... The main problems for me are:
I write my TypeScript as namespaces and components, so there is no export; with multiple-file compilation you have to do a hack and add some _exporter.ts at the end to add the export, for your library-output.js to be importable as a module. This would require something like:
module.exports.MyRootNamespace = MyRootNamespace
If you do the above it works; however, you then get an issue when you need to reference classes from other modules (such as MyRootNamespace1.SomeClass being referenced by MyRootNamespace2.SomeOtherClass). You can reference it, but it will then be compiled into your library-output2.js file as well, so you end up with duplicate classes if you are trying to re-use TypeScript across multiple compiled targets (like having one solution in VS and multiple projects which have their own DLL outputs).
Assuming you are not happy with hacking the exports and/or duplicating your references, you can just import them into the global scope, which is a hack but works... However, when you decide you want to test your code (using whatever Node.js testing framework), you will need to mock out certain things, and as the dependencies for your components may not be included via a require() call (and your module may depend on node_modules which are not really usable with global-scope hacking), this makes it difficult to satisfy dependencies and mock certain ones; it's an all-or-nothing sort of approach.
Finally, you can try to mitigate all these problems by using a TypeScript framework such as appex, which allows you to run your TypeScript directly rather than compiling into JS first. While it seems very good up front, it is VERY hard to debug compilation errors. This is currently my preferred way, but I have an issue where my TypeScript compiles fine via tsc but just blows up with a max-stack-size exception on appex, and I am at the mercy of the project maintainer to fix this (I was not able to find the underlying issue). There are also not many of these sorts of projects out there; however, they make the issue of compiling at module level/file level etc. a moot point.
Ultimately I have had nothing but problems wrestling with TypeScript to get it to work in a way which is maintainable and testable. I am also trying to re-use some of the TypeScript components on the client side; however, if you go down the npm hack route to get your modules included, you then have to make sure your client side uses a require-compatible resource/package loader. As much as I would love to just use TypeScript on my client and my server projects, it just does not seem to want to work in a nice way.
Solution here:
Inheritance TypeScript with exported class and modules
Finally, I don't use typescript-require but typescript.api instead; it works well. (You have to load lib.d.ts if you use it, else you'll get some errors in the console.)
I don't have a solution for getting the script working in the browser yet. (Because of the export keyword I get some errors client side.) I think adding an exports global var would avoid errors like this.
Thank you for your help Bill.

Is it possible to use requirejs when modules may have to be removed to conserve memory

We develop an application in an embedded environment. It is a high-level computing environment with a complete web browser on top of a BusyBox Linux system. The only exception is that the system has a limited amount of memory.
Our application is built in JavaScript, runs inside a WebKit-based web browser, and consists of a lot of JavaScript modules that are loaded in sequence (which is not very efficient).
Some modules provide common functionality that is used by several modules. We are in the process of replacing our current JavaScript loader with RequireJS, but there is one specific need we have to address first.
Is it possible to unload a module when it has been loaded using RequireJS? Assume that we dynamically load a module using:
require(["somemodule.js"], function(m) { m.run(); } );
That works well for loading and running 'somemodule' and also resolving all of its dependencies, and the RequireJS framework will store a reference to 'somemodule' for future requests.
If we at some point need to reclaim memory, e.g. to be able to load and run an infinite number of modules, we have to start removing some of them after some time. Is that possible with RequireJS without altering the internal implementation?
Has anyone dealt with this kind of problem before? Most single page JS apps runs in a webbrowser on a desktop PC where memory usage usually is not a major concern.
RequireJS does not have a built-in unload feature, but it could be added perhaps as an additional part you could build into it. If you would like to have that feature, feel free to propose it in the mailing list or as a GitHub issue.
If you wanted to experiment to see if it helps your situation, what you need to do is the following:
1) Remove the defined module from the RequireJS module cache. If you are not using the multiversion support, you can do something like:
var context = require.s.contexts['_'];
delete context.defined[moduleName];
delete context.specified[moduleName];
delete context.loaded[moduleName];
2) Then you can try removing the script tag to see if that helps:
var scripts = document.getElementsByTagName('script');
for (var i = scripts.length - 1; i >= 0; i--) {
    var script = scripts[i];
    if (script.getAttribute('data-requiremodule') === moduleName) {
        script.parentNode.removeChild(script);
        break;
    }
}
Note that the module may not be garbage collected if another module holds on to it via the closure function(){} that defines that other module. That other module would need to be removed too.
You can try to limit that impact by not passing in the module as a function argument, but just use require("somemodule") inside the function definition whenever you want to get a hold of dependent modules, and not holding on to that require return value for too long.
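A sketch of that suggestion (assuming "somemodule" has already been loaded once, so the synchronous require call can resolve it from the cache):
// Depend only on the local "require" and look the module up lazily,
// instead of capturing it as a closure argument.
define(["require"], function (require) {
    return {
        run: function () {
            // Resolved from the module registry at call time; once its cache
            // entry is deleted, nothing here keeps "somemodule" alive.
            var m = require("somemodule");
            m.run();
        }
    };
});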
Also, in your example above, for modules that use require.def to define themselves, it should look like this (without the .js suffix):
require(["somemodule"], function(m) { m.run(); } );
Try this: require.undef(moduleName)
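For completeness, a minimal usage sketch (require.undef is available in RequireJS 2.x):
require(["somemodule"], function (m) {
    m.run();
});

// Later, when memory needs to be reclaimed, remove it from the registry
// so a future require() will fetch and evaluate it again.
require.undef("somemodule");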
