I am currently developing a web application to edit some custom file formats (a small editor offering an interface to edit configuration files).
A key element of this app is an object called "FileReader": a wrapper that triggers the right interface according to the detected type of config file. This way I just spawn a new FileReader( src ) - src being a URL or a blob - and it keeps things clean.
Once the type of the config file is retrieved, the wrapper checks whether it has an interface associated with it and simply uses it. Each interface (for each different config type) is defined in a separate JS file. Once the wrapper is fully loaded, I hook all the events and the execution of the app begins.
Here is the HTML skeleton I used, with the relevant elements:
<!doctype html>
<html><head></head>
<body>
<div id="UI"></div>
<script src="js/fileReader/fileReader.js" >App.FileReader = function(){ ... }</script>
<script src="js/fileReader/configType1.js" >App.FileReader.registerFileType("config1",object)</script>
<script src="js/fileReader/configType2.js" >App.FileReader.registerFileType("config2",object)</script>
<script src="js/fileReader/configType3.js" >App.FileReader.registerFileType("config3",object)</script>
<script src="js/main.js" >App.run();</script>
</body>
</html>
Now the app has grown, and I decided to convert it to use RequireJS.
My problem is mostly about "what's the best organisation" to deal with these modules: I can only consider the FileReader ready to use once all the file-type modules are loaded, but I can't load them first because they need the main wrapper to register their type.
The best structure I could come up with is:
main.js :
require(['require','FileReader','./fileReader/config1','./fileReader/config2','./fileReader/config3'],
function( require ) {
require(['require','./core/app'], function( require , App ) {
App.run( window ) ;
});
});
fileReader.js :
define(function () {
...
return FileReader ;
});
config1.js :
define(['FileReader'], function ( FileReader ) {
var Interface = ... ;
...
FileReader.registerFileType("config1",Interface)
return FileReader ;
});
app.js :
define(['FileReader'], function ( FileReader ) {
var App = function(){} ;
App.run = function(){ FileReader.openFile( ... ) ; ... } ;
return App ;
});
What's the point of my problem ?
To make sure that the FileReader object contains the right Interface objects, I first force RequireJS to load all the FileReader-related files manually and THEN load the main app file. This way, once the App object requests the FileReader object, it already has the right Interfaces registered.
What I'd like to achieve is to request only
"./core/app" in the main file
"./fileReader" in the core/app file
and then, during its first load, it would be fine if it could load the config-type modules IN the fileReader file, register all the stuff and only then return the result.
What I tried was this:
fileReader.js :
define(["require"],function ( require ) {
...
require(["config1","config2","config3"],
function(){
//occurs asynchronously, after the App.run() is triggered because of the synchronous return
callback(FileReader) ; //unfortunately I did not find any callback of this sort
//if it existed I could asynchronously find the list of the modules in the directory, load the them, and only then fire this fictive "ready" event
}
return FileReader ; //the first return occurs synchronously, so before the the fileReader is ready : modules are missing
});
So what is the best way to load this sort of module (with some sort of circular dependency between the config-type files and the main FileReader)? What if I'd like to force it to be asynchronous so I can read what's available in the config-type directory? With the simple HTML script list they were loaded in the right order and I had easy access to the module list; now I'd like to be sure that everything is ready before the app starts.
It seems to me you could parcel off the functionality that records what file types are available into a new module, maybe named FileTypeRegistry. So your config files could be like this:
define(['FileTypeRegistry'], function ( FileTypeRegistry ) {
var Interface = ... ;
...
FileTypeRegistry.registerFileType("config1", Interface);
});
These modules don't need to return a value. registerFileType just registers a type with the registry.
FileReader.js could contain:
define(["FileTypeRegistry", "config1", "config2", "config3"], function ( FileTypeRegistry ) {
var FileReader = {};
FileReader.openFile = function (...) {
var impl = FileTypeRegistry.getFileTypeImplementation(...);
};
return FileReader;
});
getFileTypeImplementation retrieves an implementation (an Interface previously registered) on the basis of a type name.
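A minimal sketch of what FileTypeRegistry itself might look like (the module and method names follow the answer above; the internal map is just one possible implementation):

// fileTypeRegistry.js - hypothetical implementation of the registry module
define(function () {
    var types = {}; // map of type name -> Interface implementation
    return {
        registerFileType: function (name, implementation) {
            types[name] = implementation;
        },
        getFileTypeImplementation: function (name) {
            return types[name]; // undefined if the type was never registered
        }
    };
});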
Louis' answer was interesting, but it just moved the problem by creating a new module, and even though it placed the dependencies in the right context, I was still looking for some real asynchronous module definition (not just loading).
The best I could finally come up with was to edit the require.js source file to add the behaviour I expected. I registered a new handler (a special dependency, behaving like "require" or "exports") called "delay" that provides a callback function for the module definition.
Basically, it works this way:
define(["delay"], function ( delay ) {
var Module = ... ;
setTimeout(function(){ //Some asynchronous actions
delay(Module) ; //This actually returns the Module's value and triggers the "defined" event.
//It's transparent: the other modules just "feel" that the network load time was a bit longer.
},1000);
return Module ; //This does not do anything because delay is active
});
Thanks to this asynchronous definition, I can now scan the directory (an asynchronous request) and then transparently load all the modules.
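For instance, the fileReader.js definition could then look roughly like this ("delay" is the custom handler described above; the list.json endpoint and the module names are assumptions):

// fileReader.js - sketch using the custom "delay" dependency
define(["require", "delay"], function (require, delay) {
    var FileReader = { /* ... */ };
    // Hypothetical listing of the config-type directory, e.g. ["config1","config2","config3"]
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "js/fileReader/list.json");
    xhr.onload = function () {
        var names = JSON.parse(xhr.responseText);
        require(names, function () {
            // every config-type module has now registered itself on FileReader
            delay(FileReader); // only now is the module "defined" for its dependents
        });
    };
    xhr.send();
    return FileReader; // ignored, because "delay" is active
});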
The modification represents only about 10 lines, so if someone is interested, I posted it here:
https://github.com/jrburke/requirejs/pull/1078/files
Apparently it's not the first request for asynchronous exports, but it's still under debate, so I just use this for personal projects.
In my web app I want to give users the ability to author their own modules for use in the app. These are ideally written in JS on the client side.
However, I'm having trouble allowing classes created on the client side to be accessed by the app. Originally I wanted some kind of module import, but dynamic imports still require a path, which isn't accessible to the browser for security reasons. Just doing a straight import of the JS via a script tag pollutes the global namespace, which isn't ideal either.
Is there some sensible way to do this? I would ideally want to import the class, e.g.
// MyClass.js, on the client side
export default class MyClass {
myPrint() {
console.log('ya blew it');
}
}
And then in App.js:
import(somehow_get_this_path).then((MyClass) => { etc});
Is getting that path possible? The current namespace-polluting method uses a file-select dialog, but doesn't let me tell import what path the file has. All you get is a blob. I'm pretty new to this stuff, so apologies if this question is dumb.
Edit: I've tried getting an object URL using createObjectURL, which gives the error:
Error: Cannot find module 'blob:null/d651a896-d568-437f-86d0-72ebcee7bc56'
If you are using webpack as a bundler, then you can use magic comments to lazy-load the components.
Or else you can use dynamic imports:
import('path_to_my_class').then(MyClass => {
// Do something with MyClass
});
Edit 1:
You can use this code to get a working local URL for the uploaded .js file. Try using this:
const path = (window.URL || window.webkitURL).createObjectURL(file);
console.log('path', path);
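If the app is bundled with webpack, native dynamic import of that URL only works when webpack is told to leave the call alone (otherwise you get errors like "Cannot find module 'blob:null/...'"). A rough sketch, assuming the uploaded file is an ES module like the MyClass.js in the question:

// Sketch: turn an uploaded File (from an <input type="file">) into an importable module.
async function importUploadedModule(file) {
    // Re-wrap the file so the blob URL carries an explicit JavaScript MIME type.
    const blob = new Blob([await file.text()], { type: 'text/javascript' });
    const url = URL.createObjectURL(blob);
    try {
        // webpackIgnore stops webpack from trying to resolve the URL at build time.
        const mod = await import(/* webpackIgnore: true */ url);
        return mod.default; // e.g. the MyClass from the question
    } finally {
        URL.revokeObjectURL(url);
    }
}
// usage: importUploadedModule(input.files[0]).then(MyClass => new MyClass().myPrint());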
Edit 2:
Alternatively, you can create your own object at the global level and wrap the file's code in an anonymous function (creating a closure) so that it won't pollute the global namespace.
// you can add other methods to it according to your use case.
window.MyPlugins = {
plugins: [],
registerPlugin: function (plugin){
this.plugins.push(plugin);
}
}
// MyFirstPlugin.js
// you can inject this code via creating a script tag.
(function(w){ //closure
function MyFirstPlugin() {
this.myPrint = function (){
console.log('ya blew it');
}
}
w.MyPlugins.registerPlugin(new MyFirstPlugin());
})(window)
// get the reference to the plugin in some other file
window.MyPlugins.plugins
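And a quick sketch of calling the registered plugins later, from anywhere in the app:

// after the plugin script has been injected and executed
window.MyPlugins.plugins.forEach(function (plugin) {
    plugin.myPrint(); // -> "ya blew it"
});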
Read the file and use eval:
<script>
function importJS(event) {
var input = event.target;
var reader = new FileReader();
reader.onload = function(){
eval(reader.result);
input.value='';
};
reader.readAsText(input.files[0]);
};
</script>
Select a *.js file and execute
<br>
<br>
<input type="file" onchange="importJS(event);">
I am trying to update a Rails 3 app to Rails 6, and I have problems with the now-default webpacker, since my JavaScript functions are not accessible.
I get ReferenceError: Can't find variable: functionName for all JS function triggers.
What I did is:
create an app_directory in /app/javascript
copied my development javascript file into the app_directory and renamed it to index.js
added console.log('Hello World from Webpacker'); to index.js
added import "app_directory"; to /app/javascript/packs/application.js
added to /config/initializers/content_security_policy.rb:
Rails.application.config.content_security_policy do |policy|
policy.connect_src :self, :https, "http://localhost:3035", "ws://localhost:3035" if Rails.env.development?
end
I get 'Hello World from Webpacker' logged to the console, but when trying to access a simple JS function through <div id="x" onclick="functionX()"></div> in the browser, I get the reference error.
I understand that the asset pipeline has been replaced by webpacker, which should be great for including modules, but how should I include simple JS functions? What am I missing?
Thanks in advance!
For instructions on moving from the old asset pipeline to the new webpacker way of doing things, you can see here:
https://www.calleerlandsson.com/replacing-sprockets-with-webpacker-for-javascript-in-rails-5-2/
This is a how-to for moving from the asset pipeline to webpacker in Rails 5.2, and it gives you an idea of how things are different in Rails 6, now that webpacker is the default for JavaScript. In particular:
Now it’s time to move all of your application JavaScript code from
app/assets/javascripts/ to app/javascript/.
To include them in the JavaScript pack, make sure to require them in
app/javascript/packs/application.js:
require('your_js_file')
So, create a file in app/javascript/hello.js like this:
console.log("Hello from hello.js");
Then, in app/javascript/packs/application.js, add this line:
require("hello")
(note that the extension isn't needed)
Now, you can load up a page with the browser console open and see the "Hello from hello.js" message in the console. Just add whatever you need in the app/javascript directory, or better yet create subdirectories to keep your code organized.
More information:
This question is cursed. The formerly accepted answer is not just wrong but grotesquely wrong, and the most upvoted answer is still missing the mark by a country mile.
anode84 above is still trying to do things the old way, and webpacker will get in your way if you try that. You have to completely change the way you do javascript and think about javascript when you move to webpacker. There is no "scoping issue". When you put code in a web pack it's self-contained and you use import/export to share code between files. Nothing is global by default.
I get why this is frustrating. You're probably like me, and accustomed to declaring a function in a javascript file and then calling it in your HTML file. Or just throwing some javascript at the end of your HTML file. I have been doing web programming since 1994 (not a typo), so I've seen everything evolve multiple times. Javascript has evolved. You have to learn the new way of doing things.
If you want to add an action to a form or whatever, you can create a file in app/javascript that does what you want. To get data to it, you can use data attributes, hidden fields, etc. If the field doesn't exist, then the code doesn't run.
Here's an example that you might find useful. I use this for showing a popup if a form has a Google reCAPTCHA and the user hasn't checked the box at the time of form submission:
// For any form, on submit find out if there's a recaptcha
// field on the form, and if so, make sure the recaptcha
// was completed before submission.
document.addEventListener("turbolinks:load", function() {
document.querySelectorAll('form').forEach(function(form) {
form.addEventListener('submit', function(event) {
const response_field = document.getElementById('g-recaptcha-response');
// This ensures that the response field is part of the form
if (response_field && form.compareDocumentPosition(response_field) & 16) {
if (response_field.value == '') {
alert("Please verify that you are not a robot.");
event.preventDefault();
event.stopPropagation();
return false;
}
}
});
});
});
Note that this is self-contained. It does not rely on any other modules and nothing else relies on it. You simply require it in your pack(s) and it will watch all form submissions.
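For example, if that snippet lives in app/javascript/recaptcha_check.js (the file name is arbitrary), the pack only needs one line:

// app/javascript/packs/application.js
require("recaptcha_check"); // the form-watching code above now runs on every page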
Here's one more example of loading a google map with a geojson overlay when the page is loaded:
document.addEventListener("turbolinks:load", function() {
document.querySelectorAll('.shuttle-route-version-map').forEach(function(map_div) {
let shuttle_route_version_id = map_div.dataset.shuttleRouteVersionId;
let geojson_field = document.querySelector(`input[type=hidden][name="geojson[${shuttle_route_version_id}]"]`);
var map = null;
let center = {lat: 36.1638726, lng: -86.7742864};
map = new google.maps.Map(map_div, {
zoom: 15.18,
center: center
});
map.data.addGeoJson(JSON.parse(geojson_field.value));
var bounds = new google.maps.LatLngBounds();
map.data.forEach(function(data_feature) {
let geom = data_feature.getGeometry();
geom.forEachLatLng(function(latlng) {
bounds.extend(latlng);
});
});
map.setCenter(bounds.getCenter());
map.fitBounds(bounds);
});
});
When the page loads, I look for divs with the class "shuttle-route-version-map". For each one that I find, the data attribute "shuttleRouteVersionId" (data-shuttle-route-version-id) contains the ID of the route. I have stored the geojson in a hidden field that can be easily queried given that ID, and I then initialize the map, add the geojson, and then set the map center and bounds based on that data. Again, it's self-contained except for the Google Maps functionality.
You can also learn how to use import/export to share code, and that's really powerful.
So, one more that shows how to use import/export. Here's a simple piece of code that sets up a "watcher" to watch your location:
var driver_position_watch_id = null;
export const watch_position = function(logging_callback) {
var last_timestamp = null;
function success(pos) {
if (pos.timestamp != last_timestamp) {
logging_callback(pos);
}
last_timestamp = pos.timestamp;
}
function error(err) {
console.log('Error: ' + err.code + ': ' + err.message);
if (err.code == 3) {
// timeout, let's try again in a second
setTimeout(start_watching, 1000);
}
}
let options = {
enableHighAccuracy: true,
timeout: 15000,
maximumAge: 14500
};
function start_watching() {
if (driver_position_watch_id) stop_watching_position();
driver_position_watch_id = navigator.geolocation.watchPosition(success, error, options);
console.log("Start watching location updates: " + driver_position_watch_id);
}
start_watching();
}
export const stop_watching_position = function() {
if (driver_position_watch_id) {
console.log("Stopped watching location updates: " + driver_position_watch_id);
navigator.geolocation.clearWatch(driver_position_watch_id);
driver_position_watch_id = null;
}
}
That exports two functions: "watch_position" and "stop_watching_position". To use them, you import those functions in another file:
import { watch_position, stop_watching_position } from 'watch_location';
document.addEventListener("turbolinks:load", function() {
let lat_input = document.getElementById('driver_location_check_latitude');
let long_input = document.getElementById('driver_location_check_longitude');
if (lat_input && long_input) {
watch_position(function(pos) {
lat_input.value = pos.coords.latitude;
long_input.value = pos.coords.longitude;
});
}
});
When the page loads, we look for fields called "driver_location_check_latitude" and "driver_location_check_longitude". If they exist, we set up a watcher with a callback, and the callback fills in those fields with the latitude and longitude when they change. This is how to share code between modules.
So, again, this is a very different way of doing things. Your code is cleaner and more predictable when modularized and organized properly.
This is the future, so fighting it (and setting "window.function_name" is fighting it) will get you nowhere.
Looking at how webpacker "packs" js files and functions:
/***/ "./app/javascript/dashboard/project.js":
/*! no static exports found */
/***/ (function(module, exports) {
function myFunction() {...}
So webpacker stores these functions within another function, making them inaccessible. Not sure why that is, or how to work around it properly.
There IS a workaround, though. You can:
1) change the function signatures from:
function myFunction() { ... }
to:
window.myFunction = function() { ... }
2) keep the function signatures as is, but you would still need to add a reference to them as shown here:
window.myFunction = myFunction
This will make your functions globally accessible from the "window" object.
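Putting that together for the onclick case in the question, a minimal sketch of a pack file might look like this (the file and function names are placeholders):

// app/javascript/packs/application.js (or a file required from it)
function functionX() {
    console.log("clicked");
}
// Expose it explicitly, since nothing inside a pack is global by default.
window.functionX = functionX;
// Now <div id="x" onclick="functionX()"></div> can find it.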
Replace the code in your custom JavaScript file
from
function function_name() { /* body */ }
to
window.function_name = function() { /* body */ }
From the official rails app guide:
Where to Stick Your JavaScript
Whether you use the Rails asset
pipeline or add a <script> tag directly to a view, you have to make a
choice about where to put any local JavaScript file.
We have a choice of three locations for a local JavaScript file:
the app/assets/javascripts folder, the lib/assets/javascripts folder, and the vendor/assets/javascripts folder.
Here are guidelines for selecting
a location for your scripts:
Use app/assets/javascripts for JavaScript you create for your
application.
Use lib/assets/javascripts for scripts that are shared by many
applications (but use a gem if you can).
Use vendor/assets/javascripts for copies of jQuery plugins, etc., from
other developers. In the simplest case, when all your JavaScript files
are in the app/assets/javascripts folder, there’s nothing more you
need to do.
Add JavaScript files anywhere else and you will need to understand how
to modify a manifest file.
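For reference, a Sprockets manifest is just a JavaScript file whose special comments list what to include; a minimal sketch (the required file names are placeholders):

// app/assets/javascripts/application.js - Sprockets manifest (asset pipeline, not webpacker)
//= require my_custom_functions
//= require_tree .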
More reading:
http://railsapps.github.io/rails-javascript-include-external.html
Content
I have a large module that I am assembling in Javascript, which is problematic because JS currently has poor native module support.
Since my module is large, I personally do not like having one massive file with my module object e.g.
var my_module = {
func_1: function(param) {console.log(param)},
...,
func_n: function(param_1, param_2) {console.log(param_1 - param_2)}
}
where func_n ends around line number 3000. I would much rather store each of my functions (or several related functions) in separate files. I personally find this easier to manage.
This poses a problem, however: although one could use synchronous calls to load the files, the JavaScript will still be parsed asynchronously (to my understanding). Thus several independent synchronous calls to load files are insufficient - the mth file might call something related to the nth file (n < m) which has not yet been parsed, causing an error.
Thus a solution is apparent: recursively - and synchronously - load each file in the callback of the previous file.
Consider the code at the bottom of this post.
Now this isn't perfect. It makes several assumptions, e.g. that each file contains one function and that the function has the same name as the filename after stripping the extension (a() is in a.js; do_something(a, b, c) is in do_something.js). It also doesn't encapsulate private variables. However, this could be worked around by adding a JSON file with these variables, adding that JSON to the module as module.config, and then passing the config object to each of the functions in the module.
In addition this still pollutes the namespace.
Question
My question is as follows:
what is a native JS way (I do not want a library that does this for me - jQuery included) to load functions stored across many files into a cohesive module, without polluting the namespace, while ensuring that all the files are parsed before any function calls?
Code to consider (my solution)
Code directory structure:
- directory
---- index.html
---- bundle.js
---- test_module/
-------- a.js
-------- b.js
-------- log_num.js
-------- many_parameters.js
index.html
<head>
<script src="bundle.js"></script>
</head>
bundle.js
// Give JS arrays the .empty() function prototype
if (!Array.prototype.empty){
Array.prototype.empty = function(){
return this.length == 0;
};
};
function bundle(module_object, list_of_files, directory="") {
if (!list_of_files.empty()) {
var current_file = list_of_files.pop()
var [function_name, extension] = current_file.split(".")
var new_script = document.createElement("script")
document.head.appendChild(new_script)
new_script.src = directory + current_file
new_script.onload = function() {
module_object[function_name] = eval(function_name)
bundle(module_object, list_of_files, directory)
/*
nullify the function in the global namespace; as this is - assumedly - the last
reference to this function, garbage collection will remove it. Thus modules
assembled by this function - bundle(obj, files, dir) - must be called
FIRST, else one risks overwriting a function in the global namespace and
then deleting it
*/
eval(function_name + "= undefined")
}
}
}
var test_module = {}
bundle(test_module, ["a.js", "b.js", "log_num.js", "many_parameters.js"], "test_module/")
a.js
function a() {
console.log("a")
}
b.js
function b() {
console.log("b")
}
log_num.js
// it works with parameters too
function log_num(num) {
console.log(num)
}
many_parameters.js
function many_parameters(a, b, c) {
var calc = a - b * c
console.log(calc)
}
If we restrict our tools to the "native JS way", there is an import() proposal, currently at Stage 3 of the TC39 process:
https://github.com/tc39/proposal-dynamic-import
System.js offers a similar approach to dynamically load modules.
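To sketch how that would map onto the module in the question, assuming each file is rewritten as an ES module that exports its function (e.g. export default function a() { ... } in a.js):

// bundle.js - sketch using native dynamic import(); runs in browsers that support it
async function bundle(list_of_files, directory = "") {
    const module_object = {};
    for (const file of list_of_files) {
        const function_name = file.split(".")[0];
        const mod = await import(directory + file); // fetched and parsed before we continue
        module_object[function_name] = mod.default;
    }
    return module_object;
}

bundle(["a.js", "b.js", "log_num.js", "many_parameters.js"], "./test_module/")
    .then(function (test_module) {
        test_module.log_num(42); // everything is loaded before any call
    });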
Have you looked at RequireJS? From the home page:
RequireJS is a JavaScript file and module loader. It is optimized for in-browser use, but it can be used in other JavaScript environments, like Rhino and Node. Using a modular script loader like RequireJS will improve the speed and quality of your code.
It has support for Definition Functions with Dependencies
If the module has dependencies, the first argument should be an array of dependency names, and the second argument should be a definition function. The function will be called to define the module once all dependencies have loaded. The function should return an object that defines the module. The dependencies will be passed to the definition function as function arguments, listed in the same order as the order in the dependency array:
That would allow you to "split up" your module into whatever arbitrary pieces you decide and "assemble" them at load time.
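As a rough sketch of what that might look like with the layout from the question (module and path names mirror the question; the exact configuration is up to you):

// test_module/log_num.js - each piece becomes its own AMD module
define(function () {
    return function log_num(num) {
        console.log(num);
    };
});

// main.js - assemble the pieces into one module object
require(["test_module/a", "test_module/b", "test_module/log_num"], function (a, b, log_num) {
    var my_module = { a: a, b: b, log_num: log_num };
    my_module.log_num(42); // all dependencies are loaded and parsed before this runs
});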
Does anyone know how to get the name of a module in Node.js / JavaScript?
So let's say you do
var RandomModule = require ("fs")
.
.
.
console.log (RandomModule.name)
// -> "fs"
If you are trying to trace your dependencies, you can try using require hooks.
Create a file called myRequireHook.js
var Module = require('module');
var originalRequire = Module.prototype.require;
Module.prototype.require = function(path) {
console.log('*** Importing lib ' + path + ' from module ' + this.filename);
return originalRequire.call(this, path); // keep `this` so relative paths still resolve against the calling module
};
This code will hook every require call and log it to your console.
Not exactly what you asked for at first, but maybe it helps you more.
And you only need to call it once, in your main .js file (the one you start with node main.js).
So in your main.js, you just do that:
require('./myRequireHook');
var fs = require('fs');
var myOtherModule = require('./myOtherModule');
It will trace require in your other modules as well.
This is the way transpilers like babel work. They hook every require call and transform your code before load.
I don't know why you would need that, but there is a way.
The module variable, which is loaded automatically in every node.js file, contains an array called children. This array contains every child module loaded by require in your current file.
So you need to strictly compare your loaded reference with the cached object version in this array in order to discover which element of the array corresponds to your module.
Look at this sample:
var pieces = require('../src/routes/headers');
var childModuleId = discoverChildModuleId(pieces, module);
console.log(childModuleId);
function discoverChildModuleId(object, moduleObj) {
"use strict";
var childModule = moduleObj.children.find(function(child) {
return object === child.exports;
});
return childModule && childModule.id;
}
This code will find the correspondence in the children array and return its id.
I put module as a parameter of my function so you can export it to another file; otherwise, it would show you the modules of the file where the discoverChildModule function resides (if it is in the same file it won't make any difference, but if exported it will).
Notes:
Module ids have the full path name, so don't expect to find ../src/routes/headers. You will find something like /Users/david/git/...
My algorithm won't detect exported attributes like var Schema = require('mongoose').Schema. It is possible to write a function capable of this, but it would suffer from many issues.
From within a module (doesn't work at the REPL) you can...
console.log( global.process.mainModule.filename );
And you'll get '/home/ubuntu/workspace/src/admin.js'
UPDATE: Worked out a solution I'm comfortable with, the answer is below.
I have an app that uses manual bootstrapping, with a chunk of code that essentially looks like this:
(function() {
'use strict';
angular.element( document ).ready( function () {
function fetchLabels( sLang, labelPath ) {
// retrieves json files, builds an object and injects a constant into an angular module
}
function rMerge( oDestination, oSource ) {
// recursive deep merge function
}
function bootstrapApplication() {
angular.element( document ).ready( function () {
angular.bootstrap( document, [ 'dms.webui' ] );
});
}
fetchLabels( 'en_AU' ).then( bootstrapApplication );
});
})();
It works great - essentially fetches two json files, combines them and injects the result as a constant, then bootstraps the app.
My question is: how do I unit test these functions? I want to write something to test the fetchLabels() and rMerge() methods, but I'm not sure how to go about it. My first thought was to separate the methods out into a service and use them that way, but I wasn't sure if I could actually invoke my own service this way before I've even bootstrapped the application.
Otherwise, can anyone suggest a way to separate out these methods into something standalone that I can test more readily?
OK after some fiddling, I found a solution I'm happy with.
I moved the methods into the Labels Service, and modified the labels module file to preset an empty object into the relevant constant.
I added an exception to my Grunt injection setup to ignore the label service files, and instead manually load them earlier, where I load the module file itself.
I took the direct injection code for $http and $q out of the bootstrap script and put it inside the Labels service, as they aren't available for normal injection prior to bootstrap.
Finally, I manually loaded the labels module, injected the Labels service, and then used the Labels service to execute the aforementioned methods.
So my bootstrap script now looks like this:
(function () {
'use strict';
var injector = angular.injector( [ 'labels' ] );
var Labels = injector.get( 'Labels' );
/**
* Executes the actual bootstrapping
*/
function bootstrapApplication() {
angular.element( document ).ready( function () {
angular.bootstrap( document, [ 'myAppModule' ] );
});
}
// get the label files and build the labels constant, then load the app
Labels.fetchLabels( 'en_AU', '/lang/' ).then( bootstrapApplication );
})();
And I can properly unit test the Labels service discretely. The loading order, and adding extra bits to our normal way of loading modules, is an exception, but I'm comfortable that its specific purpose warrants it, and it leaves us in a much more flexible position for testing.
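As an illustration, a minimal Jasmine/angular-mocks spec for the extracted service might look something like this (the service and method names follow the bootstrap script above; how rMerge returns its result is an assumption):

// labels.service.spec.js - sketch, assumes Jasmine + angular-mocks are loaded
describe('Labels', function () {
    var Labels;

    beforeEach(module('labels'));
    beforeEach(inject(function (_Labels_) {
        Labels = _Labels_;
    }));

    it('exposes the bootstrap helpers', function () {
        expect(typeof Labels.fetchLabels).toBe('function');
        expect(typeof Labels.rMerge).toBe('function');
    });

    it('deep-merges two label objects', function () {
        // assumes rMerge returns the merged object (it might mutate the destination instead)
        var result = Labels.rMerge({ a: { b: 1 } }, { a: { c: 2 } });
        expect(result).toEqual({ a: { b: 1, c: 2 } });
    });
});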