Loading different module for Webpack than Server Script - javascript

The best way I can think to explain my question is with an example:
Say I have a small single page app with server side rendering. On the client side new page requests resolve using XHR, but on the server I'd like to just run the server-side resolvers directly.
The front-end components call a getter on a data object, e.g. Data.page.get(), which either polls the server or returns a locally stored value. So I want webpack to bundle the relevant client code. But on the server I want to point to a different Data object that calls whatever logic the server uses to resolve the XHR request, be it a db call, a filesystem call, or whatever.
At the moment I've got var Data = require('./data') getting the server logic. Is there a way to tell webpack 'Hey, don't use ./data here, use ./data-client instead'? Or am I going about this backwards somehow?
This seems like a pretty simple concept but I'm still pretty new to javascript and programming in general and I'm a little stumped as to the best way to do it.
Or should I be processing the node script with webpack too like described here: can webpack output separate script and module files for browser and node?
I'm currently using webpack 1, but happy to solve the problem using webpack 2 instead.

It turned out to be fairly simple.
Webpack resolves modules with an extension priority:
1) .webpack.js
2) .web.js
3) .js
So in my case require('./data') will resolve to data.js with node, and as long as I have a data.web.js it will resolve to that in webpack.
At least to my understanding.
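For reference, this extension priority can also be made explicit in the config. Below is a minimal sketch in the webpack 1 style (these are webpack 1's default extensions; the entry and output values are placeholders):

// webpack.config.js — sketch making the extension priority explicit (webpack 1 defaults)
module.exports = {
  entry: './client.js',               // placeholder entry
  output: { filename: 'bundle.js' },  // placeholder output
  resolve: {
    // require('./data') tries data.webpack.js, then data.web.js, then data.js;
    // the empty string allows exact filenames to resolve as well
    extensions: ['', '.webpack.js', '.web.js', '.js']
  }
};

Node itself ignores the .web.js variant, so the server keeps resolving ./data to data.js.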

Related

How to integrate own library with WebPack which needs to span WebWorkers and AudioWorklets

Goal
I am the author of a JavaScript library which can be consumed through AMD or ESM within various runtime environments (Browser, Node.js, Dev Servers). My library needs to spawn WebWorkers and AudioWorklets using the file it is contained in. The library detects which context it is running in and sets up what the execution context requires.
This works fine as long as users (user = integrator of my library) do not bring bundlers like WebPack into the game. To spawn the WebWorker and AudioWorklet I need the URL of the file my library is contained in, and I need to ensure the global initialization routines of my library are called.
I would prefer to do as much as possible of the heavy lifting within my library, and not require users to do a very specialized custom setup only for using my library. Offloading this work to them typically backfires instantly and people open issues asking for help on integrating my library into their project.
Problem 1: I am advising my users to ensure my library is put into its own chunk. Users might set up the chunks based on their own setup, as long as the other libs don't cause any troubles or side effects in the workers. Especially modern web frameworks like React, Angular and Vue.js are typical problem children here, but people have also tried to bundle my library with jQuery and Bootstrap. All these libraries cause runtime errors when included in Workers/Worklets.
The chunking is usually done with some WebPack config like:
config.optimization.splitChunks.cacheGroups.alphatab = {
  chunks: 'all',
  name: 'chunk-mylib',
  priority: config.optimization.splitChunks.cacheGroups.defaultVendors.priority + 10,
  test: /.*node_modules.*mylib.*/
};
The big question mylib now has: what is the absolute URL of the generated chunk-mylib.js? With bundling and code splitting in place, this chunk is now the quasi-entry point to my library:
document.currentScript usually points to some entry point like an app.js, not to the chunks.
__webpack_public_path__ is pointing to whatever the user sets it to in the webpack config.
__webpack_get_script_filename__ could be used if the chunk name would be known but I haven't found a way to get the name of the chunk my library is contained in.
import.meta.url is pointing to some absolute file:// url of the original .mjs of my library.
new URL(import.meta.url, import.meta.url) causes WebPack to generate an additional .mjs file with some hash. This additional file is not desired, and the generated .mjs also contains some additional code that breaks its usage in browsers.
I was already thinking of maybe creating a custom WebPack plugin which can resolve the chunk my library is contained in, so I can use it during runtime. But I would prefer to use as many built-in features as possible.
Problem 2: Assuming problem 1 is solved, I could now spawn a new WebWorker and AudioWorklet with the right file. But as my library is wrapped into a WebPack module, my initialization code will not be executed. My library only lives in a "chunk" and is not an entry, and I don't know whether this splitting allows mylib to run some code after the chunk is loaded by the browser.
Here I am rather clueless. Maybe chunks are not the right way of splitting for this purpose. Maybe some other setup is needed that I am not yet aware of?
Maybe also this could be done best with a custom WebPack plugin.
Visual representation of the problem: with the proposed chunking rule we get an output as shown in the blocks. Problem 1 is the red part (how to get this URL) and Problem 2 is the orange part (how to ensure my startup logic is called when the background worker/worklet starts).
Actual Project: I want to share my actual project for a better understanding of my use case. I am talking about my project alphaTab, a music notation rendering and playback library. On the browser UI thread (app.js) people integrate the component into the UI and get an API object to interact with the component. One WebWorker does the layouting and rendering of the music sheet, a second one synthesizes the audio samples for playback, and the AudioWorklet sends the buffered samples to the audio context for playback.
I think the worker code should be handled as an asset instead of source code. Maybe you could add a simple CLI to generate a ".alphaTab" folder at the root of the project and add instructions for your users to copy it to the "dist" or "public" folder.
Even if you come up with a Webpack-specific solution, you would have to work your way around other bundlers/setups (Vite, Rollup, CRA, etc.).
EDIT: You would also need to add an optional parameter to the initialization for passing the script path. Not fully automated, but simpler than having to set up complex bundler configs.
Disabling import.meta
Regarding import.meta.url, this link might help. It looks like you'd disable it in your webpack config by setting module.parser.javascript.importMeta to false.
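For illustration, a minimal sketch of that setting in a webpack 5 config (the rest of the config is omitted):

// webpack.config.js — sketch of turning off import.meta parsing (webpack 5)
module.exports = {
  // ...your existing entry/output/etc.
  module: {
    parser: {
      javascript: {
        importMeta: false // leave import.meta.url as-is instead of rewriting it
      }
    }
  }
};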
Reworking Overall Architecture
For the rest, it sounds like a bit of a mess. You probably shouldn't be trying to import the same exact chunk code into your workers/worklets, since this is highly dependent on how webpack generates and consumes chunks. Even if you manage to get it to work today, it might break in the future if the webpack team changes how they internally represent chunks.
Also from a user's perspective, they just want to import the library and have it just work without fiddling with all of the different build steps.
Instead, a cleaner way would be to generate separate files for the main library, the AudioWorklet, and the Web Worker. And since you already designed the worklet and web worker to use your library, you can just use the prebuilt, non-module library for them, and have a separate file as the entry point for webpack/other bundlers.
The most straightforward way would be to have users add your original non-module js library in with the bundle that they build, and have the es module load Web Workers and Audio Worklets using that non-module library's url.
Of course, from a user's perspective, it'd be easier if they didn't have to copy over additional files and put them in the right directory (or configure a scripts directory). The straightforward way would be to load the web worker or worklet from a CDN (like https://unpkg.com/@coderline/alphatab@1.2.2/dist/alphaTab.js), but there are restrictions on loading web workers cross-origin, so you'd have to use a workaround like fetching it and then loading it from a blob URL (like that found here). This unfortunately makes initializing the Worker/Worklet asynchronous.
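As a rough sketch of that workaround (the unpkg URL is only an example; error handling is omitted):

// Fetch the library from the CDN, wrap it in a Blob, and start the Worker
// from a same-origin blob: URL to sidestep the cross-origin restriction.
async function createWorkerFromCdn(url) {
  const source = await (await fetch(url)).text();
  const blobUrl = URL.createObjectURL(
    new Blob([source], { type: 'application/javascript' })
  );
  return new Worker(blobUrl);
}

// Usage is necessarily asynchronous, as noted above:
// const worker = await createWorkerFromCdn('https://unpkg.com/@coderline/alphatab@1.2.2/dist/alphaTab.js');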
Bundling Worker code
If this isn't an option, you can bundle the library and the Web Worker/Worklet code into one file by stringifying the Worker/Worklet code and loading it via a blob or data URL. In your particular use case, it's a little painful from an efficiency standpoint considering how much code will be duplicated in the bundled output.
For this approach, you'd have multiple build steps:
Build the library that's used by your Web Worker and/or Audio Worklet.
Build the single library by stringifying the previous libraries/library.
This is all complicated by there being only one entry file for the library, web worker, and audio worklet. In the long term, you'd probably benefit by rewriting entry points for these different targets, but for now, we could reuse the current workflow and change the build steps by using different plugins. For the first build, we'll make a plugin that returns a dummy string when it tries to import the worker library, for the second, we'll have it return the stringified contents of that library. I'll use rollup, since that's what your project uses. The code below is mostly for illustrative purposes (which saves the worker library as dist/worker-library.js); I haven't actually tested it.
First plugin:
var firstBuildPlugin = {
  load(id) {
    if (id.includes('worker-library.js')) {
      return 'export default "";';
    }
    return null;
  }
};
Second plugin:
var secondBuildPlugin = {
  transform(code, id) {
    if (id.includes('worker-library.js')) {
      return {
        code: 'export default ' + JSON.stringify(code) + ';',
        map: { mappings: '' }
      };
    }
    return null;
  }
};
Using these plugins, we can import the web worker/audio worklet library via import rawCode from './path/to/worker-library.js';. For your case, since you'd be reusing the same library, you may want to create a new file with an export, to prevent bundling the same code multiple times:
libraryObjectURL.js:
import rawCode from '../dist/worker-library.js'; // may need to tweak the path here

export default URL.createObjectURL(
  new Blob([rawCode], { type: 'application/javascript' })
);
And to actually use it:
import libraryObjectURL from './libraryObjectURL.js'; // may need to tweak the path here
//...
var worker = new Worker(libraryObjectURL);
To then actually build it, your rollup.config.js would look something like:
module.exports = [
  {
    input: `dist/lib/alphatab.js`,
    output: {
      file: `dist/worker-library.js`,
      format: 'iife', // or maybe umd
      //...
    },
    // note: these plugins use build-time hooks (load/transform),
    // so they belong here rather than in output.plugins
    plugins: [
      firstBuildPlugin,
      //...
    ]
  },
  {
    input: `dist/lib/alphatab.js`,
    output: {
      file: `dist/complete-library.mjs`,
      format: 'es',
      //...
    },
    plugins: [
      secondBuildPlugin,
      //...
    ]
  },
  // ...
];
Preserving old code
Finally, for your other builds, you may still want to preserve the old paths. You can use @rollup/plugin-replace for this, by using a placeholder that will be replaced in the build process.
In your files, you could replace:
var worker = new Worker(libraryObjectURL);
with:
var worker = new Worker(__workerLibraryURL__);
and in the build process use:
// ...
// for the first build:
plugins: [
  firstBuildPlugin,
  replace({ __workerLibraryURL__: 'libraryObjectURL' }),
  // ...
],
// ...
// for the second build:
plugins: [
  secondBuildPlugin,
  replace({ __workerLibraryURL__: 'libraryObjectURL' }),
  // ...
],
// ...
// for all other builds:
plugins: [
  firstBuildPlugin,
  replace({ __workerLibraryURL__: 'new URL(import.meta.url)' }), // or whatever the old code was
  // ...
],
You may need to use another replacement for your AudioWorklet URL if it's different. In cases where the worker-library file isn't used, the imported libraryObjectURL will be tree-shaken out.
Future work:
You may want to look into having multiple outputs for your different targets: web worker, audio worklet, and library code. They really aren't supposed to load the same exact file. This would negate the need for the first plugin (that ignores certain files), and it might make things more manageable and efficient.
More Reading:
Loading files as raw strings (you can see/use TrySound's plugin here; it's a simple plugin)
Loading strings as blob or data URLs (see https://stackoverflow.com/a/10372280/)
I found a way to solve the described problem, but there are still some open pain points, because the WebPack devs rather try to avoid vendor-specific expressions and prefer to rely on "recognizable syntax constructs" to rewrite the code as they see fit.
The solution does not work in a fully local environment, but it works together with NPM:
I am launching my worker now with /* webpackChunkName: "alphatab.worker" */ new Worker(new URL('@coderline/alphatab', import.meta.url)), where @coderline/alphatab is the name of the library installed through NPM. This syntax construct is detected by WebPack and triggers generation of a new special JS file containing a WebPack bootstrapper/entry point which loads the library for startup. So effectively it looks after compilation like this:
For this to work, users should configure WebPack to place the library in its own chunk. Otherwise the library might be inlined into the webpack-generated worker file instead of being loaded from a common chunk. It would also work without a common chunk, but that would defeat the benefits of even using webpack, because it duplicates the code of the library just to spawn it as a worker (double loading time and double disk usage).
Unfortunately this only works for Web Workers for now, because WebPack has no support for Audio Worklets at this point.
Also there are some warnings about cyclic dependencies produced by WebPack, because there seems to be a cycle between chunk-alphatab.js and alphatab.worker.js. In this setup it should not be a problem.
In my case there is no difference between the UI thread code and the one running in the worker. If users decide to render to an HTML5 canvas through a setting, rendering happens in the UI thread, and if SVG rendering is used it is off-loaded to a worker. The whole layout and rendering pipeline is the same on both sides.
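For readability, here is the spawn call described above as it would sit in application code (a sketch; the package name and chunk name are the ones used in this answer):

// webpack 5 statically detects the new Worker(new URL(...)) construct and
// emits a separate bootstrapper file for the worker entry.
const worker = /* webpackChunkName: "alphatab.worker" */ new Worker(
  new URL('@coderline/alphatab', import.meta.url)
);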

Best way to organise back end (Node) processes and front end (Vue / Nuxt) that uses part of these processes

I have a few tiny node apps doing some basic stuff like checking things and filling my DB (triggered by CRON). In my Nuxt app I will need to use part of what is inside these Node apps. What is the best way to organise it: do I keep them separate or do I fuse them with my Nuxt app? Do I copy what I need from these node apps and adapt it in Nuxt, do I use the server-side middlewares to add those node apps inside my Nuxt project, or do I create my Nuxt app with express and use /server/index.js to add my node apps there in some way?
Let's take an example. You have a node app that checks some data every hour and fills a DB. In the Nuxt app you have a page showing the content of the DB, but you first want to be sure that nothing new has to be added to the DB since the last hour. The code I would have to run in the Nuxt page is the same code as the Node app (check and fill the DB). It looks a bit stupid (and hard to maintain and update) to have the same code twice in two places. But I'm not sure how I would have this node app running every hour in my Nuxt app. Any advice would be greatly appreciated.
Here is a control flow that may help your thinking about designing this CRON microservice. There are many ways to do this, and this may not be the best approach, but I think it will work for your use case.
Have a services directory in your server (can also be called middleware).
Include a cron.js file that contains the logic for the task runner.
Within cron.js, issue a scheduled message from node to Vue, such as a JSON payload like res.json({ message: 'checkNewData' }). This would be something called a "server-sent event". A server-sent event is simply an event that happens autonomously, on a defined schedule, within the node environment.
In Vue, at the root-level App.vue, use the created() hook to register an event listener that listens for the server-sent "checkNewData" JSON object. When this listener hears the JSON response, it should trigger Vue to check the appropriate component, package up any new data, and send it down to the DB in a POST or PUT HTTP call, depending on whether you're adding new data or replacing the old with the new.
This configuration would give you a closed-loop system for automatic updates. The next challenge would be making this operation client-specific, but that is something to worry about once you got this working. Again, others may have a different approach to this, but this is how I would handle the flow.
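A minimal sketch of what the server half of that flow could look like, assuming an Express server and the node-cron package; the /events route and the checkNewData keyword are placeholder names:

// services/cron.js — scheduled task runner pushing a server-sent event to Vue
const cron = require('node-cron');

const clients = []; // open SSE connections from the Vue app

// Mount the SSE endpoint; App.vue connects with: new EventSource('/events')
function registerEvents(app) {
  app.get('/events', (req, res) => {
    res.set({
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive'
    });
    res.flushHeaders();
    clients.push(res);
    req.on('close', () => clients.splice(clients.indexOf(res), 1));
  });
}

// Every hour, tell all connected clients to check for new data
cron.schedule('0 * * * *', () => {
  const payload = `data: ${JSON.stringify({ message: 'checkNewData' })}\n\n`;
  clients.forEach((res) => res.write(payload));
});

module.exports = { registerEvents };

On the Vue side, the created() hook in App.vue would then do something like new EventSource('/events') and react to the message in its onmessage handler.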

Call Python script from local JavaScript App

So I've looked around quite a bit now and wasn't able to find quite the use case I think I'm confronted with.
For some background:
I'm fairly new to JavaScript and have never had to call any other program/script from it. Now I did develop a Python script that pulls some data from online sources, formats it and dumps it into JSON files. In order to display this data in a proper way I figured I would use Electron.
While handling the JSON files is completely fine (it would be quite sad if it wasn't, I guess), I need to be able to call the Python script that updates the data from my Electron UI. As everything is local, I hoped that there would be an easier way than setting up some server for the Python script to run on, just to be able to trigger its execution from my desktop app. This is especially true as I don't even need to get or process any returns; I just want to trigger the execution of that script.
So the question now is: is there such an "easy" way to execute Python scripts from an Electron/JavaScript based locally saved Desktop app?
Thanks in advance for any answers!
Like a previous commenter mentioned, you should be able to follow this SO answer in Node.js (which is what Electron uses).
To expound upon that answer just a bit, I'd recommend using the built-in Python json module to dump JSON to standard out (just printing out the JSON string), and then using the built-in Node.js JSON utility to parse that JSON string into a JavaScript object for use in your application.
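A minimal sketch of that flow on the Node side, assuming the Python script prints a single JSON document to stdout (the script path and data shape are placeholders):

// Run the Python script and parse its stdout as JSON
const { execFile } = require('child_process');

execFile('python', ['path/to/script.py'], (error, stdout, stderr) => {
  if (error) {
    console.error('Python script failed:', stderr || error);
    return;
  }
  const data = JSON.parse(stdout); // plain JS object, ready to use in the Electron UI
  console.log(data);
});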
Alright, so after being redirected to this thread, which I can only recommend reading through if you have an interest in this issue, I took their solution and altered it a little, which took me a bit of time due to some confusion that I would now like to spare you!
To re-introduce the issue: the goal is to call a Python script from a JavaScript/Electron-based UI. The Python script only needs to be executed, but it needs to happen onClick, as it is an update function.
Now this is the code I used:
const exec = require("child_process").exec;

function triggerUpdateAndRefreshFooter() {
  exec('python relativePathToScript/update.py',
    function(error, stdout, stderr) { // callback function, receives script output
      refreshFooter(); // don't use the output, but I could here
    }
  );
}
I did have some issues figuring out all of that const stuff in the other thread, as well as having to guess whether I could just execute my script in a separate function. In the end this did work!
I hope this was helpful!

Can AdonisJs be used for REST APIs?

Sorry for a nooby question. I'd ask it anyway!
I am playing around with AdonisJs. I understand it is an MVC framework, but I want to write REST APIs using the said framework. I could not find much help on the internet.
I have two questions:
Does the framework support writing REST APIs?
If yes to 1. then what could be the best starting point?
1. I've created 3 API projects with AdonisJS and think it's ideal for a quick setup. It has many functions already included from the start, supports database migrations, and is pretty well documented in general.
You can create routes easily with JSON responses:
http://adonisjs.com/docs/3.2/response
Route.get('/', function * (request, response) {
  const users = yield User.all()
  response.json(users)
})
Or add them to a controller, and even fairly easily add route authentication with token protection (all documented):
Route.post('my_api/v1/authenticate', 'ApiController.authenticate')

Route.group('api', function () {
  Route.get('users', 'ApiController.getUsers')
}).prefix('my_api/v1').middleware('auth:api')
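For completeness, a rough sketch of what the ApiController referenced above could look like in AdonisJS 3.x (the User model and the method body are assumptions, not part of the original answer):

// app/Http/Controllers/ApiController.js
'use strict'

const User = use('App/Model/User')

class ApiController {

  * getUsers (request, response) {
    const users = yield User.all()
    response.json(users)
  }

}

module.exports = ApiController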
2. Take a look at the official tutorial, you can probably finish it in about half an hour. http://adonisjs.com/docs/3.2/overview#_simplest_example
Start with defining some routes and try out echoing simple variables with JSON and just in regular views.
Move the test logic to Controllers
Read a bit more about the database migrations and add some simple models.
Don't forget the Commands and Factory, as you can easily define test data commands there. This will save you a lot of time in the long run.
Just keep in mind that you need a server with Node.js installed to run the system in production (personally I keep it running using a tool like forever).
In order to create just a RESTful API you can use:
npm i -g @adonisjs/cli

# Create a new Adonis app
adonis new project-name --api-only

read or write file to filesystem with angular

I want to read/write a file with Angular from/to my HDD. I normally use the node module "fs". What's the best practice to combine this module with Angular to use it in node-webkit?
Thanks!
Edit: (I can't use require in Angular to load npm modules. Any ideas?)
.service("WindowService", WindowService);

function WindowService() {
  this.gui = require('nw.gui');
}
I wrote this example, in case you have any questions as to how things are working. You may also want to check out this as well, which the former example uses to work outside of the browser.
But since the main question is regarding the error involving the require function, I will elaborate. require is a function implemented by the Node runtime; it was added because there was initially no way built into JS to import code from fileA into fileB. So when you're in the browser you don't need to require anything, just make sure the file is added to the HTML, i.e.: <script src="my/file.js"></script>. But if you really want to use require in the browser, just use Browserify.
I have similar experiences to you. I usually wrap modules into services and use them as normal Angular services with DI.
This makes the code more readable and maintainable. Also, when you want to change the node module, you change it in one place.
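For example, wrapping the fs module in a service could look roughly like this (the module and service names are made up; node-webkit exposes Node's require at runtime):

angular.module('app')
  .service('FileService', function () {
    var fs = require('fs'); // Node's fs, available inside node-webkit

    // Read and parse a JSON file from disk
    this.readJson = function (path) {
      return JSON.parse(fs.readFileSync(path, 'utf8'));
    };

    // Serialize and write a JSON file to disk
    this.writeJson = function (path, data) {
      fs.writeFileSync(path, JSON.stringify(data, null, 2));
    };
  });

Any controller can then inject FileService instead of calling require directly.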
For your project, I would look at:
socket.io > for broadcasting over websockets and updating your Angular scope...
chokidar > better than plain fs for watching files, with fewer bugs than fs
