Use service workers with Webpack dev-server inside Docker container - javascript

I'm trying to use service workers inside a web app using Webpack and Docker.
Everything I've built so far works well (service worker, webpack config, worker registration...).
My app runs inside a Docker container, and inside this container I can start my webpack build to create all my JS files.
Now I would like to use webpack dev-server and HMR with my service worker.
To do that I used https://github.com/oliviertassinari/serviceworker-webpack-plugin, which correctly adds a reference to my service worker in the manifest.json.
But when I access my application in the browser, every built asset is found except my worker.
I run my dev-server inside my Docker container (I use webpack-encore) with:
encore dev-server --hot --host 0.0.0.0 --port 8080
To load my assets the browser requests them on 0.0.0.0:8080, but my worker is registered from localhost:8000, so the request fails with a 404 error because the worker is located at 0.0.0.0:8080/sw.js instead of localhost:8080/sw.js.
I would like to know whether it is possible to fix this behavior and make my service worker work with webpack-dev-server in a Docker container.
I know service workers only listen within their scope, in my case localhost:8000/*, but the webpack-dev-server is on 0.0.0.0:8080.
This is why I'm asking whether it is possible to change this behavior to make it work, and whether someone has already had this problem.
Thanks

You need to expose port 8000 from your Docker container as well as port 8080; when a container runs multiple web services on different ports, each port has to be published.
docker run -p 8080:8080 -p 8000:8000
Official documentation
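As a side note, once both ports are published the worker still has to be requested from the same origin as the page, so registering it with a path relative to that origin is the usual approach. A minimal registration sketch, assuming webpack emits the worker as sw.js at the dev-server root:
// Minimal sketch: register the worker relative to the page's own origin.
// The path /sw.js is an assumption about where webpack emits the worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('service worker registered with scope', reg.scope))
    .catch(err => console.error('service worker registration failed', err));
}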

Related

Laravel Mix HMR Server Does Not Launch

Laravel Mix Version: 6.0.43
Node Version (node -v): 16.13.1
NPM Version (npm -v): 8.1.2
OS: Windows 10 21h2
Description:
THIS IS HAPPENING ON A FRESH NEW INSTALL OF LARAVEL AND MY OTHER PROJECTS
Running npm run hot changes the script tag sources from http://localhost/*/*.* to http://localhost:8080/*/*.*, HOWEVER I always get net::ERR_EMPTY_RESPONSE from localhost:8080. The HMR server doesn't launch at all. The terminal output of the command also has no mention of spinning up a new web server.
PS C:\Users\Eric Wang\Documents\GitHub\test-laravel-mix> npm run hot
● Mix █████████████████████████ emitting (95%)
emit
● Mix █████████████████████████ done (99%) plugins
WebpackBar:done
✔ Mix
Compiled successfully in 5.51s
Laravel Mix v6.0.43
✔ Compiled Successfully in 5336ms
┌─────────────┬──────────┐
│ css/app.css │ 47.6 KiB │
└─────────────┴──────────┘
webpack compiled successfully
Here's a picture of the browser failing to fetch the bundle files
Steps To Reproduce:
I am running Docker 4.5.1 using legacy Hyper-V.
I containerized Laravel and PHP BUT not the frontend and JS. I am running Laravel Mix on my main system.
Clone the fresh installation of Laravel from https://github.com/ericwang401/test-laravel-mix
Clone Laradock in the project folder using git clone https://github.com/laradock/laradock.git
cd into the Laradock folder and create the .env file with cp .env.example .env
Inside the .env file set PHP_VERSION=8.0 AND DO NOT EDIT THE MYSQL SETTINGS
Now edit the Laravel environment file
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=default
DB_USERNAME=default
DB_PASSWORD=secret
Start up the Laravel app in Laradock folder using docker-compose up -d nginx mysql
Enter the Docker container with docker-compose exec workspace bash
Install Composer dependencies BUT NOT NPM DEPENDENCIES YET: composer i
Now exit out of the Docker container with CTRL + D
Install NPM dependencies in the project root ON YOUR MAIN SYSTEM: npm i
On your main system, run npm run hot
Now go to http://localhost and IT SHOULD be a white screen
Check console logs and it should give net::ERR_EMPTY_RESPONSE when it tries to fetch the bundle files
REMEMBER: the backend is running inside Docker
The frontend (Laravel Mix) is running on the host system
This issue is happening on a FRESH project installation of Laravel 9 + Jetstream AND it's also happening on my other older projects like https://github.com/StratumPanel/Stratum-Panel
The HMR server is simply not launching.
I found out the issue. The problem was that the default port, 8080, which Laravel Mix HMR was using, couldn't be bound to. Webpack Dev Server didn't respond with any message about failing to bind to the port. To confirm this, I replicated the environment on my friend's PC and it too couldn't bind to port 8080, but this time it did report an error that the dev server couldn't bind to port 8080.
I fixed this issue by specifying
mix.options({
    hmrOptions: {
        host: 'localhost',
        port: 4206
    }
});
And it works! On both my friend's PC and my PC.
I used the exact same reproduction instructions on my friend's PC.
I spent way too long investigating this issue 😭
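For anyone hitting the same symptom, a quick way to check whether a given port can be bound at all is a small Node script; this is just a sketch, and the file name check-port.js is a placeholder:
// check-port.js - minimal sketch: try to bind the HMR port and report the result
const net = require('net');

const server = net.createServer();
server.once('error', (err) => console.error('port 8080 cannot be bound:', err.code));
server.once('listening', () => {
  console.log('port 8080 is free');
  server.close();
});
server.listen(8080, 'localhost');
If it prints EADDRINUSE, something else is already holding the port.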

Angular/CLI: How to change the port for auto reload?

As I understand it, when running ng serve the Angular app polls the dev server to find out if a reload is necessary. This server is expected to live on port 4200 (the default).
I run a dev environment with multiple Docker containers to keep things nice and isolated, so some of my Angular apps listen on different ports. This seems to lead to the following error message appearing in my browser console every couple of seconds:
zone.js:2744 GET http://localhost:4200/sockjs-node/info?t=1518533495440 net::ERR_EMPTY_RESPONSE
Where can I change the configuration to point the live reload client in the correct direction? angular-cli.json doesn't seem to do anything.
I found the solution. You have to start ng serve with the --public-host option (aliases: --live-reload-client). So I adapted my start command in the package.json from
"start": "ng serve --host 0.0.0.0"
to
"start": "ng serve --host 0.0.0.0 --live-reload-client http://localhost:4400"
The whole point is that I don't want to change the port of the dev server itself. It should run inside the Docker container on port 4200. But port 4200 is mapped in Docker to the outside port 4400 (in my case). I call http://localhost:4400 to run the app in my browser and the reload client should also call http://localhost:4400 for updates. But it called http://localhost:4200. With the --live-reload-client option everything works.
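For reference, the mapping described above corresponds to publishing the container's internal port 4200 as 4400 on the host, e.g. (the image name here is a placeholder):
docker run -p 4400:4200 my-angular-app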
You can configure the default HTTP port and the one used by the LiveReload server with two command-line options:
ng serve --host 0.0.0.0 --port 4201 --live-reload-port 49153
It seems --live-reload-port is no longer used in newer versions.
The live reload port is identical to the development server port,
so setting --port 4201 will use 4201 as the live reload port.
You can change the port using --port
ng serve --port 4201
Will start the server on port 4201
While running your application instance use this command:
ng serve --port 123
instead of:
ng serve
Changing it in package.json doesn't have any effect.

Debugging an Express.js server alongside Electron

I've successfully run an Express.js server alongside Electron as described here:
Run Node.js server file automatically after launching Electron App
The problem now is that there is no command-line output related to server activity. I simply run
electron .
in the project directory, and there is no other output related to the server.
Is there any way to get that server activity logged to the CLI, as when I normally run just node server.js?
Perhaps you can try using the node module concurrently. This allows you to run two commands at once and is commonly used in development with Electron.
Instead of getting the server and Electron to run from within the same file, separate them into a server file and an electron file. For example, one of my main development techniques is this:
concurrently "npm run server" "npm run start-app"
Which runs my hot-reload server, that my electron app connects to in development mode.
By using concurrently you can see the output of each process as well.
A good example of this technique in practice is the electron-react-boilerplate repository. If you're interested in this style of development I recommend you clone that repository and give npm run dev a try, to see how their development process works (whether you're using React or not).
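A minimal sketch of what the scripts section of package.json might look like with this approach (the script names and the server.js entry point are assumptions):
{
  "scripts": {
    "server": "node server.js",
    "start-app": "electron .",
    "dev": "concurrently \"npm run server\" \"npm run start-app\""
  }
}
With a layout like this, npm run dev starts both processes, and the Express server's logs show up alongside Electron's output in the same terminal.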

Run sails.js with least privileges in Production

I'm using Sails.js 0.10.5 on Node 0.10.33 on Ubuntu Trusty. I'd like to execute the node process as a non-root user with the least possible privileges in the production environment. I'm comfortable with the various options for binding to ports below 1024 but I'm more concerned with directory permissions.
Ideally, I'd prefer the node process only have write access to its log files and nothing else. It should only have read access to the directory containing app.js and below.
At the moment I have needed to grant write access to the ./.tmp directory and also to the ./views directory due to the grunt tasks that run at startup. I'd rather perform the grunt tasks at deploy time as a different user instead of at run-time. The sails www command appeared promising but I couldn't get the desired outcome.
Can someone please point me in the right direction for running Sails.js with zero write access to its assets, views, etc?
Use sails www to build static assets
chmod -R 440 all files (directories also need the execute bit, e.g. 550, so they can be traversed), so that your user and the webserver (group) can read the files.
Use nginx/apache to host a webserver on port 80/443 and proxy requests to sails (running on its own port or over a unix socket).
Run Sails using PM2 to keep it running and have it manage/collect logs (a minimal ecosystem sketch follows after this list).
Sails will lift, but will be unable to write its .tmp directory, which shouldn't even be necessary since all your static files will be routed to the www directory through nginx/apache.
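A minimal PM2 ecosystem sketch for that last step, assuming the Sails entry point is app.js; the file name ecosystem.config.js and the log paths are placeholders, and the log directory should be the only location the unprivileged user can write to:
// ecosystem.config.js - minimal sketch (name, entry point, and log paths are placeholders)
module.exports = {
  apps: [{
    name: 'sails-app',
    script: 'app.js',
    env: { NODE_ENV: 'production' },
    // keep logs in the one directory the unprivileged user may write to
    out_file: '/var/log/sails-app/out.log',
    error_file: '/var/log/sails-app/error.log'
  }]
};
Start it with pm2 start ecosystem.config.js.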
The simplest solution to me seems to be to separate the grunt tasks that need the elevated privileges into a separate file that you can call as a different user on deploy. Then Sails won't need to run those tasks itself and its directory can be read-only.
EDIT: I use PM2 with Apache as a proxy (with the WebSocket mod).
You can use a proxy like Apache to route from port 80 to other internal server ports based on the host.
This way you can run multiple apps on the same server.
PM2 has a lot of useful functions: viewing the logs of various apps in the terminal, restarting and logging crashed apps, running an app as a given user, showing app status, etc.
Pm2 link: https://github.com/Unitech/pm2
PM2 configs: https://github.com/Unitech/PM2/blob/development/ADVANCED_README.md#options

Web console for docker container

Situation
There is a data-only-container named: app-data
There is a copier worker container
Initially app-data was created and that's it; its status is 'stopped' / exited.
Then the copier worker copies some files into app-data's volume, for example a server.py
Okay this was done like this:
docker run --rm --volumes-from app-data /somelocation/server.py ~/www/
the copier takes 2 arguments: the file to copy and where to copy it
With that in hand, I can run an image named app/serve to serve the file in the volume like so:
docker run --volumes-from app-data -d -P app/serve
the image's entrypoint runs an HTTP server, and the workdir is the same as the data volume, so it'll run
Hell yeah it works, and the Python web app can be accessed on the host on a high port like 0.0.0.0:49124, because
the app/serve image exposes port 5000, which is used by the server.py script
I had the above working; you can focus on the part below.
So the app can run, huh? But I want to do more complex things than just running the app (I ran the app using the remote API). What if I want to run the app using a web console connected to a Docker container?
My idea of how it would be possible
use term.js
expose 2 ports e.g. (4000, 5000)
use port 4000, so that when term.js runs on port 4000 it'll be mapped to a high port for the client to use in the web console socket connection
use port 5000, so that whatever web application runs on port 5000 will be available on another high port, accessed like 0.0.0.0:49124
This is a prototype of how runnable/dockworker does this. I am still new to Node.js, which is why I cannot comprehend how they do it. Could there be a simpler, less-featured execution of this idea?
