Cross domain source-maps (JavaScript)

Let's take the following concrete example:
A JavaScript "kernel" providing some super nice service to third-party applications is distributed (as a JS bundle) at http://myJSKernel.com.
Domain-specific applications can download, configure and use the kernel's service from http://myJSKernel.com. The consuming domains could be anything: http://consumer1.com, http://consumer2.org, and so on.
In this scenario, while developing, I never execute the kernel alone; instead I build smaller "domain-specific test applications" that depend on and make use of the kernel.
Since the kernel is a JS bundle that has been babelified, uglified, and transformed in every way you can imagine, it is extremely hard to debug: the error messages do not point to the correct line numbers, and so on.
Therefore I asked Webpack to also generate a source-map along with the kernel's bundle. So I have: "kernel.js" and "kernel.js.map", both available at http://myJSKernel.com
Now, I'm not sure how the domain-specific applications (http://consumer1.com, http://consumer2.org...) can load "kernel.js.map" when there is an error in the kernel ("kernel.js").
I hope my question is clear enough.
Thanks in advance for the help.
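For what it's worth, a sketch of the usual setup (assuming webpack and the filenames above): the consuming pages never load the map themselves; the developer's browser devtools do. With devtool: 'source-map', webpack appends a comment like //# sourceMappingURL=kernel.js.map to the end of kernel.js, and devtools resolve that URL relative to the script's own URL, i.e. against http://myJSKernel.com, regardless of which consumer domain embeds the script:

```javascript
// webpack.config.js -- minimal sketch; entry/output paths are assumptions.
module.exports = {
  entry: './src/kernel.js',
  output: {
    filename: 'kernel.js',
    path: __dirname + '/dist',
  },
  // Emits kernel.js.map and appends
  //   //# sourceMappingURL=kernel.js.map
  // to kernel.js. DevTools fetch the map from the script's own origin
  // only while DevTools are open; end users never download it.
  devtool: 'source-map',
};
```

If the map fails to load, check that http://myJSKernel.com actually serves .map files; some browser setups also require permissive CORS headers on the map response.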

Related

How to prevent access to my website via JavaScript

I was wondering if it would be possible to create a JavaScript function to check a condition, and if that is true, deny access to the code. Right now I am checking the user agent, and if it doesn't meet the given criteria I delete the HTML tag. However, if they go to the network tab they can still see the GET requests and responses for my code.
This is a website running on localhost because it's an Electron app, by the way.
I thought maybe I could issue a 403 error, but I'm not sure if that's possible via JS.
Thanks
In an Electron app, your code is still JavaScript source code. You can obfuscate it and/or put it in an ASAR archive to make it harder to read and find, but the code is still there and accessible to anyone who wants to go to the effort. (For instance, if you use VS Code, you can see the source in the resources/app/out directory and its subdirectories.)
You can make it even harder to find and understand the source if you're willing to put in more work. V8, the JavaScript engine Node.js uses, has a feature called startup snapshots: You run V8 and have it load your script (your obfuscated code), take a heap snapshot (after GC), and write it to a file. Then you specify the heap snapshot when loading V8, and it just hauls in the snapshot instead of reading and parsing code. The Atom team have done this on top of Electron. In their case the motivation was startup performance, not hiding the source code, but it has the effect of making your code even harder to find.
Note I said harder, not impossible. At the end of the day, if you want the code to execute on the end-user's computer, that code is going to be accessible to the end user. This is true even of compiled applications, although of course in that case a lot of organizational information that helps a human understand the code is lost in compilation, which happens less with obfuscated code. (But a good obfuscator makes code extremely opaque to human beings.)

Node modules undefined in my webpack bundled script

I am making a web app which, based on a jpeg picture, recognizes characters and renders them in an interactive interface for the user - this includes some async code. There are 4 JS script files, which all require npm modules, and an HTML view.
In order to test the app client-side, I have decided to bundle the scripts together.
Running the bundle shows the following error message:
Uncaught ReferenceError: require is not defined
List of my npm modules whose code returns this error at run time:
isexe: requires fs
destroy: fs
tesseractocr: child_process, fs
I have tried:
browserify my scripts into a bundle, but I read that it would not work with async functions;
webpack the scripts into a bundle, but Node modules like fs and child_process are returning 'undefined';
adding a specific Node module, child-process-ctor, to force child_process to be included
Alas, the same error message is returned.
Questions:
Is bundling the scripts the right approach?
Is the problem that webpack does not transpile fs and child_process correctly?
Which possible solutions should I consider?
Thanks all. This is my 1st question on SO -- any feedback is most welcome!
PS: This might be redundant with Using module "child_process" without Webpack.
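As a sketch of the webpack-side workaround (the node / resolve.fallback keys are real webpack options; whether they help depends on the module): fs and child_process simply have no browser equivalent, so bundlers can at best stub them out, and anything that genuinely needs them (e.g. tesseractocr spawning a tesseract binary) has to stay on the server. For OCR in the browser, a browser-oriented library such as tesseract.js is the usual substitute.

```javascript
// webpack.config.js -- sketch: stub out Node core modules that cannot
// exist in a browser, so bundling succeeds. Code paths that actually
// call fs/child_process must still be avoided client-side.
module.exports = {
  entry: './src/main.js',
  output: { filename: 'bundle.js' },
  // webpack <= 4: replace the modules with empty stubs
  node: {
    fs: 'empty',
    child_process: 'empty',
  },
  // webpack >= 5 moved this to resolve.fallback:
  // resolve: { fallback: { fs: false, child_process: false } },
};
```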
Okay, this answer is a follow-up to my comments, which answer the question more directly. However, here I'll go into more detail than is probably necessary, but it will thoroughly answer what you asked. Plus it's educational and I'd say it's pretty fun once you start really digging into it :D
To start at the beginning. As the internet in its early days became more advanced the need for a type of "front end logic" increased and Netscape's response to this demand was to birth a competitive, cutting edge programming language in record time.
And by record time I mean 10 days, and by competitive I mean barely functional.
That's right Javascript was born in 10 days (literally). As you can imagine it was a pretty poor language, but it worked well enough that people started using it.
Because it was the programming language of the internet, and because of how fast the internet grew, enough people started to use it that the thought of removing it became scary.
If you changed it you would destroy backward compatibility with millions of websites. The other idea would be to keep it, but also implement a new standard. However it would be hard to justify this because javascript already took a lot of work to upkeep, upkeeping multiple standards would be a nightmare (cough... flash cough).
JavaScript was easy enough for "new" programmers to learn, but the problem was that JavaScript is only one language in a world where PHP, Ruby, MySQL, Mongo, CSS and HTML all rule as dominant kings in their respective kingdoms.
So someone thought it was a good idea to move javascript to the server and thus node.js was born.
However for javascript to mean anything on the server it had to be able to do things that you wouldn't want it to be able to do in your browser. For example, scan your hard drive and edit your files.
If every website you visit could start scanning and uploading everything in your system well....
However if your server software can't edit or read files you need it to well....
You get the idea. It's the same language, but because of security issues node.js has some differences. Mainly the modules it's allowed to use.
Now for the fun part: can you run node.js client side in a browser? Technically yes. In fact, now that we're dumping entire operating systems into a subset of JavaScript called asm.js, there really isn't anything JavaScript can't do with enough processing power.
However, even if you dump the entire node.js runtime (which is built around V8, the same JavaScript engine Chrome uses) into asm.js, you would still have the same security limitations placed by the "host" browser, and so your modules could only run within the sandbox the browser provides.
So it is technically just a browser within another browser, running at half the speed with the same security limitations.
Is it something I would recommend doing? Of course not.
Is it something that people haven't already tried before? Of course not.
So I hope that helps answer your question.

How to use fonts in the node.js backend?

Background:
I am building a node.js-based Web app that needs to make use of various fonts. But it only needs to do so in the backend since the results will be delivered as an image. Consequently, the client/browser does not need access to the fonts at all in my case.
Question:
I will try to formulate the question as objectively as possible:
What are the typical options to provide a node.js backend with a large collection of fonts?
The options I came up with so far are:
Does one install these hundreds or thousands of fonts in the operating system of the (in my case: Ubuntu) server?
Does one somehow serve the fonts from a cloud storage such as S3 or (online) database such as a Mongo DB server?
Does one use a local file system to store the fonts and retrieve them?
...other options
I am currently leaning towards Option 1 because this is the way a layman like me does it on a local machine.
Without starting a discussion here, where could I find resources discussing the (dis-)advantages of the different options?
EDIT:
Thank you for all the responses.
Thanks to these, I noticed that I need to clarify something. I need the fonts to be used in SVG processing libraries such as p5.js, paper.js, raphael.js. So I need to make the fonts available to these libraries that are run on node.js.
The key to your question is
hundreds or thousands of fonts
Until I took that in, there was no real difference between your methods. But if that number is correct (kind of mind-boggling, though) I would:
not install them in the OS. What happens if you move servers without an image? Or change OS?
Local file system would be a sane way of doing it, though you would need to manually keep track of all the file names and paths in your code.
MongoDB - store the file names + paths in a collection, and store the actual fonts on your file system.
In the event of moving servers you would have to pick up the directory where all the actual files are stored and the DB where you hold the file names + paths.
If you want you can place it all in a MongoDB but then that file would also be huge, I assume - that is up to you.
Choice #3 is probably what I would do in such a case.
If you have a decent enough server setup (e.g. a VPS or some other VM solution where you control what's installed) then another option you might want to consider is to do this job "out of node". For instance, in one of my projects where I need to build 175+ as-perfect-as-can-be maths statements, I offload that work to XeLaTeX instead:
I run a node script that takes the input text and builds a small but complete .tex file
I then tell node to call "xelatex theFileIJustMade.tex" which yields a pdf
I then tell node to call "pdfcrop" on that pdf, to remove the margins
I then tell node to call "pdf2svg", which is a free and amazingly effective utility
Then as a final step mostly to conserve space and bandwidth, I use "svgo" which is a nodejs based svg optimizer that can run either as normal script code, or as CLI utility.
(some more details on that here, with concrete code here)
Of course, depending on how responsive a system you need, you can do entirely without steps 3 and 5. There is a limit to how fast we can run, but as a server-side task there should never be the expectation of real-time responsiveness.
This is a good example of remembering that your server runs inside a larger operating system that might also offer tools that can do the job. While you're using Node, and the obvious choice is a Node solution, Node is also a general purpose programming language and can call anything else through spawn and exec, much like python, php, java, C#, etc. As such, it's sometimes worth thinking about whether there might be another tool that is even better suited for your needs, especially when you're thinking about doing a highly specialized job like typesetting a string to SVG.
In this case, LaTeX was specifically created to typeset text from the command line, and XeLaTeX was created to do that with full Unicode awareness and clean, easy access to fonts both from file and from the system, with full OpenType feature control, so would certainly qualify as just as worthwhile a candidate as any node-specific solution might be.
As for the tools used: XeLaTeX and pdfcrop come with TeX Live (installed using whatever package manager your OS uses, or through MiKTeX on Windows, though I suspect your server doesn't run on Windows), pdf2svg is freely available on GitHub, and svgo is available from npm.

JS Compiler / Package Manager for use with version control

I'm trying to grasp a bit of an idea here. And hoping someone can help clarify the best practice.
How do teams or pairs approach using a build system for JavaScript like grunt.js?
I really want to break our large javascript files into smaller pieces and converting to AMD/Require isn't an option right now.
It seems the easiest way is to concatenate and minify to a master file. And we are using version control (SVN).
So I'm wondering what the best practice is here?
Are we asking for constant conflicts with the production file? How do other teams approach this?
Hope I made my question clear enough.
Thanks in advance...
We recently faced a similar dilemma at my organization. Using AMD or RequireJS wasn't an option due to the sheer amount of legacy JavaScript code we have.
We ultimately went with grunt and came up with a "build" task that concatenates and minifies. Then there's a completely separate "deploy" task that gzips the files and uploads them to Amazon S3.
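Such a build task might look roughly like this (a sketch assuming the grunt-contrib-concat and grunt-contrib-uglify plugins; the file paths are made up):

```javascript
// Gruntfile.js -- concatenate the smaller pieces into one master
// file, then minify it. Output goes to build/, not into src/.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['src/js/**/*.js'],   // the smaller source pieces
        dest: 'build/app.js',      // the generated master file
      },
    },
    uglify: {
      dist: {
        src: 'build/app.js',
        dest: 'build/app.min.js',
      },
    },
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['concat', 'uglify']);
};
```

The build/ output directory is then kept out of version control (e.g. via svn:ignore), which is what prevents the generated master file from ever conflicting.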
We don't check in our concatenated/minified code to source control. In general, it's a good operational practice to have separate build and deploy tasks for your source code. For larger development teams, the deploy/build process would traditionally be done in a CI tool that runs whenever someone commits to SVN/git.
In your case, a simpler arrangement would be to just manually deploy code from your development machine instead of having it automated from a CI tool. The problem with this setup is that it's easy to run into conflicts with other team members.
That said, there is a growing number of open-source (Jenkins) or cloud-hosted (CircleCI) tools that might be worthwhile for you to look into.
Do not commit the output. Use a CI tool like TeamCity to build and deploy. Only commit source files to source control.

Is it possible to develop an ACM online judge system using Node.js (or Python)?

I'm a newcomer, and if the question is too easy I apologize for that.
Assume I want to develop a classical online judge system; obviously the core parts are:
get the user's code into a file
compile it on the server
run it on the server (with some sandboxing to prevent damage)
wait for the program to exit by itself, then check the answer,
or catch the signal when the program crashes.
I wonder if it's possible to do all these things using Node.js, and how to do the sandboxing. Is there any example of the compile-sandbox-run-abort-check cycle?
additional:
is it more convenient to develop such a system using Python?
thanks in advance.
Most of these steps are standard --- create a file, run a system call to compile something, muck around with I/O --- I think any language should be able to do them, except the very crucial step of "run in a sandbox." I am aware of several solutions for sandboxing:
use OS commands to restrict or remove abilities (chroot, setrlimit, file system permissions in linux)
remove all dangerous functionality from the language being graded
interrupt system events
run the sandbox inside a virtual machine.
This list is probably not exhaustive. The system I am involved with, http://cscircles.cemc.uwaterloo.ca uses option #1. Again, most of the work is done in system calls so I can't imagine that one language is so much better than the other? We use php for the big-level stuff and C to do the sandboxing. Does that help answer your question?
To accomplish the sandboxing, a fairly easy approach is to run the code inside a closure that reassigns all of the worrisome globals to NaN.
For instance, if the code executes inside a closure where eval = NaN, any attempt to call eval throws instead of executing arbitrary code.
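A minimal sketch of that idea, and of why it is not sufficient on its own:

```javascript
// Shadow worrisome globals by passing them in as parameters set to
// NaN, so the untrusted code sees NaN instead of the real functions.
// WARNING: this is trivially escapable (e.g. via constructors or
// prototype chains such as [].constructor.constructor) and is NOT a
// real sandbox -- use OS-level isolation for an actual judge.
function runShadowed(code) {
  // Every name listed here is shadowed inside the untrusted code.
  const fn = new Function('eval', 'Function', 'require', 'process', code);
  return fn(NaN, NaN, NaN, NaN);
}

runShadowed('return isNaN(eval);'); // the untrusted code sees eval as NaN
```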
