I have a problem.
My problem is that every time I make changes to my node.js server code, I have to restart the entire thing to see the results.
Instead of this, I remember seeing something about being able to pipe Chrome directly into the server's source code and "hot edit" it. That is to say, changes to the code take effect immediately and the server keeps running.
I hope that I am being clear.
It would be a real time saver to directly edit code (especially for small things) while the server is actually running and have it instantly take effect.
Does anyone know how to do this?
See my answer to my own question, which covers this: https://stackoverflow.com/a/11157223/813718
In short, there's an npm module named forever which does what you want. It can monitor the source files and restart the node instance when a change is detected.
I don't quite understand the pipe-to-chrome part... But there is a node module which listens for changes to user-defined files and restarts the server automatically, so you can edit your server files and see the changes without restarting node.js yourself:
https://github.com/isaacs/node-supervisor
Yes, there is such thing.
Just take advantage of JavaScript's so-called "evil" eval() function.
(You might need something like a WebSocket connection to alert the server about the change.)
I am halfway through implementing the same feature, but there are a lot of things to consider if you want to preserve the server's state (the current values of variables, for example).
About the pipe-to-chrome part:
Maybe this is what you meant?
https://github.com/node-inspector/node-inspector/wiki/LiveEdit
Related
In an assignment, I was asked what is the first thing to do before testing changes made in JavaScript after deploying an application. My answer was to clear the browser cache, because cached content may mask the new changes. I want to know whether that is a valid and good answer, or whether there are other things to do first. Thank you
A word of advice: not all users understand the principle of clearing the browser cache to pick up changes. I suggest you version your JavaScript file name (bump it on each release) instead of counting on users to clear their cache.
But before deploying a modification, I advise you to run some tests. Several types of test exist; you can look at unit tests, for example.
This is a vague question, but I personally think about how a real user would use the system when I test it. Therefore, would I trust the user to clear the cache/cookies etc. every time they used the system? No. I would expect to be able to simply close and reopen the browser, or just refresh the page. As mentioned in another answer, 'cache busting' should be handled by the developer during the build process, for example by hashing the JavaScript bundle.
I used to be in the habit of opening dev tools all the time and relying on the 'disable cache' toggle, but after a few times of getting caught out by real users seeing different behaviour than what I was seeing during development, I moved to ensure that the bundles weren't cached in both dev and prod.
I would like to open https://krunker.io/ through Puppeteer. However, whenever I open up Krunker.io through Puppeteer, it blocks me, saying "Puppeteer Detected". Is there an easy workaround to this?
One answer I got was this:
You need to make a matchmaker seek game request to get a websocket URL, and then you connect to it and simulate being a client
As I started coding in Node.js and JavaScript just under 5 weeks ago, I am not sure how to do this. (I asked, and he said "just do it". It's probably not that hard; I am just not that good at Node yet.) Here are all of the answers I came across:
i just made my rce code in assembly and then link it with chrome executable and then using a hex dumper replace the rce function call bytes with a reference pointer to my own code.
also you need to make sure your rce code has the correct signature otherwise the rebuilt chrome executable will crash as soon as it reaches your rce runtime code
you can also append a EYF_33 byte after the ACE_26 bytes to grant GET requests to make it possible to create 2 PATCH requests at a time with different structures, making it possible to create a fully independent websocket connection to the krunker api and send more AES authorization messages at a time
Not sure what this means ¯\_(ツ)_/¯.
Is there a simple way to do this, or better yet, a step-by-step tutorial on how to do this (on a mac)?
Thanks :)
In most cases, detection works via the user agent and similar fingerprint signals. Put simply, you can use puppeteer-extra with the puppeteer-extra-plugin-stealth plugin to mask your user agent and other tells.
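A minimal sketch of that setup, assuming puppeteer, puppeteer-extra and puppeteer-extra-plugin-stealth are installed; whether this gets past krunker.io's specific detection is not guaranteed:

```javascript
// puppeteer-extra wraps puppeteer and lets plugins patch the browser fingerprint
const puppeteer = require("puppeteer-extra");
const StealthPlugin = require("puppeteer-extra-plugin-stealth");

puppeteer.use(StealthPlugin()); // masks navigator.webdriver, UA hints, etc.

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto("https://krunker.io/");
  // ... interact with the page here ...
})();
```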
I'm developing an app that should receive a .CSV file, save it, scan it, and insert data of every record into DB and at the end delete the file.
With a file of about 10,000 records there are no problems, but with a larger file the PHP script runs correctly and all data are saved into the DB, yet ERROR 504: The server didn't respond in time is printed.
I'm scanning the .CSV file with the php function fgetcsv();.
I've already edited settings in the php.ini file (max execution time = 120, etc.) but nothing changed; after 1 minute the error is shown.
I've also tried using a JavaScript function to show an alert every 10 seconds, but the error is still shown.
Is there a solution to avoid this problem? Is it possible to send some data from the server to the client every few seconds to avoid the error?
Thanks
It's typically when scaling issues pop up that you need to start evolving your system architecture, and your application will need to work asynchronously. The problem you are having is very common (some of my team are dealing with one as I write this), but everyone needs to deal with it eventually.
Solution 1: Cron Job
The most common solution is to create a cron job that periodically scans a queue for new work to do. I won't explain the nature of the queue since everyone has their own, some are alright and others are really bad, but typically it involves a DB table with relevant information and a job status (<-- one of the bad solutions), or a solution involving Memcached, also MongoDB is quite popular.
The "problem" with this solution is ultimately again "scaling". Cron jobs run periodically at fixed intervals, so if a task takes a particularly long time jobs are likely to overlap. This means you need to work in some kind of locking or utilize a scheduler that supports running the job sequentially.
In the end, you won't run into the timeout problem, and you can typically dedicate an entire machine to running these tasks so memory isn't as much of an issue either.
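The cron-style pattern above can be sketched in a few lines: a queue polled at a fixed interval, with a lock so overlapping runs don't process the same job twice. This is an in-memory stand-in for the DB table the answer describes; all names are illustrative:

```javascript
var queue = [];      // pending jobs (a DB table in a real system)
var running = false; // the "lock" that prevents overlapping runs

function enqueue(job) {
  queue.push(job);
}

function runPendingJobs() {
  if (running) return; // a previous tick is still working: skip this one
  running = true;
  try {
    while (queue.length > 0) {
      var job = queue.shift();
      job(); // in a real system: import the CSV rows, update job status, etc.
    }
  } finally {
    running = false;
  }
}

// a cron would trigger this periodically; here setInterval would play that role:
// setInterval(runPendingJobs, 60 * 1000);
```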
Solution 2: Worker Delegation
I'll use Gearman as an example for this solution, but other tools implement standards like AMQP, such as RabbitMQ. I prefer Gearman because it's simpler to set up, and it's designed more for work processing than for messaging.
This kind of delegation has the advantage of running immediately after you call it. The server is basically waiting for stuff to do (not unlike an Apache server); when it gets a request, it shifts the workload from the client onto one of your "workers", which are scripts you've written that run indefinitely, listening to the server for work.
You can have as many of these workers as you like, each running the same or different types of tasks. This means scaling is determined by the number of workers you have, and this scales horizontally very cleanly.
Conclusion:
Crons are fine, in my opinion, for automated maintenance, but they run into problems when they need to work concurrently, which makes running workers the ideal choice.
Either way, you are going to need to change the way users receive feedback on their requests. They will need to be informed that their request is processing and to check back later for the result; alternatively, you can periodically track the status of the running task to provide real-time feedback to the user via ajax. That's a little tricky with cron jobs, since you will need to persist the state of the task during its execution, but Gearman has a nice built-in solution for doing just that.
http://php.net/manual/en/book.gearman.php
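The periodic status check described in the conclusion can be sketched as a small client-side polling loop. Here checkFn stands in for an ajax call (e.g. one that GETs a hypothetical /task-status endpoint); the "done" status string is illustrative:

```javascript
// poll the server at a fixed interval until the long-running task is finished
function pollStatus(checkFn, onDone, intervalMs) {
  var timer = setInterval(function () {
    Promise.resolve(checkFn()).then(function (status) {
      if (status === "done") {
        clearInterval(timer); // stop polling once the task reports completion
        onDone();
      }
    });
  }, intervalMs);
}
```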
Maybe a noob question, but I am really confused.
I'm trying to make an online quiz where the user has to choose between 4 possible answers. But I got stuck, well, because one question is really bothering me. So:
I have a JavaScript function which changes the background color of a button and disables it when the user clicks on a wrong answer. And the right-answer button (submit) calls a non-existent function, just to confuse people if they decide to take a quick look at the source code. Everything went great, but then I started to think: what if a user decides to take a deeper look at my source code? He would be able to see my JavaScript functions and could probably figure out the right answer without really playing.
Is there any way I could hide the source (as I understand, it's not possible with JavaScript), but maybe I could use something else? What are other developers using in these situations? Any suggestions? :)
If you want to be really safe, you have to do it on the server side (e.g. in PHP). What you can do to make things harder for cheaters is obfuscate your JavaScript code with one of the various JS obfuscators, but that can never bring full security.
The reason users can cheat is because the answer is in the JavaScript code.
Solution: Don't put the answer in the JavaScript code. Keep it on a server instead.
Something like this: Whenever the user changes their answer to the question...
JavaScript sends answer to server.
Server replies with YES or NO.
JavaScript displays reply to user.
The user never sees the answer, only whether it was true or false.
tl;dr Learn AJAX.
This is a good question and one that every web developer should be asking themselves...
This is a major problem that we as web developers face daily. Our JavaScript is viewable and sometimes even editable. Trust NOTHING that comes from the client!
What I usually do is
Generate some type of unique hash on the server
Embed this hash into the HTML and JavaScript.
Send this hash along with any request to the server.
Validate the hash on the server by recreating the hash again and making sure it matches the hash that was sent.
The hash could be a concatenation of the item's id, a session id, some salt and any other identifiable piece of data that you can reconstruct to validate the request.
To complement the other answers: given that learning how to set up a server could cost you a few days, here is a 5-minute guide to setting up a simple node.js server that will hide your answer.
Install node.js.
Create a file called server.js with the contents below.
var express = require("express"),
    app = express();

app.get("/first_answer", function (req, res) {
    res.send("b");
});

app.listen(8080);
Open the terminal, type
cd path_to_the_folder_you_placed_the_file
npm install express
node server.js
This will install the express.js module and run the server.
Now, on your site, using jQuery, just request the answer from the server. Example using jQuery:
var user_answer = ...
$.get("http://localhost:8080/first_answer", function (answer) {
    if (answer == user_answer)
        alert("you got it right!");
});
For more complex examples and to get the best use of the server, just read the express.js documentation. (;
I'm a newcomer, and if the question is too easy I apologize for that.
Assume I want to develop a classical online judge system; obviously the core part is to:
get the user's code into a file
compile it on the server
run it on the server (with some sandboxing to prevent damage)
wait for the program to exit by itself, then check the answer
or catch the signal if the program crashes.
I wonder if it's possible to do all of this using Node.js, and how to do the sandboxing. Is there any example of the compile-sandbox-run-abort-check flow?
additional:
would it be more convenient to develop such a system using Python?
Thanks in advance.
Most of these steps are standard: create a file, run a system call to compile something, muck around with I/O. I think any language can do them, except for the very crucial step of "run in a sandbox." I am aware of several solutions for sandboxing:
use OS commands to restrict or remove abilities (chroot, setrlimit, file system permissions in linux)
remove all dangerous functionality from the language being graded
interrupt system events
run the sandbox inside a virtual machine.
This list is probably not exhaustive. The system I am involved with, http://cscircles.cemc.uwaterloo.ca, uses option #1. Again, most of the work is done in system calls, so I can't imagine that one language is much better than another. We use PHP for the high-level stuff and C to do the sandboxing. Does that help answer your question?
To approximate the sandbox, it would be fairly easy to run the submitted code inside a closure that reassigns all of the worrisome calls to NaN;
for instance, if the code executes inside a closure where eval = NaN.
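A minimal sketch of that closure idea. Be warned that this only shadows the names you list and is trivially escapable (for example via ({}).constructor.constructor, which yields the real Function), so it is nowhere near a real sandbox:

```javascript
// wrap the submitted code in a function whose parameters shadow the
// dangerous globals, then pass NaN for each of them
function runDefanged(code) {
  var wrapper = new Function("eval", "Function", "require", "process", code);
  return wrapper(NaN, NaN, NaN, NaN);
}
```

Inside the wrapper, eval, Function, require and process all refer to NaN, so naive attempts to use them fail; determined code can still reach the real globals through constructors, which is why the OS-level options above exist.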