Why aren't Web Workers used more? [closed] - javascript

Web Workers are a technology that I brush up against from time to time, whether as the subject of a blog post or a mention in a presentation.
During a more recent presentation I attended, the speaker said about web workers:
I'm not really sure why they aren't used more.
I realised, having thought about it, that for a technology with such obvious benefits and use cases, Web Workers seem to have had fairly slow, or at least narrow, adoption.
Is there some inherent issue with Web Workers that makes them less useful? Am I just looking in the wrong places for examples of their use? Or is it that JavaScript programmers in general are simply not used to creating multi-threaded applications?

The main reasons they're not used much (in my opinion):
Inertia. They're a relatively new tech, and people haven't taken the time to learn them yet. You went to a talk about them, which means you're ahead of the curve; there are a lot of people out there who haven't even heard the term 'web worker' yet, much less thought about coding with them.
Browser compatibility. Older browsers don't support them. Most people still need to support at least IE8 for their sites, so can't use tech like this yet.
Why bother? The only reason for using a new technology is if it solves a problem or achieves something new. Most sites don't have any real need for web workers. Or even if they do, they don't see the need.
Not shiny enough. The web is a very visual medium, and a lot of the new browser features in the last few years have been very visual. It's easy to persuade someone to try a new feature if it looks good. Web workers are entirely non-visual; the benefits are abstract. Developers may get it, but for most companies the decisions about what to spend time and money on to improve a site are made by non-developers, which makes it harder for web workers to get a look in.

Since most answers to this question are a few years old, here's my perspective on the current state of things, as of May 2019.
Web Workers are now supported in commonly-targeted browsers, and I'd argue that browser support is no longer a significant obstacle to their use. Despite that, Web Workers still aren't commonly used, and most web developers I interact with don't use them.
Certainly not every website needs to do enough heavy lifting to justify a Web Worker; many websites don't need JavaScript at all. So, I wouldn't expect to see Web Workers in a majority of websites realistically.
But "heavy lifting" is not the only factor in justifying a Web Worker – another is ease of use. Web Workers are a significant challenge to use in many contexts. Note that the Web Worker API involves splitting your build outputs into separate files, or generating that code (e.g. with Data URIs or fn.toString()) at runtime. Web developers use many different build systems, and often have many dependencies in their projects. Configuring the project to create these additional build artifacts with the right dependencies can be difficult, or at least it varies with the build system you have.
Furthermore, this work almost invariably has to be done by each web developer, for each individual project. You could expect much higher Web Worker adoption if WWs were commonly used inside the popular libraries and frameworks that developers already depend on. But they aren't (for API-related reasons, if I had to guess), even for libraries that do a significant amount of synchronous work. And as long as using the main thread is the default, easiest thing to do, that's what most developers will continue to do.

My opinion:
They work well only when you need lots of calculation. In other cases you lose time on sharing resources and merging results (see the sketch after this list).
They require extra coding.
For simple tasks they don't give much benefit, and JS usually isn't doing lots of calculation anyway.
They don't work in every browser; IE8 and IE9 don't support them (http://caniuse.com/webworkers).
There is no DOM access in a worker.
Some people just use setTimeout/setInterval instead, but these are not multi-threaded; only one CPU core is working at any moment.
They don't gain much when there is only one CPU core, though you still get the benefit of running processes in the background.
Sometimes it is difficult to share resources; it takes too much time and the end result isn't worth it.
But when you need to do lots of calculation or run a heavy process in the background, and you can ignore old browsers, Web Workers work really well.
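As a hedged illustration of that resource-sharing cost: postMessage normally structured-clones its payload, but listing an ArrayBuffer as a Transferable moves it instead of copying it (the worker script name crunch.js is hypothetical):

```javascript
// Sketch: transfer a large buffer to a worker instead of cloning it.
const worker = new Worker('crunch.js');
const data = new Float64Array(1_000_000);
const buffer = data.buffer;

// The second argument lists Transferables; after this call the buffer
// is detached on the main thread (data.byteLength becomes 0).
worker.postMessage({ cmd: 'sum', buffer }, [buffer]);
worker.onmessage = (e) => console.log('result:', e.data);
```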

Not having most of the APIs supported within workers has put a dampener on their use for many of my projects.
Firefox won't have WebSocket support until v35, performance.now until v34, and there's no date for IndexedDB support.
Chrome only recently added TextEncoder/TextDecoder in v38, and can't pass ImageData to a worker.
Some functionality can be shimmed, but other features can't be, or are painful enough to work around that it defeats the purpose.
And WebSockets themselves aren't finished yet.

Usually, the websites with heavy calculations are intranet sites. Most big companies use Microsoft products, and they use IE as the browser. It's not easy to have the latest version of IE, because an upgrade can break many intranet web sites. Currently my company uses IE 9, and they are planning to go to IE 10, maybe in two years... I have a lot of applications that could use Web Workers, but I can't use them because I don't have IE 10...

Related

Feasibility of MMO 3D game on HTML5/WebGL [closed]

I don't know if anyone has thought about this, but are games like World of Warcraft, Lineage II, or Aion feasible with a browser front-end through WebGL? What are the things I would have to consider if I want to make a full-fledged game with these new technologies? Also, what would be a good place to start?
This may be too open-ended, but I will take a stab.
First, as far as I know there are no modeling programs that will output what you expect, since you will need JavaScript output.
Some browsers will use the hardware to accelerate the graphics, but that isn't guaranteed, and you're only getting a share of the CPU, split with the other tabs, so it may not be as smooth as you'd like.
If you have to download a large amount of data to run your program that will be a problem for the user.
I think the modeling program is the real challenge, though, as you will basically have to do everything by hand; and the fact that it won't be very smooth will be an issue unless you design around it.
But, for some game designs WebGL should be a fantastic choice.
I don't believe it's possible if your game must go beyond some cubes on heightmaps:
Large amounts of coding in JS, multiplied by browser quirks (yes, I'm aware of jQuery, but it's not a panacea).
Large resources hanging on the thin thread of the browser cache.
Ready-to-be-hacked client code exposed to browser tools like Firebug.
Such a game is much more realistic in Flash, especially with the upcoming version 11 of the player, with hardware 3D.
In fact it is fully possible, and we will see such games.
We can expect libraries like O3D to take care of the browser quirks. We already have these problems on desktop platforms, and libraries take care of multi-platform portability there.
The browser cache can be a slight problem, but not a big one. It is possible to assign more cache to games, and we also have proxy servers like Squid that can cache very large resources. If a group of players on a LAN share a proxy server, they will also share large resource objects, provided the game is well designed (i.e. a resource cannot have multiple generated names, but must have a common URL for all players).
Also, there are discussions about adding local storage capabilities for web applications.
And "ready to be hacked" is not a major issue. There is nothing to stop hackers from manipulating Flash or C++ applications; anti-cheating tools are already rendered useless. Blizzard is already relying on spotting "bot-like behaviour" rather than trying more anti-hacking measures.
However, I do not think that WoW will be the first WebGL-based game. In fact it will be Quake (http://playwebgl.com/games/quake-2-webgl/), as there is already a Quake port for WebGL... There will be web games that make use of WebGL, but do not count on Blizzard supporting it in the near future.
IE is the only major browser that does not support WebGL, and to be honest that does not matter. All the others do, and users will not mind running Chrome or Firefox, or running both and choosing whichever is faster for their game.
Who cares about marginalized browsers like IE and Opera? They are equally unimportant. Unless you count IE6, which will never support any of the stuff we are discussing, as it is discontinued and unsupported.
For caching local files, you should look into the File System APIs that are now in Chrome. This gives you programmatic access to a virtual file system, letting you control what resources you store locally.
The Application Cache can help you with static resources like the HTML, CSS, and JavaScript required for the game. However, you need to run as an "installed web app" (via the Chrome Web Store, for example) to get unlimited storage. Browsers are building quota management systems to help make this easier.
WebGL is great, and the libraries are emerging to help make it easier. There's no clear "winner" but lots of options.
JavaScript is pretty fast these days, thanks to improvements like CrankShaft. For even better performance, you can use Native Client to run your C/C++ code and post messages back and forth to JavaScript.
There are two big issues that I can see. One is helping the middleware companies port their work to JavaScript or Native Client. The second is improving the speed with which we can move data from JavaScript into WebGL.
RuneScape, one of the most-played browser games for many years, is rewriting its engine with WebGL... (They currently use Java applets.)
"If you can find a way to minimize the cost of transporting massive amounts (possibly gigs) of resources"
Actually, HTTP already has minimal cost for transporting gigs of static resources. With its native resource-naming scheme, the URL, it has excellent caching abilities: not only do browsers know how to cache static resources by URL, but fast and efficient proxy servers exist that can handle terabytes of data.
The main secret to this is the HTTP HEAD request, with which a browser or proxy server can efficiently check whether it has the latest version of a resource and re-synchronize it. It is also possible, through HTTP headers, to mark a resource as eternal or very long-lived (immutable); re-synchronization then never happens, and updates are instead made by creating a new resource with a new name.
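As a sketch of that scheme (the URL and ETag value are hypothetical): a conditional request lets a browser or proxy confirm its cached copy is current without re-downloading, while the immutable marking tells it never to ask again:

```
HEAD /assets/world-textures-3f9a2c.pak HTTP/1.1
Host: cdn.example.com
If-None-Match: "3f9a2c"

HTTP/1.1 304 Not Modified
ETag: "3f9a2c"
Cache-Control: public, max-age=31536000, immutable
```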
There is a myth that HTTP is somehow inefficient as a resource transport system, when in fact it was designed to be very efficient.
WoW and other clients that use a proprietary protocol are notoriously inefficient compared to HTTP-based clients; such clients cannot be accelerated by proxy servers. Windows Update, Apt and Yum all have in common that they update OS resources over HTTP, and they have been able to leverage Akamai's vast global network of proxy servers, among other similar resources, to efficiently transfer URL resources at the scale of many gigabytes per client.

the thin line between web apps and desktop apps

I've been working a lot lately with web apps, mostly with javascript and json-rich web UIs.
I have to say I get impressed all the time with what I can achieve through these technologies.
More and more, I ask myself whether I would have preferred to go with a classic GUI to start with (whether it was C#/VB.Net + WinForms, or C/C++ + GTK/QT or Java or anything). I, however, have been able to accomplish everything I wanted in terms of a user interface with web-related technology.
And although I feel I have everything I need, more and more things keep coming in (and will keep coming in forever), like HTML5, new javascript capabilities, and probably even more things.
So, as web apps become even more and more able, I ask you:
How thin is the line between web apps and desktop apps as of now?
What is the future of this line? How capable will web apps be in the distant future? In this sense, is there a definition of what web apps should be, or are they just going to improve it more and more forever?
I would like to know what the W3C has to say about it, though I haven't looked into it yet.
In reality we have simply come full circle in the computing world. The web browser of today is simply the green-screen terminal of 30 and 40 years ago.
It used to be that you would buy time on a university's computer to run your program, paying for the time it took to process and run. This was inefficient from an end-user standpoint, since it was done in a batch-and-queue process, so your results would have to wait until the next day. From the university's point of view, though, they had more computing power than they knew what to do with, so farming it out made sense and gave a nice revenue stream.
Flash forward a few years, and desktops began to be as powerful, if not more powerful, than the university computers, and the days of batch-and-queue processing died off. But desktop-centric applications suffer from a single fundamental flaw: multi-user needs. If more than one user needs to use the application at the same time, a server is needed in the mix to handle the multi-session data.
The client application is useful for things such as data validation, but the thicker the client, the larger the risk you run of different versions of the client populating data at the server incorrectly.
The solution: the "web" client. Using the term "web" is actually wrong, in my personal opinion. The HTML/browser-based client removes the issues found with multiple versions of a desktop client, since all users are on the same version all the time. Gone are the days of deploying an upgrade across thousands of desktops; the browser-based client simply needs updating on the server side, and all users instantly get the new features.
To answer this question about the future, let's look more than a year into the past:
http://www.codinghorror.com/blog/2009/08/all-programming-is-web-programming.html
And in fact, it references a post from three years earlier still. The future is Atwood's Law: any application that can be written in JavaScript will eventually be written in JavaScript.
http://www.codinghorror.com/blog/2007/07/the-principle-of-least-power.html
Apart from some UI issues, web apps are real apps.
What is the future? Wish I had a crystal ball...
However, I would hazard a guess that the trend will go on and the web will subsume most, if not all, desktop applications.
Both still have their place. Web apps will cover globally connected applications, applications that exist because the web exists. They become more important day by day, or at least their builders make us think they are important.
Desktop GUIs will still be there, because for a lot of people without much computer skill they are still easier to operate and understand. And there are some very complex GUI applications (CAD, for example) that may never move to the web; their complexity will always run ahead of the progress of web development, and you cannot catch them.
So I believe this line is real and will be there for a long time. Not everything will move to the web.
Having just made the choice of using a "web" API or Desktop API here are the most significant differentiators that I see right now:
Support of native features
For example on the iPhone: Direct access to low level APIs
With the current browser development speed we should be there soon
Offline workflows
First steps done here with offline mode in HTML5
API support for "desktop UIs" (flexible, configurable, fast)
Libraries such as ExtJS are not there yet, but close
With WebGL, Canvas, and more and more powerful CSS features it has become a lot easier to create powerful UIs
All in all there is still quite a bit of work to be done but I think a few years from now there will be no difference between web and desktop applications, some of them will work offline, some won't.
Microsoft had that vision with .hta a long time ago; at the time it just wasn't powerful enough. Google is continuing it now with Chrome.
Web apps will get closer to desktop apps as time goes on. The reason behind this is requirements. More and more people are hooking into the internet and spending (or wasting) time on the net, so the demands on the browser keep increasing. Second, businesses are going global (globalization!). Business is already global, but in the future the requirement will be much greater: even a small shop needs to use the internet for tax and so on, and developing countries are using the net in governance, so checking for tax is easy. If an owner has 4 small shops, he needs aggregate data for his sales each day, so all 4 shops need interconnection and a daily financial reckoning.
People in a single team work remotely, so they need to share documents on a regular basis; hence Google Docs and the like, which allow online editing by several users at the same time while keeping the documents synchronized.
Competition increases day by day, so all business data needs to be in one place for analytics. Who will collect all the data from desktop applications each day and synchronize it? Even if a company uses desktop apps for speed and reliability, it still needs some kind of net connection and synchronization software for those desktop applications. In this way you can see that desktop apps are getting closer to web apps!
So, if you visualize all these scenarios, you will find it very difficult to avoid web applications.
Web apps have a future. For efficiency and speed, web apps will gain a sort of software component that acts like a desktop app and is downloaded on demand when you use it.

Should your website work without JavaScript [duplicate]

We're developing a web application that is going to be used by external clients on the internet. The browsers we're required to support are IE7+ and FF3+. One of our requirements is that we use AJAX wherever possible. Given this requirement I feel that we shouldn't have to cater for users without javascript enabled, however others in the team disagree.
My question is: in this day and age, should we be required to cater for users who don't have JavaScript enabled?
Coming back more than 10 years later, it's worth noting that my first two bullet points have faded to insignificance, and the situation has improved marginally for the third (accessible browsers do better) and fourth (Google runs more JS) as well.
There are a lot more users on the public internet who may have trouble with javascript than you might think:
Mobile browsers (smartphones) often have very poor or buggy JavaScript implementations. These will often show up in statistics on the side of those that do support JavaScript, even though in effect they don't. This is getting better, but there are still lots of people stuck with old or slow Android phones running very old versions of Chrome or bad WebKit clones.
Things like NoScript are becoming more popular, so you should at least have a nice initial page for those users.
If your customer is in any way part of the U.S. Government, you are legally required to support screen readers, which typically don't do JavaScript, or don't do it well.
Search engines will, at best, only run a limited set of your javascript. You want to work well enough without javascript to allow them to still index your site.
Of course, you need to know your audience. You might be doing work for a corporate intranet where you know that everyone has JavaScript (though even here I'd argue there's a growing trend of these sites being made available to teleworkers with unknown/unrestricted browsers). Or you might be building an app for the blind community, where no one has it. In the case of the public internet, you can typically figure about 95% of your users will support it in some fashion (source cited by someone else in one of the links below). That number sounds pretty high, but it can be misleading: turn it around, and if you don't support JavaScript, you're turning away 1 visitor in 20.
See these:
https://stackoverflow.com/questions/121108/how-many-people-disable-javascript
https://stackoverflow.com/questions/822872/do-web-sites-really-need-to-cater-for-browsers-that-dont-have-javascript-enabled
You should weigh the options and ask yourself:
1) What percentage of users will have JavaScript turned off? (According to this site, only 5% of the world has it turned off or unavailable.)
2) Will those users be willing to turn it on?
3) Of those who aren't willing to turn it on, or to switch to another browser or device with JavaScript enabled, is the lost revenue more than the effort of building a separate non-JavaScript version?
Instinctively, I say most times the answer is no, don't waste the time building two sites.
My question is: in this day and age, should we be required to cater for users who don't have JavaScript enabled?
Yes, definitely, if the AJAX functionality is core to the working of your site. If you don't, you are effectively denying users who don't have Javascript enabled access to your website, and although this is a rather small proportion (<5% I believe), it means that they won't be able to use your site at all, because the core functions are not available to them.
Of course if you're doing more trivial things with AJAX that just enhance the user experience but are not actually central to the core functionality of the site, then this probably isn't necessary.
Depends really.
I personally switch off JavaScript all the time because I don't trust lots of sites.
However, since your users have explicitly asked for your application, you can assume they will trust it, and there is no point in doing the extra work.
Moreover, given that strong AJAX-affinity requirement, the question seems a bit odd anyway.
This is a bit like beating a dead horse, but I'll have a go at it, sure.
I think there could be two basic approaches to this:
1. Using AJAX (and, basically, JavaScript) to enhance the experience of the users, while making sure that all of the application's features work without JavaScript.
When I am following this principle, I develop the interface in two phases: first without considering JavaScript at all (say, using a framework that doesn't know about JavaScript), and then augmenting certain workflows by adding ajax-y validation (I don't like pure JS validation, sorry) and so on.
This means that if the user has JavaScript disabled, your app shall in no way break or become unusable for them.
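For illustration, a minimal sketch of this first approach (the form selector and status element are hypothetical): the form submits as a normal POST without JavaScript, and this script merely upgrades it to AJAX when scripting is available:

```javascript
// Sketch: progressive enhancement of a plain HTML form.
// Without JS, the form posts and the server renders the result page;
// with JS, we intercept the submit and stay on the page.
const form = document.querySelector('#signup');
if (form) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault(); // only runs when JS is available
    const response = await fetch(form.action, {
      method: form.method || 'POST',
      body: new FormData(form),
    });
    document.querySelector('#status').textContent =
      response.ok ? 'Signed up!' : 'Something went wrong.';
  });
}
```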
2. Using JavaScript to its fullest, "no JavaScript, no go" style. If JavaScript is not available, the user will not be able to use your application at all. It is important to note that, in my opinion, there is no middle ground: if you are trying to be in both worlds at once, you are doing too much extra work. Removing the constraint of supporting no-JavaScript users obviously adds more opportunities to create a richer user experience, and it makes creating that experience much easier.
I think that depends on the type of web application you are going to build. For example, in an e-commerce application the checkout process should probably work without JavaScript, because (in our experience) some people deactivate JS for checking out. In a Web 2.0 application, in my opinion, it isn't necessary to support a non-JS browser experience.
Developing for both also complicates the development process and is more cost-intensive: you double your testing effort (testing with and without JS) and must also think differently in the planning phase.
I think it depends on the market segment you're aiming for; if you're going for a tech crowd (such as Stack Overflow, or perhaps Slashdot) then you're probably fine expecting users to have JS enabled and active.
Other sites, with a moderately tech-aware audience, may suffer from users who know enough about JS-based exploits to have deactivated JS, but not enough to enable Scriptblock (or the equivalent for their browser).
The non-tech-aware audience is probably with the tech crowd, since they likely just don't know how to disable JS, or why they might want to, regardless of the risk.
In short, you should cater to spiders without JavaScript enabled, but only to the degree necessary to index the data that you want to expose to the public. Your browser requirements of IE7+ and FF3+ exclude far more people than the total number of people who disable JavaScript. And of those who do disable it, the vast majority know how to enable it when necessary.
I asked myself the same question the other day, and came up with the answer that in order to use my application one must have JavaScript enabled. I also checked various AJAX-powered sites, even Stack Overflow.
But considering that, I also believe you need to support some degree of prehistoric browsing. The main idea is not to let the application break when users don't have JavaScript enabled: it should still display relevant data, even if its functionality is limited.
To add to some of the old discussion on this page. Google is now searching JavaScript: http://www.i-programmer.info/news/81-web-general/4248-google-now-searches-javascript.html
This is an issue that I was thinking about just a few days ago. Here is some information
In Google Chrome there is no way (menu/option) inside the browser to turn off JavaScript.
Many websites, including those from leading names like Google, will not work without JavaScript.
According to stats, over 95% of visitors have JavaScript enabled now.
These stats made me think: do I have to break my back writing a lot of fallback code for users who have disabled JavaScript?
My conclusion was this: yes, I have to support non-JavaScript browsing, but not at the cost of sanity, i.e. I can afford to give it a low priority.
So I am going to support non-JavaScript browsing, but I will build most of it after my site is deployed.

JavaScript/CSS vs. Silverlight vs. Flex

We currently have a quite complex business application that contains a huge amount of JavaScript code for making the user interface and interaction feel as close to working with a traditional desktop application as possible (since that's what our users want). Over the years, this JavaScript code has grown and grown, making it hard to manage and maintain, and making it ever more likely that adding new functionality will break something existing. Needless to say, much of this code also isn't state of the art anymore.
Thus, we have an ongoing discussion about whether the client-side part of the application should be rewritten in either Flex or Silverlight, rewritten with a state-of-the-art JavaScript framework like jQuery, or whether we should simply carry on with what we have and gradually replace the worst bits of the existing code. What makes this even harder to decide is that rewriting the UI will probably cost us 6-12 person-months.
I'd like to hear your thoughts on this issue (maybe some of you have already had to make a similar decision).
EDIT: To answer some of the questions that came up with the answers: The back-end code is written in C#, the target audience are (usually) non-technical users from the companies we sell the software to (not the general public, but not strictly internal users either), the software 'only' has to run in desktop browsers but not necessarily on mobile devices, and the client app is a full-blown UI.
In all honesty, I would refactor the old JavaScript code and not rewrite the application. Since you are asking about which platform to put it in, I would guess that your team isn't an expert in any of them (not slamming the team, it's just a simple fact that you have to consider when making a decision). This will work against you as you'll have double duty rewriting and learning how to do things on the new platform.
By keeping it in JavaScript, you can slowly introduce a framework if you choose, and do it iteratively (replace one section of code, test it, release it, and fix any bugs). This allows you to go at a slower pace and get feedback along the way. That way, too, if the project is canceled partway through, you aren't out all the work, because the updated code is already being used by the end users. Remember that the waterfall model, which is essentially what a full swap-out would be, almost never works.
As much as I hate to admit this, since it is always the most fun for developers, shifting platforms and replacing an entire system at once rarely works. There are countless examples of this, Netscape for one; here is the post from Spolsky on it. (I would also recommend the book Dreaming in Code. It is an excellent example of a software project that failed, and how and why.) Remember: to rewrite a system from scratch, you are essentially going to have to go through every line of code and figure out what it does and why. At first you think you can skip that, but eventually it comes down to this. Like you said, your code is old, and that means there are most likely hacks in it to get things done. Some of these you can ignore, and others will turn out to be, "I didn't know the system needed it to do that."
These things spring to mind:
As you have a .Net backend and you have some ability to force your customers onto a specific platform, Silverlight is an option;
Since your client is a full-blown UI, you want widgets and possibly other features like drag and drop;
I haven't seen any requirements that to me would justify starting over (which often doesn't work out) in Flex/Silverlight (e.g. streaming video, SVG support). Added to your team's familiarity with JavaScript, I think you can't make a compelling case for doing it in anything other than JavaScript.
But of course JavaScript means lots of things, and there are lots of JavaScript frameworks. The most important divider is whether your intent is to "decorate" a set of web pages, or whether you need a full set of widgets to create a desktop-like application on the web. Your question indicates it is the latter.
As such--and I may get downvoted for saying this--I don't think jQuery is the answer, and I say this as someone who loves jQuery. jQuery (IMHO) is great for enhancing web pages and abstracting away cross-browser low-level functionality, but the most important factor for complex UI development is this:
It's all about the widgets.
And yes I'm aware of jQuery UI but it's a lot sparser than the others when it comes to widgets. I suggest you take a look at the samples and widget galleries of some frameworks:
YUI Examples Gallery;
ExtJS demo; and
SmartClient feature explorer.
The others (jQuery, Dojo, Mootools, Prototype) are more "compact" frameworks arguably less suited to your purpose.
Also consider the license of each framework when making your decision.
My thoughts on the above three are:
ExtJS has somewhat angered the community in that it started out as LGPL but had a controversial license change (that thread is 76 pages long!) to GPL/commercial at version 2.1. The problem with that is that the community no longer has active participation in the framework's development, not in the mainline version anyway. This means it's being developed and supported by a small team (possibly one person) rather than the community. IMHO it's not worth paying a commercial license for that, and the GPL is probably prohibitive in your situation;
YUI is supported by Yahoo and available under a far more permissive and far less invasive BSD license. It's mature, well-used and well worth serious consideration; and
SmartClient impresses me a lot. It has perhaps the most permissive license of all (LGPL), is roughly seven years old, has an incredibly impressive array of widgets available. Check out their feature explorer.
Your decision should be based on getting as much of your application "for free" as possible. You don't want to spend valuable developer time doing things like:
Coding UI widgets like trees and accordions;
Testing and fixing cross-browser Javascript and CSS issues;
Creating homegrown frameworks that greatly duplicate what existing frameworks do and do well.
I would seriously look at one of the above three as your path forward.
This decision is usually less about the technology, and more about your skill sets and comfort zones.
If you have guys that eat and breathe Javascript, but know nothing about .net or Flash/Flex then there's nothing wrong with sticking with Javascript and leaning on a library like jQuery or Prototype.
If you have skills in either of the others then you might get a quicker result using Silverlight or Flex, as you get quite a lot of functionality "for free" with both of them.
My opinion on this one is pretty simple: unless the app needs to be publicly accessible, needs to be search-engine optimized and findable, or there's an otherwise compelling case for it to remain strictly text-based, the chips are stacked in favor of rich-client runtimes like Flash or Silverlight right out of the gate.
A big reason, if not the biggest, is that they eliminate the complexities of developing for multiple browsers and platforms. Again: they eliminate the runtime-environment variable. No more debugging old versions of Netscape and IE, no more object detection and consequent branching, no more wacky CSS hacks -- one codebase, and you're done. Offloading the runtime environment to Adobe or Microsoft will save you time, money and headaches, all else equal. (Sure, there's YUI, JQuery, etc., but they don't eliminate that variable -- they just abstract it. And they don't abstract all of it, either -- only some of it; ultimately, it's still up to you to test, debug, retest, debug, repeat.)
Of course, your situation's a bit more complicated by the existing-codebase problem, and it's difficult to say definitively which way you should go, because only you've got the code, and we're just geeks with opinions. But assuming, just by your asking the question, that a refactoring of your existing codebase would involve a significant-enough undertaking as to warrant even considering an alternative (and probably comparatively foreign) technology in the first place, which it sounds like it would, then my response is that your curiosity is well-placed, and that you should give them both a serious look before making a decision.
For my part, I'm a longtime server-side guy, ASP.NET/C# for the past several years, and I've written many a text-based line-of-business app in my time, the last few with a strong emphasis on delivering rich sovereign UIs with JavaScript. I've also spent the last couple of years almost exclusively with Flex, so I've got experience in both worlds. And I can tell you without hesitation that right now, it's everyone else's job to beat Flex: it's just an amazingly versatile and productive product, and for line-of-business apps it remains leaps and bounds ahead of Silverlight. I just can't recommend it highly enough; the data-binding and event-handling features alone are incredible time-savers, to say nothing of the complete freedom you'll have over layout, animation, etc. The list goes on.
So, my advice: take a long, careful look at Flex. In the end, you might find a ground-up rewrite is just too massive an undertaking to justify, and that's fine -- only you can make that determination. (And to be fair, you've just as much ability to make a mess of a Flex project as you do with a JavaScript project -- I know. I've done it.) But all told, Flex is probably the least-limiting, most flexible, most feature-rich and productive option out there right now, so it's certainly worth considering. Good luck!
Any JavaScript you have that has been developed over the years probably doesn't look anything like what's possible today. You undoubtedly have a lot of useful code there, nonetheless. So my recommendation would be to rewrite in JavaScript using jQuery, and make use of one of the available GUI add-ons; perhaps look at Yahoo's stuff. You will also be targeting the widest audience this way.
The GUI technology should be first and foremost determined by your target audience. For instance, if your target audience includes iPhone users, I would not recommend Flex, because the iPhone doesn't have a Flash player at the moment.
Bear in mind that if you switch to a full-fledged GUI toolkit like Silverlight, your users may find the look and feel unnatural, since the usual request-reply cycle is not so evident with client-side frameworks.
After that, it is your developers that should have a word to say. Every toolkit needs maintenance, and if you are switching to a whole new toolkit the developers will have to familiarize with the new toolkit, which can be costly.
My suggestion is that you stick with JavaScript, since your devs are familiar with it, and gradually replace the old JavaScript using a new toolkit like Prototype, jQuery or any other. You will probably redo some of the old stuff faster using a state-of-the-art toolkit. Remember that you can build beautiful apps with any toolkit.
We have developed an extremely rich application using ExtJS, with C# and some C++ on the server. Not only do we have clients who are happy with the interface in their desktop browsers, but with very little tweaking to the JavaScript we were able to provide web browser support. Also, we have clients in third-world countries who cannot use Flash or Silverlight apps, because their field personnel use kiosks in internet cafes (many of which don't have Flash installed, and forget about Silverlight!). I think these issues and others make up for the difficulty of coding a complex app in JavaScript...
Check this comparison table for Flex vs Javascript:

Javascript Distributed Computing [closed]

Why aren't there any Javascript distributed computing frameworks / projects? The idea seems absolutely awesome to me because:
The Client is the Browser
Iteration can be done with AJAX
Webmasters could help projects by linking the respective Javascript
Millions or even billions of users would help DC projects without even noticing
Please share your views on this subject.
EDIT: Also, what kind of problems do you think would be suitable for JSDC?
GIMPS for instance would be impossible to implement.
I think Web Workers will soon be used to create distributed computing frameworks; there are already some early attempts at the concept. Non-blocking code execution could be achieved before using setTimeout, but it made little sense, as browser vendors only recently focused on optimizing their JS engines. Now we have faster code execution and new features, so running some tasks unobtrusively in the background as we browse the web is probably just a matter of months ;)
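As a rough sketch of that older setTimeout technique (the function names are hypothetical): the work is chunked so the UI thread is never blocked for long, but unlike a Web Worker it is still single-threaded:

```javascript
// Sketch: cooperative chunking with setTimeout. Yielding to the event
// loop between chunks keeps the page responsive; it adds no parallelism.
function sumInChunks(n, chunkSize, done) {
  let i = 0;
  let total = 0;
  (function step() {
    const end = Math.min(i + chunkSize, n);
    for (; i < end; i++) total += i;
    if (i < n) setTimeout(step, 0); // schedule the next chunk
    else done(total);
  })();
}

sumInChunks(1e8, 1e6, (total) => console.log('total:', total));
```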
There is something to be said for 'user rights' here. It sounds like you're describing a situation where the webmaster for Foo.com includes the script for, say, Folding@home on their site. As a result, all visitors to Foo.com have some fraction of their CPU "donated" to Folding@home until they navigate away from Foo.com. Without some sort of disclaimer or opt-in, I would consider that a form of malware, and avoid visiting any site that did that.
That's not to say you couldn't build a system that asked for confirmation or permission, but there is definite potential for abuse.
I have pondered this myself in the context of item recommendation.
First, there is no problem with speed! JIT-compiled JavaScript can be as fast as unoptimized C, especially for numeric code.
The bigger problem is that running javascript in the background will slow down the browser and therefore users may not like your website because it runs slowly.
There is obviously an issue of security, how can you verify the results?
And privacy, can you ensure sensitive data isn't compromised?
On top of this, it's quite a difficult thing to do. Can the number of visits you receive justify the effort that you'll have to put into it? It would be better if you could run the code transparently on either the server or client-side. Compiling other languages to javascript can help here.
In summary, the reason that it's not widespread is because developers' time is more valuable than server time. The risk of losing user data and the inconvenience to users outweighs the potential gains.
The first thing that comes to my mind is security.
Almost all the distributed protocols that I know of use encryption; that's how they prevent security risks. That said, this subject is not exactly new:
http://www.igvita.com/2009/03/03/collaborative-map-reduce-in-the-browser/
Wuala is also a distributed system, implemented using a Java applet.
I know of pluraprocessing.com doing a similar thing. I'm not sure if it's exactly JavaScript, but they run Java through the browser, entirely in memory and with strict security.
They have a 50,000-computer grid on which they have successfully run applications, even web crawling (80legs).
I think we can verify results for some kinds of problem.
Let's say we have n items and need to sort them. We give them to worker-1, and worker-1 gives us back a result. We can verify that result in O(n) time, while producing it takes at least O(n log n) time. We should also consider how large n is (network speed is a concern).
Another example: f(x) = 12345, and the function f is given; the goal is to find the value of x. We can test a worker's result simply by substituting it for x. I think problems that are not verifiable like this are difficult to hand out to someone else.
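To make that O(n) check concrete, here is a hedged sketch of verifying a worker's claimed sort with one ordering pass plus a multiset comparison:

```javascript
// Sketch: verify in O(n) that `sorted` is a genuine sort of `original`.
function isValidSort(original, sorted) {
  if (original.length !== sorted.length) return false;
  // 1. Ordering: one linear pass.
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i - 1] > sorted[i]) return false;
  }
  // 2. Permutation: both arrays must hold the same multiset of values.
  const counts = new Map();
  for (const v of original) counts.set(v, (counts.get(v) ?? 0) + 1);
  for (const v of sorted) {
    const c = counts.get(v);
    if (!c) return false;
    counts.set(v, c - 1);
  }
  return true;
}

console.log(isValidSort([3, 1, 2], [1, 2, 3])); // true
console.log(isValidSort([3, 1, 2], [1, 2, 2])); // false
```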
The whole idea of JavaScript distributed computing has a number of disadvantages:
single point of failure: there is no direct way to communicate between nodes
natural failure of nodes: a node only keeps working for as long as the browser does
no guarantee that a message sent will ever be received, given that nodes fail naturally
no guarantee that a message received was ever genuinely sent, because a hacker can interpose
annoying load on the client side
ethical problems
while there is only one (but very tempting) advantage:
easy and free access to millions of nodes; almost every device has a JS-capable browser nowadays
However, the biggest problem is the correlation between scalability and annoyance. Say you offer some attractive web service and run computations on the client side. The more people you use for computing, the more people are annoyed; the more people are annoyed, the fewer people use your service. You can limit the annoyance (the computing), limit scalability, or aim for something in between.
Consider Google, for example. If Google ran computations on the client side, some people would start using Bing. How many? It depends on the annoyance level.
The only hope for JavaScript distributed computing may be multimedia services: as long as they already consume lots of CPU, nobody will notice any additional load.
I think the no. 1 problem is JavaScript's inefficiency at computation. It just wouldn't be worth it, because an application in pure C/C++ would be 100 times faster.
I found a question similar to this a while back, so I built a thingy that does this. It uses Web Workers and fetches scripts dynamically (but no eval!). Web Workers sandbox the scripts so they cannot access the window or the DOM. You can see the code here, and the main website here.
The library shows a consent popup on first load, so the user knows what's going on in the background.
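For illustration, a sketch of the eval-free dynamic loading such a library might use; the task URL and the runTask convention are hypothetical:

```javascript
// worker.js sketch: load task code dynamically without eval.
// importScripts fetches and runs a script inside the worker's
// sandboxed global scope, where there is no window and no DOM.
self.onmessage = (e) => {
  importScripts(e.data.taskUrl); // e.g. 'https://example.com/task.js'
  // Assume (hypothetically) the task script defines self.runTask.
  self.postMessage(self.runTask(e.data.input));
};
```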
