What is ORBX.js, and how does it relate to the following capabilities, which I thought were only available on an operating system (not on the web, but actually running in the system):
Watching a 1080p video without codecs
Playing a game (Like Left 4 Dead 2) on the Web
What characteristics does it offer for the web that we are used to seeing only in non-web software (by this I mean running an app that is not web based, like a computer game)? How is this possible?
For a non-critical "this is super awesome" piece for those who have not yet heard of this technology, see this write-up: Today I saw the future (this is the same piece mentioned by Jonathan in the comment above, and yes, it's something of a puff piece).
To digest it: this is the latest re-emergence of the thin-client concept. In short, the viewing device uses a web browser to launch a "stream" that works much like a virtual/remote desktop. It explicitly supports remote desktop applications, but it also allows setups where the behind-the-scenes implementation is different; this is the "GPU cloud" referred to.
What makes this significant is that it would effectively allow future computers/devices to target one stationary goal: the ability to run a web browser and decode a video stream. If they can do that, then they can just as well run any application or program imaginable, no matter how intensive it might be, because all the processing would be done in the cloud, with only a video stream sent to the client. In theory this could extend consumer hardware cycles and cut prices, since a 1 GHz CPU would be just as good as a 100 GHz one.
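To make the thin-client flow concrete, here is a conceptual sketch of the receiving side: encoded frames arrive from the cloud and are simply painted onto a canvas. The endpoint URL and frame format are hypothetical, and ORBX.js itself ships its own proprietary JavaScript decoder, so treat this only as an illustration of the pattern.

```javascript
// Conceptual sketch: the client does no application work; it only
// receives frames and draws them. URL and frame format are hypothetical.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

const socket = new WebSocket('wss://gpu-cloud.example/stream'); // hypothetical endpoint
socket.binaryType = 'blob';

socket.onmessage = async (event) => {
  // Assume each message is one encoded frame in a format the browser
  // can decode natively (ORBX.js would run its own codec here instead).
  const frame = await createImageBitmap(event.data);
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
};
```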
In reality, the obstacles are bandwidth, latency, connectivity, and of course a successful implementation of this technology with widespread marketplace compatibility. Then, of course, you start having to pay for cloud computing as a cost of using your cloud-based applications. And without a fast internet connection, you are back to using natively downloaded apps only.
For now there is no open encoder, which is a problem for developers and programmers. Only time will tell whether it is a viable technology.
With the release of Chrome 77, the Web Serial API became available behind an experimental flag. This is particularly useful for desktop applications running in NW.js or Electron, where Node.js has previously provided (and to a large degree still provides) a bridge between web and native.
I find myself very much wanting to abandon the use of NPM packages like serialport, which extend both NW.js and Electron to provide serial port access.
While Electron 8.0.1 does expose navigator.serial, it's not exactly clear how much of the API is actually implemented. To further complicate things, there is no good documentation for the API (at least none that I could find) besides https://wicg.github.io/serial/ and https://github.com/WICG/serial/blob/gh-pages/EXPLAINER.md. I've tried tinkering with it on my own, but it's not clear whether I'm using it incorrectly or whether parts simply aren't implemented.
So what is the status of this API? Which parts are reliably implemented (in Chromium), and is there any indication of when this will be ready for prime time? I think a lot of people are wondering this as it opens quite a few doors for interaction with the user's PC.
Here are some resources for tracking the status of the Serial API and its implementation in Chromium:
Draft specification: as you've pointed out, it is incomplete, and I'm working on fixing that.
Specification "explainer": a less formal introduction to the specification and a more up-to-date reference for the current design of the API.
Chrome Platform Status entry: tracks the official implementation status in Chrome.
Chromium implementation tracking issue: star this issue for updates as the implementation work progresses.
Polyfill library: uses WebUSB to implement the API for standard USB CDC-class devices. Think of it, at the moment, as a prototype of what the API could look like when implemented in a browser.
Code lab: if you're looking for a larger example of how to use the API, this code lab explains step by step how to get started communicating with a particular device.
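For orientation, here is a minimal sketch of the core open-and-read loop as the draft specification and explainer describe it; the surface actually shipped behind Chromium's flag may differ from this.

```javascript
// Minimal sketch of the draft Web Serial API. Must run in response to a
// user gesture, since requestPort() shows a device picker.
async function readFromSerial() {
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 9600 });

  const reader = port.readable.getReader();
  try {
    while (true) {
      const { value, done } = await reader.read(); // value is a Uint8Array
      if (done) break;
      console.log(new TextDecoder().decode(value));
    }
  } finally {
    reader.releaseLock();
    await port.close();
  }
}
```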
I am new to front-end development. Let's say, hypothetically, that my clients have modern hardware: a 2.20 GHz CPU and 4 GB of RAM. I'm building a flashy website that uses a lot of event handlers and animations, like div containers that slide in and out on a user's click, and jQuery "on" click handlers for Mustache templates created on the fly with AJAX (for elements added after the initial DOM load).
I know a lot depends on whether my programming skills are terrible, which could make the client side perform really poorly, but let's say I program as efficiently as possible and use very well written plug-ins.
Question: do clients with that hardware handle a highly customized design well? Are browser development tools the best option for troubleshooting and analyzing performance, or is there a widely used tool that does the job for a lot of developers?
My question focuses on two key points:
Client-side performance of sleek, flashy websites using the plugins mentioned below on modern hardware.
What developers use to check hardware utilization, profile, and troubleshoot issues. Are browser development tools sufficient, or is there a popular, widely used tool that I haven't discovered?
Additional Notes
I am also serving these files from my application server, since I am using MVC, so they are not completely static HTML files. Plugins include:
jQuery
BootstrapJS
Bootstrap Max Length
jQuery UI (Effects Core with Slide; it's 14 KB in size)
jQuery Uniform
MustacheJS
jQuery Uniform took the biggest hit on page load times: I was calling it on about 100 elements when the page loaded. So I changed it to call Uniform only on the elements that need it, on specific selectors, when the client clicks a div to slide it open (a sketch of this pattern follows below).
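For illustration, a minimal sketch of that lazy-initialization pattern, using event delegation so the handler also covers elements added after the initial DOM load; the .panel-toggle and .panel selectors are hypothetical.

```javascript
// Delegated handler: also works for panels added after initial DOM load.
// '.panel-toggle' and '.panel' are hypothetical selectors.
$(document).on('click', '.panel-toggle', function () {
  var $panel = $(this).next('.panel');
  $panel.slideToggle();

  // Apply jQuery Uniform only once per panel, the first time it opens,
  // instead of styling ~100 elements up front at page load.
  $panel.find('select, input:checkbox, input:radio')
        .not('.uniformed')
        .addClass('uniformed')
        .uniform();
});
```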
What influenced my question?
Toying around with and reading about AngularJS. Since everything is client side, I've read mixed opinions on whether it slows the client down or speeds it up. Since I am already knee deep in jQuery, I was curious how well it performs on more modern hardware with the many flashy components and the DOM manipulation I mentioned above. This is my first front-end design, so I expect more seasoned front-end developers know how well browsers handle these new frameworks.
Why it's important
My concern is that the temptation to add all the glitter and flashiness that makes a webpage look more attractive to the client could actually work against me.
In my experience you can have hundreds of thousands of DOM elements, and thousands of objects with arrays inside, and nothing happens. The web is full of awful sites stitched together like Frankenstein's monster from literally dozens of libraries, and they have no performance problems even on computers from 15 years ago.
As long as you don't mess with 3D rendering and similarly experimental things, you won't have problems on mainstream PCs. I don't know what your flashy cool features are. If your programming is very bad and you write near-endless loops or the like, that could be a problem, but that's true anywhere.
The other issue, and the most important thing to care about, is download time: if you ship tons of code, it will take longer to download. In server applications you usually prefer performance over lightness, but on the front end it's better to serve small files. The internet will always be slower than the CPU and RAM.
I am developing a video streaming app in Node.js. As it might consume a lot of network bandwidth, I want to find out how, in Node.js, I can track the bandwidth usage of the application itself, as well as that of the system as a whole.
Example of desired output:
This App: 100 KB Rx, 18 MB Tx
Total: 200 KB Rx, 20 MB Tx
Brace yourself, as this is not going to be easy.
The only Node.js module I found that supposedly tracks network traffic is network-debugger. I have little faith you will find an answer to your question in that project, though.
Network monitoring is usually highly OS specific:
Windows: the only option is WinPcap.
macOS: nettop. A few ports of Linux network utilities might work too.
Linux: you are a lucky man. Lately nethogs has been the most popular network "top" utility, and it is available in most distributions through their package managers. Have a look at this comparison article to learn more about Linux network tools.
Yes, you are right, none of these is a Node.js library, but as you guessed, there isn't one. This might be partly because sysadmins have always preferred console-based, single-purpose tools to monitor their servers; as a result there is very little demand for a network monitoring library in Node.
To achieve what you want:
Choose one of the available network tools.
Play around with it and see whether it gives you what you want (they differ, so make sure to try a few).
If you like the information it gives, learn how to output it in a pipe-friendly format (so the output can be piped to a file or streamed right into your Node.js server).
Have your Node.js server launch the tool in that pipe mode and catch the output (a sketch of this step follows below).
Done.
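As a sketch of that last step, assuming nethogs on Linux: its trace mode (nethogs -t) periodically prints roughly tab-separated lines of program, sent rate, and received rate, which a Node.js process can spawn and parse. The exact output format varies between nethogs versions, so treat the parsing as illustrative.

```javascript
// Sketch: spawn nethogs in trace mode and parse its output.
// Needs elevated rights; output format is approximate and version-dependent.
const { spawn } = require('child_process');

const nethogs = spawn('sudo', ['nethogs', '-t']);

nethogs.stdout.on('data', (chunk) => {
  chunk.toString().split('\n').forEach((line) => {
    const fields = line.trim().split('\t');
    if (fields.length === 3) {
      const [program, sent, received] = fields; // rates in KB/s
      console.log(`${program}: ${received} KB/s Rx, ${sent} KB/s Tx`);
    }
  });
});

nethogs.stderr.on('data', (err) => process.stderr.write(err));
```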
There is an interesting article describing in great detail how to code a system monitor in Node.js using existing console-based tools; it might be of great help.
Alternatively, you could reinvent the wheel and build your own network monitoring tool in Node.js, but I fear it would end up being half JavaScript and half OS-specific code in C or Objective-C.
Also note that you need administrative rights (sudo, or an admin user on Windows) to monitor the network, meaning you might have to run Node.js with elevated rights. That is an extra risk to keep in mind.
Recently I came across the WebRTC data channel. I would like to integrate two infrastructures to enable WebRTC interoperability.
A Lync server will provide signalling and presence to help locate the Lync client; the other peer is connected to an IMS application server. That other peer is not a Lync client but a WebRTC-enabled browser. How can I transfer data by integrating these two infrastructures (Lync and IMS)?
It would be great to have some information on which application-layer protocols can be used for transferring data streams between the peers so that they interoperate.
Unless you're prepared to do a lot of low-level coding and/or high-level hair-pulling, my suspicion is that WebRTC isn't quite ready for a scenario like this one yet. Some folks have managed to get it working with servers like Asterisk, and there's supposedly a general-purpose SIP client available here: https://code.google.com/p/sipml5/. But from what I hear hanging out on the WebRTC mailing list, folks are having a fair bit of trouble with these integration scenarios, and there's certainly nothing that just works out of the box. Lync supports SDP and SIP, but I expect you'd need to spend a lot of time figuring out how to transform the SDP that WebRTC generates before it ever gets to the Lync server.
[Edit 1/28/2013] - Beyond the issues above, the real problem may be with the codecs supported by each platform. Currently, I believe the only video codec supported by WebRTC is Google's VP8, which doesn't appear to be supported natively by Lync. So you'd need a realtime gateway/transcoder sitting between them, translating between H.264 (or whatever codec Microsoft Lync settles on) and VP8. Assuming you can find a gateway to do that - they may very well exist - I can't imagine it's going to scale very well.
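To show where that SDP transformation would hook in, here is a minimal sketch (written in the modern promise-based form for brevity; browsers of this era used prefixed, callback-based APIs). transformForLync() is a hypothetical placeholder for whatever rewriting your gateway requires, and the channel label is made up.

```javascript
// Sketch: rewrite the browser-generated SDP before it reaches the
// signalling gateway. transformForLync() is hypothetical.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel('lync-bridge'); // hypothetical label

pc.createOffer().then((offer) => {
  const munged = transformForLync(offer.sdp); // hypothetical rewrite step
  return pc.setLocalDescription({ type: 'offer', sdp: munged });
}).then(() => {
  // Send pc.localDescription to the Lync/IMS side via your signalling path.
});
```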
Just to complement Ken Smith's answer, check out MCU Media Server by Medooze.
They claim to have transcoding and conferencing and they support WebRTC.
I'm looking into the possibility of creating a JavaScript application that makes use of WebGL.
Since WebGL is only available in a couple of browsers, and I do not want to (directly) force people to use a certain browser, I would like to offer a standalone client download as well.
Would it be possible to somehow create a borderless, standalone "fake browser" client with my app embedded, for both Linux and Windows?
This would allow me to distribute a standalone client without having to modify my application code.
jslibs is a standalone JavaScript runtime with good OpenGL support. Look at these samples.
WebGL support has been available in WebKit builds for more than a year, so if your clients use Macintosh computers, you could create a standalone application using WebKit.
And that is just for the time being. By next year, I think you will see WebGL support in every major browser except IE, which has a dwindling user base.
Alternatively, you could write a plugin/addon for each major browser, including IE. That is more work for you, but if you do not want to leave the web standards to the browser makers, you can take up the challenge yourself.
Probably not the best plan in the long term, though. The browser makers will continually optimize for speed, memory efficiency, rendering quality, and responsiveness; you probably will not want to put in the same continuous effort, and you probably will not match the cross-platform support they offer.
Creating an application-specific browser (ASB) with one of the standard toolkits, and then transitioning to web browsers proper a short way down the road, is probably the way to go. There is no reason you cannot work out the compatibility testing for that strategy now, since browsers with WebGL support are already in public beta.
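As one concrete illustration of an application-specific browser with today's toolkits (NW.js postdates this answer, so this is purely illustrative): a minimal NW.js manifest wraps an HTML entry point in a borderless window without any changes to the application code. The file names here are hypothetical.

```json
{
  "name": "my-webgl-app",
  "main": "index.html",
  "window": {
    "frame": false,
    "width": 1024,
    "height": 768
  }
}
```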
So you want to write a standalone, platform-independent application in JavaScript that can use OpenGL?
I would try making a JOGL application.