Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago.
With the release of Chrome 77, the Web Serial API became available behind an experimental flag. This is particularly useful for desktop applications running in NW.js or Electron, where Node.js has previously provided (and to a large degree still provides) a bridge between web and native.
I find myself very much wanting to abandon the use of NPM packages like serialport, which extend both NW.js and Electron to provide serial port access.
While Electron 8.0.1 does expose navigator.serial, it's not exactly clear how much of the API is actually implemented. To further complicate things, there is little documentation for the API (at least that I could find) besides https://wicg.github.io/serial/ and https://github.com/WICG/serial/blob/gh-pages/EXPLAINER.md. I've tried tinkering with it on my own, but it's not clear whether I'm using it incorrectly or whether parts simply aren't implemented.
So what is the status of this API? Which parts are reliably implemented (in Chromium), and is there any indication of when this will be ready for prime time? I think a lot of people are wondering this as it opens quite a few doors for interaction with the user's PC.
Here are some resources for tracking the status of the Serial API and its implementation in Chromium:
Draft Specification: as you've pointed out, it is incomplete, and I'm working on fixing that.
Specification "explainer": a less formal introduction to the specification and a more up-to-date reference for the current design of the API.
Chrome Platform Status entry: tracks the official implementation status in Chrome.
Chromium implementation tracking issue: star this issue for updates as the implementation work progresses.
Polyfill library: uses WebUSB to implement the API for standard USB CDC-class devices. Think of it, for the moment, as a prototype of what the API could look like when implemented in a browser.
Code lab: if you're looking for a larger example, this code lab explains step by step how to get started communicating with a particular device.
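To make the moving parts concrete, here is a minimal read loop written against the draft API shape described in the explainer. This is a sketch, not a reference: the function name is mine, the calls assume `requestPort()`, `port.open()`, and `port.readable` as the draft specifies them, and note that some earlier Chromium builds spelled the open option `baudrate` rather than `baudRate`.

```javascript
// Hedged sketch of the draft Web Serial API: request a port, open it,
// and read bytes until the stream ends. The `serial` argument would be
// navigator.serial in a page; passing it in keeps the sketch testable.
async function readFromSerial(serial, baudRate = 9600) {
  const port = await serial.requestPort();   // prompts the user to pick a device
  await port.open({ baudRate });             // older builds used { baudrate }
  const reader = port.readable.getReader();
  const chunks = [];
  for (;;) {
    const { value, done } = await reader.read(); // value is a Uint8Array
    if (done) break;                             // stream ended / port closing
    chunks.push(...value);
  }
  reader.releaseLock();
  await port.close();
  return Uint8Array.from(chunks);
}
```

In a page you would call `readFromSerial(navigator.serial)` from a user-gesture handler, since `requestPort()` requires user activation.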
It's quite basic and most browsers have the necessary features readily in place.
I miss a way to tell the browser that a given web page is a web application.
Why hasn't anyone implemented a cross-platform, "web-app"-specific HTML header tag that gives the user an option to appify a web page, i.e. start it in a chromeless browser?
It's simple: a tag in the header, and an event to trigger the browser's "install app" procedure, which would basically just create a bookmark link with a custom icon that opens the page in a chromeless/customized browser.
No more downloading and installing applications; just a local cache of a web page and its scripts, automatically fetched on load if the user is online.
The web developer could specify options in, for example, a manifest.json: what to cache locally, what size the window should run at, whether it should run fullscreen, and whether it should be completely chromeless or within a frame, etc.
Most browsers have everything in place. Is there any reason why this is not standardized? I guess I'm not the first developer to think of this approach.
Chrome has a somewhat similar feature on desktop, and there is so little missing to provide a full-fledged, cross-platform, browser-agnostic web-application platform. It is a future-proof and backward-compatible approach as far as I know.
Standardization takes time.
There is a W3C Working Group dedicated to Web Apps. Here is a list of their publications: http://www.w3.org/2008/webapps/wiki/PubStatus.
Take, for example, an editor's draft (ED) on manifest files: Manifest for Web apps. You'll also see they are hard at work on a Fullscreen API, a File API, and a Quota API, all very close to what you are asking for.
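For a sense of what that manifest draft proposes, here is a minimal manifest.json along those lines. The field names (`name`, `short_name`, `start_url`, `display`, `icons`) come from the W3C Manifest draft; the values are purely illustrative:

```json
{
  "name": "My Web App",
  "short_name": "MyApp",
  "start_url": "/index.html",
  "display": "standalone",
  "icons": [
    { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The `display` member is the part that answers the "chromeless" wish in the question: `standalone` asks the browser to launch the page without its normal UI chrome.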
For example, here is the abstract from the Quota Management API Editor Draft (just a month old, emphasis mine):
This specification defines an API to manage usage and availability of local storage resources, and defines a means by which a user agent (UA) may grant Web applications permission to use more local space, temporarily or persistently, via various different storage APIs.
What is ORBX.js, and how does it relate to the following points, which I thought were only possible in an operating system (not on the Web, but actually running on the system)?
Watching a 1080p video without codecs
Playing a game (like Left 4 Dead 2) on the Web
What characteristics does it offer for the web that we are used to seeing on a non-web system? (By this I mean running an app that is not web related, like a computer game.)
How is this possible?
For a non-critical "this is super awesome" piece for those who have not yet heard of this technology, see this write-up: Today I saw the future (this is the same piece mentioned by Jonathan in the comment above, and yes, it's kind of a puff piece).
To digest it: this is the latest re-emergence of the thin-client concept. In short, the viewing device uses a web browser to launch a "stream" that is very much like a virtual/remote desktop. It supports remote desktop applications directly, but it also allows setups where the behind-the-scenes implementation is different; this is the "GPU cloud" referred to.
What makes this significant is that it would effectively give future computers/devices one stationary target: the ability to run a web browser and decode a video stream. If they can do that, then they can just as well run any application or program imaginable, no matter how intensive it might be, because all the processing would be done in the cloud, with only a video stream sent to the client. In theory this could extend consumer hardware cycles and cut prices, as a 1 GHz CPU would be just as good as a 100 GHz one.
In reality, the obstacles are bandwidth, latency, connectivity, and of course a successful implementation of this technology with widespread marketplace compatibility. Then, of course, you start having to pay for cloud computing as a cost of using your cloud-based applications. And without a fast internet connection, you are back to using only natively downloaded apps.
For now there is no open encoder, which is a problem for developers and programmers. Only time will tell if it is a viable technology.
Recently I came across the WebRTC data channel. I would like to integrate two infrastructures to enable WebRTC interoperability.
A Lync server will provide signalling and presence to help locate the Lync client; the other peer is connected to an IMS application server. The other peer is not a Lync client but a WebRTC-enabled browser. How can I transfer data by integrating these two infrastructures (Lync and IMS)?
It would also be great to have some information on which application-layer protocols can be used for transferring data streams between the peers.
Unless you're prepared to do a lot of low-level coding and/or high-level hair-pulling, my suspicion is that WebRTC isn't quite ready for a scenario like this one yet. Some folks have managed to get it working with servers like Asterisk, and there's supposedly a general-purpose SIP client available here: https://code.google.com/p/sipml5/. But from what I hear hanging out on the WebRTC mailing list, folks are having a fair bit of trouble with these integration scenarios. There's certainly nothing that just works out of the box. Lync supports SDP and SIP, but I expect you'd need to spend a lot of time figuring out how to transform the SDP that WebRTC generates before it ever gets to the Lync server.
[Edit 1/28/2013] - Beyond the issues above, the real problem may be the codecs supported by each platform. Currently, I believe the only codec supported by WebRTC is Google's VP8, which doesn't appear to be supported natively by Lync. So you'd need a realtime gateway/transcoder sitting between them, translating between H.264 (or whatever codec Microsoft Lync settles on) and VP8. Assuming you can find a gateway that does that (they may very well exist), I can't imagine it scaling very well.
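Setting the codec question aside, the browser side of the data-channel piece is comparatively simple. The sketch below shows the standard `RTCPeerConnection`/`createDataChannel` calls; it deliberately omits signalling, which is exactly the part a Lync/IMS gateway would have to provide. The `"signal"` label and the `onMessage` callback are illustrative choices, not anything Lync-specific.

```javascript
// Hedged sketch: the browser side of a WebRTC data channel.
// The constructor is passed in so the same code works in any environment
// that provides an RTCPeerConnection-compatible implementation.
function openDataChannel(PeerConnection, onMessage) {
  const pc = new PeerConnection({ iceServers: [] }); // STUN/TURN servers elided
  const channel = pc.createDataChannel("signal");    // reliable and ordered by default
  channel.onmessage = (e) => onMessage(e.data);
  return { pc, channel };
}
```

In a browser you would call `openDataChannel(RTCPeerConnection, handler)` and then exchange the offer/answer SDP through whatever signalling path the two infrastructures agree on.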
Just to complement Ken Smith's answer, check out MCU Media Server by Medooze.
They claim to offer transcoding and conferencing, and they support WebRTC.
I'm looking into the possibility of creating a JavaScript application which makes use of WebGL.
Since WebGL is only available in a couple of browsers, and I do not want to force people to use a certain browser (directly), I would like to offer a standalone app client download as well.
Would it be possible to somehow create a borderless and standalone "fake browser" client which has my app embedded, for both Linux and Windows?
This would allow me to distribute a standalone client without having to modify my application code.
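Whichever packaging route you take, the embedded page will still want to feature-detect WebGL, since browsers at the time exposed it under different context names. A small sketch; the `"experimental-webgl"` fallback is the name early WebGL-capable builds used:

```javascript
// Try the standard context name first, then the experimental one used
// by early WebGL-capable browsers. Returns null if WebGL is unsupported.
function getGLContext(canvas) {
  for (const name of ["webgl", "experimental-webgl"]) {
    const gl = canvas.getContext(name);
    if (gl) return gl;
  }
  return null;
}
```

The same check works unchanged inside a wrapped "fake browser" client, so the application code stays identical across distributions.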
jslibs is a standalone JavaScript runtime that has good support for OpenGL. Look at these samples.
WebGL support has been offered in WebKit builds for more than a year. So if your clients use Macintosh computers then you could create a standalone application using WebKit.
And that is just for the time being. By next year, I think you will see support for WebGL in every major browser except IE, which is experiencing a dwindling user base.
Alternatively, you could write a plugin/addon for each major browser, including IE. That is more work for you. If you do not want to leave the web standards to the browser makers, you can take up the challenge yourself.
Probably not the best plan in the long term, though. They will continually optimize for speed, memory efficiency, rendering quality, and responsiveness. You probably will not want to put the same amount of continuous effort into it as them. You probably will not offer the same cross platform support they do.
Creating an application-specific browser (ASB) with one of the standard toolkits, and then transitioning to using web browsers directly a short way down the road, is probably the way to go. There's no reason why you cannot work out the compatibility testing for that strategy now, since the browsers with support for it are already in public beta.
So you want to write a standalone, platform-independent application in JavaScript that can use OpenGL?
I would try making a JOGL application.
Which JavaScript mobile development tool would you use, based on momentum, existing documentation and functionality, and the ability to get past the App Store's strict policies?
The current PhoneGap release (0.8.0), released 2009-10-13, is tagged "Apple approved for App Store submissions". This blog post has more details.
I used PhoneGap to port a JavaScript game and I loved it. Unfortunately, the game was too slow (Mobile Safari is slow when you make changes to the DOM, and I was moving divs around as sprites) and I switched to native.
But since some people started having their PhoneGap apps rejected, I have become shy of the project. I'd love to hear an official stance from Apple, but I don't know if it'll ever come.
I found PhoneGap to be the easiest to use. However, Quickconnect seems more ambitious in terms of multi-platform support; the author tells me that Quickconnect has been used in many apps (but couldn't disclose which). Supposedly, PhoneGap apps were being rejected because the submitters were loading entire apps off the web; the framework does seem sluggish, however. Apple has not replied to the PhoneGap team about the app rejections.
If I had to make a choice, it would be PhoneGap at this stage; but unless you really want the app on multiple platforms, I don't see why you wouldn't use the great tools Apple provides for native development.
At this point you might also want to look at Titanium by Appcelerator.
The development process is pretty simple, and they support both the iPhone and Android platforms.