Offline / Non-Realtime Rendering with the Web Audio API - javascript

The Problem
I'm working on a web application where users can sequence audio samples and optionally apply effects to the musical patterns they create using the Web Audio API. The patterns are stored as JSON data, and I'd like to do some analysis of the rendered audio of each pattern server-side. This leaves me with two options, as far as I can see:
Run my own rendering code server-side, trying to make it as faithful as possible to the in-browser rendering. Maybe I could even pull out the Web Audio code from the Chromium project and modify that, but this seems like potentially a lot of work.
Do the rendering client-side, hopefully faster-than-realtime, and then send the rendered audio to the server. This is ideal (and DRY), because there's only one engine being used for pattern rendering.
The Possible Solution
This question led me to this code sample in the Chromium repository, which seems to indicate that offline processing is a possibility. The trick seems to be constructing a webkitAudioContext with some arguments (usually, a zero-argument constructor is used). The following are my guesses at what the parameters mean:
new webkitAudioContext(2, // channels
10 * 44100, // length in samples
44100); // sample rate
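For reference, here's my best guess at the full offline flow, pieced together from that Chromium sample (startRendering and the oncomplete callback are what the sample uses; none of this appears to be officially documented, so treat it as a sketch):
// Guesses: channels, length in sample-frames, sample rate.
var context = new webkitAudioContext(2, 10 * 44100, 44100);

// Build a graph as usual; someDecodedBuffer is assumed to be an AudioBuffer
// decoded elsewhere (e.g. from one of the pattern's samples).
var source = context.createBufferSource();
source.buffer = someDecodedBuffer;
source.connect(context.destination);
source.noteOn(0); // start(0) in newer revisions of the spec

// When offline rendering finishes, the result arrives in oncomplete.
context.oncomplete = function (event) {
    var rendered = event.renderedBuffer; // an AudioBuffer with the offline mix
    console.log('Rendered ' + rendered.length + ' frames');
};

// Kick off (hopefully faster-than-realtime) rendering.
context.startRendering();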
I adapted the sample slightly, and tested it in Chrome 23.0.1271.91 on Windows, Mac, and Linux. Here's the live example, and the results (open up the Dev Tools JavaScript console to see what's happening):
Mac - It Works!!
Windows - FAIL - SYNTAX_ERR: DOM Exception 12
Linux - FAIL - SYNTAX_ERR: DOM Exception 12
The webkitAudioContext constructor I described above causes the exception on Windows and Linux.
My Question
Offline rendering would be perfect for what I'm trying to do, but I can't find documentation anywhere, and support is less-than-ideal. Does anyone have more information about this? Should I be expecting support for this in Windows and/or Linux soon, or should I be expecting support to disappear soon on Mac?

I did some research on this a few months back, and there is a startRendering function on the audioContext, but I was told by Google people that the implementation was, at that time, due to change. I don't think this has happened yet, and it's still not a part of the official documentation, so I'd be careful building an app that depends on it.
The current implementation doesn't render any faster than realtime either (maybe slightly faster in very light applications), and sometimes renders even slower than realtime.
Your best bet is hitting the trenches and implementing Web Audio server-side if you need non-realtime rendering. If you can live with realtime rendering, there's a project at https://github.com/mattdiamond/Recorderjs which might be of interest.
Please note that I'm not a googler myself, and what I was told was not a promise in any way.
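For the realtime route, usage of Recorder.js looks roughly like this - a sketch based on that project's README, so the details may have drifted; the /analyze endpoint is a placeholder:
var audioContext = new webkitAudioContext();

// Route the pattern's nodes into one gain node so Recorder.js can tap it.
// (createGainNode() in older WebKit builds, createGain() in newer ones.)
var patternOutput = audioContext.createGain();
patternOutput.connect(audioContext.destination);

var recorder = new Recorder(patternOutput);
recorder.record();

// ... trigger the pattern's samples and let it play through ...

recorder.stop();
recorder.exportWAV(function (blob) {
    // Upload the realtime-rendered audio for server-side analysis.
    var formData = new FormData();
    formData.append('audio', blob, 'pattern.wav');
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/analyze'); // placeholder endpoint
    xhr.send(formData);
});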

Related

Take Full Page Screenshots with Intern JS

I'm pretty new to JavaScript programming when it comes to UI automation and have come from a Java based Selenium background. I'm currently trying to get my head around InternJS and how to use it to take full page screenshots of any URL that I wish and on any device. The end goal is to take screenshots of a specific website on multiple devices for manual visual verification purposes and using a Sauce Labs account.
I was able to separate the takeScreenshot() functionality into a re-usable method as follows:
// fileSystem is assumed to be Node's fs module, required elsewhere in the test module.
MyFile.prototype.takeScreenshotAndWriteToFile = function (fileName) {
    return function () {
        return this.parent
            .takeScreenshot() // resolves with the screenshot data
            .then(function (fileAsBuffer) {
                // Persist the screenshot to disk as base64-encoded image data.
                fileSystem.writeFile(fileName, fileAsBuffer, 'base64');
            })
            .catch(function (error) {
                console.log(error);
            });
    };
};
However when I run this on various devices / browsers via our Sauce Labs account, I am getting the following results:
On desktop, Chrome and Firefox (latest versions on Windows 10) only take a screenshot of what's immediately visible in the browser window at the time I make the request.
The same result applies to testing on mobile devices and tablets.
Oddly, Safari Version 11 (on Mac) does take a full page screenshot. Same method. Same implementation. Different result.
I'm completely confused as to why something so simple as wishing to take a full page screenshot is proving to be such a complicated issue... could anyone please point me in the right direction as to what I can do to achieve my goal?
Or if anyone knows of a better alternative to Intern JS in this case? I'm open to any ideas / advice at this stage.
As Florent pointed out, screenshots are actually handled by the driver that's interfacing with the browser (e.g. chromedriver) rather than Intern, which is interfacing with the driver. According to the WebDriver spec, a screenshot will only be of the viewport, not the full page. The JSON wire protocol (the precursor to WebDriver) is a bit vaguer on the subject. In any case, different browser drivers can and do behave differently in many situations.
Any testing system that uses WebDriver/Selenium to manage browsers (which is most of them, particularly the popular open source ones) is going to be subject to the capabilities of the driver, and may not support this feature out of the box. However, it could probably be implemented in the testing system (at a higher level than the WebDriver), so it would be worth filing a feature request with Intern (or whichever WebDriver-based testing system you might be using) if it seems worthwhile.
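If you need something in the meantime, one workaround (this is my own sketch against Intern's Leadfoot-style Command chain, not a built-in Intern feature; fileSystem is assumed to be Node's fs module and the helper name is made up) is to scroll the page one viewport at a time and save each slice, then review or stitch the slices afterwards:
// Capture the page in viewport-sized slices by scrolling between screenshots.
function takeScreenshotSlices(command, baseName, sliceCount) {
    var chain = command;
    for (var i = 0; i < sliceCount; i++) {
        (function (index) {
            chain = chain
                .execute(function (step) {
                    // Runs in the browser: jump to the next viewport-sized offset.
                    window.scrollTo(0, step * window.innerHeight);
                }, [index])
                .takeScreenshot()
                .then(function (image) {
                    fileSystem.writeFileSync(baseName + '-' + index + '.png', image);
                });
        })(i);
    }
    return chain;
}
Called from a functional test it would look something like takeScreenshotSlices(this.remote.get(url), 'homepage', 4); deriving sliceCount from document.body.scrollHeight is left as an exercise.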

Using SSDP to display all devices in network

I have googled this question quite often, but I am still a little confused as to whether what I'm trying to do is even possible.
Basically, I am trying to add a dropdown menu to my web application in which it lists all devices connected to the network. When I say devices, I'm not talking about all devices; I am talking about certain hardware devices that I am using in which SSDP is implemented. I have already created Node.js programs that send M-SEARCHes and successfully find all the devices, but I understand that Node.js is not browser JavaScript and there is no way I could display the output of a Node call in a terminal on a browser (please correct me if I am wrong).
After doing a bit more research into it, I realized that the alternatives for doing something of this sort in a browser are either to create some sort of Chrome extension that is able to do SSDP and send M-SEARCHes, or to open WebSockets using a WebSocket API (I don't think this is particularly useful in my case for SSDP, but I may be wrong).
Given what I am trying to do, are either of these alternatives helpful? Is what I am trying to do even possible? Once again, I have done my research on this topic but I really haven't been able to find a clear answer. If it is possible, I'd really appreciate links to tutorials or just general ideas on how to accomplish what I am trying to do.
I know I posted something on StackOverflow recently about this, which got no answers or replies, but I have done more research into this topic and felt like I do have a better understanding. That being said, I'd still appreciate some direction as to how to approach this problem as I haven't found anything too useful online.
Thank you for your time!
Chrome extensions cannot access the sockets.udp API as far as I know. The right place to do that in Chrome would probably have been a Chrome App, as they can do UDP Multicast: https://codereview.chromium.org/12684008/ . In fact there seems to be an SSDP app already ...
Unfortunately, Chrome Apps have been deprecated in favor of normal web apps (outside of Chrome OS at least), and as you've found out, you can't do SSDP through normal web APIs yet. The socket API is in the works, but there's no telling if or when they might solve the security problems inherent in allowing a random web app to do things like join a local multicast group.
Websockets are unlikely to provide what you need.
It's possible.
Node.js is not a browser javascript and there is no way I could display the output of a Node call in a terminal on a browser
They both run JavaScript. Run your Node.js program in a terminal, or pipe the output to a text file if a terminal is not accessible; in both cases console.log() should be able to print the output.
For SSDP on the client and server side, use this: https://www.npmjs.com/package/node-ssdp
You need not use a Chrome app specifically. You can write apps in JavaScript-based cross-platform frameworks like Electron. It'll become a fully functional 'web' app for PCs, and for mobiles you can use Cordova and the like.
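To make the node-ssdp suggestion concrete, here's a rough sketch of the bridge I'd try: a small Node server that runs the M-SEARCH and exposes the discovered devices as JSON, which the web page's dropdown can then fetch (the port, endpoint and response shape are my own invention):
// server.js - discover devices via SSDP and serve the current list as JSON.
var http = require('http');
var Client = require('node-ssdp').Client;

var devices = {};
var client = new Client();

client.on('response', function (headers, statusCode, rinfo) {
    // Key by USN so repeated responses don't create duplicate entries.
    devices[headers.USN] = { location: headers.LOCATION, address: rinfo.address };
});

// Send an M-SEARCH now and re-issue it periodically to keep the list fresh.
client.search('ssdp:all');
setInterval(function () {
    client.search('ssdp:all');
}, 10000);

http.createServer(function (req, res) {
    // The browser can poll this endpoint (e.g. with XMLHttpRequest or fetch).
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(devices));
}).listen(8080);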

How do I use Glimpse to find out what is causing my ASP.NET MVC application to hang?

I have an ASP.NET MVC application that makes pretty heavy use of JavaScript and jQuery for both administrative functions as well as customer-facing functions. Recently I reorganized the administrative screens to be able to more cleanly fit administrative controls for some new features.
I tested using IE and Chrome and found that there was a slight, but acceptable hang in one of the busier pages. However, the main person who uses the admin pages uses Firefox and kept reporting an unacceptable hang. I finally checked it out and found that what hangs in Chrome and IE for 2-3 seconds hangs in Firefox for 10-12 seconds, which is no good.
Not knowing where to turn, I wound up installing Glimpse and got it configured and running just fine, but I'm still having trouble figuring out how to drill into it to find out what area of the page is causing trouble. All I can tell so far is that it is definitely something with how the client (Firefox) is rendering. To be clear, it happens on all browsers, but for some reason it is way more pronounced in Firefox.
Can someone please give me some pointers on how to get started on diagnosing the issue? I'm not married to the idea of using Glimpse, but it seems like a pretty decent tool from what I can tell.
Thanks for your help.
Based on what you're describing, the problem appears to be client side. With that said, Glimpse may not be as well suited to this as Firefox's own profiler.
SHIFT+F5 will bring up the web developer performance screen. From there, you can begin/end a performance analysis and gain more insight into what may be taking longer than expected.
It may also be worthwhile to look at the network tab and make sure assets are loading in a timely manner.
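If the profiler output is hard to interpret, another rough way to narrow things down (my own suggestion, not a Glimpse feature; the function names below are hypothetical stand-ins for your page's startup code) is to bracket suspect blocks with console.time or performance marks and compare the numbers between Firefox and Chrome:
// Time a suspect block of client-side initialization code.
console.time('admin-grid-init');
initializeAdminGrid(); // hypothetical heavy jQuery setup
console.timeEnd('admin-grid-init');

// Or use the User Timing API for finer-grained measurements.
performance.mark('bind-start');
bindAdminEventHandlers(); // hypothetical; whatever runs on document ready
performance.mark('bind-end');
performance.measure('bind-handlers', 'bind-start', 'bind-end');
console.log(performance.getEntriesByName('bind-handlers')[0].duration + ' ms');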
Keep in mind as well that add-ins could play into the latency. If the end-user has a setup that performs post-page processing (such as Greasemonkey scripts or (recalling an earlier add-in) a Skype plugin that used to transform phone numbers on the page to direct-dial links), that would also play a part in the performance. A good way to rule these out is to hold down SHIFT while starting up Firefox (effectively running it in Safe Mode), which would determine if it's Firefox itself or an add-in that's to blame.

Is there a way to automate the testing of chrome extensions? [duplicate]

I'm going to write a bunch of browser extensions (the same functionality for each popular browser). I hope that some of the code will be shared, but I'm not sure about this yet. For sure, some of the extensions will use native APIs. I don't have much experience with TDD/BDD, and I thought it was a good time to start following these ideas with this project.
The problem is, I have no idea how to handle it. Should I write different tests for each browser? How far should I go with these tests? These extensions will be quite simple - some data in local storage, refreshing a page and listening through WebSockets.
And my observation about why this is hard for me: there is a lot of behaviour and not many models, and the models that do exist are platform-dependent.
I practise two different ways of testing my browser extensions:
Unit tests
Integration tests
Introduction
I will use the cross-browser YouTube Lyrics by Rob W extension as an example throughout this answer. The core of this extension is written in JavaScript and organized with AMD modules. A build script generates the extension files for each browser. With r.js, I streamline the inclusion of browser-specific modules, such as the one for cross-origin HTTP requests and persistent storage (for preferences), and a module with tons of polyfills for IE.
The extension inserts a panel with lyrics for the currently played song on YouTube, Grooveshark and Spotify. I have no control over these third-party sites, so I need an automated way to verify that the extension still works well.
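To give an idea of what that per-browser build looks like, an r.js build profile along these lines does the module swapping; the file names and module IDs here are made up, not the extension's real layout:
// build-chrome.js - hypothetical r.js build profile for the Chrome package.
({
    baseUrl: 'src',
    name: 'main',
    out: 'dist/chrome/main.js',
    paths: {
        // Shared code depends on 'xhr' and 'storage'; each browser's build
        // points those module IDs at its own implementation.
        'xhr': 'env/chrome/cross-origin-xhr',
        'storage': 'env/chrome/preferences'
    }
})
Each browser then gets its own profile, run with something like node r.js -o build-chrome.js.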
Workflow
During development:
Implement / edit feature, and write a unit test if the feature is not trivial.
Run all unit tests to see if anything broke. If anything is wrong, go back to 1.
Commit to git.
Before release:
Run all unit tests to verify that the individual modules are still working.
Run all integration tests to verify that the extension as a whole is still working.
Bump versions, build extensions.
Upload update to the official extension galleries and my website (Safari and IE extensions have to be hosted by yourself) and commit to git.
Unit testing
I use mocha + expect.js to write tests. I don't test every method for each module, just the ones that matter. For instance:
The DOM parsing method. Most DOM parsing methods in the wild (including jQuery) are flawed: Any external resources are loaded and JavaScript is executed.
I verify that the DOM parsing method correctly parses DOM without negative side effects.
The preference module: I verify that data can be saved and returned.
My extension fetches lyrics from external sources. These sources are defined in separate modules. These definitions are recognized and used by the InfoProvider module, which takes a query (treated as a black box) and outputs the search results.
First I test whether the InfoProvider module functions correctly.
Then, for each of the 17 sources, I pass a pre-defined query to the source (with InfoProvider) and verify that the results are expected:
The query succeeds
The returned song title matches (by applying a word similarity algorithm)
The length of the returned lyrics falls within the expected range.
The UI is not obviously broken, e.g. the Close button can be clicked.
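A unit test in that style (mocha + expect.js) looks roughly like this; the prefs module, its API and the paths are illustrative, not the extension's actual code:
// test/preferences.test.js - run under mocha with the expect.js assertions.
var expect = require('expect.js');
var prefs = require('../src/preferences'); // illustrative path; the real code uses AMD modules

describe('preferences module', function () {
    it('returns the value that was saved', function () {
        prefs.set('fontSize', 14);
        expect(prefs.get('fontSize')).to.be(14);
    });

    it('falls back to the default for unknown keys', function () {
        expect(prefs.get('missingKey', 'fallback')).to.be('fallback');
    });
});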
These tests can be run directly from a local server, or within a browser extension. The advantage of the local server is that you can edit the test and refresh the browser to see the results. If all of these tests pass, I run the tests from the browser extension.
By passing an extra parameter debug to my build script, the unit tests are bundled with my extension.
Running the tests within a web page is not sufficient, because the extension's environment may differ from the normal page. For instance, in an Opera 12 extension, there's no global location object.
Remark: I don't include the tests in the release build. Most users don't take the effort to report and investigate bugs; they will just give a low rating and say something like "Doesn't work". Make sure that your extension functions without obvious bugs before shipping it.
Summary
View modules as black boxes. You don't care what's inside, as long as the output matches what is expected for a given input.
Start with testing the critical parts of your extension.
Make sure that the tests can be built and run easily, possibly in a non-extension environment.
Don't forget to run the tests within the extension's execution context, to ensure that there's no constraint or unexpected condition inside the extension's context that breaks your code.
Integration testing
I use Selenium 2 to test whether my extension still works on YouTube, Grooveshark (3x) and Spotify.
Initially, I just used the Selenium IDE to record tests and see if it worked. That went well, until I needed more flexibility: I wanted to conditionally run a test depending on whether the test account was logged in or not. That's not possible with the default Selenium IDE (it's said to be possible with the FlowControl plugin - I haven't tried).
The Selenium IDE offers an option to export the existing tests in other formats, including JUnit 4 tests (Java). Unfortunately, this result wasn't satisfying. Many commands were not recognized.
So, I abandoned the Selenium IDE, and switched to Selenium.
Note that when you search for "Selenium", you will find information about Selenium RC (Selenium 1) and Selenium WebDriver (Selenium 2). The former is old and deprecated; the latter (Selenium WebDriver) should be used for new projects.
Once you've discovered how the documentation is organized, it's quite easy to use.
I prefer the documentation at the project page, because it's generally concise (the wiki) and complete (the Java docs).
If you want to get started quickly, read the Getting Started wiki page. If you've got spare time, look through the documentation at SeleniumHQ, in particular the Selenium WebDriver and WebDriver: Advanced Usage.
Selenium Grid is also worth reading. This feature allows you to distribute tests across different (virtual) machines. Great if you want to test your extension in IE8, 9 and 10, simultaneously (to run multiple versions of Internet Explorer, you need virtualization).
Automating tests is nice. What's even nicer? Automating the installation of extensions!
The ChromeDriver and FirefoxDriver support the installation of extensions, as seen in this example.
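With the JavaScript bindings for selenium-webdriver, loading a packed Chrome extension looks roughly like this (the .crx path and the element id are placeholders, not the real extension's values):
// Launch Chrome with the packed extension pre-installed, then do a basic check.
var webdriver = require('selenium-webdriver');
var chrome = require('selenium-webdriver/chrome');

var options = new chrome.Options();
options.addExtensions('dist/chrome/my-extension.crx'); // placeholder path

var driver = new webdriver.Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();

driver.get('https://www.youtube.com/')
    .then(function () {
        // Assert that the extension's UI is present, e.g. the injected panel.
        return driver.findElement(webdriver.By.id('lyrics-panel')); // hypothetical id
    })
    .then(function () {
        return driver.quit();
    });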
For the SafariDriver, I've written two classes to install a custom Safari extension. I've published it and sent in a PR to Selenium, so it might be available to everyone in the future: https://github.com/SeleniumHQ/selenium/pull/87
The OperaDriver does not support installation of custom extensions (technically, it should be possible though).
Note that with the advent of Chromium-powered Opera, the old OperaDriver doesn't work any more.
There's an Internet Explorer Driver, and this one definitely does not allow you to install a custom extension. Internet Explorer doesn't have built-in support for extensions. Extensions are installed through MSI or EXE installers, which are not even integrated with Internet Explorer. So, in order to automatically install your extension in IE, you need to be able to silently run an installer which installs your IE plugin. I haven't tried this yet.
Testing browser extensions posed some difficulty for me as well, but I've settled on implementing tests in a few different areas that I can invoke simultaneously from browsers driven by Selenium.
The steps I use are:
First, I write test code integrated into the extension code that can be activated by simply going to a specific URL. When the extension sees that URL, it begins running the tests.
Then, in the page that activates the testing in the extension I execute server-side tests to be sure the API performs, and record and log issues there. I record the methods invoked, the time they took, and any errors. So I can see the method the extension invoked, the web performance, the business logic performance, and the database performance.
Lastly, I automatically invoke browsers to point at that specific URL and record their performance along with other test information, errors, etc on any given client system using Selenium:
http://docs.seleniumhq.org/
This way I can break down the tests in terms of browser, extension, server, application, and database and link them all together according to specific test sets. It takes a bit of work to put it all together, but once it's done you can have a very nice extension testing framework.
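As a rough illustration of the first step (the URL-activated tests), the content script can watch for the test URL and only then pull in and run the bundled test suite; the URL and file name below are placeholders:
// content-script.js - only kick off the in-extension tests on the magic test URL.
var TEST_URL = 'https://example.com/extension-test'; // placeholder

if (location.href.indexOf(TEST_URL) === 0) {
    // Inject the test bundle that only ships in debug builds of the extension.
    var script = document.createElement('script');
    script.src = chrome.runtime.getURL('tests/run-tests.js'); // placeholder file
    document.documentElement.appendChild(script);
}
(In Chrome, the test bundle would also have to be listed under web_accessible_resources in the manifest for the page to be allowed to load it.)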
Typically, for cross-browser extension development, in order to maintain a single code base I use Crossrider, but you can do this with any framework or with native extensions as you wish; Selenium won't care, it is just driving the browser to a particular page and allowing you to interact and perform tests.
One nice thing about this approach is you can use it for live users as well. If you are providing support for your extension, have a user go to your test url and immediately you will see the extension and server-side performance. You won't get the Selenium tests of course, but you will capture a lot of issues this way - very useful when you are coding against a variety of browsers and browser versions.

Some kind of task manager for JavaScript in Firefox 3?

Recently I have been having issues with Firefox 3 on Ubuntu Hardy Heron.
I will click on a link and it will hang for a while. I don't know if it's a bug in Firefox 3 or a page running too much client-side JavaScript, but I would like to try and debug it a bit.
So, my question is "is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?"
I would like to be able to see what tabs are using what percent of my processor via the JavaScript on that page (or anything in the page that is causing CPU/memory usage).
Does anybody know of a plugin that does this, or something similar? Has anyone else done this kind of inspection another way?
I know about FireBug, but I can't imagine how I would use it to finger which tab is using a lot of resources.
Any suggestions or insights?
It's probably the awesome Firefox 3 fsync "bug", which is a giant pile of fail.
In summary
Firefox3 saves its bookmarks and history in an SQLite database
Every time you load a page it writes to this database several times
SQLite cares deeply that you don't lose your bookmarks, so each time it writes, it instructs the kernel to flush its database file to disk and ensure that it's fully written
Many variants of Linux, when told to flush like that, flush EVERY FILE. This may take up to a minute or more if you have background tasks doing any kind of disk-intensive stuff.
The kernel makes firefox wait while this flush happens, which locks up the UI.
So, my question is, is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?
Because of the way Firefox is built this is not possible at the moment. But the new Internet Explorer 8 Beta 2 and the just announced Google Chrome browser are heading in that direction, so I suppose Firefox will be heading there too.
Here is a post (Google Chrome Process Manager) by John Resig, of Mozilla and jQuery fame, on the subject.
There's a thorough discussion of this that explains all of the fsync related problems that affected pre-3.0 versions of FF. In general, I have not seen the behaviour since then either, and really it shouldn't be a problem at all if your system isn't also doing IO intensive tasks. Firebug/Venkman make for nice debuggers, but they would be painful for figuring out these kinds of problems for someone else's code, IMO.
I also wish that there was an easy way to look at CPU utilization in Firefox by tab, though, as I often find myself with FF eating 100% CPU, but no clue which part is causing the problem.
XUL Profiler is an awesome extension that can point out extensions and client side JS gone bananas CPU-wise. It does not work on a per-tab basis, but per-script (or so). You can normally relate those .js scripts to your tabs or extensions by hand.
It is also worth mentioning that Google Chrome has built-in a really good task manager that gives memory and CPU usage per tab, extension and plugin.
[XUL Profiler] is a Javascript profiler. It shows elapsed time in each method as a graph, as well as browser canvas zones redraws to help track down consuming CPU chunks of code.
Traces all JS calls and paint events in XUL and pages context. Builds an animation showing dynamically the canvas zones being redrawn.
As of FF 3.6.10 it is not up to date in that it is not marked as compatible anymore. But it still works and you can override the incompatibility with the equally awesome MR Tech Toolkit extension.
There's no "process explorer" kind of tool for Firefox; but there's https://developer.mozilla.org/en-US/docs/Archive/Mozilla/Venkman with profiling mode, which you could use to see the time spent by chrome (meaning non-content, that is not web-page) scripts.
From what I've read about it, DTrace might also be useful for this sort of thing, but it requires creating a custom build and possibly adding additional probes to the source. I haven't played with it myself yet.
