We have an Angular application which was written for us and we're trying to figure out a specific problem (see below for my voluminous essay). Basically, it's a system where some digital input is supposed to control both the backlight and the page that's displayed. If the digital input is on, the backlight should also be on, and the app should work normally, allowing user interaction.
However, without that digital input signal, the application should give a blank page and turn off the backlight so that the user sees nothing and cannot interact with a real page in some blind fashion.
The backlight is controlled by a back-end process, which also delivers information to the front end (both of these run on the same device). The decision as to what's displayed on the web page, however, is totally under the control of the front end, which uses Angular routing.
We're seeing a situation where, if the digital input is toggled off and on quickly, we sometimes get the backlight on but with a blank page, and I'm wondering whether there's some sort of race condition in the delivery of events through the router.
In terms of code, we have an effect which monitors the digital input signal from the back end (not directly) that looks something like:
@Effect({ dispatch: false })
receiveDigitalInputEvent$ = this.actions.pipe(
  ofType(AppActionTypes.RECEIVE_DIGITAL_INPUT),
  map((action: AppActions.ReceiveDigitalInput) => action.payload),
  withLatestFrom(this.store.pipe(select(getRouterState))),
  tap(([data, routerState]) => {
    const currUrl = routerState ? routerState.state.url : '';
    const { isDigitalInputSet, someOtherStuff } = data;
    this.forceNavigateIfNeeded(isDigitalInputSet, currUrl);
  })
);

forceNavigateIfNeeded(isDigitalInputSet: boolean, currUrl: string) {
  if (isDigitalInputSet && currUrl.endsWith('/blank')) {
    this.router.navigateByUrl('/welcome');
  } else if (!isDigitalInputSet && !currUrl.endsWith('/blank')) {
    this.router.navigateByUrl('/blank');
  }
}
What this basically does is, on digital input event arrival:
if the input is on and the screen is currently blank, go to the welcome screen; or
if the input is off and you're not currently on the blank screen, go to the blank screen.
Now, this works normally; it's just the quick transitions that seem to be causing a problem.
I don't have that much knowledge of the Angular internals, but I assume the incoming events are being queued somehow for delivery via the routing mechanism (I believe the back end pushes stuff through to the front end via web sockets).
I've looked into the Angular source and noticed that navigateByUrl uses promises to do its work, so the code may return to pick up the next message-queue item before the page is actually changed. I just wanted someone with more Angular nous to check my reasoning or let me know if what I'm suggesting is rubbish. As to what I'm suggesting:
The digital input goes off and a message (OFF) is queued from the back end to the front end. The back end also turns off the backlight.
The digital input goes on again quickly and a message (ON) is queued from the back end to the front end. The back end also reactivates the backlight.
The front end receives the OFF message and processes it. Because we've gotten an OFF message while not on the blank page, we call navigateByUrl('/blank') which starts up a promise.
Before that promise can be fulfilled (and the routerState.state.url changed from non-blank to blank), we start processing the ON message. At this stage, we have an ON message with a (stale) non-blank page, so forceNavigateIfNeeded() does nothing.
The in-progress navigateByUrl('/blank') promise completes, setting the page to blank.
Now we seem to have the situation originally described: a blank page with the backlight on.
Is this even possible with Angular/TypeScript? It seems to rely on the fact that incoming messages are queued and acted upon (in a single-threaded manner) as quickly as possible, while front-end navigation (and, more importantly, the updated record of your current page) may take some time.
Can someone advise as to whether this is possible and, if so, what would be a good way to fix it?
I assume it's a bad idea to wait around in forceNavigateIfNeeded() for the promise to complete, but I'm not sure how else to do it.
Although your problem is quite complex, I can see some suspicious code which might be worth inspecting.
...
withLatestFrom(this.store.pipe(select(getRouterState))),
...
This might be the cause of your issue, because it takes whatever is there, whatever is latest. It doesn't request an update, nor does it wait for one. We had a similar issue where milliseconds were also in play, and I came up with a workaround where I replaced withLatestFrom with the switchMap operator. I'm not sure if it works in your case, but you can give it a try.
map((action: AppActions.ReceiveDigitalInput) => action.payload),
switchMap(data => // data === action.payload
  this.store.select(getRouterState).pipe(
    map(routerState => ({ data, routerState }))
  )
),
tap(({ data, routerState }) => { ... })
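For completeness, here's a sketch of how the whole effect might look with that substitution, reusing the action and selector names from the question (untested):

@Effect({ dispatch: false })
receiveDigitalInputEvent$ = this.actions.pipe(
  ofType(AppActionTypes.RECEIVE_DIGITAL_INPUT),
  map((action: AppActions.ReceiveDigitalInput) => action.payload),
  // re-select the router state for each incoming action instead of
  // pairing the action with whatever value happened to be latest
  switchMap(data =>
    this.store.select(getRouterState).pipe(
      map(routerState => ({ data, routerState }))
    )
  ),
  tap(({ data, routerState }) => {
    const currUrl = routerState ? routerState.state.url : '';
    this.forceNavigateIfNeeded(data.isDigitalInputSet, currUrl);
  })
);

Note that the inner observable keeps emitting as the router state changes, so the check re-runs once a pending navigation lands (and each new action cancels the previous inner subscription).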
EDIT:
In our case we used a custom selector in the switchMap, which compares current and previous state and returns the comparison result. Honestly, I'm not 100% sure if that would also help you, but it was a necessary piece of the puzzle in our case.
export const getFreshRouterState = pipe(
select(getRouterState),
startWith(null as RouterStateInterface),
pairwise(),
map(([prev, curr]) => /* do some magic here and return result */),
);
Sorry for initial confusion.
I am using the HTML5 Widget API to take a SoundCloud playlist with 14 songs and store the information about every song in an array.
I use the .getSounds method as follows:
function loadTracks(){
myPlayer.player.getSounds(function(ret){
myPlayer.playlistInfo = ret;
});
}
It correctly returns an array with 14 spots. The first 5 spots contain exactly what I want, but the last 9 have different information that does not seem to be related to the song.
I could recreate the problem with different playlists. I only get the correct information for the first 5 songs.
Does anyone have an idea on how to solve this? I set up a codepen for this.
Thanks.
I was struggling with the same problem. For me it looks like SC.Widget.Events.READY is sometimes firing before the Widget is really completely ready.
A safe solution would be to poll: repeatedly check the array for completeness until all the data you need is there. In my example on CodePen I check the array every 200 ms for all titles and break out of the loop on success.
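For reference, here's a minimal sketch of that polling approach, reusing myPlayer from the question (the title-based completeness check is an assumption; adapt it to whatever fields you need):

function loadTracks() {
  var poll = setInterval(function() {
    myPlayer.player.getSounds(function(sounds) {
      // treat the data as complete once every entry has a title
      var complete = sounds.length > 0 &&
        sounds.every(function(s) { return s.title; });
      if (complete) {
        myPlayer.playlistInfo = sounds;
        clearInterval(poll); // success: stop polling
      }
    });
  }, 200); // check every 200 ms
}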
It looks like the data is all valid (resource type: sound, type: track), but the API is not returning the full set of information for each song in the playlist beyond the fifth. It only returns the artwork URL and extended information for the first 5. I believe the rest of the songs are still accessible by their id, though. If you need the extra information, you may have to query the SoundCloud API for each specific song beyond the fifth index (if length > 5), which will probably return the full info for each song. You'll have to do many queries with this method, however.
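If you go that route, the pattern would be something like this hypothetical sketch; it assumes the public /tracks/:id endpoint and a registered client_id (CLIENT_ID is a placeholder), which, as another answer here points out, may no longer be obtainable:

// hypothetical: fill in full info for each track beyond the fifth
ret.slice(5).forEach(function(partial, i) {
  fetch('https://api.soundcloud.com/tracks/' + partial.id +
        '?client_id=' + CLIENT_ID)
    .then(function(res) { return res.json(); })
    .then(function(full) {
      myPlayer.playlistInfo[i + 5] = full; // swap the stub for the full record
    });
});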
The real answer... It is not possible. getSounds won't return all the track info for a playlist at once like you expect it to. The reasons:
The Widget API is a half-baked, insufficient, poorly-documented, abandoned API.
The full SoundCloud API has more functionality, but has been closed off for many many years, and will likely never return.
SoundCloud has no real developer support at all anymore.
SoundCloud as a whole company seems to have been circling the drain financially for several years (I recall several years ago they almost shut down before getting a new CEO or something). I speculate that this is causing the above shortcomings.
But I didn't create a new answer just to say that.
I need a media player for a redesign of my website. The SoundCloud widget is so ugly and incomplete and inflexible, but SoundCloud as a service already provides streaming audio and recording of song plays/downloads/comments, which would be a big effort to reimplement. Also for some reason, SoundCloud is the standard for embedding sharable audio on websites (look at any sample library demo page). Bandcamp has a widget too, but it can only do widgets by albums and not playlists, and also doesn't show things like play numbers or supporters, and is also completely un-customizable. So I really wanted to find a way to make this dumb Widget API work.
Here is a REALLY HACKY AND UGLY way I think I found that hopefully works consistently. Use with caution.
It literally just goes through each track with .next() as fast as it can, calling getCurrentSound repeatedly on each until it loads and returns the full song info.
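// note: this snippet assumes it runs inside an async function, with
// `player` (the widget instance) and `tracks` (the playlist) in scope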
// once you know how many tracks are in the widget...
const fullTracks = [];
// do the following process same amount of times as number of tracks
for (let track = 0; track < tracks.length; track++) {
// make periodic attempts (up to a limit) to get the full info
for (let tries = 0; tries < 100; tries++) {
// promisify getCurrentSound (doesn't handle error)
const sound = await new Promise((resolve) => player.getCurrentSound(resolve));
// check if full info has loaded for current sound yet
if (sound.artwork_url) {
// record full info
fullTracks.push(sound);
// stop re-trying
break;
} else
// wait a bit before re-trying
await new Promise((resolve) => window.setTimeout(resolve, 10));
}
// move to next track
player.next();
}
// reverse the process; go all the way back to first track again
for (let track = 0; track < tracks.length; track++)
player.prev();
On my machine and internet connection, for 13 tracks, this takes about 300ms. It also throws a bunch of horrible errors in the console that you can't try/catch to suppress.
I'm experiencing a very strange issue with Meteor whereby a single page is taking a very long time to load. I've gone down numerous routes to try and troubleshoot before identifying a cause which isn't really making sense to me.
Fundamentally I have a two-step process that collects some initial data and then inserts a couple of hundred records into the database. On the second page I then need to be able to assign users to each of the new records. I can also go back into the same page and edit the assigned users so in this second scenario the inserts have already happened - same lag.
The insert code is relatively straightforward.
Client Side
setup assessment meta-data & create new assessment record + items:
// set updatingAssessment variable to stop subs re-loading as records are inserted
Session.set('updatingAssessment', true);
Meteor.call('insertAssessment', assessment, function(err, result) {
  if (err) { showError(err.reason); }
  var assessmentId = result.toHexString();
  // some code excluded to get 'statements' array
  Meteor.call('insertAssessmentItems',
    assessmentId,
    statements,
    function(err, result) {
      Session.set('updatingAssessment', false);
    });
});
// hide first modal and show 2nd
Session.set("assessmentStage1", false);
Session.set("assessmentStage2", true);
Server methods
Volume wise, we're inserting 1 assessment record and then ~200 assessment items
insertAssessment: function(assessment) {
  this.unblock();
  return Assessments.insert({
    <<fields>>
  }, function(err, result) { return result; });
},

insertAssessmentItems: function(assessmentId, statements) {
  this.unblock();
  for (var i = 0; i < statements.length; i++) {
    AssessmentItems.insert({
      <<fields>>
    }, function(err, result) { });
  }
}
Initially I tried numerous SO suggestions: not refreshing subscriptions while the inserts were happening, using this.unblock() to free up DDP traffic, making queries on the slow page non-reactive, only returning the bare minimum fields, calling the inserts asynchronously, etc... None of it made any difference with either the initial insert or the subsequent editing of the same records (i.e. going out of the page and coming back, so the inserts have already happened).
After commenting out all database code and literally returning a string from the meteor method (with the same result / lag) I looked at the actual template generation code for the html page itself.
After commenting out various sections, I identified that a helper building up a dropdown list of users (repeated for each of those ~200 records) was causing the massive spike in load time. When I commented out the dropdown creation, the page load happens in <5s, which is within a more acceptable range although still slower than I would like.
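For illustration, the offending pattern was roughly this shape (reconstructed for this post, not the literal code): a per-row helper, so each of the ~200 rows re-queries and rebuilds the same user list:

Template.assessmentItem.helpers({
  // runs once per row; ~200 rows each rebuild the same list
  userOptions: function() {
    return Meteor.users.find().map(function(u) {
      return { value: u._id, label: u.profile && u.profile.name };
    });
  }
});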
Two oddities with this, which I was hoping somebody could shed some light on:
Has anything changed recently in Meteor that would have caused a slowdown? I haven't significantly changed my code for a couple of months and it was working before, albeit still taking around 10 seconds to fully render the page. The page also used to be responsive once loaded, whereas now the CPU is stuck at 100% and selecting users is very laggy.
Why is the actual template loading blocking my server method returns? There is no doubt I need to re-write my template to be more efficient, but I spent a lot of time trying to figure out why my server methods were not returning values quickly. I even analysed the DDP traffic, and there was no activity while it was waiting for the template to load: no reloaded subs, no method returns, nothing.
Thanks in advance,
David
I was asked to develop a tab panel with 6 tabs, each having 30 to 40 elements. Each tab acts as a form accumulating the details of a person, and the last tab is a Summary page which displays all the values entered in the first five tabs. I was asked to provide the summary as a tab because the user can navigate to it at any instance and glance at the details entered so far. I am following the ExtJS MVC pattern. The payload is coming from / going to a Spring MVC application (JSON).
I use the tab change event in the controller and, if the new tab is the summary, I render the page with show/hide functionality.
Method 1: In the controller I used Ext.getCmp('id of each element inside the tabs') and showed/hid the components in the summary tab based on the values entered by the user. This killed my app in IE8, popping up a message saying that "the script is slow and blah blah..."; I had to click No 5 or 6 times for the summary tab to render and display the page.
Method 2: In the controller I used refs and selectors to access all the items in the tabs, with an itemId for each and every field in the summary tab, like this.getXyz().show(). I thought it would be fast. Yes, it was in Google Chrome, but my app in IE8 is still slow compared to Google Chrome/Firefox.
Any suggestions regarding this, and ideas to decrease the page render time? The summary page has more than 1000 fields. Please feel free to share your thoughts or throw some ideas on this.
thank you!!
I've got a few suggestions you can try. First, to answer your title, I think the fastest simple way to lookup components in javascript is to build a hash map. Something like this:
var map = {};
Ext.each(targetComponents, function(item) {
map[item.itemId] = item;
});
// Fastest way to retrieve a component
var myField = map[componentId];
For the rendering time, be sure that the layout/DOM is not updated each time you call hide or show on a child component. Use suspendLayouts to do that:
summaryTabCt.suspendLayouts();
// intensive hide-and-seek business
// just one layout calculation and subsequent DOM manipulation
summaryTabCt.resumeLayouts(true);
Finally, if despite your best efforts you can't cut on the processing time, do damage control. That is, avoid freezing the UI the whole time, and having the browser telling the user your app is dead.
You can use setTimeout to limit the time your script will be holding the execution thread at once. The interval will let the browser some time to process UI events, and prevent it from thinking your script is lost into an infinite loop.
Here's an example:
var itemsToProcess = [...],
// The smaller the chunks, the more the UI will be responsive,
// but the whole processing will take longer...
chunkSize = 50,
i = 0,
slice;
function next() {
slice = itemsToProcess.slice(i, i+chunkSize);
i += chunkSize;
if (slice.length) {
Ext.each(slice, function(item) {
// costly business with item
});
// defer processing to give time
setTimeout(next, 50);
} else {
// post-processing
}
}
// pre-processing (eg. disabling the form submit button)
next(); // start the loop
up().down() did the magic. I have replaced each and every usage of Ext.getCmp('id'). Booooha... it's fast and NO issues.
this.up('tabpanel').down('tabName #itemIdOfField').action
where action is hide(), show(), setValues().
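For example (the 'summarytab' xtype and '#personName' itemId here are made up):

// instead of Ext.getCmp('summaryPersonName'), from a component inside the tab panel:
var field = this.up('tabpanel').down('summarytab #personName');
field.setValue(enteredName);
field.show();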
Try checking that deferredRender is true. This should only render the active tab.
You can also try a different hideMode. Especially hideMode: 'offsets' sounds promising.
Quote from the sencha API:
hideMode: 'offsets' is often the solution to layout issues in IE specifically when hiding/showing things
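For reference, a sketch of where those options would go (an assumed config, not your actual code):

Ext.create('Ext.tab.Panel', {
    deferredRender: true,       // only render the active tab
    items: [{
        title: 'Summary',
        hideMode: 'offsets'     // cheaper than display:none when toggling many fields
        // ...rest of the summary tab config
    }]
    // ...other tabs and config (renderTo, etc.)
});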
As I wrote in the comment, go through this performance guide: http://docs.sencha.com/extjs/4.2.2/#!/guide/performance
In your case, this will be very interesting for you:
Ext.suspendLayouts();
// batch of updates
// show() / hide() elements
Ext.resumeLayouts(true);
I have a web-based documentation searching/viewing system that I'm developing for a client. Part of this system is a search system that allows the client to search for a term[s] contained in the documentation. I've got the necessary search data files created, but there's a lot of data that needs to be loaded, and it takes anywhere from 8-20 seconds to load all the data. The data is broken into 40-100 files, depending on what documentation needs to be searched. Each file is anywhere from 40-350kb.
Also, this application must be able to run on the local file system, as well as through a webserver.
When the webpage loads up, I can generate a list of what search data files I need load. This entire list must be loaded before the webpage can be considered functional.
With that preface out of the way, let's look at how I'm doing it now.
After I know that the entire webpage is loaded, I call a loadData() function
function loadData(){
var d = new Date();
var curr_min = d.getMinutes();
var curr_sec = d.getSeconds();
var curr_mil = d.getMilliseconds();
console.log("test.js started background loading, time is: " + curr_min + ":" + curr_sec+ ":" + curr_mil);
recursiveCall();
}
function recursiveCall(){
if(file_array.length > 0){
var string = file_array.pop();
setTimeout(function(){$.getScript(string,recursiveCall);},1);
}
else{
var d = new Date();
var curr_min = d.getMinutes();
var curr_sec = d.getSeconds();
var curr_mil = d.getMilliseconds();
console.log("test.js stopped background loading, time is: " + curr_min + ":" + curr_sec+ ":" + curr_mil);
}
}
What this does is process an array of files sequentially, taking a 1 ms break between files. This helps prevent the browser from being completely locked up during the loading process, but the browser still tends to get bogged down by loading the data. Each of the files that I'm loading looks like this:
AddToBookData(0,[0,1,2,3,4,5,6,7,8]);
AddToBookData(1,[0,1,2,3,4,5,6,7,8]);
AddToBookData(2,[0,1,2,3,4,5,6,7,8]);
Where each line is a function call that is adding data to an array. The "AddToBookData" function simply does the following:
function AddToBookData(index1,value1){
BookData[BookIndex].push([index1,value1]);
}
This is the existing system. After loading all the data, "AddToBookData" can get called 100,000+ times.
I figured that was pretty inefficient, so I wrote a script to take the test.js file, which contains all the function calls above, and process it into a giant array equal to the data structure that BookData creates. Instead of making all the function calls that the old system did, I simply do the following:
var test_array = [..........(data structure I need).......];
BookData[BookIndex] = test_array;
I was expecting to see a performance increase because I was removing all the function calls above, but this method actually takes slightly more time to create the exact data structure. I should note that "test_array" holds slightly over 90,000 elements in my real-world test.
It seems that both methods of loading data have roughly the same CPU utilization. I was surprised to find this, since I was expecting the second method to require little CPU time, since the data structure is being created before hand.
Please advise?
Looks like there are two basic areas for optimising the data loading, that can be considered and tackled separately:
Downloading the data from the server. Rather than one large file, you should gain wins from parallel loads of multiple smaller files. Experiment with the number of simultaneous loads, bearing in mind browser limits and the diminishing returns of having too many parallel connections (there's a minimal sketch of the parallel approach after this list). See my parallel vs sequential experiments on jsfiddle, but bear in mind that the results will vary due to the vagaries of pulling the test data from github; you're best off testing with your own data under more tightly controlled conditions.
Building your data structure as efficiently as possible. Your result looks like a multi-dimensional array; this interesting article on JavaScript array performance may give you some ideas for experimentation in this area.
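Here's the minimal sketch of the parallel approach mentioned above, reusing file_array and $.getScript from the question; the completion counter is the only new piece:

var remaining = file_array.length;
$.each(file_array, function(index, url) {
  // fire all requests at once; the browser throttles actual concurrency
  $.getScript(url, function() {
    remaining--;
    if (remaining === 0) {
      console.log('all search data loaded');
    }
  });
});

Note that browsers cap simultaneous connections per host, so the effective parallelism will be lower than file_array.length.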
But I'm not sure how far you'll really be able to go with optimising the data loading alone. To solve the actual problem with your application (the browser locking up for too long), have you considered options such as the following?
Using Web Workers
Web Workers might not be supported by all your target browsers, but should prevent the main browser thread from locking up while it processes the data.
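A rough sketch of the worker approach, assuming the data files are converted to a worker-parseable format such as JSON (the file names here are mine):

// main page: hand the list of files to a worker and merge results back
var worker = new Worker('search-loader.js');
worker.onmessage = function(e) {
  BookData[e.data.bookIndex] = e.data.entries;
};
worker.postMessage({ files: file_array });

// search-loader.js: fetch and parse off the main thread
self.onmessage = function(e) {
  e.data.files.forEach(function(url, i) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, false); // synchronous is acceptable inside a worker
    xhr.send();
    self.postMessage({ bookIndex: i, entries: JSON.parse(xhr.responseText) });
  });
};

Workers can't run $.getScript or touch the DOM, hence the JSON assumption.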
For browsers without workers, you could consider increasing the setTimeout interval slightly to give the browser time to service the user as well as your JS. This will actually make things slightly slower, but may increase user happiness when combined with the next point.
Providing feedback of progress
For both worker-capable and worker-deficient browsers, take some time to update the DOM with a progress bar. You know how many files you have left to load so progress should be fairly consistent and although things may actually be slightly slower, users will feel better if they get the feedback and don't think the browser has locked up on them.
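For example, the progress update can be slotted straight into your existing recursiveCall; the #progress element and the total_files counter here are my additions:

function recursiveCall() {
  // update the bar before kicking off the next load
  var done = total_files - file_array.length;
  document.getElementById('progress').style.width =
      Math.round(100 * done / total_files) + '%';
  if (file_array.length > 0) {
    var string = file_array.pop();
    setTimeout(function(){ $.getScript(string, recursiveCall); }, 1);
  }
  // (else branch with the timing log, as before)
}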
Lazy Loading
As suggested by jira in his comment. If Google Instant can search the entire web as we type, is it really not possible to have the server return a file with all locations of the search keyword within the current book? This file should be much smaller and faster to load than the locations of all words within the book, which is what I assume you are currently trying to get loaded as quickly as you can?
I tested three methods of loading the same 9,000,000 point dataset into Firefox 3.64.
1. Stephen's getJSON method
2. My function-based push method
3. My pre-processed array-appending method
I ran my tests two ways. In the first iteration of testing, I imported 100 files each containing 10,000 rows of data, with each row containing 9 data elements [0,1,2,3,4,5,6,7,8].
In the second iteration I tried combining files, so that I was importing 1 file with 9 million data points.
This was a lot larger than the dataset I'll be using, but it helps demonstrate the speed of the various import methods.
             Separate files:   Combined file:
JSON:        34 seconds        34 seconds
FUNC-BASED:  17.5 seconds      24 seconds
ARRAY-BASED: 23 seconds        46 seconds
Interesting results, to say the least. I closed the browser after loading each webpage, and ran the tests 4 times each to minimize the effect of network traffic/variation (the tests ran across a network, using a file server). The numbers you see are averages, although the individual runs differed by only a second or two at most.
Instead of using $.getScript to load JavaScript files containing function calls, consider using $.getJSON. This may boost performance. The files would now look like this:
{
"key" : 0,
"values" : [0,1,2,3,4,5,6,7,8]
}
After receiving the JSON response, you could then call AddToBookData on it, like this:
function AddToBookData(json) {
BookData[BookIndex].push([json.key,json.values]);
}
If your files have multiple sets of calls to AddToBookData, you could structure them like this:
[
{
"key" : 0,
"values" : [0,1,2,3,4,5,6,7,8]
},
{
"key" : 1,
"values" : [0,1,2,3,4,5,6,7,8]
},
{
"key" : 2,
"values" : [0,1,2,3,4,5,6,7,8]
}
]
And then change the AddToBookData function to compensate for the new structure:
function AddToBookData(json) {
$.each(json, function(index, data) {
BookData[BookIndex].push([data.key,data.values]);
});
}
Addendum
I suspect that regardless what method you use to transport the data from the files to the BookData array, the true bottleneck is in the sheer number of requests. Must the files be fragmented into 40-100? If you change to JSON format, you could load a single file that looks like this:
{
"file1" : [
{
"key" : 0,
"values" : [0,1,2,3,4,5,6,7,8]
},
// all the rest...
],
"file2" : [
{
"key" : 1,
"values" : [0,1,2,3,4,5,6,7,8]
},
// yadda yadda
]
}
Then you could do one request, load all the data you need, and move on... Although the browser may initially lock up (although, maybe not), it would probably be MUCH faster this way.
Here is a nice JSON tutorial, if you're not familiar: http://www.webmonkey.com/2010/02/get_started_with_json/
Fetch all the data as a string, and use split(). This is the fastest way to build an array in JavaScript.
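A sketch of what that might look like, assuming the data can be exported as one delimited text file (the file name and line format here are invented for illustration):

// each line of bookdata.txt: "index|v0,v1,v2,..."
$.get('bookdata.txt', function(text) {
  text.split('\n').forEach(function(line) {
    var parts = line.split('|');
    BookData[BookIndex].push([
      parseInt(parts[0], 10),
      parts[1].split(',').map(Number)
    ]);
  });
}, 'text');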
There's an excellent article about a very similar problem, from the people who built the Flickr search: http://code.flickr.com/blog/2009/03/18/building-fast-client-side-searches/