I'm trying to start a loader to show that I have a process going on while converting a KML file to GeoJSON. The converting part works fine, but larger KML files take longer to convert, and I want to show that my web page is doing something while it's converting the file. I have this:
function convertFunction() {
    loader.style.display = "block";
    if (format.value === "kml") {
        out.value = JSON.stringify(toGeoJSON["kml"]((new DOMParser()).parseFromString(input.value, 'text/xml')), null, 4);
    }
};
I'm used to working with Android and when I order my processes like that, it usually starts the first one first.
Why does my loader not start until after my file is converted and placed into my out text view? How can I get it to work the way I want it to?
I'm not uploading the file to a server first. I'm reading the text from the file, placing the text in a text view (input), getting the text from the input, converting it, and putting the new text in the output text view. Then the last button creates the geojson file and downloads it.
The reason that your loader isn't showing is that your conversion process is blocking code. This means that it will hold up the main thread (the UI thread) until the conversion process is done. The loader will never be shown because the conversion will already be done before the browser has a chance to show it.
A possible solution:
Use a Web Worker to carry out the conversion process on a separate thread. This will keep your UI thread free so that your loader can display and animate.
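As a minimal sketch of that approach (the file name convert-worker.js and the message shape are my own assumptions): the main thread posts the raw KML text to the worker and shows the loader immediately, and the worker posts the GeoJSON string back when it is done. One caveat: DOMParser is not available inside a classic Web Worker, so the worker would need a pure-JavaScript XML parser in place of DOMParser.

// main.js -- a sketch, not a drop-in replacement
var worker = new Worker('convert-worker.js'); // hypothetical file name

function convertFunction() {
    if (format.value === "kml") {
        loader.style.display = "block"; // loader shows right away
        worker.postMessage(input.value); // hand the KML text to the worker
    }
}

worker.onmessage = function (e) {
    out.value = e.data; // GeoJSON string produced by the worker
    loader.style.display = "none";
};

// convert-worker.js -- runs off the main thread
// NOTE: DOMParser is NOT available here; you would need to importScripts()
// a pure-JS XML parser plus a worker-friendly build of the toGeoJSON library.
onmessage = function (e) {
    var kmlText = e.data;
    var geojson = convertKmlTextToGeoJSON(kmlText); // placeholder for the real conversion
    postMessage(JSON.stringify(geojson, null, 4));
};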
Another possible (simpler) workaround:
Display something on the page to indicate that the conversion process is going on, and use setTimeout to delay the conversion process just enough to allow the page to update. Note that you wouldn't be able to show any sort of animation during the conversion in this case because the blocked UI thread would prevent it from animating.
function convertFunction() {
    if (format.value === "kml") {
        loadingMessage.style.display = "block";
        setTimeout(function () {
            out.value = JSON.stringify(toGeoJSON["kml"]((new DOMParser()).parseFromString(input.value, 'text/xml')), null, 4);
            loadingMessage.style.display = "none";
            // run any code that would use out.value
        }, 100);
    }
};
I am attempting to use JSLink (finally) and I am having some trouble that I cannot seem to straighten out. For my first venture down the rabbit hole I chose something super simple as a proof of concept. I looked up a tutorial and came up with a simple script to draw a box around the Title field of each entry and style the text. I cannot get this to work. Is there any chance you can take a look at this code for me? I used the following tokens in the JSLink box.
~sitecollection/site/folder/folder/file.js
And
~site/folder/folder/file.js
The .js file is stored on the same site as the List View WebPart I am attempting to modify. The list only has the default “Title” column.
(function () {
    var overrideContext = {};
    overrideContext.Templates = {};
    overrideContext.Templates.Item = overrideTemplate;
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideContext);
})();

function overrideTemplate(ctx) {
    return "<div style='font-size:40px;border:solid 3px black;margin-bottom:6px;padding:4px;width:200px;'>" + ctx.CurrentItem.Title + "</div>";
}
It looks as though you are attempting to override the context (ctx) item itself, where you actually just want to override the list field and the list view in which the field is displayed. Make sense?
Firstly, change overrideContext.Templates.Item to overrideContext.Templates.Fields:
(function () {
    var overrideContext = {};
    overrideContext.Templates = {};
    overrideContext.Templates.Fields = {
        // Add the field and point it to your rendering function
        "Title": { "View": overrideTemplate }
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideContext);
})();
Then when the JSLink runs the renderer looks for the Title field in the List view, and applies your overrideTemplate function.
function overrideTemplate(ctx) {
    return "<div style='font-size:40px;border:solid 3px black;margin-bottom:6px;padding:4px;width:200px;'>" + ctx.CurrentItem.Title + "</div>";
}
In terms of running multiple JSLinks on a SharePoint page, it is quite possible to run multiple JSLink scripts; they just need to be separated by the pipe '|' symbol. I use SharePoint Online a lot and I see the following formatting working all the time (sorry Sascha!).
~site/yourassetfolder/yourfilename.js | ~site/yourassetfolder/anotherfilename.js
You can run as many scripts concurrently as you want, just keep separating them with the pipe. I've seen this work on-prem as well, although there you might want to swap out '~site' for '~sitecollection' and make sure the js files you are accessing are at the top-level site in the site collection if you do so.
I have noticed when running multiple JSLinks on a list or page because they are all doing Client Side Rendering, too many will slow your page down. If this happens, you might want to consider combining them into one JSLink script so that the server only has to call one file to return to the client to do all the rendering needed for your list.
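If you do combine them, one file can register several field overrides in a single call. A rough sketch (the DueDate field and the render function names here are placeholders, not part of the original list):

(function () {
    var overrideContext = {};
    overrideContext.Templates = {};
    overrideContext.Templates.Fields = {
        // Each entry points a field at its own rendering function
        "Title": { "View": renderTitle },
        "DueDate": { "View": renderDueDate } // hypothetical second field
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideContext);
})();

function renderTitle(ctx) {
    return "<div style='border:solid 3px black;padding:4px;'>" + ctx.CurrentItem.Title + "</div>";
}

function renderDueDate(ctx) {
    return "<span>" + ctx.CurrentItem.DueDate + "</span>";
}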
Hope this helps.
Edit: I've redacted actual numbers and replaced them with pseudoswears because I've been told sharing performance data is against Netsuite's TOS.
I'm integrating our accounting system with NetSuite using restlets, and overall it has gone pretty well with the glaring exception of performance. I have figured out that nlapiLoadRecord is Satan's own child from a performance perspective, so I avoid it whenever possible favoring the search api and now my read restlets are pretty snappy. However, whenever I ask a restlet to write anything it is as slow as a tortoise stuck in cold tar. I'm generally profiling the nlapiSubmitRecord itself at between SLOW and REALLY DAMN SLOW seconds. This seems insane to me. No one would use NetSuite if the performance were always this poor for writing. I'll include a couple of code examples below. Any tips on speeding up NetSuite restlet write performance would be appreciated.
In this first one receivedInvoice is the incoming data, and findCurrencyCode and findCustomerByCustomerNumber are well performing functions that do those things. I just clocked this at a nearly unbelievable HOLY MONKEYS THAT IS SLOW seconds for a simple invoice with one line item, nearly all of the time passing while I waited for nlapiSubmitRecord to finish.
var createdInvoice = nlapiCreateRecord('invoice');
createdInvoice.setFieldValue('customform', Number(receivedInvoice.transactionType));
createdInvoice.setFieldValue('memo', receivedInvoice.message);
createdInvoice.setFieldValue('duedate', receivedInvoice.dateDue);
createdInvoice.setFieldValue('currency', findCurrencyCode(receivedInvoice.currencyUnit));
createdInvoice.setFieldValue('location', Number(receivedInvoice.location));
createdInvoice.setFieldValue('postingperiod', findPostingPeriod(receivedInvoice.datePosted));

var customer = findCustomerByCustomerNumber(receivedInvoice.customerNumber);
createdInvoice.setFieldValue('entity', customer.customerId);
createdInvoice.setFieldValue('custbody_end_user', customer.customerId);
createdInvoice.setFieldValue('department', customer.departmentId);

var itemCount = receivedInvoice.items.length;
for (var i = 0; i < itemCount; i++) {
    createdInvoice.selectNewLineItem('item');
    createdInvoice.setCurrentLineItemValue('item', 'item', receivedInvoice.items[i].item);
    createdInvoice.setCurrentLineItemValue('item', 'quantity', receivedInvoice.items[i].quantity);
    createdInvoice.setCurrentLineItemValue('item', 'rate', receivedInvoice.items[i].price);
    createdInvoice.setCurrentLineItemValue('item', 'custcol_list_rate', receivedInvoice.items[i].price);
    createdInvoice.setCurrentLineItemValue('item', 'amount', receivedInvoice.items[i].totalAmount);
    createdInvoice.setCurrentLineItemValue('item', 'description', receivedInvoice.items[i].description);
    createdInvoice.commitLineItem('item');
}

var recordNumber = nlapiSubmitRecord(createdInvoice, false, true);
In this one I think I'm committing a performance heresy by opening the record in dynamic mode, but I'm not sure how else to get the possible line items. Simply opening a new record in dynamic mode clocks in at around SLOW seconds. Again, the submit is where most time is being eaten (often around OH DEAR SWEET MOTHER OF HORRIBLE seconds), although this one eats a decent amount of time as I mess with the line items, again presumably because I have opened the record in dynamic mode.
var customerPayment = nlapiCreateRecord('customerpayment', {recordmode: 'dynamic'});
customerPayment.setFieldValue('customer', parseInt(customerId));
customerPayment.setFieldValue('payment', paymentAmount);
customerPayment.setFieldValue('paymentmethod', paymentMethod);
customerPayment.setFieldValue('checknum', transactionId);
customerPayment.setFieldValue('currency', currency);
customerPayment.setFieldValue('account', account);

var applyCount = customerPayment.getLineItemCount('apply');
if (applyCount > 0) {
    for (var i = 1; i <= applyCount; i++) {
        var thisInvoice = customerPayment.getLineItemValue('apply', 'refnum', i);
        if (thisInvoice == invoiceToPay) {
            customerPayment.selectLineItem('apply', i);
            customerPayment.setCurrentLineItemValue('apply', 'apply', 'T');
            customerPayment.setCurrentLineItemValue('apply', 'amount', paymentAmount);
            customerPayment.commitLineItem('apply');
        }
    }
}

nlapiSubmitRecord(customerPayment, false, true);
A few thoughts:
(just to get it off my chest) Integrating your accounting system with Netsuite just sounds odd. Netsuite is an accounting system and generally is the accounting system of record for the orgs that use it. If you are not using Netsuite for accounting you might want to consider what utility it has for the price and get off it.
When I integrate an external system with Netsuite I generally try to make it async. I do this by getting the raw info into a custom record and then kick off a scheduled script to process the queued update. This lets my api return quickly. When I process the queue I store errors in the queue record so that if anything comes up I can fix the data or code and re-submit it.
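A rough sketch of that pattern (the custom record type, its field IDs, and the script/deployment IDs below are placeholders, not real IDs from any account): the restlet just stores the payload and kicks off the scheduled script, which does the slow nlapiSubmitRecord work later.

// Restlet POST handler: queue the payload and return quickly.
function postInvoice(receivedInvoice) {
    var queued = nlapiCreateRecord('customrecord_invoice_queue');                      // hypothetical custom record
    queued.setFieldValue('custrecord_queue_payload', JSON.stringify(receivedInvoice)); // hypothetical field
    queued.setFieldValue('custrecord_queue_status', 'PENDING');                        // hypothetical field
    var queueId = nlapiSubmitRecord(queued); // a plain custom record with no GL impact saves quickly

    // Kick off the processor; script and deployment IDs are placeholders.
    nlapiScheduleScript('customscript_process_invoice_queue', 'customdeploy_process_invoice_queue');

    return { queued: true, queueId: queueId };
}

// The scheduled script would then pull PENDING queue records, build and submit the
// invoices, and write any errors back onto the queue record for re-submission.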
One apparently major source of slow transaction submits (aside from slow UE scripts) is the state of your books. I had a startup customer who did really well, but they never closed their books and they were using, IIRC, Average Costing. Every time they saved a posting transaction, NetSuite was recalculating the whole period (which, at the point where things were grinding to a halt, was about 2 years of records for a very busy site). When they started closing periods, transaction save times went way down. When they converted to standard costing, transaction save times went down again (I imagine LIFO would be faster than Average and slower than Standard, but YMMV).
Also, some notes on your code.
The normal way to create an invoice is
nlapiTransformRecord('customer', customer.customerId, 'invoice');
// or
nlapiTransformRecord('customer', customer.customerId, 'invoice', {recordmode: 'dynamic'});
I've never tested whether this has an effect on submit times, but it might help, since NS will start the save from a slightly better place (grasping at straws, but with NS sometimes non-obvious changes have speed benefits).
Also not sure how changing the customform in dynamic mode works. When you know the form you can also add that to the init defaults:
{recordmode:'dynamic', customform:receivedInvoice.transactionType}
One of the reasons why your submit is slow might be that there are a lot of User Event scripts attached to the record. Since it is the Restlet that saves the record, it will trigger the User Event scripts.
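If you do find heavy User Event scripts in the chain, one option (a sketch, assuming the expensive logic is only needed for UI edits) is to check the execution context and bail out early when the save comes from a restlet:

// Inside an existing User Event script (SuiteScript 1.0).
function beforeSubmit(type) {
    // Skip the expensive logic when the record is being saved by a restlet.
    if (nlapiGetContext().getExecutionContext() === 'restlet') {
        return; // assumption: this UE logic is only needed for UI edits
    }
    // ...expensive validation / defaulting that only UI edits need...
}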
I was asked to develop a tab panel with 6 tabs, each having 30 to 40 elements. Each tab acts as a form for accumulating the details of a person, and the last tab is a Summary page which displays all the values entered in the first five tabs. I was asked to provide the summary as a tab because the user can navigate to the summary tab at any time and glance at the details entered so far. I am following the ExtJS MVC pattern. The payload is coming from / going to a Spring MVC application (JSON).
I'm using the tab change event in the controller, and if the new tab is the summary I render the page with show/hide functionality.
Method 1: In the controller I used Ext.getCmp('id of each element inside the tabs') and showed/hid the components in the summary tab based on the values entered by the user. This killed my app in IE8, popping a message saying that the "script is slow and blah blah..."; I had to click NO 5 or 6 times for the summary tab to render and display the page.
Method 2: In the controller I used refs and selectors to access all the items in the tabs. I used an itemId for each and every field in the summary tab, like this.getXyz().show(). I thought it would be fast. Yes, it was in Google Chrome, but my app in IE8 is slow compared to Google Chrome/Firefox.
Any suggestions regarding this, and any plan to decrease the page render time? The summary page has more than 1000 fields. Please feel free to share your thoughts or throw some ideas at this.
thank you!!
I've got a few suggestions you can try. First, to answer your title, I think the fastest simple way to lookup components in javascript is to build a hash map. Something like this:
var map = {};
Ext.each(targetComponents, function(item) {
    map[item.itemId] = item;
});

// Fastest way to retrieve a component
var myField = map[componentId];
For the rendering time, be sure that the layout/DOM is not updated each time you call hide or show on a child component. Use suspendLayouts to do that:
summaryTabCt.suspendLayouts();
// intensive hide-and-seek business
// just one layout calculation and subsequent DOM manipulation
summaryTabCt.resumeLayouts(true);
Finally, if despite your best efforts you can't cut on the processing time, do damage control. That is, avoid freezing the UI the whole time, and having the browser telling the user your app is dead.
You can use setTimeout to limit the time your script will be holding the execution thread at once. The interval will let the browser some time to process UI events, and prevent it from thinking your script is lost into an infinite loop.
Here's an example:
var itemsToProcess = [...],
    // The smaller the chunks, the more the UI will be responsive,
    // but the whole processing will take longer...
    chunkSize = 50,
    i = 0,
    slice;

function next() {
    slice = itemsToProcess.slice(i, i + chunkSize);
    i += chunkSize;
    if (slice.length) {
        Ext.each(slice, function(item) {
            // costly business with item
        });
        // defer processing to give time
        setTimeout(next, 50);
    } else {
        // post-processing
    }
}

// pre-processing (eg. disabling the form submit button)
next(); // start the loop
up().down().action... did the magic. I have replaced each and every usage of Ext.getCmp('id'). Booooha... it's fast and there are NO issues.
this.up('tabpanel').down('tabName #itemIdOfField').action
where action is hide(), show(), or setValues().
Try checking that deferredRender is true. This should render only the active tab.
You can also try a different hideMode. hideMode: 'offsets' in particular sounds promising.
Quote from the sencha API:
hideMode: 'offsets' is often the solution to layout issues in IE specifically when hiding/showing things
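A minimal sketch of how those two settings might look on the tab panel config (the tab titles here are placeholders):

Ext.create('Ext.tab.Panel', {
    deferredRender: true, // render a tab's items only when that tab is activated
    items: [
        { title: 'Personal', hideMode: 'offsets' }, // placeholder tabs
        { title: 'Summary',  hideMode: 'offsets' }
        // ...remaining tabs...
    ]
});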
As I wrote in the comment, go through this performance guide: http://docs.sencha.com/extjs/4.2.2/#!/guide/performance
In your case, this will be very interesting for you:
Ext.suspendLayouts();
// batch of updates
// show() / hide() elements
Ext.resumeLayouts(true);
I have a web-based documentation searching/viewing system that I'm developing for a client. Part of this system is a search system that allows the client to search for a term[s] contained in the documentation. I've got the necessary search data files created, but there's a lot of data that needs to be loaded, and it takes anywhere from 8-20 seconds to load all the data. The data is broken into 40-100 files, depending on what documentation needs to be searched. Each file is anywhere from 40-350kb.
Also, this application must be able to run on the local file system, as well as through a webserver.
When the webpage loads up, I can generate a list of which search data files I need to load. This entire list must be loaded before the webpage can be considered functional.
With that preface out of the way, let's look at how I'm doing it now.
After I know that the entire webpage is loaded, I call a loadData() function
function loadData(){
    var d = new Date();
    var curr_min = d.getMinutes();
    var curr_sec = d.getSeconds();
    var curr_mil = d.getMilliseconds();
    console.log("test.js started background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    recursiveCall();
}

function recursiveCall(){
    if(file_array.length > 0){
        var string = file_array.pop();
        setTimeout(function(){ $.getScript(string, recursiveCall); }, 1);
    }
    else{
        var d = new Date();
        var curr_min = d.getMinutes();
        var curr_sec = d.getSeconds();
        var curr_mil = d.getMilliseconds();
        console.log("test.js stopped background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    }
}
What this does is process an array of files sequentially, taking a 1ms break between files. This helps prevent the browser from being completely locked up during the loading process, but the browser still tends to get bogged down by loading the data. Each of the files that I'm loading looks like this:
AddToBookData(0,[0,1,2,3,4,5,6,7,8]);
AddToBookData(1,[0,1,2,3,4,5,6,7,8]);
AddToBookData(2,[0,1,2,3,4,5,6,7,8]);
Where each line is a function call that is adding data to an array. The "AddToBookData" function simply does the following:
function AddToBookData(index1, value1){
    BookData[BookIndex].push([index1, value1]);
}
This is the existing system. After loading all the data, "AddToBookData" can get called 100,000+ times.
I figured that was pretty inefficient, so I wrote a script to take the test.js file, which contains all the function calls above, and convert it into one giant array equal to the data structure that BookData builds. Instead of making all the function calls that the old system did, I simply do the following:
var test_array = [..........(data structure I need).......];
BookData[BookIndex] = test_array;
I was expecting to see a performance increase because I was removing all the function calls above, but this method actually takes slightly more time to create the exact same data structure. I should note that "test_array" holds slightly over 90,000 elements in my real-world test.
It seems that both methods of loading data have roughly the same CPU utilization. I was surprised to find this, since I was expecting the second method to require little CPU time, because the data structure is created beforehand.
Please advise?
Looks like there are two basic areas for optimising the data loading that can be considered and tackled separately:
Downloading the data from the server. Rather than one large file, you should gain wins from parallel loads of multiple smaller files. Experiment with the number of simultaneous loads, bearing in mind browser limits and the diminishing returns of having too many parallel connections. See my parallel vs sequential experiments on jsFiddle, but bear in mind that the results will vary due to the vagaries of pulling the test data from GitHub; you're best off testing with your own data under more tightly controlled conditions.
Building your data structure as efficiently as possible. Your result looks like a multi-dimensional array; this interesting article on JavaScript array performance may give you some ideas for experimentation in this area.
But I'm not sure how far you'll really be able to go with optimising the data loading alone. To solve the actual problem with your application (the browser locking up for too long), have you considered options such as the following?
Using Web Workers
Web Workers might not be supported by all your target browsers, but should prevent the main browser thread from locking up while it processes the data.
For browsers without workers, you could consider increasing the setTimeout interval slightly to give the browser time to service the user as well as your JS. This will make things actually slightly slower but may increase user happiness when combined with the next point.
Providing feedback of progress
For both worker-capable and worker-deficient browsers, take some time to update the DOM with a progress bar. You know how many files you have left to load so progress should be fairly consistent and although things may actually be slightly slower, users will feel better if they get the feedback and don't think the browser has locked up on them.
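A rough sketch of how that might look with your existing recursiveCall loop (the progress element ID below is an assumption on my part, not something from your page):

// Assumes a <progress id="loadProgress" max="1"> element in the page
// and the existing file_array from the question.
var totalFiles = file_array.length;

function recursiveCall() {
    if (file_array.length > 0) {
        var string = file_array.pop();
        // Update the bar before kicking off the next load.
        var done = totalFiles - file_array.length;
        document.getElementById('loadProgress').value = done / totalFiles;
        setTimeout(function () { $.getScript(string, recursiveCall); }, 1);
    } else {
        // Loading finished: hide the progress bar, enable the search UI, etc.
    }
}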
Lazy Loading
As suggested by jira in his comment. If Google Instant can search the entire web as we type, is it really not possible to have the server return a file with all locations of the search keyword within the current book? This file should be much smaller and faster to load than the locations of all words within the book, which is what I assume you are currently trying to get loaded as quickly as you can?
I tested three methods of loading the same 9,000,000 point dataset into Firefox 3.6.4.
1) Stephen's GetJSON method
2) My function-based push method
3) My pre-processed array appending method
I ran my tests two ways: in the first iteration of testing I imported 100 files containing 10,000 rows of data each, with each row containing 9 data elements [0,1,2,3,4,5,6,7,8]. In the second iteration I tried combining files, so that I was importing 1 file with 9 million data points.
This was a lot larger than the dataset I'll be using, but it helps demonstrate the speed of the various import methods.
             Separate files:    Combined file:
JSON:        34 seconds         34 seconds
FUNC-BASED:  17.5 seconds       24 seconds
ARRAY-BASED: 23 seconds         46 seconds
Interesting results, to say the least. I closed out the browser after loading each webpage, and ran the tests 4 times each to minimize the effect of network traffic/variation. (ran across a network, using a file server). The number you see is the average, although the individual runs differed by only a second or two at most.
Instead of using $.getScript to load JavaScript files containing function calls, consider using $.getJSON. This may boost performance. The files would now look like this:
{
    "key" : 0,
    "values" : [0,1,2,3,4,5,6,7,8]
}
After receiving the JSON response, you could then call AddToBookData on it, like this:
function AddToBookData(json) {
    BookData[BookIndex].push([json.key, json.values]);
}
If your files have multiple sets of calls to AddToBookData, you could structure them like this:
[
    {
        "key" : 0,
        "values" : [0,1,2,3,4,5,6,7,8]
    },
    {
        "key" : 1,
        "values" : [0,1,2,3,4,5,6,7,8]
    },
    {
        "key" : 2,
        "values" : [0,1,2,3,4,5,6,7,8]
    }
]
And then change the AddToBookData function to compensate for the new structure:
function AddToBookData(json) {
    $.each(json, function(index, data) {
        BookData[BookIndex].push([data.key, data.values]);
    });
}
Addendum
I suspect that regardless of what method you use to transport the data from the files to the BookData array, the true bottleneck is in the sheer number of requests. Must the files be fragmented into 40-100? If you change to JSON format, you could load a single file that looks like this:
{
    "file1" : [
        {
            "key" : 0,
            "values" : [0,1,2,3,4,5,6,7,8]
        },
        // all the rest...
    ],
    "file2" : [
        {
            "key" : 1,
            "values" : [0,1,2,3,4,5,6,7,8]
        },
        // yadda yadda
    ]
}
Then you could do one request, load all the data you need, and move on... Although the browser may initially lock up (although, maybe not), it would probably be MUCH faster this way.
Here is a nice JSON tutorial, if you're not familiar: http://www.webmonkey.com/2010/02/get_started_with_json/
Fetch all the data as a string, and use split(). This is the fastest way to build an array in JavaScript.
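A minimal sketch of that approach, assuming a hypothetical line format of "index;v0,v1,v2,..." in a single data file (any simple delimited format would do; the file name is a placeholder):

// bookdata.txt (hypothetical): one line per entry, e.g. "0;0,1,2,3,4,5,6,7,8"
$.get('bookdata.txt', function (text) {
    var lines = text.split('\n');
    for (var i = 0; i < lines.length; i++) {
        if (!lines[i]) { continue; }          // skip blank lines
        var parts = lines[i].split(';');      // "index" and "v0,v1,..."
        var values = parts[1].split(',');
        for (var j = 0; j < values.length; j++) {
            values[j] = Number(values[j]);
        }
        BookData[BookIndex].push([Number(parts[0]), values]);
    }
}, 'text');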
There's an excellent article on a very similar problem from the people who built the Flickr search: http://code.flickr.com/blog/2009/03/18/building-fast-client-side-searches/