I have a fairly small grid (30x20) with numbers in its cells. I have to display them all, calculate with them in different ways (by columns, rows, selected cells, etc.) and write values back to some cells. This data is also written to and read from database table fields. Everything works except the (theoretically) simple load mask.
While writing data to the table field, for example, I try to show a mask and hide it when finished. I have used such masks very often, but only in this situation do I have a problem I can't solve.
I prepare this mask the following way:
msk = new Ext.LoadMask(Ext.getBody(), { msg: "data loading ..." });
msk.show();
[writing data loops]
msk.hide();
msk.destroy();
I also tried using the grid object in place of Ext.getBody(), but without result.
I also noticed that the program behaves in a peculiar way: the loops I use to write data to the table field seem to be "skipped over" by the mask, as if they were running in the background (asynchronously).
Would you be so kind as to suggest something?
No, no, no, sorry guys, but my description wasn't very precise. It isn't a problem of loading or writing data to the database. Let's say the stores are in memory; my problem is to calculate something, write it into the grid, and simply see those values on the screen. Let me use my example once again:
msk = new Ext.LoadMask(Ext.getBody(), { msg: "data loading ..." });
msk.show();
Ext.each(dataX.getRange(), function (X) {
    Ext.each(dataY.getRange(), function (Y) {
        …
        X.set('aaa', 10);
        …
    });
});
msk.hide();
msk.destroy();
And in this situation the mask either isn't visible at all or disappears too fast to see.
In the meantime I found what I think is a good description of my problem, but still no solution that works for me. When I use, for example, an alert() call I do see the mask; when I just use a delay, the mask is still too fast. The explanation is the following:
The reason for that is quite simple - JS is single threaded. If you modify the DOM (for example by turning a mask on), the actual change is made only after the current execution path has finished. Because you turn the mask on at the beginning of some time-consuming task, the browser waits with the DOM change until that task finishes. Because you turn the mask off at the end of the method, the mask might not show at all. The solution is simple - invoke the store rebuild after some delay.
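Applied to the snippet above, that "after some delay" idea might look like the following sketch (Ext.defer just wraps setTimeout, which gives the browser a chance to paint the mask before the loops run):
msk = new Ext.LoadMask(Ext.getBody(), { msg: "data loading ..." });
msk.show();

// Defer the heavy work so the mask can actually be rendered first.
Ext.defer(function () {
    Ext.each(dataX.getRange(), function (X) {
        Ext.each(dataY.getRange(), function (Y) {
            X.set('aaa', 10);
        });
    });
    msk.hide();
    msk.destroy();
}, 50);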
I have no idea how your code looks in general, but here is a tip you can actually use.
First of all, load operations are asynchronous, so you need to show the mask and then hide/destroy it once the data has loaded.
First, check whether your store configuration has autoLoad: false.
If so, we can take the next step.
Since ExtJS is strongly built around the MVC design pattern, you should have a controller somewhere in your project.
I suppose you are loading your data on afterrender or on a button click event, so we can do this:
In a function, for example loadImportantData:
loadImportantData: function () {
    var controller = this;
    var store = controller.getStore('YourStore'); // or Ext.getStore('YourStore'); depends on your controller configuration
    var myMask = new Ext.LoadMask(Ext.getBody(), { msg: "Please wait..." });
    myMask.show();
    store.load({
        callback: function (records, operation, success) {
            // This callback is fired when the store has loaded all its data.
            // Then hide the mask.
            myMask.hide();
        }
    });
}
When the data is loaded, the mask will disappear.
If you have a reference to the grid, you can simply call grid.setLoading(true) to display a loading mask over the grid at any time.
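For example (a sketch; how you obtain the grid reference depends on your setup):
// however you get hold of your grid instance (ref, component query, etc.)
var grid = Ext.ComponentQuery.query('grid')[0];
grid.setLoading('data loading ...');   // or simply setLoading(true)

// ... heavy calculations / writes into the grid ...

grid.setLoading(false);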
I have an autocomplete widget which needs to return options from a database of objects.
Once the user selects an item, the widget populates other hidden text fields with values from the particular object they chose. All of this works and has been used on previous projects.
However, this particular database is far too big (44k+ objects; the file size is several MB and has taken far too long to load in practice), so we've tried various ways of splitting it up. So far the best has been by the first letter of the object label.
As a result I'm trying to create a function which tracks the user's input into a text field and returns the first letter. This is then used to fetch a file of that name via AJAX (e.g. a.js).
That said, I've never had much luck tracking user input at this level, and I normally find it takes a couple of keystrokes before everything starts working when I want it working on the first keystroke. Does anyone have advice on a better way of going about this, or on why the process doesn't work straight away?
Here is my current non-working code to track the user input - it's used on page load:
function startupp(){
    console.log("starting");
    $("#_Q0_Q0_Q0").on("keyup", function(){
        console.log("further starting!");
        if($("#_Q0_Q0_Q0").val().length == 1){
            console.log("more starting");
            countryChange(($("#_Q0_Q0_Q0").val()[0]).toUpperCase());
        }
        else{
            console.log("over or under");
        }
    });
}
And an example of the data (dummy values):
tags = [
    {
        label: "label",
        code: "1",
        refnum: "555555",
        la: "888",
        DCSF: "4444",
        type: "Not applicable",
        status: "Open",
        UR: "1",
        gRegion: "North West"
    },
    ....
];
Edit: fixes applied:
Changed startupp from .change(function) to .on("keyup", function) - keydown could also be used; this is personal preference for me.
Changed the autocomplete settings to minLength: 4 - since the data starts loading from the first letter, this gives it the extra few milliseconds to load before offering options, and it also cuts down how much data needs to be shown (which helps in a couple of specific instances).
Changed how the source is gathered by changing the autocomplete setting to the following:
source: function(request, response) {
    var results = $.ui.autocomplete.filter(tags, request.term);
    response(results.slice(0, 20));
},
where tags is the array with the data.
all seems to be working now.
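For reference, a minimal sketch of how these pieces fit together (the field id, countryChange and tags come from above; everything else is an assumption):
$(function () {
    startupp();   // keyup handler above: loads the per-letter file via countryChange()

    $("#_Q0_Q0_Q0").autocomplete({
        minLength: 4,
        source: function (request, response) {
            var results = $.ui.autocomplete.filter(tags, request.term);
            response(results.slice(0, 20));
        }
    });
});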
You should bind to the keydown event:
function startupp(){
    console.log("starting");
    $("#_Q0_Q0_Q0").keydown(function(){
        console.log("further starting!");
        if($(this).val().length == 1){
            console.log("more starting");
            countryChange(($(this).val()[0]).toUpperCase());
        }
        else{
            console.log("over or under");
        }
    });
}
I was asked to develop a tab panel with 6 tabs, each having 30 to 40 elements. Each tab acts as a form collecting a person's details, and the last tab is a Summary page which displays all the values entered in the first five tabs. I was asked to provide the summary as a tab because the user can navigate to it at any time and glance at the details entered so far. I am following the ExtJS MVC pattern. The payload comes from / goes to a Spring MVC application (JSON).
I use the tab change event in the controller, and if the new tab is the summary I render the page with show/hide functionality.
Method 1: In the controller I used Ext.getCmp('id of each element inside the tabs') and showed/hid the components in the summary tab based on the values entered by the user. This killed my app in IE8, which popped up a message saying "the script is slow and blah blah..."; I had to click No 5 or 6 times before the summary tab rendered and displayed the page.
Method 2: In the controller I used refs and selectors to access all the items in the tabs, and an itemId for each and every field in the summary tab, like this.getXyz().show(). I thought it would be fast. It was in Google Chrome, but in IE8 the app is slow compared to Google Chrome/Firefox.
Any suggestions regarding this, and ideas to decrease the page render time? The summary page has more than 1000 fields. Please feel free to share your thoughts or ideas on this.
thank you!!
I've got a few suggestions you can try. First, to answer your title, I think the fastest simple way to look up components in JavaScript is to build a hash map. Something like this:
var map = {};
Ext.each(targetComponents, function(item) {
    map[item.itemId] = item;
});
// Fastest way to retrieve a component
var myField = map[componentId];
For the rendering time, be sure that the layout/DOM is not updated each time you call hide or show on a child component. Use suspendLayouts to do that:
summaryTabCt.suspendLayouts();
// intensive hide-and-seek business
// just one layout calculation and subsequent DOM manipulation
summaryTabCt.resumeLayouts(true);
Finally, if despite your best efforts you can't cut down the processing time, do damage control. That is, avoid freezing the UI the whole time and having the browser tell the user your app is dead.
You can use setTimeout to limit how long your script holds the execution thread at once. The interval gives the browser some time to process UI events and prevents it from thinking your script is lost in an infinite loop.
Here's an example:
var itemsToProcess = [...],
    // The smaller the chunks, the more responsive the UI,
    // but the whole processing will take longer...
    chunkSize = 50,
    i = 0,
    slice;

function next() {
    slice = itemsToProcess.slice(i, i + chunkSize);
    i += chunkSize;
    if (slice.length) {
        Ext.each(slice, function(item) {
            // costly business with item
        });
        // defer processing to give time
        setTimeout(next, 50);
    } else {
        // post-processing
    }
}

// pre-processing (eg. disabling the form submit button)
next(); // start the loop
up().down().action() did the magic. I have replaced each and every usage of Ext.getCmp('id'). It's fast and there are NO issues.
this.up('tabpanel').down('tabName #itemIdOfField').action();
where action is hide(), show(), setValues(), etc.
Check that deferredRender is true. This renders only the active tab.
You can also try a different hideMode. In particular, hideMode: 'offsets' sounds promising.
Quote from the Sencha API:
hideMode: 'offsets' is often the solution to layout issues in IE specifically when hiding/showing things
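For illustration, a rough sketch of where those options would go (the titles and itemIds here are made up):
Ext.create('Ext.tab.Panel', {
    deferredRender: true,                     // only the active tab is rendered up front
    items: [
        // ... the five data-entry tabs ...
        {
            title: 'Summary',
            itemId: 'summaryTab',
            defaults: { hideMode: 'offsets' } // applied to the fields you show/hide
        }
    ]
});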
As I wrote in the comment, go through this performance guide: http://docs.sencha.com/extjs/4.2.2/#!/guide/performance
In your case, this part will be especially interesting:
{
    Ext.suspendLayouts();
    // batch of updates
    // show() / hide() elements
    Ext.resumeLayouts(true);
}
I am exploring using DGrid for my web application. I am trying to have a table similar to this.
The code for the example above is here.
The table uses a memory store as the source of its data - the summary field there is what shows up when we click and expand each row.
I want the details (i.e. the text shown when we click a row) to be fetched from the server on clicking the row, instead of being statically loaded along with the rest of the page.
How do I modify the above code to be able to do that?
(My requirement is an HTML table whose rows expand on click, where the data for each expansion is fetched from the server, using AJAX for instance. I am just exploring dgrid as an option, since I get sortable columns for free with it. If there is a better option, please let me know.)
EDIT: Basically I am looking for ideas on how to do that, not expecting anyone to actually give me the code. Being rather unfamiliar with Dojo, I am not sure what the right approach would be.
If your AJAX call returns HTML, you could place a dijit/layout/ContentPane in your renderer and set the URL of the contents you want to fetch in the ContentPane's href property. Assuming that your initial data (the equivalent of the example's memory store) has a property called "yourServiceCallHref" containing the URL you want to lazy load, you could try this:
require(["dijit/layout/ContentPane", ...], function(ContentPane){
renderers = {
...,
table: function(obj, options){
var div = put("div.collapsed", Grid.prototype.renderRow.apply(this, arguments)),
cp = new ContentPane({
href : obj.yourServiceCallHref
}),
expando = put(div, "div.expando", cp.domNode);
cp.startup();
return div;
}
});
If your service returns JSON, you could probably do something with dojo/request in a similar fashion. Just add your DOM creation steps in the request callback and put them inside the div called "expando"...
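Roughly like this (a sketch only; it reuses the div, obj and yourServiceCallHref from the renderer above, and "details" is a made-up field name for whatever your service returns):
require(["dojo/request", "put-selector/put"], function(request, put){
    // inside the renderer, instead of creating a ContentPane:
    var expando = put(div, "div.expando");
    request.get(obj.yourServiceCallHref, { handleAs: "json" }).then(function(data){
        // build whatever DOM you need from the JSON; here just a text node
        put(expando, "span", data.details);
    });
});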
Another option would be to replace the Memory store with a JsonRest store and have the server output the same JSON format as the one you see in the Memory store. That means all the data would be fetched in a single call, though...
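A bare-bones sketch of that option (the target URL is an assumption):
require(["dojo/store/JsonRest"], function(JsonRest){
    // The server must return the same JSON shape the Memory store held,
    // detail text included, for every row in one response.
    var store = new JsonRest({ target: "/yourapp/rows/" });
    // then hand this store to the grid in place of the Memory store
});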
Hope you can assist.
I am currently trying to carry out one of the simplest tasks via a user event script: setting a new value in the 'discount rate' field on a sales order. My script works fine when I test it on the client, but when the scheduled script triggers it, the field fails to set/update.
The following code is within a 'beforesubmit' operation. Can you spot what I have done wrong?
function beforeSubmit_discountVAT(type){
    if(nlapiGetContext().getExecutionContext() != 'scheduled')
        return;
    var getDiscountVal = nlapiGetFieldValue('discountrate');
    var correctDiscount = getDiscountVal / 1.2;
    nlapiSetFieldValue('discountrate', correctDiscount);
}
In short, all I want to do is reduce the discount value by 20%. Can you use nlapiSetFieldValue when a user event script is triggered from a scheduled script?
Thanks in advance.
AWB
When editing an existing record, nlapiSetFieldValue cannot be used in a Before Load event on fields that are stored with the record. From the function's JSDocs:
Sets the value of a given body field. This API can be used in user event beforeLoad scripts to initialize field on new records or non-stored fields.
nlapiSetFieldValue can thus only be used reliably in Before Load on new records or non-stored fields.
Realizing this is a month old and you've probably found a solution by now: I would move your code to the Before Submit event. I tested this using a scheduled script:
var customer = nlapiLoadRecord('customer', '31294');
nlapiSubmitRecord(customer);
and a User Event script on the Customer record, in the Before Submit event:
if (nlapiGetContext().getExecutionContext() === 'scheduled') {
    nlapiSetFieldValue('url', 'http://www.google.com/', false);
}
This works as expected for me in a 2013.1 Sandbox environment.
Using nlapiSubmitField in After Submit as mentioned in the other answer is an unnecessarily long operation that will use extra governance units. Not a huge deal if that's the only thing your script is doing, but if you ever expand the script or add looping, it can add up quickly in terms of performance and governance usage.
Also, it may not be necessary, but you should ensure that getDiscountVal is a Number by doing:
var getDiscountVal = parseFloat(nlapiGetFieldValue('discountrate'));
If it comes back as a String then your division operation may give a strange result which may also cause nlapiSetFieldValue to fail or set the field to an odd value.
Two suggestions:
Make sure that your script is actually being executed by adding a few nlapiLogExecution statements.
Instead of doing it in beforeSubmit, change the field in the afterSubmit function using nlapiSubmitField (see the sketch below).
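A minimal sketch of what that afterSubmit variant could look like (the function and variable names are illustrative, not taken from the original script):
function afterSubmit_discountVAT(type){
    if (nlapiGetContext().getExecutionContext() != 'scheduled')
        return;
    // Read the saved value, then write the corrected rate back with a
    // single field-level submit instead of re-submitting the whole record.
    var discountRate = parseFloat(nlapiGetFieldValue('discountrate'));
    nlapiSubmitField(nlapiGetRecordType(), nlapiGetRecordId(), 'discountrate', discountRate / 1.2);
}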
"Can you use 'nlapiSetFieldValue' when a user event script is triggered from a scheduled script?"
Yes, absolutely. The context check is also correct: it's 'scheduled'.
Please make sure that you are submitting the record with nlapiSubmitRecord(recordObj) at the end.
If you are not comfortable with nlapiSubmitRecord(), you can of course use nlapiSubmitField() instead.
If you still get stuck, please paste the complete code so that we can assist you.
Cheers!!!
I have a web app that performs periodic scan operations and, on a specific page, shows the status of those operations (and any previously completed ones). I send an Ajax request using jQuery, and it returns the same page I'm currently on, modified by a time variable (last updated) so that it includes just the running scans and any recently completed ones.
Apparently, after leaving this open overnight (which isn't a normal use case), IE8 reported 'out of memory at line 112' (there is nothing notable at line 112 in anything). I'm trying to figure out what I'm doing wrong and where it could be leaking.
My question is: since I'm reloading the same page, but only taking a piece of it, are the 'ready' handlers getting rerun or something? For the most part, the active operations table is going to be empty, so it's not like I'm continually increasing the size of the table or something obvious.
function updateActiveScanList()
{
    $.ajax({
        method: "POST",
        url: "ScanList.action",
        data: { updatedTime: $('#updatedTime').val() },
        success: function(data) {
            // Update the active scan list.
            $('#activescans').html( $("#activescans", data) );
            // The recent scans table update requires more massaging, omitted for brevity;
            // since there's nothing else done there, this happens even if nothing else is
            // ever inserted.
        }
    });
}
$(document).ready(
    function(){
        setInterval( updateActiveScanList, 30000 );
    }
);
You can use a tool like sIEve to detect the things that eat your memory.
I guess the number of DOM nodes in use (they don't need to be part of the document tree) will increase with every manipulation.
It would be best to avoid jQuery for this part of the DOM manipulation; the methods jQuery uses are known to be prone to this issue, partly because they use "dirty" things like innerHTML internally.
Can you give an example of what you'd like to have inside #activescans?
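In the meantime, a rough sketch of doing the swap without jQuery's html() (replaceActiveScans is a made-up helper you would call from the existing success callback, and it assumes no jQuery data or handlers are bound inside #activescans):
function replaceActiveScans(data) {
    var container = document.getElementById('activescans');
    // same lookup as the original handler
    var fresh = $('#activescans', data).get(0);

    // Remove the old children explicitly rather than letting html() replace them.
    while (container.firstChild) {
        container.removeChild(container.firstChild);
    }
    // Move the freshly parsed children across.
    while (fresh && fresh.firstChild) {
        container.appendChild(fresh.firstChild);
    }
}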