We have an Angular application which was written for us and we're trying to figure out a specific problem (see below for my voluminous essay). Basically, it's a system where some digital input is supposed to control both the backlight and the page that's displayed. If the digital input is on, the backlight should also be on, and the app should work normally, allowing user interaction.
However, without that digital input signal, the application should give a blank page and turn off the backlight so that the user sees nothing and cannot interact with a real page in some blind fashion.
The backlight is controlled by a back end process, which also delivers information to the front end (both of these run on the same device), with the front end responding via Angular routing. But the decision as to what's displayed on the web page is entirely under the control of the front end.
We're seeing a situation where, if the digital input is toggled off and on quickly, we sometimes get the backlight on but with a blank page, and I'm wondering whether there's some sort of race condition in the delivery of events through the router.
In terms of code, we have an effect which (indirectly) monitors the digital input signal from the back end, and it looks something like:
@Effect({ dispatch: false })
receiveDigitalInputEvent$ = this.actions.pipe(
  ofType(AppActionTypes.RECEIVE_DIGITAL_INPUT),
  map((action: AppActions.ReceiveDigitalInput) => action.payload),
  withLatestFrom(this.store.pipe(select(getRouterState))),
  tap(([data, routerState]) => {
    const currUrl = routerState ? routerState.state.url : '';
    const { isDigitalInputSet, someOtherStuff } = data;
    this.forceNavigateIfNeeded(isDigitalInputSet, currUrl);
  })
);

forceNavigateIfNeeded(isDigitalInputSet: boolean, currUrl: string) {
  if (isDigitalInputSet && currUrl.endsWith('/blank')) {
    this.router.navigateByUrl('/welcome');
  } else if (!isDigitalInputSet && !currUrl.endsWith('/blank')) {
    this.router.navigateByUrl('/blank');
  }
}
What this basically does is, on digital input event arrival:
if the input is on and the screen is currently blank, go to the welcome screen; or
if the input is off and you're not currently on the blank screen, go to the blank screen.
Now, this works normally; it's just the quick transitions that seem to cause a problem.
I don't have that much knowledge of the Angular internals, but I assume the incoming events are being queued somehow for delivery via the routing mechanism (I believe the back end pushes things through to the front end via web sockets).
I've looked into the Angular source and noticed that navigateByUrl uses promises to do its work, so the code may return to fetch the next message queue item before the page is actually changed. I just wanted someone with more Angular nous to check my reasoning or let me know if what I'm suggesting is rubbish. As to what I'm suggesting:
The digital input goes off and a message (OFF) is queued from the back end to the front end. The back end also turns off the backlight.
The digital input goes on again quickly and a message (ON) is queued from the back end to the front end. The back end also reactivates the backlight.
The front end receives the OFF message and processes it. Because we've gotten an OFF message while not on the blank page, we call navigateByUrl('/blank') which starts up a promise.
Before that promise can be fulfilled (and the routerState.state.url changed from non-blank to blank), we start processing the ON message. At this stage, we have an ON message with a (stale) non-blank page, so forceNavigateIfNeeded() does nothing.
The in-progress navigateByUrl('/blank') promise completes, setting the page to blank.
Now we have the situation originally described: a blank page with the backlight on.
Is this even possible with Angular/TypeScript? It seems to rely on the fact that incoming messages are queued and acted upon (in a single-threaded manner) as quickly as possible, while front-end navigation (and, more importantly, the updated record of your current page) may take some time.
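To test whether that interleaving is even possible, here is a minimal plain-JS simulation of the suspected sequence (the names mirror my code but are simplified stand-ins, not the real Angular router API): navigation resolves in a later microtask, so the ON message is processed against a stale URL.

```javascript
// Simplified stand-ins for the router and the effect (not real Angular APIs).
let currentUrl = '/welcome';

function navigateByUrl(url) {
  // Like Angular's Router.navigateByUrl, this completes asynchronously.
  return Promise.resolve().then(() => { currentUrl = url; });
}

function forceNavigateIfNeeded(isDigitalInputSet, urlSeen) {
  if (isDigitalInputSet && urlSeen.endsWith('/blank')) {
    return navigateByUrl('/welcome');
  } else if (!isDigitalInputSet && !urlSeen.endsWith('/blank')) {
    return navigateByUrl('/blank');
  }
  return Promise.resolve(); // nothing to do
}

async function demo() {
  currentUrl = '/welcome';
  // OFF arrives: the URL snapshot is '/welcome', so we start navigating to /blank.
  const offNavigation = forceNavigateIfNeeded(false, currentUrl);
  // ON arrives before that navigation settles: the snapshot is still the stale
  // '/welcome', so neither branch fires and no navigation back to /welcome happens.
  const onNavigation = forceNavigateIfNeeded(true, currentUrl);
  await onNavigation;
  await offNavigation; // the OFF navigation now completes
  return currentUrl;   // '/blank', with the backlight on: the bad state
}

demo().then(url => console.log('final url:', url));
```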
Can someone advise as to whether this is possible and, if so, what would be a good way to fix it?
I assume it's a bad idea to wait around in forceNavigateIfNeeded() for the promise to complete, but I'm not sure how else to do it.
Although your problem is quite complex, I can see a suspicious piece of code which might be worth inspecting.
...
withLatestFrom(this.store.pipe(select(getRouterState))),
...
This might be the cause of your issue, because it takes whatever is there at that moment, i.e. the latest value. It doesn't request an update, nor does it wait for one. We had a similar issue where milliseconds were also in play, and I came up with a workaround where I replaced withLatestFrom with the switchMap operator. I'm not sure if it works in your case, but you can give it a try.
map((action: AppActions.ReceiveDigitalInput) => action.payload),
switchMap(data => // data === action.payload
  this.store.select(getRouterState).pipe(
    map(routerState => ({ data, routerState }))
  )
),
tap(({ data, routerState }) => { ... })
EDIT:
In our case we used a custom selector in the switchMap, which compares the current and previous state and returns the comparison result. Honestly, I'm not 100% sure whether that would also help you, but it was a necessary piece of the puzzle in our case.
export const getFreshRouterState = pipe(
  select(getRouterState),
  startWith(null as RouterStateInterface),
  pairwise(),
  map(([prev, curr]) => /* do some magic here and return result */),
);
Sorry for initial confusion.
Related
I am using React with the dhtmlx-gantt library to create a Gantt chart. I ran into an issue when using the filter function together with the useEffect/useLayoutEffect lifecycle.
The gist of it is that the browser is not painting/rendering the correct UI on the screen on certain condition.
The initial load looks like this:
[Screenshot: 6 tasks]
After filtering, this is how it should look:
[Screenshot: 5 tasks left after filtering away duration > 4]
But this is how it actually is:
[Screenshot: 5 tasks left, but an empty row is shown rather than the grid "refreshing" (not sure if this is the right term to use)]
I have created a GitHub repo with different scenarios describing the problem and how to reproduce the issues. More information on how to run the samples can be found in the README.md. Please let me know if more information needs to be provided.
Sample 1: Using conditional rendering causes issues painting UI changes
Sample 2: Turning the smart_rendering config on and off causes issues painting UI changes
Sample 3: Calling the function in the parent component vs. in the child component, with exactly the same code, causes issues painting the UI
My desired outcome is to be able to render the UI correctly, whether the filtering code runs in the parent or the child component.
I should also mention that a workaround was to use if (document.querySelector(".gantt_grid")) gantt.render(); rather than gantt.refreshData() in the onBeforeTaskDisplay event, which then correctly paints the UI changes. But I'd still like to understand why this happens. Is there anything I did wrong in terms of using the React lifecycle and so on?
Thank you.
Your code looks fine and should work correctly.
The issue is on dhtmlxGantt end, it has been confirmed and is now fixed in the dev branch.
The bug itself was caused by the new smart rendering mechanism introduced in v6.2.0.
It caches previously calculated positions of tasks in order to minimize calculations. In certain circumstances, when a gantt instance has been initialized multiple times, it didn't invalidate that cache when necessary. Because of that, tasks were displayed at the same positions they had before filtering was applied (hence the blank row where the first task had been).
In short, the issue will be fixed in the next bugfix update - v6.2.6.
If everything goes as planned, it will be released tomorrow (2019-09-19)
Try gantt.render() after gantt.refreshData() in your code:
useEffect(() => {
const onBeforeTaskDisplay = gantt.attachEvent("onBeforeTaskDisplay", function (id, task) {
console.log("filters", task.text, filter)
if (filter && task.duration > 4) {
return false;
}
return true;
});
gantt.refreshData();
gantt.render();
// Detach the event on cleanup (this was missing from the original code)
return () => {
gantt.detachEvent(onBeforeTaskDisplay);
}
}, [filter])
Edit: I've redacted actual numbers and replaced them with pseudoswears because I've been told sharing performance data is against Netsuite's TOS.
I'm integrating our accounting system with NetSuite using restlets, and overall it has gone pretty well with the glaring exception of performance. I have figured out that nlapiLoadRecord is Satan's own child from a performance perspective, so I avoid it whenever possible favoring the search api and now my read restlets are pretty snappy. However, whenever I ask a restlet to write anything it is as slow as a tortoise stuck in cold tar. I'm generally profiling the nlapiSubmitRecord itself at between SLOW and REALLY DAMN SLOW seconds. This seems insane to me. No one would use NetSuite if the performance were always this poor for writing. I'll include a couple of code examples below. Any tips on speeding up NetSuite restlet write performance would be appreciated.
In this first one receivedInvoice is the incoming data, and findCurrencyCode and findCustomerByCustomerNumber are well performing functions that do those things. I just clocked this at a nearly unbelievable HOLY MONKEYS THAT IS SLOW seconds for a simple invoice with one line item, nearly all of the time passing while I waited for nlapiSubmitRecord to finish.
var createdInvoice = nlapiCreateRecord('invoice');
createdInvoice.setFieldValue('customform', Number(receivedInvoice.transactionType));
createdInvoice.setFieldValue('memo', receivedInvoice.message);
createdInvoice.setFieldValue('duedate', receivedInvoice.dateDue);
createdInvoice.setFieldValue('currency', findCurrencyCode(receivedInvoice.currencyUnit));
createdInvoice.setFieldValue('location', Number(receivedInvoice.location));
createdInvoice.setFieldValue('postingperiod', findPostingPeriod(receivedInvoice.datePosted));
var customer = findCustomerByCustomerNumber(receivedInvoice.customerNumber);
createdInvoice.setFieldValue('entity', customer.customerId );
createdInvoice.setFieldValue('custbody_end_user', customer.customerId );
createdInvoice.setFieldValue('department', customer.departmentId);
var itemCount = receivedInvoice.items.length;
for(var i = 0; i < itemCount; i++)
{
createdInvoice.selectNewLineItem('item');
createdInvoice.setCurrentLineItemValue('item', 'item',receivedInvoice.items[i].item);
createdInvoice.setCurrentLineItemValue('item', 'quantity', receivedInvoice.items[i].quantity);
createdInvoice.setCurrentLineItemValue('item', 'rate',receivedInvoice.items[i].price);
createdInvoice.setCurrentLineItemValue('item', 'custcol_list_rate',receivedInvoice.items[i].price);
createdInvoice.setCurrentLineItemValue('item', 'amount',receivedInvoice.items[i].totalAmount);
createdInvoice.setCurrentLineItemValue('item', 'description',receivedInvoice.items[i].description);
createdInvoice.commitLineItem('item');
}
var recordNumber = nlapiSubmitRecord(createdInvoice,false,true);
In this one I think I'm committing a performance heresy by opening the record in dynamic mode, but I'm not sure how else to get the possible line items. Simply opening a new record in dynamic mode clocks in at around SLOW seconds. Again, the submit is where most time is being eaten (often around OH DEAR SWEET MOTHER OF HORRIBLE seconds), although this one eats a decent amount of time as I mess with the line items, again presumably because I have opened the record in dynamic mode.
var customerPayment = nlapiCreateRecord('customerpayment',{recordmode: 'dynamic'});
customerPayment.setFieldValue('customer', parseInt(customerId));
customerPayment.setFieldValue('payment', paymentAmount);
customerPayment.setFieldValue('paymentmethod', paymentMethod);
customerPayment.setFieldValue('checknum', transactionId);
customerPayment.setFieldValue('currency', currency);
customerPayment.setFieldValue('account', account);
var applyCount = customerPayment.getLineItemCount('apply');
if(applyCount>0)
{
for(var i=1;i<=applyCount;i++)
{
var thisInvoice = customerPayment.getLineItemValue('apply','refnum',i);
if(thisInvoice == invoiceToPay)
{
customerPayment.selectLineItem('apply', i);
customerPayment.setCurrentLineItemValue('apply','apply','T');
customerPayment.setCurrentLineItemValue('apply', 'amount',paymentAmount);
customerPayment.commitLineItem('apply');
}
}
}
nlapiSubmitRecord(customerPayment,false,true);
A few thoughts:
(just to get it off my chest) Integrating your accounting system with NetSuite just sounds odd. NetSuite is an accounting system, and generally it is the accounting system of record for the orgs that use it. If you are not using NetSuite for accounting, you might want to consider what utility it has for the price and get off it.
When I integrate an external system with Netsuite I generally try to make it async. I do this by getting the raw info into a custom record and then kick off a scheduled script to process the queued update. This lets my api return quickly. When I process the queue I store errors in the queue record so that if anything comes up I can fix the data or code and re-submit it.
One apparently major source of slow transaction submits (aside from slow UE scripts) is the state of your books. I had a startup customer who did really well, but they never closed their books, and they were using (IIRC) average costing. Every time they saved a posting transaction, NetSuite recalculated the whole open period (which, at the point where things were grinding to a halt, was about two years of records for a very busy site). When they started closing periods, transaction save times went way down. When they converted to standard costing, transaction save times went down again (I imagine LIFO would be faster than average and slower than standard, but YMMV).
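To sketch the async pattern from point 2: the restlet does nothing but stash the payload in a custom record and kick off a scheduled script that drains the queue. This is SuiteScript 1.0; the custom record, field, script, and deployment IDs below are made-up placeholders, not anything from your account.

```javascript
// Restlet POST handler sketch: queue the payload instead of writing the
// invoice inline. All custom IDs here are hypothetical placeholders.
function queueInvoice(receivedInvoice) {
  var queueRec = nlapiCreateRecord('customrecord_invoice_queue');
  queueRec.setFieldValue('custrecord_queue_payload', JSON.stringify(receivedInvoice));
  queueRec.setFieldValue('custrecord_queue_status', 'PENDING');
  var queueId = nlapiSubmitRecord(queueRec, false, true);
  // Wake the scheduled script that processes queued records and stores any
  // errors back on the queue record for later inspection/resubmission.
  nlapiScheduleScript('customscript_process_invoice_queue', 'customdeploy_process_invoice_queue');
  return queueId; // the restlet returns quickly; the slow submit happens async
}
```

Submitting the small custom record still costs something, but far less than a posting transaction, so the caller gets a fast acknowledgement.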
Also notes on your code.
The normal way to create an invoice is
nlapiTransformRecord('customer', customer.customerId, 'invoice'); or
nlapiTransformRecord('customer', customer.customerId, 'invoice', {recordmode:'dynamic'});
I've never tested whether this has an effect on submit times, but it might help, since NS will start the save from a slightly better place (grasping at straws, but with NS, sometimes non-obvious changes have speed benefits).
Also, I'm not sure how changing the customform in dynamic mode works. When you know the form, you can add it to the init defaults:
{recordmode:'dynamic', customform:receivedInvoice.transactionType}
One of the reasons why your submit is slow might be that there are a lot of User Event scripts attached to the record. Since it is the restlet that saves the record, it will trigger the User Event scripts.
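As a quick way to confirm or mitigate that, the User Event scripts themselves can check the execution context and skip non-essential work when the save comes from a restlet. This is a hedged sketch (SuiteScript 1.0); whether it is safe to skip depends on what your UE scripts actually do.

```javascript
// Returns true when the current execution was triggered by a restlet.
function isRestletContext() {
  return nlapiGetContext().getExecutionContext() === 'restlet';
}

// Example User Event entry point: bail out early for integration saves.
function beforeSubmit(type) {
  if (isRestletContext()) {
    return; // skip expensive processing when the restlet saves the record
  }
  // ... normal User Event logic ...
}
```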
I finally got RANGE_ADD mutations to work. Now I'm a bit confused and worried about their performance.
One outputField of a RANGE_ADD mutation is the edge to the newly created node. Each edge has a cursor. cursorForObjectInConnection() (docs) is a helper function that creates that cursor. It takes an array and a member object. See:
newChatroomMessagesEdge: {
type: chatroomMessagesEdge,
resolve: async ({ chatroom, message }) => {
clearCacheChatroomMessages();
const messages = await getChatroomMessages(chatroom.id);
let messageToPass;
for (const m of messages) {
if (m.id === message.id) {
messageToPass = m;
break; // stop once the new message is found
}
}
return {
cursor: cursorForObjectInConnection(messages, messageToPass),
node: messageToPass,
};
},
},
plus a similar edge for user messages.
Now this is what confuses me. Say you want to make a chatroom app. You have a userType, a chatroomType, and a messageType. Both the userType and the chatroomType have a messages field. It queries for all of the user's or chatroom's messages respectively and is defined as a Relay connection that points to messageType. Now, each time a user sends a new message, two RANGE_ADD mutations are committed: one that creates an edge for the chatroom's messages and one that creates an edge for the user's messages. Each time, because of cursorForObjectInConnection(), a query for all of the user's messages and one for all of the chatroom's messages is sent to the database. See:
const messages = await getChatroomMessages(chatroom.id);
As one can imagine, lots of message-sent mutations occur in a chatroom, and the number of messages grows rapidly.
So here is my question: Is it really necessary to query for all of the chatroom messages each time? Performance-wise, this seems like an awful thing to do.
Sure, although I have not looked into it yet, there are optimistic updates I can use to ease the pain client-side (a sent message gets displayed immediately). But still, the endless database queries continue.
Also, there is DataLoader. But as far as I understand DataLoader, it is used as a per-request batching and caching mechanism, especially in conjunction with GraphQL. Since each mutation is a new request, DataLoader does not seem to help on that front either.
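One idea I had for avoiding the full refetch, assuming the new message is always appended at the end of the connection: graphql-relay's array cursors are just base64-encoded offsets, so if the total message count can be fetched cheaply (e.g. a COUNT query), the cursor can be built directly. The helper below re-implements offsetToCursor so the sketch is self-contained; getChatroomMessageCount is a hypothetical helper.

```javascript
// Re-implementation of graphql-relay's offsetToCursor:
// base64('arrayconnection:' + offset).
function offsetToCursor(offset) {
  return Buffer.from('arrayconnection:' + offset, 'utf8').toString('base64');
}

// If the new message lands at the end of the connection, its offset is the
// number of messages that existed before it was appended.
function cursorForAppendedMessage(totalCountBeforeAppend) {
  return offsetToCursor(totalCountBeforeAppend);
}

// Usage sketch (getChatroomMessageCount is a hypothetical cheap COUNT query,
// run after the insert, hence the - 1):
// const count = await getChatroomMessageCount(chatroom.id);
// return { cursor: cursorForAppendedMessage(count - 1), node: message };
```

Would something like this be sound, or does it break down when messages can be deleted (shifting offsets)?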
If anyone can shed some light on my thoughts and worries, I'd be more than happy :)
I'm experiencing a very strange issue with Meteor whereby a single page is taking a very long time to load. I've gone down numerous routes to try and troubleshoot before identifying a cause which isn't really making sense to me.
Fundamentally I have a two-step process that collects some initial data and then inserts a couple of hundred records into the database. On the second page I then need to be able to assign users to each of the new records. I can also go back into the same page and edit the assigned users so in this second scenario the inserts have already happened - same lag.
The insert code is relatively straightforward.
Client Side
setup assessment meta-data & create new assessment record + items:
// set updatingAssessment variable to stop subs re-loading as records are inserted
Session.set('updatingAssessment', true);
Meteor.call('insertAssessment', assessment, function(err, result) {
if (err) { showError(err.reason); return; }
var assessmentId = result.toHexString();
// some code excluded to get 'statements' array
Meteor.call('insertAssessmentItems',
assessmentId,
statements,
function(err, result) {
Session.set('updatingAssessment', false);
});
});
// hide first modal and show 2nd
Session.set("assessmentStage1", false);
Session.set("assessmentStage2", true);
Server methods
Volume wise, we're inserting 1 assessment record and then ~200 assessment items
insertAssessment: function(assessment) {
this.unblock();
return Assessments.insert({
<<fields>>
}, function(err, result) { return result; });
},
insertAssessmentItems: function(assessmentId, statements) {
this.unblock();
for (var i = 0; i < statements.length; i++) {
AssessmentItems.insert({
<<fields>>
}, function(err, result) { });
}
}
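One thing I'm considering for the insert side (hedged: I haven't benchmarked it, and it bypasses Meteor's per-document hooks): build all ~200 documents first and write them with a single insertMany through the raw Mongo collection, instead of ~200 separate AssessmentItems.insert calls. The field names below are placeholders.

```javascript
// Pure helper: turn the statements array into the documents to insert.
// Field names are placeholders for the real assessment item fields.
function buildAssessmentItemDocs(assessmentId, statements) {
  return statements.map(function (statement) {
    return {
      assessmentId: assessmentId,
      statement: statement
    };
  });
}

// Inside the Meteor method (requires the Meteor server environment):
// insertAssessmentItems: function (assessmentId, statements) {
//   this.unblock();
//   var docs = buildAssessmentItemDocs(assessmentId, statements);
//   // rawCollection() exposes the underlying node-mongodb-native collection
//   return AssessmentItems.rawCollection().insertMany(docs);
// }
```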
Initially I tried numerous SO suggestions: not refreshing subscriptions while the inserts were happening, using this.unblock() to free up DDP traffic, making queries on the slow page non-reactive, only returning the bare minimum of fields, calling the inserts asynchronously, etc. None of it made any difference with either the initial insert or the subsequent editing of the same records (i.e. going out of the page and coming back, so the inserts have already happened).
After commenting out all database code and literally returning a string from the meteor method (with the same result / lag) I looked at the actual template generation code for the html page itself.
After commenting out various sections, I identified that a helper that builds up a dropdown list of users (repeated for each of those ~200 records) was causing the massive spike in load time. When I commented out the dropdown creation, the page load happens in under 5 s, which is within a more acceptable range, although still slower than I would like.
Two oddities with this, which I was hoping somebody could shed some light on:
1. Has anything changed recently in Meteor that would cause a slowdown? I haven't significantly changed my code for a couple of months, and it was working before, albeit still taking around 10 seconds to fully render the page. The page also used to be responsive once loaded, whereas now the CPU is stuck at 100% and selecting users is very laggy.
2. Why is the template loading blocking my server method returns? There is no doubt I need to rewrite my template to be more efficient, but I spent a lot of time trying to figure out why my server methods were not returning values quickly. I even analysed the DDP traffic, and there was no activity while it was waiting for the template to load: no reloaded subs, no method returns, nothing.
Thanks in advance,
David
I was asked to develop a tab panel with 6 tabs, each having 30 to 40 elements. Each tab acts as a form accumulating the details of a person, and the last tab is a summary page which displays all the values entered in the first five tabs. I was asked to provide the summary as a tab because the user can navigate to it at any instance and glance at the details entered so far. I am following the ExtJS MVC pattern. The payload comes from / goes to a Spring MVC application (JSON).
I am using the tab change event in the controller, and if the new tab is the summary, I render the page with show/hide functionality.
Method 1: In the controller I used Ext.getCmp('id of each element inside the tabs') and showed/hid the components in the summary tab based on the values entered by the user. This killed my app in IE8, popping up a message saying that "the script is slow and blah blah..."; I had to click NO 5 or 6 times for the summary tab to render and display the page.
Method 2: In the controller I used refs and selectors to access all the items in the tabs. I used an itemId for each and every field in the summary tab, like this.getXyz().show(). I thought it would be fast, and it was in Google Chrome, but the app is still slow in IE8 compared to Chrome/Firefox.
Any suggestions on this, and on decreasing the page render time? The summary page has more than 1000 fields. Please feel free to share your thoughts or throw some ideas at this.
thank you!!
I've got a few suggestions you can try. First, to answer your title, I think the fastest simple way to lookup components in javascript is to build a hash map. Something like this:
var map = {};
Ext.each(targetComponents, function(item) {
map[item.itemId] = item;
});
// Fastest way to retrieve a component
var myField = map[componentId];
For the rendering time, be sure that the layout/DOM is not updated each time you call hide or show on a child component. Use suspendLayouts to do that:
summaryTabCt.suspendLayouts();
// intensive hide-and-seek business
// just one layout calculation and subsequent DOM manipulation
summaryTabCt.resumeLayouts(true);
Finally, if despite your best efforts you can't cut the processing time, do damage control. That is, avoid freezing the UI the whole time, and prevent the browser from telling the user your app is dead.
You can use setTimeout to limit the time your script holds the execution thread at once. The interval gives the browser some time to process UI events and prevents it from thinking your script is lost in an infinite loop.
Here's an example:
var itemsToProcess = [...],
// The smaller the chunks, the more the UI will be responsive,
// but the whole processing will take longer...
chunkSize = 50,
i = 0,
slice;
function next() {
slice = itemsToProcess.slice(i, i+chunkSize);
i += chunkSize;
if (slice.length) {
Ext.each(slice, function(item) {
// costly business with item
});
// defer processing to give time
setTimeout(next, 50);
} else {
// post-processing
}
}
// pre-processing (eg. disabling the form submit button)
next(); // start the loop
up().down() did the magic. I have replaced each and every usage of Ext.getCmp('id'). Booooha... it's fast and there are NO issues.
this.up('tabpanel').down('tabName #itemIdOfField') followed by the action.
Actions: hide(), show(), setValues().
Check that deferredRender is true. This should only render the active tab.
You can also try a different hideMode. In particular, hideMode: 'offsets' sounds promising.
Quote from the sencha API:
hideMode: 'offsets' is often the solution to layout issues in IE specifically when hiding/showing things
As I wrote in the comment, go through this performance guide: http://docs.sencha.com/extjs/4.2.2/#!/guide/performance
In your case, this will be very interesting for you:
{
Ext.suspendLayouts();
// batch of updates
// show() / hide() elements
Ext.resumeLayouts(true);
}