How do I decrease the governance cost of the following JavaScript code?

In NetSuite there is a limit on how many governance units certain APIs can consume (as well as limits per script type). For what I am doing, I believe the following are the applicable costs:
nlapiLoadSearch: 5
nlobjSearchResultSet.getSearch(): 10
It takes about an hour, but every time, my script (which follows) errors out, probably due to this limit. How do I change it so that it has a lower governance cost?
function walkCat2(catId, pad) {
    var loadCategory = nlapiLoadRecord("sitecategory", "14958149");
    var dupRecords = nlapiLoadSearch('Item', '1951'); // load saved search
    var resultSet = dupRecords.runSearch();           // run saved search
    resultSet.forEachResult(function(searchResult) {
        var InterID = searchResult.getValue('InternalID'); // process search
        var LINEINX = loadCategory.getLineItemCount('presentationitem');
        loadCategory.insertLineItem("presentationitem", LINEINX);
        loadCategory.setLineItemValue("presentationitem", "presentationitem", LINEINX, InterID + 'INVTITEM'); // sets the line value -jf
        nlapiSubmitRecord(loadCategory, true);
        return true; // return true to keep iterating
    });
}

nlapiLoadRecord uses 5 units, nlapiLoadSearch uses 5, then actually it is resultSet.forEachResult that uses another 10. On top of that, you are running nlapiSubmitRecord for each search result, which will use 10 more units for each result.
It looks to me like all you are doing with your search results is adding line items to the Category record. You do not need to submit the record until you are completely done adding all the lines. Right now, you are submitting the record after every line you add.
Move the nlapiSubmitRecord call after your forEachResult call, as shown below. This will reduce your governance usage (and, especially, your execution time) from 10 units per search result to just 10 units total.
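A minimal sketch of that change, keeping the record id, search id, and field names from the question, would look roughly like this:

function walkCat2(catId, pad) {
    var loadCategory = nlapiLoadRecord("sitecategory", "14958149"); // 5 units
    var dupRecords = nlapiLoadSearch('Item', '1951');               // 5 units
    var resultSet = dupRecords.runSearch();
    resultSet.forEachResult(function(searchResult) {                // 10 units total
        var interId = searchResult.getValue('InternalID');
        var lineInx = loadCategory.getLineItemCount('presentationitem');
        loadCategory.insertLineItem("presentationitem", lineInx);
        loadCategory.setLineItemValue("presentationitem", "presentationitem", lineInx, interId + 'INVTITEM');
        return true; // keep iterating; nothing is submitted here
    });
    nlapiSubmitRecord(loadCategory, true); // 10 units, once, after all lines are added
}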

Different APIs have different costs associated with them [see SuiteAnswers ID 10365]. Also, different script types (user event, scheduled, etc.) have different maximum limits on total usage [see SuiteAnswers ID 10481].
Your script must consume less than that limit, or NetSuite will throw an error.
You can use the following line of code to measure your remaining usage at different points in your code.
nlapiLogExecution('AUDIT', 'Script Usage', 'RemainingUsage:'+nlapiGetContext().getRemainingUsage());
One strategy to avoid the maximum-usage-exceeded exception is to change the script type to a scheduled script, since that type has the highest limit. Given that your loop works off a search, the result set could be huge, and that may cause even a scheduled script to exceed its limit. In that case, you would want to introduce checkpoints in your code and make it reentrant: whenever nlapiGetContext().getRemainingUsage() drops below your threshold, offload the remaining work to a subsequent scheduled script execution, as sketched below.
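As a rough illustration only (SuiteScript 1.0 scheduled script; the 200-unit threshold, the 1000-result slices, and the use of nlapiSetRecoveryPoint/nlapiYieldScript are assumptions, not code from the question), the checkpoint idea could look something like this:

function scheduled_walkCat2() {
    var resultSet = nlapiLoadSearch('Item', '1951').runSearch(); // 5 units
    var start = 0, slice;
    do {
        slice = resultSet.getResults(start, start + 1000) || []; // 10 units per slice
        for (var i = 0; i < slice.length; i++) {
            // ... process slice[i], e.g. add a line to the category record ...
            if (nlapiGetContext().getRemainingUsage() < 200) { // arbitrary threshold
                nlapiSetRecoveryPoint(); // itself consumes governance units
                nlapiYieldScript();      // execution resumes here with a fresh usage allowance
                // note: verify which objects survive a yield; if the result set does not,
                // re-run the search after resuming and continue from the saved start index
            }
        }
        start += slice.length;
    } while (slice.length > 0);
}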

Trying to write a basic script for a Google Sheets that will count total missed minutes by late students

I am (VERY) new to Apps Script and JS generally. I am trying to write a script that will automatically tally the difference between student entry time and start time of a course to deliver total minutes missed.
I have been able to get a function working that can do this for a single cell value, but am having trouble iterating it across a range. Doubtless this is due to a fundamental misunderstanding I have about the for loop I am using, but I am not sure where to look for more detailed information.
Any and all advice is appreciated. Please keep in mind my extreme "beginner status".
I have tried declaring a blank variable and adding multiple results of previously written single-cell functions to that total, but it is returning 0 regardless of given information.
I am including all three of the functions below, the idea is that each will do one part of the overall task.
function LATENESS(entry, start) {
    return (entry - start) / 60000
}

function MISSEDMINUTES(studenttime, starttime) {
    const time = studenttime;
    const begin = starttime;
    if (time == "Present") {
        return 0
    } else if (time == "Absent") {
        return 90
    } else {
        return LATENESS(time, begin)
    }
}

function TOTALMISSED(range, begintime) {
    var total = 0
    for (let i = 0; i < range.length; i++) {
        total = total + MISSEDMINUTES(i, begintime)
    }
}
If you slightly tweak your layout to have the 'missing minutes' column immediately adjacent to the column of names, you can have a single formula which will calculate the missing minutes for any number of students over any number of days:
Name          | * | 2/6     | 2/7     | 2/8     | 2/9
John Smith    | - | Present | Present | Absent  | 10:06
Lucy Jenkins  | - | Absent  | Absent  | Absent  | Absent
Darren Polter | - | Present | Present | Present | 10:01
With 'Name' present in A1, add the following to cell B1 (where I've marked an asterisk):
={"mins missed";
byrow(map(
C2:index(C2:ZZZ,counta(A2:A),counta(C1:1)),
lambda(x,switch(x,"Present",0,"Absent",90,,0,1440*(x-timevalue("10:00"))))),
lambda(row,sum(row)))}
We are MAPping a minute value onto each entry in the table (where 'Present'=0, 'Absent'=90 & a time entry = the number of minutes difference between then and 10am), then summing BYROW.
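Purely as an illustration of that per-cell mapping in plain JavaScript (the 10:00 start time is taken from the example data; the function name is made up for this sketch), each cell is converted along these lines, and the per-row sum is just those values added up:

function minutesMissedForCell(cell) {
    if (cell === "Present") return 0;  // on time
    if (cell === "Absent") return 90;  // full 90-minute class missed
    // otherwise the cell holds an entry time such as "10:06"
    var parts = cell.split(":");
    var entryMinutes = Number(parts[0]) * 60 + Number(parts[1]);
    return entryMinutes - (10 * 60);   // minutes after the 10:00 start
}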
Updated
Based on the example, you could probably have a formula like the below one to conduct your summation:
=Sum(ARRAYFORMULA(if(B2:E2="Absent",90,if(isnumber(B2:E2),(B2:E2-$K$1)*60*24,0))))
Note that K1 holds the start time of 10:00. The same sample sheet has a working example.
Original Answer
I'm pretty sure you could do what you want with regular Sheets formulas. Here's a sample sheet that shows how to get the difference between two times in minutes and seconds, along with accounting for absences.
Here's the formula used that will update with new entries.
=Filter({if(B2:B="Absent",90*60,Round((C2:C-B2:B)*3600*24,0)),if(B2:B="Absent",90,Round((C2:C-B2:B)*3600*24/60,1))},(B2:B<>""))
This example might not solve all your issues, but from what I'm seeing, there's no need to be using an Apps Script. If this doesn't cover it, post some sample data as a Markdown table.

Batch write more than 25 items on DynamoDB using Lambda

Edit x1: Replaced the snippet with the full file
I'm currently in the process of seeding 1.8K rows into DynamoDB. When a user is created, these rows need to be generated and inserted. They don't need to be read immediately (let's say, within 3 - 5 seconds). I'm currently using AWS Lambda and I'm getting hit by a timeout exception (probably because more WCUs are consumed than provisioned; I have 5, with Auto Scaling disabled).
I've tried searching around Google and StackOverflow and this seems to be a gray area (which is kind of strange, considering that DynamoDB is marketed as an incredible solution for handling massive amounts of data per second) in which no clear path exists.
We know that DynamoDB limits batchWrite to 25 items per call to reduce HTTP overhead, meaning that we could call batchWrite an unlimited number of times and increase the WCUs.
I've tried calling batchWrite repeatedly by just firing the calls and not awaiting them (will this even count? I've read that since JS is single-threaded the requests will be handled one by one anyway, except that I wouldn't have to wait for the response if I don't use a promise; I'm currently using Node 10 and Lambda), and nothing seems to happen. If I promisify the calls and await them, I get a Lambda timeout exception (probably because the table ran out of WCUs).
I currently have 5 WCUs and 5 RCUs (are these too low for these random, spiky operations?).
I'm kind of stuck as I don't want to be randomly increasing the WCUs for short periods of time. In addition, I've read that Auto-Scaling doesn't automatically kick in, and Amazon will only resize the Capacity Units 4 times a day.
How to write more than 25 items/rows into Table for DynamoDB?
https://www.keithrozario.com/2017/12/writing-millions-of-rows-into-dynamodb.html
What should I do about it?
Here's the full file that I'm using to insert into DynamoDB:
const aws = require("aws-sdk");

export async function batchWrite(
    data: {
        PutRequest: {
            Item: any;
        };
    }[]
) {
    const client = new aws.DynamoDB.DocumentClient({
        region: "us-east-2"
    });
    // 25 is the limit imposed by DynamoDB's batchWrite:
    // Member must have length less than or equal to 25.
    // This verifies whether the data is shaped correctly and has no duplicates.
    const sortKeyList: string[] = [];
    data.forEach((put, index) => {
        const item = put.PutRequest.Item;
        const has = Object.prototype.hasOwnProperty; // cache the lookup once, in module scope.
        const hasPk = has.call(item, "pk");
        const hasSk = has.call(item, "sk");
        // Checks if it doesn't have a sort key. Unless it's a tenant object, which has
        // the accountType attribute.
        if (!hasPk || !hasSk) {
            throw `hasPk is ${hasPk} and hasSk is ${hasSk} at index ${index}`;
        }
        if (typeof item["pk"] !== "string" || typeof item["sk"] !== "string") {
            throw `Item at index ${index} pk or sk is not a string`;
        }
        if (sortKeyList.indexOf(item.sk) !== -1) {
            throw `The item # index ${index} and sortkey ${item.sk} has duplicate values`;
        }
        if (item.sk.indexOf("undefined") !== -1) {
            throw `There's an undefined in the sortkey ${index} and ${item.sk}`;
        }
        sortKeyList.push(put.PutRequest.Item.sk);
    });
    // DynamoDB only accepts 25 items at a time.
    for (let i = 0; i < data.length; i += 25) {
        const upperLimit = Math.min(i + 25, data.length);
        const newItems = data.slice(i, upperLimit);
        try {
            await client
                .batchWrite({
                    RequestItems: {
                        schon: newItems
                    }
                })
                .promise();
        } catch (e) {
            console.log("Total Batches: " + Math.ceil(data.length / 25));
            console.error("There was an error while processing the request");
            console.log(e.message);
            console.log("Total data to insert", data.length);
            console.log("New items is", newItems);
            console.log("index is ", i);
            console.log("top index is", upperLimit);
            break;
        }
    }
    console.log(
        "If no errors are shown, creation in DynamoDB has been successful"
    );
}
There are two issues that you're facing, and I'll attempt to address both.
A full example of the items being written, and of the actual batchWrite request containing them, has not been provided, so it is unclear whether the actual request is properly formatted. Based on the information provided and the issue being faced, it appears that the request is not correctly formatted.
The documentation for the batchWrite operation in the AWS Javascript SDK can be found here, and a previous answer here shows a solution for correctly building and formatting a batchWrite request.
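For reference, a minimally shaped DocumentClient batchWrite request might look roughly like the following (the table name schon and the pk/sk attributes mirror the question's code; the item values are placeholders):

const params = {
    RequestItems: {
        schon: [
            {
                PutRequest: {
                    Item: { pk: "user#1", sk: "profile#1" }
                }
            }
            // ... up to 25 PutRequest / DeleteRequest entries per call
        ]
    }
};
// await client.batchWrite(params).promise();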
Nonetheless, even if the request is formatted correctly, there is still a second issue: whether sufficient capacity is provisioned to handle the write requests needed to insert 1800 records within the required amount of time, which has an upper limit of 5 seconds.
TL;DR the quick and easy solution to the capacity issue is to switch from Provisioned Capacity to On Demand capacity. As is shown below, the math indicates that unless you have consistent and/or predictable capacity requirements, most of the time On Demand capacity is going to not only remove the management overhead of provisioned capacity, but it's also going to be substantially less expensive.
As per the AWS DynamoDB documentation for provisioned capacity here, a Write Capacity Unit or WCU is billed, and thus defined, as follows:
Each API call to write data to your table is a write request. For items up to 1 KB in size, one WCU can perform one standard write request per second.
The AWS documentation for the batchWrite / batchWriteItem API here indicates that a batchWrite API request supports up to 25 items per request and individual items can be up to 400kb. Further to this, the number of WCU's required to process the batchWrite request depends on the size of the items in the request. The AWS documentation for managing capacity in DynamoDB here, advises the number of WCU's required to process a batchWrite request is calculated as follows:
BatchWriteItem — Writes up to 25 items to one or more tables. DynamoDB processes each item in the batch as an individual PutItem or DeleteItem request (updates are not supported). So DynamoDB first rounds up the size of each item to the next 1 KB boundary, and then calculates the total size. The result is not necessarily the same as the total size of all the items. For example, if BatchWriteItem writes a 500-byte item and a 3.5 KB item, DynamoDB calculates the size as 5 KB (1 KB + 4 KB), not 4 KB (500 bytes + 3.5 KB).
The size of the items in the batchWrite request has not been provided, but for the sake of this answer the assumption is made that they are <1KB each. With 25 items of <1KB each in the request, a minimum provisioned capacity of 25 WCU's is required to process a single batchWrite request per second. With just 25 WCU's provisioned and the 5 second time limit on inserting the items, only one request of 25 items can be made per second, which totals 125 items inserted within the 5 second limit. Based on this, in order to achieve the goal of inserting 1800 items in 5 seconds, 360 WCU's are needed.
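Spelled out, the capacity arithmetic under those assumptions is roughly:

// writes per second needed: 1800 items / 5 seconds      = 360 writes/s
// WCU per write (items < 1 KB each)                      = 1 WCU
// provisioned WCU required                               = 360 WCU
// for comparison, with 25 WCU: 25 items/s * 5 s          = 125 items in the window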
Based on the current pricing for Provisioned Capacity found here, 360 WCU's of provisioned capacity would have a cost of approximately $175/month (not considering free tier credits).
There are two options for how you can handle this issue:
Increase provisioned capacity. To achieve 1800 items in 5 seconds, you're going to need to provision 360 WCU's.
The better option is to simply switch to On Demand capacity. The question mentions that the write requests are "random-spiked operations". If write requests are not predictable and consistent operations on a table, then the outcome is often an over-provisioned table and paying for idle capacity. On Demand capacity solves this and adheres to the serverless philosophy of paying only for what you consume. Currently, on-demand pricing is $1.25 per 1 million WCU's consumed. Based on this, if every new user generates 1800 new items to be inserted, it would take 97,223 new users created per month before provisioning capacity for the table becomes competitive with on-demand capacity. Put another way, until a new user is being registered on average every 26 seconds, the math suggests sticking with on-demand capacity (worth noting that this does not consider RCU's, other items in the table, or other access patterns).
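Switching an existing table can be done from the console, or, as a rough one-off sketch with the AWS SDK (the table name and region are assumed to match the question), via UpdateTable:

const aws = require("aws-sdk");
const dynamodb = new aws.DynamoDB({ region: "us-east-2" });

// Switch from provisioned capacity to on-demand (pay-per-request) billing.
dynamodb
    .updateTable({ TableName: "schon", BillingMode: "PAY_PER_REQUEST" })
    .promise()
    .then(() => console.log("Table switched to on-demand billing"));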

How to generically solve the problem of generating incremental integer IDs in JavaScript

I have been thinking about this for a few days trying to see if there is a generic way to write this function so that you don't ever need to worry about it breaking again. That is, it is as robust as it can be, and it can support using up all of the memory efficiently and effectively (in JavaScript).
So the question is about a basic thing. Often times when you create objects in JavaScript of a certain type, you might give them an ID. In the browser, for example with virtual DOM elements, you might just give them a globally unique ID (GUID) and set it to an incrementing integer.
let GUID = 1

let a = createNode() // { id: 1 }
let b = createNode() // { id: 2 }
let c = createNode() // { id: 3 }

function createNode() {
    return { id: GUID++ }
}
But what happens when you run out of integers? Number.MAX_SAFE_INTEGER == 2⁵³ - 1. That is obviously a very large number: 9,007,199,254,740,991, roughly 9 quadrillion. But if JS can reach, say, 10 million ops per second as a ballpark figure, then that is about 900,719,925 seconds to reach that number, or roughly 10,400 days, or about 30 years. So in this case, if you left your computer running for 30 years, it would eventually run out of incrementing IDs. This would be a hard bug to find!
If you parallelized the generation of the IDs, then you could more realistically (more quickly) run out of the incremented integers. Assuming you don't want to use a GUID scheme.
Given the memory limits of computers, you can only create a certain number of objects. In JS you probably can't create more than a few billion.
But my question is, as a theoretical exercise, how can you solve this problem of generating the incremented integers such that if you got up to Number.MAX_SAFE_INTEGER, you would cycle back from the beginning, yet not use the potentially billions (or just millions) that you already have "live and bound". What sort of scheme would you have to use to make it so you could simply cycle through the integers and always know you have a free one available?
let i = 0
function getNextID() {
    if (i++ > Number.MAX_SAFE_INTEGER) {
        return i = 0
    } else {
        return i
    }
}
Random notes:
The fastest overall was Chrome 11 (under 2 sec per billion iterations, or at most 4 CPU cycles per iteration); the slowest was IE8 (about 55 sec per billion iterations, or over 100 CPU cycles per iteration).
Basically, this question stems from the fact that our typical "practical" solutions will break in the super-edge case of running into Number.MAX_SAFE_INTEGER, which is very hard to test. I would like to know some ways where you could solve for that, without just erroring out in some way.
But what happens when you run out of integers?
You won't. Ever.
But if JS can reach 10 million ops per second [it'll take] about 30 years.
Not much to add. No computer will run for 30 years on the same program. Also in this very contrived example you only generate ids. In a realistic calculation you might spend 1/10000 of the time to generate ids, so the 30 years turn into 300000 years.
how can you solve this problem of generating the incremented integers such that if you got up to Number.MAX_SAFE_INTEGER, you would cycle back from the beginning,
If you "cycle back from the beginning", they won't be "incremental" anymore. One of your requirements cannot be fullfilled.
If you parallelized the generation of the IDs, then you could more realistically (more quickly) run out of the incremented integers.
No. For the ids to be strictly incremental, you have to share a counter between these parallelized agents. And access to shared memory is only possible through synchronization, so that won't be faster at all.
If you still really think that you'll run out of the 2⁵³ - 1 safe integer range, use BigInts. Or Symbols, depending on your use case.
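A minimal sketch of the BigInt route (at the cost of the ids no longer being plain Numbers):

let GUID = 0n;              // BigInt counter, practically unbounded
function createNode() {
    return { id: ++GUID };  // 1n, 2n, 3n, ...
}
// Caveat: BigInt ids don't mix silently with Number arithmetic,
// and JSON.stringify will throw on them unless you convert first.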

Does JavaScript run out of timeout IDs?

Surprisingly I can not find the answer to this question anywhere on the web.
In the documentation it is stated that setTimeout and setInterval share the same pool of IDs, and that an ID will never repeat. If that is the case, then they must eventually run out, because there is a maximum number the computer can handle. What happens then? Can you not use timeouts anymore?
TL;DR;
It depends on the browser's engine.
In Blink and WebKit:
The maximum number of concurrent timers is 2³¹ - 1.
If you try to use more, your browser will likely freeze due to an endless loop.
Official specification
From the W3C docs:
The setTimeout() method must run the following steps:
Let handle be a user-agent-defined integer that is greater than zero that will identify the timeout to be set by this call.
Add an entry to the list of active timeouts for handle.
[...]
Also:
Each object that implements the WindowTimers interface has a list of active timeouts and a list of active intervals. Each entry in these lists is identified by a number, which must be unique within its list for the lifetime of the object that implements the WindowTimers interface.
Note: while the W3C mentions two lists, the WHATWG spec establishes that setTimeout and setInterval share a common list of active timers. That means that you can use clearInterval() to remove a timer created by setTimeout() and vice versa.
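A quick way to see that shared list in action (not that mixing the two is good style):

const id = setTimeout(() => console.log("never runs"), 1000);
clearInterval(id); // cancels the setTimeout, since both APIs draw ids from the same list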
Basically, each user agent is free to implement the handle ID as it pleases, with the only requirement being that it is an integer unique within each object; so you can get as many answers as there are browser implementations.
Let's see, for example, what Blink is doing.
Blink implementation
Previous note: it's not such an easy task to find the actual source code of Blink. It belongs to the Chromium codebase, which is mirrored on GitHub. I will link to GitHub (its current latest tag: 72.0.3598.1) because of its better tools for navigating the code. Three years ago, commits were being pushed to chromium/blink/. Nowadays, active development is on chromium/third_party/WebKit, but there is a discussion going on about a new migration.
In Blink (and in WebKit, which obviously has a very similar codebase), the component responsible for maintaining the aforementioned list of active timers is the DOMTimerCoordinator belonging to each ExecutionContext.
// Maintains a set of DOMTimers for a given page or
// worker. DOMTimerCoordinator assigns IDs to timers; these IDs are
// the ones returned to web authors from setTimeout or setInterval. It
// also tracks recursive creation or iterative scheduling of timers,
// which is used as a signal for throttling repetitive timers.
class DOMTimerCoordinator {
The DOMTimerCoordinator stores the timers in the blink::HeapHashMap (alias TimeoutMap) collection timers_, whose key is (per the spec) of int type:
using TimeoutMap = HeapHashMap<int, Member<DOMTimer>>;
TimeoutMap timers_;
That answers your first question (in the context of Blink): the maximum number of active timers for each context is 2³¹ - 1; much lower than the JavaScript MAX_SAFE_INTEGER (2⁵³ - 1) that you mentioned, but still more than enough for normal use cases.
For your second question, "What happens then, you can't use timeouts anymore?", I have so far just a partial answer.
New timers are created by DOMTimerCoordinator::InstallNewTimeout(). It calls the private member function NextID() to retrieve an available integer key and DOMTimer::Create for the actual creation of the timer object. Then, it inserts the new timer and the corresponding key into timers_.
int timeout_id = NextID();
timers_.insert(timeout_id, DOMTimer::Create(context, action, timeout,
single_shot, timeout_id));
NextID() gets the next id in a circular sequence from 1 to 2³¹ - 1:
int DOMTimerCoordinator::NextID() {
  while (true) {
    ++circular_sequential_id_;
    if (circular_sequential_id_ <= 0)
      circular_sequential_id_ = 1;
    if (!timers_.Contains(circular_sequential_id_))
      return circular_sequential_id_;
  }
}
It increments circular_sequential_id_ by 1, or resets it to 1 if it goes beyond the upper limit (strictly speaking, INT_MAX + 1 is undefined behavior, but most implementations wrap around to INT_MIN).
So, when the DOMTimerCoordinator runs out of IDs, it starts again from 1 and counts up until it finds a free one.
But what happens if they are all in use? What prevents NextID() from entering an endless loop? Nothing, it seems. Likely, the Blink developers coded NextID() under the assumption that there will never be 2³¹ - 1 timers alive concurrently. It makes sense; for every byte per timer returned by DOMTimer::Create(), you would need gigabytes of RAM to store timers_ if it were full, and it can add up to terabytes if you store long callbacks, to say nothing of the time needed to create them.
Anyway, it seems surprising that no guard against an endless loop has been implemented, so I have contacted the Blink developers, but so far I have had no response. I will update my answer if they reply.

DynamoDb: Thousands of item to write with low capacity

I just started writing some Lambda functions; my problem is this:
I have around 7000 items to write.
Those items have two indexes: the primary, id, and a secondary, spotname.
To write all those items into DynamoDB with a batch write I wrote the code below.
Unfortunately I ran into an issue with batchWrite (the 25-item limit), and I worked around it in the following way:
for (var j = 0; j < event.length; j++) {
    if (event[j][0] && event[j][1] && event[j][2] && event[j][3]) {
        requests.push(new Station(event[j][0], event[j][1], event[j][2], event[j][3]));
        if (requests.length == 25 || j == (event.length - 1)) { // when you have 25 ready..
            var params = {
                RequestItems: {
                    'Stations': requests
                }
            };
            requests = [];
            DynamoDB.batchWrite(params, function(err, data) {
                if (err) {
                    console.log("Error while batchWrite into dynamoDb");
                    console.log(err);
                } else {
                    console.log("Pushed all the added elements");
                }
            });
        }
    }
}
Now, I noticed that with a low capacity:
Table Read: 5 Write: 5
spotname-index Read: 5 Write: 5
I only manage to write about 1500 records to the database.
Any advice?
I had this problem; this is how I solved it.
Increase the capacity for a short period of time. I learnt that it is billed by the hour, so if you increase the capacity, try to use it within one hour, then bring it back down.
As of now, you cannot bring capacity down more than 4 times a day, so you get 4 chances per day to decrease it. You can increase the write capacity any number of times.
The second approach is:
You can control the rate of writes to DynamoDB, so that you spread your writes evenly across your capacity.
Make sure your write capacity is always higher than the average rate of incoming records.
Hope it helps.
Using the batch write API for DynamoDB doesn't actually use less throughput. It is really intended to reduce the amount of HTTP request overhead when sending a large number of requests to DynamoDB. However, this means that one or more of the items you attempted to write may fail, and it is your responsibility to detect this and retry those requests. This is likely why some of the records are not ending up in the database. To fix this issue, you should look at the response to the batch write and retry those writes yourself, as sketched below.
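A rough sketch of that retry, using the UnprocessedItems map that batchWrite returns (the DynamoDB client, callback style, and Stations table mirror the question; the back-off timing and attempt cap are arbitrary):

function batchWriteWithRetry(params, attempt) {
    attempt = attempt || 0;
    DynamoDB.batchWrite(params, function(err, data) {
        if (err) {
            console.log("Error while batchWrite into dynamoDb", err);
            return;
        }
        var unprocessed = data.UnprocessedItems || {};
        if (Object.keys(unprocessed).length > 0 && attempt < 5) {
            // Retry only the items DynamoDB reported as unprocessed, with simple back-off.
            setTimeout(function() {
                batchWriteWithRetry({ RequestItems: unprocessed }, attempt + 1);
            }, 200 * Math.pow(2, attempt));
        }
    });
}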
In contrast, when putting single records at a time, the AWS SDK will automatically retry. If you are using a single thread, as in the case above, and you switch away from batching, your requests will definitely be throttled, but they will be given time to retry and succeed. This just slows down the execution while keeping the table's throughput low.
The better option is to temporarily raise the write throughput of the table to a value sufficient to support the bulk load. For this example I'd recommend a value between 50 and 100 writes. A single-threaded load operation will likely be rate limited by the round-trip time to the DynamoDB API well below these numbers. For loading only 7000 items I'd recommend avoiding the batch write API, as it requires implementing retry logic yourself. However, if you are loading a lot more data or need the load to complete in less time, the batch API can give you a theoretical 25x improvement on the HTTP overhead, assuming you are not being throttled.
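Purely as a sketch of the single-put alternative described above (the table name mirrors the question, the item shape is assumed, and the SDK's built-in retries handle any throttling):

const AWS = require("aws-sdk");
const documentClient = new AWS.DynamoDB.DocumentClient();

async function loadStations(stations) {
    for (const station of stations) {
        // One item at a time; throttled writes are retried by the SDK automatically.
        await documentClient.put({ TableName: "Stations", Item: station }).promise();
    }
}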
