I have pretty high traffic peaks, so I'd like to override the DynamoDB retry limit and retry policy.
Somehow I'm not able to find the right config property to override the retry limit and retry function.
My code so far:
var aws = require('aws-sdk');
// Configure before constructing the client so the settings apply
aws.config.update({accessKeyId: process.env.AWS_ACCESS_KEY_ID, secretAccessKey: process.env.AWS_SECRET_KEY});
aws.config.region = 'eu-central-1';
var table = new aws.DynamoDB({params: {TableName: 'MyTable'}});
I found the following Amazon variables and code snippets, but I'm not sure how to wire them up with the config:
retryLimit: 15,
retryDelays: function retryDelays() {
    var retryCount = this.numRetries();
    var delays = [];
    for (var i = 0; i < retryCount; ++i) {
        if (i === 0) {
            delays.push(0);
        } else {
            delays.push(60 * 1000 * i); // retry every minute instead
            // Amazon default: delays.push(50 * Math.pow(2, i - 1));
        }
    }
    return delays;
}
The config is pretty limited, and the only retry parameter you can set on it is maxRetries.
maxRetries (Integer) — the maximum amount of retries to attempt with a request. See AWS.DynamoDB.maxRetries for more information.
You should set the maxRetries to a value that is appropriate to your use case.
aws.config.maxRetries = 20;
The private retryDelays API uses the maxRetries config setting internally, so setting that parameter globally, as in the code above, should work. The retryLimit property has no effect, so you can ignore it.
The number of retries can be set through configuration, but it seems there is no elegant way to set the retry delay/backoff strategy.
The only way to manipulate those is to listen to the retry event and adjust the retry delay (and related behavior) in an event handler callback:
aws.events.on('retry', function(resp) {
    // Enable or disable retries completely.
    // Disabling is equivalent to setting maxRetries to 0.
    if (resp.error) resp.error.retryable = true;
    // Retry all requests with a 2s delay (if they are retryable).
    if (resp.error) resp.error.retryDelay = 2000;
});
Be aware that there is an exponential backoff strategy that runs internally, so the retryDelay is not literally 2s for subsequent retries. If you look at the internal service.js file you will see how the function looks:
retryDelays: function retryDelays() {
    var retryCount = this.numRetries();
    var delays = [];
    for (var i = 0; i < retryCount; ++i) {
        delays[i] = Math.pow(2, i) * 30;
    }
    return delays;
}
I don't think it's a good idea to modify internal APIs, but you could do it by modifying the prototype of the Service class:
aws.Service.prototype.retryDelays = function () { /* custom delay schedule */ };
However, this will affect all services, and after digging into this, it's clear the API wasn't built to cover this use case elegantly through configuration.
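If you do go down that route, the shape of the override is simple. Here is a sketch using a stand-in Service class, so it runs without loading the SDK; the real aws.Service computes its schedule the same way through numRetries():

```javascript
// Stand-in for aws.Service, to illustrate the prototype patch without the SDK.
function Service(maxRetries) { this.maxRetries = maxRetries; }
Service.prototype.numRetries = function () { return this.maxRetries; };

// The patch: replace the default exponential schedule (30 * 2^i ms)
// with a flat one-minute delay between attempts.
Service.prototype.retryDelays = function () {
    var delays = [];
    for (var i = 0; i < this.numRetries(); i++) {
        delays.push(i === 0 ? 0 : 60 * 1000);
    }
    return delays;
};

var svc = new Service(3);
console.log(svc.retryDelays()); // [ 0, 60000, 60000 ]
```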
The JavaScript AWS SDK does not allow the DynamoDB service to override the retryDelayOptions and thus does not allow a customBackoff to be defined. These options work for the rest of the services, but for some reason not for DynamoDB.
This page notes that:
Note: This works with all services except DynamoDB.
Therefore, if you want to define a customBackoff function, i.e. determine the retryDelay, it is not possible through configuration. The only way I have found was to overwrite the private method retryDelays of the DynamoDB object (aws-sdk-js/lib/services/dynamodb.js).
Here is an example where an exponential backoff with jitter is implemented:
AWS.DynamoDB.prototype.retryDelays = (retryCount: number): number => {
    const base = 50;    // base delay in ms (pick a value for your use case)
    const cap = 10000;  // maximum delay in ms (pick a value for your use case)
    let temp = Math.min(cap, base * Math.pow(2, retryCount));
    let sleep = Math.random() * temp + 1; // full jitter
    return sleep;
};
The maximum number of retries can be set through the maxRetries property of the DynamoDB configuration object, like so:
let dynamodb = new AWS.DynamoDB({
    region: 'us-east-1',
    maxRetries: 30
});
See also:
https://github.com/aws/aws-sdk-js/issues/402
https://github.com/aws/aws-sdk-js/issues/1171
https://github.com/aws/aws-sdk-js/issues/1100
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff
https://www.awsarchitectureblog.com/2015/03/backoff.html
I have a file that defines functions for making API calls; currently I'm reading the endpoint base URLs from environment variables:
/**
* Prepended to request URL indicating base URL for API and the API version
*/
const VERSION_URL = `${process.env.NEXT_PUBLIC_API_BASE_URL}/${process.env.NEXT_PUBLIC_API_VERSION}`
Because the environment variables weren't being loaded correctly, I tried a quick workaround: hardcoding the URLs in case a variable wasn't defined.
/**
* Prepended to request URL indicating base URL for API and the API version
*/
const VERSION_URL = `${process.env.NEXT_PUBLIC_API_BASE_URL || 'https://hardcodedURL.com'}/${process.env.NEXT_PUBLIC_API_VERSION || 'v1'}`
In development and production mode, it works fine when running on my local machine (in a Docker container). However, as soon as it's deployed to production, I get the following screen:
This is the console output:
framework-bb5c596eafb42b22.js:1 TypeError: Path must be a string. Received undefined
at t (137-10e3db828dbede8a.js:46:750)
at join (137-10e3db828dbede8a.js:46:2042)
at J (898-576b101442c0ef86.js:1:8158)
at G (898-576b101442c0ef86.js:1:10741)
at oo (framework-bb5c596eafb42b22.js:1:59416)
at Wo (framework-bb5c596eafb42b22.js:1:68983)
at Ku (framework-bb5c596eafb42b22.js:1:112707)
at Li (framework-bb5c596eafb42b22.js:1:98957)
at Ni (framework-bb5c596eafb42b22.js:1:98885)
at Pi (framework-bb5c596eafb42b22.js:1:98748)
cu # framework-bb5c596eafb42b22.js:1
main-f51d4d0442564de3.js:1 TypeError: Path must be a string. Received undefined
at t (137-10e3db828dbede8a.js:46:750)
at join (137-10e3db828dbede8a.js:46:2042)
at J (898-576b101442c0ef86.js:1:8158)
at G (898-576b101442c0ef86.js:1:10741)
at oo (framework-bb5c596eafb42b22.js:1:59416)
at Wo (framework-bb5c596eafb42b22.js:1:68983)
at Ku (framework-bb5c596eafb42b22.js:1:112707)
at Li (framework-bb5c596eafb42b22.js:1:98957)
at Ni (framework-bb5c596eafb42b22.js:1:98885)
at Pi (framework-bb5c596eafb42b22.js:1:98748)
re # main-f51d4d0442564de3.js:1
main-f51d4d0442564de3.js:1 A client-side exception has occurred, see here for more info: https://nextjs.org/docs/messages/client-side-exception-occurred
re # main-f51d4d0442564de3.js:1
Viewing the source at t (137-10e3db828dbede8a.js:46:750)
I'm completely at a loss as to what this means or what is happening. Why would hardcoding a string for the path result in this client error? The lack of readable source code makes it impossible for me to understand what's happening.
Quick googling suggests I should upgrade some package, but the error is so vague that I'm not even sure which package is causing the issue.
This is roughly how the version URL path is being used:
/**
* Send a get request to a given endpoint
*
* **Returns a Promise**
*/
function GET(token, data, parent, api) {
    return new Promise((resolve, reject) => {
        try {
            let req = new XMLHttpRequest()
            let endpoint = `${VERSION_URL}/${parent}/${api}` // base url with the params not included
            let params = new URLSearchParams() // URLSearchParams used for adding params to url
            // put data in GET request params
            for (const [key, value] of Object.entries(data)) {
                params.set(key, value)
            }
            let query_url = endpoint + "?" + params.toString() // final query url
            req.open("GET", query_url, true)
            req.setRequestHeader("token", token) // put token into header
            req.onloadend = () => {
                if (req.status === 200) {
                    // success, return response
                    resolve([req.response, req.status])
                } else {
                    reject([req.responseText, req.status])
                }
            }
            req.onerror = () => {
                reject([req.responseText, req.status])
            }
            req.send()
        } catch (err) {
            reject(["Exception", 0])
        }
    })
}
From my experience, this problem can happen for multiple reasons. The most common is accessing nested data from an API response without guarding against missing values. Sometimes this isn't visible in the browser during development, but it throws this kind of error once deployed.
For example:
const response = await apiClient.get("some_url"); // e.g. an axios-style client
const companyId = response.data.companyId; ❌ // throws if data is missing
const companyId = response?.data?.companyId; ✔️ // yields undefined instead
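The difference is easy to verify with a plain object; optional chaining short-circuits to undefined instead of throwing a TypeError. The object shapes below are made up purely for illustration:

```javascript
// Optional chaining (?.) short-circuits to undefined when an
// intermediate value is null or undefined, instead of throwing.
const okResponse = { data: { companyId: 42 } };
const badResponse = {}; // e.g. an error payload with no data field

const id1 = okResponse?.data?.companyId;  // 42
const id2 = badResponse?.data?.companyId; // undefined, no TypeError

// Without ?. the second lookup would throw:
// badResponse.data.companyId -> TypeError: Cannot read properties of undefined
console.log(id1, id2); // 42 undefined
```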
I'm trying to set up Cloudflare Workers to track the circulation of some ERC20 tokens, as an exercise to learn web3 and wasm. I thought it would be simple enough, but about 90% of my time so far has gone into trying to solve this elusive error:
A hanging Promise was canceled. This happens when the worker runtime is waiting for a Promise from JavaScript to resolve but has detected that the Promise cannot possibly ever resolve because all code and events related to the Promise's request context have already finished.
I looked for additional information online, but it seems my error is of a different type.
Here's a simple snippet of code to reproduce it:
mod erc20_abi;

use erc20_abi::ERC20_ABI;
use cfg_if::cfg_if;
use ethers::{
    contract::Contract,
    core::{abi::Abi, types::Address},
    prelude::{AbiError, U256},
    providers::{Http, Provider},
};
use num_format::{Locale, ToFormattedString};
use std::convert::TryFrom;
use wasm_bindgen::prelude::*;

cfg_if! {
    // When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
    // allocator.
    if #[cfg(feature = "wee_alloc")] {
        extern crate wee_alloc;
        #[global_allocator]
        static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
    }
}

#[wasm_bindgen]
pub async fn handle() -> String {
    let web3_ethereum = Provider::<Http>::try_from(WEB3_URL_ETHEREUM).unwrap();
    let abi: Abi = serde_json::from_str(ERC20_ABI).unwrap();
    let token_contract_ethereum = Contract::new(parse_address(ADDRESS_ETH), abi, web3_ethereum);
    let convert_wei_to_decimal = |bignumber: U256| -> String {
        (bignumber.as_u128() / u128::pow(10, 18)).to_formatted_string(&Locale::en)
    };
    // I believe this is the problem, since just returning a String works fine.
    let total_supply_ethereum = token_contract_ethereum
        .method::<_, U256>("totalSupply", ())
        .unwrap()
        .call()
        .await
        .unwrap();
    convert_wei_to_decimal(total_supply_ethereum)
}

fn parse_address(address: &str) -> Address {
    address.parse::<Address>().unwrap()
}
This is the worker/workers.js file
addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event.request))
})

const { handle } = wasm_bindgen;
const instance = wasm_bindgen(wasm);

/**
 * Fetch and log a request
 * @param {Request} request
 */
async function handleRequest(request) {
    await instance;
    const output = await handle();
    let res = new Response(output, { status: 200 });
    res.headers.set('Content-type', 'text/html');
    return res;
}
Cargo.toml
[package]
name = "circulating-supply"
version = "0.1.0"
license = "GPL-3.0-or-later"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lib]
crate-type = ["cdylib", "rlib"]
[profile.release]
opt-level = 's' # Optimize for size.
lto = true
panic = "abort"
codegen-units = 1
[dependencies]
ethers = { git = "https://github.com/gakonst/ethers-rs" }
serde_json = "1.0.68"
num-format = "0.4.0"
cfg-if = "1.0.0"
wee_alloc = { version = "0.4.5", optional = true }
wasm-bindgen = "0.2.78"
wasm-bindgen-futures = "0.4.28"
js-sys = "0.3.55"
wrangler dev compiles it fine, but going to http://127.0.0.1:8787 results in Error 1101.
In my case, a dependency used something not available in the wasm runtime. I guess the ethers cryptography dependencies also depend on something like getrandom.
Adding this to Cargo.toml solved my issue:
[target.wasm32-unknown-unknown.dependencies]
getrandom = { version = "0.1", features = ["wasm-bindgen"] }
This forces the dependencies that rely on getrandom to use its wasm-bindgen feature.
I have a use case in which I would like to generate logs for my JS web application and store them as a file on the client side (on the user's machine).
So I want to know what approach I can follow for generating logs in JS.
If you mean you want to do this from browser-hosted JavaScript code, I'm afraid you can't. Browser-based JavaScript code can't write to arbitrary files on the user's computer. It would be a massive security problem.
You could keep a "log" in web storage, but note that web storage has size limits so you wouldn't want to let it grow too huge.
Here's a barebones logging function that adds to a log in local storage:
function log(...msgs) {
    // Get the current log's text, or "" if there isn't any yet
    let text = localStorage.getItem("log") || "";
    // Add this "line" of log data
    text += msgs.join(" ") + "\r\n";
    // Write it back to local storage
    localStorage.setItem("log", text);
}
Obviously you can then build on that in a bunch of different ways (log levels, date/time logging, etc.).
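For instance, here is one way to add levels and timestamps on top of the same idea. This is a sketch: the in-memory stand-in mimics the localStorage getItem/setItem API so the snippet also runs outside a browser; in a page you would use localStorage directly.

```javascript
// In-memory stand-in mimicking localStorage's getItem/setItem,
// so this sketch runs outside a browser too.
const storage = (() => {
  const map = new Map();
  return {
    getItem: (k) => (map.has(k) ? map.get(k) : null),
    setItem: (k, v) => map.set(k, String(v)),
  };
})();

// Same append-to-storage idea as log() above, with a level and a timestamp.
function logWithLevel(level, ...msgs) {
  const text = storage.getItem("log") || "";
  const line = `[${new Date().toISOString()}] ${level.toUpperCase()}: ${msgs.join(" ")}`;
  storage.setItem("log", text + line + "\r\n");
}

logWithLevel("info", "app started");
logWithLevel("error", "request failed");
console.log(storage.getItem("log"));
```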
You can use local storage to simulate a file: create an id for each line of your "file" and store the number of the last line.
function logIntoStorage (pMsg) {
    if (!pMsg) pMsg = "pMsg is not here !";
    if ((typeof pMsg) != "string") pMsg = "pMsg is Not a string:" + (typeof pMsg);
    let logNb = "logNb";
    let padLong = 7;
    let strLg = "0";
    let lg = 0;
    let maxSize = 50; // max nb of lines in the log
    // Reading log line number
    strLg = localStorage.getItem(logNb);
    if (!strLg) { // logNb not stored yet
        lg = 0;
        strLg = "0";
        localStorage.setItem(logNb, lg.toString(10)); // store the number of the current line
    } else { // read logNb from storage
        strLg = localStorage.getItem(logNb);
        lg = parseInt(strLg, 10);
    }
    if (lg >= maxSize) {
        lg = maxSize; // size limit (nb of lines) reached
        pMsg = "LIMIT SIZE REACHED";
    }
    // log msg into localStorage at logLine:0000####
    let s = ("0000000000000000" + strLg).substr(-padLong); // padding zeros
    localStorage.setItem("logLine:" + s, pMsg);
    if (lg >= maxSize) return;
    lg++; // point to the next line
    localStorage.setItem(logNb, lg.toString(10));
}
In modern Chrome you can actually "stream" data to the user's disk, after they give permission, thanks to the File System Access API.
To do so, you have to request a file to save to by calling showSaveFilePicker().
Once you get the user's approval you'll receive a handle from where you'll be able to get a WriteableStream.
Once you are done writing, you just have to .close() the writer.
onclick = async () => {
    if (!("showSaveFilePicker" in self)) {
        throw new Error("unsupported browser");
    }
    const handle = await showSaveFilePicker();
    const filestream = await handle.createWritable();
    const writer = await filestream.getWriter();
    // Here we have a WritableStream with direct access to the user's disk;
    // we can write to it as we wish.
    writer.write("hello");
    writer.write(" world");
    // When we're done writing:
    await writer.ready;
    writer.close();
};
Live example.
I have this problem when I try to upload more than a few hundred files at the same time.
The API accepts only one file per request, so I have to call the service once per file. Right now I have this:
onFilePaymentSelect(event): void {
    if (event.target.files.length > 0) {
        this.paymentFiles = event.target.files[0];
    }
    let i = 0;
    let save = 0;
    const numFiles = event.target.files.length;
    let procesed = 0;
    if (event.target.files.length > 0) {
        while (event.target.files[i]) {
            const formData = new FormData();
            formData.append('file', event.target.files[i]);
            this.payrollsService.sendFilesPaymentName(formData).subscribe(
                (response) => {
                    let added = null;
                    procesed++;
                    if (response.status_message === 'File saved') {
                        added = true;
                        save++;
                    } else {
                        added = false;
                    }
                    this.payList.push({ filename, message, added });
                });
            i++;
        }
    }
}
So really I have a while loop sending each file to the API, but with a high number of files I get "429 Too Many Requests". Is there any way I can improve this?
Working with observables will make that task easier to reason about (rather than using imperative programming).
A browser usually allows you to make 6 requests in parallel and will queue the rest. But we don't want the browser to manage that queue for us (and if we were running in a Node environment, we wouldn't have it at all).
What do we want: to upload a lot of files, queued and uploaded as efficiently as possible by running 5 requests in parallel at all times (so we keep one free for other requests in our app).
In order to demo that, let's build some mocks first:
import { from, of } from "rxjs";
import { delay, map, mergeAll, scan, tap } from "rxjs/operators";

function randomInteger(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

const mockPayrollsService = {
  sendFilesPaymentName: (file: File) => {
    return of(file).pipe(
      // simulate a 500ms to 1.5s network latency from the server
      delay(randomInteger(500, 1500))
    );
  }
};

// array containing 50 mocked files
const files: File[] = Array.from({ length: 50 })
  .fill(null)
  .map(() => new File([], ""));
I think the code above is self explanatory. We are generating mocks so we can see how the core of the code will actually run without having access to your application for real.
Now, the main part:
const NUMBER_OF_PARALLEL_CALLS = 5;

const onFilePaymentSelect = (files: File[]) => {
  const uploadQueue$ = from(files).pipe(
    map(file => mockPayrollsService.sendFilesPaymentName(file)),
    mergeAll(NUMBER_OF_PARALLEL_CALLS)
  );

  uploadQueue$
    .pipe(
      scan(nbUploadedFiles => nbUploadedFiles + 1, 0),
      tap(nbUploadedFiles =>
        console.log(`${nbUploadedFiles}/${files.length} file(s) uploaded`)
      ),
      tap({ complete: () => console.log("All files have been uploaded") })
    )
    .subscribe();
};

onFilePaymentSelect(files);
We use from to send the files one by one into an observable
using map, we prepare our request for 1 file (but as we don't subscribe to it and the observable is cold, the request is just prepared, not triggered!)
we now use mergeAll to run a pool of calls. Because mergeAll takes the concurrency as an argument, we can say "please run a maximum of 5 calls at the same time" (map + mergeAll(n) is equivalent to mergeMap with a concurrency of n)
we then use scan for display purpose only (to count the number of files that have been uploaded successfully)
Here's a live demo: https://stackblitz.com/edit/rxjs-zuwy33?file=index.ts
Open up the console to see that we're not uploading all of them at once.
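If you'd rather not pull in RxJS, the same pooling idea can be sketched with plain async/await. This is a minimal sketch; uploadFile is a hypothetical stand-in for the real service call:

```javascript
// A small worker pool: at most `limit` uploads in flight at any time.
// `uploadFile` is a hypothetical stand-in for the real service call.
async function uploadAll(files, uploadFile, limit = 5) {
  const results = new Array(files.length);
  let next = 0;

  async function worker() {
    // Each worker keeps claiming the next unprocessed index until none remain.
    while (next < files.length) {
      const i = next++; // JS is single-threaded, so this claim is race-free
      results[i] = await uploadFile(files[i]);
    }
  }

  // Start up to `limit` workers draining the shared queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, files.length) }, worker)
  );
  return results;
}

// Usage with a mock upload that resolves after a short delay:
const mockUpload = (file) =>
  new Promise((resolve) => setTimeout(() => resolve(`${file} saved`), 10));

uploadAll(["a.csv", "b.csv", "c.csv"], mockUpload).then(console.log);
```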
So I was messing around in node.js and ran this code :
var http = require("http");

function get() {
    var headers = {
        'Accept-Encoding': 'gzip'
    };
    var startedAt = new Date().getTime();
    for (var i = 0; i < 1; i++)
        http.get({
            host: "www.example.net",
            path: "/catalog/",
            header: headers
        }, function (response) {
            var body;
            response.on('data', function (d) {});
            response.on('end', function (e) {
                console.log(new Date().getTime() - startedAt);
            });
        });
}

get()
I discovered it is almost 3x slower than the same GET request made from a Google Chrome extension. I have copied the headers exactly, yet there is still almost a 100ms difference in speed.
Any ideas how to speed this up?
I'm finding times around 50ms/request with your same logic so I'm going to assume you are running this loop many times and taking an average. If that is the case then you are probably running a version of node < 0.12 and http.globalAgent.maxSockets has a default of 5 (which is only allowing 5 concurrent connections at a time in your case).
Try setting http.globalAgent.maxSockets = Infinity, which is the default in current versions of Node.