Zapier code: trigger multiple webhooks - javascript

I'm trying to send multiple webhooks based on the number of items in a JSON array. I'm using an example from:
how to trigger webhook from zapier code
I tried it and it almost works as planned. The problem is that when I have a JSON array of 4 items, it sends 16 webhooks instead of 4. If I have a JSON array of 3 items, it sends out 9 webhooks instead of 3.
I use inputData.items for insertion of the JSON array. Does anyone know why the items in the JSON array are multiplied?
I used:
const elements = JSON.parse(inputData.items);
var body = elements;
var options = {
    "url": "URL.COM",
    "method": "POST",
    "headers": {'Content-Type': 'application/json'},
    "body": JSON.stringify(body)
  },
  requests = elements.map(mapDataToSettings);

function mapDataToSettings(elem) {
  var settings = Object.assign({}, options);
  settings.data = JSON.stringify(elem);
  return settings;
}

Promise.all(requests.map(grabContent))
  .then(function(data) { callback(null, {requestsMade: data}); });

function grabContent(options) {
  return fetch(options.url, options)
    .then(function(res) { return res.json(); });
}
Does anyone see why my webhooks are triggering too often?
Thanks

David here, from the Zapier Platform team.
It's hard to say for sure without seeing your zap, but my guess is that you're feeding .items in from a previous Code step where it's returned as an array? If so, you're running into an undocumented code feature where subsequent steps are run for each item in the array. This is usually desirable, but since you're doing a loop in the code, you don't need it.
Assuming that's the case, you've got two options:
Return a JSON string from the previous step (instead of an array) so this code step only runs once (see the sketch below).
Change this code to only receive one item and perform a request with it.
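For the first option, here is a minimal sketch of what the previous Code step could return (myItems is a hypothetical stand-in for however you build your array):
// Previous Code step: return one object rather than a bare array.
// When a Code step returns an array, Zapier runs each later step once
// per item, so wrapping the array in a JSON string keeps it to one run.
const myItems = [{name: "a"}, {name: "b"}]; // hypothetical items
return {items: JSON.stringify(myItems)};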
If something else is going on, update your question with the zap id (it's not sensitive info) and I can take a look!

Related

Sending an array with axios.get as params is undefined

I am making a GET request with additional params options. I'm using the request for a filter, so the params are filters for what to get back:
const res = await axios.get("http://localhost:3000/getsomedata", {
  params: {
    firstFilter: someObject,
    secondFilter: [someOtherObject, someOtherObject]
  }
});
The request goes through just fine. On the other end, when I console.log(req.query); I see the following:
{
  firstFilter: 'someObject',
  'secondFilter[]': ['{someOtherObject}', '{someOtherObject}'],
}
If I do req.query.firstFilter, that works just fine, but req.query.secondFilter does not work; to get the data I have to use req.query["secondFilter[]"]. Is there a way to avoid this and get my array of data with req.query.secondFilter?
My workaround for now is to do:
const filter = {
  firstFilter: req.query.firstFilter,
  secondFilter: req.query["secondFilter[]"]
};
And it works of course, but I don't like it, I am for sure missing something.
Some tools for parsing query strings expect arrays of data to be encoded as array_name=1&array_name=2.
This is ambiguous when a parameter carries only one item: the parser can't tell whether it should be a one-element array or a plain string.
To avoid that problem PHP required arrays of data to be encoded as array_name[]=1&array_name[]=2 and would discard all but the last item if you left the [] out (so you'd always get a string).
A lot of client libraries that generated data for submission over HTTP decided to do so in a way that was compatible with PHP (largely because PHP was and is very common).
So you need to either:
Change the backend to be able to parse PHP style
Change your call to axios so it doesn't generate PHP style
Backend
The specifics depend on what backend you are using, but it looks like you might be using Express.js.
See the application settings.
You can turn on extended (PHP-style) query parsing by setting "query parser" to "extended" (although that is the default):
const express = require("express");

const app = express();
app.set("query parser", "extended");
Frontend
The axios documentation says:
// `paramsSerializer` is an optional function in charge of serializing `params`
// (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/)
paramsSerializer: function (params) {
  return Qs.stringify(params, {arrayFormat: 'brackets'})
},
So you can override that:
const Qs = require("qs");

const res = await axios.get("http://localhost:3000/getsomedata", {
  params: {
    firstFilter: someObject,
    secondFilter: [someOtherObject, someOtherObject]
  },
  paramsSerializer: (params) => Qs.stringify(params, {arrayFormat: 'repeat'})
});
My example requires the qs module (imported as Qs above).
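For reference, here is roughly what qs produces for an array in each format (output per the qs package's documented behavior; note that brackets get percent-encoded):
const Qs = require("qs");

Qs.stringify({secondFilter: ['a', 'b']}, {arrayFormat: 'repeat'});
// => 'secondFilter=a&secondFilter=b'  (what simple query parsers expect)

Qs.stringify({secondFilter: ['a', 'b']}, {arrayFormat: 'brackets'});
// => 'secondFilter%5B%5D=a&secondFilter%5B%5D=b'  (PHP-style, %5B%5D is '[]')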
This has to do with params not being serialized the way your server expects for the HTTP GET method. Remember that GET has no body params the way POST does; everything travels as text in the URL.
For more information I refer to this answer, which provides more detailed info with code snippets.

Fetching data from multiples pages of API

I'm fetching data from a PS4 Games API but it's split into 400+ pages. I wanted to get the data from all pages, but the solution I came up with did not work very well. It gives me the error 'JSON Value of type NSNull cannot be converted to a valid URL'. Also, I don't think the for loop works well either; it shows me it going through all the pages when it displays the results in my list.
Additionally, this API is dynamic because new games keep getting released. So how could I get data up to the latest page without manually changing my last page number every time? I looked at some questions here but I couldn't fit them into my code.
My code is rather long so I'm just going to post the part that matters:
componentDidMount() {
  var i;
  for (i = 0; i < 400; i++) {
    fetch(`https://api.rawg.io/api/games?page=${i + 1}&platforms=18`, {
      "method": "GET",
      "headers": {
        "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
        "x-rapidapi-key": "495a18eab9msh50938d62f12fc40p1a3b83jsnac8ffeb4469f"
      }
    })
      .then(res => res.json())
      .then(json => {
        const { results: games } = json;
        // setting the data in the games state
        this.setState({ games });
      });
  }
}
The API also has a field that gives me the link of the next page; I think there is a way to use 'next' and fetch data from that URL.
If anyone could help, that would be AWESOME. Thank you in advance
It's my first answer on this forum, so... I hope it's helpful.
As I see it, you have two options:
Each response includes two fields, count and next. Either you make a for loop with count/20 as the limit (20 is the number of items given per response), or you make a while loop that runs until the next field comes back null (currently at page 249).
What is currently happening is that you are making 400 requests, and as each one comes in it overwrites the component state with the response it received. It does not care about what is already there or about any of the other requests.
An approach you could try instead: as the responses come in, append the results to the ongoing list and update the state with the running list (sketched after this answer).
Going forward, and for your other question about handling new releases: instead of running 400 queries every time the application is used, try looking into caching the results. When the app loads you can check whether a cache exists, and load from it or query if it does not. The rawg.io /games endpoint has a parameter for ordering by release, so when the application loads in future you can loop until you reach a game that is already in cache, at which point you terminate.
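Putting both answers together, here is a minimal sketch that follows the next link and appends each page's results instead of overwriting them (assuming the component starts with games: [] in its state, and with your real key in place of the placeholder):
async componentDidMount() {
  let url = "https://api.rawg.io/api/games?page=1&platforms=18";
  while (url) {
    const res = await fetch(url, {
      method: "GET",
      headers: {
        "x-rapidapi-host": "rawg-video-games-database.p.rapidapi.com",
        "x-rapidapi-key": "YOUR-API-KEY" // placeholder
      }
    });
    const json = await res.json();
    // Append this page's results to whatever is already in state.
    this.setState(prev => ({ games: [...prev.games, ...json.results] }));
    // 'next' holds the URL of the following page, or null on the last one.
    url = json.next;
  }
}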

Ajax GET: multiple data-specific calls, or fewer less specific calls?

I'm developing a web app using a Node.js/express backend and MongoDB as a database.
The below example is for an admin dashboard page where I will display cards with different information relating to the users on the site. On this page I might want to show, for example:
The number of each type of user
The most common location for each user type
How many signups there are by month
Most popular job titles
I could do this all in one route, where I have a controller that performs all of these tasks, and bundles them as an object to a url that I can then pull data from using ajax. Or, I could split each task into its own route/controller, with a separate ajax call to each. What I'm trying to decide is what are the best practices around making multiple ajax calls on a single page.
Example:
I am building up a page where I will make an interactive table using DataTables for different types of user (currently two: mentors and mentees). This example requires just two data requests (one for each user type), but my final page will be more like 10.
For each user type, I make an Ajax GET call and build the table from the returned data:
User type 1 - Mentees
$.get('/admin/' + id + '/mentees')
  .done(data => {
    $('#menteeTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "status" }
      ]
    });
  })
User type 2 - Mentors
$.get('/admin/' + id + '/mentors')
  .done(data => {
    $('#mentorTable').DataTable({
      data: data,
      "columns": [
        { "data": "username" },
        { "data": "position" }
      ]
    });
  })
This then requires two routes in my Node.js backend:
router.get("/admin/:id/mentors", getMentors);
router.get("/admin/:id/mentees", getMentees);
And two controllers, which are structured identically (but filter for different user types):
getMentees(req, res, next) {
  console.log("Controller: getMentees");
  let query = { accountType: 'mentee', isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err);
    });
}
This works great. However, as I need to make multiple data requests I want to make sure that I'm building this the right way. I can see several options:
Make individual ajax calls for each data type, and do any heavy lifting on the backend (e.g. tally user types and return) - as above
Make individual ajax calls for each data type, but do the heavy lifting on the frontend. In the above example I could have just as easily filtered out isAdmin users on the data returned from my ajax call
Make fewer ajax calls that request less refined data. In the above example I could have made one call (requiring only one route/controller) for all users, and then filtered data on the frontend to build two tables
I would love some advice on which strategy is most efficient in terms of time spent sourcing data
UPDATE
To clarify the question, I could have achieved the same result as above using a controller setup something like this:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  })
And then make one ajax call, and split the data accordingly. My question is: which is the preferred option?
TL;DR: Option 1
IMO I wouldn't serve unprocessed data to the front-end: things can go wrong, you can reveal too much, and it could take a lot for the unspecified client machine to process (it could be a low-power device with limited bandwidth and battery, for example). You want a smooth user experience, and JavaScript on the client churning through a mass of data would detract from that. I use the back-end for the processing (preparing the information the way it's needed), JS for retrieving and placing the information (Ajax) on the page and for things like switching element states, and CSS for anything moving around (animations, transitions, etc.) as much as possible before resorting to JS.
Also, for the routes, my approach would be to give each distinct package of information (DataTable) its own route, so you're not overloading one method with too many purposes; keep it simple and maintainable. You can always abstract away anything that's identical and repeated often.
So to answer your question, I'd go with Option 1.
You could also offer a single 'page-load' endpoint, then if anything changes update the individual tables later using their distinct endpoints. This initial 'page-load' call could collate the information from the endpoints on the backend and serve as one package of data to populate all tables initially. One initial request with one lot of well-defined data, then the ability to update an individual table if the user requests it (or there is a push if you get into that).
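As a rough sketch of that 'page-load' idea (hedged: the /admin/:id/page-load route name is made up for illustration; the queries mirror the ones in the question):
// One aggregation endpoint that collates the per-table queries on the backend.
router.get("/admin/:id/page-load", (req, res) => {
  Promise.all([
    Profile.find({ accountType: 'mentee', isAdmin: false }).lean(),
    Profile.find({ accountType: 'mentor', isAdmin: false }).lean()
  ])
    .then(([mentees, mentors]) => res.json({ mentees, mentors }))
    .catch(err => {
      console.log(err);
      res.status(500).end();
    });
});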
It is a really good question. First of all, you should consider how your application will work with the received data. If it's a large amount of data that doesn't change on the frontend but is needed in full across different views (like user-settings data: the application reads it constantly but it rarely changes), then it can be cached on the frontend and your second option fits. If, on the other hand, the frontend works with only a small part of a huge amount of database data (like log data for a specific user), it's preferable to preprocess (filter) on the server side, as in your first and third options. Really, the second option is preferable only for caching unchanged data on the frontend, as far as I'm concerned.
After your clarification of the question: you could use grouping for your request, together with the lodash library:
const _ = require('lodash');

Profile.find(query)
  .lean()
  .then(users => {
    let result = _(users)
      .groupBy((elem) => elem.accountType)
      .map((vals, key) => ({ accountType: key, users: vals }))
      .value();
    return res.json(result);
  });
Certainly you could map your data however you find comfortable. This approach returns all types of accounts (not only 'mentee' and 'mentor').
Usually there are 3 things in such architectures:
1. Client
2. API Gateway
3. Micro services (Servers)
In your case :
1. Client is JS application code
2. API Gateway + Server is Nodejs/express (Dual responsibility)
Point 1 to be noted
Servers only provide core APIs. So this API for a server should be just a user API, like:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
i.e anyone can ask for all data or filtered data using type query string.
Point 2 to be noted
Since web pages are composed of multiple data points, and making a call to the same server for every data point increases round trips and makes the UX worse, API gateways exist. So in this case JS would not communicate directly with the core server; it communicates with the API gateway through APIs like:
/home
The above API internally calls the API below and aggregates the data into a single JSON with the mentor and mentee lists:
/users?type={mentor/mentee/*}&limit=10&pageNo=8
This API simply passes the call through to the core server with the query attributes.
Now, since in your code the API gateway and core server are merged into a single layer, this is how you could set up your code:
async getHome(req, res, next) {
  console.log("Controller: /home");
  let queryMentees = { accountType: 'mentee', isAdmin: false };
  let queryMentors = { accountType: 'mentor', isAdmin: true };
  // getProfileData returns a promise, so await both lookups
  const mentees = await getProfileData(queryMentees);
  const mentors = await getProfileData(queryMentors);
  return res.json({ mentees, mentors });
}

async getUsers(req, res, next) {
  console.log("Controller: /users");
  let query = { accountType: req.query.type, isAdmin: req.query.isAdmin };
  return res.json(await getProfileData(query));
}
And a common ProfileService.js class with a function like:
getProfileData(query) {
  // Return the promise so callers can await the result
  return Profile.find(query)
    .lean()
    .then(users => {
      return users;
    })
    .catch(err => {
      console.log(err);
    });
}
More info about API Gateway Pattern here
If you can't estimate how many user types your app will need, then you should use parameters. If I were writing this application, I wouldn't write multiple functions for calling Ajax, and I wouldn't write multiple routes and controllers.
Client side, like this:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      let dataTable = null;
      switch (userType) {
        case "mentee":
          dataTable = $('#menteeTable');
          break;
        case "mentor":
          dataTable = $('#mentorTable');
          break;
        // You can add more selectors for datatables, but I wouldn't prefer this way:
        // you can generate the "columns" property on the server, like "data",
        // meaning you can use just one datatable object on the client side.
      }
      dataTable.DataTable({
        data: data,
        "columns": [
          { "data": "username" },
          { "data": "status" }
        ]
      });
    })
}
My preference for the client side:
let getQuery = (id, userType) => {
  $.get('/admin/' + id + '/userType/' + userType)
    .done(data => {
      $('#dataTable').DataTable({
        data: data.rows,
        "columns": data.columns
      });
    })
}
In this scenario the server response should have the shape { rows: [{}...], columns: [{}...] }; see the DataTables examples.
Server side, like this:
Just one router:
router.get("/admin/:id/userType/:userType", getQueryFromDB);
Controller
getQueryFromDB(req, res, next) {
  let query = { accountType: req.params.userType, isAdmin: false };
  Profile.find(query)
    .lean()
    .then(users => {
      return res.json(users);
    })
    .catch(err => {
      console.log(err);
    });
}
So the main point of your question, for me, is that mentees, mentors, etc. are parameters, just like "id".
Make sure your authentication checks which users have access to which userType's data (for both code samples, mine and yours); someone could otherwise reach your data just by changing the route.
Have a nice weekend
From the standpoint of performance and smoothness of the UI on the user's device:
It would surely be better to do one Ajax request for all the core data (which is important to show as soon as possible), and possibly perform more requests for lower-priority data after a tiny delay. Or do two requests: one for 'fast' data and another for 'slow' data (if applicable), because:
On one hand, many Ajax requests can slow down the UI: there is a limit on how many Ajax requests run at the same time (it is browser dependent and can be from 2 to 10), so if, for example, IE has a limit of 2, then 10 Ajax calls produce a queue of waiting requests.
On the other hand, if there is a lot of data to show, or some data takes longer to prepare, a single request could mean a long wait for the backend response before anything is shown.
Talking of heavy lifting: it is not good to do such things on the UI side anyway, because:
The user's device may be low on resources and 'slow'.
JavaScript is single-threaded, so any long loop 'freezes' the UI for as long as the loop takes to run.
Talking of filtering users:
Profile.find(query)
  .lean()
  .then(users => {
    let mentors = [],
        mentees = [];
    users.forEach(user => {
      if (user.accountType === 'mentee') {
        mentees.push(user);
      } else if (user.accountType === 'mentor') {
        mentors.push(user);
      }
    });
    return res.json({ mentees, mentors });
  })
seems to have one problem: the query may eventually carry sorting and limits, and if so the final result will be inconsistent; it could end up with only mentees or only mentors. I think you should do two separate queries against the data storage anyway.
From the standpoint of project structure, maintainability, flexibility, reusability, and so on, it is of course good to decouple things as much as possible.
So, finally, imagine you made:
1. Many microservices, e.g. one backend microservice per widget, plus a layer that aggregates their results so UI traffic is optimized into 1-2 Ajax queries.
2. Many UI modules, each working with its own data received from some service that makes 1-2 calls to the aggregating backend and distributes the different datasets it receives to the frontend modules.
On the backend, just make one dynamic, parameterized API method; you can pass mentor, mentee, admin, etc. as the role. You should have some kind of user authentication and authorization to check whether user A can see users in role B or not.
Regarding the UI, it's up to the users: they may want one page with a drop-down filter, or they may want URLs they can bookmark.
Like multiple URLs: /admin, /mentor, etc.
Or one URL with a query string and a drop-down: /user?role=mentor, /user?role=admin.
Based on the URL you have to make the controllers. I generally prefer a drop-down that fetches data (by default all mentors might be the selection); see the sketch below.
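A minimal sketch of that single parameterized endpoint (hedged: the /user route, the role query parameter, and the allowedRoles check are illustrative, not from the question):
// One route serves every user type; the role arrives as a query parameter.
router.get("/user", (req, res) => {
  const role = req.query.role || 'mentor'; // default selection, as suggested above
  // Authorization: can the requesting user see this role at all?
  // (req.user.allowedRoles is a hypothetical shape for your auth layer.)
  if (!req.user || !req.user.allowedRoles.includes(role)) {
    return res.status(403).json({ error: "not allowed" });
  }
  Profile.find({ accountType: role, isAdmin: false })
    .lean()
    .then(users => res.json(users))
    .catch(err => {
      console.log(err);
      res.status(500).end();
    });
});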

Getting number of records from JSON server end point

I'm creating a mock application with JSON server as the backend, and I'm wondering if it is possible to get the total number of records contained at an endpoint without loading all the records themselves. Assuming the db.json file looks like the JSON snippet below, how would I find out that the endpoint only has one record without fetching the record itself, provided that's possible?
{
  "books": [{
    "title": "The Da Vinci Code",
    "rating": "0"
  }]
}
You can simply retrieve the X-Total-Count header
JSON Server includes this header in its response when pagination is enabled, i.e. when using the _page parameter (e.g. localhost:3000/contacts?_page=1).
Whenever you fetch the data, json-server actually returns the total count by default (it sets an x-total-count header):
Example:
const axios = require("axios");

axios
  .get("http://localhost:3001/users", {
    params: {
      _page: 1,
      _limit: 10
    }
  })
  .then(res => {
    console.log(res.data); // access your data, which is limited to 10 per page
    console.log(res.headers["x-total-count"]); // length of your data without the page limit
  });
You've got three options. I'd recommend the third one:
Return all the records and count them. This could be slow and send a lot of data over the wire but probably is the smallest code change for you. It also opens you up to attacks where people can hammer your server by requesting many records repeatedly.
Add a new endpoint. You could add a new endpoint that simply returns the count. It's simple, but it's slightly annoying to have a second endpoint to document and maintain.
Modify the existing endpoint. Return something like
{
  count: 157,
  rows: [...data]
}
The benefit of option 3 is that it's all in one endpoint. It also moves you toward a point where you can add skip and take parameters in future to allow pagination of the resulting data (see the sketch below).
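If the backend ever graduates from json-server to a hand-rolled Express route, option 3 might look something like this (a sketch; getBooks is a hypothetical data-access helper, and json-server itself would need custom middleware to do the same):
app.get("/books", async (req, res) => {
  const rows = await getBooks(); // hypothetical helper that loads the records
  // One payload carrying both the total and the data
  res.json({ count: rows.length, rows });
});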
You could write another endpoint that returns the number of records. Usually you will also want the endpoint to accept limit and offset parameters for use with pagination.
let response = await fetch("http://localhost:3001/books?_page=1");
let total = response.headers.get('X-Total-Count');

How do I access the data given in the onHttpRequest function in the Firefox Add-on SDK?

I am trying to read a response header's name and value. The end goal is to compare them to a preset name and value to see if they match.
Here is what I have so far; it's the function that runs every time I get a response.
var observer = require("observer-service");

observer.add("http-on-examine-response", onHttpRequest);

function onHttpRequest(subject, data) {
  console.log("request subject...." + subject);
  console.log("request data...." + data);
}
The output is as follows:
request subject....[xpconnect wrapped nsISupports]
request data....null
I was hoping to know how to get the rest of the data out of the response.
Any help would be great, thanks.
The subject for http-on-examine-response implements nsIHttpChannel, among some other things. You may use .QueryInterface() or instanceof (which internally kinda uses QueryInterface, so that works as well) to get to that interface.
const {Ci} = require("chrome");
if (subject instanceof Ci.nsIHttpChannel) {
console.log("content-type", subject.getResponseHeader("content-type"));
subject.visitResponseHeaders(function(header, value) {
console.log(header, value);
});
}
There are a couple of other questions around here going into more detail on how to use these notifications... Also, MXR can help a lot in checking out what interfaces there are, how everything fits together, and how one could use it (in particular, the existing tests are great for seeing uses of all kinds of stuff).
There is also the "nsITraceableChannel, Intercept HTTP Traffic" article going into more details, e.g. on how to use nsITraceableChannel to get the payload data from such a channel.
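As a small follow-on for the stated end goal (matching a preset name/value pair), here is a sketch built on the answer's code above; TARGET_NAME and TARGET_VALUE are hypothetical placeholders for your preset pair:
const {Ci} = require("chrome");

var TARGET_NAME = "x-my-header";     // hypothetical preset header name
var TARGET_VALUE = "expected-value"; // hypothetical preset header value

function onHttpRequest(subject, data) {
  if (subject instanceof Ci.nsIHttpChannel) {
    subject.visitResponseHeaders(function(header, value) {
      // Header names are case-insensitive, so normalize before comparing
      if (header.toLowerCase() === TARGET_NAME && value === TARGET_VALUE) {
        console.log("matched header on " + subject.URI.spec);
      }
    });
  }
}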
