Artillery - testing an API several times - javascript

I'm trying to use Artillery to test one of my APIs and track down a potential bug. Here is my code:
"config": {
"target": "http://websocket.target",
"phases": [
{"duration": 3, "arrivalRate": 4}
]
},
"scenarios": [
{
"name": "target",
"engine": "socketio",
"flow": [
{
"emit": {
"namespace": "/test/basket",
"channel": "add",
"data": {
"foodId":91789,
"restaurantId":3,
}
}
},
{
"think":0
}
]
}
]
}
I've decided to simulate this situation:
4 users arriving per second add food to the basket for 3 seconds (without delay). But the most important thing for me is their concurrency. Does Artillery have a specific flag or attribute for this?

Artillery does not provide a way to set a fixed concurrency level. You can approximate a desired concurrency level by having virtual users hold their connection to the server for some period of time with think, as in your test script.
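As a rough sketch of that idea: in a steady state, concurrency ≈ arrivalRate × mean session length (Little's law), so arriving 4 users per second while each user thinks for 1 second keeps roughly 4 users connected at any moment. The script below only rearranges the numbers from your own config; the longer 30-second phase is an assumption, added so the arrival and departure rates have time to balance out:
{
  "config": {
    "target": "http://websocket.target",
    "phases": [
      { "duration": 30, "arrivalRate": 4 }
    ]
  },
  "scenarios": [
    {
      "name": "target",
      "engine": "socketio",
      "flow": [
        {
          "emit": {
            "namespace": "/test/basket",
            "channel": "add",
            "data": { "foodId": 91789, "restaurantId": 3 }
          }
        },
        { "think": 1 }
      ]
    }
  ]
}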


How to call FullTextSearchKnowledgeArticle action using REST calls?

How can we call an MSCRM action using an HTTP client request (C#)?
Can anyone please assist with this?
The documentation does not cover this action; I pulled this payload together from a couple of references. I could not test it in my environment, so please verify it yourself.
The sample will look like this:
{
  "SearchText": "",
  "UseInflection": false,
  "RemoveDuplicates": false,
  "StateCode": 3,
  "QueryExpression": {
    "#odata.type": "Microsoft.Dynamics.CRM.QueryExpression",
    "EntityName": "knowledgearticle",
    "ColumnSet": {
      "AllColumns": true
    },
    "Distinct": false,
    "NoLock": false,
    "PageInfo": {
      "PageNumber": 1,
      "Count": 10,
      "ReturnTotalRecordCount": true,
      "PagingCookie": ""
    },
    "LinkEntities": [],
    "Criteria": {
      "FilterOperator": "And",
      "Conditions": [
        {
          "EntityName": "knowledgearticle",
          "AttributeName": "languagelocaleid",
          "Operator": "Equal",
          "Values": [
            "56940B3E-300F-4070-A559-5A6A4D11A8A3"
          ]
        }
      ]
    }
  }
}
Make a POST request to the following URL:
[Your organization root URL]/api/data/v9.1/FullTextSearchKnowledgeArticle
Here is one sample payload that works. You can optionally add more filters to narrow the search results.
{
  "SearchText": "test",
  "UseInflection": true,
  "RemoveDuplicates": true,
  "StateCode": 3,
  "QueryExpression": {
    "#odata.type": "Microsoft.Dynamics.CRM.QueryExpression",
    "EntityName": "knowledgearticle",
    "ColumnSet": {
      "AllColumns": true
    },
    "PageInfo": {
      "PageNumber": 1,
      "Count": 10
    },
    "Orders": [
      {
        "AttributeName": "modifiedon",
        "OrderType": "Descending"
      }
    ]
  }
}
Refer to the link below for sample code for connecting to Dynamics.
CDSWebApiService class library (C#)
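For a quick smoke test outside C#, here is a minimal sketch of the same call using Node 18+'s built-in fetch (run as an ES module for top-level await). The organization URL and bearer token are placeholders you must supply; acquiring the token from Azure AD is out of scope here:
const orgUrl = "https://yourorg.crm.dynamics.com"; // placeholder
const accessToken = "<bearer token from Azure AD>"; // placeholder

// payload copied from the working sample above
const payload = {
  SearchText: "test",
  UseInflection: true,
  RemoveDuplicates: true,
  StateCode: 3,
  QueryExpression: {
    "#odata.type": "Microsoft.Dynamics.CRM.QueryExpression",
    EntityName: "knowledgearticle",
    ColumnSet: { AllColumns: true },
    PageInfo: { PageNumber: 1, Count: 10 },
    Orders: [{ AttributeName: "modifiedon", OrderType: "Descending" }]
  }
};

const res = await fetch(`${orgUrl}/api/data/v9.1/FullTextSearchKnowledgeArticle`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Authorization": `Bearer ${accessToken}`
  },
  body: JSON.stringify(payload)
});
console.log(res.status, await res.json());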

Extracting only portion of JSON document with REST API search call in MarkLogic

I am looking for ways of extracting only a portion of a JSON document with a REST API search call in MarkLogic, using JavaScript or XQuery.
I have tried using the extract-document-data query option but was not successful. I tried checking my extract path using cts:valid-extract-path, but that function was not recognised in MarkLogic 9.0-1.
Do I have to use specific search options such as constraints or a structured query?
Could you please help out? TIA.
I have a sample document like the one below:
{
  "GenreType": {
    "Name": "GenreType",
    "LongName": "Genre Complex",
    "AttributeDataType": "String",
    "GenreType Instance Record": [
      {
        "Name": "GenreType Instance Record",
        "Action": "NoChange",
        "TitleGenre": [ "Test1" ],
        "GenreL": [ "Test1" ],
        "GenreSource": [ "ABC" ],
        "GenreT": [ "Test1" ]
      },
      {
        "Name": "GenreType Instance Record",
        "Action": "NoChange",
        "TitleGenre": [ "Test2" ],
        "GenreL": [ "Test2" ],
        "GenreSource": [ "PQR" ],
        "GenreT": [ "Test2" ]
      }
    ]
  }
}
in which I need to search for documents by the attribute "TitleGenre" WHERE GenreSource = "ABC" inside the GenreType complex attribute. It's an array in the JSON document.
I was using the search option below (the options are written in XML, but the documents being searched are JSON):
<extract-path>/GenreType/"GenreType Instance Record"[#GenreSource="ABC"]</extract-path>
I am still facing issues. If possible, could you please let me know how JSON documents can be searched for such a specific requirement? @Wagner Michael
You can extract document data by using the extract-document-data option.
xquery version "1.0-ml";

let $doc := object-node {
  "GenreType": object-node {
    "Name": "GenreType",
    "LongName": "Genre Complex",
    "AttributeDataType": "String",
    "GenreType-Instance-Record": array-node {
      object-node {
        "TitleGenre": array-node { "Test1" },
        "GenreSource": array-node { "ABC" }
      },
      object-node {
        "TitleGenre": array-node { "Test2" },
        "GenreSource": array-node { "PQR" }
      }
    }
  }
}
return xdmp:document-insert("test.xml", $doc);

import module namespace search = "http://marklogic.com/appservices/search"
  at "/MarkLogic/appservices/search/search.xqy";

search:search(
  "Genre Complex",
  <options xmlns="http://marklogic.com/appservices/search">
    <extract-document-data>
      <extract-path>/GenreType/GenreType-Instance-Record[GenreSource = "ABC"]</extract-path>
    </extract-document-data>
  </options>
)
In this case /GenreType/GenreType-Instance-Record is the XPath to the extracted element.
Following up on your comment, I also added a predicate [GenreSource = "ABC"]. This way, only GenreType-Instance-Record nodes which have a GenreSource of "ABC" are extracted!
Result:
....
<search:extracted kind="array">[{"GenreType-Instance-Record":{"TitleGenre":["Test1"], "GenreSource":["ABC"]}}]
</search:extracted>
....
Note:
You can add multiple <search:extract-path> elements (see the sketch below).
I had to change the name of GenreType Instance Record to GenreType-Instance-Record. I am not sure whether you can have property names with whitespace and still access them with XPath; I couldn't get it to work that way.
Please post your search options if this does not work for you.
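For example, a sketch of the options element with two extract paths (the second path is only illustrative, reusing the LongName property from the sample document):
<extract-document-data>
  <extract-path>/GenreType/GenreType-Instance-Record[GenreSource = "ABC"]</extract-path>
  <extract-path>/GenreType/LongName</extract-path>
</extract-document-data>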
Edit: Added a predicate to the extract-path.
Thank you so much, Wagner, for your prompt trials. This helped me find an accurate solution to my problem for now. Since I could not modify the names in the documents, I used the extract path below: /GenreType/array-node("GenreType Instance Record")/object-node()/TitleGenre[following-sibling::GenreSource="ABC"]
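As an aside, for anyone wanting to validate such paths first: a small Server-Side JavaScript sketch, assuming a MarkLogic release where cts.validExtractPath is available (per the question above it was not recognised in 9.0-1, so this presumably requires a later point release):
// returns true if the path is accepted by extract-document-data
cts.validExtractPath(
  '/GenreType/array-node("GenreType Instance Record")/object-node()[GenreSource = "ABC"]',
  {} // namespace bindings; none are needed for pure JSON paths
);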

Is there a way to get and set a control variable during the aggregation process?

I have simplified my specific problem so it's easier to understand, but the data I want to aggregate consists of user events on a video player page, and it looks like this:
{_id:"5963796a46d12ed9891f8c80",eventName:"Click Freature 1",creation:1499691279492},
{_id:"59637a5a46d12ed9891f8e0d",eventName:"Video Play",creation:1499691608106},
{_id:"59637a9546d12ed9891f8e90",eventName:"Click Freature 1",creation:1499691664633},
{_id:"59637c0f46d12ed9891f9146",eventName:"Video Pause",creation:1499692055335}
So the events are consistent and in chronological order. Let's say I want to count the number of times the user clicked feature 1, but only while the video is playing.
I believe I would have to have some control variable like "isVideoPlaying" that is set to true when a "Video Play" event comes up and to false in case of a "Video Pause", and then add the "Click Feature 1" events to the count only while it's set to true.
Is there any way to do something like that?
Is there a way to get and set a control variable during the aggregation process?
No, there is no way to keep track of a previous/next state while the aggregation pipeline executes.
The idea instead is to convert each event type into its own array of creation times.
You have two options.
Breakdown
Video Play: [1, 5, 7]
Video Pause: [3, 6, 10]
Features: [2, 4, 8, 9]
Video play-pause pairs: [1,3], [5,6], [7,10]
Features inside play-pause pairs: 2, 8, 9
Video pause-play pairs: [3,5], [6,7], [10,-]
Features inside pause-play pairs: 4
Expected output
{count: 3}
First option (you do all the work in the aggregation pipeline):
Use extra stages to transform the documents into the events-array structure.
Consider the documents below:
db.collection.insertMany([
  { eventName: "Video Play", creation: 1 },
  { eventName: "Click Features 1", creation: 2 },
  { eventName: "Video Pause", creation: 3 },
  { eventName: "Click Features 1", creation: 4 },
  { eventName: "Video Play", creation: 5 },
  { eventName: "Video Pause", creation: 6 },
  { eventName: "Video Play", creation: 7 },
  { eventName: "Click Features 1", creation: 8 },
  { eventName: "Click Features 1", creation: 9 },
  { eventName: "Video Pause", creation: 10 }
]);
You can use the aggregation below.
It uses two $group stages to convert the events into per-event time arrays, followed by a $project stage that uses $let to bind each event's creations array to a variable.
For an explanation of the logic that goes inside $let, see the second option.
db.collection.aggregate([
  {
    "$sort": {
      "eventName": 1,
      "creation": 1
    }
  },
  {
    "$group": {
      "_id": "$eventName",
      "creations": { "$push": "$creation" }
    }
  },
  {
    "$group": {
      "_id": null,
      "events": {
        "$push": {
          "eventName": "$_id",
          "creations": "$creations"
        }
      }
    }
  },
  {
    "$project": {
      "count": {
        "$let": {
          "vars": {
            "video_play_events": {
              "$arrayElemAt": [
                "$events.creations",
                { "$indexOfArray": [ "$events.eventName", "Video Play" ] }
              ]
            },
            "click_features_event": {
              "$arrayElemAt": [
                "$events.creations",
                { "$indexOfArray": [ "$events.eventName", "Click Features 1" ] }
              ]
            },
            "video_pause_events": {
              "$arrayElemAt": [
                "$events.creations",
                { "$indexOfArray": [ "$events.eventName", "Video Pause" ] }
              ]
            }
          },
          "in": {*}
        }
      }
    }
  }
])
*At this point you have a creations array for each event type. Insert the aggregation expression from the second option here, replacing $video_play_events with $$video_play_events (and so on) to access the variables bound in the $let stage.
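For illustration, a sketch of what that "in" body could look like once the second option's expression (shown further below) is adapted to the $let variables; the pairing logic is unchanged, only the $$ prefixes differ:
"in": {
  "$sum": {
    "$map": {
      "input": { "$range": [ 0, { "$size": "$$video_play_events" } ] },
      "as": "z",
      "in": {
        "$size": {
          "$filter": {
            "input": "$$click_features_event",
            "as": "fe",
            "cond": {
              "$and": [
                { "$gt": [ "$$fe", { "$arrayElemAt": [ "$$video_play_events", "$$z" ] } ] },
                { "$lt": [ "$$fe", { "$arrayElemAt": [ "$$video_pause_events", "$$z" ] } ] }
              ]
            }
          }
        }
      }
    }
  }
}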
Second option (you save each event type in its own array):
db.collection.insert([
  {
    "video_play_events": [ 1, 5, 7 ],
    "click_features_event": [ 2, 4, 8, 9 ],
    "video_pause_events": [ 3, 6, 10 ]
  }
])
You can manage the array growth by adding an extra "count" field to limit the number of events stored in one document, as sketched below.
You can keep multiple documents, one per chosen time slice.
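A hedged sketch of that bucketed write (the "slice" field and the 500-event cap are arbitrary illustrative choices; the upsert creates a fresh bucket once the current one is full):
db.collection.update(
  { "slice": sliceId, "count": { "$lt": 500 } }, // only fill non-full buckets
  {
    "$push": { "click_features_event": eventTime },
    "$inc": { "count": 1 }
  },
  { "upsert": true }
)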
This simplifies the aggregation to the pipeline below.
It iterates over video_play_events and, for each play/pause pair (pl and pu), filters the click-feature events that fall between them.
$filter + $size count the feature events inside each play/pause pair, and $map + $sum total them across all pairs.
db.collection.aggregate([
  {
    "$project": {
      "count": {
        "$sum": {
          "$map": {
            // one iteration per play/pause pair
            "input": { "$range": [ 0, { "$size": "$video_play_events" } ] },
            "as": "z",
            "in": {
              "$let": {
                "vars": {
                  "pl": { "$arrayElemAt": [ "$video_play_events", "$$z" ] },
                  "pu": { "$arrayElemAt": [ "$video_pause_events", "$$z" ] }
                },
                "in": {
                  // count the feature clicks that fall inside this pair
                  "$size": {
                    "$filter": {
                      "input": "$click_features_event",
                      "as": "fe",
                      "cond": {
                        "$and": [
                          { "$gt": [ "$$fe", "$$pl" ] },
                          { "$lt": [ "$$fe", "$$pu" ] }
                        ]
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
])
Notes:
In both cases you run the risk of hitting the 16 MB document limit, depending on the number of events you are trying to aggregate.
You can use the async module to run parallel queries, each with filters that bound the data being aggregated, followed by client-side logic to combine the partial counts.
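A sketch of that fan-out, assuming Node with the async module and a callback-style MongoDB driver; countForSlice is a hypothetical helper that runs the option-2 pipeline over one time slice:
const async = require("async");

// hypothetical helper: runs the option-2 pipeline over one slice
function countForSlice(db, sliceId, callback) {
  db.collection("events")
    .aggregate([
      { $match: { slice: sliceId } }, // assumes the bucketing shown above
      // ...option-2 $project stage here...
    ])
    .toArray(callback);
}

async.parallel(
  [
    (cb) => countForSlice(db, 1, cb),
    (cb) => countForSlice(db, 2, cb)
  ],
  (err, results) => {
    if (err) throw err;
    // client-side logic: sum the partial counts from each slice
    const total = results.reduce(
      (sum, docs) => sum + docs.reduce((s, d) => s + d.count, 0),
      0
    );
    console.log({ count: total });
  }
);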

Combine $geoNear with Another Collection

I have 2 collections, resto and meal (each meal document has the id of the resto it belongs to). I want to fetch nearby restos that have at least 1 meal. Right now I'm able to fetch nearby restaurants, but how do I make sure each one has at least 1 meal?
restoModel.aggregate([{
  "$geoNear": {
    "near": {
      "type": "Point",
      "coordinates": coordinates
    },
    "minDistance": 0,
    "maxDistance": 1000,
    "distanceField": "distance",
    "spherical": true,
    "limit": 10 // fetch 10 restos at a time
  }
}]);
Sample resto doc:
{
  _id: "100",
  location: { coordinates: [ -63, 42 ], type: "Point" },
  name: "Burger King"
}
Sample meal doc:
{
  resto_id: "100", // restaurant that this meal belongs to
  name: "Fried Chicken",
  price: 12.99
}
I can create a pipeline that fetches 10 restaurants, each joined with its associated meal documents, and removes restaurants that have no meals. But a single fetch could then return 0 documents if none of the 10 have meals. How do I make sure it keeps searching until 10 meal-having restos are returned?
This actually has a few approaches to consider, which have their own benefits or pitfalls associated.
Embedding
The cleanest and simplest approach is to embed the "menu" and its "count" within the restaurant's parent document instead.
This is also quite reasonable, since you appear to be thinking in relational modelling terms; MongoDB is not an RDBMS, nor "should" it generally be used as one. Instead, we play to the strengths of what MongoDB can do.
The structure would then be like this:
{
  _id: "100",
  location: { coordinates: [ -63, 42 ], type: "Point" },
  name: "Burger King",
  menuCount: 1,
  menu: [
    {
      name: "Fried Chicken",
      price: 12.99
    }
  ]
}
This is then quite simple to query; in fact we can use a regular $nearSphere, since we no longer need any aggregation conditions:
restoModel.find({
  "location": {
    "$nearSphere": {
      "$geometry": {
        "type": "Point",
        "coordinates": coordinates
      },
      "$maxDistance": 1000
    }
  },
  "menuCount": { "$gt": 0 } // at least one menu item
}).skip(0).limit(10)
Simple and effective. This is in fact exactly why you should be using MongoDB this way, since the "related" data is already embedded in the parent item. There are of course "trade-offs", but the biggest advantages are speed and efficiency.
Maintaining the menu items within the parent, along with the current count, is also simple, since we can "increment" the count whenever a new item is added:
restoModel.update(
  { "_id": id, "menu.name": { "$ne": "Pizza" } },
  {
    "$push": { "menu": { "name": "Pizza", "price": 19.99 } },
    "$inc": { "menuCount": 1 }
  }
)
This adds the new item only where it does not already exist and increments the number of menu items, all in one atomic operation; that atomicity is another reason to embed relationships where an update affects both parent and child at the same time.
This is really what you should be going for. Sure, there are limits to what you can embed, but this is just a "menu", relatively small in comparison to the other sorts of relationships we could define.
Elliot of MongoDB put it best by stating "The entire content of War and Peace as text fits within 4MB", and that was at a time when the limit on a BSON document was 4MB. Now it's 16MB, more than capable of handling any "menu" most customers could be bothered browsing through.
Aggregate with $lookup
Where you keep to a standard relational pattern, there are some problems to overcome. The big difference from "embedding" is that, since the data for the "menu" lives in another collection, you need $lookup in order to "pull" it in and subsequently "count" how many items there are.
In relation to a "nearest" query, unlike the sample above, we cannot put those additional constraints "within the 'near' query itself". This means that out of the default 100 results returned by $geoNear, some items "may not" meet the additional constraint, which you have no choice but to apply later, "after" the $lookup is performed:
restoModel.aggregate([
  { "$geoNear": {
    "near": {
      "type": "Point",
      "coordinates": coordinates
    },
    "spherical": true,
    "limit": 150,
    "distanceField": "distance",
    "maxDistance": 1000
  }},
  { "$lookup": {
    "from": "menuitems",
    "localField": "_id",
    "foreignField": "resto_id",
    "as": "menu"
  }},
  { "$redact": {
    "$cond": {
      "if": { "$gt": [ { "$size": "$menu" }, 0 ] },
      "then": "$$KEEP",
      "else": "$$PRUNE"
    }
  }},
  { "$limit": 10 }
])
As such, your only option here is to "increase" the number of "possible" returns and then apply the additional pipeline stages to "join", "calculate" and "filter", leaving the eventual $limit to its own pipeline stage.
A noted problem here is "paging" the results, because the "next page" essentially needs to "skip over" the results of the prior page. To this end, it is better to implement a "forward paging" concept, much as described in this post: Implementing Pagination In MongoDB
The general idea is to "exclude" the previously "seen" results via $nin. This is something that can be done using the "query" option of $geoNear:
restoModel.aggregate([
  { "$geoNear": {
    "near": {
      "type": "Point",
      "coordinates": coordinates
    },
    "spherical": true,
    "limit": 150,
    "distanceField": "distance",
    "maxDistance": 1000,
    "query": { "_id": { "$nin": list_of_seen_ids } }
  }},
  { "$lookup": {
    "from": "menuitems",
    "localField": "_id",
    "foreignField": "resto_id",
    "as": "menu"
  }},
  { "$redact": {
    "$cond": {
      "if": { "$gt": [ { "$size": "$menu" }, 0 ] },
      "then": "$$KEEP",
      "else": "$$PRUNE"
    }
  }},
  { "$limit": 10 }
])
Then at least you don't get the same results as on the previous page. But it's a bit more work, and quite a lot more than what can be done with the embedded model shown earlier.
Conclusion
The general case leads towards "embedding" as the better option for this use case. You have a "small" number of related items, and the data makes more sense directly associated with the parent, since you typically want the menu and the restaurant information at the same time.
Modern releases of MongoDB since 3.4 do allow a "view" to be created, but the general premise is based on the aggregation pipeline. We could therefore "pre-join" the data in a "view"; however, since any query against a view effectively appends its conditions to the defined pipeline, standard query operators such as $nearSphere cannot be applied. In the same manner, you also cannot use $geoNear with "views".
Maybe those constraints will change in the future, but right now they make a view unviable as an option, since we cannot perform the required queries on the "pre-joined" source under a more relational design.
So you can do it either of the two ways presented, but for my money I would model it as embedded here instead.

Backbone? Can.js? Ghetto DIY? How should I work with this data?

I'm working on an application that lets our security dispatchers update a page that contains current road and campus conditions. The backend is a Node.js/Express stack, and the data is a simple JSON structure that looks something like this:
{
  "campus": { "condition": "open", "status": "normal" },
  "roads": { "condition": "wet", "status": "alert" },
  "adjacentroads": { "condition": "not applicable", "status": "warning" },
  "transit": { "condition": "on schedule", "status": "normal" },
  "classes": { "condition": "on schedule", "status": "normal" },
  "exams": { "condition": "on schedule", "status": "normal" },
  "announcements": "The campus is currently under attack by a herd of wild velociraptors. It is recommended that you do not come to campus at this time. Busses are delayed.",
  "sidebar": [
    "<p>Constant traffic updates can be heard on radio station AM1234. Traffic updates also run every 10 minutes on AM5678 and AM901.</p>",
    "<p>This report is also available at <strong>555-555-1234</strong> and will be updated whenever conditions change.</p>"
  ],
  "links": [
    {
      "category": "Transportation Links",
      "links": [
        { "url": "http://www.localtransit.whatever", "text": "Local Transit Agency" },
        { "url": "http://m.localtransit.whatever", "text": "Local Transit Agency Mobile Site" }
      ]
    },
    {
      "category": "Weather Forecasts",
      "links": [
        { "url": "http://weatheroffice.ec.gc.ca/canada_e.", "text": "Environment Canada" },
        { "url": "http://www.theweathernetwork.com", "text": "The Weather Network" }
      ]
    },
    {
      "category": "Campus Notices & Conditions",
      "links": [
        { "url": "http://www.foo.bar/security", "text": "Security Alerts & Traffic Notices" },
        { "url": "http://foo.bar/athletics/whatever", "text": "Recreation & Athletics Conditions" }
      ]
    },
    {
      "category": "Wildlife Links",
      "links": [
        { "url": "http://velociraptors.info", "text": "Velociraptor Encounters" }
      ]
    }
  ],
  "lastupdated": 1333151930179
}
I'm wondering what the best way of working with this data on the client side would be (e.g. on the page that the dispatchers use to update the data). The page is a mix of selects (the campus, roads, etc. conditions), TinyMCE textareas (announcements and sidebar) and text inputs (links). I'm open to changing this data structure if necessary, but it seems to work well. I've been looking at Backbone, and also Can.js, but I'm not sure whether either of those is suitable for this.
Some additional information:
there's no need to update an individual item in the data structure separately; I plan on POSTing the entire structure when it's saved. That said...
there's actually two different views, one for the dispatchers and another for their supervisors. The dispatchers only have the ability to change the campus, roads, etc conditions through drop-downs and furthermore can only change the "condition" key; each possible condition has a default status assigned to it. Supervisors can override the default status, and have access to the announcements, sidebar and links keys. Maybe I do need to rethink the previous point about POSTing the whole thing at once?
the supervisors need to be able to add and remove links, as well as add and remove entire link categories. This means that DOM elements need to be added and removed, which is why I'm thinking of using something like Backbone or Can.js instead of just writing some ghetto solution that looks at all the form elements and builds the appropriate JSON to POST to the server.
Suggestions welcomed!
CanJS works great with nested data. can.Model inherits from can.Observe, which allows you to listen to any changes in the object structure.
If you include can.Observe.Delegate you get an even more powerful event mechanism (example adapted from the docs):
// create an observable
var observe = new can.Observe({
  name: {
    first: "Justin Meyer"
  }
})
var handler;
// listen to changes on a property
observe.delegate("name.first", "set",
  handler = function(ev, newVal, oldVal, prop) {
    this             //-> "Justin"
    ev.currentTarget //-> observe
    newVal           //-> "Justin"
    oldVal           //-> "Justin Meyer"
    prop             //-> "name.first"
  });
// change the property
observe.attr('name.first', "Justin")
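Applied to your conditions document, a minimal sketch of the same pattern (the property names come from your JSON; the handler body is just illustrative):
var conditions = new can.Observe({
  campus: { condition: "open", status: "normal" },
  roads: { condition: "wet", status: "alert" }
});

// re-render the campus row whenever a dispatcher changes its condition
conditions.delegate("campus.condition", "set",
  function(ev, newVal, oldVal, prop) {
    // e.g. update the <select> and status badge for "campus" here
  });

// a supervisor override would fire the same handler
conditions.attr("campus.condition", "closed");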
