I am trying to adapt a script I already have to run using .csv data input. When the script is run without the .csv, it runs perfectly for any configuration I choose. When it runs using the .csv, whatever scenario is in the first row runs perfectly, but everything from there on fails. The failures happen because some of my variables are being reused from the first iteration, and I don't know how to stop this from happening.
This is what my script looks like:
HTTP Request - GET ${url} (url is declared in the CSV data input, and changes each run)
-> postprocessor that extracts Variable_1, Variable_2 and Variable_3
Sampler1
-> JSR223 preprocessor: creates payloadSampler1 using javascript, example:
var payloadSampler1 = { };
payloadSampler1.age = vars.get("Variable_2");
payloadSampler1.birthDate = "1980-01-01";
payloadSampler1.phone = {};
payloadSampler1.phone.number = "555-555-5555";
vars.put("payloadSampler1", JSON.stringify(payloadSampler1));
Sampler2
-> JSR223 preprocessor: creates payloadSampler2 using javascript (same as above but for different values)
Sampler3
-> JSR223 preprocessor: creates payloadSampler3 using javascript (same as above but for different values)
Sampler4
-> JSR223 preprocessor: creates payloadSampler4 using javascript (same as above but for different values)
HTTP Request - POST ${url}/${Variable_1}/submit
-> JSR223 preprocessor: creates payloadSubmit using javascript, mixing and matching the results from the samplers above, like so:
var payloadSubmit = { };
if (vars.get("someVar") != "value" && vars.get("someVar") != "value2" && vars.get("differentVar") != "true") {
payloadSubmit.ageInfo = [${payloadSampler1}];
}
if (vars.get("someVar2") != "true") {
payloadSubmit.paymentInfo = [${payloadSampler2}];
}
payloadSubmit.emailInfo = [${payloadSampler3}];
payloadSubmit.country = vars.get("Variable_3");
vars.put("payloadSubmit", JSON.stringify(payloadSubmit));
-> Body Data referencing the payloadSubmit variable (shown in a screenshot in the original post, not reproduced here)
I have a Debug PostProcessor to see the values of all the variables I am creating. For the first iteration of my script, everything is perfect. For the second one, however, the Debug PostProcessor shows the values for all the payloadSamplers and all the Variables correctly changed to match the new row data (from the csv), but the final variable, payloadSubmit, just reuses whatever the values were from the first thread iteration.
Example:
Debug PostProcessor at the end of first iteration shows:
Variable_1=ABC
Variable_2=DEF
Variable_3=GHI
payloadSampler1={"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}}
payloadSampler2={"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"}
payloadSampler3={"email":"tes#email.com"}
payloadSubmit={"ageInfo":[{"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}],"paymentInfo":[{"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"],"emailInfo":[{"email":"tes#email.com"}],"country":"GHI"}
But at the end of the 2nd iteration it shows:
Variable_1=123
Variable_2=456
Variable_3=789
payloadSampler1={"age":"95","email":null,"name":{"firstName":"Sam"}},{"age":"12","email":null}}
payloadSampler2={"paymentChoice":{"cardType":"CreditCard","cardSubType":"DC"}},"amount":"19.99","currency":"USD"}
payloadSampler3={"email":"tes2#email.com"}
payloadSubmit={"ageInfo":[{"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}],"paymentInfo":[{"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"],"emailInfo":[{"email":"tes#email.com"}],"country":"USA"}
I can also see that the final HTTP Request is indeed sending the old values.
My very limited understanding is that because I am invoking the variables like "${payloadSampler1}", it will use the value that was set the first time the sampler was run (back in the 1st thread iteration). These are the things I have tried:
If I use vars.get("payloadSubmit") in the body of an HTTP Sampler, I get an error, so that is not an option. If I use vars.get("payloadSampler1") in the samplers that create the variables, extra escape characters are added, which breaks my JSON. I have tried adding a counter to the end of the variable name and having that counter increase on each thread iteration, but the result is the same: all the variables and samplers other than the last one have updated values, but the last one always reuses the variables from the first thread iteration.
I also tried to use ${__javaScript(vars.get("payloadSubmit_"+vars.get("ThreadIteration")))}, but the results are always the same.
And I have also tried using the ${__counter(,)} element, but if I set it to TRUE, it is always 1 for each thread iteration, and if I set it to FALSE, it starts at 2 (I assume because I use a counter in another sampler within this thread, but even after removing that counter this still happens).
I am obviously doing something (or many things) wrong.
If anyone can spot what my mistakes are, I would really appreciate hearing your thoughts. Or even being pointed to some resource I can read for an approach I can use for this. My knowledge of both javascript and jmeter is not great, so I am always open to learn more and correct my mistakes.
Finally, thanks a lot for reading through this wall of text and trying to make sense of it.
It's hard to tell where exactly your problem is without seeing the values of these someVar and payload variables. Most probably something cannot be parsed as valid JSON, so on the 2nd iteration your last JSR223 PreProcessor fails before running to the end, and as a result your payloadSubmit variable doesn't get updated. Take a closer look at the JMeter GUI: there is a yellow triangle with an exclamation sign which indicates the number of errors in your scripts, and clicking it opens the JMeter Log Viewer.
If there is a red number next to the triangle, you clearly have a problem, and you will need to check the jmeter.log file for the details.
Since JMeter 3.1 it has been recommended to use the Groovy language for any form of scripting, mainly because Groovy has higher performance than the other scripting options. Check out the Parsing and producing JSON guide to learn more about working with JSON data in Groovy.
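A likely culprit worth checking: ${...} references inside a JSR223 script are substituted before the script is compiled, so with script caching enabled the first iteration's values get baked in and reused on every later iteration. Reading the variables with vars.get() at run time avoids this. Below is a minimal sketch of the payloadSubmit preprocessor rewritten that way; the small vars stub exists only to make the snippet self-contained (inside a real JSR223 element, JMeter provides vars for you):

```javascript
// Stand-in for JMeter's JMeterVariables object (vars), so the sketch is
// runnable on its own; in a real JSR223 element `vars` is provided.
var vars = {
  _store: {},
  get: function (k) { return this._store[k]; },
  put: function (k, v) { this._store[k] = v; }
};

// Values the earlier preprocessors would have stored on this iteration.
vars.put("payloadSampler1", JSON.stringify([{ age: "95" }, { age: "12" }]));
vars.put("Variable_3", "789");

// Build payloadSubmit with vars.get(...) + JSON.parse instead of ${...}.
// ${payloadSampler1} would be substituted once, when the script is first
// compiled and cached, which is why later iterations reuse stale values.
var payloadSubmit = {};
payloadSubmit.ageInfo = JSON.parse(vars.get("payloadSampler1"));
payloadSubmit.country = vars.get("Variable_3");
vars.put("payloadSubmit", JSON.stringify(payloadSubmit));
```

The same vars.get() + JSON.parse pattern applies to the other payloadSampler variables in the conditional branches.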
(EDIT: I solved my issue! Though I still don't understand the situation I see in the debugger. See my answer for more details)
(TL;DR: index is always undefined when used with a certain array. Doubt that would be enough info, but maybe for someone who's experienced this before.)
So basically, I'm using an array in javascript, and I started noticing some odd behaviour, so I went to the debugger, and I found that a defined variable representing the index was being treated as undefined. It's ONLY the case with this specific array and its index. I don't get errors saying that it's undefined, but when I look in the debugger, it says it's undefined when I hover over the variable in the array call (though it's defined if I hover over it anywhere before the array call), and I'm getting bugs that make it clear the array is not being used properly. It makes absolutely no sense to me, but maybe someone's encountered a similar issue.
Take this example of code: it's drawing a tilemap layer for my MapRenderer class. The culprit here is "this.Map.layers". When I step into this function in the debugger, layerIndex is defined if I hover over the function parameter, but if I hover over it on the array call, it says it's undefined, and the whole logic breaks.
DrawLayer(ctx, camPos, layerIndex)
{
// Get the map/tile position based on the camera position, to decide which tile to start drawing.
var mapCamPos = new Point(Math.floor(camPos.x/TILESIZE),
Math.floor(camPos.y/TILESIZE));
// Get the max tile position based on camera position, to decide where to stop drawing.
var camPosLimit = new Point(Math.ceil(this.DrawSize.x/TILESIZE)+mapCamPos.x,
Math.ceil(this.DrawSize.y/TILESIZE)+mapCamPos.y);
// loop through all tiles we need to draw using rows and columns.
for(var row=mapCamPos.y;row<this.Map.layers[layerIndex].height&&row<=camPosLimit.y;row++)
{
for(var col=mapCamPos.x;col<this.Map.layers[layerIndex].width&&col<=camPosLimit.x;col++)
{
var currentTileID = this.GetTileID(layerIndex, row, col);
if (currentTileID >= 0 && !isNaN(currentTileID))
{
var drawPos = new Point(((col*TILESIZE)-camPos.x), ((row*TILESIZE)-camPos.y));
this.Spritesheet.PlayFrame(currentTileID);
this.Spritesheet.Draw(ctx, drawPos);
}
}
}
}
This is happening in many instances of my code, wherever I'm using that array. I want to add how this started, because all of this logic was working previously. I had my tilemap working with multiple csv files, which I loaded as 2d arrays into an array. Today, I decided to switch it all to use one json file, as it is simply cleaner (one file rather than one csv per map layer), and I can add extra properties and such in the future rather than just having the tileIDs.
So, in the above example, this.Map gets initialized through an ajax call (using jquery) to read the json file, before DrawLayer ever gets called. Still, I don't see why this would cause the problem. Running "mapRenderer.Map.layers" in the console tells me that it's a normal array, and when I call it normally from the console, it works fine. I'm so confused by this issue. I had literally the same function before and it worked; my array has just changed a bit (it used to be "this.Layers" instead of "this.Map.layers"), but it's still a normal array... I don't see why it would behave so differently just because it was generated via json...
Any help or explanations would be greatly appreciated, thanks.
I still don't understand the situation I see in the debugger; maybe it's a Firefox bug, or a feature I don't understand. But I managed to fix my issue. It was a basic logic bug: I'm using the "Tiled" map editor, and when you export those maps to CSVs, the tile IDs are zero-based, meaning empty tiles are -1. When you export to JSON, they aren't zero-based, meaning empty tiles are 0, which I failed to notice, and this was the root of all my issues. If anyone can explain why the Firefox debugger might say defined variables are "undefined" when you hover over them, that would still be good to know.
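For anyone hitting the same off-by-one, a normalization pass over the JSON-exported layer data restores the CSV convention the old drawing code expected. This is only a sketch; the layer shape below is modeled on Tiled's JSON output and may differ from yours:

```javascript
// Normalize Tiled tile IDs after switching from CSV to JSON export.
// CSV export: IDs are zero-based, empty tiles are -1.
// JSON export: GIDs are one-based, empty tiles are 0.
// Subtracting 1 restores the CSV convention.
function normalizeLayer(layer) {
  layer.data = layer.data.map(function (gid) {
    return gid - 1; // empty (0) becomes -1, first tile (1) becomes 0
  });
  return layer;
}

var jsonLayer = { width: 3, height: 1, data: [0, 1, 5] };
normalizeLayer(jsonLayer); // data is now [-1, 0, 4]
```

Running this once on each layer right after the ajax load would let the existing DrawLayer logic work unchanged.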
I am developing a sales report system which relies heavily on JSON communication. I have a script that records client visits into a Javascript object, which works fine, apparently.
salesReport = [];
...
salesReport.push({
"nr": visitCounter,
"kto": ActiveAccount,
"dok": dokName
});
Each time a visit is logged the push function is activated.
On the first run I get the expected result:
[{"nr":1,"kto":"52803","dok":""}]
But when I push again, I get this result:
[[[[[{"nr":1,"kto":"52803","dok":""}],{"nr":2,"kto":"52350","dok":""}], {"nr":3,"kto":"52539","dok":""}],{"nr":4,"kto":"50869","dok":""}],{"nr":5,"kto":"52135","dok":""}]
The '[' brackets are added at the beginning of the output, and at the end of each entry. Why is that?
Shouldn't the '[' and ']' only be added at the beginning and the end, and then only one time?
So, it seems there was an idiotic error in another script.
At the end of each session the visitor log is stored in a local storage file. Which is then read back into the javascript Object if another session is started the same day.
The problem was that I had used the .push function to read the "old" data back into the object. Thus creating a double push of sorts which led the system to think that it was all one entry instead of several entries.
So in the end it was "my bad".
Logging this in case someone else experiences the same thing in the future.
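For future readers, the nesting bug and its fix can be reproduced in a few lines: pushing a saved array back into the log adds the whole array as a single element, while concat (or push.apply) merges it entry by entry. The variable names here are illustrative:

```javascript
// Entries restored from local storage at the start of a session.
var saved = [{ nr: 1, kto: "52803", dok: "" }];

// The bug: .push nests the whole saved array as one element.
var buggy = [];
buggy.push(saved);                          // buggy is now [[{...}]]
buggy.push({ nr: 2, kto: "52350", dok: "" });

// The fix: merge the saved entries element by element.
var fixed = [];
fixed = fixed.concat(saved);                // fixed is now [{...}]
fixed.push({ nr: 2, kto: "52350", dok: "" });
```

JSON.stringify(buggy) reproduces the extra '[' brackets from the question, while JSON.stringify(fixed) gives the expected flat list.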
I have a database with roughly 1.2M names. I'm using Twitter's typeahead.js to remotely fetch the autocomplete suggestions when you type someone's name. In my local environment this takes roughly 1-2 seconds for the results to appear after you stop typing (the autocomplete doesn't appear while you are typing), and 2-5+ seconds on the deployed app on Heroku (using only 1 dyno).
I'm wondering if the reason why it only shows the suggestions after you stop typing (and a few seconds delay) is because my code isn't as optimized?
The script on the page:
<script type="text/javascript">
$(document).ready(function() {
    $("#navPersonSearch").typeahead({
        name: 'people',
        remote: 'name_autocomplete/?q=%QUERY'
    })
    .keydown(function(e) {
        if (e.keyCode === 13) {
            $("form").trigger('submit');
        }
    });
});
</script>
The keydown snippet is because without it my form doesn't submit for some reason when pushing enter.
My Django view:
def name_autocomplete(request):
    query = request.GET.get('q', '')
    if len(query) > 0:
        results = Person.objects.filter(short__istartswith=query)
        result_list = []
        for item in results:
            result_list.append(item.short)
    else:
        result_list = []
    response_text = json.dumps(result_list, separators=(',', ':'))
    return HttpResponse(response_text, content_type="application/json")
The short field in my Person model is also indexed. Is there a way to improve the performance of my typeahead?
I don't think this is directly related to Django, but I may be wrong. I can offer some generic advice for this kind of situation:
(My money is on #4 or #5 below).
1) What is the average "ping" from your machine to Heroku? If it's far away, that adds a little extra overhead. Not much, though; certainly not much compared to the multi-second delays you are referring to. The penalty will be larger with https, mind you.
2) Check the values of rateLimitFn and rateLimitWait in your remote dataset. Are they the defaults?
3) In all likelihood, the problem is database/dataset related. The first thing to check is how long it takes to establish a connection to the database (do you use a connection pool?).
4) Second thing: how long does it take to run the query? My bet is on this point or the next. Add debug prints, or use New Relic (even the free plan is OK). Have a look at the generated query and make sure it is indexed. Have your DB "explain" the execution plan for such a query and make sure it uses the index.
5) Third thing: are the results large? If, for example, you specify "J" as the query, I imagine there will be lots of answers. Just fetching them and streaming them to the client takes time. In such cases:
5.1) Specify a minLength for your dataset. Make it at least 3, if not 4.
5.2) Limit the result set that your DB query returns. Make it return no more than 10, say.
6) I am no Django expert, but make sure the way you use your model in Django doesn't make it load the entire table into memory first. Just sayin'.
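To make points 5.1 and 5.2 concrete, here is a rough server-side sketch in Python. fetch_names is a stand-in for the actual database query (with the Django ORM it would be a queryset slice such as Person.objects.filter(short__istartswith=query)[:10]); treat the names and thresholds here as assumptions, not part of the original code:

```python
import json

MAX_RESULTS = 10  # point 5.2: cap the rows returned to the client

def autocomplete_payload(query, fetch_names):
    """Serialize at most MAX_RESULTS suggestions for a query.

    fetch_names is a hypothetical callable standing in for the DB
    lookup; it takes the query string and returns a list of names.
    """
    if len(query) < 3:  # point 5.1: require a minimum query length
        return json.dumps([])
    names = fetch_names(query)[:MAX_RESULTS]  # slice before serializing
    return json.dumps(names, separators=(',', ':'))
```

With the Django ORM, the slice is applied lazily, so only MAX_RESULTS rows are actually fetched from the database.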
HTH.
results = Person.objects.filter(short__istartswith=query)
result_list = []
for item in results:
    result_list.append(item.short)
Probably not the only cause of your slowness, but this is horrible from a performance point of view: never loop over a Django queryset to build a list. To assemble a list from a Django queryset you should always use values_list. In this specific case:
results = Person.objects.filter(short__istartswith=query)
result_list = results.values_list('short', flat=True)
This way you get the single field you need straight from the db, instead of fetching the whole table row, creating a Person instance from it, and finally reading the single attribute from it.
Nitzan covered a lot of the main points that would improve performance, but unlike him I think this might be directly related to Django (or at least, the server side).
A quick way to test this would be to update your name_autocomplete method to simply return 10 randomly generated strings in the format that Typeahead expects. (We want them random so that Typeahead's caching doesn't skew the results.)
What I suspect you will see is that Typeahead is now running pretty quick and you should start seeing results appear as soon as your minLength of string has been typed.
If that is the case then we will need to look into what could be slowing the query down; my Python skills are non-existent, so I can't help you there, sorry!
If that isn't the case then I would consider logging when $('#navPersonSearch') fires typeahead:initialized and typeahead:opened, to see if they bring up anything odd.
You can use django haystack, and your server side code would be roughly like:
def autocomplete(request):
    # limit to however many names you need
    sqs = SearchQuerySet().filter(content_auto=request.GET.get('q', ''))[:5]
    suggestions = [result.first_name for result in sqs]
    # you have to configure typeahead to process the returned data; this is a simple example
    data = json.dumps({'q': suggestions})
    return HttpResponse(data, content_type='application/json')
I need to implement a simple way to handle localization about weekdays' names, and I came up with the following structure:
var weekdaysLegend=new Array(
{'it-it':'Lunedì', 'en-us':'Monday'},
{'it-it':'Martedì', 'en-us':'Tuesday'},
{'it-it':'Mercoledì', 'en-us':'Wednesday'},
{'it-it':'Giovedì', 'en-us':'Thursday'},
{'it-it':'Venerdì', 'en-us':'Friday'},
{'it-it':'Sabato', 'en-us':'Saturday'},
{'it-it':'Domenica', 'en-us':'Sunday'}
);
I know I could implement something like an associative array (given that javascript does not provide associative arrays, only objects with a similar structure), but I need to iterate through the array using numeric indexes instead of labels.
So, I would like to handle this in a for loop using numeric index values (like j-1 and similar).
Is my structure correct? Given a variable "lang" holding either "it-it" or "en-us", I tried printing weekdaysLegend[j-1][lang] (or weekdaysLegend[j-1].lang; I think I tried everything!) but the result is [object Object]. Obviously I'm missing something.
Any idea?
The structure looks fine. You should be able to access values by:
weekdaysLegend[0]["en-us"]; // returns Monday
Of course this will also work for values in variables such as:
weekdaysLegend[i][lang];
for (var i = 0; i < weekdaysLegend.length; i++) {
alert(weekdaysLegend[i]["en-us"]);
}
This will alert the days of the week.
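Since the question also tried weekdaysLegend[j-1].lang, it may help to spell out the difference: dot notation looks up the literal property name "lang", while bracket notation looks up the value held by the lang variable. A small sketch:

```javascript
var weekdaysLegend = [
  { 'it-it': 'Lunedì', 'en-us': 'Monday' },
  { 'it-it': 'Martedì', 'en-us': 'Tuesday' }
];

var lang = 'en-us';

// Bracket notation: the value of the variable `lang` is used as the key.
var withBrackets = weekdaysLegend[0][lang];  // "Monday"

// Dot notation: the literal property name "lang" is looked up, and the
// objects above have no such key.
var withDot = weekdaysLegend[0].lang;        // undefined
```

(Printing a whole element, such as weekdaysLegend[0], in string context is what produces the [object Object] output mentioned in the question.)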
Sounds like you're doing everything correctly and the structure works for me as well.
Just a small note (I see the answer is already marked), as I am currently designing a large application where I want to put locales into a javascript array.
Assumption: 1000 words x 4 languages generates 'xx-xx' + the word itself...
That's 1000 rows per language, plus the same 7 chars used for the language code alone = wasted bandwidth...
And the client/browser will have to PARSE THEM ALL before it can do any lookup in the arrays at all.
here is my approach:
Why not generate the javascript for one language at a time? If the user selects another language, just respond with the right javascript file for the browser to include.
Either store a separate javascript file with a large array for each language, OR use the language as a parameter to the server-side script, e.g.:
If the language file changes a lot or you need to minimize it per user/module, that is quite achievable with this approach, as you can just add an extra parameter for which part/module to generate, or a timestamp, so the browser's cache of the javascript file will work until changes occur.
If the dynamic approach is too performance-heavy for the webserver, then publish/generate the files every time a change is made or a new locale is added - all you need is the "language linker" check at the top of the page, to decide which language file to serve to the browser.
Conclusion
This approach removes the overhead of a LOT of repeated language IDs if the locales list grows large.
You have to access an index from the array, and then a value by specifying a key from the object.
This works just fine for me: http://jsfiddle.net/98Sda/.
var day = 2;
var lang = 'en-us';
var weekdaysLegend = [
{'it-it':'Lunedì', 'en-us':'Monday'},
{'it-it':'Martedì', 'en-us':'Tuesday'},
{'it-it':'Mercoledì', 'en-us':'Wednesday'},
{'it-it':'Giovedì', 'en-us':'Thursday'},
{'it-it':'Venerdì', 'en-us':'Friday'},
{'it-it':'Sabato', 'en-us':'Saturday'},
{'it-it':'Domenica', 'en-us':'Sunday'}
];
alert(weekdaysLegend[day][lang]);
I have a Greasemonkey script that uses a Javascript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved before I encountered this problem. One value refuses to save, and I cannot for the life of me determine why. The following problem code:
Works for other larger objects being maintained.
Is presently handling a smaller total amount of data than previously worked.
Is not colliding with any function or other object definitions.
Can (optionally) successfully save the problem storage key as "{}" during code startup.
this.save = function(table) {
    var tables = this.tables;
    if (table)
        tables = [table];
    for (var i in tables) {
        logger.log(this[tables[i]]);
        logger.log(JSON.stringify(this[tables[i]]));
        GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
        logger.log(tables[i] + "_" + this.user + " updated");
        logger.log(GM_getValue(tables[i] + "_" + this.user));
    }
}
The problem is consistently reproducible, and the logging statements produce the following output in Firebug:
Object { 54,10 = Object } // Expansion shows complete contents as expected, but there is one oddity--Firebug highlights the object keys in purple instead of the usual black for anonymous objects.
{"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}} // The correctly stringified JSON.
bookmarks_HonoredMule updated
undefined
I have tried altering the format of the object keys, to no effect. Narrowing the issue down further: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that step does not help either. Reloading the page confirms that saving the nonempty object truly failed.
Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but that doesn't appear to be the problem: as previously mentioned, I've already reduced storage usage, other larger objects still save, and the total number of objects, which was not high to begin with, has been reduced by more than the quantity of data I'm attempting to store here.
It turns out the issue was that this.save() was being called from an unsafeWindow context. This is a security violation, but one that should have resulted in an access violation exception being thrown:
Error: Greasemonkey access violation: unsafeWindow cannot call GM_getValue.
Instead, GM_setValue returns having done nothing, and the subsequent logging instructions still execute, so there was no hint of the issue; the documentation may be out of date.
In my quest to solve this problem by any means, I had abstracted away the GM_ storage functions so I could use other storage mechanisms, so the workaround will be to put all save instructions in a pre-existing cleanup routine that runs via setInterval, similar to the fix described in the aforementioned documentation. (I reuse an existing interval to avoid creating excessive timers, which have in the past degraded performance over browser uptime.)
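A rough sketch of that workaround: mark tables dirty from wherever the save is triggered, and let a routine running via setInterval in the privileged script scope do the actual GM_setValue calls. GM_setValue is stubbed below so the snippet is self-contained, and the function and variable names are illustrative, not from the original script:

```javascript
// Stub standing in for Greasemonkey's GM_setValue; in a userscript the
// sandbox provides the real function.
var store = {};
function GM_setValue(key, value) { store[key] = value; }

var dirtyTables = {};

// Safe to call from any context, including handlers reached via
// unsafeWindow: it only flips a flag, no GM_ API is touched.
function markDirty(table) {
  dirtyTables[table] = true;
}

// Runs from setInterval in the privileged script scope, where GM_setValue
// is allowed, and persists every table marked dirty since the last flush.
function flushSaves(data, user) {
  for (var table in dirtyTables) {
    GM_setValue(table + "_" + user, JSON.stringify(data[table]));
  }
  dirtyTables = {};
}

// In the real script: setInterval(function () { flushSaves(db, user); }, 5000);
markDirty("bookmarks");
flushSaves({ bookmarks: { "54,10": { x: 54, y: 10 } } }, "HonoredMule");
```

This keeps all GM_ calls out of unsafeWindow-reachable code paths while still batching writes into the existing cleanup interval.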