jQuery .push gives weird output: '[[[[{....}]' - javascript

I am developing a sales report system that relies heavily on JSON communication. I have a script that records client visits into a JavaScript object, which works fine, apparently.
salesReport = [];
...
salesReport.push({
    "nr": visitCounter,
    "kto": ActiveAccount,
    "dok": dokName
});
Each time a visit is logged, the push function is called.
On the first run I get the expected result:
[{"nr":1,"kto":"52803","dok":""}]
But when I push again, I get this result:
[[[[[{"nr":1,"kto":"52803","dok":""}],{"nr":2,"kto":"52350","dok":""}], {"nr":3,"kto":"52539","dok":""}],{"nr":4,"kto":"50869","dok":""}],{"nr":5,"kto":"52135","dok":""}]
The '[' brackets are added at the beginning of the output and at the end of each entry. Why is that?
Shouldn't the '[' and ']' only be added at the beginning and at the end, and then only once?

So, it seems there was an idiotic error in another script.
At the end of each session the visitor log is stored in local storage, and it is read back into the JavaScript object if another session is started the same day.
The problem was that I had used the .push function to read the "old" data back into the object, creating a double push of sorts that led the system to treat it all as one entry instead of several entries.
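For anyone hitting the same symptom, here is a minimal sketch of the bug and the fix (assuming the log is persisted with localStorage; the key name is hypothetical):
// Buggy restore: pushing the parsed array adds it as ONE element,
// wrapping the log in an extra pair of brackets on every session.
var stored = JSON.parse(localStorage.getItem("salesReport") || "[]");
salesReport.push(stored);      // salesReport is now [[...]]

// Correct restore: replace the array (or concatenate element by element).
salesReport = stored;
// or: salesReport = salesReport.concat(stored);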
So in the end it was "my bad".
Logging this in case someone else experiences the same thing in the future.

Related

TypeError: Cannot call method "indexOf" of null

I'm trying to find the records that include "SO -", "NS - SO", "SO –" or "SWAT" in the "RESUMEN" field of a CSV file, in order to assign a new category (in this case it would be "Call Center"). So I used the "indexOf" function, which worked well.
The problem comes when I change the data source (it is a CSV too); that step now gives me the following error:
"Caused by: org.mozilla.javascript.EcmaError: TypeError: Cannot call method "indexOf" of null (script#2)"
The objective is to assign a category by identifying those words in the source file.
My code
if (RESUMEN.indexOf("SO -") != -1 || RESUMEN.indexOf("NS - SO") != -1 ||
    RESUMEN.indexOf("SO –") != -1 || RESUMEN.indexOf("SWAT") != -1) {
    var RESULTADO = "Call Center";
} else {
    var RESULTADO = "";
}
I expect the Call Center category to be assigned, as it was with the first file (I did not change anything).
Regards!
You're overcomplicating the issue.
Before the answer, remember something: there are several steps, and combinations of steps, that can achieve an incredible number of transformations to produce usable patterns; the last resort is the User Defined Java Expression step.
It seems what you want to achieve is a Value Mapping, though the difference from a direct value map in your case is that the row you're testing must contain "SO -" (or the other cases) somewhere in the text.
With this simple filter you can transform the data that contains that information as you desire, and on the "FALSE" side, treat it for errors.
This will expand your transformation a bit, but when you need to change something it will be easier than a single step with a lot of code.
As another answer pointed out, you can achieve the same result with different steps; you don't need the JavaScript step.
But if you want to go that route, you should first convert null values into, e.g., empty strings.
Simply add this at the beginning of your JavaScript code:
if (!RESUMEN) { RESUMEN = ''; }
That'll convert nulls to empty strings, and then indexOf returns correctly.
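Putting the guard together with the original check, a sketch of the full script step might look like this (RESUMEN and RESULTADO are the field names from the question):
// Convert null RESUMEN values (e.g. from the new CSV source) to empty
// strings so indexOf can be called safely.
if (!RESUMEN) { RESUMEN = ''; }

var RESULTADO = "";
if (RESUMEN.indexOf("SO -") != -1 ||
    RESUMEN.indexOf("NS - SO") != -1 ||
    RESUMEN.indexOf("SO –") != -1 ||
    RESUMEN.indexOf("SWAT") != -1) {
    RESULTADO = "Call Center";
}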

JMeter reusing previous thread payload instead of new thread payload

I am trying to adapt a script I already have to run using .csv data input. When the script is run without the .csv, it runs perfectly with any configuration I choose. When it runs using the .csv, whatever scenario is in the first row runs perfectly, but everything from there on fails. The failures happen because some of my variables are reused from the first thread, and I don't know how to stop this from happening.
This is what my script looks like:
HTTP Request - GET ${url} (url is declared in the CSV data input, and changes each run)
-> postprocessor that extracts Variable_1, Variable_2 and Variable_3
Sampler1
-> JSR223 preprocessor: creates payloadSampler1 using javascript, example:
var payloadSampler1 = { };
payloadSampler1.age = vars.get("Variable_2");
payloadSampler1.birthDate = "1980-01-01";
payloadSampler1.phone = {};
payloadSampler1.phone.number = "555-555-5555";
vars.put("payloadSampler1", JSON.stringify(payloadSampler1));
Sampler2
-> JSR223 preprocessor: creates payloadSampler2 using javascript (same as above but for different values)
Sampler3
-> JSR223 preprocessor: creates payloadSampler3 using javascript (same as above but for different values)
Sampler4
-> JSR223 preprocessor: creates payloadSampler4 using javascript (same as above but for different values)
HTTP Request - POST ${url}/${Variable_1}/submit
-> JSR223 preprocessor: creates payloadSubmit using javascript, mixing and matching the results from the above samplers, like so:
var payloadSubmit = {};
if (vars.get("someVar") != "value" && vars.get("someVar") != "value2" && vars.get("differentVar") != "true") {
    payloadSubmit.ageInfo = [${payloadSampler1}];
}
if (vars.get("someVar2") != "true") {
    payloadSubmit.paymentInfo = [${payloadSampler2}];
}
payloadSubmit.emailInfo = [${payloadSampler3}];
payloadSubmit.country = vars.get("Variable_3");
vars.put("payloadSubmit", JSON.stringify(payloadSubmit));
-> Body Data as shown in a screenshot (not reproduced here)
I have a Debug PostProcessor to see the values of all these variables I am creating. For the first iteration of my script, everything is perfect. For the second one, however, the Debug PostProcessor shows the values of all payloadSamplers and all the Variables correctly changed to match the new row data (from the csv), but the final variable, payloadSubmit, just reuses the values from the first thread iteration.
Example:
Debug PostProcessor at the end of first iteration shows:
Variable_1=ABC
Variable_2=DEF
Variable_3=GHI
payloadSampler1={"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}}
payloadSampler2={"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"}
payloadSampler3={"email":"tes#email.com"}
payloadSubmit={"ageInfo":[{"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}],"paymentInfo":[{"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"],"emailInfo":[{"email":"tes#email.com"}],"country":"GHI"}
But at the end of the 2nd iteration it shows:
Variable_1=123
Variable_2=456
Variable_3=789
payloadSampler1={"age":"95","email":null,"name":{"firstName":"Sam"}},{"age":"12","email":null}}
payloadSampler2={"paymentChoice":{"cardType":"CreditCard","cardSubType":"DC"}},"amount":"19.99","currency":"USD"}
payloadSampler3={"email":"tes2#email.com"}
payloadSubmit={"ageInfo":[{"age":"18","email":null,"name":{"firstName":"Charles"}},{"age":"38","email":null}],"paymentInfo":[{"paymentChoice":{"cardType":"CreditCard","cardSubType":"VI"}},"amount":"9.99","currency":"USD"],"emailInfo":[{"email":"tes#email.com"}],"country":"USA"}
I can also see that the final HTTP Request is indeed sending the old values.
My very limited understanding is that because I am invoking the variables like so, "${payloadSampler1}", the value that was set the first time the sampler ran (back in the 1st thread iteration) gets reused. These are the things I have tried:
If I use vars.get("payloadSubmit") in the body of an HTTP Sampler, I get an error, so that is not an option. If I use vars.get("payloadSampler1") in the Samplers that create the variables, extra escape characters are added, which breaks my JSON. I have tried adding a counter to the end of the variable name and having that counter increase on each thread iteration, but the result is the same: all the variables and samplers other than the last one have updated values, but the last one always reuses the variables from the first thread iteration.
I also tried to use ${__javaScript(vars.get("payloadSubmit_"+vars.get("ThreadIteration")))}, but the results are always the same.
And I have also tried using the ${__counter(,)} element, but if I set it to TRUE it is always 1 for each thread iteration, and if I set it to FALSE it starts at 2 (I assume because I use a counter in another sampler within this thread, but even after removing that counter this still happens).
I am obviously doing something (or many things) wrong.
If anyone can spot what my mistakes are, I would really appreciate hearing your thoughts, or even being pointed to a resource describing an approach I could use for this. My knowledge of both javascript and jmeter is not great, so I am always open to learning more and correcting my mistakes.
Finally, thanks a lot for reading through this wall of text and trying to make sense of it.
It's hard to tell where exactly your problem is without seeing the values of these someVar and payload variables; most probably something cannot be parsed as valid JSON, therefore on the 2nd iteration your last JSR223 PreProcessor fails to run to the end, and as a result your payloadSubmit variable value doesn't get updated. Take a closer look at the JMeter GUI: there is a yellow triangle with an exclamation sign which indicates the number of errors in your scripts, and it opens the JMeter Log Viewer on click.
If there is a red number next to the triangle, you obviously have a problem, and you will need to check the jmeter.log file for the details.
Since JMeter 3.1 it is recommended to use the Groovy language for any form of scripting, mainly because Groovy has much higher performance than the other scripting options. Check out the Parsing and producing JSON guide to learn more about how to work with JSON data in Groovy.
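If you do stay with JavaScript, one thing worth trying, given your own observation about "${payloadSampler1}", is to read the variables through vars.get(...) instead of inline ${...} references, since inline references get baked in when a cached script is compiled. A minimal sketch, assuming the payloadSampler variables hold valid JSON strings:
var payloadSubmit = {};
if (vars.get("someVar") != "value" && vars.get("someVar") != "value2" && vars.get("differentVar") != "true") {
    // Re-parse this iteration's value instead of a baked-in ${payloadSampler1}
    payloadSubmit.ageInfo = [JSON.parse(vars.get("payloadSampler1"))];
}
if (vars.get("someVar2") != "true") {
    payloadSubmit.paymentInfo = [JSON.parse(vars.get("payloadSampler2"))];
}
payloadSubmit.emailInfo = [JSON.parse(vars.get("payloadSampler3"))];
payloadSubmit.country = vars.get("Variable_3");
vars.put("payloadSubmit", JSON.stringify(payloadSubmit));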

Passing data from a model/database to a channel using presence

I have a simple chat application that I built, and I want to be able to display user-uploaded images (locally hosted) next to the user names on the channel's HTML page. Currently I am using Presence to track the users who are logged into the channel, etc. I was able to override the fetch/2 function, with the understanding that this would allow me to add a couple of map fields containing user model data alongside the :metas key.
From what I can tell, based on extensive IO.inspect-ing of different parts of the functions fetch/2 and handle_info/2, and some console.logging in my JS layer, the fetch/2 function is not actually getting any data out of the database, nor is it assigning it to the :metas map.
Here is my current fetch/2 function:
def fetch(_topic, entries) do
  query =
    from u in User,
      where: u.id in ^Map.keys(entries),
      select: {u.id, u}

  users = query |> Repo.all |> Enum.into(%{})

  for {key, %{metas: metas}} <- entries, into: %{} do
    {key, %{metas: metas, user: users[key]}}
  end
end
It is basically lifted directly from the documentation. In theory, the function above should query my User model and grab all of the user data based on the user ids passed to it through the entries map. users[key] comes back empty despite users being a full map of my User model.
Also, according to the documentation, the query is only supposed to run on join, so as not to overload the DB, but it seems to run 4-5 times every time I refresh the page. Another thing to note is that the user id inside entries seems to be a string. I'm not sure if this is important; I've tried passing an integer from the JS layer and also using Integer.parse in the actual fetch/2 function to change this, to no avail.
When I inspect the users map I get this:
{"1" => %MyApp.User{__meta__: #Ecto.Schema.Metadata<:loaded, "users">,
email: "test#test.com", encrypt_pass: "$pbkdf2-
sha512$160000$ebfY956TgIXhEAF.mqLJAg$QWzBubfeiy4Xrf‌​.EsFiU0jEZAuKvV4ZO5a‌​
8QpeFr817C61DuaNfyo5‌​6WWzj6jak2homCFWAINb‌​PrFtCSXUPWTw", gravatar: %
{file_name: "logo.png", updated_at: #Ecto.DateTime<2017-04-20 22:00:08>},
id: 1, inserted_at: ~N[2017-04-20 22:00:09.071000], password: nil,
updated_at: ~N[2017-04-20 22:00:09.090000]}}
My users[key] returns an empty map, %{}. Converting the input into an integer throws an error: (Poison.EncodeError) unable to encode value: {nil, "users"} if I convert it inside the Elixir code, and where: u.id in ^["undefined"], select: {u.id, u} (elixir) lib/enum.ex:1755: Enum."-reduce/3-lists^foldl/2-0-"/3 if I convert it from the JS layer.
The original fetch/2 output is an array with online_at: 1492764577562 and phx_ref: "OAyzaGE82xc=" in the :metas map and my user id or email in the users var.
What am I missing here? I know that the fetch/2 function only executes as a callback to Presence.list/1, which I am calling in my handle_info/2 channel function. I am also calling Presence.list in my JS layer and mapping it to my presences so that I can produce the list of usernames in the HTML. Am I just misunderstanding how this works, or is there some simpler way that I should be going about this? If you need to see more code I can supply more.
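For reference, a sketch of how the JS layer might consume the extra user field that fetch/2 merges into each presence entry (the phoenix.js Presence helpers are real; render and the fields pulled from user are hypothetical):
import {Presence} from "phoenix"

let presences = {}
channel.on("presence_state", state => {
    presences = Presence.syncState(presences, state)
    render(presences)
})
channel.on("presence_diff", diff => {
    presences = Presence.syncDiff(presences, diff)
    render(presences)
})

function render(presences) {
    // Each entry is {metas: [...], user: ...} thanks to the fetch/2 override
    let users = Presence.list(presences, (id, entry) => {
        return {id: id, email: entry.user && entry.user.email}
    })
    // ...update the user list in the DOM with `users`
}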
Edit: I now have a much better understanding of what is happening here. My entries map is actually this:
%{"1" => %{metas: [%{online_at: 1492798247818, phx_ref: "ELHwA+gWF+0="}]}}
So basically, the string user id "1" is the key whose value holds the metas map. When I try to take that key out of the map with Map.keys(entries), the query isn't able to pull anything out of the DB because the key is a string; however, when I change it to an integer from the JavaScript side it throws an error, because for whatever reason Phoenix expects that key to be a string. Strangely enough, if I change the id to an email and try to query the DB with the email it doesn't work either, despite the email in the database being a string and the entries map using string keys.
I am going to rebuild this channel part of the app from the ground up and see what is causing this problem. Then I will come back and see if I was able to fix the error.
You should validate the keys in the entries map first.
ids = Map.keys(entries)
true = Enum.all?(ids, &is_integer/1)
Ecto will convert strings to integers when interpolating into a query:
iex(40)> Ecto.Query.from(u in Users, where: u.id in ^[1, 2, "3", nil], select: u.id) |> Repo.all()
outputs the following debug log:
[debug] QUERY OK source="users" db=0.8ms
SELECT u0."id" FROM "users" AS u0 WHERE (u0."id" = ANY($1)) [[1, 2, 3, nil]]
Notice it coerced the string "3" to an integer and allowed the nil.
However a map will not be so kind:
iex(42)> users = %{1 => %{name: "joe"}, 2 => %{name: "jill"}}
%{1 => %{name: "joe"}, 2 => %{name: "jill"}}
iex(43)> users["1"]
nil
So in the code where you are using the keys from entries for database lookups and map lookups, the two could be producing different results.
I've since figured out that the problem has very little to do with my fetch/2 function itself; rather, it had to do with my implementation of the Presence module and channel in this case. Basically, the fetch/2 function was being called 4 times every time someone entered the chat room, and two out of the four times it was being called with an empty list value [].
Obviously, you can't query an Ecto model with an empty list, so it was throwing an error in that case as well. I tried putting guards on the fetch function to filter out the empty-list calls, but it would not show me the :metas map data that I was looking for even when the query succeeded.
Also, the other main problem was my implementation, or lack of implementation, of a token. I wouldn't have to pass the user model data around through the fetch function's :metas map if I were using a token for joining the chat room rather than just a user (a.k.a. just a username). After making that realization, I was able to successfully connect the user model data with the channel, show it through the JS layer, and ultimately put it on the client.
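For anyone following the same path, the standard Phoenix token flow on the JS side looks roughly like this (a sketch; window.userToken is assumed to be injected into the page by the server, and the topic name is hypothetical):
import {Socket} from "phoenix"

// The server signs the token (e.g. with Phoenix.Token) and verifies it in
// UserSocket.connect/2, so the channel knows the real user, not just a name.
let socket = new Socket("/socket", {params: {token: window.userToken}})
socket.connect()

let channel = socket.channel("room:lobby", {})
channel.join()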
Anyway guys, thanks for the suggestions. You may not have answered the question (it was my fault for asking the wrong question), but you certainly helped me get there, and you gave me the tools to form a much better understanding of the framework in general along the way.
If/when I have any more questions, I will make sure that I am asking the correct question before posting it to Stack Overflow; that way I won't be wasting time.

Improving Twitter's typeahead.js performance with remote data using Django

I have a database with roughly 1.2M names. I'm using Twitter's typeahead.js to remotely fetch the autocomplete suggestions when you type someone's name. In my local environment this takes roughly 1-2 seconds for the results to appear after you stop typing (the autocomplete doesn't appear while you are typing), and 2-5+ seconds on the deployed app on Heroku (using only 1 dyno).
I'm wondering if the reason why it only shows the suggestions after you stop typing (and a few seconds delay) is because my code isn't as optimized?
The script on the page:
<script type="text/javascript">
    $(document).ready(function() {
        $("#navPersonSearch").typeahead({
            name: 'people',
            remote: 'name_autocomplete/?q=%QUERY'
        })
        .keydown(function(e) {
            if (e.keyCode === 13) {
                $("form").trigger('submit');
            }
        });
    });
</script>
The keydown snippet is there because without it my form doesn't submit when pressing Enter, for some reason.
My Django view:
def name_autocomplete(request):
    query = request.GET.get('q', '')
    if len(query) > 0:
        results = Person.objects.filter(short__istartswith=query)
        result_list = []
        for item in results:
            result_list.append(item.short)
    else:
        result_list = []
    response_text = json.dumps(result_list, separators=(',', ':'))
    return HttpResponse(response_text, content_type="application/json")
The short field in my Person model is also indexed. Is there a way to improve the performance of my typeahead?
I don't think this is directly related to Django, but I may be wrong. I can offer some generic advice for this kind of situation (my money is on #4 or #5 below):
1) What is the average "ping" from your machine to Heroku? If it's far, that adds a little extra overhead. Not much, though; certainly not much compared to the 8-9 seconds you are referring to. The penalty will be larger with https, mind you.
2) Check the values of rateLimitFn and rateLimitWait in your remote dataset. Are they the defaults?
3) In all likelihood, the problem is database/dataset related. The first thing to check is how long it takes to establish a connection to the database (do you use a connection pool?).
4) Second thing: how long does it take to run the query? My bet is on this point or the next. Add debug prints, or use New Relic (even the free plan is OK). Have a look at the generated query and make sure the relevant column is indexed. Have your DB "explain" the execution plan for such a query and make sure it uses the index.
5) Third thing: are the results large? If, for example, you specify "J" as the query, I imagine there will be lots of answers. Just fetching them and streaming them to the client will take time. In such cases:
5.1) Specify a minLength for your dataset. Make it at least 3, if not 4.
5.2) Limit the result set that your DB query returns. Make it return no more than, say, 10.
6) I am no Django expert, but make sure the way you use your model in Django doesn't make it load the entire table into memory first. Just sayin'.
HTH.
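For points 5.1 and 5.2 on the client side, the knobs look roughly like this (a sketch only; option names and placement moved around between typeahead.js versions, so verify against the docs for the version you use):
$("#navPersonSearch").typeahead({
    name: 'people',
    minLength: 3,                       // don't query for 1-2 character input
    limit: 10,                          // render at most 10 suggestions
    remote: {
        url: 'name_autocomplete/?q=%QUERY',
        rateLimitWait: 300              // debounce keystrokes before hitting the server
    }
});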
results = Person.objects.filter(short__istartswith=query)
result_list = []
for item in results:
    result_list.append(item.short)
Probably not the only cause of your slowness, but this is horrible from a performance point of view: never loop over a Django queryset. To assemble a list from a queryset you should always use values_list. In this specific case:
results = Person.objects.filter(short__istartswith=query)
result_list = results.values_list('short', flat=True)
This way you get the single field you need straight from the db, instead of fetching the entire table row, creating a Person instance from it, and finally reading a single attribute from it.
Nitzan covered a lot of the main points that would improve performance, but unlike him I think this might be directly related to Django (or at least, the server side).
A quick way to test this would be to update your name_autocomplete method to simply return 10 randomly generated strings in the format that typeahead expects. (The reason we want them random is so that typeahead's caching doesn't skew any results.)
What I suspect you will see is that typeahead now runs pretty quickly, and you should start seeing results appear as soon as your minLength of characters has been typed.
If that is the case then we will need to look into what could be slowing the query down; my Python skills are non-existent, so I can't help you there, sorry!
If that isn't the case then I would maybe consider doing some logging of when $('#navPersonSearch') fires typeahead:initialized and typeahead:opened, to see if they bring up anything odd.
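That logging could be as simple as binding the events with jQuery (a sketch; the timestamps just show where the time goes):
$('#navPersonSearch')
    .on('typeahead:initialized', function() {
        console.log('typeahead initialized at', Date.now());
    })
    .on('typeahead:opened', function() {
        console.log('dropdown opened at', Date.now());
    });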
You can use django-haystack; your server-side code would be roughly like:
def autocomplete(request):
    sqs = SearchQuerySet().filter(content_auto=request.GET.get('q', ''))[:5]  # or however many names you need
    suggestions = [result.first_name for result in sqs]
    # you have to configure typeahead to process the returned data; this is a simple example
    data = json.dumps({'q': suggestions})
    return HttpResponse(data, content_type='application/json')

Greasemonkey failing to GM_setValue()

I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I cannot for the life of me determine why. The following problem code:
Works for other larger objects being maintained.
Is presently handling a smaller total amount of data than previously worked.
Is not colliding with any function or other object definitions.
Can (optionally) successfully save the problem storage key as "{}" during code startup.
this.save = function(table) {
    var tables = this.tables;
    if (table)
        tables = [table];
    for (var i in tables) {
        logger.log(this[tables[i]]);
        logger.log(JSON.stringify(this[tables[i]]));
        GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
        logger.log(tables[i] + "_" + this.user + " updated");
        logger.log(GM_getValue(tables[i] + "_" + this.user));
    }
}
The problem is consistently reproducible, and the logging statements produce the following output in Firebug:
Object { 54,10 = Object } // Expansion shows complete contents as expected, but there is one oddity: Firebug highlights the object keys in purple instead of the usual black for anonymous objects.
{"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}} // The correctly stringified JSON.
bookmarks_HonoredMule updated
undefined
I have tried altering the format of the object keys, to no effect. Narrowing the issue down further: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that step does not help either. Reloading the page confirms that saving the non-empty object truly failed.
Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but that does not appear to be the problem: as previously mentioned, I've already reduced storage usage, other larger objects still save, and the total number of objects, which was not high to begin with, has been reduced by more than the quantity of data I'm attempting to store here.
It turns out the issue was that this.save() was being called from an unsafeWindow context. This is a security violation, but one that should have resulted in an access violation exception being thrown:
Error: Greasemonkey access violation: unsafeWindow cannot call GM_getValue.
Instead, GM_setValue returns having done nothing, and the subsequent logging instructions also execute, so there was no hint of the issue; the documentation may be out of date.
In my quest to solve this problem by any means, I had abstracted away the GM_ storage functions so I could use other storage mechanisms, so the workaround will be to put all save instructions in a pre-existing cleanup routine that runs via setInterval, similar to the fix described in the aforementioned documentation. (Reusing an existing interval prevents excessive creation of timers, which have in the past degraded performance over browser uptime.)
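In outline, the workaround looks like this (a sketch; the queue, the storage object name, and the interval period are from my own code and hypothetical here):
// Page-context code only queues the request; it never touches GM_ APIs.
var pendingSaves = [];
unsafeWindow.requestSave = function(table) {
    pendingSaves.push(table);
};

// The pre-existing cleanup interval runs in the privileged script context,
// where GM_setValue works, and flushes the queue.
setInterval(function() {
    while (pendingSaves.length > 0) {
        storage.save(pendingSaves.shift());
    }
}, 2000);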
