We've run into a weird edge case in our checkout flow: after we create the order via the API and launch the PayPal window for the user's approval, the user can somehow escape the checkout flow without triggering the onCancel callback.
We're contacting some users to figure out exactly how they did this, but we've been able to reproduce it by simply closing the original (parent) window while the PayPal window was still open. As far as we can tell, this or some other very similar situation (like a power failure) is the only way to hit this edge case.
Is there some sort of best practice for handling this situation? Obviously we could have some sort of cron job which looks for old non-accepted/non-canceled orders, but we would prefer some way of addressing them immediately. I've found docs for the beforeunload event. Should I just cancel the order from that handler if the page is closed?
There are many potential situations that can lead to an order not being approved by a payer, such as the power failure you mentioned, a window being left open forever with no action, or a browser crashing. Nothing in your business logic should depend on onCancel ever being called.
As for handling such a situation, simply don't capture such PayPal orders. They expire on their own. Your own system's record of the payment attempt / cart order can expire when you want it to.
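A minimal sketch of the expiry approach, assuming a local order store with status and createdAt fields (all names here are hypothetical):

// Treat locally recorded orders as lapsed once they pass a chosen TTL;
// never capture them, and let the PayPal order expire on its own.
const STALE_AFTER_MS = 3 * 60 * 60 * 1000; // e.g. 3 hours

function isStale(order) {
    return order.status === 'CREATED' &&
           Date.now() - order.createdAt > STALE_AFTER_MS;
}

// Wherever pending orders are read, filter out the stale ones:
const pendingOrders = allOrders.filter(order => !isStale(order));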
I need to capture telemetry information, recording details such as when the user opens a form, closes it, or navigates away from it.
To do this, I have JavaScript calls to a telemetry API. In the case below, when the user navigates away or closes the tab, I would like to trigger "mymethod", which will call the API method that records this event.
I am trying to trigger a JavaScript method when the user navigates away from the form. I have this script on the CRM form, but the code below does not work.
window.onbeforeunload = function() {
    console.log('onbeforeunload triggered...');
    mymethod();
    return true;
};
Ideally I would like to be able to detect when the user navigates away from the page, or closes the page. Any suggestions appreciated. Thanks in advance.
This will probably never work - that type of code within CRM is unsupported.
From "Microsoft Dynamics 365 and the importance of staying supported":

Microsoft provide a set of tools and guidelines describing the things we can do; they also tell us the unsupported things we shouldn't do. It's all on the MSDN. Unsupported scenarios that commonly occur:

All JavaScript interactions within the application pages must only be performed using functions defined in the Xrm.Page & Xrm.Utility namespaces, i.e. don't directly interact with the page DOM.
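For contrast, a supported approach stays inside the Xrm client API. A minimal sketch (the attribute name is just an example; the function would be wired up as a form event handler):

// Supported: interact with the form through Xrm.Page, not the page DOM.
function onFormLoad() {
    var attr = Xrm.Page.getAttribute('telephone1'); // example attribute
    if (attr) {
        attr.addOnChange(function () {
            console.log('value changed to: ' + attr.getValue());
        });
    }
}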
I would suggest asking a new question which focuses on your end goal. You have told us something that doesn't work (but we wouldn't really expect it to) - you haven't actually told us what you are trying to achieve.
Is there a way of checking the number of clients currently subscribed to a certain publish function? The problem is that I have different groups, where every group has its own unique ghash.
When a user chooses to leave a group and enters a new one, this ghash changes and THE SAME publish function is subscribed again, although with a different ghash of course.
So I am looking for a way to check (on the server side) how many clients are subscribed to each group/ghash at a time. I've been fiddling around all day with stuff like this, but it does not work that well, to be honest. I am also listening for the "unsub" event of the sockets and all that, but still ... this is all buggy as hell.
If someone's interested in my whole code, you can find it here! (I found it too long to paste into my post.)
I really hope someone can help! :-)
cheers, P
EDIT: Or in other words: Is there a way to count the number of clients currently connected to a sockjs websocket where all these websockets were called with the same params?
=========================================================================
EDIT 2:
New version: LINK
For some reason this is not working at all ... no inserts are made, because the ghash provided to the subscription is NEVER equal to any of the actual socket subscriptions (see line 20: ghash is never equal to ghash2). I just don't understand how this is possible. The whole subscription function is called each time the Session ghash changes; how can this var never be equal to the param submitted to the actual socket (submission)? (It's always also a ghash, but always the ghash of another group.)
I am really lost here! :-(
I now see you are doing straight old node-style socket.io programming. I've done similar things in node projects. This is maybe the real question: the Meteor docs don't even use the word socket. Maybe someone else would get into that new question with you, but the question about tracking subscribers is answered below.
I think Meteor is a new world, and it will handle such stuff for you if you adapt to its way of thinking. For example, make a collection of messages with a field for chatroom. Each client picks their chatroom, finds those messages with messages.find({chatroom: 'box5'}), and displays them. A new message automatically goes to every client that is listening to that chatroom. Let Meteor use sockets for you.
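A rough sketch of that idea, using the Meteor APIs of the time (collection, template, and field names here are illustrative only):

// Shared between client and server: one collection of messages, tagged by chatroom.
Messages = new Meteor.Collection('messages');

// Client: reactively display the messages for the room this user picked.
Template.chat.messages = function () {
    return Messages.find({chatroom: Session.get('chatroom')});
};

// Client: sending a message; Meteor pushes it to every client showing that room.
Messages.insert({chatroom: Session.get('chatroom'), text: 'hello'});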
Answer to counting clients subscribed to something:
Pseudo code:

Make an object to hold the counts of each subscription signature:

var counts = {};

On signup, make a string that uniquely represents the subscription, and increment its count (initializing it the first time that signature is seen):

var key = JSON.stringify(params);
counts[key] = (counts[key] || 0) + 1;

On signout:

counts[key] -= 1;

The logic to know when no one is still subscribed is:

var done = (counts[key] === 0);
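Applied to your case, here is a minimal sketch inside a Meteor publish function (the publication and collection names are made up). The key point is that this.onStop fires when the client unsubscribes, disconnects, or closes the tab:

var counts = {};

Meteor.publish('groupMessages', function (ghash) {
    var key = String(ghash);
    counts[key] = (counts[key] || 0) + 1;  // one more client in this group

    this.onStop(function () {
        counts[key] -= 1;                  // client left, crashed, or closed the tab
        if (counts[key] === 0) delete counts[key];
    });

    return Messages.find({ghash: ghash});
});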
As far as I know, it is not possible to do this as of now.

I did some research and tried many things, but for some reason multiple websockets are sometimes opened for transferring the same data to the same client, so counting the number of connected clients is impossible via this approach.

Just triggering an event when my ghash changes is also not good enough, as closing the browser window would not trigger it.
I think having the ability to count the number of clients "viewing the same data changes" (can't think of a better way to put it) would be awesome. Maybe some Meteor core dev can put his/her 2 cents in here so we know if this is even possible at all.
I hope someone can come up with a solution at some point .. I can't! :(
My user-status package tracks the number of clients connected to a Meteor app by tracking the number of subscriptions to a global publish function. You may be able to draw some inspiration from it. It's not granular at the per-publication level, but you can certainly do the same thing for publications that you are interested in.
https://github.com/mizzao/meteor-user-status
The main points to note are:

each open session will call the subscription (users may have more than one tab open)
each time a user logs in or out, the subscription will update
you can read the per-session id in the publish function
you can listen to the close event on the SockJS socket to catch browser tabs being closed, etc.
I don't think it would be too hard to do this for groups; I am doing the same thing for another project.
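A sketch of points 3 and 4. Note that this relies on internal, undocumented session APIs (as user-status did at the time), so the exact property names may change between Meteor releases:

// Server: a global publish function that every connected client subscribes to.
Meteor.publish(null, function () {
    var session = this._session;  // internal per-session object (undocumented)
    console.log('session started:', session.id);

    // Fires when the underlying SockJS socket closes (tab closed, network drop, etc.)
    session.socket.on('close', function () {
        console.log('session closed:', session.id);
    });
});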
I want to track multiple events using the GA _trackEvent method across multiple domains.
Because of the nature of the report I want to generate, I must do something like this:
for (var i = 0; i < books.length; i++) {
    // showing values for the current books[i]
    _gaq.push(['_trackEvent', 'Books Displayed', 'Fantasy', 'Lord of The Rings']);
}
So, when my books list is populated, I want to send an appropriate GA event for each item. It is important that I send each item separately so I can drill down on the Event Dashboard to see all items in the 'Fantasy' category and so on.
Note, books list is never longer than about 10 items.
The problem I'm experiencing at the moment is that, for no apparent reason, the Google code ignores some of my requests. The way Google event tracking works is that on every call to _trackEvent, Google drops a gif onto the page:

http://www.google-analytics.com/__utm.gif

which has loads of parameters; one of them - utme - contains my data:
__utm.gif?utmt=event&utme=5(Books%20Displayed*Fantasy*Lord%20of%20The%20Rings)
Using Fiddler (or the Firebug Net tab) I can check whether this request is really going out from the browser.
Unfortunately, it seems that about half of my requests are completely ignored by Google every time, and _trackEvent is not translated into a __utm.gif call.
I have a feeling it has something to do with the frequency of the _trackEvent calls. Because I am using them inside a for loop, all events are spawned with a minimal interval between them. It seems Google doesn't like that and ignores my calls.
I did test it: adding a 2-second interval between each call made it work. But this solution is unacceptable - I can't make the user wait 20 seconds just to send all the events.
Unfortunately this flaw makes GA event tracking completely useless for my purposes - I can't just "hope" the GA code will correctly record my events, because then the report won't be precise. The worst thing is that there is no proper documentation from Google stating the maximum allowed number of requests per second (they only state that the max per session is 500, which is a lot more than what I generate anyway).
My question is: have you experienced similar problems with Google event tracking before, and how did you manage to fix them? Or does it mean I must completely abandon GA tracking because it will never be precise enough?
First off, I want to point out that the 500-per-session limit is for all requests to Google, not just for events. That includes any other custom tracking you are doing, and it also includes normal page view hits.
To me this sounds more like a general JS issue than a GA issue: you are pushing events for GA to process faster than it can process them, so some fall through the cracks. I don't think there is much you can do about that except delay each push as you have done - though you could probably lower that interval from 2 s to as little as 500 ms. Even so, that would at best drop you to a 5-second wait, which IMO is a lot better than 20 s but still too long.
One solution that might work would be to skip _gaq.push() and output an image tag with the URL and params directly for each event. This is the same principle as the "traditional" GA code that came before the async version, and it is what most other analytics tools still do today.
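A rough sketch of that idea (heavily abbreviated - a real __utm.gif request needs more parameters, such as the account id and cookie data, so treat this as an outline rather than a drop-in replacement):

// Fire the tracking pixel directly instead of going through _gaq.push().
function trackEventViaPixel(category, action, label) {
    var img = new Image();
    img.src = 'http://www.google-analytics.com/__utm.gif' +
        '?utmt=event' +
        '&utme=5(' + encodeURIComponent(category) + '*' +
                     encodeURIComponent(action) + '*' +
                     encodeURIComponent(label) + ')' +
        '&utmn=' + Math.floor(Math.random() * 1e9); // cache buster
}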
If you want my honest opinion though...in my experience with web analytics, I think the most likely thing here is that you need to re-evaluate what you are tracking in the first place.
Judging by the context of your values (and this is just a guess), it looks to me like you have, for instance, a page where a user can see a list of books - like a search results page or a general "featured books" page - and you want to track all the books a user sees on that page.
Based on my experience with web analytics, you are being way too granular about the data you collect. My advice to you is to sit down and ask yourself "How actionable is this data?" That is, after all, the point of web analytics - to help make actionable decisions.
All day long I see clients fall into the trap of wanting to know absolutely every minute detail, because they think it will help them answer something or make some kind of decision, and 99% of the time it doesn't. It is one thing to track an individual book a user views - for example on a product details page - where you'd be tracking a single event.
For a search results page, track it as a single "search" event, recording things like the search term, how many results were returned, etc., but not the details of what was actually returned.
I guess if I knew more details about your site and what this tracking is for, I could maybe give you more solid advice :/
This is probably due to the one-event-per-second limit:
"Events Per Session Limit
In addition to general collection limits and quotas, the following limit applies to event tracking in ga.js:
The first 10 event hits sent to Google Analytics are tracked immediately, thereafter tracking is rate limited to one event hit per second.
As the number of events in a session approaches the collection limit, additional events might not be tracked. For this reason, you should:
avoid scripting a video to send an event for every second played and other highly repetitive event triggers
avoid excessive mouse movement tracking
avoid time-lapse mechanisms that generate high event counts
(from https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide)
That is why your 2-second delay works. You could theoretically cut it in half, though keeping a margin of safety suggests not cutting it too close.
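If all the events have to go out, one way to live within the limit is to queue them and drain the queue at roughly one per second. A sketch (queueEvent is a made-up wrapper):

// Queue events and dispatch at most one per second to respect the ga.js
// rate limit (the first 10 hits in a session go out immediately anyway).
var eventQueue = [];
var drainTimer = null;

function queueEvent(category, action, label) {
    eventQueue.push(['_trackEvent', category, action, label]);
    if (!drainTimer) {
        drainTimer = setInterval(function () {
            if (eventQueue.length === 0) {
                clearInterval(drainTimer);
                drainTimer = null;
                return;
            }
            _gaq.push(eventQueue.shift());
        }, 1000);
    }
}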
I want to implement a 'live search' or 'search suggestions' feature in a web application that uses the Dojo Framework. It would be similar to the way Google and Bing searches display matches as you type: when you type in the search box, a list of potential matches appears below. Searches would be performed server side, with the results sent back to the browser using AJAX.
Does anyone know of a good way to implement this using Dojo?
Here are some potential options:
The built-in widget dijit.form.ComboBox
This has very similar functionality, but I've only seen it used with limited data sets. The examples always use small lists (such as the 50 US states) and preload the entire data set for client-side filtering. However, I presume I could hook it up to a dojox.data.JsonQueryRestStore for server-side search - can anyone confirm whether that works?
QueryBox http://marumushi.com/code/querybox/
This implementation mostly does the job, but it has some minor bugs and doesn't look like it's being maintained. I'd have to do some bug fixes before using it.
Medryx http://blog.medryx.org/2008/09/10/dijitsearch-part-2/
This also looks like it does the job, but it is described as 'alpha-level' code and the link to the code seems to be broken...
I could probably make one of the above work, but I'd like to know if there are any better alternatives out there.
I implemented it 5 years ago when Dojo was at 0.2:
http://www.lazutkin.com/blog/2005/12/23/live-filtering/
While the code is ancient, it is trivial, and hopefully it'll give you some ideas on how to attack the problem. The rough sketch:
Attach an event handler to your input box that is triggered on changes - use "onkeyup" to detect them.
Wait until the user has stopped typing by setting a timer in your event handler, if one is not set yet. 200-500 ms is a good waiting time.
The timeout plays a dual role:
It throttles our requests to a server to prevent overloading.
It plays on our perception of time and our typing habits.
If the timeout fires and we are not waiting on the server, send the server the string we have so far.
If we are still waiting for a server, cancel the request and ask again.
This part is app-specific: we don't want to overload a server, and sometimes a server cannot handle broken connections well.
In the example I don't cancel the XHR call, but wait for it to finish before submitting a new request.
Server responds with relevant results, which are promptly shown.
In the blog post I implemented it as a widget. Obviously the exact packaging is up to you.
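A minimal framework-agnostic sketch of the same flow (the element id, the /search endpoint, and showSuggestions are all hypothetical):

var timer = null;
var inFlight = null;

document.getElementById('search').onkeyup = function () {
    var value = this.value;
    clearTimeout(timer);                  // restart the wait on every keystroke
    timer = setTimeout(function () {
        if (inFlight) inFlight.abort();   // cancel the previous request, if any
        var xhr = new XMLHttpRequest();
        inFlight = xhr;
        xhr.open('GET', '/search?q=' + encodeURIComponent(value));
        xhr.onload = function () {
            inFlight = null;
            showSuggestions(JSON.parse(xhr.responseText)); // hypothetical renderer
        };
        xhr.send();
    }, 300);                              // 200-500 ms works well
};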
I notice my site does not have a single <form>, with the exception of logging in. I am not sure how or why it happened, but I found I use jQuery and AJAX to post everything, then refresh (or not, if I don't need to). How and why would the user suffer from this?
Some of my 'forms' include:
Leaving a comment on a page
Removing messages
Sending a private message (after which I do document.location = nextPage)
Marking as a favourite.
Forgetting for a moment that none of the site would work with JavaScript disabled, another side effect you might not realize is that there is no default submit behavior anymore. This means a user cannot finish typing their entry and hit Enter to submit the form. This is important for search forms and the like, but less important for comment forms.
Wrapping form fields in a form tag and having a submit input element (even if it is display:none) allows for a default submit action on Enter or Return. If you do this, simply call preventDefault() in the submit() event handler to stop the real submit and make an AJAX request instead.
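A minimal sketch of that pattern with jQuery (the form id and endpoint are made up):

// Keep the native <form> so Enter still submits, but intercept the submit
// event and send the data via AJAX instead of a full page load.
$('#comment-form').submit(function (e) {
    e.preventDefault();                      // stop the real submit
    $.post('/comments', $(this).serialize(), function (response) {
        // update the page in place using the response
    });
});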
Ok, back to the JS disabled thought. You have to make a choice:
Work backwards to implement unobtrusive JS on your site. Basically, the site works with or without JS, but it will work better (or feel more refined) with JS enabled.
Or place a prominent message alerting your users to the fact that your site requires JS to work. The downside of this method is that you might limit the usefulness of your site on some corporate networks and on some mobile devices. As for people who willingly turn off JS, the alert can let them decide whether they want to stay around or not.
Given that the site is already coded, take a look at your target audience and make your best decision given the time and energy required to make the site work without JS.
From your comment, I see that you don't intend to support users without JavaScript, so the accessibility and backward compatibility argument is moot.
However, you are likely creating a lot of unnecessary requests by posting with AJAX and then refreshing the page. This is not how AJAX was intended to be used, and it is arguably an anti-pattern. The idea of AJAX is to send and receive specific data, and to update the page based on that received data - not to refresh it.
Also, by not using forms, you disable the default submit functionality (pressing <enter>), as Doug Neiner pointed out.