Building a JavaScript web analytics tool from scratch - javascript

I am fairly new to JavaScript, though I do know the basics. I am looking to build my own JavaScript library from scratch, much like Google's analytics.js, that will track user behavior on websites. Basically I'm looking to collect data like:
Click-through data
Dwell time
Page hits, etc.
I spent a lot of time trying to find websites/tutorials to get me started on this, but I keep ending up at Google's analytics.js or some proprietary tools.
What I am looking for:
Is there any good starting point/resource/website which can help me build this JS library?
Are there references for the architecture of an end-to-end system, including the back end?
Any open-source library that I can directly use?
Some things I already looked into
Chaoming build your own analytics tool
Splunk BYO analytics

At its most basic, the architecture of such an application only requires a client, a server, and a database.
You can use basic JavaScript functions to record specific user actions on the frontend and then push them to your server. To identify your users, you can set a cookie with a unique ID. Then, every time you send data to your server, that identifier comes along with the request, so you can keep track of each user's actions. (Be careful of privacy laws first, though.)
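A minimal sketch of that identification plus a generic send helper; the cookie name visitor_id and the /collect endpoint are illustrative assumptions, not a standard:

```javascript
// Generate or reuse a unique visitor ID stored in a cookie, then send
// arbitrary events to a hypothetical /collect endpoint on your server.
function getVisitorId() {
  var match = document.cookie.match(/(?:^|; )visitor_id=([^;]+)/);
  if (match) return match[1];
  var id = Date.now().toString(36) + Math.random().toString(36).slice(2);
  document.cookie = 'visitor_id=' + id + '; path=/; max-age=' + 60 * 60 * 24 * 365;
  return id;
}

function track(eventName, data) {
  fetch('/collect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ visitorId: getVisitorId(), event: eventName, data: data, ts: Date.now() })
  });
}
```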
For page hits, simply send a request to the server every time someone opens your site - so call this function as soon as your JavaScript loads. On the server, increment the appropriate counter in your database.
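With the track helper sketched above, that is a one-liner fired at load time:

```javascript
// Record a page view as soon as the script runs.
track('pageview', { path: location.pathname, referrer: document.referrer });
```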
For user dwell time, write a function that records the time when the user first hits your site and then tracks how long they stay. Push your data to the server every so often, and update the user record by adding the newly elapsed time to the stored total. You could also watch for when a user is about to leave the site and send the data all at once - although this method is more fragile.
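One way to combine both ideas, again building on the sketch above (the 15-second interval is arbitrary):

```javascript
// Report elapsed time periodically, and flush once more when the page is
// hidden; navigator.sendBeacon is more reliable than fetch during unload.
var startTime = Date.now();

setInterval(function () {
  track('heartbeat', { secondsOnPage: Math.round((Date.now() - startTime) / 1000) });
}, 15000);

window.addEventListener('visibilitychange', function () {
  if (document.visibilityState === 'hidden') {
    var payload = new Blob([JSON.stringify({
      visitorId: getVisitorId(),
      event: 'dwell',
      secondsOnPage: Math.round((Date.now() - startTime) / 1000)
    })], { type: 'application/json' });
    navigator.sendBeacon('/collect', payload);
  }
});
```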
For clicks and hovers, set up onclick and mouseover event handlers on your links or whatever elements you want to track. Then push the URL of the link they clicked, or whatever data you want - like "clicked navbar after 200 seconds on site, after hovering over the logo".
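A sketch using delegated listeners so you don't have to wire up every element; the data-track attribute is an assumption for marking elements of interest:

```javascript
// One listener per event type; closest() walks up to the nearest tracked element.
document.addEventListener('click', function (e) {
  var el = e.target.closest('a, button, [data-track]');
  if (el) track('click', { tag: el.tagName, href: el.href || null, label: el.getAttribute('data-track') });
});

document.addEventListener('mouseover', function (e) {
  var el = e.target.closest('[data-track]');
  if (el) track('hover', { label: el.getAttribute('data-track') });
});
```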
If you want suggestions on specific technologies, then I suggest Node.js for your server-side code and MongoDB for your database. There are many tutorials out there on how to use these technologies together. Look up JavaScript events for a list of the different things you can watch for on the frontend.
These are the building blocks you need. Now you just have to define the data you want and use these technologies to collect it.
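For completeness, a minimal receiving end under those suggestions; Express and the official mongodb driver are my assumptions on top of the Node.js/MongoDB recommendation:

```javascript
// Accept events from the snippets above and store them raw in MongoDB;
// aggregates (page-hit counts, dwell totals) can be computed at query time.
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

const client = new MongoClient('mongodb://localhost:27017');

app.post('/collect', async (req, res) => {
  await client.db('analytics').collection('events')
    .insertOne({ ...req.body, receivedAt: new Date() });
  res.sendStatus(204);
});

client.connect().then(() => app.listen(3000));
```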

Related

Custom web page and javascript routine running in the background

The Dynamics documentation is just awful and I couldn't find an answer to this simple question:
In the web version of the CRM, is it possible to register a web page that can be toggled by the user and that itself has an internal state (updated regularly by an interval set with setInterval) that will persist even if the user closes the page (not the entire CRM, just the sub-page)?
We need the user to provide some information for a CTI integration, and this background process must keep the CTI session alive by polling an API while the user session is active. In addition, we need the component where the user provides the CTI information to be notified if the session fails, and to restore or close it as necessary. The real purpose of this is to make a screen pop (push content about the incoming call to the agent), which I know can be done using Xrm.Utility, although doing it with a REST API method would be much better. The RouteTo action looks like the best method for this, but I'm not sure it will proactively show the item in the user's browser.
I'm not sure this question is as simple as you suggest; it seems relatively complicated and involves an integration. I'm not surprised the Dynamics documentation doesn't provide an answer for this specific and unique scenario.
I don't believe there is any single feature within Dynamics that will meet this requirement.
You could use an HTML web resource or a web page from a separate web site iframed into CRM. I think the feasibility of these depends on your expected user experience; I believe the user would need to have the page loaded at all times for these controls to show (e.g. the user is looking at a dashboard) - I don't see how the controls could interact with the user client-side otherwise. You could, however, show the controls in multiple places.
Xrm.Utility is one way to open a record, but it can also be done as described in "Open forms, views, dialogs, and reports with a URL".
RouteToAction looks like it just adds a record to the user's queue; the user would need to refresh the queue to see the changes. I don't believe there is any way for a server-side REST API call to natively redirect the user.
You could add JavaScript to do this; however, you might struggle to add that JavaScript to every page of CRM.
Where I have worked on a CTI integration in the past (assuming you mean computer telephony integration), we always had some other component doing the screen pops - the clients all had a desktop app installed as part of the telephony solution.
Perhaps you could look into browser notifications, or a browser plugin?
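If you explore the notification route, the standard browser Notification API is one building block; this is plain browser functionality, not a Dynamics feature, and the wording below is illustrative:

```javascript
// Ask for permission once, then pop a notification for an incoming call.
function screenPop(callerName) {
  if (Notification.permission === 'granted') {
    new Notification('Incoming call', { body: callerName });
  } else if (Notification.permission !== 'denied') {
    Notification.requestPermission().then(function (permission) {
      if (permission === 'granted') new Notification('Incoming call', { body: callerName });
    });
  }
}
```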

Is there any function to update your web site when another site posts something new?

For example, I want to update my item list every time Amazon adds a new product. Is it possible to do that without knowing their system or DB?
Unfortunately, no. This is the disadvantage of relying on a third-party site for the content of your site. However, using the API of the site whose data you want to access can give you this functionality - but not all sites provide one.
If the third-party site does not provide an API to access its data, you'll need to "scrape" the site for that data. In theory this is easy; however, large companies like Amazon deliberately attempt to foil scraping. See an open-source project dedicated to this exact purpose: https://github.com/adamlwgriffiths/amazon_scraper. The author says it best:
Amazon have resorted to moving more and more content into iFrames which this scraper can't handle. I envisage a time where most data will be inaccessible without more complex logic.
I've spent a long time trying to get these scrapers working and it's a never ending battle. I don't have the time to continually keep up the pace with Amazon. If you are interested in improving Amazon Scraper, please let me know (creating an issue is fine). Any help is appreciated.
If you want to build a custom tool to scrape public websites, I would check out Node.js. It is popular for this because it can query the page DOM effectively. There are some good write-ups out there to get started: https://scotch.io/tutorials/scraping-the-web-with-node-js
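A bare-bones sketch of that approach; axios and cheerio are my library choices (the tutorial may use others), and the URL and selector are placeholders:

```javascript
// Fetch a page's HTML and extract product titles with a CSS selector.
const axios = require('axios');
const cheerio = require('cheerio');

async function scrapeProducts(url) {
  const { data: html } = await axios.get(url);
  const $ = cheerio.load(html);
  const items = [];
  $('.product-title').each((i, el) => {
    items.push($(el).text().trim());
  });
  return items;
}

scrapeProducts('https://example.com/products').then(console.log);
```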

How to prevent script sharing by its users?

I am making a small payment system; basically it's just a point system: you pay, say, 1 USD and get 100 points, which are later used in a game project to get bonuses. It's a script for game servers, something like a user panel.
Now the script system is ready, but I'm afraid to give it away, since someone will share it and it will spread all over the gaming scene. What would be a solution that keeps it working only for people I have given permission to?
I thought about re-making the whole code to run on my website, but I don't think people will want to put their SQL data on a website that is NOT located on their host. Please help me out, at least with some clues - maybe it's possible to make some widgets, or maybe some license system?
I'm really lost.
You should implement the logic on the server side as a REST API call, and include in the script only an AJAX call to that API. You can limit use of the API through an API key that you provide only to qualified sites.
You'd need to implement some sort of server-side authentication/API so that only verified users can use the script, much like how software checks a license.
On script load, your JavaScript could make an AJAX call to a server, passing through the user's IP, auth key, username, etc.
This can then be verified on the server, perhaps returning a dynamically generated URL to a JavaScript file which contains your business logic (so that URLs are generated for that user's session only).
That way people can't hotlink the script, and the script you give out is solely the AJAX call (with the business logic injected after authentication).
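A server-side sketch of that key check; Express is an assumption, the key store is illustrative, and note that the Referer header can be spoofed, so this is a deterrent rather than real security:

```javascript
const express = require('express');
const app = express();

// In practice this would live in a database: one key per licensed site.
const licenses = { 'key-for-site-a': 'site-a.example.com' };

app.get('/script.js', (req, res) => {
  const key = req.query.apiKey;
  const referer = req.get('Referer') || '';
  // Reject unknown keys, or keys used from a site they weren't issued to.
  if (!licenses[key] || !referer.includes(licenses[key])) {
    return res.status(403).send('// invalid or unlicensed key');
  }
  // Only now serve the real business logic (placeholder here).
  res.type('application/javascript').send('/* point-system logic here */');
});

app.listen(3000);
```

The snippet you hand out to customers would then be just a script tag pointing at this endpoint with their key in the query string.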

Patterns for building web/mobile apps that process a lot of data on the client side

I'm trying to build a single-page web app using Backbone. The app looks and behaves like a mobile app running on a tablet.
The web app is built to help event organizers manage their lists of people attending their events, and this includes the ability to search and filter those lists of attendees.
I load the full attendee list when the user opens the attendees screen, and whenever the user starts to search or filter the attendees, the operation happens on the client side.
This approach works perfectly when the event has about ~400 attendees or fewer, but when the number gets bigger (~1000), the initial download takes noticeably longer (which makes sense), though once all the data is loaded, searching and filtering are still relatively fast.
I originally decided to fully load all the data each time the app loads so that all search operations happen on the client side, saving my servers the headache and making search results show up faster for the user.
I don't know if this is the best way to build a web/mobile app that processes a lot of data.
I wish there were a known pattern for dealing with these kinds of apps.
In my opinion, your approach of processing the data on the client side makes sense.
But what do you mean by "fully loading all the data each time the app is loaded"?
You could load the data only once at the beginning and then work with it throughout the app's lifecycle without reloading it every time.
You could also store the initially fetched data in HTML5 localStorage. Then you only have to re-fetch the data from the server if something has changed. This should reduce your startup time.
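A sketch of that caching idea; the /attendees endpoint, its version field, and renderAttendees are all hypothetical names for your own pieces:

```javascript
// Render from the cache immediately, then refresh only if the server has
// newer data than the cached version.
function loadAttendees(eventId) {
  var cached = JSON.parse(localStorage.getItem('attendees:' + eventId) || 'null');
  if (cached) renderAttendees(cached.list);

  fetch('/attendees?event=' + eventId + (cached ? '&since=' + cached.version : ''))
    .then(function (res) { return res.json(); })
    .then(function (data) {
      if (data.changed) {
        localStorage.setItem('attendees:' + eventId, JSON.stringify(data));
        renderAttendees(data.list);
      }
    });
}
```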

Statistics gathering in Rails 3.1 (Analytics)

I am building an analytics system for my Rails application. I want to monitor every time a certain object is pulled from the database, and I'd like to put that logic in the model file. I have objects being displayed on the page and I need to see the number of views and clicks they get. I assume views can be handled by figuring out when the object is pulled from the database (if someone could tell me how to do that), and I figured JavaScript could monitor the clicks. Would you agree, or is there a better way? I am using Rails 3.1 with MongoMapper and MongoDB.
To store the data, simply send an AJAX POST request from the browser to a Rails resource like clicks#create with the information you want to record. Be sure to include the relevant data attributes in the request.
You may want to collect the events and then send them in a batch, based on time or on the user clicking a "done" button or something of that sort.
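A client-side sketch of that batching; the /clicks path and the data-object-id attribute are assumptions for illustration:

```javascript
// Queue click events and flush them to the server every 10 seconds.
var clickQueue = [];

document.addEventListener('click', function (e) {
  var el = e.target.closest('[data-object-id]');
  if (el) clickQueue.push({ objectId: el.getAttribute('data-object-id'), ts: Date.now() });
});

setInterval(function () {
  if (!clickQueue.length) return;
  fetch('/clicks', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ clicks: clickQueue.splice(0) })
  });
}, 10000);
```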
Recording the fact that someone clicked (from JavaScript) is different from recording when an object is retrieved from the database. For the latter, you could write a before filter for each of the relevant methods in the class, or possibly implement an ActiveRecord callback, for something of that sort.
