
How to get page views on JavaScript?
I want to get and display how many times a page is viewed, just like Stack Overflow.
How can I do it with JavaScript? Thanks!

This can be done in JavaScript. You have to use the browser's storage to keep the page view count. You can use either window.localStorage or window.sessionStorage.
sessionStorage works like a PHP session and is only available for a single browsing session; if you close the browser, the sessionStorage data is removed. localStorage, on the other hand, is not removed until the user manually clears the browser data.
I'm going to show you two implementations.
Using localStorage: you can store a counter in localStorage and increment it on every page load:
var myStorage = window.localStorage, pageCount;
window.addEventListener('load', function(){
    if(!myStorage.getItem('pageCount')){
        myStorage.setItem('pageCount', 1);
    } else {
        // getItem returns a string, so convert it before incrementing
        pageCount = Number(myStorage.getItem('pageCount')) + 1;
        myStorage.setItem('pageCount', pageCount);
    }
    console.log('page view count', myStorage.getItem('pageCount'));
});
Or using window.sessionStorage:
var mySession = window.sessionStorage, pageCount;
window.addEventListener('load', function(){
    if(!mySession.getItem('pageCount')){
        mySession.setItem('pageCount', 1);
    } else {
        // getItem returns a string, so convert it before incrementing
        pageCount = Number(mySession.getItem('pageCount')) + 1;
        mySession.setItem('pageCount', pageCount);
    }
    console.log('page view count of current browsing session', mySession.getItem('pageCount'));
});
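The question also asks about displaying the count. A small addition like this would show it on the page, assuming the page contains an element with id page-views (that id is an assumption, not something from the question):
window.addEventListener('load', function(){
    var count = Number(window.localStorage.getItem('pageCount')) || 0;

    // Assumes the page contains something like: <span id="page-views"></span>
    var el = document.getElementById('page-views');
    if (el) {
        el.textContent = 'This page has been viewed ' + count + ' times in this browser';
    }
});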

Using client-side JavaScript alone will not get the job done: a localStorage counter only sees visits from that one browser, so it cannot show a site-wide view count the way Stack Overflow does.
However, if you still want to use JavaScript, you should take a look at Node.js. Ideally you would record the number of unique sessions, although you can also simply count every page request as a page view; the example below does the latter, and a cookie-based sketch of the former follows it.
This can be done with the following piece of code, provided that you know how Node.js works:
var http = require('http'); // Require the HTTP module
var pageViewCount = 0;      // Declare and initialise the page view count

var server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' }); // Headers
    pageViewCount++; // Increase the page view count for every successful request
    console.log('There are currently ' + pageViewCount + ' views'); // Log the total page views
    res.end();
});

server.listen(8000);
console.log('Server is currently running');
You can find more information about Node.js here: https://nodejs.org/en/docs/
Keep in mind that this answer requires some knowledge of Node.js, but it confirms that server-side JavaScript can be used to solve this particular problem.
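Since counting unique sessions was mentioned above, here is a minimal sketch of that idea using a plain cookie; the cookie name visitor_id, the in-memory Set, and the one-day Max-Age are assumptions rather than part of the original answer:
var http = require('http');
var crypto = require('crypto');

var uniqueVisitors = new Set(); // assumption: visitor ids kept in memory only
var pageViewCount = 0;

var server = http.createServer(function (req, res) {
    pageViewCount++; // every request still counts as a page view

    // Look for the (hypothetical) visitor_id cookie on the request
    var cookies = req.headers.cookie || '';
    var match = cookies.match(/visitor_id=([^;]+)/);
    var visitorId = match ? match[1] : crypto.randomBytes(8).toString('hex');

    uniqueVisitors.add(visitorId);

    res.writeHead(200, {
        'Content-Type': 'text/plain',
        'Set-Cookie': 'visitor_id=' + visitorId + '; Max-Age=86400' // re-send the id so repeat visits are recognised
    });
    res.end(pageViewCount + ' views from ' + uniqueVisitors.size + ' unique visitors');
});

server.listen(8000);
Both counts live in memory, so they reset when the server restarts; a real deployment would persist them somewhere.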

With Node.js (server-side JavaScript) and Express (a library) you could do the following:
var express = require("express");
var app = express();
var counter = 0;

app.set("view engine", "ejs"); // use EJS as the view engine (ejs must be installed)

app.get("/", function(req, res){
    counter++; // one more view of the home page
    res.render("main.ejs", { counter: counter });
});

app.listen(80);
main.ejs should look like this:
The page was visited <%= counter %> times...

You might want to store that counter in a database (or at least on disk) so it survives server restarts.
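As a minimal sketch of that idea without a real database, the counter could be persisted to a JSON file; the file name counter.json and the synchronous file access are assumptions chosen for brevity:
var express = require("express");
var fs = require("fs");

var app = express();
var COUNTER_FILE = "counter.json"; // hypothetical location for the persisted count

function loadCounter() {
    try {
        return JSON.parse(fs.readFileSync(COUNTER_FILE, "utf8")).counter || 0;
    } catch (e) {
        return 0; // no file yet, start from zero
    }
}

function saveCounter(value) {
    fs.writeFileSync(COUNTER_FILE, JSON.stringify({ counter: value }));
}

var counter = loadCounter();

app.get("/", function (req, res) {
    counter++;
    saveCounter(counter); // persist on every view so a restart keeps the total
    res.render("main.ejs", { counter: counter });
});

app.listen(80);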

Related

Extracting a table value from a URL with Node JS

I am quite new to Node JS and Express, but I am trying to build a website that serves static files. After some research I've found out that NodeJS with Express can be quite useful for this.
So far I've managed to serve some static HTML files located on my server, but now I want to do something else:
I have a URL to an HTML page, and in that HTML page there is a table with some information.
I want to extract a couple of specific values from it, and 1) save them as JSON in a file, 2) write those values to an HTML page. I've tried to play with jQuery, but so far I've been unsuccessful.
This is what I have so far:
1. A Node app running on port 8081, which I will later access from anywhere through an NGINX reverse proxy (I already have NGINX set up and it works).
2. I can fetch the URL and serve it as HTML when I use the proper URI.
3. I see that the table doesn't have an ID, only the "details" class associated with it. Also, I am only interested in getting these rows:
<div class='group'>
  <table class='details'>
    <tr>
      <th>Status:</th>
      <td>
        With editors
      </td>
    </tr>
From what I've seen so far, jQuery would work fine if the table had an ID.
This is my code in app.js:
var express = require('express');
var app = express();
var request = require('request');
const path = require('path');

var content;

app.use('/', function(req, res, next) {
    var status = 'It works';
    console.log('This is very %s', status);
    //console.log(content);
    next();
});

request(
    {
        uri:
            'https://authors.aps.org/Submissions/status?utf8=%E2%9C%93&accode=CH10674&author=Poenaru&commit=Submit'
    },
    function(error, response, body) {
        content = body;
    }
);

app.get('/', function(req, res) {
    console.log('Got a GET request for the homepage');
    res.sendFile(path.join(__dirname, '/', 'index.html'));
});

app.get('/url', function(req, res) {
    console.log('You requested table data!!!');
    // TODO: show only the values of that table instead of the whole HTML page
    res.send(content);
});

var server = app.listen(8081, function() {
    var host = server.address().address;
    var port = server.address().port;
    console.log('Node-App listening at http://%s:%s', host, port);
});
Basically, the HTML content of that URL is saved into the content variable, and now I would like to keep only the table from it and output just that part to the new HTML page.
Any ideas?
Thank you in advance :)
OK, so I've come across a package called cheerio, which basically lets you use jQuery-style selectors on the server. Having the HTML code from that specific URL, I could search that table for the elements I need. Cheerio is quite straightforward, and with this code I got the results I needed:
var cheerio = require('cheerio');

request(
    'https://authors.aps.org/Submissions/status?utf8=%E2%9C%93&accode=CH10674&author=Poenaru&commit=Submit',
    (error, res, html) => {
        if (!error && res.statusCode === 200) {
            const $ = cheerio.load(html);
            const details = $('.details');
            const articleInfo = details.find('th').eq(0); // the first <th>, i.e. the label ("Status:")
            const articleStatus = details
                .find('th')
                .next()
                .eq(0); // the <td> that follows it, i.e. the value
            //console.log(details.html());
            console.log(articleInfo.html());
            console.log(articleStatus.html());
        }
    }
);
Thank you @O.Jones and @AvcS for guiding me to jsdom and node-html-parser. I will definitely play with those in the near future :)
Cheers!
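To tie this back to the two goals in the question (saving the values as JSON and showing them on a page), a rough sketch building on the cheerio snippet might look like the following; the output file table.json and the field names are assumptions:
var fs = require('fs');

app.get('/url', function(req, res) {
    request(
        'https://authors.aps.org/Submissions/status?utf8=%E2%9C%93&accode=CH10674&author=Poenaru&commit=Submit',
        (error, response, html) => {
            if (error || response.statusCode !== 200) {
                return res.status(500).send('Could not fetch the page');
            }
            const $ = cheerio.load(html);
            const details = $('.details');
            const data = {
                label: details.find('th').eq(0).text().trim(),        // e.g. "Status:"
                value: details.find('th').next().eq(0).text().trim()  // e.g. "With editors"
            };
            fs.writeFileSync('table.json', JSON.stringify(data, null, 2)); // goal 1: save as JSON
            res.send('<p>' + data.label + ' ' + data.value + '</p>');      // goal 2: show on a page
        }
    );
});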
Your task is called "scraping": you want to scrape a particular chunk of data from some web page you did not create and then return it as part of your own web page.
You have noticed a problem with scraping: often the page you're scraping does not cleanly identify the data you want with a distinctive id, so you must use some guesswork to find it. @AvcS pointed out a server-side npm library called jsdom that you can use for this purpose.
Notice this: even though browsers and Node.js both run JavaScript, they are very different environments. Browser JavaScript has lots of built-in APIs for accessing a web page's Document Object Model (DOM). Node.js doesn't have those APIs. If you try to load jQuery into Node.js it won't work, because jQuery depends on the browser DOM APIs. The jsdom package gives you some of those DOM APIs.
Once you have fetched that web page to scrape, code like this may help you get what you need.
const jsdom = require("jsdom");
const { JSDOM } = jsdom;
...
const page = new JSDOM(page_in_text_string).window;
Then you can use a subset of the DOM APIs to find the elements you want in your page. In your example, the table you want matches the selector div.group table.details, and its parent is the div.group element.
You can do this sort of thing to find what you need:
const desiredTbl = page.document.querySelector("div.group table.details");
const desiredDiv = desiredTbl ? desiredTbl.parentNode : null;
const result = desiredDiv ? desiredDiv.textContent : null;
Finally do this:
page.close();
Your question says you want certain rows from your document. HTML documents don't have rows, they have elements. If you want to extract just parts of elements (part of the table rather than the whole thing), you'll need to walk the row elements or do some string work on the text. Just sayin'.
Also, I have not debugged any of this. That is left to you.
There's a smaller and faster library that does similar things, called node-html-parser. If performance is important you may want that one instead.
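For the row-level extraction mentioned above, here is a rough sketch using the DOM APIs that jsdom exposes; the sample markup is the fragment quoted in the question, and the variable names and the "Status:" filter are assumptions:
const jsdom = require("jsdom");
const { JSDOM } = jsdom;

// Sample markup taken from the question; in practice this would be the fetched page
const page_in_text_string =
    "<div class='group'><table class='details'>" +
    "<tr><th>Status:</th><td>With editors</td></tr>" +
    "</table></div>";

const page = new JSDOM(page_in_text_string).window;

const rows = page.document.querySelectorAll("div.group table.details tr");
const extracted = [];

rows.forEach(function (row) {
    const th = row.querySelector("th");
    const td = row.querySelector("td");
    if (th && td && th.textContent.trim() === "Status:") {
        extracted.push({
            label: th.textContent.trim(), // "Status:"
            value: td.textContent.trim()  // "With editors"
        });
    }
});

console.log(extracted);
page.close();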

Nodejs - Issues with socket connection webapp

I'm currently having an issue creating a web app. All of the pages we are creating can be viewed through a function that gets the post data and displays it on the page. The issue I'm having is that when the content is loaded, the corresponding JS for the sockets does not execute.
I believe this is because we're using:
socket.on("connect")
and the connect event only fires once, but I'm unsure how to fix this.
An example of the JS I'm currently using can be seen below.
function runJs(){
    var url = document.location.pathname.toLowerCase();
    if(url == '/account/create'){
        var socket = io();
        //once the socket connects, make calls
        socket.on("connect", function(){
            socket.on("accountCreated", function(data){
                if(typeof data.data.error !== "undefined"){
                    jQuery("#error").text(data.data.error);
                }
                else{
                    //account creation was successful and we're logged in.
                    //redirect to the home page
                    window.location.href = "/";
                }
            });
        });
    }
}
When executing another function (not posted here, as it's not important), it updates the page's HTML and runs the runJs() function seen above. I have confirmed through console.log that the function is indeed being called, but the code inside socket.on does not execute unless the page is reloaded.
Does anyone have any ideas about how I could fix this?
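As an illustrative sketch of the point about connect firing only once: if io() returns an already-connected socket the second time runJs() runs, a handler that is only registered inside the connect callback never gets attached. Registering the handler directly, under that assumption, would look like this:
function runJs(){
    var url = document.location.pathname.toLowerCase();
    if(url == '/account/create'){
        var socket = io(); // may reuse an existing, already-connected socket

        // Register the listener directly; it receives events regardless of
        // whether "connect" has already fired on this socket.
        socket.on("accountCreated", function(data){
            if(typeof data.data.error !== "undefined"){
                jQuery("#error").text(data.data.error);
            } else {
                window.location.href = "/"; // account created and logged in
            }
        });
    }
}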

Serving a different JS file from a server depending on the number of times a URL is visited by a client?

I have a web app that injects a server-based myjavascriptfile.js file from my server, using a jQuery AJAX GET request. Currently, this GET request is made every time the client visits https://www.google.co.uk.
However, I'd like to send the mysecondjavascriptfile.js file to the client instead if the client has visited https://www.google.co.uk more than 10 times.
Do you know of any way I can do this?
The first thing to do is to persist the hits the client makes to the site. I think sessionStorage could help here:
sessionStorage.counter = (Number(sessionStorage.counter) || 0) + 1; // storage values are strings, so convert before incrementing
var sources = {
    lessThanTen : 'http://yourscript.com/lessthan10hits.js',
    moreThanTen : 'http://yourscript.com/morethan10hits.js'
};

var script = document.createElement('script');

if(sessionStorage.counter >= 10){
    script.src = sources.moreThanTen;
} else {
    script.src = sources.lessThanTen;
}

document.getElementsByTagName('head')[0].appendChild(script);
This is, of course, a client-side count of the hits. You could implement a server-side check through AJAX, or just serve slightly different HTML markup after 10 requests. You'll need to use sessions (or just plain cookies) to persist the count on the server side.
AJAX verification:
var xhr = new XMLHttpRequest();
xhr.addEventListener('load', function(){
    var script = document.createElement('script');
    script.src = xhr.response;
    document.getElementsByTagName('head')[0].appendChild(script);
});
xhr.open('POST', 'http://www.urltocheckhits.com/hits');
xhr.withCredentials = true; // send the session cookie on the cross-origin request
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded'); // so body-parser can read the body
xhr.send('url=' + encodeURIComponent(window.location.hostname));
And then from Node.js (with body-parser and express-session):
var express = require('express');
var session = require('express-session');
var bodyParser = require('body-parser');

var app = express();
app.use(session({ secret: 'some secret', resave: false, saveUninitialized: true }));
var urlEncoded = bodyParser.urlencoded({ extended: false });

var sources = {
    lessThanTen : 'http://yourscript.com/lessthan10hits.js',
    moreThanTen : 'http://yourscript.com/morethan10hits.js'
};

app.post('/hits', urlEncoded, function(req, res){
    if(req.body){
        var url = req.body.url;
        if(!req.session.views){
            req.session.views = { };
        }
        if(req.session.views[url]){
            req.session.views[url]++;
        } else {
            req.session.views[url] = 1;
        }
        if(req.session.views[url] > 10){
            res.send(sources.moreThanTen);
        } else {
            res.send(sources.lessThanTen);
        }
    }
});

app.listen(80);
I suggest you check the documentation of express-session and body-parser.
Note that you'll need to add CORS Headers for this (you could just as easily do it with JSONP too instead of using XHR).
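Since the script is requested from a different origin, the CORS headers mentioned above could be added with a small Express middleware like this sketch; the allowed origin value is an assumption, and Access-Control-Allow-Credentials is needed because the session cookie has to travel with the request:
// Hypothetical origin of the site that embeds the script
var ALLOWED_ORIGIN = 'https://www.example.com';

app.use(function(req, res, next){
    res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);  // cannot be '*' when credentials are used
    res.setHeader('Access-Control-Allow-Credentials', 'true');     // allow the session cookie
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type'); // the AJAX call sets Content-Type
    res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS');
    if (req.method === 'OPTIONS') {
        return res.sendStatus(204); // answer the preflight request and stop here
    }
    next();
});
The middleware has to be registered before the /hits route so both the preflight and the actual request get the headers.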
It might be easier to just serve the JS file directly instead of doing the AJAX call and then including the returned script. Then you could simply use:
<script src="http://onesingleurl.com/hits">
Caching will behave oddly with this approach, though, which is why I favour the other one.
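A sketch of that simpler variant, reusing the sources object and session setup from above: the /hits route redirects straight to whichever script applies, so the <script> tag can point at it directly (the query-parameter fallback is an assumption):
app.get('/hits', function(req, res){
    var url = req.query.url || 'default';
    if(!req.session.views){
        req.session.views = { };
    }
    req.session.views[url] = (req.session.views[url] || 0) + 1;

    // Redirect to the script that matches the hit count; the browser follows
    // the redirect and executes whatever the target URL serves.
    if(req.session.views[url] > 10){
        res.redirect(sources.moreThanTen);
    } else {
        res.redirect(sources.lessThanTen);
    }
});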

Parsing progress live from console output - NodeJS

Link to a similar problem that has no answers, but written in C
I'm using NodeJS to parse output from ark-server-tools, which is a layer on top of SteamCMD. What I'd like to do is parse the progress of the update and store it in a variable, which I'll return from a GET call so the client can check the progress of the update.
I put the log output of an update into a file to run my code against, which I've put in a Pastebin for brevity.
update.js
var updatePercent; // shared with the /updateProgress endpoint below

app.get('/update', function(req, res) {
    var toReturn;
    var outputSoFar;
    var total;
    var startPos;
    var endPos = 0;
    //var proc = spawn('arkmanager', ['update', '--safe']);
    var proc = spawn('./update-log.sh'); //for testing purposes
    proc.stdout.on('data', function(data){
        outputSoFar += data.toString();
        //if server is already updated
        if (outputSoFar.indexOf('Your server is already up to date!') !== -1) {
            toReturn = 'Server is already up-to-date.';
        }
        //find update progress
        if (outputSoFar.indexOf('progress:') !== -1) {
            for(var line in outputSoFar.split('\n')){
                console.log('found progress');
                startPos = outputSoFar[line].indexOf('progress:', endPos) + 10; //get the value right after progress:_, which should be a number
                endPos = outputSoFar[line].indexOf(' (', startPos); // find the end of this value, which is signified by space + (
                console.log(outputSoFar[line].substring(startPos, endPos).trim());
                updatePercent = outputSoFar[line].substring(startPos, endPos).trim(); //returned to the `checkUpdateProgress` endpoint
            }
            toReturn = 'Updating...';
        }
    });
    proc.stderr.on('data', function(data){
        console.log(data);
    });
    proc.on('close', function (code, signal) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.write(JSON.stringify(toReturn));
        res.end();
    });
});

/*
 * Returns progress of an update
 */
app.get('/updateProgress', function(req, res){
    console.log('updatePercent: ' + updatePercent);
    res.send(JSON.stringify(updatePercent));
});
Couple questions:
1) Is this the best way to architect my RESTful API? One call for the action of updating and another for checking the progress of the update?
2) I'd love a better way to test the function, as echoing the console log returns the data in one piece, as opposed to a data stream. How do I do this?
3) I'm pretty sure the parsing function itself isn't quite right, but I'm having a hard time testing it because of #2.
If you want to take a look at the project in its entirety, here's the repo.
Thanks in advance for your help!
For one of your questions:
Is this the best way to architect my RESTful API? One call for the action of updating and another for checking the progress of the update?
As implemented now, I don't think your service can support concurrent requests correctly. updatePercent is a shared global variable. If I hit the /update endpoint with a single client, it will start the ./update-log.sh command.
If I request /update again, it will start another update and overwrite the global updatePercent. There doesn't seem to be anything mapping an updatePercent to the process it belongs to.
Additionally, there could be serious performance issues if every request spawns a new process. Node can handle hundreds or thousands of concurrent connections on a single thread, but here each request spawns a process of its own; that's something to profile.
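To illustrate the mapping the answer says is missing, here is a minimal sketch that gives each update a job id and keeps progress per job; the jobs object, the id scheme, and the /updateProgress/:id route are assumptions, not code from the question:
var express = require('express');
var spawn = require('child_process').spawn;

var app = express();
var jobs = {};      // hypothetical map of job id -> { percent, done }
var nextJobId = 1;

app.get('/update', function (req, res) {
    var id = String(nextJobId++);
    jobs[id] = { percent: null, done: false };

    var proc = spawn('./update-log.sh'); // same test script as in the question
    proc.stdout.on('data', function (data) {
        var match = data.toString().match(/progress:\s*([\d.]+)/); // grab the number after "progress:"
        if (match) {
            jobs[id].percent = match[1];
        }
    });
    proc.on('close', function () {
        jobs[id].done = true;
    });

    res.json({ jobId: id }); // the client polls /updateProgress/<jobId> with this id
});

app.get('/updateProgress/:id', function (req, res) {
    var job = jobs[req.params.id];
    if (!job) {
        return res.status(404).send('unknown job');
    }
    res.json(job);
});

app.listen(8080);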

Why is this node.js code blocking?

I tried to play a bit with node.js and wrote the following code (it doesn't make sense, but that doesn't matter):
var http = require("http"),
    sys = require("sys");

sys.puts("Starting...");

var gRes = null;
var cnt = 0;

var srv = http.createServer(function(req, res){
    res.writeHeader(200, {"Content-Type": "text/plain"});
    gRes = res;
    setTimeout(output, 1000);
    cnt = 0;
}).listen(81);

function output(){
    gRes.write("Hello World!");
    cnt++;
    if(cnt < 10)
        setTimeout(output, 1000);
    else
        gRes.end();
}
I know that there are some bad things in it (like using gRes globally), but my question is: why does this code block a second request until the first one completes?
If I open the URL, it starts writing "Hello World!" ten times. But if I open it simultaneously in a second tab, one tab waits to connect until the other tab has finished writing "Hello World!" ten times.
I found nothing that could explain this behaviour.
Surely it's your overwriting of the gRes and cnt variables used by the first request that's doing it?
[EDIT: actually, Chrome won't send two requests at once, as Shadow Wizard said, but the code as it stands is seriously broken, because each new request resets the counter and outstanding requests never get closed.]
Instead of using a global, wrap your output function as a closure within the createServer callback. Then it will have access to the local res variable at all times.
This code works for me:
var http = require("http"),
    sys = require("sys");

sys.puts("Starting...");

var srv = http.createServer(function(req, res){
    res.writeHeader(200, {"Content-Type": "text/plain"});
    var cnt = 0;
    var output = function() {
        res.write("Hello World!\n");
        if (++cnt < 10) {
            setTimeout(output, 1000);
        } else {
            res.end();
        }
    };
    output();
}).listen(81);
Note, however, that the browser won't render anything until the connection has closed, because the relevant headers that tell it to display content as it downloads aren't there. I tested the above using telnet.
I'm not familiar with node.js, but I am familiar with server-side languages in general: when a browser sends a request to the server, the server creates a Session for that request, and any additional requests from the same browser (within the session lifetime) are treated as the same Session.
Probably by design, and for good reason, requests from the same session are handled sequentially, one after the other: only after the server finishes handling one request will it start handling the next.
