Trace clicking behavior of visitors on a web page - javascript

I am writing my own home page in HTML and JavaScript.
I have many hyperlinks on the home page, and what interests me is how many times visitors to my page click on them.
For instance, pdf is a hyperlink which leads to downloading a PDF file. I would like to set up a mechanism that counts clicks on it, with this information automatically recorded in a file so that I can check it from time to time.
Besides the counter, other information such as the time of each click and the IP of the visitor who clicked interests me too. It would be great if I could record that as well.
I don't know JavaScript; could anyone suggest an efficient way to achieve this, with details (or a piece of code)?

As per your code:
<a href="..." id="uniqueID">PDF</a>
You can see I have added an ID to the PDF link. Now:
$("#uniqueID").click(function(){
    // Make an ajax call here to increment the stored count
});
I have not written the entire code; I think this is sufficient for you to understand the logic.
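To make that concrete, here is a minimal sketch. The endpoint name /track-click.php is hypothetical: it stands for a small server-side script that appends one line per click to a log file. Note that the visitor's IP and a trustworthy timestamp have to be recorded on the server, because client-side JavaScript cannot see the visitor's IP or write to a file.

$("#uniqueID").on("click", function () {
    // Hypothetical endpoint; the server-side script behind it would
    // append the link name, the server time and the requester's IP to a file.
    $.post("/track-click.php", { link: "pdf" });
});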

Capture the state of a web page in a URL

I find myself having to interact with a web page that hides state in various places so that one cannot easily share it as a URL, for example this page which allows users to look up information from city zoning applications:
https://aca.cityofberkeley.info/community/Default.aspx
You can interact with the page all you want, but the URL in the location bar will remain the same as the above.
Currently, city staff provide users with instructions like "Load this URL, click on the 'Zoning' tab, enter DRCP2020-0010 under the 'Permit Number' field, click 'Search', then when the records come up, click 'Record Info' and then select 'Attachments' from the dropdown menu, then click on the PDF document that says '2020-10-21_DRCP_APP_PCKT_2801 Adeline.pdf'". I would like to be able to replace these instructions with a URL.
Another example is the website where video from city council meetings is archived:
http://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa
It would be nice to be able to produce a link which brings up one of the meeting videos, and seeks to a certain timestamp like 53:40, so that I can refer to something specific that was said at a meeting.
Looking at the pages that are loaded when I follow the instructions in each case, I can see that there are some POST forms, cookies, hidden input fields, and so on.
Is there some kind of tool that I can use to create "deep links" to pages like these, that were generated using non-URL hidden state, which will allow me to quickly share what I'm looking at with another user?
What I'm seeking is similar to the frmget "bookmarklet", which changes the forms on a page to use GET instead of POST. Sometimes this succeeds in producing a URL which captures form submission query parameters. However, it doesn't work for these applications, for whatever reason.
This question is possibly related to the idea of capturing a web page's DOM state using "browser screenshots" and a script called html2canvas. A possible solution might involve getting and setting cookies in a bookmarklet. Something that produces a normal "https://" URL would be ideal, but if the problem can only be solved by outputting a "javascript:" URL (a bookmarklet), that is acceptable to me (in spite of the security implications). Thanks.
That doesn't seem like a programming matter. It also seems like the site has some security issues.
QUESTION A: About Zoning
Here are some links you can use
Direct link to Zoning (I found it via the Advanced Search on the site):
https://aca.cityofberkeley.info/CitizenAccess/Cap/CapHome.aspx?module=Planning&TabName=Planning&TabList=Home%7C0%7CBuilding%7C1%7CHousing%7C2%7CPlanning%7C3%7CFire%7C4%7CLicenses%7C5%7CPublicWorks%7C6%7CCurrentTabIndex%7C3
A strange link to the list of files (I found it by downloading a file, going to chrome://downloads, and then right-clicking the file I had downloaded; the link was the following):
https://aca.cityofberkeley.info/CitizenAccess/FileUpload/AttachmentsList.aspx?iframeid=ctl00_PlaceHolderMain_attachmentEdit&module=Planning&isInConfirm=False&isdetail=True&isaccountmanager=False&isAdmin=True&isPeopleDocument=&agencyCode=BERKELEY&isForConditionDocument=N
It still doesn't give the direct link to the file, but it gives the list of attachments of the previously opened Zoning record.
Currently I have no idea what file is triggered by javascript:__doPostBack('attachmentList$gdvAttachmentList$ctl02$lnkFileName','').
In any case, based on what we have, step one followed by step two seems like the shortest path to downloading the file. I guess there could be a way to download the file directly, but I currently don't see an easy one. Maybe someone else can figure it out.
QUESTION B: About video
I've used an embed link that shows all the attributes that can be used.
There is a pretty strange but working way to give the exact timestamp. Change starttime from the link below:
https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa&starttime=0&stoptime=undefined&autostart=1
So replacing the 0 with 3600 will skip the video forward by one hour (3600 seconds):
https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa&starttime=3600&stoptime=undefined&autostart=1
The problem here is that you cannot manually rewind back into that skipped hour (it just gets sort of cropped out). But it works for showing the exact segment.
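Since the question mentions timestamps like 53:40, here is a small helper, as a sketch, for turning "mm:ss" (or "hh:mm:ss") notation into the starttime value:

// toStartTime("53:40") === 3220
function toStartTime(stamp) {
    return stamp.split(":").reduce((total, part) => total * 60 + Number(part), 0);
}

var base = "https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa";
console.log(base + "&starttime=" + toStartTime("53:40") + "&autostart=1");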
That's a pretty strange site.

What is preferred implementation of Chrome extension that catches and reads newly added list values on one specific web page?

I need to create a Chrome extension that will work only on one webpage with a specific URL. It will monitor changes to a list of items (orders) on the page, and if a new order appears, it will read some values from the order and do something with them. It may also be necessary to refresh the page from time to time (using a timer, maybe).
What architecture would be suitable to accomplish such a task?
Now, to the thoughts I have so far: I am thinking of using only one content script bound to the page URL. Will it be enough? Or should I introduce some background script as well? Or anything else?
As wOxxOm said in the comments, a single content script should be sufficient for reading the values and refreshing the page.
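For the monitoring part, a MutationObserver in the content script is the usual approach. Here is a minimal sketch; the selectors #orders and .order-id are hypothetical and need to be adapted to the real page markup:

// content.js - injected only on the matching URL declared in manifest.json
const ordersList = document.querySelector("#orders");

const observer = new MutationObserver((mutations) => {
    for (const mutation of mutations) {
        for (const node of mutation.addedNodes) {
            if (node.nodeType === Node.ELEMENT_NODE) {
                // Read whatever values the new order row exposes
                const orderId = node.querySelector(".order-id")?.textContent;
                console.log("New order:", orderId);
            }
        }
    }
});

observer.observe(ordersList, { childList: true, subtree: true });

// Optional periodic refresh, as the question suggests
setInterval(() => location.reload(), 5 * 60 * 1000);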

How to create a page location in browser using javascript/jQuery?

I have a setup where I display a list of buttons, and clicking one of the buttons triggers a function that contacts a Firebase database and gets the contents of a 'slide' that is to be shown to the user. The function then clears the content of the page and creates elements from the data acquired from the database.
Now obviously, once I've replaced the content, pressing the browser's back button won't take me back to the previous content. But I believe my users' experience would be much better if it actually took them back to the list of buttons. I have two faint ideas on how to go about solving this problem, but I'm lacking the specific details of how to implement them.
Possible Solution 1:
Some way to dynamically create a new page using javascript and then serve it to the user.
Possible Solution 2:
Some way to simulate that the page has changed location. Maybe using anchoring links.
Let me know if you have any other solutions in mind or if you know how I should go about implementing these. Your help will be much appreciated. :D
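One way to implement Possible Solution 2 is the History API: history.pushState() adds a history entry without reloading the page, and a popstate handler restores the previous view when the user presses back. Here is a minimal sketch; showSlide, loadSlideFromFirebase and renderButtonList are hypothetical names standing in for the asker's existing functions:

function showSlide(slideId) {
    loadSlideFromFirebase(slideId);  // the existing loader that rebuilds the page
    history.pushState({ slideId: slideId }, "", "#slide-" + slideId);
}

window.addEventListener("popstate", (event) => {
    if (event.state && event.state.slideId) {
        loadSlideFromFirebase(event.state.slideId);
    } else {
        renderButtonList();  // back to the original list of buttons
    }
});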

modifying HTML page "on the fly"

I looked at "Generating HTML Page on the fly" on this website, but most of it was over my head.
I have a 2 part question that I would like assistance with please.
I want to fill a narrow vertical container, <div id="counter">, with the numbers 1 .. <xx>.
<xx> is determined by the record count of a database, filtered "on the fly" by the user choosing a category (no problem there; I have an SQL background).
Eg. Category1: 1 .. 200
Category2: 1 .. 6
These numbers could change over time, as I want to allow users to add content to the database (vetted of course).
I have viewed the source code of a number of websites with similar ideas (e.g. surgicalexam.com), but they have all been hard-coded and are distinct pages per category.
I have created a small website of a similar nature to that, hard-coding all the images and links, but I am looking at 3000+ images (as a starting point here), and they differ per page.
I have created this scenario many times in stand-alone apps, and from past experience I thought perhaps I could create a JavaScript routine which would use a loop to:
• print the numbers to the <div> using getElementById();
• fill an array with the record number, a title and an image link.
Question 1: Is this possible or am I beating a “dead horse”?
If it is possible, any suggestions would be gratefully accepted.
Part 2:
My current idea is that, as the user hovers the mouse over any number, a mouseover() event will occur that reads the appropriate array record and displays the title as tool-tip text.
If the user clicks the number, a function (which I have yet to write) will read the appropriate array record, attach the image link to an <a> tag, and subsequently display the appropriate image on the screen.
Question 2: repeat of question 1.
"I have viewed the source code of a number of websites with similar ideas (e.g. surgicalexam.com), but they have all been hard-coded and are distinct pages per category."
Why are you so sure about that? You can't see PHP code, because it is executed on the server. There is no way to know whether a page was hard-coded or generated by PHP.
Answer:
It is possible.
If I understand this correctly, you want to read some data from a database and if the user clicks / hovers something, you want to load more data?
You have to split this into two things:
Load the data from the DB with PHP (server side).
If you want live, visual feedback, you need JavaScript (and/or CSS3) to make the changes (client side).
One possible solution is to create an API with PHP (maybe REST-like) and then call that API with JavaScript.
You could also do everything with PHP, but that would require a reload of the website on every click; PHP cannot make changes on the fly.
First of all, you should learn the basics of web development.
And most important: if you decide to learn web programming, learn about security too, for example cross-site scripting and SQL injection. Never trust data coming from a client (e.g. JavaScript)!
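To make the API-plus-JavaScript suggestion concrete, here is a minimal client-side sketch. The endpoint /api/records.php, the JSON shape, and the element IDs counter and viewer are all assumptions to be adapted:

// Fetch the records for a category and build the numbered list
async function loadCategory(category) {
    const response = await fetch("/api/records.php?category=" + encodeURIComponent(category));
    const records = await response.json();  // assumed shape: [{ title: "...", image: "..." }, ...]

    const counter = document.getElementById("counter");
    counter.innerHTML = "";

    records.forEach((record, i) => {
        const num = document.createElement("a");
        num.textContent = i + 1;
        num.title = record.title;  // shown as the tool-tip on hover
        num.href = "#";
        num.addEventListener("click", (e) => {
            e.preventDefault();
            document.getElementById("viewer").src = record.image;  // an <img> element
        });
        counter.appendChild(num);
    });
}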

How to scrape website data into an Excel worksheet?

I'm a novice programmer trying to compile an Excel list of all the inc5000 companies and their industry, location, revenue, and CEO. Is there any way for me to automate this so that I don't have to manually input all 5000?
Some issues:
- The inc5000 list only displays 50 companies per page, and scrolling to the next page does not change the URL. I tried fetching the page as HTML, but none of the data actually shows up in the HTML code (I used https://try.jsoup.org/~LGB7rk_atM2roavV0d-czMt3J_g).
- All of the information I need is on this one scrolling page (https://www.inc.com/profile/loot-crate), but the URL changes for each company as you progress down the page. Is there any way to grab the data from this site without manually changing 5000 URLs?
I'm really new to programming and I know next to nothing about HTML/JavaScript/Web design-- I only know basic Java. I would really appreciate any help or potential leads into a solution.
Here's the easy way:
Go to the page, hit F12, open the "Network" tab of the debug tools, and select XHR (to filter to only the data calls); then scroll to the bottom of the page. The page makes a query for each company, which you can inspect in the debug tools.
Once you have all the pages, you can highlight all the rows in the file-name list on the left, right-click, and save them to a .har file.
From there, just write a script to pull out the JSON and you're set.
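As a sketch of that last step: a .har file is itself JSON, with each captured response under log.entries[n].response.content. The file name inc5000.har is an assumption, and you should inspect one parsed entry to learn inc.com's actual field names:

// Node.js script: extract the JSON response bodies from a saved HAR file
const fs = require("fs");

const har = JSON.parse(fs.readFileSync("inc5000.har", "utf8"));

const responses = har.log.entries
    .filter((entry) => (entry.response.content.mimeType || "").includes("json"))
    .map((entry) => {
        let text = entry.response.content.text;
        if (entry.response.content.encoding === "base64") {
            text = Buffer.from(text, "base64").toString("utf8");  // bodies may be base64-encoded
        }
        return JSON.parse(text);
    });

console.log("Parsed", responses.length, "JSON responses");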
