Loading page contents into one page (Duplicated Content) bad for SEO? - javascript

I'm building a page on my site (WordPress) where I pull multiple posts of the post type people (URL: site.com/people/name-of-person) into a page called people (URL: site.com/people), designed as a list of people.
What I am essentially doing is: in my list of people, each person has a thumbnail, and upon clicking it, that person's profile is displayed underneath via JavaScript. This pulls in their name, job position, favourite quote, details about them, some Instagram photos and a larger profile image, so you can quickly click through each person's profile, navigate around, etc.
Now I'm thinking about how to structure the page. Does it make more sense to have all the text for each person (title, job position, etc.) already loaded into the page? BUT this text would be an almost exact duplicate of the individual post for that person (which I don't intend to link to directly anywhere on the site; visitors will always be directed to /people).
Will this structure, with the duplicated content, have a bad effect on SEO? Or should I not worry too much about this?
Thanks in advance,
Craig

Loading every profile up front obviously takes more time and makes the page load slowly, which in turn causes trouble for users with a very slow internet connection or a limited data plan.
Instead, keep the data on the page minimal, and when the user clicks on a thumbnail, redirect them to another page where you provide the person's details...
OR
USE AJAX
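A minimal sketch of the AJAX approach, assuming the people post type is registered with show_in_rest enabled (so WordPress exposes it at /wp-json/wp/v2/people) and that each thumbnail carries a data-post-id attribute; the .person-thumbnail and #profile-panel selectors are placeholders for your own markup:

// Load a person's profile only when their thumbnail is clicked.
document.querySelectorAll('.person-thumbnail').forEach(function (thumb) {
  thumb.addEventListener('click', function () {
    var postId = thumb.dataset.postId; // assumed data-post-id attribute on the thumbnail

    // Fetch the single "people" post from the WordPress REST API.
    fetch('/wp-json/wp/v2/people/' + postId)
      .then(function (response) { return response.json(); })
      .then(function (post) {
        // Render only the fields needed for the inline profile.
        document.querySelector('#profile-panel').innerHTML =
          '<h2>' + post.title.rendered + '</h2>' + post.content.rendered;
      })
      .catch(function (err) { console.error('Failed to load profile', err); });
  });
});

This way the /people page only contains the thumbnails and whatever summary text you want indexed, and the full profile text never has to be duplicated in the list page's HTML.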

Related

How to send a query to another website page and find a .length value that I can then import to my website

I am trying to display on my webpage how many customer reviews there are; however, the customer reviews are on another website.
So my idea was to target the DOM of that website and find out how many reviews there are by using .length, which returns the correct value.
But I need to somehow link that .length value to my own website so that people who view it can see, in text, how many reviews there are. There will be a hyperlink below this to direct the viewer to the other site to read them.
I want this because every time someone leaves an additional review on the other website the .length value will change, and I need this to be reflected automatically on my website.
Any ideas how this can be achieved?
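The catch is that a browser will not let your page read another site's DOM directly (same-origin policy), so the counting usually has to happen on a server you control, which your page then asks for the number. A minimal Node.js sketch, assuming you can run a small server; the reviews URL and the class="review" marker are placeholders for however the real site marks up its reviews:

// Fetch the other site's HTML on the server and count the review elements.
const https = require('https');

https.get('https://example.com/reviews', function (res) {
  let html = '';
  res.on('data', function (chunk) { html += chunk; });
  res.on('end', function () {
    // Rough count: how many times the review element's class appears in the markup.
    const count = (html.match(/class="review"/g) || []).length;
    console.log('Review count:', count);
    // Expose this number to your own page, e.g. from a small JSON endpoint,
    // and re-run it on an interval so new reviews show up automatically.
  });
});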

Capture the state of a web page in a URL

I find myself having to interact with a web page that hides state in various places so that one cannot easily share it as a URL, for example this page which allows users to look up information from city zoning applications:
https://aca.cityofberkeley.info/community/Default.aspx
You can interact with the page all you want, but the URL in the location bar will remain the same as the above.
Currently, city staff provide users with instructions like "Load this URL, click on the 'Zoning' tab, enter DRCP2020-0010 under the 'Permit Number' field, click 'Search', then when the records come up, click 'Record Info' and then select 'Attachments' from the dropdown menu, then click on the PDF document that says '2020-10-21_DRCP_APP_PCKT_2801 Adeline.pdf'". I would like to be able to replace these instructions with a URL.
Another example is the website where video from city council meetings is archived:
http://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa
It would be nice to be able to produce a link which brings up one of the meeting videos, and seeks to a certain timestamp like 53:40, so that I can refer to something specific that was said at a meeting.
Looking at the pages that are loaded when I follow the instructions in each case, I can see that there are some POST forms, cookies, hidden input fields, and so on.
Is there some kind of tool that I can use to create "deep links" to pages like these, that were generated using non-URL hidden state, which will allow me to quickly share what I'm looking at with another user?
What I'm seeking is similar to the frmget "bookmarklet", which changes the forms on a page to use GET instead of POST. Sometimes this succeeds in producing a URL which captures form submission query parameters. However, it doesn't work for these applications, for whatever reason.
This question is possibly related to the idea of capturing a web page's DOM state using "browser screenshots" and a script called html2canvas. A possible solution might involve getting and setting cookies in a bookmarklet. Something that produces a normal "https://" URL would be ideal, but if the problem can only be solved by outputting a "javascript:" URL (bookmarklet), that is acceptable to me (in spite of the security implications). Thanks.
That doesn't seem like a programming matter. It also seems like the site has some security issues.
QUESTION A: About Zoning
Here are some links you can use
Direct link to Zoning (I found it via the Advanced Search on the site):
https://aca.cityofberkeley.info/CitizenAccess/Cap/CapHome.aspx?module=Planning&TabName=Planning&TabList=Home%7C0%7CBuilding%7C1%7CHousing%7C2%7CPlanning%7C3%7CFire%7C4%7CLicenses%7C5%7CPublicWorks%7C6%7CCurrentTabIndex%7C3
A strange link to the list of files (I found it by downloading a file, going to chrome://downloads, and right-clicking the file I had downloaded; the link was the following):
https://aca.cityofberkeley.info/CitizenAccess/FileUpload/AttachmentsList.aspx?iframeid=ctl00_PlaceHolderMain_attachmentEdit&module=Planning&isInConfirm=False&isdetail=True&isaccountmanager=False&isAdmin=True&isPeopleDocument=&agencyCode=BERKELEY&isForConditionDocument=N
It still doesn't give a direct link to the file, but it does give the list of attachments of the previously opened Zoning record.
Currently I have no idea what file is triggered by javascript:__doPostBack('attachmentList$gdvAttachmentList$ctl02$lnkFileName','').
In any case, based on what we have, step one followed by step two seems like the shortest path to downloading the file. I guess there could be a way to download the file directly, but I currently don't see an easy one. Maybe someone else could figure it out.
QUESTION B: About video
I've used an embed link that shows all the attributes that can be used.
There is a pretty strange but working way to give an exact timestamp: change starttime in the link below:
https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa&starttime=0&stoptime=undefined&autostart=1
So replacing 0 with 3600 will skip the video forward by one hour (3600 seconds):
https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa&starttime=3600&stoptime=undefined&autostart=1
The problem here is that you cannot manually rewind back through that skipped hour (it just gets kind of cropped out). But it works for pointing at the exact part.
That's a pretty strange site.
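If it helps, here is a small sketch that turns a human-readable timestamp such as the 53:40 mentioned in the question into the starttime value; the publish_id is the one from the links above:

// Convert "mm:ss" or "hh:mm:ss" into seconds for the starttime parameter.
function toSeconds(timestamp) {
  return timestamp.split(':').reduce(function (total, part) {
    return total * 60 + Number(part);
  }, 0);
}

var base = 'https://berkeley.granicus.com/MediaPlayer.php';
var publishId = 'cbebb4e6-5b83-11eb-920e-0050569183fa';
var url = base + '?publish_id=' + publishId + '&starttime=' + toSeconds('53:40') + '&autostart=1';
console.log(url); // link that starts the video at 53:40 (3220 seconds)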

How to reload page with new data in Node.js

I'm using express-handlebars as my template engine and I have created an 'Article' template. I have 5 articles in total. I need to show the user each article but randomly. How would I create a 'Next' button that could reload the page with a new article?
I have thought of using a cookie in the browser and implementing some sort of array within it to decide the next article to be shown (the array will be randomised).
As for loading the articles, I have thought of creating a new page for each article and then redirecting the user to a random page when they click 'Next', but that wouldn't make much use of the template engine.
I don't have code to show as I'm looking for a concept that would work.
I want the user to open my website, be shown a random article, click next and another article appear. I don't want each user to have the same sequence of articles (obviously with many users this is impossible but I'd like to minimise it).
For 5 articles:
1. track the articles already read by a user via cookies
2. add a query parameter to the article template link, like random=1, so the server responds with a random article, and another parameter, not_in=*already read article ids*, to exclude those (see the sketch below).
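A rough sketch of those two points with Express, cookie-parser and a recent express-handlebars (the setup mirrors the express-handlebars docs); the /article route, the 'article' template name and the placeholder articles array are assumptions:

const express = require('express');
const cookieParser = require('cookie-parser');
const { engine } = require('express-handlebars');

const app = express();
app.engine('handlebars', engine());
app.set('view engine', 'handlebars');
app.use(cookieParser());

// Placeholder data; in practice these would come from wherever the articles live.
const articles = [1, 2, 3, 4, 5].map(id => ({ id, title: 'Article ' + id }));

app.get('/article', (req, res) => {
  // Ids the user has already seen, stored in a cookie.
  const read = req.cookies.read ? JSON.parse(req.cookies.read) : [];
  const unread = articles.filter(a => !read.includes(a.id));
  const pool = unread.length ? unread : articles; // start over once all five are read
  const article = pool[Math.floor(Math.random() * pool.length)];

  // Remember this article so the next request picks a different one.
  res.cookie('read', JSON.stringify([...read, article.id]));
  res.render('article', { article });
});

app.listen(3000);

The 'Next' button in the template can then simply link back to /article: each request picks a random unread article, and because the read list lives in each user's own cookie, different users get different sequences.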

Setting priority for functions and capping the "cache" in MB

I am developing a gallery for huge pictures, sounds and videos (= items). The page shows a lot of galleries, and each gallery contains a lot of items. Each item is pulled via AJAX at the moment the user asks for it (by clicking "next", for example). At the moment I am caching all viewed items for the lifetime of the page in an extra div container, so that no item needs to be requested via AJAX a second time. But that's only a small, if noticeable, speed advantage.
I would love to do two things:
The user is viewing the first gallery. All items he has viewed will be cached. While he is not doing anything, maybe watching a video on the page or the current image, I want my script to load all the other items of the page, one after the other, and move them to the cache. But as soon as the user clicks to view the next item, this caching process needs to be paused, so it has priority 2. Once the requests the user asked for are finished, my caching process shall continue. How can I achieve this?
And how can I say that caching should stop once maybe 25 MB of cached items have been received? How can I see the RAM used by a div containing the items?
Hopefully I have explained what I want and somebody has good, understandable ideas for me. :)
Best!
Falk
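Not a complete answer, but a rough sketch of the pausable prefetch queue described in the question above; the item URLs are placeholders, and the byte count is only an estimate based on the size of each downloaded Blob (which is also an easier number to track than the RAM used by a div):

// Pausable background prefetch with an approximate byte budget.
var prefetchQueue = ['item2.jpg', 'item3.mp4']; // placeholder URLs for the remaining items
var itemCache = {};                             // url -> Blob, replaces the hidden div cache
var cachedBytes = 0;
var BYTE_LIMIT = 25 * 1024 * 1024;              // stop prefetching after roughly 25 MB
var paused = false;

function prefetchNext() {
  if (paused || cachedBytes >= BYTE_LIMIT || prefetchQueue.length === 0) return;
  var url = prefetchQueue.shift();
  fetch(url)
    .then(function (response) { return response.blob(); })
    .then(function (blob) {
      itemCache[url] = blob;       // keep the item for later display
      cachedBytes += blob.size;    // rough measure of how much has been cached
      prefetchNext();              // continue with the next item
    });
}

// Call this for items the user explicitly asks for: it pauses the background
// loading (priority 1 beats priority 2) and resumes it once the item has arrived.
function loadItemForUser(url) {
  paused = true;
  return fetch(url)
    .then(function (response) { return response.blob(); })
    .then(function (blob) {
      itemCache[url] = blob;
      cachedBytes += blob.size;
      paused = false;
      prefetchNext();
      return blob;
    });
}

prefetchNext(); // start background loading once the first gallery is shown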

Trace clicking behavior of visitors on a web page

I am writing my own home page in HTML and JavaScript.
I have many hyperlinks on the home page, and what interests me is how many times visitors of my page click on them.
For instance, PDF is a hyperlink which leads to downloading a PDF file. I would like to set up a mechanism to count clicks on it; for instance, this information could be automatically recorded in a file so that I can check it from time to time.
Besides the counter, other information such as the time of the click and the IP of the visitor who clicked interests me too. It would be great if I could record that as well.
I don't know JavaScript; could anyone suggest an efficient way to achieve this, with details (or a piece of code)?
As per your code (the href below is a placeholder for your actual PDF file):
<a href="path/to/your.pdf" id="uniqueID">PDF</a>
You can see I have added an ID to the PDF link; now:
$("#uniqueID").click(function(){
    // Write an AJAX call here that sends the click to your server for counting and storage
});
I have not written the entire code; I think it's sufficient for you to understand the logic.
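To actually count the clicks and record the time and the visitor's IP, the AJAX call needs a small server-side endpoint, because the IP is only visible on the server. One possible sketch with jQuery on the page and Node.js/Express on the server; the /log-click route, the clicks.log file name and the "pdf" label are all assumptions:

// On the page: send each click to the server.
$("#uniqueID").click(function () {
  $.post("/log-click", { link: "pdf" }); // assumed endpoint on your own server
});

// On the server (Node.js/Express): append one log line per click.
const express = require('express');
const fs = require('fs');
const app = express();
app.use(express.urlencoded({ extended: false })); // parses the form-encoded $.post body

app.post('/log-click', (req, res) => {
  // One line per click: which link, when, and from which IP.
  const line = [req.body.link, new Date().toISOString(), req.ip].join(' ') + '\n';
  fs.appendFile('clicks.log', line, () => res.sendStatus(204));
});

app.listen(3000);

Counting is then just a matter of counting the lines in clicks.log (overall or per link label).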
