I am currently doing some work on a research database where they have decided that they want to be able to share links to articles from the site on social networks (Facebook, Twitter, LinkedIn and Google+).
Preferably this should be done through the share buttons provided by the respective networks. I quickly got the buttons working and displayed correctly on the site by following the implementation instructions from each network.
My problem stems from the fact that the site offers the possibility to show 1,000 (1K) posts on a single search result page. This means that such a page needs to create 1,000 share buttons for each social network (effectively 4,000 buttons).
Sadly this seems to overwhelm the browser: it offers to stop the JavaScript provided by the social networks, and whether you choose to stop it or not, the page ends up deadlocked waiting for a response from the social networks and never finishes loading.
My theory is that the large number of asynchronous requests means the browser somehow misses some of the responses and thus ends up waiting forever for a response that will never come.
As mentioned, it is only a problem with such a large number of posts; if a page displays, for example, 100 posts (effectively 400 share buttons) it works perfectly.
While it could be argued that 1,000 posts on a single page is overkill, limiting the maximum number of displayed posts is sadly not an option.
My question therefore is whether any of you know of a way to solve this kind of problem, or if my only real option is to create custom share buttons that don't need to be created through the JavaScript provided by the social networks?
The following references lead to the documentation for each of the share buttons.
Twitter
Facebook
LinkedIn
Google+
For all these buttons, there is a main JS file which does the heavy lifting.
So, for LinkedIn, add the script tag:
<script src="//platform.linkedin.com/in.js" type="text/javascript"></script>
once in the page. Then use the below script as a placeholder for your LinkedIn button wherever you need it (don't forget to replace the data-url attribute in the script below).
<script type="IN/Share" data-url="http://developer.linkedin.com/plugins/share-plugin-generator" data-counter="top"></script>
Similarly for Twitter, the below script tag needs to be added once in the page, as its job is to fetch the main JS file and add it to the page.
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
The below markup needs to be added as many times as needed, wherever you want a button. Replace the data-url attribute with the URL which should be tweeted when it is clicked.
<a href="https://twitter.com/share" class="twitter-share-button" data-url="http://example.com/my-article">Tweet</a>
When you get the code for the FB Like or Google+ button, you will likewise get a script which needs to be added once, and then markup to be added wherever you need it.
EDIT:
Based on your comment below: the scripts will surely cause issues because they need to convert each and every placeholder into a good-looking button. Below are a few ways to improve the performance:
Run these scripts only on page load (i.e., add the main scripts at load time).
Using setTimeout or setInterval, work on 100 placeholders at a time (requires changes to the main scripts).
Lazy-load the initialization of the buttons: initialize them only when the user scrolls and they come into view (requires changes to the main scripts).
Recommended approach: keep just one set of share buttons. When the user hovers over a search result, move this set of buttons into that div and change the URL-related attributes on the buttons. This way only one set of buttons exists and initializing it takes no time at all; see the sketch below.
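A minimal sketch of that recommended approach, assuming each result is a div with class search-result carrying a data-url attribute and a container with id share-buttons holds the single set of buttons (all of these names are placeholders):
// Run once, after the networks' main SDK scripts have loaded.
var shareSet = document.getElementById('share-buttons');
document.querySelectorAll('.search-result').forEach(function (result) {
  result.addEventListener('mouseenter', function () {
    // Move the single set of buttons into the hovered result...
    result.appendChild(shareSet);
    // ...and point every button at this result's URL. Attribute names
    // differ per network; data-url is used here for illustration.
    shareSet.querySelectorAll('[data-url]').forEach(function (btn) {
      btn.setAttribute('data-url', result.getAttribute('data-url'));
    });
    // Most SDKs must re-scan changed markup, e.g. twttr.widgets.load(shareSet)
    // for Twitter; check each network's documentation.
  });
});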
I find myself having to interact with a web page that hides state in various places so that one cannot easily share it as a URL, for example this page which allows users to look up information from city zoning applications:
https://aca.cityofberkeley.info/community/Default.aspx
You can interact with the page all you want, but the URL in the location bar will remain the same as the above.
Currently, city staff provide users with instructions like "Load this URL, click on the 'Zoning' tab, enter DRCP2020-0010 under the 'Permit Number' field, click 'Search', then when the records come up, click 'Record Info' and then select 'Attachments' from the dropdown menu, then click on the PDF document that says '2020-10-21_DRCP_APP_PCKT_2801 Adeline.pdf'". I would like to be able to replace these instructions with a URL.
Another example is the website where video from city council meetings is archived:
http://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa
It would be nice to be able to produce a link which brings up one of the meeting videos, and seeks to a certain timestamp like 53:40, so that I can refer to something specific that was said at a meeting.
Looking at the pages that are loaded when I follow the instructions in each case, I can see that there are some POST forms, cookies, hidden input fields, and so on.
Is there some kind of tool that I can use to create "deep links" to pages like these, that were generated using non-URL hidden state, which will allow me to quickly share what I'm looking at with another user?
What I'm seeking is similar to the frmget "bookmarklet", which changes the forms on a page to use GET instead of POST. Sometimes this succeeds in producing a URL which captures form submission query parameters. However, it doesn't work for these applications, for whatever reason.
This question is possibly related to the idea of capturing a web page's DOM state using "browser screenshots" and a script called html2canvas. A possible solution might involve getting and setting cookies in a bookmarklet. Something that produces a normal "https://" URL would be ideal, but if the problem can only be solved by outputting a "javascript:" URL (bookmarklet), that is acceptable to me (in spite of the security implications). Thanks.
This doesn't really seem like a programming matter, and it seems like the site has some security issues as well.
QUESTION A: About Zoning
Here are some links you can use.
A direct link to Zoning (I found it via the Advanced Search on the site):
https://aca.cityofberkeley.info/CitizenAccess/Cap/CapHome.aspx?module=Planning&TabName=Planning&TabList=Home%7C0%7CBuilding%7C1%7CHousing%7C2%7CPlanning%7C3%7CFire%7C4%7CLicenses%7C5%7CPublicWorks%7C6%7CCurrentTabIndex%7C3
A strange link to the list of files (I found it by downloading a file, going to chrome://downloads, and right-clicking the file I had downloaded; the link was the following):
https://aca.cityofberkeley.info/CitizenAccess/FileUpload/AttachmentsList.aspx?iframeid=ctl00_PlaceHolderMain_attachmentEdit&module=Planning&isInConfirm=False&isdetail=True&isaccountmanager=False&isAdmin=True&isPeopleDocument=&agencyCode=BERKELEY&isForConditionDocument=N
It still doesn't give a direct link to the file, but it does give the list of attachments of the previously opened Zoning record.
Currently I have no idea which file is triggered by javascript:__doPostBack('attachmentList$gdvAttachmentList$ctl02$lnkFileName','').
In any case, based on what we have, the next step seems to be minimizing the path to download the file. I guess there could be a way to link to the file directly, but I currently don't see an easy one. Maybe someone else can figure it out.
QUESTION B: About video
I've used an embed link that shows all the attributes that can be used.
There is a pretty strange but working way to give an exact timestamp: change the starttime parameter in the link below:
https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa&starttime=0&stoptime=undefined&autostart=1
So replacing 0 with 3600 will skip the video forward by one hour (3600 seconds):
https://berkeley.granicus.com/MediaPlayer.php?publish_id=cbebb4e6-5b83-11eb-920e-0050569183fa&starttime=3600&stoptime=undefined&autostart=1
The problem here is that you cannot manually rewind back into that first hour (it effectively gets cropped out), but it works for pointing at the exact moment.
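For convenience, here is a small sketch of building such a link; the helper name and the "hh:mm:ss" handling are my own additions, not something the site provides:
// Turn "mm:ss" or "hh:mm:ss" into the whole-seconds value starttime expects.
function granicusLink(publishId, timestamp) {
  var seconds = timestamp.split(':').reduce(function (total, part) {
    return total * 60 + Number(part);
  }, 0);
  return 'https://berkeley.granicus.com/MediaPlayer.php' +
    '?publish_id=' + publishId + '&starttime=' + seconds + '&autostart=1';
}
// granicusLink('cbebb4e6-5b83-11eb-920e-0050569183fa', '53:40')
// -> ...&starttime=3220&autostart=1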
That's a pretty strange site.
I researched this quite a bit, and though it seems simple, I couldn't find the answer.
So I have a website that has different articles, each with a custom Facebook share button. Every time the user wants to share, I activate the JavaScript SDK and it works. However, it shares the Open Graph tags that are defined in the header. How would I do that dynamically? I want to share the specific content of the article. Is Open Graph the right way?
I assume that your use case is something like the main page of a blog, where you wish to display several articles on the same page and have a like button for each one.
However, when Facebook crawls your page, it looks at the Open Graph meta tags of this main page only, whereas you want each share button to be specific to its post.
Have a read of the instructions on this page.
In your situation, here is what you will need to do:
Set the data-href attribute of each share button to the individual page that the button is for.
For the individual pages, set the Open Graph meta tags appropriately.
This way, when Facebook queries the Open Graph meta tags, it will read them not from the main page but from the individual pages you have specified.
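Sketched out (the example.com URLs and values are placeholders):
<!-- On the listing page: point each share button at the article's own URL -->
<div class="fb-share-button" data-href="https://example.com/articles/my-article"></div>
<!-- In the head of https://example.com/articles/my-article itself -->
<meta property="og:url" content="https://example.com/articles/my-article" />
<meta property="og:title" content="My Article" />
<meta property="og:description" content="A short summary of this article." />
<meta property="og:image" content="https://example.com/images/my-article.jpg" />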
Another approach, which will give you more fine-grained control, is to use Facebook's JavaScript SDK.
The one you are looking for in this case is the Share Dialog.
Essentially, you create your own buttons by hand and trigger the Facebook Share API manually from JavaScript.
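A minimal sketch, assuming the SDK has been loaded and FB.init() has already run (the button id and URL are placeholders):
document.getElementById('my-share-button').addEventListener('click', function () {
  FB.ui({
    method: 'share',
    href: 'https://example.com/articles/my-article' // the URL to share
  }, function (response) {
    // response is undefined if the user closes the dialog without sharing
    console.log(response);
  });
});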
Some of our advertisers require us to serve their ads from their server using embedded JavaScript tags so that they can track impressions and clicks.
My question is, how do WE track the clicks they have had on their ads, so we can tell how their campaign is going?
I think the best answer is actually the way OpenX does it.
http://www.openx.org/docs/whitepapers/3rd-party-click-tracking
Basically, it scans the JavaScript for the click-through link and replaces it with a bounce (redirect) link, which lets it track the click and still deliver the user to the destination.
It is slightly inflexible, as it depends on the third-party tracking code, but it is probably the 'cleanest' solution.
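The bounce-link idea, reduced to a sketch (this is not OpenX's actual code; the .ad selector and the /click endpoint are assumptions):
// Rewrite each ad link to pass through our own redirect endpoint.
document.querySelectorAll('.ad a').forEach(function (link) {
  var destination = link.href;
  link.href = '/click?dest=' + encodeURIComponent(destination);
});
// The server-side /click handler logs the click, then answers with a
// 302 redirect to the dest parameter, so the user still reaches the ad.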
I think that all depends on the advertiser's JavaScript code.
Maybe you can add an onload function to your page that searches the page for images. If an image has the properties of the ad, add an event listener for the click event with JavaScript, then call a server page that stores the values in a database. Just a quick idea.
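Quickly sketched (the ad-banner class and the /track-click URL are assumptions for illustration):
window.addEventListener('load', function () {
  document.querySelectorAll('img.ad-banner').forEach(function (img) {
    img.addEventListener('click', function () {
      // Fire-and-forget request to our own server to record the click.
      new Image().src = '/track-click?ad=' + encodeURIComponent(img.src);
    });
  });
});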
You could put an invisible layer over the ad and respond to the click by calling their click handler after sending yourself a notice that the ad was clicked (maybe by loading a special web bug). This technique is considered evil, since it could easily be used to send the user somewhere other than the advertiser's page.
In Gmail, if you check email 4, then move to a different set of 50 or 25 records and check email 26, both 4 and 26 are retained as you move back and forth.
How does Google do this?
Would it be possible to do something like this on a page that fetches only 50 records and, when NEXT is clicked, goes back to the DB for the next set of 50?
You don't technically change pages; it's all the same page, and the content is just changed dynamically with JavaScript.
Take a close look at the URL: only the hash part of it changes. That means you don't really load new pages when you click things in Gmail; the elements shown to you are simply changed with JavaScript.
A similar thing could be done across page loads if you use localStorage or sessionStorage.
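For example, a minimal sketch assuming each checkbox has a unique id:
// Restore and persist checkbox state across page loads with sessionStorage.
var saved = JSON.parse(sessionStorage.getItem('selected') || '{}');
document.querySelectorAll('input[type=checkbox]').forEach(function (box) {
  box.checked = !!saved[box.id];                  // restore earlier state
  box.addEventListener('change', function () {
    saved[box.id] = box.checked;                  // remember the new state
    sessionStorage.setItem('selected', JSON.stringify(saved));
  });
});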
You could build the page you're describing with Ajax techniques.
The inner pages are most likely loaded using AJAX. Kind of like with iframes, you monitor the links that are clicked and load only the inner part of what you're after, so that you aren't loading things twice.
It's possible that these are saved in JavaScript or cookies. Personally, I would probably store them as a JavaScript array of selected checkboxes, depending on how much load you're already putting on the user.
I am trying to figure out the best way to accomplish "unobtrusive" forms for a user (within a web app).
The purpose: keep the user on the site by not asking them to fill in unnecessary forms. Ask for details only when they are needed.
The requirements are:
User should provide additional details only when required (an email address to receive notifications, a login for the account page, credit card details when checking out).
User should not leave the current page while providing the additional details.
The implementation would be fairly easy if all requests were AJAX ones: it would be easy to analyse the response (a 401, for instance) and show the appropriate lightbox form.
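For instance (showLoginLightbox is a hypothetical stand-in for your own dialog code):
fetch('/account', { credentials: 'same-origin' }).then(function (response) {
  if (response.status === 401) {
    showLoginLightbox();     // ask for the details without leaving the page
  } else {
    return response.text();  // authorised: proceed normally
  }
});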
I do not see how this can be done "the right way" with plain anchors and form submits, as in both cases the user actually leaves the page (by following the link or submitting the form) and there is no way to analyse the response on the client side.
Converting all links and forms to AJAX ones would be just silly.
The closest analogue to what I want to achieve is the default Basic Authentication dialog in most browsers. But obviously that just doesn't fit my requirements.
Any creative suggestions on how to do this for non-AJAX requests?
Regards,
Dmytrii.
In a page sense, where "page" refers to what the user sees and not what the URL is, I can only think of the following ways to update independent parts of a page with JavaScript (and thus Ajax) switched off:
Frames
Iframes
Using held-open connections, there are two more ways to update a page; however, these do not work reliably in all cases:
Animated GIF
CSS DIV tags with absolute positioning.
Note that this requires your server to keep a connection open for each person looking at the page, which can be thousands. If this does not work, the only possible workaround is FRAMEs with automatic refresh, which is somewhat clumsy.
As I assume you do not want to use frames or render animated GIFs, I will explain the CSS DIV way:
When you load the page, you do not finish loading it. Instead, the connection is kept open by the web server, and the script handling the connection waits for additional information to arrive. When there is additional data, it is sent to the browser encapsulated in additional DIV tags which can overwrite other parts of the page.
Using "style" in the DIV tag and CSS position:absolute these can overwrite other information on the page like a new layer. However you need either position:absolute or must add this data to the end of the page.
How does this work with forms?
Forms usually have a known size, so you can put them into IFRAMEs. These IFRAMEs are submitted to the web server. The script there notifies the waiting script that new data must be output; the waiting script then renders the response and displays it in the page, while the script which took the submit redisplays the form with fresh values only.
How does this work with 404 and anchors?
I don't really know, because this would have to be tested, but here is a hint at how I would try to implement it.
There are two issues here.
First, the URL must not point to other pages but back to a server script, so the href stays under your control. This script then notifies the waiting script to update the page accordingly, for example by retrieving the target page and sending it to your browser. The script can check for 404s as well.
Second, you must keep the browser from switching the page when the anchor is clicked. This probably involves some clever tricks using CSS, target and server-side status codes (like "gone", or a redirect to the current page) to stop the browser from navigating away. I am not completely sure this works, but remember download pages: they show URLs which do not switch the page yet still have an effect (downloading the file). That is where to start if you want to keep the browser on the current page without using JavaScript.
One idea not pursued here is to hold open the connection to the CSS file rather than to the page, and send new CSS to the browser, which then "fills in empty stubs" the CSS way. But I doubt this works well; most browsers probably parse the CSS only after it has finished loading, though perhaps I am wrong.
Also note that keeping a connection open means the page never finishes loading, so you will see the busy indicator spinning the whole time; that is unavoidable with this technique.
Having said all this, I doubt you can get around JavaScript.
What I have described here is very difficult to do and is therefore usually not used, because it scales badly. It is also a lot more difficult than using JavaScript alone (which is why I explained it).
With proper AJAX it is much easier to reach your goal. Also note that you do not need to change your page source much; all you need is a script which augments the page content such that, for example, forms suddenly use AJAX instead of a direct POST that re-renders the page. Things which cannot be detected easily then need some hints in the tags so that the tag scanner knows how to handle them. The good thing is that with JavaScript switched off your page still works; it just "leaves the page".
Plain HTML was simply not designed for the application-like web pages we expect today; all of that was added with JavaScript.
About popup forms
The Basic Auth handler reloads the page after the user enters something into the dialog; only if Cancel is hit is the current page displayed.
But there are two ways to present additional query popups in a page using JavaScript:
The first one is the JavaScript prompt, as in the following example:
http://de.selfhtml.org/javascript/objekte/anzeige/window_prompt_vor.htm
(Click on the "Hier").
The second one is "JavaScript forms", which are like popups within an HTML page.
However, I consider popups to be far too intrusive and bad design.
Ajax and JavaScript are the easiest way
Unfortunately, using JavaScript is never easy, but if you think JavaScript is improper or too difficult, there is no other technique which is easier; that's why JavaScript is used everywhere.
For example, your page's onload script can cycle through all anchor tags and modify them so that clicking on them invokes a function. This function then must do something clever.
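Sketched, with handleClick() as a hypothetical stand-in for the "something clever":
window.addEventListener('load', function () {
  document.querySelectorAll('a[href]').forEach(function (anchor) {
    anchor.addEventListener('click', function (event) {
      event.preventDefault();    // keep the browser on this page
      handleClick(anchor.href);  // fetch and display the target ourselves
    });
  });
});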
The same is true for forms. Fields which can be modified (like the user's email address) then have two views: one visible, one hidden. The hidden one is a form. Clicking on the email address switches the view (disables the first div and enables the second), so that instead of the email address there is suddenly a text field containing it. Clicking the "OK" button turns the button into a spinner until the data is submitted, and then the view switches back to the normal one.
That's the usual way to do it with JavaScript and Ajax, and it involves a lot of programming until it works well.
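The two-view idea reduced to a minimal sketch (the ids are placeholders, and the real thing still needs the Ajax submit and spinner described above):
<span id="email-view">user@example.com</span>
<form id="email-edit" style="display:none">
  <input type="email" name="email" value="user@example.com">
  <input type="submit" value="OK">
</form>
<script>
// Clicking the address hides the display view and reveals the edit form.
var view = document.getElementById('email-view');
var edit = document.getElementById('email-edit');
view.onclick = function () {
  view.style.display = 'none';
  edit.style.display = 'inline';
};
</script>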
Sorry for not shortening this post; I am currently lacking time ;)
Hidden iframe.
Set the target attribute of the form to the name of the iframe, and use the iframe's onload event to determine what the response is.
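A minimal sketch of that (the action URL and names are placeholders; reading contentDocument only works for same-origin responses):
<iframe id="myiframe" name="myiframe"></iframe>
<form action="/subscribe" method="post" target="myiframe">
  <input type="email" name="email">
  <input type="submit" value="Subscribe">
</form>
<script>
document.getElementById('myiframe').onload = function () {
  // Fires once the server's response has loaded into the iframe.
  var doc = this.contentDocument;
  if (doc) { console.log('server said: ' + doc.body.textContent); }
};
</script>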
Or, if you really don't like any JavaScript, don't hide the iframe and instead present it in a creative manner.
CSS to hide an element:
#myiframe { position:absolute; left: -999em; display: none; visibility: hidden; }
But normally, display: none is enough; the rest is overkill.