Enable YouTube's JS API after the video has loaded - javascript

I'm looking to control videos on my page, some of which may not have been embedded with the JavaScript API parameter (enablejsapi) enabled in the first place. They may be either iframes or the old object embeds, though this shouldn't matter.
I looked around and none of the existing answers are up to my standards. In Enable YouTube API on existing player, the answer restarts the player (it re-creates the element with the API enabled).
I'm having a hard time with the documentation: both https://developers.google.com/youtube/iframe_api_reference and https://developers.google.com/youtube/js_api_reference focus on other concerns and give no technical details on how the API actually treats the elements, so it's hard to guess where to go with this. It's not the first time Google's JavaScript API has given me a hard time, and it certainly won't be the last.
So I want to be able to control, or at least partially control (listen to play/stop events), the players currently on the page that did not request the API when they were first embedded. Is there a way to enable the JavaScript API for them live, or some hack to get there?
Update: My progress so far revolves around re-creating the elements on the page with the API enabled, the one technique I was trying so hard to avoid. I'm still facing all sorts of object/iframe API differences that make me want to change careers, so it will take a bit more polish. I will paste some CoffeeScript when I figure everything out. It's nothing too fancy, but it's the only universal way to add a 'global listener', if you will, to a page full of existing embedded videos that don't necessarily have the API enabled.
TL;DR: The YouTube API is imperfect.

As mentioned in the update, this question was solved by re-creating the embed/iframe element. There isn't much magic here: just add the enablejsapi parameter to the URLs. Unfortunately, I could not find a nicer way to enable it.
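For the iframe case, a minimal sketch of that approach could look like the following (the helper name enableApiOn is mine, and it assumes the IFrame API script from https://www.youtube.com/iframe_api has already loaded):

    // Sketch only: re-point the iframe at a URL with enablejsapi=1
    // (this reloads the player, which is exactly the restart we tried
    // to avoid), then attach a state-change listener via YT.Player.
    function enableApiOn(iframe) {
      var url = new URL(iframe.src);
      if (url.searchParams.get('enablejsapi') !== '1') {
        url.searchParams.set('enablejsapi', '1');
        iframe.src = url.toString(); // re-creates the player
      }
      return new YT.Player(iframe, {
        events: {
          onStateChange: function (event) {
            // event.data is e.g. YT.PlayerState.PLAYING or .PAUSED
            console.log('player state:', event.data);
          }
        }
      });
    }

    // The 'global listener': apply it to every YouTube iframe on the page.
    document.querySelectorAll('iframe[src*="youtube.com/embed"]')
      .forEach(enableApiOn);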

Related

Facebook Pixel slows down page load time by almost a full second

I'm starting to optimize and I have this problem with the Facebook tracking pixel killing my load times:
Waterfall Report
My page finishes around 1.1 seconds but the pixel doesn't finish until almost a full second later.
My pixel script is in the head, per the docs. Is there any way of speeding this up?
The script probably doesn't need to be in the head, even if Facebook recommends that as the default. As long as you don't explicitly call the fbq function anywhere before the code snippet is embedded, it should work fine.
The question, though, is how much of an improvement that actually brings: the embed code is already written in such a way that the actual SDK gets loaded asynchronously.
The <script> block that embeds the SDK might be parser-blocking, but you can't put async or defer on inline scripts; they would have no effect. Putting that code into an external file, and then embedding that file with async or defer, might help in that regard though.
Another option would be to not use the script at all, but track all events you need to track using the image alternative.
https://developers.facebook.com/docs/facebook-pixel/advanced#installing-the-pixel-using-an-img-tag:
If you need to install the pixel using a lightweight implementation, you can install it with an <img> tag.
“Install” is a bit misleading here though: this will only track the specific event that is specified in the image URL parameters. Plus, you'd have to do all your tracking yourself; there will be no automatic tracking any more. You could track the simple default page-view event by embedding an image directly into your HTML code; if you need to track any events dynamically, you can use JavaScript to create a new Image object and assign the appropriate URL, so that the browser fetches it in the background.
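A hedged sketch of that dynamic variant (PIXEL_ID is a placeholder for your own pixel ID; the URL format follows the linked docs):

    // Track an event by fetching the pixel image in the background.
    function trackEvent(eventName) {
      var img = new Image();
      img.src = 'https://www.facebook.com/tr?id=PIXEL_ID' +
                '&ev=' + encodeURIComponent(eventName) +
                '&noscript=1'; // parameter taken from the img-tag docs
    }

    trackEvent('ViewContent'); // called from your own event handlers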
There are a few additional limitations mentioned there in the docs, check if you can live with those, if you want to try this route.
If you needed to take into account additional factors like GDPR compliance - then you would have to handle that completely yourself as well, if you use the images for tracking. (The SDK has methods to suspend tracking based on a consent cookie, https://developers.facebook.com/docs/facebook-pixel/implementation/gdpr)
A third option might be to try and modify the code that embeds the SDK yourself.
The code snippet creates the fbq function, if it does not already exist. Any subsequent calls to it push the event to track onto a “stack” that then gets processed once the SDK file has loaded. So in theory, this could be rewritten, for example, in such a way that it doesn't insert the script node to load the SDK immediately, but delays that using a timeout. That should still work the same way (in theory; I have not explicitly tested it): as long as the events get pushed onto the mentioned stack, it should not matter when or how the SDK loads. (Too big a timeout could lead to users moving on to other pages before any tracking happens, though.)
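Untested, as said, but the shape of that idea would be roughly this (the stub below is a simplified stand-in for the official snippet's fbq stub, which you would keep verbatim in practice):

    // Simplified stand-in for the official fbq stub: queue calls until
    // the SDK arrives (the real snippet queues via fbq.queue as well).
    window.fbq = window.fbq || function () {
      (window.fbq.queue = window.fbq.queue || []).push(arguments);
    };
    fbq('init', 'PIXEL_ID');  // placeholder ID
    fbq('track', 'PageView'); // queued, not sent yet

    // Delay the actual SDK load instead of inserting it immediately.
    setTimeout(function () {
      var s = document.createElement('script');
      s.async = true;
      s.src = 'https://connect.facebook.net/en_US/fbevents.js';
      document.head.appendChild(s);
    }, 3000); // too long a delay risks losing early page exits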
Last but not least, what I would call the “good karma” option: removing the tracking altogether, and with it all the sniffing, profiling and general privacy violation that comes along :-) That is likely not an option for everyone, and if you are running ad campaigns on Facebook or similar, it might not be one at all. But in cases where this is for pure “I want to get an idea of what users are doing on my site” purposes, a local Matomo installation or similar might serve just as well, if not better.
I've found that loading the pixel using Google Tag Manager works for my WordPress website.
Before Pixel, my speed was 788ms
After adding the pixel it was 2.12s
Then, adding it through GTM it was 970ms
Source and more details: https://www.8webdesign.com.au/facebook-pixel-making-website-slower/
Check this article: https://www.shaytoder.com/improving-page-speed-when-using-facebook-pixel/
I put the script in the footer, so it will not affect First Paint, but more importantly, I wrap the “script” part of the pixel code with the lines highlighted in the linked article.
I know you might think it has no relation, but have you tried enabling lazy loading on your website? Adding it in the footer or using the Facebook for WordPress plugin also helps.

How to simply query vimeo for video information with Javascript?

We are trying to get information about a video via the Vimeo API, e.g. by using a jQuery $.ajax GET request to:
vimeo.com/api/v2/video/253742573.json
However, this won't work in Internet Explorer 11, which complains about CORS issues. Naturally, we can't control what HTTP headers Vimeo replies with to correct this.
Is this a known issue with the Vimeo player?
Is there a better way to query Vimeo for information on a Video with Javascript over HTTP?
If there is, where can we find a good example of this?
I believe this is a known issue with IE. The response headers from Vimeo are correct.
I've seen similar issues with various browsers over the years.
The solution that I've used in the past is to implement a pass-through on my own server. In that scenario, my JS in the browser would no longer call vimeo.com/api/v2... directly. Instead, it would call mydomain.com/vimeoapi/api/v2... and my server (which doesn't care about CORS) will retrieve the JSON from Vimeo and pass it back to the JS in the browser.
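As a minimal sketch of such a pass-through, assuming Node 18+ (for the global fetch) and Express, with illustrative route names:

    const express = require('express');
    const app = express();

    // Forward /vimeoapi/<anything> to https://vimeo.com/<anything>, e.g.
    // /vimeoapi/api/v2/video/253742573.json -> vimeo.com/api/v2/video/253742573.json
    app.get('/vimeoapi/*', async (req, res) => {
      const response = await fetch('https://vimeo.com/' + req.params[0]);
      const body = await response.text();
      // Served from our own origin, so the browser raises no CORS complaint.
      res.type('application/json').send(body);
    });

    app.listen(3000);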
Honestly, this solution makes me grind my teeth every time (WHY MUST I MAKE ARCHITECTURAL COMPROMISES FOR JUST ONE BROWSER! CURSE YOU, STARS!), but I've done it a handful of times now and it plays out reasonably well. It's a straightforward solution that can be done quickly, and if you find a preferable solution, it's easy to switch this out again.

Use of JavaScript in lieu of hyperlinks

As RIAs and SPAs (or web apps with heavy JavaScript usage) have become more and more popular, I've been running into systems that, instead of using good old <a href> hyperlinks, use onclick handlers with JavaScript code that performs the navigation. This is particularly true with images.
For example, instead of seeing something like this:

    <a href="..."><img src="...."/></a>

I see something like this:

    <div ... onclick='SomeJsFunctionThatNavsToAnotherPage()'><img src="..."/></div>
What is the advantage of this? It makes it incredibly hard to trace where pages transition to when debugging or trying to root-cause a bug. I can see the point when the navigation target can change (so yes, there you could use a function that computes which page to navigate to).
But I see this pattern even when the pages to navigate to are constant. I find it extremely convoluted and hard to test, not to mention the browser-specific bugs that, sadly in my experience, come from over-complicating the front-end.
But I am not a RIA/SPA developer (just backend and traditional web development). Am I missing the rationale behind this?
TO CLARIFY
My question is not for the case when we want to redraw the page or change current content without changing the current location. My question is for plain old transitions, from page A to page B.
In such a case, why use onclick='funcToChangeLocation()' over <a href="some location">?
This has been a pain for me when troubleshooting systems that are already written (for I wouldn't write them like that), but there could be reasons I am not aware of.
Again, my question is not for pages that redraw themselves without changing the browser location, but for navigation from one page to the next.
ALSO
If you are going to vote to close this question, at least leave a message explaining why.
If you are making a web application, sometimes you don't want to redirect the user to another page, but to dynamically change the content of the page without refreshing it. That has some advantages: it can be faster, you can easily keep the state of the page/application, you are not obligated to communicate with the server, and you can update only a part of the page.
You can also dynamically request data to render the page. If you are displaying a user profile page, you can request just a JSON object that represents the user. That JSON object is smaller than the whole page and will be rendered dynamically. This can help reduce the data transfer between users and the server when bandwidth is limited.
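A sketch of that pattern (the endpoint and field names are made up for illustration):

    // Fetch a small JSON object and render it in place of a full page load.
    async function showUserProfile(userId) {
      const res = await fetch('/api/users/' + userId); // hypothetical endpoint
      const user = await res.json();
      document.querySelector('#profile-name').textContent = user.name;
      document.querySelector('#profile-bio').textContent = user.bio;
      // Keep the address bar in sync without a real navigation.
      history.pushState({ userId: userId }, '', '/users/' + userId);
    }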
EDIT: In the case of a simple page redirection, I think it's bad practice and I cannot see an advantage. I think it obfuscates the website when the Google crawler tries to parse it.
I once had a pretty successful web directory website. One day Google decided that "directories" are competing businesses and started penalizing sites that had links on directories. I used the method you describe to cloak outgoing links to try and trick Google.

Play YouTube video in sync across multiple clients

Hello Stack Overflow community,
I'm a rather novice coder, but I have a project I've been devising that looks more and more complicated every day, and I don't know where to start.
With inspiration taken from Synchtube & Phonoblaster, I'm looking to create something for my website that will allow visitors to watch YouTube videos and playlists that I have curated, together in real-time, in-sync.
Because I want to be able to put this in the context of my own website, I can't use the services listed above that already do this - so I wanted to figure out how to roll my own.
Some things have been written about this topic on Stack Overflow, and other blogs:
HERE
and HERE.
Because I still consider myself a novice programmer, and a lot of the information I've found on Google and Stack tends to be more than 1 or 2 years old, I'm still unsure where to begin or if this information is outdated. Specifically, what languages and tools I should be learning.
From what I've gathered so far, things like JavaScript, Node.js, and the YouTube API would form the crux of it. I've not used any of these before, but I'd be interested to see whether other experienced coders have their own suggestions or ideas they could point me towards.
I appreciate you taking time out to read this post!
Hope to hear from some of you soon :)
Many thanks.
It partially sounds like you need a live stream from YouTube. You can find more info here: https://support.google.com/youtube/bin/answer.py?hl=en&answer=2474026
If you can get that going, then syncing play between any number of users is as simple as embedding a regular youtube embed of your stream in a browser.
Looking past that, if you wanted to sync video playback among any number of users, the first big problem is learning how to set the time on a video. Luckily, that's easy with the URL fragment #t=seconds.
Eg: http://www.youtube.com/watch?v=m38RdUGqBPM&feature=g-high-rec#t=619s will start this HuskyStarcraft video at 619 seconds into the video.
The next step is to have some backend server that keeps track of what the current time is. Node.js with Socket.io is incredibly easy to set up. Socket.io is a wonderful library that gracefully handles concurrent connections, from WebSockets all the way down through long polling and more, and it works well even on very old browsers. Note that WebSockets aren't even required, but they are the most modern and foolproof method for you. Otherwise it's hacks and stuff.
One way this could work would be as follows.
User1 visits your site and starts playing the video first. A script on your page sends an XHR request to your server that says, "video started at time X". X then gets stored as the start time.
At this point, you could go two routes. You can have a client-side script using the YouTube API to poll the video and get its current status every second. If the status or time changes, send another request back to the server to update the state.
Another simple route would be to have the page load for User2+, then send an XHR request asking for the video play time. The server sends back the offset from User1's start time, and the client script sets the 't' fragment on the YouTube player for User2+. This lets you sync start times, but if any user pauses or rewinds the video, those states don't get updated. A subsequent page refresh might do that, though.
The entire application's complexity depends on exactly what requirements you have. If it's just synchronized start times, then route #2 should work well enough. It doesn't require sockets and is easy to do with jQuery or just straight JavaScript.
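A rough client-side sketch of route #2 (the endpoint and field name are illustrative, and player is an IFrame API player object already on the page):

    // Ask the server how far in the video is, then jump there.
    $.getJSON('/api/playback-time', function (data) {
      player.seekTo(data.seconds, true); // seek to the shared offset
      player.playVideo();
    });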
If you need a really synchronized experience where any user can start/stop/pause/fast forward/rewind the video, then you're looking at either using an established library solution or writing your own.
Sorry this answer is kind of open ended, but so was your question. =)

Do search engines process Javascript?

According to this page it would seem like they don't, in the sense that they don't actually run it, but that page is 2 years old (judging from the copyright info).
The reason I'm asking this question is that we use JavaScript to replace text on our site with more typographically sound content. We're worried that this may affect the crawlability/SEO of our sites, since what we're replacing is generally headers, i.e. <h1>, <h2>, etc.
Will search engine bots see our original code, or will they run the JavaScript and see the replaced text?
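For context, the replacement in question is along these lines (a sketch; applyTypography stands in for our actual routine):

    // Swap header text client-side for typographic tweaks.
    document.querySelectorAll('h1, h2').forEach(function (heading) {
      heading.textContent = applyTypography(heading.textContent); // hypothetical helper
    });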
Google now officially processes JavaScript.
In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on.
Sometimes things don't go perfectly during rendering, which may negatively impact search results for your site. Here are a few potential issues, and, where possible, how you can help prevent them from occurring:

If resources like JavaScript or CSS in separate files are blocked (say, with robots.txt) so that Googlebot can't retrieve them, our indexing systems won't be able to see your site like an average user. We recommend allowing Googlebot to retrieve JavaScript and CSS so that your content can be indexed better. This is especially important for mobile websites, where external resources like CSS and JavaScript help our algorithms understand that the pages are optimized for mobile.

If your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on our capability to render your pages. If you'd like to ensure that your pages can be rendered by Google, make sure your servers are able to handle crawl requests for resources.

It's always a good idea to have your site degrade gracefully. This will help users enjoy your content even if their browser doesn't have compatible JavaScript implementations. It will also help visitors with JavaScript disabled or off, as well as search engines that can't execute JavaScript yet.

Sometimes the JavaScript may be too complex or arcane for us to execute, in which case we can't render the page fully and accurately.

Some JavaScript removes content from the page rather than adding, which prevents us from indexing the content.
Search engines don't process JavaScript as such.
There is some evidence that Google may have started processing inline script content in some cases, in order to catch content that is entered into the page parse queue using document.write. However, DOM methods such as you might use for font replacement are certainly not affected, and no onload code is invoked.
Generally no. Google has mentioned that they are working on a system for indexing AJAX content, but I don't think any of the major search engines index dynamic content as a rule. See this page for Google's take on it: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=81766
The bots will certainly not run the Javascript code, but they might recognise some commonly used scripts.
You shouldn't count on it though. Clear markup, proper content and real links is still what counts.
Also, if the bots happen to recognise your script, it might not be in your favor. If the code is recognised as something that is commonly used to try to fool bots, it could even hurt your page ranking.
I'd use metadata to ensure bots pick up the content on your pages.
I know the general consensus is that Google does not process JavaScript or index anything inside a <script> tag; however, that consensus appears to be incorrect.
Try searching for the following, with the surrounding quotes (or click here):
"Samsung Public Interest Statement by Thomas Fusco, Fish & Richardson P.C., for Samsung."
You should only get one result. Now click on that result (or just click here) and view the source.
Do a Ctrl-F for the text you searched for in Google. Notice that the text is in a JavaScript variable, not HTML. Google must be processing some JavaScript to pull those words into its index.
