I am currently working on a custom URL shortener and am trying to figure out how to "inject" my own social preview metadata dynamically for each page (e.g. for Twitter Cards). I had originally planned on doing this in much the same way as the actual redirect, fetching the data with the JavaScript fetch API. After reading a little more, though, it does not appear that this approach will work, since the Twitter crawler (and other social media crawlers) apparently do not run JS when looking for the metadata.
Is this correct?
If so, is there a way I can load the metadata from a dynamic source instead of having to create a new HTML file for every redirect?
It looks like I can probably do something, at least for the image, based on a test of this link (using https://source.unsplash.com/random for the image) through the Twitter Card validator. But what would be the best approach to doing something similar? Everything I can think of would use JS.
I have similar pages in production.
You'll need to use a server-side language (like PHP or Node.js) to set the meta tags for your Twitter Cards, and use JavaScript to redirect the page.
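A minimal sketch of that idea, assuming Node.js with Express and an in-memory lookup table (the route, field names, and URLs below are placeholders, not your actual data):

// server.js - serve per-link meta tags server-side, then redirect in the browser via JS
const express = require('express');
const app = express();

// hypothetical in-memory store; in practice this would be your database
const links = {
  abc123: {
    target: 'https://example.com/article',
    title: 'Example article',
    image: 'https://example.com/preview.png'
  }
};

app.get('/:id', (req, res) => {
  const link = links[req.params.id];
  if (!link) return res.status(404).send('Not found');
  // crawlers read the meta tags from this HTML without running the script;
  // note: real code should HTML-escape these values
  res.send(`<!DOCTYPE html>
<html><head>
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="${link.title}">
<meta name="twitter:image" content="${link.image}">
<meta property="og:title" content="${link.title}">
<meta property="og:image" content="${link.image}">
</head><body>
<script>window.location.replace(${JSON.stringify(link.target)});</script>
</body></html>`);
});

app.listen(3000);

The crawler sees the metadata in the served HTML, while a real browser executes the redirect.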
I want to create embeddable widgets for a Rails app that would allow users to interact with the app from external websites.
I was all set to try using iframes to achieve this. But then I found a couple of forum responses that seemed to suggest iframes are not the best way to do it, and to use JS to embed HTML elements instead. This surprised me - I thought iframes would be a clear winner simply because of the isolation of CSS and scripts.
So, what is the best way to embed (limited) app functionality in a third-party website? The interaction will be limited to login and a single simple form. Are iframes or a JS embed the best way to go? And as a side question, are there security issues to be aware of with either approach?
I think using iframes sucks. It just doesn't feel like one whole website; it's a website inside another: mostly the styles won't match, or you get a scrollbar, or the responsive layout doesn't apply right. So here's a little pro/con list:
iframe PRO:
requests are not cross-origin (most likely more secure)
"sandboxed" JavaScript (no conflicts)
iframe CON:
styles are isolated, so matching the host site's style guide is hard
history does not change (e.g. if you submit a form with GET you cannot copy the URL and send it to a friend)
js PRO:
Full control over the navigation (you can override link clicks with $.load etc.)
Ability to change the browser history (History API, see MDN)
smooth handling of HTML components
styles are automatically inherited from the host page
js CON:
CORS (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS)
you have to handle (override) events like link clicks and form submissions (see "Sending multipart/formdata with jQuery.ajax")
sessions/cookies are harder to share across domains
I wrote a little Rails plugin which allows you to embed your Rails app as a JS frame inside another site (it's still really, really beta): https://github.com/Elektron1c97/better_frame. The plugin handles most of the JS problems, such as the link/form events, and writes to the browser history.
So: if you need to run an app which should be truly embedded in a site, like a store on another website, I would use JS embedding.
If you create a custom item to share, like the SoundCloud player, you may want to use an iframe.
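As a rough illustration of the JS-embedding approach (this is not better_frame itself; the element id, host URL, and endpoint are made up), the third-party site would paste in a container div plus a script tag, and the script could look roughly like this:

// embed.js - assumes the third-party page contains <div id="my-widget"></div>
// and loads this file with <script src="https://widgets.example.com/embed.js" async></script>
(function () {
  var container = document.getElementById('my-widget');

  // fetch a page fragment from your app and render it into the container
  function load(path) {
    fetch('https://widgets.example.com' + path, { credentials: 'include' })
      .then(function (res) { return res.text(); })
      .then(function (html) {
        container.innerHTML = html;
        // override link clicks so navigation stays inside the widget
        // and the widget's path is written into the host page's history (History API)
        container.querySelectorAll('a').forEach(function (a) {
          a.addEventListener('click', function (e) {
            e.preventDefault();
            history.pushState({}, '', a.getAttribute('href'));
            load(a.getAttribute('href'));
          });
        });
      });
  }

  load('/widget');
})();

Your app would still need to send CORS headers for the host site, as noted in the CON list above.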
If you want third-party sites to react to interactions with your widgets, then you should absolutely use JavaScript. Although it is possible to pass messages between different domains through an iframe, it is not the most convenient approach. See https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
As for using JavaScript, you can simply ask your users to embed a JavaScript file that will render your widget. To bypass any CORS issues, your widget should interact with an API that supports JSONP responses.
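If you do end up in an iframe and need the host page to react to events inside it, window.postMessage works along these lines (the origins and the message shape here are assumptions):

// inside the iframe (your app), e.g. after a successful form submit:
window.parent.postMessage({ type: 'widget:submitted', ok: true }, 'https://host-site.example');

// on the host page that embeds the iframe:
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://your-app.example') return; // only trust messages from your own app
  if (event.data && event.data.type === 'widget:submitted') {
    console.log('widget reported:', event.data);
  }
});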
I manage a small web page for a relative's business. They want to provide notes on the page for the visitors regularly (opening times, news etc.) but cannot code the web page themselves.
Is there a way to embed a news scroller, text field, or similar on the page, where the text displayed comes from an external source they can manage, such as an .htm, .txt, or .json file hosted e.g. on their Google Drive? They would simply edit that file and see the changes directly on the web page (the file would be public, and its URL embedded in the web page code).
Is there a solution or an easier way to achieve this? Thanks.
If I understood you correctly, you can use PHP's file_get_contents() to read the external file on the server.
To achieve this, first create a section on the website to display the news or whatever external dynamic content you want.
Then develop an API for that site and call it from your website (keep a secret key for safety). The API can do whatever you program it to do (e.g. store the news in a database or generate publicly accessible files).
Once that is in place, you can fetch the data into the section created for the news, fully automated.
All of the above can be done with PHP.
That way you only have to push the data through your API, and the actual site keeps getting updated without any further action required.
Hope that helps.
If they can write the HTML and make it available at a public URL, you can simply embed it using <iframe> in HTML.
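For example (the URL and dimensions are placeholders):

<iframe src="https://example.com/news.html" width="100%" height="200" style="border:0" title="News"></iframe>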
I need to create an HTML code snippet that I will distribute to third-party websites. This snippet talks to a PHP file on my server and contains logic to update the content (an image) at specified time intervals. The reason I cannot use JavaScript is that it is not search-engine friendly.
The way I have it now uses HTML + JavaScript: an XMLHttpRequest (Ajax) call to a PHP file, which in turn reads a CSV file and updates the banner image on the third-party site. But it is not crawlable by search engines.
Is there any other way of getting this to work using HTML? Perhaps using forms?
HTML by itself is not active; if you want to do something dynamic, you need some sort of scripting. You can, however, do this without Ajax (XMLHttpRequest). Before Ajax, it was common practice to relay information to the server using dynamic image loading. Of course, dynamic image loading still requires a script, but it can be rather simple:
<!-- placeholder image; once it loads, the handler swaps in the server-generated
     image and passes the viewport width to the server in the query string -->
<img id='myimg' src='temp.jpg'
  onload="document.getElementById('myimg').src='myscript.php?width='+window.innerWidth;">
Your script replaces the image with whatever you like, and information is delivered from the web page to your server through the GET query string. I originally saw this used extensively to deliver rotating ads. With it, you can record which ads are shown, along with information that would otherwise be known only to the web browser.
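The answer references myscript.php but doesn't show it; purely as a sketch of the idea (in Node rather than PHP, with made-up banner URLs), the server side could log the reported width and redirect to the next banner in rotation:

// rotate.js - hypothetical stand-in for myscript.php: reads the width reported
// in the query string, logs it, and redirects to the next banner in rotation
const http = require('http');
const { URL } = require('url');

const banners = ['https://cdn.example.com/ad1.jpg', 'https://cdn.example.com/ad2.jpg'];
let counter = 0;

http.createServer((req, res) => {
  const width = new URL(req.url, 'http://localhost').searchParams.get('width');
  console.log('viewport width reported by the page:', width);
  res.writeHead(302, { Location: banners[counter++ % banners.length] });
  res.end();
}).listen(8080);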
This question may not be tied to an exact software stack, framework, or language.
For my current project, we are using AngularJS to build the front end: a single static entry page that loads the real data and renders it in the browser, which is easy to serve from a CDN and good for fast loading on the client side. But for some social features, such an architecture can cause problems. For example, when you paste a link into Facebook to share it, Facebook grabs your page and shows a preview; if the landing page is empty, that preview won't work.
(I heard that Google+ recently started rendering JavaScript server-side before generating a preview, but obviously that is not commonly supported by other similar services. Google's search crawler also supports indexing JS-based single-page applications.)
Is there a better way to solve this problem gracefully, rather than falling back to a dynamically rendered page that includes the real data? Have I missed something in my understanding of the problem?
========
... I was even thinking that, for requests identified as coming from Facebook (e.g. by user agent), I could redirect them to a special gateway wrapping something like PhantomJS, fetch the page, render it server-side, and send back a DOM tree snapshot as the content for Facebook to generate the preview from. But I doubt that's a good direction. :(
We are in the same situation. The simple solution is to use Open Graph meta tags in the pages your server will serve to Facebook scrapers.
Basically, you need to do server-side what your web app is doing client-side. The amount of work depends heavily on your hosting technology (an MVC framework makes it easy), your URI format, and the APIs you use.
You will find some explanations here:
https://developers.facebook.com/docs/plugins/share-button/
Open Graph introduction:
http://ogp.me/
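The tags themselves are ordinary HTML in the <head> of whatever page your server returns for that URL; for example (all values are placeholders):

<meta property="og:type" content="article">
<meta property="og:title" content="Page title for this route">
<meta property="og:description" content="Short summary shown in the preview">
<meta property="og:image" content="https://example.com/preview.png">
<meta property="og:url" content="https://example.com/this-page">

The key point is that these must already be present in the HTML the server sends, because the Facebook scraper will not execute your AngularJS code.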
I've got this setup:
A single-page app that generates HTML content using JavaScript. There is no visible HTML for non-JS users.
History.js (pushState) for handling URLs without hashbangs. So the app on "domain.com" can load the dynamic content for "page-id" and update the URL to "domain.com/page-id". Direct URLs also work nicely via JavaScript this way.
The problem is that Google cannot execute JavaScript this way. So essentially, as far as Google knows, there is no content whatsoever.
I was thinking of serving cached content to search bots only. So, when a search bot hits "domain.com/page-id", it loads cached content, but if a user loads the same page, they see the normal (JavaScript-injected) content.
A proposed solution for this is using hashbangs, so Google can automatically convert those URLs to alternative URLs with an "_escaped_fragment_" string. On the server side, I could then redirect those alternative URLs to cached content. As I won't use hashbangs, this doesn't work.
Theoretically I have everything in place. I can generate a sitemap.xml and I can generate cached HTML content, but one piece of the puzzle is missing.
My question, I guess, is this: how can I filter out search bot access, so I can serve those bots the cached pages while serving my users the normal JS-enabled app?
One idea was parsing the "HTTP_USER_AGENT" string in .htaccess for any bots, but is this even possible and not considered cloaking? Are there other, smarter ways?
updates the URL to "domain.com/page-id". Also, direct URLS work nicely via Javascript this way.
That's your problem. The direct URLs aren't supposed to work via JavaScript. The server is supposed to generate the content.
Once whatever page the client has requested is loaded, JavaScript can take over. If JavaScript isn't available (e.g. because it is a search engine bot) then you should have regular links / forms that will continue to work (if JS is available, then you would bind to click/submit events and override the default behaviour).
A proposed solution for this is using hashbangs
Hashbangs are an awful solution. pushState is the fix for hashbangs, and you are using it already; you just need to use it properly.
how can I filter out search bot access
You don't need to. Use progressive enhancement / unobtrusive JavaScript instead.
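A minimal sketch of that pattern (the data-internal attribute, the #content container, and the fetch handling are placeholders): the server renders real HTML at domain.com/page-id, plain links point at those URLs, and when JavaScript is available it takes over navigation:

// runs only when JS is available; without it, the plain server-rendered links still work
document.addEventListener('click', function (e) {
  var link = e.target.closest('a[data-internal]'); // data-internal is a made-up marker attribute
  if (!link) return;
  e.preventDefault();
  fetch(link.href)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      // a real app would return or extract just the fragment it needs
      document.getElementById('content').innerHTML = html;
      history.pushState({}, '', link.href); // keep the address bar in sync
    });
});

Search bots and non-JS users get the server-rendered pages; everyone else gets the enhanced experience, with no user-agent sniffing and no risk of cloaking.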