Thumbnail image loading issue from Google Drive using JavaScript

Please refer to the link above for reference.
I am using the Drive REST API to interact with files in Google Drive.
My problem is that when I try to load the thumbnail images (obtained from the Google Drive metadata), I get the following error in the response:
(Reload the page to get source for: https://lh6.googleusercontent.com/PIkXnvV5LN71K8UdvltrFIS7WpKOiXHnJCIvPRsq0ma_XU_gzEFKrfnc6hYFIojM_4_kNA=w100-h100)
However, it sometimes works fine and loads the image perfectly.
I also checked the link in a new tab, where it also works perfectly.
Here is my JavaScript code:
-- link used for metadata
var googleLink = 'https://www.googleapis.com/drive/v2/files?q="'+attachmentId+'" in parents and mimeType != "application/vnd.google-apps.folder"&access_token='+that.getAccessToken();
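For context, the metadata URL above would be fetched with something like the following sketch (the actual request code is not shown here); files is the parsed JSON response used in the rendering loop below:
// Sketch: request the metadata URL built above and hand the parsed
// response to the rendering code that follows.
$.getJSON(googleLink, function(files) {
    console.log(files.items.length + ' attachments found');
    // ...the rendering loop below iterates over files.items...
});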
-- code for rendering the image links in the browser
for (var i = 0; i < files.items.length; i++) {
    var div = $('<div class="row">');
    var link = $('<a href="'+files.items[i]['downloadUrl']+"&access_token="+upload.getAccessToken()+'">');
    if (files.items[i]['thumbnailLink'] != undefined) {
        var thumbnailUrl = files.items[i]['thumbnailLink'].split("=");
        var linkUrl = thumbnailUrl[0] + "=w100-h100";
        var image = $('<img src="'+files.items[i]['iconLink']+'" data-src="'+linkUrl+'" style="padding:2px; float:left; height:auto; width:auto;" onload="loadPreviewImage(this)">');
        link.append(image);
    } else {
        div.append($('<img src="'+files.items[i]['iconLink']+'" style="padding:2px; float:left;">'));
    }
    link.append(files.items[i]['originalFilename']);
    div.append(link);
    td.append(div);
}
// function for swapping the placeholder icon for the real thumbnail once the icon has loaded
function loadPreviewImage(element) {
    var img = $(element);
    img.attr('src', img.data('src'));
}

Your 403 error may occur if the Google Drive setting called "Allow users to install Google Drive apps" is disabled. This is a good thing to check if you are certain that Google Drive is already enabled.
It is also suggested that you clear your cache first. If that doesn't work, try running a malware scan using a malware/virus scanner.
You may check these related threads:
Getting a 403 - Forbidden for Google Service Account
403 Forbidden error when accessing Google Drive API downloadURL
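If the thumbnail request still intermittently returns 403, one client-side mitigation is to fall back to the file's iconLink (already used as the placeholder) when the thumbnail fails to load. A minimal sketch, reusing the question's data-src pattern:
// Sketch: if the thumbnail request fails, fall back to the Drive iconLink
// that was already loaded as the placeholder image.
function loadPreviewImage(element) {
    var fallbackSrc = element.src;          // the iconLink placeholder
    element.onload = null;                  // run this swap only once
    element.onerror = function () {
        element.onerror = null;             // avoid an error loop
        element.src = fallbackSrc;          // restore the icon
    };
    element.src = element.dataset.src;      // try the real thumbnail
}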

The issue I was having was not seeing the Google Doc icon and instead seeing a broken image. I was able to resolve it after realizing I was having the same request issue from a third party, so I went into the computer's internet settings and reset them to the defaults:
Start > Control Panel > Internet Options > Tabs (Security/Privacy/Advanced) = Default

Related

Creating a chrome extension to grab all page HTML

I'm attempting to create a Chrome extension to grab all website data. Tutorials often talk about 'modifying' a page, but they seem to subtly imply that you cannot get the whole page.
I found one Chrome API, pageCapture, which allows ALL resources from a page to be saved. I assume that means I could get the HTML and crawl it afterwards - but this isn't desirable, since it takes a lot more space and overhead.
I'd prefer if there were some way to crawl the active tab. The tabs API allows you to get the current tab, but the current tab doesn't seem to have a content attribute.
There must be a better way to do this. Does anyone know how to get the current page's HTML?
I think this answer will help you:
Loading html into page element (chrome extension)
I have another solution that may help you: you can save the websites in your Chrome bookmarks and then fetch all of the data using:
var uploadUrls_bm_urls = '';
var uploadUrls_temp = '';
var maxUrls = 1000; // maximum length of the accumulated URL string

/* Fetch all user bookmarks from the browser */
/* @param object parentNode - the parent node of the bookmark tree */
function fetch_bookmarks(parentNode) {
    parentNode.forEach(function(bookmark) {
        if (!(bookmark.url === undefined || bookmark.url === null)) {
            uploadUrls_bm_urls = uploadUrls_bm_urls + '"' + bookmark.url + '",';
            // keep a copy only while the accumulated string stays under the limit
            if (uploadUrls_bm_urls.length <= maxUrls)
                uploadUrls_temp = uploadUrls_bm_urls;
        }
        if (bookmark.children) {
            fetch_bookmarks(bookmark.children);
        }
    });
}
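To actually run this, the traversal would be started from the extension's bookmark tree, for example (a minimal sketch; it assumes the extension's manifest declares the "bookmarks" permission):
// Sketch: walk the whole bookmark tree and collect the URLs.
// Assumes the "bookmarks" permission is declared in the manifest.
chrome.bookmarks.getTree(function(rootNodes) {
    fetch_bookmarks(rootNodes);
    console.log(uploadUrls_temp); // the accumulated, length-limited URL string
});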
and after that you can iterate over all the URLs and use the "load" function as in the link above (Loading html into page element (chrome extension)).
Let me know if this helped you or not.
Thanks
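For reference, to address the original question more directly, here is a minimal sketch of grabbing the active tab's HTML, assuming a Manifest V2 extension with the "activeTab" permission:
// Sketch: grab the full HTML of the active tab.
// Assumes a Manifest V2 extension with the "activeTab" permission.
chrome.tabs.executeScript(null, {
    code: 'document.documentElement.outerHTML'
}, function(results) {
    if (chrome.runtime.lastError || !results) {
        console.error('Could not read the page HTML');
        return;
    }
    var pageHtml = results[0]; // HTML of the active tab as a string
    console.log(pageHtml.length + ' characters of HTML captured');
});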

Why would an img onerror handler execute if the image works fine

I have implemented the onerror attribute for some images in order to detect the ones that are missing on my site.
This is the code:
<script>
    function imageError(element) {
        var noPicUrl = "${noPicUrl}";
        var imageFailUrl = "/site/image/fail?mediaUrl=" + encodeURIComponent(element.src) + "&redirectUrl=" + encodeURIComponent(noPicUrl);
        element.onerror = "";
        element.src = imageFailUrl;
    }
</script>
<img src="${poiPic}" onerror="imageError(this);"/>
As you can see, when an image fails, the following URL is called:
/site/image/fail?mediaUrl=" + encodeURIComponent(element.src) + "&redirectUrl=" + encodeURIComponent(noPicUrl)
This is a service that saves the mediaUrl that failed, so that I can check those later, and then returns a redirect to the redirectUrl. That part is working just fine; I just tested it and it logs perfectly.
But the problem appeared when I uploaded this to production and the logs started coming in: there were around 200 images in the log, but only 4 of them were actually deleted with links that didn't work. The others worked perfectly.
What could be causing this?
Image loading errors are difficult to manage because sometimes the error is caused by a cache problem, sometimes by the connection, and other times by a 404 error. Different browsers handle these errors in many different ways.
IMHO, the best way to manage image loading errors is to use the imagesLoaded jQuery plugin:
http://imagesloaded.desandro.com/
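A minimal sketch of how the plugin can be used to tell loaded images from broken ones ('#content' is just a placeholder selector for the element containing the images):
// Sketch: report which images loaded and which are broken.
$('#content').imagesLoaded()
    .progress(function(instance, image) {
        if (!image.isLoaded) {
            // image.img is the underlying <img> element
            console.warn('Broken image: ' + image.img.src);
        }
    })
    .done(function() {
        console.log('All images loaded successfully');
    })
    .fail(function() {
        console.log('At least one image failed to load');
    });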

Making insecure image sources secure client-side

I let users on my VanillaForums forum choose whether or not to use the https protocol, and I want to test whether I can change image sources on the client side using jQuery.
I want this code to change the protocol in the image source links to // instead of http:// and to run before the images have loaded, so I used .ready():
$(document).ready(function () {
    if (window.location.protocol == "https:") {
        var $imgs = $("img");
        $imgs.each(function () {
            var img_src = $(this).prop("src");
            if (img_src.indexOf("http://") < 0) return;
            var new_img_src = img_src.replace("http:", "");
            $(this).prop("src", new_img_src);
        });
    }
});
While it does work in changing the image sources, the URL bar still shows the connection as not fully secure.
And the console gives a warning that http://someimageurl... is not secure.
Do I need to move the code to the top of the page or will that not make a difference?
It needs to be done server side for the browser not to throw an insecure connection warning. The file with the responsible code is /library/core/functions.render.php, which you can see here.
$PhotoURL is the variable that needs to be changed. Using the following makes sure all images are loaded over the https: protocol: str_replace('http://', 'https://', $PhotoURL).
I usually don't mind global scope in smaller software, but in something as big as Vanilla it's like finding a needle in a haystack.
I couldn't find any other fixes for Vanilla in particular so I hope this helps people.

Getting the document type icon for a document in SharePoint

I am trying to get the icon URL/name corresponding to a document retrieved from a SharePoint document library, using the following JavaScript code (I am using JSOM):
function GetIcon(filename)
{
    var context = new SP.ClientContext.get_current();
    var web = context.get_web();
    var iconName;
    iconName = web.mapToIcon(filename, '', SP.Utilities.IconSize.Size16);
    var iconUrl = "/_layouts/images/" + iconName.get_value();
    alert(iconUrl);
}
I can't see any problem in the code, but it always shows the icon name as '0' rather than the real icon name (i.e. icdoc.gif, ictxt.gif, etc.).
Am I missing something here?
Please guide me through this.
Your code works fine for me. It even works if the file does not exist or has an unrecognized file extension. Also, permissions do not appear to be involved.
If you browse to the page using Chrome and look at the Network tab of the Developer Tools (F12), you can view the raw response of the request; the name of the request is "Process Query". This should give you some more insight into the problem.
iconName will only be populated after calling executeQueryAsync:
context.executeQueryAsync(function() {
    var iconUrl = "/_layouts/images/" + iconName.get_value();
    alert(iconUrl);
}, function() { alert("Errors"); });
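Putting the two pieces together, the question's GetIcon function would look something like this (a sketch based only on the code above):
// Sketch: resolve the icon URL for a file name via JSOM.
// mapToIcon only has a value after executeQueryAsync completes.
function GetIcon(filename) {
    var context = new SP.ClientContext.get_current();
    var web = context.get_web();
    var iconName = web.mapToIcon(filename, '', SP.Utilities.IconSize.Size16);
    context.executeQueryAsync(
        function () {
            var iconUrl = "/_layouts/images/" + iconName.get_value();
            alert(iconUrl);
        },
        function (sender, args) {
            alert("Error: " + args.get_message());
        }
    );
}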

How to add dynamic KML to Google Earth?

We are trying to add dynamic KML to Google Earth, but we are failing in one situation.
CODE:
var currentKmlObject = null;

function loadkmlGE() {
    if (currentKmlObject != null) {
        ge.getFeatures().removeChild(currentKmlObject);
        currentKmlObject = null;
    }
    var url = 'test.kml';
    google.earth.fetchKml(ge, url, finished);
}

function finished(kmlObject) {
    if (kmlObject) {
        currentKmlObject = kmlObject;
        ge.getFeatures().appendChild(currentKmlObject);
    } else {
        setTimeout(function() {
            alert('Bad or null KML.');
        }, 0);
    }
}
When we click on the button we call the loadkmlGE() function. We get the KML on Google Earth the first time, but if we click a second time we do not, even though the test.kml file has been updated with new values. So, how can we remove the old KML from Google Earth?
Help would be appreciated.
fetchKml, I believe, uses the browser to fetch the file, and the browser will generally cache the file unless told otherwise.
You could arrange for the server to tell the browser it can't cache the file, using HTTP headers; how to set that up depends on your server.
...or just arrange for the URL to change each time:
var url = 'test.kml?rnd=' + Math.random();
or similar. The server will likely ignore the bogus parameter, but because the URL has changed the browser won't have a cached version.
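Applied to the question's code, a cache-busting version of loadkmlGE would look something like this (a sketch based on the snippet above):
// Sketch: reload the KML with a cache-busting query parameter so the
// browser fetches a fresh copy of test.kml on every click.
function loadkmlGE() {
    if (currentKmlObject != null) {
        ge.getFeatures().removeChild(currentKmlObject);
        currentKmlObject = null;
    }
    var url = 'test.kml?rnd=' + Math.random(); // changes on every call
    google.earth.fetchKml(ge, url, finished);
}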
