Loading EXIF data from the currently displayed image with jQuery - JavaScript

I am in the following situation:
I am developing a site for publishing my photos. I am using Twitter Bootstrap, jQuery and Galleria.io.
Now I want to show some EXIF data from the photos I took, so I want to use this jQuery plugin: http://blog.nihilogic.dk/2008/05/jquery-exif-data-plugin.html
I have already tested the examples given on that site; they work.
The code below doesn't: it returns an empty alert every time the image is loaded, so at least the load handler fires.
These are my first steps in JavaScript, so I am glad for any help.
The EXIF data should be updated every time I change the image. All images are located on the same server.
Galleria.ready(function(options) {
    // this = the gallery instance
    // options = the gallery options
    this.bind('image', function(e) {
        imgHandle = $(e.imageTarget);
        imgHandle.load(function() {
            $(this).exifLoad(function() {
                alert($(this).exifPretty());
            });
        });
    });
});

$("#img2").load(function() {
    $(this).exifLoad(function() {
        alert($(this).exifPretty());
    });
});
I hope you can help me.

Okay, I figured it out with a friend.
My code now looks like this:
Galleria.ready(function(options) {
    // this = the gallery instance
    // options = the gallery options
    this.bind('image', function(e) {
        // called after the image has loaded
        imgHandle = e.imageTarget;
        alert('Image loaded');
        // will only be called if the image is on the same server
        $(imgHandle).exifLoad(function() {
            alert('Exif loaded');
            alert($(imgHandle).exifPretty());
            // do something here...
        });
    });
});
It works only with images hosted on the same server, not with images on remote servers.
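For remote images the plugin reads the file with a binary XHR, which the browser's same-origin policy blocks. A minimal sketch of one possible workaround, assuming you can add a small same-origin proxy (the /exif-proxy endpoint below is hypothetical and would have to exist on your server):
// inside the same Galleria.ready(...) callback as above
this.bind('image', function(e) {
    var src = e.imageTarget.src;
    // route remote URLs through a same-origin proxy so the binary XHR is allowed
    var sameOrigin = src.indexOf(window.location.host) !== -1;
    var exifUrl = sameOrigin ? src : '/exif-proxy?url=' + encodeURIComponent(src);
    $('<img>').attr('src', exifUrl).load(function() {
        $(this).exifLoad(function() {
            alert($(this).exifPretty());
        });
    });
});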

Related

A screenshot of a page from a URL without opening it?

We need a module that gets only a URL and returns a screenshot/image of that page without opening it, so it happens in the background.
All the tools I have read about here work by taking a specific div element as input or by redirecting to that page.
It should also work with a URL that redirects, so:
function getImageOfPage(url) {
    // get the html page of this url (don't show it)
    // wait for the page to load all photos
    // create a screenshot image
    return img;
}
There is a solution for this using Node.js: you need the webshot module (npm i webshot).
Here is some sample code:
const webshot = require('webshot');

webshot('https://example.com', 'img_name.png', function(err) {
    if (!err) {
        console.log("Screenshot taken!");
    }
});
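The question also asks about waiting for the page to load all photos before the shot is taken; webshot accepts an options object for that, and if I remember its options correctly, renderDelay waits a given number of milliseconds after the page loads before rendering. A rough sketch (the delay and viewport values below are just guesses to tune for your pages):
const webshot = require('webshot');

const options = {
    windowSize: { width: 1280, height: 800 }, // viewport used while rendering
    renderDelay: 3000                         // wait 3s after load so images can finish
};

webshot('https://example.com', 'img_name.png', options, function(err) {
    if (!err) {
        console.log('Screenshot taken!');
    }
});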

How do I Get EXIF data after AJAX load of images?

I'm working on a swipeable wine image viewer for NapaWapa.com and I've got it working pretty well: http://www.tonyjacobson.com/napawapa/gallery/index.html
I'm using a jQuery plugin to read the EXIF data of the images in the gallery. However, the plugin seems to only work when the src attribute is hard-coded, not when the images are loaded via AJAX. Can any of you experts out there take a peek at the JS (http://www.tonyjacobson.com/napawapa/gallery/js/jquery.exif.js) and recommend a change to the plugin code that would fix this?
Here's the specific part of the plugin code where it deals with reading the EXIF data:
// HTML IMG EXAMPLE
<img class="lazyOwl" data-src="images/03.jpg" exif="true" />

// JAVASCRIPT CLICK TO TEST IMG EXIF DATA
$("img").on('click', function(event) {
    $(this).exifLoad();
    console.log("HIT: " + $(this).exif('Make'));
});

// JQUERY PLUGIN CODE EXCERPT
function loadAllImages()
{
    var aImages = document.getElementsByTagName("img");
    for (var i = 0; i < aImages.length; i++) {
        if (aImages[i].getAttribute("exif") == "true") {
            if (!aImages[i].complete) {
                addEvent(aImages[i], "load", function() {
                    EXIF.getData(this);
                });
            } else {
                EXIF.getData(aImages[i]);
            }
        }
    }
}

// automatically load exif data for all images with exif=true when doc is ready
jQuery(document).ready(loadAllImages);

// load data for images manually
jQuery.fn.exifLoad = function(fncCallback) {
    return this.each(function() {
        EXIF.getData(this, fncCallback);
    });
}
I was searching for a similar question and found assistance from this thread. Try modifying your JavaScript (not the plugin) to include this code:
$("img").load(function() {
$(this).exifLoad(function() {
// exif data should now be ready...
});
});
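One caveat with that approach: a load handler bound at document-ready never fires for images appended later via AJAX, and cached images may already have finished loading. A sketch of a small helper (the name readExifWhenReady is just illustrative) you could call right after inserting each image:
function readExifWhenReady($img) {
    var run = function() {
        $img.exifLoad(function() {
            console.log('Make: ' + $img.exif('Make'));
        });
    };
    if ($img[0].complete) {
        run();                 // already loaded (e.g. from cache), the load event won't fire again
    } else {
        $img.one('load', run); // wait for the image bytes to arrive
    }
}

// e.g. after your AJAX success handler appends the image:
// readExifWhenReady($('<img>', { src: 'images/03.jpg' }).appendTo('#gallery'));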

Web page Capture and save to image using phantomjs lib

I was searching Google for a JS library that can capture an image of any website or URL, and I learned that the PhantomJS library can do it. Here is a small piece of code that captures the GitHub home page and saves it as a PNG image.
If anyone is familiar with PhantomJS, please tell me what this line means:
var page = require('webpage').create();
Can I give it any name instead of webpage?
And if I need to capture only a portion of a webpage, how can I do that with the help of this library? Can anyone guide me?
var page = require('webpage').create();
page.open('http://github.com/', function () {
    page.render('github.png');
    phantom.exit();
});
https://github.com/ariya/phantomjs/wiki
thanks
Here is a simple phantomjs script for grabbing an image:
var page = require('webpage').create(),
    system = require('system'),
    address, output, size;

address = "http://google.com";
output = "your_image.png";
page.viewportSize = { width: 900, height: 600 };

page.open(address, function (status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
        phantom.exit();
    } else {
        window.setTimeout(function () {
            page.render(output);
            console.log('done');
            phantom.exit();
        }, 10000);
    }
});
Where:
'address' is your URL string.
'output' is your filename string.
'width' and 'height' are the viewport dimensions, i.e. the area of the site to capture (comment that line out if you want the whole page).
To run this from the command line, save the above as 'script_name.js' and invoke PhantomJS with the JS file as the first argument: phantomjs script_name.js
Hope this helps :)
The line you ask about:
var page = require('webpage').create();
As far as I can tell, that line does three things: it loads a module (require('webpage')), creates a WebPage object in PhantomJS (.create()), and assigns that object to the variable page.
The name "webpage" tells it which module to add.
http://phantomjs.org/api/webpage/
I too need a way to use page.render() to capture just one section of a web page, but I don't see an easy way to do this. It would be nice to select a page element by ID and render out just that element at whatever size it is. They should really add that in the next version of PhantomJS.
For now, my only workaround is to add an anchor to my URL (http://example.com/page.html#element) so the page scrolls to the element I want, and then set a width and height that get close to the size I need.
I recently discovered that I can manipulate the page somewhat before rendering, so I want to try hiding all of the other elements except the one I want to capture. I have not tried this yet, but maybe I will have some success.
See this page and look at how they use querySelector(): https://github.com/ariya/phantomjs/blob/master/examples/technews.js
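A sketch of another possible route, using page.clipRect (which restricts page.render() to a rectangle) together with getBoundingClientRect; the #element selector below is only a placeholder:
var page = require('webpage').create();
page.open('http://example.com/page.html', function (status) {
    // measure the element inside the page; evaluate() can only return plain serialisable data
    var rect = page.evaluate(function () {
        var r = document.querySelector('#element').getBoundingClientRect();
        return { top: r.top, left: r.left, width: r.width, height: r.height };
    });
    page.clipRect = rect;      // render only that rectangle
    page.render('element.png');
    phantom.exit();
});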

how to add dynamic kml to google earth?

We are trying to add dynamic KML to Google Earth, but we are failing in one situation.
CODE:
var currentKmlObject = null;

function loadkmlGE() {
    if (currentKmlObject != null) {
        ge.getFeatures().removeChild(currentKmlObject);
        currentKmlObject = null;
    }
    var url = 'test.kml';
    google.earth.fetchKml(ge, url, finished);
}

function finished(kmlObject) {
    if (kmlObject) {
        currentKmlObject = kmlObject;
        ge.getFeatures().appendChild(currentKmlObject);
    } else {
        setTimeout(function() {
            alert('Bad or null KML.');
        }, 0);
    }
}
When we click the button we call the loadkmlGE() function. We get the KML in Google Earth the first time, but if we click a second time we do not get the updated KML, even though the test.kml file has been updated with new values. So how can we remove the old KML from Google Earth?
Help would be appreciated.
fetchKml, I believe, uses the browser to fetch the file, and the browser will generally cache the file unless told otherwise.
You could arrange for the server to tell the browser it can't cache the file, using HTTP headers; how to set that up depends on your server.
...or just arrange for the URL to change each time:
var url = 'test.kml?rnd=' + Math.random();
or similar. The server will likely ignore the bogus parameter, but since the URL has changed the browser won't have a cached version.
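Putting that together with the loadkmlGE() function from the question, a minimal sketch with the cache-buster applied:
function loadkmlGE() {
    if (currentKmlObject != null) {
        ge.getFeatures().removeChild(currentKmlObject);
        currentKmlObject = null;
    }
    // the random parameter defeats the browser cache; the server just ignores it
    var url = 'test.kml?rnd=' + Math.random();
    google.earth.fetchKml(ge, url, finished);
}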

jQuery Slimbox is not requesting files correctly

I am using jQuery Slimbox with its API.
Here is my JavaScript, which gets image paths via JSON and then launches Slimbox via its API.
$('#main-container').append('<span id="check-our-adverts">Check our Adverts</span>');

var advertImages = [];

$.getJSON(config.basePath + 'get-adverts/', function(images) {
    advertImages = images;
});

$('#check-our-adverts').click(function() {
    console.log(advertImages);
    $.slimbox(advertImages, 0);
});
The JSON is returning ["\/~wwwlime\/assets\/images\/adverts\/advert.jpg","\/~wwwlime\/assets\/images\/adverts\/advert2.jpg"].
The actual page is here. Click the top red box next to the frog. If you have a console, check it for the returned JSON.
When I view the request using Live HTTP Headers, it seems Slimbox is requesting vanquish.websitewelcome.com/ and nothing else.
This results in Slimbox being launched with its throbber spinning forever.
What could be causing this problem? Thanks
Update
I added this inside the JSON callback:
$.each(images, function(i, image) {
    $('body').append('<a href="' + image + '">link</a>');
});
And clicking those links takes me directly to the image... what gives?
I am not 100% familiar with Slimbox, but the API says the method takes an array of arrays, so I believe your JSON return should look more like
[["\/~wwwlime\/assets\/images\/adverts\/advert.jpg"],["\/~wwwlime\/assets\/images\/adverts\/advert2.jpg"]]
making your call to Slimbox
$.slimbox([["\/~wwwlime\/assets\/images\/adverts\/advert.jpg"], ["\/~wwwlime\/assets\/images\/adverts\/advert2.jpg"]], 0);
Let me know if that helps.
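If the server keeps returning the flat array, a minimal sketch (reusing the getJSON callback from the question) of reshaping it on the client before handing it to Slimbox:
$.getJSON(config.basePath + 'get-adverts/', function(images) {
    advertImages = [];
    $.each(images, function(i, img) {
        advertImages.push([img]); // each entry becomes [url] (or [url, caption])
    });
});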
