Latest tweet not always showing - javascript

With the following code, the latest tweet only occasionally shows in Chrome but always shows in Firefox. Typically it only shows in Chrome with /? on the URL, but vanishes when I refresh.
jQuery(document).ready(function () {
    console.log("getting twitter data..");
    jQuery.getJSON("http://twitter.com/statuses/user_timeline/*hidden*.json?callback=?", function (data) {
        console.log("got it..", data);
        jQuery("#tweet").html(data[0].text);
        jQuery("#ttime").html(data[0].created_at);
    });
});

You are trying to fetch the JSON directly from the Twitter API. This is not possible due to the same-origin policy: you are only allowed to fetch JSON from your own domain.
A workaround exists called JSONP (JSON with Padding), and that's what Twitter supports. You append the name of a function to the callback parameter in the URL; that function then gets executed when the Twitter API response loads.
For example you could do it like this:
JavaScript
function render(data) {
    // data is the object that contains your twitter data.
    // It's a regular JavaScript object.
}
HTML
<script type="text/javascript" src="http://twitter.com/statuses/user_timeline/username?callback=render"></script>
Make sure that render is already loaded when the twitter API loads.
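To make the mechanics concrete, here is a small sketch of how a JSONP request is assembled; the helper names are hypothetical, not part of jQuery or the Twitter API:

```javascript
// Hypothetical helpers illustrating the JSONP handshake.
// The client appends a callback name to the URL; the server replies
// with a script that calls that function with the JSON as its argument.
function buildJsonpUrl(baseUrl, callbackName) {
    var sep = baseUrl.indexOf("?") === -1 ? "?" : "&";
    return baseUrl + sep + "callback=" + encodeURIComponent(callbackName);
}

// What the server sends back is a script, not raw JSON:
function wrapAsJsonp(callbackName, jsonText) {
    return callbackName + "(" + jsonText + ");";
}
```

Note that jQuery performs this wiring automatically when the URL contains callback=?, which is why the original $.getJSON call can work at all; the manual script tag above is the same mechanism spelled out.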

Related

How can I get the front page size of website using phantomjs

I want to check the front page size of any website using PhantomJS. Can anyone help me with how to do this?
Assuming your page is opening without issues, you can set the Page.prototype.onResourceReceived callback (the response has a bodySize property corresponding to the Content-Length response header):
page.onResourceReceived = function (resp) {
    console.log(JSON.stringify(resp)); // check resp.bodySize
};
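Building on that, here is a rough sketch of accumulating the total page weight. The names are illustrative, and it assumes bodySize is populated on the final ("end") stage of each response:

```javascript
// Illustrative accumulator for PhantomJS resource sizes.
// onResourceReceived fires twice per resource ("start" and "end");
// counting only the "end" stage avoids double-counting.
function makeSizeCounter() {
    var total = 0;
    return {
        handler: function (resp) {
            if (resp.stage === "end" && resp.bodySize) {
                total += resp.bodySize;
            }
        },
        total: function () { return total; }
    };
}

// In a PhantomJS script you would wire it up roughly as:
//   var counter = makeSizeCounter();
//   page.onResourceReceived = counter.handler;
//   // then log counter.total() in page.open's callback
```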

polling for RSS feed with casperjs not working

I am trying to match a token (a string) in an RSS feed using casperjs waitFor(), but it does not seem to work. There are other ways (not using polling) to get around this, but I need to poll for it. Here is the code snippet:
casper.then(function () {
    this.waitFor(function matchToken() {
        return this.evaluate(function () {
            if (!this.resourceExists(token)) {
                this.reload();
                return false;
            }
            return true;
        });
    });
});
The updates to the RSS URL are not dynamic, hence a refresh is needed to check for the token. But it seems (from the access log) that I am not getting any hits on the RSS URL, so the reload is not working. Ideally, I would want to refresh the page if it doesn't see the token, check for the token again, and keep doing that until the waitFor() times out.
I also tried using assertTextExists() instead of resourceExists(), but even that did not work.
I am using PhantomJS (1.9.7) & the url is: https://secure.hyper-reach.com:488/rss/323708
The token I am looking for is item/272935. If you look at the URL mentioned above, you will find it in each guid tag. I include "item/" as part of the token so that it doesn't incorrectly match any other numbers.
evaluate() runs in the sandboxed page context. Anything inside it has no access to variables defined outside, and this refers to the window of the page, not to casper. You don't need the evaluate() function here, since you don't access the page context.
The other thing is that casper.resourceExists() works on the resource metadata, such as the URL and request headers. It seems that you want to check the content of the resource. If you use casper.thenOpen() or casper.open() to open the RSS feed, you can then check with casper.getPageContent() whether the text exists.
The actual problem with your code is that you mix synchronous and asynchronous code in a way that won't work. waitFor() is the wrong tool for the job, because you need to reload in the middle of its execution, but the check function is called so quickly that there probably won't be a complete page load to actually test against.
You need to recursively check whether the document is changed to your liking.
var tokenTrials = 0,
    tokenFound = false;

function matchToken() {
    if (this.getPageContent().indexOf(token) === -1) {
        // token was not found
        tokenTrials++;
        if (tokenTrials < 50) {
            this.reload().wait(1000).then(matchToken);
        }
    } else {
        tokenFound = true;
    }
}

casper.then(matchToken).then(function () {
    test.assertTrue(tokenFound, "Token was found after " + tokenTrials + " trials");
});

Youtube API only loads on first page visit

I've written a little mobile web application to control YouTube on my PC from my phone, however something strange is happening when searching using the YouTube API. The first time the page loads, everything works great - enter the search term, click search and results are returned.
However, if I click onto another page and then come back, the search no longer works and I see "Uncaught TypeError: Cannot read property 'search' of undefined" in the search function below.
I'm very new to JavaScript so feel free to berate the code, but I've been seeing this problem for a while and despite much googling haven't been able to find a solution.
// Called automatically when JavaScript client library is loaded.
function onClientLoad()
{
    try
    {
        gapi.client.load('youtube', 'v3', onYouTubeApiLoad);
    }
    catch (e)
    {
        console.log(e);
    }
}

// Called automatically when YouTube API interface is loaded.
function onYouTubeApiLoad()
{
    gapi.client.setApiKey('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
}

function search(q) {
    // Create api request and execute it
    var request = gapi.client.youtube.search.list({
        type: 'video',
        part: 'snippet',
        q: q
    });
    // Send the request to the API server,
    // and invoke onSearchResponse() with the response.
    request.execute(onSearchResponse);
}

function onSearchResponse(response) {
    showResponse(response);
}
The link to the API script is in my search.aspx page as below:
<script src="https://apis.google.com/js/client.js?onload=onClientLoad" type="text/javascript"></script>
jQuery is also being used, so I don't know if there is any funny business being caused there, but any ideas at this point would be very much appreciated!
Make sure you're calling search() after onYouTubeApiLoad() executes.
If you are binding search() to a click event, make sure to do so in the callback:
function onYouTubeApiLoad()
{
    gapi.client.setApiKey('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
    $("button#search").on("click", function () { search(...); });
}
Looks like I figured it out. The initial load of the API happens when the script is first loaded in
<script src="https://apis.google.com/js/client.js?onload=onClientLoad" type="text/javascript"></script>
but it is not loaded again when leaving and coming back to the page. I added onClientLoad(); to the $(document).ready function and it seems to be working now.
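A more general fix for this class of problem is to gate search() behind a readiness flag and queue any early calls until the client library reports ready. This is a sketch with illustrative names, not part of the gapi library:

```javascript
// Queue searches made before the client library has finished loading,
// then flush them once the API reports ready.
var apiReady = false;
var pendingSearches = [];

// Returns true if the search ran immediately, false if it was queued.
function queueSearch(q, runSearch) {
    if (!apiReady) {
        pendingSearches.push(q); // defer until the API is up
        return false;
    }
    runSearch(q);
    return true;
}

// Call this from the API's load callback to flush the queue.
function markApiReady(runSearch) {
    apiReady = true;
    pendingSearches.splice(0).forEach(runSearch);
}
```

In the page, markApiReady would be invoked from onYouTubeApiLoad(), and runSearch would wrap the gapi.client.youtube.search.list call.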

Scrape / eavesdrop AJAX data using JavaScript?

Is it possible to use JavaScript to scrape all the changes to a webpage that is being updated live with AJAX? The site I wish to scrape updates data via AJAX every second and I want to grab all the changes. This is an auction website, and several items can change whenever a user places a bid. When a bid is placed, the following change:
The current Bid Price
The current high bidder
The auction timer has time added back to it
I wish to grab this data using a Chrome extension built with JavaScript. Is there an AJAX listener for JavaScript that can accomplish this? A toolkit? I need some direction. Can JavaScript accomplish this?
I'm going to show two ways of solving the problem. Whichever method you pick, don't forget to read the bottom of my answer!
First, I present a simple method which only works if the page uses jQuery. The second method looks slightly more complex, but will also work on pages without jQuery.
The following examples show how you can implement filters based on method (e.g. POST/GET), URL, request (POST) data, and response bodies.
Use a global ajax event in jQuery
More information about the jQuery method can be found in the documentation of .ajaxSuccess.
Usage:
jQuery(document).ajaxSuccess(function (event, xhr, ajaxOptions) {
    /* Method */        ajaxOptions.type
    /* URL */           ajaxOptions.url
    /* Response body */ xhr.responseText
    /* Request body */  ajaxOptions.data
});
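For instance, to react only to the requests you care about, you might filter inside the handler. A sketch; the bid-endpoint URL pattern here is hypothetical:

```javascript
// Hypothetical predicate: keep only POSTs to a bid endpoint.
function isBidUpdate(ajaxOptions) {
    return ajaxOptions.type === "POST" && /\/bid\b/.test(ajaxOptions.url);
}

// Inside the ajaxSuccess handler you would then do something like:
//   if (isBidUpdate(ajaxOptions)) { /* record xhr.responseText */ }
```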
Pure JavaScript way
When the website does not use jQuery for its AJAX requests, you have to patch the built-in XMLHttpRequest methods. This requires more code:
(function () {
    var XHR = XMLHttpRequest.prototype;

    // Remember references to original methods
    var open = XHR.open;
    var send = XHR.send;

    // Overwrite native methods
    // Collect data:
    XHR.open = function (method, url) {
        this._method = method;
        this._url = url;
        return open.apply(this, arguments);
    };

    // Implement "ajaxSuccess" functionality
    XHR.send = function (postData) {
        this.addEventListener('load', function () {
            /* Method */        this._method
            /* URL */           this._url
            /* Response body */ this.responseText
            /* Request body */  postData
        });
        return send.apply(this, arguments);
    };
})();
Getting it to work in a Chrome extension
The previously shown code has to be run in the context of the page (in your case, an auction page). For this reason, a content script has to be used which injects (!) the script. Using this is not difficult; I refer to this answer for a detailed explanation plus examples of usage: Building a Chrome Extension - Inject code in a page using a Content script.
A general method
You can read the request body, request headers and response headers with the chrome.webRequest API. The headers can also be modified. It's however not (yet) possible to read, let alone modify the response body of a request. If you want this feature, star https://code.google.com/p/chromium/issues/detail?id=104058.

IE8's information bar blocking a scripted file download in response to JQuery AJAX request

I have an html/javascript frontend that is using JQuery's AJAX request to send XML containing user-entered form data to a backend application which in turn creates a PDF from that information. The frontend receives a UUID in response, which it then uses in the download url to download the generated PDF.
This works wonderfully in Firefox and Safari, but is being blocked by Internet Explorer 8's protection against scripted downloads. Telling IE8 to download the file via the spawned Information Bar forces a reload of the page, which blanks out all of the entered user content.
A single onMouseUp event on a button-esque element triggers the generation of the XML, sends it, gets the response, and then initiates the download by setting the URL on the window.location object. Separating the download out into a second button (one generating and sending the XML and fetching the UUID, the other only initiating the download using the URL built from the UUID) bypasses the Information Bar, but ruins the simplicity and intuitiveness of the interface.
Here are the relevant javascript functions:
function sendXml()
{
    var documentXml = generateDocumentXml();
    var percentEncodedDocumentXml = encodeURIComponent(documentXml);
    var url = "generate?document=" + percentEncodedDocumentXml;
    $.ajax({
        url: url,
        type: "GET",
        dataType: "xml",
        success: function (xml)
        {
            var uuid = $(xml).find('uuid').text();
            getPdf(uuid);
        },
        error: function (xhr)
        {
            alert("There was an error creating your PDF template");
        }
    });
}

function getPdf(uuid)
{
    var url = "generate?get-pdf=" + uuid;
    window.location = url;
}
I'm fishing for suggestions about how to best handle this issue. My first preference would be to have the information bar not interfere at all, but minimizing its harm would be a dramatic improvement over the current situation. If it could not reload and wipe the frontend interface, and actually proceed to downloading the file when the user chooses to "Download File..." via the Information Bar's menu, that would help.
I tested it, and the reason for the bar to occur seems to be that there is no direct relation between the user action (the mouse event) and the loading of the URL (presumably a PDF file).
This workaround will solve the issue:
Create an iframe (it may be hidden) inside the document and use
window.open(url, 'nameAttributeOfTheIframe')
to load the PDF. The bar occurs too, and if the user chooses to download, the current document reloads too, but the user content (if you mean form data) will remain, as the bar belongs to the iframe, not to the parent document.
Be sure to send an attachment header with the PDF too, to avoid showing it inside the browser (if the browser is able to render it), because if you use a hidden iframe the user cannot see what's loaded there.
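The attachment header referred to is Content-Disposition; a typical response for the generated PDF might look like this (the filename is illustrative):

```http
Content-Type: application/pdf
Content-Disposition: attachment; filename="document.pdf"
```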
<iframe name="nameAttributeOfTheIframe" style="display:none"></iframe>
<input type="button" value="click here" onclick="f1()"/>
<input value="default value">
<script type="text/javascript">
<!--
function f1()
{
    // simulate delayed download
    setTimeout(f2, 1000);
}
function f2()
{
    window.open('http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf', 'nameAttributeOfTheIframe');
}
document.getElementsByTagName('input')[1].value = 'this is modified value, should remain';
//-->
</script>