Facebook Search API Untrusted Connection - javascript

I am trying to search the Facebook API from my application using the JavaScript SDK's FB.api(url, callback), and the JSON object that comes back contains the error: "Search queries are unsupported for this connection."
The URL I'm using is "https://graph.facebook.com/search?q=Bamboo&type=page&access_token=", which works when I test it in the browser.
Why is my search unsupported?

The problem is your URL. You should only put the Graph API path in the url parameter, so "https://graph.facebook.com" shouldn't be included.
To make the search call, use
var urlCall = "/search?q=Bamboo&type=page&access_token=";
FB.api(urlCall, function(response) {
    // your code to execute
});
and not the following
url = "https://graph.facebook.com/search?q=Bamboo&type=page&access_token="


How do Java REST calls relate to the front end of a Web Application?

I am aware of how REST calls work from within a Java web application, e.g. when a URL is reached, its corresponding method is invoked over HTTP.
For example:
@GET
@Path("people/{id}")
public Response getPersonWithId(@PathParam("id") String id) {
    // return the person object with that id
}
What I am unsure of is how this links to the front end.
Is the role of the UI (i.e. JavaScript) just to take the user to specific URLs so that the back-end methods are called?
E.g. if a user presses a "get details" button, does the button just redirect them to the URL that deals with returning the details, so that the back-end functionality is then invoked?
A web service is not actually linked or tied to the front end the way a web app is. Instead, a web service returns results as JSON, XML, or plain text according to the request type (GET, POST, PUT, DELETE), so the service can be used by multiple front-end applications (not only web applications but also smartphone apps, desktop apps, etc.). The web service can also live on a completely different server.
Let me give you a scenario:
Suppose you have a front-end website, ABC-Website, and a back-end web service on host www.xyzservice.com/api with the following methods:
/product - GET request that returns all products as a list in JSON format.
/product/{id} - GET request that returns the details of the product with the given id in JSON format.
Now, if you simply type www.xyzservice.com/api/product into the browser, the full product list is displayed in JSON format. That means you can also read data from the web service directly in the browser without any front-end system, i.e. the web service is not linked/tied to any front end.
Now, say you want to use this web service in your ABC-Website to display the product list:
you call www.xyzservice.com/api/product and get a JSON object that you can use to populate your HTML page.
<button type="button" onclick="getProducts()">Click Me!</button>
function getProducts() {
    $.ajax({
        type : "GET",
        contentType : "application/json",
        url : "http://www.xyzservice.com/api/product",
        dataType : 'json',
        timeout : 100000,
        success : function(data) {
            // "data" is the parsed JSON - the same data you would see in the browser
            displayData(data);
        },
        error : function(e) {
            // handle the error
        },
        complete : function(e) {
            // runs after either success or error
        }
    });
}
function displayData(data) {
    // your code to parse the JSON and display it in an HTML table on your page
}
Let's say that your client is a website and you have a Java API.
In the JavaScript of your website you could make a request to the back end to retrieve the data and then present it to the user. Your JavaScript (using jQuery as an example) could look like the following:
// prints the data retrieved from the back-end route (/people/{id})
$.get('http://localhost:3000/people/3',function onDataReceived(data) {
console.log(data);
})
As pointed out, jQuery is not necessary. Here is an example using plain JavaScript:
getRequest('http://localhost:3000/people/3', function onReceived(data) {
    // use the response here
});
function getRequest(url, callback) {
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.onreadystatechange = function() {
        if (xmlHttp.readyState == 4 && xmlHttp.status == 200)
            callback(xmlHttp.responseText);
    };
    xmlHttp.open("GET", url, true);
    xmlHttp.send(null);
}
In JavaScript, you usually want to make these requests in the background of your web page.
I'm going to try to explain this with an example:
Imagine you have a page that displays a list of cars for sale, which can be fetched from the web service provided by the Java back end. The back end has a URL that responds to the GET method with a JSON (or XML) object.
What you have is an HTML file in which you write the structure for the displayed data; it also includes a JavaScript file that asynchronously calls this web service, GETs the data, parses the JSON, and then manipulates it into the form you want to display on the page.
In other examples you might validate forms in the background, save them, or do any other work that uses the web service API.
To make these asynchronous requests you can use different libraries.
The most commonly used are $.ajax, included in the jQuery library, or fetch as a standalone API; a sketch follows.
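For the cars example above, a minimal fetch-based sketch might look like this (the URL and the way the data is rendered are assumptions for illustration):
// minimal sketch using the standalone fetch API; URL and field handling are assumptions
fetch('http://www.example.com/api/cars')
    .then(function(response) {
        return response.json(); // parse the JSON body
    })
    .then(function(cars) {
        cars.forEach(function(car) {
            console.log(car); // replace with code that builds the car listing in the page
        });
    })
    .catch(function(error) {
        console.error('Request failed:', error);
    });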

Using YouTube API on HTML page with Javascript

I am new to web development and am trying to make a very simple search page that displays YouTube videos using the YouTube API. I've been following the examples from here: https://developers.google.com/youtube/v3/code_samples/javascript?hl=fr#search_by_keyword
but I'm not having a lot of luck. I have no errors but no search results either.
My current code is here: http://amalthea5.github.io/thinkful-tube/
There seem to be several problems.
You need to use
<script src="https://apis.google.com/js/client.js?onload=googleApiClientReady"></script>
instead of
<script src="https://apis.google.com/js/client.js?onload=onClientLoad" type="text/javascript"></script>
because you need to make sure the initialization function googleApiClientReady() in auth.js is called.
However, the Google API also reports that there exists no OAuth client with the ID trusty-sentinel-92304.
Edit:
If you don't have an OAuth client ID but rather an API key, you shouldn't use the auth API at all. Instead, you need to add the API key as a parameter to each API call (as described here: https://developers.google.com/youtube/v3/docs/standard_parameters).
Try this as a start:
gapi.client.load('youtube', 'v3', function() {
    var request = gapi.client.youtube.search.list({
        key: "YOUR_API_KEY",
        q: "cats",
        part: 'snippet'
    });
    request.execute(function(response) {
        var str = JSON.stringify(response.result);
        console.log(str);
    });
});
To begin with, if you read the very tutorial you are following again, it says:
"Again, you need to update the client ID in the auth.js file to run this code."
It looks like you didn't.
Also, running a search query from the console gives the following error:
TypeError: gapi.client.youtube is undefined
This means the API is not initialized. You should double-check how you embed the Google JavaScript files and the order in which you load them.
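As a rough sketch of what the ordering should look like (auth.js and search.js are the file names used in Google's sample; treat this as illustrative):
<!-- auth.js (which defines googleApiClientReady) and search.js must be loaded
     before client.js, so the function exists when the onload callback fires -->
<script src="auth.js"></script>
<script src="search.js"></script>
<script src="https://apis.google.com/js/client.js?onload=googleApiClientReady"></script>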

fetch data from a url in javascript [duplicate]

I am trying to fetch a data file from a URL given by the user, but I don't know how. Actually, I can get data from my server successfully. Here is my code:
$("button#btn-demo").click(function() {
$.get('/file', {
'filename' : 'vase.rti',
},
function(json) {
var data = window.atob(json);
// load HSH raw file
floatPixels = loadHSH(data);
render();
});
});
It can fetch the binary data from my server, parse the file, and render it into an image. But now I want it to work without my own server, which means users can give a URL and the JavaScript will get the file and render it. I know it's about cross-site requests. Can you tell me about them and how to make this work?
Thanks in advance!
Assuming your URL is the address of a valid XML document, this example will go grab it. If the URL is on a different domain than the one hosting your scripts, you will need a server-side scripting language to go out, grab the resource (the XML doc at the URL value), and return it to your domain. In PHP it would be ...
<?php echo file_get_contents( $_GET['u'] );
where $_GET['u'] is the URL value from your user. Let's call our PHP script proxy.php. Now our JavaScript will call proxy.php and concatenate the URL value to the end, which allows us to pass the URL value to the PHP script.
addy = $("#idOfInputFieldhere").val();
$.ajax({
url: 'proxy.php?u='+addy, /* we pass the user input url value here */
dataType:'xml',
async:false,
success:function(data){
console.log(data); /* returns the data in the XML to the browser console */
}
});
You'll need to use the JS debugger console in Chrome to view the data. At this point you'd want to pull the data out in a loop and use find(): http://api.jquery.com/?s=find%28%29
I'm not very familiar with jQuery, but as far as I know, due to the same-origin policy the browser won't let any JavaScript code make an AJAX request to a domain other than its own. So in order to retrieve some data (especially JSON-formatted data), you can add a <script> element to your page dynamically and set its src attribute to the address of the data provider. Something like this:
<script src="http://otherdomain.com/give_me_data.json"></script>
This only works if you need to access some static data (like the URL above) or you have access to the server-side code, because in this scenario the server-side code should return a string like:
callbackFunction({data1: 'value1', data2: 'value2', ...});
As the browser fetches the item specified in the src attribute, it tries to run it (because it knows it's a script). So if the server sends a function call as the response, the function is called immediately after all the data has been fetched.
You can implement the server side so that it accepts the name of the callback function as a parameter, loads the requested data, and generates output consisting of a function call with the loaded data as its JSON parameter.
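Put together, a hand-rolled JSONP round trip might look like this (the URL and the callbackFunction name are illustrative):
// client side: define the callback, then inject the script tag
function callbackFunction(data) {
    console.log(data.data1, data.data2); // use the fetched data here
}
var script = document.createElement('script');
script.src = 'http://otherdomain.com/give_me_data?callback=callbackFunction';
document.body.appendChild(script);

// the server's response is a script that calls the callback immediately:
// callbackFunction({data1: 'value1', data2: 'value2'});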

Get Twitter Feed as JSON without authentication

I wrote a small JavaScript a couple of years ago that grabbed a user's (my own) most recent tweet and then parsed it out for display, including links, date, etc.
It used this json call to retrieve the tweets and it no longer works.
http://twitter.com/statuses/user_timeline/radfan.json
It now returns the error:
{"errors":[{"message":"Sorry, that page does not exist","code":34}]}
I have looked at using the API version (below), but it requires authentication, which I would rather avoid since this is just to display my latest tweet on my website, and that is public anyway on my profile page:
http://api.twitter.com/1/statuses/radfan.json
I haven't kept up with Twitter's API changes as I no longer really work with it. Is there a way around this problem, or is it no longer possible?
Previously the Search API was the only Twitter API that didn't require some form of OAuth. Now it does require auth.
Twitter's Search API came from a third-party acquisition - they rarely support it and seem unenthused that it even exists. On top of that, there are many limitations to the payload, including but not limited to a severely reduced set of key:value pairs in the JSON or XML file you get back.
When I heard this, I was shocked. I spent a LONG time figuring out how to use the least amount of code to do a simple GET request (like displaying a timeline).
I decided to go the OAuth route to be able to ensure a relevant payload. You need a server-side language to do this. JavaScript is visible to end users, and thus it's a bad idea to include the necessary keys and secrets in a .js file.
I didn't want to use a big library, so the answer for me was PHP, with help from @Rivers' answer here. The answer below it by @lackovic10 describes how to include queries in your authentication.
I hope this helps others save time thinking about how to go about using Twitter's API with the new OAuth requirement.
You can access and scrape Twitter via advanced search without being logged in:
https://twitter.com/search-advanced
GET request
When performing a basic search request you get:
https://twitter.com/search?q=Babylon%205&src=typd
q (our query encoded)
src (assumed to be the source of the query, i.e. typed)
By default, Twitter returns the top 25 results, but if you click on "All" you can get the realtime tweets:
https://twitter.com/search?f=realtime&q=Babylon%205&src=typd
JSON contents
More Tweets are loaded on the page via AJAX:
https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd&include_available_features=1&include_entities=1&last_note_ts=85&max_position=TWEET-553069642609344512-553159310448918528-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Use max_position to request the next tweets
The following json array returns all you need to scrape the contents:
https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd
has_more_items (bool)
items_html (html)
max_position (key)
refresh_cursor (key)
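A minimal sketch of paging through that endpoint, assuming it still responds with the fields listed above (the #results container is an assumption, and the same-origin policy means this kind of scraping is normally done server-side rather than from a browser):
// minimal sketch; the response fields are the ones listed above
function loadTweets(maxPosition) {
    var url = 'https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd';
    if (maxPosition) {
        url += '&max_position=' + encodeURIComponent(maxPosition);
    }
    $.getJSON(url, function(json) {
        $('#results').append(json.items_html);  // raw tweet markup
        if (json.has_more_items) {
            loadTweets(json.max_position);       // request the next batch
        }
    });
}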
DOM elements
Here is a list of DOM elements you can use for extraction:
The author's Twitter handle
div.original-tweet[data-tweet-id]
The name of the author
div.original-tweet[data-name]
The user ID of the author
div.original-tweet[data-user-id]
Timestamp of the post
span._timestamp[data-time]
Timestamp of the post in ms
span._timestamp[data-time-ms]
Text of Tweet
p.tweet-text
Number of Retweets
span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount[data-tweet-stat-count]
Number of favorites
span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount[data-tweet-stat-count]
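Once you have items_html, the selectors above can be applied with jQuery, for example (rough sketch; json stands for the response object of the timeline request above):
// rough sketch: wrap items_html and pull fields out with the selectors listed above
var container = $('<div></div>').html(json.items_html);
container.find('p.tweet-text').each(function() {
    var tweet = $(this).closest('div.original-tweet');
    console.log(
        tweet.attr('data-name'),                         // author name
        tweet.find('span._timestamp').attr('data-time'), // timestamp
        $(this).text()                                   // tweet text
    );
});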
Resources
https://code.recuweb.com/2015/scraping-tweets-directly-from-twitter-without-authentication/
If you're still looking for unauthenticated tweets in JSON, this should work:
https://github.com/cosmocatalano/tweet-2-json
As you can see in the documentation, using the REST API you'll need OAuth tokens in order to do this. Luckily, we can use the Search API (which doesn't use OAuth) with the from:[USERNAME] operator.
Example:
http://search.twitter.com/search.json?q=from:marcofolio
Will give you a JSON object with tweets from that user, where
object.results[0]
will give you the last tweet.
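A minimal sketch of reading the latest tweet from that response (the old Search API has since been retired, so treat this as historical):
// minimal sketch against the old, unauthenticated Search API
$.getJSON('http://search.twitter.com/search.json?q=from:marcofolio&callback=?', function(data) {
    var latest = data.results[0]; // most recent tweet
    if (latest) {
        console.log(latest.from_user + ': ' + latest.text);
    }
});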
Here is a quick hack (really a hack, so use it with caution as it's not future-proof) which uses http://anyorigin.com to scrape the Twitter site for the latest tweets.
http://codepen.io/JonOlick/pen/XJaXBd
It works by using anyorigin (you have to pay to use it) to grab the HTML. It then parses the HTML using jQuery to extract the relevant tweets.
Tweets on the mobile site use a div with the class .tweet-text, so this is pretty painless.
The relevant code looks like this:
$.getJSON('http://anyorigin.com/get?url=mobile.twitter.com/JonOlick&callback=?', function(data) {
    // Remap ... utf8 encoding to ascii.
    var bar = data.contents;
    bar = bar.replace(/…/g, '...');
    var el = $('<div></div>');
    el.html(bar);
    // Change all links to point back at twitter
    $('.twitter-atreply', el).each(function(i) {
        $(this).attr('href', "https://twitter.com" + $(this).attr('href'));
    });
    // For all tweets
    $('.tweet-text', el).each(function(i) {
        // We only care about the first 4 tweets
        if (i < 4) {
            var foo = $(this).html();
            $('#test').html($('#test').html() + "<div class=ProfileTweet><div class=ProfileTweet-contents>" + foo + "</div></div><br>");
        }
    });
});
You can use a Twitter API wrapper, such as TweetJS.com, which offers a limited set of the Twitter API's functionality but does not require authentication. It's called like this:
TweetJs.ListTweetsOnUserTimeline("PetrucciMusic",
function (data) {
console.log(data);
});
You can use the Twitter API v1 to fetch tweets without using OAuth. For example: this link returns @jack's last 100 tweets.
The timeline documentation is here.
The method "GET statuses/user_timeline" need a user Authentification like you can see on the official documentation :
You can use the search method "GET search" wich not require authentification.
You have a code for starting here : http://jsfiddle.net/73L4c/6/
function searchTwitter(query) {
    $.ajax({
        url: 'http://search.twitter.com/search.json?' + jQuery.param(query),
        dataType: 'jsonp',
        success: function(data) {
            var tweets = $('#tweets');
            tweets.html('');
            for (var res in data['results']) {
                tweets.append('<div>' + data['results'][res]['from_user'] + ' wrote: <p>' + data['results'][res]['text'] + '</p></div><br />');
            }
        }
    });
}

$(document).ready(function() {
    $('#submit').click(function() {
        var params = {
            q: $('#query').val(),
            rpp: 5
        };
        // alert(jQuery.param(params));
        searchTwitter(params);
    });
});

404 retrieving Twitter followers using JQuery getJSON

I'm just starting to use the Twitter API to retrieve data using jQuery. I've used the API ok to retrieve information about a single user e.g. https://twitter.com/users/show/codinghorror.json
When I try to retrieve all the users that a given user is following, I'm using the same retrieval pattern but am getting a 404 error (it looks like my callback isn't receiving the json object properly, but appending it to the URL somehow)
I'm using the following code:
getTwitterUserFriends: function() {
var user = 'codinghorror';
var url = 'http://api.twitter.com/1/friends/ids.json?screen_name='+user+'?callback=?';
$.getJSON(url, function(data) {
alert('call succeeded' + data.ids);
});
},
In chrome, the console shows the following error:
GET https://api.twitter.com/1/friends/ids.json?screen_name=codinghorror?callback=jQuery15201747908447869122_1324917568956&_=1324917580929 404 (Not Found)
However if I browse to the URL directly https://api.twitter.com/1/friends/ids.json?screen_name=codinghorror then I can see the results object being returned.
I assume I'm doing something simple wrong with my callback, but can't see what it is, as the approach I've used above has worked for other API calls, so any help would be much appreciated!
Your URL syntax is incorrect. The "callback" parameter should be separated by "&", not "?".
var url = 'http://api.twitter.com/1/friends/ids.json?&screen_name='+user+'&callback=?';
You should probably URL-encode the username too:
var url = 'http://api.twitter.com/1/friends/ids.json?&screen_name=' +
encodeURIComponent(user) +
'&callback=?';
Also I'm not sure why you've got a "&" before the "screen_name" parameter.
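Putting the fix together, the corrected function could look like this (a sketch only; the v1 endpoint itself has long since been retired):
getTwitterUserFriends: function() {
    var user = 'codinghorror';
    var url = 'https://api.twitter.com/1/friends/ids.json?screen_name=' +
              encodeURIComponent(user) + '&callback=?'; // & before callback, not ?
    $.getJSON(url, function(data) {
        alert('call succeeded ' + data.ids);
    });
},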
