I am having some trouble using the Wikipedia API. I use .getJSON to return the data, but it only works intermittently for some reason. I am not sure whether there is a limit on the searches we can perform, or something else. The page just crashes after a series of hits and misses.
I have cleared all the cache stored in Chrome, but the issue still persists. Please help, thank you.
function wiki_api(val) {
    $.getJSON("https://en.wikipedia.org/w/api.php?action=opensearch&search=" +
              val + "&limit=5&format=json&callback=?", function(data) {
        //console.log(search + "2");
        console.log("runs");
        console.log(data);
    });
}
function ClickEvent() {
    var temp;
    console.log("before");
    var dummy = document.getElementById("test").value;
    var search = dummy.split(" ").join("+");
    console.log(search);
    wiki_api(search);
    console.log("ran");
}
Link to CodePen: https://codepen.io/xguardx/pen/BwJMXx?editors=0011
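One thing worth ruling out is a malformed query string: replacing spaces with "+" by hand leaves characters such as "&" or "#" unencoded, which can make requests fail only for some search terms. A minimal sketch in plain JavaScript (`buildWikiUrl` is a hypothetical helper, not part of any API):

```javascript
// Sketch: build the opensearch URL with encodeURIComponent instead of
// hand-replacing spaces, so characters such as "&" or "+" in the search
// term cannot corrupt the query string. The trailing callback=? is left
// literal so jQuery can substitute its JSONP callback name.
function buildWikiUrl(term) {
    return "https://en.wikipedia.org/w/api.php" +
           "?action=opensearch" +
           "&search=" + encodeURIComponent(term) +
           "&limit=5" +
           "&format=json" +
           "&callback=?";
}
```

wiki_api could then call `$.getJSON(buildWikiUrl(val), ...)` without pre-mangling the input.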
If I use the following function to get Google search output:
function myFunction() {
    var post_url, result;
    post_url = "http://www.google.com/search?q=stack+overflow";
    result = UrlFetchApp.fetch(post_url);
    Logger.log(result);
}
it doesn't work.
P.S.
Sorry, I have to explore some dependencies.
I took an example:
function scrapeGoogle() {
    var response = UrlFetchApp.fetch("http://www.google.com/search?q=labnol");
    var myRegexp = /<h3 class=\"r\">([\s\S]*?)<\/h3>/gi;
    var elems = response.getContentText().match(myRegexp);
    for (var i in elems) {
        var title = elems[i].replace(/(^\s+)|(\s+$)/g, "")
                            .replace(/<\/?[^>]+>/gi, "");
        Logger.log(title);
    }
}
and it works. Then I began to make some modifications, and noticed that when I have an error in my code it gives me this error:
Request failed for http://www.google.com/search?q=labnol returned code 503.
So I did some research without errors, and that solution works. But when I began to turn it into a function in a library, it began throwing the 503 error every time!
I'm very surprised by such behavior...
Here is short video only for fact. https://youtu.be/Lem9eiIVY0I
P.P.S.
Oh! I've violated some rules, so the Google engine put me on a stop list,
so I ran this:
function scrapeGoogle() {
    var options = {
        'muteHttpExceptions': true
    };
    var response = UrlFetchApp.fetch("http://www.google.com/search?q=labnol", options);
    Logger.log(response);
}
and get
About this pageOur systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot. Why did this happen?
As I see it, I have to use some special Google service to get the search output without being blocked?
You can use simple regex to extract Google search results.
var regex = /<h3 class=\"r\">([\s\S]*?)<\/h3>/gi;
var items = response.getContentText().match(regex);
Alternatively, you can use the ImportXML function in sheets.
=IMPORTXML(GOOGLE_URL, "//h3[@class='r']")
See: Scrape Google Search with Sheets
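For quick experimentation outside Apps Script, the same regex extraction can be written as a standalone function (`extractTitles` is a hypothetical name, and the `<h3 class="r">` markup is an assumption; Google changes its result markup frequently, which is one reason scraping it is fragile):

```javascript
// Sketch: the <h3 class="r"> extraction from the answer as a standalone
// function, so it can be exercised against saved HTML.
function extractTitles(html) {
    var regex = /<h3 class="r">([\s\S]*?)<\/h3>/gi;
    var elems = html.match(regex) || [];
    return elems.map(function (e) {
        return e
            .replace(/(^\s+)|(\s+$)/g, "")   // trim surrounding whitespace
            .replace(/<\/?[^>]+>/gi, "");    // strip remaining tags
    });
}
```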
I am trying to learn PhantomJS. I would appreciate it if you could help me understand why the code below gives me an error (shown below) and help me fix it. I am trying to execute some JavaScript on a page using PhantomJS. The lines of code in the evaluate function work well when I enter them in the Chrome console, i.e., they give the expected result (document.title).
Thank you.
PhantomJS Code
var page = require('webpage').create();
var url = 'http://www.google.com';
page.open(url, function(status) {
    var title = page.evaluate(function(query) {
        document.querySelector('input[name=q]').setAttribute('value', query);
        document.querySelector('input[name="btnK"]').click();
        return document.title;
    }, 'phantomJS');
    console.log(title);
    phantom.exit();
});
Error
TypeError: 'null' is not an object (evaluating 'document.querySelector('input[name="btnK"]').click')
phantomjs://webpage.evaluate():4
phantomjs://webpage.evaluate():7
phantomjs://webpage.evaluate():7
null
Edit 1: In response to Andrew's answer
Andrew, it is strange but on my computer, the button is an input element. The following screenshot shows the result on my computer.
Edit 2: click event unreliable
Sometimes, the following click event works, sometimes it does not.
document.querySelector('input[name="btnK"]')
Not clear to me what is happening.
About the answer
For future readers: in addition to the answer, the gist by Artjom B. is helpful in understanding what is happening. However, for a more robust solution, I think something like the waitfor.js example will have to be used (as suggested in the answer). I hope it is okay to copy and paste Artjom B.'s gist here. While the gist below works (with the form submit), it is still not clear to me why it does not work if I try to simulate the click on the input button. If anyone can clarify that, it would be great.
// Gist by Artjom B.
var page = require('webpage').create();
var url = 'http://www.google.com';
page.open(url, function(status) {
    var query = 'phantomJS';
    page.evaluate(function(query) {
        document.querySelector('input[name=q]').value = query;
        document.querySelector('form[action="/search"]').submit();
    }, query);
    setTimeout(function() {
        var title = page.evaluate(function() {
            return document.title;
        });
        console.log(title);
        phantom.exit();
    }, 2000);
});
Google uses a form for submitting its queries. It's also highly likely that Google has changed the prototype methods for their search buttons, so it's not really the best site on which to test web scraping.
The easiest way to do this is to actually perform a form submit, which slightly tweaks your example.
var page = require('webpage').create();
var url = 'http://www.google.com';
page.open(url, function(status) {
    var query = 'phantomJS';
    var title = page.evaluate(function(query) {
        document.querySelector('input[name=q]').value = query;
        document.querySelector('form[action="/search"]').submit();
        return document.title;
    }, query);
    console.log(title);
    phantom.exit();
});
Note that the response to this call is async, so reading the title directly will likely give you stale or undefined data: you need to account for the time it takes for the page to load before looking anything up. You can review this in their waitfor.js example.
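The waitfor.js idea reduces to polling a predicate until it becomes true or a timeout expires. A minimal promise-based sketch (`waitFor` is a hypothetical helper, not part of the PhantomJS API; older PhantomJS builds without Promise support would need a callback version):

```javascript
// Sketch of the waitfor.js idea: poll a predicate until it returns true
// or the timeout expires. Resolves true on success, false on timeout.
function waitFor(predicate, timeoutMs, intervalMs) {
    intervalMs = intervalMs || 50;
    var deadline = Date.now() + timeoutMs;
    return new Promise(function (resolve) {
        var timer = setInterval(function () {
            if (predicate()) {
                clearInterval(timer);
                resolve(true);
            } else if (Date.now() > deadline) {
                clearInterval(timer);
                resolve(false);
            }
        }, intervalMs);
    });
}
```

In a PhantomJS script, the predicate would typically call page.evaluate to check whether the results page has rendered before reading document.title.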
You can open google.com and try document.querySelector('input[name="btnK"]') in the console; it's null.
Actually, try replacing input with button:
document.querySelector('button[name="btnK"]')
I'm working on a small project. Mostly to gain some understanding because I am a complete web noob. Which may or may not be apparent in the following question(s).
My end result is fetching a bunch of values from a database. Putting them into a JSON object and then using some jQuery/JavaScript to work with the data. I've been doing a ton of reading, been through tons of posts on here but I can't find something as it relates to my problem.
I'm working in VS2010 C#. I'm bound by IE7 on the client side, but I can test with Firefox. I'm also working on localHost for now.
My JavaScript is as follows.
function plotMarkers() {
    var pathStart = new String(window.location);
    var newPath = pathStart.replace("Default.aspx", "getMarkers.aspx");
    alert(newPath);
    $("#statusDiv").html('JSON Data');
    $.getJSON(newPath, function (data) {
        alert("Loading JSON");
        $.each(data, function (index, elem) {
            var myLatlng = new google.maps.LatLng(elem.lat, elem.lng);
            var marker = new gogle.maps.Marker({
                position: myLatlng,
                title: elem.title
            });
            marker.setMap(map);
        });
    }, function () { alert("loaded") });
}
I don't get any Javascript errors in Firefox, but I do in IE7. The alert("Loading JSON") never fires anywhere. I've never seen it.
My getMarkers.aspx code
public partial class getMarkers : System.Web.UI.Page {
    protected void Page_Load(object sender, EventArgs e) {
        JavaScriptSerializer jsonSer = new JavaScriptSerializer();
        List<marker> markerList = new List<marker>();
        markerList.Add(new marker(-34.397, 150.644, "Test"));
        markerList.Add(new marker(-50, 150.644, "Test2"));
        string sJSON = jsonSer.Serialize(markerList);
        Response.ContentType = "application/json";
        Response.Write(sJSON);
    }
}
I can click the link in "statusDiv" and it takes me to the aspx page. I get an error in IE7, but it works fine in Firefox. The JSON that comes back to me on those pages is correct. I copied and pasted the response into another function, and the plotter put the two pins on my map.
The code does not seem to enter the $.getJSON function.
There is a problem with IE7 (or a problem in general) with how my getMarkers.aspx is set up. I googled some tutorials, and this is where I ended up. I might be completely wrong in my approach, or I missed something I was supposed to do. I can't seem to find that specific tutorial; it must have been closed by accident amidst my furious two-day-and-counting Google adventure.
In Firefox, getMarkers.aspx displays the JSON as the first line, followed by the HTML, all as plain text in the browser.
IE7 displays an XML error and tells me the JSON string isn't allowed at the top level of the document. I did read some articles about IE7 always trying to parse XML, with an elaborate workaround, which I can't use because I don't know how many clients will be using it. Is there a better way to do what I'm doing?
If anyone could point me in the direction I need to go I would greatly appreciate it. Hopefully my post isn't too convoluted.
You probably want a Response.End() at the end of your page load method, so that the rest of the web page isn't rendered after your JSON.
It looks to me, for starters, that you need the json2 library to handle the issues with IE7.
As for firing your function - I can't see anywhere where plotMarkers() is actually called, it's only defined?
Are you trying to display markers on a map in an existing page when the #statusDiv link is clicked ?
If so, you're doing it wrong: don't put a link in #statusDiv, just make sure the getJSON function is called when you click on it. Try this:
function plotMarkers() {
    var pathStart = new String(window.location);
    var newPath = pathStart.replace("Default.aspx", "getMarkers.aspx");
    alert(newPath);
    $("#statusDiv").text('JSON Data');
    $("#statusDiv").on('click', function () {
        $.getJSON(newPath, function (data) {
            alert("Loading JSON");
            $.each(data, function (index, elem) {
                var myLatlng = new google.maps.LatLng(elem.lat, elem.lng);
                var marker = new google.maps.Marker({
                    position: myLatlng,
                    title: elem.title
                });
                marker.setMap(map);
            });
        });
    });
}
I'm trying to write a KDE4 plasmoid in JavaScript, but without success.
I need to get some data via HTTP and display it in a Label. That works well, but I also need a regular refresh (once every 10 seconds), and that is not working.
My code:
inLabel = new Label();
var timer = new QTimer();
var job = 0;
var fileContent = "";

function onData(job, data) {
    if (data.length > 0) {
        var content = new String(data.valueOf());
        fileContent += content;
    }
}

function onFinished(job) {
    inLabel.text = fileContent;
}

plasmoid.sizeChanged = function() {
    plasmoid.update();
};

timer.timeout.connect(getData);
timer.singleShot = false;
getData();
timer.start(10000);

function getData() {
    fileContent = "";
    job = plasmoid.getUrl("http://192.168.0.10/script.cgi");
    job.data.connect(onData);
    job.finished.connect(onFinished);
    plasmoid.update();
}
It gets script once and does not refresh it after 10 seconds. Where is my mistake?
It is working just fine in here at least (running a recent build from git master), getData() is being called as expected. Can you see any errors in the console?
EDIT: The problem was that getUrl() explicitly sets NoReload for KIO::get() which causes it load data from cache instead of forcing a reload from the server. Solution was to add a query parameter to the URL in order to make it force reload it.
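The fix described in the edit can be as simple as appending a throwaway query parameter, so that each request has a unique URL and cannot be satisfied from cache (`addCacheBuster` is a hypothetical helper name):

```javascript
// Sketch of the cache-busting workaround: append a unique "_" parameter
// so KIO treats every request as a distinct URL and re-fetches it from
// the server instead of serving the cached copy.
function addCacheBuster(url) {
    var sep = url.indexOf("?") === -1 ? "?" : "&";
    return url + sep + "_=" + Date.now();
}
```

getData() would then call `plasmoid.getUrl(addCacheBuster("http://192.168.0.10/script.cgi"))`.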
I need to export data to an Excel file. I am using a JSON post for this. It's working fine in every browser except IE. Have a look at my JavaScript code:
function ExportQueryData() {
    var Qry = $("#txtQueryInput").val();
    if ($.trim(Qry) == "") {
        $("#txtQueryInput").addClass("error");
        return false;
    }
    else
        $("#txtQueryInput").removeClass("error");

    var url = "/Reports/ExportQueryData";
    var frmserialize = $("#frmQuery").serialize();
    $.post(url, frmserialize, function(data) {
        data = eval("(" + data + ")");
        if (data.Success) {
            url = "/Reports/Export";
            var win = window.open(url, "DownloadWin", "resizable=0,status=0,toolbar=0,width=600px,height=300px");
            win.focus();
            win.moveTo(100, 100);
            return false;
        }
        else {
            Notify("DB Query", data.Message);
        }
    });
}
As per the above code, I am calling the /Reports/Export action using window.open. This pop-up opens in every browser except IE. In IE, the pop-up closes by itself within 1 or 2 seconds.
I need to use a JSON post because it validates the input and then returns a success status; I can only validate my data on the server side.
Let me know if any information is missing here.
Your suggestions will be valuable to me.
Thanks.
I think you've got a pop-up blocker situation... Frankly, I don't think opening stuff in a pop-up is a good idea anymore; browsers seem to hate it.
You could instead show the user a link to the file, asking them to click it to download.
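A sketch of that idea: once the server reports success, build the link markup and let the user click it themselves, which pop-up blockers do not interfere with (`escapeHtml` and `buildDownloadLink` are hypothetical helper names):

```javascript
// Sketch: render a user-clickable download link instead of window.open.
// Escaping the URL and label keeps injected characters out of the markup.
function escapeHtml(s) {
    return s.replace(/&/g, "&amp;").replace(/</g, "&lt;")
            .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

function buildDownloadLink(url, label) {
    return '<a href="' + escapeHtml(url) + '" target="_blank">' +
           escapeHtml(label) + '</a>';
}
```

In the success branch, something like `$("#statusDiv").html(buildDownloadLink("/Reports/Export", "Download report"))` would replace the window.open call.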