I'm really new to web programming. I'm trying to make a form post and get the callback.
I'm trying to use this plugin here: http://malsup.com/jquery/form/#ajaxSubmit
But when I call $("#my_form").ajaxSubmit(options);, nothing happens.
What I have done so far:
I have this form
<form method="post" id="my_form" action="record.php" enctype="multipart/form-data" >
// stuff inside..
<input type="button" id="recordDatabase" value="Record on Database" />
</form>
And I have this script:
<script src="http://malsup.github.com/jquery.form.js"></script>
$(document).ready(function()
{
var options =
{
beforeSubmit: showRequest, // pre-submit callback
success: showResponse // post-submit callback
};
$("#recordDatabase").click(function()
{
alert('About to submit: \n\n');
$("#my_form").ajaxSubmit(options);
alert('submitted: \n\n');
return false;
});
});
Finally my two functions are:
function showRequest(formData, jqForm, options)
{
// formData is an array; here we use $.param to convert it to a string to display it
var queryString = $.param(formData);
alert('About to submit: \n\n' + queryString);
return true;
}
function showResponse(responseText, statusText, xhr, $form)
{
alert('status: ' + statusText + '\n\nresponseText: \n' + responseText +
'\n\nThe output div should have already been updated with the responseText.');
}
I'm doing it exactly like the example on the site (http://malsup.com/jquery/form/#ajaxSubmit), but it doesn't work.
Any idea what's wrong?
I don't think you can hotlink to the jQuery plugin on GitHub. Try downloading the plugin and saving it as a JS file in your application web root.
It looks like you are not referencing your scripts correctly. According to your comment, you have included your scripts like this:
<script src="ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script src="localhost/js/jqueryPlugin.js"></script>
These are relative URLs. The browser will request the resources from your site by tacking on those relative URLs to the end of the current directory.
Suppose this page is at http://localhost/myapp/mypage.html. The browser will look for your script files at:
http://localhost/myapp/ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js
http://localhost/myapp/localhost/js/jqueryPlugin.js
These URLs probably don't exist. Maybe your browser is smart enough to recognize "ajax.googleapis.com" as a domain name and request the data from that domain, but it's less likely that it will recognize "localhost" as a domain.
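You can check this resolution yourself with the standard WHATWG URL parser (available in Node and modern browsers); the page address below is just an assumed example:

```javascript
// How the browser resolves those src values against the page URL.
const page = "http://localhost/myapp/mypage.html"; // assumed page location

// Without a scheme or a leading //, the value is treated as a relative path:
const naive = new URL(
  "ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js", page).href;
// "http://localhost/myapp/ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"

// With a leading //, the first segment is treated as a host and the
// page's scheme is reused:
const schemeRelative = new URL(
  "//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js", page).href;
// "http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"
```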
Add // to the beginning of each URL. This is a "scheme-relative URL" (also called protocol-relative) and will use either http or https depending on what the current page is using. We use these kinds of URLs to avoid security prompts warning users that some of the content on the page is not secure. Your scripts would then look like this:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script src="//localhost/js/jqueryPlugin.js"></script>
The server is running CGI, a C++ Web API. There is no MVC or PHP involved. The host uses JavaScript, AJAX and some jQuery.
My root question is twofold:
1) How to squirt a raw PDF into a new window when the POST returns with that PDF. Below is an example of a web page that calls a Web API to generate a PDF, where I then fail to properly put that PDF into a new web page.
or
2) How to do a POST via an HREF using JavaScript and AJAX.
Option 1 above is preferable. Please reference SQUIRT or HREF in your replies.
If you are an uber Web programmer - search for "Just Not Working" to solve the Squirt issue for me. That will save you a lot of reading.
My code works if the HREF is a GET. The Web API I am calling returns a PDF. I open a new window, wrap the PDF with the proper HTML, and everything works exactly as I like when I return a static PDF from a GET. But the issue is that I need to push data into the Web API to tell it how to generate the PDF. Since I also want this to be atomic, I don't want a POST followed by a GET. The Web API must remove all trace of the data once it returns the PDF. So a single POST would be ideal.
There are other threads on stackoverflow that debate why it is bad to push data in the body of a GET, and since that violates some standards, that is off the table.
Also the data submitted must be secure - submitted via https, so putting it in the URL is no good.
I figured out the dual posting from the form: the same data is submitted to my Web API on a submit (to change the device's settings) and on a button click (to generate a PDF).
A stripped down form:
<html>
<head>
<script src="./jquery/jquery-2.1.4.js" type="text/javascript"></script>
<script src="ToolKit.js" type="text/javascript"></script>
<script src="setupFunctions.js" ></script>
<link rel="stylesheet" type="text/css" href="yadayada.css" />
<title>My Linux Device with CGI scripts</title>
</head>
<div class="body">
<form name="MyForm" id="MyForm" hidden action="/api/config/form" method="POST">
<div class="variables">
<input type="hidden" id="ETH0_IP_MODE" name="ETH0_IP_MODE" value="DHCP">
<input type="hidden" id="WIFI_AP_BROADCAST" name="WIFI_AP_BROADCAST" value="ON">
</div>
<!-- HTML That does amazing and spectacular stuff, and sets name/value items to post. -->
<div id="bottomButtons">
<input type="submit" name="update" value="Apply settings" />
<input type="button" value="Cancel" onclick="location.reload(true);" />
<input type="button" id="GeneratePDF" value="Create PDF" onclick="GeneratePDF()" style="float:right;" />
</div>
</form>
</div> <!-- body -->
</html>
Above is a stripped down form with two buttons. Submit does a POST, but I override the submit handler to change the error/success handling.
<!-- now some Javascript -->
<script type="text/javascript">
function UpdateStatus(st, color) { /* here for completeness... */ }
$('form[name=MyForm]').submit(function (e) {
e.preventDefault(); // Keep the form from submitting
var form = $(this);
$.ajax({
url: form.attr('action'), // '/api/config/form',
type: form.attr('method'), // 'POST',
data: form.serialize(),
cache: false,
async: false,
dataType: 'html',
error: function (jqXHR, textStatus, errorThrown) {
UpdateStatus('Status: Error setting the network configuration! ' + textStatus + ': ' + errorThrown, "Crimson");
},
success: function (data) {
UpdateStatus('Status: Success set the network configuration!', "Chartreuse");
my_window = window.open("", "Update Network Configuration Results", "location=1,status=1,scrollbars=1,width=800,height=640");
my_window.document.write(data);
}
});
return false;
});
</script>
So far this is just a normal form that calls a Web API. Nothing special. After the submit/update button is pressed, the status is updated, and on success a window is opened. That window receives a block of text returned from the Web API that is there for debugging. It reports back everything that has changed.
The main issue is the second button - the GeneratePDF. I do call a POST (see below), but my primary issue is how to squirt the PDF into the HTML on successful return from the post.
More JavaScript - linked to that second button:
function GeneratePDF() {
try {
var elem = document.getElementById('MyForm');
WebToolkit.readPDF(GeneratePDF_callback, elem); // POST
} catch (e) {
debugger;
throw e;
}
}
Now some AJAX:
var WebToolkit = {
host: document.location.host, // Javascript Get Website URL
servicePath: "/api/config/",
getValueMethod: "value?",
getPDFMethod: "GeneratePDF?",
httpsPortNumbers: [443],
_networkErrorCallback: null,
_serviceURL: null,
readPDF: function (callback, form) {
var url = this._serviceURL + this.getPDFMethod;
$.ajax({
type: "POST",
contentType: "application/pdf",
dataType: "text",
data: $("form").serialize(),
url: url,
timeout: 5000,
success: callback,
cache: false,
async: false
});
}
}
Obviously _serviceURL is set up in a setup function which I did not show, but you get the idea. All of this works. When I press the GeneratePDF button, it posts the data and the Web API (not shown) returns a raw PDF file. My callback is GeneratePDF_callback, and THIS is the problem code that I need help with. I am leaving in commented-out attempts to wrap the PDF, which do not work.
function GeneratePDF_callback(data)
{
try {
my_window = window.open("", "GeneratePDF Results", "location=1,status=1,scrollbars=1,width=1024,height=800");
// Next two lines are just not working !!!
var PDFData = encodeURI(data);
my_window.document.body.innerHTML = "<head><Title>Generate PDF</Title><style></style></head><body><object data=:application/pdf;base64,\"" + PDFData + "\" type=\"application/pdf\" width=\"100%\" height=\"100%\"><p>It appears you don't have a PDF plugin for this browser. Not a problem... you can click here to download the PDF file.</p></object></body></html>";
UpdateStatus('Status: Success reading PDF from device!', "Chartreuse");
} catch (e) {
debugger;
UpdateStatus('Error reading PDF: ' + e.message, "Crimson");
throw e;
}
}
FRIGGIN HELP! This is driving me batty. Once this works, as you can see, I have an href after the "Not a problem" text. This is why I think getting an HREF to work is preferable. Clicking that link will resubmit the data and request a new PDF document. And this in itself may be a major design flaw, since I haven't quite tackled how I'd resubmit the data from the new window.
Alternatively, clicking the button could do a POST of the data and a GET of the results. The POST would return an ID; the GET would take the ID and allow only ONE get of that ID. There is no refresh problem on the GET since it simply displays or stores the PDF; a refresh of the window showing the PDF would not need to resubmit the query. This must be SECURE: HTTPS, with none of the data visible to a sniffer.
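A minimal sketch of the server-side bookkeeping that POST-then-GET idea needs (all names here are hypothetical, not part of the existing Web API): the POST stores the generated PDF under a fresh ID, and the GET hands it out exactly once.

```javascript
// Hypothetical one-time store for the POST-then-GET scheme described above.
// POST handler: put() stashes the generated PDF and returns an ID for the client.
// GET handler:  take() returns the PDF once, then removes all trace of it.
function createOneTimeStore() {
  let nextId = 0;
  const items = new Map();
  return {
    put(pdfData) {
      const id = String(++nextId);
      items.set(id, pdfData);
      return id;
    },
    take(id) {
      const data = items.get(id);
      items.delete(id); // a second GET (or a refresh) finds nothing
      return data;      // undefined when the ID is unknown or already used
    }
  };
}
```

The client could then point the new window at a plain GET URL such as a hypothetical /api/config/GetPDF?id=..., so a refresh never resubmits the form data and a second request for the same ID returns nothing.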
If you're still with me - THANK YOU just for reading.
SOLUTION:
I got this working, and am a bit overdue getting back to update this post:
I use GET and pass in the parameters that tell the server how to build the PDF.
I get back HTML with an embedded object, which references the PDF stream. The PDF response itself looks like this:
HTTP/1.1 200 OK
X-Frame-Options: DENY
Content-Length: 265129
Content-Type: application/pdf
Date: Wed, 17 Feb 2016 07:19:29 GMT
Server: lighttpd/1.4.35
%PDF-1.3
... all the PDF binary data...
This is done in JavaScript via:
form = $('form[name=myForm]');
PDF_window = window.open("", "PDF Results", "location=1,status=1,width=700,height=800");
if (PDF_window == undefined) {
UpdateStatus("Status: Error opening window to display the PDF! Turn on POP UPS.", "Chartreuse");
}
else {
    Data = WebToolkit._serviceURL + "PDF?" + form.serializeAndEncode();
    WebPage = "<html><Title>PDF Page</Title><body><p>click here to download the PDF file for saving.</p><object data=\"" + Data + "\" type=\"application/pdf\" width=\"100%\" height=\"100%\"></object></body></html>";
    PDF_window.document.clear();
    PDF_window.document.write(WebPage);
}
The only issue I have run into is that the embedded PDF will not let me press the SAVE icon; PRINT works. This is why I put in the URL to redraw the page: it redraws the page exactly as before, but not inside the 'object data', and then it allows both SAVE and PRINT. Weird; I see lots of people asking how to draw a PDF and disallow SAVE. Well, I want the opposite. My solution is OK for now.
Update - Solution, which is pretty simple after much experimentation:
function DisplayPDFResult() {
var form;
form = $('form[name=MyForm]');
PDFWindow = window.open("", "Yada Header", "location=0,status=1,width=700,height=800,", false);
if (PDFWindow == undefined) {
UpdateStatusPop("Status: Error opening window to display the EasySetup PDF! Turn on POP UPS.", "ForestGreen");
return;
}
else {
// This url will cause the web api to generate a raw pdf, and squirt into the window
PDFWindow.location.href = WebToolkit._serviceURL+"ShowPDF?" + form.serializeAndEncode();
}
}
Vinnie
I have an html file with many <a> tags with href links.
I would like to have the page do nothing when these links point to an outside url (http://....) or an internal link that is broken.
The final goal is to have the html page used offline without having any broken links. Any thoughts?
I have tried using a Python script to change all links but it got very messy.
Currently I am trying to use JavaScript and calls such as $("a").click(function(event) {...}) to handle these clicks, but these have not been working offline.
Also, caching the pages will not be an option because they will never be opened online. In the long run, this may also need to be adapted to src attributes, and will be used in thousands of html files.
Lastly, it would be preferable to use only standard and built in libraries, as external libraries may not be accessible in the final solution.
UPDATE: This is what I have tried so far:
//Register link clicks
$("a").click(function(event) {
checkLink(this, event);
});
//Checks to see if the clicked link is available
function checkLink(link, event){
//Is this an outside link?
var outside = link.href.indexOf("http") === 0; // "https" also starts with "http", so this covers both
//Is this an internal link?
if (!outside) {
if (isInside(link.href)){
console.log("GOOD INSIDE LINK CLICKED: " + link.href);
return true;
}
else{
console.log("BROKEN INSIDE LINK CLICKED: " + link.href);
event.preventDefault();
return false;
}
}
else {
//This is outside, so stop the event
console.log("OUTSIDE LINK CLICKED: " + link.href);
event.preventDefault();
return false;
}
}
//DOESNT WORK
function isInside(link){
$.ajax({
url: link, //or your url
success: function(data){
return true;
},
error: function(data){
return false;
},
})
}
Also an example:
Outside Link : Do Nothing ('#')
Existing Inside Link : Follow Link
Inexistent Inside Link : Do Nothing ('#')
Javascript based solution:
If you want to use javascript, you can fix your isInside() function by setting the $.ajax() call to be synchronous (async: false). That will cause it to wait for a response before returning. See jQuery.ajax. Pay attention to the warning that synchronous requests may temporarily lock the browser, disabling any actions while the request is active (this may be good in your case).
Also, instead of doing a 'GET', which is what $.ajax() does by default, your request should be a 'HEAD' (assuming your internal web server hasn't disabled responding to this HTTP verb). 'HEAD' is like 'GET' except it doesn't return the body of the response, so it's a good way to find out whether a resource exists on a web server without having to download the entire resource.
// Formerly isInside. Renamed it to reflect its function.
function isWorking(link){
    var working = false;
    $.ajax({
        url: link,
        type: 'HEAD',
        async: false,
        success: function(){ working = true; },
        error: function(){ working = false; }
    });
    // Because the call is synchronous, the callbacks have already run.
    return working;
}
Python based solution:
If you don't mind preprocessing the html page (and even caching the result), I would go with parsing the HTML in Python using a library like BeautifulSoup.
Essentially I would find all the links on the page, and replace the href attribute of those starting with http or https with #. You can then use a library like requests to check the internal urls and update the appropriate urls as suggested.
Here is some javascript that will prevent you from going to an external site:
var anchors = document.getElementsByTagName('a');
for(var i=0, ii=anchors.length; i < ii; i++){
anchors[i].addEventListener('click',function(evt){
if(this.href.slice(0,4) === "http"){
evt.preventDefault();
}
});
}
EDIT:
As far as checking whether a local path is good on the client side, you would have to send an ajax call and then check the status code of the call (the infamous 404). However, you can't do ajax from a static html file (e.g. file://index.html). It would need to be running on some kind of local server.
Here is another stackoverflow that talks about that issue.
We have the following situation:
in default.aspx we have a link:
test.
and the JS code:
function doPost() {
$.post('AnHttpHandlerPage.aspx',"{some_data:...}", function(data) {
if(data.indexOf("http://")==0)
window.open(data);
else{
var win=window.open();
with(win.document) {
open();
write(data); //-> how to execute this HTML code? The code also includes references to other js files.
close();
}
}
}).error(function(msg){document.write(msg.responseText);});
}
The callback can return either (1) a URL or (2) HTML code that must be executed.
Option 1 works fine, but with option 2 a new window is opened and the code is written into it but not executed.
That's expected: since the markup is just written into the document stream, the scripts in it aren't run. So the question is how to fix this. Maybe a refresh(), or something similar?
Because of the requirement of the customer, the workflow can not be changed, so it must be solved within doPost().
EDIT
The response in case 2 is HTML like this. This part should be executed:
<HTML><HEAD>
<SCRIPT type=text/javascript src="http://code.jquery.com/jquery-latest.js">
</SCRIPT>
<SCRIPT type=text/javascript>
$(document).ready(function() {
do_something...
});
</SCRIPT>
</HEAD>
<BODY>
<FORM>...</FORM>
</BODY>
</HTML>
Please help. Thanks.
In your JS code it should be something like this:
function doPost() {
$.post('AnHttpHandlerPage.aspx',"{some_data:...}", function(data) {
//if(data.indexOf("http://")==0)
if (data.type != "url") // I will add a data type to my returned JSON so I can differentiate whether it's url or html to show on the page.
window.open(); // I don't know why this is there. You should
else{
var win = window.open(data.url); // This data.url should spit out the whole page you want in the new window. If it's external, that's fine; if it's internal, you could have an Action on one of your controllers that spits it out with head, body, JS, CSS, etc.
/* with(win.document) {
open();
write(data); //-> how to execute this HTML code? The code also includes references to other js files.
close(); */ // No need to write data to the new window when it's all HTML to be rendered by the browser. Why is this a requirement?
}
}
}).error(function(msg){document.write(msg.responseText);});
}
The overall logic is this:
You do your ajax call in doPost.
Find out if the data returned is of type url, or anything else that needs to open in a new window.
If it is url type, it should have a url (check that it is not null or empty, and is a valid url), then open a new window with that url. Have a read of the W3C window.open documentation for the parameters.
If you want to open and close it for some reason, do that by keeping the window handle, but do it on the DOM-ready event of that new window; otherwise you might end up closing it before its DOM is completely loaded. (Someone else might have a better way.)
If it's not url type, then you do your usual stuff on this page.
If this does not make sense, let's discuss.
I have an ASP.NET MVC3 application published to a url like this:
http://servername.com/Applications/ApplicationName/
In my code, I am using jquery ajax requests like this:
$.get(('a/b/c'), function (data) {}, "json");
When I run the application locally, the ajax request goes directly to the correct page (being an mvc route) because the local page ends with a "/" (localhost/a/b/c).
However, when I publish to http://servername.com/Applications/ApplicationName/, the trailing "/" is not always present. The url could be http://servername.com/Applications/ApplicationName, which then causes the ajax request to try to load http://servername.com/Applications/ApplicationNamea/b/c, which fails for obvious reasons.
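Strictly speaking, the browser resolves the relative URL against the parent directory of the last path segment rather than concatenating strings, but either way the request misses the application root. A quick check with the standard URL parser (runnable in Node or a browser console):

```javascript
// Where "a/b/c" ends up depending on whether the page URL has a trailing slash.
const withSlash = new URL("a/b/c",
  "http://servername.com/Applications/ApplicationName/").href;
// "http://servername.com/Applications/ApplicationName/a/b/c"  (correct)

const withoutSlash = new URL("a/b/c",
  "http://servername.com/Applications/ApplicationName").href;
// "http://servername.com/Applications/a/b/c"  (wrong application root)
```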
I have already looked into rewriting the url to append a trailing slash, but A) It didn't work, and B) I feel like it's a poor solution to the problem, and that it would be better to configure the javascript urls to work properly regardless of the local folder setup.
I did try "../a/b/c" and "/a/b/c", but neither seemed to work.
Thanks in advance for the help!
Personally I tend to use a global variable of the relative URL of the server in my view like:
var BASE_URL = '@Url.Content("~/")';
Then you can do things like:
$.get(BASE_URL + 'a/b/c', function (data) {}, "json");
I would like to add that if you want it to be totally global, you could add it to your /Views/Shared/_Layout.cshtml instead.
I ran into the same problem, and ended up creating two JavaScript functions that mirror the functionality of the MVC Url helper methods Url.Action and Url.Content. The functions are defined in the _Layout.cshtml file, so are available on all views, and work regardless of whether the application is in the root of the localhost or in a subfolder of a server.
<script type="text/javascript">
function UrlAction(action, controller) {
var url = ('@Url.Action("--Action--","--Controller--")').replace("--Action--", action).replace("--Controller--", controller);
return url;
}
function UrlContent(url) {
var path = '@Url.Content("~/--file--")';
path = path.replace("--file--", url.replace('~/', ''));
return path;
}
</script>
These can then be called like so:
var url = UrlAction('AvailableAssetClasses', 'Assessment');
var url2 = UrlContent('~/Images/calendar.gif');
Always use Url helpers when generating urls in an ASP.NET MVC application and never hardcode them. So if this script is directly inside the view:
<script type="text/javascript">
var url = '@Url.Action("a", "b")';
$.get(url, function (data) {}, "json");
</script>
And if this script is inside a separate javascript file (as it should be) where you don't have access to server side helpers, you could simply put the url in some related DOM element. For example using HTML5 data-* attributes:
<div data-url="@Url.Action("a", "b")" id="foo">Click me</div>
and then in your javascript file:
$('#foo').click(function() {
var url = $(this).data('url');
$.get(url, function (data) {}, "json");
});
and if you are unobtrusively AJAXifying an anchor or a form, well, you already have the url:
$('a#someAnchor').click(function() {
var url = this.href;
$.get(url, function (data) {}, "json");
return false;
});
I want to verify if an external url valid/exists/responsive using javascript. For example, "www.google.com" should return true and "www.google123.com" should return false.
I thought to use AJAX for this purpose by testing if (xmlhttp.readyState == 4 && xmlhttp.status == 200), but it seems that this doesn't work for remote servers (external urls). As my server uses a proxy, I planned to use a browser-side script so that it automatically uses the user's browser proxy if present.
Please tell me, do I have to use "AJAX Cross Domain"? How do I achieve this, as I simply want to validate a url?
Any way other than using AJAX?
I'm pretty sure this is not possible. Any AJAX that allowed you to call a random page on another domain in the user's context would open up all sorts of security holes.
You will have to use a server-side solution.
The usual way to avoid cross-domain issues is to inject a tag. Tags like image or script can load their content from any domain. You could inject, say, a script tag with type "text/x-unknown" or something, and listen to the tag's load event. When the load event triggers, you can remove the script tag from the page again.
Of course, if the files you are looking for happen to be images, then you could use new Image() instead. That way you don't have to pollute the page by injecting tags, because images load when they are created (this can be used to preload images). Again, just wait for the load event on the image.
UPDATE
Okay, it seems I was jumping to conclusions here. There are some differences between browsers in how this is supported. The following is a complete example of how to use the script tag for validating urls in IE9 and recent versions of Firefox, Chrome and Safari.
It does not work in older versions of IE (IE8 at least) because apparently they don't provide load/error events for script tags.
Firefox refuses to load anything if the content type for the script tag is not empty or set to 'text/javascript'. This means it may be somewhat dangerous to use this approach to check for script files. In my tests it seems like the script tag is deleted before any code is executed, but I don't know for sure...
Anyways, here is the code:
<!doctype html>
<html>
<head>
<script>
function checkResource(url, callback) {
var tag = document.createElement('script');
tag.src = url;
//tag.type = 'application/x-unknown';
tag.async = true;
tag.onload = function (e) {
document.getElementsByTagName('head')[0].removeChild(tag);
callback(url, true);
}
tag.onerror = function (e) {
document.getElementsByTagName('head')[0].removeChild(tag);
callback(url, false);
}
document.getElementsByTagName('head')[0].appendChild(tag);
}
</script>
</head>
<body>
<h1>Testing something</h1>
<p>Here is some text. Something. Something else.</p>
<script>
checkResource("http://google.com", function (url, state) { alert(url + ' - ' + state) });
checkResource("http://www.google.com/this-does-not-exists", function (url, state) { alert(url + ' - ' + state) });
checkResource("www.asdaweltiukljlkjlkjlkjlwew.com/does-not-exists", function (url, state) { alert(url + ' - ' + state) });
</script>
</body>
</html>