JavaScript does not run after ASPX page has been loaded

I created a site in Visual Studio 2015 Community Edition starting with the ASP.NET Empty Web Site template. Recently, I needed to add authentication features, so I created a new ASP.NET Web Forms site and moved all of my pages/files (the entire site created in the empty web site) into this new Web Forms site template.
Everything works perfectly, except that none of the JavaScript that used to update my pages dynamically continues to work. All of the JavaScript functions that previously worked seem to be completely ignored. (I haven't changed anything in the HTML or JavaScript code; the only thing that changed was the ASP.NET template I began with.) Even something as simple as tagging the navigation menu item as active no longer works, for example:
<script type="text/javascript">
    $(function () {
        // Get the full URL from the address bar
        var url = window.location.href;
        // Loop over every "a" tag in the navigation
        $(".Navigation a").each(function () {
            // Check whether its href matches the address bar
            if (url == this.href) {
                $(this).closest("li").addClass("active");
            }
        });
    });
</script>
This worked perfectly to highlight the active menu item before, but it no longer works in the new Web Forms site template. I've also tried moving it from the page header to the content, and even into a separately referenced file, to no avail.
Is there an assembly I need to add to my new project, or is there a global setting in this ASP.NET Web Forms template that could be blocking my JavaScript from working? Any help would be greatly appreciated. I've been stuck on this problem for over a week now.
Edit: here's a better example, to see if I'm missing something more obvious:
This worked previously to dynamically load more information from a database after the page was loaded and the user scrolled to the bottom. The JavaScript still works to display a message box when the user hits the bottom of the page, but the Web Method in the C# code-behind never gets called...
var pageIndex = 1;
var pageCount;

$(window).scroll(function () {
    // Every time the user's scroll reaches the bottom of the page, execute GetRecords
    if ($(window).scrollTop() == $(document).height() - $(window).height()) {
        GetRecords();
    }
});

function GetRecords() {
    // pageIndex begins at 1, and each time the user scrolls to the bottom of the page
    // this number is increased to mark how many times the user has hit the bottom.
    // This later marks how many elements have been loaded from the database.
    pageIndex++;
    // On the first scroll, pageCount is null and pageIndex is 2, so the function still needs to run.
    if (pageIndex == 2 || pageIndex <= pageCount) {
        // Display a loading bar
        $("#loader").show();
        window.alert("loading");
        $.ajax({
            // POST signals a data request
            type: "POST",
            // This directs which function in the C# code-behind to use
            url: "databaseLoadDynamic.aspx/GetCustomers",
            // The parameter pageIndex (the page number to load) to pass to GetCustomers(int pageIndex)
            data: '{pageIndex: ' + pageIndex + '}',
            // Type of data we are sending to the server (i.e. the pageIndex parameter)
            contentType: "application/json; charset=utf-8",
            // Type of data we expect back from the server (to fill into the HTML ultimately)
            dataType: "json",
            // If all goes smoothly to here, run the function that fills our HTML table
            success: OnSuccess,
            // On failure, alert the user (aka me, so that I know something isn't working)
            failure: function (response) {
                alert(response.d);
            },
            error: function (response) {
                alert(response.d);
            }
        });
    }
}
Thank you so much for all of your help!

Related

How to get string URL from multiple past pages JavaScript

I am very new to JavaScript. I am trying to make a web application where a simple back button will go to a specific page I am looking for: one that has the word "search" in it. I don't know the exact URL, because the parameters within that URL change, and I need to keep them consistent with what the user wanted. But this one button should go back to that one page, regardless of the other links that were clicked.
For example:
If I clicked on
Home Page
Main Page
Page 1
Page 1.3
Back
I want the back to always take me to Main Page with the exact parameters it had before, not Home Page.
I tried the following:
The button itself
movieTableBodyElement.append('<button onclick="goBackHelper()">' + " << Back" + '</button>'); // Building the HTML button, calls the goBackHelper() function
function goBackHelper() {
    // Check if the page directly behind is the Main Page with "search"
    if (document.referrer.includes("search")) {
        // It's the main page, so go back once and end the recursion
        window.history.go(-1);
    } else {
        // It is not the target page, so go to the previous page and check again
        window.history.go(-1);
        goBackHelper();
    }
}
But this takes me to the very first home page. I thought that document.referrer would get me the past URL, but it doesn't seem to be working for me. Is there a way to get the URL from past pages? So if I am on page 2, can I get all the URLs and search for Main Page? Any help is greatly appreciated!
I'm also new to Stack Overflow, so if there is any clarification please don't hesitate to let me know!
document.referrer is not the same as the previous URL in all situations.
Your best bet is to store the visited URLs in sessionStorage; it persists for the lifetime of the tab, so each page can append its own URL to a shared history array.
Add this snippet of code to your pages:
if (sessionStorage.getItem("locationHistory") !== null) {
    var locationHistoryArray = JSON.parse(sessionStorage.getItem("locationHistory"));
    locationHistoryArray.push(window.location.href);
    sessionStorage.setItem("locationHistory", JSON.stringify(locationHistoryArray));
} else {
    var locationHistoryArray = [];
    locationHistoryArray.push(window.location.href);
    sessionStorage.setItem("locationHistory", JSON.stringify(locationHistoryArray));
}
And this is your goBackHelper() function :
function goBackHelper() {
    var searchString = 'search'; // modify this
    var locationHistoryArray = JSON.parse(sessionStorage.getItem("locationHistory"));
    for (var i = 0; i < locationHistoryArray.length; i++) {
        if (locationHistoryArray[i].includes(searchString)) {
            window.location.assign(locationHistoryArray[i]);
            break;
        }
    }
}
Read about document.referrer on MDN.

Removing broken links in offline HTML

I have an html file with many <a> tags with href links.
I would like to have the page do nothing when these links point to an outside url (http://....) or an internal link that is broken.
The final goal is to have the html page used offline without having any broken links. Any thoughts?
I have tried using a Python script to change all links but it got very messy.
Currently I am trying to use JavaScript and calls such as $("a").click(function(event) {...}) to handle these clicks, but these have not been working offline.
Also, caching the pages will not be an option because they will never be opened online. In the long run, this may also need to be adapted to src attributes, and will be used in thousands of html files.
Lastly, it would be preferable to use only standard and built in libraries, as external libraries may not be accessible in the final solution.
UPDATE: This is what I have tried so far:
// Register link clicks
$("a").click(function (event) {
    checkLink(this, event);
});

// Checks to see if the clicked link is available
function checkLink(link, event) {
    // Is this an outside link?
    var outside = (link.href).indexOf("http") >= 0 || (link.href).indexOf("https") >= 0;
    // Is this an internal link?
    if (!outside) {
        if (isInside(link.href)) {
            console.log("GOOD INSIDE LINK CLICKED: " + link.href);
            return true;
        } else {
            console.log("BROKEN INSIDE LINK CLICKED: " + link.href);
            event.preventDefault();
            return false;
        }
    } else {
        // This is outside, so stop the event
        console.log("OUTSIDE LINK CLICKED: " + link.href);
        event.preventDefault();
        return false;
    }
}
// DOESN'T WORK
function isInside(link) {
    $.ajax({
        url: link, // or your url
        success: function (data) {
            return true;
        },
        error: function (data) {
            return false;
        },
    });
}
Also an example:
Outside Link : Do Nothing ('#')
Outside Link : Do Nothing ('#')
Existing Inside Link : Follow Link
Inexistent Inside Link : Do Nothing ('#')
JavaScript-based solution:
If you want to use JavaScript, you can fix your isInside() function by making the $.ajax() call synchronous; that will cause it to wait for a response before returning. See jQuery.ajax, and pay attention to the warning that synchronous requests may temporarily lock the browser, disabling any actions while the request is active (this may be good in your case).
Also, instead of doing a 'GET', which is what $.ajax() does by default, your request should be a 'HEAD' (assuming your internal web server hasn't disabled responding to this HTTP verb). 'HEAD' is like 'GET' except that it doesn't return the body of the response, so it's a good way to find out whether a resource exists on a web server without having to download the entire resource.
// Formerly isInside. Renamed to reflect its function.
function isWorking(link) {
    // Capture the outcome in a local variable; returning from inside the
    // success/error callbacks would not set the return value of isWorking itself.
    var result = false;
    $.ajax({
        url: link,
        type: 'HEAD',
        async: false,
        success: function () { result = true; },
        error: function () { result = false; }
    });
    return result;
}
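For completeness, a minimal sketch of how the question's click handler could use this synchronous check (a simplification of the original checkLink; it blocks any link whose HEAD request does not succeed, which while offline also covers external links):
$("a").click(function (event) {
    // Block navigation unless the synchronous HEAD check succeeds
    if (!isWorking(this.href)) {
        event.preventDefault();
    }
});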
Python-based solution:
If you don't mind preprocessing the HTML page (and even caching the result), I would go with parsing the HTML in Python using a library like BeautifulSoup.
Essentially, I would find all the links on the page and replace the href attribute of those starting with http or https with #. You can then use a library like requests to check the internal URLs and update the appropriate ones as suggested.
Here is some JavaScript that will prevent you from going to an external site:
var anchors = document.getElementsByTagName('a');
for (var i = 0, ii = anchors.length; i < ii; i++) {
    anchors[i].addEventListener('click', function (evt) {
        if (this.href.slice(0, 4) === "http") {
            evt.preventDefault();
        }
    });
}
EDIT:
As far as checking whether a local path is good on the client side, you would have to send an ajax call and then check the status code of the response (the infamous 404). However, you can't do ajax from a static html file (e.g. file://index.html); it would need to be running on some kind of local server.
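For example, assuming Python 3 is installed, a minimal local server for testing could be started from the page's directory with:
python -m http.server 8000
The page is then reachable at http://localhost:8000/ and ajax calls to relative paths become same-origin requests that can actually complete.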
Another Stack Overflow question talks about that issue.

AJAX $.ajax() and setInterval() only loading when someone is on page?

I wrote the code below to refresh or reload the contents of a div with id='bitcoin_blocks_table', and it only does so while someone is on the site.
If nobody is on the site and I come back two hours later, it hasn't picked up the updates from those two hours.
Is this because of the AJAX call, or could it be because of the script?
Code:
$('#bitcoin_blocks_table').load('./ajax/bitcoin_blocks.php');

var refresh_bitcoin_blocks = setInterval(function () {
    $.ajax({
        url: './ajax/bitcoin_blocks.php',
        type: 'POST',
        success: function (blocks) {
            $('#bitcoin_blocks_table').html(blocks);
        }
    });
}, 10000);
It's because the site "works" only while somebody has it open. The intervals run within a client (the browser), and once it's closed, so are the intervals... Imagine what would happen if all the periodic JS functions of every site kept running (and added up with every new visit) the whole time!
Google "cron job".
run your script
/ajax/bitcoin_blocks.php
via cronjob
see this article
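A hedged sketch of what such a crontab entry might look like (the PHP binary and the absolute script path are assumptions; adjust them to your server). Note that for this to keep the data fresh, the script would need to persist its results server-side, for example to the database, rather than only render HTML for the browser:
# Hypothetical crontab entry: run the update script every minute
# (cron cannot run more often than once per minute)
* * * * * /usr/bin/php /var/www/html/ajax/bitcoin_blocks.php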

Dynamic script loading works first time but not thereafter

I have a contact page on my website with various social network links (plus an e-mail form) and links at the side to select each one. Clicking a link makes an ajax request to the server, and on success the HTML of a common div is replaced with the response.
Each one has a JavaScript file associated with it, which is added as a script tag in the document head on ajax success.
These scripts should evaluate on each load and prepare the DOM in the response. However, I am finding that the first click works perfectly: the script is loaded and executes. But when I click on another link, the new script is loaded yet never seems to execute, and none of the dynamically loaded scripts work thereafter.
The ajax call for loading each option is bound to each link's click event here:
$('.socialLink').click(function () {
    var id = $(this).prop('id').toLowerCase();
    var callingObj = $(this);
    $.ajax({
        url: "./socialMedia/" + id + ".php",
        success: function (msg) {
            $('.socialLink').css('opacity', '0.4');
            $('.socialLink').data('active', false);
            callingObj.css('opacity', '0.9');
            callingObj.data('active', true);
            if ($('#Feed').css('display') !== 'none') {
                $('#Feed').slideToggle(400, function () {
                    $('#Feed').html(msg);
                });
            } else {
                $('#Feed').html(msg);
            }
            $('#Feed').slideToggle(400);
            $.getScript('./script/' + id + '.js');
        }
    });
});
The thing is, I dynamically load scripts for each page on the site, too... and don't seem to have any problems with that.
You can see the page I am talking about if you go here: http://www.luketimoth.me/contact.me. Only two options actually load any JavaScript at the moment, the e-mail and twitter ones... the rest are empty js files with only a single comment inside.
EDIT: I am now using jQuery getScript()... I have changed the code above to reflect this. The scripts I am trying to load, which are not working as expected, are:
twitter.js (just the standard code twitter gives you for one of their widgets):
!function (d, s, id) {
    var js, fjs = d.getElementsByTagName(s)[0],
        p = /^http:/.test(d.location) ? 'http' : 'https';
    if (!d.getElementById(id)) {
        js = d.createElement(s);
        js.id = id;
        js.src = p + "://platform.twitter.com/widgets.js";
        fjs.parentNode.insertBefore(js, fjs);
    }
}(document, "script", "twitter-wjs");
email.js:
$('#Send').click(function () {
    var senderName = $('#YourName').val();
    var senderEmail = $('#Email').val();
    var emailSubject = $('#Subject').val();
    var emailBody = $('#EmailBody').val();
    $.ajax({
        url: './script/sendMail.php',
        data: {
            name: senderName,
            email: senderEmail,
            subject: emailSubject,
            body: emailBody
        },
        type: "POST",
        success: function (msg) {
            $('#success').html(msg);
        }
    });
});

$('input, textarea').focus(function () {
    if (this.value === this.defaultValue) {
        this.value = '';
    }
});

$('input, textarea').focusout(function () {
    if (!this.value.length) {
        this.value = this.defaultValue;
    }
});
Thanks for all the comments and suggestions. I decided in the end to load everything in the background rather than make an ajax request every single time.
It's actually a much more responsive page now... admittedly at the cost of having unused DOM elements in the background. Given how much faster it is, though, I think the trade-off is acceptable.

IE8's information bar blocking a scripted file download in response to JQuery AJAX request

I have an HTML/JavaScript frontend that uses jQuery's AJAX request to send XML containing user-entered form data to a backend application, which in turn creates a PDF from that information. The frontend receives a UUID in response, which it then uses in the download URL to download the generated PDF.
This works wonderfully in Firefox and Safari, but is blocked by Internet Explorer 8's protection against scripted downloads. Telling IE8 to download the file via the spawned information bar forces a reload of the page, which blanks out all of the entered user content.
A single onMouseUp event on a button-esque element triggers the generation of the XML to send, sends the XML, gets its response, and then initiates the download by setting the URL in the window.location object. Separating the download out into a different button (having one generate and send the XML and fetch the UUID, and the other only initiate the download using the URL made from the UUID) bypasses the information bar, but ruins the simplicity and intuitiveness of the interface.
Here are the relevant javascript functions:
function sendXml() {
    var documentXml = generateDocumentXml();
    var percentEncodedDocumentXml = escape(documentXml);
    var url = "generate?document=" + percentEncodedDocumentXml;
    $.ajax({
        url: url,
        type: "GET",
        dataType: "xml",
        success: function (xml) {
            var uuid = $(xml).find('uuid').text();
            getPdf(uuid);
        },
        error: function (xhr) {
            alert("There was an error creating your PDF template");
        }
    });
}
function getPdf(uuid) {
    var url = "generate?get-pdf=" + uuid;
    window.location = url;
}
I'm fishing for suggestions about how best to handle this issue. My first preference would be for the information bar not to interfere at all, but minimizing its harm would be a dramatic improvement over the current situation. If it did not reload and wipe the frontend interface, and actually proceeded to download the file when the user chooses "Download File..." via the information bar's menu, that would help.
I tested it, and the reason the bar occurs seems to be that there is no direct relation between the user action (the mouse event) and the loading of the URL (presumably a PDF file).
This workaround will solve the issue:
Create an iframe (it may be hidden) inside the document and use
window.open(url, 'nameAttributeOfTheIframe')
...to load the PDF. The bar still occurs, but if the user chooses to download, only the iframe reloads while the user content (the form data) in the parent document remains, because the bar belongs to the iframe, not to the parent document.
Be sure to send an attachment header with the PDF too, to prevent it from being shown inside the browser (if the browser is able to display it), because with a hidden iframe the user cannot see what is loaded there.
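For example, the response serving the PDF might include headers like these (the filename is an illustrative assumption):
Content-Type: application/pdf
Content-Disposition: attachment; filename="generated.pdf"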
<iframe name="nameAttributeOfTheIframe" style="display:none"></iframe>
<input type="button" value="click here" onclick="f1()"/>
<input value="default value">
<script type="text/javascript">
function f1() {
    // Simulate a delayed download
    setTimeout(f2, 1000);
}
function f2() {
    window.open('http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf', 'nameAttributeOfTheIframe');
}
document.getElementsByTagName('input')[1].value = 'this is modified value, should remain';
</script>
