I wanted to ask if there is any alternative to JavaScript/AJAX.
My goal is to have dynamic content without reloading the page. My problem with JavaScript, Flash or any other plug-in is that the user can disable them.
I already did some research and found Google Dart, but it is implemented through JavaScript (it compiles to it), so it doesn't help.
TL;DR - I want an alternative to JavaScript/AJAX which can't be disabled, so that every user will see the same web page and nobody is at a disadvantage from disabled plug-ins.
There is nothing like what you're describing that a user cannot disable. Nor should there be. Users should be the ultimate arbiters of what runs on their machines.
JavaScript and Ajax are your most broadly supported solution. Yes, users can disable JavaScript, but globally fewer than 2% do, and it's easy to detect that they have and present a non-JavaScript version of your page (or a message saying your page isn't accessible without it). Also, note that JavaScript is not a plug-in for web browsers; all popular browsers (and most niche browsers) support it natively.
Flash would be your next stop, but despite the Flash plug-in having great penetration there are more users without Flash than without JavaScript (anyone using an iPhone or iPad, for instance). Also, since Flash has been used so heavily for irritating advertising, a lot of people install Flash blockers that prevent the Flash app from running by default, requiring them to click on it to run it. (And of course Flash is closed and proprietary.)
There's also Silverlight from Microsoft (also a plug-in) and the open-source version Moonlight, but there are a lot more people without Silverlight/Moonlight than without Flash.
At the end of the day, you need code running on the end-user's computer, which means they control whether that code is allowed to run — by enabling/disabling JavaScript, by installing or not installing Flash (and using or not using Flash blockers, since it's used for so much irritating advertising), etc.
There is no alternative to "client side programming" for doing "client side actions". Every option that exists (JS, Flash, Shockwave, Silverlight, Unity, Dart, etc.) can also be disabled.
The purpose of this is to let users control every data request themselves and to protect them from security flaws in JS or third-party plug-ins.
JavaScript is not meant to show page content to the user. For that you have HTML.
It's not even meant to style the page. There is CSS for that.
With HTML and CSS the page content can be seen by search engines and, thanks to CSS, by people using different devices and browsing methods, even impaired users.
JavaScript is meant to enhance the functionality of a web page by allowing smoother navigation for the user. It should not be used to show content that is impossible to see when JS is disabled.
If using AJAX, make sure that any content loaded with AJAX is also accessible through normal links for users who have JS disabled.
First develop your pages without thinking about JavaScript or other scripting/plugins technologies. Let your pages be fully navigable for every user and every browser.
Then, use JavaScript to enhance the site navigation and give users with JS enabled the best user experience possible.
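As a rough sketch of that approach (the element IDs and URL below are invented for the example), here is a plain link that works on its own and is enhanced with an AJAX load when scripting is available:

    // Assumed markup (hypothetical):
    //   <a id="news-link" href="/news.html">Latest news</a>
    //   <div id="content">...</div>
    // Without JavaScript the link simply navigates to /news.html;
    // with JavaScript we intercept the click and load the same URL via AJAX.
    var link = document.getElementById('news-link');
    if (link) {
      link.onclick = function (e) {
        e = e || window.event;
        if (e.preventDefault) { e.preventDefault(); } else { e.returnValue = false; }

        var xhr = new XMLHttpRequest();
        xhr.open('GET', link.href, true);
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById('content').innerHTML = xhr.responseText;
          }
        };
        xhr.send(null);
      };
    }

Users and crawlers without JS still reach /news.html directly; everyone else gets the smoother in-page update.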
If you look at the image, there are shared web pages between the 3 applications: Web, Mobile and Touch Browser. I need to report the web pages uniquely per application, so if web page A is loaded, it will need to load different JS libraries for each type of application. Currently a JS bootstrap loader file handles the logic to load the proper JS libraries, but that is a 1-to-1 relationship. I now need a way to determine which application is loading the web page and load the appropriate libraries (DTM libraries, but that's irrelevant). The solution would have to reside in the JS bootstrap loader file logic. I'm looking into using the navigator object to sniff out which type of application is requesting the web page, but I'm not sure how feasible that is. Maybe feature detection is another way, but I'm not sure how/if this would work with the applications. Any ideas?
You might want to take a look at this article: http://www.stucox.com/blog/you-cant-detect-a-touchscreen/
As far as I'm aware, there are no good/reliable ways to specifically detect touch devices, and even if you could, how would you differentiate between a phone and an HP Spectre laptop, for example?
Sniffing user agents can get you some of the results you want, but it's considered a suboptimal solution.
Cloudflare gives you the ability to serve different sub-domains for mobile devices, but I'm not sure what criteria they use.
Screen width is a reliable way of detecting whether your content will fit or not.
You can do this in JavaScript by detecting the screen width and redirecting, but loading the page only to redirect would be quite inefficient, so you may want to look into server-side detection: https://webplatform.github.io/docs/concepts/Detecting_device_and_browser/
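If you do keep the decision in the bootstrap loader, a minimal sketch might look like the following (the application-type names, bundle URLs and width threshold are all made up, and both checks are only heuristics; user-agent sniffing in particular is unreliable):

    // Hypothetical bootstrap-loader logic: pick a library bundle per application type.
    function detectAppType() {
      var ua = navigator.userAgent || '';
      if (/Mobi|Android|iPhone|iPad/i.test(ua)) {
        return 'mobile';
      }
      // Fall back to viewport width as a rough "touch / small screen" signal.
      var width = window.innerWidth || document.documentElement.clientWidth;
      return width < 1024 ? 'touch' : 'web';
    }

    function loadScript(src) {
      var s = document.createElement('script');
      s.src = src;
      s.async = false; // keep the bundle's execution order
      document.getElementsByTagName('head')[0].appendChild(s);
    }

    var bundles = {                      // placeholder URLs
      web:    ['/js/dtm-web.js'],
      mobile: ['/js/dtm-mobile.js'],
      touch:  ['/js/dtm-touch.js']
    };

    var type = detectAppType();
    for (var i = 0; i < bundles[type].length; i++) {
      loadScript(bundles[type][i]);
    }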
Hope that helps somewhat.
I need to embed a custom widget into my application users' websites, and the initial thought is that an iframe would make it SO much simpler for a few reasons:
I can use the custom framework built for the application, which provides a ton of pre-built code and features that I'll be using in the widget and thus wouldn't have to recreate. Cross-browser event handlers and prototyped objects are just a few of many examples.
I don't have to worry about clashing with CSS, and more specifically won't have to use inline CSS for every DOM element I create.
Cross-domain requests are not a problem because I've already built the functionality for everything that needs to be communicated using JSONP, so don't let that be a downside for an embedded DOM widget.
The idea as of right now is to write a quick JavaScript snippet that is essentially a button and loads a transparent iframe above the window that is full screen. This will give me control of the entire view and also allow me to write the widget just like I would any other part of the parent application: maintaining the same JSON communication, using the same styles, using the framework, etc. Clicking on any part of the iframe that is not part of the widget (which will likely load centered in the middle of the screen, and be full screen on a mobile device) will dismiss the widget and return the user to their normal navigation within the website. However, they can interact with my widget while it's up, just as if it were an embedded portion of the website that happened to load a JavaScript popup.
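To make that concrete, a minimal sketch of such a loader snippet could look like this (the widget URL is a placeholder, and the dismiss mechanism is only described in a comment):

    (function () {
      var WIDGET_URL = 'https://widgets.example.com/widget.html'; // placeholder

      function openWidget() {
        var frame = document.createElement('iframe');
        frame.src = WIDGET_URL;
        frame.frameBorder = '0';
        frame.setAttribute('allowTransparency', 'true');
        frame.style.cssText =
          'position:fixed;top:0;left:0;width:100%;height:100%;' +
          'border:0;background:transparent;z-index:2147483647;';
        document.body.appendChild(frame);
        // The page inside the iframe would listen for clicks on its transparent
        // backdrop and ask the parent to remove the frame (postMessage where
        // available, or an older workaround for IE8-era browsers).
      }

      var button = document.createElement('button');
      button.innerHTML = 'Open widget';
      button.onclick = openWidget;
      document.body.appendChild(button);
    })();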
The widget itself communicates fairly heavily with the server. There are a few requests on load, and then as the user interacts it typically sends a request, changes the view and then waits for another response.
All that being said, is this a bad or otherwise unprofessional idea? Should I put the extra work into embedding the widget directly into the host's DOM and rewrite all the convenient framework code? I don't have a problem doing that, but if an iframe is a more appropriate choice, then I'd rather do that. My only limitation is that I need to support down to IE8; the widget needs to be viewable on both desktop and mobile, and it cannot be obtrusive to the client's website.
Some good reading from another post. Although closed, it poses some decent arguments in favor of iframes.
Why developers hate iframes?
A few points that resonate well:
Gmail, live.com, Facebook and MANY other major websites on the internet take advantage of iframes, and in situations where they are used properly... well, that is what they were made for.
In situations especially like mine, it's considered better practice to prevent any possible compatibility issues with a remote website I have no control over. Would I rather use an iframe, or destroy a person's website? Sure, I can take all possible precautions, and the likelihood of actually causing problems on another person's site is slim, but with an iframe it's impossible. Hence why they were created in the first place.
SEO in my situation does not matter. I am simply extending a JavaScript application onto remote users' websites, not providing searchable content.
For the above reasons, I think that taking the easier approach is actually the more professional option. Iframes in general suffer from a bit of a bad reputation, I think because of their overuse in ugly ways in the past. But then again, tables have the same reputation, and yet when data is best displayed in a tabular fashion it makes sense to use... wait for it... tables (though we often do this behind CSS values such as display: table-cell).
For now iframes get my vote. I'll update this answer if a little further down the development road on this project, I change my mind.
I am trying to work out if I can alter the functionality of a website, preferably through VBA (Access) or any other way that I can centrally manage. What I am trying to achieve is that, depending on permissions, users log onto a website and the website is then changed on the fly to stop them using certain normal functions of the site. For example, some users have access to a submit button while others do not.
I have seen that you can use VBA to parse websites and auto-logon. I'm just not sure if it's capable of doing any local scripting like Greasemonkey does.
Maybe I am looking at this wrong and can achieve this at the firewall level instead of running website scripts.
Any ideas?
You should not manage website permissions using a client-side technology like JavaScript. Users can easily either just disable JavaScript/VBScript or tamper with the page.
The best approach is to manage permissions by emitting the HTML from a server-side scripting language such as ASP.Net or PHP.
ASP.Net has built-in, generally adequate support for membership, roles and permissions that would meet this need.
http://msdn.microsoft.com/en-us/library/yh26yfzy(v=vs.100).aspx
If that is not an option for whatever reason, and you can accept the risk of someone tampering with the permissions you setup, you can certainly use something like jQuery to hide portions of an HTML document that a user should have no access to. You can accomplish the same thing using JavaScript without jQuery, but I would suggest jQuery because it abstracts away many of the cross-browser issues.
If you do that, hide everything by default and then show selectively based on permissions. That way, the simplest method of just disabling JavaScript will not reveal anything special (though it is still quite easy to hack).
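If you go down that path, a minimal sketch might look like this (the permission names, selectors and endpoint are all made up, and the real authorization still has to happen on the server):

    // Assumes everything privileged is hidden by default in CSS,
    // e.g. #submit-button, .admin-only { display: none; }
    $(function () {
      $.getJSON('/api/current-user-permissions', function (perms) {
        if (perms.canSubmit) {
          $('#submit-button').show();
        }
        if (perms.isAdmin) {
          $('.admin-only').show();
        }
      });
    });
    // This only hides the UI; the server must still reject unauthorized
    // submissions, since anything client-side can be tampered with.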
I have a project where I need to create a desktop app that acts like a browser; however, I need to be able to execute my own CSS and JavaScript on ANY page that a user goes to. The goal is to have a user browse to a website, click on certain elements of the site and quickly pull information about that element (div ID, classes, etc.), then add some JavaScript inside the browser that will add some new functionality to the page (though only in the browser). I'll also need to sync this desktop app with both an internal database and a remote database online.
I'm a JavaScript developer, so I really want to be able to use jQuery to help build out the interaction with the site. I've played around with Adobe AIR and was able to build a browser using Flex, but then I wasn't able to use jQuery to manipulate the pages (maybe there's a way, but I don't know Flex at all, and I couldn't figure it out and didn't want to waste too much time only to discover that I couldn't do it). I then tried to create an HTML AIR app and have the browser essentially be an iframe. However, cross-domain scripting became an issue, and I don't think that the iframe sandbox solution is what I'm after, because it looks like I would need to create a local version of each page that is browsed to and then alter that local version.
So, I'm back to square one and am trying to find what technology I should be looking at where I can add my own JavaScript and CSS to a page within a browser. I'm familiar with JavaScript and PHP, but this will be my first desktop app. I'm willing to learn a new technology, though I obviously want to stick to what I'm most familiar with. I've thought about building a Firefox plugin, but I'm hoping to sell this app, and I think a standalone app would allow for a higher price tag.
Try Adobe AIR. It's cross-platform, can create "real" apps, can load and process HTML and CSS (it has WebKit built in), and allows applications to be written in HTML/CSS/JavaScript. If you're looking for something more freedom-loving, check out Titanium, which is a similar framework.
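As a very rough sketch of how the HTML side of AIR can load a remote page and script into it (API names are quoted from memory; treat the exact calls as assumptions and verify them against the AIR documentation):

    // HTML-based AIR app with AIRAliases.js loaded.
    var options = new air.NativeWindowInitOptions();
    var bounds = new air.Rectangle(100, 100, 1024, 768);
    var loader = air.HTMLLoader.createRootWindow(true, options, true, bounds);

    loader.addEventListener(air.Event.COMPLETE, function () {
      // Application-sandbox content is allowed to script the loaded remote page.
      var doc = loader.window.document;

      var style = doc.createElement('style');
      style.appendChild(doc.createTextNode('.highlight { outline: 2px solid red; }'));
      doc.getElementsByTagName('head')[0].appendChild(style);

      doc.onclick = function (e) {
        var el = e.target;
        air.trace(el.tagName + ' #' + el.id + ' .' + el.className);
      };
    });

    loader.load(new air.URLRequest('http://example.com/'));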
Today a lot of content on the Internet is generated using JavaScript (specifically by background AJAX calls). I was wondering how web crawlers like Google handle it. Are they aware of JavaScript? Do they have a built-in JavaScript engine? Or do they simply ignore all JavaScript-generated content in the page (I guess that's quite unlikely)? Do people use specific techniques to get content indexed which would otherwise only be available to a normal Internet user through background AJAX requests?
JavaScript is handled by both Bing and Google crawlers. Yahoo uses the Bing crawler data, so it should be handled as well. I didn't look into other search engines, so if you care about them, you should look them up.
Bing published guidance in March 2014 as to how to create JavaScript-based websites that work with their crawler (mostly related to pushState) that are good practices in general:
Avoid creating broken links with pushState
Avoid creating two different links that link to the same content with pushState
Avoid cloaking. (Here's an article Bing published about their cloaking detection in 2007)
Support browsers (and crawlers) that can't handle pushState (a rough sketch of this follows the list).
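A minimal sketch of pushState-enhanced navigation that still works without it might look like this (the data-ajax attribute and element IDs are invented for the example):

    // Links marked data-ajax point at real URLs the crawler can follow.
    var links = document.querySelectorAll('a[data-ajax]');

    for (var i = 0; i < links.length; i++) {
      links[i].addEventListener('click', function (e) {
        if (!window.history || !history.pushState) {
          return; // older browsers and crawlers just follow the link
        }
        e.preventDefault();
        var url = this.getAttribute('href'); // a real, crawlable URL

        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.onload = function () {
          if (xhr.status === 200) {
            document.getElementById('content').innerHTML = xhr.responseText;
            history.pushState({ url: url }, '', url); // address bar shows the real URL
          }
        };
        xhr.send();
      });
    }

    // Handle Back/Forward so pushState URLs aren't broken.
    window.addEventListener('popstate', function () {
      location.reload(); // simplest possible handling for the sketch
    });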
Google later published guidance in May 2014 as to how to create JavaScript-based websites that work with their crawler, and their recommendations are worth following as well:
Don't block the JavaScript (and CSS) in the robots.txt file.
Make sure you can handle the load of the crawlers.
It's a good idea to support browsers and crawlers that can't handle (or users and organizations that won't allow) JavaScript.
Tricky JavaScript that relies on arcane or specific features of the language might not work with the crawlers.
If your JavaScript removes content from the page, it might not get indexed.
Most of them don't handle Javascript in any way. (At least, all the major search engines' crawlers don't.)
This is why it's still important to have your site gracefully handle navigation without Javascript.
I have tested this by putting pages on my site only reachable by Javascript and then observing their presence in search indexes.
Pages on my site which were reachable only by Javascript were subsequently indexed by Google.
The content was reached through JavaScript with a 'classic' technique of constructing a URL and setting window.location accordingly.
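For clarity, the kind of 'classic' construction meant here looks something like this (the names and URL scheme are illustrative):

    // Build a URL in JavaScript and navigate to it.
    function goToArticle(articleId) {
      window.location = '/articles/' + articleId + '.html';
    }

    // e.g. wired to a select box:
    // <select onchange="goToArticle(this.options[this.selectedIndex].value)">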
Precisely what Ben S said. And anyone accessing your site with Lynx won't execute JavaScript either. If your site is intended for general public use, it should generally be usable without JavaScript.
Also, related: if there are pages that you would want a search engine to find, and which would normally arise only from JavaScript, you might consider generating static versions of them, reachable by a crawlable site map, where these static pages use JavaScript to load the current version when hit by a JavaScript-enabled browser (in case a human with a browser follows your site map). The search engine will see the static form of the page, and can index it.
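One hedged sketch of that static-page trick, assuming a made-up /static/ vs /app/ URL scheme:

    // On the static, crawlable copy of a page: if a JavaScript-capable browser
    // loads it, send the visitor on to the live, dynamic version instead.
    (function () {
      var liveUrl = window.location.pathname.replace('/static/', '/app/');
      if (liveUrl !== window.location.pathname) {
        // replace() keeps the static page out of the browser history
        window.location.replace(liveUrl);
      }
    })();

The crawler, which never runs the script, indexes the static content; a human with JavaScript ends up on the live page.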
Crawlers don't parse JavaScript to find out what it does.
They may be built to recognise some classic snippets like onchange="window.location.href=this.options[this.selectedIndex].value;" or onclick="window.location.href='blah.html';", but they don't bother with things like content fetched using AJAX. At least not yet, and content fetched like that will always be secondary anyway.
So, JavaScript should be used only for additional functionality. The main content that you want the crawlers to find should still be plain text in the page, with regular links that the crawlers can easily follow.
Crawlers can handle JavaScript or AJAX calls if they are built on frameworks like HtmlUnit or Selenium.
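For example, a crawler that renders pages with the Node bindings for Selenium (the selenium-webdriver npm package; the URL and element ID are placeholders, and a local browser/driver is assumed to be installed) might look roughly like this:

    var webdriver = require('selenium-webdriver');

    var driver = new webdriver.Builder().forBrowser('firefox').build();

    driver.get('https://example.com/ajax-heavy-page')
      .then(function () {
        // Wait until the JS-injected element exists before reading the page.
        return driver.wait(webdriver.until.elementLocated(webdriver.By.id('content')), 10000);
      })
      .then(function () {
        return driver.getPageSource(); // the HTML after scripts have run
      })
      .then(function (html) {
        console.log(html.length + ' bytes of rendered HTML');
        return driver.quit();
      });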