javascript find replace

I have a question about optimization, but more on the browser/client side.
I am catering to a few societies that need about 3 different languages. So I'm just putting the user's language type in the PHP session, and swapping out the text for selected areas on each page they navigate to. So, really nothing complicated.
However, I'm toying with the idea of letting javascript do the find/replace of the selected texts on each page.
There are a few ways to skin this cat, and I've done them all, and they work. However, I do have a few hundred pages, and many words to replace with the correct language text.
If I were to go the Javascript route, does anyone have an opposing view to this? And if so, why? I'm interested in letting the user's browser do the work, rather than my servers constantly finding and replacing, or creating new CONSTANTS for each language specific situation.
I'm worried about their browsers getting slower. But that could be a very small problem.
For those individuals who love to get specifics, here's what I would do with javascript.
I would load a languages.js file with all appropriate word translations for any language I implement. Instead of running a huge find/replace on each page load, I'd localize the find/replace to the specific page, or possibly mark elements with an attribute that my script would look up in the DOM, and perform a find/replace on those alone.
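To make that concrete, here is a minimal sketch of the idea - the dictionary layout, the data-i18n attribute, and the function name are all invented for illustration:

// languages.js - one lookup table per language (hypothetical structure)
var translations = {
    fr: { "Welcome": "Bienvenue", "Sign out": "Déconnexion" },
    de: { "Welcome": "Willkommen", "Sign out": "Abmelden" }
};

// Touch only elements flagged with a data-i18n attribute instead of
// running a find/replace over the whole document.
function translatePage(lang) {
    var dict = translations[lang];
    if (!dict) return; // unknown language: keep the default text
    var nodes = document.querySelectorAll("[data-i18n]");
    for (var i = 0; i < nodes.length; i++) {
        var key = nodes[i].getAttribute("data-i18n");
        if (dict[key]) nodes[i].textContent = dict[key];
    }
}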
I'm open to better ideas.
Also, for those people who find "over-optimization" useless or "over-doing-it", please don't mention anything. This is for fun and not a critical decision item.
thanks guys!

Well, on the pro side, yes, you are offloading some of the work to the client, but I don't think that's going to make any real difference. You're probably talking about a tiny percentage of the overall performance of the site. The only way to know of course is to test it.
On the con side, you'll be increasing the bandwidth it takes to load your site, since the user will need to load the page plus the language file. It will be cached, if you set it up right, so that's probably not a huge concern either.
Another con is that this will make your site depend on javascript. A non-scripting visitor won't get the translation, and that includes search engines. Whether that matters to you depends on the nature of your site, but in general, that's a pretty big negative.
You'd also have to watch out for "flashing" of the non-localized language. It'd look horrible if the page loaded and then a split second later the language changed to something else. If you are doing the swapping from the DOM ready event ($(function() {}) in jQuery, for example), it's probably too late. You could do it from a script you put at the bottom of the page, and that'd probably be OK, but even then, it may depend on the browser and the structure of the markup, not to mention the user's bandwidth and whether the server sends the content in chunks.
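For instance, the bottom-of-the-page variant might look like this - translatePage() is a stand-in for whatever swapping routine is used, and userLang is a hypothetical value echoed from the PHP session; even this ordering is not guaranteed to beat the first paint in every browser:

<body>
    <h1 data-i18n="Welcome">Welcome</h1>
    <!-- ... rest of the page ... -->

    <!-- Last element in <body>: runs as soon as the markup above is
         parsed, before a DOM-ready handler would fire. -->
    <script>
        translatePage(userLang);
    </script>
</body>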
I think it comes down to what fits your needs best. Sorry that's not much of an answer, but it's an accurate one I think :)

I agree it is important to keep your server from overloading. I would solve the problem in one of two ways:
Use your suggested javascript find and replace; while the javascript is working, have a loading.gif spinning with a message to the effect of 'translating' nearby, to explain to users why they must wait. If you are doing a word-by-word translation, you have to be careful about causing a browser like IE to moan about having to do work ('The page has become unresponsive'); I would suggest running something like setInterval(translate, 1), where translate translates a set number of words at a time, so the browser doesn't think your script is stuck in an endless loop. A sketch follows.
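Something along these lines - a minimal, hypothetical sketch (translateNode and hideSpinner stand in for your own routines; setTimeout is used rather than setInterval so batches never overlap):

var queue = []; // fill with the nodes you plan to translate

function translateChunk() {
    // Translate a small batch, then yield so the browser can repaint
    // and never shows the "unresponsive script" dialog.
    for (var i = 0; i < 50 && queue.length > 0; i++) {
        translateNode(queue.pop()); // translateNode: your per-node routine
    }
    if (queue.length > 0) {
        setTimeout(translateChunk, 1); // reschedule instead of looping on
    } else {
        hideSpinner(); // hypothetical: remove the 'translating' gif
    }
}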
Provided the same sections are translated for all foreign visitors, you could make a PHP script that generates new, translated pages next to the originals. The translated pages could include the translator.php script to do a quick check to see whether the original page has changed, and decide whether or not to make a new translated page. This would mean not translating a page every time it needs to be viewed in a foreign language, only a little check to see whether the original has changed - putting less load on your server and none on the client-side browser.
Personally I would implement option 2 if possible, to be friendlier to low-powered browsers (such as mobile devices), but in practice either would do, and it's an interesting problem.

Should Javascript be used to modify HTML?

I just recently started learning JavaScript and have a question regarding 'proper use'. I'm still trying to identify the role of JavaScript within a website, and I'm curious whether or not it would be considered OK to have JavaScript modify the HTML of a web page.
Let's say I have a panel on a web page. This panel houses a list. I would like users to be prompted to add items to this list.
I was thinking that it would be possible to use Javascript to generate list items to add to the list. However, this would be modifying the actual number of HTML elements on the web page... For some reason, this just seems 'hacky'. When I think of HTML, I think of a static structure that should come to life with CSS and Javascript.
So my question: is it considered okay to have Javascript modify the HTML of a web page? What about the case of adding items to a list?
Thank you!
JavaScript is a programming language designed so it can modify the document that is being displayed (the DOM); the actual HTML is never touched.
JavaScript has a place on a website, and modifying the document/DOM is perfectly acceptable; without it, JavaScript would be almost useless. CSS is great for certain tasks, but you can't do everything in CSS, though CSS3 is coming pretty close for many tasks.
Rewriting the entire DOM IS bad practice, but using it to shift an element's position based on an action, or creating a popup overlay is perfectly acceptable.
Remember the golden rule:
Modify as little as possible to accomplish the goal.
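Applied to the question's list scenario, that rule amounts to creating and appending a single node (the element ids here are invented for illustration):

var list = document.getElementById("items");     // the panel's list
var input = document.getElementById("new-item"); // the prompt's text field

function addItem() {
    var li = document.createElement("li");
    li.textContent = input.value; // create exactly one new node...
    list.appendChild(li);         // ...and touch only the list
}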
What matters is the user's experience of the (HTML) document. The representation of "the document" can change by utilising a language like javascript that "manipulates the DOM" - and the DOM is like an instance of the HTML document, or "document session" if you will.
So in a way the HTML is touched. It is positively manhandled by JavaScript - indirectly and in a non-persistent way. But if you want to be pedantic... we say it isn't, and leave half the readers confused.
When to use JavaScript and when not to? It's swings and roundabouts. You need to learn (mostly from experience) when one particular tool suits the situation. It usually boils down to some core considerations:
HTML is for markup. Structure. Meaning.
CSS is for style, feel, appearance.
Javascript is for those situations where none of the above work.
And I'm neglecting to mention server-side processing with the obvious disclaimer that all processing that ought to be done in privacy is done on the server (with a programming language like PHP or Ruby for example).
Sometimes you get the grey area in-between where you can do something either way. Those situations you may ask yourself a question like... would it be processed quicker if the client (user's computer) processes it, or the server... and that's where experience comes in.
It depends on the case whether you should hand-craft the HTML or let JS build it.
If you have a static HTML page, just write your HTML and hand-craft the DOM. There is no need for JS to get a hand in here.
If you have a semi-static HTML page where user actions change part of it - then get JS to do the changing part.
If you have a highly dynamic HTML page (like a single-page app) - you should let JS render most of the HTML.
Using plain JS alone, however, is not the best approach in this age of rapid JS evolution. Learn it - but also learn to use good libraries and frameworks, which can get you on track very fast.
I recommend learning jQuery and Angular 2, which will make you feel like you are learning a superset of JS straight away.
Short disclaimer: JavaScript may modify DOM content in the browser, but it never changes the original HTML served from the web server.
The modern Web is unthinkable without JavaScript. JS allows you to make static HTML interactive and improve the user experience (UX). Like any sharp tool, JS can produce a masterpiece out of a nearly dead page - or cut the throat of blooming static content.
Everything is up to the developer.
When not to use JS
Do not use JS to deliver evergreen content to the page. Web bots (crawlers) don't run JS, and your message "I come to this world to testify to the truth" may end up "a voice crying out in the desert": non-indexed and thus unread.
When JS is a must
Every time your page visitor does something, the page should respond with a proper action (using JS or, if possible, just CSS).
This is especially important when a prospect fills in a form. To err is human, so the developer, via JS, should help the visitor make wrong things right. In many cases this is possible without a request to the server, and in even more cases the answer should come from the server. JS is your best friend in either case.
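A minimal sketch of that idea - catching an obvious mistake before anything is sent (the form fields and rule are invented, and the server must still validate on its own):

var form = document.getElementById("signup"); // hypothetical form id

form.onsubmit = function () {
    var email = document.getElementById("email").value;
    // A quick client-side check; the server must still validate too.
    if (email.indexOf("@") === -1) {
        alert("Please enter a valid e-mail address.");
        return false; // block the submit so the visitor can correct it
    }
    return true; // all good: let the request go to the server
};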
JavaScript never lives alone; the browser object is its trusty ally. Most modern browsers support XMLHttpRequest, a.k.a. AJAX (you may remember the name Ajax from ancient Greek history), which communicates with the server without reloading the page.
The idea of AJAX, which stands for Asynchronous JavaScript And XML, is to send a request and receive a response from the server asynchronously, without blocking the page in the browser.
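The classic pattern looks roughly like this (the URL and element id are placeholders):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/check-username?name=alex", true); // true = asynchronous
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // The response has arrived; update the page without reloading it.
        document.getElementById("status").textContent = xhr.responseText;
    }
};
xhr.send();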
Native JavaScript may be difficult for many beginner developers to understand. To make things easier there are a lot of JS libraries, with jQuery being the best known.
Returning to the OP's question, Should Javascript be used to modify HTML?
The answer is: It Depends.

time calculating script in javascript

Is there a way to know the total time one spends on a page opened in a browser using a python script? For example if one uses gmail, and is currently using it (i.e the page is non idle) can we know the total time for which the page was active?
Further explaining what I intend to do:
By active I mean that I am actually using the page, be it reading it, doing some typing or doing some mouse work.
It would be great if there were somehow a way to exclude the time spent in breaks, perhaps by allowing for some error?
I am not sure whether javascript would be apt for it, and am open to suggestions!
Short answer: I don't think so.
This isn't really an issue of whether Python can do it, it's to do with whether your browser exposes that information in a way that an external program/script can query.
There's also the issue of how you define/determine which page is "active". Is it sufficient that the browser window is currently the active window and the page is on the selected tab? Or would you expect some kind of interaction? What if I'm reading a long text and so am not interacting with the page for a period of time - does that still count as active?
Given that it's hard to detect activity even from the website's point of view, doing so from a third-party application/script, be it written in Python or any other language, would be even trickier.
If you wish to explore this further, I'd say your best bet would be to write a browser extension/plugin. In fact, there may already be existing ones that may meet your needs.
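If you do go the extension route, a content script could approximate "active" time by combining the Page Visibility API with input events - a rough sketch, with an arbitrary 30-second idle cutoff:

var activeMs = 0;
var lastEvent = Date.now();

function markActivity() { lastEvent = Date.now(); }
["mousemove", "keydown", "scroll"].forEach(function (ev) {
    document.addEventListener(ev, markActivity, true);
});

// Once a second, count the second as "active" only if the tab is
// visible and the user did something within the last 30 seconds.
setInterval(function () {
    if (!document.hidden && Date.now() - lastEvent < 30000) {
        activeMs += 1000;
    }
}, 1000);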
That sounds highly browser and platform-specific.
So, in general "no".
If the browser has some kind of interface or "hook" support then it might be possible.
This isn't really Python specific.
The best I can come up with is to leave something like Google Analytics to do all that sort of stuff for you, then, using your choice of language, get the data you need from the API.
Of course that might not be appropriate, and the rules for how long it deems the page viewed, etc... may not work, but at least it'll be consistent.

Website doesn't work with Javascript turned off

So head to www.jabsy.com, with Javascript turned off.
Basically, I use some JQuery UI Dialogs, I use Javascript for all the bindings on the page...I pretty much use it for everything. Is that really a bad thing though?
Nothing really works without Javascript. Not even the Google Maps API.
Should I go out of my way to try and make the entire page work without Javascript? Is that even possible with my site? I wouldn't even know where to begin as I use Javascript for everything, so could I get some pointers? How many users actually turn off their Javascript these days?
Would it help to let the user know if they have Javascript turned off and make them turn it on before accessing it and provide them with directions how?
Yes, if your site requires JavaScript you need to let the user know that it is required.
For example:
<noscript>
  <div>
    You need to have JavaScript enabled to use this site.
  </div>
</noscript>
You can provide more description as appropriate. A savvy user that sees this text is going to be able to then go in and turn on JavaScript for your site. A non-technical user might have trouble, but I would think most of them would be running with JavaScript enabled anyway (?).
According to data collected in 2007, about 3% of users in the US have JavaScript off. I'm sure that number is lower today.
It really depends on how critical the sections of your page that require JavaScript are. If there is a form that is mission critical, but controlled completely by JavaScript, you probably want to engineer a way for that form to do the same thing with JS on and off.
However, if you have animated snowflakes on your background (for the love of God, don't really do this), it's not going to negatively affect someone visiting your site with JavaScript off.
Really, it all comes down to how important the information or actions are to your site. Turn off JavaScript and note all the things you can't do that are absolutely vital, then make them work.
Keep in mind there are several audiences that will not render your JavaScript:
Screen readers/accessible browsers
Console-based browsers (Text based browsers)
Search Engines (Google)
Your specific service (location-based messages) will be way too cumbersome to use without JavaScript (and its content is dynamic). Therefore, I see no problem requiring it. You should, however, point out that JavaScript is necessary to use your site (Preferably at the top, in really large letters). You can do that by including the alternative no-JavaScript content in noscript tags, i.e.
<noscript>
  <div style="font-size: 200%;">You need JavaScript!</div>
</noscript>
However, most websites are content-based, like a company's homepage, stackoverflow or Wikipedia. These websites should be usable without JavaScript. Nowadays, even smartphones have excellent JavaScript support, but Kindle and regular phones are still too slow for JavaScript.
There is a line of argument that says sites should work without JS. Personally, I think that is tosh, unless you have a clientelle for whom this is liable to be an issue. JS is a reasonable thing to expect for many sites.
However, it is polite to let people know that this is a requirement, and inform them rather than just letting it not work. If your site is heavily JS dependent, then you may have made some mistaken design decisions, but it is probably not worth re-working it. If you monitor the number of people who get the "you need js" message, you will identify if it is proving a turn-off. I suspect it will not be an issue.
So build based on what you need, BUT tell people if they need to have things set.
You can use the <noscript><!-- html here if no Javascript --></noscript> tags and place content to be rendered in between if javascript is turned off.
I don't think there are many sites that will work these days without it. It's more or less mandatory.

Just In General: JS Only Vs Page-Based Web Apps

When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than one HTML page that does everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here, and some of my own research, if you wanted to do a single-page, fully JS-Powered site, some useful tools would seem to include:
JQuery Plug Ins:
JQuery History:
http://balupton.com/projects/jquery-history
JQuery Address:
http://plugins.jquery.com/project/jquery-address
JQuery Pagination:
http://plugins.jquery.com/project/pagination
Frameworks:
Sproutcore
http://www.sproutcore.com/
Cappucino
http://cappuccino.org/
Possibly, JMVC:
http://www.javascriptmvc.com/
page based applications provide:
ability to work on any browser or device
simpler programming model
they also provide the following (although these are solvable by many js frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript is going to make it complicated for search engines to crawl all the content of your website, and thus not index it fully. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure whether all search engines support this yet. On top of that, it's a little bit more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages, or you decide to use some sort of framework or CMS to generate them for you, is that the different sections of your website have URL's that are unique to them. E.g., an about section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinckley on the subject of Jodie Foster.
The idea really works. It produces graceful, responsive sites at very low operational cost. My estimate is that the cost for bandwidth, CPU, and such drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor (the URL fragment). Users simply expect that the URL represents the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you, as the sketch below illustrates.
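In plain JavaScript the idea boils down to something like this (the Address plug-in does roughly this plus workarounds for older browsers; render() and the view names are invented):

// Put the current view in the URL fragment so reload, bookmark,
// back and forward all keep working.
function showView(name) {
    location.hash = name; // e.g. http://example.com/#inbox
}

// Re-render whenever the fragment changes (covers the back button).
window.onhashchange = function () {
    var view = location.hash.replace("#", "") || "home";
    render(view); // render() stands in for your own drawing code
};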
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules do not usually directly impact your layout rules, and multiple files make it easier to edit what needs to be edited without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
...it got messy.
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to live in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique I think the 'one-page' approach is a very easy way to open up some very interesting possibilities (e.g. widgets that maintain state across application sections). But it also opens up many... many potential problem areas, including...
Flooding the global namespace (if you don't already have your own... make one - see the sketch after this list)
Code organization can easily get... out of hand
Context - It's very easy to
I'm sure there are more...
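On the first of those points, a common fix is to hang everything off a single application object so only one name ever lands in the global scope - a minimal sketch (MyApp and its members are invented for illustration):

// One global object; everything else hangs off it.
var MyApp = MyApp || {};

MyApp.state = { currentView: "home" }; // shared state lives here

MyApp.popup = {
    open: function (name) {
        // create the popup window, remember it in MyApp.state, etc.
    }
};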
In short, I would urge you to stay away from JavaScript dependency, for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favour of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript.
The idea is to...
Develop a well-formatted, fully functional application w/o any JavaScript
Style it
Wrap the whole thing with JavaScript
Using Progressive Enhancement one can develop an application that delivers the best experience possible for every user.
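As a small illustration of those three steps: the form below works with a plain page load, and the script layer merely intercepts it when JavaScript is available (the ids, URL, and loadResultsWithAjax helper are all invented):

<!-- Step 1: fully functional without JavaScript -->
<form id="search" action="/search" method="get">
    <input type="text" name="q">
    <input type="submit" value="Search">
</form>

<!-- Step 3: wrap it with JavaScript where supported -->
<script>
    var form = document.getElementById("search");
    form.onsubmit = function () {
        loadResultsWithAjax(form.q.value); // hypothetical AJAX helper
        return false; // enhanced path: stay on the page
    };
</script>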
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of this boring text)
Said this, SPI != doing everything through Javascript
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very complex and problematic; the best approach is "hybrid programming" for SPI - a mix of server-centric for big state management and client-centric (a.k.a. JavaScript by hand) for special effects.
Doing everything on a single page using ajax everywhere would break the browser's history/back button functionality and be annoying to the user.
I utterly despise JS-only sites where it is not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: in that case JS not only improves the experience, it is essential. But some parts of Google Help have been JS-only, and yet it adds nothing to the experience; it is only showing static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters but I don't want anyone to get rewarded for pushing annoying flashy things in my face, if tactless advertisers go out of business I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function and you shouldn't need to reinvent the wheel by redrawing the previous page when a back button is clicked. To make matters worse going back a page shouldn't require re-rendering. I guess I am a student of If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls, no expiries, content cannot be adjusted for requested language nor location, no meaningful "page not found" nor "unavailable" responses. You could write error handling routines within your uber-page that respond to failed AJAX fetches but that is more complexity and reinvention, it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug-tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited; the JS-heavy section is private so doesn't need to worry about search engines, only repeat visitors will be using it so have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless; most content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should have been polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing them; loading external stylesheets by Javascript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-orientated application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it as it was disrupting the team's work flow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users are probably less than those with NoScript or a mobile connection it is inexcusable to ignore accessibility - and in some countries even illegal, see the "Disability Discrimination Act" (1999) and "Equality Act" (2010).
As mentioned in other answers the "Progressive Enhancement", née "Unobtrusive Javascript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle and there are times when it is valid) I look forward to implementing the aforementioned AJAX crawling and hope it becomes more standardised in future.

Do search engines process Javascript?

According to this page it would seem like they don't, in the sense that they don't actually run it, but that page is 2 years old (judging from the copyright info).
The reason I'm asking this question is because we use Javascript to replace text on our site with other more typographically sound content. We're worried that this may affect the crawlability/seo of our sites, since generally what we're replacing is headers; ie. <h1>, <h2>, etc.
Will search engine bots see our original code, or will they run the Javascript and see the replaced text?
Google now officially processes JavaScript.
In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on.
Sometimes things don't go perfectly during rendering, which may negatively impact search results for your site. Here are a few potential issues, and - where possible - how you can help prevent them from occurring:

If resources like JavaScript or CSS in separate files are blocked (say, with robots.txt) so that Googlebot can't retrieve them, our indexing systems won't be able to see your site like an average user. We recommend allowing Googlebot to retrieve JavaScript and CSS so that your content can be indexed better. This is especially important for mobile websites, where external resources like CSS and JavaScript help our algorithms understand that the pages are optimized for mobile.

If your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on our capability to render your pages. If you'd like to ensure that your pages can be rendered by Google, make sure your servers are able to handle crawl requests for resources.

It's always a good idea to have your site degrade gracefully. This will help users enjoy your content even if their browser doesn't have compatible JavaScript implementations. It will also help visitors with JavaScript disabled or off, as well as search engines that can't execute JavaScript yet.
Sometimes the JavaScript may be too complex or arcane for us to execute, in which case we can’t render the page fully and accurately.
Some JavaScript removes content from the page rather than adding, which prevents us from indexing the content.
Search engines don't process JavaScript as such.
There is some evidence that Google may have started processing inline script content in some cases, in order to catch content that is entered into the page parse queue using document.write. However, DOM methods such as you might use for font replacement are certainly not affected, and no onload code is invoked.
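To make that distinction concrete, compare the two snippets below: the first writes its text into the parse stream itself, which is the sort of content the evidence suggests may be picked up, while the second only alters the live DOM after load and would be invisible to a bot that never runs JS (the headings are invented examples):

<script>
    // Inline document.write: the text becomes part of the parsed page.
    document.write("<h1>Plain heading</h1>");
</script>

<script>
    // DOM replacement after load: only the live document changes; a bot
    // that doesn't execute JS still sees the original markup.
    window.onload = function () {
        document.getElementsByTagName("h1")[0].innerHTML =
            "Typographically <em>fancy</em> heading";
    };
</script>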
Generally no. Google has mentioned that they are working on a system of indexing ajax content, but I don't think any of the major search engines index dynamic content as a rule. See this page for Google's take on it: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=81766
The bots will certainly not run the Javascript code, but they might recognise some commonly used scripts.
You shouldn't count on it though. Clear markup, proper content and real links is still what counts.
Also, if the bots happen to recognise your script, it might not be in your favor. If the code is recognised as something that is commonly used to try to fool bots, it could even hurt your page ranking.
I'd use metadata to ensure bots pick up the content on your pages.
I know the general consensus is that Google does not process JavaScript or index anything inside a <script> tag; however, the general consensus appears to be incorrect.
Try searching for the following, with the surrounding quotes:
"Samsung Public Interest Statement by Thomas Fusco, Fish & Richardson P.C., for Samsung."
You should only get one result. Now click on that result and view the source.
Do a Ctrl-F for the text you searched for in Google. Notice that the text is in a JavaScript variable, not in HTML. Google must be processing some JavaScript to pull those words into its index.
