The product I work on offers SSO into Office 365, through both the web and the native "thick" (rich) clients. Part of SSO-ing into an Office 365 app, such as Excel, involves displaying my product's login page inside the login popup window of the thick client. The problem is that, only on Windows, I get many JavaScript errors when trying to execute the JavaScript included in our login page (it happens to use AngularJS, but I suspect many frameworks/libraries would be incompatible). It appears that console is not supported, along with document.body and many other "essentials".
Does anyone have any knowledge of the DOM and script engines used here? The first page shown in the SSO flow is Microsoft's login page, where you enter your email address; it then redirects to my product's login page (mapped by the domain of the email address). Microsoft's page seems to render fine, so clearly it's possible to get HTML and JS to work well enough. I'd also take a recommendation on any kind of shim/polyfill that would help me get moving.
After doing some more digging, I was able to solve my problem by specifying an HTTP response header named X-UA-Compatible with the value IE=edge, which tells IE to render using the latest document standards. The web view was originally rendering in IE7 compatibility mode, which explains why none of my JS was working as intended.
See https://stackoverflow.com/a/6771584/3822733 for more information on X-UA-Compatible; this is the question/answer that helped me solve this problem.
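For anyone who needs a starting point, here is a minimal sketch of setting the header. Express is used purely as an illustration; any stack that can set response headers will do:

```js
// Minimal sketch: serve the login page with the X-UA-Compatible header set.
// Express and the "public" directory are illustrative, not part of the original setup.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Force IE (and IE-based embedded web views) out of compatibility mode.
  res.set('X-UA-Compatible', 'IE=edge');
  next();
});

app.use(express.static('public')); // the login page and its assets
app.listen(3000);
```

The equivalent meta tag, <meta http-equiv="X-UA-Compatible" content="IE=edge"> placed at the very top of <head>, also exists, but the response header is what solved it in my case.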
I have an application that seems to work fine across the internet, but I fielded a call from an end-user who is having difficulty using the website.
I asked her to send a screenshot of the console error and received this:
From searching Stack Overflow, it looks like it's a permissions issue, but the site works for everyone else. Is this on her end or mine?
UPDATE: I should've mentioned that this user works from a school. Perhaps her IT admin has blocked some internet resources?
Has the end-user tried a different browser? I would say it's some plugin stopping that script from loading. If your app works everywhere else, then it can't be your app. Ask the end-user to try another browser, and to check whether she has any plugin installed that could also cause this.
The problem may also come from Leaflet or Google Maps.
We can see that the page loads and the CSS and content seem fine; only the map part seems to be broken. So I don't think there is a problem with your server/website; I would put the fault on the external resources.
Note: looking carefully at the screenshot, only Leaflet is affected.
Edit: in some cases you can keep a local copy of these external files, check whether the user's browser managed to load the originals, and load the local copies as a rescue.
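For example, something like this for Leaflet (the CDN URL, version number, and local path are only illustrative):

```html
<!-- Try the CDN copy first. -->
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>
<script>
  // Leaflet defines a global "L"; if it is missing, the CDN copy was
  // blocked or failed, so fall back to a copy served from our own host.
  if (typeof L === 'undefined') {
    document.write('<script src="/js/leaflet.js"><\/script>');
  }
</script>
```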
Update: either the school blocked Leaflet, or Leaflet automatically blocked the school because its bandwidth use exceeded what they allow per user, or due to abuse of some kind.
I am creating an Angular app that is hosted on a web server that doesn't allow me to edit .htaccess files or web.config. There is no server-side language option available, which means no middleware for creating HTML snapshots. This is a high-dollar CRM with a webstore, and switching hosts is not an option.
So I have come up with my own "solution" to the issue. Would it be considered OK to create hyperlinks that point to URLs generating the same view that the onClick event updates? This way the user sees the content load immediately, but bots have to reload the page at the new URL to see the page content.
Example:

```html
<a href="/#!/view2" ng-click="loadView('view2')">View 2</a>
```
I'm struggling to find a good solution to this issue, and I know others must be in the same situation as me when it comes to development. The code above is just a visual reference to what I am referring to.
Have you looked at grunt-html-snapshot?
After implementing and testing this, it does work well. Google sees the snapshots as new pages, and the user never has to worry about loading new content.
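For reference, the Gruntfile section ended up looking roughly like this; the option names are from the plugin's README as I remember them, so double-check against the current docs:

```js
module.exports = function (grunt) {
  grunt.initConfig({
    htmlSnapshot: {
      all: {
        options: {
          snapshotPath: 'snapshots/',          // where the rendered HTML files are written
          sitePath: 'http://localhost:8888/',  // the running app to snapshot
          msWaitForPages: 1000,                // give Angular time to finish rendering
          urls: ['#!/', '#!/view2']            // the routes to snapshot
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-html-snapshot');
  grunt.registerTask('snapshots', ['htmlSnapshot']);
};
```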
I'm currently developing an app that is essentially a single WebView that allows access to a specific website (a terrible idea, I know, but the decision comes from higher up); said website offers the option to log in through Facebook with the standard Facebook Connect procedure.
The login process works fine in Mobile Safari, but unfortunately when a UIWebView attempts the same thing, a blank page is displayed after authorizing and nothing happens. This is of course because the page is actually just a script that communicates with the original page through postMessage (I think!).
I tried searching, and while this is a pretty widely recognized problem, all the solutions I found are either not applicable or don't work. I found somewhere that it's possible to pass mode=redirect to the OAuth URL to prevent the whole process from involving popups, which sounds promising, but as far as I can tell it doesn't work.
Is there a way to make Facebook Connect work for a website inside a UIWebView? I'm considering having the Facebook button call a special URL that I would then listen for inside the app to trigger a native authentication process, but since my company is not the one developing the website, this kind of solution would be the least preferred.
I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application, which is served when you request the root URL
/
As you navigate around the lovely website, the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled, you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for three reasons:
non-JavaScript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content on the site must be individually shareable. And because each piece of content has its own URL, it is easy to deep-link to it, and each piece can have its own specific Open Graph data.
However, here is the issue I hit. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click the link, the server sends them the non-JavaScript / web-crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served had they gone directly to the root of the site and navigated from there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work, but I am not happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router will show the correct content (does Google punish automatic JavaScript redirects?)
rendering the no-JavaScript page, and adding some JavaScript which redirects the user to the root, similar to the above, whenever the user clicks on a link
I don't particularly like either of these solutions, but I can't think of a better one. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you navigate through the site.
My solution must support old browsers which do not implement pushState.
Make the "non-JavaScript / web-crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
"Rendering the entire JavaScript app for each page doesn't appear to be a solution to me"

That is the robust approach.

"as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat"

Only if you linked (and pushState-ed) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat (note the / at the front).
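A quick sketch of the difference; exactly how the relative form resolves depends on whether the current URL ends in a slash:

```js
// Suppose the current URL is /gallery/lovely-cat/

// Relative URL: resolved against the current path.
history.pushState(null, '', 'gallery/another-lovely-cat');
// -> /gallery/lovely-cat/gallery/another-lovely-cat

// Absolute path (leading /): replaces the whole path.
history.pushState(null, '', '/gallery/another-lovely-cat');
// -> /gallery/another-lovely-cat
```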
Try out this plugin; it might solve your third reason, along with the other two.
http://www.asual.com/jquery/address/
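A minimal sketch of how it is typically wired up; loadContent is your own routing function, not part of the plugin:

```js
// React whenever the address (the part after the #) changes,
// including on the initial page load and back/forward navigation.
$.address.change(function (event) {
  loadContent(event.value); // e.g. '/gallery/lovely-cat'
});

// Update the address when an internal link is clicked,
// without triggering a full page load.
$('a.internal').on('click', function () {
  $.address.value($(this).attr('href'));
  return false;
});
```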
I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a website? For example, the Activity/Chat bar on the right side of Facebook: it doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
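In skeleton form, the pattern looks something like this; the ?fragment=1 convention and the element names are just for illustration:

```html
<div id="content">...initial content...</div>
<a href="/profile/alice" class="hooked">Alice's profile</a>

<script>
  document.querySelectorAll('a.hooked').forEach(function (link) {
    link.addEventListener('click', function (event) {
      event.preventDefault(); // skip the normal full-page navigation
      // Ask the server for just the HTML fragment behind this link...
      fetch(link.href + '?fragment=1')
        .then(function (res) { return res.text(); })
        .then(function (html) {
          // ...and swap it into the DIV that is already on the page.
          document.getElementById('content').innerHTML = html;
        });
    });
  });
</script>
```

If JavaScript is off, preventDefault never runs and the link simply navigates to its href, which is the graceful-degradation behavior described above.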
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
In the case of a chat like Facebook's, you must save the entire conversation on the server side (for example, in a database). Then, when the user changes pages, you can restore the state of the conversation on the server side (with PHP) or by querying your server the same way you do for the chat itself (JavaScript + AJAX).
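A rough sketch of the restore-on-load step; the endpoint and the JSON shape are hypothetical:

```js
// On each page load, pull the saved conversation back from the server
// and re-render it into the chat panel.
fetch('/chat/history')
  .then(function (res) { return res.json(); })
  .then(function (messages) {
    var box = document.getElementById('chat-box');
    messages.forEach(function (m) {
      var line = document.createElement('div');
      line.textContent = m.author + ': ' + m.text;
      box.appendChild(line);
    });
  });
```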
This isn't done in JavaScript; it needs to be done using your back-end scripting language.
In PHP, for example, you use Sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server-side persistence such as session or database writes, a simple form POST, VIEWSTATE in .NET, etc.).
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save its content as you like. Example: if I closed a web page about document sharing and am now on a travel web page, pressing Ctrl+Shift+T automatically reopens my last web page. This works in Mozilla Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.