I am working on a Chrome Extension which is structured like this:
the service_worker: fetches a list of compatible websites every day; if the current website is compatible, it injects a 'widget' into the page
the widget: a React app injected into the page (a 300x300px position:absolute div) that helps the user and is linked to their account on my website; for example, they can save a product from Amazon and see it later in their account on my site.
My problem is the communication between these two parts.
An example:
I want to check whether the user currently has a session open on my website. If so, the React app (the widget) renders some tools and their saved items; if not, it renders a link to log in to my website.
Tried solutions:
Call my website's API directly from the widget: I don't want to do the verification from the widget, because it is part of the webpage and I am afraid of cookie/token hijacking (tell me if I'm wrong).
Using CustomEvents to get a response: I found a useful post on this, but as its 5th point explains, I can't really wait for a response from the service_worker and store it.
Using CustomEvents with callbacks: I also can't send a callback function, as explained here.
Using sendMessage from the Chrome API: to use this between a webpage and the service_worker you have to list the URL pattern of every allowed website in manifest.json (under externally_connectable), but my list can change every day. Also, the domain cannot be *, and <all_urls> is not allowed.
So my communication only works from the web page to the extension, not the other way around.
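To make it concrete, here is roughly the round trip I'm hoping to end up with, with a content script acting as a bridge (just a sketch; the message names, the endpoint, and the bridge itself are my assumptions):

// widget (React app, runs in the page context): ask and wait for an answer
function checkSession() {
  return new Promise((resolve) => {
    const id = Math.random().toString(36).slice(2);
    const onMessage = (event) => {
      if (event.source !== window) return;
      if (!event.data || event.data.type !== 'WIDGET_RESPONSE' || event.data.id !== id) return;
      window.removeEventListener('message', onMessage);
      resolve(event.data.response);
    };
    window.addEventListener('message', onMessage);
    window.postMessage({ type: 'WIDGET_REQUEST', id, action: 'CHECK_SESSION' }, '*');
  });
}

// content script (injected alongside the widget): relays page <-> service_worker
window.addEventListener('message', (event) => {
  if (event.source !== window) return;
  if (!event.data || event.data.type !== 'WIDGET_REQUEST') return;

  chrome.runtime.sendMessage({ action: event.data.action }, (response) => {
    window.postMessage({ type: 'WIDGET_RESPONSE', id: event.data.id, response }, '*');
  });
});

// service_worker.js: the only place that talks to my website's API
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.action === 'CHECK_SESSION') {
    // hypothetical endpoint on my website (would need host_permissions in the manifest)
    fetch('https://my-website.example/api/session', { credentials: 'include' })
      .then((r) => r.json())
      .then((data) => sendResponse({ loggedIn: !!data.loggedIn }))
      .catch(() => sendResponse({ loggedIn: false }));
    return true; // keep the channel open for the async sendResponse
  }
});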
Can you advise me on a way of making this work?
Thanks!
The product I work on offers SSO into Office365, through both the web and native "thick" clients (also known as rich clients). Part of SSO-ing into an Office365 app, such as Excel, involves displaying my product's login page inside the login popup window of the thick client. The problem is that, only on Windows, I get many JavaScript errors when trying to execute the JavaScript included in our login page (it happens to use AngularJS, but I suspect many frameworks/libraries would be incompatible). It appears that console is not supported, along with document.body and many other "essentials".
Does anyone have any knowledge of the DOM and script engines used here? The first page shown in the SSO flow is Microsoft's login page where you enter your email address, which then redirects to my product's login page (mapped by the domain of the email address), and their page seems to render fine, so clearly it's possible to get HTML and JS to work nicely (enough). I'd also take a recommendation on any kind of shim/polyfill that would help me get moving.
After doing some more digging, it looks like I was able to solve my problem by specifying an HTTP Response header of name X-UA-Compatible with value IE=edge, which tells IE to render using the latest document standards. It looked like the web view was originally trying to render using IE7 compatibility mode, which explains why none of my JS was working as intended.
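For anyone who wants a concrete example: this is roughly what it looks like if the login page happens to be served by Node/Express (that part is just an assumption for illustration); the meta-tag equivalent is in the comment.

// Express middleware (illustrative): send X-UA-Compatible on every response
// so the embedded IE web view renders with its latest document mode.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader('X-UA-Compatible', 'IE=edge');
  next();
});

// Equivalent meta tag, if you'd rather set it in the page's <head>:
// <meta http-equiv="X-UA-Compatible" content="IE=edge">

app.listen(3000);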
See https://stackoverflow.com/a/6771584/3822733 for more information on X-UA-Compatible; that is the question/answer that helped me solve this problem.
I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for three reasons:
non-JavaScript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep link to it, and each piece of content can have its own specific Open Graph data.
However, here is the issue I hit. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click the link, the server sends them the non-JavaScript / web-crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served if they had gone directly to the root of the site and navigated from there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work, but I am not that happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router shows the correct content (sketched below; does Google punish automatic JavaScript redirects?)
render the no-JavaScript page and add some JavaScript which, similar to the above, redirects the user to the root whenever they click a link
I don't particularly like either of these solutions, but I can't think of a better one. Rendering the entire JavaScript app for each page doesn't seem like a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you navigate through the site.
My solution must support old browsers which do not implement pushState.
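For concreteness, the first hack above looks roughly like this (only a sketch; the router call is a placeholder for whatever client-side router the app uses):

// On each server-rendered content page: bounce JS-capable browsers back to the app shell,
// carrying the current path in a #! fragment so the client-side router can restore it.
if (window.location.pathname !== '/') {
  window.location.replace('/#!' + window.location.pathname);
}

// In the app shell served at "/": read the fragment and route to it.
var hash = window.location.hash; // e.g. "#!/gallery/lovely-cat"
if (hash.indexOf('#!') === 0) {
  router.navigate(hash.slice(2)); // `router` is a placeholder, not a real library call
}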
Make the "non-JavaScript / web-crawler version of the site" the same as the JavaScript version. Just build the HTML on the server instead of the DOM on the client.
Rendering the entire JavaScript app for each page doesn't seem like a solution to me,
That is the robust approach.
as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and called pushState) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat. (Note the / at the front.)
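In other words, the difference is just this:

// Wrong: a relative path is resolved against the current URL,
// which is how you end up with nested paths like the one above.
history.pushState(null, '', 'gallery/another-lovely-cat');

// Right: a root-relative path always replaces the whole path.
history.pushState(null, '', '/gallery/another-lovely-cat');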
Try out this plugin; it might solve your third reason, along with the other two.
http://www.asual.com/jquery/address/
I'm working on an app built in Titanium that has a few "Tweet about this" buttons. Since I can't use the JavaScript part of a Tweet button as described in the Twitter API docs, I just use a plain URL with parameters.
On Android, this causes problems. When users click this link, they get a choice of how to open it: always the native browser, plus any app that has registered for this kind of link. So if the user has the Twitter app installed, Twitter will be shown as one of the options.
That would be great, except the Twitter app is awful. Most types of suggest-a-tweet URL cause the app to crash, and the few that do work don't pass the status text.
I'm looking for a way to force the URL to be opened by the native browser. (Or a way to prevent the Twitter app from being among the options presented to the user, but that seems harder to do.)
Is this possible using only the URL itself, or maybe a little Javascript? Since I'm using Titanium, Java won't help me.
I can't give you what you want, but I can offer an alternative suggestion.
What you are trying is hard (often impossible to do without errors) even with native code, as you're trying to work against the OS. Intents are used in Android as a way to let the user decide which program should handle a certain request. If you don't want the user to make this decision, I'd suggest opening the URL in an embedded browser.
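In Titanium that could look something like this (a rough sketch; I haven't tried it against your exact setup, and the tweet text is made up):

// Open the tweet URL in an embedded WebView instead of firing an Android intent,
// so the user never sees the "choose an app" dialog.
var tweetUrl = 'https://twitter.com/intent/tweet?text=' +
    encodeURIComponent('Check out this product!');

var win = Ti.UI.createWindow({ title: 'Share on Twitter' });
var webView = Ti.UI.createWebView({ url: tweetUrl });

win.add(webView);
win.open();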
I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a web site? For example, the right-side Activity/Chat bar on Facebook. It doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
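A stripped-down version of that hook pattern might look like this (the class name, element ID, and request header are invented for the example):

// Every link is a real link, so it still works with JavaScript turned off.
// With JavaScript on, the click is intercepted and only a fragment is swapped in.
document.querySelectorAll('a.ajax-link').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault(); // skip the full page load

    // Ask the server for just the HTML fragment for this page.
    fetch(link.href, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
      .then(function (response) { return response.text(); })
      .then(function (html) {
        // Replace only the content area; the header, chat bar, etc. stay put.
        document.getElementById('content').innerHTML = html;
      });
  });
});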
In the case of a chat like Facebook, you must save the entire conversation on the server side (for example in a database). Then, when the user changes the page, you can restore the state of the conversation on the server side (with PHP) or by querying your server the same way you do for the chat (JavaScript + AJAX).
This isn't done in JavaScript. It needs to be done using your back-end scripting language.
In PHP, for example, you use Sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server-side persistence such as session or database writes, a simple form POST, VIEWSTATE in .NET, etc.).
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save its content as you like. For example, if I closed a web page about document sharing and I am now on a travel web page, pressing Ctrl+Shift+T automatically reopens my last web page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.
I'm working on a big website at the moment and the designers are talking about making a Facebook-like content area. By this they mean that they want to keep the header loaded at all times and only reload the content area when a link is clicked. We still want to change the URL so that everything keeps working when a page is accessed directly.
I'm not sure how to explain this any further - check out Facebook and take a close look at the header whenever you navigate to another page.
Thanks.
I'm not sure if you're even asking a question, but here's my response.
Facebook, like most other major websites, uses frameworks (custom-built or not) to separate a template into components, separate code logic from design, and more.
The reason the URL and the header do not change is that one of the designated areas of the body acts as a container. When links are clicked, the data is fetched via remote procedure calls against their Facebook API. The content that is returned is then loaded into that container via JavaScript.
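A bare-bones sketch of that container idea, with the URL kept in sync so direct links and the back button still work (the IDs, attribute, and endpoint are all invented for illustration):

// The header lives outside the container and is never re-rendered;
// only #content is swapped, and the URL is kept in sync with pushState.
function loadPage(url, push) {
  fetch(url, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.getElementById('content').innerHTML = html;
      if (push) history.pushState({ url: url }, '', url); // URL changes, header stays
    });
}

// Intercept clicks on internal links only.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-internal]');
  if (!link) return;
  event.preventDefault();
  loadPage(link.href, true);
});

// Back/forward buttons restore the right content without a full reload.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.url) loadPage(event.state.url, false);
});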
Keywords: AJAX, RPC, REST API, JavaScript, MVC, framework.
All of those things are important to that style of web development.