Redirect to an iOS application using PHP / JavaScript

I have read a lot of Stack Overflow Q&As and blog posts about deep linking. The approach I found most convenient was to use the header() function in my PHP code with the redirect URL inside it. But when I put myApp:// in the header() call like this
header('Location: myApp://');
as the redirect target, it says
cannot open the page because too many redirects occur
I then did some research and found a link, and following it I used the JavaScript method
echo '<script type="text/javascript" charset="utf-8">window.location="360VUZ://";</script>';
to achieve the same thing.
This still doesn't work either, because the whole URL gets changed, and I need it intact to open the app.
It says
The requested URL /subFolder/myApp:// was not found
Now I don't understand why the path is being prepended; all I need is to hit this
myApp://
URL to open my application.
Any help or suggestion is appreciated! Thanks in advance!
EDIT:
Specifically, when I use JS to open the app, it says
The requested URL /subFolder/myApp:// was not found on this server.
So all I need to do is somehow remove the prefixed path, because the browser is looking for the app on the server rather than on the device, and hit only 'myApp://', but I still don't know how. Please help!

With iOS 9.2, Apple deprecated the ability to launch apps via URI schemes in favor of Universal Linking. Consequently, the approach you are exploring no longer works.
Your options are to set up Universal Linking on your own or to leverage a third-party service like branch.io.
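For completeness, on older iOS versions where scheme-based launching still applies, a commonly used pattern was to attempt the custom scheme and then fall back to the App Store after a short timeout. The sketch below uses placeholder URLs (myApp:// and a dummy iTunes id), not anything from the question. Note also that URI schemes must begin with a letter (RFC 3986), so a scheme such as 360VUZ:// may be parsed as a relative path, which would explain the /subFolder/... error above.
<script type="text/javascript">
// Sketch only: try the custom scheme, then fall back to the App Store.
// "myApp://" and the idXXXXXXXXX App Store link are placeholders.
window.onload = function () {
    var start = Date.now();
    window.location.href = 'myApp://';   // attempt to open the native app
    setTimeout(function () {
        // If the app took over, this tab was backgrounded and the timer was paused,
        // so much more time has elapsed; otherwise send the user to the App Store.
        if (Date.now() - start < 2200) {
            window.location.href = 'https://itunes.apple.com/app/idXXXXXXXXX';
        }
    }, 2000);
};
</script>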

Related

Creating AngularJS apps but don't have a server-side option for SEO-friendly URLs. Will this work?

I am creating an Angular app that is hosted on a web server that doesn't allow me to edit .htaccess files or web.config. There is no server-side language available, which means no middleware for creating HTML snapshots. This is a high-dollar CRM with a webstore, and switching hosts is not an option.
So I have come up with my own "solution" to the issue. Would it be considered OK to create hyperlinks that point to URLs that generate the same view that would otherwise be updated by an onClick event? This way the user sees the content load immediately, but bots have to reload the page at the new URL to see the page content.
Example:
View 2
I'm struggling to find a good solution to this issue, and I know others have to be in the same situation as me. The code above (a "View 2" link) is just a visual reference to what I am referring to; a sketch of the idea follows below.
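Since the original markup was stripped out above, here is a rough sketch of the pattern being described, with hypothetical names (/view2, the ajax-link class, and the loadView() helper are all illustrative): the anchor keeps a real, crawlable href, and JavaScript intercepts the click to swap the content in place.
<a href="/view2" class="ajax-link">View 2</a>
<script type="text/javascript">
// Sketch: bots follow the real href, JavaScript users get an in-place update.
// loadView() stands in for whatever the app uses to render a view client-side.
document.addEventListener('click', function (e) {
    var link = e.target.closest ? e.target.closest('a.ajax-link') : null;
    if (!link) { return; }
    e.preventDefault();                       // skip the full page load
    loadView(link.getAttribute('href'));      // hypothetical client-side renderer
});
</script>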
Have you looked at grunt-html-snapshot?
After implementing and testing it, it does work well: Google sees the snapshots as regular pages, and the user never has to worry about loading new content.

What does "Failed to get composer template data" mean in Facebook lint?

I've created a trivial prototype app on Facebook. When my test script (JavaScript on Ubuntu command line, powered by Node.js) tries to access the app, it produces this error message:
{ error:
    { message: 'Invalid OAuth access token.',
      type: 'OAuthException',
      code: 190
    }
}
So I'm trying to debug using Facebook's lint debugger. However when I paste the app's access token into lint, it responds with:
Failed to get composer template data.
I have no idea what this means, and a lengthy stumble through Google reveals page after page of people who are similarly clueless.
Has anyone seen this error, and fixed it?
Details about the app:
It's configured to ask for read_insights and manage_pages alongside standard permissions. No other permissions are requested.
Settings, Basic: I've had to put a nonexistent URL in the secure canvas URL, since I don't have any SSL hosting anywhere. The non-SSL canvas URL is complete and points to an existing page.
"App Info" is all filled in (apart from Tagline which is optional).
I haven't submitted the app for approval for public use, and there are no "items for approval". I'm going to be the only person who ever uses it, this isn't necessary for this app.
Switching from "live" to "sandbox" and back again doesn't make any difference.
There are no warnings anywhere on the app developer page.
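For context (the poster's actual test script isn't shown), here is a minimal Node.js sketch of the kind of Graph API call that produces the response above; error code 190 is Facebook's generic "invalid or expired access token" OAuth error.
// Minimal sketch of a Graph API request from Node.js; ACCESS_TOKEN is a placeholder.
var https = require('https');
var token = process.env.ACCESS_TOKEN;
https.get('https://graph.facebook.com/me?access_token=' + token, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        // With a bad token this prints:
        // { error: { message: 'Invalid OAuth access token.', type: 'OAuthException', code: 190 } }
        console.log(body);
    });
});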
A client I worked for was experiencing a similar issue - when sharing certain URLs on Facebook, the Facebook Sharer wasn't picking up any of the thumbnails. Frustrated with that, the client was trying to clear the Sharer's cache using the debugger at https://developers.facebook.com/tools/debug/, hoping that this way Facebook will re-cache the page and display the corresponding images.
However, in doing so, the client kept seeing the ambiguous "Failed to get composer template data." error, and turned to me for a solution.
I did my research, and it turned out that Facebook had decided to block the domain of the CDN that my client was using to serve images from. Since the pages were loading all images from that CDN, none of the images were getting picked up and the debugger was returning that "Failed to get composer template data." error.
The moment we started serving the images from a new CDN, Facebook started picking them up correctly, and the error disappeared.
Hope that helps you!
P.S. Please note however, this is not a permanent solution if you are violating Facebook's terms in some way. Yes - Facebook's spam prevention algorithms do return false positives sometimes, but most of the time they have a pretty good reason to block your content.
P.P.S. Worth noting, in the case I'm describing, when we passed the CDN URL to the debugger, it returned "This link is blocked, or you have triggered an excessive amount of scrapes. If you think you're seeing this by mistake, please let us know."
I had the same error, "Failed to get composer template data."
I believe my path to images was blacklisted by Facebook. The workaround was to create a virtual path that points to the Images folder, so I could reference /Images through an /OGImages virtual directory. After that I no longer received the error.
I had kind of the same problem; I figured out I needed to use HTTPS instead of HTTP for the image link, and everything went fine after that.
Hope it may help!
I had just the same problem, and it appeared suddenly after several months without any site changes except content. At first I thought the Facebook spam filter had blocked our site, as suggested by a Ycombinator comment thread, but then I found the real problem.
In fact it was the official Facebook Wordpress plugin that was acting up. Disabling it meant that the Facebook debugger could once again fetch our data and sharing started to work immediately.
In my case it was a "Facebook Share Buttons" plugin for WordPress. I deactivated the plugin and that resolved the issue.

Deep linking javascript powered websites

I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application, which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all of the site's functionality, as each piece of content also exists under its own URL. This is great for three reasons:
non javascript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep link to it, and each piece of content can have its own specific Open Graph data.
However, the issue I hit is the following. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click on the link, the server sends them the non-JavaScript / web-crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served if they had gone to the root of the site and navigated from there.
Does anyone have a nice solution or alternative setup to solve this problem? I have several hacks which work, but I am not that happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router will show the correct content (does Google punish automatic JavaScript redirects?)
rendering the no-JavaScript page, and adding some JavaScript which, similar to the above, redirects the user to the root whenever they click on a link
I don't particularly like either of these, but I can't think of anything better. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement pushState.
Make the "non javascript / web crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
Rendering the entire javascript app for each page doesn't appear to be a solution to me,
That is the robust approach
as you would end up with bad looking urls such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat. (Note the / at the front.)
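A small sketch of that point, using the routes from the question: a path with a leading slash is always resolved from the site root, while one without is resolved against the current URL, which is where the nested /gallery/lovely-cat/gallery/another-lovely-cat comes from.
// Sketch: root-relative paths keep pushState URLs stable at any depth.
// Current URL: /gallery/lovely-cat/
history.pushState({}, '', 'gallery/another-lovely-cat');
// -> /gallery/lovely-cat/gallery/another-lovely-cat  (resolved against the current path)
history.pushState({}, '', '/gallery/another-lovely-cat');
// -> /gallery/another-lovely-cat  (leading slash: resolved from the site root)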
Try out this plugin; it might solve your third reason, along with the other two.
http://www.asual.com/jquery/address/

Delivering Javascript Files Securely When Using HTTPS

My ASP.NET MVC application runs under HTTPS and it is working just fine.
The problem is that when a user goes to the secure portion of the website, they get the warning asking whether they want to view only the content that was delivered securely. If they click yes, then none of the JavaScript or jQuery works. If they select no, then it all works just fine.
How, then, can I provide the .js files securely? Or is that totally up to the user?
Also, the warning gets very annoying, as it shows on every new page that is navigated to.
Thanks!
Also, this is only a problem when the user is using IE; Firefox has no issues.
twal,
use the following approach and it should fix the issue:
<script type="text/javascript" src="<%= Url.Content("~/Scripts/jqGrid/js/jquery.jqGrid.min.js") %>"></script>
This will then use the appropriate path to the file, resolving the protocol along the way.
Make sure you're referencing HTTPS in full URLs.
You can often avoid this if they are relative URLs, as the browser will then use the same protocol you are currently on.
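To illustrate why relative (and protocol-relative) references avoid the prompt, here is a small sketch using the URL constructor with placeholder hosts; both forms inherit whatever protocol the page itself was served over, so nothing on an HTTPS page ends up requested over plain HTTP.
// Sketch: relative and protocol-relative URLs inherit the page's protocol.
// www.example.com and cdn.example.com are placeholder hosts.
var page = 'https://www.example.com/secure/orders';              // page served over HTTPS
console.log(new URL('/Scripts/jquery.js', page).href);           // https://www.example.com/Scripts/jquery.js
console.log(new URL('//cdn.example.com/jquery.js', page).href);  // https://cdn.example.com/jquery.js
// The same references on an http:// page would resolve to http:// URLs,
// so the browser never mixes protocols and never shows the warning.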

How to create a SharePoint 2007 web application on a virtual path to work with a URL rewrite rule?

Is there any way I can install a SharePoint web application somewhere other than the root? I need it on a virtual path. My customer is using a URL rewriting tool on an ISA Server, like the following:
They have a main URL, let's say http://publicsite/
We have SharePoint implemented internally on a site called http://internal/sites/sitecollection
When we open http://publicsite/sites/sitecollection it opens http://internal/sites/sitecollection
http://publicsite/ is on a different server than http://internal.
Right now I have an issue with the embedded resources in the rendered HTML markup for the SharePoint site, like
<script src="/ScriptResource.axd?d=MZkmbKEwKTBSRdxFCFncmF72UDKBF9tO54OpDYX6Df4DBmB7HSDbA8CAqY5mCBAK2TAU34oVF24xOS5EJEafjb6Zcvwnmou5zv3RqxNzcSKM1XXzvQP1JpAzOAaH9PUPRTPUjZfdMBnoJPmBfgNZ-BFEntGwjcL7UiqfpH8R9TE1&t=ffffffffed1cce36" type="text/javascript"></script>
Effectively this is requested from the root, and according to the customer's rewrite rule the root is another server, so the resource responds with status 404.
And AFAIK, since the SharePoint web application is on the root, I can't change the way it renders the URL.
Is there any way to solve this?
I started to think about a JavaScript function to change the URL of all scripts that start with /ScriptResource.axd to /sites/sitecollection/ScriptResource.axd, but it failed to reload the JavaScript and I don't know why (a sketch of this approach follows below).
The second option is to create the web application on a virtual path, and I don't know whether this is possible or not, so can anybody help me?
Thanks in advance.
Regards
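On the first idea: assigning a new src to a script element that has already been evaluated does not make the browser fetch or execute it again, which is the most likely reason the rewrite appeared to fail to reload the JavaScript. A rough sketch of the usual workaround, inserting fresh script tags with the prefixed path, is below (the /sites/sitecollection prefix comes from the question; everything else is illustrative), though load-order dependencies between the ASP.NET resource scripts may still break.
<script type="text/javascript">
// Sketch: re-request ScriptResource.axd files under the /sites/sitecollection prefix
// by adding new script tags; editing the src of an already-run script does nothing.
var scripts = document.getElementsByTagName('script');
for (var i = 0, n = scripts.length; i < n; i++) {
    var src = scripts[i].getAttribute('src');
    if (src && src.indexOf('/ScriptResource.axd') === 0) {
        var fresh = document.createElement('script');
        fresh.type = 'text/javascript';
        fresh.src = '/sites/sitecollection' + src;   // prefix from the question
        document.getElementsByTagName('head')[0].appendChild(fresh);
    }
}
</script>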
My first suggestion would be to create a different public URL for SharePoint. You are going to save yourself a lot of hurt if you do.
But if you really need to go forward, you could try implementing an HttpModule that replaces the bad references. You are going to have to test this a fair bit if you are doing anything more than just viewing content (collaboration, Office sync, etc.).
https://sharepoint.stackexchange.com/questions/5956/rich-text-editor-error-messages/8364#8364
You will probably need to change the path on the page. Changing the configuration of the ISA server will still leave you with the problem that the root is on a different server to the SharePoint site.
If the JavaScript approach did not work, you could try using the "adaptive control behaviour" suggested in the answer to this question, where you rewrite all the script tags; you will also need to rewrite the style and image tags, as some will refer to /_layouts.
A heavy-handed approach, but control adapters work well with SharePoint.
I am thinking you could extend the web app on the internal server to the additional URL http://publicsite/. You would then rely on intelligent IP routing so that if the URL is http://publicsite/sites/sitecollection it goes to your intranet server, and otherwise it goes to the public server. It sounds as though your ISA Server already does this. I think that would make all the relative links correct.
