I made a SPA in AngularJS. When I ran it in the browser it showed a '#' in the address bar, so I enabled html5Mode. This successfully removed the '#' from the URL, but when I try to reload the page I get a 404: page not found error. I tried the .htaccess rules from this question:
htaccess redirect for Angular routes
Nothing changed.
I also followed the Grunt-related answers, but those were not helpful either.
Did you correctly enable html5 mode?
Take a look at the answer to a question regarding html5 mode/hashbang mode.
Did you add <base href="/"> to the <head> of your HTML?
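If not, html5 mode is enabled in a config block along these lines (a minimal sketch; the module name is illustrative):

angular.module('myApp').config(['$locationProvider', function($locationProvider) {
  // Removes the '#' from URLs. Requires <base href="/"> in the page's <head>.
  $locationProvider.html5Mode(true);
}]);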
You need to set up your server routes to return the index file for all Angular URLs.
Usually you register your API at a dedicated endpoint (such as mysite.com/api/v1) and, if a URL does not match that pattern, return the index.html file containing the Angular bootstrap logic.
This happens because without html5 mode all requests are sent to the root server URL (mysite.com/, even if the link is mysite.com/#!/profile). When you enable html5 mode, requests are sent as declared in Angular: if you have a '/' route with an href to profile (/profile) and reload the page from there (mysite.com/profile), the request goes to mysite.com/profile and the server responds with 404, because it is set up to return the index file only for the '/' URL.
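As an illustration of that setup, here is a minimal Node/Express sketch (the /api/v1 route, directory names, and port are assumptions; on Apache the equivalent fallback lives in .htaccess):

var express = require('express');
var path = require('path');
var app = express();

// Hypothetical API registered under its own prefix, per the answer above.
var api = express.Router();
api.get('/profile', function(req, res) { res.json({ name: 'demo' }); });
app.use('/api/v1', api);

// Static assets (js/css/images) are served as-is.
app.use(express.static(path.join(__dirname, 'public')));

// Every other URL returns index.html so Angular's router can take over.
app.get('*', function(req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);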
We are integrating CKFinder with the CKEditor installation in PeopleSoft.
We created our own connector in PeopleSoft, and almost everything is now working except editing images: the editor gets stuck on the image-loading dialog. We already implemented ImageInfo, and its response is successfully received. From my observation, the browser makes the following requests:
1. caman.js [GET]
2. ImageInfo command request [GET]
3. (current URL)?camanproxyURL=(CKFinder Thumbnail request URL) [GET]
4. (current URL)?camanproxyURL=(CKFinder ImagePreview request URL) [GET]
I tried this in the CKFinder demo, but I don't see requests #3 and #4 there; the Thumbnail and ImagePreview were requested directly.
I think the problem is with #3 and #4: the URL used is the current URL, which is .../ckeditor/ckfinder/ckfinder.html. I don't think this is what is supposed to happen.
How do I fix this issue? Is this something with our setup or configuration?
CKFinder is inside our CKEditor directory.
We're using a custom CamanJS build with some improvements. One of them was this change, made because of a similar problem with a domain name that contains a '-'. The fix changed the regexp used to tell whether a domain is local or remote (if a domain URL fails this regexp, the proxy mechanism is used).
Could you verify that a domain URL that you are using for development passes this regex:
/(?:(?:http|https):\/\/)((?:[\w-]+)\.(?:(?:-|\w|\.)+))/
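You can check this quickly in a browser console or in Node (the URLs below are placeholders; substitute the URL you are actually developing against):

var camanDomainRegex = /(?:(?:http|https):\/\/)((?:[\w-]+)\.(?:(?:-|\w|\.)+))/;

// A dotted host name passes, so CamanJS treats the domain normally:
console.log(camanDomainRegex.test('http://myapp.example.com/ckfinder/ckfinder.html')); // true

// A host without a dot fails the regexp, which triggers the proxy mechanism:
console.log(camanDomainRegex.test('http://localhost/ckfinder/ckfinder.html')); // false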
I have an AngularJS application that is injected into 3rd party sites. It injects dynamic content into a div on the 3rd party page. Google is successfully indexing this dynamic content but does not appear to be crawling links within the dynamic content. The links would look something like this in the dynamic content:
<a href="http://www.example.com/support?title=Example Title&titleId=12345">Link Here</a>
I'm using query parameters for the links rather than an actual url structure like:
http://www.example.com/support/title/Example Title/titleId/12345
I have to use the query parameters as I don't want the 3rd party site to have to change their web server configuration to redirect unfound URLs.
When a link is clicked, I use the $location service to update the URL in the browser, and my Angular application then responds accordingly: mainly it shows just the relevant content based on the query params, and sets the page title and meta description.
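For context, the click handling is roughly this (a simplified sketch; module, controller, and function names are illustrative):

angular.module('injectedApp').controller('SupportCtrl', ['$scope', '$location',
  function($scope, $location) {
    $scope.openTitle = function(title, titleId) {
      // Push the query params into the URL without a page reload.
      $location.search({ title: title, titleId: titleId });
    };

    // Re-render the relevant content whenever the query params change.
    $scope.$on('$locationChangeSuccess', function() {
      $scope.params = $location.search();
      // ...load content and set the page title / meta description here...
    });
  }]);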
Many of the articles I have read use the route provider in AngularJS with templates, but I'm not sure why this would make a difference to the crawler.
I have read that Google should view URLs with query parameters as separate pages, so I don't believe that should be the issue:
https://webmasters.googleblog.com/2008/09/dynamic-urls-vs-static-urls.html
The only things I have not tried are 1. providing a sitemap with the urls that have the query parameters and 2. adding static links from other pages to the dynamic links to help google discover those pages.
Any help, ideas or insights would be greatly appreciated.
This happens because Google's crawlers cannot get static HTML from your URL, since your pages are rendered dynamically with JavaScript. You can achieve what you want as follows.
Since #! is deprecated, you can tell Google that your pages are rendered with JavaScript by using the following tag in your header:
<meta name="fragment" content="!">
On finding the above tag, Google's bots will request your URLs from your server with the _escaped_fragment_ query parameter, like:
http://www.example.com/?_escaped_fragment_=/support?title=Example Title&titleId=12345
Then you need to rebuild the original URL from the _escaped_fragment_ on your server, so that it looks like this again:
http://www.example.com/support?title=Example Title&titleId=12345
Then you need to serve static HTML to the crawler for that URL.
You can do that by using a headless browser to access the URL. PhantomJS is a good option: it renders the page, running the JavaScript, and then writes the contents to a file, creating an HTML snapshot of your page. You can also save the snapshot on your server for future crawls, so that when Google's bots visit you can serve the snapshot directly instead of re-rendering the page.
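For instance, a snapshot-serving middleware might look like this sketch (Node/Express, with an assumed snapshots/ directory; not a drop-in implementation):

var express = require('express');
var fs = require('fs');
var path = require('path');

var app = express();

app.use(function(req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) return next(); // normal user: serve the SPA

  // Map the escaped fragment to a previously saved HTML snapshot.
  var file = path.join(__dirname, 'snapshots',
                       encodeURIComponent(fragment) + '.html');
  fs.readFile(file, 'utf8', function(err, html) {
    if (err) return next(); // no snapshot yet: fall through (or render one)
    res.send(html);
  });
});

app.use(express.static(path.join(__dirname, 'public')));
app.listen(3000);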
The web crawler might run at a higher priority than the AngularJS interpretation of your dynamic links when it loads the page. Using ng-href makes the dynamic-link interpretation happen at a higher priority. Hope it works!
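In other words, bind the link with ng-href instead of a plain href (the attribute values here are illustrative):

<a ng-href="/support?title={{item.title}}&titleId={{item.id}}">Link Here</a>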
If you use URLs with #:
Nothing after the hash in the URL gets sent to your server. Since JavaScript frameworks originally used the hash as a routing mechanism, that's the main reason Google created this protocol.
Change your URLs to #! instead of just #.
angular.module('myApp').config([
  '$locationProvider',
  function($locationProvider) {
    $locationProvider.hashPrefix('!');
  }
]);
This is how Google and Bing handle AJAX calls.
The documentation is mentioned here.
The overview, as given in the docs, is as follows:
The crawler finds a pretty AJAX URL (that is, a URL containing a #! hash fragment). It then requests the content for this URL from your server in a slightly modified form. Your web server returns the content in the form of an HTML snapshot, which is then processed by the crawler. The search results will show the original URL.
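As a concrete illustration of that flow (example.com and the /docs route are placeholders):

Pretty URL seen by the crawler:    http://www.example.com/#!/docs
URL the crawler actually fetches:  http://www.example.com/?_escaped_fragment_=/docs
URL shown in the search results:   http://www.example.com/#!/docs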
Step by Step guide is shown in the docs.
Since AngularJS runs on the client side, you will need to configure your web server to invoke a headless HTML browser to access your web page and produce an HTML snapshot for the special Google URL.
If you use hashbang URLs, you need to instruct the Angular application to use them instead of regular hash values:
App.config(['$routeProvider', '$locationProvider', function($routes, $location) {
  $location.hashPrefix('!');
  $routes.when('/home', {
    controller: 'IndexCtrl',
    templateUrl: './pages/index.html'
  });
}]);
as mentioned in the code example here
However, if you do not wish to use hashbang URLs but still want to inform Google of the HTML content, you can use this meta tag:
<meta name="fragment" content="!" />
and then configure Angular to use HTML5-mode URLs:
angular.module('HTML5ModeURLs', []).config(['$locationProvider', function($location) {
  // html5Mode lives on $locationProvider, not $routeProvider
  $location.html5Mode(true);
}]);
and then install whichever method you chose via your module:
var App = angular.module('App', ['HashBangURLs']);
//or
var App = angular.module('App', ['HTML5ModeURLs']);
Now you will need a headless browser to access the URL.
You can use PhantomJS to download the contents of the page, run the JavaScript, and then write the contents to a temporary file.
Phantomrunner.js takes any URL as input, downloads and parses the HTML into the DOM, and then checks the data status.
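A minimal PhantomJS sketch of that snapshot step might look like this (file names and the timeout are assumptions, not the actual Phantomrunner.js):

// Usage: phantomjs snapshot.js http://example.com/#!/docs out.html
var system = require('system');
var fs = require('fs');
var page = require('webpage').create();

var url = system.args[1];
var out = system.args[2] || 'snapshot.html';

page.open(url, function(status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
    return;
  }
  // Give Angular a moment to finish rendering before capturing the DOM.
  window.setTimeout(function() {
    fs.write(out, page.content, 'w');
    phantom.exit(0);
  }, 2000);
});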
Test each page by using the function defined here
A sitemap can also be created, as shown in this example.
The best feature is that you can verify your site URL using Google Search Console.
Full attribution goes to the website and the author mentioned in this site.
UPDATE 1
The crawler needs the pages to be reachable as:
- com/
- com/category/
- com/category/page/
By default, however, Angular sets your pages up as such:
- com
- com/#/category
- com/#/page
Approach 1
The hashbang allows Angular to know which HTML elements to inject with JS, which can be done as mentioned before; but since it has been deprecated, another solution is the following.
Configure the $locationProvider and set up the base for relative links
You can use the $locationProvider as mentioned in these docs and set the html5mode to true
$locationProvider.html5Mode(true);
This lets Angular change the routing and URLs of our pages without refreshing the page.
Add <base href="/"> to the <head> of your document.
The $location service will automatically fallback to the hashbang method for browsers that do not support the HTML5 History API.
Full attribution goes to the page and the author.
Also, there are some other measures and tests you can take, as mentioned in this document.
I have a web application that I am hosting on Parse with a subdomain "appname".parseapp.com URL (the quotes are not actually there, and that's not the actual link to my app). Supposedly I can use my own templates for things like the password reset form, but I haven't had any success. I downloaded the template, modified it, put it in my public directory, and deployed it. I set the Parse Frame URL to "appname".parseapp.com/user_management.html, as instructed, after also putting the user_management.html file in my public directory. Then, in the Customize User-Facing Pages section, I set the password reset file to choose_password.html, since it is right in the public directory. The link sent to the email that attempts to reset the password keeps coming out wrong and gives me a 404. I get a link like this: "appname".parseapp.com/user_management.html?link=%2Fapps%2Fschool-project%2Frequest_password_reset&token=TvIoEhOD8ZsWAP414jBCbY3OI&username=testuser
Any Idea why this isn't working correctly?
Figured out my mistake. I was supposed to include the entire link for the template, not just the path after the domain, e.g. "appname".parseapp.com/choose_password.html rather than just /choose_password.html.
I am using an S3 bucket as static web solution to host a single page React application.
The React application works fine when I hit the root domain s3-bucket.amazon.com, and the HTML5 history API works: every time I click on a link, the new URL looks right: _http://s3-bucket.amazon.com/entities/entity_id_
The problem happens when I use permalinks to access the application. If I type that same URL (_http://s3-bucket.amazon.com/entities/entity_id_) directly into the browser, I get the following error from Amazon S3:
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
Key: users
RequestId: 3A4B65F49D07B42C
HostId: j2KCK4tvw9DVQzOHaViZPwT7+piVFfVT3lRs2pr2knhdKag7mROEfqPRLUHGeD/TZoFJBI4/maA=
Is it possible to make Amazon S3 work nicely with permalinks and the HTML5 history API? Maybe it can act as a proxy?
Thank you
Solution using AWS CloudFront:
Step 1: Go to CloudFront
- Click on the distribution ID
- Go to the Error Pages tab
- Click on Create Custom Error Response
Step 2: Fill in the form as follows:
- HTTP Error Code: 404
- TTL: 0
- Custom Error Response: Yes
- Response Page Path: /index.html
- HTTP Response Code: 200
source: https://medium.com/@nikhilkapoor17/deployment-of-spa-without-location-strategy-61a190a11dfc
Sadly, S3 does not support the wildcard routing required for single-page apps (basically you want every URL after the host to serve index.html while preserving the route).
So www.blah.com/hello/world would actually serve www.blah.com/index.html but pass the route to the single-page app.
The good news is that you can do this with a CDN such as Fastly (Varnish) in front of S3, by setting up a rule such as:
if(req.url.ext == "" ) {
set req.url = "/index.html";
}
This will redirect all non-asset requests (anything without a file extension) to index.html on the domain.
I have no experience running an SPA on Amazon S3, but this seems to be a URL-rewriting problem.
It is one thing to have the history (HTML5) API rewrite your URL while running your application/site.
But making rewritten URLs accessible when refreshing or cold-surfing to your site definitely needs URL rewriting at the server level.
I'm thinking web.config (IIS), .htaccess (Apache), or an nginx site configuration.
It seems the same question already got asked some time ago: https://serverfault.com/questions/650704/url-rewriting-in-amazon-s3
Specify the same file name for both the index document and the error document under the bucket's "Static website hosting" properties.
Old question, but the simplest way would be to use hash routing: instead of mydomain.com/somepage/subpage it would be mydomain.com/#/somepage/subpage.
Angular.js routes create URLs such as these:
http://cooldomain.com:3000/#/search
http://cooldomain.com:3000/#/docs
In my docs URL, I would like to have one long page with <a name="sdsds"> sections and a traditional table of contents with anchor links so that the user can hop up and down the page.
Conceptually, the table of contents would produce lots of invalid URLs such as http://cooldomain.com:3000/#/docs#coolAPIFunction, which of course wouldn't work because of the double hash.
So: is it possible to use anchor links in Angular.js applications that have routes?
You could enable html5 pushstate and get rid of the # in your routes. You can do so by adding this to your .config
$locationProvider.html5Mode(true);
However, be aware that there will no longer be a distinction between Angular routes and server requests. You'll have to configure your server to deliver the appropriate static HTML file (e.g. index.html) for that URL.
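With html5Mode on, routes no longer occupy the '#', so in-page anchors become possible; one common pattern uses $location.hash() together with $anchorScroll (a sketch; the module, controller, and function names are illustrative):

angular.module('coolApp').controller('DocsCtrl', ['$scope', '$location', '$anchorScroll',
  function($scope, $location, $anchorScroll) {
    $scope.gotoSection = function(id) {
      $location.hash(id); // e.g. 'coolAPIFunction' -> /docs#coolAPIFunction
      $anchorScroll();    // scrolls to the element with that id
    };
  }]);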