Is Angular 2 Isomorphic? Will it be in the future? - javascript

Been reading up a lot on Isomorphic frameworks and curious as to whether Angular 2 can be considered "Isomorphic". It doesn't seem to be included on any lists, but that may well be because it's still very new.
I have read that Angular 2 is less tightly coupled to the DOM than AngularJS, but that it does not yet support server rendering. Judging from this link
https://github.com/mbujs/isomorphic-angular
Angular 2 doesn't seem to be classed as isomorphic by default; however, it looks like it is heading that way.
Very general question I know but just looking to see if anyone has any thoughts or opinions on the matter, or whether in fact it actually matters!
Thanks

It looks like the creators of Angular 2 want it to be multi-platform (so, yes, server-side rendering). If you browse their source code on GitHub you can see that they have a few modules for both server-side and browser-side "platforms" - a platform provides the bootstrap method to an application using Angular2, enabling it to start (think angular.bootstrap).
Unfortunately server.ts is currently empty, so it looks like right now, no, it is not isomorphic. It would appear that Angular2 uses an adapter pattern to connect to the native browser API, so it's entirely possible that an adapter just needs to be written for the server side for this to work.
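To make the adapter idea concrete, here is a minimal sketch of the pattern (entirely hypothetical names, not Angular's actual API): the rendering code only talks to an abstract adapter, and each platform supplies its own implementation.

```js
// Hypothetical illustration of the adapter idea (NOT Angular's actual API).

// Browser implementation: delegates to the real DOM.
function BrowserDomAdapter() {}
BrowserDomAdapter.prototype.createElement = function (tag) {
  return document.createElement(tag);
};
BrowserDomAdapter.prototype.setText = function (el, text) {
  el.textContent = text;
};

// Server implementation: builds plain objects that could later be
// serialized to an HTML string instead of touching a real DOM.
function ServerDomAdapter() {}
ServerDomAdapter.prototype.createElement = function (tag) {
  return { tag: tag, text: '' };
};
ServerDomAdapter.prototype.setText = function (el, text) {
  el.text = text;
};

// Framework code only ever sees the adapter interface.
function render(adapter) {
  var el = adapter.createElement('h1');
  adapter.setText(el, 'Hello');
  return el;
}
```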
There's no feature tracking other than the issue that Gunter linked above, so that would be your best bet to keep tabs on the feature.
Interestingly there also appear to be source files for web workers.
Don't worry if you get lost navigating the GitHub repo - for some reason the Angular2 repo looks like it has been made intentionally hard to navigate, with a mixture of Dart and TS everywhere.

This document is old. Server rendering is planned for Angular2 and it might already work (I don't know the current status).
See also https://github.com/angular/angular/issues/1810, https://angularu.com/VideoSession/2015sf/angular-2-server-rendering

Related

Asp.Net Core + Angularjs2, together or separately?

I am about to start developing a new project and I want to use ASP.NET Core and Angular, but I have a question: which is the better way, using ASP.NET and Angular together or separately?
I have defined my architecture in this form:
Client1 (Angular) -> REST API -> BusinessLogic -> DataAccess -> DB
Yes, I can see that my architecture suggests that I need to manage ASP.NET and Angular separately, but I'd like to hear any suggestions.
UPDATE:
Thank you for your answers. In the end both approaches have their pros and cons. I would like to share these articles with you:
Together: http://proudmonkey.azurewebsites.net/asp-net-core-getting-started-with-angularjs-2/
Separately:
Part one: https://chsakell.com/2016/06/23/rest-apis-using-asp-net-core-and-entity-framework-core/
Part two: chsakell.com/2016/06/27/angular-2-crud-modals-animations-pagination-datetimepicker/
In general, in programming you should separate your logic as much as you can.
You will want to keep the two projects separate for many reasons:
You have a web app right now (Angular), but in the near future you may need a mobile app (hybrid or native).
There may be more than one person working on the project; for example, you may need a designer/integrator to work on the app, and you don't want to share your back end with them. The same applies if you have a back-end developer.
Two projects may mean two source control repositories, which means more control over branches, versions, rolling back, etc.
I hope this helps.
If I see other benefits, I'll update this answer.
Keep them separate. Your MVC part will mainly be REST APIs, which have nothing to do with the JavaScript, HTML and CSS in the Angular project. Besides, if you want to build another client, e.g. mobile, it will have its own project as well; this way you will have a clean structure for your solution.
So, you should have the following:
YourProject.REST
YourProject.Angular
YourProject.MobileClient
Also, the separation will make it easier for the teams working on the project: whoever works on the front end doesn't have to worry about code unrelated to their tasks, the same goes for the developer working on the APIs, and each project can be structured according to the best practices for its technology.
Your question is more opinion-based than factual, so here is my opinion.
I have done a few projects with ASP.NET MVC, Web API and AngularJS. They all live in a single Web Application project. Note: I have a few class libraries for strongly typed Angular helpers, business logic and data access.
Here are the advantages:
I authenticate the user using OWIN middleware, then redirect to Angular. The main advantage is that I do not have to explicitly maintain a bearer token or authentication cookie inside Angular.
Second, I use cshtml as a strongly typed Angular view rather than plain HTML. It is the best of both worlds.
Last but not least, you can debug easily without starting two projects at the same time, which saves resources on the development machine. Everyone knows Visual Studio is a memory-hungry IDE.

Why SPA (Single Page App)?

Inspired by John Papa's video at Pluralsight, I started learning SPA. It appears pretty interesting. However, before I fully jump in, I'd like to clarify some of my questions.
From what I have learnt, an SPA is a lean-server, fat-client app. I think this should work well for small apps like the one John Papa demonstrated. Does it scale? How big can it be? Does anybody have experience with this?
In an SPA, you seem to code all the business logic in JavaScript. Is this a good idea at all? How do you hide the business "secrets"?
With my background primarily in C#/WPF/.NET, moving to JavaScript seems very difficult (well, I learnt a little JavaScript more than 10 years ago - I hated it and never touched it again). With my limited knowledge, I ran into several problems. Debugging JavaScript seems like a nightmare to me. The highly praised component BreezeJS still seems to be in its early stages (e.g. it doesn't support UoW, CascadeDelete or enums). So, I'm wondering whether this is a good time to jump in?
Directly to your questions:
Since the server logic is thin you can use some kind of cloud service, and those scale pretty well. Most of the logic will be handled by your users' browsers.
You should be careful if you depend on the client. The HTTP protocol can be easily manipulated. Don't forget that you should always do the validation logic on both the client and the server side (see the sketch after these points)! Also, the "hidden" validation and other "secret" logic should live only on the server.
Debugging JavaScript isn't so bad at all. You can use the built-in tools (Inspect Element in Chrome, Firebug in Firefox, etc.). There are also a lot of useful third-party tools that will help you with debugging.
If you start a new project just for your own use, then I advise you to try the SPA approach. If you are writing production code, you should become an expert in this area first and then try to use these technologies.
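To illustrate the point about duplicating validation on the server, here is a minimal sketch. It uses a small Node/Express endpoint purely for illustration (assuming Express 4.16+ so express.json() is available; the route and field names are made up) - in this question's .NET stack the same check would live in a Web API controller.

```js
var express = require('express');
var app = express();
app.use(express.json()); // parse JSON request bodies

// Even if the client (e.g. Breeze or Knockout validation) has already checked
// this, the server must re-check it, because the client can be bypassed.
app.post('/api/register', function (req, res) {
  var email = req.body && req.body.email;
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    return res.status(400).json({ error: 'A valid email address is required.' });
  }
  // "secret" business rules also belong here, never in client-side code
  res.status(201).json({ ok: true });
});

app.listen(3000);
```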
Regarding UoW, take a look at the TempHire sample. It demonstrates using the UoW pattern on the client as well as the server.
https://github.com/IdeaBlade/Breeze/tree/master/Samples/TempHire
I believe SPAs provide a better framework for business-intensive applications as well as for simpler application workflows such as Facebook's. I have worked with multi-page applications for a banking application with complex workflows, and it gets daunting to handle everything and still keep up the application's performance.
But I do think Knockout alone won't be able to handle large applications, as it is too connected in nature. I would recommend something like Backbone Marionette or Angular for that venture.
I am building a framework for large-scale SPA development for the open-source community, so I do believe it is the right direction.
Interested parties can go to my demo page at http://saqibshakil.github.io where I have demonstrated some of my work.
I have been looking into it for months. My conclusion is to use Knockout with a lightweight path.js or sammy.js for your URLs. I use JSON with a standard Visual Studio MVC project (which can return JSON) as the backend. A rough sketch of the client-side wiring is below.
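As a rough illustration of that combination (a sketch only: the view model, route and MVC action URL are made up, and it assumes jQuery is loaded alongside Knockout and Sammy):

```js
// A tiny view model that the page binds to.
function AppViewModel() {
  this.currentUserId = ko.observable(null);
}
var vm = new AppViewModel();
ko.applyBindings(vm);

// Sammy owns the hash-based URL; data comes back as JSON from an MVC action.
Sammy(function () {
  this.get('#/user/:id', function () {
    var id = this.params['id'];
    $.getJSON('/Users/Details/' + id, function (data) {
      vm.currentUserId(data.Id);
    });
  });
}).run('#/');
```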
I am still not done with the project, but so far so good. It's lightning fast, elegant and lightweight.
Stay away from the frameworks. Take a look at the standard libraries and how they are written; you can learn a lot of JavaScript that way. Finally, debug with the Chrome or Internet Explorer developer tools.
Good luck

Migrating from asp.mvc application to node.js application with a focus on design [closed]

I am currently looking into alternative platforms to migrate an existing application onto. It started out as a prototype using ASP.NET MVC, but the majority of the code is JavaScript with a simple ASP.NET MVC web service behind it. As we look at taking it forward, it seems sensible to scrap the current Microsoft stack and go for Node.js, giving us more freedom over where and how we host our application. We could also reuse some of the models and code in both the web service and the front end, although this would probably end up being a small amount.
This will probably be quite a large question encompassing many parts, but I will put it out there anyway, as I am sure it could be helpful to lots of other people looking at how to move from something like .NET/Java to Node.js. Most of these statically typed languages come with lots of patterns and practices, such as Inversion of Control, Unit of Work and Aspect Oriented Programming, so it seems a bit strange to move to another platform which doesn't seem to require as much structure in this area. So I have some concerns about migrating from my super-structured and tested world to this new, seemingly unstructured and dynamic world.
So here are the main things I would do in MVC and would want to do in Node.js, but where I am not quite sure of the best way to achieve the same level of separation or functionality.
Routing to Actions
This mechanism in ASP MVC seems to be replaceable by Express in Node.js, which gives me the same ability to map a route to a method. However, there are a couple of concerns:
In ASP MVC my controllers can be dependency-injected and have member variables, so actions are easy to test: everything they depend on can be mocked when needed and passed in via the constructor. However, as a routed method in Express does not seem to have a containing scope, it looks like I would have to either use global variables or new up the dependencies internally. Is there a nice way to access my business logic containers in these routed methods? (See the first sketch after this list of concerns.)
Is there any way to auto-bind the model being sent, or at least get the JSON/XML in some useful form? It appears that if you send the correct MIME type with the content it can be extracted, but I have not seen any clear examples of this online. I am happy to use additional frameworks on top of Express to provide this functionality; ideally I just want to make a POST request to /user/1 and have the user object pulled out so I can update the user with id 1 in the database (which we will talk about shortly). (See the second sketch after this list.)
What is the best way to validate the data being sent over? Currently in our front-end JavaScript application we use KnockoutJS, and all models use Knockout observables and are validated using Knockout.Validation. I am happy to use Knockout Validation on Node, as the models are the contract between the front end and the back end, but if there is a better solution I am happy to look into it.
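On the first concern (dependency injection into routed methods), a common approach is to build handlers from factory functions that close over their dependencies, so nothing global is needed and tests can pass in mocks. A minimal sketch, assuming Express 4 and with a made-up repository module:

```js
var express = require('express');

// Factory: dependencies are passed in, so tests can hand in mocks instead.
function makeUserRoutes(userRepository) {
  var router = express.Router();
  router.get('/user/:id', function (req, res) {
    userRepository.findById(req.params.id, function (err, user) {
      if (err) return res.status(500).end();
      if (!user) return res.status(404).end();
      res.json(user);
    });
  });
  return router;
}

// Composition root: wire up the real implementation here (path is hypothetical)...
var app = express();
app.use(makeUserRoutes(require('./lib/userRepository')));
// ...while a unit test would call makeUserRoutes({ findById: fake }) directly.
app.listen(3000);
```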
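On the second concern (getting the posted JSON bound to something usable), Express can parse the body when the Content-Type is application/json, so the handler receives a plain object. A sketch, assuming Express 4.16+ where express.json() is built in (older setups used the separate body-parser module):

```js
var express = require('express');
var app = express();
app.use(express.json()); // binds a JSON body to req.body when the MIME type is application/json

// POST /user/1 with a body like {"name": "Alice", "email": "alice@example.com"}
app.post('/user/:id', function (req, res) {
  var id = req.params.id; // "1" from the URL
  var user = req.body;    // the parsed JSON payload
  // validation (shared Knockout models, express-validator, etc.) would run
  // here before the update is persisted
  res.json({ id: id, updated: user });
});

app.listen(3000);
```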
Database Interactions
Currently in .NET land we use NHibernate for communicating with our relational databases and the MongoDB driver for communicating with our MongoDB databases. We use a generic repository pattern and isolate queries into their own classes. We also use a Unit of Work pattern quite heavily so we can wrap logical chunks of work in a transaction and then either commit it all if it goes well or roll it back if it doesn't. This gives us the ability to mock out our objects at almost any level, depending on what we want to test, and also lets us change our implementations easily. So here is my concern:
Sequelize seems to be a good fit for replacing NHibernate; however, it doesn't seem to have any sort of transaction handling, making it very difficult to build a Unit of Work pattern. This is not the end of the world if it cannot be done in the same way, but I would like some way of grouping a chunk of work, so that an action like CreateNewUserUnitOfWork could take a model representing the user's details, validate it, create an entry in one table, create some related data in others, get the user's id from the database and then send that back (assuming all went well). From looking at the QueryChainer it seems to provide most of the functionality, but if it failed on the 3rd of 5 actions it doesn't appear simple to roll back, so is there some way to get this level of control?
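As noted in the answer below, Sequelize has since gained transaction support. With a version that exposes promise-based transactions, a unit-of-work-style grouping can be sketched roughly like this (the User and Profile models and their fields are made up):

```js
// Hypothetical "create new user" unit of work: everything inside the callback
// shares one transaction and is rolled back together if anything throws.
async function createNewUserUnitOfWork(sequelize, User, Profile, details) {
  return sequelize.transaction(async function (t) {
    var user = await User.create(
      { name: details.name, email: details.email },
      { transaction: t }
    );
    await Profile.create(
      { userId: user.id, bio: details.bio },
      { transaction: t }
    );
    return user.id; // the transaction commits automatically on success
  });
}
```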
Plugins / Scattered Configuration Data
This is more of a niche concern of our application, but we have the central application and then other DLLs which contain plugins. These are dropped into the bin folder and are then hooked into the routing, database and validation configuration. Imagine having the Google homepage, where Google Maps, Docs, etc. were all plugins which told the main application to route additional calls to methods within the plugin, each of which had its own models and database configuration. Here is my concern around this:
There seems to be a way to update the routing by just scanning a directory for new plugins and including them (node.js require all files in a folder?), but is there some best practice around doing this sort of thing? I don't want each request to have to constantly do directory scans. It is safe to assume that I am happy to have the plugins in the right places at the time of starting the Node application, so there is no need to add plugins at runtime at the moment.
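One common shape for this is a single scan at startup, so no per-request directory work is needed. A sketch only - the plugins directory layout and the register(app) convention are assumptions, not an established standard:

```js
var fs = require('fs');
var path = require('path');
var express = require('express');

var app = express();

// Scan once at startup; each plugin module is expected to export
// a register(app) function that adds its own routes/configuration.
var pluginDir = path.join(__dirname, 'plugins');
fs.readdirSync(pluginDir)
  .filter(function (file) { return file.slice(-3) === '.js'; })
  .forEach(function (file) {
    var plugin = require(path.join(pluginDir, file));
    if (typeof plugin.register === 'function') {
      plugin.register(app);
    }
  });

app.listen(3000);
```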
Testing
Currently there is unit, integration and acceptance testing within the application. The unit tests happen on both the front end and back end, so currently the build script runs JavaScript tests using JsTestDriver to confirm all the business logic works as intended in isolation. Then we have integration tests, currently all written in C#, which test that our controllers, actions and units of work behave as expected; these are again kicked off by the build script but can also be run via the ReSharper unit test runner. Finally we have acceptance tests, written in C# using WebDriver, which target the front end and test all the functionality via the browser. My main concerns around this are:
What are the best practices and frameworks for testing Node.js? Currently most of the tests at the ASP layer are carried out via C# scripts that create the controller with mocked dependencies and run the actions to prove they work as intended, using the MVC helpers. However, I was wondering what level of support Node.js has for this, as on a quick glance it doesn't seem simple to test Node.js components without having Node running.
Assuming there is some good way to test Node.js, can this be hooked into build scripts via command-line runners etc.? We will want everything automated and isolated wherever possible.
I appreciate that this is more like five smaller questions rolled up into one bigger one, but as the underlying question is how to achieve good design in a large Node.js application, I was hoping it would still be useful to a lot of people who, like myself, are coming from larger enterprise applications to Node.js-style apps.
I recently switched from ASP MVC to Node.js and highly recommend switching.
I can't give you all the information you're looking for, but I highly recommend Sequelize as an ORM and mocha (with expect.js and sinon) for your tests. Sequelize is adding transaction support in the next version, 1.7.0. I don't like QueryChainer since each of its elements is executed separately.
I got frustrated with Jasmine as a test framework, which is missing an afterAll method for shutting down the Express server after acceptance tests. Mocha is developed by the author of Express.
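For a flavour of that stack, here is a minimal mocha test using expect.js and sinon; the greeter/repository names are made up and simply mirror the kind of handler factories discussed in the question:

```js
var expect = require('expect.js');
var sinon = require('sinon');

// Unit under test: a function that builds a greeting from a repository lookup.
function makeGreeter(userRepository) {
  return function greet(id, callback) {
    userRepository.findById(id, function (err, user) {
      if (err) return callback(err);
      callback(null, 'Hello, ' + user.name);
    });
  };
}

describe('greeter', function () {
  it('greets the user returned by the repository', function (done) {
    var findById = sinon.stub().yields(null, { name: 'Alice' });
    var greet = makeGreeter({ findById: findById });

    greet(42, function (err, message) {
      expect(err).to.be(null);
      expect(message).to.be('Hello, Alice');
      expect(findById.calledWith(42)).to.be(true);
      done();
    });
  });
});
```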
If you want to have an Express server load in plugins, you could use a singleton pattern like this: https://github.com/JeyDotC/articles/blob/master/EXPRESS%20WITH%20SEQUELIZE.md
You will probably also really like https://github.com/visionmedia/express-resource, which provides a RESTful interface for accessing your models. For validation, I think you'll be happy with https://github.com/ctavan/express-validator
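Note that express-validator's API has changed considerably since this answer was written; with a recent version (v6+) a route-level validation chain looks roughly like this (the route and fields are made up):

```js
const express = require('express');
const { body, validationResult } = require('express-validator');

const app = express();
app.use(express.json());

app.post(
  '/user',
  body('email').isEmail(),     // declarative checks run as middleware
  body('name').trim().notEmpty(),
  (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    res.status(201).json({ ok: true });
  }
);

app.listen(3000);
```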
Why would you need to test Node modules without using Node? It's considered standard to call a test script from a Makefile, and you can add a pre-commit hook to git to run the tests before each commit.

Rails: Not ember, not JS responses, but something in-between

I am developing a standard Rails application, and so far I haven't used any AJAX, just good ol' HTML. My plan is to iteratively add "remote" links and all that kind of stuff, plus support for JS responses, because although I know that generating JS server-side is very, very evil, I also find it handy, easy and fast; it makes the application snappy enough, and i18n comes out of the box.
Using a pure JSON approach would be lighter, but it needs lots of client-side coding.
Now imagine that in this application users have a mailbox, and since the idea is that they will be able to do most or even all of the actions without reloading the page, the mailbox counter will never change unless they refresh the page manually.
So, here comes the question: Which is the best way to handle this?
I thought about using Ember (for data binding) and sharing views with Rails via some sort of Handlebars implementation for Ruby. That would be quite awesome, but not very transparent for the developer (me). Although I guess I would only need to write the Handlebars views that Ember uses, and the rest could stay in their original format, no?
Another option might be to use some sort of event system (EventSource maybe?), go with the handy JS views approach, and listen to those events. I guess they should be JSON objects, and the client must be coded to handle them. This seems a bit cumbersome, and I would need a solution that works on Heroku (Faye?), which is where my app is hosted. Any hints?
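For reference, the client side of that EventSource option can be quite small. A sketch - the /events endpoint and the payload shape are made up, and the server still needs something like Faye or a streaming Rails endpoint behind it:

```js
// Listen for server-sent events and update the mailbox counter in place.
var source = new EventSource('/events');

source.addEventListener('message', function (event) {
  var payload = JSON.parse(event.data); // e.g. {"unread_count": 3}
  var counter = document.getElementById('mailbox-counter');
  if (counter && typeof payload.unread_count === 'number') {
    counter.textContent = payload.unread_count;
  }
});

source.addEventListener('error', function () {
  // EventSource reconnects automatically; just log for visibility.
  console.warn('SSE connection lost, retrying...');
});
```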
I think the Ember approach is the more robust one, but it seems quite complex as well, and I don't want to repeat myself on the server and the client.
EDIT:
I have seen this, which is more or less option #2.
One of the advantages of using a JavaScript framework is that the whole application can be concatenated and compressed into one JavaScript file. Given that modern browsers aggressively cache JavaScript, the browser would no longer need to request those assets after the initial page load.
Another advantage of using a JavaScript framework is that it requires you to be a consumer of your own API. Fleshing out the application's API for your own consumption might lead to less work in the future if there is a possibility of mobile applications or third parties having access to it.
If you do not need your application to respond to every request with an equivalent HTML response, I think a compelling case could be made for using a JavaScript framework.
Many of those benefits might be lost if your application needs to respond to every request with an equivalent HTML template. The Ember core team has been fairly vocal in its opposition to supporting this style of progressive enhancement. Considering the tools for using a JavaScript framework in this way are relatively unstable and immature, I might be prone to using option 2 to accomplish this.

Decision about web application architecture

I am facing a decision about the web application architecture I am going to work on.
We are a small team and I will actually be working on it alone (everybody else is working on something else).
This application will consist of a front end built on the ExtJS library, and it will use the model "load page, build GUI and never refresh".
On the web "desktop" there will be a lot of data windows, map views (using OpenLayers + GeoExt) and other stuff.
The GUI should be flexible and allow every user to modify (and persist) the layout to fit his/her needs.
It should be possible to divide the application into modules/parts and then let users in specific groups use only specific modules. In other words, each group of users can have a different GUI available on the web "desktop".
Questions are:
First of all, is this approach good?
There will be a lot of AJAX calls from clients; maybe this could be a problem.
How to handle code complexity on the client side?
So far I have decided to use the dojo.require / dojo.provide feature and divide the client-side code into modules (for production they will be put together using the Dojo build system).
I am also thinking about using some kind of IoC container on the client side, but I'm not sure which one yet. It is very likely that I will write one myself; it should not be difficult in a dynamic language like JavaScript (a rough sketch is below).
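For what it's worth, a registry-style container really can be tiny in JavaScript. A minimal sketch (the names are made up, and it ignores lifecycles, scopes and circular dependencies):

```js
// A very small IoC/service registry: factories are registered by name,
// resolved lazily, and cached as singletons.
function Container() {
  this._factories = {};
  this._instances = {};
}

Container.prototype.register = function (name, factory) {
  this._factories[name] = factory;
};

Container.prototype.resolve = function (name) {
  if (!(name in this._instances)) {
    var factory = this._factories[name];
    if (!factory) { throw new Error('Unknown component: ' + name); }
    this._instances[name] = factory(this); // a factory may resolve its own deps
  }
  return this._instances[name];
};

// Usage:
var container = new Container();
container.register('userService', function (c) {
  return { load: function (id) { /* AJAX call to the server goes here */ } };
});
container.register('userWindow', function (c) {
  return { service: c.resolve('userService') };
});
var win = container.resolve('userWindow');
```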
How to handle AJAX calls on the server?
Should I use WCF on the server side, or just an ordinary ashx handler?
How to handle code complexity on the server side?
I want to use Spring.NET; maybe this approach could help with the modularity problem.
Data access - here I am pretty sure what to use:
For the DAL classes I will use NHibernate, then compose them with the business classes using Spring.NET.
I would really appreciate some advice about which way to go.
I know about a lot of technologies, but I have used only a small part of them.
I don't have time to explore all of them and still be confident in the decision.
We do this type of single-page interface where I work, on a pretty large scale, for our clients. (Our site is not an internet site.)
This seems to work pretty well for us. The more JS you have, the more difficult it gets to maintain, so have as many automated JS tests as you can and try to break up your JS logic in an MVC fashion. Ext 4.0 is supposed to make this much easier.
Ext 4.0 has this built in if you are trying to limit the code you bring down. If you have the same users day after day, then I think it is best to just bring all the source down (compressed and minified) and cache it.
We've found ASMX to work really well. I have nothing against WCF, but last time I looked it seemed like more trouble than it was worth. I know they have made many improvements recently. ASMX just works, though (with a few request header changes and managing the "d" property on the client side - see the sketch below).
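To illustrate that "d" detail: ASP.NET script services expect a JSON content type and wrap the JSON result in a d property, so the raw client call looks roughly like this (the service URL and payload are made up):

```js
// Call an ASMX script service and unwrap the ASP.NET "d" envelope.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/Services/Mail.asmx/GetUnreadCount', true);
xhr.setRequestHeader('Content-Type', 'application/json; charset=utf-8');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var envelope = JSON.parse(xhr.responseText); // e.g. {"d": 5}
    var unreadCount = envelope.d;                // the actual result
    console.log('Unread:', unreadCount);
  }
};
xhr.send(JSON.stringify({})); // method parameters go here as JSON
```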
Our server-side data access layer is pretty complex, but the interface for the AJAX calls is pretty simple. You have not really given enough info to answer this part. I would start as simple as possible and refactor often.
We are also using NHibernate. It works fine for us. We have built a DDD model around it. It can take a lot of work to get that right, though (I am not sure we have it right even after months of working at it).
If I were you I'd start with just ExtJS, your web service technology and NHibernate.
I would recommend ASP.NET MVC 3 with Razor: instead of a lot of JavaScript and calls to services, you can just make AJAX calls to an action in a controller, which will let you have more maintainable code and use an IoC container like Ninject. I'd also use EF instead of NHibernate.
But it's your decision.
I would look into using a tool like Google Closure Compiler, especially if you're dealing with a very large project. I don't have too much experience with ExtJS, but large projects in JavaScript are hard, and something like Closure Compiler tends to make them easier.
