I'm creating a single-page web application that needs to be 100% RESTful (first time doing this), and I'm having trouble deciding how to render the UI for multiple roles using React.
There's going to be a regular user and an admin. After login/authentication, the server redirects to the main application page, which has a menu. The thing is, the admin menu should have an extra option, so I'm wondering how I should handle this.
Should I have two different files for the redirect (user.html / admin.html)? The downside would be that I will probably have a bit of duplicated code, because one of the bundles will have everything the other one has, plus an extra component.
Or maybe a component that defaults to hidden and only shows itself for an admin? The downside would be that a user with some JS knowledge could read the code and see what an admin could do (of course there would be authentication on the server to prevent anything from actually happening).
I was also thinking of getting the menu list as JSON, and when the extra option is clicked, fetching the script with the components that should appear and then somehow doing a ReactDOM.render with the result (I don't even know if that's possible; maybe I'm overthinking it). I've also heard about React Router but couldn't find enough information; maybe that's what I need?
Any ideas/examples on how this is properly done would be really helpful. Thanks in advance.
Definitely don't duplicate your code. First I'd ask: is it really a problem if users can see what an admin is capable of? I wouldn't personally worry about that.
React Router handles client-side routing, and you definitely want client-side routing if you're going for a proper React app. Essentially, you only use the server for the initial app route and then for API calls.
It is certainly possible to pass back arguments to feed into the button component. Your props could contain a 'buttonList' property with all the arguments necessary to render the button. But that's only feasible as obfuscation if the button's operation is simple and that's as far as the admin functionality goes. Writing a generic component that takes its specifics as arguments, which can certainly be passed in from the server, is definitely the React way to do this.
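As a rough sketch of that idea (the endpoint name, item fields, and container id below are all invented for illustration), the menu component can simply render whatever list the server returns, so the admin's JSON just contains one extra entry:

```javascript
// assumes React and ReactDOM are loaded and JSX is compiled by your build step
function Menu(props) {
    return (
        <ul>
            {props.items.map(function (item) {
                return <li key={item.id}><a href={item.href}>{item.label}</a></li>;
            })}
        </ul>
    );
}

// the server decides which items the current user gets; the component never changes
fetch('/api/menu')                                        // endpoint name is an assumption
    .then(function (res) { return res.json(); })
    .then(function (items) {
        ReactDOM.render(<Menu items={items} />, document.getElementById('menu'));
    });
```

With this shape, the admin-only menu entry never appears in a regular user's JSON, although any admin components compiled into the same bundle would still be visible to a determined reader.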
What exactly you need to hide from users and how important that is determines if that will truly be enough, however.
This is a generic question about paradigms, and I apologize if this is an inappropriate place to ask. Polite recommendations on the correct place to ask this will be appreciated :)
I'm working for a company that has a separate codebase for each of its websites. I've been asked to take a sizable piece of functionality out of one codebase and put it into an external library, so that multiple codebases can make use of it.
The problem is that the code is tightly coupled to the codebase it was built in, and I'm having a difficult time extracting it. I've approached this problem from multiple angles and restarted from scratch each time. Each time, I start running into complexities, and it feels like I'm approaching the problem the wrong way. I was wondering if anyone else has had experience doing this, or if there is a recommended way to proceed?
Here's what I've tried:
I copied the relevant files into a new project, carefully replacing each reference to the old codebase with vanilla JavaScript. This has been a laborious process, and I keep running into issues I can't solve.
I placed a very basic HTML file in the old codebase, along with a blank JavaScript file. I've been cutting and pasting functions one at a time into that JavaScript file and calling them from both the old codebase and the basic HTML file.
I created another new project and copied and pasted functions into it one at a time.
Each approach has presented me with its own challenges, but I can't get around the fact that the original code is so tightly coupled to the original codebase that progress is very slow, and I'm beginning to question whether any of the code is salvageable.
The old code may not be salvageable, and it's more than reasonable to reach a point where you go back and say so.
In cases such as these, where nearly all of the old code is unsalvageable but something new needs not only to take over for it but to quickly be usable by old and new codebases alike, my typical goal is to refactor the code into models, services, and components (less MVC and more 'data, how you get and change data, and how you view and interact with data').
In cases where you are building something to replicate the old but get to write it from scratch, I treat it like it's brand new and start from the interfaces first. By knowing what the outer edges should look like, keeping the internal code clean, and leaning on DI (the principle, not any wrapper in particular), I build the system I think I should be able to have, so that new projects/products can happily integrate with the right thing.
...For projects which need to have a product revamped inside of a busted old system, I take nearly the same tack: I design the interface that I want, and I make sure that everything is DI-friendly (this becomes more important here). Then I build a facade that looks exactly like how the old bustedness is called and used. Inside of that facade, I instantiate the sane system, transform whatever the old, awful data points were into our new models, do whatever it is my system needs to do, and on the way out of the system transform our awesome new models into the terrifying results that the old system was responsible for producing.
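A rough sketch of that facade shape (every name and data field below is invented for illustration, not taken from any real system):

```javascript
// Old callers keep calling an old-looking function; internally, the work is done
// by the new, cleanly separated system, with transformers at the boundary.

// the "sane system": a small service with its dependency injected
function ReportService(connector) { this.connector = connector; }
ReportService.prototype.buildReport = function (request) {
    var rows = this.connector.fetchRows(request.customerId);
    return { customerId: request.customerId, rowCount: rows.length };
};

// transformers between the old world and the new models
function toNewRequest(legacyInput) { return { customerId: legacyInput.CUST_ID }; }
function toLegacyResult(result)    { return { CUST_ID: result.customerId, ROW_COUNT: result.rowCount }; }

// the facade: same signature and output shape the old bustedness expects
function legacyGetCustomerReport(legacyInput, connector) {
    var service = new ReportService(connector);                // DI: swap the connector later
    var result  = service.buildReport(toNewRequest(legacyInput));
    return toLegacyResult(result);
}
```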
The latest such thing is a new platform which hosts new APIs.
The APIs, however, talk to awful, old, stateful, session-based web-services, which make horizontal-scaling absolutely impossible (not what you want to hear, when your goal is to distribute the new platform on Node, on AWS).
The solution was to build the APIs exactly as we expect them; get the interface to look as nice and be as useful as possible, while serving the actual needs of the API clients.
Then, we made sure that the modules which provided the APIs used DI for the service that acts as a connector to the old back-end.
That's so that we can simply swap out that service, when it comes time to connect to a better implementation of the system.
That service, however, needs transformers.
It needs one transformer to convert our awesome new request objects into the scary old ball of mud that just kept growing.
Then it needs another transformer to turn the ugly old output data into the new models that our whole system uses.
Those transformers don't necessarily need to be injected into the service, because their implementation details are tied pretty tightly to the place that they're calling, and any update to the service, or any new service called will require transformer work for that specific service's implementation details.
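For example, the connector service and its transformers might look something like this sketch (the endpoint, field names, and injected HTTP client are all assumptions):

```javascript
// the injected httpClient is assumed to expose post(url, body) and return a promise
function LegacyBackendService(httpClient) { this.http = httpClient; }

LegacyBackendService.prototype.getOrders = function (request) {
    var legacyQuery = toLegacyOrderQuery(request);              // new model -> old ball of mud
    return this.http.post('/legacy/OrderLookup.asmx', legacyQuery)
        .then(fromLegacyOrders);                                // old data -> new models
};

// these live next to the service because they only make sense for this particular back-end
function toLegacyOrderQuery(request) { return { CustID: request.customerId, Max: request.limit }; }
function fromLegacyOrders(response) {
    return (response.Orders || []).map(function (o) {
        return { id: o.OrderID, total: o.TotalAmt };
    });
}
```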
Then there are problems on the front-end side, where the communication layer used to take too much for granted when talking to the server.
We now have transformers on the client side (actually, we wrote client-side services), used at the last possible second to convert the old way of doing things into the new form.
Any magic global data that was randomly grabbed in the middle of a process was factored into the service, the transformers, or the API in general, if it served a specific and reusable enough purpose.
Any of those magically grabbed pieces of information are now explicitly passed in. Some are client-only, and thus are either config data for instantiation, or are parameters for particular methods on services.
Session data is now explicitly passed back from the client, in the form of tokens/ids on each request that requires them (for now).
So the new platform stays 100% stateless (and thus scales wonderfully, from that aspect).
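In client code, that amounts to something like the sketch below (jQuery is used only for brevity; the endpoint and header scheme are illustrative, not from the system described above):

```javascript
// every call carries the caller's token, so the API layer never needs a server-side session
function apiGet(path, token) {
    return $.ajax({
        url: '/api' + path,
        headers: { 'Authorization': 'Bearer ' + token }   // the header name is an assumption
    });
}

// usage: apiGet('/orders', currentUserToken).then(renderOrders);
```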
So long as all of that magical data gets pulled out of the internals, and passed through, the system can keep being refactored, without too much worry.
As soon as state and state-management exist on the inside of your system, it starts getting harder to work with, and harder to refactor (but you already know that).
Doing a refactor of a product which never leaves the page (i.e., involves no APIs/services, or at least none that are tightly coupled to your front-end) isn't really much different.
Remove global state by explicitly forcing it to be passed into your system (at build time, at call time, whenever fits the data's purpose).
If there are async race conditions with moving parts that touch too many things, see if you can resolve them with promises, to get you out of nested callback hell.
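A tiny sketch of that shift (everything here is invented for the example): three dependent async steps expressed as a promise chain instead of nested callbacks, so the ordering is explicit and errors funnel to one place.

```javascript
// stand-ins for real asynchronous calls; each returns a promise
function getUser(id)        { return Promise.resolve({ id: id, name: 'demo' }); }
function getAccounts(user)  { return Promise.resolve([{ owner: user.id, balance: 10 }, { owner: user.id, balance: 15 }]); }
function sumBalances(accts) { return accts.reduce(function (sum, a) { return sum + a.balance; }, 0); }

function loadDashboard(userId) {
    return getUser(userId)
        .then(getAccounts)      // each .then waits on the previous step
        .then(sumBalances)
        .catch(function (err) { console.error('dashboard failed', err); });
}

loadDashboard(42).then(function (total) { console.log('total balance:', total); });
```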
My team is now largely using set-based programming (.map, .filter, .reduce, over arrays) and functional programming, in general, to simplify much of the code we look at and write, as each new function may only be 3-5 lines long (some being one-liners).
So our services will tend to be structured in an OOP sort of way, but as much as possible, will remain pure (no outer state modified by/for function calls), and the internals of those calls will typically look much more like chained or composed functional programming.
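For flavour, a one-off example of that style (the data is made up):

```javascript
var invoices = [
    { amount: 120, paid: true  },
    { amount:  80, paid: false },
    { amount:  45, paid: false }
];

// small, pure steps chained together instead of an imperative loop with mutable totals
var overdueTotal = invoices
    .filter(function (inv) { return !inv.paid; })                   // keep the unpaid ones
    .map(function (inv) { return inv.amount; })                     // project to the amounts
    .reduce(function (sum, amount) { return sum + amount; }, 0);    // fold into one number

console.log(overdueTotal); // 125
```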
This has less to do with the overall refactor, and more to do with the micro refactors, as we build our systems.
For the macro-level, it's really your interface, and the facade you wrap the old stuff in, and the removal of all global state (which functional helps with) which make the difference.
The other alternative, of course, is to copy and paste the whole file/page, and start erasing things that you know aren't going to break, until you get to the things that might break, and continue from there. It's not pretty, it's not reusable, but I've been forced to do it a few times in my life, and regretted it every time.
I'm taking my first steps writing a QML/JavaScript application for Ubuntu Touch using Qt Creator.
Currently I don't think there's much documentation on this topic.
Can anyone point me to a good/clean way to work with multiple threads in this situation?
QML is not really designed with the intention of working within more than one thread. The original intention was that any threading should be handled at the C++ layer. However, if you really do need access to threads to perform things like calculations, and you are unable or unwilling to write code at the C++ level, there is the WorkerScript QML element, which may be able to provide the functionality you want.
https://qt-project.org/doc/qt-5.0/qtqml/qml-qtquick2-workerscript.html
It's worth noting that, depending on what you are trying to do, this may or may not be entirely appropriate to use.
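For a rough idea of what that looks like, here is a minimal worker script (the file name, message fields, and calculation are invented): this is the separate .js file that a WorkerScript element would point its source property at, and it runs off the UI thread.

```javascript
// compute.js — loaded by: WorkerScript { id: worker; source: "compute.js" } in QML
WorkerScript.onMessage = function (message) {
    var total = 0;
    for (var i = 0; i < message.count; i++) {     // stand-in for a heavy calculation
        total += i;
    }
    WorkerScript.sendMessage({ result: total });  // delivered to the element's onMessage handler
};
```

On the QML side you would send work with worker.sendMessage({ count: 1000000 }) and read the reply in the element's onMessage handler; note that the worker script cannot touch QML items directly, only the data passed in messages.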
When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than one HTML page that does everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here, and some of my own research, if you wanted to do a single-page, fully JS-Powered site, some useful tools would seem to include:
jQuery plug-ins:
jQuery History:
http://balupton.com/projects/jquery-history
jQuery Address:
http://plugins.jquery.com/project/jquery-address
jQuery Pagination:
http://plugins.jquery.com/project/pagination
Frameworks:
Sproutcore
http://www.sproutcore.com/
Cappuccino
http://cappuccino.org/
Possibly, JMVC:
http://www.javascriptmvc.com/
Page-based applications provide:
ability to work on any browser or device
simpler programming model
They also provide the following (although these are solvable by many JS frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript is going to make it complicated for search engines to crawl all the content of your website, and thus they won't fully index it. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure whether all search engines support this yet. On top of that, it's a little more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages yourself or use some sort of framework or CMS to generate them for you, is that the different sections of your website get URLs that are unique to them. E.g., an about section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinckley on the subject of Jodie Foster.
The idea really works. It produces graceful, responsive sites at very low operational cost. My estimate is that the cost for bandwidth, CPU, and so on drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor (the URL fragment). Users simply expect that the URL represents the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you.
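A minimal sketch with the Address plug-in (the section markup and the showSection() helper are invented for the example):

```javascript
// react to the URL fragment changing (back/forward, bookmark, reload, manual edits)
$.address.change(function (event) {
    showSection(event.value || '/home');        // event.value is the part after the '#'
});

// navigate by writing the fragment, never by touching the DOM directly
function navigate(path) {
    $.address.value(path);                      // updates the URL and fires the change handler
}

function showSection(path) {
    $('.section').hide();
    $('.section[data-path="' + path + '"]').show();
}
```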
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules don't usually directly impact your layout rules, and multiple files make it easier to edit what needs editing without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
...it got messy.
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to be in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique, I think the 'one-page' approach is a very easy way to open up some very interesting possibilities (e.g. widgets that maintain state across application sections). But it also opens up many... many potential problem areas, including:
Flooding the global namespace (if you don't already have your own namespace... make one; see the sketch after this list)
Code organization can easily get... out of hand
Context - It's very easy to
I'm sure there are more...
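On the first point, the usual fix is a single application namespace with private internals, something like this sketch (all names invented):

```javascript
var MyApp = window.MyApp || {};        // the only global the application adds

MyApp.popups = (function () {
    var openPopups = [];               // private state, not reachable from window
    function open(id)  { openPopups.push(id); }
    function close(id) { openPopups = openPopups.filter(function (p) { return p !== id; }); }
    function count()   { return openPopups.length; }
    return { open: open, close: close, count: count };   // the only public surface
}());
```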
In short, I would urge you to stay away from heavy JavaScript dependency, for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favour of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript.
The idea is to...
Develop a well-formatted, fully functional application without any JavaScript
Style it
Wrap the whole thing with JavaScript
Using Progressive Enhancement, one can develop an application that delivers the best experience possible for the user.
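As a small sketch of that third step (the form selector, action URL, and container id are invented), the form would still post normally with JavaScript disabled; the script only upgrades it to an in-page AJAX submit when it can run:

```javascript
$(function () {
    $('form.comments').on('submit', function (e) {
        e.preventDefault();                                         // take over only when JS is available
        $.post($(this).attr('action'), $(this).serialize())
            .done(function (html) { $('#comments').html(html); })   // swap in the updated fragment
            .fail(function () { window.alert('Could not post your comment, please try again.'); });
    });
});
```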
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of this boring text)
That said, SPI != doing everything through JavaScript.
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very complex and problematic. The best approach, in my view, is "hybrid programming" for SPI: a mix of server-centric code for big state management and client-centric code (a.k.a. JavaScript by hand) for special effects.
Doing everything on a single page using ajax everywhere would break the browser's history/back button functionality and be annoying to the user.
I utterly despise JS-only sites where it is not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: in that case JS not only improves the experience, it is essential. But some parts of Google Help have been JS-only, and yet the JS adds nothing to the experience; it is only showing static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters, but I don't want anyone to get rewarded for pushing annoying flashy things in my face; if tactless advertisers go out of business, I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function and you shouldn't need to reinvent the wheel by redrawing the previous page when a back button is clicked. To make matters worse going back a page shouldn't require re-rendering. I guess I am a student of If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls, no expiries, content cannot be adjusted for requested language nor location, no meaningful "page not found" nor "unavailable" responses. You could write error handling routines within your uber-page that respond to failed AJAX fetches but that is more complexity and reinvention, it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited: the JS-heavy section is private, so it doesn't need to worry about search engines, and only repeat visitors will be using it, so they have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless; most of the content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should have been polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing between them; loading external stylesheets via JavaScript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-oriented application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it as it was disrupting the team's work flow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users is probably smaller than the number with NoScript or a mobile connection, it is inexcusable to ignore accessibility, and in some countries it is even illegal; see the "Disability Discrimination Act" (1999) and "Equality Act" (2010).
As mentioned in other answers, "Progressive Enhancement", née "Unobtrusive JavaScript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle, and there are times when it is valid), I look forward to implementing the aforementioned AJAX crawling and hope it becomes more standardised in future.
The previous programmer left the website in a pretty unusable state, and I am having difficulty modifying anything. I am new to web design, so I don't know whether my skills are a mismatch for this kind of job, or whether it is normal in the real industry to have websites like this.
The Home page includes three frames
Each of these frames has its own JavaScript functions (between the <head> tags), and also calls other common JavaScript functions (using <script src=...>).
Excessive usage of document.all; in fact, elements are referenced or accessed only through document.all.
Excessive usage of XSLT and web services. Though I know that using web services is generally considered a good design choice, is there any other way I can consume these services other than using XSLT? For example, the menu is created using the data returned by a web method.
Every <div>, <td>, and every other element has an id, and these ids are manipulated by the JavaScript functions; then the appropriate web service and XSLT files are loaded based on these.
From the security perspective, he used T-SQL's FOR XML AUTO for most of the data returned by the web services. Is it a good choice, from a security standpoint, to expose table names and column names to the end user?
I am quite confused about the state of the application itself. Should I learn the intricacies he built and continue working on it, or should I start rewriting everything? What perplexes me most is the lack of alternatives; is this the common way web projects are handled in the real world, or is it an exception?
Any suggestions, any pointers are welcome. Thanks
No, it is not acceptable in this industry that people keep writing un-maintainable code.
My advice to you is to go up the chain and convince everyone that this needs to be rewritten. If they question you, find an external consultant with relevant web development skills to review the application (for 1 day).
Keeping this website as-is because it 'works' is like keeping a working Ford Model T on today's highways: very dangerous. Security and maintenance costs are likely the most persuasive topics to convince anyone against keeping this site as-is.
Next, get yourself trained; it will pay off if you can rewrite this application knowing the basics. Today's technology (ASP.NET MVC) allows you to implement core business value faster than trying to maintain this unconventionally written app.
Tough spot for an inexperienced developer (or any developer) to be left in. I think you have a few hard weeks ahead of you where you really need to read up on the technologies involved to get a better understanding of them and of what is best practice. You will also need to really dig down into the existing code to understand how it all hangs together.
When you've done all that, you really need to think about your options. Usually re-writing something from scratch (especially if it actually works) is a bad idea. This obviously depends on the size of the project; for smaller projects with only a couple of thousand lines of code it might be OK. When looking at someone else's code it is also easy to overlook that all the weird shit going on could actually be fixes for valid requirements. Things often start out looking neat, but then the real world comes visiting.
You will need to present the business with time estimates for re-writing to see if that is an option at all, but I'm guessing you will need to accept the way things are and do your best with what you have. Maybe you can gradually improve things.
I would recommend moving the project to MVC3 and rewriting the XSLT portions to function using views and/or partial views with MVC. The Razor model binding syntax is very clean and should be able to quickly cleave out the dirty XSLT code and be left with just the model properties you would need.
I would then have those web services invoked from MVC server-side, deserialize the results into real objects (or even just use straight XQuery or JSON traversal to pull out what you need for your model), and bind those to your views.
This could be a rather gargantuan technology leap for your company, though. Some places have an aversion to change.
I'd guess this was written 6-7 years ago and hacked on since then. Every project accumulates a certain amount of bubble gum and duct tape; sounds like this one's got it bad. I suggest breaking the work up into bite-size chunks. I assume the site is actually working right now? So you don't want to break anything; the "business" often thinks "it was working just fine when the last guy was here."
Get a feel for your biggest pain points in maintaining the project, and for what you'll get the biggest wins from fixing. A rewrite is great if you have the time and support. But if it's a complex site, there's a lot to be said for a mature application: mature in the sense that it fulfills the business needs, not that it's good code.
Also, working on small parts will get you better acquainted with the project and the business needs, so when you start the rewrite you'll have a better perspective.