Recovering identity in Blazor application - javascript

I have a Blazor Server application with a custom identity provider.
Whenever I need to recover the identity, I do so from session storage using JavaScript.
This all works well.
But now I have added an upload control and wrapped it in <CascadingAuthenticationState> to see whether the identity was still known at that point, which it is.
Once I try to upload a file it makes a call to a controller using routing and that's where it goes haywire.
Once I enter the constructor of the controller the identity is null.
All attempts to recover it using JavaScript result in:
"JavaScript interop calls cannot be issued at this time. This is because the component is being statically rendered."
While I am familiar with this error message, I am puzzled why this happens because the render mode of the application is Server and not ServerPrerendered.
Is it because I am in a post event?
I've also tried several ways of passing the HttpContext around, but that is also null.
Basically what I'm trying to accomplish is that I need my token there to call an API, but I've lost all track of who I am at that point.
I can think of ugly solutions like passing it in the URL, but that's not safe at all.
Or storing it in a singleton with a lock or something like that, but those are all fugly solutions.
Can anyone kick me in the right direction? I feel like I'm running into a very large, very dark forest here.

The vendor of the component added a new event to add custom headers, so now my problem is solved.
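For anyone who hits the same wall: the underlying idea is to put the token on the upload request itself, so the controller can authenticate the request from its headers instead of relying on JS interop inside the circuit. A rough sketch only (the storage key and endpoint are placeholders, not the vendor's API; the vendor's event simply gives you a hook to set such a header):

// Sketch: attach the token we already keep in session storage to the upload
// POST so the controller can read it from the Authorization header.
// 'authToken' and '/api/upload' are placeholder names.
async function uploadWithToken(file) {
    const token = sessionStorage.getItem('authToken');

    const form = new FormData();
    form.append('file', file);

    const response = await fetch('/api/upload', {
        method: 'POST',
        headers: { 'Authorization': 'Bearer ' + token },
        body: form
    });

    if (!response.ok) {
        throw new Error('Upload failed with status ' + response.status);
    }
}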

Related

Angular JS and Symfony2

I am currently working on a project using Symfony2 and am seeking some advice on it.
I am thinking of a hybrid application, in two different ways: a) the login page uses a traditional form with a CSRF token and lets Symfony2 handle it; b) all inner pages (which are essentially modules) are loaded without AJAX, but the activities inside them behave like a single-page app.
For example, I have an employee module. When the user clicks on it, it is loaded entirely from the server (all the templates, forms, etc.); each activity under the employee module, such as add/update/delete/view, is then loaded through AJAX with the response returned as JSON, i.e. handled by AngularJS.
I am currently thinking of using FOSUserBundle to return HTML on the initial request and then, based on the request type Accept: application/json, return JSON (remember the add/update/delete/view part?).
My question: is it a better idea to use Angular partials (HTML files) or Symfony2 Twig? Or would it be better to use AngularJS but have those partials rendered by Symfony2 Twig? (I am thinking of forms here; I would want to validate them on both the client and the server side.)
Has anyone been through a similar problem? If so, what approach did you use to develop a hybrid application using AngularJS and Symfony2 or any other framework? Any relevant ideas are appreciated.
I was in the same situation you are in: an AngularJS + Symfony2 project, a REST API, login using FOSUserBundle, etc.
Every approach has pros and cons, so there is no single right way; I'm just going to describe exactly what I did.
I chose AngularJS native templates, no CSRF validation, a base template built with Twig, server-side validation, the FOSJsRoutingBundle, and some helpers (buildResponse and a BaseController).
Why native templates?
Using verbatim solves the conflict between Twig's and Angular's {{ }} delimiters, but we end up with more complex logic in our templates.
We would also have a less scalable application. Every form template would trigger a request to the Symfony application, while one of the big advantages of AngularJS is being able to load controllers, templates, etc. from a storage service like S3 or a CDN like CloudFront. With no server-side processing, the templates load much faster; even with caching, Twig is slower, obviously.
And in my experience, Twig and AngularJS templates are really complex to manage together. I started out writing them together and it was painful to maintain.
What I did?
I created static templates on the front-end with the same field names. It's not ideal: we have to update the templates manually every time we change a form, but it was the best way I found. Since the field names match, there is no trouble lining up the model names in the Angular controllers.
And if you are building the software as a service, you will need to do it anyway: you are not going to load the form templates from the Symfony application in a mobile app, right?
Why no CSRF validation?
We don't use CSRF validation in a REST API, obviously. But if you want it, you have to make an extra request every time you load a form just to fetch the CSRF token. That's really, really bad: on top of the CRUD routes we would need a "CSRF CRUD" with four more routes. That doesn't make any sense.
What I did?
I disabled CSRF protection in the forms.
Base template?!
Yep. A base template, built with Twig, that is returned for any route in the application.
This helps avoid errors when users go directly to some application URL, which happens when you use HTML5-style AngularJS URLs. Simple as that.
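The client-side counterpart looks roughly like this (module name, route and template paths are placeholders); the point is that with HTML5 mode enabled a deep link such as /panel/products/42 actually reaches Symfony, so Symfony has to answer every such URL with the same Twig base template that bootstraps the Angular app:

// Sketch of the AngularJS routing config that makes the catch-all base
// template necessary. All names and paths here are placeholders.
var app = angular.module('app', ['ngRoute']);

app.config(function ($routeProvider, $locationProvider) {
    // HTML5-style URLs: no hash prefix, so direct hits go to the server.
    $locationProvider.html5Mode(true);

    $routeProvider
        .when('/panel/products/:product_id', {
            templateUrl: '/templates/products/show.html', // static "native" template
            controller: 'ProductShowCtrl'
        })
        .otherwise({ redirectTo: '/' });
});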
Server-side validation, why?
If we do the validation in Angular, we have to do the same validation on the server side as well, so there are two validation code bases to maintain. That is painful: for every change in a form we have to change the validation in the front-end, the validation in the back-end and the static Angular form as well. Really, really painful.
What I did?
I do server-side validation only, using the Symfony constraints. For every request, the application validates the form and checks whether any error was found; if so, it takes the first one and sends it back in the response.
In AngularJS, the application then checks whether the response carries an error. So we have a single process, used across the whole application, for every form request.
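On the Angular side it looks roughly like this (module wiring, controller and route names are placeholders; Routing.generate comes from the FOSJsRoutingBundle covered below, and the response keys follow the standardized shape described further down):

// Rough sketch of the shared form-request flow: POST the model, let Symfony
// validate it, and read either the first error or the data from the response.
app.controller('ProductFormCtrl', function ($scope, $http) {
    $scope.formData = {};

    $scope.submit = function () {
        $http.post(Routing.generate('panel_products_create'), $scope.formData)
            .then(function (res) {
                var result = res.data;

                if (result.response.status === 'error') {
                    // Server-side validation failed: show the first error.
                    $scope.errorMessage = result.response.message;
                    return;
                }

                // Success: use whatever the route put into the data key.
                $scope.productUrl = result.data.url;
            });
    };
});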
And the routes?
There is another problem: the routes. Hard-coding the URL is not a reliable approach; if we change anything in the URL, that route is gone and the users won't like it.
To fix that, we can use the FOSJsRoutingBundle. With that library we can use the route name directly in the Angular controller and it is expanded to the exact URL of the route. It is fully integrated with Symfony, so parameters work very well.
Instead of using the URL directly, we can do this:
Routing.generate('panel_products_show', {id: $routeParams.product_id});
And voilà! We get the route URL.
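For example, the generated URL can be fed straight into $http inside a controller (same route as above, with product_id taken from the route params):

// Hypothetical usage: fetch a product through the generated Symfony route URL.
$http.get(Routing.generate('panel_products_show', {id: $routeParams.product_id}))
    .then(function (res) {
        $scope.product = res.data;
    });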
That will solve the biggest part of the problems you have. But there are more.
Problem 1 - Form inputs
Symfony forms generally have a prefix, like publish_product, so every field ends up with a name like publish_product[name]. Oh, how much of a problem that was for me.
In Angular, a key like that is not a plain property: you have to access it with bracket notation and a quoted string, which is really bad, because every key has to be written in that format. In AngularJS I was doing things like:
{{ formData['publish_product[name]'] }}
Absolutely stupid.
The best solution was simply to remove the form prefix on the Symfony side, using the createNamedBuilder method instead of just createBuilder and passing null as the first parameter (the name). With that, we don't need the prefix anymore. Now we use:
{{ formData.name }}
So much better.
Problem 2 - Routes hard to maintain
Every request could return anything and I had to repeat a lot of code, which is really hard to maintain. So I created some application-wide rules: standardized responses, a BaseController, and so on.
createNamedBuilder
createNamedBuilder is a verbose method: the same few lines have to be repeated for every form we have.
It's simple to solve. I created a BaseController, extended every controller from it, and added a small helper method that does this.
Now we don't have to repeat those three lines in every route; much better.
Responses
When my application started growing I had a serious problem: all my responses were different, and that was really hard to maintain. For some requests I was using "response", for others "data"; the error messages got lost somewhere in the response; and so on.
So I decided to create a buildResponse helper: I just set a few parameters and get the same shape of result for every route, even GET routes.
The response key carries the status and the message. The status can be error or success, and the message is an optional field that may be blank: for example, a success status with the message "You created the product".
The data key carries any information I need. For example, the user added a product and now needs the link to see it: I put the URL of the new product into data and can easily read it from the AngularJS controller.
notifications is a key specific to my business logic: every action can return a notification to the user.
It doesn't matter which keys you use. The important thing is to have a standardized response, because when your application grows it will be really helpful.
Every route in my controller now follows that same pattern.
Completely standardized. The Scrutinizer code quality tool even says all my routes are duplicates. :D
Having a BaseController and a buildResponse will help you so much. When I started refactoring my code, each route lost about 4-10 lines.
One detail: getFormError returns the first error of the form. Here is my method:
public function getFormError(FormInterface $form)
{
    if ($form->getErrors()->current()) {
        return $form->getErrors()->current()->getMessage();
    }

    return 'errors.unknown';
}
And the parameters of buildResponse are:
1. The status. I take it from a constant in the BaseController; it can change, so I think it's important not to hard-code a string in each route.
2. The translation message. (I use a preg_match to check whether it is in translation-key format, because getFormError already translates the error.)
3. The data parameter (array).
4. The notifications parameter (array).
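On the AngularJS side, a response in that shape can be handled by one shared helper instead of in every controller. A minimal sketch, assuming exactly the keys described above; the factory name and callback layout are mine:

// Hypothetical shared handler for the standardized buildResponse payload.
app.factory('apiResponse', function () {
    return {
        handle: function (result, callbacks) {
            callbacks = callbacks || {};

            // "response" carries the status (success/error) and an optional message.
            if (result.response.status === 'error') {
                (callbacks.error || angular.noop)(result.response.message);
                return;
            }

            // "notifications" is business-specific; forward it when present.
            if (result.notifications && callbacks.notify) {
                callbacks.notify(result.notifications);
            }

            // "data" carries whatever the route handed back (a URL, a record, etc.).
            (callbacks.success || angular.noop)(result.data, result.response.message);
        }
    };
});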
Another problem I'm going to have
The project only supports one language so far. When I start working on a multilingual version I'm going to have another big problem: maintaining two sets of translations, the back-end messages and validations and the text on the front-end. That will probably be a big issue; when I find the best approach, I'll update this answer.
It took me some months to arrive at this approach, with many code refactorings and probably many more in the future, so I hope it helps someone avoid going through the same.
1. If I find a better way to do this, I'll update this answer.
2. I'm not good at writing English, so this answer probably has grammatical errors. Sorry; I'm fixing the ones I spot.

How to guard against scope objects getting changed?

I'm an Angular noob. In an app I have taken over, there is an object in the scope that defines the role of the current user (e.g. user.role = REGULAR).
Is there a way to keep a user from opening Firebug and changing it to user.role = ADMIN?
For example, I have seen code that shows a tab based on a value in the scope, but I'm not sure how to keep a user from changing that value (and gaining access to the tab). Is there a pattern for dealing with this? Does everything access-related need to come directly from a web service / protected remote location?
There is no way to do this. Your design has a fundamental issue: it relies on client-side validation.
You can never, ever trust anything coming from the client. Anything that you truly want validated or authenticated must be checked on the server side, particularly security-related matters.
The most important rule is that once it leaves the server and hits the client, it's out of your control. Assume it's compromised, assume it's not trustworthy, and assume you have to check everything.
In your case, if a user is not an admin, don't even send them the admin options.
Well, you can try to hide the object inside a closure or use Object.freeze in browsers that support it, but there is no getting around the fact that the code is sent to and executed on the client. Even if there were a foolproof way of preventing modification (there isn't), the client could have modified the payload in Fiddler or something before it ever reached the browser.
With that in mind, you cannot trust anything on the client for access/authorization; you must verify it on the server or you'll have security holes.
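For completeness, this is what the Object.freeze mitigation mentioned above looks like, and why it only stops casual tampering rather than a user who edits the response before your script ever runs:

// Freezing makes the object read-only in this JavaScript context, nothing more.
// It does not protect against the user modifying the HTTP response itself or
// rebuilding the object from the browser console.
var user = Object.freeze({ name: 'alice', role: 'REGULAR' });

user.role = 'ADMIN';     // silently ignored (throws a TypeError in strict mode)
console.log(user.role);  // still "REGULAR"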

Syncing Database and Javascript

I'm working on a real-time JavaScript application that requires all changes to a database to be mirrored instantly in JavaScript, and vice versa.
Right now, when changes are made in JavaScript, I make an AJAX call to my API and apply the corresponding changes to the DOM. On the server, the API handles the request and finishes up by sending a push via PubNub to the other current JavaScript users with the change that was made. I also include a sequential changeID so that JavaScript can resync the entire data set if it missed a push. Here is an example of such a push:
{
    "changeID": "2857693",
    "type": "update",
    "table": "users",
    "where": {
        "id": "32"
    },
    "set": {
        "first_name": "Johnny",
        "last_name": "Applesead"
    }
}
When JavaScript receives this change, it updates local storage and makes the corresponding DOM changes based on which table was changed. Please keep in mind that my issue is not with updating the DOM, but with syncing the data from the database to JavaScript both quickly and seamlessly.
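For reference, the client-side handling described above looks roughly like this (localStore, resyncFromAPI and updateDOMFor are my own names for pieces the app already has, not part of PubNub):

// Rough sketch of the push handler: apply changes in sequence, resync on a gap.
var lastChangeID = Number(localStorage.getItem('lastChangeID') || 0);

function onPush(change) {
    var id = Number(change.changeID);

    // A gap in the sequence means a push was missed: resync the whole data set
    // (resyncFromAPI is assumed to refresh the store and lastChangeID itself).
    if (id !== lastChangeID + 1) {
        resyncFromAPI();
        return;
    }

    if (change.type === 'update') {
        localStore.update(change.table, change.where, change.set);
    }

    lastChangeID = id;
    localStorage.setItem('lastChangeID', String(id));
    updateDOMFor(change.table);
}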
Going through this, I can't help but think that it is a terribly complicated solution to something that should be reasonably simple. Am I missing a gotcha? How would you sync multiple JavaScript clients with a MySQL database seamlessly?
Just to update the question a few months later: I ended up sticking with this method and it works quite well.
I know this is an old question, but I've spent a lot of time working on this exact problem, although in a completely different context: I'm creating a PhoneGap app that has to work offline and sync at a later point.
The big revelation for me was that what I really needed was version control between the browser and the server, so that's what I built. It stores data in sets, and keys within those sets, and versions each of them individually. When things go wrong, there is a conflict-resolution callback you can use to resolve them.
I've just put the project on GitHub; its URL is https://github.com/forbesmyester/SyncIt

Is it possible to complete the loop from browser->java->c++->java->browser?

I've got a question about a data flow that is best summarized as browser -> Java -> C++ -> Java -> browser.
I've got the data path from the UI (WaveMaker) down to the hardware working perfectly. The question I have is whether I'm missing something in the connection from the Java service back to WaveMaker.
I'm trying to feed information back to WaveMaker from the hardware. The specifics of shared memory and semaphore signalling are already worked out. Where I'm running into a problem is how to get the data from the Java service back to WaveMaker when WaveMaker hasn't specifically requested it. My plan was to generate events when the Java service returned, but another engineer here insists that it won't work, since there's no direct call from WaveMaker and we don't want to poll.
What I proposed was to call the function after the page loaded, let the blocking occur at the .so level, and handle the returned string when the call comes back; we would then call the function again. That has the serious flaw of blocking interaction with the user interface.
Another option put forward was to use a hidden control, somehow pass it into Java, and fire an event on it from Java, which could then execute a script to update the UI with the hardware response. That keeps the option of using threads alive and possibly resolves the issue. Is there some more elementary way of getting information from Java -> JavaScript -> UI without it having been asked for?

Javascript event loop/message pump for Google Sketchup plugin

I'm working on a plugin for Google SketchUp, written using the Ruby API. The API includes a WebDialog class that can render HTML and move data between the WebDialog and the Ruby side of the plugin code. I'm using this class to build a UI for my plugin.
Data is sent from the WebDialog to the Ruby side asynchronously. Due to subpar documentation I wasn't initially aware of this, and now that I'm a ways into my plugin it has begun to create problems for me. Specifically: when multiple successive calls are made from the WebDialog to the Ruby side, only the last call is executed. So I clearly need to devise some sort of "bridge" that keeps calls from the WebDialog to the Ruby side from getting lost -- which is, I think, basically an "event loop" or "message pump".
My problem is that I don't have a good idea of how to do this. What I'm hoping is that someone can point me to a resource that lays out a framework for how such a system should work -- what sort of checks are needed, the sequence in which they're performed, and so on. I know this can be a terrifically complex task, but I only need something basic: a way of making JavaScript stop when I send a request to Ruby, not proceeding until I get the data I need back, and dealing with any errors that may crop up.
Any help would be very much appreciated!
I've spent a great deal of time with the WebDialog class. I planned to write such a pump, but I found that I could do it differently with more reliable results.
( My WebDialog findings: http://forums.sketchucation.com/viewtopic.php?f=180&t=23445 )
Alternative Method
SketchUp > JavaScript
My alternative method was not to try to push data from the WebDialog to Ruby, but instead to have Ruby pump the WebDialog, because WebDialog.execute_script is synchronous.
I send a command to the WebDialog with a query. The JavaScript processes it and puts the result into a hidden INPUT element, whose content I then fetch with WebDialog.get_element_value.
I wrapped all of this into a wrapper method that processes the return value and converts it into the appropriate Ruby objects: http://www.thomthom.net/software/sketchup/tt_lib2/doc/TT/GUI/Window.html#call_script-instance_method
The outline is:
Make a call ( .execute_script ) to clear the hidden INPUT element
Make the actual call which JS will process and put the return value into the hidden INPUT
Use .get_element_value to fetch the hidden INPUT value
All this is synchronous.
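The JavaScript side of that outline can be as small as this (the element id and function names are my own, not part of the WebDialog API):

// Hypothetical WebDialog-side helpers for the Ruby-driven query pattern:
// Ruby clears the hidden input via execute_script, asks for an answer,
// then reads it back with get_element_value.
function clearBridge() {
    document.getElementById('ruby_bridge').value = '';
}

function answerQuery(query) {
    var result = computeAnswer(query);  // whatever the dialog-side code knows
    document.getElementById('ruby_bridge').value = JSON.stringify(result);
}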
Javascript Pump
Javascript > SketchUp
If you really need to pump information from JS, then I think you need to do something like this:
JS: push messages into a message queue
JS: Send a message to SU that there are messages waiting
SU: When the callback reports new messages, query JS for the next message and repeat until there are no more. This should work, as it would use the same method described earlier.
The concept is to store up your messages and then hand control over to the SketchUp side, which can pump them synchronously, as sketched below.
(Untested theory.)
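A rough sketch of that queue on the JavaScript side (untested, like the idea itself; the skp: callback name has to match whatever the Ruby side registers with add_action_callback, and the hidden input is the same bridge element as above):

// Hypothetical message queue: JS only tells SketchUp that messages are
// waiting; Ruby then pulls them one at a time through the hidden input.
var messageQueue = [];

function pushMessage(msg) {
    messageQueue.push(msg);
    // Notify the Ruby side (the callback name is an assumption).
    window.location = 'skp:messages_waiting@' + messageQueue.length;
}

function popMessage() {
    var msg = messageQueue.shift() || null;
    document.getElementById('ruby_bridge').value = JSON.stringify(msg);
}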
