I'm working on an Angular 5+ front-end that uses google-protobuf JS over a WebSocket to communicate with the backend.
I currently have two objects generated from my .proto files:
a Request object.
a Notification object.
I then made a handler service that receives the messages sent through the WebSocket, but I ran into a big issue: I cannot find a way to parse / deserialize my objects efficiently:
this._socketService.getSocket().onmessage = ((message: Message) => {
  const uiArray = new Uint8Array(message.data);
  this.parseMessage(uiArray);
});
parseMessage(uiArray: Uint8Array) {
  let response = null;

  // DOES NOT WORK
  // response = reqRep.Response.deserializeBinary(uiArray) || notif.BackendStatusNotification.deserializeBinary(uiArray);
  // <==== This is where I need to find a good way to deserialize my objects

  // TEMPORARY
  if (uiArray.byteLength === 56) {
    response = reqRep.Response.deserializeBinary(uiArray);
  } else {
    response = notif.BackendStatusNotification.deserializeBinary(uiArray);
  }

  // Notify different Observables which object has changed, based on their type
  switch (response && response.hasSupplement() && response.getSupplement().array[0]) {
    case 'type.googleapis.com/req.BackendStatusResponse':
      this._responseSubject.next(response);
      break;
    case 'type.googleapis.com/notif.BackendStatusNotification':
      this._notificationSubject.next(response);
      break;
    default:
      console.log('DOESN\'T WORK');
  }
}
I tried using || as shown in the code so I could always deserialize the response, but it doesn't work: if the first deserialization fails, I get a runtime error.
I have a few ideas, and I thought maybe someone could help me:
Either I wrap everything in try/catch blocks to handle every case (which is obviously the worst possible solution).
Or there is something wrong in the way I try to deserialize and it's a dumb mistake. I thought maybe I could use a generic Message.deserialize() from google-protobuf, but there is no way that works, since each object implements its own deserialize method.
Or, the last option, I make a .proto file defining a base object that nests all of the different objects of my application. That way I deserialize a single type of message, which in turn deserializes the nested objects. (To me this is the best solution, but it's quite costly for the backend.)
In the end, I went with the last option I mentioned in the question: encapsulating all my different objects in a generic wrapper object.
This way there is only one way to deserialize the messages, which then dispatches the processing of the known nested objects. It works flawlessly.
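For illustration, here is a minimal plain-JavaScript sketch of that dispatch pattern. The makeEnvelope / payloadCase / handler names are made up for this example and are not the actual google-protobuf generated API; in a real .proto you would declare a oneof payload { ... } in the wrapper message and switch on the generated getPayloadCase():

```javascript
// Illustrative stand-in for a generated wrapper message: payloadCase says
// which nested field is set, payload holds the nested message.
function makeEnvelope(payloadCase, payload) {
  return { payloadCase, payload };
}

// Single dispatch point: deserialize once, then route on the set field.
function dispatch(envelope, handlers) {
  const handler = handlers[envelope.payloadCase];
  if (!handler) {
    throw new Error('Unknown payload case: ' + envelope.payloadCase);
  }
  return handler(envelope.payload);
}

const handlers = {
  response: (msg) => 'response:' + msg.status,
  notification: (msg) => 'notification:' + msg.text,
};

const result = dispatch(makeEnvelope('response', { status: 'OK' }), handlers);
// result === 'response:OK'
```

The key property is that there is exactly one deserialize call site (on the wrapper), and the type decision comes from data inside the message rather than from its byte length.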
I have difficulty understanding how far I should normally go in checking and validating the data I operate on in my code. I'm not even talking about user-input data here, just my own data, data whose structure I know. For example, I might have a basic function like this:
let someData = [object];

function someFunction(newData) {
  let newSet = newData.frequency[0];
  someData.dataset.chart.data[0].frequency = newSet;
}
Let's say someData is a variable used by a chart, and I also have a function that simply updates the chart data with new data. newData comes from my database (when the user adjusts filters and applies them), and someData is the default, initial data everyone sees. Now, my problem is... I foresee the following events:
Someone might change someData using the developer console, so the variable no longer holds an object, or no longer has the properties my code addresses, which will lead to errors and break my logic entirely.
Someone might call someFunction directly (using the console again) with completely random data as the argument, and this, again, will lead to errors.
Maybe newData received from the DB will be somewhat wrong due to an error on my side or anything else, or maybe the initial someData will fail to initialise (it's initialised through a function as well and relies on a third-party .js file that might fail to load one day).
And I'm not even sure I've foreseen every possible event. So my code turns from what you saw above into something "tedious" (in my opinion) like this:
let someData = [object];

function someFunction(newData) {
  let newSet = typeof newData === "object" &&
    newData.frequency ?
    newData.frequency[0] : undefined;
  let oldSet = typeof someData === "object" &&
    someData.dataset &&
    someData.dataset.chart &&
    someData.dataset.chart.data &&
    someData.dataset.chart.data[0] ?
    someData.dataset.chart.data[0].frequency : undefined;

  // Since using length on undefined will lead to an error, I have to check the type too
  if (Array.isArray(newSet) && newSet.length === 5 && oldSet) {
    // Assign through the full path: reassigning the local oldSet would not update someData
    someData.dataset.chart.data[0].frequency = newSet;
  } else {
    throw new Error("Dataset update has failed.");
  }
}
And even this doesn't guarantee that newSet is the data I expect, because maybe I was looking for [1,2,3,4,5] and the user managed to insert ["a","b","c","d","e"] via the console, and so on. It feels like I can refine this endlessly and it will never be bulletproof, and eventually my own code gets hard to understand, when the only thing I wanted to do was replace old data with new data. At this point I feel like I'm doing it all wrong. Coding takes way more time and I'm not even sure it's for the better, but at the same time I can't tell where the limit is, when to stop over-validating my code. Please advise.
I would just stick with user-input validation. Beyond that, if users want to break things with the developer tools, that's their problem; any breakage stays on their side.
What's important is the validation on your server. Client-side input validation is just there to make sure everything a regular user enters is error-free before processing; it also saves useless round trips to the server. The server must redo the validation, and more, precisely because of those tampering users.
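On the verbosity in the question itself: if your environment supports it, optional chaining (ES2020) collapses the defensive property reads considerably. A sketch of the same guards, with sample data made up for the example (and not a replacement for server-side validation):

```javascript
// Sample shape matching the paths the question checks.
let someData = { dataset: { chart: { data: [{ frequency: [1, 2, 3, 4, 5] }] } } };

function someFunction(newData) {
  // ?. yields undefined instead of throwing when a link in the path is missing.
  const newSet = newData?.frequency?.[0];
  const target = someData?.dataset?.chart?.data?.[0];

  if (Array.isArray(newSet) && newSet.length === 5 && target) {
    target.frequency = newSet;
  } else {
    throw new Error("Dataset update has failed.");
  }
}

someFunction({ frequency: [[5, 4, 3, 2, 1]] });
```

The guard logic is unchanged; only the long &&-chains of existence checks disappear.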
I am converting a table that previously worked, but locked the UI during some tasks, so that it is managed by a web worker. One of the functions I am updating is sorting. My old sort function has been converted to:
function mySort(column) {
  myWorker.postMessage({ message: "sort", column: column.id });
}
In my worker:
self.onmessage = function (e) {
  switch (e.data.message) {
    case "sort":
      DoSort(e.data.column);
      break;
  }
};
var m_sortCol;

function DoSort(column) {
  if (m_sortCol != null && column.equals(m_sortCol)) // <---- The problem is here
    myData.reverse();
  else {
    m_sortCol = column;
    myData.sort(column);
  }
}
The problem is, when my worker gets to the indicated line, it throws an exception: column.equals is not a function. Looking at the data in the debugger, both column and m_sortCol have values, and they sure look like strings, though I don't see anywhere that the object type is explicitly stated. I have tried changing the statement to column.toString().equals(m_sortCol.toString()), and the exception changes to column.toString(...).equals is not a function.
This code worked before the change to a web worker. Could the web worker be messing with my types somehow, so that what looks like a string isn't actually a string? Or could the worker somehow not have access to String functions? My research says web workers should have String functions, but I am at a loss to explain this.
I suspect your String prototype has been modified, since there is no .equals function on the standard JavaScript string type (and a worker does not inherit prototype modifications made in the page's scripts). In this case you should simply use == or ===. (This is an example of why modifying the String prototype is considered bad form in JavaScript.)
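For illustration, here is the DoSort logic rewritten with ===, with myData stubbed as a plain array so the sketch is self-contained (a real table would sort by the column's field rather than calling sort() with no comparator):

```javascript
var m_sortCol = null;
var myData = [3, 1, 2];

function DoSort(column) {
  // Strings are compared by value with === in JavaScript; no .equals method is needed.
  if (m_sortCol !== null && column === m_sortCol) {
    myData.reverse();
  } else {
    m_sortCol = column;
    myData.sort();
  }
}

DoSort("name"); // first sort on "name": sorts the data
DoSort("name"); // same column again: reverses the order
```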
I have no idea why this is happening. I'm connecting my Angular 4 app to a SignalR hub on the hosting server and it works like a charm (version 2.2.2).
I now have to add a second SignalR connection to another project, and for some unknown reason the properties of the response are all camelCase instead of PascalCase. The jquery.signalr-2.2.2.js file, however, expects them to be PascalCase and throws an error saying the server version is "undefined".
"Undefined" is logical, since it's looking for res.ProtocolVersion and that property does not exist on my deserialized response. I do, however, have a res.protocolVersion, and that one holds the exact value it needs.
I've lost a lot of time on this; any help is seriously appreciated!
Edit: #rory-mccrossan
I thought as much, and that's why I commented out the server-side JSON serializer/formatter code, but to no avail.
I'm open to any suggestion on where to look next.
So after Rory's hint I searched some more on the internet, and of course someone else has run into this problem:
SignalR : use camel case
However, that solution isn't working for me :/
Then I found a similar solution; here, however, you always get the default contract resolver except when the object comes from a certain library (the one with your view models):
https://blogs.msdn.microsoft.com/stuartleeks/2012/09/10/automatic-camel-casing-of-properties-with-signalr-hubs/
Note: this is not the perfect solution, but it is the one that worked best for my scenario.
So I came up with a new solution altogether that ties in nicely with the existing code that was giving me the problem.
One way around the issue is to let your app use the DefaultContractResolver. SignalR will then connect, but the rest of your application will break. To mitigate this in the solution I'm working on, I used two simple extension methods.
First, I extended the HttpConfiguration class to swap the formatter's contract resolver for a CamelCasePropertyNamesContractResolver:
public static class HttpConfigurationExtensions
{
    public static HttpConfiguration ToCamelCaseHttpConfiguration(this HttpConfiguration configuration)
    {
        var jsonFormatter = configuration.Formatters.OfType<JsonMediaTypeFormatter>().FirstOrDefault();
        bool needToAddFormatter = jsonFormatter == null;

        if (needToAddFormatter)
        {
            jsonFormatter = new JsonMediaTypeFormatter();
        }

        jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        jsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;

        if (needToAddFormatter)
        {
            configuration.Formatters.Add(jsonFormatter);
        }

        return configuration;
    }
}
The existing Web API always returns an HttpResponseMessage, which is why I could go about it the way I did.
Example of an API call:
[Route("")]
[HttpPost]
public async Task<HttpResponseMessage> CreateSetting(Setting setting)
{
    // the response object is basically the data you want to return
    var responseData = await ...

    return Request.CreateResponse(responseData.StatusCode, responseData);
}
I noticed every API call was using the Request.CreateResponse(...) method. What I didn't see immediately is that Microsoft has actually provided all the necessary overloads, which meant I couldn't just make my own Request.CreateResponse(...) implementation.
That's why mine is called MakeResponse:
public static class HttpRequestMessageExtensions
{
    public static HttpResponseMessage MakeResponse<T>(this HttpRequestMessage request, T response) where T : Response
    {
        return request.CreateResponse(response.StatusCode, response,
            request.GetConfiguration().ToCamelCaseHttpConfiguration());
    }
}
The Response classes are the data structures you want your API to return; in our API they all get wrapped in one of these structures. This results in an API with responses similar to those of the Slack API.
So now the controllers all use Request.MakeResponse(responseData).
We're planning to rebuild our service at my workplace, creating a RESTful API and such, and I happened to stumble on an interesting question: can I write my JS code in a way that mimics my API design?
Here's an example to illustrate what I mean:
We have dogs, and you can access those dogs with a GET /dogs, and get info on a specific one with GET /dogs/{id}.
My JavaScript code would then be something like:
var api = {
  dogs: function (dogId) {
    if (dogId === undefined) {
      // request /dogs from server
    } else {
      // request /dogs/dogId from server
    }
  }
};
All is fine and dandy with that code: I just have to call api.dogs() or api.dogs(123) and I'll get the info I want.
Now, let's say those dogs have a list of diseases (or whatever, really) which you can fetch via GET /dogs/{id}/diseases. Is there a way to modify my JavaScript so that the previous calls remain the same - api.dogs() returns all dogs and api.dogs(123) returns dog 123's info - while allowing me to do something like api.dogs(123).diseases() to list dog 123's diseases?
The simplest way I've thought of is to have my methods build queries instead of retrieving the data directly, plus a get or run method that actually runs those queries and fetches the data.
The only way I can think of to build something like this would be to somehow detect, while executing a function, whether another function is chained onto its result, but I don't know if that's possible.
What are your thoughts on this?
I cannot give you a concrete implementation, but a few hints on how you could accomplish what you want. It would be interesting to know what kind of server and framework you are using.
Generate (write yourself or autogenerate from code) a WADL describing your service, and then try to generate the code, for example with XSLT.
In my REST projects I use Swagger, which analyzes some common Java REST implementations and generates JSON descriptions that you could use as a base for your JavaScript API.
This can be easy for simple REST APIs, but it gets complicated as the API divides into complex hierarchies or has a tree structure. Then everything will depend on exact documentation of your service.
Assuming that your JS application knows the services provided by your REST API (i.e. you send a JSON or XML file describing the services), you could do the following:
var API = (function () {
  // private members; here you hide the API's functionality from the outside.
  var sendRequest = function (url) { return {}; }; // send GET request

  return {
    // public members; methods that will be exposed to the public.
    getDog: function (id, criteria) {
      // check that criteria isn't an invalid request. Remember the JSON file?
      // Generate the url
      var url = "/dogs/" + id + (criteria ? "/" + criteria : "");
      var response = sendRequest(url);
      return response;
    }
  };
}());
var diseases = API.getDog("123", "diseases");
var breed = API.getDog("123", "breed");
The code above isn't 100% complete, since you still have to deal with the AJAX call, but it is more or less what you want.
I hope this helps!
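As an additional sketch of the "build the query, then run it" idea from the question: each call only constructs a URL, and run() stands in for the code that would actually issue the HTTP request (makeResource and run are invented names for this illustration):

```javascript
// Each resource object carries its URL; run() is a stand-in for fetch/XHR.
function makeResource(path) {
  return {
    url: path,
    run: function () { return "GET " + path; }
  };
}

var api = {
  dogs: function (id) {
    if (id === undefined) return makeResource("/dogs");
    var dog = makeResource("/dogs/" + id);
    // Expose sub-resources as chainable builders on the intermediate object.
    dog.diseases = function () { return makeResource("/dogs/" + id + "/diseases"); };
    return dog;
  }
};

api.dogs().run();               // "GET /dogs"
api.dogs(123).run();            // "GET /dogs/123"
api.dogs(123).diseases().run(); // "GET /dogs/123/diseases"
```

Nothing is fetched until run() is called, so api.dogs(123) can serve both as a result handle and as the base for further chaining.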
I am doing some development on Firefox, with both JavaScript and C++, for some XPCOM components.
I am trying to monitor HTTP activity with nsIHttpActivityDistributor.
The problem now is: is there any flag or ID belonging to nsIHttpChannel that I can use to identify a unique nsHttpChannel object?
I want to save some nsIHttpChannel-referred objects in C++ and then process them later in JavaScript or C++. The thing is that I currently cannot find an elegant way to identify a channel object that can be used both in JS and C++, so that I can log it clearly to a log file.
Any idea?
You can easily add your own data to HTTP channels, they always implement nsIPropertyBag2 and nsIWritablePropertyBag2 interfaces. Something along these lines (untested code, merely to illustrate the principle):
static PRInt64 maxChannelID = -1;
...
nsCOMPtr<nsIWritablePropertyBag2> bag = do_QueryInterface(channel);
if (!bag)
  ...

nsAutoString prop(NS_LITERAL_STRING("myChannelID"));
PRInt64 channelID;
rv = bag->GetPropertyAsInt64(prop, &channelID);
if (NS_FAILED(rv))
{
  // First time that we see this channel, assign it an ID
  channelID = ++maxChannelID;
  rv = bag->SetPropertyAsInt64(prop, channelID);
  if (NS_FAILED(rv))
    ...
}
printf("Channel ID: %lld\n", (long long) channelID);
You might want to check what happens on an HTTP redirect, however. I think channel properties are copied over to the new channel in that case; I'm not sure whether that is desirable for you.