How can we identify a unique nsHTTPChannel? - javascript

I am doing some development on Firefox, both in JavaScript and C++, for some XPCOM components.
I am trying to monitor HTTP activity with nsIHttpActivityDistributor.
The problem now is: is there any flag or id belonging to nsIHttpChannel that I can use to identify a unique nsHttpChannel object?
I want to save some nsIHttpChannel references in C++ and then process them later in JavaScript or C++. The thing is that currently I cannot find an elegant way to identify a channel object that can be used both in JS and C++, so that it can be logged clearly into a log file.
Any idea?

You can easily add your own data to HTTP channels; they always implement the nsIPropertyBag2 and nsIWritablePropertyBag2 interfaces. Something along these lines (untested code, merely to illustrate the principle):
static PRInt64 maxChannelID = -1;
...
nsCOMPtr<nsIWritablePropertyBag2> bag = do_QueryInterface(channel);
if (!bag)
...
nsAutoString prop(NS_LITERAL_STRING("myChannelID"));
PRInt64 channelID;
nsresult rv = bag->GetPropertyAsInt64(prop, &channelID);
if (NS_FAILED(rv))
{
  // First time that we see this channel, assign it an ID
  channelID = ++maxChannelID;
  rv = bag->SetPropertyAsInt64(prop, channelID);
  if (NS_FAILED(rv))
  ...
}
printf("Channel ID: %lld\n", (long long)channelID);
You might want to check what happens on HTTP redirect however. I think that channel properties are copied over to the new channel in that case, not sure whether this is desirable for you.

Related

Is there a way to reply to only the sender, after receiving a BroadcastChannel message?

Suppose I have a bunch of same-origin windows or tabs A, B, C, D, and E, that don't hold references to each other. (e.g. a user opened them independently). Suppose A sends a BroadcastChannel message to the others, and as a result, D needs to send some data back to A, ideally without involving B, C, or E.
Is this possible, using any of the message-passing APIs?
There's an event.source property on the broadcast message event, which looked as if it should maybe contain a WindowProxy or MessagePort object in this context, but (in my tests with Firefox 78 at least) it was simply null. There's also a ports array, but that was empty.
...I'm aware that you could start up a SharedWorker to assign each window a unique ID and act as a waystation for passing messages between them, but (a) that seems very complicated for the functionality desired, and (b) every message sent that way is going to need 2 hops, from window to sharedWorker and back to a window, crossing thread boundaries both times, and (usually) getting serialized & unserialized both times as well - even when the two windows share the same javascript thread! So it's not very efficient.
This seems like such an obvious thing to want to do, I'm finding it hard to believe there isn't something obvious I'm missing... but I don't see it, if so!
Looks like the standard requires source to be null for a BroadcastChannel. But BroadcastChannel shares the MessageEvent interface with several other APIs that do use source, which is why the property exists but is null.
The postMessage(message) method steps are:
...
5. Remove source from destinations.
Looks like they intentionally kept BroadcastChannel very lightweight. Just a guess, but the functionality you're looking for might have required additional resources that they didn't want to allocate. This guess is based on a general note they have in the spec:
For elaborate cases, e.g. to manage locking of shared state, to manage synchronization of resources between a server and multiple local clients, to share a WebSocket connection with a remote host, and so forth, shared workers are the most appropriate solution.
For simple cases, though, where a shared worker would be an unreasonable overhead, authors can use the simple channel-based broadcast mechanism described in this section.
SharedWorkers are definitely more appropriate for complicated cases; think of the BroadcastChannel really just as a simple one-to-many notification sender.
It isn't able to transfer data (which of the receivers should become the owner then?), so except in the case of Blobs (which are just small wrappers with no data of their own), passing data through a BroadcastChannel means it has to be serialized once and fully deserialized by every receiver, which is not the most performant way of doing it.
So I'm not sure what kind of data you need to send, but if it's big data that would normally be transferable, then you should probably prefer a SharedWorker.
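If you do go the SharedWorker route for big transferable payloads, the relay itself can stay fairly small. A rough sketch (hypothetical file name relay.js and page names, untested):
// relay.js
const ports = new Map();
onconnect = (event) => {
  const port = event.ports[0];
  port.onmessage = ({ data }) => {
    if (data.register) {                 // { register: "pageA" }
      ports.set(data.register, port);
    } else {                             // { to: "pageA", payload: <ArrayBuffer> }
      const target = ports.get(data.to);
      if (target) target.postMessage(data.payload, [data.payload]); // transferred, not copied
    }
  };
};
// in a page, e.g. page B
const worker = new SharedWorker("relay.js");
worker.port.start();
worker.port.postMessage({ register: "pageB" });
const buf = new ArrayBuffer(8 * 1024 * 1024);
worker.port.postMessage({ to: "pageA", payload: buf }, [buf]); // zero-copy hand-off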
One workaround though, if your data does not need to be transferred, is to create a new BroadcastChannel that only your two contexts will listen to.
Live demo
In page A:
const common_channel = new BroadcastChannel( "main" );
const uuid = "private-" + Math.random();
common_channel.postMessage( {
  type: "gimme the data",
  to: "pageB",        // which context this request is addressed to
  respondAt: uuid     // name of the private channel to answer on
} );
const private_channel = new BroadcastChannel( uuid );
private_channel.onmessage = ({ data }) => {
  handleDataFromPageB( data ); // placeholder for whatever page A does with the response
  private_channel.close();
};
In page B:
const common_channel = new BroadcastChannel( "main" );
common_channel.onmessage = ({ data }) => {
  if( data.to === "pageB" && data.type === "gimme the data" ) {
    const private_channel = new BroadcastChannel( data.respondAt );
    private_channel.postMessage( the_data ); // the_data: whatever page B has to send back
    private_channel.close();
  }
};
Regarding why you can't have a ports value on a MessageEvent fired on a BroadcastChannel: it's because MessagePorts must be transferred, and as we already said, BroadcastChannels can't do transfers.
For why there is no source, it's probably because, as you expected, that should have been a WindowProxy object, but worker contexts can also post messages to BroadcastChannels, and they don't implement that interface (e.g. their postMessage method doesn't do the same thing at all as a Window's).
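For contrast, a MessagePort can only reach another context through a transfer list, which is exactly what BroadcastChannel lacks. A minimal sketch (assumes you already hold a reference to otherWindow, e.g. from window.open):
const { port1, port2 } = new MessageChannel();
otherWindow.postMessage( "here is a direct line", "*", [ port2 ] ); // port2 is transferred away
port1.onmessage = ({ data }) => console.log( "reply:", data );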

Access multiple gRPC Services over the same Connection (with a single Channel)

Note that this is not a duplicate of a similar question for Go, since this uses grpc-node. For some reason, there seem to be differences in the API.
I do the standard procedure of creating my APIPackageDefinitions and APIPackageObjects, and create two separate clients from each one, individually.
let grpc = require('grpc')
let protoLoader = require('@grpc/proto-loader')

async function createGrcpConnection() {
    const HOST = 'localhost'
    const PORT = '50053'
    const PORT2 = '50054'

    let physicalProjectAPIPackageDefinition = await protoLoader.load(
        './physical_project_api.proto', protoLoaderOptions
    )
    let configAPIPackageDefinition = await protoLoader.load(
        './config_api.proto', protoLoaderOptions
    )
    let physicalProjectAPIPackageObject = grpc.loadPackageDefinition(
        physicalProjectAPIPackageDefinition
    ).package.v1
    let configAPIPackageObject = grpc.loadPackageDefinition(
        configAPIPackageDefinition
    ).package.v1

    let grpcClient1 = physicalProjectAPIPackageObject.PhysicalProjectAPI(
        `${HOST}:${PORT}`,
        grpc.credentials.createInsecure()
    )
    let grpcClient2 = configAPIPackageObject.ConfigAPI(
        `${HOST}:${PORT2}`,
        grpc.credentials.createInsecure()
    )
    return { grpcClient1, grpcClient2 }
}
I am looking for a way to create two clients that share the same connection. I think I am close to the solution by creating a new Channel and replacing the last two let statements with
let cc = new grpc.Channel(
`${HOST}:${PORT}`,
grpc.credentials.createInsecure()
)
let grpcClient1 = physicalProjectAPIPackageObject.PhysicalProjectAPI(cc)
let grpcClient2 = configAPIPackageObject.ConfigAPI(cc)
However, I received a TypeError: Channel's first argument (address) must be a string. I'm not sure how to incorporate the newly instantiated Channel to create new clients for each service. I couldn't find any useful methods in the docs. Any help would be appreciated.
P.S. At the moment I am trying to use two services, create a client for each service, and have those two clients share a connection on the same channel. Is it possible to use two services and create a single client for both? Maybe I can use .proto package namespaces to my advantage here? My internet search fu failed me on this question.
There is an API to do this, but it is a bit more awkward than what you were trying. And you don't actually need to use it to get what you want. The grpc library internally pools connections to the same server, as long as those connections were created with identical parameters. So, the Client objects created in your first code block will actually use the same TCP connection.
However, as mentioned, there is a way to do this explicitly. The third argument to the Client constructor is an optional object with various additional options, including channelOverride. That accepts a Channel object like the one you constructed at the beginning of your second code block. You still have to pass valid values for the first two arguments, but they will actually be ignored and the third argument will be used instead. You can see more information about that constructor's arguments in the API documentation.
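A rough sketch of that explicit approach, based on the question's variables (untested; the address and credentials arguments are still required by the constructor but should be ignored when channelOverride is given):
let channel = new grpc.Channel(
    `${HOST}:${PORT}`,
    grpc.credentials.createInsecure(),
    {}
)
let grpcClient1 = new physicalProjectAPIPackageObject.PhysicalProjectAPI(
    `${HOST}:${PORT}`,                  // required but ignored
    grpc.credentials.createInsecure(),  // required but ignored
    { channelOverride: channel }
)
let grpcClient2 = new configAPIPackageObject.ConfigAPI(
    `${HOST}:${PORT}`,
    grpc.credentials.createInsecure(),
    { channelOverride: channel }
)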

How to get parameter from previous context in Dialogflow

I am creating a chatbot for an IP firm. It has an entity named service with 4 possible values (Patent, Copyright, Trademark, Design).
Client: What is a patent?
Bot: (Answer)
Client: how much cost to file it?
How can I know the client is asking about the patent from the previous context?
I can't use a followup intent in every intent.
Right now I'm using a global variable to save the slot agent.parameters.Service inside the fulfillment.
let slot = 'patent';
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });
    function service_typeHandler(agent) {
        var serv = '';
        serv = agent.parameters.Service;
        if (serv === '') {
            serv = slot;
        } else {
            slot = serv;
        }
        switch (serv) {
            case 'patent':
First of all, you're correct on two fronts:
Don't use followup Intents. There are few cases where you actually want followup Intents. Most of the time you want to do this with other means.
Use Contexts. These are (part of) the "other means" in most cases.
In this case, it sounds like you'll have two Intents (and likely more, but this illustrates the point):
"ask.what" - which is the user saying things like "What is a patent?"
"ask.price" - which is the user saying things like "How much to file a patent?", but also "How much to file it?"
For the "ask.what" Intent, you would set an "Outgoing Context". This will automatically capture the parameters that are attached to the Intent. If you want to control it more yourself, you can create your own Context in your webhook and set parameters to whatever value you want. I suggest the latter, because it lets you use a parameter name that you don't use elsewhere. Let's assume that you're using a context named "savedInfo" and that you're setting the parameter to "savedService".
In your "ask.price" Intent, you'd do something similar to what you're doing now. Except that if the Service parameter is empty, get the parameters from the "savedInfo" context and, specifically, the savedService parameter.

How does live object creation and partial teardown management work in javascript?

What I would like to do is load JavaScript to create a library of methods in an object, and wait until the object is used for the first time before it is actually defined or compiled. I would like to build references to this object before it is actually fully defined. When I call a method on this object for the first time, before the methods on the object are ever defined (meaning the object doesn't actually have methods yet), I would like to define the object and then call the method. Is there a way to do this using standard syntax such as "MyLibrary.sayHello()" if "sayHello()" is not yet defined on the object?
I imagine it would look like this:
var independentVar = "noCommitments";
var MyLibrary = function(user_ini){
//MyLibrary.init looks like
// (function(ini){
// var a = ini;
// return function(){
// //Notice the method sayHello defines when called,
// // and does not return a reference
// return {
// b:a,c:"c",sayHello:function(z){return "Hello"+a+z}
// }
// }
// })(user_ini);
var d1 = myRequire("MyLibrary.init");
return {
**handleAll : function(){ this = d1(); this.("**calledMethod")}
}
};
var greeting = MyLibrary.sayHello();
alert(greeting);
This is only pseudo-code. If I add a cleanup method I can then return that object to the uninitialized state of "{**handleAll:function(){/noContext/}}". My application/library has a stub and a link this way and can be used immediately from an undefined state. When building modules this can be useful in order to lower the number of references to a utility: say a post has a menu of functions and those functions are shared by all posts; with a mechanism such as the one described here, only the "active post"/"post in focus" will reference the utility. It more or less gives the ability to activate and de-activate modules. The special part is that the modules are already warmed up; they are ready to call functions even though they do not reference them yet. It is similar to live binding, but allows the whole user interface to already be defined, with functions already stubbed out with the exact name they will have when they are usable. A control mechanism for defaults and debounce falls out of this model easily for me.
My question is: Is this type of scripting possible natively, or will I have to use some form of compilation like for TypeScript, CoffeeScript or others? I understand it is possible if I pass the method I would like to call as a parameter to a singleton factory. I ultimately would like whole applications that are able to gracefully degrade unused functionality without polluting the code.
What I mean by pollution:
var LibDef = (function(){
    return {
        callUndefined: function(methodName){
            var returnVal = {};
        }
    };
})();

var MySingletonLibrary = moduleSingleton.getLibrary("MyLibrary", LibDef);
var greeting = MySingletonLibrary.callUndefined("sayHello");
//
// Please use your imagination to consider the complexity in the singleton
The best way to tear down an object (releasing any space its functions and members consume on the heap) while maintaining a single reference that allows the object to rebuild itself, or just rebuild the function that is called, is something like this (a very simple model; you may like to use arrays and gradually tear down nested objects internally):
var twentySecondObj = (function (window, document) {
    var base_obj = undefined;
    var externalAPI = undefined;
    // Tear the internal objects down after 20 seconds; they are rebuilt
    // lazily the next time the returned function is called.
    setTimeout(function () {
        base_obj = undefined;
        externalAPI = undefined;
    }, 20000);
    return function () {
        if (base_obj === undefined) {
            base_obj = {
                property1: "This is property1",
                property2: "This is property2"
            };
        }
        if (externalAPI === undefined) {
            externalAPI = {
                property1: base_obj.property1,
                property2: base_obj.property2
            };
        }
        return externalAPI;
    };
})(window, document);
console.log(twentySecondObj().property1);
On an additional note, you can use getters and setters to observe access to properties, and can internally present a facade of both functions and properties which reference a build method like the one above; this way it looks like you are accessing a legitimate member of the object. There are no options I can think of that will allow you to intercept an attempt to set a brand-new property on an object, like myObj.fooProperty = "foo", and build that property up into a custom object with a getter and setter. If you have a custom type that needs to be set, then you will have to know its implementation details to set it, or call a function passing in the property name and value, or use a method similar to what is shown above.
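A minimal sketch of that getter-based facade (hypothetical names; the property looks like a plain member, but access lazily rebuilds the backing object):
var lazyFacade = (function () {
    var base_obj;                       // torn down / rebuilt on demand
    function build() {
        if (base_obj === undefined) {
            base_obj = { property1: "This is property1" };
        }
        return base_obj;
    }
    var facade = {};
    Object.defineProperty(facade, "property1", {
        get: function () { return build().property1; },
        set: function (v) { build().property1 = v; }
    });
    return facade;
})();

console.log(lazyFacade.property1); // builds on first access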
Here is a link to the proposal for adding weak references to JavaScript: https://ponyfoo.com/articles/weakref. Weak references would alter how this looks, however they would not address everything mentioned in this question. Remapping an object when a property is added, via some type of deep observer, would allow new property members to be enhanced at the time they are set; this would require that the observer ran synchronously when the property was set, or, once the set is complete, the very next statement must be a call to update the object. I will keep posting here any advances I see that will make the "default handler function" available within JavaScript in the future.
WeakRef can absolutely be used for recording and handling object usage. I would really like to move object management into web workers and service workers so objects can be maintained across all web endpoints on the domain and do not need to be reloaded across requests. Web frameworks would need a modified handle to offload all DOM changes and updates to the worker, essentially a single hook that handles message passing for all hooks. Module loading would then need to include a message handle name and task-priority metadata so that work is properly placed in the least busy or least active worker (slow worker and fast worker). This helps to create an API that can offload to cloud functions, which should give us the ability to do more AI, lookups, and offline work that is currently handled for most apps in the cloud, where more processing power is. In this way we can gracefully augment local processing with cloud functions only when local resources or completion times degrade below acceptable speeds, or above the acceptable power policy.
https://v8.dev/features/weak-references
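A minimal WeakRef sketch of the rebuild-on-demand idea (requires a modern engine, see the v8.dev link above; names are made up):
let cache = new WeakRef({ property1: "This is property1" });

function getCached(build) {
    let obj = cache.deref();           // undefined once the target has been collected
    if (obj === undefined) {
        obj = build();                 // rebuild on demand
        cache = new WeakRef(obj);
    }
    return obj;
}

console.log(getCached(() => ({ property1: "rebuilt" })).property1);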

Writing JS code to mimic api design

We're planning on rebuilding our service at my workplace, creating a RESTful API and such and I happened to stumble on an interesting question: can I make my JS code in a way that it mimics my API design?
Here's an example to illustrate what I mean:
We have dogs, and you can access those dogs by doing a GET /dogs, and get info on a specific one with GET /dogs/{id}.
My Javascript code would then be something like
var api = {
    dogs: function (dogId) {
        if (dogId === undefined) {
            //request /dogs from server
        } else {
            //request /dogs/dogId from server
        }
    }
};
All is fine and dandy with that code; I just have to call api.dogs() or api.dogs(123) and I'll get the info I want.
Now, let's say those dogs have a list of diseases (or whatever, really) which you can fetch via GET /dogs/{id}/diseases. Is there a way to modify my JavaScript so that the previous calls will remain the same - api.dogs() returns all dogs and api.dogs(123) returns dog 123's info - while allowing me to do something like api.dogs(123).diseases() to list dog 123's diseases?
The simplest way I thought of doing it is by having my methods actually build queries instead of retrieving the data, plus a get or run method to actually run those queries and fetch the data.
The only way I can think of building something like this is if I could somehow detect, when executing a function, whether some other function is chained after it on the object, but I don't know if that's possible.
What are your thoughts on this?
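A minimal sketch of that query-builder idea (hypothetical names and a placeholder BASE_URL; uses fetch and is untested):
var BASE_URL = "https://api.example.com";

function resource(path) {
    return {
        // run() performs the request that the chained calls built up
        run: function () {
            return fetch(BASE_URL + path).then(function (res) { return res.json(); });
        },
        // each sub-resource just extends the path and returns a new builder
        diseases: function () {
            return resource(path + "/diseases");
        }
    };
}

var api = {
    dogs: function (dogId) {
        return resource(dogId === undefined ? "/dogs" : "/dogs/" + dogId);
    }
};

api.dogs().run();                // GET /dogs
api.dogs(123).run();             // GET /dogs/123
api.dogs(123).diseases().run();  // GET /dogs/123/diseases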
I cannot give you a concrete implementation, but a few hints on how you could accomplish what you want. It would be interesting to know what kind of server and framework you are using.
Generate (write yourself or autogenerate from code) a WADL describing your service, and then try to generate the code from it, for example with XSLT.
In my REST projects I use Swagger, which analyzes some common Java REST implementations and generates JSON descriptions that you could use as a base for your JavaScript API.
It can be easy for simple REST APIs but gets complicated as the API divides into complex hierarchies or has a tree structure. Then everything will depend on exact documentation of your service.
Assuming that your JS application knows of the services provided by your REST API (i.e. you send a JSON or XML file describing the services), you could do the following:
var API = (function(){
    // private members, here you hide the API's functionality from the outside.
    var sendRequest = function (url){ return {}; }; // send GET request
    return {
        // public members, here you place methods that will be exposed to the public.
        getDog: function (id, criteria) {
            // check that criteria isn't an invalid request. Remember the JSON file?
            // Generate the url from the id and criteria
            var url = "/dogs/" + id + (criteria ? "/" + criteria : "");
            var response = sendRequest(url);
            return response;
        }
    };
}());
var diseases = API.getDog("123", "diseases");
var breed = API.getDog("123", "breed");
The code above isn't 100% correct since you still have to deal with the AJAX call, but it is more or less what you want.
I hope this helps!
