Note that this is not a duplicate of the similar question for Go, since this uses grpc-node. For some reason, there seem to be differences in the API.
I do the standard procedure of creating my APIPackageDefinitions and APIPackageObjects, and create two separate clients, one from each.
let grpc = require('grpc')
let protoLoader = require('@grpc/proto-loader')

async function createGrpcConnection() {
  const HOST = 'localhost'
  const PORT = '50053'
  const PORT2 = '50054'
  let physicalProjectAPIPackageDefinition = await protoLoader.load(
    './physical_project_api.proto', protoLoaderOptions
  )
  let configAPIPackageDefinition = await protoLoader.load(
    './config_api.proto', protoLoaderOptions
  )
  let physicalProjectAPIPackageObject = grpc.loadPackageDefinition(
    physicalProjectAPIPackageDefinition
  ).package.v1
  let configAPIPackageObject = grpc.loadPackageDefinition(
    configAPIPackageDefinition
  ).package.v1
  let grpcClient1 = physicalProjectAPIPackageObject.PhysicalProjectAPI(
    `${HOST}:${PORT}`,
    grpc.credentials.createInsecure()
  )
  let grpcClient2 = configAPIPackageObject.ConfigAPI(
    `${HOST}:${PORT2}`,
    grpc.credentials.createInsecure()
  )
  return { grpcClient1, grpcClient2 }
}
I am looking for a way to create two clients that share the same connection. I think I am close to the solution by creating a new Channel and replacing the last two let statements with
let cc = new grpc.Channel(
  `${HOST}:${PORT}`,
  grpc.credentials.createInsecure()
)
let grpcClient1 = physicalProjectAPIPackageObject.PhysicalProjectAPI(cc)
let grpcClient2 = configAPIPackageObject.ConfigAPI(cc)
However, I received a TypeError: Channel's first argument (address) must be a string. I'm not sure how to use the newly instantiated Channel to create new clients for each service, and I couldn't find any useful methods in the docs. Any help would be appreciated.
P.S. At the moment I am trying to use two services, creating a client for each service, and having those two clients share a connection on the same channel. Is it possible to use two services and create a single client for both? Maybe I can use .proto package namespaces to my advantage here? My internet search fu failed me on this question.
There is an API to do this, but it is a bit more awkward than what you were trying. And you don't actually need to use it to get what you want. The grpc library internally pools connections to the same server, as long as those connections were created with identical parameters. So, the Client objects created in your first code block will actually use the same TCP connection.
However, as mentioned, there is a way to do this explicitly. The third argument to the Client constructor is an optional object with various additional options, including channelOverride. That accepts a Channel object like the one you constructed at the beginning of your second code block. You still have to pass valid values for the first two arguments, but they will actually be ignored and the third argument will be used instead. You can see more information about that constructor's arguments in the API documentation.
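Adapted to the code in the question, that could look something like the following sketch (untested; it assumes both services are served on the same host and port, and reuses the package objects from your first code block). Note that the address and credentials arguments are still required even though channelOverride causes them to be ignored:

```javascript
// Sketch only: HOST, PORT, and the package objects come from the
// question's code. The third Channel argument is the channel options object.
let channel = new grpc.Channel(
  `${HOST}:${PORT}`,
  grpc.credentials.createInsecure(),
  {}
)
// Both clients share the single channel passed via channelOverride;
// the address and credentials arguments must be valid but are ignored.
let grpcClient1 = new physicalProjectAPIPackageObject.PhysicalProjectAPI(
  `${HOST}:${PORT}`,
  grpc.credentials.createInsecure(),
  { channelOverride: channel }
)
let grpcClient2 = new configAPIPackageObject.ConfigAPI(
  `${HOST}:${PORT}`,
  grpc.credentials.createInsecure(),
  { channelOverride: channel }
)
```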
We have a session-enabled Azure Service Bus topic. This topic can hold multiple messages with distinct session IDs. We want to create a listener/receiver that keeps reading messages from the topic. Because the session IDs are dynamic, we cannot use acceptSession to create a handler. We have tried the createReceiver and acceptNextSession methods of ServiceBusClient, but they have the following issues:
createReceiver: this method does not work on session-enabled subscriptions and throws a runtime error.
acceptNextSession: this method only listens to the first message and does not read further messages.
Our current code is:
const serviceBusSettings = this.appSettings.Settings.serviceBus;
const sbClient = new ServiceBusClient(serviceBusSettings.connectionString);
// const receiver = sbClient.createReceiver(topicName, subscriptionName);
const receiver = sbClient.acceptNextSession(topicName, subscriptionName);

const handleTopicError = async (error: any) => {
  this.logger.error(error);
  throw error;
};

(await receiver).subscribe({
  processMessage: handleTopicMessage, // handleTopicMessage is a method passed as an argument to the function where this code snippet exists
  processError: handleTopicError
});
We also tried implementing one of the samples from the code repo wiki, but the methods used in that example seem to no longer be available in the current npm version of @azure/service-bus. Link to tried example
Can anyone suggest some solution to this?
There's a sample showing how you can continually read through all the available sessions. Could you please check to see if it is helpful?
https://github.com/Azure/azure-sdk-for-js/blob/21ff34e2589f255e8ffa7f7d5d65ca40434ec34d/sdk/servicebus/service-bus/samples/v7/javascript/advanced/sessionRoundRobin.js
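The gist of that sample is a loop that keeps calling acceptNextSession and drains whichever session it receives before moving on. A rough, untested sketch adapted to a topic subscription (handleMessage and the receive parameters are placeholders):

```javascript
const { ServiceBusClient } = require("@azure/service-bus");

async function processSessionsForever(connectionString, topicName, subscriptionName, handleMessage) {
  const sbClient = new ServiceBusClient(connectionString);
  while (true) {
    // Locks onto whichever session next has messages available.
    const receiver = await sbClient.acceptNextSession(topicName, subscriptionName);
    try {
      let messages;
      do {
        // Drain the current session, then loop around to the next one.
        messages = await receiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
        for (const message of messages) {
          await handleMessage(message);
          await receiver.completeMessage(message);
        }
      } while (messages.length > 0);
    } finally {
      await receiver.close();
    }
  }
}
```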
I'm interested in using Fauna from the browser, in pure JS, in read-only mode.
Either the official documentation lacks understandable information for this case, or I can't figure it out.
Please help me to understand.
I'm trying to run such query:
Map(Paginate(Documents(Collection('spells'))), Lambda('x', Get(Var('x'))));
Running this query from the shell gives me the result.
I want to get the result of this query into a JavaScript variable, but I don't understand what to do.
Here is my code:
var client = new faunadb.Client({
  secret: '320438826900127936',
  keepAlive: false,
  domain: 'db.eu.fauna.com',
  // NOTE: Use the correct domain for your database's Region Group.
  port: 443,
  scheme: 'https',
});

var helper = client.paginate(
  q.Match(
    q.Index('spells'),
    '101'
  )
)

helper.then(function(response) {
  console.log(response.ref) // Would log the ref to console.
})
Please help me to get output from DB using pure JavaScript.
Using the JavaScript driver, all of the FQL functions are in the faunadb.query namespace. Typically, developers do this at the top of their scripts:
const faunadb = require('faunadb')
const q = faunadb.query
After that, the FQL functions can be accessed using the q object. For example: q.Add(1, 2).
Otherwise, you can use destructuring to import FQL functions into the script's namespace, like so:
const faunadb = require('faunadb')
const {
  Documents,
  Collection,
  Get,
  Index,
  Lambda,
  Match,
  Paginate
} = faunadb.query
and then you can call the FQL functions "directly", e.g. Add(1, 2).
If you use destructuring, be sure to import all of the FQL functions that you intend to use in your queries. Not all FQL functions can be imported "as-is", since some of their names would collide with built-in JavaScript names, such as Function. For those, you have to assign them directly:
const FaunaFunction = faunadb.query.Function
And then use the local name that you specified (FaunaFunction in this case) when composing queries.
Either way, the query itself isn't sent to Fauna until the client.query(), or in your case, client.paginate(), function is called.
That means that you can assign the FQL function calls to JavaScript variables. With the desired query, that could look like this:
const spellsQuery = q.Map(
  q.Paginate(
    q.Documents(q.Collection('spells'))
  ),
  q.Lambda('x', q.Get(q.Var('x')))
)
Then you can do this:
client.query(spellsQuery)
.then(res => console.log(res))
You can use client.paginate if you are only performing pagination. Your desired query wraps the pagination in Map, so the client.paginate helper gets in your way; my example uses client.query instead.
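For comparison, a pagination-only query could use the helper with its each method, which invokes a callback once per page of results. A sketch, assuming the index-based query from your code:

```javascript
// Sketch: PageHelper.each walks every page of the paginated result set.
const helper = client.paginate(
  q.Match(q.Index('spells'), '101')
)
helper.each(function (page) {
  // page is an array of matched values (e.g. refs)
  page.forEach(function (ref) {
    console.log(ref)
  })
})
```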
Note that when you configure your client connection object, you don't need to specify the port and scheme options; you're using the default values already. And you likely shouldn't change the keepAlive option unless your workflow specifically requires it.
Plus, I see that you didn't include an actual secret in your example, which is a good thing: if you did include an actual secret, anyone with that secret could access your database just like you can.
If you want to see a complete (but simple) Fauna-based application that runs in a web page, see: https://github.com/fauna-labs/todo-vanillajs
Suppose I have a bunch of same-origin windows or tabs A, B, C, D, and E, that don't hold references to each other. (e.g. a user opened them independently). Suppose A sends a BroadcastChannel message to the others, and as a result, D needs to send some data back to A, ideally without involving B, C, or E.
Is this possible, using any of the message-passing APIs?
There's an event.source property on the broadcast message event, which looked as if it should maybe contain a WindowProxy or MessagePort object in this context, but (in my tests with Firefox 78 at least) it was simply null. There's also a ports array, but that was empty.
...I'm aware that you could start up a SharedWorker to assign each window a unique ID and act as a waystation for passing messages between them, but (a) that seems very complicated for the functionality desired, and (b) every message sent that way is going to need 2 hops, from window to sharedWorker and back to a window, crossing thread boundaries both times, and (usually) getting serialized & unserialized both times as well - even when the two windows share the same javascript thread! So it's not very efficient.
This seems like such an obvious thing to want to do, I'm finding it hard to believe there isn't something obvious I'm missing... but I don't see it, if so!
Looks like the standards require source to be null for a BroadcastChannel. But BroadcastChannel shares the MessageEvent interface with several other APIs that do use source, which is why the property exists but is null.
The postMessage(message) method steps are:
...
5. Remove source from destinations.
Looks like they intentionally kept BroadcastChannel very lightweight. Just a guess, but the functionality you're looking for might have required additional resources that they didn't want to allocate. This guess is based on a general note they have in the spec:
For elaborate cases, e.g. to manage locking of shared state, to manage synchronization of resources between a server and multiple local clients, to share a WebSocket connection with a remote host, and so forth, shared workers are the most appropriate solution.
For simple cases, though, where a shared worker would be an unreasonable overhead, authors can use the simple channel-based broadcast mechanism described in this section.
SharedWorkers are definitely more appropriate for complicated cases, think of the BroadcastChannel really just as a one-to-many simple notification sender.
It isn't able to transfer data (which of the receivers would become the owner then?), so except in the case of Blobs (which are just small wrappers with no data of their own), passing data through a BroadcastChannel means it has to be fully deserialized by every receiver, which is not the most performant way of doing things.
So I'm not sure what kind of data you need to send, but if it's big data that should normally be transferable, then probably prefer a SharedWorker.
One workaround, though, if your data does not need to be transferred, is to create a new BroadcastChannel that only your two contexts will listen to.
Live demo
In page A:
const common_channel = new BroadcastChannel( "main" );
const uuid = "private-" + Math.random();
common_channel.postMessage( {
  type: "gimme the data",
  from: "pageB",
  respondAt: uuid
} );
const private_channel = new BroadcastChannel( uuid );
private_channel.onmessage = ({ data }) => {
  handleDataFromPageB( data );
  private_channel.close();
};
In page B:
const common_channel = new BroadcastChannel( "main" );
common_channel.onmessage = ({ data }) => {
  if( data.from === "pageB" && data.type === "gimme the data" ) {
    const private_channel = new BroadcastChannel( data.respondAt );
    private_channel.postMessage( the_data );
    private_channel.close();
  }
};
Regarding why you can't have a ports value on the MessageEvent firing on BroadcastChannels: MessagePorts must be transferred, and as we already said, BroadcastChannels can't do transfers.
As for why there is no source, it's probably because, as you expected, it would have to be a WindowProxy object, but worker contexts can also post messages to BroadcastChannels, and they don't implement that interface (e.g., their postMessage method doesn't behave at all the same way as a Window's).
I'm just beginning to investigate Firebase Cloud Functions and am struggling to find an example of copying data between database nodes on the server side (fanning out data). More specifically, when a specific user (for example uid "PdXHgkfP3nPxjhstkhX") updates a URL (the dictionary key "link") on the /users node, I'd like to copy that value to all instances of that user's "link" on the /friendsList node. Here's what I have so far. Please let me know if I am approaching this the wrong way.
exports.fanOutLink = functions.database.ref('/users/PdXHgkfP3nPxjhstkhX/link').onWrite(event => {
  const friendsListRef = admin.database().ref('/friendsList');
  // if there's no value
  if (!event.data.val()) {
    // TBD
  } else {
    // when there is a value
    let linkURL = event.data.val()
    friendsListRef.once("value", function(snap) {
      snap.forEach(function(childSnapshot) {
        let childKey = childSnapshot.key;
        admin.database().ref('/friendsList/' + childKey + '/PdXHgkfP3nPxjhstkhX/link').set(event.data.val());
      });
    });
  }
})
You're not returning a promise, which means that Cloud Functions may terminate your code before it has written to the database, or that it may keep it running (and thus charge you) longer than needed. I recommend reading more about that here or watching this video.
The simple fix is quite simple in your case:
exports.fanOutLink = functions.database.ref('/users/PdXHgkfP3nPxjhstkhX/link').onWrite(event => {
  const friendsListRef = admin.database().ref('/friendsList');
  if (!event.data.val()) {
    return; // terminate the function
  } else {
    return friendsListRef.once("value").then(function(snap) {
      var promises = [];
      snap.forEach(function(childSnapshot) {
        let childKey = childSnapshot.key;
        promises.push(admin.database().ref('/friendsList/' + childKey + '/PdXHgkfP3nPxjhstkhX/link').set(event.data.val()));
      });
      return Promise.all(promises);
    });
  }
})
You'll see that I'm mostly just passing the return value of once() and the combined set() calls back up (and out of) our function, so that Cloud Functions knows when you're done with it all.
But please study the post, video and other materials thoroughly, because this is quite fundamental to writing Cloud Functions.
If you are new to JavaScript in general, Cloud Functions for Firebase is not the best way to learn it. I recommend first reading the Firebase documentation for Web developers and/or taking the Firebase codelab for Web developer. They cover many basic JavaScript, Web and Firebase interactions. After those you'll be much better equipped to write code for Cloud Functions too.
I am doing some development on Firefox, both in JavaScript and in C++ for some XPCOM components.
I am trying to monitor HTTP activity with nsIHttpActivityDistributor.
The problem now is: is there any flag or ID on nsIHttpChannel that I can use to identify a unique nsHttpChannel object?
I want to save some nsIHttpChannel references in C++ and then process them later in JavaScript or C++. The thing is that currently I cannot find an elegant way to identify a channel object that can be used both in JS and C++, so that it can be logged clearly into a log file.
Any idea?
You can easily add your own data to HTTP channels, they always implement nsIPropertyBag2 and nsIWritablePropertyBag2 interfaces. Something along these lines (untested code, merely to illustrate the principle):
static PRInt64 maxChannelID = -1;
...
nsCOMPtr<nsIWritablePropertyBag2> bag = do_QueryInterface(channel);
if (!bag)
  ...
nsAutoString prop(NS_LITERAL_STRING("myChannelID"));
PRInt64 channelID;
rv = bag->GetPropertyAsInt64(prop, &channelID);
if (NS_FAILED(rv))
{
  // First time that we see this channel, assign it an ID
  channelID = ++maxChannelID;
  rv = bag->SetPropertyAsInt64(prop, channelID);
  if (NS_FAILED(rv))
    ...
}
printf("Channel ID: %lld\n", (long long)channelID);
You might want to check what happens on HTTP redirect however. I think that channel properties are copied over to the new channel in that case, not sure whether this is desirable for you.
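On the JavaScript side, the same property bag should be reachable from a notification observer. Something like this untested sketch (legacy XPCOM-style code; the "http-on-modify-request" topic is one assumed place to intercept channels):

```javascript
// Untested sketch: read the ID that the C++ code stored on the channel.
const observer = {
  observe: function (subject, topic, data) {
    if (topic !== "http-on-modify-request")
      return;
    var bag = subject.QueryInterface(Components.interfaces.nsIPropertyBag2);
    try {
      var channelID = bag.getPropertyAsInt64("myChannelID");
      dump("Channel ID: " + channelID + "\n");
    } catch (e) {
      // Property not set (yet) on this channel
    }
  }
};
Components.classes["@mozilla.org/observer-service;1"]
  .getService(Components.interfaces.nsIObserverService)
  .addObserver(observer, "http-on-modify-request", false);
```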