Google Cloud Functions, Google Compute Engine, and Queueing Requests - javascript

I have a cloud function that starts a Compute Engine instance. However, when multiple functions are triggered, the previous actions running on the Compute Engine are interrupted by the incoming functions/commands. The Compute Engine is running a pytorch implementation... Would there be a way to send those incoming functions to a queue, so that the action currently running on Compute Engine completes (and the machine shuts down) before the next incoming action is picked up? Any conceptual guidance would be greatly appreciated.
EDIT
My function is triggered on changes to a Storage bucket (uploads). In the function, I start a GCE instance and customize its startup behavior with a startup script as follows (some commands and directories are simplified for brevity):
import os
from googleapiclient.discovery import build


def start(event, context):
    file = event
    print(file["id"])
    string = file["id"]
    newstring = string.split('/')
    userId = newstring[1]
    paymentId = newstring[2]
    name = newstring[3]
    print(name)
    if name == "uploadcomplete.txt":
        startup_script = """#! /bin/bash
cd ~ && pwd 1>>/var/log/log.out 2>&1
PATH=$PATH:/usr/local/cuda 1>>/var/log/log.out 2>&1
cd program_directory 1>>/var/log/log.out 2>&1
source /opt/anaconda3/etc/profile.d/conda.sh 1>/var/log/log.out 2>&1
conda activate env
cd keras-retinanet/ 1>>/var/log/log.out 2>&1
export PYTHONPATH=`pwd` 1>>/var/log/log.out 2>&1
cd tracker 1>>/var/log/log.out 2>&1
python program_name --gcs_input_path gs://input/{userId}/{paymentId} --gcs_output_path gs://output/{userId}/{paymentId} 1>>/var/log/log.out 2>&1
sudo python3 gcs_to_mongo.py {userId} {paymentId} 1>>/var/log/log.out 2>&1
sudo shutdown -P now
""".format(userId=userId, paymentId=paymentId)
        service = build('compute', 'v1', cache_discovery=False)
        print('VM Instance starting')
        project = 'XXXX'
        zone = 'us-east1-c'
        instance = 'YYYY'
        metadata = service.instances().get(project=project, zone=zone, instance=instance)
        metares = metadata.execute()
        print(metares)
        fingerprint = metares["metadata"]["fingerprint"]
        print(fingerprint)
        bodydata = {"fingerprint": fingerprint,
                    "items": [{"key": "startup-script", "value": startup_script}]}
        print(bodydata)
        meta = service.instances().setMetadata(project=project, zone=zone, instance=instance,
                                               body=bodydata)
        res = meta.execute()
        instanceget = service.instances().get(project=project, zone=zone, instance=instance).execute()
        request = service.instances().start(project=project, zone=zone, instance=instance)
        response = request.execute()
        print('VM Instance started')
        print(instanceget)
        print("New Metadata:", instanceget['metadata'])
The problem occurs when multiple batches are uploaded to Cloud Storage. Each new function will restart the GCE instance with a new startup script and begin work on the new data, leaving the previous data unfinished.

Cloud Tasks does indeed meet your requirement. Cloud Tasks lets you separate out pieces of work that can be performed independently, outside of your main application flow, and send them off to be processed asynchronously using handlers that you create. Let me know if you have more questions about adapting Cloud Tasks to your current system structure.
Since your GCE instance is started by a Cloud Function, Cloud Tasks can call that Cloud Function through its HTTP endpoint with a public IP address, so your Cloud Tasks are HTTP targets. You can follow the Google Cloud Tasks guide Creating HTTP Target tasks for the explanation and sample code.
GCP also provides the tutorial Using Cloud Tasks to trigger Cloud Functions, whose system design matches your current requirement exactly.
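For illustration, here is a minimal sketch of enqueueing such an HTTP-target task with the Node.js client (@google-cloud/tasks). This is my own example, not taken from the tutorials above; PROJECT, LOCATION, QUEUE and FUNCTION_URL are placeholders for your own values:
const { CloudTasksClient } = require('@google-cloud/tasks');

async function enqueueStartJob(payload) {
  const client = new CloudTasksClient();
  const parent = client.queuePath('PROJECT', 'LOCATION', 'QUEUE');
  const task = {
    httpRequest: {
      httpMethod: 'POST',
      url: 'FUNCTION_URL', // HTTP-triggered function (or service) that runs one batch
      headers: { 'Content-Type': 'application/json' },
      body: Buffer.from(JSON.stringify(payload)).toString('base64'),
    },
  };
  // With the queue's maxConcurrentDispatches set to 1, only one task is dispatched
  // at a time, so a new batch is only handed out once the previous handler has responded.
  const [response] = await client.createTask({ parent, task });
  console.log(`Created task ${response.name}`);
}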

Related

How to move from Firebase Functions to Cloud Run after encountering 540s timeout limit?

I was reading this Reddit thread where a user mentioned that 540s is the limit of Firebase Functions and that moving to Cloud Run was recommended.
As others have said, 540s is the maximum timeout, and if you want to increase it without changing much else about your code, consider moving to Cloud Run. - @samtstern on Reddit
After looking at the Node.JS QuickStart documentation
and other content on YouTube and Google, I did not find a good guide explaining how to move your Firebase Function to Cloud Run.
One issue, for example, that was not addressed by what I read: what do I replace the firebase-functions package with to define the function? Etc...
So, how can I move my Firebase Function over to Cloud Run so that I don't run into the 540s max timeout limitation?
const functions = require('firebase-functions');
const runtimeOpts = {timeoutSeconds: 540, memory: '2GB'}
exports.hourlyData = functions.runWith(runtimeOpts).pubsub.schedule('every 1 hours')
Preface: The following steps have been generalized for a wider audience than just the OP's problem (covers HTTP Event, Scheduled and Pub/Sub Functions) and have been adapted from the documentation linked in the question: Deploying Node.JS Images on Cloud Run.
Step 0: Code/Architecture Review
More often than not, exceeding the 9-minute timeout of a Cloud Function is a result of a bug in your code - make sure to evaluate this before switching to Cloud Run as this will just make the problem worse. The most common of these is sequential instead of parallelized asynchronous processing (normally caused by using await in a for/while loop).
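As a quick illustration of that pattern (processUser and userIds are illustrative names):
// Sequential: each iteration waits for the previous one to finish (slow, can hit the timeout).
for (const id of userIds) {
  await processUser(id);
}

// Parallelized: start all of the work, then wait for it once.
await Promise.all(userIds.map((id) => processUser(id)));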
If your code is doing meaningful work that is taking a long time, consider sharding it out to "subfunctions" that can all work on the input data in parallel. Instead of processing data for every user in your database, you can use a single function to trigger multiple instances of a function that each take care of a different user ID range, such as a-l\uf8ff, m-z\uf8ff, A-L\uf8ff, M-Z\uf8ff and 0-9\uf8ff (\uf8ff is a very high Unicode code point, commonly used as an upper bound in range queries).
Lastly, Cloud Run and Cloud Functions are quite similar: both are designed to take a request, process it and then return a response. Cloud Functions have a timeout limit of up to 9 minutes and Cloud Run services have a limit of up to 60 minutes. Once that response has been completed (because the server ended the response, the client lost connection or the client aborted the request), the instance is severely throttled or terminated. While you can use WebSockets and gRPC for persistent communication between server and client when using Cloud Run, they are still subject to this limitation. See the Cloud Run: General development tips documentation for more information.
Like other serverless solutions, your client and server need to be able to handle connecting to different instances. Your code shouldn't make use of local state (like a local store for session data). See the Setting request timeout documentation for more information.
Step 1: Install Google Cloud SDK
I'll refer you to the Installing Google Cloud SDK documentation for this step.
Once installed, call gcloud auth login and login with the account used for the target Firebase project.
Step 2: Get your Firebase Project settings
Open up your project settings in the Firebase Console and take note of your Project ID and your Default GCP resource location.
Firebase Functions and Cloud Run instances should be co-located with your GCP resources where possible. In Firebase Functions, this is achieved by changing the region in code and deploying using the CLI. For Cloud Run, you specify these parameters on the command line as flags (or use the Google Cloud Console). For the instructions below, and for simplicity, I will be using us-central1, as my Default GCP resource location is nam5 (us-central).
If using the Firebase Realtime Database in your project, visit your RTDB settings in the Firebase Console and take note of your Database URL. This is usually of the form https://PROJECT_ID.firebaseio.com/.
If using Firebase Storage in your project, visit your Cloud Storage settings in the Firebase Console and take note of your Bucket URI. From this URI, we need to take note of the host (ignore the gs:// part) which is usually of the form PROJECT_ID.appspot.com.
Here's a table that you can copy to help keep track:
Project ID: PROJECT_ID
Database URL: https://PROJECT_ID.firebaseio.com
Storage Bucket: PROJECT_ID.appspot.com
Default GCP Resource Location: (fill in from Step 2)
Chosen Cloud Run Region: (fill in, e.g. us-central1)
Step 3: Create Directories
In your Firebase Project directory or a directory of your choosing, create a new cloudrun folder.
Unlike Firebase Cloud Functions, where you can define multiple functions in a single module of code, each Cloud Run image uses its own module of code. For this reason, each Cloud Run image should be stored in its own directory.
As we are going to define a Cloud Run instance called helloworld, we'll create a directory called helloworld inside cloudrun.
mkdir cloudrun
mkdir cloudrun/helloworld
cd cloudrun/helloworld
Step 4: Create package.json
For correct deployment of the Cloud Run image, we need to provide a package.json that is used to install dependencies in the deployed container.
The format of the package.json file resembles:
{
  "name": "SERVICE_NAME",
  "description": "",
  "version": "1.0.0",
  "private": true,
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "image": "gcloud builds submit --tag gcr.io/PROJECT_ID/SERVICE_NAME --project PROJECT_ID",
    "deploy:public": "gcloud run deploy SERVICE_NAME --image gcr.io/PROJECT_ID/SERVICE_NAME --allow-unauthenticated --region REGION_ID --project PROJECT_ID",
    "deploy:private": "gcloud run deploy SERVICE_NAME --image gcr.io/PROJECT_ID/SERVICE_NAME --no-allow-unauthenticated --region REGION_ID --project PROJECT_ID",
    "describe": "gcloud run services describe SERVICE_NAME --region REGION_ID --project PROJECT_ID --platform managed",
    "find": "gcloud run services describe SERVICE_NAME --region REGION_ID --project PROJECT_ID --platform managed --format='value(status.url)'"
  },
  "engines": {
    "node": ">= 12.0.0"
  },
  "author": "You",
  "license": "Apache-2.0",
  "dependencies": {
    "express": "^4.17.1",
    "body-parser": "^1.19.0",
    /* ... */
  },
  "devDependencies": {
    /* ... */
  }
}
In the above file, SERVICE_NAME, REGION_ID and PROJECT_ID are to be swapped out as appropriate with the details from step 2. We also install express and body-parser to handle the incoming request.
There are also a handful of module scripts to help with deployment.
image - Submits the image to Cloud Build to be added to the Container Registry for other commands.
deploy:public - Deploys the image from the above command to be used by Cloud Run (while allowing any requester to invoke it) and returns its service URL (which is partly randomized).
deploy:private - Deploys the image from the above command to be used by Cloud Run (while requiring that the requester that invokes it is an authorized user/service account) and returns its service URL (which is partly randomized).
describe - Gets the statistics & configuration of the deployed Cloud Run service.
find - Extracts only the service URL from the response of npm run describe.
Note: Here, "Authorized User" refers to a Google Account associated with the project, not an ordinary Firebase User. To allow a Firebase User to invoke your Cloud Run, you must deploy it using deploy:public and handle token validation in your Cloud Run's code, rejecting requests appropriately.
As an example of this file filled in, you get this:
{
  "name": "helloworld",
  "description": "Simple hello world sample in Node with Firebase",
  "version": "1.0.0",
  "private": true,
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "image": "gcloud builds submit --tag gcr.io/com-example-cloudrun/helloworld --project com-example-cloudrun",
    "deploy:public": "gcloud run deploy helloworld --image gcr.io/com-example-cloudrun/helloworld --allow-unauthenticated --region us-central1 --project com-example-cloudrun",
    "deploy:private": "gcloud run deploy helloworld --image gcr.io/com-example-cloudrun/helloworld --no-allow-unauthenticated --region us-central1 --project com-example-cloudrun",
    "describe": "gcloud run services describe helloworld --region us-central1 --project com-example-cloudrun --platform managed",
    "find": "gcloud run services describe helloworld --region us-central1 --project com-example-cloudrun --platform managed --format='value(status.url)'"
  },
  "engines": {
    "node": ">= 12.0.0"
  },
  "author": "You",
  "license": "Apache-2.0",
  "dependencies": {
    /* ... */
  },
  "devDependencies": {
    /* ... */
  }
}
Step 5: Create your container files
To tell Cloud Build what container to use for your Cloud Run image, you must create a Dockerfile for your image. To prevent sending the wrong files to the server, you should also specify a .dockerignore file.
In this file, we use the Firebase Project settings from Step 2 to recreate the process.env.FIREBASE_CONFIG environment variable. This variable is used by the Firebase Admin SDK and contains the following information as a JSON string:
{
  "databaseURL": "https://PROJECT_ID.firebaseio.com",
  "storageBucket": "PROJECT_ID.appspot.com",
  "projectId": "PROJECT_ID"
}
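For reference, with that variable set the Admin SDK can then be initialised in your index.js without passing any options (a sketch, assuming firebase-admin is among your dependencies):
const admin = require('firebase-admin');

// Picks up the project settings from FIREBASE_CONFIG and the service account
// credentials from the Cloud Run runtime (Application Default Credentials).
admin.initializeApp();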
Here is cloudrun/helloworld/Dockerfile:
# Use the official lightweight Node.js 14 image.
# https://hub.docker.com/_/node
FROM node:14-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
# Define default configuration for Admin SDK
# databaseURL is usually "https://PROJECT_ID.firebaseio.com", but may be different.
# TODO: Update me
ENV FIREBASE_CONFIG={"databaseURL":"https://PROJECT_ID.firebaseio.com","storageBucket":"PROJECT_ID.appspot.com","projectId":"PROJECT_ID"}
# Run the web service on container startup.
CMD [ "node", "index.js" ]
Here is cloudrun/helloworld/.dockerignore:
Dockerfile
.dockerignore
node_modules
npm-debug.log
Step 6: Create & deploy your entry point
When a new Cloud Run instance is launched, it will normally specify the port it wants your code to listen on using the PORT environment variable.
Variant: Migrating a HTTP Event Function
When you use a HTTP Event function from the firebase-functions package, it internally handles body-parsing on your behalf. The Functions Framework uses the body-parser package for this and defines the parsers here.
To handle user authorization, you could use this validateFirebaseIdToken() middleware to check the ID token given with the request.
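If you don't want to pull that middleware in wholesale, a minimal sketch of the same idea looks like this (assuming the firebase-admin package and an Authorization: Bearer <ID token> header; the middleware name is illustrative):
const admin = require('firebase-admin');
admin.initializeApp(); // uses FIREBASE_CONFIG from the Dockerfile

// Express middleware: reject requests that don't carry a valid Firebase ID token.
async function requireFirebaseUser(req, res, next) {
  const header = req.get('Authorization') || '';
  const idToken = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!idToken) {
    return res.status(401).send('Missing Authorization: Bearer <ID token> header');
  }
  try {
    req.user = await admin.auth().verifyIdToken(idToken); // decoded token (uid, claims, ...)
    return next();
  } catch (err) {
    return res.status(403).send('Invalid or expired ID token');
  }
}

// Usage: app.get('/', requireFirebaseUser, (req, res) => { /* ... */ });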
For a HTTP-based Cloud Run, configuring CORS will be required to invoke it from a browser. This can be done by installing the cors package and configuring it appropriately. In the below sample, cors will reflect the origin sent to it.
const express = require('express');
const cors = require('cors')({ origin: true });

const app = express();
app.use(cors);

// To replicate a Cloud Function's body parsing, refer to
// https://github.com/GoogleCloudPlatform/functions-framework-nodejs/blob/d894b490dda7c5fd4690cac884fd9e41a08b6668/src/server.ts#L47-L95
// app.use(/* body parsers */);

app.enable('trust proxy'); // To respect X-Forwarded-For header. (Cloud Run is behind a load balancer proxy)
app.disable('x-powered-by'); // Disables the 'x-powered-by' header added by express (best practice)

// Start of your handlers
app.get('/', (req, res) => {
  const name = process.env.NAME || 'World';
  res.send(`Hello ${name}!`);
});
// End of your handlers

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`helloworld: listening on port ${port}`);
});
In the $FIREBASE_PROJECT_DIR/cloudrun/helloworld directory, execute the following commands to deploy your image:
npm run image          # builds the container & stores it in the container registry
npm run deploy:public  # deploys the container image to Cloud Run
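Once deployed, you can grab the service URL and smoke-test it; the URL shape below is illustrative:
npm run find
# -> https://helloworld-<random-hash>-uc.a.run.app
curl https://helloworld-<random-hash>-uc.a.run.app/
# -> Hello World!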
Variant: Invoke using Cloud Scheduler
When invoking a Cloud Run using the Cloud Scheduler, you can choose which method is used to invoke it (GET, POST (the default), PUT, HEAD, DELETE). To replicate a Cloud Function's data and context parameters, it is best to use POST as these will then be passed in the body of the request. Like Firebase Functions, these requests from Cloud Scheduler may be retried so make sure to handle idempotency appropriately.
Note: Even though the body of a Cloud Scheduler invocation request is JSON-formatted, the request is served with Content-Type: text/plain, which we need to handle.
This code has been adapted from the Functions Framework source (Google LLC, Apache 2.0)
const express = require('express');
const { json } = require('body-parser');

async function handler(data, context) {
  /* your logic here */
  const name = process.env.NAME || 'World';
  console.log(`Hello ${name}!`);
}

const app = express();

// Cloud Scheduler requests contain JSON using "Content-Type: text/plain"
app.use(json({ type: '*/*' }));

app.enable('trust proxy'); // To respect X-Forwarded-For header. (Cloud Run is behind a load balancer proxy)
app.disable('x-powered-by'); // Disables the 'x-powered-by' header added by express (best practice)

app.post('/*', (req, res) => {
  const event = req.body;
  let data = event.data;
  let context = event.context;

  if (context === undefined) {
    // Support legacy events and CloudEvents in structured content mode, with
    // context properties represented as event top-level properties.
    // Context is everything but data.
    context = event;
    // Clear the property before removing field so the data object
    // is not deleted.
    context.data = undefined;
    delete context.data;
  }

  Promise.resolve()
    .then(() => handler(data, context))
    .then(
      () => {
        // finished without error
        // the return value of `handler` is ignored because
        // this isn't a callable function
        res.sendStatus(204); // No content
      },
      (err) => {
        // handler threw error
        console.error(err.stack);
        res.set('X-Google-Status', 'error');
        // Send back the error's message (as calls to this endpoint
        // are authenticated project users/service accounts)
        res.send(err.message);
      }
    );
});
Note: The Functions Framework handles errors by sending back a HTTP 200 OK response with a X-Google-Status: error header. This effectively means "failed successfully". As an outsider, I'm not sure why this is done but I can assume it's so that the invoker knows to not bother retrying the function - it'll just get the same result.
In the $FIREBASE_PROJECT_DIR/cloudrun/helloworld directory, execute the following commands to deploy your image:
npm run image           # builds the container & stores it in the container registry
npm run deploy:private  # deploys the container image to Cloud Run
Note: In the following setup commands (only need to run these once), PROJECT_ID, SERVICE_NAME, SERVICE_URL and IAM_ACCOUNT will need to be substituted as appropriate.
Next we need to create a service account that Cloud Scheduler can use to invoke the Cloud Run. You can call it whatever you want such as scheduled-run-invoker. The email of this service account will be referred to as IAM_ACCOUNT in the next step. This Google Cloud Tech YouTube video (starts at the right spot, about 15s) will quickly show what you need to do. Once you've created the account, you can create the Cloud Scheduler job following the next 30 or so seconds of the video or use the following command:
gcloud scheduler jobs create http scheduled-run-SERVICE_NAME \
  --schedule="every 1 hours" \
  --uri SERVICE_URL \
  --attempt-deadline 60m \
  --http-method post \
  --message-body='{"optional-custom-data":"here","if-you":"want"}' \
  --oidc-service-account-email IAM_ACCOUNT \
  --project PROJECT_ID
Your Cloud Run should now be scheduled.
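To verify the wiring without waiting for the next scheduled run, you can force an execution of the job (assuming the job name used above):
gcloud scheduler jobs run scheduled-run-SERVICE_NAME --project PROJECT_ID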
Variant: Invoke using Pub/Sub
To my understanding, the deploy process is the same as for a scheduled run (deploy:private) but I'm unsure about the specifics. However, here is the Cloud Run source for a Pub/Sub parser:
This code has been adapted from the Functions Framework source (Google LLC, Apache 2.0)
const express = require('express');
const { json } = require('body-parser');

const PUBSUB_EVENT_TYPE = 'google.pubsub.topic.publish';
const PUBSUB_MESSAGE_TYPE =
  'type.googleapis.com/google.pubsub.v1.PubsubMessage';
const PUBSUB_SERVICE = 'pubsub.googleapis.com';

/**
 * Extract the Pub/Sub topic name from the HTTP request path.
 * @param path the URL path of the http request
 * @returns the Pub/Sub topic name if the path matches the expected format,
 * null otherwise
 */
const extractPubSubTopic = (path) => {
  const parsedTopic = path.match(/projects\/[^/?]+\/topics\/[^/?]+/);
  if (parsedTopic) {
    return parsedTopic[0];
  }
  console.warn('Failed to extract the topic name from the URL path.');
  console.warn(
    "Configure your subscription's push endpoint to use the following path: ",
    'projects/PROJECT_NAME/topics/TOPIC_NAME'
  );
  return null;
};

async function handler(message, context) {
  /* your logic here */
  const name = message.json.name || message.json || 'World';
  console.log(`Hello ${name}!`);
}

const app = express();

// Cloud Scheduler requests contain JSON using "Content-Type: text/plain"
app.use(json({ type: '*/*' }));

app.enable('trust proxy'); // To respect X-Forwarded-For header. (Cloud Run is behind a load balancer proxy)
app.disable('x-powered-by'); // Disables the 'x-powered-by' header added by express (best practice)

app.post('/*', (req, res) => {
  const body = req.body;
  if (!body) {
    res.status(400).send('no Pub/Sub message received');
    return;
  }
  if (typeof body !== 'object' || body.message === undefined) {
    res.status(400).send('invalid Pub/Sub message format');
    return;
  }

  const context = {
    eventId: body.message.messageId,
    timestamp: body.message.publishTime || new Date().toISOString(),
    eventType: PUBSUB_EVENT_TYPE,
    resource: {
      service: PUBSUB_SERVICE,
      type: PUBSUB_MESSAGE_TYPE,
      name: extractPubSubTopic(req.path),
    },
  };

  // for storing parsed form of body.message.data
  let _jsonData = undefined;
  const data = {
    '@type': PUBSUB_MESSAGE_TYPE,
    data: body.message.data,
    attributes: body.message.attributes || {},
    get json() {
      if (_jsonData === undefined) {
        // body.message.data is the base64-encoded message payload
        const decodedString = Buffer.from(body.message.data, 'base64')
          .toString('utf8');
        try {
          _jsonData = JSON.parse(decodedString);
        } catch (parseError) {
          // fallback to raw string
          _jsonData = decodedString;
        }
      }
      return _jsonData;
    }
  };

  Promise.resolve()
    .then(() => handler(data, context))
    .then(
      () => {
        // finished without error
        // the return value of `handler` is ignored because
        // this isn't a callable function
        res.sendStatus(204); // No content
      },
      (err) => {
        // handler threw error
        console.error(err.stack);
        res.set('X-Google-Status', 'error');
        // Send back the error's message (as calls to this endpoint
        // are authenticated project users/service accounts)
        res.send(err.message);
      }
    );
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`helloworld: listening on port ${port}`);
});
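One way to wire a topic to this service (my assumption, mirroring the Scheduler setup above) is a push subscription that authenticates with the same service account. SUBSCRIPTION_NAME, TOPIC_NAME, SERVICE_URL and IAM_ACCOUNT are placeholders; note the path suffix matches what extractPubSubTopic() expects:
gcloud pubsub subscriptions create SUBSCRIPTION_NAME \
  --topic TOPIC_NAME \
  --push-endpoint "SERVICE_URL/projects/PROJECT_ID/topics/TOPIC_NAME" \
  --push-auth-service-account IAM_ACCOUNT \
  --project PROJECT_ID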

Robot framework Keyword is not identified when java.lang and java.util are imported in python file

I'm new to Robot Framework and Python, and I'm working on using Robot Framework with Jython for TI Code Composer Debug Server Scripting operations.
Using: Jython 2.7.2, Robot Framework, PyCharm 2020.3, Python 3.7.
Path environment variables are added as below:
C:\ti\ccs930\ccs\ccs_base\DebugServer\packages\ti\dss\java\dss.jar
C:\ti\ccs930\ccs\ccs_base\DebugServer\packages\ti\dss\java\com.ti.ccstudio.scripting.environment_3.1.0.jar
C:\ti\ccs930\ccs\ccs_base\DebugServer\packages\ti\dss\java\com.ti.debug.engine_1.0.0.jar
C:\Program Files\Java\jdk-15.0.1\bin
C:\Program Files (x86)\Python37-32
C:\Program Files (x86)\Python37-32\Scripts
Code snippets are as below:
Emulator.py:
#*****************************************************
from java.lang import *
from java.util import *
from com.ti.debug.engine.scripting import *
from com.ti.ccstudio.scripting.environment import *
from decimal import *
#*****************************************************


def CreateEnvironment():
    # Create our scripting environment object - which is the main entry point into any script and
    # the factory for creating other Scriptable Servers and Sessions
    script = ScriptingEnvironment.instance()

    # Create a log file in the current directory to log script execution
    script.traceBegin("BreakpointsTestLog_python.xml", "DefaultStylesheet.xsl")

    # Set our TimeOut
    script.setScriptTimeout(100000)

    # Log everything
    script.traceSetConsoleLevel(TraceLevel.ALL)
    script.traceSetFileLevel(TraceLevel.ALL)

    # Start up CCS
    ccsServer = script.getServer("CCSServer.1")
    ccsSession = ccsServer.openSession(".*")
    print("Creating Environment...")

    # Get the Debug Server and start a Debug Session
    debugServer = script.getServer("DebugServer.1")
    return debugServer, script, ccsServer, ccsSession
#****************************************************
EmulatorTest.robot:
*** Settings ***
Library    Emulator.py

*** Variables ***

*** Test Cases ***
Emulator Test functionality
    [Documentation]    TEST DESCRIPTION:
    ...
    ...    Verify that the Test script can launch target configuration and
    ...    connect to target, create a debug server and hit and verify breakpoint.
    [Tags]    TC-EmulatorTest-001
    CreateCCSEnvironment

*** Keywords ***
CreateCCSEnvironment
    CreateEnvironment
I am able to execute the Python file successfully to create the Code Composer environment for DSS, but I was not able to do the same using Robot Framework.

How to send trace data to Jaeger through OpenTelemetry in front end app?

Background
I am trying to trace in a front end app.
I am not able to use @opentelemetry/exporter-jaeger since I believe it is for Node.js back end apps only.
So I am trying to use @opentelemetry/exporter-collector.
1. Succeeded in printing to the browser console
First I tried to print the trace data in the browser console, and the code below succeeds in printing it.
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
import { DocumentLoad } from '@opentelemetry/plugin-document-load';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';
import { WebTracerProvider } from '@opentelemetry/web';
const provider = new WebTracerProvider({ plugins: [new DocumentLoad()] });
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();
2. Failed to forward to Jaeger
Now I want to forward them to Jaeger.
I am running Jaeger all-in-one by
docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:1.18
Based on the Jaeger port document, I might be able to use these two ports (if other ports work, that will be great too!):
14250 HTTP collector accept model.proto
9411 HTTP collector Zipkin compatible endpoint (optional)
Then I further found more info about this port:
Zipkin Formats (stable)
Jaeger Collector can also accept spans in several Zipkin data format,
namely JSON v1/v2 and Thrift. The Collector needs to be configured to
enable Zipkin HTTP server, e.g. on port 9411 used by Zipkin
collectors. The server enables two endpoints that expect POST
requests:
/api/v1/spans for submitting spans in Zipkin JSON v1 or Zipkin Thrift format.
/api/v2/spans for submitting spans in Zipkin JSON v2.
I updated my codes to
import { CollectorTraceExporter, CollectorProtocolNode } from '@opentelemetry/exporter-collector';
import { DocumentLoad } from '@opentelemetry/plugin-document-load';
import { SimpleSpanProcessor } from '@opentelemetry/tracing';
import { WebTracerProvider } from '@opentelemetry/web';

const provider = new WebTracerProvider({ plugins: [new DocumentLoad()] });

// The config below currently has an issue
const exporter = new CollectorTraceExporter({
  serviceName: 'my-service',
  protocolNode: CollectorProtocolNode.HTTP_JSON,
  url: 'http://localhost:9411/api/v1/spans', // Also tried v2
});

provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
However, I got a Bad Request for both the v1 and v2 endpoints, with no response body returned:
POST http://localhost:9411/api/v1/spans 400 (Bad Request)
POST http://localhost:9411/api/v2/spans 400 (Bad Request)
Any idea how I can make the request format correct? Thanks.
UPDATE (8/19/2020)
I think Andrew is right that I should use the OpenTelemetry Collector. I also got some help from Valentin Marchaud and Deniz Gurkaynak on Gitter. I'm adding the link here for other people who run into the same issue:
https://gitter.im/open-telemetry/opentelemetry-node?at=5f3aa9481226fc21335ce61a
My final working solution posted at https://stackoverflow.com/a/63489195/2000548
The thing is, you should use the OpenTelemetry Collector if you use the OpenTelemetry exporter.
Please see the schema in the attachment.
I also created a gist that will help you with the setup:
https://gist.github.com/AndrewGrachov/11a18bc7268e43f1a36960d630a0838f
(just tune the values, and export to jaeger-all-in-one instead of a separate collector + Cassandra, etc.)
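For context, a minimal OpenTelemetry Collector pipeline along those lines might look roughly like this (a sketch only; option names vary between collector versions, so treat the gist above as the source of truth):
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]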

Webdriver.io crashes with NoSessionIdError

I'm trying to get webdriver.io and Jasmine working.
Following their example, my script is at test/specs/first/test2.js (in accordance with the configuration) and contains:
var webdriverio = require('webdriverio');

describe('my webdriverio tests', function() {
  var client = {};
  jasmine.DEFAULT_TIMEOUT_INTERVAL = 9999999;

  beforeEach(function() {
    client = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
    client.init();
  });

  it('test it', function(done) {
    client
      .url("http://localhost:3000/")
      .waitForVisible("h2.btn.btn-primary")
      .click("h2.btn.btn-primary")
      .waitForVisible("h2.btn.btn-primary")
      .call(done);
  });

  afterEach(function(done) {
    client.end(done);
  });
});
I'm using wdio as the test runner, and set it up using the interactive setup. That config is automatically-generated and all pretty straightforward, so I don't see a need to post it.
In another terminal window, I am running selenium-server-standalone-2.47.1.jar with Java 7. I do have Firefox installed on my computer (it blankly starts when the test is run), and my computer is running OS X 10.10.5.
This is what happens when I start the test runner:
$ wdio wdio.conf.js
=======================================================================================
Selenium 2.0/webdriver protocol bindings implementation with helper commands in nodejs.
For a complete list of commands, visit http://webdriver.io/docs.html.
=======================================================================================
[18:17:22]: SET SESSION ID 46731149-79aa-412e-b9b5-3d32e75dbc8d
[18:17:22]: RESULT {"platform":"MAC","javascriptEnabled":true,"acceptSslCerts":true,"browserName":"firefox","rotatable":false,"locationContextEnabled":true,"webdriver.remote.sessionid":"46731149-79aa-412e-b9b5-3d32e75dbc8d","version":"40.0.3","databaseEnabled":true,"cssSelectorsEnabled":true,"handlesAlerts":true,"webStorageEnabled":true,"nativeEvents":false,"applicationCacheEnabled":true,"takesScreenshot":true}
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
at waitForVisible("h2.btn.btn-primary") - test2.js:21:14
/usr/local/lib/node_modules/webdriverio/node_modules/q/q.js:141
throw e;
^
NoSessionIdError: A session id is required for this command but wasn't found in the response payload
0 passing (3.90s)
$
I find this very strange and inexplicable, especially considering that it even prints the session ID.
Any ideas?
Please check out the docs on the wdio test runner. You don't need to create an instance using init on your own. The wdio test runner takes care of creating and ending the session for you.
Your example covers the standalone WebdriverIO usage (without the test runner). You can find examples which use wdio here.
To clarify that: there are two ways of using WebdriverIO. You can embed it in your test system by yourself (using it standalone, or as a scraper). Then you need to take care of things like creating and ending an instance or running tests in parallel. The other way to use WebdriverIO is via its test runner, called wdio. The test runner takes a config file with a bunch of information about your test setup, spawns instances, updates job information on Sauce Labs and so on.
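To illustrate the difference, a spec written for the wdio runner looks roughly like this (a sketch for the synchronous, v4-era API, where the runner provides a global browser object and owns the session):
// test/specs/first/test2.js - no init()/end(): the wdio runner manages the session.
describe('my webdriverio tests', function() {
  it('test it', function() {
    browser.url('http://localhost:3000/');
    browser.waitForVisible('h2.btn.btn-primary');
    browser.click('h2.btn.btn-primary');
    browser.waitForVisible('h2.btn.btn-primary');
  });
});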
Every WebDriver command gets executed asynchronously.
You properly called the done callback in afterEach and in your it test, but forgot to do it in beforeEach:
beforeEach(function(done) {
  client = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
  client.init(done);
});

How to use redis PUBLISH/SUBSCRIBE with nodejs to notify clients when data values change?

I'm writing an event-driven publish/subscribe application with NodeJS and Redis. I need an example of how to notify web clients when the data values in Redis change.
OLD (only use as a reference)
Dependencies
Uses express, socket.io, node_redis and, last but not least, the sample code from MediaFire.
Install node.js + npm (as non-root)
First you should (if you have not done this yet) install node.js + npm in 30 seconds (the right way, because you should NOT run npm as root):
echo 'export PATH=$HOME/local/bin:$PATH' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install # ok, fine, this step probably takes more than 30 seconds...
curl http://npmjs.org/install.sh | sh
Install dependencies
After you installed node+npm you should install dependencies by issuing:
npm install express
npm install socket.io
npm install hiredis redis # hiredis to use c binding for redis => FAST :)
Download sample
You can download complete sample from mediafire.
Unzip package
unzip pbsb.zip # can also do via graphical interface if you prefer.
What's inside zip
./app.js
const PORT = 3000;
const HOST = 'localhost';

var express = require('express');

var app = module.exports = express.createServer();
app.use(express.staticProvider(__dirname + '/public'));

const redis = require('redis');
const client = redis.createClient();

const io = require('socket.io');

if (!module.parent) {
  app.listen(PORT, HOST);
  console.log("Express server listening on port %d", app.address().port);

  const socket = io.listen(app);

  socket.on('connection', function(client) {
    const subscribe = redis.createClient();
    subscribe.subscribe('pubsub'); // listen to messages from channel pubsub

    subscribe.on("message", function(channel, message) {
      client.send(message);
    });

    client.on('message', function(msg) {
    });

    client.on('disconnect', function() {
      subscribe.quit();
    });
  });
}
./public/index.html
<html>
  <head>
    <title>PubSub</title>
    <script src="/socket.io/socket.io.js"></script>
    <script src="/javascripts/jquery-1.4.3.min.js"></script>
  </head>
  <body>
    <div id="content"></div>
    <script>
      $(document).ready(function() {
        var socket = new io.Socket('localhost', {port: 3000, rememberTransport: false/*, transports: ['xhr-polling']*/});
        var content = $('#content');

        socket.on('connect', function() {
        });

        socket.on('message', function(message) {
          content.prepend(message + '<br />');
        });

        socket.on('disconnect', function() {
          console.log('disconnected');
          content.html("<b>Disconnected!</b>");
        });

        socket.connect();
      });
    </script>
  </body>
</html>
Start server
cd pbsb
node app.js
Start browser
It's best if you start Google Chrome (because of WebSocket support, though this is not necessary). Visit http://localhost:3000 to see the sample (in the beginning you don't see anything but PubSub as the title).
When you publish to the channel pubsub you should see a message. Below we publish "Hello world!" to the browser.
From ./redis-cli
publish pubsub "Hello world!"
Here's a simplified example without as many dependencies.
You do still need to npm install hiredis redis
The node JavaScript:
var redis = require("redis"),
    client = redis.createClient();

client.subscribe("pubsub");
client.on("message", function(channel, message) {
  console.log(channel + ": " + message);
});
...put that in a pubsub.js file and run node pubsub.js
in redis-cli:
redis> publish pubsub "Hello Wonky!"
(integer) 1
which should display: pubsub: Hello Wonky! in the terminal running node!
Congrats!
Additional note (4/23/2013): I also want to point out that when a client subscribes to a pub/sub channel it goes into subscriber mode and is limited to subscriber commands. You'll just need to create additional Redis client instances, e.g. client1 = redis.createClient() and client2 = redis.createClient(), so one can be in subscriber mode and the other can issue regular DB commands.
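A small sketch of that two-client setup (channel and key names are illustrative):
var redis = require('redis');
var sub = redis.createClient(); // subscriber connection: limited to SUBSCRIBE-family commands
var pub = redis.createClient(); // second connection for PUBLISH and regular commands

sub.subscribe('pubsub');
sub.on('message', function(channel, message) {
  console.log(channel + ': ' + message);
});

pub.publish('pubsub', 'Hello Wonky!');
pub.set('last-message', 'Hello Wonky!'); // regular DB commands still work on this client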
Complete Redis Pub/Sub Example (Real-time Chat using Hapi.js & Socket.io)
We were trying to understand Redis Publish/Subscribe ("Pub/Sub") and all the existing examples were either outdated, too simple or had no tests.
So we wrote a Complete Real-time Chat using Hapi.js + Socket.io + Redis Pub/Sub Example with End-to-End Tests!
https://github.com/dwyl/hapi-socketio-redis-chat-example
The Pub/Sub component is only a few lines of node.js code:
https://github.com/dwyl/hapi-socketio-redis-chat-example/blob/master/lib/chat.js#L33-L40
Rather than pasting it here (without any context) we encourage you to checkout/try the example.
We built it using Hapi.js but the chat.js file is de-coupled from Hapi and can easily be used with a basic node.js http server or express (etc.)
Handle Redis errors to stop Node.js from exiting. You can do this by writing:
subscribe.on("error", function() {
  // Deal with the error
});
I think you get the exception because you are using the same client that is subscribed to also publish messages. Create a separate client for publishing messages and that could solve your problem.
Check out acani-node on GitHub, especially the file acani-node-server.js. If these links are broken, look for acani-chat-server among acani's GitHub public repositories.
If you want to get this working with socket.io 0.7 AND an external webserver you need to change (besides the staticProvider -> static issue):
a) provide the domain name instead of localhost (i.e. var socket = io.connect('http://my.domain.com:3000'); ) in the index.html
b) change HOST in app.js (i.e. const HOST = 'my.domain.com'; )
c) and add sockets in line 37 of app.js (i.e. 'socket.sockets.on('connection', function(client) { …' )
Update to the code: staticProvider has now been renamed to static (see the migration guide).
Regarding @alex's solution: if you have an error like this one, as per @tyler's comment:
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: Redis connection to 127.0.0.1:6379 failed - ECONNREFUSED, Connection refused at Socket.
then you need to install Redis first. Check this out:
http://redis.io/download
