I am currently working on a weather station that gets data from OpenWeatherMap's API every 10 minutes.
Every 10 seconds the temperature is published via MQTT on the topic 'local/temperature', so that other systems (for example a heater or air conditioner) can take further action depending on the temperature.
Every 10 minutes, in parallel with the new data retrieval, the weather conditions are also published, likewise via MQTT.
Publishing the data every 10 seconds is a requirement of the project, but not important for this case.
The problem I'm stuck on is this: my request to the OWM API lives in a separate file, which contains a function that should return the data as an object. At the same time the data is stored in a file, so that in case of a network failure the last local state is saved and can still be used.
Writing to the file already works; the 'read when offline' functionality will be added later. I have also noticed that the assembleURL() function is actually unnecessary, but I haven't changed that yet.
I'm still relatively new to JavaScript / Node.js, but I already have experience in Java and Python, so it may be that I have mixed in something from Java by mistake.
Can someone please explain to me why the object I return in openWeatherMapCall.js is undefined? I'm thankful for every hint.
My file weather-station.js that calls the function getData in openWeatherMapCall.js:
const mqtt = require('mqtt');
const owm = require('./lib/openWeatherMapCall');
const client = mqtt.connect('mqtt://localhost:1885');
const fs = require('fs');
const config = require('../config/config.json');
const owmConfig = config.owm;
let weatherData = owm.getData(owmConfig.city, owmConfig.owmapikey, owmConfig.lang, "metric");
console.log(weatherData); // -> it says undefined
setInterval(_ => {
  weatherData = owm.getData(owmConfig.city, owmConfig.owmapikey, owmConfig.lang, "metric");
  client.publish("local/condition", toString(weatherData.weatherID));
  console.log('successful publish of wID ' + weatherData.weatherID);
}, 600000); // 10 min

setInterval(_ => {
  client.publish("local/temperature", toString(weatherData.celsius));
  console.log('successful publish of ' + weatherData.celsius + ' celsius');
}, 10000); // 10 s
My OWM API call as openWeatherMapCall.js:
const fetch = require('node-fetch');
const fs = require('fs');
const util = require("util");
function assembleURL (city, apiKey, lang, units){
  let url = "http://api.openweathermap.org/data/2.5/weather?q=" + city + "&units=" + units + "&lang=" + lang + "&appid=" + apiKey;
  console.log("url: " + url);
  return url;
}
function getData(city, apiKey, lang, units){
  let url = assembleURL(city, apiKey, lang, units )
  fetch(url)
    .then(function(resp) { return resp.json() }) // Convert data to json
    .then(function(data) {
      var currentWeather = {
        weather: data.weather[0].description,
        weatherID: data.weather[0].id,
        celsius: Math.round(parseFloat(data.main.temp)),
        wind: data.wind.speed,
        location: data.name
      };
      let toString = JSON.stringify(currentWeather);
      fs.writeFile('../config/weather.json', toString, err => {
        if (err) {
          console.log('Error while writing', err)
        } else {
          console.log('Successful write')
        }
      })
      return currentWeather;
    })
    .catch( err => {
      console.log('caught it!', err);
    });
}
module.exports = {getData};
Return the fetch call from getData and use .then() on owm.getData, since fetch returns a Promise.
function getData(city, apiKey, lang, units){
  let url = assembleURL(city, apiKey, lang, units )
  return fetch(url)....
}
And
owm.getData(owmConfig.city, owmConfig.owmapikey, owmConfig.lang, "metric").then((weatherData) => {
  console.log(weatherData)
});
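Put together, a corrected getData could look like the sketch below. It reuses the names from the question and returns the promise chain so the caller can wait on it; as an aside (my own assumption, not from the question), it publishes with String(...) rather than a bare toString(...) call:

// openWeatherMapCall.js — return the promise chain
function getData(city, apiKey, lang, units) {
  let url = assembleURL(city, apiKey, lang, units);
  return fetch(url)
    .then(resp => resp.json())
    .then(data => {
      const currentWeather = {
        weather: data.weather[0].description,
        weatherID: data.weather[0].id,
        celsius: Math.round(parseFloat(data.main.temp)),
        wind: data.wind.speed,
        location: data.name
      };
      fs.writeFile('../config/weather.json', JSON.stringify(currentWeather), err => {
        if (err) console.log('Error while writing', err);
      });
      return currentWeather;
    });
}

// weather-station.js — wait for the promise before publishing
owm.getData(owmConfig.city, owmConfig.owmapikey, owmConfig.lang, "metric")
  .then(data => {
    weatherData = data;
    client.publish("local/condition", String(weatherData.weatherID));
  })
  .catch(err => console.log('OWM request failed', err));

Note that the periodic publishes still need to guard against weatherData being unset until the first request has resolved.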
I'm struggling to find documentation or examples of implementing an upload progress indicator using fetch.
This is the only reference I've found so far, which states:
Progress events are a high level feature that won't arrive in fetch for now. You can create your own by looking at the Content-Length header and using a pass-through stream to monitor the bytes received.
This means you can explicitly handle responses without a Content-Length differently. And of course, even if Content-Length is there it can be a lie. With streams you can handle these lies however you want.
How would I write "a pass-through stream to monitor the bytes" sent? If it makes any sort of difference, I'm trying to do this to power image uploads from the browser to Cloudinary.
NOTE: I am not interested in the Cloudinary JS library, as it depends on jQuery and my app does not. I'm only interested in the stream processing necessary to do this with native JavaScript and GitHub's fetch polyfill.
https://fetch.spec.whatwg.org/#fetch-api
Streams are starting to land in the web platform (https://jakearchibald.com/2016/streams-ftw/) but it's still early days.
Soon you'll be able to provide a stream as the body of a request, but the open question is whether the consumption of that stream relates to bytes uploaded.
Certain redirects can result in data being retransmitted to the new location, but streams cannot "restart". We can fix this by turning the body into a callback which can be called multiple times, but we need to be sure that exposing the number of redirects isn't a security leak, since it'd be the first time JS on the platform could detect that.
Some are questioning whether it even makes sense to link stream consumption to bytes uploaded.
Long story short: this isn't possible yet, but in future this will be handled either by streams, or some kind of higher-level callback passed into fetch().
My solution is to use axios, which supports this pretty well:
axios.request({
  method: "post",
  url: "/aaa",
  data: myData,
  onUploadProgress: (p) => {
    console.log(p);
    //this.setState({
    //  fileprogress: p.loaded / p.total
    //})
  }
}).then(data => {
  //this.setState({
  //  fileprogress: 1.0,
  //})
})
I have an example of using this in React on GitHub.
fetch: not possible yet
It sounds like upload progress will eventually be possible with fetch once it supports a ReadableStream as the body. This is currently not implemented, but it's in progress. I think the code will look something like this:
warning: this code does not work yet, still waiting on browsers to support it
async function main() {
  const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // any Blob, including a File
  const progressBar = document.getElementById("progress");
  const totalBytes = blob.size;
  let bytesUploaded = 0;

  const blobReader = blob.stream().getReader();
  const progressTrackingStream = new ReadableStream({
    async pull(controller) {
      const result = await blobReader.read();
      if (result.done) {
        console.log("completed stream");
        controller.close();
        return;
      }
      controller.enqueue(result.value);
      bytesUploaded += result.value.byteLength;
      console.log("upload progress:", bytesUploaded / totalBytes);
      progressBar.value = bytesUploaded / totalBytes;
    },
  });

  const response = await fetch("https://httpbin.org/put", {
    method: "PUT",
    headers: {
      "Content-Type": "application/octet-stream"
    },
    body: progressTrackingStream,
  });
  console.log("success:", response.ok);
}
main().catch(console.error);
upload: <progress id="progress" />
workaround: good ol' XMLHttpRequest
Instead of fetch(), it's possible to use XMLHttpRequest to track upload progress — the xhr.upload object emits a progress event.
async function main() {
  const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // any Blob, including a File
  const uploadProgress = document.getElementById("upload-progress");
  const downloadProgress = document.getElementById("download-progress");

  const xhr = new XMLHttpRequest();
  const success = await new Promise((resolve) => {
    xhr.upload.addEventListener("progress", (event) => {
      if (event.lengthComputable) {
        console.log("upload progress:", event.loaded / event.total);
        uploadProgress.value = event.loaded / event.total;
      }
    });
    xhr.addEventListener("progress", (event) => {
      if (event.lengthComputable) {
        console.log("download progress:", event.loaded / event.total);
        downloadProgress.value = event.loaded / event.total;
      }
    });
    xhr.addEventListener("loadend", () => {
      resolve(xhr.readyState === 4 && xhr.status === 200);
    });
    xhr.open("PUT", "https://httpbin.org/put", true);
    xhr.setRequestHeader("Content-Type", "application/octet-stream");
    xhr.send(blob);
  });
  console.log("success:", success);
}
main().catch(console.error);
upload: <progress id="upload-progress"></progress><br/>
download: <progress id="download-progress"></progress>
Update: as the accepted answer says, it's not possible with fetch yet, but the code below handled our problem for some time. I should add that we eventually had to switch to a library based on XMLHttpRequest.
const response = await fetch(url);
const total = Number(response.headers.get('content-length'));
const reader = response.body.getReader();
let bytesReceived = 0;
while (true) {
  const result = await reader.read();
  if (result.done) {
    console.log('Fetch complete');
    break;
  }
  bytesReceived += result.value.length;
  console.log('Received', bytesReceived, 'bytes of data so far');
}
thanks to this link: https://jakearchibald.com/2016/streams-ftw/
As already explained in the other answers, it is not possible with fetch, but with XHR. Here is my a-little-more-compact XHR solution:
const uploadFiles = (url, files, onProgress) =>
  new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.upload.addEventListener('progress', e => onProgress(e.loaded / e.total));
    xhr.addEventListener('load', () => resolve({ status: xhr.status, body: xhr.responseText }));
    xhr.addEventListener('error', () => reject(new Error('File upload failed')));
    xhr.addEventListener('abort', () => reject(new Error('File upload aborted')));
    xhr.open('POST', url, true);
    const formData = new FormData();
    Array.from(files).forEach((file, index) => formData.append(index.toString(), file));
    xhr.send(formData);
  });
Works with one or multiple files.
If you have a file input element like this:
<input type="file" multiple id="fileUpload" />
Call the function like this:
document.getElementById('fileUpload').addEventListener('change', async e => {
  const onProgress = progress => console.log('Progress:', `${Math.round(progress * 100)}%`);
  const response = await uploadFiles('/api/upload', e.currentTarget.files, onProgress);
  if (response.status >= 400) {
    throw new Error(`File upload failed - Status code: ${response.status}`);
  }
  console.log('Response:', response.body);
});
Also works with the e.dataTransfer.files you get from a drop event when building a file drop zone.
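For completeness, a drop-zone version could be wired up roughly like this (the element id and endpoint are made-up placeholders):

const dropZone = document.getElementById('drop-zone'); // hypothetical element id
dropZone.addEventListener('dragover', e => e.preventDefault()); // required so the drop event fires
dropZone.addEventListener('drop', async e => {
  e.preventDefault();
  const onProgress = progress => console.log('Progress:', `${Math.round(progress * 100)}%`);
  const response = await uploadFiles('/api/upload', e.dataTransfer.files, onProgress);
  console.log('Upload finished with status', response.status);
});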
I don't think it's possible. The draft states:
it is currently lacking [in comparison to XHR] when it comes to request progression
(old answer):
The first example in the Fetch API chapter gives some insight on how to do this:
If you want to receive the body data progressively:
function consume(reader) {
  var total = 0
  return new Promise((resolve, reject) => {
    function pump() {
      reader.read().then(({done, value}) => {
        if (done) {
          resolve()
          return
        }
        total += value.byteLength
        log(`received ${value.byteLength} bytes (${total} bytes in total)`)
        pump()
      }).catch(reject)
    }
    pump()
  })
}

fetch("/music/pk/altes-kamuffel.flac")
  .then(res => consume(res.body.getReader()))
  .then(() => log("consumed the entire body without keeping the whole thing in memory!"))
  .catch(e => log("something went wrong: " + e))
Apart from their use of the Promise constructor antipattern, you can see that response.body is a Stream from which you can read chunk by chunk using a Reader, and you can fire an event or do whatever you like (e.g. log the progress) for each of them.
However, the Streams spec doesn't appear to be quite finished, and I have no idea whether this already works in any fetch implementation.
with fetch: now possible with Chrome >= 105 🎉
How to:
https://developer.chrome.com/articles/fetch-streaming-requests/
Currently not supported by other browsers (maybe that will be the case when you read this, please edit my answer accordingly)
Feature detection (source)
const supportsRequestStreams = (() => {
  let duplexAccessed = false;

  const hasContentType = new Request('', {
    body: new ReadableStream(),
    method: 'POST',
    get duplex() {
      duplexAccessed = true;
      return 'half';
    },
  }).headers.has('Content-Type');

  return duplexAccessed && !hasContentType;
})();
HTTP >= 2 required
The fetch will be rejected if the connection is HTTP/1.x.
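A rough sketch of what an upload with progress could look like with a streaming request body (the URL is a placeholder, and, per the caveat discussed earlier, chunk consumption is not guaranteed to track bytes actually on the wire):

async function uploadWithProgress(url, blob) {
  const totalBytes = blob.size;
  let bytesSent = 0;

  // Count each chunk as fetch pulls it from the blob's stream
  const progressStream = blob.stream().pipeThrough(new TransformStream({
    transform(chunk, controller) {
      bytesSent += chunk.byteLength;
      console.log('upload progress:', bytesSent / totalBytes);
      controller.enqueue(chunk);
    }
  }));

  return fetch(url, {
    method: 'POST',
    body: progressStream,
    duplex: 'half', // required when the body is a stream
    headers: { 'Content-Type': 'application/octet-stream' }
  });
}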
Since none of the answers solve the problem, here is a workaround for implementation's sake: you can measure the upload speed with a small initial chunk of known size, and then estimate the total upload time as content-length / upload-speed. You can use this estimate as a stand-in for progress.
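A minimal sketch of that idea (the endpoint and probe size are made up, and connection overhead is ignored):

async function estimateUploadSeconds(url, file, probeBytes = 64 * 1024) {
  const probe = file.slice(0, probeBytes);      // small chunk of known size
  const start = performance.now();
  await fetch(url, { method: 'POST', body: probe });
  const elapsed = (performance.now() - start) / 1000;
  const bytesPerSecond = probe.size / elapsed;  // measured upload speed
  return file.size / bytesPerSecond;            // estimated total upload time in seconds
}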
A possible workaround would be to utilize the new Request() constructor, then check the Request.bodyUsed Boolean attribute
The bodyUsed attribute’s getter must return true if disturbed, and
false otherwise.
to determine whether the stream is disturbed
An object implementing the Body mixin is said to be disturbed if
body is non-null and its stream is disturbed.
Return the fetch() Promise from within a .then() chained to a recursive .read() call on a ReadableStream once Request.bodyUsed is equal to true.
Note, the approach does not read the bytes of the Request.body as the bytes are streamed to the endpoint. Also, the upload could complete well before any response is returned in full to the browser.
const [input, progress, label] = [
  document.querySelector("input"),
  document.querySelector("progress"),
  document.querySelector("label")
];
const url = "/path/to/server/";

input.onmousedown = () => {
  label.innerHTML = "";
  progress.value = "0"
};

input.onchange = (event) => {
  const file = event.target.files[0];
  const filename = file.name;
  progress.max = file.size;

  const request = new Request(url, {
    method: "POST",
    body: file,
    cache: "no-store"
  });

  const upload = settings => fetch(settings);

  const uploadProgress = new ReadableStream({
    start(controller) {
      console.log("starting upload, request.bodyUsed:", request.bodyUsed);
      controller.enqueue(request.bodyUsed);
    },
    pull(controller) {
      if (request.bodyUsed) {
        controller.close();
      }
      controller.enqueue(request.bodyUsed);
      console.log("pull, request.bodyUsed:", request.bodyUsed);
    },
    cancel(reason) {
      console.log(reason);
    }
  });

  const [fileUpload, reader] = [
    upload(request)
      .catch(e => {
        reader.cancel();
        throw e
      }),
    uploadProgress.getReader()
  ];

  const processUploadRequest = ({value, done}) => {
    if (value || done) {
      console.log("upload complete, request.bodyUsed:", request.bodyUsed);
      // set `progress.value` to `progress.max` here
      // if not awaiting server response
      // progress.value = progress.max;
      return reader.closed.then(() => fileUpload);
    }
    console.log("upload progress:", value);
    progress.value = +progress.value + 1;
    return reader.read().then(result => processUploadRequest(result));
  };

  reader.read().then(({value, done}) => processUploadRequest({value, done}))
    .then(response => response.text())
    .then(text => {
      console.log("response:", text);
      progress.value = progress.max;
      input.value = "";
    })
    .catch(err => console.log("upload error:", err));
}
I fished around for some time on this, so for everyone who may come across this issue too, here is my solution:
const form = document.querySelector('form');
const status = document.querySelector('#status');

// When the form gets submitted.
form.addEventListener('submit', async function (event) {
  // cancel default behavior (form submit)
  event.preventDefault();
  // Inform user that the upload has begun
  status.innerText = 'Uploading..';
  // Create FormData from form
  const formData = new FormData(form);
  // Open request to origin
  const request = await fetch('https://httpbin.org/post', { method: 'POST', body: formData });
  // Get amount of bytes we're about to transmit
  const bytesToUpload = request.headers.get('content-length');
  // Create a reader from the request body
  const reader = request.body.getReader();
  // Cache how much data we have already sent
  let bytesUploaded = 0;
  // Get first chunk of the request reader
  let chunk = await reader.read();
  // While we have more chunks to go
  while (!chunk.done) {
    // Increase amount of bytes transmitted.
    bytesUploaded += chunk.value.length;
    // Inform user how far we are
    status.innerText = 'Uploading (' + (bytesUploaded / bytesToUpload * 100).toFixed(2) + ')...';
    // Read next chunk
    chunk = await reader.read();
  }
});
const req = await fetch('./foo.json');
const total = Number(req.headers.get('content-length'));
let loaded = 0;
// req.body is async-iterable in newer environments; each chunk is a Uint8Array
for await (const {length} of req.body) {
  loaded += length;
  const progress = ((loaded / total) * 100).toFixed(2); // toFixed(2) means two digits after the floating point
  console.log(`${progress}%`); // or yourDiv.textContent = `${progress}%`;
}
The key part is the ReadableStream obj_response.body (i.e. the body of the response object).
Sample:
// Decide whether to keep reading: returns true while there is more data
const parse = (result) => {
  console.log(result)
  //...
  return result.value ? true : false
}

fetch('')
  .then((response) => {
    const reader = response.body.getReader()
    const pump = () => reader.read().then(parse).then((more) => (more ? pump() : undefined))
    return pump()
  })
You can test it by running it on a huge page, e.g. https://html.spec.whatwg.org/ or https://html.spec.whatwg.org/print.pdf. Press Ctrl+Shift+J to open the console and paste the code in.
(Tested on Chrome.)
I have a little problem: my Firebase Cloud Function completes before I get the API key from calling the Google Secret Manager API. The API key matters because it is used in an API call that fetches data from an external server, and the result of that call is stored in Google Cloud Storage.
Here is my code:
'use strict';
// Request Data From A URL
var request = require('request');
var https = require('https');
// Var Firebase Functions
var functions = require('firebase-functions');
const admin = require('firebase-admin');
// Initialise App
admin.initializeApp();
// init firebase admin and get the default project id
const projectId = admin.instanceId().app.options.projectId
const util = require('util');
// Imports the Google Storage client library
const {Storage} = require('@google-cloud/storage');
// Import the Secret Manager client and instantiate it:
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const secretClient = new SecretManagerServiceClient();
// Setting Timeout in Seconds - default is 1 second
// The maximum value for timeoutSeconds is 540, or 9 minutes. Valid values for memory are:
// 128MB, 256MB, 512MB, 1GB, 2GB
const runtimeOpts = {
  timeoutSeconds: 300,
  memory: '512MB'
}
let apikey = '';
// From were the data comes
// 1 = Shoreserver
var shipid = '1';
// Get the current date
var today = new Date();
var dd = String(today.getDate()).padStart(2, '0');
var mm = String(today.getMonth() + 1).padStart(2, '0'); //January is 0!
var yyyy = today.getFullYear();
today = '0000' + '-' + '00' + '-' + '00';
// Creates a storage client
const storage = new Storage({
  projectId: projectId,
});
// Set Bucket Name
const bucket = storage.bucket('masterlog');
/**
* Delete a file from a storage bucket and download a new file from remote location to store in the bucket
*/
exports.getEmilyAPItoStorage = functions
  .runWith(runtimeOpts)
  .region('europe-west1')
  .https.onRequest((req, res) => {
    // Get Secret
    (async () => { apikey = await getSecret() })
    console.info(`ApiKey Secret: ${apikey}`);

    // First we want to delete the current file, the filename is always the same.
    // Delete files in the Bucket people
    bucket.deleteFiles({
      prefix: `people.json`
    })
    .catch( (err) => {
      console.log(`Failed to delete people.json`);
    });

    // Start of the requesting different tables
    // Table to get data from
    var apitable = 'people';
    // Set destination filename
    const people = bucket.file('people.json');
    var url = 'https://<URL>/api/' + shipid + '/' + apitable + '?apikey=' + apikey + '&syncdate=' + today;
    // Set the options to make the request
    var options = {
      url: url,
      strictSSL: false,
      secureProtocol: 'TLSv1_method'
    }
    // Make a request for the API and store the file in Storage
    request(options)
      .pipe(people
        .createWriteStream({sourceFormat: 'NEWLINE_DELIMITED_JSON'}))
      .on('finish', function(error) {
        if (error) {
          console.log(error);
          res.status(500).send(error);
        } else {
          console.log( "- done!")
          res.status(200).send("OK");
        }
      });
    // End Function with status code 200

    // Set destination filename
    const agents = bucket.file('agents.json');
    // Table to get data from
    var apitable = 'ports';
    var url = 'https://emily.greenpeace.net/api/' + shipid + '/' + apitable + '?apikey=' + apikey + '&syncdate=' + today;
    // Set the options to make the request
    var options = {
      url: url,
      strictSSL: false,
      secureProtocol: 'TLSv1_method'
    }
    // Make a request for the API and store the file in Storage
    request(options)
      .pipe(agents
        .createWriteStream({sourceFormat: 'NEWLINE_DELIMITED_JSON'}))
      .on('finish', function(error) {
        if (error) {
          console.log(error);
          res.status(500).send(error);
        } else {
          console.log( "- done!")
          res.status(200).send("OK");
        }
      });
    // End Function with status code 200

    async function getSecret() {
      // Access the secret.
      const resource_name = 'projects/' + projectId + '/secrets/emilyapikey/versions/latest';
      let [version] = await secretClient.accessSecretVersion({name: resource_name})
      console.info(`Found secret ${version.payload.data} with state ${version.state}`);
      apikey = version.payload.data;
      return apikey;
    }
  });
I can get the API key from Google Secret Manager in the function getSecret(), but the API key is not yet available when I make the API call to my server. My expectation is that getSecret() would complete before the rest of the code executes.
If someone has an insight here into what I'm missing, I'm really interested to hear from you.
If you want to use async/await in any function, that function has to be declared async:
exports.getEmilyAPItoStorage = functions
  .runWith(runtimeOpts)
  .region('europe-west1')
  .https.onRequest(async (req, res) => { ... })
Then you can await in the code in its body:
const apikey = await getSecret()
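Putting it together, the relevant part of the handler might be structured roughly like this (only a sketch of the control flow; the request/pipe work from the question is elided):

exports.getEmilyAPItoStorage = functions
  .runWith(runtimeOpts)
  .region('europe-west1')
  .https.onRequest(async (req, res) => {
    try {
      // Wait for the secret before building any request URLs
      const apikey = await getSecret();
      const url = 'https://<URL>/api/' + shipid + '/people?apikey=' + apikey + '&syncdate=' + today;
      // ... perform the request/pipe work from the question here ...
      res.status(200).send("OK");
    } catch (err) {
      console.error(err);
      res.status(500).send(err.toString());
    }
  });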
I have a function that monitors a node in a Realtime database and once a new child is written to the node the function simply needs to create a html document in a Google Cloud bucket. The HTML document will have a unique name and will contain some data from the node. It's all fairly straightforward, however I can't actually create and write to the document. I've tried 3 methods so far (outlined in the code below), none of these methods work.
const {Storage} = require('@google-cloud/storage');
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const fs = require('fs');
const {StringStream} = require('@rauschma/stringio')
const instanceId = 'my-project-12345';
const bucketName = 'my-bucket';
exports.processCertification = functions.database.instance(instanceId).ref('/t/{userId}/{testId}')
  .onCreate((snapshot, context) => {
    const dataJ = snapshot.toJSON();
    var testResult = "Invalid";
    if (dataJ.r == 1) { testResult = "Positive"; }
    else if (dataJ.r == 2) { testResult = "Negative"; }
    console.log('Processing certificate:', context.params.testId, testResult);

    var storage = new Storage({projectId: instanceId});
    const fileName = context.params.testId + '.html';
    const fileContents = "<html><head></head><body>Result: " + testResult + "</body></html>"
    const options = {resumable: false, metadata: {contentType: 'text/html'}};
    const bucket = storage.bucket(bucketName);
    const file = bucket.file(fileName);
    console.log('Saving to:' + bucketName + '/' + fileName);

    if (false) {
      // Test 1. the file.save method
      // Errors with:
      // (node:2) MetadataLookupWarning: received unexpected error = URL is not defined code = UNKNOWN
      file.save(fileContents, options, function(err) {
        if (!err) { console.log("Save created object at " + bucketName + "/" + fileName); }
        else { console.log("Save Failed " + err); }
      });
    } else if (true) {
      // Test 2. the readStream.pipe method
      // No errors, doesn't output error message, doesn't output finish message, no file created
      fs.createReadStream(fileContents)
        .pipe(file.createWriteStream(options))
        .on('error', function(err) { console.log('WriteStream Error'); })
        .on('finish', function() { console.log('WriteStream Written'); });
    } else {
      // Test 3. the StringStream with readStream.pipe method
      // Errors with:
      // (node:2) MetadataLookupWarning: received unexpected error = URL is not defined code = UNKNOWN
      const writeStream = storage.bucket(bucketName).file(fileName).createWriteStream(options);
      writeStream.on('finish', function() { console.log('WriteStream Written'); }).on('error', function(err) { console.log('WriteStream Error'); });
      const readStream = new StringStream(fileContents);
      readStream.pipe(writeStream);
    }

    console.log('Function Finished');
    return 0;
  });
In all cases the "Processing certificate" and "Saving to" outputs appear, I also get the "Function Finished" message every time. The errors (or in one case no response) is written against each of the tests in the code.
My next step will be to create the file locally and then use the upload() method; however, each of these methods seems like it should work, and the only error message I have mentions URL errors, so I suspect the upload() method would run into the same problems as well.
I'm using Node.js v8.17.0 and the following packages:
"dependencies": {
"#google-cloud/storage": "^5.0.0",
"#rauschma/stringio": "^1.4.0",
"firebase-admin": "^8.10.0",
"firebase-functions": "^3.6.1"
}
Any advice is most welcome
In each case, you are not working with promises correctly. For database triggers (and all other background triggers), you must return a promise that resolves when all of the asynchronous work is complete in a function. Right now, you're not doing anything at all with promises, while each of the APIs you're calling are all asynchronous. Your function is just returning 0 immediately without waiting for the upload to complete, and Cloud Functions is simply terminating and cleaning up before anything can happen.
I suggest choosing one of the methods that returns a promise when the upload is complete (probably file.save()), then returning that promise from the function.
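A sketch of what that could look like with file.save(), reusing the names from the question (untested):

exports.processCertification = functions.database.instance(instanceId)
  .ref('/t/{userId}/{testId}')
  .onCreate((snapshot, context) => {
    // ... build fileContents, options, bucket and file exactly as in the question ...
    // Returning the promise makes Cloud Functions wait for the upload to finish
    return file.save(fileContents, options)
      .then(() => console.log('Saved ' + bucketName + '/' + fileName));
  });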
I am trying to send image data from my TCP client to my TCP server, both written in Node.js.
I have already tried doing it this way
client:
function onData(socket, data) {
  var data = Buffer.from(data).toString()
  var arg = data.split(',')
  var event = arg[0]
  console.log(event)
  if (event == 'screenshot') {
    console.log(hostname)
    console.log('control client uid ' + arg[1] + 'then we screenshot')
    screenshot()
      .then(img => {
        console.log(img)
        socket.write('screenshotData,' + ',' + hostname + ',' + img)
        socket.write('stdout,' + arg[2] + ',Screenshot')
      })
      .catch(err => {
        console.log(err)
        socket.write('error', err)
      })
  }
}
server:
sock.on('data', function(data) {
  // right here i need to parse the first 'EVENT' part of the text so i can get custom tcp events
  var data = Buffer.from(data).toString()
  var arg = data.split(',')
  var event = arg[0]
  if (event == 'screenshotData') {
    agentName = arg[1]
    img = arg[2]
    console.log('agent-name ' + agentName)
    console.log('screenshotdata' + img)
    var dt = dateTime.create()
    var formattedTime = dt.format('Y-m-d-H-M-S')
    var folder = 'adminPanel/screenshots/'
    var filename = formattedTime + '-' + agentName + '.png'
    console.log(filename)
    fs.writeFile(folder + filename, img, function(err) {
      console.log(err)
    })
  }
})
I had to build some rudimentary event system on top of TCP. If you know a better way, then let me know. Anyway, the client takes a screenshot and then does socket.write('screenshotData,' + ',' + hostname + ',' + img).
But it sends the data in multiple chunks: my console shows random gibberish being treated as new events many times, so I don't even know how I would approach this. Any help would be great.
You are treating your TCP stream as a message-oriented protocol, in addition to mixing encodings (your image Buffer is simply concatenated into the string).
I suggest you replace the TCP streams with WebSockets. The interface remains largely the same (reads are replaced with message events, things like that), but it actually behaves the way you are expecting.
Working server:
const WebSocket = require('ws');
const fs = require('fs');
const PORT = 3000;

const handleMessage = (data) => {
  const [action, payload] = data.split(',');
  const imageData = Buffer.from(payload, 'base64');
  const imageHandle = fs.createWriteStream('screenshot.jpg');
  imageHandle.write(imageData);
  imageHandle.end();
  console.log(`Saved screenshot (${imageData.length} bytes)`);
};

const wss = new WebSocket.Server({port: PORT});
wss.on('connection', (ws) => {
  console.log('Opened client');
  ws.on('message', (data) => handleMessage(data));
});
console.log('Server started');
client:
const WebSocket = require('ws');
const screenshot = require('screenshot-desktop');
const PORT = 3000;

const sendImage = (client, image) => {
  const payload = image.toString('base64');
  const message = ["screenshot", payload].join(',');
  console.log(`Sending ${image.length} bytes in message ${message.length} bytes`);
  client.send(
    message,
    () => {
      console.log('Done');
      process.exit(0);
    }
  );
};

const client = new WebSocket('ws://localhost:' + PORT + '/');
client.on('open', () => {
  console.log('Connected');
  screenshot().then(image => sendImage(client, image));
});
If you specifically want to transfer an image file, then the suggested way is to deal with base64 data, i.e. convert your image to base64 and send that over the channel; after receiving it on the server, you can convert it back into a .jpg/.png.
For reference https://www.npmjs.com/package/image-to-base64
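For example, a minimal round trip in Node could look like this (file names are placeholders):

const fs = require('fs');

// Sender: read the image and encode it as a base64 string
const b64 = fs.readFileSync('screenshot.png').toString('base64');
// ... send `b64` over the channel (e.g. as part of a WebSocket message) ...

// Receiver: decode the base64 string back into an image file
fs.writeFileSync('received.png', Buffer.from(b64, 'base64'));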
I've written a small tool that returns a promise after calling several other promises. This tool works great when I test it on its own; it takes about 10 seconds in the example below. However, when I try to run it alongside an HTTP server instance, it takes on the order of several minutes to return, if it returns at all!
I'm fairly sure I'm just misunderstanding something here, as I'm not extremely proficient in Node. If anyone can spot an issue, or suggest an alternative to using promises for handling asynchronous methods, please let me know!
Just to clarify, it's the Promise.all returned by the traceRoute function which is hanging. The sub-promises are all resolving as expected.
Edit: As suggested in the comments, I have also tried a recursive version with no call to Promise.all; same issue.
This is a working standalone version being called without any http server instance running:
const dns = require('dns');
const ping = require('net-ping');

var traceRoute = (host, ttl, interval, duration) => {
  var session = ping.createSession({
    ttl: ttl,
    timeout: 5000
  });

  var times = new Array(ttl);
  for (var i = 0; i < ttl; i++) {
    times[i] = {'ttl': null, 'ipv4': null, 'hostnames': [], 'times': []}
  };

  var feedCb = (error, target, ttl, sent, rcvd) => {
    var ms = rcvd - sent;
    if (error) {
      if (error instanceof ping.TimeExceededError) {
        times[ttl-1].ttl = ttl;
        times[ttl-1].ipv4 = error.source;
        times[ttl-1].times.push(ms)
      } else {
        console.log(target + ": " +
          error.toString() +
          " (ttl=" + ttl + " ms=" + ms + ")");
      }
    } else {
      console.log(target + ": " +
        target + " (ttl=" + ttl + " ms=" + ms + ")");
    }
  }

  var proms = new Array();
  var complete = 0
  while (complete < duration) {
    proms.push(
      new Promise((res, rej) => {
        setTimeout(function() {
          session.traceRoute(
            host,
            { maxHopTimeouts: 5 },
            feedCb,
            function(e, t) {
              console.log('traceroute done: resolving promise')
              res(); // resolve inner promise
            }
          );
        }, complete);
      })
    )
    complete += interval;
  }

  return Promise.all(proms)
    .then(() => {
      console.log('resolving traceroute');
      return times.filter((t) => t.ttl != null);
    });
}
traceRoute('195.146.144.8', 20, 500, 5000)
.then( (times) => console.log(times) )
Below is the same logic being called from inside the server instance; this is not working as it should. See the inline comment for where exactly it hangs.
const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();
const server = http.createServer(app);
const wss = new WebSocket.Server({server: server, path: "/wss"});

const dns = require('dns');
const ping = require('net-ping');

var traceRoute = (host, ttl, interval, duration) => {
  var session = ping.createSession({
    ttl: ttl,
    timeout: 5000
  });

  var times = new Array(ttl);
  for (var i = 0; i < ttl; i++) {
    times[i] = {'ttl': null, 'ipv4': null, 'hostnames': [], 'times': []}
  };

  var feedCb = (error, target, ttl, sent, rcvd) => {
    var ms = rcvd - sent;
    if (error) {
      if (error instanceof ping.TimeExceededError) {
        times[ttl-1].ttl = ttl;
        times[ttl-1].ipv4 = error.source;
        times[ttl-1].times.push(ms)
      } else {
        console.log(target + ": " +
          error.toString() + " (ttl=" + ttl + " ms=" + ms + ")");
      }
    } else {
      console.log(target + ": " + target +
        " (ttl=" + ttl + " ms=" + ms + ")");
    }
  }

  var proms = new Array();
  var complete = 0
  while (complete < duration) {
    proms.push(
      new Promise((res, rej) => {
        setTimeout(function() {
          session.traceRoute(
            host,
            { maxHopTimeouts: 5 },
            feedCb,
            function(e, t) {
              console.log('traceroute done: resolving promise')
              res(); // resolve inner promise
            }
          );
        }, complete);
      })
    )
    complete += interval;
  }

  console.log('Promise all:', proms);

  // #####################
  // Hangs on this promise
  // i.e. console.log('resolving traceroute') is not called for several minutes.
  // #####################
  return Promise.all(proms)
    .then(() => {
      console.log('resolving traceroute')
      return times.filter((t) => t.ttl != null)
    });
}

wss.on('connection', function connection(ws, req) {
  traceRoute('195.146.144.8', 20, 500, 5000)
    .then((data) => ws.send(data));
});

app.use('/tools/static', express.static('./public/static'));
app.use('/tools/templates', express.static('./public/templates'));

app.get('*', function (req, res) {
  res.sendFile(__dirname + '/public/templates/index.html');
});

server.listen(8081);
Note: I have tried calling it before server.listen, after server.listen, and from inside wss.on('connection', ...). None of these makes a difference. Calling it anywhere while the server is listening causes it to behave in a non-deterministic manner.
I'm not going to accept this answer as it's only a workaround; it was just too long to put in the comments...
None of the promises, including the Promise.all, are throwing exceptions. However, Node seems to be parking the call to Promise.all. I accidentally discovered that if I keep a timeout loop running while waiting for the promise.all to resolve, then it will in fact resolve as and when expected.
I'd love it if someone could explain exactly what is happening here, as I don't really understand it.
var holdDoor = true

var ps = () => {
  setTimeout(function() {
    console.log('status:', proms);
    if (holdDoor) ps();
  }, 500);
}
ps();

return Promise.all(proms)
  .then(() => {
    holdDoor = false
    console.log('Resolving all!')
    return times.filter((t) => t.ttl != null)
  });
Your code is working perfectly fine!
To reproduce this I've created a Dockerfile with a working version. You can find it in this git repository, or you can pull it with docker pull luxferresum/promise-all-problem.
You can run the docker image with docker run -ti -p 8081:8081 luxferresum/promise-all-problem. This will expose the webserver on localhost:8081.
You can also just run the problematic.js with node problematic.js and then opening localhost:8081 in the web browser.
The web socket will be opened by const ws = new WebSocket('ws://localhost:8081/wss'); which then triggers the code to run.
It's just very important to actually open the web socket; without that, the code will not run.
I would suggest replacing the traceroute with something else, like a DNS lookup, and seeing if the issue remains. At this point you cannot be sure it relates to raw-socket, since that uses libuv handles directly and does not affect other parts of the Node.js event loop.
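For example, a quick substitution to test that might look like this (just a sketch; each traceroute call is swapped for a DNS lookup wrapped in the same kind of promise):

const dns = require('dns');

// Hypothetical stand-in for session.traceRoute: resolves once a DNS lookup completes
const fakeTrace = (host) =>
  new Promise((res, rej) =>
    dns.lookup(host, (err, address) => (err ? rej(err) : res(address))));

Promise.all([fakeTrace('example.com'), fakeTrace('example.org')])
  .then(addresses => console.log('lookups resolved:', addresses));

If Promise.all resolves promptly here but still hangs with the real traceroute, the delay is coming from the raw-socket layer rather than from the promises themselves.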