Closing previously opened Twitter API streams? (due to maximum connections limit) - javascript

I am currently creating a bot that reads a Twitter API stream and posts incoming tweets to a Discord bot.
I am testing without the Discord side for now, but I have noticed that after a few disconnects (from pressing CTRL+C in the Node terminal) and restarts of the bot, I have hit the maximum stream connection limit.
This is my current code (with the other stream config, like rules, removed):
stream.autoReconnect = true;

stream.on(ETwitterStreamEvent.Data, async tweet => {
    console.log(`https://twitter.com/user/status/${tweet.data.id}`);
});
This is all wrapped in an async function, setup, which runs at the end of index.js.
How can I fix this error? I know I currently have no way to properly close open streams when I want to restart my program. I am also wondering if there is a way to safely disconnect the stream whenever I want to.
Thank you guys!
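One way to free the connections (a sketch, assuming the twitter-api-v2 library, which is what exports ETwitterStreamEvent): close the stream yourself before the process exits, so Twitter sees a clean disconnect instead of a dangling stream.
process.once('SIGINT', () => {
    // destroy() closes the underlying connection and removes all listeners;
    // close() also works if you want to keep the listeners attached
    stream.destroy();
    process.exit(0);
});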

Related

How to implement a room-to-room file sharing system for files larger than 1 MB using socket.io and Node.js?

I want to implement a socket.io room-to-room file sharing system so that users can send images to their respective rooms and every user there can see them. I have tried emitting the sender's image to the specific room as base64, but that only gets files of roughly 700-800 KB through.
Is there an easier way of doing this that supports files above 1 MB and can load images progressively?
I am using the EJS template engine, Node.js, socket.io, and JavaScript.
Please help me out if you have any idea about this; I have tried many things, none of them work, and I have read the socket.io documentation without finding a clue. I have also tried binary streaming with no luck, so some code samples would be appreciated.
You will probably find it easier to have the client upload the file to your HTTP server with a room name, and then have your HTTP server send a message via socket.io to all the other clients in the room with a URL where they can download the file over HTTP. socket.io is just not a streaming protocol; it's a packet- or message-based protocol, so to send large things they have to be broken up into messages and reassembled on the client. This can be done, but it's extra, non-standard work that HTTP uploads and downloads already know how to do.
Here are the steps:
Client uploads the file to the server via an HTTP POST, with the room name as a field in the form (a sketch of this upload side follows the download route below).
Server receives the uploaded file, assigns it a unique ID, and stores it in a temporary location on disk.
When the upload completes, the server notifies all other clients in the room via socket.io that the file is ready and sends them a download URL containing the unique ID.
Each client requests the file over HTTP using the unique URL it received.
Server serves the file to each client as requested.
Server either keeps track of whether all clients have finished downloading, or simply removes the file after some period of time (based on the file's timestamp) with a recurring cleanup function, to reclaim disk space.
You can create a single route that handles all the downloads:
const express = require("express");
const path = require("path");

const app = express();
const downloadRoot = "/temp/filexfer";

app.get("/download/:id", (req, res) => {
    const fullPath = path.resolve(path.join(downloadRoot, req.params.id));
    // detect any leading . or any double .. that might jump outside
    // the downloadRoot and get to other parts of the server
    if (!fullPath.startsWith(downloadRoot)) {
        console.log(`Unsafe download request ${fullPath}`);
        res.sendStatus(500);
        return;
    }
    res.download(fullPath);
});
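The upload side (steps 1 through 3) might look something like this. A sketch only: it assumes express, multer for the multipart parsing, and a socket.io server instance named io; multer's randomly generated filename doubles as the unique ID:
const multer = require("multer");
const upload = multer({ dest: downloadRoot }); // multer stores each file under a unique random name

app.post("/upload", upload.single("file"), (req, res) => {
    // the generated filename is the unique download ID
    const downloadUrl = `/download/${req.file.filename}`;
    // step 3: notify everyone else in the room that the file is ready
    io.to(req.body.room).emit("file-ready", { url: downloadUrl });
    res.sendStatus(200);
});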
A cleanup algorithm could look like this:
const fsp = require('fs').promises;
const path = require('path');

const oneHour = 1000 * 60 * 60;

// run cleanup once per hour
let cleanupTimer = setInterval(async () => {
    let oneHourOld = Date.now() - oneHour;
    try {
        let files = await fsp.readdir(downloadRoot, { withFileTypes: true });
        for (let f of files) {
            if (f.isFile()) {
                let fullName = path.join(downloadRoot, f.name);
                let info = await fsp.stat(fullName);
                // if the file modification time is older than one hour, remove it
                if (info.mtimeMs <= oneHourOld) {
                    fsp.unlink(fullName).catch(err => {
                        // log the error, but continue
                        console.log(`Can't remove temp download file ${fullName}`, err);
                    });
                }
            }
        }
    } catch (e) {
        console.log(e);
    }
}, oneHour);

// unref the timer so it doesn't keep node.js from exiting naturally
cleanupTimer.unref();
There are a lot of ways to do this sort of thing, and it depends heavily on what sort of architecture you want to support.
Sending large files through socket.io or any other WebSocket is fine. It does require a bunch of chopping and reassembling in your web app, but it will work.
WebRTC is another way to share files of any description, and it will not burden your server in any way, which is good (here is a tutorial on it: https://ably.com/tutorials/web-rtc-file-transfer).
The issue with either of these methods is that they are transient shares: a new user to the room will not get the image unless your server re-transmits the data.
My suggestion would be to upload the file directly to S3 and then share a link to it that can be resolved on each of the clients. This keeps the server burden down and reduces your storage requirements on the backend server.
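A sketch of that S3 flow with the AWS SDK v3; the bucket, region, and event names are placeholders. The server only hands out presigned URLs, so the file bytes never pass through it: the client uploads straight to S3 with a presigned PUT URL, and once it reports the upload finished, the server broadcasts a presigned GET URL to the room.
const { S3Client, PutObjectCommand, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const s3 = new S3Client({ region: "us-east-1" });
const Bucket = "my-room-shares"; // placeholder bucket name

// the client PUTs the file straight to S3 with this URL
async function getUploadUrl(key) {
    return getSignedUrl(s3, new PutObjectCommand({ Bucket, Key: key }), { expiresIn: 300 });
}

// called once the client reports its upload finished; everyone in the
// room fetches the file straight from S3
async function announceFile(io, room, key) {
    const url = await getSignedUrl(s3, new GetObjectCommand({ Bucket, Key: key }), { expiresIn: 3600 });
    io.to(room).emit("file-ready", { url });
}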

How to resume a Google Cloud Speech API (longRunningRecognize) job after a Cloud Functions timeout

I'm trying to create an application that transcribes WAV files using Cloud Functions and the Cloud Speech API. The official documentation shows how to do this (https://cloud.google.com/speech-to-text/docs/async-recognize). However, Cloud Functions has a processing time limit (up to 540 seconds), and long WAV files might exceed the time available to wait for the transcription API. I'm looking for a way to resume.
The official documentation shows the following code (I'm using Node for Cloud Functions):
// Detects speech in the audio file. This creates a recognition job that you
// can wait for now, or get its result later.
const [operation] = await client.longRunningRecognize(request);
// Get a Promise representation of the final result of the job
const [response] = await operation.promise();
client.longRunningRecognize() sends the request and returns the operation info within a few seconds, while operation.promise() waits for the transcription API to finish. For large files, however, that can take more than 540 seconds, and the process may be killed at this line. So I want to somehow resume processing using the 'operation' object in another process. I tried serializing the 'operation' object to a file and loading it afterwards, but serialization cannot capture its functions, so operation.promise() is lost. How can I solve this problem?
Here is how to do it (the code is in PHP, but the idea and the classes are the same):
$client = new SpeechClient([
    'credentials' => json_decode(file_get_contents('keys.json'), true)
]);

$operation = $client->longRunningRecognize($config, $audio);
$operationName = $operation->getName();
Now the job has started, and you can save $operationName somewhere (say, in a DB) to be used in another process.
In the other process:
$client = new SpeechClient([
    'credentials' => json_decode(file_get_contents('keys.json'), true)
]);

$newOperationResponse = $client->resumeOperation($operationName, 'LongRunningRecognize');
if ($newOperationResponse->operationSucceeded()) {
    $result = $newOperationResponse->getResult();
}
...
Notice: make sure to pass "LongRunningRecognize" as the resume operation name and NOT "longRunningRecognize" (the first letter should be uppercase, contrary to the documentation: https://github.com/googleapis/google-cloud-php-speech/blob/master/src/V1/Gapic/SpeechGapicClient.php#L312).
Otherwise the response will be protobuf-encoded (https://github.com/googleapis/google-cloud-php-speech/blob/master/src/V1/Gapic/SpeechGapicClient.php#L135).
This answer helped to find the final solution https://stackoverflow.com/a/57209441/932473
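For the asker's Node environment the same pattern should work: persist only operation.name, then rehydrate it in a later invocation. A minimal sketch (to run inside your async Cloud Function), assuming the @google-cloud/speech client exposes checkLongRunningRecognizeProgress() as part of its generated surface; saveName() and loadName() are hypothetical persistence helpers (Firestore, a DB, etc.):
const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

// first invocation: start the job and persist only its name
const [operation] = await client.longRunningRecognize(request);
await saveName(operation.name); // hypothetical persistence helper

// a later invocation (e.g. on a schedule): rehydrate and poll once
const op = await client.checkLongRunningRecognizeProgress(await loadName());
if (op.done) {
    const transcript = op.result.results
        .map(r => r.alternatives[0].transcript)
        .join('\n');
    console.log(transcript);
}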
If your job is going to take more than 540 seconds, Cloud Functions is not really the best fit for this problem. Instead, consider using Cloud Functions as just a triggering mechanism, and offload the work to App Engine or Compute Engine, using Pub/Sub to send it the relevant data (e.g. the location of the file in Cloud Storage and the other metadata needed to make the speech recognition request).

In NodeJS, how do I re-establish a socket connection with another server that may have gone down?

So, I have an Express Node.js server that connects to another app via an upgraded WebSocket URI for a data feed. If this app goes down, the WebSocket connection obviously gets closed, and I need to reconnect to this URI once the app comes back online.
My first approach was to use a while loop in the socket.onclose handler to keep attempting the re-connection until the app comes back online, but this didn't work as planned. My code looks like this:
socket.onclose = function () {
    while (socket.readyState != 1) {
        try {
            socket = new WebSocket("URI");
            console.log("connection status: " + socket.readyState);
        }
        catch (err) {
            // send message to console
        }
    }
};
This approach keeps giving me a socket.readyState of 0, even after the app behind the URI is back online.
Another approach I took was to use the JavaScript setTimeout function to attempt the connection with an exponential backoff algorithm. With this approach, my socket.onclose handler looks like this:
socket.onclose = function () {
    var time = generateInterval(reconnAttempts); // generateInterval produces a random delay based on the exponential backoff algorithm
    setTimeout(function () {
        reconnAttempts++; // another attempt, so increment reconnAttempts
        socket = new WebSocket("URI");
    }, time);
};
The problem with this attempt is that if the app is still offline when the socket connection is attempted, I get the following error, for obvious reasons, and the node script terminates:
events.js:85
throw er; // Unhandled 'error' event
Error: connect ECONNREFUSED
at exports._errnoException (util.js:746:11)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1010:19)
I also began using the forever node module to ensure that my node script is always running and gets restarted after an unexpected exit. Even with forever, though, after a few restarts forever just stops the script anyway.
I am basically looking for a way to make my Node.js server more robust and have it automatically re-connect to another server that may have gone down for some reason, instead of having to restart the node script manually.
Am I completely off base with my attempts? I am a noob when it comes to Node.js, so it may even be something obvious that I'm overlooking, but I have been researching this for a day or so now and none of my attempts work as planned.
Any suggestions would be greatly appreciated! Thanks!
A few suggestions:
1) Use a domain, which prevents your app from terminating unexpectedly; i.e. your app runs inside the domain's run() method. You can add an alert mechanism, such as email or SMS, to notify you whenever an error occurs.
2) Use socket.io for the WebSocket communication; it handles reconnection automatically. socket.io uses a keep-alive heartbeat and continuously polls the server.
3) Use pm2 instead of forever. pm2 supports clustering your app, which improves performance.
I think this will improve your app's performance, stability, and robustness.
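On the original crash specifically: the ECONNREFUSED exit happens because nothing listens for the socket's 'error' event. Here is a sketch of the backoff approach with an error handler attached, assuming the ws package; the URI and backoff constants are placeholders:
const WebSocket = require('ws');

let reconnAttempts = 0;

function connect() {
    const socket = new WebSocket('ws://URI');

    // without this listener, a refused connection emits an unhandled
    // 'error' event and kills the process
    socket.on('error', err => console.log('socket error: ' + err.message));

    socket.on('open', () => { reconnAttempts = 0; });

    socket.on('close', () => {
        reconnAttempts++;
        // capped exponential backoff before trying again
        const delay = Math.min(30000, 1000 * Math.pow(2, reconnAttempts));
        setTimeout(connect, delay);
    });

    socket.on('message', data => {
        // handle feed data here
    });
}

connect();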

Connection timeout with OrientDB's JavaScript API

In my webapp's client-side script I'm using the OrientDB JavaScript API (orientdb-api.js). When the script initializes, I run this code:
var orientdb = new ODatabase("http://localhost:2480/testapp");
var orientdbinfo = orientdb.open('root', 'admin');
This works fine and I can run all the various queries etc., as long as I don't wait more than 15 seconds between them. If I do, I get "error 401 (Unauthorised)" returned.
I know for a fact that this is the socket connection timing out: the timeframe matches the 15000 ms timeout setting in the config, and as a test I built a little button that calls the orientdb.open method above to reopen the connection. After I hit that button I can access the DB again.
Currently the queries and commands are called directly in my script as I trigger actions from my web UI. Am I just being lazy, and am I actually supposed to wrap every query in a function that tests the connection first and re-initializes it if it has closed, or is there something I'm missing? If the former, what is an elegant way of coding that? If the latter, what am I missing?
To get around this I'm running a setInterval function that opens a new socket every 14 seconds. That will get me through my testing for sure, but I realise it's a hack.
When you start the OrientDB server, it creates two sockets: 2424 (binary) and 2480 (HTTP).
Because OrientJS uses the binary protocol, you need to connect to port 2424.
Try:
var orientdb = new ODatabase("http://localhost:2424/testapp");
var orientdbinfo = orientdb.open('root', 'admin');
And the socket should stay open (longer).
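If you stay on the HTTP API instead, the wrapping described in the question can be kept small. This is only a sketch under assumptions: it relies on nothing beyond the open() call shown above, it treats a null return as the timed-out-session (401) case, and doQuery stands in for whatever ODatabase call you are making:
function withSession(doQuery) {
    var result = doQuery();
    // assumption: the HTTP binding returns null when the session has
    // timed out (the 401 case), so reopen once and retry
    if (result === null) {
        orientdb.open('root', 'admin');
        result = doQuery();
    }
    return result;
}

// usage (the query method name is illustrative):
// var rows = withSession(function () { return orientdb.query('SELECT FROM V'); });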

PHP minimal working example of Web Sockets

I'm trying to work out how to set up a WebSocket for the first time ever, so a minimal working example with static values (an IP address, for example, instead of getservbyname) would help me understand what flows where.
I want to do this the right way, so no frameworks or add-ons on either the client or the server. I want to use PHP's native sockets as described here, without over-complicating things with in-depth classes:
http://www.php.net/manual/en/intro.sockets.php
I've already put together some basic JavaScript...
window.onload = function (e)
{
    if ('WebSocket' in window)
    {
        var socket = new WebSocket('ws://' + path.split('http://')[1] + 'mail/');
        socket.onopen = function () {alert('Web Socket: connected.');}
        socket.onmessage = function (event) {alert('Web Socket: ' + event.data);}
    }
}
It's the PHP part that I'm not really sure about. Presuming we have a blank PHP file:
If necessary, how do I determine whether my server's PHP install has this socket functionality available?
Is the request essentially handled as a GET or POST request in this example?
Do I need to worry about the port numbers? e.g. if ($_SERVER['SERVER_PORT']=='8080')
How do I return a basic message on the initial connection?
How do I return a basic message, say, five seconds later?
It's not that simple to create a minimal example, I'm afraid.
First of all, you need to check whether the server's PHP build has socket support, i.e. whether it was configured with --enable-sockets (at runtime, extension_loaded('sockets') will tell you).
Then you need to implement (or find) a WebSocket server that follows at least the Hybi10 draft of the WebSocket protocol (https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-10). If you find the "magic number" 258EAFA5-E914-47DA-95CA-C5AB0DC85B11 in the handshake code, you can be sure it follows at least Hybi06.
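For reference, that magic number exists to compute the Sec-WebSocket-Accept handshake header, which is base64(sha1(client key + GUID)). A sketch in Node for brevity; PHP does the same with sha1($key . $guid, true) and base64_encode():
const crypto = require('crypto');
const GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

function acceptKey(secWebSocketKey) {
    return crypto.createHash('sha1')
        .update(secWebSocketKey + GUID)
        .digest('base64');
}

// RFC 6455's worked example:
// acceptKey('dGhlIHNhbXBsZSBub25jZQ==') === 's3pPLMBiTxaQ9kYGzzhZRbK+xOo='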
Finally, you need access to an admin console on the server in order to run the PHP WebSocket server with php -q server.php.
EDIT: This is the one I've been using a year ago ... it might still work as expected with current browsers supporting Websockets: http://code.google.com/p/phpwebsocket/source/browse/trunk/+phpwebsocket/?r=5
