Watch FTP folder via node.js - javascript

I need to watch an FTP folder for file create/update/remove events,
but I can't find any suitable solution to do this using node.js.
Can you help me, please?

1. Get a recursive listing of the remote FTP directory and save it (look at https://github.com/mscdex/node-ftp and https://github.com/evanplaice/node-ftpsync).
2. Set a timeout to fetch a new recursive listing.
3. Compare the new and old listings (look at https://github.com/andreyvit/json-diff) and call the handlers for the corresponding events (see the sketch below).
4. Overwrite the old listing with the new one.
5. Return to step two.
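Here is a minimal sketch of that polling loop. The recursive walk itself is left out: getRemoteListing(), onCreate(), onUpdate() and onRemove() are hypothetical stubs you would implement with node-ftp (or whichever client you prefer), with the listing represented as a { path: mtime } map.

var previous = null;

function poll() {
  getRemoteListing(function (err, current) {
    if (err) return console.error(err);
    if (previous) {
      Object.keys(current).forEach(function (path) {
        if (!(path in previous)) onCreate(path);                   // step 3: new file
        else if (previous[path] !== current[path]) onUpdate(path); // changed mtime
      });
      Object.keys(previous).forEach(function (path) {
        if (!(path in current)) onRemove(path);                    // file removed
      });
    }
    previous = current;           // step 4: overwrite old listing
    setTimeout(poll, 30 * 1000);  // steps 2 and 5: poll again later
  });
}

poll();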

You can use the sftp-watcher module (note it works over SFTP rather than plain FTP), which will save you some time.
https://www.npmjs.com/package/sftp-watcher

Related

Using Azure's azcopy tool from within a javascript azure function

My setup is the following: I have 1 storage account with a container and I have another storage account with a different container. What I want to do is have a blob trigger activate whenever someone uploads a file to the first storage account, and have that blob be copied to the second storage account's container. azcopy works well with the command line, but I have not found a way to use it within an azure function. Any help is appreciated, thanks.
For NodeJS, you would just use Child Processes (or a wrapper like execa) to run executables, but this isn't something I would recommend. Also, when running on Azure, you will have to make sure azcopy is present; if you still need to go down this path, Custom Containers would be your best bet.
In the case of Azure Functions, if the file just needs to be copied to a different container, you could use the Blob Output Binding, which achieves this with almost no code.
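As a rough sketch (container names and the connection-string app setting names here are placeholders), the binding configuration and function could look like this:

// function.json -- blob trigger on the source container, blob output on the destination
{
  "bindings": [
    { "name": "inputBlob",  "type": "blobTrigger", "direction": "in",
      "path": "source-container/{name}", "connection": "SourceStorage" },
    { "name": "outputBlob", "type": "blob",        "direction": "out",
      "path": "dest-container/{name}",   "connection": "DestStorage" }
  ]
}

// index.js -- the function body just hands the blob to the output binding
module.exports = async function (context, inputBlob) {
  context.bindings.outputBlob = inputBlob;
};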
For more complex scenarios where the output binding falls short, you could use the Blob Storage NodeJS SDK directly in your code.
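For example, a sketch using @azure/storage-blob (the connection-string environment variables and container names are assumptions) could start a server-side copy like this:

const { BlobServiceClient } = require('@azure/storage-blob');

async function copyBlob(blobName) {
  const src = BlobServiceClient.fromConnectionString(process.env.SOURCE_CONNECTION);
  const dst = BlobServiceClient.fromConnectionString(process.env.DEST_CONNECTION);

  const sourceBlob = src.getContainerClient('source-container').getBlobClient(blobName);
  const destBlob = dst.getContainerClient('dest-container').getBlobClient(blobName);

  // Server-side copy; the source URL must be readable (public, or signed with a SAS).
  const poller = await destBlob.beginCopyFromURL(sourceBlob.url);
  await poller.pollUntilDone();
}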

How to programatically create pull request with changed file using GitHub API

How can I create a PR on Github using their API? Let's imagine I have a package.json file as a string and I want to make changes to this file. So I parse it and make changes to it; then what exactly do I need to do to make it look like I made those changes locally, after checking out a new branch, making changes, committing them to the new branch and then pushing them to the remote? I see that they have a POST API endpoint to create a commit and a POST API endpoint to create a PR, but I don't see where I put the file I changed.
Using something like GitJS could be a solution.
This library is a simple wrapper around the git command line.
Instead of using the Github API you could just run the git commands programmatically with a library like this.
That way you don't have to work with the much more complicated API and have the benefit of supporting other source control sites too.
Looking at their example we can see it's incredibly easy to commit and push a file using javascript:
require('simple-git')()
  .add('./*')                            // stage all files
  .commit("first commit!")
  .addRemote('origin', 'some-repo-url')  // 'some-repo-url' is a placeholder
  .push(['-u', 'origin', 'master'], () => console.log('done'));
Make sure you refer to the usage documentation before trying that example to ensure you configure everything correctly.
Alternatively, you can use the package Octokit to more easily interface with the Github API.
When adding, committing and pushing a file via the API, you must first create a tree. Then you use that tree as part of the commit request, and finally update the branch reference to point at the new commit (the API equivalent of a push).
You can find a working example on this Github issue.
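As a rough sketch with @octokit/rest (the repo, branch and file names below are placeholders), the whole flow could look like this:

const { Octokit } = require('@octokit/rest');
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function openPullRequest(owner, repo, newPackageJson) {
  // Find the commit and tree the new branch will start from.
  const base = await octokit.rest.repos.getBranch({ owner, repo, branch: 'main' });
  const baseCommitSha = base.data.commit.sha;
  const baseTreeSha = base.data.commit.commit.tree.sha;

  // 1. Create a tree containing the changed file.
  const tree = await octokit.rest.git.createTree({
    owner, repo,
    base_tree: baseTreeSha,
    tree: [{ path: 'package.json', mode: '100644', type: 'blob', content: newPackageJson }],
  });

  // 2. Create a commit pointing at that tree.
  const commit = await octokit.rest.git.createCommit({
    owner, repo,
    message: 'Update package.json',
    tree: tree.data.sha,
    parents: [baseCommitSha],
  });

  // 3. Create a branch ref for the commit (the API equivalent of a push).
  await octokit.rest.git.createRef({
    owner, repo,
    ref: 'refs/heads/update-package-json',
    sha: commit.data.sha,
  });

  // 4. Open the pull request.
  await octokit.rest.pulls.create({
    owner, repo,
    title: 'Update package.json',
    head: 'update-package-json',
    base: 'main',
  });
}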

What is the alternative to cloudinary.v2.api.delete_folder? How can I delete an empty folder now that this method is deprecated?

A few months ago I deleted my empty Cloudinary folders using this:
cloudinary.v2.api.delete_folder
but it is not working now. What is the best alternative to this?
The delete folder method of the Admin API is not deprecated.
The most likely reason it's not working for you is that the folder you are trying to delete is not actually empty. It's important to note that if you have backups enabled and have deleted files from within that folder, then these backup copies will result in the folder not being considered empty. In such cases, you would need to delete the folder from within the Media Library directly.
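For reference, the call itself is unchanged; a small sketch with the Node SDK (the folder path is a placeholder):

const cloudinary = require('cloudinary');

// Only succeeds if the folder is truly empty (including any backup copies).
cloudinary.v2.api.delete_folder('path/to/empty-folder')
  .then(result => console.log(result))
  .catch(err => console.error(err));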

Recreate localDiskDb.db in Sails.js

So I deleted the file .tmp/localDiskDb.db manually, and now when Sails generates it on start it's empty. Is there a way to make Sails recreate it based on my models?
As far as I understand, that file contains only your models' instances, i.e. your actual data. To make it have some data, just create some instances of your models and save them into the file database.
Deleting .tmp/localDiskDb.db removes your data. I wouldn't use the default sails-disk adapter if I were you. You should use a better DB (e.g. MySQL, SQLite, MongoDB) that prevents issues like these. localDiskDb.db is literally a text file, not isolated from your dev environment; you can see how this would be a problem.
Use fixtures in your config/bootstrap to import "dummy" data on startup, as in the sketch below.
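A minimal sketch of that approach, assuming a hypothetical User model (newer Sails versions support an async bootstrap; older ones take a callback instead):

// config/bootstrap.js
module.exports.bootstrap = async function () {
  // Seed only when the store is empty, e.g. after .tmp/localDiskDb.db was deleted.
  if (await User.count() === 0) {
    await User.createEach([
      { name: 'alice' },
      { name: 'bob' },
    ]);
  }
};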

How to combine node.js modules/source into one .js file, to execute inside node

I have a need where I need to execute node code/modules in a node app (in a sandbox) with vm.createScript / script.runInNewContext. The host node app runs on heroku, so there is no local filesystem to speak of. I am able to download and run code that has no outside dependencies just fine, however the requirement is to be able to include other node modules as well. (as a build/packaging step would be ideal)
There are many existing solutions (browserify is one I've spent the most time with) which get close... but they inevitably generate a single blob of code (yeah!), meant to execute in a browser (boo!). Browserify, for example, generates dependencies on window, etc.
Does anyone know of a tool that will either read a package.json dependencies{} (or look at all require()'s in the source) and generate a single monolithic blob suitable for node's runInNewContext?
I don't think the solution you're looking for is the right solution. Basically you want to grab a bunch of require('lib')'s, mush them together into a single Javascript context, serialize that context into source code, then pass that serialized form into the runInNewContext function to deserialize and rebuild into a Javascript context, then deserialize your custom, sandboxed code, and finally run the whole thing.
Wouldn't it make much more sense to just create a context Object that includes the needed require('lib')'s and pass that object directly into your VM? Based on code from the documentation:
var vm = require('vm'),
    initSandbox = {
      console: console, // expose console so the sandboxed snippet can log
      async: require('async'),
      http: require('http')
    },
    context = vm.createContext(initSandbox);

vm.runInContext("async.forEach([0, 1, 2], function(element) { console.log(element); });", context);
Now you have the required libraries accessible via the context without going through a costly serialization/deserialization process.
