I have files stored in MongoDB using GridFS, and I need to remove a single file by its ID from the JavaScript shell. I figured I could just do this:
db.fs.files.remove({_id: my_id});
This works to some extent; it removes the file from the fs.files collection, but it does not remove the chunks themselves from the fs.chunks collection. I know this because I checked the length of both collections before and after in RockMongo.
I could go through the chunks and remove the ones that refer to that file, but is there a better, built-in way of doing that?
You can delete a GridFS file from the shell by removing its documents from both the chunks and the files collections. For example:
db['fs.chunks'].remove({files_id:my_id});
db['fs.files'].remove({_id:my_id});
Those two commands will do the trick.
You want to use db.fs.delete(_id); instead.
Update
Sorry, that apparently doesn't work from the shell, only through the driver. GridFS is a storage specification implemented by the drivers, so there isn't much built-in functionality for it in the shell.
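If you are doing this from Node.js rather than the shell, the driver's GridFSBucket API removes both the files document and its chunks in one call. A minimal sketch, where the connection string and database name are just placeholders:

const { MongoClient, GridFSBucket, ObjectId } = require('mongodb');

async function deleteGridFsFile(id) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const db = client.db('mydb');
    const bucket = new GridFSBucket(db); // defaults to the "fs" bucket
    // delete() removes the fs.files document and all of its fs.chunks documents
    await bucket.delete(new ObjectId(id));
  } finally {
    await client.close();
  }
}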
Update 2
There is also a command-line tool, mongofiles (http://www.mongodb.org/display/DOCS/GridFS+Tools), which allows you to delete files by name: mongofiles delete <filename>. It comes with a warning that it deletes all files with that name, so it's not as granular as deleting by ID.
mongofiles --host localhost:30000 --db logo delete logo_susan1225.png
Refer to this page:
http://docs.mongodb.org/manual/reference/program/mongofiles/#bin.mongofiles
A few months ago I deleted my empty Cloudinary folders using this:
cloudinary.v2.api.delete_folder
but it is not working now. What is the best alternative to this?
The delete folder method of the Admin API is not deprecated.
The most likely reason it's not working for you is that the folder you are trying to delete is not actually empty. Note that if you have backups enabled and have deleted files from within that folder, those backup copies will cause the folder not to be considered empty. In such cases, you would need to delete the folder directly from within the Media Library.
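If the folder still contains assets you no longer need, one approach is to delete the resources under that prefix first and then delete the folder. A minimal sketch with the Node SDK, assuming CLOUDINARY_URL is configured and with the folder path as a placeholder (note that delete_resources_by_prefix permanently deletes those assets):

const cloudinary = require('cloudinary').v2; // reads CLOUDINARY_URL from the environment

async function removeFolder(path) {
  // First delete any remaining assets whose public_id starts with the folder's prefix...
  await cloudinary.api.delete_resources_by_prefix(path + '/');
  // ...then delete the now-empty folder itself.
  return cloudinary.api.delete_folder(path);
}

removeFolder('old/empty-folder').then(console.log).catch(console.error);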
I need to watch an FTP folder for file create/update/remove events.
But I can't find any suitable solution to do this using Node.js.
Can you help me, please?
1. Get a recursive listing of the remote FTP directory and save it (look at https://github.com/mscdex/node-ftp and https://github.com/evanplaice/node-ftpsync).
2. Set a timeout to get a new recursive listing.
3. Compare the new listing with the old one (look at https://github.com/andreyvit/json-diff) and call the handlers for the corresponding events.
4. Overwrite the old listing with the new one.
5. Return to step 2. A rough sketch of this polling loop is shown below.
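Not a full implementation, but a minimal sketch of that loop using the ftp package (mscdex/node-ftp). It polls a single directory rather than listing recursively, and the host, credentials, directory and interval are placeholders:

const Client = require('ftp');

const config = { host: 'ftp.example.com', user: 'user', password: 'secret' };
const DIR = '/watched';
let previous = {}; // name -> { size, mtime } from the last poll

function poll() {
  const c = new Client();
  c.on('ready', () => {
    c.list(DIR, (err, entries) => {
      if (err) { c.end(); return console.error(err); }
      const current = {};
      for (const e of entries.filter(e => e.type === '-')) {
        current[e.name] = { size: e.size, mtime: String(e.date) };
      }
      // Compare the old and new listings and fire the corresponding "events".
      for (const name of Object.keys(current)) {
        if (!(name in previous)) console.log('created:', name);
        else if (JSON.stringify(previous[name]) !== JSON.stringify(current[name]))
          console.log('updated:', name);
      }
      for (const name of Object.keys(previous)) {
        if (!(name in current)) console.log('removed:', name);
      }
      previous = current; // overwrite the old listing with the new one
      c.end();
    });
  });
  c.connect(config);
}

setInterval(poll, 2000); // get a new listing after a timeout
poll();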
You can use the sftp-watcher module, which will save you time.
https://www.npmjs.com/package/sftp-watcher
I have been playing around with IPFS a lot recently, and have been wondering how to make a download link for files that gives them a custom name. The standard <a> tag download attribute doesn't work:
<a href="https://ipfs.io/ipfs/<CID>" download="custom-name.ext">foo</a>
Is there a way I can work around this using JavaScript or jQuery? As a last resort I could route the files through the server, but I would prefer not to.
You can add your file wrapped in a folder, which preserves the name of the original file. Try:
$ ipfs add -w example.txt
added QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH example.txt
added QmVFDXxAEC5iQ9ptb36dxzpNsQjVatvxhG44wK7PpRzsDE
This way, you can point to the last hash, which is a MerkleDAG node that points to your file and preserves its name. Let me know if this solution works for you :)
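With the folder hash from the output above, the file can then be requested by its original name through a gateway, e.g.:
https://ipfs.io/ipfs/QmVFDXxAEC5iQ9ptb36dxzpNsQjVatvxhG44wK7PpRzsDE/example.txt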
Try adding "?filename=filename.pdf&download=true" to the end of the CID.
Like so:
https://ipfs.io/ipfs/QmV9tSDx9UiPeWExXEeH6aoDvmihvx6jD5eLb4jbTaKGps?filename=filename.pdf&download=true
How can I identify whether two files are the same in JavaScript (Node.js), when one is just a renamed copy of the other?
Use case: I am trying to write a script for syncing an HDD (hdd1) and its clone (hdd2). The content is 95% video files (size: ~1 GB, count: ~4000). Sometimes I rename files on hdd1 and move them to different folders. So while syncing, instead of deleting and freshly copying from hdd1 to hdd2, I just want to rename and move the identified files on hdd2 to match their locations on hdd1.
As mscdex mentioned, there's probably already a tool out there that does what you're looking for (like rsync).
If you're more interested in doing it from scratch as a learning experience, then what you're looking for is a checksum or hash of a file. Generating a checksum for each file gives you a sort of fingerprint for it. You can then compare it against the checksums of other files; if the files are the same, the checksums will match as well.
Node.js's Crypto library gives you methods for generating checksums. This blog entry walks through some of this.
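For example, here is a small sketch that streams a file through a SHA-256 hash (the file paths are placeholders); two files with identical content produce the same digest regardless of their names:

const crypto = require('crypto');
const fs = require('fs');

// Resolves with the hex SHA-256 digest of a file, streaming so large
// video files are not read into memory at once.
function checksum(path) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash('sha256');
    fs.createReadStream(path)
      .on('error', reject)
      .on('data', chunk => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')));
  });
}

Promise.all([checksum('/hdd1/a.mkv'), checksum('/hdd2/b.mkv')])
  .then(([a, b]) => console.log(a === b ? 'same content' : 'different content'));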
So I deleted the file .tmp/localDiskDb.db manually, and now when Sails regenerates it on start it's empty. Is there a way to make Sails recreate it based on my models?
As far as I understand, that file contains only your models' instances, i.e. your actual data. To make it have some data, just create some instances of your models and save them into the file database.
Deleting .tmp/localDiskDb.db removes your data. I wouldn't use the default sails-disk adapter if I were you. You should use a proper database (e.g. MySQL, SQLite, MongoDB, etc.) that prevents issues like these. localDiskDb.db is literally a text file that isn't isolated from your dev environment; you can see how that would be a problem.
Use fixtures in your config/bootstrap.js to import "dummy" data on startup.
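For example, a rough sketch of a config/bootstrap.js for classic Sails, where the User model and its attributes are made-up examples:

// config/bootstrap.js
module.exports.bootstrap = function (done) {
  // Only seed when the (re-created, empty) local disk store has no data yet.
  User.count().exec(function (err, count) {
    if (err) return done(err);
    if (count > 0) return done();

    // Create the "dummy" fixture data.
    User.create({ name: 'Alice' }).exec(done);
  });
};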