Amazon Redshift - SQL query with Node.js - javascript

I am very new to AWS, so I wonder how one can query a Redshift database using @aws-sdk/client-redshift-data? The documentation is not very clear (at least to me). Could someone please explain how to do it?
I have tried the method from the @aws-sdk/client-redshift-data documentation, but I am really stuck on what should go in the params object and how to run the SQL query.
// a client can be shared by different commands.
const client = new RedshiftDataClient({ region: "REGION" });

const params = {
  /** input parameters */
};

const command = new BatchExecuteStatementCommand(params);
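One way to fill in those parameters is with ExecuteStatementCommand (or BatchExecuteStatementCommand if you want to run several statements at once). Below is a minimal sketch, assuming a provisioned cluster; the cluster identifier, database, user and table names are placeholders, not values from the question:

const {
  RedshiftDataClient,
  ExecuteStatementCommand,
  DescribeStatementCommand,
  GetStatementResultCommand,
} = require("@aws-sdk/client-redshift-data");

const client = new RedshiftDataClient({ region: "us-east-1" });

async function runQuery() {
  // Submit the statement; the Data API runs it asynchronously.
  const { Id } = await client.send(
    new ExecuteStatementCommand({
      ClusterIdentifier: "my-cluster", // placeholder
      Database: "dev",                 // placeholder
      DbUser: "awsuser",               // placeholder (or use SecretArn instead)
      Sql: "SELECT * FROM my_table LIMIT 10;",
    })
  );

  // Poll until the statement has finished.
  let status = "SUBMITTED";
  while (status !== "FINISHED") {
    const desc = await client.send(new DescribeStatementCommand({ Id }));
    status = desc.Status;
    if (status === "FAILED" || status === "ABORTED") {
      throw new Error(desc.Error || `Statement ${status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, 500));
  }

  // Fetch the result rows.
  const result = await client.send(new GetStatementResultCommand({ Id }));
  console.log(result.Records);
}

runQuery().catch(console.error);

BatchExecuteStatementCommand works the same way, but it takes a Sqls array instead of a single Sql string.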

Related

How to get data from back end side, to use it in the browser side?

I am new to programming, and I heard that some guys on this website are quite angry, but please don't be. I am creating a web app that has a web page, makes some calculations, and works with a database (NeDB). I have an index.js:
const selects = document.getElementsByClassName("sel");
const arr = ["Yura", "Nairi", "Mher", "Hayko"];
for (let el in selects) {
  for (let key in arr) {
    selects[el].innerHTML += `<option>${arr[key]}</option>`;
  }
}
This function fills the select elements with data from an array.
In another file, named getData.js:
var Datastore = require("nedb");
var users = new Datastore({ filename: "players" });
users.loadDatabase();

const names = [];
users.find({}, function (err, doc) {
  for (let key in doc) {
    names.push(doc[key].name);
  }
});
This code gets data from the database and puts it in an array. I need to use that data in the index.js mentioned above, but the problem is that I don't know how to transfer the data from getData.js to index.js. I have tried module.exports, but it is not working: the browser console says that it can't recognize the require keyword, and I also can't query the database directly in index.js because the browser can't run the database-related code.
You need to provide a server which is connected to the database.
Browser -> Server -> DB
Browser -> Server: the server provides endpoints from which the browser (client) can fetch data. https://expressjs.com/en/starter/hello-world.html
Server -> DB: the server gets the data out of the database and can do whatever it wants with it. In your case, the data should be provided to the client.
TODOs
Step 1: set up a server, for example with Express.js (google it).
Step 2: learn how to fetch data from the browser (client); "AJAX" and "GET" are the keywords to google.
Step 3: set up a database connection from your server and get your data.
Step 4: do whatever you want with your data. A rough sketch of steps 1-3 follows below.
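Here is a minimal sketch of steps 1-3, assuming Express is installed and reusing the NeDB "players" datastore from the question (the route name and field names are illustrative, not from the original code):

// server.js
const express = require("express");
const Datastore = require("nedb");

const app = express();
const users = new Datastore({ filename: "players" });
users.loadDatabase();

// Endpoint the browser can fetch the names from
app.get("/names", function (req, res) {
  users.find({}, function (err, docs) {
    if (err) return res.status(500).json({ error: err.message });
    res.json(docs.map(function (d) { return d.name; }));
  });
});

app.use(express.static("public")); // serves index.html and index.js
app.listen(3000, function () {
  console.log("Listening on http://localhost:3000");
});

On the browser side, index.js would then fetch the names instead of hard-coding the array:

// index.js (browser)
fetch("/names")
  .then(function (res) { return res.json(); })
  .then(function (names) {
    const selects = document.getElementsByClassName("sel");
    for (const sel of selects) {
      for (const name of names) {
        sel.innerHTML += `<option>${name}</option>`;
      }
    }
  });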
At first I thought it was a simple method, but then I researched a little bit and realized that I didn't have enough information about how it really works. Now I have solved the problem using promises and the template engine EJS. Thank you all for your time. I appreciate your help)

GCP AutoML dataset id return from Node.js client library is mismatched the actual dataset id

Previously, I used the Node.js client library to create a dataset and train a model on GCP AutoML, and everything worked fine. But now I am facing "Error: 5 NOT_FOUND: The Dataset doesn't exist or is inaccessible for use with AutoMl".
After that, I realized that the dataset id returned from the create-dataset request does not match the actual dataset id (compared with the web interface). So I cannot interact with the dataset programmatically.
My goal is to use that dataset id to upload images and train the model. Is there any update I am missing? Any suggestion would be appreciated.
The following is the reference code for creating a dataset with GCP AutoML, from https://cloud.google.com/vision/automl/docs/create-datasets:
/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const projectId = 'YOUR_PROJECT_ID';
// const location = 'us-central1';
// const displayName = 'YOUR_DISPLAY_NAME';

// Imports the Google Cloud AutoML library
const {AutoMlClient} = require('@google-cloud/automl').v1;

// Instantiates a client
const client = new AutoMlClient();

async function createDataset() {
  // Construct request
  // Specify the classification type
  // Types:
  // MultiLabel: Multiple labels are allowed for one example.
  // MultiClass: At most one label is allowed per example.
  const request = {
    parent: client.locationPath(projectId, location),
    dataset: {
      displayName: displayName,
      imageClassificationDatasetMetadata: {
        classificationType: 'MULTILABEL',
      },
    },
  };

  // Create dataset
  const [operation] = await client.createDataset(request);

  // Wait for operation to complete.
  const [response] = await operation.promise();
  console.log(`Dataset name: ${response.name}`);
  console.log(`
Dataset id: ${
    response.name
      .split('/')
      [response.name.split('/').length - 1].split('\n')[0]
  }`);
}

createDataset();
Now I have used the sample code from https://github.com/googleapis/nodejs-automl/blob/master/samples/vision_object_detection_create_dataset.js, and it works robustly. Yes, I should have used it from the beginning. But we always find the ways that don't work before the one that does, right?
Happy coding and create a better world ;).
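If you want to double-check programmatically that the id you extracted matches what the web interface shows, here is a small sketch that lists the datasets in the project and prints their ids; it assumes the same client, projectId and location variables as in the snippet above:

// Lists all datasets in the project/location and prints display name and id
async function listDatasets() {
  const request = {
    parent: client.locationPath(projectId, location),
  };
  const [datasets] = await client.listDatasets(request);
  for (const dataset of datasets) {
    // The dataset id is the last segment of the resource name
    console.log(`${dataset.displayName}: ${dataset.name.split('/').pop()}`);
  }
}

listDatasets();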

Is it safe to use a single Mongoose database from two files/processes?

I've been working on a server and a push notification daemon that will both run simultaneously and interact with the same database. The idea behind this is that if one goes down, the other will still function.
I normally use Swift but for this project I'm writing it in Node, using Mongoose as my database. I've created a helper class that I import in both my server.js file and my notifier.js file.
const Mongoose = require('mongoose');
const Device = require('./device'); // This is a Schema

var uri = 'mongodb://localhost/devices';

function Database() {
  Mongoose.connect(uri, { useMongoClient: true }, function(err) {
    console.log('connected: ' + err);
  });
}

Database.prototype.findDevice = function(params, callback) {
  Device.findOne(params, function(err, device) {
    // etc...
  });
};

module.exports = Database;
Then separately from both server.js and notifier.js I create objects and query the database:
const Database = require('./db');
const db = new Database();

db.findDevice(params, function(err, device) {
  // Simplified, but I edit and save things back to the database via db
  device.token = 'blah';
  device.save();
});
Is this safe to do? When working with Swift (and Objective-C) I'm always concerned about making things thread safe. Is this a concern? Should I be worried about race conditions and modifying the same files at the same time?
Also, bonus question: How does Mongoose share a connection between files (or processes?). For example Mongoose.connection.readyState returns the same thing from different files.
The short answer is "safe enough."
The long answer has to do with understanding what sort of consistency guarantees your system needs, how you've configured MongoDB, and whether there's any sharding or replication going on.
For the latter, you'll want to read about atomicity and consistency and perhaps also peek at write concern.
A good way to answer these questions, even when you think you've figured it out, is to test scenarios: hammer a duplicate of your system with fake data and events and see if what happens is OK or not.
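Regarding the race-condition worry specifically: the find-then-save pattern in the question can lose updates if both processes modify the same document at the same time. A common mitigation is to let MongoDB apply the change atomically with findOneAndUpdate. A rough sketch, reusing the Device model and token field from the question (someDeviceId is a placeholder):

// Instead of loading the document, mutating it, and calling save(),
// ask MongoDB to apply the change atomically on the server.
Device.findOneAndUpdate(
  { _id: someDeviceId },          // match criteria (placeholder id)
  { $set: { token: 'blah' } },    // atomic update
  { new: true },                  // return the updated document
  function (err, device) {
    if (err) return console.error(err);
    console.log('updated token for', device._id);
  }
);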

Using Multiple Mongodb Databases with Meteor.js

Is it possible for 2 Meteor.Collections to retrieve data from 2 different MongoDB database servers?
Dogs = Meteor.Collection('dogs') // mongodb://192.168.1.123:27017/dogs
Cats = Meteor.Collection('cats') // mongodb://192.168.1.124:27017/cats
Update
It is now possible to connect to remote/multiple databases:
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>");
MyCollection = new Mongo.Collection("collection_name", { _driver: database });
Where <mongo_url> is a mongodb url such as mongodb://127.0.0.1:27017/meteor (with the database name)
There is one disadvantage with this at the moment: No Oplog
Old Answer
At the moment this is not possible. Each meteor app is bound to one database.
There are a few ways you can get around this, but it may be more complicated than it's worth:
One option - Use a separate Meteor App
In your other Meteor app (for example, running at port 6000 on the same machine), you can still have reactivity, but you need to proxy inserts, removes and updates through a method call.
Server:
Cats = new Meteor.Collection('cats');

Meteor.publish("cats", function() {
  return Cats.find();
});

Meteor.methods({
  updateCat: function(id, changes) {
    Cats.update({_id: id}, {$set: changes});
  }
});
Your current Meteor app:
var connection = DDP.connect("http://localhost:6000");
connection.subscribe("cats");
Cats = new Meteor.Collection('cats', {connection: connection});

// To update a cat, call the method over the DDP connection
connection.call("updateCat", <cat_id>, <changes>);
Another option - custom mongodb connection
This uses the Node.js MongoDB native driver.
It connects to the database as you would in any other Node.js app.
There is no reactivity available, and you can't use the new Meteor.Collection type collections.
var mongodb = Npm.require("mongodb"); // or var mongodb = Meteor.require("mongodb") if you use the npm package on Atmosphere
var Db = mongodb.Db;
var MongoClient = mongodb.MongoClient;
var Server = mongodb.Server;

var db_connection = new Db('cats', new Server("127.0.0.1", 27017, {auto_reconnect: false, poolSize: 4}), {w: 0, native_parser: false});

db_connection.open(function(err, db) {
  // Connected to db 'cats'
  db.authenticate('<db username>', '<db password>', function(err, result) {
    // Can do queries here
    db.close();
  });
});
The answer is YES: it is possible to set up multiple Meteor.Collections that retrieve data from different MongoDB database servers.
As in the answer from @Akshat, you can initialize your own MongoInternals.RemoteCollectionDriver instance, through which Mongo.Collections can be created.
But here's something more to talk about. Contrary to @Akshat's answer, I find that Oplog support is still available under these circumstances.
When initializing the custom MongoInternals.RemoteCollectionDriver, DO NOT forget to specify the Oplog url:
var driver = new MongoInternals.RemoteCollectionDriver(
  "mongodb://localhost:27017/db",
  {
    oplogUrl: "mongodb://localhost:27017/local"
  });

var collection = new Mongo.Collection("Coll", {_driver: driver});
Under the hood
As described above, it is fairly simple to activate Oplog support. If you do want to know what happens beneath those two lines of code, you can continue reading the rest of this post.
In the constructor of RemoteCollectionDriver, an underlying MongoConnection will be created:
MongoInternals.RemoteCollectionDriver = function (
  mongo_url, options) {
  var self = this;
  self.mongo = new MongoConnection(mongo_url, options);
};
The tricky part is: if MongoConnection is created with oplogUrl provided, an OplogHandle will be initialized, and starts to tail the Oplog (source code):
if (options.oplogUrl && ! Package['disable-oplog']) {
  self._oplogHandle = new OplogHandle(options.oplogUrl, self.db.databaseName);
  self._docFetcher = new DocFetcher(self);
}
As this blog has described: Meteor.publish internally calls Cursor.observeChanges to create an ObserveHandle instance, which automatically tracks any future changes that occur in the database.
Currently there are two kinds of observer drivers: the legacy PollingObserveDriver, which takes a poll-and-diff strategy, and the OplogObserveDriver, which uses Oplog tailing to monitor data changes efficiently. To decide which one to apply, observeChanges takes the following procedure (source code):
var driverClass = canUseOplog ? OplogObserveDriver : PollingObserveDriver;

observeDriver = new driverClass({
  cursorDescription: cursorDescription,
  mongoHandle: self,
  multiplexer: multiplexer,
  ordered: ordered,
  matcher: matcher, // ignored by polling
  sorter: sorter, // ignored by polling
  _testOnlyPollCallback: callbacks._testOnlyPollCallback
});
In order to make canUseOplog true, several requirements must be met. A bare-minimum one is that the underlying MongoConnection instance should have a valid OplogHandle. This is exactly why we need to specify oplogUrl when creating the MongoConnection.
This is actually possible, using an internal interface:
var d = new MongoInternals.RemoteCollectionDriver("<mongo url>");
C = new Mongo.Collection("<collection name>", { _driver: d });

node.js - how to switch a database in mongodb driver?

I'm new to this stuff and just stuck in the middle of nowhere. I am using node-mongodb-native and need to switch to another database (after authenticating against the admin db). I googled and found this topic, where the creator of the library recommends keeping a connection for each db in a hash. So my question is: how do I accomplish that?
Just create different database connections and store them in an object.
var dbConnections = {};

dbConnections.authDb = new Db('adminDb', server, {});
dbConnections.authDb.authenticate(username, password);

dbConnections.otherDb = new Db('otherDb', server, {});
Does that make sense?
There's an example hidden in the MongoDB driver docs under Db:
[...]
MongoClient.connect('mongodb://localhost:27017/test', function(err, db) {
  [...]
  // Reference a different database sharing the same connections
  // for the data transfer
  var secondDb = db.db("integration_tests_2");

  // Fetch the collections
  var multipleColl1 = db.collection("multiple_db_instances");
  var multipleColl2 = secondDb.collection("multiple_db_instances");
  [...]
});
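Combining both answers, here is a sketch of keeping the database handles in a hash, as the library author suggests. It assumes a recent mongodb driver (3.x/4.x), where you connect once and call client.db() per logical database; the database and collection names are placeholders:

const { MongoClient } = require("mongodb");

async function getDbConnections() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  // One physical connection pool, multiple logical database handles
  const dbConnections = {
    authDb: client.db("admin"),
    otherDb: client.db("otherDb"), // placeholder name
  };
  return { client, dbConnections };
}

getDbConnections()
  .then(({ dbConnections }) =>
    dbConnections.otherDb.collection("docs").findOne({})
  )
  .then(console.log)
  .catch(console.error);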
