How to find out when an Azure snapshot has finished creating - javascript

I'm taking a snapshot of a disk on azure. I'm using the Node SDK. When I issue the command to take a snapshot, within a few seconds I get the response back. I'll paste the output below.
The thing is, the provisioning state always shows Succeeded even though the snapshot is obviously not finished being created yet. And it does not yet show in the dashboard.
If I use the snapshots.list method, it also says Succeeded for this snapshot.
How can I query to find out when the snapshot is actually finished being created?
{ id:
   '/subscriptions/1a6c4c11-6729-48fb-8e76-06c6281bb6f1/resourceGroups/RGOUP1/providers/Microsoft.Compute/snapshots/snapCostTest',
  name: 'snapCostTest',
  type: 'Microsoft.Compute/snapshots',
  location: 'westus',
  sku: { name: 'Standard_LRS', tier: 'Standard' },
  timeCreated: 2019-08-16T00:51:04.099Z,
  osType: 'Windows',
  hyperVGeneration: 'V1',
  creationData:
   { createOption: 'Copy',
     sourceResourceId:
      '/subscriptions/1a6c4c11-6729-48fb-8e76-06c6281bb6f1/resourceGroups/RGOUP1/providers/Microsoft.Compute/disks/vm1_OsDisk_1_502b5534fe4b4f288d19e127c457d652' },
  diskSizeGB: 127,
  provisioningState: 'Succeeded' }
I would have thought the provisioningState would show something like Creating while the snapshot is being created.

You can use the REST API below to get detailed information about your disk:
GET https://management.azure.com/<YOUR DISK ID>?api-version=2018-06-01
Only when properties.provisioningState in the response has turned "Succeeded" will the disk show up on the portal dashboard.
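If you need your script to block until the snapshot is actually ready, you can poll that endpoint. Below is a minimal polling sketch, assuming node-fetch and an already-acquired bearer token; waitForSnapshot is an illustrative name, not part of the Azure SDK:
const fetch = require('node-fetch'); // assumed dependency

// Poll the REST API above until provisioningState reports 'Succeeded'.
// snapshotId is the full resource id from the creation response.
async function waitForSnapshot(snapshotId, bearerToken, intervalMs = 5000) {
  const url = `https://management.azure.com${snapshotId}?api-version=2018-06-01`;
  for (;;) {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${bearerToken}` } });
    const body = await res.json();
    const state = body.properties && body.properties.provisioningState;
    if (state === 'Succeeded') return body;                        // snapshot is fully created
    if (state === 'Failed') throw new Error('snapshot provisioning failed');
    await new Promise(resolve => setTimeout(resolve, intervalMs)); // wait, then poll again
  }
}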

firebase.firestore() .set() not firing first time

I'm not a dev, I'm just learning for my own interest as a hobby.
For my own learning I'm building a real-world project rather than just following the usual online tutorials.
I'm building a simple vue.js app with vuex and using firebase for auth and db.
I have a method that should just take a value and .set() a new document in a collection with a single piece of data:
HTML
<template>
  <div>
    <v-btn @click="testing('1', '2')">Testing</v-btn>
  </div>
</template>
SCRIPT
methods: {
  testing(id, val) {
    console.log('entered testing:', id, val)
    firebase.firestore().collection('users').doc(id)
      .set({
        key: val
      })
      .then(result => {
        console.log('created in firestore', result)
      })
      .catch(error => {
        console.error('failed to create in firestore', error)
      })
  }
  ...
The problem: after refreshing the browser, the first time I click the button the method runs.
I see the console log 'entered testing' with the correct id and val, but the Firestore call doesn't appear to run.
Nothing in the console and no network requests.
The second time I click it I see the .then() console log 'created in firestore: undefined' and what looks like three network requests, and the doc and data are set correctly!
Every time I click the button after that I see the correct console log, a single network request, and the doc and data are set correctly.
Other info...
I have security rules set up on the db, but only a basic allow if auth != null.
The user is logged in when I try this.
I'm not ruling out that there might be something somewhere else in my app which is causing this but there's nothing obvious and I've stripped things right back to debug this. I'm trying to set it up so all firebase requests are done in the store only, with each component just passing it the values it needs, but to debug this I've moved all the logic (including importing firestore) into a single component.
What is going on?
Is there something obvious I'm missing?

Write an object containing an array of objects to a mongo database in Meteor

In my user collection, I have an object that contains an array of contacts.
The object definition is below.
How can this entire object, with the full array of contacts, be written to the user database in Meteor from the server, ideally in a single command?
I have spent considerable time reading the mongo docs and meteor docs, but can't get this to work.
I have also tried a large number of different commands and approaches using both the whole object and iterating through the component parts to try to achieve this, unsuccessfully. Here is an (unsuccessful) example that attempts to write the entire contacts object using $set:
Meteor.users.update({ _id: this.userId }, {$set: { 'Contacts': contacts}});
Thank you.
Object definition (this is a field within the user collection):
"Contacts" : {
"contactInfo" : [
{
"phoneMobile" : "1234567890",
"lastName" : "Johnny"
"firstName" : "Appleseed"
}
]
}
This update should absolutely work. What I suspect is happening is that you're not publishing the Contacts data back to the client, because Meteor doesn't publish every key in the current user document automatically. So your update is working and saving data to mongo, but you're not seeing it back on the client. You can check this by running meteor mongo on the command line and then inspecting the user document in question.
Try:
server:
Meteor.publish('me', function () {
  if (this.userId) return Meteor.users.find(this.userId, { fields: { profile: 1, Contacts: 1 }});
  this.ready();
});
client:
Meteor.subscribe('me');
The command above is correct. The issue is schema verification: Simple Schema was defeating the ability to write to the database while running 'in the background'. It doesn't produce an error; it just fails to produce the expected outcome.
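If you want to confirm that on the spot, here is a sketch assuming the aldeed:collection2 package: bypass the schema check (a server-only option) and pass a callback so any error actually surfaces instead of failing silently:
Meteor.users.update(
  { _id: this.userId },
  { $set: { Contacts: contacts } },
  { bypassCollection2: true },  // collection2 option: skip schema validation (server only)
  function (error, numberAffected) {
    if (error) console.error('update failed:', error);
    else console.log('documents updated:', numberAffected);
  }
);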

Synchronize Data across multiple occasionally-connected-clients using EventSourcing (NodeJS, MongoDB, JSON)

I'm facing a problem implementing data-synchronization between a server and multiple clients.
I read about Event Sourcing and I would like to use it to accomplish the syncing-part.
I know that this is not a technical question, more of a conceptional one.
I would just send all events live to the server, but the clients are designed to be used offline from time to time.
This is the basic concept:
The server stores all events that every client should know about; it does not replay those events to serve the data, because the main purpose is to sync the events between the clients, enabling them to replay all events locally.
The clients each have their own JSON store, also keeping all events and rebuilding all the different collections from the stored/synced events.
As clients can modify data offline, it is not that important to have consistent syncing cycles. With this in mind, the server should handle conflicts when merging the different events and ask the specific user in the case of a conflict.
So, the main problem for me is to determine the diffs between the client and the server to avoid sending all events to the server. I'm also having trouble with the order of the synchronization process: push changes first, or pull changes first?
What I've currently built is a default MongoDB implementation on the serverside, which is isolating all documents of a specific user group in all my queries (Currently only handling authentication and server-side database work).
On the client, I've built a wrapper around a NeDB store, enabling me to intercept all query operations to create and manage events per query, while keeping the default query behaviour intact. I've also compensated for the different ID systems of NeDB and MongoDB by implementing custom IDs that are generated by the clients and are part of the document data, so that recreating a database won't mess up the IDs (when syncing, these IDs should be consistent across all clients).
The event format will look something like this:
{
  type: 'create/update/remove',
  collection: 'CollectionIdentifier',
  target: ?ID, //The global custom ID of the document updated
  data: {}, //The inserted/updated data
  timestamp: '',
  creator: //Some way to identify the author of the change
}
To save some memory on the clients, I will create snapshots after a certain number of events, so that fully replaying all events will be more efficient.
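A sketch of that snapshotting idea, assuming a per-store event list and an applyEvent reducer that folds one event into the rebuilt collections (both are illustrative names):
const SNAPSHOT_EVERY = 1000; // assumed snapshot interval

function appendAndMaybeSnapshot(store, event) {
  store.events.push(event);
  applyEvent(store.collections, event);                    // assumed: applies one event to local state
  if (store.events.length % SNAPSHOT_EVERY === 0) {
    store.snapshot = {
      upToEvent: store.events.length,                      // replay can resume from this point
      state: JSON.parse(JSON.stringify(store.collections)) // simple deep copy for a JSON store
    };
  }
}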
So, to narrow down the problem: I'm able to replay events on the client side; I'm also able to create and maintain the events on both the client and server side; and merging the events on the server side should not be a problem either. Replicating a whole database with existing tools is also not an option, as I'm only syncing certain parts of the database (not even entire collections, since the documents are assigned to different groups in which they should sync).
But what I am having trouble with is:
The process of determining what events to send from the client when syncing (Avoid sending duplicate events, or even all events)
Determining what events to send back to the client (Avoid sending duplicate events, or even all events)
The right order of syncing the events (Push/Pull changes)
Another question I would like to ask is whether storing the updates directly on the documents, in a revision-like style, would be more efficient?
If my question is unclear or duplicate (I found some questions, but they didn't help me in my scenario), or something is missing, please leave a comment; I will maintain the question as best I can to keep it simple, as I've just written down everything that could help you understand the concept.
Thanks in advance!
This is a very complex subject, but I'll attempt some form of answer.
My first reflex upon seeing your diagram is to think of how distributed databases replicate data between themselves and recover in the event that one node goes down. This is most often accomplished via gossiping.
Gossip rounds make sure that data stays in sync. Time-stamped revisions are kept on both ends and merged on demand, say when a node reconnects, or simply at a given interval (publishing bulk updates via socket or the like).
Database engines like Cassandra or Scylla use 3 messages per merge round.
Demonstration:
Data in Node A
{ id: 1, timestamp: 10, data: { foo: '84' } }
{ id: 2, timestamp: 12, data: { foo: '23' } }
{ id: 3, timestamp: 12, data: { foo: '22' } }
Data in Node B
{ id: 1, timestamp: 11, data: { foo: '50' } }
{ id: 2, timestamp: 11, data: { foo: '31' } }
{ id: 3, timestamp: 8, data: { foo: '32' } }
Step 1: SYN
Node A lists the ids and last-upsert timestamps of all its documents (feel free to change the structure of these data packets; here I'm using verbose JSON to better illustrate the process).
Node A -> Node B
[ { id: 1, timestamp: 10 }, { id: 2, timestamp: 12 }, { id: 3, timestamp: 12 } ]
Step 2: ACK
Upon receiving this packet, Node B compares the received timestamps with its own. For each document, if its timestamp is older, just place it in the ACK payload; if it's newer, place it along with its data. And if the timestamps are the same, do nothing, obviously.
Node B -> Node A
[ { id: 1, timestamp: 11, data: { foo: '50' } }, { id: 2, timestamp: 11 }, { id: 3, timestamp: 8 } ]
Step 3: ACK2
Node A updates its documents where ACK data was provided, then sends back the latest data to Node B for those where no ACK data was provided.
Node A -> Node B
[ { id: 2, timestamp: 12, data: { foo: '23' } }, { id: 3, timestamp: 12, data: { foo: '22' } } ]
That way, both nodes now have the latest data merged both ways (in case the client did offline work), without having to send all your documents.
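To make step 2 concrete, here is a sketch of how Node B might build its ACK from A's SYN digest (buildAck and the Map-based document store are illustrative, not any particular library):
// localDocs: Map of id -> { timestamp, data } held by Node B
function buildAck(localDocs, synDigest) {
  const ack = [];
  for (const { id, timestamp } of synDigest) {
    const mine = localDocs.get(id);
    if (!mine || mine.timestamp < timestamp) {
      // B's copy is older (or missing): send only id + timestamp, asking A for data in ACK2
      ack.push({ id, timestamp: mine ? mine.timestamp : 0 });
    } else if (mine.timestamp > timestamp) {
      // B's copy is newer: ship the data along
      ack.push({ id, timestamp: mine.timestamp, data: mine.data });
    }
    // equal timestamps: already in sync, send nothing
  }
  return ack;
}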
In your case, your source of truth is your server, but you could easily implement peer-to-peer gossiping between your clients with WebRTC, for example.
Hope this helps in some way.
I think the best solution to avoid all the event-order and duplication issues is to use the pull method. That way every client maintains its last imported event state (with a tracker, for example) and asks the server for the events generated after that last one.
An interesting problem will be detecting the breaking of business invariants. For that you could also store the log of applied commands on the client, and in case of a conflict (events were generated by other clients) you could retry the execution of commands from the command log. You need to do that because some commands will not succeed after re-execution; for example, a client saves a document after another user deleted that document at the same time.
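A sketch of that pull loop on the client; the /events endpoint, the seq field, and applyEvent are assumptions about your own API, not existing library calls:
const fetch = require('node-fetch'); // assumed dependency

async function pullNewEvents(store) {
  const lastSeq = store.lastImportedSeq || 0;        // the per-client tracker
  const res = await fetch(`https://your-server/events?after=${lastSeq}`); // hypothetical endpoint
  const events = await res.json();                   // server returns only events after lastSeq
  for (const event of events) {
    applyEvent(store.collections, event);            // replay into the local collections
    store.lastImportedSeq = event.seq;               // advance the tracker per applied event
  }
}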

How do I create an A record in a hosted zone with the AWS node SDK

I am using the Route 53 Node API to create and configure a hosted zone. Creating the zone works fine, but when I try to use the changeResourceRecordSets function to add an A record, I get an error that says 'InvalidInput: Invalid request' but doesn't say what is invalid about it. Here is my request params object:
var zoneConfig = {
  ChangeBatch: {
    Changes: [{
      Action: 'CREATE',
      ResourceRecordSet: {
        Name: 'my.domain.com',
        Type: 'A',
        Region: 'us-east-1',
        TTL: 300,
        ResourceRecords: [{
          Value: '111.222.111.000'
        }]
      }
    }],
    Comment: 'direct hosted zone A record to point to the server'
  },
  HostedZoneId: 'ZZH1GLJKE22DK'
};
rt53.changeResourceRecordSets( zoneConfig, function(...
Any ideas what might be wrong in the request?
Finally figured it out. The problem was the Region field in the ResourceRecordSet. I missed it in the documentation, but that field is only to be used for latency-based resource record sets, so deleting that line fixed the issue.
Really wish the API error message could have just said that.
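For reference, the same request with the Region line removed, plus a callback that logs the change status (the error handling is illustrative):
var zoneConfig = {
  ChangeBatch: {
    Changes: [{
      Action: 'CREATE',
      ResourceRecordSet: {
        Name: 'my.domain.com',
        Type: 'A',
        // Region removed: it only applies to latency-based routing record sets
        TTL: 300,
        ResourceRecords: [{
          Value: '111.222.111.000'
        }]
      }
    }],
    Comment: 'direct hosted zone A record to point to the server'
  },
  HostedZoneId: 'ZZH1GLJKE22DK'
};
rt53.changeResourceRecordSets(zoneConfig, function (err, data) {
  if (err) console.error(err);        // surfaces validation errors like InvalidInput
  else console.log(data.ChangeInfo);  // Status stays PENDING until the change propagates
});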

What is the correct way to listen to nested changes using Firebase?

Background:
I'm trying to send SMS messages via the browser using Firebase, Twilio, and Node.js. My current data structure in Firebase looks like this:
{ messages :
  { +15553485 :
    { FB-GENERATED-KEY-1 :
      { body: "hello world",
        timestamp: 1461758765472 },
      FB-GENERATED-KEY-3 :
      { body: "I love dogs",
        timestamp: 1461758765475 }
    },
    +15550000 :
    { FB-GENERATED-KEY-2 :
      { body: "goodbye world",
        timestamp: 1461758765473 },
      FB-GENERATED-KEY-4 :
      { body: "I love cats",
        timestamp: 1461758765476 }
    }
  }
}
When a message is added to Firebase via the frontend the backend needs to get notified in order to send an SMS via Twilio. When the backend gets a reply from the phone (via Twilio), it adds it to Firebase.
Problems:
When I listen for changes to a thread I receive all messages sent/received for that phone number. Obviously the backend doesn't want to send all the messages again, so I'm only interested in the most recent message added to the thread.
Also I can't seem to easily get the phone number (the key) that has messages underneath it.
What I've tried:
ref.child('messages').on('child_added', ...) — this works for new phone numbers that are added at /messages, however Firebase doesn't send through the new phone number (key), only everything from FB-GENERATED-KEY-2 down.
ref.child('messages').on('child_changed', ...) — this returns all of the messages in a thread, not only the new ones. I can sort on the server and find the most recent message, but that seems like it'll get heavy quite quickly – what if you've sent thousands of messages?
Storing messages at the root level (aka. flattening the tree) and storing the number as an attribute instead could work, but I'm going to need to use the phone number as a sort of index to connect with other data later (like a foreign key).
Questions:
How can I only get the most recent message when listening to activity on the parent /messages and not a particular phone number?
How can I get the key (phone number) when using a child_ event?
Does this data structure make sense?
You can get the Firebase key by calling key() on the snapshot returned by your child_added listener.
Then you can add another nested listener like this:
ref.child('messages').on('child_added', function (snapshot) {
  var phone = snapshot.key();
  ref.child('messages').child(phone).on('child_added', function (message) {
    //send SMS
  }, function (error) {
  });
}, function (error) {
});
The Firebase API allows you to listen for changes in value or for operations on children. It does not have a way to listen for changes in grandchildren.
In NoSQL databases you often need to model the data for the way your application uses it. If I look at your specific use-case:
When a message is added to Firebase via the frontend the backend needs to get notified in order to send an SMS via Twilio.
I see a queue here:
smsQueue: {
  pushId1: {
    number: "+15553485",
    body: "hello world",
    timestamp: 1461758765472
  },
  pushId2: {
    number: "+15550000",
    body: "goodbye world",
    timestamp: 1461758765473
  },
  pushId3: {
    number: "+15553485",
    body: "I love dogs",
    timestamp: 1461758765475
  },
  pushId4: {
    number: "+15550000",
    body: "I love cats",
    timestamp: 1461758765476
  }
}
With this structure your back-end (which hopefully uses firebase-queue) can take each task from the queue, call twilio and delete the item from the queue.
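A sketch of such a worker with firebase-queue; the Twilio client setup and MY_TWILIO_NUMBER are assumptions you would fill in with your own credentials:
var Queue = require('firebase-queue');
var twilio = require('twilio')(ACCOUNT_SID, AUTH_TOKEN); // assumed credentials

// firebase-queue reads tasks from the 'tasks' child under this ref by default
var queueRef = firebase.database().ref('smsQueue');

// Each task is one queue entry: { number, body, timestamp }
var queue = new Queue(queueRef, function (task, progress, resolve, reject) {
  twilio.messages.create({
    to: task.number,
    from: MY_TWILIO_NUMBER, // assumption: your Twilio sender number
    body: task.body
  }).then(function () {
    resolve();              // resolving removes the task from the queue
  }).catch(reject);
});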
