How can I use D3 to graph configuration items and their dependencies? - javascript

I'm a new user to D3 and am trying to think about how to best implement a mapping of our configuration items.
What I'm looking for is essentially a treemap (I think) but with inter-related dependencies.
The Data
I'm working with ITIL-style configuration items, so logical services, applications, machines, etc. that make up an IT service that we offer to our customers.
The JSON data I'd be providing is going to come from a WebAPI service that I am defining and so the data can be returned however is necessary.
The Goal
I need to get across:
The name of the configuration item ("Service A", "Server 1", "Database XYZ", etc.)
The configuration item type (represented by either an icon or color -- not too important right now)
Those things with the fewest dependents at the top
i.e. a service is represented by all the things that compose it (applications, DBs, etc.), and I'd like to have the hierarchy in order from services down.
The relationships between all the elements, which aren't strictly hierarchical.
Multiple services could depend on one application
Multiple applications could depend on multiple databases which could depend on one database server.
If possible, the ability to focus on one branch of the tree from top to bottom by clicking on it (though this can come way later)
Once I get my head around it, I'd like to set up something simple on GitHub and see if I can use D3 to help contribute to the world of IT Service Management.
The Question
Philosophically, is D3 designed to support a visualization of this nature, and what is my best route to accomplish it?
Thanks!
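
For relationships that aren't strictly hierarchical, one common D3 approach is a force-directed graph over a flat nodes/links structure rather than a treemap (d3.treemap assumes a strict tree). Below is a minimal sketch, assuming D3 v7, an existing svg element of roughly 600x400 pixels, and made-up configuration item names; the WebAPI could return JSON shaped like the data object here.

// A flat node list plus dependency links (names invented for illustration).
const data = {
  nodes: [
    { id: "Service A", type: "service" },
    { id: "App 1", type: "application" },
    { id: "Database XYZ", type: "database" }
  ],
  links: [
    { source: "Service A", target: "App 1" },        // Service A depends on App 1
    { source: "App 1", target: "Database XYZ" }
  ]
};

const svg = d3.select("svg");
const link = svg.selectAll("line").data(data.links).join("line").attr("stroke", "#999");
const node = svg.selectAll("circle").data(data.nodes).join("circle")
  .attr("r", 8)
  .attr("fill", d => d.type === "service" ? "steelblue" : "orange"); // CI type -> colour

d3.forceSimulation(data.nodes)
  .force("link", d3.forceLink(data.links).id(d => d.id).distance(80))
  .force("charge", d3.forceManyBody().strength(-200))
  .force("center", d3.forceCenter(300, 200))
  .on("tick", () => {
    link.attr("x1", d => d.source.x).attr("y1", d => d.source.y)
        .attr("x2", d => d.target.x).attr("y2", d => d.target.y);
    node.attr("cx", d => d.x).attr("cy", d => d.y);
  });

To keep the items with the fewest dependents near the top, one option is to add a d3.forceY per tier (services pulled up, servers pulled down) or to look at a layered layout such as d3-sankey; click-to-focus on a single branch can be layered on afterwards.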

Related

RESTful route structure of 3 related categories in Express/Node

I am a noob designing an API for an ecommerce site and I need to work out my restful route structure. We are basically using Node and React along with Express + GraphQL. I was wondering if someone could shine a light on the structure?
Here is what we Have:
It is a fashion store that has many products: skirts, blouses, tops, shirts, dresses, etc. You know the drill.
Each of these items will be in categories like this:
/dresses (GET all dresses)
/dresses/:id (GET a particular dress)
/skirts (GET all skirts)
/skirts/:id (GET a particular skirt)
In addition to all the products and the categories of products, at the top level we have DESIGNERS.
So, you would have a designer who has many categories, and each category has many products. Make sense?
I am puzzling over how to nest the express routes.
I want to do these things....
Get all designers
Get all categories a designer has a product in
Get all products a designer has in the store
Get an individual product (:id) that a designer has.
by the same token I also want to ...
Get all categories.
Get all products in a category
Get an individual product (:id) in a category.
So we have a parent route (designers)
and then two child routes (I think) within designers which
are /categories, /categories/:id, /categories/products, and /categories/products/:id
Could someone give me a code hint on creating the routes in Express? I want to create a JSON structure to render with this.
Thanks!
You're essentially asking how to build out an entire REST framework for a backend, which is quite a lengthy thing to answer :)
I'd suggest you start by designing the structure of the JSON for all these objects. Here's a handy site that allows you to visually see the layout of the JSON: https://jsoneditoronline.org/
Once you've done that for all the objects you've mentioned, here's a good tutorial on how to build basic Node.js endpoints: https://medium.com/@purposenigeria/build-a-restful-api-with-node-js-and-express-js-d7e59c7a3dfb
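
To make the nesting concrete, here is a minimal sketch of the routes the question describes, using Express routers. The exact paths and the db helpers are assumptions for illustration, not a finished design.

const express = require("express");
const app = express();

// Hypothetical data-access stubs; replace with real queries.
const db = {
  getAllDesigners: () => [],
  getCategoriesForDesigner: (designerId) => [],
  getProductsForDesigner: (designerId) => [],
  getProduct: (designerId, productId) => ({}),
  getAllCategories: () => [],
  getProductsInCategory: (categoryId) => [],
  getProductInCategory: (categoryId, productId) => ({})
};

// Parent route: /designers
const designers = express.Router();
designers.get("/", (req, res) => res.json(db.getAllDesigners()));
designers.get("/:designerId/categories", (req, res) =>
  res.json(db.getCategoriesForDesigner(req.params.designerId)));
designers.get("/:designerId/products", (req, res) =>
  res.json(db.getProductsForDesigner(req.params.designerId)));
designers.get("/:designerId/products/:productId", (req, res) =>
  res.json(db.getProduct(req.params.designerId, req.params.productId)));

// Parent route: /categories
const categories = express.Router();
categories.get("/", (req, res) => res.json(db.getAllCategories()));
categories.get("/:categoryId/products", (req, res) =>
  res.json(db.getProductsInCategory(req.params.categoryId)));
categories.get("/:categoryId/products/:productId", (req, res) =>
  res.json(db.getProductInCategory(req.params.categoryId, req.params.productId)));

app.use("/designers", designers);
app.use("/categories", categories);
app.listen(3000);

Mounting each router with app.use keeps the designer tree and the category tree independent, which mirrors the parent/child structure described in the question.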

Classifier to Predict activities taking place

I have multiple datasets here that I took from Kaggle. There are multiple CSV files, and each CSV file is made specifically for one activity: sitting, standing, walking, running, etc. The data is taken from sensors like accelerometers and gyroscopes. The values in the datasets are for axes like x, y and z.
Sample Data
Here is a sample dataset of jogging. Now I need to build classifiers in my program so that it can detect by itself whether the data is jogging, sitting, standing, etc. I want to merge all the datasets into a single CSV file, upload it to my webpage, and then have the JavaScript code detect whether a particular row is sitting, standing, jogging, and so on. I don't want any code help; I just need a little explanation, or a way to start coding it. How can I get started making such a classifier? I know it is kind of a broad question, but I think I have tried to explain myself in the best way possible. Once my program has tagged every row with a specific activity, it will count all the activities separately and then show them in a table on the webpage.
In order to answer your question properly, it would be very helpful to know your level of understanding and experience with machine learning.
If you are a beginner, I would suggest running and understanding a couple of tutorials, which can easily be found on the web.
If you need an idea of the "standard" approach to machine learning development, I will try to give you a general idea of the process.
You can summarize the process in these main steps:
Data pre-processing-> Data splitting -> Feature selection -> Model Training -> Validation -> Deployment
Data pre-processing is meant to clean and format the data: removing NA values, deciding how to handle categorical variables, outlier analysis, and so on. This is a complex step that depends on the application. In your case I would start by checking that the data in the different datasets are homogeneous, i.e. the features have the same meaning across CSV files and corresponding features follow the same distribution. While the meaning of each feature should be explained in the description of your CSV files, the check of the distributions can easily be done by plotting box-plots for each feature and CSV file. If distributions of the same feature across different CSV files don't overlap, you should investigate the issue further.
An important step in the design of a good model is the splitting of the data. You should split your data into training/validation sets (training/validation/test for a more comprehensive approach). This allows you to train your model on the training set and test it on the validation set, computing an unbiased estimate of your model's performance. I suggest becoming familiar with concepts such as cross-validation, stratified cross-validation, nested cross-validation for hyper-parameter tuning, overfitting, and bias. The validation of the model will give you an idea of the performance to expect on unseen data. If you are considering more than one model, you can use the validation results to choose the "best" one; I suggest a comparison using confidence intervals or, if possible, a significance test (e.g. t-test, ANOVA, ...). Before deployment, the model is trained on all the available data.
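To make the splitting step concrete in the asker's own stack, here is a minimal JavaScript sketch; it assumes the per-activity CSV files have already been parsed and merged into an array of row objects, and the row shape shown is made up.

// Minimal sketch: random train/validation split of already-parsed rows.
// Assumed row shape: { x: 0.1, y: -0.3, z: 9.8, activity: "jogging" }
const allRows = []; // would hold the merged rows from all CSV files

function trainValidationSplit(rows, validationFraction = 0.2) {
  const shuffled = [...rows];
  // Fisher-Yates shuffle so the split is not biased by file order.
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const cut = Math.floor(shuffled.length * (1 - validationFraction));
  return { training: shuffled.slice(0, cut), validation: shuffled.slice(cut) };
}

// Train only on training; report accuracy only on validation.
const { training, validation } = trainValidationSplit(allRows, 0.2);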
The choice of the model depends on the data that you are using: number of samples, number of features, type of variables (numerical, categorical), and so on.
I'm not an expert in JavaScript, but I believe (just a feeling) that Python and R are more common choices for developing machine learning applications. Both have libraries developed specifically for the task, and you can find a lot of material and tutorials around.
With a bit more context I could be more specific.
I hope it helps

Best way to scrape a set of pages with mixed content

I’m trying to show a list of lunch venues around the office with their today’s menus. But the problem is the websites that offer the lunch menus, don’t always offer the same kind of content.
For instance, some of the websites offer a nice JSON output. Look at this one: it offers the English/Finnish course names separately, and everything I need is available. There are a couple of others like this.
But others don't always have a nice output. Like this one. The content is laid out in plain HTML, and the English and Finnish food names are not exactly ordered. Also, food properties (L, VL, VS, G, etc.) are just normal text, like the food name.
What, in your opinion, is the best way to scrape all this available data in different formats and turn it into usable data? I tried to make a scraper with Node.js (& phantomjs, etc.) but it only works with one website, and it's not that accurate when it comes to the food names.
Thanks in advance.
You may use something like kimonolabs.com; they are much easier to use and they give you APIs to update your site.
Remember that they are best for tabular data contents.
There may be simple algorithmic solutions to the problem. If there is a list of all available food names, that can be really helpful: you look for occurrences of the food names inside a document (for today).
If there isn't any food list, you may use TF/IDF. TF/IDF lets you calculate the score of a word inside a document relative to the current document and the other documents. But this solution needs enough data to work.
I think the best solution is something like this (a sketch follows below):
Creating a list of all available websites that should be scraped.
Writing driver classes for each website data.
Each driver has the duty of creating the general domain entity from its standard document.
If you can use PHP, Simple HTML DOM Parser along with Guzzle would be a great choice. These two provide a jQuery-like selector interface and a nice wrapper around HTTP.
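Staying with the asker's Node.js stack rather than PHP, the driver idea above could look roughly like this. The class names, selectors, JSON field names and the common { name, tags } shape are all invented, and it assumes cheerio (a jQuery-like HTML parser for Node) plus the global fetch available in Node 18+.

const cheerio = require("cheerio");

// A driver for a site that already exposes JSON.
class JsonSiteDriver {
  async fetchMenu(url) {
    const res = await fetch(url);
    const data = await res.json();
    // Map the site-specific JSON into the common entity.
    return data.courses.map(c => ({ name: c.title_en, tags: c.properties || [] }));
  }
}

// A driver for a plain-HTML site.
class HtmlSiteDriver {
  async fetchMenu(url) {
    const res = await fetch(url);
    const $ = cheerio.load(await res.text());
    // Site-specific selectors live only inside this driver.
    return $(".menu-item").map((_, el) => ({
      name: $(el).find(".name").text().trim(),
      tags: $(el).find(".properties").text().split(",").map(t => t.trim())
    })).get();
  }
}

// The rest of the app only ever sees the common { name, tags } shape.
const drivers = [new JsonSiteDriver(), new HtmlSiteDriver()];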
You are touching on a really difficult problem. Unfortunately, there are no easy solutions.
Actually there are two different parts to solve:
data scraping from different sources
data integration
Let's start with first problem - data scraping from different sources. In my projects I usually process data in several steps. I have dedicated scrapers for all specific sites I want, and process them in the following order:
fetch raw page (unstructured data)
extract data from page (unstructured data)
extract, convert and map data into page-specific model (fully structured data)
map data from fully structured model to common/normalized model
Steps 1-2 are scraping oriented and steps 3-4 are strictly data-extraction / data-integration oriented.
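As a small illustration of steps 3-4, here is a hedged sketch of mapping a page-specific item into a common model; the field names, the tag dictionary and the example dish are invented, and a real mapper would be driven by each site's actual markup and legend.

// Illustrative guesses; use the real legend from each site.
const DIET_TAGS = { L: "lactose-free", VL: "low-lactose", G: "gluten-free" };

// Step 3 output: whatever shape this particular site gives us.
const pageSpecific = { nimi: "Lohikeitto (L, G)", price: "9,50 €" };

// Step 4: convert and map into the normalized model used by the rest of the app.
function toCommonModel(item) {
  const tagText = (item.nimi.match(/\(([^)]+)\)/) || [, ""])[1];
  const tags = tagText.split(",").map(t => t.trim()).filter(t => DIET_TAGS[t]);
  return {
    name: item.nimi.replace(/\s*\([^)]*\)\s*$/, ""),
    dietTags: tags.map(t => DIET_TAGS[t]),
    priceEur: parseFloat(item.price.replace(",", "."))
  };
}

console.log(toCommonModel(pageSpecific));
// -> { name: "Lohikeitto", dietTags: ["lactose-free", "gluten-free"], priceEur: 9.5 }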
While you can implement steps 1-2 relatively easily using your own web scrapers or by utilizing existing web services, data integration is the most difficult part in your case. You will probably require some machine-learning techniques (shallow, domain-specific natural language processing) along with custom heuristics.
In the case of such messy input as this one, I would process lines separately and use a dictionary to strip the Finnish/English words and analyse what is left. But in this case it will never be 100% accurate, due to the possibility of human-input errors.
I am also worried that your stack is not very well suited to such tasks. For such processing I use Java/Groovy along with integration frameworks (Mule ESB / Spring Integration) in order to coordinate the data processing.
In summary: it is really difficult and complex problem. I would rather assume less input data coverage than aiming to be 100% accurate (unless it is really worth it).

Should I use one publish per collection or several?

I am working on a user group system. Each group has several features and I want to make the interaction with the group collection as secure and simple as it can be since it is still at an early stage.
Right now, I have a group section on my website where I use several nested pages. The purpose of the section is to allow the user to join a group, request membership if the group is private, browse one group's objects, etc.
For example, within my group section, I can load into the yield a "see all groups" page, a "create a new group" page, a "see only my groups" page (the ones I am a member of), or a "view group" page to get a group's details.
My first approach was to create one controller.js file for each subpage, each calling one subscription tailored to that subpage's needs. For instance, I have an 'all_group' publication/subscription for the "see all groups" subpage and a "my_groups" one for the "see only my groups" subpage.
But this is becoming really messy. Additionally, I declared my "group" collection in the both folder, so I find it hard to follow where the data available to the client comes from.
Now that I explained the situation, here are my questions:
when I do a console.table(Groups.find().fetch()); on the client, I see fields that shouldn't be there (i.e. not returned by my current publication or any other). Is that because I declared the "group" collection on the client side? How do I fix that?
Should I get rid of all these publications and create only one with everything the client is allowed to see? I would then subscribe to it from the group section page controller and work with a single set of data.
Should I simply block any insert/update/remove from the client with allow/deny rules and make these changes using methods only?
Would it be safe/advised to put my methods in the both folder so I don't lose the latency compensation feature?
EDIT
Ok, I was freaking out because I had all my collection data on client-side but it was just a bad query in the publish (I was using both field:1 and field:0 projections).
Two questions remain:
If I use methods, I assume I don't have to deny everything in the native driver; I just have to be more restrictive than what the methods allow, right?
If I put my methods in the both folder, they will be executed on both client and server. So in a "client offline" context, even if the client messes with my methods, the server should roll back the changes if the client's result differs from its own (assuming the changes couldn't have been made through the allow/deny rules)? And will latency compensation still work with the methods?
To better control and visualize your subscriptions, you can use msavin:mongol.
Creating one catch-all publication is not a good idea performance-wise (sending all data to all clients will be a pain to everyone involved).
If you use methods and have removed autopublish, then yes everything is denied... Except for updates on the user's own profile. You may want to manually deny that too.
With methods and collection rules you should share the validation code. This way, client and server validate the same way (and should always come up with the same results), so unless your client is screwing around with the console there should be no issue, and lag compensation should remain.
If your server method does something the client should not know about, you can also define the method once on the server, and once on the client. Same effect.
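A minimal sketch of that shared-validation setup, assuming an older (pre-1.3, no imports) Meteor project with a Groups collection declared in the both folder as in the question; the field names and method name are made up.

// In the both (shared) folder: the method runs on client and server, so the
// client gets latency compensation while the server keeps the final word.
const validateGroupName = (name) => {
  check(name, String);
  if (name.length < 3) throw new Meteor.Error("invalid-name", "Name too short");
};

Meteor.methods({
  "groups.create"(name) {
    validateGroupName(name);
    return Groups.insert({ name, ownerId: this.userId, createdAt: new Date() });
  }
});

// Server only: one publication per view, sending only the fields that view needs.
if (Meteor.isServer) {
  Meteor.publish("my_groups", function () {
    return Groups.find(
      { members: this.userId },
      { fields: { name: 1, members: 1 } } // never mix 1 and 0 projections
    );
  });
}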

Meteor.js - Should you denormalize data?

This question has been driving me crazy and I can't get my head around it. I come from a MySQL relational background and have been using Meteorjs and Mongo. For the purposes of this question take the example of posts and authors. One Author to Many Posts. I have come up with two ways in which to do this:
Have a single collection of posts - each post has the author information embedded in the document. This of course leads to denormalization and issues such as: if the author's name changes, how do you keep the data correct?
Have two collections: posts and authors - each post has an author ID which references the authors collection. I then attempt to do a "join" on a non-relational database while trying to maintain reactivity.
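For concreteness, the two options could look roughly like this (the field names are made up):

// Option 1: denormalised - author info embedded in every post document.
Posts.insert({
  title: "Hello world",
  author: { _id: "a1", name: "Jane Doe" } // duplicated on each of her posts
});

// Option 2: normalised - posts only hold a reference to the author.
Authors.insert({ _id: "a1", name: "Jane Doe" });
Posts.insert({ title: "Hello world", authorId: "a1" });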
It seems to me that with MongoDB a degree of denormalization is acceptable, and I am tempted to embed, as implementing joins really does feel like going against the ideals of Mongo.
Can anyone shed any light on what is the right approach especially in terms of wanting my app data to scale well and be manageable?
Thanks
Denormalisation is useful when you're scaling your application and you notice that some queries are taking too much time to complete. I have also noticed that most MongoDB developers tend to forget about data normalisation, but that's another topic.
Some developers say things like: "Don't use observe and observeChanges because it's slow". We're building real-time applications, so that is a normal thing to happen; it's a CPU-intensive app design by nature.
In my opinion, you should always aim for a normalised database design, and then decide, try and test which fields, if duplicated/denormalised, could improve your app's performance. Example: you remove one query per user, or the UI needs an extra field and it's faster to duplicate it, etc.
With denormalisation you have an extra price to pay: you have to keep the denormalised fields up to date with the main collection.
Example:
Let's say that you have Authors and Articles collections. On each article you have the author's name. The author might change his name. With a normalised scenario, this works fine. With a denormalised scenario you have to update the Author document's name AND every single article owned by this author with the new name.
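With an embedded author like in option 1 of the question, that rename looks something like this (the embedded field layout is an assumption, and multi-document updates like this must run on the server):

// Update the author document itself...
Authors.update("a1", { $set: { name: "Jane Smith" } });
// ...and every article that embeds the old name.
Articles.update(
  { "author._id": "a1" },
  { $set: { "author.name": "Jane Smith" } },
  { multi: true }
);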
Keeping a normalised design makes your life easier, but denormalisation eventually becomes necessary.
From a MeteorJS perspective: with the normalised scenario you're sending data from two collections to the client. With the denormalised scenario, you only send one collection. You can also reactively join on the server and send one collection to the client, although this increases RAM usage because of MergeBox on the server.
Denormalisation is something that is very specific to your application's needs. You can use Kadira to find ways of making your application faster. The database design is only one factor out of many that you play with when trying to improve performance.
