I'm wondering what the best way is to do an "on cascade" copy/insert of related elements in PostgreSQL.
I made a simple example to illustrate my use case:
Database Definition
Three entities: Version, Element, ElementEffect.
A Version has many Elements.
An Element belongs to one Version.
An Element has many ElementEffects.
An ElementEffect belongs to one Element.
The Problem
Let's say we have 1 version, with 1 element that has 1 effect, in the database.
For my use case, I need to be able to create a new version copying all elements and element effects of the previous version and update some of them.
For example, when a new version, Version 2, is created:
The database should copy the existing element into a new one referencing the new version.
A new element effect should be created referencing the new element.
A new version arrives. It has the same elements and effects as the previous version, with one change: the element effect text changed from null to "lorem ipsum".
The operations that we need to do are:
Create a copy of all elements and their related entities for the version: Version -> Element -> ElementEffect.
Apply the new version's changes to the copied elements.
Question
What is the best way to achieve requirement 1 in PostgreSQL, with or without Sequelize ORM and Node.js?
Does PostgreSQL have any built-in feature to make this possible?
Should I solve this at the database level (maybe with PostgreSQL rules and triggers) or at the code level, with a Node script making queries and transactions?
Solution Requisites
I'm building this on PostgreSQL using Sequelize as the ORM from Node.js, so if I can build this with Sequelize, even better.
My use case is much more complex than the example. I have more than 15 entities, including many-to-many relationships, so the solution needs to scale over time. Meaning, I do not want to have to re-test this every time I add a new entity or modify an existing one.
The best idea that comes to mind is to create a stored procedure in PL/pgSQL, something like
make_new_version(my_table, oldid, newid). This procedure must copy the rows with the old id in table my_table and replace oldid with newid. The function also needs to return the ids of the newly inserted rows, so you can make another call to the same function to copy the rows for the next entity.
Once you have tested and confirmed this stored procedure, you can call it from another procedure (almost) without further testing.
This way you solve the problem at the database layer.
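If it helps, here is a minimal sketch of driving such a function from Node.js with Sequelize, assuming make_new_version() already exists in the database and returns the ids of the rows it inserts; the table names, parameters, and transaction handling are illustrative only.

const { QueryTypes } = require('sequelize');

async function copyVersion(sequelize, oldVersionId, newVersionId) {
  return sequelize.transaction(async (t) => {
    // Copy every element of the old version; the function is assumed to
    // return the ids of the newly inserted element rows.
    const newElements = await sequelize.query(
      'SELECT * FROM make_new_version(:tbl, :oldId, :newId)',
      {
        replacements: { tbl: 'elements', oldId: oldVersionId, newId: newVersionId },
        type: QueryTypes.SELECT,
        transaction: t,
      }
    );

    // Repeat per child entity (e.g. element_effects), feeding in each
    // old element id / new element id pair returned above.
    return newElements;
  });
}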
Related
I have a score entry page on a PHP-based website. It uses a DB query to retrieve the player's Name & DB ID into a PHP array variable, then loops over the returned data to create an HTML <select> dropdown list.
This has now become too long, especially when entering scores on a mobile device (on iOS it's even worse with that scroll wheel implementation they use; I don't even know what it looks like on Android!).
So I have a PHP array with the Name & ID fields in it.
I would like to convert this to filter a dynamic dropdown list as characters are typed in by the user.
I am a novice at JavaScript and its nuances, though I understand the principles of the language. I am also not very familiar with the DOM model and other things. I expect to use an onChange() function on the <input> textbox(?).
But how do I tie that back to my existing PHP array variable, or copy this variable "across" to the JS function?
I have changed the underlying DB to a NoSQL version (MongoDB). The bulk of tutorials, blogs, etc, for “AJAX Live Search” or equivalent seem to be centred around MySQL (which I previously used…)
The easiest approach would be to use a JS library that can do what you need.
In my experience this should work for you: https://selectize.dev/
It combines a select with a text input, where the text input is only used to search within the select and hide non-matching options.
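For example, something along these lines; this is only a rough sketch where #player is an assumed id for the existing PHP-generated dropdown, and jQuery plus selectize.js are assumed to be loaded on the page.

// Turn the existing, long <select> into a filter-as-you-type control.
$(function () {
  $('#player').selectize({
    sortField: 'text', // sort options alphabetically by their label
    maxOptions: 20     // only show the first 20 matches while typing
  });
});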
I have an interesting situation where I'm working with posts. I don't know how the user will want to structure the posts. It would either be one block of text, or structured as a -> b -> c, where a, b, and c are all text blocks; if represented as a table, there would be an unknown number of columns and an unknown number of rows.
Outside of the post data, there is the possibility of adding custom attributes to the post. Most of these would be shorter text strings, but an unknown number of them.
Understanding that a JSON object would probably be the simplest solution, I have to fit this into a self-serving DB. SQLite seems to be the currently accepted solution for RedwoodJS, the framework I'm building with. How would I go about storing this kind of data within RedwoodJS using the Prisma client it comes with?
Edit: The text blocks need to be separate when displaying the post and able to be referenced separately. There is another part of the project that will link to each text block specifically. The user would be choosing how many columns there are before entering any posts (configured in settings), but the rows would have to be updated dynamically. Closest example I can think of is like a test management software where you have precondition, execution steps, and expected results across the top for columns, and each additional step is a row.
Well, there are two routes you could take. If possible, use a NoSQL database such as MongoDB, which Prisma has support for. There you would be able to create a JSON-like structure with as many or as few paragraphs as you would like.
If that is not possible, there is a workaround: since SQLite does not support a JSON column type, you could store the stringified JSON data in a text field and then parse it when reading. This is not the optimal solution, so if possible use the first one.
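A rough sketch of that second route, assuming a Post model with a plain String column named content (Redwood services usually import the Prisma client from src/lib/db); the model and field names are illustrative.

import { db } from 'src/lib/db' // RedwoodJS convention for the PrismaClient

export const createPost = ({ blocks, attributes }) => {
  return db.post.create({
    data: {
      // Store the flexible block/attribute structure as a JSON string
      // in a plain TEXT column.
      content: JSON.stringify({ blocks, attributes }),
    },
  })
}

export const post = async ({ id }) => {
  const record = await db.post.findUnique({ where: { id } })
  // Parse the string back into an object before handing it to the web side.
  return record && { ...record, content: JSON.parse(record.content) }
}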
I have been wondering how to write the down function in a migration file. Ideally, it should be exactly the opposite of what we do in the up method. Now suppose I wrote an up function to drop a unique constraint on a column, then added some new rows (containing duplicate data) to the table, and now I want to roll back the migration. Ideally, I would write the down method to add the unique constraint back on the column, but the migration would not roll back because the table now contains duplicate data.
So my questions are -
What to do in such a situation?
How should I write the down function in migrations?
Can I keep the down function blank in such situations?
Thanks.
I usually don't write down functions at all and just leave them empty.
I never roll back migrations; if I want to get to an earlier DB state, I just restore the whole DB from backups.
If I just want to put the unique constraint back, I write another up migration that fixes the duplicate rows and then adds the unique constraint back.
I know that many people use rollback between tests to reset the DB, but that is a really slow way to do it.
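For example, the "fix the duplicates, then re-add the constraint" migration could look roughly like this. This is only a sketch assuming Sequelize-style migrations, a PostgreSQL database, and a users table with an email column; adjust the names and the de-duplication rule to your own schema and tooling.

module.exports = {
  async up(queryInterface) {
    // 1. Remove duplicate rows, keeping the lowest id per email.
    await queryInterface.sequelize.query(`
      DELETE FROM users a
      USING users b
      WHERE a.email = b.email AND a.id > b.id
    `);
    // 2. Re-add the unique constraint now that the data allows it.
    await queryInterface.addConstraint('users', {
      fields: ['email'],
      type: 'unique',
      name: 'users_email_unique',
    });
  },

  async down() {
    // Intentionally left empty, as described above.
  },
};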
I'm working on a little side project which has a search capability. I'm using typeahead.js attached to a REST API built with ExpressJS and MongoDB. I'm wondering what the best approach is to two problems I have with it. I'm primarily a front-end guy just starting out with Node and MongoDB. Here are the two issues I need help with, but first a little background to better understand them.
The site I'm building allows you to upload videos. You can add tags to these videos. When searching for a video I want to be able to search through these tags using the typeahead.js. Just like YouTube.
So here are the issues.
1 - I have a "tags" collection in MongoDB. When uploading a video, I take the tags for that video and add them to this collection, which I'll use for predictive searching. As time progresses, this collection should have plenty of tags to search through. The issue I'm having is how to insert only the unique tags (the ones that don't already exist). For example, say I want to insert the following document into MongoDB:
{
tags: "tag1, tag2, tag3, tag4, tag5, tag6, tag7, tag8"
}
The collection already has "tag1, tag2, tag4 and tag7", so I only want to insert 3, 5, 6 and 8. My question is what would be the best approach to do this. Should I first query the collection, parse through it and compare each tag, separate out the ones that don't exist, and then "append" them to the collection? The issue I see with this is that, again, as time progresses this will be a lot to parse through. So I'm not sure what the best approach here is.
2 - Would storing all of the tags in a simple array in a collection be the best approach? In time this array will be EXTREMELY large. Again, I'm not a database guy, so I don't have a great understanding of how to approach an issue like this.
As always any and all help is much appreciated.
Since MongoDB can't do joins, I would store the tags in each video document, a la myVideo.tags = ['sports', 'baseball', 'pitcher']. Then, to power your autosuggest, I would periodically map/reduce across the videos collection and output the set of active tags to a separate tags collection. You could even compute a popularity score and store something like {tag: 'baseball', score: 156} for the case where the 'baseball' tag was used in 156 videos, and use that to sort your autosuggest results so that more popular tags are shown earlier. When the user is typing 'ba', for example, 'baseball' is listed before 'baking' because it's a more likely completion, rather than merely alphabetically second.
Here's an example of exactly this straight out of the mongodb cookbook.
To point 2 in your question: nope. Never store an unbounded-length set of data as an array within a MongoDB document. There's a maximum document size (currently 16MB), so anything that will just grow and grow over time must be a collection of distinct documents.
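If it helps, here is a rough MongoDB shell sketch of the periodic map/reduce step described above; the collection and field names are assumptions.

// Rebuild the tags collection from the videos collection, counting how many
// videos use each tag. Run this periodically (e.g. from a cron job).
db.videos.mapReduce(
  function () {
    // map: emit one count per tag on each video
    this.tags.forEach(function (tag) { emit(tag, 1); });
  },
  function (tag, counts) {
    // reduce: sum the counts for this tag
    return Array.sum(counts);
  },
  { out: { replace: 'tags' } } // overwrite the tags collection on each run
);

// Autosuggest query: case-insensitive prefix match, most popular tags first.
db.tags.find({ _id: /^ba/i }).sort({ value: -1 }).limit(10);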
I am very new to JavaScript and jQuery, so my apologies for this beginner's question.
In a simple ajax web app, I am creating an HTML page which is mainly a big table. Each row in this table describes an event (party, show, etc). My page not only displays this information but is meant to let the user do a bunch of things with it, in particular to dynamically search and filter the table according to a variety of criteria.
First, an abstract beginner's question: in this broad kind of situation (by which I mean that you want your JavaScript code to run a bunch of operations on the information you retrieve from the webserver), would you use the DOM as a data structure? The ease with which one can search and manipulate it (using jQuery) makes that a possibility. (E.g., "find me table rows describing an event with date column = 2010-01-01 and event type column = 'private party'".) Or would you keep the same information in a traditional JavaScript data structure, search/filter/operate on that using plain JavaScript code, and then update the DOM accordingly to display the results to the user?
(As a newbie, I imagine the first, DOM-only approach to be slower, while the latter would take up a good deal of memory. Right? Wrong?)
Assuming the second strategy is reasonable (is it?), then a practical question: can I simply store in my JavaScript objects a pointer to the corresponding jQuery object? E.g., can I do
var events = new Array();
// ....
var event3094 = new Event('party','2010-01-01' /*, ... */);
event3094.domElement = $("#correctIdOfTheEventRowInMyTable");
events.push(event3094);
Does this store just a reference (pointer?) to the jQuery object in each Event object, or does it create a new copy of the jQuery object?
I am just wondering "how the pros" do it. : )
Thank you for any advice and insight.
cheers
lara
There are so many ways to do this, but DOM manipulation will almost always be slower than JS manipulation.
To answer your question: anytime you use $(selector), a new jQuery object is created and a search is performed to find the matching elements.
I would recommend two approaches:
FIRST OPTION
Load data in a normal HTML table
Read through the rows, and store just the data (each cell's contents) in an array similar to your code example.
Store a reference to the tr in that object.
Then you can process filtering, etc., and only apply changes and searches to the DOM as needed.
SECOND OPTION
Load the page without the table
Load the data as JSON from the server, and generate a table from the data
Store reference to the tr element
Basically, you don't want to perform $(selector) 1,000 times. The concept is something like this:
var $rows = $("table tr");
// ...
event.domElement = $rows[0]; // Stores reference to existing DOM node, not a new jQuery object.
Then when you need to use jQuery methods on the object, you could use $(yourEvent.domElement) to wrap it in a jQuery wrapper.
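Fleshing that out a little, a sketch along these lines could work; the table id, the column order, and the fields on each object are assumptions about your markup.

// Build the data structure once, keeping a plain DOM reference per row.
var events = [];

$("#eventsTable tbody tr").each(function () {
  var $cells = $(this).find("td");
  events.push({
    type: $cells.eq(0).text(), // e.g. "private party"
    date: $cells.eq(1).text(), // e.g. "2010-01-01"
    domElement: this           // plain DOM node, not a new jQuery object
  });
});

// Filtering then works on the array and only touches the DOM to show/hide rows.
$.each(events, function (i, ev) {
  var match = ev.date === "2010-01-01" && ev.type === "private party";
  $(ev.domElement).toggle(match); // wrap in jQuery only when needed
});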
Depending on the number of rows you might expect to be shown for most of your users (let's assume it's no more than a few hundred), I myself would probably aim to just keep everything in the DOM table that you're already building. (If you are expecting to be dealing with thousands of rows on one page, you might want to explore a different solution rather than sending it all to the browser.)
There are a few things that you did not mention in your original post. First, how are you creating this table? I imagine using a server-side solution. How easy is that to modify? How much extra work would it be to go through and generate all of your data a second time in a different format, as XML or JSON? Does this add a bunch of complexity on the server-side, only so that you can add more complexity client-side to match? Certain platforms may make this trivial, but is something to consider.
Now, in regards to your alternatives to the DOM:
I agreed and mentioned in a comment above that I don't think JSON would be very optimal "out of the box" for what you want to do. A JavaScript array is no better. XML is nice in that you can use jQuery to easily traverse/filter, but then you still have to deal with your DOM. Sure, you can store references to your DOM elements, but that just seems like a bunch of work up front and then some more work later when matching them up, and without necessarily guaranteeing any major performance boost.
So, to answer your question directly as it is phrased, should you ALSO keep your data in a JavaScript data structure, or just in the DOM: You did mention this was a "simple" ajax web app. My recommendation is to try and keep this simple, then! Your example of how you can so easily use jquery to find rows and cells based on search criteria should be enough to convince you to give this a try!
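For instance, filtering rows directly in the DOM can be as little as this; it is only a sketch, and the table id, column order, and cell values are assumptions about your markup.

// Hide every row, then show only the ones matching the search criteria.
$("#eventsTable tbody tr")
  .hide()
  .filter(function () {
    var $cells = $(this).find("td");
    return $cells.eq(1).text() === "2010-01-01" &&
           $cells.eq(0).text() === "private party";
  })
  .show();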
Best of luck! - Mike
I think you'll find that the DOM method is close to the same speed, if I follow your logic right. The second method requires you to manipulate the data and then apply the changes to the DOM, whereas the first method allows both operations at the same time.
If you've got a lot of data, I would forgo making objects and just supply it as XML. That way you get most of the same features as operating on the HTML DOM, but you don't have a crazy markup structure like a table to navigate through.
I think it's largely a personal preference, but I like to store the objects in JavaScript and then render them to HTML as needed. From my understanding
event3094.domElement = $("#correctIdOfTheEventRowInMyTable");
will store a reference to the jQuery wrapper of your row element (not a copy).
One benefit of storing in JS is that if you have lots of objects (e.g. 10,000) and only render a small fraction of them, the browser will perform a lot better because you're not creating 10,000 * (number of DOM elements per object) elements.
Edit: Oh, and if you can, you might want to send the data from the server to the client as JSON. It's very compact (compared to XML), and in newer browsers it can be parsed quickly and safely using JSON.parse(). For older browsers, just use this library: http://www.json.org/js.html. The library will only create a global JSON namespace if the browser doesn't supply it.
One thing to consider is how you need to access your data. If all the data for an element's event is contained in the element (an event operating solely on a table cell for example), then storing in the DOM makes sense.
However, if the calculation for one element depends on data from other elements (a summation of all table cells in a particular column, for example), you may find it difficult to gather up all the data if it's scattered about in DOM elements, compared to a single data structure.
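As a small illustration of the difference, summing a column from a JavaScript structure versus collecting it back out of the DOM might look like this; the field name, table id, and cell class are assumptions.

// Summing a column from a JavaScript data structure...
var totalFromData = events.reduce(function (sum, ev) {
  return sum + ev.amount;
}, 0);

// ...versus scraping the same numbers back out of the DOM.
var totalFromDom = 0;
$("#eventsTable td.amount").each(function () {
  totalFromDom += parseFloat($(this).text()) || 0;
});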