I have two datatables that I am trying to populate with data via a GET request to a flask API. My datasource url is localhost:5000/data but I am unable to get datatables to display the data. When I create a static .txt file, I can get the data to come through. I looked at my GET request and it looks like it is being appended with some sort of event id from jQuery (I am pretty new to this...). I would eventually like to be able to pass a custom argument to the GET request in order to filter the second table based on which row in the first table is clicked on by the user.
I have experimented with both aaData and sAjaxSource and I cannot get either one to work.
My JSON object has this form:
{
"items": [
{
"column1": "Foo",
"column2": "Bar",
"column3": "1.54"
},
{
"column1": "Blah",
"column2": "Tah",
"column3": "1.54"
}
]
}
Table 1 - I am using a static .txt file and this table displays fine
$(document).ready(function() {
    $('#table1').dataTable( {
        "bProcessing": true,
        "sAjaxSource": "/thisWorks.txt",
        "sAjaxDataProp": "item",
        "aoColumns": [
            { "mData": "column1" },
            { "mData": "column2" },
            { "mData": "column3" }
        ]
    } );

    // listen for clicks on table 1's rows (the selector must match the table's id)
    $('#table1 tbody').on('click', 'tr', function () {
        var clickId = $('td', this).eq(0).text();
    } );
} );
Table 2 - Can't get this one to work
$('#table2').dataTable( {
    "bProcessing": true,
    "sAjaxSource": "http://localhost:5000/data?column1=1234",
    "sAjaxDataProp": "items",
    "aoColumns": [
        { "mData": "column1" },
        { "mData": "column2" },
        { "mData": "column3" }
    ]
} );
When I look in my Chrome console, I see my second Ajax request being sent as something like:
http://localhost:5000/data?column1=1234&_1412145757890
Eventually, I would like to pass the value of clickId from my first table to the Ajax source in my second table so any guidance there would be appreciated.
Thanks!
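For the eventual click-to-filter behaviour, one approach is to initialize the second table once and reload it with a new URL whenever a row in the first table is clicked. The sketch below assumes DataTables 1.10+ (where the legacy sAjax* options still work but the newer ajax.url().load() API is available); the /data endpoint and column1 parameter come from the question, everything else is illustrative. Also, the trailing _1412145757890-style parameter you see in the console is just jQuery's cache-busting timestamp, not an event id, and is harmless.
$(document).ready(function () {
    // initialize table 2 once; the column1 filter is added on reload
    var table2 = $('#table2').DataTable({
        "processing": true,
        "ajax": {
            "url": "http://localhost:5000/data",
            "dataSrc": "items"
        },
        "columns": [
            { "data": "column1" },
            { "data": "column2" },
            { "data": "column3" }
        ]
    });

    // when a row of table 1 is clicked, reload table 2 filtered by that row's first cell
    $('#table1 tbody').on('click', 'tr', function () {
        var clickId = $('td', this).eq(0).text();
        table2.ajax.url('http://localhost:5000/data?column1=' + encodeURIComponent(clickId)).load();
    });
});
Note as well that if the page itself is not served from localhost:5000, the request to the Flask API is cross-origin and will be blocked by the browser's same-origin policy, which is likely why the static .txt file (same origin) works while the API call does not; the excerpt below explains the policy.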
https://softwareengineering.stackexchange.com/questions/216605/how-do-web-servers-enforce-the-same-origin-policy
The same origin policy is a wholly client-based restriction, and is primarily engineered to protect users, not services. All or most browsers include a command-line switch or configuration option to turn it off. The SOP is like a seat belt in a car: it protects the rider in the car, but anyone can freely choose not to use it. Certainly don't expect a person's seat belt to stop them from getting out of their car and attacking you (or accessing your Web service).
Suppose I write a program that accesses your Web service. It's just a program that sends TCP messages that include HTTP requests. You're asking for a server-side mechanism to distinguish between requests made by my program (which can send anything) and requests made by a browser that has a page loaded from a permitted origin. It simply can't be done; my program can always send a request identical to one formed by a Web page.
The same-origin policy was invented because it prevents code from one website from accessing credential-restricted content on another site. Ajax requests are by default sent with any auth cookies granted by the target site. For example, suppose I accidentally load http://evil.com/, which sends a request for http://mail.google.com/. If the SOP were not in place, and I was signed into Gmail, the script at evil.com could see my inbox. If the site at evil.com wants to load mail.google.com without my cookies, it can just use a proxy server; the public contents of mail.google.com are not a secret (but the contents of mail.google.com when accessed with my cookies are a secret).
Related
I'm using DataTables to show information in a table and I've enabled server-side loading (I only need it for loading data and pagination). I don't need server-side searching or sorting; I want the default client-side (jQuery) searching and sorting for my table.
How can I do that?
var table2 = $('#datatable-buttons2').DataTable({
"serverSide": true,
"processing": true,
"asSorting": ['desc', 'asc'],
"ajax": {
'type': 'POST',
'url': 'test.php?nowsearch=1',
'data': {
inputaz: $("#inputaz").val(),
inputta: $("#inputta").val(),
inputkey: $("#inputkey").val()
}
},
"columns": [{
"data": "group_name"
}, {
"data": "sender"
}, {
"data": "date"
}],
});
You could do this by customizing the data retrieved in JavaScript, but it isn't really useful.
If you sort your items client-side with serverSide: true, only the currently retrieved data gets sorted, not the whole dataset in your database (that is, if you actually limit results on the server; if you don't, just use serverSide: false, which will retrieve all records).
For example, if you have 1000 records, you only get the data for the first 10 of them (or 25/50/100), not all. If you sort locally by age, only the retrieved 10 get sorted. Then, if you go to the next page, you will notice the 2nd page is not sequential with the 1st page.
That's why you must implement your sorting/search in your back-end language when you use serverSide: true: there you sort first and then return the 10 sorted records.
That's the idea behind this option.
Anyway... you can have client-side pagination (serverSide: false) too, by using the paging option. But you need to be aware of the performance implications.
In short, if you will be handling many records, you should keep using server-side processing and implement the sorting/search/pagination/etc. in your back-end, returning the corresponding data. It's not that hard; you can work with the parameters DataTables sends to achieve this.
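If the dataset is small enough to load in one go, the serverSide: false setup described above might look like this. It is only a sketch, reusing the table, endpoint and inputs from the question (which are assumptions about your setup):
var table2 = $('#datatable-buttons2').DataTable({
    "serverSide": false,  // DataTables sorts, searches and pages in the browser
    "processing": true,
    "paging": true,       // client-side pagination over the full dataset
    "ajax": {
        "type": "POST",
        "url": "test.php?nowsearch=1",  // must now return ALL matching rows, not a single page
        "data": function (d) {
            // read the inputs on every reload, not just once at init time
            d.inputaz = $("#inputaz").val();
            d.inputta = $("#inputta").val();
            d.inputkey = $("#inputkey").val();
        }
    },
    "columns": [
        { "data": "group_name" },
        { "data": "sender" },
        { "data": "date" }
    ]
});
With serverSide: false, DataTables no longer sends the paging/sorting parameters, so test.php just needs to return every row under the data property of the response.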
For now I have
require([
"dojo/on", "dgrid/OnDemandGrid","dgrid/Tree","dgrid/Editor", "dgrid/Keyboard", "dojo/_base/declare",
"dgrid/data/createHierarchicalStore", "data/projects_data",
"dojo/domReady!"
], function(
on, Grid, Tree, Editor, Keyboard, declare, createHierarchicalStore, hierarchicalCountryData
){
var count = 0; // for incrementing edits from button under 1st grid
function nbspFormatter(value){
// returns "&nbsp;" for blank content, to prevent cell collapsing
return value === undefined || value === "" ? "&nbsp;" : value;
}
var StandardGrid = declare([Grid, Keyboard, Editor, Tree]);
window.grid = new StandardGrid({
collection: createHierarchicalStore({ data: hierarchicalCountryData }, true),
columns: [
{renderExpando: true, label: "Name", field:"variant_name", sortable: false, editor: "text", editOn: "dblclick"},
{label: "Visited", field: "bool", sortable: false, editor: "checkbox"},
{label:"Project", field:"project", sortable: false, editor: "text", editOn: "dblclick"},
{label:"locked", field:"locked", editor: "text", editOn: "dblclick"},
{label:"modified", field:"modified", editor: "text", editOn: "dblclick"},
{label:"summary", field:"summary", editor: "text", editOn: "dblclick"}
]
}, "treeGrid2");
grid.on("dgrid-datachange", function(evt){
console.log("data changed: ", evt.oldValue, " -> ", evt.value);
console.log("cell: ", evt.cell.row.id, evt.cell.column.field);
});
grid.on("dgrid-editor-show", function(evt){
console.log("show editOn editor: ", evt);
});
grid.on("dgrid-editor-hide", function(evt){
console.log("hide editOn editor: ", evt);
});
});
data/projects_data is a JS file containing the data, but how do I connect this dgrid to a MySQL database? I can't find any good information in the docs. I think it might be something with JSON/REST, but I'm not sure about this.
Addition:
I can show the database contents in an HTML table. Is there a suitable possibility to populate dgrid from an HTML table?
I am still missing something. I have the connection from
Database -> PHP
but I can't get the result into proper JS to load into dstore.
The simplest path forward is to write a service in your server-side language of choice (it sounds like PHP in this case) that produces JSON output based on the data in your MySQL database. Depending on the potential size of your data, you could design it to work with one of two out-of-the-box stores in dstore: Request (and Rest if write operations are also involved), or RequestMemory.
The simpler of the two is RequestMemory, which simply combines the features of the Memory store with an up-front server request (via Request). This store will expect the service to respond with one complete data payload: an array of objects where each object is a record in your database. Something like this:
[
{
"id": 1,
"foo": "bar"
},
{
"id": 2,
"foo": "baz"
}
]
The Rest store expects data in the same format, but also expects the service to handle filtering, sorting, and ranges. Filtering and sorting are represented via query string parameters (e.g. foo=bar&baz=bim in the simplest case for filter, and sort(+foo) or sort(-foo) for sort), while ranges are typically represented via the HTTP Range header (e.g. Range: Items 0-9 for the first 10 items).
Implementing a service for the Rest store is obviously more work, but would be preferable if you're expecting your data source to potentially have thousands of items, since RequestMemory would have no choice but to request all of the items up-front.
With either of these stores, once you have a service that outputs JSON as appropriate, you can create an instance of the store with its target pointing to the service endpoint, then pass it to a grid via the collection property.
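As a rough sketch (the data.php endpoint and the column names are placeholders, not anything prescribed by dgrid or dstore), the RequestMemory route could look like this:
require([
    "dstore/RequestMemory", "dgrid/OnDemandGrid", "dojo/domReady!"
], function (RequestMemory, OnDemandGrid) {
    // RequestMemory fetches data.php once up-front, then behaves like an in-memory store
    var store = new RequestMemory({ target: "data.php" });

    var grid = new OnDemandGrid({
        collection: store,
        columns: {
            id: "ID",
            foo: "Foo"
        }
    }, "grid");
});
On the PHP side, data.php would just query MySQL and echo json_encode of the result set in the array-of-objects shape shown above.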
If your data is intended to represent a hierarchical structure, it should still be possible to mix dstore/Tree into dstore/RequestMemory or dstore/Request, provided that your hierarchy is represented via parent ID references. By default, Tree filters children via a parent property on each item, and reports mayHaveChildren results by inspecting a hasChildren property on each item.
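For the hierarchical case, that means the service output would carry those two properties on each item; for example (field names per the defaults just described, purely illustrative):
[
    { "id": 1, "foo": "bar", "hasChildren": true },
    { "id": 2, "foo": "baz", "parent": 1, "hasChildren": false }
]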
In JavaScript code of web application, the table sorter is defined by:
$("#my-table").tablesorter({
headers: {
1: {
sorter: false
}
},
widgets: ["saveSort"]
});
So when the page is refreshed the table's sorting is preserved, but when the browser is closed the table goes back to its original sorting. What I want is to get how the table is sorted and save it to a database. Can someone suggest how I can obtain the cookie(s) that store how the table is sorted?
Thanks
When the saveSort widget (demo) saves the information, it tests the browser for localStorage first, then if that isn't available, it falls back to saving the sort to a cookie. So, you can either use the function built into the widgets file like this:
var myTable = $('#table1')[0],
myLastSort = $.tablesorter.storage( myTable, 'tablesorter-savesort');
Or, if you are using Chrome, go to that page and press F12, then click on the Resources tab and look under "Local Storage".
The value may look a bit confusing, but it's just a JSON format:
{
"/tablesorter/docs/example-widget-savesort.html": {
"0": {
"sortList": [ [0,0],[2,1] ]
},
"1": {
"sortList": [ [0,0] ]
}
}
}
And it is broken down as follows:
"/tablesorter/docs/example-widget-savesort.html" is the url of the web page
"0" or "1" would either be the table ID or the index of the table on the page
"sortList" contains the actual sort list value
So as you can see in the above data, it is saving sort information for two tables on one web page.
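If the end goal is to persist the sort in a database rather than read it back from localStorage or the cookie, another option is to send the sort list to the server yourself whenever the user sorts, using tablesorter's sortEnd event. This is only a sketch; save_sort.php is a made-up endpoint you would have to implement:
$("#my-table").on("sortEnd", function (e, table) {
    // config.sortList holds the current sort, e.g. [[0,0],[2,1]]
    var sortList = (table || this).config.sortList;
    $.post("save_sort.php", { sort: JSON.stringify(sortList) });
});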
You could use jQuery to save a cookie with the sorted column. The next time the page is loaded, use either jQuery or server-side logic to get the value of the cookie and sort the appropriate column. This might be useful reading: Can jQuery read/write cookies to a browser?
I have multiple data tables per page, ranging from 4 to 8 ish.
All the tables have different settings. All the data is fetched via sAjaxSource (a JavaScript array).
My question boils down to:
Solution 1)
Should I have one separate URL for each table? This seems to work, but it means a full page load takes a lot longer.
Solution 2)
Have one and the same link for all the tables (each with a separate array name), so it's only 1 download.
My questions are as follows:
Is there any recommended, best-practice solution for multiple DataTables per page, in terms of using one or multiple links to get the JavaScript arrays?
If you provide the same Ajax link to multiple DataTables, the browser seems to download it once per table instead of once for all tables. Is this by design or a fault in my code?
Side note: I have checked http://www.datatables.net/examples/basic_init/multiple_tables.html and search the documentation but did not learn anything about the above questions.
In the case you described above, I would not rely on browser caching; instead I would get the data with my own single Ajax request, store it in a local variable, and use the 'aaData' option for the different tables.
var mydata;
$(document).ready(function(){
    $.get("source/file.php", function(data){
        mydata = data;
        $('#table1').dataTable({ "aaData": mydata[0] });
        $('#table2').dataTable({ "aaData": mydata[1] });
    }, 'json');
});
But in the end the solution depends on your needs; maybe you'll need lots of data, maybe it'll require paging and you would be better off with multiple 'source' files with deferred loading options, etc.
The fact that the browser downloads the file only the first time when you provide the same link is, I think, due to the browser's caching capabilities and has nothing to do with DataTables or your code. The browser puts the content in its cache the first time and then serves it from there.
You can use this fact to your advantage by using the sAjaxDataProp option. I'm thinking something along these lines :
$('#table1').dataTable( {
"sAjaxSource": "sources/data.txt",
"sAjaxDataProp": "table1"
} );
$('#table2').dataTable( {
"sAjaxSource": "sources/data.txt",
"sAjaxDataProp": "table2"
} );
[ ... ]
$('#tableN').dataTable( {
"sAjaxSource": "sources/data.txt",
"sAjaxDataProp": "tableN"
} );
This will tell DataTables to look for a specific JavaScript array in the loaded content. Obviously, the data.txt file must then contain the data for each table under the corresponding property.
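For example (an assumed structure, with DataTables' default array-of-arrays row format), data.txt could look like:
{
    "table1": [ [ "row 1 col 1", "row 1 col 2" ], [ "row 2 col 1", "row 2 col 2" ] ],
    "table2": [ [ "row 1 col 1", "row 1 col 2" ] ],
    "tableN": [ [ "row 1 col 1", "row 1 col 2" ] ]
}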
If you want to be sure that the browser does only one request, you could also load the data by other means, a jQuery Ajax function for example, and then initialize the DataTables with JavaScript arrays:
$('#table1').dataTable( { "aaData": array1 } );
$('#table2').dataTable( { "aaData": array2 } );
$('#tableN').dataTable( { "aaData": arrayN } );
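For example, a single request could feed all the initializations (still assuming the combined data.txt structure sketched above):
$.getJSON("sources/data.txt", function (data) {
    $('#table1').dataTable({ "aaData": data.table1 });
    $('#table2').dataTable({ "aaData": data.table2 });
    $('#tableN').dataTable({ "aaData": data.tableN });
});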
I hope this will help :)
I'm writing a simple web service that return a JSON response. It'll be heavily used, so I want to try and make the JSON response as small as possible for performance reasons. I'm on the fence over a design decision; penny for your thoughts!
My JSON response from the server looks like this:
{
"customers":
[
{
"id": "337",
"key": "APIfe45904c"
},
{
"id": "338",
"key": "somethingDifferent"
},
{
"id": "339",
"key": "APIfe45904c"
},
{
"id": "340",
"key": "APIfe45904c"
}
]
}
The APIfe45904c here is used in about 60-70% of the records, so I could also modify the JSON response to remove the repeated information and add a default_key, i.e. if there's no key specified, the client should assume the default_key, like this:
{
"default_key": "APIfe45904c",
"customers":
[
{
"id": "337"
},
{
"id": "338",
"key": "somethingDifferent"
},
{
"id": "339"
},
{
"id": "340"
}
]
}
No client is using the web service yet, so this wouldn't break anything. Is this good practice? It works, and makes for a small JSON response, but I'm conflicted. I like the KISS principle for developers using the service, but I also want as small a JSON response as possible.
I was tempted to replace customers with c, id with i and key with k to help reduce the file size, but I figured this would be a problem if I want to get other clients to start using it. Should I drop the idea of default_key for the same reason?
Each JSON response will likely be no more 200 lines of id/key pairs, so I don't need to incorporate pagination, etc.
I would keep it simple as you say, and then use gzip to compress it. It should compress very well as it is repetitive, and remains convenient for programmers.
See here for pointers in outputting gzip headers for AJAX: Is gzip encoding compatible with JSON?
Unless you have very special performance needs, I would always choose clarity over brevity. Especially for an API that is going to be used by many developers.
You should use the consistent format where each record has an id and a key field. What you lose in bandwidth you gain from not having to pre-process the JSON on the client-side.
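For illustration, the client-side pre-processing that the default_key variant forces on every consumer would look roughly like this (a sketch; the field names come from the proposed response):
function expandKeys(response) {
    // copy default_key into every record that doesn't carry its own key
    return response.customers.map(function (customer) {
        return {
            id: customer.id,
            key: customer.key || response.default_key
        };
    });
}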
I tend to analyze my JSON data structure like you but in the end it isn't worth the tiny bit of space you save. Your JSON data structure looks good... have you seen Twitter's JSON data structure? Now that is ugly.
I would go with the default key idea, but I wouldn't go as far as shortening the attribute names since that can be confusing. Perhaps you can take an argument from the web service call (from query string) that specifies whether or not the client desires to have shortened attribute names.