I'm using a BIRT report with a rather complicated query that gathers different metrics from multiple tables.
The metrics are computed for data between two dates, set as parameters, and with a configurable periodicity.
So I don't know how many rows will be in the output.
In the end I get table like this:
        | Metric A | Metric B | Metric C
2017/01 |        1 |        0 |        4
2017/02 |        0 |        3 |        4
2017/03 |        1 |        2 |        3
In the report design, I need to display it transposed (the metric names are long and there are too many of them).
How can I do it within the report itself?
I think this would be harder to do in the query (I don't know how many rows I'll get in the result - it depends dynamically on the dates and the periodicity).
I tried so far:
Cubes and crosstabs - but how can I use them when I don't know how many columns I'll get in the end?
Scripting - I can dynamically add more columns, or create the table as it stands. But how do I get the data from the dataset? Here I tried something like this:
First, in the dataset's onFetch method, I put something in a report variable:
reportContext.setPersistentGlobalVariable("metric_1",row["metric_1"].toString());
Then, in the report beforeFactory, I try to access the variable:
mylabel.setText( reportContext.getPersistentGlobalVariable("metric_1") );
But the result is empty.
Is this the wrong way to do it? Do you have any ideas? I would like to do it in script - set the report data into the global variable, then in beforeFactory access the result set and build a table for it.
Do you have any other ideas?
P.S. It is very unfortunate that you can't have something like a detailColumn (you do have a detailRow).
Related
I have an application where I need to measure timestamp-based parameter values from each device. The information is heavily structured, and the reason I haven't looked into databases is that I have to fetch all the data - 100 × 1000 = 100,000 rows - every few minutes. I want to delete the data corresponding to the oldest timestamp in each group. I am using Python, but even JavaScript would do. I could not find a limit parameter in Python's official csv module. Help is much appreciated.
Item 1
    Timestamp, parameter1, parameter2, ..., parameterN
    ...
    (100 rows)
Item 2
    Timestamp, parameter1, parameter2, ..., parameterN
    ...
    (100 rows)
... (1000 items)
Note: There are no headers to separate any rows, the Item 1,2 etc. are shown for representational purposes.
I need to be able to add a new row every few minutes under each group and get rid of the oldest one, effectively keeping the count at 100 per group.
There's no limit parameter because a reader is just an iterator, and Python has generic ways to do anything you might want to do with any iterator.
import csv, itertools, collections, heapq, operator

with open(path) as f:
    r = csv.reader(f)

First 100:

    itertools.islice(r, 100)

Last 100:

    collections.deque(r, maxlen=100)

Max 100 by 3rd column:

    heapq.nlargest(100, r, key=operator.itemgetter(2))
… and so on.
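Applied to the question's layout (newest rows win, 100 kept per item), these pieces combine into a short sketch. The `last_n_per_item` name and the assumption that the first column identifies the item are mine; adjust to your actual layout:

```python
import csv
import collections

def last_n_per_item(path, n=100):
    """Group rows by the item column and keep only the newest n per item."""
    groups = collections.defaultdict(lambda: collections.deque(maxlen=n))
    with open(path, newline="") as f:
        for row in csv.reader(f):
            # row[0] is assumed to identify the item; once a deque is full,
            # appending automatically drops the oldest row for that item.
            groups[row[0]].append(row)
    return groups
```

This keeps the per-group cap without ever materialising the whole file as one list.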
Store your data internally like this:
dict[key][timestamp] = list of values
data = {}
if 'bob' not in data:
    data['bob'] = {}
data['bob'][timestamp] = list(values)
After 2 iterations your data structure will look like:
data['bob'][15000021] = [1, 2, 3, 4, 5]
data['bob'][15003621] = [5, 6, 7, 8, 9, 0]
If you want only the latest entries, just get the unique timestamp keys for bob and delete:
- either anything beyond n items (bob's values sorted by timestamp)
- or anything whose timestamp is less than now() - 2 days (or whatever your rule is)
I use both mechanisms in similar datasets. I strongly suggest you then save this data, in case your process exits.
Should your data contain an OrderedDict (which would make the removal easier), please note pickle will fail; however, the excellent module dill (I am not kidding) handles all datatypes and closures much more nicely IMHO.
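Under the data[key][timestamp] layout above, trimming each key back to its newest entries is a short loop. This `trim` helper is an illustrative sketch (the name and the default of 100 are assumptions mirroring the question):

```python
def trim(data, keep=100):
    """Drop the oldest timestamps so each key retains at most `keep` entries."""
    for key, series in data.items():
        if len(series) > keep:
            # sorted() puts the oldest timestamps first; delete all but the
            # newest `keep` of them.
            for ts in sorted(series)[:-keep]:
                del series[ts]
    return data
```

The same loop works for the "older than now() - 2 days" rule by replacing the slice with a timestamp comparison.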
** Moving from Comments **
I'm assuming reading the file from the bottom up would help you... This can be done by prepending entries to the beginning of the file.
With that assumption, you just need to rewrite the file on each entry: read the current file into an array, push() the new entry, shift() the list, and write it out to a new file.
Alternatively, you can continue to push() entries to the file and only read the first 100. After your read, you can remove the file and start a new one if you expect to consistently get more than 100 entries between reads, or you can trim the file down to just 100 entries.
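The rewrite-on-each-entry idea can be sketched in Python (keeping with the rest of the thread); `append_and_cap` and the cap of 100 are illustrative names, not part of any library:

```python
def append_and_cap(path, new_row, cap=100):
    """Append a row and rewrite the file keeping only the newest `cap` lines."""
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []          # first write: start with an empty file
    lines.append(new_row)
    with open(path, "w") as f:
        # keep only the last `cap` rows, oldest rows fall off the front
        f.write("\n".join(lines[-cap:]) + "\n")
```

Rewriting a 100-line file every few minutes is cheap; the approach only becomes a problem if the per-group files grow much larger.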
I have TableA, which has two columns holding the range 1,50 (i.e. 1, 2, 3, ..., 50), and TableB, which contains the same two columns but with the ranges 20,30 (i.e. 20, 21, ..., 30) and 40,45 (i.e. 40, 41, ..., 45). I want to find the remaining sub-ranges as output in TableC, like the entries 1,19 & 31,39 & 46,49. How can I achieve this using Hibernate, MySQL, or simply Java? Please refer to this snap for the database structure.
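For what it's worth, once the used ranges are loaded (e.g. via Hibernate), the gap computation itself is a simple sweep over the sorted intervals. Here is an illustrative sketch in Python (treating both endpoints as inclusive, so the last gap runs to the end of the full range; the function name is mine):

```python
def remaining_ranges(full, used):
    """Return the sub-ranges of `full` not covered by the intervals in `used`.

    `full` and each entry of `used` are inclusive (start, end) pairs.
    """
    start, end = full
    gaps = []
    for u_start, u_end in sorted(used):
        if start < u_start:
            gaps.append((start, u_start - 1))   # gap before this used range
        start = max(start, u_end + 1)           # resume after the used range
    if start <= end:
        gaps.append((start, end))               # tail gap after the last range
    return gaps
```

The same logic translates directly into a Java loop over the TableB rows, with the results inserted into TableC.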
As we know, Firebase won't let you order by multiple children. I'm looking for a way to filter my data so that in the end I can limit it to just one result. So if I want to get the lowest price, it would be something like this:
ref.orderByChild("price").limitToFirst(1).on...
The problem is that I also need to filter it by dates (timestamp)
so for that only I will do:
.orderByChild("timestamp").startAt(startValue).endAt(endValue).on...
So for now that's my query, and then I iterate over all results looking for the one row that has the lowest price. My data is pretty big, around 100,000 rows, and I can change its structure however I want.
The first query gets the lowest price across all timestamps, so the returned row might have the lowest price but fall outside my date range. However, this query takes ONLY 2 seconds, compared to 20 for the second one including my code to find the lowest price.
So, what are your suggestions on how best to do this? I know I could add another index combining the timestamp and the price, but those are different data values, which makes that impossible.
full data structure:
country
  store
    item
      price
      timestamp
Just to make it even clearer, I have two nested loops which run over all countries and then over all stores, so the real query is something like this:
ref.child(country[i]).child(store[j]).orderByChild("timestamp").startAt(startValue).endAt(endValue).on...
Thanks!
I've got a table which manages user scores, e.g.:
id      scoreA   scoreB   ...   scoreX
------  -------  -------  ...   -------
1       ...      ...      ...   ...
2       ...      ...      ...   ...
Now I want to create a scoreboard which can be sorted by each of the scores (descending only).
However, I can't just query the entries and send them to the client (which renders them with Javascript) as the table contains thousands of entries and sending all of those entries to the client would create unreasonable traffic.
I came to the conclusion that all non-relevant entries (entries which may not show up in the scoreboard as the score is too low) should be discarded on the server-side with the following rule of thumb:
If any of the scores is within the top ten for that specific score, keep the entry.
If none of the scores is within the top ten, discard it.
Now I wonder whether this can be done efficiently in (My)SQL, or whether the processing should happen in the PHP code querying the database to keep the whole thing performant.
Any help is greatly appreciated!
Go with rows, not columns, for storing scores. Have a composite index on (userid, score). A datetime column could also be useful. Consider not having a top-10 snapshot table at all, just the lookup that you suggest - so an ORDER BY score DESC and LIMIT 10 in the query.
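A rough illustration of the row-per-score layout, using sqlite3 here in place of MySQL; the table, column, and index names are assumptions, and the index is on (kind, score) so the per-score ORDER BY can be served from the index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per (user, score kind) instead of one column per score kind
conn.execute("CREATE TABLE scores (userid INTEGER, kind TEXT, score INTEGER)")
conn.execute("CREATE INDEX idx_kind_score ON scores (kind, score)")
rows = [(1, "A", 50), (2, "A", 90), (3, "A", 70), (1, "B", 10), (2, "B", 30)]
conn.executemany("INSERT INTO scores VALUES (?, ?, ?)", rows)

# Top 10 for one score kind - only these rows ever travel to the client
top = conn.execute(
    "SELECT userid, score FROM scores WHERE kind = ? "
    "ORDER BY score DESC LIMIT 10", ("A",)
).fetchall()
```

With this layout, adding a new score kind is an INSERT rather than an ALTER TABLE, and each scoreboard is one indexed query.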
Not that the below reference is the authority on Covering Indexes, but to throw the term out there for your investigation. Good luck.
You can try to use an INDEX for this specific case; it enhances performance.
This will query specific results for your kind of problem.
Read about it here.
Good luck, buddy.
I would first fire a query to obtain the top 10. Then fire the query to get the results, using the top 10 in your sql.
I can't formulate the query until I know what you mean by top 10 - give an example.
I've recently started using Interactive Reports in my Oracle APEX application. Previously, all pages in the application used Classic Reports. The Interactive Report in my new page works great, but, now, I'd like to add a summary box/table above the Interactive Report on the same page that displays the summed values of some of the columns in the Interactive Report. In other words, if my Interactive Report displays 3 distinct manager names, 2 distinct office locations, and 5 different employees, my summary box would contain one row and three columns with the numbers, 3, 2, and 5, respectively.
So far, I have made this work by creating the summary box as a Classic Report that counts distinct values for each column in the same table that my Interactive Report pulls from. The problem arises when I try to filter my interactive report. Obviously, the classic report doesn't refresh based on the interactive report filters, but I don't know how I could link the two so that the classic report responds to the filters from the interactive report. Based on my research, there are ways to reference the value in the Interactive Report's search box using javascript/jquery. If possible, I'd like to reference the value from the interactive table's filter with javascript or jquery in order to refresh the summary box each time a new filter is applied. Does anyone know how to do this?
Don't do javascript parsing on the filters. It's a bad idea - just think about how you would implement it: there's a massive amount of coding to be done, and plenty of ajax. And with APEX 5 literally around the corner, where does it leave you when the APIs and markup are about to change drastically?
Don't just give in to a requirement either. First determine how technically feasible it is. And if it's not, make it abundantly clear what the implications are with regard to time consumption. What is the real value to be had from these distinct value counts? Maybe there is another way to achieve what they want? Maybe this is nothing more than an attempted solution, and not the core of the real problem. Stuff to think about...
Having said that, here are 2 options:
First method: Count Distinct Aggregates on Interactive reports
You can add these to the IR through the Actions button.
Note though, that this aggregate will be THE LAST ROW! In the example I've posted here, reducing the rows per page to 5 would push the aggregate row to the pagination set 3!
Second Method: APEX_IR and DBMS_SQL
You could use the apex_ir API to retrieve the IR's query and then use that to do a count.
(Apex 4.2) APEX_IR.GET_REPORT
(Apex 5.0) APEX_IR.GET_REPORT
Some pointers:
Retrieve the region ID by querying apex_application_page_regions
Make sure your source query DOES NOT contain #...# substitution strings. (such as #OWNER#.)
Then get the report SQL, rewrite it, and execute it. Eg:
DECLARE
    l_report     apex_ir.t_report;
    l_query      varchar2(32767);
    l_statement  varchar2(32000);
    l_cursor     integer;
    l_rows       number;
    l_deptno     number;
    l_mgr        number;
BEGIN
    l_report := APEX_IR.GET_REPORT (
                    p_page_id   => 30,
                    p_region_id => 63612660707108658284,
                    p_report_id => null);
    l_query := l_report.sql_query;

    sys.htp.prn('Statement = '||l_report.sql_query);
    for i in 1..l_report.binds.count
    loop
        sys.htp.prn(i||'. '||l_report.binds(i).name||' = '||l_report.binds(i).value);
    end loop;

    l_statement := 'select count(distinct deptno), count(distinct mgr) from ('||l_report.sql_query||')';
    sys.htp.prn('statement rewrite: '||l_statement);

    l_cursor := dbms_sql.open_cursor;
    dbms_sql.parse(l_cursor, l_statement, dbms_sql.native);
    for i in 1..l_report.binds.count
    loop
        dbms_sql.bind_variable(l_cursor, l_report.binds(i).name, l_report.binds(i).value);
    end loop;
    dbms_sql.define_column(l_cursor, 1, l_deptno);
    dbms_sql.define_column(l_cursor, 2, l_mgr);

    l_rows := dbms_sql.execute_and_fetch(l_cursor);

    dbms_sql.column_value(l_cursor, 1, l_deptno);
    dbms_sql.column_value(l_cursor, 2, l_mgr);
    dbms_sql.close_cursor(l_cursor);

    sys.htp.prn('Distinct deptno: '||l_deptno);
    sys.htp.prn('Distinct mgr: '||l_mgr);
EXCEPTION WHEN OTHERS THEN
    IF DBMS_SQL.IS_OPEN(l_cursor) THEN
        DBMS_SQL.CLOSE_CURSOR(l_cursor);
    END IF;
    RAISE;
END;
I threw together the sample code from apex_ir.get_report and dbms_sql.
Oracle 11gR2 DBMS_SQL reference
Some serious caveats though: the column list is tricky. If a user has control of all columns and can remove some, those columns will disappear from the select list. E.g. in my sample, letting the user hide the DEPTNO column would crash the entire code, because I'd still be doing a count on that column even though it would be gone from the inner query. You could prevent this by not letting the user control that column, or by first parsing the statement, etc...
Good luck.