I am creating an enterprise search engine. I am using Solr as the search backend and SolrJ with JSP for the front end. I want to apply pagination to my search results. My code for getting the results from the Solr core is as follows:
while (iter.hasNext())
{
    SolrDocument doc1 = iter.next();
    String dc = iter.next().toString();
    out.println(doc1.getFieldValue("id"));
    out.println(doc1.getFieldValue("title"));
    out.println("<BR>");
    out.println("content_type :");
    out.println(doc1.getFieldValue("content_type"));
    out.println("<BR>");
    out.println("doc_type :");
    out.println(doc1.getFieldValue("doc_type"));
} %>
There are 600 records in my search engine. If I search for a specific keyword, all the related records come back on a single page. Can anybody suggest logic for pagination using JavaScript? I want to use client-side pagination. Please help.
While creating the Solr query you can set start and rows, e.g.:
SolrQuery query = new SolrQuery(searchTerm);
query.setStart((pageNum - 1) * numItemsPerPage);
query.setRows(numItemsPerPage);
// execute the query on the server and get results
QueryResponse res = solrServer.query(query);
Also, each time you call iter.next() you read the next element from the Iterator. Hence, by doing
SolrDocument doc1 = iter.next();
String dc = iter.next().toString();
you are skipping an element on every iteration.
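If you still want purely client-side pagination on top of that, one option is to fetch everything once and slice it in the browser. A minimal, untested sketch (the #results container, result-row class, and prev/next button IDs are assumptions about your JSP markup):
var pageSize = 10;
var currentPage = 1;
function showPage(page) {
  var rows = document.querySelectorAll('#results .result-row');
  var totalPages = Math.ceil(rows.length / pageSize);
  page = Math.max(1, Math.min(page, totalPages)); // clamp to the valid range
  var start = (page - 1) * pageSize;
  for (var i = 0; i < rows.length; i++) {
    // hide every row that falls outside the requested page
    rows[i].style.display = (i >= start && i < start + pageSize) ? '' : 'none';
  }
  currentPage = page;
}
document.getElementById('prev').onclick = function () { showPage(currentPage - 1); };
document.getElementById('next').onclick = function () { showPage(currentPage + 1); };
showPage(1); // start on the first page
Keep in mind this still transfers all 600 documents to the browser; the start/rows approach above avoids that, which is why it is usually preferred.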
This question might be a basic one but I am very new to SSJS so thank you for your understanding.
The data extensions JourneyA, JourneyB, JourneyC, and so on are created by Journey Builder. From the _Journey data view I then built a data extension named AllJourneys:
SELECT j.JourneyName AS "JourneyName"
FROM _Journey j
INNER JOIN (SELECT JourneyID, MAX(CreatedDate) AS MaxDate
            FROM _Journey
            GROUP BY JourneyID) sort
        ON sort.JourneyID = j.JourneyID AND j.CreatedDate = sort.MaxDate
After that, I would like to count the number of audience members in each journey and upsert the results into a data extension named Summary. Summary is a non-sendable data extension.
As I understand it, this can be done with SSJS (a Script Activity in Automation Studio), right? Or please suggest another way if there is a better one.
I have done some SSJS but cannot figure out what I should do next.
<script runat="server">
Platform.Load("core","1");
var AllJourneys = DataExtension.Init('AllJourneys');
var AllJourneysData = AllJourneys.Rows.Retrieve();
var Summary = DataExtension.Init('Summary');
for (var i = 0; i < AllJourneysData.length; i++) {
    var JourneyName = AllJourneysData[i].JourneyName;
    var count = xxx; // this is the part I cannot figure out
    var result = Summary.Rows.Upsert({"JourneyName": JourneyName}, ['Count(CusID)'], [count]);
}
</script>
If you know the number of journey DEs, you could also use SQL Query Activities in Automation Studio to do this. Create your data extension for the Summary table, then run a SQL query or series of queries to look up each of the journey DEs and update the table with the refreshed value.
I would recommend adding one additional field to your summary table for a REFRESHDATE, so that you can see in the data the last time it was updated.
I suggest you use WSProxy to manage data extension data.
The main problem you have to work around is that SSJS cannot retrieve more than 2,500 rows per call.
This tutorial explains how to retrieve more than 2,500 rows.
https://ampscript.xyz/how-tos/how-to-retrieve-more-than-2500-records-from-a-data-extension-with-server-side-javascript/
Then use plain JavaScript to walk the returned rows and compute the desired count per journey.
You can finally upsert that data with the WSProxy as well.
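For illustration, a rough, untested SSJS sketch of that whole flow; the external key AllJourneys_Key and the column names (JourneyName, CusID, Count) are assumptions you would replace with your own:
<script runat="server">
Platform.Load("core", "1");
var prox = new Script.Util.WSProxy();

// Retrieve all rows from the source DE in batches of up to 2,500,
// using the request ID to page through the result set.
var deKey = "AllJourneys_Key"; // assumed external key of the source DE
var cols = ["JourneyName", "CusID"];
var rows = [], moreData = true, reqId = null;
while (moreData) {
    var data = (reqId == null)
        ? prox.retrieve("DataExtensionObject[" + deKey + "]", cols)
        : prox.getNextBatch("DataExtensionObject[" + deKey + "]", reqId);
    moreData = data.HasMoreRows;
    reqId = data.RequestID;
    for (var i = 0; i < data.Results.length; i++) rows.push(data.Results[i]);
}

// Count audience members per journey in plain JavaScript. Each result
// exposes its columns as a Properties array of Name/Value pairs.
var counts = {};
for (var j = 0; j < rows.length; j++) {
    var props = rows[j].Properties;
    for (var k = 0; k < props.length; k++) {
        if (props[k].Name == "JourneyName") {
            counts[props[k].Value] = (counts[props[k].Value] || 0) + 1;
        }
    }
}

// Upsert one summary row per journey; "Count" is an assumed column name.
var summary = DataExtension.Init("Summary");
for (var journey in counts) {
    summary.Rows.Upsert({ JourneyName: journey }, ["Count"], [counts[journey]]);
}
</script>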
I have a set of tables on BigQuery with some kind of data, and I want to process that data via a JavaScript function I defined. The JS function maps the old data to a new schema, that has to be the one implemented by the new tables.
My set of tables has a common prefix and I want to migrate all of them together to the new schema, by creating tables with a different prefix but keeping the same suffix for all of them.
Example: I have 100 tables called raw_data_SUFFIX and I want to migrate them to 100 tables with a new schema called parsed_data_SUFFIX, keeping each suffix.
This is the simple query for migrating the data
SELECT some_attribute, parse(another_attribute) as parsed
FROM `<my-project>.<dataset>.data_*`
Is there a way to do it via the BigQuery UI?
In order to achieve what you aim for, you would have to use a DDL CREATE TABLE statement, as follows:
CREATE TABLE `project_id.dataset.table` AS SELECT * FROM `project_id.dataset.table_source`
However, it is not possible to reference multiple destinations with wildcards. As stated in the documentation, there are some limitations when using wildcards, among them:
Queries that contain DML statements cannot use a wildcard table as the target of the query. For example, a wildcard table can be used in the FROM clause of an UPDATE query, but a wildcard table cannot be used as the target of the UPDATE operation.
Nonetheless, you can use the Python client library to issue the requests to BigQuery and save each query result to a new table, keeping each table's old suffix but using the new prefix. You can do it as below:
from google.cloud import bigquery

client = bigquery.Client()
dataset_id = 'your_dataset_id'

# list all the tables as objects; each object exposes table.project,
# table.dataset_id and table.table_id
tables = client.list_tables(dataset_id)

# loop through all the tables in your dataset
for table in tables:
    # only process tables whose names start with the prefix
    if table.table_id.startswith('your_table_prefix'):
        # extract the suffix of the table's name, to reuse in the new name
        suffix = table.table_id[len('your_table_prefix'):]
        # reference to the source table
        table_reference = '.'.join([table.project, table.dataset_id, table.table_id])
        # destination table with the new prefix and the old suffix
        job_config = bigquery.QueryJobConfig()
        table_ref = client.dataset(dataset_id).table('_'.join(['new_table_prefix', suffix]))
        job_config.destination = table_ref
        sql = '''
        CREATE TEMP FUNCTION
        function_name ( <input> )
        RETURNS <type>
        LANGUAGE js AS """
        return <type>;
        """;
        SELECT function_name(<columns>) FROM `{0}`'''.format(table_reference)
        query_job = client.query(
            sql,
            # Location must match that of the dataset(s) referenced in the query
            # and of the destination table.
            location='US',
            job_config=job_config)
        query_job.result()  # waits for the query to finish
        print('Query results loaded to table {}'.format(table_ref.path))
Notice that in the sql variable, ''' and """ are used to delimit the Python string and the body of the JS temp function, respectively.
I would like to point out that you have to make sure your environment has the appropriate packages to use the Python API for BigQuery. You can install the BigQuery client library using: pip install --upgrade google-cloud-bigquery.
I have a situation where I need to build a SELECT query in a Modified Java Script Value step and pass that query to a database to run.
I can successfully form the query, but I have trouble running it and getting the result back.
I tried the Database join step, but since it is a complete query, that step cannot run it. Please guide.
Thanks in advance!
The 'Modified Java Script Value' step executes once for each row. I'm assuming it receives at least one row; if not, the 'Generate Rows' step can be used.
Make sure the generated query string reaches the next step (check it with a data preview).
Use the 'Dynamic SQL row' step to execute the SQL:
Specify the field which holds the generated query from the previous step, and the number of rows to retrieve ('0' retrieves all rows).
Specify the static template SQL used to retrieve the metadata.
Example:
Query : SELECT * FROM pg_catalog.pg_tables where schemaname = 'pg_catalog';
Template SQL : SELECT * FROM pg_catalog.pg_tables;
Using the template SQL, PDI determines the structure (metadata) of the SQL result set.
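For reference, the upstream 'Modified Java Script Value' step only needs to emit that query as a new output field. A minimal sketch (the incoming schema_name field and the sql_query output field name are assumptions):
// Build the SELECT statement from an incoming field and expose it as a
// new output field; add "sql_query" (String) in the step's Fields grid.
var sql_query = "SELECT * FROM pg_catalog.pg_tables WHERE schemaname = '" + schema_name + "';";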
Hope this Helps!!
I'm using AngularFire + Firebase and have data in the Firebase database.
I'm trying to paginate the data with Smart Table.
My problem is that I don't know how to range-query without specifying any child, i.e. fetch records #25 to #35.
The query below gives me the first 5 records:
var queryFIrst = visitRef.startAt().limitToFirst(5);
$scope.Visits = $firebaseArray(queryFIrst);
Now I'm trying to get the next 5 records, from 6 to 10, and I tried the below:
var queryFIrst = visitRef.startAt().limitToFirst(5).endAt().limitToFirst(5);
$scope.Visits = $firebaseArray(queryFIrst);
but it is giving an error that startAt and endAt can't be used like this with limit.
In general pagination is not a good fit for Firebase's realtime data model/API. You're trying to model a SQL SKIP operator, which won't work with the Firebase Database.
But if you want to model pagination in Firebase, you should think of having an "anchor point".
When you've loaded the first page, the last item on that page becomes the anchor point. When you then want to load the next page, you create a query that starts at the anchor point and loads n+1 items.
In pseudo-code (it's real JavaScript, I just didn't run it):
var page1 = visitRef.orderByKey().limitToFirst(5);
var anchorKey;
page1.on('child_added', function(snapshot) {
anchorKey = snapshot.key; // this will always be the last child_added we received
});
Now when you want to load the next page of items, you create a new query that starts at the anchor key:
var page2 = visitRef.orderByKey().startAt(anchorKey).limitToFirst(6);
A few things to note here:
You seem to be using an approach from the Firebase 1.x SDK, such as an empty startAt(). While that code may still work, my snippets use the syntax/idioms of the 3.x SDK.
For the second page you'll need to load one extra item, since the anchor item is loaded for both pages.
If you want to be able to paginate back, you'll also need the anchor key at the start of the page.
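A minimal sketch of loading that second page and dropping the duplicated anchor item (untested, same 3.x API as above):
var page2 = visitRef.orderByKey().startAt(anchorKey).limitToFirst(6);
page2.once('value', function(snapshot) {
  var items = [];
  snapshot.forEach(function(child) {
    // skip the anchor item itself; it was already shown on page 1
    if (child.key !== anchorKey) items.push(child.val());
  });
  // items now holds the (up to) 5 entries of page 2
});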
Is this what you needed?
visitRef.orderByKey().startAt("25").endAt("35")
I asked a similar question Get specific range of Firebase Database children
I have a grid with data in a LightSwitch application. Every column of the grid can be filtered, thanks to lsEnhancedTable.
Right now I am sending an AJAX request to the Web API controller with the list of IDs of the customers that I want to export. It works, but with a lot of data it is very slow, because I have to turn off data paging to get the IDs of all visible customers so I can iterate over the VisualCollection.
To optimize this I would have to turn data paging (50 records) back on, so that the initial load is fast, and move the loading of the data to a save/export-to-Excel button.
Possible solutions:
Load all data on the save button click. To do this I have to somehow load all items before I can iterate over the collection.
The code below locks the UI thread, since loadMore is async. How can I load all data synchronously? Ideally I would like to show some kind of progress via msls.showProgress.
while (true)
{
    if (screen.tblCustomers.canLoadMore) {
        screen.tblCustomers.loadMore();
    }
    else
        break;
}
var visibleItemsIds = msls.iterate(screen.tblCustomers.data)
    .where(function (c) {
        return c;
    });
The second approach would be to turn paging back on and pass just the filters applied by the user to the Web API controller, so that I can query the database and return only the filtered records. But I don't know how to do that.
The third approach is the one I am using right now: turn off paging, iterate over the visual collection, get the customer IDs, pass them to the controller and return a filtered Excel file. This doesn't work well when there are a lot of records.
Could I iterate over the filtered collection on the server side instead? I don't know if there is a way to do this in LightSwitch.
Here's an option using client-side JavaScript.
// First build the OData filter string.
var filter = "(FieldName eq " + msls._toODataString("value", ":String") + ")";
// Then query the database.
myapp.activeDataWorkspace.ApplicationData.[TableName].filter(filter).execute().then(function (result) { ... });
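From there you can pull just the IDs out of the filtered result and post them to your Web API controller, without ever turning paging off. A rough sketch (the Customers table, Id property, and /api/export endpoint are assumptions):
myapp.activeDataWorkspace.ApplicationData.Customers
    .filter(filter)
    .execute()
    .then(function (result) {
        // collect the IDs of the filtered entities
        var ids = [];
        for (var i = 0; i < result.results.length; i++) {
            ids.push(result.results[i].Id);
        }
        // hand them to the export endpoint (hypothetical URL)
        $.ajax({
            url: "/api/export",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(ids)
        });
    });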