I have the following Algolia request:
index.setSettings({
  getRankingInfo: 1,
  attributesToIndex: "name,colour,style,material,category",
  hitsPerPage: 50,
  ignorePlurals: false,
  attributesToRetrieve: "objectID",
  restrictSearchableAttributes: "name,colour,style,material,category",
  typoTolerance: "strict",
  queryType: "prefixNone",
  page: skipParameter
});
index.search(query, function(error, content) {
....
})
However, some of the settings don't seem to be applied to the search. For instance, it retrieves all attributes, and I'm pretty sure the searchable attributes aren't restricted. Furthermore, the ranking info isn't returned, as can be seen in the returned JSON below (hits emptied before posting), which means at least that setting is definitely not being accepted.
{"hits":[],"nbHits":173,"page":0,"nbPages":4,"hitsPerPage":50,"processingTimeMS":3,
"query":"Red sofa","params":"query=Red%20sofa"}
I'm running this code in a Parse.com Cloud Code search method, in case that has an effect on the outcome.
There are some syntax errors.
First attributesToIndex should be an array:
'attributesToIndex': ["name", "colour", "style", "material", "category"]
The same goes for restrictSearchableAttributes.
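A sketch of the corrected call, with the values taken from your question (attributesToRetrieve is shown as an array too, which the API likewise expects):

index.setSettings({
  getRankingInfo: 1,
  attributesToIndex: ["name", "colour", "style", "material", "category"],
  hitsPerPage: 50,
  ignorePlurals: false,
  attributesToRetrieve: ["objectID"],
  restrictSearchableAttributes: ["name", "colour", "style", "material", "category"],
  typoTolerance: "strict",
  queryType: "prefixNone",
  page: skipParameter // from your original code
});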
Also, setSettings accepts a callback, so you can get a response from Algolia when you set settings and see any errors with the config. For example:
index.setSettings({
  'customRanking': ['desc(followers)']
}, function(err, content) {
  console.log(content);
});
Some helpful resources:
https://github.com/algolia/algoliasearch-client-js
https://www.algolia.com/doc/rest_api
https://www.algolia.com/doc/tutorials/parse-algolia
And be sure to use the latest version of the Algolia JS client:
https://github.com/algolia/algoliasearch-client-js/wiki/Migration-guide-from-2.x.x-to-3.x.x
Happy Sunday coding! :)
I am looking for a generic way to pass any query string (from any Oracle table, not hardcoded) from a webpage form/field to the database and have the webpage display a table/grid of the results. All the examples I have seen so far require hardcoding the columns/table name upfront in CRUD apps on GitHub. I would like to be able to get results from various tables, each with different columns and data types, without the tables/columns being hardcoded in the app. I have been using Spring Boot so far to accept an arbitrary query string in a POST request and return the results as a list of JSON records, but I want to make it more interactive and easy to use for casual users, so I'm seeking examples of a simple text-field input with a dynamic results grid.
Have a look at Knex.js: https://knexjs.org/
It's a query builder that works with different databases. Here's a little sample from their doc:
var knex = require('knex')({
  client: 'oracle'
});

function handleRequest(req, res, next) {
  var query = knex.select(req.body.columns).from(req.body.table);
  console.log(query.toString()); // select "c1", "c2", "c3" from "some_table"
}
// Imagine this was invoked from Express and the body was already parsed.
handleRequest({
  body: {
    table: 'some_table',
    columns: ['c1', 'c2', 'c3']
  }
});
As you can see, the inputs are just strings which can come from anywhere, including clients/end-users. Just be careful that the user that's connecting to the database has the appropriate grants (least privilege applies here).
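To actually return the rows to the browser, you could wire the builder into an Express route along these lines (a sketch; the route name and the Oracle connection settings are placeholders, not from the original post):

var express = require('express');
var knex = require('knex')({
  client: 'oracle',
  connection: { /* your Oracle connection settings */ }
});

var app = express();
app.use(express.json()); // parse JSON bodies

// The client posts { table: "...", columns: [...] } and gets JSON rows back,
// which a grid component can render dynamically.
app.post('/query', function(req, res) {
  knex.select(req.body.columns)
    .from(req.body.table)
    .then(function(rows) {
      res.json(rows);
    })
    .catch(function(err) {
      res.status(400).json({ error: err.message });
    });
});

app.listen(3000);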
I have a nested firebase database with a structure like this:
posts: {
  0: {
    title: "...",
    content: "...",
    comments: [{
      by: "someone"
    }, {
      by: "anotherone"
    }]
  },
  1: {
    ...
  }
}
Now I want to get the first comment on each post, so I tried:
firebase.database().ref('/posts/{postId}/comments/0').once('value', function(snapshot) {
  snapshot.forEach(function(child) { console.log(child.val()); });
})
But I don't know why the only thing I get in the console is false. Does anyone know what's wrong, or is it impossible to query like that?
The Firebase Database SDKs always read entire nodes. It is not possible to retrieve a subset of a node's data or to get just the keys.
To get the first comment of each post, you must already know the key of each post. Since you can't read just the keys of the posts, this means you must read all the data to get just the first comment of each post:
firebase.database().ref('/posts').once('value', function(snapshot) {
  snapshot.forEach(function(child) {
    console.log(child.val().comments[0]);
  });
})
While this gives the result you need, it is quite wasteful of bandwidth: the client is reading way more data than it needs. As usual in NoSQL databases, a better solution may require you to change your data model to fit your use case. For example, consider storing the latest comment for each post in a separate top-level list:
latest-comments: {
  0: {
    by: "someone"
  },
  1: {
    ...
  }
}
You will need to update this list (in addition to your original comments list) whenever a new comment is posted. But in return, reading the latest comment for each post is now very cheap.
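For example, when a new comment is posted you could write to both locations with a single multi-path update (a sketch; it assumes comments are keyed by push IDs rather than array indices, which is generally recommended in Firebase):

function addComment(postId, comment) {
  // Generate a push key for the new comment.
  var commentKey = firebase.database().ref('/posts/' + postId + '/comments').push().key;
  // Fan-out write: store the comment under the post and mirror it
  // as the post's latest comment, atomically.
  var updates = {};
  updates['/posts/' + postId + '/comments/' + commentKey] = comment;
  updates['/latest-comments/' + postId] = comment;
  return firebase.database().ref().update(updates);
}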
I've recently been working with Firebase Cloud Functions to delegate a lot of work from my client side to the server, reducing the data cost for the user. But recently I've wondered whether it's worth it, or whether a better database structure could fix it.
I have a social app where users can work out and post their results; you can follow users and do all kinds of "typical" social media stuff. Well, my problem appears when I want to implement pagination, retrieving the last X workouts that I should show to each user on their feed.
My question is: how expensive could it be to update 1-1000 (worst case) fields in the database on a common event trigger in Firebase Cloud Functions? Is it expensive enough that I should avoid it and look for better-performing alternatives, even if they cost more on the client side?
I will explain it using my example:
Database Structure
"privateUserData" : {
"user1" : {
"messagingTokens": {
"someToken": true,
"someToken2": true,
},
"accountCreationDate" : 1495819217216,
"email" : "abcd#gmail.com",
"followedBy" : {
"user2": true,
"user3": true,
},
"following" : {
"user2": true,
"user3": true,
},
"lastLogin" : 1498654134543,
"photoUrl" : "photo.png",
"username" : "Francisco Durdin Garcia"
},
},
"publicUserData": {
"user1": {
"username": "someUserName",
"followersCount": 5,
"followingCount": 1,
"photoUrl" : "someUrl"
}
...
},
"workouts" : {
"workout1" : {
"likes": {
"user1": true,
"user2": true,
...
},
"followers": {
"user1": true,
"user2": true,
...
},
"comments": {
"comment1": {
"owner": "user1",
"content": "somecomment",
"time": 1493153530311,
"replies": {
"reply1": {
"owner": "user1",
"content": "somecomment",
"time": 1493153530311,
}
}
}
}
"authorUid" : "user1",
"description" : "desc",
"points" : 63,
"time" : "00:03",
"createdAt" : 1493153530311,
"title" : "someTitle",
"workoutJson" : "workoutJsonDataHere"
}
}
To be able to build that feed, I would have to run an individual query for each user I follow.
The problem is that I can't do a "global" query and limit it to just X dataSnapshots; I can only filter a few workouts per individual query:
mDatabase.child("workouts").orderByChild("authorId").equalTo("userIFollow").limitToLast(10)
This query returns results filtered for just one userIFollow; it's not possible to run it over all of them at once, so I have three options:
1. Create a table which stores the relation between user IDs and the workout IDs visible to them, with a timestamp value. But I would have to keep track of these values through a Firebase Cloud Function, and if I follow a user with thousands of workouts, my Cloud Function would need to copy ALL OF THEM to the right reference. This was the way I wanted to go, but I don't know if it's the proper approach in terms of client-side cost.
2. I can add a lastActivityTimeStamp on publicUserData and, filtering by that, retrieve just a few workouts from the users with the most recent activity, growing this query with pagination too.
3. Finally, I can always retrieve all the workouts from the users I follow and filter on the client side; this will be expensive just once, because afterwards the cache will make everything easier.
These are the ways I found to solve my problem, and my question is still: how expensive and useful are Firebase Cloud Functions for copying large amounts of data on common triggers?
From the way you worded your question, you seem familiar with Database Cloud Functions for Firebase, and it also seems that 'workouts' is your payload (the biggest chunk of data, which you don't want to download repeatedly).
I would recommend the following approach, based roughly on how GitHub's API works.
Prerequisites
In your /privateUserData/{user} data, you seem to have the list of followed user IDs (at /privateUserData/{user}/following). To make your queries simpler, I'd recommend also maintaining a list of the workout IDs authored by each user (under something like /publicUserData/{user}/authorOf).
Implementation
I'd recommend building an HTTP Cloud Function, at say https://FUNCTION_URL/followedWorkouts. When called, it would generate a list of workout IDs for a given user by checking who they follow, fetching the list of workouts authored by each followed user, and returning them all as one array. To identify the user, you could pass in their ID using a GET parameter such as ?user=<someUserId>, or through some form of authentication. How you go about it is up to you.
The function should return data in the following (or similar) format (in this case I'm using JSON):
[{"id": "workoutId1", "lastMod": "1493153530311"}, {"id": "workoutId2", "lastMod": "1493153530521"}, ...]
id is the workout ID.
lastMod (short for "last modified") is the last time that workout's data was updated (from {workoutId}/lastModificationDate). See the 'Caching' section below.
Filtering
I'd also implement the following 'filters' on the Cloud Function:
Since (?since=<someTimestamp>): returns only workout IDs that have been modified since that timestamp. (Say you downloaded some information at time T; you would then set since=T to receive only workouts changed after that time.)
Max (?max=X): returns the X most recent entries.
Start At (?startAt=X): returns the most recent entries starting at index X (I'd make it a 1-based index).
So if you wanted to grab the 10 most recent entries, you could call https://FUNCTION_URL/followedWorkouts?max=10, which would give you the IDs for the 1st-10th most recently updated workouts. For the next 'page' of entries, you would call https://FUNCTION_URL/followedWorkouts?startAt=11&max=10, which would give you the 11th-20th most recently updated workout IDs.
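Here is a minimal sketch of what that function could look like (not a drop-in implementation: it assumes the authorOf list maps each workout ID to its lastModificationDate so we avoid one extra read per workout, and it omits error handling and authentication):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.followedWorkouts = functions.https.onRequest(async (req, res) => {
  const userId = req.query.user;
  const since = Number(req.query.since) || 0;
  const max = Number(req.query.max) || 10;
  const startAt = Number(req.query.startAt) || 1; // 1-based index

  // Who does this user follow?
  const followingSnap = await admin.database()
    .ref('/privateUserData/' + userId + '/following').once('value');
  const followedIds = Object.keys(followingSnap.val() || {});

  // Collect the workout IDs authored by each followed user.
  let workouts = [];
  for (const followedId of followedIds) {
    const authorOfSnap = await admin.database()
      .ref('/publicUserData/' + followedId + '/authorOf').once('value');
    authorOfSnap.forEach((child) => {
      // Assumption: authorOf maps workoutId -> lastModificationDate.
      workouts.push({ id: child.key, lastMod: child.val() });
    });
  }

  // Most recently modified first, then apply the filters.
  workouts.sort((a, b) => b.lastMod - a.lastMod);
  workouts = workouts
    .filter((w) => w.lastMod > since)
    .slice(startAt - 1, startAt - 1 + max);

  res.json(workouts);
});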
Caching
As each workout is a payload, it doesn't make sense to download it multiple times. I would recommend caching this data to prevent that. In the response suggested above, the lastMod (last modified) field allows you to check whether a locally cached version needs updating. How you go about this is, yet again, up to you.
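For instance, on a web client you could keep each workout in localStorage keyed by its ID and re-fetch only when the feed reports a newer lastMod (a sketch under those assumptions):

// entry is one item from the feed function: { id: "workoutId1", lastMod: 1493153530311 }
function getWorkout(entry) {
  var cached = JSON.parse(localStorage.getItem('workout:' + entry.id) || 'null');
  if (cached && cached.lastMod >= entry.lastMod) {
    return Promise.resolve(cached.data); // cache is still fresh
  }
  // Cache is missing or stale: download the payload and store it.
  return firebase.database().ref('/workouts/' + entry.id).once('value')
    .then(function(snap) {
      localStorage.setItem('workout:' + entry.id,
        JSON.stringify({ lastMod: entry.lastMod, data: snap.val() }));
      return snap.val();
    });
}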
Extending
If you need more of these paginated feeds, you could name the function more generally such as https://FUNCTION_URL/feeds and pass in the feed type as a parameter https://FUNCTION_URL/feeds?type=workouts. You could use this for things like followers, following, comments, etc.
Feel free to reach out if you need some more information.
In my user collection, I have an object that contains an array of contacts.
The object definition is below.
How can this entire object, with the full array of contacts, be written to the user database in Meteor from the server, ideally in a single command?
I have spent considerable time reading the mongo docs and meteor docs, but can't get this to work.
I have also tried a large number of different commands and approaches using both the whole object and iterating through the component parts to try to achieve this, unsuccessfully. Here is an (unsuccessful) example that attempts to write the entire contacts object using $set:
Meteor.users.update({ _id: this.userId }, {$set: { 'Contacts': contacts}});
Thank you.
Object definition (this is a field within the user collection):
"Contacts" : {
"contactInfo" : [
{
"phoneMobile" : "1234567890",
"lastName" : "Johnny"
"firstName" : "Appleseed"
}
]
}
This update should absolutely work. What I suspect is happening is that you're not publishing the Contacts data back to the client because Meteor doesn't publish every key in the current user document automatically. So your update is working and saving data to mongo but you're not seeing it back on the client. You can check this by doing meteor mongo on the command line then inspecting the user document in question.
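For example, from your project directory (the user ID is a placeholder):

// In the shell opened by `meteor mongo`:
db.users.findOne({ _id: "someUserId" }, { Contacts: 1 })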
Try:
server:
Meteor.publish('me', function() {
  if (this.userId) return Meteor.users.find(this.userId, { fields: { profile: 1, Contacts: 1 }});
  this.ready();
});
client:
Meteor.subscribe('me');
The command above is correct. The issue was schema verification: SimpleSchema was silently blocking the write to the database while running 'in the background'. It doesn't produce an error; it just fails to produce the expected outcome.
I am trying to integrate webix tables with a Backbone collection as shown in the webix docs (http://docs.webix.com/desktop__backbone_collections.html), however it does not seem to work. The object sync call happens, but no data is loaded.
budgets = new Backbone.Budget.Collection(window.budget)
list =
  width: 320
  view: "datatable"
  id: "budget_list"
  backbone_collection: budgets
  select: true
  scroll: false
  columns: [
    {header: "Month", id: "budget_month"}
    {header: "Year", id: "budget_year"}
    {header: "Currency", id: "base_currency"}
  ]
  on: {
    onAfterRender: () ->
      console.log("Sync ", @_settings)
      @sync(@_settings.backbone_collection)
  }
Calling .sync from onAfterRender causes the problem: sync triggers a re-rendering of the datatable, which triggers a new sync, which causes another re-rendering, and so on.
To break this loop you can use webix.once, which guarantees that the handler will be executed only once.
Check the updated snippet http://webix.com/snippet/5dd61a47
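In plain JavaScript, the once-wrapped handler would look roughly like this (a sketch based on the suggestion above; the rest of the config mirrors the question, and this.config is assumed to be the public alias of the component's settings):

webix.ui({
  width: 320,
  view: "datatable",
  id: "budget_list",
  backbone_collection: budgets,
  select: true,
  scroll: false,
  columns: [
    { header: "Month", id: "budget_month" },
    { header: "Year", id: "budget_year" },
    { header: "Currency", id: "base_currency" }
  ],
  on: {
    // webix.once ensures the sync runs only on the first render,
    // breaking the sync -> render -> sync loop.
    onAfterRender: webix.once(function() {
      this.sync(this.config.backbone_collection);
    })
  }
});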
It's very possible that the server you are hitting 1) is not specifying 'Content-Type: application/json', so the response is rejected by the client; and/or 2) doesn't respond to the OPTIONS pre-flight request and thus throws up a CORS security block. Both are difficult to solve without access to the server.
curl will not be subject to the CORS issue, but a browser-based REST client will, and will thus best reproduce your issue.
Try using the Chrome Advanced REST Client with the URL and headers given in the UI.
And if you don't know the URL and headers, then sniff your requests when you run that UI.
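If you want to see which of the two failures you're hitting from the browser side, a quick check from the devtools console might look like this (a sketch; the URL is a placeholder):

fetch('https://your-api.example.com/endpoint', { headers: { 'Accept': 'application/json' } })
  .then(function(res) {
    // If the server omits CORS headers, this line is never reached:
    // the browser blocks the response and logs a CORS error instead.
    console.log(res.status, res.headers.get('Content-Type'));
    return res.json();
  })
  .then(console.log)
  .catch(console.error); // CORS failures surface here as a TypeError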