I would like to get my heart rate data stored on Google Fit. Through this page I can try the API: https://developers.google.com/fit/rest/v1/reference/users/dataSources/datasets/get?apix=true
and it works, because the JSON result is:
{
"minStartTimeNs": "1607036400000000000",
"maxEndTimeNs": "1607122800000000000",
"dataSourceId": "raw:com.google.heart_rate.bpm:AA:62:2a5297f4:Notify for Amazfit - heart rate",
"point": [
{
"startTimeNanos": "1607036509703000000",
"endTimeNanos": "1607036509703000000",
"dataTypeName": "com.google.heart_rate.bpm",
"value": [
{
"fpVal": 46,
"mapVal": []
}
],
"modifiedTimeMillis": "1607076710847"
},
...
...
...
If I click on the JavaScript tab, it generates code that only needs the API_KEY and CLIENT_ID filled in, but if I run it the result is:
minStartTimeNs "1607036400000000000"
maxEndTimeNs "1607122800000000000"
dataSourceId "raw:com.google.heart_rate.bpm:com.mc.amazfit1:Amazfit:Amazfit Bip Watch:97f19a4a:Notify for Amazfit - heart rate"
point []
The "point" array is empty. Without any errors, it doesn't tell me, that I don't have access, or the scope is wrong, it's just empty. Even looking with the firefox debugger the ajax calls are identical, only the access token changes. how can I do? thanks.
I see you are using the device's data source directly (Amazfit Bip watch), not the derived Google Fit data source. When I tried accessing the device's data sources, I also saw no data returned. After much trial and error, I had better luck fetching from the derived sources; examples below.
DATA_SOURCE = {
"steps": "derived:com.google.step_count.delta:com.google.android.gms:merge_step_deltas",
"dist": "derived:com.google.distance.delta:com.google.android.gms:from_steps<-merge_step_deltas",
"bpm": "derived:com.google.heart_rate.bpm:com.google.android.gms:merge_heart_rate_bpm",
"rhr": "derived:com.google.heart_rate.bpm:com.google.android.gms:resting_heart_rate<-merge_heart_rate_bpm",
"sleep" : "derived:com.google.sleep.segment:com.google.android.gms:sleep_from_activity<-raw:com.google.activity.segment:com.heytap.wearable.health:stream_sleep",
"cal" : "derived:com.google.calories.expended:com.google.android.gms:from_activities",
"move": "derived:com.google.active_minutes:com.google.android.gms:from_steps<-estimated_steps",
"points" : "derived:com.google.heart_minutes:com.google.android.gms:merge_heart_minutes",
"weight" : "derived:com.google.weight:com.google.android.gms:merge_weight"
}
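For what it's worth, here is a minimal sketch of requesting the merged heart-rate source through the same REST endpoint. It assumes you already have a valid OAuth access token with the heart-rate read scope; the token and the time range below are placeholders, not values from your account.

// Sketch: fetch merged heart-rate points for one day from the derived source.
// ACCESS_TOKEN is a placeholder for a valid OAuth 2.0 token with the
// fitness.heart_rate.read scope.
const ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN";
const dataSourceId =
  "derived:com.google.heart_rate.bpm:com.google.android.gms:merge_heart_rate_bpm";
const datasetId = "1607036400000000000-1607122800000000000"; // startNs-endNs

fetch(`https://www.googleapis.com/fitness/v1/users/me/dataSources/${dataSourceId}/datasets/${datasetId}`, {
  headers: { Authorization: `Bearer ${ACCESS_TOKEN}` }
})
  .then(res => res.json())
  .then(data => {
    // Each point carries the BPM reading in value[0].fpVal.
    (data.point || []).forEach(p => console.log(p.startTimeNanos, p.value[0].fpVal));
  });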
The only other time I have seen this happen (point: []) is when there is no data for that timeframe, i.e. the datasetId (start-end) range. It is common with 'Heart Points' and 'Active Minutes', but I wouldn't expect it for heart rate data (unless you weren't wearing the watch).
I need to pull from the endpoint "fb_page_categories", which returns an array of all categories a page could fall under. The request looks like so:
GET graph.facebook.com
/fb_page_categories?
This returns something like so:
{
"data": [
{
"name": "Interest",
"fb_page_categories": [
{
"name": "Literary Arts",
"id": "856055631167537"
},
{
"name": "Performance Art",
"id": "756092301147942"
},
{
"name": "Performing Arts",
"id": "1758092431143387"
},
I then need to pull all of the results for certain specific "categories" that querying that endpoint returned, and finally filter those categories by proximity to the current location. I'm new to Java and have no idea where to even begin, or what that code would look like. Any advice, articles, Stack questions, Git projects, etc. that can point me in the correct direction would be greatly appreciated. Thanks!
Facebook’s own Graph search for places can do this, in a fashion. On Facebook, your current location is implicit in being logged in.
First, be aware that the page categories you referenced don’t all allow a location. Only categories within the business and organization trees do. (Facebook refers to this organization as a category taxonomy.)
For example, the ID for Restaurant is currently 273819889375819. So the search result for restaurants nearby would be found at this link.
Graph queries for places on Facebook have many other modifiers, and there has been continuing development of this feature. After a significant splash a few years ago, however, Facebook appears to have stepped back from fully documenting its use.
I've recently been working with Firebase Cloud Functions to delegate a lot of work from the client side to the server, reducing the data cost for the user. But recently I've wondered whether it's worth it, or whether a better database structure could fix the problem.
I have a social app where users can work out and post their results; you can follow users and do all the typical social media things. My problem appears when I want to implement pagination, retrieving the last X workouts that I should show to each user on their feed.
My question is: how expensive could it be to update 1-1000 (worst case) fields in the database from a common event trigger in Firebase Cloud Functions? Is it expensive enough that I should avoid it and look for better-performing alternatives, even if they are more expensive on the client side?
I will explain it looking at my example:
Database Structure
"privateUserData" : {
"user1" : {
"messagingTokens": {
"someToken": true,
"someToken2": true,
},
"accountCreationDate" : 1495819217216,
"email" : "abcd#gmail.com",
"followedBy" : {
"user2": true,
"user3": true,
},
"following" : {
"user2": true,
"user3": true,
},
"lastLogin" : 1498654134543,
"photoUrl" : "photo.png",
"username" : "Francisco Durdin Garcia"
},
},
"publicUserData": {
"user1": {
"username": "someUserName",
"followersCount": 5,
"followingCount": 1,
"photoUrl" : "someUrl"
}
...
},
"workouts" : {
"workout1" : {
"likes": {
"user1": true,
"user2": true,
...
},
"followers": {
"user1": true,
"user2": true,
...
},
"comments": {
"comment1": {
"owner": "user1",
"content": "somecomment",
"time": 1493153530311,
"replies": {
"reply1": {
"owner": "user1",
"content": "somecomment",
"time": 1493153530311,
}
}
}
}
"authorUid" : "user1",
"description" : "desc",
"points" : 63,
"time" : "00:03",
"createdAt" : 1493153530311,
"title" : "someTitle",
"workoutJson" : "workoutJsonDataHere"
}
}
To do that query I would have to run individual queries for each user I follow.
The problem is that I can't do one "global" query and limit it to just X DataSnapshots; each individual query only filters the workouts of a single user:
mDatabase.child("workouts").orderByChild("authorId").equalTo("userIFollow").limitToLast(10)
This query returns results for just one userIFollow; it's not possible to run it over all of them at once, so I have three options:
1. Create a table which stores the relation between user IDs and the workout IDs visible to them, with a timestamp value. But I would have to keep track of these values through a Firebase Cloud Function, and if I follow a user with thousands of workouts, my Cloud Function would need to copy ALL OF THEM to the right reference. This was the way I wanted to go, but I don't know if it's the proper way in terms of client-side cost.
2. I can add a lastActivityTimeStamp to publicUserData and, filtering by that, retrieve just a few workouts from the users with the most recent activity, growing this query with pagination too.
3. Finally, I can always retrieve all the workouts from these users and filter on the client side. This will be expensive just once, because afterwards the cache will make everything easier.
These are the ways I found to solve my problem, and my question remains: how expensive and how useful are Firebase Cloud Functions for copying large amounts of data on common triggers?
From the way you worded your question, you seem familiar with the Database Cloud Functions for Firebase and it also seems that 'workouts' is your payload (the biggest chunk of data that you don't want to download repeatedly).
I would recommend the following approach based roughly off how GitHub's API works.
Prerequisites
In your /privateUserData/{user} data, you seem to have the list of followed user IDs (at /privateUserData/{user}/following). To make your queries simpler, I'd recommend implementing a list of workout IDs authored by that user (under something like /publicUserData/{user}/authorOf).
Implementation
I'd recommend building an HTTP Cloud Function, at say https://FUNCTION_URL/followedWorkouts. When called, it would generate a list of workout IDs for a given user by checking who they follow, getting the list of workouts authored by each followed user, and returning them as one array. To identify the user, you could pass in their ID using a GET parameter such as ?user=<someUserId> or through some form of authentication. How you go about it is up to you.
The function should return data in the following (or similar) format (in this case I'm using JSON):
[{"id": "workoutId1", "lastMod": "1493153530311"}, {"id": "workoutId2", "lastMod": "1493153530521"}, ...]
id is the workout ID.
lastMod (short form of last modified) is the last time that workout's data was updated (from {workoutId}/lastModificationDate). See the 'caching' section below.
Filtering
I'd also implement the following 'filters' on the Cloud Function:
Since (?since=<someTimeStamp>): will return workout IDs that have been modified since that timestamp. (Say you downloaded some information at time T; you would then set since=T to receive only workouts changed after that time.)
Max (?max=X): will return the X most recent entries.
Start At (?startAt=X): will return the most recent entries starting at the index X (I'd make it a 1-based index).
So if you wanted to grab the 10 most recent entries, you could call https://FUNCTION_URL/followedWorkouts?max=10 which would give you the IDs for the 1st-10th most recently updated workouts. For the next 'page' of entries, you would call https://FUNCTION_URL/followedWorkouts?startAt=10&max=10 which would give you the 11th-20th most recently updated workout IDs.
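A rough sketch of what such a function could look like with the Node.js Admin SDK follows. The /publicUserData/{user}/authorOf index and the lastModificationDate field on each workout are the assumptions described above, and error handling is omitted.

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.followedWorkouts = functions.https.onRequest(async (req, res) => {
  const user = req.query.user;                    // ?user=<someUserId>
  const since = Number(req.query.since || 0);     // ?since=<someTimeStamp>
  const max = Number(req.query.max || 10);        // ?max=X
  const startAt = Number(req.query.startAt || 1); // ?startAt=X (1-based)

  // Who does this user follow?
  const followingSnap = await admin.database()
    .ref(`privateUserData/${user}/following`).once("value");
  const followedIds = Object.keys(followingSnap.val() || {});

  // Collect the workout IDs authored by each followed user.
  let ids = [];
  for (const followedId of followedIds) {
    const authorOfSnap = await admin.database()
      .ref(`publicUserData/${followedId}/authorOf`).once("value");
    ids = ids.concat(Object.keys(authorOfSnap.val() || {}));
  }

  // Look up each workout's last modification time (an assumed field).
  const entries = await Promise.all(ids.map(async id => {
    const modSnap = await admin.database()
      .ref(`workouts/${id}/lastModificationDate`).once("value");
    return { id, lastMod: modSnap.val() || 0 };
  }));

  // Apply the filters: "since", then sort by recency, then paginate.
  const page = entries
    .filter(e => e.lastMod > since)
    .sort((a, b) => b.lastMod - a.lastMod)
    .slice(startAt - 1, startAt - 1 + max);

  res.json(page);
});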
Caching
As each workout is a payload, it doesn't make sense to download them multiple times. I would recommend caching this data to prevent that. In the response I suggested above, the field lastMod (last modified) allows you to check whether a locally cached version needs updating. How you go about this is, yet again, up to you.
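As one possibility, here is a sketch of a client-side cache keyed by workout ID. It assumes a web client using the Firebase JS SDK, and localStorage is used purely as an example store; any cache would do.

// Sketch: only fetch the full workout payload when lastMod says it changed.
async function getWorkout(id, lastMod) {
  const cached = JSON.parse(localStorage.getItem(`workout:${id}`) || "null");
  if (cached && cached.lastMod >= lastMod) {
    return cached.data; // the cached copy is still fresh
  }
  // Otherwise fetch the full payload once and store it with its lastMod.
  const snap = await firebase.database().ref(`workouts/${id}`).once("value");
  const data = snap.val();
  localStorage.setItem(`workout:${id}`, JSON.stringify({ lastMod, data }));
  return data;
}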
Extending
If you need more of these paginated feeds, you could name the function more generally such as https://FUNCTION_URL/feeds and pass in the feed type as a parameter https://FUNCTION_URL/feeds?type=workouts. You could use this for things like followers, following, comments, etc.
Feel free to reach out if you need some more information.
I am attempting to use the Wikipedia API to retrieve article titles and snippets of the article's text. But when I try to access those properties, I am getting the error "Cannot read property of undefined."
Here is my JSON response:
{
"batchcomplete": "",
"continue": {
"gsroffset": 10,
"continue": "gsroffset||"
},
"query": {
"pages": {
"13834": {
"pageid": 13834,
"ns": 0,
"title": "\"Hello, World!\" program",
"index": 6,
"extract": "<p>A <b>\"Hello, World!\" program</b> is a computer program that outputs or displays \"Hello, World!\" to a user. Being a very simple program in most programming languages, it is often used to illustrate the</p>..."
},
"6710844": {
"pageid": 6710844,
"ns": 0,
"title": "Hello",
"index": 1,
"extract": "<p><i><b>Hello</b></i> is a salutation or greeting in the English language. It is first attested in writing from 1826.</p>..."
},
"1122016": {
"pageid": 1122016,
"ns": 0,
"title": "Hello! (magazine)",
"index": 7,
"extract": "<p><i><b>Hello</b></i> (stylised as <i><b>HELLO!</b></i>) is a weekly magazine specialising in celebrity news and human-interest stories, published in the United Kingdom since 1988. It is the United Kingdom</p>..."
}
}
}
}
I have tried a couple of different ways of writing the code. For example, this works (it logs the pages as an object in the console):
console.log(response.query.pages);
But this returns the error I wrote above ("Cannot read property of undefined"):
console.log(response.query.pages[0].title);
Any suggestions on how to access the attributes "title" and "extract" would be appreciated. Thanks.
That's because pages is not an array; it's an object where the keys are the ids. So you need to do:
console.log(response.query.pages[1122016].title);
This will work. If you want the "first" page, for instance, then
let pages = response.query.pages;
console.log(pages[Object.keys(pages)[0]].title);
Note that I'm not sure if the order of the keys in JS objects is guaranteed.
If you want to iterate over the pages, do
let pages = response.query.pages;
Object.keys(pages).forEach(id => {
let page = pages[id];
console.log(page.title, page.foo);
});
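If you want the order the API intended, note that each page in this response carries an index field, so you could sort by it instead of relying on key order:

let pages = response.query.pages;
Object.values(pages)
  .sort((a, b) => a.index - b.index) // use the API-provided ordering
  .forEach(page => console.log(page.title, page.extract));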
Special Case: Working with Asynchronous Calls
Howdy fellow devs,
If you're checking out this thread because you're working with a framework like React, or some other framework that has you using a development server (e.g. using npm start or something similar), your dev server may be crashing when you try to do something like console.log(response.foo.bar) before it refreshes on data reload.
Specifically, my dev server was crashing with a Cannot read property 'bar' of undefined message, and I was like, "what the heck is going on here!?". Solution: put that baby in a try/catch block:
try {
console.log(rate['bar'].rate)
} catch (error) {
console.log(error)
}
Why? If your app has a default state (even an empty array, for example) and it tries to console.log the response before the data has been received from the remote source, it will successfully log your empty state. But if you try to reference parts of the object you're expecting from the remote source, the dev server will be referencing something that isn't there on initial load, and it will crash before the real data ever arrives from the API.
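An alternative to the try/catch is to guard the access until the data has actually arrived (a sketch; the property names are just illustrative):

// Only touch nested properties once the response is populated.
if (response && response.foo) {
  console.log(response.foo.bar);
} else {
  console.log("data not loaded yet");
}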
Hope this helps someone!
I'm not sure which language you're using to parse the JSON (it looks like JavaScript from the console.log?), but the issue is that query.pages is a dictionary, not an array, so it can't be iterated by index, only by key.
So you want something like (pseudocode):
for (const key in response.query.pages)
{
    console.log(response.query.pages[key].title);
}
I need all Google reviews for a particular location, but I am unable to use the Google My Business API. Here is the URL for the GET request:
https://mybusiness.googleapis.com/v3/accounts/account_name/locations/location_name/reviews
Now my question is: what are the values for the parameters account_name and location_name?
How can I get them?
Please answer with a sample location example.
I think first of all you need to get your project whitelisted for the Google My Business API, as it's a private API. The Google My Business API works on the locations associated with your account, so make sure you have verified the LOCATIONS for an account you control. Then you can try out the API call you mentioned in the OAuth Playground.
Follow the steps in the documentation URL below to set it up:
https://developers.google.com/my-business/content/prereqs
After the setup, you will understand the account ID and location ID.
Here are a few more URLs you can visit to understand it better:
https://console.developers.google.com (here you will setup your project)
https://business.google.com/manage (here you will add/can see the locations - for which you need reviews)
https://developers.google.com/my-business/content/basic-setup (Steps after completing the prereq)
https://developers.google.com/oauthplayground (You will test the My Business API here after approval)
When you make a request to https://mybusiness.googleapis.com/v3/accounts, it gives you a list of accounts. Each account has a field called name; that field is accounts/account_name.
{
"state": {
"status": "UNVERIFIED"
},
"type": "PERSONAL",
"name": "accounts/1337",
"accountName": "example"
}
When you make a request to https://mybusiness.googleapis.com/v3/accounts/account_name/locations, it gives you a list of locations. Each location has a field called name; that field is accounts/account_name/locations/location_name.
{
"locations": [
{
"languageCode": "en",
"openInfo": {
"status": "OPEN",
"canReopen": true
},
"name": "accounts/1337/locations/13161337",
...
}
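Putting the two lookups together, a rough sketch in JavaScript (the token is a placeholder, only the first account and first location are used, and error handling is omitted):

// Sketch: discover account_name and location_name, then fetch reviews.
// ACCESS_TOKEN is a placeholder for an OAuth 2.0 token authorized for the
// My Business API (your project must be whitelisted, as noted above).
const ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN";
const headers = { Authorization: `Bearer ${ACCESS_TOKEN}` };
const base = "https://mybusiness.googleapis.com/v3";

async function fetchReviews() {
  // 1. List accounts; each "name" looks like "accounts/1337".
  const accountsRes = await fetch(`${base}/accounts`, { headers });
  const { accounts } = await accountsRes.json();
  const accountName = accounts[0].name;

  // 2. List locations; each "name" looks like "accounts/1337/locations/13161337".
  const locationsRes = await fetch(`${base}/${accountName}/locations`, { headers });
  const { locations } = await locationsRes.json();
  const locationName = locations[0].name;

  // 3. Fetch the reviews for that location.
  const reviewsRes = await fetch(`${base}/${locationName}/reviews`, { headers });
  console.log(await reviewsRes.json());
}

fetchReviews();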
I had been getting like and comment counts per post of a Facebook page/group feed call to the Graph API separately, using FQL, but since version 2 of the Graph API was released, FQL no longer works for this purpose.
So I have to find a new way to get comment and like counts per post of the page feed. I will make a separate call to get the comment and like counts per post of the FB page, as it may not be possible to get them in the same page feed call (or is it?).
Searching through Google, I found the following way using a Graph API call:
..page_id/feed?fields=likes.limit(1).summary(true){id},comments.limit(1).summary(true)&limit=10
Is this the best and error-free way? Also, besides the id and summary fields, the above call also returns created_time, paging, and likes data, which is unexpected and redundant. How do I exclude these additional fields?
So please, can any FB employee shed light on the best way to retrieve like and comment counts per post of a page/group feed using Graph API version 2?
If you want to retrieve the like and comment counts of a post on FB, you can do it using the ID of the post, like this:
..Your_Post_ID?fields=likes.limit(0).summary(true),comments.limit(0).summary(true)
The result will contain the total number of likes on the post, the total number of comments on the post, the post ID, and the post creation time.
It will look like this:
{
"likes": {
"data": [
],
"summary": {
"total_count": 550
}
},
"comments": {
"data": [
],
"summary": {
"order": "chronological",
"total_count": 858
}
},
"created_time": "2014-10-12T05:38:48+0000",
"id": "Your_Post_ID"
}
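For completeness, a minimal sketch of making that call from JavaScript and reading the two counts (POST_ID and ACCESS_TOKEN are placeholders you must supply):

const POST_ID = "Your_Post_ID";
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";
const fields = "likes.limit(0).summary(true),comments.limit(0).summary(true)";

fetch(`https://graph.facebook.com/${POST_ID}?fields=${fields}&access_token=${ACCESS_TOKEN}`)
  .then(res => res.json())
  .then(post => {
    console.log("likes:", post.likes.summary.total_count);
    console.log("comments:", post.comments.summary.total_count);
  });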