Problem
I'd like to be able to track a user's location even when the app is no longer in the foreground (e.g. the user has switched to another app, or has gone to the home screen and locked their phone).
The use case would be a user tracking a run. They could open the app and press 'start' at the beginning of their run, then switch apps or minimise the app (press the home button) and lock the screen. At the end of the run they could bring the app back into the foreground and press 'stop', and the app would tell them the distance travelled on the run.
Question
Is tracking background geolocation possible on both iOS and Android using pure react native?
The React Native docs on geolocation (https://facebook.github.io/react-native/docs/geolocation) are not very clear or detailed. The document linked above alludes to background geolocation on iOS (without being fully clear) but does not mention Android.
Would it be best to use Expo?
UPDATE 2019 EXPO 33.0.0:
Expo first deprecated background location in SDK 32.0.0 to meet App Store guidelines, but then reintroduced it in SDK 33.0.0.
Since then, they have made it super easy to implement background location. Here is the code snippet I used to make background geolocation work.
import React from 'react';
import { Text, TouchableOpacity } from 'react-native';
import * as TaskManager from 'expo-task-manager';
import * as Location from 'expo-location';

const LOCATION_TASK_NAME = 'background-location-task';

export default class Component extends React.Component {
  onPress = async () => {
    // Ask for location permission before starting updates
    // (requestPermissionsAsync was the expo-location API as of SDK 33)
    const { status } = await Location.requestPermissionsAsync();
    if (status === 'granted') {
      await Location.startLocationUpdatesAsync(LOCATION_TASK_NAME, {
        accuracy: Location.Accuracy.Balanced,
        timeInterval: 5000,
      });
    }
  };

  render() {
    return (
      <TouchableOpacity onPress={this.onPress} style={{ marginTop: 100 }}>
        <Text>Enable background location</Text>
      </TouchableOpacity>
    );
  }
}
TaskManager.defineTask(LOCATION_TASK_NAME, ({ data, error }) => {
  if (error) {
    // Error occurred - check `error.message` for more details.
    alert(error.message);
    return;
  }
  if (data) {
    const { locations } = data;
    alert(JSON.stringify(locations)); // will show you the location objects
    // lat is locations[0].coords.latitude & long is locations[0].coords.longitude
    // do something with the locations captured in the background,
    // e.g. post them to your server with axios or the fetch API
  }
});
The code works like a charm. One thing to note is that you cannot use background geolocation in the Expo client app; you can only use it in a standalone build. Consequently, if you want background geolocation you have to use this code, then run expo build:ios and upload the build to the App Store in order to get a user's background location.
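For the run-tracking use case in the question, you'd also want a matching 'stop' action. A minimal sketch (stopLocationUpdatesAsync and hasStartedLocationUpdatesAsync are part of the expo-location API; the handler name is mine):

onStopPress = async () => {
  // Only stop if the task is actually running
  const started = await Location.hasStartedLocationUpdatesAsync(LOCATION_TASK_NAME);
  if (started) {
    await Location.stopLocationUpdatesAsync(LOCATION_TASK_NAME);
  }
};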
Additionally, note that you must include

"UIBackgroundModes": [
  "location",
  "fetch"
]

in the infoPlist section of your app.json file.
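For context, that snippet nests under the ios key of app.json roughly like this (a minimal sketch; the other fields in your file stay as they are):

{
  "expo": {
    "ios": {
      "infoPlist": {
        "UIBackgroundModes": ["location", "fetch"]
      }
    }
  }
}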
The Expo team released a new feature in SDK 32 that allows you to track location in the background:
https://expo.canny.io/feature-requests/p/background-location-tracking
Yes, it is possible, but not using Expo; there are two modules that I've seen:
This is a commercial one, you have to buy a license: https://github.com/transistorsoft/react-native-background-geolocation
And this one: https://github.com/mauron85/react-native-background-geolocation
WebKit is currently evaluating a JavaScript-only solution. You can add your voice here
For a fully documented proof-of-concept example please see Brotkrumen.
The most popular RN geolocation library is https://github.com/react-native-geolocation/react-native-geolocation, and it supports this quite easily. I prefer this library over others because it automatically handles asking for permissions and such, and seems to have the simplest API.
Just do this:
Geolocation.watchPosition((position) => {
  const { latitude, longitude } = position.coords;
  // Do something.
});
This requires no additional setup other than including the background modes fetch and location, and also the appropriate usage descriptions.
I find this more usable than Expo's API because it doesn't require any weird top level code and also doesn't require me to do anything other than create a watch position handler, which is really nice.
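For completeness, a slightly fuller sketch of the same idea, with the import and a way to stop the watch at the end of the run (the options and error handler shown are illustrative, not required):

import Geolocation from '@react-native-community/geolocation';

// Start watching; keep the id so the watch can be cleared later.
const watchId = Geolocation.watchPosition(
  (position) => {
    const { latitude, longitude } = position.coords;
    // Accumulate distance, post to a server, etc.
  },
  (error) => console.warn(error.message),
  { enableHighAccuracy: true, distanceFilter: 10 }
);

// Later, e.g. when the user presses 'stop':
Geolocation.clearWatch(watchId);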
EDIT 2023:
These days I would highly recommend using Expo's library instead of any of the other community libraries (mainly because our app started crashing when Android got an OS update, because of the lib I was using).
In fact, if you have to choose between an Expo and a non-Expo library, always choose the Expo library, if only for the stability. Setting up Expo's background location watching isn't super well documented, but here's what I did to get it working in our app:
import { useEffect, useRef } from "react";
import * as Location from "expo-location";
import { LatLng } from "react-native-maps";
import * as TaskManager from "expo-task-manager";
import { LocationObject } from "expo-location";
import { v4 } from "uuid";

type Callback = (coords: LatLng) => void;

const BACKGROUND_TASK_NAME = "background";

// Task executor: fans each background location out to all registered callbacks.
const executor: (body: TaskManager.TaskManagerTaskBody<object>) => void = (
  body
) => {
  const data = body.data as unknown as { locations: LocationObject[] };
  const l = data?.locations[0];
  if (!l) return;
  for (const callback of Object.values(locationCallbacks)) {
    callback({
      latitude: l.coords.latitude,
      longitude: l.coords.longitude,
    });
  }
};

// Top-level registration, as expo-task-manager requires.
TaskManager.defineTask(BACKGROUND_TASK_NAME, executor);

const locationCallbacks: { [key: string]: Callback } = {};
const hasStartedBackgroundTaskRef = {
  hasStarted: false,
};

// Start background updates only once per app session.
function startBackgroundTaskIfNecessary() {
  if (hasStartedBackgroundTaskRef.hasStarted) return;
  Location.startLocationUpdatesAsync(BACKGROUND_TASK_NAME, {
    accuracy: Location.Accuracy.Balanced,
  }).catch((e) => {
    hasStartedBackgroundTaskRef.hasStarted = false;
  });
  hasStartedBackgroundTaskRef.hasStarted = true;
}

// Register a callback; returns a subscription-style remover.
function addLocationCallback(callback: Callback) {
  const id = v4() as string;
  locationCallbacks[id] = callback;
  return {
    remove: () => {
      delete locationCallbacks[id];
    },
  };
}

export default function useLocationChangeListener(
  callback: Callback | null,
  active: boolean = true
) {
  const callbackRef = useRef<null | Callback>(callback);
  callbackRef.current = callback;

  useEffect(() => {
    if (!active) return;
    if (!callback) return;
    // Push the last known position to the callback immediately.
    Location.getLastKnownPositionAsync().then((l) => {
      if (l)
        callback({
          latitude: l.coords.latitude,
          longitude: l.coords.longitude,
        });
    });
    startBackgroundTaskIfNecessary();
    // Foreground updates come from watchPositionAsync...
    const watch = Location.watchPositionAsync({}, (location) => {
      callback({
        latitude: location.coords.latitude,
        longitude: location.coords.longitude,
      });
    });
    // ...background updates arrive via the task and locationCallbacks.
    const subscription = addLocationCallback(callback);
    return () => {
      subscription.remove();
      watch.then((e) => {
        e.remove();
      });
    };
  }, [callback, active]);

  useEffect(() => {
    if (__DEV__) {
      addLocationCallback((coords) => {
        console.log("Location changed to ");
        console.log(coords);
      });
    }
  }, []);
}
You need to ask for background location permissions before this, BTW. Follow Expo's guide.
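For reference, a minimal permission sketch with expo-location (requestForegroundPermissionsAsync and requestBackgroundPermissionsAsync are the current expo-location API; where you call this in your app is up to you):

import * as Location from "expo-location";

// Background permission can only be requested after foreground permission is granted.
async function requestLocationPermissions() {
  const fg = await Location.requestForegroundPermissionsAsync();
  if (fg.status !== "granted") return false;
  const bg = await Location.requestBackgroundPermissionsAsync();
  return bg.status === "granted";
}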
It's pretty risky to trust community libraries for things like this, because breaking Android OS updates can happen at any moment, and open-source maintainers may or may not stay on top of them (you can more or less trust Expo here, though).
Related
In my React Native app, I'm using react-native-geolocation-service to track location changes. On iOS, background location tracking works perfectly just by following these instructions. The problem arises on Android, where the tracking stops or works randomly when the app goes into the background.
Let me emphasize that I don't want the location to be tracked when the app is fully closed. I ONLY want the tracking to work when the app is in the foreground (active) and background states.
I've followed the instructions given in the package's own example project to configure and start the tracking service and just like them I use react-native-foreground-service.
This is the function responsible for tracking the user location with the watchPosition method of Geolocation:
// Track location updates
export const getLocationUpdates = async (watchId, dispatch) => {
  // Check if the app has location permission
  const hasPermission = await hasLocationPermission();
  // Show the "no location" modal and return if it doesn't
  if (!hasPermission) {
    dispatch(setAvailability(false));
    return;
  }
  // Start the location foreground service if the platform is Android
  if (Platform.OS === 'android') {
    await startForegroundService();
  }
  // Track and update the location reference value without re-rendering
  watchId.current = Geolocation.watchPosition(
    position => {
      // Hide the "no location" modal
      dispatch(setAvailability(true));
      // Set coordinates
      dispatch(
        setCoordinates({
          latitude: position?.coords.latitude,
          longitude: position?.coords.longitude,
          heading: position?.coords?.heading,
        }),
      );
    },
    error => {
      // Show the "no location" modal
      dispatch(setAvailability(false));
    },
    {
      accuracy: {
        android: 'high',
        ios: 'best',
      },
      distanceFilter: 100,
      interval: 5000,
      fastestInterval: 2000,
      enableHighAccuracy: true,
      forceRequestLocation: true,
      showLocationDialog: true,
    },
  );
};
And this is how the foreground service of react-native-foreground-service is initialized:
// Start the foreground service and display a notification with the defined configuration
export const startForegroundService = async () => {
  // Create a notification channel for the foreground service
  // For Android 8+ the notification channel should be created before starting the foreground service
  if (Platform.Version >= 26) {
    await VIForegroundService.getInstance().createNotificationChannel({
      id: 'locationChannel',
      name: 'Location Tracking Channel',
      description: 'Tracks location of user',
      enableVibration: false,
    });
  }
  // Start the service
  return VIForegroundService.getInstance().startService({
    channelId: 'locationChannel',
    id: 420,
    title: 'Sample',
    text: 'Tracking location updates',
    icon: 'ic_launcher',
  });
};
And this is how it's supposed to stop:
// Stop the foreground service
export const stopLocationUpdates = watchId => {
  if (Platform.OS === 'android') {
    VIForegroundService.getInstance()
      .stopService()
      .catch(err => {
        Toast.show({
          type: 'error',
          text1: err,
        });
      });
  }
  // Stop watching for location updates
  if (watchId.current !== null) {
    Geolocation.clearWatch(watchId.current);
    watchId.current = null;
  }
};
The way I start the tracking is just when the Map screen mounts:
const watchId = useRef(null); // Location tracking reference value
const dispatch = useDispatch();

// Start the location foreground service and track user location upon screen mount
useMemo(() => {
  getLocationUpdates(watchId, dispatch);
  // Stop the service upon unmount
  return () => stopLocationUpdates(watchId);
}, []);
I still haven't found a way to keep tracking the location when the app goes into the background, and I've become frustrated with react-native-foreground-service, since its service won't stop even after the app is fully closed (the problem is that the cleanup function returned from useMemo never gets called upon closing the app).
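(Worth noting: React only calls cleanup functions returned from useEffect; a function returned from useMemo is just a memoized value and is never invoked on unmount. A sketch of the same mount/unmount wiring with useEffect, using the functions above:)

useEffect(() => {
  getLocationUpdates(watchId, dispatch);
  // Unlike with useMemo, this cleanup runs when the screen unmounts.
  return () => stopLocationUpdates(watchId);
}, []);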
I have heard about react-native-background-geolocation (the free one) but don't know if it will keep tracking after the app is closed (a feature I DON'T want), and I'm trying my best not to use two different packages to handle the location service (react-native-background-geolocation and react-native-geolocation-service).
Another option would be Headless JS, but even with that I'm not quite sure it would stop tracking after the app is closed.
I welcome and appreciate any help that might guide me to a solution for this frustrating issue.
I have a generic implementation to fire a page_view Google Analytics event in my React application every time there's a route change:
const usePageViewTracking = () => {
  const { pathname, search, hash } = useLocation();
  const pathnameWithTrailingSlash = addTrailingSlashToPathname(pathname) + search + hash;

  useEffect(() => {
    invokeGAPageView(pathnameWithTrailingSlash);
  }, [pathname]);
};

export default usePageViewTracking;
This works fine, but I need to fire GA4 page_view events with custom dimensions, and if the page doesn't have some data, I should not send it in the page_view event.
I turned my previous code into this:
const usePageViewTracking = () => {
  const { pathname, search, hash } = useLocation();
  const subscriptionsData = useAppSelector(
    (state) => state?.[REDUX_API.KEY]?.[REDUX_API.SUBSCRIPTIONS]?.successPayload?.data
  );

  useEffect(() => {
    sendPageViewEvent({ subscriptionsData });
  }, [pathname, subscriptionsData]);
};

export default usePageViewTracking;
sendPageViewEvent is where I collect most of the information that needs to be sent; currently it looks like this:
export const sendPageViewEvent = ({ subscriptionsData }: SendPageViewEventProps): void => {
  const { locale, ga } = window.appData;
  const { subscriptions, merchants, providers } =
    prepareSubscriptionsData({ subscriptionsData }) || {};

  const events = {
    page_lang: locale || DEFAULT_LOCALE,
    experiment: ga.experiment,
    consent_status: Cookies.get(COOKIES.COOKIE_CONSENT) || 'ignore',
    ...(subscriptionsData && {
      ucp_subscriptions: subscriptions,
      ucp_payment_providers: providers,
      ucp_merchants: merchants,
    }),
  };

  sendGA4Event({ eventType: GA4_EVENT_TYPE.PAGE_VIEW, ...events });
};
So as you can see, I have some dimensions that are always sent, and some that are conditionally sent (subscriptionsData).
The problem
The problem with this implementation is that once the page renders, it waits for subscriptionsData to be available before firing the event. That would be OK if this data were fetched on all pages; if a page doesn't have this data, I still need to send the event, just without attaching the subscription dimensions to it.
I tried different approaches in my application, like:
going to each page and firing it individually, but since it's a huge application, it would require a huge refactoring that turns out not to be justified for analytics purposes ❌
having some sort of config file to define which routes fire which endpoints, but this is a terrible and unmaintainable idea ❌
Now, if there were a way to figure out from the Redux store which endpoints are being triggered on a page, maybe I could detect that and decide whether to wait for this property to become available or fire the event without it.
PS: there will be more fetched data from different endpoints that I'll have to send with the page_view event too... and it's an SPA (so everything uses CSR).
Any ideas are welcome! :D
My app, which was created using React Native, requires functionality to scan a QR code with the default camera app and then open a specific screen in the app. To achieve this I set up Firebase Dynamic Links, using the React Native Firebase library.
The setup was pretty simple: a dynamic link using the Firebase-provided domain; the link also contains a deep link in URL format, https://example.page.link/abc-xyz.
After the app scans the QR code, it uses the deep link URL to extract the abc-xyz part and navigate to a different screen. Here is my implementation.
// App.js
const handleDynamicLink = link => {
  const linkCheck = new RegExp('^https://example.page.link/.*$');
  let title;
  if (linkCheck.test(link.url)) {
    title = link.url.substring(link.url.lastIndexOf('/') + 1).split('-');
    RootNavigation.navigate(Screens.offer, { title: title });
  }
};

...

React.useEffect(() => {
  // Handler for background/quit events
  dynamicLinks().getInitialLink().then(link => {
    // getInitialLink resolves to null when the app was not opened via a link
    if (link) handleDynamicLink(link);
  });
  // Handler for foreground events
  const unsubscribe = dynamicLinks().onLink(handleDynamicLink);
  return () => unsubscribe();
}, []);
// RootNavigation.js
import * as React from 'react';

export const navigationRef = React.createRef();

export function navigate(name, params) {
  navigationRef.current?.navigate(name, params);
}
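For the navigate helper to work, navigationRef has to be attached to the navigation container (standard react-navigation wiring; the navigator contents are omitted here):

// App.js
import { NavigationContainer } from '@react-navigation/native';
import { navigationRef } from './RootNavigation';

const App = () => (
  <NavigationContainer ref={navigationRef}>
    {/* stack/tab navigators go here */}
  </NavigationContainer>
);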
All the necessary setup is already configured for both iOS and Android. Testing with an Android device by scanning the QR code, it recognises the link and navigates to the intended screen, but not on iOS: even though it understands the link, it only opens the initial screen and stops there.
The strange thing is that if I open the link directly in the device browser it will open the preview page, and if I then click the "open" button it opens the app and navigates to the target screen.
I'm wondering whether this has something to do with navigation on the iOS native side?
It turns out I needed to update AppDelegate.m and add link handlers for iOS.
I added the code below above the #end line in AppDelegate.m (note it needs the RCTLinkingManager import at the top of the file).
// AppDelegate.m
// At the top of the file:
#import <React/RCTLinkingManager.h>

// Handles links opened via openURL (custom schemes)
- (BOOL)application:(UIApplication *)application
            openURL:(NSURL *)url
            options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options
{
  return [RCTLinkingManager application:application openURL:url options:options];
}

// Handles universal links (https URLs)
- (BOOL)application:(UIApplication *)application continueUserActivity:(nonnull NSUserActivity *)userActivity
 restorationHandler:(nonnull void (^)(NSArray<id<UIUserActivityRestoring>> * _Nullable))restorationHandler
{
  return [RCTLinkingManager application:application
                   continueUserActivity:userActivity
                     restorationHandler:restorationHandler];
}
Handling when the app is in the foreground state or already open:
// App.js
if (Platform.OS === 'ios') {
  Linking.addEventListener('url', handleDynamicLink);
}
Handling when the app is fully closed and is launched by the dynamic link:
// App.js
if (Platform.OS === 'ios') {
  Linking.getInitialURL()
    .then(link => {
      // link is null when the app was not launched via a URL
      if (link) handleDynamicLink({ url: link });
    })
    .catch(error => {
      // Error handling
    });
} else {
  // This part is for Android
  dynamicLinks().getInitialLink().then(link => {
    if (link) handleDynamicLink(link);
  });
}
I'm trying to make a web worker to prevent stalling the React main thread. The worker is supposed to read an image and do various things.
The app was created using create-react-app.
Currently I have
WebWorker.js
export default class WebWorker {
  constructor(worker) {
    const code = worker.toString();
    const blob = new Blob(['(' + code + ')()'], { type: "text/javascript" });
    return new Worker(URL.createObjectURL(blob), { type: 'module' });
  }
}
readimage.worker.js
import Jimp from "jimp";
export default () => {
self.addEventListener('message', e => { // eslint-disable-line no-restricted-globals
if (!e) return;
console.log('Worker reading pixels for url', e.data);
let data = {};
Jimp.read(e.data).then(image => {
// jimp does stuff
console.log('Worker Finished processing image');
})
postMessage(data);
})
};
And then in my React component AppContent.js I have
import WebWorker from "./workers/WebWorker";
import readImageWorker from './workers/readimage.worker.js';

export default function AppContent() {
  const readWorker = new WebWorker(readImageWorker);
  readWorker.addEventListener('message', event => {
    console.log('returned data', event.data);
    setState(event.data);
  });

  // callback that is executed onClick from a button component
  const readImageContents = (url) => {
    readWorker.postMessage(url);
    console.log('finished reading pixels');
  };
}
But when I run it, I get the error
Uncaught ReferenceError: jimp__WEBPACK_IMPORTED_MODULE_0___default is not defined
How can I properly import a module into a web worker?
EDIT:
As per suggestions from Kaiido, I have tried installing worker-loader, and edited my webpack.config.js to the following:
module.exports = {
  module: {
    rules: [
      {
        test: /\.worker\.js$/,
        use: { loader: 'worker-loader' }
      }
    ]
  }
};
But when I run it, I still get the error
Uncaught ReferenceError: jimp__WEBPACK_IMPORTED_MODULE_0__ is not defined
I'm not too much into React, so I can't tell if the module-Worker is the best way to go (maybe worker-loader would be a better solution), but regarding the last error you got, it's because you didn't set the type of your Blob when you built it.
In this case it does matter, because it determines the Content-Type the browser sets when serving the Blob to the APIs that fetch it.
Here Firefox is a bit more lenient and somehow allows it, but Chrome is picky and requires this type option to be set to one of the many JavaScript MIME types.
const script_content = `postMessage('running');`;
// this one will fail in Chrome
const blob1 = new Blob([script_content]); // no type option
const worker1 = new Worker(URL.createObjectURL(blob1), { type: 'module'});
worker1.onerror = (evt) => console.log( 'worker-1 failed' );
worker1.onmessage = (evt) => console.log( 'worker-1', evt.data );
// this one works in Chrome
const blob2 = new Blob([script_content], { type: "text/javascript" });
const worker2 = new Worker(URL.createObjectURL(blob2), { type: 'module'});
worker2.onerror = (evt) => console.log( 'worker-2 failed' );
worker2.onmessage = (evt) => console.log( 'worker-2', evt.data );
But now that this error is fixed, you'll face another error, because the import lib from "libraryname" format is still not supported in browsers, so you'd have to change "libraryname" to the path of your actual script file, keeping in mind that it will be resolved relative to your Worker's base URI, i.e. probably your main page's origin.
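As a sketch of that idea: if you build a classic (non-module) worker instead, you can pull in a prebuilt browser bundle of the library with importScripts. The CDN path below is an assumption for illustration; check that your version of Jimp actually ships such a browser build:

// readimage.worker.js as a classic worker (no { type: 'module' })
// The URL is illustrative; substitute a real browser build of the library.
importScripts('https://cdn.jsdelivr.net/npm/jimp/browser/lib/jimp.min.js');

self.addEventListener('message', (e) => {
  // The UMD build exposes a global `Jimp` instead of an import.
  Jimp.read(e.data).then((image) => {
    self.postMessage({ width: image.bitmap.width, height: image.bitmap.height });
  });
});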
I experienced the same problem. Firefox could not show me where exactly the error was (in fact it was plain misleading...) but Chrome did.
I fixed my problem by not relying on an import statement (importing one of my other files), which would only have worked within a React context. When you load a Worker script (via the blob()/URL() hack), it has no React context (as it is loaded at runtime and not at transpile time), so all the React paraphernalia __WEBPACK__blah_blah is not going to exist / be visible.
So, within React, import statements in worker files will not work.
I haven't thought of a workaround yet.
I am new to testing with Protractor. For testing I have to take screenshots in an Angular application for all the different routes in my app. I tried to do it on a small dummy Angular app, so I cloned the Tour of Heroes repo; it has Dashboard and Heroes routes. I wrote the following code in app.po.ts:
import { browser, element, by } from 'protractor';

export class BlankPage {
  navigateTo() {
    return browser.get('/heroes');
  }

  getParagraphText() {
    return element(by.tagName('h2')).getText();
  }
}
and in app.e2e-spec.ts
import { BlankPage } from './app.po';
import { browser, by, element } from 'protractor';
import { protractor } from 'protractor';
import { createWriteStream } from 'fs';

describe('blank App', () => {
  let page: BlankPage;

  beforeEach(() => {
    page = new BlankPage();
  });

  it('should display message saying app works', () => {
    page.navigateTo();
    expect(page.getParagraphText()).toEqual('My Heroes');
    browser.takeScreenshot().then((png) => {
      const stream = createWriteStream("heroes.png"); /** change the png file name */
      stream.write(Buffer.from(png, 'base64')); // Buffer.from replaces the deprecated new Buffer
      stream.end();
    });
  });
});
The idea was to navigate to the heroes route and capture the screenshot, and I got the screenshot. But is there a way I can automate the task of going to all the routes and taking screenshots? In my actual website there are a lot of routes.
I think the better solution for you is to add a reporter that will do everything for you, like taking screenshots after each test or after each failed test, etc.
Take a look at some reporters:
allure-jasmine - Highly recommended.
protractor-jasmine2-screenshot-reporter
protractor-jasmine2-html-reporter
protractor-html-reporter-2
protractor-html-screenshot-reporter
protractor-beautiful-reporter
But if you don't want to add any extra libraries to your project, you can just put the browser.takeScreenshot() call in an afterEach function to take a screenshot after each test (it).
For instance:
describe('blank App', () => {
  let page: BlankPage;

  beforeEach(() => {
    page = new BlankPage();
  });

  afterEach(() => {
    browser.takeScreenshot().then((png) => {
      const stream = createWriteStream("heroes.png"); /** change the png file name */
      stream.write(Buffer.from(png, 'base64'));
      stream.end();
    });
  });

  it('should display message saying app works', () => {
    page.navigateTo();
    expect(page.getParagraphText()).toEqual('My Heroes');
  });
});
I think the best approach for you would be to have a list of all the routes in your application and create a data-driven test to iterate over each one.
You would need a generic navigation function which could get to each page, e.g. navigateTo(routeName). The code would look something like this:
var routes = [
  'homepage',
  'myheroes',
  'mainpage',
  'heroprofile'
];

// Generic navigation helper; adjust to however your routes map to URLs
function navigateTo(routeName) {
  return browser.get('/' + routeName);
}

describe('blank App', () => {
  for (let i = 0; i < routes.length; i++) {
    it('should display message saying app works', () => {
      navigateTo(routes[i]);
      browser.takeScreenshot().then((png) => {
        const stream = createWriteStream(routes[i] + ".png"); /** change the png file name */
        stream.write(Buffer.from(png, 'base64'));
        stream.end();
      });
    });
  }
});
protractor-image-comparison
Really though, I would recommend you use the npm package protractor-image-comparison. I've worked with this package and it does make visual validation very straightforward. It allows you to save new baseline images (golden images, as you call them) if they are absent, and compares against them if they are present. The comparisons are very sensitive to change, but you can set how much difference you want to allow.
There would be no database required with this approach.
Note
Be aware also that different browsers take screenshots differently. Chrome considers the "viewport" to be the visible portion of the browser, but I believe in Firefox you can screenshot the entire webpage at once.
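If you go the protractor-image-comparison route, usage looks roughly like this (a sketch based on the package's plugin-style setup; option and method names vary between major versions, so treat these as assumptions and check the README for yours):

// protractor.conf.js (excerpt)
plugins: [{
  package: 'protractor-image-comparison',
  options: {
    baselineFolder: './baseline',
    screenshotPath: './screenshots',
    autoSaveBaseline: true // saves a "golden" image when none exists yet
  }
}],

// in a spec: checkScreen resolves to the mismatch percentage (0 = identical)
it('matches the heroes baseline', async () => {
  await browser.get('/heroes');
  expect(await browser.imageComparison.checkScreen('heroes')).toEqual(0);
});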