How to determine if user picked video or image? - javascript

Using react native and expo, we want a user to be able to post a video or an image.
Here is the code:
import * as ImagePicker from 'expo-image-picker';
const openImagePickerAsync = async () => {
  const permissionResult = await ImagePicker.requestMediaLibraryPermissionsAsync();
  if (permissionResult.granted === false) {
    setImage(null);
    setHasImage(false);
    alert('Permission to access camera roll is required!');
    return;
  }
  const pickerResult = await ImagePicker.launchImageLibraryAsync({
    mediaTypes: ImagePicker.MediaTypeOptions.All, // allow both photos and videos
    allowsEditing: true,
  });
  try {
    if (pickerResult.cancelled === true) {
      setHasImage(false);
      console.log('pickerResult is cancelled');
      return;
    }
    if (pickerResult !== null) {
      setHasImage(true);
      setImage(pickerResult.uri);
      console.log(image); // note: logs the previous state value, not the newly set one
    } else {
      setImage(null);
      setHasImage(false);
      console.log('pickerResult is null');
      return;
    }
  } catch (error) {
    console.log(error);
  }
};
How do we make it so that we can know if a user picked a photo or a video? Is it in the metadata?

You should use pickerResult.type, which will be either 'image' or 'video'.
You can refer to the documentation:
If the user cancelled the picking, this method returns { cancelled: true }.
Otherwise, it returns information about the selected media item. When the
chosen item is an image, it returns { cancelled: false, type: 'image', uri,
width, height, exif, base64 }; when the item is a video, it returns
{ cancelled: false, type: 'video', uri, width, height, duration }.
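For instance, the branching on pickerResult.type could look like the sketch below (the helper name is illustrative, not part of the expo-image-picker API):

```javascript
// Illustrative helper: decide how to handle a picker result based on its type.
function describePickerResult(pickerResult) {
  if (pickerResult.cancelled) return 'cancelled';
  if (pickerResult.type === 'video') return 'video: ' + pickerResult.uri;
  return 'image: ' + pickerResult.uri;
}
```

In the component above, the same check would choose between, say, a Video player and an Image component when rendering.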


interaction of a chrome extension based on React TSX UI chrome API

I'm attempting to build some extension which contains a form and an option to capture screen with desktopCapture, which looks like this:
The form is written in React TypeScript and the code for capturing the screen (taken from here) is the following:
chrome.runtime.onMessage.addListener((message, sender, senderResponse) => {
  if (message.name === "stream" && message.streamId) {
    let track, canvas;
    navigator.mediaDevices
      .getUserMedia({
        video: {
          mandatory: {
            chromeMediaSource: "desktop",
            chromeMediaSourceId: message.streamId,
          },
        },
      })
      .then((stream) => {
        track = stream.getVideoTracks()[0];
        const imageCapture = new ImageCapture(track);
        return imageCapture.grabFrame();
      })
      .then((bitmap) => {
        track.stop();
        canvas = document.createElement("canvas");
        canvas.width = bitmap.width;
        canvas.height = bitmap.height;
        const context = canvas.getContext("2d");
        context.drawImage(bitmap, 0, 0, bitmap.width, bitmap.height);
        // toDataURL() returns a string synchronously, not a promise
        const url = canvas.toDataURL();
        // TODO download the image from the URL
        chrome.runtime.sendMessage({ name: "download", url }, (response) => {
          if (response.success) {
            alert("Screenshot saved");
          } else {
            alert("Could not save screenshot");
          }
          canvas.remove();
          senderResponse({ success: true });
        });
      })
      .catch((err) => {
        alert("Could not take screenshot");
        senderResponse({ success: false, message: err });
      });
  }
  return true;
});
My intention is that when the user clicks on "take screen shot", the code above will run, and then, on save, the image will be presented in that box.
I was able to grab both elements: the box where I want the image to appear after the screenshot, and the "TAKE SCREEN SHOT" button.
As far as I'm aware, a content script only injects into web pages and has no access to the extension, so that's not the way to add the code.
What am I missing? How could I add an event listener so that when the button is clicked, the screen-capturing code runs and I can set the box to show the captured image?
Best regards!
As I understand it, you want to take a screenshot of a tab's page content.
(I assume you don't need to grab playing video or audio content.)
Fix 1:
Use the chrome.tabs.captureVisibleTab API to capture the screenshot.
API link: chrome.tabs
Add this in background.js
const takeShot = async (windowId) => {
  try {
    const imgUrl64 = await chrome.tabs.captureVisibleTab(windowId, { format: "jpeg", quality: 80 });
    console.log(imgUrl64);
    chrome.runtime.sendMessage({ msg: "update_screenshot", imgUrl64: imgUrl64 });
  } catch (error) {
    console.error(error);
  }
};

chrome.runtime.onMessage.addListener((req, sender, sendResponse) => {
  if (req.msg === "take_screenshot") takeShot(sender.tab.windowId);
});
Fix 2:
A content script has limited API access.
Check this page to understand content script capabilities.
Solution:
Send a message from the content script to the background script and ask it to capture the screenshot; the background script does the capturing.
content.js
chrome.runtime.sendMessage({ msg: "take_screenshot" });
popup.js
chrome.runtime.onMessage.addListener((req, sender, sendResponse) => {
  if (req.msg === "update_screenshot") console.log(req.imgUrl64);
});
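The message flow above can be sketched as a plain routing function, kept free of the chrome.* APIs so the logic is easy to follow in isolation (the message names match the snippets above; the function itself is illustrative, not part of any extension API):

```javascript
// Illustrative dispatcher: maps an incoming extension message to the action
// the receiving script should take. In the real extension, "capture" would
// trigger chrome.tabs.captureVisibleTab in the background script, and
// "display" would update the popup's image element.
function routeMessage(req) {
  if (req.msg === "take_screenshot") return { action: "capture" };
  if (req.msg === "update_screenshot") return { action: "display", imgUrl64: req.imgUrl64 };
  return { action: "ignore" };
}
```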

create a channel with a embed

I am trying to create a very small ticket bot.
I would only like that when a user reacts, a support channel opens, and nothing else.
This is the code I am working with.
const ember = new Discord.MessageEmbed()
  .setColor('#E40819')
  .setTitle('⚠️SUPPORT')
  .setDescription('Open a Ticket');
let msgEmbed6 = await message.channel.send(ember);
await msgEmbed6.react('⚠️');
The code inside the if statement will only run if the user reacts; I'm not sure what you mean by 'open a support channel'.
// awaitReactions returns a promise, so it must be awaited;
// note: in discord.js the collector option is `time` (in ms), not `timeout`
const reactions = await msgEmbed6.awaitReactions(
  (reaction, user) => user.id === message.author.id,
  { max: 1, time: TIME_IN_MILLISECONDS }
);
if (reactions.size > 0) {
  // Creates a new text channel called 'Support'
  const supportChannel = await message.guild.channels.create('Support', { type: 'text' });
  // Stops @everyone from viewing the channel
  await supportChannel.updateOverwrite(message.guild.id, { VIEW_CHANNEL: false });
  // Allows the message author to view and send messages in the channel
  await supportChannel.updateOverwrite(message.author, { SEND_MESSAGES: true, VIEW_CHANNEL: true });
}

React - Multiple images upload timing is wrong (Firebase Firestore and Storage)

I am having some trouble getting data to register in both Firebase Firestore and Storage at the right time.
I am using React and Firebase, and I have a screen where the user registers a ball with its information as well as its images. Once the user has entered that information, I use the Firebase function below. First, I register the newly created data and grab its ID, then use it to create a path in Storage, after which I save the images uploaded by the user. Both the Firestore and Storage data are saving as they should, except for the timing.
The problem is that I get a response right after the ball information has been added, not after the images have finished uploading. Once I run the code below, I get a response immediately at console.log(resultCheck) (which logs a promise) and before console.log(snapshot);. I need to return both responses at the same time, but I can't find the right asynchronous timing.
async registerBall(ball, images) {
  let result = await this.firebase.firestore()
    .collection("balls")
    .add(ball)
    .then(async function(ballResult) {
      if (images.length > 0) {
        var storageRef = firebase.storage().ref();
        let imageUploadResult = await images.map(async image => {
          let uploadTask = await storageRef
            .child("images/balls/" + ballResult.id + "/" + image.name)
            .put(image.file, { contentType: image.file.type })
            .then(snapshot => {
              console.log(snapshot);
              return { isError: false };
            })
            .catch(error => {
              return { isError: true, errorMessage: error };
            });
          return uploadTask;
        });
        let resultCheck = { isError: false };
        imageUploadResult.forEach(result => {
          if (result.isError) {
            return (resultCheck = {
              isError: true,
              errorMessage: result.errorMessage
            });
          }
        });
        console.log(resultCheck);
        return resultCheck;
      } else {
        return { isError: false };
      }
    })
    .catch(function(error) {
      return { isError: true, errorMessage: error };
    });
  return result;
}
Thanks for the help!
The issue is likely that you are awaiting images.map when uploading, which actually returns an array of promises (awaiting an array is a no-op). To wait for all of them you can use Promise.all, i.e.:
const imageUploadPromises = images.map(async image => {...
Then:
const imageUploadResults = await Promise.all(imageUploadPromises);
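Pulled out of the Firebase specifics, the corrected pattern looks like the sketch below. Here uploadOne is a stand-in for the real storageRef...put() call, and the function name is illustrative:

```javascript
// Illustrative sketch: start all uploads in parallel and wait for every one
// to settle before returning. uploadOne stands in for the Storage put() call.
async function uploadAll(images, uploadOne) {
  const results = await Promise.all(
    images.map(async (image) => {
      try {
        await uploadOne(image);
        return { isError: false };
      } catch (error) {
        return { isError: true, errorMessage: error };
      }
    })
  );
  // Report the first failure, if any, mirroring the resultCheck logic above
  const failed = results.find((r) => r.isError);
  return failed || { isError: false };
}
```

Because the function only resolves after Promise.all settles, the caller's await now returns after all images are uploaded, not just after the Firestore add.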

Check always location services are enabled or not in React Native (iOS and Android both)

I am working on a React Native application where I have to fetch the user's location repeatedly as the user moves from one place to another. This works fine, but if the user disables the location permission later (e.g. goes to settings and turns it off), I have to show a button like "enable location", and once the user taps that button it should request the location permission again.
However, if the user grants permission the first time and disables it later, the permission-request popup does not appear on Android when the button is tapped.
I am using the following library to fetch the user's location details.
import Geolocation from 'react-native-geolocation-service';
// button onPress handler
enableLocationHandler = () => {
  if (Platform.OS === 'android') {
    this.requestLocationPermissions();
  } else {
    Linking.openURL('app-settings:');
    this.getLatitudeLongitude();
  }
}

requestLocationPermissions = async () => {
  if (Platform.OS === 'android') {
    this.getLatitudeLongitude();
  } else {
    Geolocation.requestAuthorization();
    this.getLatitudeLongitude();
  }
}

getLatitudeLongitude() {
  Geolocation.getCurrentPosition((position) => {
    const initialPosition = JSON.stringify(position);
  },
  (error) => {
    if (error.code === 1) {
      this.setState({ errorMessage: 'Location permission is denied', isLoading: false });
      Geolocation.clearWatch(this.watchID);
    }
  },
  { enableHighAccuracy: true, distanceFilter: 100, timeout: 20000, maximumAge: 1000 }
  );
  this.watchID = Geolocation.watchPosition((position) => {
    // this.showLoader();
    // console.log('position', position);
  });
}
Any suggestions?
The react-native-geolocation-service plugin does not request the Android runtime permission for you; that's why the permission dialog is not showing on Android.
To resolve this, request the permission yourself before trying to fetch the location:
import { PermissionsAndroid } from 'react-native';

async function requestAccessLocationPermission() {
  try {
    const granted = await PermissionsAndroid.request(
      PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION,
      {
        title: 'Application wants location Permission',
        message: 'Application wants location Permission',
        buttonNeutral: 'Ask Me Later',
        buttonNegative: 'Cancel',
        buttonPositive: 'OK',
      },
    );
    if (granted === PermissionsAndroid.RESULTS.GRANTED) {
      // permission granted: safe to fetch the location
    } else {
      // permission denied: keep showing the "enable location" button
    }
  } catch (err) {
    console.warn(err);
  }
}
Hope this helps you; it worked for me.
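The decision logic that ties the button handler and the permission request together can be sketched as a plain function (the names and return values here are assumptions for illustration, not library constants):

```javascript
// Illustrative decision helper: given the platform and the result of the
// runtime permission request, decide what the handler should do next.
// In the real app, 'fetch_location' maps to Geolocation.getCurrentPosition
// and 'request_ios_authorization' to Geolocation.requestAuthorization.
function nextLocationStep(platform, permissionResult) {
  if (platform === 'android') {
    return permissionResult === 'granted' ? 'fetch_location' : 'show_enable_button';
  }
  // On iOS, PermissionsAndroid does not apply; use the library's own flow
  return 'request_ios_authorization';
}
```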

How do I log into a website in an IOS APP through swift/javascript

I'm trying to create an application where the user signs in through my UI. I have text fields that I'm pulling data from; I want to take that data and use it to log into a website and retrieve data once logged in. What I have right now is a WKWebView, invisible most of the time, that loads the website I want to log into, then fills out the login form and clicks the button, all via evaluateJavaScript. My only problem right now is coming up with something that can check whether the user logged in incorrectly. What I am trying to do is wait until the JavaScript executes, then check the web view to see if the URL has changed from the login page. This works, but only if the user hasn't entered an incorrect password; once they enter an incorrect password, all entries after that report the password as incorrect. I'm using observeValue to check whether the page is done loading, but I need help implementing this.
@IBAction func loginBtnPressed(_ sender: Any) {
    if !validate(usernameText) || !validate(passwordText) {
        self.validationLabel.text = "One or more fields are empty."
        UIView.animate(withDuration: 0.25, animations: {
            self.validationLabel.isHidden = false
        })
        return
    }
    let oldUrl = webView.url?.absoluteString
    self.validationLabel.isHidden = true
    // Note: the injected value needs quotes in the JavaScript, like the password below
    webView.evaluateJavaScript("document.getElementById('fieldAccount').value = '\(usernameText!.text!)'", completionHandler: { (result, err) in
        print(result ?? "No Result"); print(err ?? "No Error")
    })
    webView.evaluateJavaScript("document.getElementById('fieldPassword').value = '\(passwordText!.text!)'", completionHandler: { (result, err) in
        print(result ?? "No Result"); print(err ?? "No Error")
    })
    webView.evaluateJavaScript("document.getElementById('btn-enter').click()", completionHandler: { (result, err) in
        print(result ?? "No Result"); print(err ?? "No Error")
    })
    DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(6)) {
        // Code that should run after a delay
        if (self.webView.url?.absoluteString.contains("termGrades"))! {
            print("We're in bois: ")
            self.view = self.webView
        } else {
            print("Did not get to grades page..")
            self.validationLabel.text = "Invalid Username/Password"
            UIView.animate(withDuration: 0.25, animations: {
                self.validationLabel.isHidden = false
            })
            let portalURL = URL(string: "*website*")
            let request = URLRequest(url: portalURL!)
            self.webView.load(request)
            return
        }
    }
}
I'm a new iOS developer, but I don't want to give up on this. If you have any help or suggestions on a better way to wait for the JavaScript to click the button and the page to fully load, please share.
EDIT:
I think I found a solution, but for some reason now the login button has to be clicked twice for it to process anything...
@IBAction func loginBtnPressed(_ sender: Any) {
    if !validate(usernameText) || !validate(passwordText) {
        self.validationLabel.text = "One or more fields are empty."
        UIView.animate(withDuration: 0.25, animations: {
            self.validationLabel.isHidden = false
        })
        return
    }
    self.validationLabel.isHidden = true
    // Note: the injected value needs quotes in the JavaScript, like the password below
    webView.evaluateJavaScript("document.getElementById('fieldAccount').value = '\(usernameText!.text!)'", completionHandler: { (result, err) in
        print(result ?? "No Result"); print(err ?? "No Error")
    })
    webView.evaluateJavaScript("document.getElementById('fieldPassword').value = '\(passwordText!.text!)'", completionHandler: { (result, err) in
        print(result ?? "No Result"); print(err ?? "No Error")
    })
    webView.evaluateJavaScript("document.getElementById('btn-enter').click()", completionHandler: { (result, err) in
        print(result ?? "No Result"); print(err ?? "No Error")
    })
}

override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    if keyPath == "loading" {
        if !webView.isLoading {
            print("Finished navigating to url \(webView.url!)")
            if webView.url! == URL(string: "**") {
                statusLabel.text = "Done."
            } else if webView.url! == URL(string: "**") {
                self.validationLabel.text = "Invalid Username/Password"
                UIView.animate(withDuration: 0.25, animations: {
                    self.validationLabel.isHidden = false
                })
            } else if webView.url! == URL(string: "**") {
                self.view = webView
            }
        }
    }
}

// Correct WKNavigationDelegate signature (the original selector would never be called)
func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
    print("Finished navigating to url \(String(describing: webView.url))")
}
Make sure to set the navigationDelegate of the WKWebView to your view controller.
