Set NDEF message on Android Chrome - javascript

I would like to set an NDEF message on my website when it is opened in Android Chrome, and then read that message with an NFC reader using nfcpy.
According to https://developer.mozilla.org/en-US/docs/Web/API/NDEFMessage/NDEFMessage I think I should be able to do this.
My page looks like this:
<button onclick="set_ndef()">Set NDEF msg</button>
<pre id="log"></pre>
<script>
  async function set_ndef() {
    if ("NDEFReader" in window) {
      try {
        const ndefmsg = new NDEFMessage({'records': [{'recordType': 'text', 'data': 'asd'}]});
        consoleLog(ndefmsg);
      } catch (error) {
        consoleLog(error);
      }
    } else {
      consoleLog("Web NFC is not supported.");
    }
  }

  function consoleLog(data) {
    var logElement = document.getElementById('log');
    logElement.innerHTML += data + '\n';
  }
</script>
The website uses HTTPS, and when I press the button an NDEFRecord object is printed into the log element.
I have an acr122u NFC reader, which I was able to set up using nfcpy:
import nfc
import time

def on_connect(tag):
    print(tag.identifier.hex())
    tag_ident = tag.identifier.hex()[:8]
    print(tag.ndef)
    if tag.ndef:
        for record in tag.ndef:
            print(record)
    return True

while True:
    with nfc.ContactlessFrontend('usb') as clf:
        tag = clf.connect(rdwr={'on-connect': on_connect, 'beep-on-connect': True})
    time.sleep(1)
After pressing the button on the page and placing my phone on the reader, I can only read the UID of the phone; tag.ndef is None.
How can I do this (if I am even able to)?

https://stackoverflow.com/a/65659726/2373819 should give you some background on why this won't work: Web NFC in Android Chrome acts as an NFC reader, and two readers cannot talk to each other.
However, the acr122u can usually be configured to behave as an NFC tag (card emulation), in which case Android Chrome should be able to read and write NDEF messages to it (https://nfcpy.readthedocs.io/en/latest/topics/get-started.html#emulate-a-card).
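On the Chrome side, the NDEFReader interface is what actually writes to a tag in range; constructing an NDEFMessage by itself never transmits anything. A minimal sketch, assuming the reader ends up emulating a writable NDEF tag as described above, and reusing consoleLog from the question's page:
async function write_ndef() {
  if (!("NDEFReader" in window)) {
    consoleLog("Web NFC is not supported.");
    return;
  }
  try {
    const ndef = new NDEFReader();
    // write() only resolves once a tag is actually in range, so call this
    // from the button handler and then hold the phone against the emulated tag.
    await ndef.write({ records: [{ recordType: "text", data: "asd" }] });
    consoleLog("NDEF message written.");
  } catch (error) {
    consoleLog(error);
  }
}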

Related

Detect QR Code via HTML5 file input in iOS

Based on this issue, it appears that I cannot use one of the many QR Code scanner libraries to embed a web-based scanner on iOS and need to use HTML5 file input. The html5-qrcode library seems to be working well.
However, the iOS camera does not detect the QR Code automatically (at least on my personal device), and I need to 1) trigger the camera and 2) select "Use Photo" in order to load the file into the input element and fire the onChange event.
The default behavior of the iOS camera detects QR Codes automatically.
Is there any configuration which would get the default iOS behavior for the camera to recognize the QR Code and thus skip the extra two steps?
Here's the React input element for reference
<input
  type="file"
  ref={inputRef}
  style={{ visibility: 'hidden' }}
  accept="image/*"
  id="cameraScanner"
  capture
/>
and the handler
const handler = async (e: ChangeEvent<HTMLInputElement>) => {
  const { target } = e;
  const { files = [] } = target;
  const fileList = files as FileList;
  if (fileList.length === 0) {
    return;
  }
  const scanner = new Html5Qrcode(READER_ELEMENT_ID);
  const [imageFile] = Array.from(fileList);
  // Scan QR Code
  try {
    const spaceId = await scanner.scanFile(imageFile, false);
    processScan(spaceId);
  } catch (err) {
    handleError(e);
  }
};

How to play audio in WKWebView?

I'm making an iOS app to wrap my JavaScript game. In Mobile Safari it works fine: once I play a sound in ontouchstart, I'm allowed to play any audio whenever I want afterwards, and I can set the volume too.
Problem 1: In a WKWebView I can only play the specific sounds that I already played in ontouchstart, not the rest. So I'm playing every single sound in my game on the first tap, which sounds really bad. Otherwise, on audio.play() in the JavaScript I get
Unhandled Promise Rejection: NotAllowedError: The request is not allowed by the user agent or the platform in the current context, possibly because the user denied permission.
Problem 2: I also can't lower the volume of sounds in a WKWebView. If I set myAudio.volume = .5, it's instantly 1 again, which means the user has to actually hear every sound in order for it to reach readyState=4.
Any good solution or hack? Right now I'm playing every single sound on the first tap, and everything is full volume.
<html>
<body style='background:#DDD;height:333px'>
  <div id=debug style='font-size:20px' onclick="playSound(myAudio)">
    Debug<br />
    Tap here to play the sound.<br />
  </div>
  <script>
    var context = new webkitAudioContext()
    var myAudio = new Audio("sound/upgrade.wav")
    myAudio.addEventListener('canplaythrough', function(){ log('canplaythrough1') }, false)
    var myAudio2 = new Audio("sound/dragon.wav")
    myAudio2.addEventListener('canplaythrough', function(){ log('canplaythrough2') }, false)

    function playSound(sound)
    {
      log("playSound() readyState=" + sound.readyState)
      try {
        sound.play()
        context.resume().then(() => {
          log("Context resumed")
          sound.play()
        })
      }
      catch(e)
      {
        log(e)
      }
    }

    function log(m)
    {
      console.log(m)
      debug.innerHTML += m + "<br />"
    }

    window.ontouchstart = function()
    {
      log("ontouchstart()")
      playSound(myAudio)
      setTimeout(function() {
        playSound(myAudio2)
      }, 1111)
    }
  </script>
</body>
</html>
And ViewController.swift
import UIKit
import WebKit

class ViewController: UIViewController {
    private var myWebView3: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.myWebView3 = WKWebView(frame: .zero)
        self.myWebView3.configuration.preferences.setValue(true, forKey: "allowFileAccessFromFileURLs")
        self.myWebView3.configuration.mediaTypesRequiringUserActionForPlayback = []
        self.myWebView3.configuration.allowsInlineMediaPlayback = true
        let url = Bundle.main.url(forResource: "index6", withExtension: "html", subdirectory: "/")!
        self.myWebView3.loadFileURL(url, allowingReadAccessTo: url)
        let request = URLRequest(url: url)
        self.myWebView3.load(request)
        self.view.addSubview(self.myWebView3)
    }

    override func viewDidLayoutSubviews() {
        self.myWebView3.frame = self.view.bounds
    }
}
I'm using Xcode 10.1 and an iPhone SE running iOS 12.1.1.
I have also tried on an iPad with iOS 10 and get the same error.
I have also tried context.decodeAudioData() / context.createBufferSource() and get the same error.
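For reference, that Web Audio attempt typically looks something like the sketch below (the helper and its wiring are illustrative, reusing the context and log defined in the page above); routing playback through a gain node is the usual way to control volume when HTMLAudioElement.volume is ignored:
function playViaWebAudio(url, volume) {
  // Illustrative only: fetch the file, decode it, and play it through a
  // gain node so the volume can be set even where audio.volume is ignored.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    context.decodeAudioData(xhr.response, function (decoded) {
      var source = context.createBufferSource();
      source.buffer = decoded;
      var gain = context.createGain();
      gain.gain.value = volume; // e.g. 0.5
      source.connect(gain);
      gain.connect(context.destination);
      source.start(0); // still has to be triggered from a user gesture
    }, function (err) {
      log(err);
    });
  };
  xhr.send();
}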
Re: problem 1, you need to set up the configuration object before you create the web view. The line
self.myWebView3.configuration.mediaTypesRequiringUserActionForPlayback = []
will fail silently, because it modifies the configuration of an already-created web view. Just create a new WKWebViewConfiguration object, set its properties, and then create the web view with it:
webView = WKWebView(frame: .zero, configuration: webConfiguration)
In the WKWebView's delegate, after the web view has finished loading, execute JavaScript that adds event listeners for the play, pause, and ended events to the audio and video tags, then check the playback state for your purpose.
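A rough sketch of what that injected script could look like (the selector and logging are illustrative; it could be run with WKWebView's evaluateJavaScript once loading finishes):
// Attach play / pause / ended listeners to every <audio> and <video>
// element and report the playback state to the console.
document.querySelectorAll('audio, video').forEach(function (el) {
  ['play', 'pause', 'ended'].forEach(function (eventName) {
    el.addEventListener(eventName, function () {
      console.log(eventName + ' readyState=' + el.readyState + ' paused=' + el.paused);
    });
  });
});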

JMeter WebDriver Sampler: working with Firefox but browser does not open when using Chrome

I have been able to successfully run a (javascript) test script using the WebDriver Sampler in JMeter with the Firefox Driver Config. I now want to use the JMeter Chrome Driver Config to run the same test in Chrome.
I know that the Chrome Driver that I have installed on my PC is working as I have successfully used it to run other (non-JMeter) tests. The path to the Chrome Driver is also definitely correct.
My site does not use a proxy so I have selected "No Proxy" under the "Proxy" tab of the Chrome Driver Config.
Problem: When I click "Run" in JMeter with the Firefox Driver Config disabled and the Chrome Driver Config enabled, nothing happens (browser does not open, test quickly ends and nothing is recorded in the "View Results Tree" listener).
I am using version 3.1 of JMeter, version 60.0.3112.101 of Chrome and version 2.31 of ChromeDriver.
My code looks like this in case that helps:
var pkg = JavaImporter(org.openqa.selenium); //WebDriver classes
var pfg = JavaImporter(org.openqa.selenium.Keys); //WebDriver classes
var support_ui = JavaImporter(org.openqa.selenium.support.ui.WebDriverWait);
var conditions = org.openqa.selenium.support.ui.ExpectedConditions; //used by the wait.until() calls below
var wait = new support_ui.WebDriverWait(WDS.browser, 5000);
var username = WDS.args[0];
var password = WDS.args[1];
var docNo = WDS.args[2];

WDS.sampleResult.sampleStart();
WDS.sampleResult.getLatency();
WDS.log.info("Sample started");
WDS.browser.get('blah blah blah');

var usernameBox = WDS.browser.findElement(pkg.By.id('TextBoxCustomer'));
var passwordBox = WDS.browser.findElement(pkg.By.id('PIN'));
var loginBtn = WDS.browser.findElement(pkg.By.id('btnLogin'));
usernameBox.click(); //click on User ID textbox
usernameBox.sendKeys([username]); //enter User ID
passwordBox.click(); //click on Password textbox
passwordBox.sendKeys([password]); //enter password
loginBtn.click(); //click Login button
java.lang.Thread.sleep(5000);

//Check that "Home" page has been reached by verifying presence of "News Header"
try {
    wait.until(conditions.presenceOfElementLocated(pkg.By.id('ct100_CP1_ctlNewsMessageList_NewsHeader')));
}
catch (Exception) {
    WDS.sampleResult.sampleEnd();
    WDS.sampleResult.setSuccessful(false);
}

//Navigate to "Invoice PDFs" screen
var accountMnu = WDS.browser.findElement(pkg.By.xpath("//*[contains(text(),'Account')]"));
accountMnu.click();
var InvPDFSubMnu = WDS.browser.findElement(pkg.By.xpath("//*[contains(text(),'Invoice PDFs')]"));
InvPDFSubMnu.click();
java.lang.Thread.sleep(5000);
try {
    wait.until(conditions.presenceOfElementLocated(pkg.By.id('ctl00_CP1_tbDocNo')));
}
catch (Exception) {
    WDS.sampleResult.sampleEnd();
    WDS.sampleResult.setSuccessful(false);
}

//Enter document number
java.lang.Thread.sleep(5000);
var docNoBox = WDS.browser.findElement(pkg.By.id('ctl00_CP1_tbDocNo'));
docNoBox.click(); //click on "Doc No." textbox
docNoBox.sendKeys([docNo]); //enter Doc No.
java.lang.Thread.sleep(5000);

//Retrieve document with specified document number
var retrieveBtn = WDS.browser.findElement(pkg.By.id('ctl00_CP1_btnRetrieve'));
retrieveBtn.click();
try {
    wait.until(conditions.presenceOfElementLocated(pkg.By.xpath("//*[contains(text(),'download')]")));
}
catch (Exception) {
    WDS.sampleResult.sampleEnd();
    WDS.sampleResult.setSuccessful(false);
}
java.lang.Thread.sleep(5000);

//Click on "download" button
var downloadBtn = WDS.browser.findElement(pkg.By.xpath("//*[contains(text(),'download')]"));
downloadBtn.click();
WDS.sampleResult.sampleEnd();
It seems the solution was that the "Path to Chrome Driver" field (under the "Chrome" tab of the "jp@gc - Chrome Driver Config" element) needs to point to the executable itself, i.e. the path must end with "\chromedriver.exe".

Code behind Content editor button for Experience editor button

Is it possible to use the code-behind of a command used for a Content Editor ribbon button as the request for an Experience Editor button? We want to stick to SPEAK and not make any changes to Sitecore.ExperienceEditor.config.
After creating a new button in the Experience Editor and telling the .js to call the NewCommand request with
Sitecore.ExperienceEditor.PipelinesUtil.generateRequestProcessor("ExperienceEditor.NewCommand");
that was referenced in Sitecore.ExperienceEditor.Speak.Requests.config as
<request name="ExperienceEditor.NewCommand" type="Sitecore.Starterkit.customcode.MyCommand,MyProject"/>
nothing happens and the logs say
ERROR Could not instantiate speak request object,
name:ExperienceEditor.NewCommand,
type:Sitecore.Starterkit.customcode.MyCommand,MyProject
Do we have to import PipelineProcessorRequest as suggested by some tutorials or is there a way to use our existing code?
Have you seen this blog post on adding custom SPEAK command buttons to Sitecore 8 Experience Editor?
https://doc.sitecore.net/sitecore%20experience%20platform/the%20editing%20tools/customize%20the%20experience%20editor%20ribbon
Otherwise if that doesn't achieve what your looking for, it might be worth trying to standard SPEAK application way of triggering a button, In a SPEAK application you can call a JavaScript function from a button click using this code.
javascript:app.FunctionName();
In the core DB update the click field on your button to call JavaScript with the javascript: prefix. Does this allow you to trigger your JavaScript?
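If you go that route, the function being called lives in the SPEAK page code for your application. A rough sketch of what that page code could look like (MyApp and FunctionName are illustrative placeholders, not names from the question):
define(["sitecore"], function (Sitecore) {
  // Hypothetical SPEAK page code: the Click field of the button in the core
  // database would be set to javascript:app.FunctionName();
  var MyApp = Sitecore.Definitions.App.extend({
    initialized: function () {
      // runs when the SPEAK page has loaded
    },
    FunctionName: function () {
      // put the logic you want the button to trigger here
      alert("Button clicked");
    }
  });
  return MyApp;
});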
I was able to use my existing control using guidelines from:
http://jockstothecore.com/sitecore-8-ribbon-button-transfiguration/
Relevant pieces of the old command:
if (args.IsPostBack)
{
    // act upon the dialog completion
    if (args.Result == "yes")
    {
        Context.ClientPage.SendMessage(this, "item:load(...)");
    }
}
else
{
    // trigger the dialog
    UrlString url = new UrlString(UIUtil.GetUri("control:CopyLanguage"));
    url.Add("id", item.ID.ToString());
    url.Add("lang", item.Language.ToString());
    url.Add("ver", item.Version.ToString());
    SheerResponse.ShowModalDialog(url.ToString(), true);
    args.WaitForPostBack();
}
The redressed command:
define(["sitecore"], function (Sitecore) {
Sitecore.Commands.ScoreLanguageTools = {
canExecute: function (context) {
return true; // we will get back to this one
},
execute: function (context) {
var id = context.currentContext.itemId;
var lang = context.currentContext.language;
var ver = context.currentContext.version;
var path = "/sitecore/shell/default.aspx?xmlcontrol=CopyLanguage" +
"&id=" + id + "&lang=" + lang + "&ver=" + ver;
var features = "dialogHeight: 600px;dialogWidth: 500px;";
Sitecore.ExperienceEditor.Dialogs.showModalDialog(
path, '', features, null,
function (result) {
if (result) {
window.top.location.reload();
}
}
);
}
};
});

How to access the camera from inside the webview in Titanium Appcelerator

I've very nearly finished developing an HTML5 app with Appcelerator, and I have one function left to add: allowing the user to take a photo when sending the client a message through the app. A specific div is displayed which contains the message form, and I'd like the user to be able to take a photo with their phone and have it automatically attached to the message, which is then submitted to our server.
However, after hunting around I'm stumped as to how to get it working. While the API docs show the JavaScript to make the camera work, I can't seem to access it, and I don't know where the API call should be located. Does it go in the app.js file, or its own file, or does it not really matter where it's called? Any help/advice would be appreciated.
EDIT
Thanks to Dragon, I've made the following changes to my code:
index.html
<div class="col-square">
<i class="fa fa-camera fa-squareBlock"></i><br />Take Photo
</div>
<script type="text/javascript">
Ti.App.addEventListener("app:fromTitanium", function(e) {
alert(e.message);
});
</script>
app.js
Ti.App.addEventListener('app:fromWebView', function(e){
  Titanium.Media.showCamera({
    success: function(event)
    {
      var image = event.media;
      var file = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, "userImg.jpg");
      file.write(image);
      var data = file.nativePath;
      Ti.App.fireEvent('app:fromTitanium', {message: "photo taken fine"});
    },
    cancel: function()
    {
    },
    error: function(error)
    {
      var a = Titanium.UI.createAlertDialog({title:'Camera'});
      if (error.code == Titanium.Media.NO_CAMERA)
      {
        a.setMessage('Please run this test on device');
      }
      else
      {
        a.setMessage('Unexpected error: ' + error.code);
      }
      a.show();
    },
    showControls: false, // don't show system controls
    mediaTypes: Ti.Media.MEDIA_TYPE_PHOTO,
    autohide: false // tell the system not to auto-hide and we'll do it ourself
  });
});
However, in this case the button opens the camera fine, but when the photo is taken and selected, it returns to the screen and nothing happens. It then gives this error in the debug log: "Ti is undefined". When I then define Ti, it returns "App is undefined".
The peculiar thing is that if I remove the code that handles data being sent from app.js to the webview, it works fine, even though the code to open the camera from the webview is nearly the same.
Here is what you can do:
Inside your webview, fire an event, and write the event listener inside the parent of the webview.
Something like this goes inside the webview:
<button onclick="Ti.App.fireEvent('app:fromWebView', { message: 'event fired from WebView, handled in Titanium' });">fromWebView</button>
Followed by something like this inside the parent JS of the webview:
Ti.App.addEventListener('app:fromWebView', function(e) {
  alert(e.message);
  // Here you can call the camera API.
});
For sending the image back to the webview, follow the reverse process.
Don't forget to check the docs.
Hope it helps.
I always avoid having Event Listeners in the 'web' world in Titanium. When you call 'fireEvent' from your webview, you are crossing the bridge from the webview sandbox to the native world. That is where Titanium takes the picture and saves it to the file system. For Titanium to tell the webview it is finished, I recommend evalJS. Much more reliable.
Here is an example using the Photo Gallery instead of the Camera. Much easier to test in a simulator. Just replace Titanium.Media.openPhotoGallery with Titanium.Media.showCamera to use the camera instead.
app.js
var win = Ti.UI.createWindow({
  background : 'white',
  title : 'camera test'
});
var webview = Ti.UI.createWebView({
  url : 'test.html'
});
win.add(webview);
win.open();

Ti.App.addEventListener('choosePicture', function(e) {
  var filename = e.filename;
  Titanium.Media.openPhotoGallery({
    success : function(event) {
      var image = event.media;
      var file = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, filename);
      file.write(image);
      var full_filename = file.nativePath;
      webview.evalJS('photoDone("' + full_filename + '");');
    },
    cancel : function() {
    },
    error : function(error) {
      var a = Titanium.UI.createAlertDialog({
        title : 'Camera'
      });
      if (error.code == Titanium.Media.NO_CAMERA) {
        a.setMessage('Please run this test on device');
      } else {
        a.setMessage('Unexpected error: ' + error.code);
      }
      a.show();
    },
    showControls : false, // don't show system controls
    mediaTypes : Ti.Media.MEDIA_TYPE_PHOTO,
    autohide : true
  });
});
test.html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>My HTML Page</title>
  <style>
    body {
      padding-top: 20px
    }
  </style>
  <script>
    var photoNumber = 0;

    function doShowCamera() {
      photoNumber++;
      Ti.App.fireEvent('choosePicture', {
        filename : photoNumber + ".png"
      });
    }

    function photoDone(filename) {
      var img = document.getElementById('myPhoto');
      img.src = filename;
    }
  </script>
</head>
<body>
  <img id="myPhoto" width="300" height="400"/>
  <input type="button" value="Show Pictures" onclick="doShowCamera();" />
</body>
</html>
The Ti.App.addEventListener call can be anywhere in your Titanium code (not in your webviews) as long as it is only run once. app.js is as good a place as any to run it.
