Based on this issue, it appears that I cannot use one of the many QR Code scanner libraries to embed a web-based scanner on iOS and need to use HTML5 file input. The html5-qrcode library seems to be working well.
However, the iOS camera does not detect the QR Code automatically (at least on my personal device), and I need to 1) trigger the camera and 2) select "Use Photo" in order to load the file onto the input element and execute the onChange event.
The default behavior of the iOS camera detects QR Codes automatically.
Is there any configuration that would get the default iOS camera behavior to recognize the QR Code and thus skip the two extra steps?
Here's the React input element for reference:
<input
  type="file"
  ref={inputRef}
  style={{ visibility: 'hidden' }}
  accept="image/*"
  id="cameraScanner"
  capture
/>
and the handler:
const handler = async (e: ChangeEvent<HTMLInputElement>) => {
  const { target } = e;
  const { files = [] } = target;
  const fileList = files as FileList;
  if (fileList.length === 0) {
    return;
  }
  const scanner = new Html5Qrcode(READER_ELEMENT_ID);
  const [imageFile] = Array.from(fileList);
  // Scan QR Code
  try {
    const spaceId = await scanner.scanFile(imageFile, false);
    processScan(spaceId);
  } catch (err) {
    handleError(err);
  }
};
Related
I would like to set an NDEF message on my website when it is opened from Android Chrome, and read it with an NFC reader using nfcpy.
According to https://developer.mozilla.org/en-US/docs/Web/API/NDEFMessage/NDEFMessage, I think I should be able to do this.
My page looks like this:
<button onclick="set_ndef()">Set NDEF msg</button>
<pre id="log"></pre>
<script>
  async function set_ndef() {
    if ("NDEFReader" in window) {
      try {
        const ndefmsg = new NDEFMessage({'records': [{'recordType': 'text', 'data': 'asd'}]});
        consoleLog(ndefmsg);
      } catch (error) {
        consoleLog(error);
      }
    } else {
      consoleLog("Web NFC is not supported.");
    }
  }

  function consoleLog(data) {
    var logElement = document.getElementById('log');
    logElement.innerHTML += data + '\n';
  }
</script>
The website uses HTTPS, and when I press the button an NDEFRecord object is printed into the log element.
I have an acr122u NFC reader, which I was able to set up using nfcpy:
import nfc
import time

def on_connect(tag):
    print(tag.identifier.hex())
    tag_ident = tag.identifier.hex()[:8]
    print(tag.ndef)
    if tag.ndef:
        for record in tag.ndef:
            print(record)
    return True

while True:
    with nfc.ContactlessFrontend('usb') as clf:
        tag = clf.connect(rdwr={'on-connect': on_connect, 'beep-on-connect': True})
    time.sleep(1)
After I place my phone on the reader (after pressing the button on the page), I am only able to read the UID of the phone; tag.ndef is None.
How can I do this (if it is even possible)?
https://stackoverflow.com/a/65659726/2373819 should give you some background on why this won't work: Android Chrome's Web NFC acts as an NFC reader, and two readers cannot talk to each other.
However, the acr122u can usually be configured to behave as an NFC tag, in which case Android Chrome should be able to read and write NDEF messages to it (https://nfcpy.readthedocs.io/en/latest/topics/get-started.html#emulate-a-card).
I want to paste an image from my clipboard into a WhatsApp chat. I tried document.execCommand with different parameters such as insertHTML and insertImage; both add the image to the contenteditable div, but do not enable the send button. I also tried document.execCommand("paste"), but it pastes nothing. I double-checked, and the image exists on the clipboard.
async function sendToChat(blob) {
  setToClipboardImg(blob)
  try {
    document.querySelector('._6h3Ps ._13NKt').focus()
  } catch (e) {
    document.querySelector('textarea').focus()
  }
  // const data = [new ClipboardItem({
  //   [text.type]: text
  // })]
  // await navigator.clipboard.write(data)
  document.execCommand("Paste", null, null);
  //setToClipboardWithTextInsta()
}

var setToClipboardImg = async blob => {
  window.focus();
  const data = [new ClipboardItem({
    [blob.type]: blob
  })]
  await navigator.clipboard.write(data);
}
I needed exactly the same functionality and was able to solve it using html2canvas: you can convert an HTML element into an image and then send it through WhatsApp from the clipboard. Here is how I implemented it:
document.querySelector("#btn").onclick = function () {
  var table = document.querySelector("#table");
  var elementContainer = document.querySelector("#previewImage");
  try {
    // Prevent duplicates since the element will be appended to the container
    if (elementContainer.innerHTML) {
      elementContainer.innerHTML = "";
    }
    html2canvas(table).then(canvas => {
      elementContainer.appendChild(canvas);
      canvas.toBlob(blob => navigator.clipboard.write([new ClipboardItem({'image/png': blob})]));
      alert('😁 The screenshot for the table currently displayed was copied to the clipboard, you can now paste into the WhatsApp chats 😁');
    });
  } catch (err) {
    document.querySelector("#output").innerHTML = err.message;
  }
};
Then add the following CDN script in the head of the document:
<script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.4.1/html2canvas.min.js"></script>
I am using the Autodesk Forge viewer and want to remove the 'measure' tool from the toolbar. I have tried the following, but it will not remove the 'measure' button:
const onToolbarCreated = (e) => {
  const settingsTools = viewer.toolbar.getControl('settingsTools')
  // settingsTools.removeControl('toolbar-modelStructureTool')
  // settingsTools.removeControl('toolbar-propertiesTool')
  settingsTools.removeControl('toolbar-settingsTool');
  settingsTools.removeControl('toolbar-measureTool');
  //settingsTools.removeControl('toolbar-fullscreenTool')
}
All of the other removeControl() calls work except the one for the measure tool. Any guidance on how I could remove this button from the viewer would be greatly appreciated!
Cheers!
EDIT: I have tried this without success:
const onToolbarCreated = (e) => {
  const settingsTools = viewer.toolbar.getControl('settingsTools');
  const modelTools = viewer.toolbar.getControl('modelTools');
  modelTools.removeControl('toolbar-measurementSubmenuTool');
  // settingsTools.removeControl('toolbar-modelStructureTool')
  // settingsTools.removeControl('toolbar-propertiesTool')
  settingsTools.removeControl('toolbar-settingsTool');
  //settingsTools.removeControl('toolbar-measurementSubmenuTool');
  //settingsTools.removeControl('toolbar-fullscreenTool')
}
If you are not planning to use it anymore, you can simply unload the extension from your project:
viewer.unloadExtension("Autodesk.Measure");
The measure tool is in the modelTools group:
const modelTools = viewer.toolbar.getControl('modelTools')
modelTools.removeControl('toolbar-measurementSubmenuTool')
I've made a "pluggable" system in React, which dynamically runs tiny "apps" which consist of an HTML, JS and CSS file. The HTML and CSS files are optional. They intercommunicate through the window object.
I'm dynamically loading the three files here, but I'm having the problem that my CSS classes fail to work about 1/5 of the time. They don't even seem to get parsed, since I cannot manually apply them in Chrome devtools either.
I've tried using both link and style tags to load the CSS, but both have the same problem. Even a 1000ms setTimeout between the CSS and HTML injection doesn't help. CSS parsing consistently fails roughly every third time the component mounts.
I've tried Chrome, Firefox, and Safari. Same problem in all three.
I'm kind of stuck; I'd love to get some feedback on this.
Here is a video of the issue: (the "app" here is a simple SVG file viewer) http://www.giphy.com/gifs/dvHjBBolgA1xAdyRsv
const windowInitialized = useElementBlockInitialization({
  id: elementBlockID,
  payload: payload,
  onResult: onResult
});

const [styleAndHTMLInitialized, setStyleAndHTMLInitialized] = useState(false);

// after some properties are set in Window, run this effect
useEffect(() => {
  let gettingStyleAndHTML = false;
  if (windowInitialized) {
    gettingStyleAndHTML = true;
    // async function that fetches some html and css as a string (both potentially null)
    getStyleAndHTML().then(({ styleBody, htmlBody }) => {
      if (gettingStyleAndHTML) {
        if (styleBody) {
          const styleElement = document.createElement('style');
          styleElement.type = 'text/css';
          styleElement.appendChild(document.createTextNode(styleBody));
          document.head.appendChild(styleElement);
        }
        if (htmlBody) {
          // containerElement is a ref
          containerElement.current.innerHTML = htmlBody;
        }
        setStyleAndHTMLInitialized(true);
      }
    });
  }
  return () => {
    gettingStyleAndHTML = false;
  };
}, [windowInitialized]);

// after the CSS and HTML is injected, run this hook
useEffect(() => {
  if (styleAndHTMLInitialized) {
    const scriptElement = document.createElement('script');
    scriptElement.setAttribute('data-eb-container-id', containerElementID);
    scriptElement.setAttribute('data-eb-id', elementBlockID);
    scriptElement.setAttribute('src', makeElementBlockBaseURL() + '.js');
    document.head!.appendChild(scriptElement);
    return () => {
      scriptElement.remove();
    };
  }
  return;
}, [styleAndHTMLInitialized]);

// only render the container once the window properties are set
return windowInitialized ? (
  <Container ref={containerElement} id={containerElementID} />
) : null;
I figured it out.
My automatically generated class names occasionally started with a number, and CSS class names apparently cannot start with a number!
D'oh.
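A small guard in the name generator avoids the problem. Here is a minimal sketch; the "eb-" prefix is an arbitrary choice, not part of the original code:

```javascript
// Sketch: make a generated class name a valid CSS identifier by
// prefixing it when it starts with a digit. The "eb-" prefix is arbitrary.
function safeClassName(name) {
  return /^[0-9]/.test(name) ? "eb-" + name : name;
}

console.log(safeClassName("3f9ac"));  // "eb-3f9ac"
console.log(safeClassName("viewer")); // "viewer"
```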
I am making a web app that can be open for a long time. I don't want to load audio at load time (when the HTML gets downloaded and parsed) to make the first load as fast as possible and to spare precious resources for mobile users. Audio is disabled by default.
Putting the audio in CSS or using preload is not appropriate here because I don't want to load it at load time with the rest.
I am searching for the ideal method to load audio at run time (after a checkbox has been checked, which can happen 20 minutes after opening the app), given a list of audio elements.
The list is already in a variable allSounds. I have the following audio in a webpage (there are more):
<audio preload="none">
<source src="sound1.mp3">
</audio>
I want to keep the same HTML, because after the second visit I can easily change it to the following (this works fine with my server-side HTML generation)
<audio preload="auto">
<source src="sound1.mp3">
</audio>
and it works.
Once the option is turned on, I want to load the sounds but not play them immediately. I know that .play() loads the sounds, but I want to avoid the delay between pressing a button and the associated feedback sound.
In my app, it is better not to play a sound at all than to play it late.
I made this event handler to load the sounds (it works), but the Chrome console says the download was cancelled and then restarted; I don't know exactly what I am doing wrong.
Is this the correct way to force-load sounds? What are the other ways, if possible without changing the HTML?
let loadSounds = function () {
  allSounds.forEach(function (sound) {
    sound.preload = "auto";
    sound.load();
  });
  loadSounds = function () {}; // Execute once
};
Here is the playSound function, though it is not very important for the question:
const playSound = function (sound) {
  /* only play ready sounds to avoid delays from fetching */
  if (!soundEnabled) {
    return;
  }
  if (sound.readyState < sound.HAVE_ENOUGH_DATA) {
    return;
  }
  sound.play();
};
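For reference, readyState is just a number from 0 to 4 (the HTMLMediaElement constants HAVE_NOTHING through HAVE_ENOUGH_DATA), so the gate in playSound is a plain numeric comparison. A sketch using the spec's constant values:

```javascript
// HTMLMediaElement readyState values per the HTML spec:
// 0 HAVE_NOTHING, 1 HAVE_METADATA, 2 HAVE_CURRENT_DATA,
// 3 HAVE_FUTURE_DATA, 4 HAVE_ENOUGH_DATA
const HAVE_ENOUGH_DATA = 4;

// The same gate as in playSound, extracted as a pure function.
function isReadyToPlay(readyState) {
  return readyState >= HAVE_ENOUGH_DATA;
}

console.log(isReadyToPlay(0)); // false: nothing loaded yet
console.log(isReadyToPlay(4)); // true: enough data to play without stalling
```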
Side question: Should there be a preload="full" in the HTML spec?
See also:
Preload mp3 file in queue to avoid any delay in playing the next file in queue
how we can Play Audio with text highlight word by word in angularjs
To cache the audio you will need to Base64 encode your MP3 files and prefix the encoded data with data:audio/mpeg;base64,
Then you can preload/cache the file with CSS using something like:
body::after {
  content: url(myfile.mp3);
  display: none;
}
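The Base64 encoding itself is typically done ahead of time, e.g. as a build step. A minimal Node sketch; the three bytes here are only a stand-in for real MP3 file contents, which you would read from disk:

```javascript
// Sketch (Node): build a data: URI from raw MP3 bytes so it can be
// embedded in CSS. In practice, read the bytes with fs.readFileSync.
const mp3Bytes = Buffer.from([0x49, 0x44, 0x33]); // "ID3" tag header bytes as a stand-in
const dataUri = "data:audio/mpeg;base64," + mp3Bytes.toString("base64");
console.log(dataUri); // "data:audio/mpeg;base64,SUQz"
```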
I think I would just use the preload functionality without involving the audio tag at all...
Example:
var link = document.createElement('link')
link.rel = 'preload'
link.href = 'sound1.mp3'
link.as = 'audio'
link.onload = function () {
  // Done loading the mp3
}
document.head.appendChild(link)
I'm quite sure that I've found a solution for you. As far as I'm concerned, your sounds are additional functionality and not required for everybody. In that case I would propose loading the sounds with pure JavaScript after the user has clicked the unmute button.
A simple sketch of solution is:
window.addEventListener('load', function (event) {
  var audioloaded = false;
  var audioobject = null;

  // Load audio on first click
  document.getElementById('unmute').addEventListener('click', function (event) {
    if (!audioloaded) { // Load audio on first click
      audioobject = document.createElement("audio");
      audioobject.preload = "auto"; // Load now!!!
      var source = document.createElement("source");
      source.src = "sound1.mp3"; // Append src
      audioobject.appendChild(source);
      audioobject.load(); // Just for sure, old browsers fallback
      audioloaded = true; // Globally remember that audio is loaded
    }
    // Other mute / unmute stuff here that you already got... ;)
  });

  // Play sound on click
  document.getElementById('playsound').addEventListener('click', function (event) {
    audioobject.play();
  });
});
Of course, the button should have id="unmute", and for simplicity the body id="body" and the play-sound button id="playsound". You can modify that to suit your needs. After that, when someone clicks unmute, the audio object will be generated and dynamically loaded.
I didn't try this solution, so there may be some little mistakes (I hope not!). But I hope this gives you an idea (a sketch) of how this can be accomplished using pure JavaScript.
Don't be afraid that this is pure JavaScript without HTML; this is additional functionality, and JavaScript is the best way to implement it.
You can use a Blob URL representation of the file:
let urls = [
  "https://upload.wikimedia.org/wikipedia/commons/b/be/Hidden_Tribe_-_Didgeridoo_1_Live.ogg"
  , "https://upload.wikimedia.org/wikipedia/commons/6/6e/Micronesia_National_Anthem.ogg"
];

let audioNodes, mediaBlobs, blobUrls;

const request = url => fetch(url).then(response => response.blob())
  .catch(err => { throw err });

const handleResponse = response => {
  mediaBlobs = response;
  blobUrls = mediaBlobs.map(blob => URL.createObjectURL(blob));
  audioNodes = blobUrls.map(blobURL => new Audio(blobURL));
}

const handleMediaSelection = () => {
  const select = document.createElement("select");
  document.body.appendChild(select);
  const label = new Option("Select audio to play");
  select.appendChild(label);
  select.onchange = () => {
    audioNodes.forEach(audio => {
      audio.pause();
      audio.currentTime = 0;
    });
    audioNodes[select.value].play();
  }
  select.onclick = () => {
    const media = audioNodes.find(audio => audio.currentTime > 0);
    if (media) {
      media.pause();
      media.currentTime = 0;
      select.selectedIndex = 0;
    }
  }
  mediaBlobs.forEach((blob, index) => {
    let option = new Option(
      new URL(urls[index]).pathname.split("/").pop()
      , index
    );
    select.appendChild(option);
  })
}

const handleError = err => {
  console.error(err);
}

Promise.all(urls.map(request))
  .then(handleResponse)
  .then(handleMediaSelection)
  .catch(handleError);