I noticed that Chrome Canary has an implementation of a Web Serial API at navigator.serial, and I'm interested in trying it out. The previous API for serial ports, chrome.serial, uses listener callbacks, while this new API deals in streams.
I've looked at the example at https://wicg.github.io/serial/#usage-example, but it seems pretty bare-bones.
<html>
<script>
let port;
let writer;
// A single newline byte. ArrayBuffer contents can't be assigned by index;
// they must be written through a typed-array view such as Uint8Array.
const buffy = new Uint8Array([10]);

const test = async function () {
  const requestOptions = {
    // Filter on devices with the Arduino USB vendor ID.
    //filters: [{ usbVendorId: 0x2341 }],
  };
  // Request an Arduino from the user.
  port = await navigator.serial.requestPort(requestOptions);
  // Open and begin reading. Note the option name is baudRate, not baudrate.
  await port.open({ baudRate: 115200 });
  const reader = port.readable.getReader();
  writer = port.writable.getWriter();
  //writer.write(buffy);
  while (true) {
    // read() resolves with { value, done }, not { data }.
    const { value, done } = await reader.read();
    if (done) break;
    console.log(value);
  }
}; // end of function
</script>
<button onclick="test()">Click It</button>
</html>
I'd like to find a working example, and eventually a way to migrate an app from chrome.serial to navigator.serial.
Hey, I'm battling this as well. To enable this experimental API, open Canary and enter this in the address bar: chrome://flags/#enable-experimental-web-platform-features
Enable that flag and relaunch the browser. Now you can use it.
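Once the flag is enabled and the browser has relaunched, a quick feature-detection check (a minimal sketch) confirms the API is exposed:

// Minimal sketch: confirm the Web Serial API is exposed after enabling the flag.
if ("serial" in navigator) {
  console.log("navigator.serial is available");
} else {
  console.log("Web Serial API not found; check chrome://flags and relaunch");
}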
I've been trying out the Web Serial API in Chrome (https://web.dev/serial/) to do some basic communication with an Arduino board. However, I've noticed quite a substantial delay when reading data from the serial port. The same issue is present in some demos, but not all.
For instance, the WebSerial demo linked towards the bottom of that page has a near-instantaneous read, while the Serial Terminal example shows a noticeable read delay (note that the write is triggered the moment a character is entered on the keyboard).
WebSerial being open source lets me check for differences against my own implementation; however, my code performs much like the second example.
The relevant code:
this.port = await navigator.serial.requestPort({ filters });
await this.port.open({ baudRate: 115200, bufferSize: 255, dataBits: 8, flowControl: 'none', parity: 'none', stopBits: 1 });
this.open = true;
this.monitor();

private monitor = async () => {
  // Messages are terminated by EOT (4) followed by ETX (3).
  const dataEndFlag = new Uint8Array([4, 3]);
  while (this.open && this.port?.readable) {
    const reader = this.port.readable.getReader();
    try {
      let data: Uint8Array = new Uint8Array([]);
      while (this.open) {
        const { value, done } = await reader.read();
        if (done) {
          this.open = false;
          break;
        }
        if (value) {
          data = Uint8Array.of(...data, ...value);
        }
        // When the last two bytes match the end flag, decode and store the message.
        if (data.slice(-2).every((val, idx) => val === dataEndFlag[idx])) {
          const decoded = this.decoder.decode(data);
          this.messages.push(decoded);
          data = new Uint8Array([]);
        }
      }
    } catch {
      // Reading failed; loop around and reacquire a reader.
    } finally {
      // Release the lock so getReader() can succeed on the next iteration.
      reader.releaseLock();
    }
  }
}

public write = async (data: string) => {
  if (this.port?.writable) {
    const writer = this.port.writable.getWriter();
    await writer.write(this.encoder.encode(data));
    writer.releaseLock();
  }
}
The equivalent WebSerial code can be found here; mine is pretty much an exact replica. From what I can observe, it seems to hang at await reader.read(); for a brief period of time.
This occurs on both a Windows 10 device and a macOS Monterey device. The specific hardware is an Arduino Pro Micro connected to a USB port.
Has anyone experienced this same scenario?
Update: I did some additional testing with more verbose logging. It seems that the time between the write and read is exactly 1 second every time.
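For reference, the gap can be measured with a hypothetical wrapper like the one below (timedReadLoop is illustrative, not part of my app; it just times each read() on an existing reader):

// Hypothetical helper: log how long each read() takes to resolve.
async function timedReadLoop(reader) {
  let last = performance.now();
  while (true) {
    const { value, done } = await reader.read();
    console.log(`read() resolved after ${(performance.now() - last).toFixed(1)} ms`, value);
    last = performance.now();
    if (done) break;
  }
}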
The delay may result from serialEvent() handling in your Arduino sketch: set Serial.setTimeout(1);
That means a 1 millisecond timeout instead of the default 1000 milliseconds, which matches the exactly-1-second gap you measured.
I am a total newb; I just started looking into this today. I have a Chromebook running Chrome Version 96.0.4664.111 (Official Build) (64-bit), and a Raspberry Pi Pico onto which I have loaded a Python bootloader (drag & drop). I am trying to access the Pico from my browser over serial to load my source code, since I cannot install Thonny on my Chromebook. I have pieced together this JavaScript function that uses the Web Serial API to connect to the Pico.
const filters = [
  { usbVendorId: 0x2E8A, usbProductId: 0x0003 },
  { usbVendorId: 0x2E8A, usbProductId: 0x0005 }
];
// Prompt the user to select a Raspberry Pi Pico device.
const port = await navigator.serial.requestPort({ filters });
const { usbProductId, usbVendorId } = port.getInfo();
// Wait for the serial port to open.
await port.open({ baudRate: 9600 });

// Pipe incoming bytes through a text decoder.
const textDecoder = new TextDecoderStream();
const readableStreamClosed = port.readable.pipeTo(textDecoder.writable);
const reader = textDecoder.readable.getReader();

// Listen to data coming from the serial device.
while (true) {
  const { value, done } = await reader.read();
  if (done) {
    // Allow the serial port to be closed later.
    reader.releaseLock();
    break;
  }
  // value is a string (the TextDecoderStream has already decoded the bytes).
  console.log(value);
}

// Pipe outgoing strings through a text encoder.
const textEncoder = new TextEncoderStream();
const writableStreamClosed = textEncoder.readable.pipeTo(port.writable);
const writer = textEncoder.writable.getWriter();
await writer.write("hi");
// Allow the serial port to be closed later.
writer.releaseLock();
I cannot find a way to make this program upload a file; I would really appreciate it if someone could help me out.
Please excuse me if I'm being unclear or extremely stupid. I am completely new to this, and I am really tired from New Year's last night. Thanks!
I have found a suitable solution to my question: tinkerdoodle.cc.
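For anyone who wants to do the upload by hand instead: MicroPython's raw REPL can be driven over the same TextEncoderStream writer from the code above. The outline below is an untested sketch; uploadFile is a hypothetical helper, and the control characters follow MicroPython's documented raw-REPL convention (Ctrl-C interrupt, Ctrl-A enter raw REPL, Ctrl-D execute):

// Untested sketch: write a small text file to a MicroPython board over the raw REPL.
// Assumes `writer` is the TextEncoderStream writer from the snippet above.
async function uploadFile(writer, filename, contents) {
  await writer.write("\x03\x03"); // Ctrl-C twice: interrupt any running program
  await writer.write("\x01");     // Ctrl-A: enter raw REPL mode
  // JSON.stringify doubles as a quick Python string literal for simple ASCII content.
  await writer.write(
    `with open(${JSON.stringify(filename)}, "w") as f:\n` +
    `    f.write(${JSON.stringify(contents)})\n`
  );
  await writer.write("\x04");     // Ctrl-D: execute the pasted code
}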
Hello, I am a newbie in WebRTC, and I tried this code:
const yourVideo = document.querySelector("#face_cam_vid");
const theirVideo = document.querySelector("#thevid");

(async () => {
  if (!("mediaDevices" in navigator) || !("RTCPeerConnection" in window)) {
    alert("Sorry, your browser does not support WebRTC.");
    return;
  }
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  yourVideo.srcObject = stream;

  // Note: the Google STUN host is stun.l.google.com (letter l), not stun.1.google.com.
  const configuration = {
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }]
  };
  const yours = new RTCPeerConnection(configuration);
  const theirs = new RTCPeerConnection(configuration);

  for (const track of stream.getTracks()) {
    yours.addTrack(track, stream);
  }
  theirs.ontrack = e => theirVideo.srcObject = e.streams[0];

  yours.onicecandidate = e => theirs.addIceCandidate(e.candidate);
  theirs.onicecandidate = e => yours.addIceCandidate(e.candidate);

  const offer = await yours.createOffer();
  await yours.setLocalDescription(offer);
  await theirs.setRemoteDescription(offer);

  const answer = await theirs.createAnswer();
  await theirs.setLocalDescription(answer);
  await yours.setRemoteDescription(answer);
})();
It works, but only partly (https://imgur.com/a/nG7Xif6). In short, I am creating one-to-one random video chatting, like in Omegle, but this code displays both the "remote" (stranger's) and "local" (mine) video with my local stream. What I want is: a user waits for a second user to start a video chat, and when a third user enters, they should wait for a fourth, and so on. I hope someone can help me with that.
You're confusing a local-loop demo—what you have—with a chat room.
A local-loop demo is a short-circuit client-only proof-of-concept, linking two peer connections on the same page to each other. Utterly useless, except to prove the API works and the browser can talk to itself.
It contains localPeerConnection and remotePeerConnection—or pc1 and pc2—and is not how one would typically write WebRTC code. It leaves out signaling.
Signaling is hard to demo without a server, but I show people this tab demo. Right-click and open it in two adjacent windows, and click the Call! button on one to call the other. It uses localSocket, a non-production hack I made using localStorage for signaling.
Just as useless, a tab-demo looks more like real code:
const pc = new RTCPeerConnection();

call.onclick = async () => {
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  for (const track of video.srcObject.getTracks()) {
    pc.addTrack(track, video.srcObject);
  }
};

pc.ontrack = e => video.srcObject = e.streams[0];
pc.oniceconnectionstatechange = e => console.log(pc.iceConnectionState);
pc.onicecandidate = ({ candidate }) => sc.send({ candidate });
pc.onnegotiationneeded = async e => {
  await pc.setLocalDescription(await pc.createOffer());
  sc.send({ sdp: pc.localDescription });
};

const sc = new localSocket();
sc.onmessage = async ({ data: { sdp, candidate } }) => {
  if (sdp) {
    await pc.setRemoteDescription(sdp);
    if (sdp.type == "offer") {
      await pc.setLocalDescription(await pc.createAnswer());
      sc.send({ sdp: pc.localDescription });
    }
  } else if (candidate) await pc.addIceCandidate(candidate);
};
There's a single pc (your half of the call), and there's an onmessage signaling callback to handle the timing-critical, asymmetric offer/answer negotiation correctly. The same JS runs on both sides.
This still isn't a chat room. For that you need server logic to determine how people meet, and a WebSocket server for signaling; a minimal client-side sketch follows below. Try this tutorial on MDN, which culminates in a chat demo.
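To move the tab demo toward something deployable, the localSocket hack can be swapped for a WebSocket. This is a minimal sketch, assuming a hypothetical relay server at wss://example.com/signal that forwards each JSON message to the other peer (the URL and relay behavior are assumptions, not a real service; pc is the peer connection from the tab demo above):

// Minimal sketch: WebSocket signaling in place of the localStorage-based localSocket.
const ws = new WebSocket("wss://example.com/signal");
const send = obj => ws.send(JSON.stringify(obj));

ws.onmessage = async ({ data }) => {
  // Same message shape as the sc.onmessage handler above.
  const { sdp, candidate } = JSON.parse(data);
  if (sdp) {
    await pc.setRemoteDescription(sdp);
    if (sdp.type == "offer") {
      await pc.setLocalDescription(await pc.createAnswer());
      send({ sdp: pc.localDescription });
    }
  } else if (candidate) {
    await pc.addIceCandidate(candidate);
  }
};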
I would like to use the Google IoT Core API from a Firebase Function.
It all works, but it is very slow. I think this is due to the authentication process that needs to be carried out on every call. Is there a way to speed this up?
Right now I have this:
function getClient(cb) {
  const API_VERSION = 'v1';
  const DISCOVERY_API = 'https://cloudiot.googleapis.com/$discovery/rest';
  const jwtAccess = new google.auth.JWT();
  jwtAccess.fromJSON(serviceAccount);
  // Note that if you require additional scopes, they should be specified as a
  // string, separated by spaces.
  jwtAccess.scopes = 'https://www.googleapis.com/auth/cloud-platform';
  // Set the default authentication to the above JWT access.
  google.options({ auth: jwtAccess });
  const discoveryUrl = `${DISCOVERY_API}?version=${API_VERSION}`;
  google.discoverAPI(discoveryUrl, {}).then(end_point => {
    cb(end_point);
  });
}
And this allows me to do:
export function sendCommandToDevice(deviceId, subfolder, mqtt_data) {
  const cloudRegion = 'europe-west1';
  const projectId = 'my-project-id';
  const registryId = 'my-registry-id';
  getClient(client => {
    const parentName = `projects/${projectId}/locations/${cloudRegion}`;
    const registryName = `${parentName}/registries/${registryId}`;
    const binaryData = Buffer.from(mqtt_data).toString('base64');
    const request = {
      name: `${registryName}/devices/${deviceId}`,
      binaryData: binaryData,
      subfolder: subfolder
    };
    client.projects.locations.registries.devices.sendCommandToDevice(request,
      (err, data) => {
        if (err) {
          console.log('Could not send command:', deviceId);
        }
      });
  });
}
The way I've found to speed it up is to avoid redoing the authentication and discovery on every call. I've solved it by doing this:
const google = new GoogleApis();
const API_VERSION = 'v1';
const DISCOVERY_API = 'https://cloudiot.googleapis.com/$discovery/rest';
const jwtAccess = new google.auth.JWT();
jwtAccess.fromJSON(serviceAccount);
// Note that if you require additional scopes, they should be specified as a
// string, separated by spaces.
jwtAccess.scopes = 'https://www.googleapis.com/auth/cloud-platform';
// Set the default authentication to the above JWT access.
google.options({ auth: jwtAccess });
const discoveryUrl = `${DISCOVERY_API}?version=${API_VERSION}`;

// Discover the API once at module load and cache the resulting client.
var googleClient;
google.discoverAPI(discoveryUrl, {}).then(client => {
  googleClient = client;
});

// Returns the cached API client.
function getClient(cb) {
  cb(googleClient);
}
But what happens when the client expires? Is there any good solution for using the Google APIs from Firebase Functions?
The problem may be the discovery piece. There's a direct IoT Core admin REST API, so you don't have to use discovery... I think. I haven't worked with Firebase Functions, but they're roughly equivalent to Google Cloud Functions, which may end up working here also. The code we ran (in a live demo we did) to do what you're doing is here, if you want to tinker around and see if you can get it running in a Firebase Function.
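As an illustration of skipping discovery, here is a hedged sketch of calling the sendCommandToDevice REST endpoint directly. It reuses the jwtAccess JWT client from the question (getAccessToken() returns a cached token and refreshes it when expired), and assumes a Node 18+ runtime for the global fetch:

// Hedged sketch: call the Cloud IoT Core REST endpoint directly, no discovery step.
// Assumes `jwtAccess` is the authorized google.auth.JWT client from the question.
async function sendCommandDirect(deviceId, subfolder, mqtt_data) {
  const name = 'projects/my-project-id/locations/europe-west1' +
    `/registries/my-registry-id/devices/${deviceId}`;
  const url = `https://cloudiot.googleapis.com/v1/${name}:sendCommandToDevice`;
  const { token } = await jwtAccess.getAccessToken(); // cached; auto-refreshed on expiry
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      binaryData: Buffer.from(mqtt_data).toString('base64'),
      subfolder: subfolder
    })
  });
  if (!res.ok) throw new Error(`sendCommandToDevice failed: ${res.status}`);
}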
Working on a Chrome extension that needs to integrate with IndexedDB. I'm trying to figure out how to use Dexie.js, and I've found a bunch of samples. They don't look too complicated. One example particularly interesting for exploring IndexedDB with Dexie is at https://github.com/dfahlander/Dexie.js/blob/master/samples/open-existing-db/dump-databases.html
However, when I run the one above (the "dump utility"), it does not see my IndexedDB databases, telling me: There are no databases at the current origin.
From the developer tools Application tab, under Storage, I can see my IndexedDB database.
Is this some sort of permissions issue? Can any IndexedDB database be accessed by any tab/user?
What should I be looking at?
Thank you.
In Chrome/Opera, there is a non-standard API, webkitGetDatabaseNames(), that Dexie.js uses to retrieve the list of database names at the current origin. For other browsers, Dexie emulates this API by keeping an up-to-date database of database names for each origin, so:
For Chromium browsers, Dexie.getDatabaseNames() will list all databases at the current origin, but for non-Chromium browsers, only databases created with Dexie will be shown.
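For example, listing the names from the console:

// List the database names Dexie can see at the current origin.
Dexie.getDatabaseNames().then(names => console.log(names));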
If you need to dump the contents of each database, have a look at this issue, which basically gives:
interface TableDump {
  table: string;
  rows: any[];
}

// ("export" and "import" are reserved words, so the functions are renamed here.)
function exportDB(db: Dexie): Promise<TableDump[]> {
  return db.transaction('r', db.tables, () => {
    return Promise.all(
      db.tables.map(table => table.toArray()
        .then(rows => ({ table: table.name, rows: rows }))));
  });
}

function importDB(data: TableDump[], db: Dexie) {
  return db.transaction('rw', db.tables, () => {
    return Promise.all(data.map(t =>
      db.table(t.table).clear()
        .then(() => db.table(t.table).bulkAdd(t.rows))));
  });
}
Combine the functions with JSON.stringify() and JSON.parse() to fully serialize the data.
const db = new Dexie('mydb');
db.version(1).stores({ friends: '++id,name,age' });

(async () => {
  // Export
  const allData = await exportDB(db);
  const serialized = JSON.stringify(allData);
  // Import (note: the payload must be valid JSON, with quoted keys)
  const jsonToImport = '[{"table":"friends","rows":[{"id":1,"name":"foo","age":33}]}]';
  const dataToImport = JSON.parse(jsonToImport);
  await importDB(dataToImport, db);
})();
A working example of dumping data to a JSON file using the current IndexedDB API, as described at:
https://developers.google.com/web/ilt/pwa/working-with-indexeddb
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB
The snippet below will dump recent messages from a Gmail account with Offline Mode enabled in the Gmail settings.
// indexedDB.open() takes (name, version) and returns an IDBOpenDBRequest,
// not a promise; upgrades are handled via onupgradeneeded, not a callback.
var openRequest = indexedDB.open("your_account#gmail.com_xdb", 109);
openRequest.onupgradeneeded = (event) => {
  console.log(event.target.result);
};
openRequest.onerror = (event) => {
  console.log("oh no!");
};
openRequest.onsuccess = (event) => {
  console.log(event);
  var db = event.target.result;
  var transaction = db.transaction(["item_messages"]);
  var objectStore = transaction.objectStore("item_messages");
  var allItemsRequest = objectStore.getAll();
  allItemsRequest.onsuccess = function () {
    var all_items = allItemsRequest.result;
    console.log(all_items);
    // save items as JSON file
    var bb = new Blob([JSON.stringify(all_items)], { type: "text/plain" });
    var a = document.createElement("a");
    a.download = "gmail_messages.json";
    a.href = window.URL.createObjectURL(bb);
    a.click();
  };
};
Running the code above from DevTools > Sources > Snippets also lets you set breakpoints, debug, and inspect the objects.
Make sure you pass the right version of the database as the second parameter to indexedDB.open(...). To peek at the value currently used by your browser, the following code can be used:
indexedDB.databases().then(function (r) {
  console.log(r);
});