IndexedDB in Safari 10 now supports blobs. This works fine on desktop; however, Mobile Safari on iOS 10 throws an error:
UnknownError
and sometimes in combination:
TransactionInactiveError (DOM IDBDatabase Exception): Failed to store record in an IDBObjectStore:
The transaction is inactive or finished.
The code (shortened):
var indexedDB = window.indexedDB || window.webkitIndexedDB || window.mozIndexedDB || window.msIndexedDB,
    READ_WRITE = IDBTransaction && IDBTransaction.READ_WRITE ? IDBTransaction.READ_WRITE : 'readwrite',
    storeName = 'files',
    db;

init: function() {
    var request = indexedDB.open('mydb');
    request.onerror = ...;
    request.onupgradeneeded = function() {
        db = request.result;
        db.createObjectStore(storeName);
    };
    request.onsuccess = function() {
        db = request.result;
    };
},

save: function(id, data) {
    var put = function(data) {
        var objectStore = db.transaction([storeName], READ_WRITE).objectStore(storeName),
            request = objectStore.put(data, id);
        request.onerror = ...;
        request.onsuccess = ...;
    };
    // not all IndexedDB implementations support storing blobs; the only detection is try-catch
    try {
        put(data);
    } catch (err) {
        if (data instanceof Blob) {
            Helpers.blobToDataURL(data, put);
        }
    }
}
On Mobile Safari 10, .put() doesn't throw synchronously as before; the error only surfaces later, in the async error callback.
Base64 strings work fine.
Is this a bug in Mobile Safari, or do I have to change my code?
Test Case: http://fiddle.jshell.net/j7wh60vo/7/
Ran across the same problem. Chrome 54 and Safari 10 work fine on desktop, but on Mobile Safari I kept getting the UnknownError when trying to store a Blob in IndexedDB. I can confirm that this really is an issue with Blobs on Mobile Safari, and not some misuse of the API.
Fortunately, ArrayBuffers work fine. So instead I downloaded the images like this:
xhr.open('GET', url, true);
xhr.responseType = 'arraybuffer';
Then saved them into IndexedDB as ArrayBuffers, and converted them to Blobs after pulling them out to get a URL:
putRequest = objectStore.put(arrayBuffer, id);
putRequest.onsuccess = function(event) {
  objectStore.get(id).onsuccess = function(event) {
    var blob = new Blob([event.target.result], { type: 'image/jpeg' });
    var URL = window.URL || window.webkitURL;
    var blobUrl = URL.createObjectURL(blob);
  };
};
I'd rather not have to convert ArrayBuffers to Blobs like this as I assume there is a performance penalty. But it works.
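If the data already exists as a Blob (rather than coming from an XHR), a minimal sketch of the same workaround, reusing db/storeName/id from the question and a hypothetical blobToArrayBuffer helper, could be:
function blobToArrayBuffer(blob, callback) {
  var reader = new FileReader();
  reader.onload = function () { callback(reader.result); };
  reader.readAsArrayBuffer(blob);
}

blobToArrayBuffer(blob, function (buffer) {
  // Open the transaction inside the callback; one opened before the async
  // read would already be inactive by the time this runs.
  var store = db.transaction([storeName], 'readwrite').objectStore(storeName);
  store.put(buffer, id);
});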
That error looks to me like you have to change the code. That error does not indicate an issue with blobs. That error indicates you have a problem somewhere in how you call functions. To better answer your question, you need to post more of the surrounding code. Specifically, display the parts of the code where you create the transaction and where you create requests on the transaction.
Edit: first, remove the window.indexedDB stuff. Second, do not use 'db' in the way you are using it, because that will not work: the db may be closed by the time save is called.
function save(id, data) {
  var openRequest = indexedDB.open(...);
  openRequest.onerror = console.error;
  openRequest.onsuccess = function(event) {
    var db = openRequest.result;
    // Open the transaction
    var tx = db.transaction(storeName, 'readwrite');
    var store = tx.objectStore(storeName);
    // Immediately use the transaction
    try {
      var putRequest = store.put(data, id); // put() is on the store, not the transaction
      putRequest.onerror = console.error;
    } catch (error) {
      console.log(error);
    }
  };
}
Edit2: Additional notes:
Prefixes have been removed; just use indexedDB, not mozIndexedDB or webkitIndexedDB etc.
Transaction mode constants have been removed; use either 'readonly' or 'readwrite', or nothing (it defaults to 'readonly').
I am somewhat confused about how you are calling request = transaction.put. As far as I am aware, there is no method IDBTransaction.prototype.put in the spec: https://w3c.github.io/IndexedDB/#idbtransaction. I am confused as to why the Mozilla docs show an example with transaction.put. Inspecting the prototype of IDBTransaction in Chrome 55 does not show a put method.
There is IDBObjectStore.prototype.put. As currently written, your code should not work at all, on any platform, so if it ever did work, I am surprised. You should only be using something like var store = transaction.objectStore('store'); store.put(obj); where you call put on the object store.
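For reference, a minimal spec-correct call chain (the store name, value, and key here are assumed):
var tx = db.transaction('files', 'readwrite');
var store = tx.objectStore('files'); // put() lives on the object store...
var request = store.put(value, key); // ...not on the transaction
request.onsuccess = function () { console.log('stored', key); };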
I'm using the new v2 Twilio Javascript SDK to make calls from the browser to other people.
This works fine but I've been asked to add volume controls for the incoming audio stream.
After some research it seems that I need to take the remote stream from the call and feed it through a gain node to reduce the volume.
Unfortunately the result from call.getRemoteStream is always null even when I can hear audio from the call.
I've tested this on latest Chrome and Edge and they have the same behavior.
Is there something else I need to do to access the remote stream?
Code:
// (function name added so this shortened excerpt parses; the original was anonymous)
async function makeCall(phoneNumber, token) {
  console.log("isSecureContext: " + window.isSecureContext); // check we can get the stream
  var options = {
    edge: 'ashburn', // US endpoint
    closeProtection: true // will warn user if you try to close browser window during an active call
  };
  var device = new Device(token, options);
  const connectionParams = {
    "phoneNumber": phoneNumber
  };
  var activeCall = await device.connect({ params: connectionParams });
  // Set up gain (volume) control for incoming audio
  // Note: getRemoteStream always returns null.
  var remoteStream = activeCall.getRemoteStream();
  if (remoteStream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(remoteStream);
    var gainNode = audioCtx.createGain();
    source.connect(gainNode);
    gainNode.connect(audioCtx.destination);
  } else {
    console.log("No remote stream on call");
  }
}
The log output is:
isSecureContext: true
then
No remote stream on call
Twilio support gave me the answer: you need to wait until you start receiving volume events before requesting the stream.
i.e.:
activeCall.on('volume', (inputVolume, outputVolume) => {
  if (inputVolume > 0) {
    var remoteStream = activeCall.getRemoteStream();
    ....
  }
});
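Putting the two snippets together, a sketch of the question's gain-node wiring deferred until audio is actually flowing might look like this (all names taken from the code above):
var gainNode = null;
activeCall.on('volume', function (inputVolume, outputVolume) {
  if (inputVolume > 0 && !gainNode) { // wire up once, after the stream exists
    var remoteStream = activeCall.getRemoteStream();
    if (remoteStream) {
      var audioCtx = new AudioContext();
      var source = audioCtx.createMediaStreamSource(remoteStream);
      gainNode = audioCtx.createGain();
      gainNode.gain.value = 0.5; // e.g. halve the incoming volume
      source.connect(gainNode);
      gainNode.connect(audioCtx.destination);
    }
  }
});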
Working on a Chrome Extension which needs to integrate with IndexedDB, and trying to figure out how to use Dexie.js. I found a bunch of samples, and they don't look too complicated. One example is particularly interesting for exploring IndexedDB with Dexie: https://github.com/dfahlander/Dexie.js/blob/master/samples/open-existing-db/dump-databases.html
However, when I run the one above - the "dump utility" - it does not see my IndexedDB databases, telling me: There are no databases at the current origin.
From the developer tools Application tab, under Storage, I see my IndexedDB database.
Is this some sort of a permissions issue? Can any indexedDB database be accessed by any tab/user?
What should I be looking at?
Thank you
In Chrome/Opera, there is a non-standard API, webkitGetDatabaseNames(), that Dexie.js uses to retrieve the list of database names on the current origin. For other browsers, Dexie emulates this API by keeping an up-to-date database of database names for each origin, so:
For chromium browsers, Dexie.getDatabaseNames() will list all databases at the current origin, but for non-chromium browsers, only databases created with Dexie will be shown.
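For example, using Dexie's static helper (per the note above, on non-chromium browsers this only lists Dexie-created databases):
Dexie.getDatabaseNames().then(function (names) {
  console.log(names); // e.g. ["mydb", ...]
});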
If you need to dump the contents of each database, have a look at this issue, which basically gives:
interface TableDump {
  table: string
  rows: any[]
}

// ('export' and 'import' are reserved words, hence the longer function names)
function exportDatabase(db: Dexie): Promise<TableDump[]> {
  return db.transaction('r', db.tables, () => {
    return Promise.all(
      db.tables.map(table => table.toArray()
        .then(rows => ({ table: table.name, rows: rows }))));
  });
}

function importDatabase(data: TableDump[], db: Dexie) {
  return db.transaction('rw', db.tables, () => {
    return Promise.all(data.map(t =>
      db.table(t.table).clear()
        .then(() => db.table(t.table).bulkAdd(t.rows))));
  });
}
Combine the functions with JSON.stringify() and JSON.parse() to fully serialize the data.
const db = new Dexie('mydb');
db.version(1).stores({ friends: '++id,name,age' });

(async () => {
  // Export
  const allData = await exportDatabase(db);
  const serialized = JSON.stringify(allData);
  // Import
  const jsonToImport = '[{"table": "friends", "rows": [{"id": 1, "name": "foo", "age": 33}]}]';
  const dataToImport = JSON.parse(jsonToImport);
  await importDatabase(dataToImport, db);
})();
A working example for dumping data to a JSON file using the current indexedDB API as described at:
https://developers.google.com/web/ilt/pwa/working-with-indexeddb
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB
The snippet below will dump recent messages from a gmail account with the Offline Mode enabled in the gmail settings.
var openRequest = indexedDB.open("your_account#gmail.com_xdb", 109); // open() takes only name and version

openRequest.onerror = (event) => {
  console.log("oh no!");
};

openRequest.onsuccess = (event) => {
  console.log(event);
  var db = event.target.result;
  var transaction = db.transaction(["item_messages"]);
  var objectStore = transaction.objectStore("item_messages");
  var allItemsRequest = objectStore.getAll();
  allItemsRequest.onsuccess = function () {
    var all_items = allItemsRequest.result;
    console.log(all_items);
    // save items as JSON file
    var bb = new Blob([JSON.stringify(all_items)], { type: "text/plain" });
    var a = document.createElement("a");
    a.download = "gmail_messages.json";
    a.href = window.URL.createObjectURL(bb);
    a.click();
  };
};
Running the code above from DevTools > Sources > Snippets will also let you set breakpoints, debug, and inspect the objects.
Make sure you set the right version of the database as the second parameter to indexedDB.open(...). To peek at the value used by your browser, the following code can be used:
indexedDB.databases().then(function (r) {
  console.log(r);
});
I'm building a web app that uses EvaporateJS to upload large files to Amazon S3 using Multipart Uploads. I noticed an issue where every time a new chunk was started the browser would freeze for ~2 seconds. I want the user to be able to continue to use my app while the upload is in progress, and this freezing makes that a bad experience.
I used Chrome's Timeline to look into what was causing this and found that it was SparkMD5's hashing. So I've moved the entire upload process into a Worker, which I thought would fix the issue.
Well the issue is now fixed in Edge and Firefox, but Chrome still has the exact same problem.
Here's a screenshot of my Timeline:
As you can see, during the freezes my main thread is doing basically nothing, with <8ms of JavaScript running during that time. All the work is occurring in my Worker thread, and even that is only running for ~600ms or so, not the 1386ms that my frame takes.
I'm really not sure what's causing the issue, are there any gotchas with Workers that I should be aware of?
Here's the code for my Worker:
var window = self; // For Worker-unaware scripts

// Shim to make Evaporate work in a Worker
var document = {
  createElement: function() {
    var href = undefined;
    var elm = {
      set href(url) {
        var obj = new URL(url);
        elm.protocol = obj.protocol;
        elm.hostname = obj.hostname;
        elm.pathname = obj.pathname;
        elm.port = obj.port;
        elm.search = obj.search;
        elm.hash = obj.hash;
        elm.host = obj.host;
        href = url;
      },
      get href() {
        return href;
      },
      protocol: undefined,
      hostname: undefined,
      pathname: undefined,
      port: undefined,
      search: undefined,
      hash: undefined,
      host: undefined
    };
    return elm;
  }
};
importScripts("/lib/sha256/sha256.min.js");
importScripts("/lib/spark-md5/spark-md5.min.js");
importScripts("/lib/url-parse/url-parse.js");
importScripts("/lib/xmldom/xmldom.js");
importScripts("/lib/evaporate/evaporate.js");
DOMParser = self.xmldom.DOMParser;
var defaultConfig = {
  computeContentMd5: true,
  cryptoMd5Method: function (data) { return btoa(SparkMD5.ArrayBuffer.hash(data, true)); },
  cryptoHexEncodedHash256: sha256,
  awsSignatureVersion: "4",
  awsRegion: undefined,
  aws_url: "https://s3-ap-southeast-2.amazonaws.com",
  aws_key: undefined,
  customAuthMethod: function(signParams, signHeaders, stringToSign, timestamp, awsRequest) {
    return new Promise(function(resolve, reject) {
      var signingRequestId = currentSigningRequestId++;
      postMessage(["signingRequest", signingRequestId, signParams.videoId, timestamp, awsRequest.signer.canonicalRequest()]);
      queuedSigningRequests[signingRequestId] = function(signature) {
        queuedSigningRequests[signingRequestId] = undefined;
        if (signature) {
          resolve(signature);
        } else {
          reject();
        }
      };
    });
  },
  //logging: false,
  bucket: undefined,
  allowS3ExistenceOptimization: false,
  maxConcurrentParts: 5
};
var currentSigningRequestId = 0;
var queuedSigningRequests = [];
var uploader; // Evaporate instance handle; a bare 'e' here would be shadowed by the onmessage event parameter below
var filekey;
onmessage = function(e) {
  var messageType = e.data[0];
  switch (messageType) {
    case "init":
      var globalConfig = {};
      for (var k in defaultConfig) {
        globalConfig[k] = defaultConfig[k];
      }
      for (var k in e.data[1]) {
        globalConfig[k] = e.data[1][k];
      }
      var uploadConfig = e.data[2];
      Evaporate.create(globalConfig).then(function(evaporate) {
        uploader = evaporate; // assign to the module-level handle, not a shadowing local
        filekey = globalConfig.bucket + "/" + uploadConfig.name;
        uploadConfig.progress = function(p, stats) {
          postMessage(["progress", p, stats]);
        };
        uploadConfig.complete = function(xhr, awsObjectKey, stats) {
          postMessage(["complete", xhr, awsObjectKey, stats]);
        };
        uploadConfig.info = function(msg) {
          postMessage(["info", msg]);
        };
        uploadConfig.warn = function(msg) {
          postMessage(["warn", msg]);
        };
        uploadConfig.error = function(msg) {
          postMessage(["error", msg]);
        };
        uploader.add(uploadConfig);
      });
      break;
    case "pause":
      uploader.pause(filekey);
      break;
    case "resume":
      uploader.resume(filekey);
      break;
    case "cancel":
      uploader.cancel(filekey);
      break;
    case "signature":
      var signingRequestId = e.data[1];
      var signature = e.data[2];
      queuedSigningRequests[signingRequestId](signature);
      break;
  }
};
Note that it relies on the calling thread to provide it with the AWS public key, bucket name, region, object key, and the input File object, which are all provided in the 'init' message. When it needs something signed, it sends a 'signingRequest' message to the parent thread, which is expected to provide the signature in a 'signature' message once it has been fetched from my API's signing endpoint.
I can't give a very good example or analyze what you are doing with only the Worker code, but I strongly suspect that the issue has to do with either the reading of the chunk on the main thread or some unexpected processing that you are doing on the chunk on the main thread. Maybe post the main-thread code that calls postMessage to the Worker?
If I were debugging it right now, I'd try moving your FileReader operations into the Worker. If you don't mind the Worker blocking while it loads a chunk, you could also use FileReaderSync.
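A minimal sketch of that, assuming the main thread posts the File plus chunk offsets (the message shape here is made up):
// Inside the Worker. FileReaderSync blocks only this Worker, never the page.
onmessage = function (e) {
  var file = e.data.file;   // File/Blob posted from the main thread
  var start = e.data.start; // chunk offsets
  var end = e.data.end;
  var buffer = new FileReaderSync().readAsArrayBuffer(file.slice(start, end));
  // ...hash/upload the buffer here...
};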
Post-comments update
Does generating the presigned URL require hashing the file content plus metadata plus a key? Hashing file content is going to take O(n) in the size of the chunk, and it's possible, if the hash is the first operation that reads from the Blob, that the loading of the file content could be deferred until the hashing starts. Unless you are compelled to keep the signing on the main thread (you don't trust the worker with key material?), that would be another good thing to bring into the worker.
If moving the signing into the Worker is too much, you could have the worker do something to force the Blob to be read, and/or pass the ArrayBuffer (or Uint8Array, or what have you) of file content back to the main thread for signing; this would ensure that reading the chunk does not occur on the main thread.
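For the second option, a sketch of handing the bytes back with a transfer list so they are moved rather than copied (chunkBlob is an assumed variable for the current chunk):
// In the Worker: force the read here, then transfer the bytes out for signing.
var bytes = new FileReaderSync().readAsArrayBuffer(chunkBlob);
postMessage({ type: 'chunkBytes', buffer: bytes }, [bytes]); // transfers ownership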
I am using service workers to intercept requests for me and provide the responses to the fetch requests by communicating with a Web worker (also created from the same parent page).
I have used message channels for direct communication between the worker and service worker. Here is a simple POC I have written:
var otherPort, parentPort;
var dummyObj;

var DummyHandler = function() {
  this.onmessage = null;
  var selfRef = this;
  this.callHandler = function(arg) {
    if (typeof selfRef.onmessage === "function") {
      selfRef.onmessage(arg);
    } else {
      console.error("Message Handler not set");
    }
  };
};
function msgFromW(evt) {
  console.log(evt.data);
  dummyObj.callHandler(evt);
}

self.addEventListener("message", function(evt) {
  var data = evt.data;
  if (data.msg === "connect") {
    otherPort = evt.ports[1];
    otherPort.onmessage = msgFromW;
    parentPort = evt.ports[0];
    parentPort.postMessage({ "msg": "connect" });
  }
});
self.addEventListener("fetch", function(event)
{
var url = event.request.url;
var urlObj = new URL(url);
if(!isToBeIntercepted(url))
{
return fetch(event.request);
}
url = decodeURI(url);
var key = processURL(url).toLowerCase();
console.log("Fetch For: " + key);
event.respondWith(new Promise(function(resolve, reject){
dummyObj = new DummyHandler();
dummyObj.onmessage = function(e)
{
if(e.data.error)
{
reject(e.data.error);
}
else
{
var content = e.data.data;
var blob = new Blob([content]);
resolve(new Response(blob));
}
};
otherPort.postMessage({"msg": "content", param: key});
}));
});
Roles of the ports:
otherPort: Communication with worker
parentPort: Communication with parent page
In the worker, I have a database, say:
var dataBase = {
  "file1.txt": "This is File1",
  "file2.txt": "This is File2"
};
The worker just serves the correct data according to the key sent by the service worker. In reality these will be very large files.
The problem I am facing with this is the following:
Since I am using a global dummyObj, the older dummyObj (and hence the older onmessage) is lost, and only the latest request is responded to with the received data.
In fact, file2 gets "This is File1", because the latest dummyObj is for file2.txt but the worker sends the data for file1.txt first.
I tried creating the iframes directly, and all the requests inside them are intercepted:
<html>
  <head></head>
  <body>
    <iframe src="tointercept/file1.txt"></iframe>
    <iframe src="tointercept/file2.txt"></iframe>
  </body>
</html>
Here is what I get as output (screenshot omitted): the iframes receive the wrong file contents, as described above.
One approach could be to write all the files that could be fetched into IndexedDB in the worker before creating the iframes, and then fetch them from IndexedDB in the service worker. But I don't want to save all the resources in IDB, so this approach is not what I want.
Does anybody know a way to accomplish what I am trying to do in some other way? Or is there a fix to what I am doing?
Please Help!
UPDATE
I have got this to work by queuing the dummyObjs in a global queue instead of having a single global object. On receiving the response from the worker in msgFromW, I pop an element from the queue and call its callHandler function (sketched below).
But I am not sure this is a reliable solution, as it assumes that responses arrive in the same order as the requests. Is this assumption correct?
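A sketch of that queue, for concreteness (onResponseForThisRequest stands in for the resolve/reject closure from the fetch handler):
var pendingHandlers = []; // oldest outstanding request first

function msgFromW(evt) {
  var handler = pendingHandlers.shift(); // assumes replies arrive in request order
  handler.callHandler(evt);
}

// in the fetch handler, instead of overwriting the single global:
var dummyObj = new DummyHandler();
dummyObj.onmessage = onResponseForThisRequest;
pendingHandlers.push(dummyObj);
otherPort.postMessage({ "msg": "content", param: key });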
I'd recommend wrapping your message passing between the service worker and the web worker in promises, and then pass a promise that resolves with the data from the web worker to fetchEvent.respondWith().
The promise-worker library can automate this promise-wrapping for you, or you could do it by hand, using this example as a guide.
If you were using promise-worker, your code would look something like:
var promiseWorker = new PromiseWorker(/* your web worker */);

self.addEventListener('fetch', function(fetchEvent) {
  if (/* some optional check to see if you want to handle this event */) {
    fetchEvent.respondWith(promiseWorker.postMessage(/* file name */));
  }
});
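On the web worker side, the matching half would look roughly like this; a sketch assuming promise-worker's browser build, which exposes a registerPromiseWorker global (the script path below is a placeholder; with a bundler you would require('promise-worker/register') instead):
importScripts('/lib/promise-worker/promise-worker.register.js'); // path assumed

registerPromiseWorker(function (key) {
  // Whatever is returned (or resolved) here settles the promise that
  // promiseWorker.postMessage(key) returned in the service worker.
  return dataBase[key]; // dataBase as in the question
});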
I am currently using getUserMedia(), which only works on Firefox and Chrome, yet it got deprecated and works only over HTTPS (in Chrome). Is there any other/better way to get speech input in JavaScript that works on all platforms?
E.g. how do websites like web.whatsapp.com record audio? getUserMedia() prompts first-time users to permit audio recording, whereas the WhatsApp application doesn't require the user's permission.
The getUserMedia() I am currently using looks like this:
navigator.getUserMedia(
  {
    "audio": {
      "mandatory": {
        "googEchoCancellation": "false",
        "googAutoGainControl": "false",
        "googNoiseSuppression": "false",
        "googHighpassFilter": "false"
      },
      "optional": []
    }
  },
  gotStream,
  function(e) {
    console.log(e);
  }
);
Chrome 60+ does require HTTPS, since getUserMedia is a powerful feature; API access shouldn't work on non-secure domains, as that access may bleed over to non-secure actors. Firefox still supports getUserMedia over HTTP, though.
I've been using RecorderJS and it served my purposes well.
Here is a code example. (source)
// `stream` is expected to be a Web Audio source node (e.g. a
// MediaStreamAudioSourceNode), since it must expose .context and .connect()
function RecordAudio(stream, cfg) {
  var config = cfg || {};
  var bufferLen = config.bufferLen || 4096;
  var numChannels = config.numChannels || 2;
  this.context = stream.context;
  var recordBuffers = [];
  var recording = false;
  this.node = (this.context.createScriptProcessor ||
               this.context.createJavaScriptNode).call(this.context,
               bufferLen, numChannels, numChannels);
  stream.connect(this.node);
  this.node.connect(this.context.destination);
  this.node.onaudioprocess = function(e) {
    if (!recording) return;
    for (var i = 0; i < numChannels; i++) {
      if (!recordBuffers[i]) recordBuffers[i] = [];
      recordBuffers[i].push.apply(recordBuffers[i], e.inputBuffer.getChannelData(i));
    }
  };
  this.getData = function() {
    var tmp = recordBuffers;
    recordBuffers = [];
    return tmp; // returns an array of arrays containing data from the various channels
  };
  this.start = function() { // was `this.start() =`, a syntax error
    recording = true;
  };
  this.stop = function() { // was `this.stop() =`, a syntax error
    recording = false;
  };
}
The usage is straightforward:
var recorder = new RecordAudio(userMedia); // userMedia: a Web Audio source node
recorder.start();
recorder.stop();
var recordedData = recorder.getData();
Edit: You may also want to check this answer if nothing works. Recorder JS does the easy job for you; it works with Web Audio API nodes.
Chrome and Firefox have evolved now. There is a built-in MediaRecorder API which does audio recording for you.
var audioChunks = [];
var rec;
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    rec = new MediaRecorder(stream);
    rec.ondataavailable = e => {
      audioChunks.push(e.data);
      if (rec.state == "inactive") {
        // Use the chunks to create a Blob, then an object URL for playback/download
        var blob = new Blob(audioChunks, { type: rec.mimeType });
        var blobUrl = URL.createObjectURL(blob);
      }
    };
    rec.start();
  })
  .catch(e => console.log(e));
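To finish a recording, stop the recorder from e.g. a button handler (stopButton is an assumed element); stopping flushes a final dataavailable event with rec.state already "inactive":
stopButton.onclick = function () {
  rec.stop(); // fires the final dataavailable handled above
};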
Working demo
MediaRecorder support starts from:
Chrome: 47
Firefox: 25.0
getUserMedia() isn't deprecated; what's deprecated is using it over HTTP. As far as I know, the only browser which requires HTTPS for getUserMedia() right now is Chrome, which I think is the correct approach.
If you want SSL/TLS for your tests, you can use the free tier of CloudFlare.
The WhatsApp page doesn't provide any recording functions; it just allows you to launch the application.
Good article about getUserMedia
Fully working example with use of MediaRecorder