The server is sending video frames. I would like to use them to create a stream: how can I assemble the frames into a streaming video? So far, I can display the frames as pictures. Below is my Angular code.
Angular component
getVideo() {
interval(250).switchMap(() => this.appService.getPictures())
.subscribe(data => {
const file = new Blob([data], {type:'image/png'});
this.url = this.sanitizer.bypassSecurityTrustResourceUrl(URL.createObjectURL(file));
})
}
HTML template
<img id="test" [src]="url" width="400" height="300"/>
I am trying to update the picture at the camera's frame rate, but the picture is not updated and the browser freezes because of the high number of HTTP requests.
What I want to achieve is to buffer the frames so that I can use the video tag instead of the img tag, the same way I would connect to a live stream served by a server by setting the video tag's src to the server's URL.
Github link: https://github.com/kedevked/frameProcessing
Instead of triggering HTTP requests at a fixed interval, it is better to use a WebSocket.
Displaying the images frame by frame gives the impression of a live video. The one caveat is that network latency can make the frame rate uneven. Still, it works well for use cases where buffering the video server-side is not an option, such as when data needs to be sent along with each frame.
Service using WebSocket
createStream(url) {
this.ws = webSocket<any>(url);
return this.ws.asObservable()
}
Component
constructor (private streamingService: StreamingService) {}
ngOnInit(): void {
this.getStream()
}
getStream() {
this.streamingService.createStream(wsUrl).subscribe(data => { // wsUrl: the server's WebSocket endpoint
const file = new Blob([data], {type:'image/png'});
this.url = this.sanitizer.bypassSecurityTrustResourceUrl(URL.createObjectURL(file));
})
}
This solution is well suited when we are sending not only the image but also data along with it. In that scenario, the image needs to be sent as base64.
getStream() {
this.streamingService.createStream().subscribe(data => {
this.url = data.base64ImageData
})
}
Since base64 encoding inflates the payload to roughly 133% of the original image size, it can be costly to use in all settings. If the image needs to be served along with other data, the whole payload can instead be sent as binary. This requires encoding the data server-side and decoding it client-side.
Since this may require a fair amount of computation, a web worker can be considered not only for decoding the data but also for rendering the image.
If the image is sent as binary, rendering with a canvas will be faster:
canvas.getContext('2d').putImageData(imageData, 0, 0) // imageData must be an ImageData object (raw RGBA pixels)
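As a rough illustration of the binary path, here is a minimal sketch (not part of the original answer) that assumes each WebSocket message carries the raw RGBA pixels of one frame of known dimensions; the endpoint URL, canvas id, and frame size are placeholders.

// Minimal sketch: render raw RGBA frames received over a WebSocket onto a canvas.
// Assumptions: the server sends one frame per message as an ArrayBuffer of
// width * height * 4 bytes; the URL and dimensions below are placeholders.
const frameWidth = 400;
const frameHeight = 300;
const ctx = document.getElementById('canvas').getContext('2d'); // <canvas id="canvas"> assumed

const ws = new WebSocket('ws://localhost:8080/stream'); // hypothetical endpoint
ws.binaryType = 'arraybuffer';

ws.onmessage = (event) => {
  // Wrap the received bytes in an ImageData object and paint the frame.
  const pixels = new Uint8ClampedArray(event.data);
  ctx.putImageData(new ImageData(pixels, frameWidth, frameHeight), 0, 0);
};

If the frames arrive as encoded images (PNG/JPEG) rather than raw pixels, createImageBitmap(new Blob([event.data])) followed by drawImage would be the closer fit.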
I see two problems here; addressing them may improve this solution.
First of all, how do you know that the request will take less than 250 milliseconds? If it doesn't, switchMap will cancel it. exhaustMap or concatMap could be a better fit if you don't want to lose the in-flight request.
The second point is that interval(X) and the HTTP request run inside the Angular zone, so each emission fires change detection, which can cause performance issues. You should run the interval and the request outside the zone and trigger change detection manually.
That said, I would propose a solution like this:
private subscription: Subscription;
constructor(public ngZone: NgZone, public cd: ChangeDetectorRef) {
this.getVideo();
}
getVideo() {
this.subscription = this.ngZone.runOutsideAngular(() =>
interval(250).pipe(exhaustMap(() => this.appService.getPictures()))
.subscribe(data => {
const file = new Blob([data], {type:'image/png'});
this.url = this.sanitizer.bypassSecurityTrustResourceUrl(URL.createObjectURL(file));
// Trigger change detection in order to update the [url] binding
this.cd.detectChanges();
})
);
}
ngOnDestroy() {
// remember to unsubscribe from interval subscription
this.subscription.unsubscribe()
}
I don't think you would have many performance issues with this code... although this kind of polling is never ideal, I don't think it would freeze your app. Try it and let me know.
Hope this helps.
Related
I've been building a music app, and today I finally got to the point where I started trying to work music playback into it.
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS. I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end, where they are processed by the Web Audio API and scheduled to play.
When they play, they are all in the correct order, but there is this very tiny glitch or skip at the same spots every time (presumably between chunks) that I can't seem to get rid of. As far as I can tell, they are all scheduled right up next to each other, so I can't find a reason why there should be any sort of gap or overlap between them. Any help would be appreciated. Here's the code:
Socket Route
socket.on('stream-audio', () => {
db.client.db("dev").collection('music.files').findOne({"metadata.songId": "3"}).then((result) =>{
const bucket = new GridFSBucket(db.client.db("dev"), {
bucketName: "music"
});
bucket.openDownloadStream(result._id).on('data',(chunk) => {
socket.emit('audio-chunk',chunk)
});
});
});
Front end
//These variables are declared as object properties, hence all of the "this" keywords
context: new (window.AudioContext || window.webkitAudioContext)(),
freeTime: null,
numChunks: 0,
chunkTracker: [],
...
this.socket.on('audio-chunk', (chunk) => {
//Keeping track of chunk decoding status so that they don't get scheduled out of order
const chunkId = this.numChunks
this.chunkTracker.push({
id: chunkId,
complete: false,
});
this.numChunks += 1;
//Callback to the decodeAudioData function
const decodeCallback = (buffer) => {
var shouldExecute = false;
const trackIndex = this.chunkTracker.map((e) => e.id).indexOf(chunkId);
//Checking if either it's the first chunk or the previous chunk has completed
if(trackIndex !== 0){
const prevChunk = this.chunkTracker.filter((e) => e.id === (chunkId-1))
if (prevChunk[0].complete) {
shouldExecute = true;
}
} else {
shouldExecute = true;
}
//THIS IS THE ACTUAL WEB AUDIO API STUFF
if (shouldExecute) {
if (this.freeTime === null) {
this.freeTime = this.context.currentTime
}
const source = this.context.createBufferSource();
source.buffer = buffer
source.connect(this.context.destination)
if (this.context.currentTime >= this.freeTime){
source.start()
this.freeTime = this.context.currentTime + buffer.duration
} else {
source.start(this.freeTime)
this.freeTime += buffer.duration
}
//Update the tracker of the chunks that this one is complete
this.chunkTracker[trackIndex] = {id: chunkId, complete: true}
} else {
//If the previous chunk hasn't processed yet, check again in 50ms
setTimeout((passBuffer) => {
decodeCallback(passBuffer)
},50,buffer);
}
}
decodeCallback.bind(this);
this.context.decodeAudioData(chunk,decodeCallback);
});
Any help would be appreciated, thanks!
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS.
You can do this if you want, but these days we have tools like Minio, which can make this easier using more common APIs.
I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end
Don't go this route. There's no reason for the overhead of web sockets, or Socket.IO. A normal HTTP request would be fine.
where they are processed by the Web Audio API and scheduled to play.
You can't stream this way. The Web Audio API doesn't support useful streaming, unless you happen to have raw PCM chunks, which you don't.
As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them.
Lossy codecs aren't going to give you sample-accurate output. Especially with MP3, if you give it some arbitrary number of samples, you're going to end up with at least one full MP3 frame (~576 samples) output. The reality is that you need data ahead of the first audio frame for it to work properly. If you want to decode a stream, you need a stream to start with. You can't independently decode MP3 this way.
Fortunately, the solution also simplifies what you're doing. Simply return an HTTP stream from your server, and use an HTML audio element <audio> or new Audio(url). The browser will handle all the buffering. Just make sure your server handles range requests, and you're good to go.
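As a rough sketch of what that could look like (not the poster's actual code; the Express route, connection string, and content type are assumptions), the server answers Range requests by piping the matching byte range out of GridFS, and the front end just points an audio element at the URL:

// Hypothetical Express route that streams an MP3 out of GridFS with Range
// support, so the browser's <audio> element can buffer and seek on its own.
const express = require('express');
const { MongoClient, GridFSBucket } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017'); // assumed connection string

app.get('/songs/:songId', async (req, res) => {
  const db = client.db('dev');
  const file = await db.collection('music.files')
    .findOne({ 'metadata.songId': req.params.songId });
  if (!file) return res.sendStatus(404);

  const bucket = new GridFSBucket(db, { bucketName: 'music' });
  const range = req.range(file.length); // parses the Range header, if present

  if (Array.isArray(range) && range.type === 'bytes') {
    const { start, end } = range[0]; // inclusive byte offsets
    res.status(206).set({
      'Content-Type': 'audio/mpeg',
      'Accept-Ranges': 'bytes',
      'Content-Range': `bytes ${start}-${end}/${file.length}`,
      'Content-Length': end - start + 1,
    });
    // openDownloadStream takes start (inclusive) and end (exclusive) offsets
    bucket.openDownloadStream(file._id, { start, end: end + 1 }).pipe(res);
  } else {
    res.set({
      'Content-Type': 'audio/mpeg',
      'Accept-Ranges': 'bytes',
      'Content-Length': file.length,
    });
    bucket.openDownloadStream(file._id).pipe(res);
  }
});

client.connect().then(() => app.listen(3000));

On the front end, new Audio('/songs/3').play() (or an <audio src="/songs/3"> element) is then all that's needed; the browser issues the Range requests itself.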
Update
Since asking the question below and arriving at a more fundamental question after finding the error in the code, I found some more information. For example, the MDN web docs for the downloads API method downloads.download() state that an object URL should be revoked only after the file/URL has been downloaded. So I spent some time trying to understand whether or not a web extension makes the downloads API onChanged event 'available' to the JavaScript of a web page, and I don't think it does. I don't understand why the downloads API is available to extensions only, especially when there are quite a few questions concerning this same memory-usage/object-URL-revocation issue. For example, Wait for user to finish downloading a blob in Javascript.
If you know, would you please explain? Thank you.
Starting with the Firefox browser closed and right-clicking a local HTML file to open it in Firefox, the page opens with five firefox.exe processes as viewed in Windows Task Manager. Four of the processes start with between 20,000k and 25,000k of memory and one with about 115,000k.
This html page has an indexedDB database with 50 object stores each containing 50 objects. Each object is extracted from its object store and converted to string using JSON.stringify, and written to a two-dimensional array. Afterward, all elements of the array are concatenated into one large string, converted to a blob and written to the hard disk through a URL object which is revoked immediately afterward. The final file is about 190MB.
If the code is stopped just before the conversion to blob, one of the firefox.exe process's memory usage increases to around 425,000k and then falls back to 25,000k in about 5-10 seconds after the elements of the array have been concatenated into a single string.
If the code is run to completion, the memory usage of that same firefox.exe process grows to about 1,000,000k and then drops to about 225,000k. The firefox.exe process that started at 115,000k also increases at the blob stage of the code to about 325,000k and never decreases.
After the blob is written to disk as a text file, these two firefox.exe processes never release the approximate 2 x 200,000k increase in memory.
I have set every variable used in each function to null, and the memory is never freed unless the page is refreshed. Also, this process is initiated by a button click event; if it is run again without an intermediate refresh, each of these two firefox.exe processes grabs an additional 200,000k of memory with each run.
I haven't been able to figure out how to free the memory.
The two functions are quite simple. json[i][j] holds the string version of the jth object from the ith object store in the database. os_data[] is an array of small objects { "name" : objectStoreName, "count" : n }, where n is the number of objects in the store. The build_text function appears to release the memory if write_to_disk is not invoked. So, the issue appears to be related to the blob or the URL.
I'm probably overlooking something obvious. Thank you for any direction you can provide.
EDIT:
I see from JavaScript: Create and save file that I have a mistake in the revokeObjectURL(blob) statement. It can't revoke blob; the result of createObjectURL(blob) needed to be saved to a variable such as url, and then url should be revoked, not blob.
That worked for the most part, and the memory is released from both of the firefox.exe processes mentioned above in most cases. This leaves me with one small question about the timing of the revoke of the URL.
If the revoke is what allows the memory to be released, should the URL be revoked only after the file has been successfully downloaded? If the revoke takes place before the user clicks OK to download the file, what happens? Suppose I click the button to prepare the file from the database, and after it's ready the browser brings up the window for downloading, but I wait a little while thinking about what to name the file or where to save it; won't the revoke statement have run already while the URL is still 'held' by the browser, since it is what will be downloaded? I know I can still download the file, but does the revoke still release the memory? From my small amount of experimenting with this one example, it appears that it does not get released in this scenario.
If there were an event that fires when the file has either successfully or unsuccessfully been downloaded to the client, wouldn't that be the time to revoke the URL? Would it be better to set a timeout of a few minutes before revoking the URL, since I'm fairly sure there is no event indicating that the download to the client has ended?
I'm probably not understanding something basic about this. Thanks.
function build_text() {
var i, j, l, txt = "";
for ( i = 1; i <=50; i++ ) {
l = os_data[i-1].count;
for ( j = 1; j <= l; j++ ) {
txt += json[i][j] + '\n';
}; // next j
}; // next i
write_to_disk('indexedDB portfolio', txt);
txt = json = null;
} // close build_text
function write_to_disk( fileName, data ) {
fileName = fileName.replace(".","");
var blob = new Blob( [data], { type: 'text/csv' } ), elem;
if ( window.navigator.msSaveOrOpenBlob ) {
window.navigator.msSaveBlob(blob, fileName);
} else {
elem = window.document.createElement('a');
elem.href = window.URL.createObjectURL(blob);
elem.download = fileName;
document.body.appendChild(elem);
elem.click();
document.body.removeChild(elem);
window.URL.revokeObjectURL(blob); // the mistake noted in the EDIT above: this should revoke the URL returned by createObjectURL, not the blob
}; // end if
data = blob = elem = fileName = null;
} // close write_to_disk
I am a bit lost as to what the question is here...
But let's try to answer, at least part of it:
For a starter let's explain what URL.createObjectURL(blob) roughly does:
It creates a blob URI, which is a URI pointing to the Blob blob in memory, just as if it were in a reachable place (like a server).
This blob URI marks blob as un-collectable by the Garbage Collector (GC) for as long as it has not been revoked, so that you don't have to maintain a live reference to blob in your script but can still use/load it.
URL.revokeObjectURL will then break the link between the blob URI and the Blob in memory. It will not free up the memory occupied by blob directly; it just removes its own protection with regard to the GC, and the URI no longer points anywhere.
So if you have multiple blob URIs pointing to the same Blob object, revoking only one won't break the other blob URIs.
Now, the memory will be freed only when the GC kicks in, and that is decided solely by the browser's internals, when it thinks it is the best time, or when it sees it has no other option (generally when it is running low on memory).
So it is quite normal that you don't see your memory being freed instantly, and from experience I would say that Firefox doesn't mind using a lot of memory when it is available, so the GC kicks in less often, which is good for user experience (frequent GC results in lag).
For your download question: indeed, web APIs don't provide a way to know whether a download has succeeded or failed, nor even whether it has ended.
For the revoking part, it really depends on when you do it.
If you do it directly in the click handler, the browser won't have made the pre-fetch request yet, so when the default action of the click (the download) happens, the URI won't link to anything anymore.
Now, if you revoke the blob URI after the "save" prompt, the browser will have made a pre-fetch request and thus might be able to mark on its own that the Blob resource should not be cleared. But I don't think this behavior is required by any spec, and it might be better to wait at least for the window's focus event, at which point the download of the resource should already have started.
const blob = new Blob(['bar']);
const uri = URL.createObjectURL(blob);
anchor.href = uri;
anchor.onclick = e => {
window.addEventListener('focus', e=>{
URL.revokeObjectURL(uri);
console.log("Blob URI revoked, you won't be able to download it anymore");
}, {once: true});
};
<a id="anchor" download="foo.txt">download</a>
I have an HTML5 video that is rather large. I'm also using Chrome. The video element has the loop attribute, but each time the video "loops", the browser re-downloads the video file. I have set Cache-Control "max-age=15768000, private". However, this does not prevent any extra downloads of the identical file. I am using Amazon S3 to host the file. The S3 server also responds with the Accept-Ranges header, which causes several hundred partial downloads of the file to be requested with the 206 HTTP response code.
Here is my video tag:
<video autoplay="" loop="" class="asset current">
<source src="https://mybucket.s3.us-east-2.amazonaws.com/myvideo.mp4">
</video>
UPDATE:
It seems that the best solution is to prevent the Accept-Ranges header from being sent with the original response and instead use a 200 HTTP response code. How can this be achieved through an .htaccess file so that the video is fully cached?
Thanks in advance.
I don't know for sure what the real issue you are facing is.
It could be that Chrome has a maximum-size limit on what it will cache, and if that is the case, then not using Range requests wouldn't solve anything.
Another possible explanation is that caching media is not really a simple task.
Without seeing your file it's hard to tell for sure which case you are in, but you have to understand that to play a media file, the browser doesn't need to fetch the whole thing.
For instance, you can perfectly well play a video file in an <audio> element; since the video stream won't be used, a browser could omit it completely and download only the audio stream. I'm not sure any browser does, but they could. Most media formats physically separate the audio and video streams in the file, and their byte positions are marked in the metadata.
Browsers could certainly cache the Range requests they perform, but I think it's still quite rare that they do.
But as tempting as it might be to disable Range requests, you have to know that some browsers (Safari) will not play your media if your server doesn't allow them.
So even then, it's probably not what you want.
The first thing you may want to try is to optimize your video for web usage. Instead of MP4, serve a WebM file. These generally take less space for the same quality, and maybe you'll avoid the max-size limitation.
If the resulting file is still too big, then a dirty solution would be to use a MediaSource so that the file is kept in memory and you need to fetch it only once.
In the following example, the file will be fetched entirely only once, by chunks of 1MB, streamed by the MediaSource as it's being fetched and then only the data in memory will be used for looping plays:
document.getElementById('streamVid').onclick = e => (async () => {
const url = 'https://upload.wikimedia.org/wikipedia/commons/transcoded/2/22/Volcano_Lava_Sample.webm/Volcano_Lava_Sample.webm.360p.webm';
// you must know the mimeType of your video before hand.
const type = 'video/webm; codecs="vp8, vorbis"';
if( !MediaSource.isTypeSupported( type ) ) {
throw 'Unsupported';
}
const source = new MediaSource();
source.addEventListener('sourceopen', sourceOpen);
document.getElementById('out').src = URL.createObjectURL( source );
// async generator Range-Fetcher
async function* fetchRanges( url, chunk_size = 1024 * 1024 ) {
let chunk = new ArrayBuffer(1);
let cursor = 0;
while( chunk.byteLength ) {
const resp = await fetch( url, {
method: "get",
headers: { "Range": "bytes=" + cursor + "-" + ( cursor += chunk_size ) }
}
)
chunk = resp.ok && await resp.arrayBuffer();
cursor++; // add one byte for next iteration, Ranges are inclusive
yield chunk;
}
}
// set up our MediaSource
async function sourceOpen() {
const buffer = source.addSourceBuffer( type );
buffer.mode = "sequence";
// waiting forward to appendAsync...
const appendBuffer = ( chunk ) => {
return new Promise( resolve => {
buffer.addEventListener( 'update', resolve, { once: true } );
buffer.appendBuffer( chunk );
} );
}
// while our RangeFetcher is running
for await ( const chunk of fetchRanges(url) ) {
if( chunk ) { // append to our MediaSource
await appendBuffer( chunk );
}
else { // when done
source.endOfStream();
}
}
}
})().catch( console.error );
<button id="streamVid">stream video</button>
<video id="out" controls muted autoplay loop></video>
Google Chrome has a limit on the size of its file cache. In this case my previous answer would not work. You could use something like file-compressor; this resource may be able to compress the file enough to make it eligible for the cache. The browser's cache size can be set manually, but that is not feasible if the end user has not configured their cache to provide the space required to hold the long video.
A possibility for people who land here: DevTools has a "Disable cache" option under the Network tab. When it is enabled (meaning the cache is disabled), the browser probably doesn't cache the videos and therefore needs to download them again.
The JavaScript process generates a lot of data (200-300 MB). I would like to save this data for further analysis, but the best I have found so far is saving it using this example http://jsfiddle.net/c2U2T/, which is not an option for me because it appears to require all the data to be available before the download starts. What I need is something like this:
var saver = new Saver();
saver.save(); // The Save As ... dialog appears
saver.onaccepted = function () { // user accepted saving
for (var i = 0; i < 1000000; i++) {
saver.write(Math.random());
}
};
Of course, instead of Math.random() there will be some meaningful construct.
#dader - I would build upon dader's example.
Use the HTML5 FileSystem API - but instead of writing each and every line to the file (more I/O than it is worth), you can batch some of the lines in memory in a JavaScript object/array/string and only write them to the file when they reach a certain threshold. You are thus appending to a local file as the process chugs along (which makes it easy to pause/restart/stop, etc.). A rough sketch of this batching idea is shown below.
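Here is a minimal sketch of that idea, assuming persistent quota has already been granted (see the quota snippet further down); the file name, buffer threshold, and quota size are arbitrary placeholders, not part of the original answer.

// Buffer generated lines in memory and flush them to a sandboxed file in
// batches, using Chrome's prefixed FileSystem API.
var buffered = [];
var bufferedBytes = 0;
var BUFFER_LIMIT = 5 * 1024 * 1024; // flush roughly every 5 MB (arbitrary)

function addLine(line) {
  buffered.push(line);
  bufferedBytes += line.length;
  if (bufferedBytes >= BUFFER_LIMIT) {
    flush();
  }
}

function flush() {
  var text = buffered.join('\n') + '\n';
  buffered = [];
  bufferedBytes = 0;
  window.webkitRequestFileSystem(PERSISTENT, 300 * 1024 * 1024, function (fs) {
    fs.root.getFile('output.txt', { create: true }, function (fileEntry) {
      fileEntry.createWriter(function (writer) {
        writer.seek(writer.length); // append to the end of the file
        writer.write(new Blob([text], { type: 'text/plain' }));
      });
      // When everything has been written, fileEntry.toURL() gives a link
      // the user can save from.
    });
  });
}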
Of note is the following, which is an example of how you can bring up the dialog to request the amount of storage you would need (it sounds large). Tested in Chrome:
navigator.persistentStorage.queryUsageAndQuota(
function (usage, quota) {
var availableSpace = quota - usage;
var requestingQuota = args.size + usage;
if (availableSpace >= args.size) {
window.requestFileSystem(PERSISTENT, availableSpace, persistentStorageGranted, persistentStorageDenied);
} else {
navigator.persistentStorage.requestQuota(
requestingQuota, function (grantedQuota) {
window.requestFileSystem(PERSISTENT, grantedQuota - usage, persistentStorageGranted, persistentStorageDenied);
}, errorCb
);
}
}, errorCb);
When you are done, you can use JavaScript to open a new window with the URL of the file you saved, which you can retrieve via fileEntry.toURL().
Or, when it is done crunching, you can just display that URL in an HTML link, and the user can right-click it and use "Save Link As..." however they want.
But this is something new and cool that you can do entirely in the browser, without needing to involve a server in any way at all. Side note: 200-300 MB of data generated by a JavaScript process sounds absolutely huge... that would make me question whether you are storing the "right" data...
What you are actually trying to do is a kind of streaming, and the File API is not suited to the task. Instead, I can suggest two options:
The first is to use the XHR facility, i.e. Ajax, splitting your data into several chunks that are sent to the server sequentially, each chunk in its own request along with an id (identifying the stream) and a position index (identifying the chunk's position). I don't recommend that, since it adds work to break up and reassemble the data, and since there's a better solution.
The second way of achieving this is to use the WebSocket API. It allows you to send data to the server sequentially as it is generated, following a usual streaming pattern. I think this is exactly what you need; a minimal sketch is shown below.
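As a minimal sketch of that approach (the endpoint URL and the simple message framing are assumptions, not part of the original answer), each piece of data is sent to the server as soon as it is produced:

// Stream generated values to the server over a WebSocket as they are produced.
var ws = new WebSocket('ws://localhost:8080/save'); // hypothetical endpoint

ws.onopen = function () {
  ws.send(JSON.stringify({ type: 'start', name: 'results.csv' })); // tell the server a new stream begins

  for (var i = 0; i < 1000000; i++) {
    // Each send is queued by the browser and streamed out; the server
    // appends the chunks to a file (or wherever) as they arrive.
    ws.send(String(Math.random()) + '\n');
  }

  ws.send(JSON.stringify({ type: 'end' })); // tell the server the stream is complete
};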
This page may be a good place to start: http://binaryjs.com/
That's all, folks!
EDIT, considering your comment:
I'm not sure I perfectly get your point, but what about HTML5's FileSystem API?
There are a couple of examples here: http://www.html5rocks.com/en/tutorials/file/filesystem/ among which is this sample that lets you append data to an existing file. You can also create a new file, etc.:
function onInitFs(fs) {
fs.root.getFile('log.txt', {create: false}, function(fileEntry) {
// Create a FileWriter object for our FileEntry (log.txt).
fileEntry.createWriter(function(fileWriter) {
fileWriter.seek(fileWriter.length); // Start write position at EOF.
// Create a new Blob and write it to log.txt.
var blob = new Blob(['Hello World'], {type: 'text/plain'});
fileWriter.write(blob);
}, errorHandler);
}, errorHandler);
}
EDIT 2:
What you're trying to do is not possible using JavaScript, as stated on SO here. The author nonetheless suggests using a Java applet to achieve the needed behaviour.
To put it in a nutshell, the HTML5 FileSystem API only provides a sandboxed filesystem, i.e. one located in some hidden directory of the browser. So if you want to access the real filesystem, using Java would be fine for your use case. I guess there is an interface between Java and JavaScript here.
But if you want to make your data available only from the browser (constrained by the same-origin policy), use the FileSystem API.
I'm trying to write a fail-safe program that uses the canvas to draw very large images (60 MB is probably the upper range, while 10 MB is the lower range). I discovered long ago that calling the canvas's synchronous toDataURL function usually causes the page to crash in the browser, so I have adapted the program to use the toBlob method, with a polyfill for cross-browser compatibility. My question is this: how long do blob URLs created with the URL.createObjectURL(blob) method last?
I would like to know if there's a way to cache the blob URL that will allow it to last beyond the browser session, in case somebody wants to render part of the image at one point, close the browser, and come back to finish it later by reading the blob URL into the canvas again and resuming from the point where they left off. I noticed that this optional autoRevoke argument may be what I'm looking for, but I'd like confirmation that what I'm trying to do is actually possible. No code example is needed in your answer unless it involves a different solution; all I need is a yes or no on whether it's possible to make a blob URL last beyond sessions using this method or otherwise. (This would also be handy if the page crashes for some reason, acting like a "restore session" option too.)
I was thinking of something like this:
function saveCache() {
var canvas = $("#canvas")[0];
canvas.toBlob(function (blob) {
/*if I understand correctly, this prevents it from unloading
automatically after one asynchronous callback*/
var blobURL = URL.createObjectURL(blob, {autoRevoke: false});
localStorage.setItem("cache", blobURL);
});
}
//assume that this might be a new browser session
function loadCache() {
var url = localStorage.getItem("cache");
if(typeof url=="string") {
var img = new Image();
img.onload = function () {
$("#canvas")[0].getContext("2d").drawImage(img, 0, 0);
//clear cache since it takes up a LOT unused of memory
URL.revokeObjectURL(url);
//remove reference to deleted cache
localStorage.removeItem("cache");
init(true); //cache successfully loaded, resume where it left off
};
img.onprogress = function (e) {
//update progress bar
};
img.onerror = loadFailed; //notify user of failure
img.src = url;
} else {
init(false); //nothing was cached, so start normally
}
}
Note that I am not certain this will work the way I intend, so any confirmation would be awesome.
EDIT just realized that sessionStorage is not the same thing as localStorage :P
Can a blob URL last across sessions? Not the way you want it to.
The URL is a reference represented as a string, which you can save in localStorage just like any string. The location that URL points to is what you really want, and that won't persist across sessions.
When using URL.createObjectURL() in conjunction with the autoRevoke argument, the URL will persist until you call revokeObjectURL or "till the unloading document cleanup steps are executed" (steps outlined here: http://www.w3.org/TR/html51/browsers.html#unloading-document-cleanup-steps).
My guess is that those steps are being executed when the browser session expires, which is why the target of your blobURL can't be accessed in subsequent sessions.
Some other discourse on this: How to save the window.URL.createObjectURL() result for future use?
The above leads to a recommendation to use the FileSystem API to save the blob representation of your canvas element. When requesting the file system for the first time, you'll need to request PERSISTENT storage, and the user will have to agree to let you store data on their machine permanently.
http://www.html5rocks.com/en/tutorials/file/filesystem/ has a good primer on everything you'll need; a rough sketch of the idea is shown below.
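As a hedged sketch of that approach (the file name, quota size, and the prefixed Chrome-only API calls are assumptions), the canvas contents are persisted as a real file in the sandboxed filesystem instead of relying on a blob: URL surviving the session:

// Save the canvas to the persistent sandboxed filesystem, and restore it later
// (even in a new session) via the file's filesystem: URL.
function saveCache() {
  var canvas = document.getElementById("canvas");
  canvas.toBlob(function (blob) {
    navigator.webkitPersistentStorage.requestQuota(100 * 1024 * 1024, function (granted) {
      window.webkitRequestFileSystem(PERSISTENT, granted, function (fs) {
        fs.root.getFile("cache.png", { create: true }, function (fileEntry) {
          fileEntry.createWriter(function (writer) {
            writer.write(blob); // writes from position 0 (a real cache would also truncate any older, longer file)
          });
        });
      });
    });
  });
}

function loadCache(resume) {
  window.webkitRequestFileSystem(PERSISTENT, 0, function (fs) {
    fs.root.getFile("cache.png", { create: false }, function (fileEntry) {
      var img = new Image();
      img.onload = function () {
        document.getElementById("canvas").getContext("2d").drawImage(img, 0, 0);
        resume(true);  // cache found, resume where it left off
      };
      img.src = fileEntry.toURL(); // filesystem: URLs stay valid as long as the file exists
    }, function () {
      resume(false);   // nothing cached, start normally
    });
  });
}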