How to display a QPixmap in QWebEngine? - javascript

I'm trying to figure out what is the best and most efficient way to display a QPixmap or a QImage in a QWebEngine frame.
My goal is to display video loaded from a custom network camera, for which we have a C++ API, in the QWebEngine frame, with maximum efficiency. So saving the frames to disk and then loading them into the QWebEngine frame is not an option.
Previously, I was using Qt WebKit, and it was pretty straightforward to load a QPixmap created in C++, because it was accessible through a JavaScript object. But now I'm interested in QWebEngine for several reasons.
However, in QWebEngine, it seems that there is no way to directly attach QObjects to the frame window object.
So my question is: what is the most efficient way to transfer an image, or a byte array, to be shown in QWebEngine?

One of the fastest ways to draw a QPixmap is QGraphicsScene::addPixmap, and it is a very convenient way. Or just draw it on a QLabel.
Drawing a pixmap with QWebEngine and JS is a really bad idea. It is slow, and WebEngine is a full Chromium browser under the hood, which adds roughly 60 MB of libraries.
"My goal is to display a video that is being loaded from a custom network camera, for which we have a C++ API"
If that is really all you need, you don't need QWebEngine: just your network method to obtain the image and a QGraphicsScene to draw it. That is really fast and works well for video.
But if you really want QWebEngine, you can, for example, paste the image with JS through the clipboard :)
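For completeness: Qt does ship a supported bridge for this, Qt WebChannel, which exposes registered QObjects to the page via qwebchannel.js. A minimal sketch of the JS side, assuming the C++ side registers a hypothetical object named frameProvider that emits a newFrame signal carrying a base64-encoded JPEG (the object and signal names are illustrative, not part of the Qt API):

```javascript
// Pure helper: turn a base64 payload into a data URL an <img> can display.
function frameToDataUrl(base64) {
  return "data:image/jpeg;base64," + base64;
}

// In the page loaded by QWebEngineView (requires qwebchannel.js, and a
// QWebChannel set up on the C++ side with a registered "frameProvider"):
// new QWebChannel(qt.webChannelTransport, function (channel) {
//   var provider = channel.objects.frameProvider;
//   var img = document.getElementById("video-frame");
//   provider.newFrame.connect(function (base64) {
//     img.src = frameToDataUrl(base64);
//   });
// });
```

Note that base64-encoding every frame adds CPU and memory overhead, which is part of why the QGraphicsScene route above is faster for video.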

Related

javascript RAM memory usage [duplicate]

This question already has answers here:
jQuery or javascript to find memory usage of page
(10 answers)
Closed 8 years ago.
I have a website which can upload images.
I do a CROP on the client side before upload, then on the server side a new optimize...
On mobile devices, when there is no free RAM, this fails.
How can I get the RAM usage from JavaScript, so I can skip the CROP if there is not enough memory?
I am looking for a JavaScript-only solution!
PLEASE note: I do not have a MEMORY LEAK!!!
If I open many apps and not much RAM is left, my strategy does not work.
I need CODE in JavaScript to get the free RAM, and if it is below some amount, skip the CROP.
------------ OK, to define "fail": --------------
From mobile devices, people take a photo and upload it...
From JavaScript I perform the CROP:
1. an image of around 2 MB goes down to about 300 KB
2. I upload only the 300 KB; then on the server side, 300 KB --> 30 KB, which is what I save
If there is no RAM, this FAILS.
I do not want to say "try again";
I would like to skip the CROP.
Thank you very much for the comments.
I handle the errors, but I would like to avoid making the client wait 40-60 seconds and then showing a message.
If I go with NO CROP it is OK, but then I lose the nearly 1.7 MB of bandwidth per image that the CROP saves... GREEDY :-)
window.performance is good, I will use it, thanks.
I will research what I can do with a round trip from the SERVER SIDE, and whether I can find it for mobile devices.
In Development
Use Chrome's DevTools for pretty comprehensive diagnostics. You can get JavaScript run-time diagnostics, request information, and basically anything you might need.
Client-side Testing
As far as testing how much RAM is available in your code itself, there isn't really a "correct" or "recommended" strategy (source). Obviously the best solution would be to just optimize your code; varying your site's display/interactions based on how many other apps a client is running could be confusing for the user (e.g., they expect some tool to be displayed and it never is; they think the page isn't loading properly, so they leave; etc.).
Some highlights from the source above:
"Counting DOM elements or document size might be a good estimation, but it could be quite inaccurate since it wouldn't include event binding, data(), plugins, and other in-memory data structures."
You'll need to monitor both the DOM and the memory you're using for your code for accurate results.
You could try using window.performance (source), but this isn't well-supported across different browsers.
Note: As Danny mentions in a comment below, showing an advisory message when a feature is disabled could clear up some confusion, but then why spend resources on a feature so optional that you could simply drop it? Just my two cents... :)
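If you do decide to gate the crop on available memory, here is a hedged sketch built on the non-standard, Chrome-only performance.memory object; the 50 MB threshold and the uploadOriginal/cropThenUpload helpers are arbitrary examples, not recommendations:

```javascript
// Decide whether to skip the client-side crop based on JS heap headroom.
// The memory object is passed in so the decision logic is testable; in the
// browser you would pass window.performance.memory (Chrome only).
function shouldSkipCrop(memory, neededBytes) {
  if (!memory) return false; // API unavailable: keep the normal crop path
  var headroom = memory.jsHeapSizeLimit - memory.usedJSHeapSize;
  return headroom < neededBytes;
}

// Usage in the browser (helper names are hypothetical):
// if (shouldSkipCrop(window.performance.memory, 50 * 1024 * 1024)) {
//   uploadOriginal(file);
// } else {
//   cropThenUpload(file);
// }
```

Because the heap limit only covers the JS heap, not decoded image bitmaps, treat this as a rough heuristic rather than a guarantee.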

Large WebGL application loading time

Say somebody were to create a large application based on WebGL. Let's say it's a 3D micromanagement game which by itself takes approximately 700 megabytes of files to run.
How would one deal with loading the assets? I assume it would have to be done asynchronously, but I am unsure how exactly it would work.
P.S. I am thinking RollerCoaster Tycoon as an example, but really it's about loading large assets from server to browser.
Well, first off, you don't want your users to download 700 megabytes of data, at least not all at once.
One should try to keep as many resources (geometry, textures) as possible procedural.
All data that needs to be downloaded should be loaded progressively/on demand using multiple web workers, since one will probably still need to process the data with JavaScript, which can become quite CPU-heavy with many resources.
Packing the data into larger bundles may also be advisable to reduce request overhead.
Naturally, one would gzip all resources and try to preload data as soon as the user hits the website. When using image textures and/or text content, embedding it in the HTML (using <img> and <script> tags) lets you exploit the browser cache to some extent.
Using WebSQL/IndexedDB/LocalStorage can be done, but due to the currently very low quotas and the flaky or nonexistent implementations of the quota management API, it's not a feasible solution right now.
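The progressive/on-demand loading described above can be sketched as a small queue with bounded concurrency. The loadOne function is injected (it would typically wrap fetch(), or hand the bytes to a Web Worker for decoding), so only the scheduling logic is shown:

```javascript
// Load a list of asset URLs with at most `maxParallel` requests in flight,
// so a large game can stream geometry/textures instead of fetching
// everything up front.
function loadAll(urls, loadOne, maxParallel) {
  var queue = urls.slice();
  var results = [];
  function next() {
    if (queue.length === 0) return Promise.resolve();
    var url = queue.shift();
    return loadOne(url).then(function (asset) {
      results.push(asset);
      return next(); // keep this "lane" busy until the queue drains
    });
  }
  var lanes = [];
  for (var i = 0; i < Math.min(maxParallel, urls.length); i++) {
    lanes.push(next());
  }
  return Promise.all(lanes).then(function () { return results; });
}
```

For example: loadAll(textureUrls, function (u) { return fetch(u).then(function (r) { return r.arrayBuffer(); }); }, 4), where textureUrls is whatever manifest your game ships.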

Server side javascript with WebGL?

I'm thinking about learning WebGL and the first thing that comes to mind is that JavaScript is client-side; what approach (if any) is used to have server-side JavaScript specifically related to WebGL?
I am new to WebGL as well, but I still feel that this is a very advanced question you are asking. I believe it is an advanced question because of the scope of answers that exist to do what you are asking and the current problems related to proprietary WebGL.
If you have done any research into WebGL, you will immediately see the need for server-side code, due to the fact that the WebGL API code is executed within the browser and is thus freely available to any knowing individual. This is not a typical circumstance for game developers, who are used to shipping their code compiled.
By making use of server side controls a developer can hide a large amount of WebGL translations, shaders, and matrices, and still maintain a level of information hiding on the client side code. However, the game will never work without an active internet connection.
Since WebGL is still relatively new, and IE does not support it, expect things to change. Microsoft may decide they want to build their own web API like WebGL that ends up being an ASP.NET library. All of the complexity that currently goes into building a solution to the problem you are facing gets condensed into a three-button wizard.
With that being said I think the answer to your question lies in the fate of some future technologies. For bigger goals there will more than likely be a large amount of back and forth communication; protocols like HTTP may not cut it. WebSockets or other similar technologies may be worth looking into. If you are attempting to use Canvas for something smaller just an understanding of building dynamic JavaScript may be enough.
The problem with these answers is that OpenGL is an API itself and has a specific order of operations that is not meant to be changed. This means that this approach to building WebGL applications is very limited, since changing GL objects may require a whole canvas restart, a page refresh, or a new page request, which could produce undesirable effects. For now I would say aim low, but one thing's for sure: WebGL is going to change the web as we developers know it.
I'm not sure what you are looking for, probably not this... :)
but...
If you want a server-side fallback for browsers not supporting WebGL, let's say for generating fixed frames as PNG images of some 3D scene, then you could write your 3D viewer in C or C++, build it against OpenGL ES when targeting your server-side fallback, and use Emscripten to target browsers supporting WebGL.

Render RGBA to PNG in pure JavaScript?

Let's say I have a canvas element, and I need to turn the image on the canvas into a PNG or JPEG. Of course, I can simply use canvas.toDataURL, but the problem is that I need to do this twenty times a second, and canvas.toDataURL is extremely slow; so slow that the capturing process misses frames because the browser is busy converting to PNG.
My idea is to call context.getImageData(...), which evidently is much faster, and send the returned CanvasPixelArray to a Web Worker, which will then process the raw image data into a PNG or JPEG. The problem is that I can't access the native canvas.toDataURL from within the Web Worker, so instead, I'll need to resort to pure JavaScript. I tried searching for libraries intended for Node.js, but those are written in C++. Are there any libraries in pure JavaScript that will render raw image data into a PNG or JPEG?
There have been several JavaScript ports of libpng, including pnglets (very old), data:demo, and PNGlib.
The PNG spec itself is pretty straightforward; the most difficult part of implementing a simple PNG encoder yourself is getting ZLIB right (for which there are also many independent JavaScript implementations out there).
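To illustrate how straightforward the framing layer is: a PNG file is just an 8-byte signature followed by chunks, each carrying a big-endian length, a 4-byte type, the data, and a CRC-32 over the type and data. A minimal sketch of that framing (a full encoder would additionally emit IHDR, a zlib-wrapped IDAT, and IEND):

```javascript
// Bitwise CRC-32 (polynomial 0xEDB88320), as required by the PNG spec.
function crc32(bytes) {
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < bytes.length; i++) {
    crc ^= bytes[i];
    for (var k = 0; k < 8; k++) {
      crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

// Build one PNG chunk: 4-byte big-endian length, 4-byte ASCII type,
// the payload, then the CRC-32 of type + payload.
function pngChunk(type, data) {
  var out = new Uint8Array(12 + data.length);
  var view = new DataView(out.buffer);
  view.setUint32(0, data.length); // length (big-endian by default)
  for (var i = 0; i < 4; i++) out[4 + i] = type.charCodeAt(i);
  out.set(data, 8);
  view.setUint32(8 + data.length, crc32(out.subarray(4, 8 + data.length)));
  return out;
}
```

As a sanity check, pngChunk("IEND", new Uint8Array(0)) produces the twelve bytes 00 00 00 00 49 45 4E 44 AE 42 60 82 that terminate every valid PNG file.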
There's actually a C++ to JavaScript compiler called Emscripten.
Someone made a port of libpng with it, which you might want to check out.
I was able to write my own PNG encoder, which supports both RGB and palette depending on how many colors there are. It's intended to be run as a Web Worker. I've open-sourced it as usain-png.

Is it possible to optimize/shrink images before uploading?

I am working on a web application that will deal with many image uploads. It's quite likely that the users will be in areas with slow internet connections, and I'm hoping to save them upload time by compressing the images before uploading.
I have seen that Aurigma Image Uploader achieves this using a Java applet or ActiveX, but it's expensive and I'd prefer something open source, or at least a little cheaper. Ideally I'd like to roll my own, if that's at all possible.
I'm developing in Ruby on Rails, if that makes any difference.
Thanks!
Edit, just to clarify: I don't mind if the solution uses ActiveX or an applet (although JS is ideal); I'm just looking for something a little cheaper than Aurigma at this stage of development.
Also, it may not be feasible for users to shrink the image themselves, as in many instances they will be uploading directly from an internet cafe or another public internet spot.
Generally, it isn't possible to write an image compressor in JavaScript. Sorry.
You'll need to use a plugin of some sort, and as you mention, other sites use Java.
It appears to be possible to write something to encode a JPEG in ActionScript (i.e. Flash), which will reach a much larger audience than the Java plugin you mention. Here's a link to a blog post talking about PNG & JPEG encoders in ActionScript.
Here's another blog post with a demo of an inlined JPEG encoder in ActionScript.
Only if you use Flash or Silverlight (the only ways to be cross-platform).
http://www.jeff.wilcox.name/2008/07/fjcore-source/ may be worth a read.
Without using applets or ActiveX (Windows only), you can't execute anything on the client PC.
Probably not, but you can always insist that image uploads over x size will not succeed.
Is this an application where you can force them to provide a smaller image? In that case you could check the size first to verify it fits your standards. This is what Facebook used to do with profile pictures: if it was too big, they wouldn't take it.
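Worth noting that these answers predate the HTML5 canvas APIs; in current browsers a client-side downscale before upload needs no plugin at all. A sketch, where the aspect-ratio math is a pure function and the canvas/toBlob part (plus the hypothetical upload() helper) only runs in a browser:

```javascript
// Compute target dimensions that fit within a bounding box while keeping
// the aspect ratio; never upscales.
function fitWithin(width, height, maxSide) {
  var scale = Math.min(1, maxSide / Math.max(width, height));
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale)
  };
}

// In a modern browser:
// var size = fitWithin(img.naturalWidth, img.naturalHeight, 1600);
// var canvas = document.createElement("canvas");
// canvas.width = size.width;
// canvas.height = size.height;
// canvas.getContext("2d").drawImage(img, 0, 0, size.width, size.height);
// canvas.toBlob(function (blob) { upload(blob); }, "image/jpeg", 0.8);
```

The 1600 px bound and 0.8 JPEG quality are arbitrary example values; tune them to your bandwidth/quality trade-off.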
