Problem
I'm trying to send an array of images (saved locally in an assets folder on the JavaScript side) to the native side (both iOS and Android). The native side processes the images and returns a new image. I know the flow works because I've tried sending image URLs (internet-based images instead of local ones) and the native code runs perfectly.
The problem is that downloading each image is slow, and I would be sending 10 or 15 images at a time. I think the best way to handle this is to send the absolute paths of the images on the device.
What I've tried so far:
Sending the URLs of the images (works, but it's not what I'm looking for; the idea is to have the images saved in an "assets" folder instead of downloading them one by one)
Ideas:
Maybe I can get an array of base64 strings, one for every image, but that seems slow; on the native side I would have to convert every base64 string into data and then into an image.
It would be ideal to send the absolute URI path of each asset, but I couldn't find a way to get it (I could only get something like './assets/myImage.png')
According to the React Native documentation (https://facebook.github.io/react-native/docs/native-modules-ios.html), the native side supports the standard JSON types (string, number, boolean, arrays and objects of any of these types, plus React callbacks/promises).
When you require a local image file on the JS side you're not actually getting its data. Instead, RN creates a map of IDs to images; if you log the required value you can see that it's actually just a number.
Although numbers can be serialized over the bridge, that alone is not enough. To access the image on the native side you first need to resolve the required image into an object that the native code can later convert. On the JS side it looks something like this:
const myImage = require('./my-image.png');
const resolveAssetSource = require('react-native/Libraries/Image/resolveAssetSource');
const resolvedImage = resolveAssetSource(myImage);
You can now pass resolvedImage to your native API; it is a dictionary object (a map) with the image info (size, uri, etc.). On the iOS side you can then convert it to an image:
UIImage *image = [RCTConvert UIImage:imageObj];
On Android it works in a similar way, but as far as I know there are no direct conversion methods, so you'll need to extract the uri from the image map object and load the image from there.
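To send several local images at once, the steps above can be sketched in JS. Note that resolveAssetSource is stubbed here with the (assumed) shape it returns, since the real module only runs inside a React Native app, and the native module name in the comment is a placeholder:

```javascript
// Sketch: resolving an array of required local images into serializable maps
// before passing them over the bridge. The stub below mimics the (assumed)
// shape resolveAssetSource returns; in an RN app you would instead do:
//   const resolveAssetSource = require('react-native/Libraries/Image/resolveAssetSource');
const resolveAssetSource = (assetId) => ({
  __packager_asset: true,
  uri: `asset-id-${assetId}`, // in RN this is a packager or bundle URI
  width: 100,
  height: 100,
  scale: 1,
});

// require('./my-image.png') returns a numeric asset ID, so an array of
// required images is just an array of numbers:
const assetIds = [1, 2, 3];
const resolvedImages = assetIds.map(resolveAssetSource);

// Each entry is now a plain JSON-safe map that the bridge can pass to a
// native module, e.g. NativeModules.ImageProcessor.process(resolvedImages)
// (hypothetical module name).
console.log(resolvedImages.length); // 3
```

On the native side each element of the array arrives as a dictionary/map, so the iOS RCTConvert call and the Android uri extraction described above apply per element.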
Related
My Blazor Server app uses a canvas element to manipulate an image that is loaded into an img element via a file drop zone; the img serves as a reference for the canvas and holds the information that is worked on via JS. I'm using Excubo's Canvas wrapper to stay in C# as much as possible, but I don't think this error is related to it, as it also happens with pure JS. Since this is more of a logic problem I don't think sharing code would help much, but here is a crude visual representation of what I mean; perhaps it will convey the problem better.
The end goal is to invoke a JS function that loads the image src into a secondary work canvas and, using the data present in the main canvas, manipulates pixel information accordingly for both. After processing the data, I convert it back with toDataURL and set the img src attribute.
The problem happens when returning the base64 data from that function, particularly with large images. The server disconnects immediately, and I'm not sure whether there is a way to extend the allowed wait time or this is just an implementation problem on my part.
Also worth mentioning: I extended the maximum file size for the drop zone InputFile component to 30 MB, but this happens with files of 3 MB (and maybe even lower; I couldn't pin down the threshold yet).
Perhaps there's another way to achieve what I'm aiming for, but I'm not sure how I would reference the img src attribute in C# without JS. Client/server-side logic isn't a concern because this app is meant to run locally on the server machine only, so they are essentially the same target.
Found a solution that works for my case. Here is the reference for the fix.
The default MaximumReceiveMessageSize is too low for my needs; after increasing it to 30 MB the app works without disconnecting from the server. I also tried increasing ApplicationMaxBufferSize and TransportMaxBufferSize in the MapBlazorHub pipeline configuration, but that didn't help. Increasing the JS interop timeout in the AddServerSideBlazor service options was also unfruitful.
Here is the relevant Program.cs code I'm using now:
var maxBufferSize = 30 * 1024 * 1024;
builder.Services.AddServerSideBlazor().AddHubOptions(opt => { opt.MaximumReceiveMessageSize = maxBufferSize; });
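If raising the hub limit isn't desirable, an alternative is to chunk the base64 payload on the JS side so no single interop message exceeds the limit. This is a minimal sketch; the 20 KB chunk size and the invokeMethodAsync call in the comment are assumptions, not code from the question:

```javascript
// Sketch: splitting a large data URL into chunks small enough to stay under
// SignalR's MaximumReceiveMessageSize. The 20 KB chunk size is an assumption.
function chunkString(payload, chunkSize = 20 * 1024) {
  const chunks = [];
  for (let i = 0; i < payload.length; i += chunkSize) {
    chunks.push(payload.slice(i, i + chunkSize));
  }
  return chunks;
}

// In Blazor you would send each chunk in its own interop call and reassemble
// on the C# side, e.g.:
//   for (const c of chunkString(dataUrl)) {
//     await dotNetRef.invokeMethodAsync('ReceiveChunk', c);
//   }
const fakeDataUrl = 'x'.repeat(50000);
const chunks = chunkString(fakeDataUrl);
console.log(chunks.length); // 3
console.log(chunks.join('') === fakeDataUrl); // true
```

The trade-off is more round trips, but each message stays well under the default limit regardless of image size.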
I'm looking into using the WebView2 component to render some UI on Windows, and I have a question about resource loading: for "normal" resources (HTML, CSS, images, JavaScript, and so on), the component mostly handles loading itself. But is there a way to hook into that loading process and control it yourself with WebView2?
As an example: say you want to load an image that is procedurally generated on native side of the WebView2, i.e. it is not in a file or on a server. How would you do that? Another example would be if you stored all your resources in a zip file or a SQLite database or something like that, and you wanted the WebView2 component to load resources directly from it (with you writing the "bridge" code yourself), how would you do that?
The reason I'm asking is that on macOS, WKWebView provides exactly this functionality by allowing you to create custom URL schemes and hook up handlers to them. This is exactly what I want, and it allows you to do something like this in your HTML:
<script type="text/javascript" src="my-scheme://test.js"></script>
And on the Objective-C side, you can do something like this (leaving out the boilerplate for hooking up my-scheme to this handler; this is the meat of the code for handling the response):
const char* data = "function testFunction() { return 18; }";
[task didReceiveResponse: [[NSURLResponse alloc]
    initWithURL: task.request.URL
    MIMEType: @"text/javascript"
    expectedContentLength: strlen(data)
    textEncodingName: @"ASCII"]];
[task didReceiveData: [NSData
    dataWithBytes: data
    length: strlen(data)]];
[task didFinish];
I.e. by registering the custom url scheme handler, I could send over my C string there as a JavaScript file. It doesn't have to be a hard-coded C string, obviously, as I mentioned the most relevant uses for me would be to provide procedurally generated resources, as well as loading things that are not necessarily stored as files (or on a web server). How would you do this with WebView2?
One way I have sent any type of file from native code to a browser is by converting the file to Base64 and sending it over. In the case of WebView2 you could use ExecuteScriptAsync. Once the Base64 string is received, JavaScript code (which you previously injected) can convert it into a blob/file, which you can then add to any part of the DOM you want.
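The page-side half of that approach can be sketched as below; the helper would be pre-injected into the page, and the element ID and MIME type in the comment are assumptions for illustration:

```javascript
// Sketch: a helper injected into the page. Native sends a Base64 string via
// ExecuteScriptAsync; this decodes it into raw bytes for building a Blob.
function base64ToBytes(b64) {
  const binary = atob(b64); // atob is a browser (and Node 16+) global
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// In the page you would then do (hypothetical element ID):
//   const blob = new Blob([base64ToBytes(b64)], { type: 'image/png' });
//   document.getElementById('preview').src = URL.createObjectURL(blob);
const bytes = base64ToBytes(btoa('hello'));
console.log(bytes.length); // 5
```

Note this is a workaround rather than true resource interception; it sidesteps the loading pipeline entirely by pushing data into the DOM after the fact.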
I want to get the screenshots from PageSpeed Insights. Using the API, I tried code that I found here: https://embed.plnkr.co/plunk/c7fAFx, but it doesn't work.
Please help me! I am learning to code.
Why doesn't the linked code work?
Because it is ancient: it attempts to use version 1 of the PageSpeed Insights API.
The API is currently on version 5, which is why the code fails; v1 no longer exists as a public API.
How to recreate the functionality of this App?
As you are learning to code I will lay out the steps for you and then you can research how to do each step and use that to learn.
I will warn you as a beginner there is a lot to learn here. However on the flip side if you manage to work out how to do the below you will have a good first project that has covered multiple areas of JS development.
As you have marked this "JavaScript" I have assumed you want to do this in the browser.
This is fine up until the point where you want to save the images, as you will have to work out how to ZIP them, which is probably the most difficult part.
I have highlighted the steps you need to learn / implement in bold.
1. First call the API:
The current URL for Page Speed Insights API is:
https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://yoursite.com
Just change url=https://yoursite.com to any site you want to gather the images from.
For a small number of requests per day you do not need to worry about an API key.
However if you do already have an API key just add &key=yourAPIKey to the end of the URL (replacing the yourAPIKey part obviously :-P).
You want to make an AJAX call to the API URL first.
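Step 1 can be sketched as below; the handleResult name in the comment is a placeholder for your own code, and the fetch call is left commented so the URL-building part stands alone:

```javascript
// Sketch: building the PSI v5 request URL. The API key is optional for a
// small number of requests per day.
function buildPsiUrl(site, apiKey) {
  const u = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
  u.searchParams.set('url', site); // the site you want screenshots of
  if (apiKey) u.searchParams.set('key', apiKey);
  return u.toString();
}

// The actual AJAX call would then be:
//   fetch(buildPsiUrl('https://yoursite.com'))
//     .then((res) => res.json())
//     .then(handleResult);
console.log(buildPsiUrl('https://yoursite.com'));
```

Using the URL class means the target site gets percent-encoded for you, which matters because it contains characters like `://` that are not safe in a query string.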
2. Parse the response
Then when you get a response you are going to get a large JSON response.
You need to parse the JSON response and turn it into a JavaScript Object or Array you can work with.
3. Find the relevant parts
So once you have a JavaScript Object you can work with you are looking for "final-screenshot" and "screenshot-thumbnails".
These are located under "audits".
So, for example, if you parsed into an object called lighthouseResults, you would be looking for lighthouseResults['audits']['final-screenshot'] or lighthouseResults['audits']['screenshot-thumbnails']
"final-screenshot" contains how the site looked after it was loaded, so if you just want that you want this element.
This contains an image that is base64 encoded (lighthouseResults['audits']['final-screenshot']['details']['data']).
"screenshot-thumbnails" is the part you want if you want the "filmstrip" of how the site loads over time. This contains a list of the thumbnails base64 encoded.
To access each of these you need to loop over the items located at lighthouseResults['audits']['screenshot-thumbnails']['details']['items'] and return the ['data'] part of each item.
Find the parts that you want and store them to a variable
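Step 3 can be sketched as below; the object is a mock with only the minimal shape described above, not a real API response:

```javascript
// Sketch: extracting the final screenshot and the filmstrip from the parsed
// response. This mock mirrors the paths described in step 3.
const lighthouseResults = {
  audits: {
    'final-screenshot': {
      details: { data: 'data:image/jpeg;base64,FINAL' },
    },
    'screenshot-thumbnails': {
      details: {
        items: [
          { data: 'data:image/jpeg;base64,FRAME1' },
          { data: 'data:image/jpeg;base64,FRAME2' },
        ],
      },
    },
  },
};

// How the site looked once loaded:
const finalShot = lighthouseResults['audits']['final-screenshot']['details']['data'];

// The filmstrip: one base64 data URL per thumbnail item:
const filmstrip = lighthouseResults['audits']['screenshot-thumbnails']['details']['items']
  .map((item) => item['data']);

console.log(filmstrip.length); // 2
console.log(finalShot.startsWith('data:image/jpeg;base64,')); // true
```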
4a. Decode the image(s)
Once you have the image(s) in a variable, they will be base64-encoded strings. You need to convert these into usable JPG images.
To do this you need to base64 decode each image.
For now I would just display them in the browser once they are decoded.
learn how to decode a base64 encoded image
4b. Alternative to decoding the image
As the images are base64 encoded, they can be displayed directly in a browser without decoding first.
You can just add an image whose src is the base64 image string you gathered in step 3.
If you just want to display the images this is much easier.
Add images to the screen and set the src to the base64 image string you have from step 3
5. Saving the images
Now, you said in a comment that you want to save the images. Although this can be done via JavaScript, it is probably a little advanced for starting out.
If you want to save the images you really want to be doing that server side.
However if you do want to download the images (filmstrip) in the browser then you want to look into a zip utility such as jszip.js.
The beauty of this is that such libraries normally want you to convert the images to base64 before zipping them, so it may not actually be that difficult!
My app loads a small HTML document that contains one image in a webview. How can I fetch this image and use it as a Bitmap object in my app?
I'm already using a JavascriptInterface together with my WebView for getting some other information, like passing booleans. Is it possible to pass an image as well via the JavascriptInterface? Is it a good idea, or is there a better way?
Take a look at this question: Get image data in JavaScript?
You might be able to draw the image on a (presumably hidden) canvas, Base64-encode it with toDataURL, pass it as a string through the JS interface, and then decode it on the Java side. I imagine it'll be slow, but it's worth a try.
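A sketch of that approach; the canvas/drawImage part only runs in a browser, so it is shown as a comment, and the "AndroidBridge" name is a hypothetical @JavascriptInterface object, not something from the question. The small helper strips the data-URL header, which is useful before Base64-decoding on the Java side:

```javascript
// Strip the "data:image/png;base64," header so the Java side can decode the
// remainder with a plain Base64 decoder.
function dataUrlToBase64(dataUrl) {
  return dataUrl.slice(dataUrl.indexOf(',') + 1);
}

/* Page-side code (browser only; "AndroidBridge" is a hypothetical interface
   registered via webView.addJavascriptInterface):
const img = document.querySelector('img');
const canvas = document.createElement('canvas');
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext('2d').drawImage(img, 0, 0);
AndroidBridge.onImage(dataUrlToBase64(canvas.toDataURL('image/png')));
*/

console.log(dataUrlToBase64('data:image/png;base64,iVBORw0KGgo')); // iVBORw0KGgo
```

On the Java side the received string can then be turned into a Bitmap with android.util.Base64 plus BitmapFactory.decodeByteArray.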
Can I load something using AS3/SWF and then create a DOM element using javascript to display the loaded data, without having the browser to load the same data twice?
Yes, but it's not easy. You would have to convert the image to a string (for example base64) using a custom function that loops through all the pixels of the BitmapData, send it to the web page via ExternalInterface, and then convert it back, either by using the base64 string to set the image URL or by using Canvas to rebuild the image from the pixels.
Perhaps I've missed something in what you've said, but wouldn't it be quite easy to use the FileReference upload() method to send the file to a PHP script, which then moves the file to the desired location on the server? If you wanted the image to display in HTML without a new page load, you could (I'm not too familiar with JS, but I assume this is possible) periodically check whether your desired file is in the desired location. You could call a JS function through ExternalInterface to tell the HTML page to expect this file and to check for it.
I've not tested this method, so I can't guarantee there are no flaws in it, but it's the way I would attempt first. I'm assuming you're sending an image, but it would work fine for any other file.