I'm making a callback from JavaScript to a method on my SWF. It works in Firefox with no problems, but in Chrome the second parameter that's passed is always received as null.
I've debugged my javascript and found that everything is working fine and the two values that are passed to the swf are correct at the point where the callback to the swf is made.
At first I thought this might be a cross-domain issue, but I've ruled that out: if that were the case, the method on the SWF would not be called at all. The second value is a binary string representation of an image, and the length of the string being passed is 101601, so I wondered whether there's a limitation on the amount of data that can be passed. The first parameter is a much smaller string representing the file type, and it is always received successfully.
Like I said, the strange thing is that it works perfectly fine in Firefox.
NOTE: I've just tried it with a much smaller image (stupidly, it hadn't occurred to me to test that until I wrote this), where the string length is only 133, and it still fails. So that rules out a size limit.
I've also checked the AS3 docs, and they don't appear to mention any such limitation.
The string is being produced using the FileReader class's readAsBinaryString() method. As far as I'm aware, this outputs a UTF-16 string representation of the binary it receives. I think I'm right in saying that this shouldn't be an issue, though, as it's still just a string and the encoding should only matter when decoding.
Javascript
var readFile = function(file)
{
    var reader = new FileReader();
    reader.onloadend = function( evt )
    {
        alert( reader.result.length ); // this outputs the correct length
        alert( reader.result );        // this outputs the binary encoded as a string
        swf.addImage( file.type, reader.result );
    };
    reader.readAsBinaryString(file);
};
AS3
ExternalInterface.addCallback( "addImage", addImageHandler );
and
private function addImageHandler( type:String, file:String ):void
{
    trace( "type: ", type ); // this traces the type correctly
    trace( "file: ", file ); // this traces null in Chrome, but traces the binary string in Firefox
}
So there appears to be an issue with passing the UTF-16 encoded string from JavaScript to Flash, but only in Chrome.
I'm not sure why this would be the case, but if I encode the UTF-16 string to base64, or convert it from UTF-16 to UTF-8 within the JavaScript before I pass it to the SWF, then everything works as expected.
EDIT
On further testing, the solution turned out to be simpler than expected: it was just a case of calling encodeURI on the string before passing it to the SWF and then calling decodeURI at the other end.
Javascript
var reader = new FileReader();
reader.onloadend = function( evt )
{
    swf.addImage( file.type, encodeURI( reader.result ) );
};
reader.readAsBinaryString(file);
AS3
private function addImageHandler( type:String, file:String ):void
{
    file = decodeURI( file );
    ...
}
Of course, without a code sample from you this might not help at all, but in case it does:
This is how I invoke a JS function from AS3 with say 2 parameters:
ExternalInterface.call("JavaScriptFunctionName", arg1_value, arg2_value);
Where this is the JS function:
function JavaScriptFunctionName(arg1, arg2) { ... }
Related
I'm using the following method to convert files into base64-encoded strings, and it has been working fine for a long time, but I now see that Buffer is deprecated.
// function to encode file data to base64 encoded string
function base64_encode(file) {
var bitmap = fs.readFileSync(file);
// convert binary data to base64 encoded string
return new Buffer(bitmap).toString("base64");
}
let base64String = base64_encode("Document.png");
Can someone please help me modify this to work with the new suggested method as I'm not sure how to modify it myself?
Thank you so much in advance.
It is not Buffer that is deprecated but its constructor, so instead of new Buffer() you use, e.g., Buffer.from().
However, fs.readFileSync already returns a Buffer if no encoding is specified, so there is no real need to pass that to another Buffer. Instead you can do return fs.readFileSync(file).toString("base64")
Using the Sync part of the API is something you would usually want to avoid; if possible, switch over to the promise-based API.
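A minimal sketch of both variants (Node.js; the async version assumes a Node version with the fs/promises API):

const fs = require("fs");
const fsp = require("fs/promises");

// Synchronous: readFileSync returns a Buffer when no encoding is given,
// so it can be converted to base64 directly.
function base64_encode(file) {
    return fs.readFileSync(file).toString("base64");
}

// Promise-based alternative that avoids the Sync API.
async function base64EncodeAsync(file) {
    const buf = await fsp.readFile(file);
    return buf.toString("base64");
}

// If you do have raw binary data to wrap, Buffer.from(data) is the
// replacement for the deprecated new Buffer(data).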
I retrieve an encoded string from the server using AJAX (it was encoded with TextEncoder into UTF-8 and stringified before being sent to the server). I parse it upon retrieval and get an Object. I need to convert this Object back to a decoded string. TextDecoder has a decode method, but it expects an ArrayBuffer or ArrayBufferView, not an Object, so it throws a TypeError if I use my Object as-is:
var myStr = "This is a string, possibly with utf-8 or utf-16 chars.";
console.log("Original: " + myStr);
var encoded = new TextEncoder("UTF-16").encode(myStr);
console.log("Encoded: " + encoded);
var encStr = JSON.stringify(encoded);
console.log("Stringfied: " + encStr);
//---------- Send it to the server; store in db; retrieve it later ---------
var parsedObj = JSON.parse(encStr); // Returns an "Object"
console.log("Parsed: " + parsedObj);
// The following decode method expects ArrayBuffer or ArrayBufferView only
var decStr = new TextDecoder("UTF-16").decode(parsedObj); // TypeError
// Do something with the decoded string
This SO question (6965107) has extensive discussion on converting strings/ArrayBuffers, but none of those answers works for my situation. I also came across this article, which does not work if I have an Object.
Some posts suggest using "responseType: arraybuffer", which results in an ArrayBuffer response from the server, but I cannot use that when retrieving this encoded string, because there are many other items in the same result data which need a different content type.
I am kind of stuck and unable to find a solution after searching for a day on Google and SO. I am open to any solution that lets me save "strings containing international characters" to the server and "retrieve them exactly as they were", except changing the content type, because these strings are bundled within JSON objects that carry audio, video, and files. Any help or suggestions are highly appreciated.
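For reference, JSON.stringify serializes the Uint8Array produced by TextEncoder as a plain object keyed by index, so one sketch of a workaround is to rebuild a typed array from those values before decoding (note that TextEncoder always produces UTF-8, whatever label is passed to its constructor):

var parsedObj = JSON.parse(encStr); // e.g. { "0": 84, "1": 104, ... }
var bytes = new Uint8Array(Object.values(parsedObj)); // rebuild the byte array
var decStr = new TextDecoder("utf-8").decode(bytes); // decode as UTF-8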
Can someone offer some troubleshooting tips here? My PostXML() return value is NULL for a very small subset of data results; this works for 99.9% of usage. I think the failing data may contain special characters, but compared to a similar dataset that passes OK, it looks identical. The oXMLDoc.xml in File.asp contains a valid XML string while debugging, but it's null by the time it gets back to my JS call.
Are there any known issues with what looks like a valid XML element getting trashed in the Microsoft.XMLHTTP object?
function PostXML(sXML)
{
    var oHTTPPost = new ActiveXObject("Microsoft.XMLHTTP");
    oHTTPPost.Open("POST", "File.asp", false); // synchronous POST
    oHTTPPost.send(sXML);
    // documentElement is null???
    return oHTTPPost.responseXML.documentElement;
}
File.asp
<%
' oXMLDoc.xml contains valid XML here, but is NULL in the calling JS
Response.ContentType = "text/xml"
Response.Write oXMLDoc.xml
%>
Check the response headers. The content type needs to be application/xml.
XMLHttpRequest is available in current IEs. I suggest using the ActiveX only as a fallback.
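A common feature-detection sketch along those lines (assuming only legacy IE lacks the native object):

var xhr;
if (window.XMLHttpRequest) {
    xhr = new XMLHttpRequest(); // native object, available in current IEs
} else {
    xhr = new ActiveXObject("Microsoft.XMLHTTP"); // legacy fallback
}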
You can override the content mimetype on it:
xhr = new XMLHttpRequest();
xhr.overrideMimeType("application/xml");
...
This might be possible on the ActiveX object too, but I am not sure.
Another possibility is using the DOMParser to convert a received string into a Document instance.
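A minimal DOMParser sketch (assuming the XML arrives as text, e.g. via responseText):

var parser = new DOMParser();
var doc = parser.parseFromString(xhr.responseText, "application/xml");
// On malformed input the returned document contains a <parsererror> element.
var root = doc.documentElement;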
Found the issue.
Using the IE Dev tools/debugger, I found \xEF\xBF\xBF (the UTF-8 encoding of U+FFFF, a Unicode noncharacter) in one of the string attributes. This product uses MS SQL Server, and the query output did not show these characters even when copy/pasted into Notepad++. I'm assuming Enterprise Manager filters out unsupported characters... /=
Thanks for the help people!
Ouch man.
Is it possible to use jQuery instead?
The other thing I know is that if the returned XML is a little off (poorly formatted XML, case-sensitivity issues, illegal characters), the JavaScript will trash the return value. With jQuery you have better debugging options for seeing the error.
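A hedged sketch of the same POST via jQuery (URL and variable names taken from the question):

$.ajax({
    type: "POST",
    url: "File.asp",
    data: sXML,
    dataType: "xml", // jQuery parses the response and reports parse failures
    success: function (doc) {
        // doc is a parsed XML Document; doc.documentElement is the root
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // textStatus === "parsererror" points at malformed XML in the response
    }
});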
I'm trying to decode a base64 string for an image back into binary so it can be downloaded and displayed locally by an OS.
The string I have renders successfully when put as the src of an HTML IMG element with the data URI prefix (data:img/png;base64,), but it fails when using the atob function or a Google Closure function.
However, decoding succeeds when the string is pasted in here: http://www.base64decode.org/
Any ideas?
EDIT:
I got it to decode with another library rather than the built-in JS function, but the file still won't open locally: a Mac says it's damaged or in an unknown format and can't be opened.
The code is just something like:
imgEl.src = 'data:img/png;base64,' + contentStr; // this displays successfully
decodedStr = window.atob(contentStr); // this throws the invalid character exception;
// a different script decodes it successfully, but the file still won't display locally
The base64 string itself is too long to post here (the limit is 30,000 characters).
I was just banging my head against the wall on this one for a while.
There are a couple of possible causes of the problem: 1) UTF-8 problems; there's a good write-up and a solution for that here.
In my case, I also had to make sure all the whitespace was out of the string before passing it to atob, e.g.:
function decodeFromBase64(input) {
    input = input.replace(/\s/g, ''); // strip all whitespace before decoding
    return atob(input);
}
What was really frustrating was that the base64 parsed correctly using the base64 library in Python, but not in JS.
I had to remove the data:audio/wav;base64, prefix from the front of the b64 string, as it was given as part of the b64.
var data = b64Data.substring(b64Data.indexOf(',') + 1); // keep everything after the comma
var processed = atob(data);
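Combining both fixes, a hedged helper that strips an optional data-URI prefix and all whitespace before decoding, then converts the result to bytes (the function name is illustrative):

function base64ToBytes(b64) {
    // Drop an optional "data:...;base64," prefix.
    var comma = b64.indexOf(',');
    var data = comma >= 0 ? b64.substring(comma + 1) : b64;
    // Strip whitespace, which trips up atob.
    data = data.replace(/\s/g, '');
    var binary = atob(data);
    // Convert the binary string to a byte array for saving or display.
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return bytes;
}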
I am working on my open-source project Downloadify, and up until now it has simply handled returning Strings in response to ExternalInterface.call commands.
I am trying to put together a test case using JSZip and Downloadify together, the end result being that a Zip file is created dynamically in the browser, then saved to disk using FileReference.save. However, this is my problem:
The JSZip library can return either a base64-encoded string of the Zip or the raw byte string. The problem is, if I return that byte string in response to the ExternalInterface.call command, I get this error:
Error #1085: The element type "string" must be terminated by the matching end-tag "</string>"
ActionScript 3:
var theData:* = ExternalInterface.call('Downloadify.getTextForSave',queue_name);
Where queue_name is just a string used to identify the correct instance in JS.
JavaScript:
var zip = new JSZip();
zip.add("test.txt", "Hello world!\n");
var content = zip.generate(true);
return content;
If I return a normal string instead of the byte string, the call works correctly. I would like to avoid using base64, as I would have to include a base64 decoder in my SWF, which would increase its size.
Finally: I am not looking for an AS3 Zip generator. It is imperative to my project that that part runs in JavaScript.
I am admittedly not an AS3 programmer by trade, so if you need any more detail, please let me know.
When data is returned from JavaScript calls, it is serialized into an XML string. So if the raw string returned by JSZip includes characters which make the XML invalid, which is what I think is happening here, you'll get errors like that.
What you get as a return is actually:
<string>[your JSZip generated string]</string>
Imagine your return string includes a "<" character: this will make the XML invalid, and it's hard to tell which character codes a raw byte stream will translate to.
You can read more about the external API's XML format on LiveDocs.
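As a concrete illustration (the handler assignment is schematic, and the envelope shown is roughly what the serialization produces, not exact library output):

// JS handler that the SWF invokes via ExternalInterface.call:
Downloadify.getTextForSave = function (queueName) {
    return "PK\u0003\u0004<..."; // raw zip bytes, containing "<"
};
// Flash then receives roughly: <string>PK..<...</string>
// The stray "<" makes the XML invalid, hence error #1085.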
I think the problem is caused by the fact that Flash expects a UTF-8 string and you are throwing binary data at it. For example, the byte sequence 0x00FF will not turn out to be valid UTF-8...
You can try fiddling around with flash.system::System.setCodePage, but I wouldn't be too optimistic...
I guess a base64 decoder is probably really the easiest option... I'd rather worry about speed than about file size, though... This rudimentary decoder method uses less than half a K:
public function decodeBase64(source:String):ByteArray {
    var ret:ByteArray = new ByteArray();
    var map:Object = new Object();
    var i:int = 0;
    // build a lookup table for the 64 base64 digits
    for each (var char:String in "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".split("")) map[char] = i++;
    map["="] = 0; // padding decodes to zero; trailing null bytes are not trimmed
    source = source.split("\n").join("").split("\r").join(""); // remove line breaks
    // every 4 input characters encode 3 output bytes
    for (i = 0; i < source.length / 4; i++) {
        var buf:int = 0;
        for each (char in source.substr(i * 4, 4).split("")) buf = (buf << 6) + map[char];
        ret.writeByte(buf >>> 16); // high byte
        ret.writeShort(buf);       // low two bytes
    }
    return ret;
}
To offset the size, you could simply shorten function names and use a smaller image... or use ColorTransform or ConvolutionFilter on one image instead of four... or compile the image into the SWF for a smaller overall size...
So unless you're planning on working with MBs of data, this is the way to go...
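For completeness, the JS side would then return base64 instead of the raw byte string. A sketch with the JSZip API used in the question (assuming generate() without the true flag yields base64, as it did in that version):

var zip = new JSZip();
zip.add("test.txt", "Hello world!\n");
// generate() without arguments returns a base64 string here, which
// passes safely through the ExternalInterface XML envelope.
return zip.generate();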