I have to encode a string in C# and decode it with the JavaScript unescape function.
The JavaScript unescape is the only option, since I am sending the string in a GET request to some API that uses unescape to decode it.
I tried almost everything:
Server.UrlEncode
WebUtility.HtmlEncode
and a lot of other encodings! I even tried Uri.EscapeDataString using JScript.
Nothing encodes the string the way the escape function does.
Any idea how to make it work?
EDIT:
This is my code:
string apiGetRequest = String.Format("http://212.00.00.00/Klita?name={0}&city={1}&CREATEBY=test ", Uri.EscapeDataString(name), Uri.EscapeDataString(city));
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(apiGetRequest);
req.GetResponse();
Can you give an example of the string you want to encode and the encoded result?
URL encoding is the correct encoding type you need. Make sure you don't double-encode your string somewhere in your code.
You might need to use decodeURIComponent instead of unescape, since unescape is not UTF-8 aware and thus might result in a broken string after decoding.
See http://xkr.us/articles/javascript/encode-compare/ for more information.
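For example, a minimal sketch (the Hebrew sample string is mine, not the poster's data) of why the two decoders behave differently:
// Uri.EscapeDataString produces UTF-8 based percent-escapes, which decodeURIComponent
// understands but unescape does not.
var encoded = '%D7%A9%D7%9C%D7%95%D7%9D';  // "שלום" percent-encoded as UTF-8
console.log(decodeURIComponent(encoded));  // שלום
console.log(unescape(encoded));            // garbled: unescape turns each %XX into a single Latin-1 character
console.log(escape('שלום'));               // %u05E9%u05DC%u05D5%u05DD — the %uXXXX form that only unescape understands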
EDIT:
I don't know much about ASP, but it looks like you're trying to access the URL not with a browser but with your ASP server-side application. Well, your server does not run any JS code. You will just retrieve the HTML markup and maybe some JS code as one big string. That code would be parsed and executed within a browser, but not within ASP.
Related
So I'm trying to pass a URL from my backend to my frontend, and it has some escaping going on, such that by the time the string gets to the frontend it looks something like "www.test.com".
Is there a way I can unescape this string, or is my best solution here to just manually replace the escaped substrings with quotes?
Thanks for your help!
'"www.test.com"'.replace(/"/g,'')
output:
"www.test.com"
My app depends on a web service to form its URIs, so sometimes it comes up with (what I believe is) a Windows-1250 encoded string (/punk%92d), and Express fails as follows:
Connect
400 Error: Failed to decode param 'punk%92d'
at Layer.match
So I thought about converting each link to that segment into UTF-8 (example: /punk’d, so there would be no reference to the offending encoding), and back again to Windows-1250 to work with the external web service.
I tried this approach using both iconv and iconv-lite, but there's always something wrong with the results: /punk d, /punk�d, etc.
Here's a sample using iconv:
var str = 'punk’d';
var buf = new Buffer(str.toString('binary'), 'binary');
console.log(new Iconv('UTF-8', 'Windows-1250').convert(buf).toString('binary'));
…and iconv-lite:
console.log(iconv.decode(new Buffer(str), 'win1250'));
I know using binary is a bad approach, but I was hoping something, anything, would just do the job. I obviously tried multiple variations of this code, since my knowledge of Buffers is limited, and even simpler things wouldn't work, like:
console.log(new Buffer('punk’d').toString('utf-8'));
So I'm interested in either a way to handle those encoded strings in the URI within Express, or an effective way to convert them within Node.js.
Sorry if this seems like too simple a thing to try, but since Node and Express are both JavaScript, have you tried simply using decodeURIComponent('punk’d')? It looks to me like it's simply a standard encoded URI. I think you're getting that weird output from iconv because you're converting from the wrong encoding.
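For reference, here is a minimal iconv-lite sketch that does yield punk’d from the raw bytes, in case decodeURIComponent rejects %92 (0x92 is not a valid UTF-8 sequence). decodeWin1250Param is a made-up helper name, not an Express or Node API:
var iconv = require('iconv-lite');
// Turn 'punk%92d' into raw bytes, then reinterpret those bytes as Windows-1250.
function decodeWin1250Param(segment) {
  var bytes = Buffer.from(segment.replace(/%([0-9a-f]{2})/gi, function (m, hex) {
    return String.fromCharCode(parseInt(hex, 16));
  }), 'binary');                          // 'binary' (latin1) keeps each %XX as one byte
  return iconv.decode(bytes, 'win1250');  // 0x92 is ’ in Windows-1250
}
console.log(decodeWin1250Param('punk%92d')); // punk’d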
I'm trying to convert Chinese characters in an XML file to a readable Chinese string using JavaScript, but I'm not sure how to. I have checked other SO posts and tried the following
unescape(encodeURIComponent('丘'))
but I still can't get it to work, and I'm wondering if someone could help?
<utf8>丘</utf8>
Neither unescape nor encodeURIComponent (which deal with percent-encoding) will help you with an XML character entity. You just want to parse the XML file! Accessing the DOM will then yield the expected string.
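For example, a minimal sketch using the browser's DOMParser (the numeric character reference &#x4E18; below stands in for whatever reference the poster's file uses; it resolves to 丘):
var xml = '<utf8>&#x4E18;</utf8>';
var doc = new DOMParser().parseFromString(xml, 'application/xml');
console.log(doc.documentElement.textContent); // 丘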
First of all, it is a userscript and I can't change the server-side encoding.
My problem is that when using encodeURIComponent() for encoding POST params (later sent via xhr.setRequestHeader), the characters are encoded in UTF-8, but the server needs to receive ISO-8859-1 data. Is there an alternative to encodeURIComponent() that would encode in ISO-8859-1?
To make sure you understand, here is an example:
A classic form on the website sends é like this: yournewmessage:%E9
AJAX via xhr.send('yournewmessage='+encodeURIComponent('é')) sends this: yournewmessage:%C3%A9
The server needs the former. Thanks to anyone who can help me.
So, I've since figured out this problem. What I did was search for an equivalence between UTF-8 and ISO-8859-1; what I found was one between UTF-8 and cp1252 (Windows-1252), so there are two conversions: UTF-8 to cp1252, and cp1252 to ISO-8859-1 (the two encodings having a lot of similarities).
http://pastebin.com/jTDqR2PQ
Ugly code, comments left in French, and an inelegant solution, but I feel bad seeing this question unanswered while I actually found a solution that works.
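For readers who just need the %E9 form, here is a more direct sketch than the pastebin approach (encodeLatin1Component is a made-up name, and it only covers characters that exist in ISO-8859-1):
// Percent-encode a string as ISO-8859-1 rather than UTF-8.
function encodeLatin1Component(str) {
  return Array.prototype.map.call(str, function (ch) {
    var code = ch.charCodeAt(0);
    if (code > 0xFF) throw new Error('Not representable in ISO-8859-1: ' + ch);
    // Keep the characters encodeURIComponent leaves alone, escape everything else as a single byte.
    return /[A-Za-z0-9\-_.~!*'()]/.test(ch)
      ? ch
      : '%' + (code < 16 ? '0' : '') + code.toString(16).toUpperCase();
  }).join('');
}
console.log(encodeLatin1Component('é')); // %E9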
I have a JavaScript script which calls a PHP page to supply an AJAX form with suggestions. The suggestions are returned fine by the PHP page, but for some reason, when I set the responseText of the JavaScript request object as an element in my HTML page, all the special characters (i.e. á or ã) show up as a question mark. Is there a function I must run on the response text of the request to make sure these are read properly?
Thanks.
If you are not serving your HTML pages as UTF-8, the browser will guess an encoding, typically a single-byte Windows codepage depending on the user's locale.
But this doesn't happen for AJAX. With XMLHttpRequest, unless you specifically state an encoding in the Content-Type: ...; charset= parameter, the browser will treat it as UTF-8. That means if you are actually serving Windows code page 1252 (Western European) content, you will get an invalid UTF-8 sequence and consequent question mark.
You don't want to be using a non-UTF-8 encoding! Make sure you are using UTF-8 throughout your application. Serve all your pages with Content-Type: text/html; charset=utf-8, store your data in UTF-8 tables, use mysql_set_charset() to choose UTF-8, etc.
In any case, consider passing AJAX responses as JSON. The function json_encode() will create a JSON string that uses JavaScript escape sequences for non-ASCII characters, which avoids any problem of encoding mismatch. This is also easier to extend with new functionality than returning raw HTML.
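A small client-side sketch of that JSON route (the /suggest.php endpoint and the suggestions element id are made up): since json_encode() escapes non-ASCII characters as \uXXXX by default, the transport charset stops mattering.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/suggest.php?q=ma', true);       // hypothetical endpoint returning json_encode()d data
xhr.onload = function () {
  var suggestions = JSON.parse(xhr.responseText); // e.g. ["maçã", "máquina"]
  document.getElementById('suggestions').textContent = suggestions.join(', ');
};
xhr.send();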
I would try, in your PHP script, encoding everything as HTML entities.
This can easily be tested by doing something like this before returning the results to JavaScript:
$results = htmlentities($htmlstring);
There's also the htmlspecialchars function you might try.
More about this here:
http://php.net/manual/en/function.htmlentities.php