One API that I have to use returns text encoded as ISO-8859-1, so words with Spanish accents come out garbled when I show them to the user (the correct output would be "obtendrá").
How can I transform the text to UTF-8? I need a function in JavaScript or jQuery to perform this conversion.
Thanks
There are at least a couple of modules on npm that let you do encoding conversions: iconv (requires compilation) and iconv-lite (doesn't require compilation).
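If the conversion happens on the Node side (which is where those npm modules apply), a minimal sketch with iconv-lite might look like the following; the buffer variable and function name are made up for illustration:

var iconv = require('iconv-lite');

// rawBuffer holds the API response bytes exactly as received.
// Decoding with 'iso-8859-1' turns them into a proper JavaScript string,
// which can then be rendered or re-encoded as UTF-8 without garbling accents.
function latin1ToString(rawBuffer) {
  return iconv.decode(rawBuffer, 'iso-8859-1');
}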
Finally I achieved what I needed using:
$("<div>").html(theTextinHex).text();
Thanks anyway!
So I'm trying to pass a URL from my backend to my frontend, and it has some escaping going on, such that by the time the string gets to the frontend it looks something like "&quot;www.test.com&quot;".
Is there a way I can unescape this string, or is my best solution here to just manually replace substrings of &quot; with quotes?
Thanks for your help!
'"www.test.com"'.replace(/"/g,'')
output:
"www.test.com"
I'm trying to take a txt file that looks like:
HCI Version: 4.0 (0x6)
HCI Revision: 0x23A1
LMP Version: 4.0 (0x6)
LMP Subversion: 0x4176
I'm trying to put said file into MongoDB in JSON format via Node. The more I dig in, the more I'm losing my mind. I was just introduced to Node streams. Any thoughts would go a long way, and thanks in advance.
You can either use a regex or split each row on the colon.
For the first one, your regular expression could look something like:
/HCI Version:.+?(?=\))+\)/g
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/split
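As a rough sketch of the split-on-colon approach (the file name, the output shape, and the MongoDB call in the comment are assumptions, not part of your spec):

var fs = require('fs');

// Read the whole file at once; for very large files a readline/stream approach is better.
var lines = fs.readFileSync('hci.txt', 'utf8').split(/\r?\n/);

var record = {};
lines.forEach(function (line) {
  var idx = line.indexOf(':');
  if (idx === -1) return;                  // skip lines without a colon
  var key = line.slice(0, idx).trim();     // e.g. "HCI Version"
  var value = line.slice(idx + 1).trim();  // e.g. "4.0 (0x6)"
  record[key] = value;
});

console.log(JSON.stringify(record, null, 2));
// { "HCI Version": "4.0 (0x6)", "HCI Revision": "0x23A1", ... }
// The resulting object can then be inserted with the MongoDB Node driver,
// e.g. collection.insertOne(record)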
I am having problems with UTF-8 characters while using useSanitizeValueStrategy('sanitize'). I have to use it since the client edits texts through the language files and might use tags like <b> or <i>...
I need to use only the JSON file for translations. The client won't touch the app code to change any text. JSON file:
{
  "H1": "Typy domů",
  "NAME": "Křestní"
}
The problem, though, only occurs when using Angular's interpolation:
<h1 translate>houseTypes.H1</h1>
Typy domů
I can use this method to put text inside an element's body, but the problem still occurs for attributes.
<input placeholder="{{ 'houseTypes.NAME' | translate }}">
K&#345;estn&#237;
The question is:
How can I get UTF-8 characters to display correctly while using only the static JSON file loader, in interpolations or any other way in attributes such as placeholder?
For anyone struggling to find a way to make UTF-8 characters display normally even in {{interpolations}}, this is the way to do it:
$translateProvider.useSanitizeValueStrategy('sanitizeParameters');
This way sanitization still always takes place, even in interpolations.
You have two options:
Use the strategy sanitizeParameters, which sanitizes only the dynamic parameters, not the actual translation (template). If you have the translation under control (but not the dynamic values), this will work.
Use the strategy escape (or escapeParameters), which escapes the output instead of sanitizing it. (A configuration sketch for both options follows below.)
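A minimal configuration sketch for both strategies, assuming a standard angular-translate setup; the app module name myApp is a placeholder:

angular.module('myApp', ['pascalprecht.translate'])
  .config(['$translateProvider', function ($translateProvider) {
    // Option 1: sanitize only the dynamic parameters, trust the translation templates.
    $translateProvider.useSanitizeValueStrategy('sanitizeParameters');

    // Option 2 (alternative): escape instead of sanitizing.
    // $translateProvider.useSanitizeValueStrategy('escape');
  }]);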
For me, the problem was solved when I set the sanitize value strategy to null:
$translateProvider.useSanitizeValueStrategy(null);
I'm trying to write a javascript regexp for a blacklist of file extensions. I'm using a jquery plugin that has an option for acceptable file types that takes a regex, but rather than maintain a whitelist we would like to maintain a blacklist. So I need the regex to only match if the string doesn't contain certain file extensions. Currently there is a regex we use to whitelist photo extensions on our photo upload:
/(\.|\/)(gif|jpe?g|png)$/i
For our document upload we would like to simply maintain a blacklist, but I haven't been able to make the ?! negative lookahead work. So, for the sake of an example, how would I reverse this regex so it matches as long as the file extension isn't gif, jpg, jpeg, or png?
I've tried several different ways of using ?!, but nothing I've tried has worked properly. Here are some examples of what I have tried unsuccessfully:
/(\.|\/)(?!gif|jpe?g|png)$/i
/(\.|\/)(?!(gif|jpe?g|png))$/i
Essentially I need this regex to always return true unless the blacklisted file extensions are matched.
This works:
/\.(?!(?:exe|js|htaccess)$)|\/(?!(?:exe|js|htaccess)$)/i
I think it's because the "match only if not followed by" doesn't like parentheses before it, but I'm not sure.
This new regex now works with the parentheses and multiple extensions:
/^.*(\.|\/)(?!(exe|js|htaccess)$)(?![^\.\/]*(\.|\/))/i
However, to allow files with no extension, this must be used:
/(^.*(\.|\/)(?!(exe|js|htaccess)$)(?![^\.\/]*(\.|\/)))|(^[^\.]+$)/i
// The final (^[^\.]+$) alternative is the part that matches a name without any dots
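For what it's worth, a quick sanity check of that last regex (the one that also allows files with no extension) against a few made-up filenames, runnable in a browser console or Node:

var blacklist = /(^.*(\.|\/)(?!(exe|js|htaccess)$)(?![^\.\/]*(\.|\/)))|(^[^\.]+$)/i;

console.log(blacklist.test('photo.png'));   // true  -> allowed
console.log(blacklist.test('malware.exe')); // false -> blocked
console.log(blacklist.test('README'));      // true  -> no extension, allowed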
Looking at the complexity of that regex, I suggest that you implement something server-side or use another plugin.
Hack the jQuery plugin to accept a callback function instead of (or in addition to) a regular expression. Use the negation operator (!) and the positive regular expression to supply an appropriate callback. And write a mail to the maintainer of the plugin asking him to accept your patch.
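A rough sketch of that callback idea; the plugin name and the acceptFile option are hypothetical and depend on how you patch the plugin:

// Keep the simple positive blacklist regex and negate it in a predicate.
function isAllowed(filename) {
  return !/\.(exe|js|htaccess)$/i.test(filename);
}

// Hypothetical usage once the plugin accepts a callback instead of a regex:
$('#uploader').myUploadPlugin({ acceptFile: isAllowed });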
I'm not a regex expert, but something like this appears to work: /\.(exe|js|htaccess)$/i.test(filename) returns true when the file is on your blacklist.
var shouldBlockUpload = /\.(exe|js|htaccess)$/i.test(filename);
// Inform the user that the upload is not allowed
You can also explicitly only allow certain filetypes through the accept attribute on file inputs to help hint users to the right ones.
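For example, with jQuery (the selector and extension list here are made up; the accept attribute is only a hint to the browser's file picker, not a real security check):

// Hint the native file dialog toward the types you expect.
$('#document-upload').attr('accept', '.pdf,.doc,.docx,.txt');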
I would like to change string encoding from UTF-8 to ISO-8859-2 in Javascript. How can I do it?
I need it because I've designed a widget. A user just copies a <script> tag from my site and puts it on his own site. The script creates a div and fills it with the widget contents and text.
If the target website uses UTF-8 encoding, it works fine. But when it uses ISO-8859-2, text encoded in UTF-8 is displayed on an ISO-8859-2 page, and as a result I see garbage.
Instead of using e.g. "ĉ" in your JavaScript code, use Unicode escapes such as "\u0109".
If you're in control of the output, you can replace all special characters with unicode escapes (e.g. \u00e4 for ä). The browser can interpret it correctly regardless of document encoding.
The easiest way to do this would be to put the string into a JSON encoder. Both PHP's and Ruby's do that. I don't know about other implementations, though.
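To illustrate what that escaping does, here is a small generic helper (this would run wherever the widget script is generated, not in the visitor's browser):

// Replace every non-ASCII character with its \uXXXX escape so the script
// stays plain ASCII and survives any document encoding (UTF-8, ISO-8859-2, ...).
function toUnicodeEscapes(str) {
  return str.replace(/[\u0080-\uFFFF]/g, function (ch) {
    return '\\u' + ('0000' + ch.charCodeAt(0).toString(16)).slice(-4);
  });
}

toUnicodeEscapes('ĉ');  // "\u0109"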
Another solution that might work is to add charset="utf-8" to the <script> tag.
I suppose you just need to convert your widget from UTF-8 to ISO-8859-2 and provide two versions of the script.