how to compare two strings by meaning? - javascript

I want the user of my node.js application to write down ideas, which then get stored in a database.
So far so good, but I don't want redundant entries in that table, so I decided to check for similarity using this package:
https://www.npmjs.com/package/string-similarity-js
Do you know a way in which I can compare two strings by meaning? For example, getting a high similarity score for "using public transport" vs. "driving by train", which scores very poorly with the package above.

To compare two strings by meaning, the strings first need to be converted to tensors; the distance or similarity between the tensors can then be evaluated. Many algorithms can be used to convert strings to tensors, each tied to a particular domain of interest. But the Universal Sentence Encoder is a broad sentence encoder that projects all words into one embedding space. Cosine similarity can then be used to see how close two words are in meaning.
Example
Though king and kind are close in Hamming distance (they differ by only one character), they are very different in meaning. Whereas queen and king, though they seem unrelated (all their characters differ), are close in meaning. Therefore the distance (in meaning) between king and queen should be smaller than between king and kind, as demonstrated in the following snippet.
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/universal-sentence-encoder"></script>
<script>
  (async () => {
    const model = await use.load();
    const embeddings = (await model.embed(['queen', 'king', 'kind'])).unstack();
    tf.losses.cosineDistance(embeddings[0], embeddings[1], 0).print(); // 0.39812755584716797
    tf.losses.cosineDistance(embeddings[1], embeddings[2], 0).print(); // 0.5585797429084778
  })();
</script>
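Since your application runs in Node.js rather than a browser, the same model is also published on npm. Here is a minimal sketch, assuming the packages @tensorflow/tfjs-node and @tensorflow-models/universal-sentence-encoder are installed:
const tf = require('@tensorflow/tfjs-node');
const use = require('@tensorflow-models/universal-sentence-encoder');

(async () => {
  const model = await use.load();
  const embeddings = (await model.embed([
    'using public transport',
    'driving by train'
  ])).unstack();
  // lower cosine distance = closer in meaning
  tf.losses.cosineDistance(embeddings[0], embeddings[1], 0).print();
})();
You could store each idea's embedding alongside its database row and reject an insert whenever its distance to an existing entry falls below a threshold you tune by hand.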

Comparing the meaning of two strings is still ongoing research. If you really want to solve the problem (or to get really good performance out of your language model), you should consider getting a PhD.
For an out-of-the-box solution at the time of writing: I found this GitHub repo that implements Google's BERT model and uses it to get the embeddings of two sentences. In theory, two sentences share the same meaning if their embeddings are similar.
https://github.com/UKPLab/sentence-transformers
# the following is simplified from their README.md
from sentence_transformers import SentenceTransformer
import scipy.spatial

embedder = SentenceTransformer('bert-base-nli-mean-tokens')

# Corpus with example sentences
S1 = ['A man is eating a food.']
S2 = ['A man is eating pasta.']

s1_embedding = embedder.encode(S1)
s2_embedding = embedder.encode(S2)

# encode() returns a list of vectors, so the results can be passed to cdist directly
dist = scipy.spatial.distance.cdist(s1_embedding, s2_embedding, "cosine")[0]
Example output (copied from their README.md)
Query: A man is eating pasta.
Top 5 most similar sentences in corpus:
A man is eating a piece of bread. (Score: 0.8518)
A man is eating a food. (Score: 0.8020)
A monkey is playing drums. (Score: 0.4167)
A man is riding a horse. (Score: 0.2621)
A man is riding a white horse on an enclosed ground. (Score: 0.2379)

Related

Replace words in a paragraph using Javascript

I have a paragraph of text. I want to replace some words in it using a wildcard.
Below is my paragraph.
0.7% lower on the prospect of fresh restrictions that would deal a blow to hopes of a swift economic
recovery. <Origin Href=\"StoryRef\">urn:newsml:reuters.com:*:nL4N2EC04Z</Origin>\n The 2,000-plus cases reported on Sunday was a shocker ,said Nicholas Mapa, ING
In this paragraph, I want to remove <Origin Href=\"StoryRef\">urn:newsml:reuters.com:*:nL4N2EC04Z</Origin>\n
There are multiple paragraphs, but the only part that varies is nL4N2EC04Z.
All other words are common across those paragraphs.
<Origin Href=\"StoryRef\">urn:newsml:reuters.com:*:(need_to_use_wild_card_here)</Origin>\n
I tried to replace one half.
My code
storyRef="<Origin Href=\"StoryRef\">urn:newsml:reuters.com:*:";
storyRef.replace(storyRef," ")
But I am stuck on replacing the other parts.
It seems to be an encoding problem. Try using JSON.stringify to ensure that characters like < and some others are not read decoded.
You can also improve the regex to something like this 👇
storyRef.replace(/<Origin.*Origin>\\n/gm, ' ');
This pattern starts at <Origin and matches all the content up to and including Origin>\n.
const p = JSON.stringify('0.7% lower on the prospect of fresh restrictions that would deal a blow to hopes of a swift economic recovery. <Origin Href=\"StoryRef\">urn:newsml:reuters.com:*:nL4N2EC04Z</Origin>\n The 2,000-plus cases reported on Sunday was a shocker ,said Nicholas Mapa, ING');
const storyRef = JSON.parse(p.replace(/(<Origin.*Origin>\\n)+/gm, ' '));
console.log(storyRef);
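Note that .* is greedy: if a paragraph ever contained two <Origin>...</Origin> blocks on one line, the pattern above would swallow everything between the first opening tag and the last closing one. A lazy quantifier is a safer variant:
// .*? stops at the first Origin>\n instead of the last one
const cleaned = JSON.parse(p.replace(/(<Origin.*?Origin>\\n)+/gm, ' '));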

Matching multiple quotes in a sentence

I am trying to match multiple quotes inside of a single sentence, for example the line:
Hello "this" is a "test" example.
This is the regex that I am using, but I am having some problems with it:
/[^\.\?\!\'\"]{1,}[\"\'\“][^\"\'\“\”]{1,}[\"\'\“\”][^\.\?\!]{1,}[\.\?\!]/g
What I am trying to achieve with this regex is to find everything from the start of the last sentence until I hit a quote, then find the closing quote and continue until a ., ?, or !.
The sample text that I am using to test with is from Call of Cthulhu:
What seemed to be the main document was headed “CTHULHU CULT” in characters painstakingly printed to avoid the erroneous reading of a word so unheard-of. The manuscript was divided into two sections, the first of which was headed “1925—Dream and Dream Work of H. A. Wilcox, 7 Thomas St., Providence, R.I.”, and the second, “Narrative of Inspector John R. Legrasse, 121 Bienville St., New Orleans, La., at 1908 A. A. S. Mtg.—Notes on Same, & Prof. Webb’s Acct.” The other manuscript papers were all brief notes, some of them accounts of the queer dreams of different persons, some of them citations from theosophical books and magazines.
The issue comes on the line The manuscript was.... Does anyone know how to account for repeats like this? Or is there a better way?
This one ignores [.?!] inside quotes. But cases like Acct.” The other… will be treated as a single sentence here; probably a . is missing over there.
var r = 'What seemed to be the main document was headed “CTHULHU.?! CULT” in characters painstakingly printed to avoid the erroneous reading of a word so unheard-of. The manuscript was divided into two sections, the first of which was headed “1925—Dream and Dream Work of H. A. Wilcox, 7 Thomas St., Providence, R.I.”, and the second, “Narrative of Inspector John R. Legrasse, 121 Bienville St., New Orleans, La., at 1908 A. A. S. Mtg.—Notes on Same, & Prof. Webb’s Acct.” The other manuscript papers were all brief notes, some of them accounts of the queer dreams of different persons, some of them citations from theosophical books and magazines.'
  .split(/[“”]/g)
  .map((x, i) => (i % 2) ? x.replace(/[.?!]/g, '') : x)
  .join("'")
  .split(/[.?!]/g)
  .filter(x => x.trim())
  .map(x => ({
    sentence: x,
    quotescount: x.split("'").length - 1
  }));
console.log(r);
You can use this naive pattern:
/[^"'“.!?]*(?:"[^"*]"[^"'“.!?]*|'[^']*'[^"'“.!?]*|“[^”]*”[^"'“.!?]*)*[.!?]/
details:
/
[^"'“.!?]* # all that isn't a quote or a punct that ends the sentence
(?:
"[^"*]" [^"'“.!?]*
|
'[^']*' [^"'“.!?]*
|
“[^”]*” [^"'“.!?]*
)*
[.!?]
/
If you want something more strong, you can emulate the "atomic grouping" feature, in particular if you are not sure that each opening quote has a closing quote (to prevent catastrophic backtracking):
/(?=([^"'“.!?]*))\1(?:"(?=([^"*]))\2"[^"'“.!?]*|'(?=([^']*))\3'[^"'“.!?]*|“(?=([^”]*))\4”[^"'“.!?]*)*[.!?]/
An atomic group forbids backtracking once it is closed. Unfortunately this feature doesn't exist in JavaScript, but there's a way to emulate it using a lookahead (which is naturally atomic), a capture group, and a backreference:
(?>expr) => (?=(expr))\1
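Here is the trick in isolation (a sketch to illustrate the point, not taken from the patterns above): the classic catastrophic pattern /^(?:a+)+$/ hangs on a long non-matching input, while the emulated atomic version fails fast, because once the lookahead has captured a run of a's the backreference cannot give any of them back:
const s = 'a'.repeat(30) + 'X';
// /^(?:a+)+$/.test(s);                    // catastrophic backtracking — may hang the engine
console.log(/^(?:(?=(a+))\1)+$/.test(s));  // false, returns immediately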

How to detect and remove unwanted lines from a string?

I am working on a project in which I have to extract text data from a PDF.
I am able to extract text from the PDF, but the extracted text sometimes contains lines which I would like to strip off.
Here's an example of unwanted lines -
ISBN 0-7225-3293-8. = CONTENTS = Part One Part Two Epilogue
Page 1 / 94
And here's an example of a good line (which I'd like to keep) -
Dusk was falling as the boy arrived with his herd at an abandoned church.
I wanted to sleep a little longer, he thought. He had had the same dream that night as a week ago
Different PDFs can produce different unwanted lines.
How can I detect them?
Option 1 - Give the computer a rule: If you are able to narrow down what content you would like to keep, the obvious criterion that sticks out to me is the exclusion of special characters; you can then filter your results based on this.
So let's say you agree that all "good lines" will be without special characters ('/', '-', and '='), for example; if a line DOES contain one of these items, you know you can remove it from the content you are keeping. This could be done in a for loop containing an if-then condition that looks something like this:
var lineArray = text.split('\n'); // assuming the extracted text is in `text`, one array element per line
for (var cnt = 0; cnt < lineArray.length; cnt++) {
  var line = lineArray[cnt];
  if (line.includes("/") || line.includes("-") || line.includes("=")) {
    lineArray[cnt] = "";
  }
}
At the end of this code you could simply get all the text within the array and it would no longer contain the unwanted lines. If there are unwanted lines, however, that are virtually indistinguishable by characters, length, positioning, etc., this approach begins to break down on the trickier lines.
This is because there is no rule you can give the computer to distinguish between the good and the bad without giving it a brain like yours that recognizes parts of speech and sentence structure. In that case you might consider option 2, which is just that.
Option 2 - Give the computer a brain: Given that the text you want to remove will more or less be incoherent documentation, based on what you have shown us, an open-source (or purchased) natural language processor may be what you are looking for.
I found a good beginner's intro at http://myreaders.info/10_Natural_Language_Processing.pdf with some information that might be of use to you. From the source,
"Linguistics is the science of language. Its study includes:
sounds (phonology),
word formation (morphology),
sentence structure (syntax),
meaning (semantics), and understanding (pragmatics) etc.
Syntactic Analysis : Here the analysis is of words in a sentence to know the grammatical structure of the sentence. The words are transformed into structures that show how the words relate to each others. Some word sequences may be rejected if they violate the rules of the language for how words may be combined. Example: An English syntactic analyzer would reject the sentence say : 'Boy the go the to store.' "
Using some sort of NLP, you can discover whether a given section of text contains a sentence or incoherent rambling. This test can then be used as a filter in your program for what to keep or remove.
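If a full NLP stack is more than the project needs, a crude heuristic can stand in for it. A sketch under the assumption that the good lines are English prose; looksLikeSentence is a hypothetical helper, not a real parser:
// Treat a line as a "sentence" if it starts with a capital letter or an opening
// quote and ends with sentence punctuation, and reject lines carrying known
// junk markers from the PDF front matter.
function looksLikeSentence(line) {
  return /^[A-Z"“].*[.!?"”]$/.test(line.trim()) &&
         !/[=\/]|ISBN|Page \d+/.test(line);
}

looksLikeSentence('Dusk was falling as the boy arrived with his herd at an abandoned church.'); // true
looksLikeSentence('ISBN 0-7225-3293-8. = CONTENTS = Part One Part Two Epilogue');               // false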
Side note - As your sample text is not just sentences but literature, characters will sometimes speak in sentence fragments as part of the nature the author gives them. In this case, you could add a separate condition: if the text is contained within two quotation marks and has no special characters, keep it regardless.
In the end, NLP may be more work than you require or want to do, in which case Option 1 is likely your best bet. On the other hand, it may be just the thing you are looking for. Whatever the case, or if you decide you need some combination of the two, best of luck! I hope this answer helps.

Encrypted text like English text

Is there a method to encrypt text so that the output is common English/Spanish text or the like, and to be able to decrypt it too?
I tried the Caesar cipher
http://en.wikipedia.org/wiki/Caesar_cipher
Plaintext: THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
Ciphertext: QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD
but I'd like the output for example:
Plaintext: THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
Ciphertext: RADIO LIBRARY MAKE TABLE TIME ON KITCHEN DAY OF
Here's a possible solution. There may be performance issues with having the English or Spanish dictionary in an array, but you may just need common words.
// Fisher-Yates shuffle, returning a shuffled copy of the array
function randomizeArray(arr) {
  var copy = arr.slice();
  for (var i = copy.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = copy[i]; copy[i] = copy[j]; copy[j] = tmp;
  }
  return copy;
}

function wordSwap(str) {
  var dictionary = ['a', 'the', 'brown', 'fox', 'over' /* , ... */];
  var swapDictionary = randomizeArray(dictionary);
  var newStr = "";
  str.split(' ').forEach(function (s) {
    var idx = dictionary.indexOf(s);
    newStr += swapDictionary[idx] + " ";
  });
  return newStr.trim();
}
Sure, this is possible with a specifically crafted one-time pad. XOR the plaintext and the target ciphertext and you get the key, with key.length = max(pt.length, ct.length). This obviously works only for one PT/CT pair.
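A quick sketch of that trick in JavaScript (character-wise XOR for illustration; a real pad would work on bytes):
function xorKey(pt, ct) {
  var len = Math.max(pt.length, ct.length);
  var key = [];
  for (var i = 0; i < len; i++) {
    key.push((pt.charCodeAt(i) || 0) ^ (ct.charCodeAt(i) || 0));
  }
  return key;
}

var pt = 'THE QUICK BROWN FOX';
var ct = 'RADIO LIBRARY MAKES'; // the ciphertext you *want* to see
var key = xorKey(pt, ct);
// "decrypting" the ciphertext with the key recovers the plaintext
var recovered = key.map(function (k, i) {
  return String.fromCharCode((ct.charCodeAt(i) || 0) ^ k);
}).join('');
console.log(recovered); // THE QUICK BROWN FOX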
Jack's answer is pretty straightforward and matches your Caesar cipher well, but it's not very secure. It's just a substitution cipher with a much bigger "alphabet". Like your Caesar cipher, that means it can be broken using frequency analysis. The words THE and AND are pretty common in English. ÉL and LA are extremely common in Spanish. So I look for "words" that show up very often in the ciphertext and assume that they map to common words in my target language. I continue making guesses based on frequency and context until I work out portions of the message (or even the whole message). If I know this is probably about poodles, and I see that SUNRISE shows up often in the message, maybe I assume that SUNRISE is a poodle and work from there.
I like it for being simple, but I don't like it so much if I want security.
We could devise a format-preserving encryption scheme, which is kind of what you want here, but I'm not familiar with one that is designed to work on such a large domain (it's an area you could investigate, though, or ask about on http://crypto.stackexchange.com, which would be a better place for this question). The advantage of format-preserving encryption is that the resulting message is the same size as the original message.
But here's another solution that we could use, which is kind of a base-N encoding, where N is the size of our dictionary.
Start with an ordered dictionary and your plaintext. Look up each word in your dictionary and note the index. Use those indexes to create a new message where the word size is based on the number of elements in your dictionary. For simplicity, you could round this up to 64 bits per term, but you could also make each term any arbitrary number of bits if you're willing to do more bit math and let data spill across byte boundaries. Encrypt that message however you like (e.g. AES).
Now we need to encode that back into words. For values less than N-1, we just select that word out of the dictionary. For numbers equal to N-1 or greater, you can use the last word in the dictionary as a marker and then add the next word to it. So say we had a 1000-word dictionary (0..999) from A to ZYRIAN. We could encode 999 as ZYRIAN A and 1000 as ZYRIAN AARDVARK. If we needed to encode larger numbers we can chain: for example, ZYRIAN ZYRIAN A is 1998. Of course you'll get better output sizes if you again let data split across byte boundaries so that no value is greater than 2*N.
The key here is that we've broken the problem into two problems: a transcoder that allows us convert between arbitrary words and numbers, and encryption, which we can do using any standard encryption scheme.
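A minimal sketch of the transcoder half (the middle encryption step, e.g. AES, is omitted; the tiny dictionary is a placeholder for a real ordered word list):
var dictionary = ['a', 'aardvark', 'about', /* ... */ 'zyrian']; // ordered, size N

// numbers -> words, chaining the last word as an overflow marker as described:
// any value v >= N-1 becomes one or more markers followed by the remainder.
function numbersToWords(nums) {
  var N = dictionary.length;
  var out = [];
  nums.forEach(function (v) {
    while (v >= N - 1) {
      out.push(dictionary[N - 1]);
      v -= N - 1;
    }
    out.push(dictionary[v]);
  });
  return out.join(' ');
}

// the inverse walks the words, adding N-1 for each marker it meets
function wordsToNumbers(words) {
  var N = dictionary.length;
  var nums = [], acc = 0;
  words.split(' ').forEach(function (w) {
    var idx = dictionary.indexOf(w);
    if (idx === N - 1) {
      acc += N - 1;          // marker: keep accumulating
    } else {
      nums.push(acc + idx);  // a regular word closes the value
      acc = 0;
    }
  });
  return nums;
}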

How can Z͎̠͗ͣḁ̵͙̑l͖͙̫̲̉̃ͦ̾͊ͬ̀g͔̤̞͓̐̓̒̽o͓̳͇̔ͥ text be prevented?

I've read about how Zalgo text works, and I'm looking to learn how chat or forum software could prevent that kind of annoyance. More precisely, what is the complete set of Unicode combining characters that needs to:
a) either be stripped, assuming chat participants are to use only languages that don't require combining marks (i.e. you could write "fiancé" with a combining mark, but you'd be a bit Zalgo'ed yourself if you insisted on doing so); or,
b) be reduced to a maximum of 8 consecutive characters (the maximum encountered in actual languages)?
EDIT: In the meantime I found a completely differently phrased question ("How to protect against... diacritics?"), which is essentially the same as this one. I made its title more explicit so others will find it as well.
Assuming you're very serious about this and want a technical solution you could do as follows:
Split the incoming text into smaller units (words or sentences);
Render each unit on the server with your font of choice (with a huge line height and lots of space below the baseline where the Zalgo "noise" would go);
Train a machine learning algorithm to judge if it looks too "dark" and "busy";
If the algorithm's confidence is low, defer to human moderators.
This could be fun to implement but in practice it would likely be better to go to step four straight away.
Edit: Here's a more practical, if blunt, solution in Python 2.7. Unicode characters classified as "Mark, nonspacing" and "Mark, enclosing" appear to be the main tools used to create the Zalgo effect. Unlike the above idea this won't try to determine the "aesthetics" of the text but will instead simply remove all such characters. (Needless to say, this will trash text in many, many languages. Read on for a better solution.) To filter out more character categories add them to ZALGO_CHAR_CATEGORIES.
#!/usr/bin/env python
import unicodedata
import codecs

ZALGO_CHAR_CATEGORIES = ['Mn', 'Me']

with codecs.open("zalgo", 'r', 'utf-8') as infile:
    for line in infile:
        print ''.join([c for c in unicodedata.normalize('NFD', line) if unicodedata.category(c) not in ZALGO_CHAR_CATEGORIES]),
Example input:
1
H̡̫̤ͭ̓̓̇͗̎̀ơ̯̗͒̄̀̈ͤ̀͡w͓̲͙͋ͬ̊ͦ̂̀̚ ͎͉͖̌ͯͅͅd̳̘̿̃̔̏ͣ͂̉̕ŏ̖̙͋ͤ̊͗̓͟͜e͈͕̯̮͌ͭ̍̐̃͒s͙͔̺͇̗̱̿̊̇͞ ̸ͩͩ͑̋̀ͮͥͦ̊Z̆̊͊҉҉̠̱̦̩͕ą̟̹͈̺̹̋̅ͯĺ̡̘̹̻̩̩͋͘g̪͚͗ͬ͒o̢̖͇̬͍͇̔͋͊̓ ̢͈͂ͣ̏̿͐͂ͯ͠t̛͓̖̻̲ͤ̈ͣ͝e͋̄ͬ̽͜҉͚̭͇ͅx̌ͤ̓̂̓͐͐́͋͡ț̗̹̄̌̀ͧͩ̕͢ ̮̗̩̳̱̾w͎̭̤̄͗ͭ̃͗ͮ̐o̢̯̻̾ͣͬ̽̔̍͟r̢̪͙͍̠̀ͅǩ̵̶̗̮̮ͪ́?̙͉̥̬ͤ̌͗ͩ̕͡
2
H̡̫̤ͭ̓̓̇͗̎̀ơ̯̗͒̄̀̈ͤ̀͡w͓̲͙͋ͬ̊ͦ̂̀̚ ͎͉͖̌ͯͅͅd̳̘̿̃̔̏ͣ͂̉̕ŏ̖̙͋ͤ̊͗̓͟͜e͈͕̯̮͌ͭ̍̐̃͒s͙͔̺͇̗̱̿̊̇͞ ̸ͩͩ͑̋̀ͮͥͦ̊Z̆̊͊҉҉̠̱̦̩͕ą̟̹͈̺̹̋̅ͯĺ̡̘̹̻̩̩͋͘g̪͚͗ͬ͒o̢̖͇̬͍͇̔͋͊̓ ̢͈͂ͣ̏̿͐͂ͯ͠t̛͓̖̻̲ͤ̈ͣ͝e͋̄ͬ̽͜҉͚̭͇ͅx̌ͤ̓̂̓͐͐́͋͡ț̗̹̄̌̀ͧͩ̕͢ ̮̗̩̳̱̾w͎̭̤̄͗ͭ̃͗ͮ̐o̢̯̻̾ͣͬ̽̔̍͟r̢̪͙͍̠̀ͅǩ̵̶̗̮̮ͪ́?̙͉̥̬ͤ̌͗ͩ̕͡
3
Output:
1
How does Zalgo text work?
2
How does Zalgo text work?
3
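Since the question is about chat or forum software, the same filter can also be written as a JavaScript sketch (assuming an engine with Unicode property escapes, ES2018+):
// decompose, then drop nonspacing (Mn) and enclosing (Me) marks
function stripZalgo(s) {
  return s.normalize('NFD').replace(/[\p{Mn}\p{Me}]/gu, '');
}
stripZalgo('fiancé'); // 'fiance' — the same caveat applies: legitimate diacritics are lost too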
Finally, if you're looking to detect, rather than unconditionally remove, Zalgo text, you could perform character frequency analysis. The program below does that for each line of the input file. The function is_zalgo calculates a "Zalgo score" for each word of the string it is given (the score is the number of potential Zalgo characters divided by the total number of characters). It then checks whether the third quartile of the words' scores is greater than THRESHOLD. If THRESHOLD equals 0.5, it means we're trying to detect whether one out of every four words has more than 50% Zalgo characters. (The THRESHOLD of 0.5 was guessed and may require adjustment for real-world use.) This type of algorithm is probably the best in terms of payoff/coding effort.
#!/usr/bin/env python
from __future__ import division
import unicodedata
import codecs
import numpy

ZALGO_CHAR_CATEGORIES = ['Mn', 'Me']
THRESHOLD = 0.5
DEBUG = True

def is_zalgo(s):
    if len(s) == 0:
        return False
    word_scores = []
    for word in s.split():
        cats = [unicodedata.category(c) for c in word]
        score = sum([cats.count(banned) for banned in ZALGO_CHAR_CATEGORIES]) / len(word)
        word_scores.append(score)
    total_score = numpy.percentile(word_scores, 75)
    if DEBUG:
        print total_score
    return total_score > THRESHOLD

with codecs.open("zalgo", 'r', 'utf-8') as infile:
    for line in infile:
        print is_zalgo(unicodedata.normalize('NFD', line)), "\t", line
Sample output:
0.911483990148
True Señor, could you or your fiancé explain, H̡̫̤ͭ̓̓̇͗̎̀ơ̯̗͒̄̀̈ͤ̀͡w͓̲͙͋ͬ̊ͦ̂̀̚ ͎͉͖̌ͯͅͅd̳̘̿̃̔̏ͣ͂̉̕ŏ̖̙͋ͤ̊͗̓͟͜e͈͕̯̮͌ͭ̍̐̃͒s͙͔̺͇̗̱̿̊̇͞ ̸ͩͩ͑̋̀ͮͥͦ̊Z̆̊͊҉҉̠̱̦̩͕ą̟̹͈̺̹̋̅ͯĺ̡̘̹̻̩̩͋͘g̪͚͗ͬ͒o̢̖͇̬͍͇̔͋͊̓ ̢͈͂ͣ̏̿͐͂ͯ͠t̛͓̖̻̲ͤ̈ͣ͝e͋̄ͬ̽͜҉͚̭͇ͅx̌ͤ̓̂̓͐͐́͋͡ț̗̹̄̌̀ͧͩ̕͢ ̮̗̩̳̱̾w͎̭̤̄͗ͭ̃͗ͮ̐o̢̯̻̾ͣͬ̽̔̍͟r̢̪͙͍̠̀ͅǩ̵̶̗̮̮ͪ́?̙͉̥̬ͤ̌͗ͩ̕͡
0.333333333333
False Příliš žluťoučký kůň úpěl ďábelské ódy.
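The same scoring idea also ports to JavaScript if your moderation pipeline runs there (a sketch under the same assumptions as above: Mn/Me marks carry the effect, and THRESHOLD = 0.5 may need tuning):
var THRESHOLD = 0.5;

function zalgoScore(word) {
  var marks = (word.match(/[\p{Mn}\p{Me}]/gu) || []).length;
  return marks / word.length; // length counts UTF-16 code units — close enough for scoring
}

function isZalgo(s) {
  var scores = s.normalize('NFD').split(/\s+/).filter(Boolean).map(zalgoScore)
    .sort(function (a, b) { return a - b; });
  if (scores.length === 0) return false;
  // rough third quartile of the per-word scores
  var q3 = scores[Math.min(scores.length - 1, Math.floor(scores.length * 0.75))];
  return q3 > THRESHOLD;
}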
Make the box overflow:hidden. It doesn't actually disable Zalgo text, but it prevents it from damaging other comments.
.comment {
  /* the overflow: hidden is what prevents one comment's combining marks from affecting its siblings */
  overflow: hidden;
  /* the padding gives space for any legitimate combining marks */
  padding: 0.5em;
  /* the rest are just to visually divide the three comments */
  border: solid 1px #ccc;
  margin-top: -1px;
  margin-bottom: -1px;
}
<div class=comment>The below comment looks awful.</div>
<div class=comment>H̡̫̤ͭ̓̓̇͗̎̀ơ̯̗͒̄̀̈ͤ̀͡w͓̲͙͋ͬ̊ͦ̂̀̚ ͎͉͖̌ͯͅͅd̳̘̿̃̔̏ͣ͂̉̕ŏ̖̙͋ͤ̊͗̓͟͜e͈͕̯̮͌ͭ̍̐̃͒s͙͔̺͇̗̱̿̊̇͞ ̸ͩͩ͑̋̀ͮͥͦ̊Z̆̊͊҉҉̠̱̦̩͕ą̟̹͈̺̹̋̅ͯĺ̡̘̹̻̩̩͋͘g̪͚͗ͬ͒o̢̖͇̬͍͇̔͋͊̓ ̢͈͂ͣ̏̿͐͂ͯ͠t̛͓̖̻̲ͤ̈ͣ͝e͋̄ͬ̽͜҉͚̭͇ͅx̌ͤ̓̂̓͐͐́͋͡ț̗̹̄̌̀ͧͩ̕͢ ̮̗̩̳̱̾w͎̭̤̄͗ͭ̃͗ͮ̐o̢̯̻̾ͣͬ̽̔̍͟r̢̪͙͍̠̀ͅǩ̵̶̗̮̮ͪ́?̙͉̥̬ͤ̌͗ͩ̕͡</div>
<div class=comment>The above comment looks awful.</div>
A related question was asked before: https://stackoverflow.com/questions/5073191/how-is-zalgo-text-implemented but it's interesting to go into prevention here.
In terms of preventing this you can choose several strategies:
prevent combining diacritics entirely (and piss off many international users),
filter out combining characters using whitelisting or blacklisting (and piss off a smaller percentage of international users),
prevent more than a certain number of consecutive combining characters (and piss off an even smaller percentage of users),
have a healthy moderator community (with all the downsides that has; see your question as an example here).
You can get rid of Zalgo text in your application using strip-combining-marks by Mathias Bynens.
The module strip-combining-marks is available for browsers (via Bower) and Node.js applications (via npm).
Here is an example of how to use it with npm:
var stripCombiningMarks = require("strip-combining-marks");
var zalgoText = 'U̼̥̻̮͍͖n͠i͏c̯̮o̬̝̠͉̤d͖͟e̫̟̗͟ͅ';
var strippedText = stripCombiningMarks(zalgoText); // "Unicode"
Using PHP and the mindset of a demolition worker, you can get rid of the Zalgo with the iconv function. Of course, that will also kill any other UTF-8 chars that don't fit into ISO-8859-1.
$unZalgoText = iconv("UTF-8", "ISO-8859-1//IGNORE", $zalgoText);
