I am trying to clean text in the exact way that Firefox does before spell checking individual words for a Firefox extension I'm building (my addon uses nspell, a JavaScript implementation of Hunspell, since Firefox doesn't expose the Hunspell instance it uses via the extension API).
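For context, the word-level check itself is not the problem; nspell handles that once I have clean words. Roughly (a minimal sketch — the dictionary file names are placeholders and how the real addon loads its dictionaries differs):

const fs = require('fs')
const nspell = require('nspell')

// placeholder dictionary files; nspell accepts the raw .aff/.dic contents
const spell = nspell(fs.readFileSync('en-US.aff'), fs.readFileSync('en-US.dic'))

spell.correct('colour')  // e.g. false against an en-US dictionary
spell.suggest('colour')  // e.g. ['color', ...]

The part I can't reproduce is the cleaning/tokenizing that happens before each word reaches the spell checker.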
I've looked through the cloned Firefox (Gecko) codebase, e.g. mozSpellChecker.h and other related files found by searching for "spellcheck", but I cannot figure out how the text is cleaned.
Reverse engineering it has been a major PITA; this is what I have so far:
// cleans text and strips out unwanted symbols/patterns before we use it
// returns an empty string if content undefined
function cleanText (content, filter = true) {
  if (!content) {
    console.warn(`MultiDict: cannot clean falsy or undefined content: "${content}"`)
    return ''
  }

  // ToDo: first split string by spaces in order to properly ignore urls
  const rxUrls = /^(http|https|ftp|www)/
  const rxSeparators = /[\s\r\n.,:;!?_<>{}()[\]"`´^$°§½¼³%&¬+=*~#|/\\]/
  const rxSingleQuotes = /^'+|'+$/g

  // split all content by any character that should not form part of a word
  return content.split(rxSeparators)
    .reduce((acc, string) => {
      // remove any number of single quotes that do not form part of a word i.e. 'y'all' > y'all
      string = string.replace(rxSingleQuotes, '')
      // we never want empty strings, so skip them
      if (string.length < 1) {
        return acc
      }
      // for when we're just cleaning the text of punctuation (i.e. not filtering out emails, etc)
      if (!filter) {
        return acc.concat([string])
      }
      // filter out emails, URLs, numbers, and strings less than 2 characters in length
      if (!string.includes('@') && !rxUrls.test(string) && isNaN(string) && string.length > 1) {
        return acc.concat([string])
      }
      return acc
    }, [])
}
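Tracing that code on a small sample shows the URL problem the ToDo comment mentions: the string is split on ':', '/' and '.' before the URL test runs, so URL fragments survive the filter (output hand-traced from the code above, so treat it as approximate):

cleanText("Visit https://example.com or email me@example.com, y'all!")
// => ['Visit', 'example', 'com', 'or', 'email', 'com', "y'all"]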
But I'm still seeing big differences between my cleaned output and what Firefox spell checks when testing things like - well - the text area used to create this question.
To be clear: I'm looking for the exact method(s), patterns, and rules that Firefox uses to clean text before spell checking; since it's open source it should be in there somewhere, but I can't seem to find it!
I believe you want the functions in mozInlineSpellWordUtil.cpp.
From the header:
/**
* This class extracts text from the DOM and builds it into a single string.
* The string includes whitespace breaks wherever non-inline elements begin
* and end. This string is broken into "real words", following somewhat
* complex rules; for example substrings that look like URLs or
* email addresses are treated as single words, but otherwise many kinds of
* punctuation are treated as word separators. GetNextWord provides a way
* to iterate over these "real words".
*
* The basic operation is:
*
* 1. Call Init with the weak pointer to the editor that you're using.
* 2. Call SetPositionAndEnd to initialize the current position inside the
* previously given range and set where you want to stop spellchecking.
* We'll stop at the word boundary after that. If SetEnd is not called,
* we'll stop at the end of the root element.
* 3. Call GetNextWord over and over until it returns false.
*/
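A rough JavaScript approximation of the strategy that comment describes - split on whitespace first, keep anything URL- or email-shaped intact as a single "word", and only then split the rest on punctuation (a sketch of the idea, not Firefox's actual logic; the regexes are crude stand-ins for its much longer classifier):

const rxLooksLikeUrl = /^(https?|ftp):\/\/|^www\./i
const rxLooksLikeEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/
const rxPunctuation = /[\s.,:;!?"()[\]{}<>]+/

function realWords (text) {
  return text.split(/\s+/).flatMap(token => {
    // URL- and email-shaped tokens are kept whole, as single "words"
    if (rxLooksLikeUrl.test(token) || rxLooksLikeEmail.test(token)) {
      return [token]
    }
    // everything else is broken up on punctuation
    return token.split(rxPunctuation).filter(word => word.length > 0)
  })
}

Splitting on whitespace before applying the URL/email test is essentially the ToDo already noted in your code.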
You can find the complete source here, but it is fairly complex. For example, here is the method used to classify parts of the text as email addresses or URLs; it's over 50 lines long just to handle that case.
Writing a spell checker seems trivial in principle, but as you can see from the source, it is a major endeavor. I'm not saying you shouldn't try, but as you've likely discovered, the devil is in the details of the edge cases.
Just as one example, when you're deciding what constitutes a word boundary or not, you have to decide which characters to ignore, including characters outside of the ASCII range. For example, here you can see the MONGOLIAN TODO SOFT HYPHEN being handled like the ASCII hyphen character:
// IsIgnorableCharacter
//
// These characters are ones that we should ignore in input.

inline bool IsIgnorableCharacter(char ch) {
  return (ch == static_cast<char>(0xAD));  // SOFT HYPHEN
}

inline bool IsIgnorableCharacter(char16_t ch) {
  return (ch == 0xAD ||    // SOFT HYPHEN
          ch == 0x1806);   // MONGOLIAN TODO SOFT HYPHEN
}
Again, I'm not trying to dissuade you from working on this project, but tokenizing text into discrete words in a way that works within the context of HTML and in a multilingual environment is a major endeavor.