I am working on a small project and need some help. I have a CSV file with 150,000 rows (each row has 10 columns of data). I am using fgetcsv to read the file, and during the loop I want to match one of the columns (call it stringx) of each row against an array of 10,000 words. If any of the 10,000 words exists in stringx, it is removed using preg_replace.
All of this works fine; the problem is that it's too slow.
I have tried two methods to match against the array:
1) Convert stringx to an array using explode(" ", $stringx) and then use array_diff($array_stringx, $array_10000).
2) foreach over $array_10000 and preg_replace each word out of $stringx.
Method 1 takes about 60 seconds to go through 200 rows of data; method 2 can loop through 500 rows in 60 seconds.
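For reference, method 1 boils down to roughly the following (the word list and string here are small placeholders for the real data):

```php
<?php
// Sketch of method 1: $array_10000 stands in for the real 10,000-word list.
$array_10000 = array('foo', 'bar');

$stringx = 'keep foo this bar text';
$array_stringx = explode(' ', $stringx);

// array_diff compares the exploded words against the full word list on
// every row, which is the cost I am trying to avoid.
$filtered = implode(' ', array_diff($array_stringx, $array_10000));
// $filtered === 'keep this text'
```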
Is there a better way to do this?
Once again, I am looking for an efficient way to (basically) array_diff an array of 10,000 words against 150,000 strings one at a time.
Help is much appreciated.
The following is just an alternative. It may or may not fulfil your requirements.
It performs 84 ops/second with a 10k-word dictionary and a 15 KB string on my laptop.
The downside is that it does not remove the spaces around the removed words.
$wordlist is just rows with one word each; it could be loaded from a file.
// Build a hash lookup (word => index) so membership tests are O(1).
$dict = array_flip(preg_split('/\n/', $wordlist));

function filter($str, $dict) {
    // Only words that actually occur in $str get a regex each.
    $words = preg_split('/\s/', $str);
    sort($words);
    $words = array_unique($words);
    $removeWords = array();
    foreach ($words as $word) {
        if (array_key_exists($word, $dict)) {
            // preg_quote guards against regex metacharacters in the word.
            $removeWords[] = '/\b' . preg_quote($word, '/') . '\b/';
        }
    }
    return preg_replace($removeWords, '', $str);
}
Another example that performs a bit faster (107 ops/s with a 15 KB string and a 10k-word dictionary):

function filter2($str, $dict) {
    // Split at word boundaries so the delimiters (spaces, punctuation)
    // become their own array elements and survive the implode.
    $words = preg_split('/\b/', $str);
    foreach ($words as $k => $word) {
        if (array_key_exists($word, $dict)) {
            unset($words[$k]);
        }
    }
    return implode('', $words);
}