I am trying to tokenize a Gujarati (an Indian language) word into characters.
Example: if વાનર is the word, I want a list of characters like [વા, ન, ર].
I tried java.text.BreakIterator with the Gujarati locale, but it did not work, even though it works perfectly for Hindi.
Here is the code:
import java.text.*;
import java.util.*;

public class Language {
    public static void main(String[] args) {
        String text = "વાનર";
        Locale gujarati = new Locale("gu", "IN");
        BreakIterator breaker = BreakIterator.getCharacterInstance(gujarati);
        breaker.setText(text);
        int start = breaker.first();
        for (int end = breaker.next(); end != BreakIterator.DONE; start = end, end = breaker.next()) {
            System.out.println(text.substring(start, end));
        }
    }
}
Output:
વ
ા
ન
ર
Is there any library that can do this correctly? I am fine with languages other than Java.
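If a full syllabifier is more than you need, Python's standard-library unicodedata can already group dependent vowel signs and other combining marks with their base consonant. This is only a sketch, not full UAX #29 grapheme clustering — it attaches anything in the Mn (non-spacing mark) or Mc (spacing mark) categories to the previous character, and it will still split conjuncts like ત્ર at the virama. The function name clusters is my own:

```python
import unicodedata

def clusters(text):
    """Group each combining mark with its preceding base character.

    Simplified sketch: Mn (non-spacing) and Mc (spacing) marks attach to
    the previous cluster; everything else starts a new cluster.
    """
    out = []
    for ch in text:
        if out and unicodedata.category(ch) in ('Mn', 'Mc'):
            out[-1] += ch  # vowel sign / anusvara / virama joins its base
        else:
            out.append(ch)
    return out

print(clusters("વાનર"))  # ['વા', 'ન', 'ર']
```

For the example word this already produces the desired [વા, ન, ર], because ા (U+0ABE) is a spacing combining mark (Mc).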
I have written a small Python function to convert Gujarati words into a list of syllables. My code is inspired by the GitHub repo libindic/syllabalizer.
Code:
def syllabify_gu(text):
    # Dependent vowel signs, candrabindu, anusvara, visarga and virama:
    # these attach to the preceding consonant instead of starting a new
    # syllable.
    signs = [u'\u0abe', u'\u0abf', u'\u0ac0', u'\u0ac1', u'\u0ac2',
             u'\u0ac3', u'\u0ac4', u'\u0ac5', u'\u0ac7', u'\u0ac8',
             u'\u0ac9', u'\u0acb', u'\u0acc', u'\u0a81', u'\u0a82',
             u'\u0a83', u'\u0acd']
    limiters = ['\"', '\'', '`', '!', ';', ',', '?', '.']
    lst_chars = []
    for char in text:
        if char in limiters:
            lst_chars.append(char)
        elif char in signs:
            # Attach the sign to the previous cluster.
            lst_chars[-1] = lst_chars[-1] + char
        else:
            try:
                # Join ર to a preceding ત + virama so that ત્ર stays one syllable.
                if char == u'\u0ab0' and len(lst_chars) > 0 and lst_chars[-1][-1] == u'\u0acd' and lst_chars[-1][-2] == u'\u0aa4':
                    lst_chars[-1] = lst_chars[-1] + char
                else:
                    lst_chars.append(char)
            except IndexError:
                lst_chars.append(char)
    return lst_chars
syllabify_gu("સંગીત એ એવું પવિત્ર ઝરણું છે, જેનાં વહેતા તરંગોથી અંતરનાં તાર રણઝણી ઉઠે છે.")
Output:
['સં',
'ગી',
'ત',
' ',
'એ',
' ',
'એ',
'વું',
' ',
'પ',
'વિ',
'ત્ર',
' ',
'ઝ',
'ર',
'ણું',
' ',
'છે',
',',
' ',
'જે',
'નાં',
' ',
'વ',
'હે',
'તા',
' ',
'ત',
'રં',
'ગો',
'થી',
' ',
'અં',
'ત',
'ર',
'નાં',
' ',
'તા',
'ર',
' ',
'ર',
'ણ',
'ઝ',
'ણી',
' ',
'ઉ',
'ઠે',
' ',
'છે',
'.']
You can find the Unicode code points for Gujarati characters in the Unicode Gujarati block (U+0A80–U+0AFF).
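The special-casing of ત્ર above can be generalized: any consonant that follows a virama (U+0ACD) belongs to the same conjunct, not just ર after ત. Below is a sketch of that generalization; syllabify_gu2 is a hypothetical name, and it assumes the "signs" are exactly the characters with a Unicode combining-mark category, which matches the hard-coded list above for Gujarati:

```python
import unicodedata

VIRAMA = '\u0acd'

def syllabify_gu2(text):
    """Like syllabify_gu, but joins any consonant after a virama.

    Sketch only: combining marks (category Mn/Mc) attach to the previous
    cluster, and a character following a virama is folded into the same
    conjunct cluster.
    """
    out = []
    for ch in text:
        if out and unicodedata.category(ch).startswith('M'):
            out[-1] += ch  # vowel sign / anusvara / visarga / virama
        elif out and out[-1].endswith(VIRAMA):
            out[-1] += ch  # consonant joins the conjunct, e.g. ત્ + ર -> ત્ર
        else:
            out.append(ch)
    return out

print(syllabify_gu2("પવિત્ર"))  # ['પ', 'વિ', 'ત્ર']
```

This handles conjuncts the original function would split, at the cost of also merging anything (even a space) that directly follows a dangling virama, so punctuation handling from the original function would still need to be layered on top.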