Reed-Solomon Code for large code length and errors

I'm new to Error Correction Codes (ECC). Take Reed-Solomon codes, RS(n,k), as an example: an RS(n,k) code encodes k symbols into n symbols by appending n-k parity symbols, and it can correct up to (n-k)/2 symbol errors.

I want to know whether there is a design or implementation of RS(n,k) with large n, say n = 600 and k = 400, which could correct about 100 symbol errors. If it is not possible, what is the limitation? If it is possible, what is the time cost for big data, especially for decoding, since decoding is more complicated than encoding?
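As a quick sanity check on these parameters, the sketch below derives the parity count, the error-correction capability, and the smallest binary field whose codeword length can accommodate n = 600 symbols (all names here are illustrative, not from any particular library):

```python
# Sanity-check the RS(600, 400) parameters from the question.
n, k = 600, 400
p = n - k            # parity symbols
t = p // 2           # correctable symbol errors

# Smallest GF(2^m) whose maximum codeword length 2^m - 1 can hold n symbols.
m = 1
while (1 << m) - 1 < n:
    m += 1

print(p, t, m)  # 200 100 10
```

So RS(600,400) corrects up to 100 symbol errors and must live in at least GF(2^10), since GF(2^9) only allows codewords up to 511 symbols.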

I looked up several pieces of literature. Although n = 544 is possible, the current solution only supports RS(544,514), which means the error correction capability is only (544-514)/2 = 15 symbols.

I know the hardest part of decoding is solving the key equation, but I don't know how to estimate the time cost of decoding.



  • Keep in mind that as n in RS(n,k) grows, the field must grow with it. For n = 512 to 1023, a 10-bit field GF(2^10) is needed. If the data is 400 bytes, that is actually 320 ten-bit symbols.
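    To make the 10-bit field concrete, here is a minimal GF(2^10) multiply: carry-less multiplication with reduction modulo a primitive polynomial. The choice of x^10 + x^3 + 1 (0x409) is an assumption for illustration; a real implementation may pick a different primitive polynomial of degree 10.

    ```python
    # Minimal GF(2^10) multiplication: shift-and-xor with reduction modulo
    # x^10 + x^3 + 1 (0x409), one commonly listed primitive polynomial (assumed here).
    PRIM = 0x409

    def gf1024_mul(a, b):
        """Multiply two 10-bit elements of GF(2^10)."""
        r = 0
        while b:
            if b & 1:
                r ^= a          # add (xor) the current shifted copy of a
            b >>= 1
            a <<= 1
            if a & 0x400:       # bit 10 overflowed: reduce modulo PRIM
                a ^= PRIM
        return r

    # If the polynomial is primitive, alpha = x (i.e. 2) generates the whole
    # multiplicative group, so its order is 2^10 - 1 = 1023.
    e, order = 1, 0
    while True:
        e = gf1024_mul(e, 2)
        order += 1
        if e == 1:
            break
    print(order)  # 1023
    ```

    Practical implementations usually replace this loop with log/antilog tables of size 1023, so a field multiply is two table lookups and an addition modulo 1023.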

    RS(600,400) is not an issue. For RS(n,k) with number of parities = number of syndromes = n-k = p, encoding has time complexity O(pn), and generating syndromes is also O(pn). Solving the key equation is O(p^2) with a Berlekamp-Massey or Sugiyama extended-Euclid decoder, or O((p/2)^3) with Peterson-Gorenstein-Zierler matrix inversion.
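    Plugging RS(600,400) into those complexities gives a rough back-of-envelope count of field operations (constant factors ignored, and later steps such as the Chien search and Forney's algorithm left out):

    ```python
    # Rough field-operation counts for RS(600, 400), using the complexities above.
    n, k = 600, 400
    p = n - k  # 200 parities = 200 syndromes

    syndromes = p * n              # O(p*n) syndrome generation
    berlekamp_massey = p * p       # O(p^2) key-equation solver
    pgz_inversion = (p // 2) ** 3  # O((p/2)^3) matrix-inversion decoder

    print(syndromes, berlekamp_massey, pgz_inversion)
    # 120000 40000 1000000
    ```

    So even the cubic PGZ approach is only on the order of a million field operations per codeword; the Berlekamp-Massey path is dominated by the O(pn) syndrome generation, not by the key equation.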

    As an example of a "larger" code, a binary BCH code works with elements of GF(2^m) but limits the generator polynomial to 1-bit coefficients. For example, there is a BCH(48600,48408) code with 48408 data bits and 192 parity bits, where syndrome generation and error decoding are done in GF(2^16). It is used in the digital broadcast standard DVB-S2.