I was reading the Embedding documentation, and it states that `keras.layers.Embedding` "turns positive integers (indexes) into dense vectors of fixed size. e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]."
However, I accidentally included a decimal number among my indices, and it still worked. I just want to know what happens under the hood when the index is not a positive integer?
You can check for yourself here: the inputs are cast to int32, meaning any floating-point index is truncated to an integer (the fractional part is dropped), so the embedding lookup still works. If you pass a negative index, or one greater than or equal to the vocabulary size, you will get an error at runtime.
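A minimal sketch of this behavior, using NumPy rather than TensorFlow: NumPy's `astype(np.int32)` truncates floats toward zero the same way a float-to-int cast does, and a small lookup table stands in for the layer's embedding matrix. The `embed` helper below is hypothetical, not part of the Keras API.

```python
import numpy as np

# Toy "embedding matrix": a vocabulary of 5 indices, each mapped to a 2-d vector.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(5, 2)).astype(np.float32)

def embed(indices):
    """Hypothetical helper mimicking the Embedding layer's lookup:
    cast indices to int32 (truncating any fractional part), then index."""
    idx = np.asarray(indices).astype(np.int32)  # e.g. 4.7 -> 4
    return embedding_matrix[idx]

# A float index like 4.7 is silently truncated to 4, so both lookups match:
assert np.array_equal(embed([4.7]), embed([4]))

# An out-of-range index fails at lookup time, analogous to the runtime
# error TensorFlow raises for indices >= vocabulary size:
try:
    embed([7])
except IndexError:
    print("out-of-range index rejected")
```

So the decimal index "worked" only because the cast quietly turned it into a valid integer index before the lookup.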