Some of the TensorFlow examples using BERT models show the BERT preprocessor being used to "pack" inputs, e.g. in this example:

```python
text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok], tf.constant(20))
```
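For context, here is a minimal, self-contained sketch of how `tok` is produced upstream of that call (the raw strings and the sequence length of 20 are my own assumptions; the tokenize-then-pack flow follows the preprocessing model's documented usage):

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text  # registers the ops used by the preprocessing model

# Load the preprocessing SavedModel so .tokenize and .bert_pack_inputs
# can be called directly.
bert_preprocess = hub.load("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")

# Tokenize a batch of raw strings into a RaggedTensor of wordpiece ids.
tok = bert_preprocess.tokenize(tf.constant(["hello TensorFlow"]))

# Packing two tokenized segments works: the SavedModel has a signature for this case.
text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok], tf.constant(20))
```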
The documentation implies that this works equally well with more than two input sentences, so (I would expect) one should be able to do something like:

```python
text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok, tok], tf.constant(20))
```
However, doing so causes the error shown at the bottom of this post.[1]
I get that there isn't a matching signature; if I read this correctly (and I may not!), there are signatures for a single input and for two inputs, but none for three. So what's the recommended way to pack more than two sentences into input suitable for a classification task, as suggested in the above colab?
[1]:
```
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
  Positional arguments (2 total):
    * [tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("inputs:0", shape=(None,), dtype=int32), row_splits=Tensor("inputs_2:0", shape=(None,), dtype=int64)), row_splits=Tensor("inputs_1:0", shape=(2,), dtype=int64)), tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("inputs_3:0", shape=(None,), dtype=int32), row_splits=Tensor("inputs_5:0", shape=(None,), dtype=int64)), row_splits=Tensor("inputs_4:0", shape=(2,), dtype=int64)), tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("inputs_6:0", shape=(None,), dtype=int32), row_splits=Tensor("inputs_8:0", shape=(None,), dtype=int64)), row_splits=Tensor("inputs_7:0", shape=(2,), dtype=int64))]
    * Tensor("seq_length:0", shape=(), dtype=int32)
  Keyword arguments: {}

Expected these arguments to match one of the following 4 option(s):

Option 1:
  Positional arguments (2 total):
    * [RaggedTensorSpec(TensorShape([None, None]), tf.int32, 1, tf.int64)]
    * TensorSpec(shape=(), dtype=tf.int32, name='seq_length')
  Keyword arguments: {}

Option 2:
  Positional arguments (2 total):
    * [RaggedTensorSpec(TensorShape([None, None]), tf.int32, 1, tf.int64), RaggedTensorSpec(TensorShape([None, None]), tf.int32, 1, tf.int64)]
    * TensorSpec(shape=(), dtype=tf.int32, name='seq_length')
  Keyword arguments: {}

Option 3:
  Positional arguments (2 total):
    * [RaggedTensorSpec(TensorShape([None, None, None]), tf.int32, 2, tf.int64), RaggedTensorSpec(TensorShape([None, None, None]), tf.int32, 2, tf.int64)]
    * TensorSpec(shape=(), dtype=tf.int32, name='seq_length')
  Keyword arguments: {}

Option 4:
  Positional arguments (2 total):
    * [RaggedTensorSpec(TensorShape([None, None, None]), tf.int32, 2, tf.int64)]
    * TensorSpec(shape=(), dtype=tf.int32, name='seq_length')
  Keyword arguments: {}
```
The BERT model expects inputs with a specific shape. Working sample code:
```python
import tensorflow_hub as hub
import tensorflow_text as text  # registers the ops needed by the preprocessing model

bert_preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
bert_encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4", trainable=True)

def get_sentence_embedding(sentences):
    # Map raw strings to the fixed-shape dict of inputs the encoder expects.
    preprocessed_text = bert_preprocess(sentences)
    return bert_encoder(preprocessed_text)['pooled_output']

get_sentence_embedding([
    "How to find which version of TensorFlow is",
    "TensorFlow not found using pip"
])
```
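A quick way to see the "specific input shape" mentioned above is to inspect the preprocessing layer's output directly; the shapes below assume this model's default sequence length of 128:

```python
import tensorflow as tf

# Reuses the bert_preprocess layer defined above.
preprocessed = bert_preprocess(tf.constant(["TensorFlow not found using pip"]))
for name, tensor in preprocessed.items():
    print(name, tensor.shape)
# Prints something like (key order may vary):
#   input_word_ids (1, 128)
#   input_mask (1, 128)
#   input_type_ids (1, 128)
```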