I have an error in Google Colab when importing TFBertModel; two months ago everything worked fine.
from transformers import TFBertModel
I receive:
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1389 try:
-> 1390 return importlib.import_module("." + module_name, self.__name__)
1391 except Exception as e:
[... 25 frames hidden ...]
AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1390 return importlib.import_module("." + module_name, self.__name__)
1391 except Exception as e:
-> 1392 raise RuntimeError(
1393 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1394 f" traceback):\n{e}"
RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
The versions are Keras 3.1.1, TensorFlow 2.16.1, and Transformers 4.38.2.
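For completeness, this is how I read the versions off the failing runtime (just the standard __version__ attributes; importing transformers itself still succeeds, since it loads modules lazily — only the TFBertModel import fails):
import tensorflow as tf
import keras
import transformers
# Versions in the failing Colab runtime
print("TensorFlow:", tf.__version__)              # 2.16.1
print("Keras:", keras.__version__)                # 3.1.1
print("Transformers:", transformers.__version__)  # 4.38.2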
Using a Python 3 + CPU runtime created today, I was able to run the following TFBertModel code just fine in Google Colab. I noticed that our TensorFlow versions differ; other than that, I don't believe TFBertModel is being deprecated:
Transformers version: 4.38.2, TensorFlow version: 2.15.0
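If you want to reproduce that combination on your runtime, a workaround consistent with this observation is to pin TensorFlow back to 2.15.0 (which still bundles Keras 2 rather than Keras 3) and then restart the runtime. The pin below just mirrors my working environment; it's a sketch of a workaround, not an official fix:
# Run in a Colab cell, then restart the runtime so the pinned version is picked up
!pip install "tensorflow==2.15.0"
Newer Transformers releases can also run on TF 2.16 via the backwards-compatible tf-keras package, but I haven't tested that combination here.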
Example of working TFBertModel code
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel
# Instantiate the tokenizer and the model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')
# For fun, let's encode some text
input_texts = ["Hello World, I'm using Bert!", "I am also using Bert in a sentence."]
encoding = tokenizer(input_texts, return_tensors='tf', padding=True, truncation=True)
# Get the BERT representations
outputs = model(encoding['input_ids'], attention_mask=encoding['attention_mask'])
last_hidden_state = outputs.last_hidden_state
print(last_hidden_state)
Output:
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing TFBertModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertModel from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertModel were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.
tf.Tensor(
[[[ 0.05163623 0.25609216 0.05970613 ... -0.20756713 0.06182499
0.7532056 ]
...
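As a follow-up to the example above (reusing outputs, encoding, and last_hidden_state from that snippet; the mean-pooling variant is just an illustration, not something from the original question), per-sentence vectors can be extracted like this:
# Continues the example above; `outputs`, `encoding`, `last_hidden_state` are reused
pooled = outputs.pooler_output  # (batch_size, hidden_size): tanh-projected [CLS] token
# Mask-aware mean pooling over the token embeddings, ignoring padding positions
mask = tf.cast(encoding['attention_mask'], tf.float32)[:, :, tf.newaxis]
mean_pooled = tf.reduce_sum(last_hidden_state * mask, axis=1) / tf.reduce_sum(mask, axis=1)
print(pooled.shape, mean_pooled.shape)  # (2, 768) (2, 768)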