There are many packaged BERT models available online that we can download and use to build our own sentence or word vectors. These models have already been pre-trained on large corpora, so how do we get the word vectors we need out of such a pre-trained model?
The answer is today's protagonist: bert-embedding.
Installation
pip install bert-embedding
Installation is simple, but a few problems can come up. First, the environment must have TensorFlow installed; keep the version from being too new to avoid compatibility problems (the version I installed is 1.13.1). Also, the only numpy version compatible with bert-embedding is 1.14.6, which is installed automatically along with bert-embedding. If you install pandas afterwards, a recent pandas will upgrade numpy to a higher version during its installation, and bert-embedding will then throw errors at runtime.
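If pandas does end up upgrading numpy and bert-embedding starts failing, one simple workaround is to pin numpy back to the compatible version afterwards (the version number is the one mentioned above; adjust it to whatever your environment actually needs):
pip install numpy==1.14.6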
Usage
from bert_embedding import BertEmbedding

bert_abstract = """We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%."""
sentences = bert_abstract.split('\n')
bert_embedding = BertEmbedding()
result = bert_embedding(sentences)
This is the example from the official documentation; let's see what the output looks like.
first_sentence = result[0]
first_sentence[0]
# ['we', 'introduce', 'a', 'new', 'language', 'representation', 'model', 'called', 'bert', ',', 'which', 'stands', 'for', 'bidirectional', 'encoder', 'representations', 'from', 'transformers']
len(first_sentence[0])
# 18
len(first_sentence[1])
# 18
first_token_in_first_sentence = first_sentence[1]
first_token_in_first_sentence[1]
# array([ 0.4805648 , 0.18369392, -0.28554988, ..., -0.01961522,
# 1.0207764 , -0.67167974], dtype=float32)
first_token_in_first_sentence[1].shape
# (768,)
result is a two-dimensional list whose length equals the number of sentences. Each inner item holds a sentence's tokens together with the corresponding word vectors; the vectors do not include the [CLS] and [SEP] markers, and the tokens and vectors correspond one to one.
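To make that structure concrete, here is a minimal sketch (the names tokens and vectors are mine) that walks over result and checks the one-to-one correspondence:
for tokens, vectors in result:
    # each item pairs a sentence's token list with a list of 768-dimensional vectors
    assert len(tokens) == len(vectors)
    print(tokens[0], vectors[0].shape)   # e.g. 'we' (768,)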
So how do we actually use the word vectors we obtain?
Below is a practical example.
- Read the data
import pandas as pd

train = pd.read_csv('data/train.tsv', sep='\t')
test = pd.read_csv('data/test.tsv', sep='\t')
test.head()
- Get the word vectors
from bert_embedding import BertEmbedding
bert_embedding = BertEmbedding()
train_text_embed = bert_embedding(train['text'])
test_text_embed = bert_embedding(test['text'])
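One caveat: the library's own example passes a plain Python list of strings, so if feeding a pandas Series directly ever causes trouble, converting it first is a safe alternative (a hedged suggestion, not from the original post):
train_text_embed = bert_embedding(train['text'].tolist())
test_text_embed = bert_embedding(test['text'].tolist())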
- Padding
import numpy as np

max_len = max([len(x.split(" ")) for x in train.text] + [len(x.split(" ")) for x in test.text])

def padding(text, max_len):
    # pad each sentence's list of word vectors with zero vectors up to max_len
    pad_text = []
    for i in text:
        embedding_i = i[1]  # a list as long as the sentence; each element is a word vector
        pad = [np.zeros(768)]
        pad_text_i = embedding_i + pad * (max_len - len(i[0]))
        pad_text.append(pad_text_i)
    text_embedding = np.array(pad_text)
    return text_embedding

train_text_embedding = padding(train_text_embed, max_len)
test_text_embedding = padding(test_text_embed, max_len)
The goal of this step is to pad every sentence to the same length. Note that max_len here is taken over all sentences in the training and test sets, because the model's input dimension is fixed. (One caveat: max_len above comes from a whitespace split, while bert-embedding's tokenizer also splits off punctuation and subwords, so a sentence's token count len(i[0]) can exceed it; computing max_len from the tokenized lengths is safer.) A better approach is to pad only to the maximum sentence length within each batch, which is a bit more involved.
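For reference, a rough sketch of what that per-batch padding could look like, reusing the same logic as the padding function above (pad_batch is a hypothetical helper, and each batch item is assumed to be a (tokens, vectors) pair as returned by bert_embedding):
def pad_batch(batch):
    # pad only to the longest sentence within this batch
    batch_max = max(len(tokens) for tokens, vectors in batch)
    padded = [vectors + [np.zeros(768)] * (batch_max - len(tokens))
              for tokens, vectors in batch]
    return np.array(padded)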
After padding, we convert everything into a numpy array.
If you are using PyTorch, convert it into a tensor next.
- Convert the format
import torch

train_text = torch.from_numpy(train_text_embedding).float()
test_text = torch.from_numpy(test_text_embedding).float()
Let's check the dimensions.
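For instance (the printed shape below is illustrative, not actual output):
print(train_text.shape)
# torch.Size([number of sentences, max_len, 768])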
The shape is [number of sentences, sentence length, word-vector dimension].
OK, next just split it into mini-batches and it can be fed into the model.
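As a closing sketch, here is one way that mini-batching could be done with PyTorch's DataLoader; the batch size and the absence of a label tensor are my assumptions, not part of the original example:
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(train_text)                       # add a label tensor here for supervised training
loader = DataLoader(dataset, batch_size=32, shuffle=True)
for (batch_text,) in loader:
    # batch_text: [batch_size, max_len, 768], ready to go into the model
    pass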