
Visualization of transformed features in BERT

Data Science: Asked by metk on April 21, 2021

I'm following the Intent Recognition with BERT using Keras and TensorFlow 2 tutorial available at kdnuggets.com, and this is the code I use to evaluate the results.

import numpy as np

sentences = [
  "Play our song now",
  "Rate this book as awful"
]

# Tokenize, add the special [CLS]/[SEP] tokens and map tokens to their ids
pred_tokens = map(tokenizer.tokenize, sentences)
pred_tokens = map(lambda tok: ["[CLS]"] + tok + ["[SEP]"], pred_tokens)
pred_token_ids = list(map(tokenizer.convert_tokens_to_ids, pred_tokens))

# Pad every sequence with zeros up to the maximum sequence length
pred_token_ids = map(
  lambda tids: tids + [0] * (data.max_seq_len - len(tids)),
  pred_token_ids
)
pred_token_ids = np.array(list(pred_token_ids))

# Pick the class with the highest score for each sentence
predictions = model.predict(pred_token_ids).argmax(axis=-1)

for text, label in zip(sentences, predictions):
  print("text:", text, "\nintent:", classes[label])
  print()
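
For the visualization described below, I think I first need the transformed features themselves, not just the predicted labels. Here is a minimal sketch of how I imagine getting them, assuming the fine-tuned Keras model is a functional model whose BERT layer is named "bert" (that layer name, and the idea of building a second feature-extraction model on top of it, are my assumptions and not part of the tutorial):

from tensorflow import keras

# Tap the output of the BERT layer so that predict() returns the transformed
# features instead of the class probabilities.
# Assumed shape of the BERT layer output: (batch, seq_len, hidden_size).
bert_output = model.get_layer("bert").output
cls_embeddings = keras.layers.Lambda(lambda seq: seq[:, 0, :])(bert_output)  # keep only the [CLS] token
feature_model = keras.Model(inputs=model.input, outputs=cls_embeddings)

# One feature vector of size hidden_size per padded input sentence
features = feature_model.predict(pred_token_ids)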

What I want to add here is a visualization of the features transformed by BERT, using a plot similar to the ones used for clustering problems. It should look like this:
[Image: scatter plot of points grouped into colored clusters, taken from towardsdatascience.com]

Each color represents an intent, and the graph is meant to show the distribution of the predicted intents, in order to visualize how well BERT is doing.

Is there a graph similar to this for visualizing model predictions? In that case, which hyperparameters related to the transformed features should be considered, given that the predicted intents are not stored in a dataframe?
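
For reference, here is a minimal sketch of the kind of plot I have in mind, using scikit-learn's t-SNE and matplotlib (both are my own choices rather than anything from the tutorial), applied to the features and predictions from the snippets above:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

# `features` and `predictions` come from the snippets above; in practice they
# would be computed for the whole test set, since t-SNE needs more samples
# than the two example sentences (perplexity must be below the sample count).
projected = TSNE(n_components=2, perplexity=5, random_state=42).fit_transform(features)

# One scatter series per predicted intent, so every intent gets its own color
for label in np.unique(predictions):
  mask = predictions == label
  plt.scatter(projected[mask, 0], projected[mask, 1], label=classes[label])

plt.legend()
plt.title("BERT [CLS] features (t-SNE), colored by predicted intent")
plt.show()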

Thank you for your help, peeps.
