Data Science Asked by BlueGirl on August 4, 2020
How can I get the hidden-layer outputs of a simple one-layer LSTM?
cat("Building model\n")
model <- keras_model_sequential() %>%
  layer_lstm(units = 64, dropout = 0.2, input_shape = c(seqlength, length(chars))) %>%
  layer_dense(units = length(chars), activation = "softmax") %>%
  compile(loss = "categorical_crossentropy",
          optimizer = optimizer_sgd(lr = 0.001,
                                    decay = 1e-6,
                                    momentum = 0.9,
                                    nesterov = TRUE),
          metrics = c("accuracy"))
summary(model)
cat("Training\n")
history <- model %>%
  fit(train,
      trainLabels,
      epochs = 6,
      batch_size = 16,
      validation_split = 0.2)
I found this guide, but I don't know how to adapt it to this simple model, or what data is supposed to be.
model <- ... # create the original model
layer_name <- 'my_layer'
intermediate_layer_model <- keras_model(inputs = model$input,
                                        outputs = get_layer(model, layer_name)$output)
intermediate_output <- predict(intermediate_layer_model, data)
Can anybody give a sample of this?
You'll definitely want to name the layer you want to observe first (otherwise you'll be doing guesswork with the sequentially generated layer names):
model <- keras_model_sequential() %>%
  layer_lstm(units = 64,
             dropout = 0.2,
             input_shape = c(seqlength, length(chars)),
             name = "lstm") %>%
  ...
The rest is pretty straightforward:
lstm_layer_model <- keras_model(
  inputs = model$input,
  outputs = get_layer(model, "lstm")$output
)
lstm_output <- predict(lstm_layer_model, new_data)
So if you want the LSTM unit activations for a given case, define that case's features as new_data and run the last line.
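For instance, under the model above, a minimal sketch of what to expect (new_data is a hypothetical array shaped like the training input):

# Hypothetical input: new_data must have the same shape as the
# training data, i.e. dim (n_cases, seqlength, length(chars)).
lstm_output <- predict(lstm_layer_model, new_data)

# layer_lstm() returns only its final hidden state by default
# (return_sequences = FALSE), so lstm_output is a matrix of
# dim (n_cases, 64): one row of unit activations per sequence.
dim(lstm_output)

If you instead want the hidden state at every timestep, set return_sequences = TRUE on the LSTM layer, which makes the output a 3-D array of dim (n_cases, seqlength, 64).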
Answered by DHW on August 4, 2020