Asked by Astro on November 19, 2020
Keras now provides advanced parametric activation layers such as PReLU (a trainable variant of LeakyReLU).
Each time I add one of these layers to a Sequential model, an additional set of trainable parameters is added to the graph.
How can I make sure that the activation's trainable parameters are shared across all layers?
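For reference, a minimal sketch of the setup I mean (the layer sizes here are illustrative assumptions, not from my actual model): every PReLU instance added to a Sequential model builds its own independent trainable alpha.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(64, input_shape=(32,)),
    layers.PReLU(),   # builds its own trainable alpha (shape (64,) by default)
    layers.Dense(64),
    layers.PReLU(),   # builds a second, independent trainable alpha
])

# Each PReLU layer reports its own separate trainable parameters.
model.summary()
```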
Thanks