Data Science — Asked on November 5, 2021
The usual strategy in neural networks today is to use min-max scaling to map the input feature vector into [0, 1]. I want to know whether the same principle holds when the inputs have a large dynamic range (for example, a mix of very large and very small values). Isn't it better to use logarithmic scaling in such cases?
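For intuition, here is a minimal sketch (assuming NumPy; the example values are made up) contrasting plain min-max scaling with a log transform followed by min-max on data spanning several orders of magnitude:

```python
import numpy as np

# Hypothetical feature spanning several orders of magnitude.
x = np.array([0.001, 0.05, 1.0, 30.0, 1_000.0, 50_000.0])

# Min-max scaling: the largest value dominates the range, so all the
# small values get squashed into a tiny sliver near 0.
minmax = (x - x.min()) / (x.max() - x.min())

# Log transform first (log1p handles values near zero safely),
# then min-max: the orders of magnitude spread out much more evenly.
logged = np.log1p(x)
log_minmax = (logged - logged.min()) / (logged.max() - logged.min())

print(minmax)      # ~[0, 1e-6, 2e-5, 6e-4, 0.02, 1.0]  small values indistinguishable
print(log_minmax)  # ~[0, 0.004, 0.06, 0.32, 0.64, 1.0]  spread across [0, 1]
```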
If it is a classification problem, then you will use sigmoid or softmax to map the output values into (0, 1); with softmax, the values must also sum to 1, as required of a probability distribution.
Answered by SrJ on November 5, 2021
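To illustrate the answer's point, a minimal NumPy sketch (the logit values are made up) showing that sigmoid maps each value into (0, 1) independently, while softmax additionally normalizes the values so they sum to 1:

```python
import numpy as np

def sigmoid(z):
    # Maps each logit independently into (0, 1); outputs need not sum to 1.
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Maps a logit vector to a probability distribution: each value lies
    # in (0, 1) and the values sum to 1. Subtracting the max improves
    # numerical stability without changing the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, -1.0])
print(sigmoid(logits))  # [0.881 0.731 0.269]  does not sum to 1
print(softmax(logits))  # [0.705 0.259 0.035]  sums to 1
```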