Data Science Asked by Anastasia Kriuchkovska on March 23, 2021
So I have a ResNet50 trained to classify images.
For each prediction I measure the time it takes (both the input and the model are moved to the GPU):
import time

start = time.time()
result = self.model.forward(transformed_image)  # forward pass on the GPU
end = time.time()
print(end - start)
And I always get output like the following:
1.0592937469482422
0.05996203422546387
0.06096029281616211
0.04996800422668457
So the first prediction takes roughly 20 times longer than the following ones.
Why? What happens behind the scenes when we run a prediction for the first time in PyTorch?
I have seen a similar question several times before. See https://stackoverflow.com/a/55577921/9212382 for a possible explanation.
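In short, the usual explanation is that the first forward pass on the GPU also pays a one-time, lazily triggered cost: the CUDA context is created, cuDNN handles are set up, and kernels are selected/loaded on first use. Also, CUDA calls are asynchronous, so wrapping the forward pass in time.time() without synchronizing may not measure what you expect. Below is a minimal, hedged sketch (not from the original post) of how one might time inference with a warm-up pass and explicit synchronization; it assumes model and transformed_image already live on the GPU:

import time
import torch

# Assumption: `model` and `transformed_image` are already on the GPU.
model.eval()

with torch.no_grad():
    # Warm-up pass: triggers CUDA context creation, cuDNN setup, etc.
    _ = model(transformed_image)
    torch.cuda.synchronize()  # wait until the warm-up work has actually finished

    for _ in range(5):
        start = time.time()
        _ = model(transformed_image)
        torch.cuda.synchronize()  # ensure the GPU work is done before stopping the clock
        end = time.time()
        print(end - start)

With the warm-up pass and synchronization in place, the remaining timings should reflect steady-state inference rather than one-time initialization.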
Answered by Jakub on March 23, 2021