Hi @discourse-admin ,
Will the 24-hour time limit cover both retraining and running inference with the models on the holdout data?
How big is the holdout dataset? Knowing that would let us estimate how much time we can spend on inference.
How do A10G GPUs compare with 3090s for FP16 deep learning? That would help me estimate how much compute time I have already used.
Hi @Harshit_S
The 24-hour time limit is just for running inference on the holdout data, which is similar in size and distribution to the labeled training data. To compare specs of the g5 instances used for final scoring, check out the spec sheet, which pairs nicely with this direct comparison against the 3090 across a variety of metrics.
Have fun with model training and optimization!
thinkonward team
Okay, this is big. I was being very conservative about the resources I used for training; now I can go all out.
Hi @discourse-admin
Thank you for your earlier reply. My inference time has been getting very high. If you could give me a ballpark number for how many images we have to predict on within those 24 hours for the holdout data, I can adjust accordingly.
Hi again @Harshit_S,
We are likely to work with around 30 images for the holdout data. Hope that helps.
Best of luck with the challenge!
ThinkOnward Team
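For anyone else budgeting their pipeline: taking the two figures above at face value (roughly 30 holdout images and the full 24-hour window, both of which may differ in practice), a quick back-of-the-envelope sketch of the per-image inference budget looks like this:

```python
# Rough per-image inference budget.
# Assumed figures from the thread: ~30 holdout images, 24-hour limit.
HOLDOUT_IMAGES = 30
TIME_LIMIT_SECONDS = 24 * 60 * 60  # 86,400 s

per_image_budget = TIME_LIMIT_SECONDS / HOLDOUT_IMAGES
print(f"~{per_image_budget:.0f} s (~{per_image_budget / 60:.0f} min) per image")
# → ~2880 s (~48 min) per image
```

So even a fairly heavy ensemble or test-time augmentation setup should fit comfortably, as long as a single image stays well under that ceiling.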