How to select the final submission?

I couldn’t find an option to select the final submission. Does this mean the top submission on the leaderboard is automatically chosen? What should I do if I’m unable to reproduce the best submission and wish to select the second-best one?

Hi @camaro, for the Final Evaluation the top submissions on the Predictive Leaderboard will be invited to send Onward their fully reproducible code, including the neural network developed within the framework you built. Your submission should contain a Jupyter Notebook (Python >3.6) with a clearly written pipeline and any supplements needed to reproduce your results. The Onward Judges will retrain your model and generate 100 samples of x0, x1 for a holdout dataset of 1,000 elements. Outputs will then be scored with the same method as the Live Scoring algorithm. This score counts for 95% of your final score.
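
For concreteness, here is a minimal sketch of what the sampling step of such a pipeline might look like. The `model` and `holdout` objects, the function name, and the use of PyTorch are all hypothetical placeholders, not part of the official evaluation code:

```python
import torch

def generate_samples(model, holdout, n_samples=100, seed=0):
    """Draw n_samples of (x0, x1) for every element in the holdout set."""
    torch.manual_seed(seed)  # fix the sampling randomness
    model.eval()
    draws = []
    with torch.no_grad():
        for _ in range(n_samples):
            draws.append(model(holdout))  # one (x0, x1) draw per element
    return torch.stack(draws)  # assumed shape: (n_samples, n_elements, 2)
```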

In addition, your submission will be assessed for interpretability. This criterion focuses on the extent of documentation (docstrings and markdown), clear variable naming, and adherence to standard Python style guidelines. Interpretability counts for the remaining 5% of your final score.
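
As a rough illustration of the style the judges will be looking for (the function itself is just an example, not part of the scoring code):

```python
import numpy as np

def standardise(features: np.ndarray) -> np.ndarray:
    """Scale each feature column to zero mean and unit variance.

    Parameters
    ----------
    features : 2-D array of shape (n_elements, n_features).

    Returns
    -------
    Array of the same shape with standardised columns.
    """
    return (features - features.mean(axis=0)) / features.std(axis=0)
```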

Important: Make sure the training process for your proposed neural network is reproducible, and keep your best seed. Your work will be disqualified if your results cannot be reproduced during the final evaluation stage.
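
One common way to pin down every source of randomness in Python (shown here assuming PyTorch; adapt the calls if you use another framework):

```python
import os
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix all common sources of randomness so training is reproducible."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```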

If you are unable to reproduce the best submission, you can send the second-best one as long as you can reproduce it.

Happy training!

Onward Team

Thanks for the quick answer!
Another question:
What happens if my submission fails the Goodness of Fit criteria on the holdout dataset (e.g. my submission has an RMSE of 0.099 on the public test set but 0.101 on the hidden test set)?
Is it possible to re-select a submission, or is it simply scored as 0?

Hi @camaro, if your submission does not meet the threshold on the holdout dataset it will receive a score of 0, and you can try submitting again after additional training or with modified hyperparameters for your model.
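
A quick local sanity check can help you avoid this situation. The 0.1 threshold and the safety margin below are inferred from the 0.099 / 0.101 example above, so double-check them against the official rules:

```python
import numpy as np

RMSE_THRESHOLD = 0.1   # inferred from the example above -- confirm in the rules
SAFETY_MARGIN = 0.005  # arbitrary buffer against public/holdout drift

def rmse(y_true, y_pred):
    """Root mean squared error of predictions against targets."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def safe_to_submit(y_true, y_pred):
    """Return (is_safe, score): is the local RMSE comfortably under the cutoff?"""
    score = rmse(y_true, y_pred)
    return score < RMSE_THRESHOLD - SAFETY_MARGIN, score
```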

Onward Team