Hi @camaro! For the Final Evaluation, the top submissions on the Predictive Leaderboard will be invited to send Onward their fully reproducible code, including the neural network you developed within the framework you built. Your submission should contain a Jupyter Notebook (Python >3.6) with a clearly written pipeline and any supplements necessary to reproduce your results. The Onward Judges will retrain your model and generate 100 samples of (x0, x1) for a holdout dataset of 1,000 elements. Outputs will then be scored with the same method as the Live Scoring algorithm. This score counts for 95% of your final score.
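For concreteness, here is a minimal sketch of the shape of what the judges will generate. The function name `sample_x0_x1`, the feature width of 8, and the dummy data are illustrative placeholders, not part of the framework:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N_SAMPLES, N_ELEMENTS = 100, 1_000

def sample_x0_x1(holdout_features: np.ndarray,
                 rng: np.random.Generator) -> np.ndarray:
    """Placeholder sampler standing in for your trained network.

    Returns one (x0, x1) pair per holdout element, shape (n_elements, 2).
    """
    return rng.normal(size=(holdout_features.shape[0], 2))

holdout = rng.normal(size=(N_ELEMENTS, 8))  # dummy holdout features
samples = np.stack([sample_x0_x1(holdout, rng) for _ in range(N_SAMPLES)])
print(samples.shape)  # (100, 1000, 2): 100 sampled (x0, x1) pairs per element
```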
In addition, your code will be assessed for interpretability. This criterion covers the extent of documentation (docstrings and markdown), clear variable naming, and adherence to standard Python style guidelines (e.g., PEP 8). Interpretability counts for the remaining 5% of your final score.
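As an illustration of the level of documentation this points to (the function below is a hypothetical example, not part of the framework):

```python
import numpy as np

def standardize_features(features: np.ndarray) -> np.ndarray:
    """Scale each feature column to zero mean and unit variance.

    Parameters
    ----------
    features : np.ndarray
        Raw input matrix of shape (n_elements, n_features).

    Returns
    -------
    np.ndarray
        Standardized matrix with the same shape.
    """
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / std
```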
Important: make sure the training process of your proposed neural network is reproducible, and keep a record of your best seed. Your submission will be disqualified if your results cannot be reproduced during the final evaluation stage.
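A common way to pin down randomness in Python, sketched here assuming a PyTorch-based network (adapt the `torch` lines to your framework of choice):

```python
import os
import random

import numpy as np
import torch  # assumption: a PyTorch model; swap for your framework

SEED = 42  # replace with your best seed and record it in the notebook

def set_global_seed(seed: int) -> None:
    """Fix every common source of randomness so training is repeatable."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade a little speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_global_seed(SEED)
```

Call `set_global_seed` once at the very top of your notebook, before any data loading or model construction, so every downstream step sees the same random state.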
If you are unable to reproduce your best submission, you may send your second-best one instead, as long as it is reproducible.
Happy training!
Onward Team