Question about Final Deadline

I am entering the competition quite late. Although the final submission deadline is listed as February 8, I wanted to know the deadline for submitting the interpretable code and notebook. Also, putting everything in a single Jupyter notebook seems inefficient; are we allowed to submit a folder where the code is split across several utility .py files for better readability?

@team

Hey, @harshitsheoranlearnt

Thanks for taking part in our challenge. The deadline for the Predictive Leaderboard submission is February 8, 2024. Challengers who end up in the top 10 of the Predictive Leaderboard can submit their code for the final evaluation until February 15, 2024.

We completely understand your intention to modularize your submission source code, which will undoubtedly enhance its interpretability and maintainability. It’s a smart move considering the complexity of the challenge.

However, we believe it’s crucial to keep the main flow, especially the high-level logic, within a Jupyter notebook file. This will provide us with a comprehensive view of the flow’s execution and facilitate final evaluation procedures.

So, feel free to modularize the codebase, organizing utility functions into separate modules for clarity and reusability. But let’s ensure that the main flow remains in the notebook.
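For illustration, a minimal sketch of what that might look like (all module and function names below are hypothetical, not a prescribed layout):

```python
# Notebook cell: the high-level flow stays here; implementation
# details live in utils/ (module and function names are hypothetical).
from utils.data import load_dataset      # e.g., utils/data.py
from utils.model import build_model      # e.g., utils/model.py
from utils.train import train, evaluate  # e.g., utils/train.py

train_ds, test_ds = load_dataset("data/")
model = build_model()
model = train(model, train_ds, seed=42)
print(evaluate(model, test_ds))
```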

Good luck!

Onward Team


Hi @kylepeters08721

For the seven days after February 8, 2024, those who have been selected for final submission may submit multiple final submissions, but only the latest will be considered. If selected, we strongly recommend choosing your highest-scoring submission and finalizing your notebooks and documentation so that the team can reproduce your results.

Here is a brief discussion on final submissions from the two birds one neural network challenge that closed in January.

Happy solving!

Onward Team

"but only the latest will be considered"

Does that mean we should use our last submission slot for the model we want to provide code for? Or can we submit any model at any time and just provide code for the solution we believe in most? I don't know which of my 0.61s is the highest scoring.

From this link:
" Important : Ensure you have preserved the reproducibility of the training process for the proposed neural network and keep your best seed. Your work will be disqualified if your results are unreproducible during the final evaluation stage."

When you say the results must be fully reproducible, does that mean you will use our code to retrain our model and then test it on the testing dataset, and that the result has to match exactly the submission results we submitted?

If so, PyTorch has many edge cases for reproducibility that can be hard to eliminate entirely (Reproducibility — PyTorch 2.2 documentation). This means rerunning the same code may produce very close leaderboard results, but some numbers will differ slightly.
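For context, here is a minimal sketch of the standard determinism settings from the PyTorch docs; even with all of these applied, some operations simply have no deterministic implementation:

```python
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    # Seed every RNG that typically affects a training run.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds CPU and all CUDA devices

    # Required by some deterministic CUDA kernels (see the PyTorch docs).
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # Raise an error instead of silently running a nondeterministic op.
    torch.use_deterministic_algorithms(True)
    # Disable cuDNN autotuning, which can pick different kernels per run.
    torch.backends.cudnn.benchmark = False


set_seed(42)
```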

Or, by fully reproducible, do you mean the recreated model must achieve approximately the same metric results as those obtained on the leaderboard?

Hi @kylepeters08721

If selected for a final submission, we will indeed use your code to retrain your model and test it on the test dataset. The recreated model should achieve approximately the same score as your highest-scoring submission, with minimal edits to the final submission code. The measure of closeness between the reproduced results and your highest-scoring submission is confidential at this time, but we take into account the variability inherent in retraining models.

Thanks for your question!

Onward Team
