Clarification on 4-Hour Inference

Good day

The judges will run a submitted algorithm on up to an AWS SageMaker g5.12xlarge instance, and inference must run within 4 hours.
Could you please clarify what data volume the 4-hour inference limit applies to? Does the "12x" in the instance name indicate that the data size is 12 times larger than the training set?

Best Regards,
Sergey

Hi @pyatkovsky15022001

The 4-hour inference time limit applies to how long it takes a model to run inference on the private holdout dataset. The holdout data is approximately the same size as the provided test dataset for this challenge. g5.12xlarge refers to the specifications of the compute instance that the team will use to run the algorithm, not the data volume. Here are the specs as a reference for what to expect from the compute available for final submissions.

ThinkOnward Team