Scoring function

Hi Team Onward,
I have a couple of questions about the scoring function.

  1. Whatever I submit, even the sample submission file or a very promising solution with respect to the training dataset, I get a zero score, so I’m a little confused. Could you provide an official Python implementation of the scoring function?

  2. Does the scoring function accept as valid the reverse of a valid solution? The challenge is about sorting image traces, but how can one choose between a sorting solution and its mirrored one? E.g. if [3,1,4,0,2] is the correct order of vectors for a given image made of 5 traces, I think that [2,0,4,1,3] should be accepted as correct too. Is there any preference condition between the two vectors?
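To illustrate the mirroring in the example above: the mirrored solution is just the same order vector reversed, since traversing the traces right-to-left produces the horizontally flipped image. A quick sketch (the lists are only the example values from this post, not official data):

```python
# Example order from the question: the "correct" arrangement of 5 traces.
order = [3, 1, 4, 0, 2]

# The mirrored arrangement is the same list traversed right-to-left.
mirrored = order[::-1]

print(mirrored)  # [2, 0, 4, 1, 3]
```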

Thank you for your attention,

Francesco Lucchi


Hello Francesco,

Thank you for your insightful question and observation.

  1. The sample_submission file is intended to produce a zero score for the Parallel Perspectives challenge. The scoring algorithm’s implementation for this challenge is kept confidential.

  2. Mirrored submissions will be accepted as the data has no context to describe orientation. The Live Scoring algorithm will be edited in the next few days to give equal scoring regardless of orientation.

Please let us know if you continue to have questions about this unique puzzle challenge.

Team Onward

Hi @team, a follow-up question: how many samples are in the holdout test data, and what is the largest possible number of seismic traces per slice?

Hi @team and fellow competitors,

In relation to the first question posed by @francescolucchi , I encountered a similar issue—an assigned score of 0 to a solution that appeared promising. To investigate whether the problem was specific to my solution, I uploaded a random one and received the same 0 score. Considering the equation:

Score = w0 · similarity − w1 · Σ_unmatched(dissimilarity_piece)

I anticipated a negative value for the random solution. Could you please review the scoring algorithm?

Thank you for your attention.

Best regards,
Gabriel Gama

Hi @gabriel.soares.gamma, here is a bit more information on the scoring algorithm and why your solutions received a score of 0.

Scores range from 0 to 1, where a score of 0 means fewer than 75% of the pieces are correct, and a score of 1 means that all pieces are correct. The first weight is used for assessing overall similarity, and the second for the sum of dissimilarities for mismatched pieces. With the current weights, the score will be 0 for solutions that have more than 25% mismatched elements per image.
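The behaviour described above can be sketched roughly as follows. This is only a hedged illustration of the stated 75% threshold, not the official (confidential) implementation; the linear interpolation between the threshold and a perfect score, and the element-wise match criterion, are my assumptions:

```python
def sketch_score(predicted, truth):
    """Illustrative only: returns 0.0 when fewer than 75% of pieces
    match, 1.0 when all match, and interpolates linearly in between
    (the interpolation is an assumption, not the official rule)."""
    matches = sum(p == t for p, t in zip(predicted, truth))
    frac = matches / len(truth)
    if frac < 0.75:
        return 0.0
    return (frac - 0.75) / 0.25  # reaches 1.0 only when every piece matches
```

This also explains why a random submission scores 0 rather than a negative value: anything below the 75% match threshold is clipped to zero.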

Happy puzzling!

Onward Team


Hi @team , has the live scoring algorithm been updated to provide equal scoring regardless of orientation? Thank you.

Hi @leocd91 the holdout test data has a similar number of slices and traces per slice as the training and testing datasets:

Dataset: train.json
Total samples: 620
Min traces: 546
Max traces: 2620

Dataset: test_traces.json
Total samples: 834
Min traces: 480
Max traces: 3083
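For anyone wanting to reproduce these statistics locally, a small sketch (this assumes each JSON file maps sample ids to lists of traces; the real schema and field names may differ):

```python
import json

def dataset_stats(path):
    """Count samples and min/max traces per slice in a dataset file,
    assuming it maps sample ids to lists of traces (schema is a guess)."""
    with open(path) as f:
        data = json.load(f)
    counts = [len(traces) for traces in data.values()]
    return {
        "samples": len(counts),
        "min_traces": min(counts),
        "max_traces": max(counts),
    }
```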

The scoring algorithm has been updated to account for changes in reconstructed image orientations as well.

Onward Team


Hi Onward team,

Since there are perfect solutions on the leaderboard already, there is a chance they will score perfectly on the private data as well. How would prizes be distributed in that scenario?

Hi @dmitry.ulyanov.msu

As more participants have reached the top-ranking spots for Parallel Perspectives, the Challenges Team has diversified and expanded the hold-out data. With this update, we feel that the finalists can be sorted.

Onward Team


I want to point out that some of the reconstructed test samples have zero padding (e.g. the attached image shows padding at the right).

I think the padding could equally well be attached to the left or the right. Does that affect the scoring?
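For reference, a quick sketch of how one might measure the padding at either edge, assuming a reconstructed slice is a 2D array of trace amplitudes (that layout is my assumption, not from the challenge spec):

```python
import numpy as np

def padding_width(img):
    """Count all-zero columns at the left and right edges of a slice.
    Assumes traces are the columns of a 2D amplitude array."""
    img = np.asarray(img)
    n_cols = img.shape[1]
    left = 0
    while left < n_cols and not img[:, left].any():
        left += 1
    right = 0
    while right < n_cols and not img[:, n_cols - 1 - right].any():
        right += 1
    return left, right
```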

Hi @hengcherkeng235

We have looked into this and the padding does not impact the scoring for this challenge.

Happy solving!

Onward Team