These are some key parameters from the starter notebook.
```python
num_sources_sensors = {(1, 3): 0.2, (1, 4): 0.2, (2, 5): 0.6}
noise_std = {0.0: 0.5, 0.1: 0.5}
nt = {800: 0.6, 1000: 0.2, 1200: 0.2}
kappa = {0.05: 0.5, 0.1: 0.5}
```
Can we assume the ranges of the test parameters match the ranges above, or does the test have wider ranges? For instance, will the number of sensors be 3 to 5, the number of sources 1 to 2, and noise_std 0 to 0.1?
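For reference, a minimal sketch of how these dicts can be read, assuming each one maps a parameter value to its sampling probability (this interpretation, and the `sample` helper, are my assumptions, not part of the starter notebook):

```python
import random

# Assumed interpretation: each dict maps a parameter value to the
# probability that a simulation uses that value (weights sum to 1).
num_sources_sensors = {(1, 3): 0.2, (1, 4): 0.2, (2, 5): 0.6}
noise_std = {0.0: 0.5, 0.1: 0.5}
nt = {800: 0.6, 1000: 0.2, 1200: 0.2}
kappa = {0.05: 0.5, 0.1: 0.5}

def sample(dist):
    """Draw one value according to a value -> probability map."""
    values = list(dist.keys())
    weights = list(dist.values())
    return random.choices(values, weights=weights, k=1)[0]

# Draw one full parameter set, e.g. for generating extra training data.
params = {
    "num_sources_sensors": sample(num_sources_sensors),
    "noise_std": sample(noise_std),
    "nt": sample(nt),
    "kappa": sample(kappa),
}
print(params)
```

Under this reading, the sensor count only ever takes the values 3, 4, or 5, and the source count 1 or 2, which is what the question below is asking about.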
Hi @Ning_Jia, great to see you on the forums again!
For the test dataset, you have access to both global and sample-specific metadata, which includes details about the simulation parameters. Using this information, you can determine the exact distribution of these parameters within the test dataset.
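For example, the sample-specific metadata can be aggregated into an empirical distribution per parameter. The field names and metadata structure below are assumptions for illustration, not the challenge's actual schema:

```python
from collections import Counter

# Hypothetical metadata: one dict of simulation parameters per test
# sample (keys are assumed, not the challenge's real field names).
sample_metadata = [
    {"nt": 800, "noise_std": 0.0},
    {"nt": 800, "noise_std": 0.1},
    {"nt": 1200, "noise_std": 0.1},
    {"nt": 800, "noise_std": 0.0},
]

def empirical_distribution(metadata, key):
    """Fraction of samples taking each observed value of `key`."""
    counts = Counter(m[key] for m in metadata)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print(empirical_distribution(sample_metadata, "nt"))
# -> {800: 0.75, 1200: 0.25}
```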
ThinkOnward Team
@discourse-admin Thanks for your reply. My concern is about the unseen holdout dataset. For instance, the number of sources in the test set is 1 to 2; if the unseen holdout has 3 sources, that would be a big surprise. Another important parameter is the number of sensors: the test set has a range of 2 to 6, and I would expect the holdout to have the same range.
@Ning_Jia The parameter ranges will be relatively similar, but their distributions will differ.
ThinkOnward Team
During the second phase, will we be able to see the metadata and data distribution of the unseen holdout dataset? I think we don't really care about the distribution itself so much as the values it covers. That is, if the unseen holdout dataset contains elements with nt = 900, will we be able to know that?
Hi @aleph_0, during the second phase you will not see any of the data or metadata for the holdout dataset. If you are selected for the final evaluations, you will get more information on what to expect in the invite email.
ThinkOnward Team