Participate in Benchmarking Events

These guides walk you through the different aspects relevant to benchmarking the output of your tool or pipeline as part of an open OEB benchmarking event. The overall process is described in this figure.

General Flow
  1. Prepare the participant data, i.e., the dataset to be evaluated. Instructions on how to generate and format it are specific to each benchmarking event, so consult the challenges’ rules on the organizer’s website. They are also usually linked from the OpenEBench Event entry of the organizing community.

  2. Upload it to the OpenEBench Virtual Research Environment and evaluate your participant dataset by selecting the benchmarking event you are interested in. Behind the scenes, the execution of a benchmarking workflow will be triggered to generate a set of datasets containing your assessments. You can visualize them and compare them against those of the event’s other participants.

  3. Once satisfied with your results, you can submit your assessments to become public and accessible on the OpenEBench website. Depending on the event’s specification, the publication process might require the approval of the organizers.

  4. Optionally, you can export your benchmarking results. OpenEBench helps you publish your benchmarking datasets to EUDAT, a long-term data infrastructure that will issue a DOI for your data.
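Since each event defines its own participant dataset format, it can help to sanity-check the file locally before uploading it in step 2. The sketch below assumes a JSON participant file with hypothetical required fields; the field names here are purely illustrative, and the actual schema is set by each event's challenge rules:

```python
import json

# Hypothetical required fields -- illustrative only; consult the event's
# challenge rules for the real participant dataset schema.
REQUIRED_FIELDS = {"_id", "community_id", "challenge_id", "participant_id"}

def missing_fields(path):
    """Return the required fields absent from a participant dataset file."""
    with open(path) as fh:
        record = json.load(fh)
    return sorted(REQUIRED_FIELDS - record.keys())

# Toy example: write a participant record missing one field, then check it.
example = {
    "_id": "my_tool:2024-01",
    "community_id": "OEBC001",   # illustrative identifier
    "challenge_id": "OEBX001",   # illustrative identifier
    # "participant_id" intentionally omitted
}
with open("participant.json", "w") as fh:
    json.dump(example, fh)

print(missing_fields("participant.json"))
```

Running such a check before uploading catches trivial formatting mistakes early, rather than after the benchmarking workflow has already been triggered.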

Read the following documentation to learn more about each of these steps.