Call for Artifacts
Traditionally, only papers are published. However, claims and results described in a paper often originate from artifacts not present in the paper. Artifacts are any additional material that substantiates the claims made in the paper, and ideally makes them fully replicable. For some papers, these artifacts are as important as the paper itself because they provide crucial evidence for the quality of the results.
The goal of the TAP artifact evaluation is twofold. On the one hand, we want to encourage authors to provide more substantial evidence for their papers and to reward authors who create artifacts. On the other hand, we want to simplify the independent replication of the presented results and to ease future comparison with existing approaches. Artifact submission is optional for TAP 2019; however, the authors of all accepted papers are encouraged to submit an artifact for evaluation.
Artifacts of interest include (but are not limited to):
- Software, Tools, or Frameworks
- Data sets
- Test suites
- Machine-checkable proofs
- Any combination of the above
- Any other artifact described in the paper
Important Dates
- June 25 artifact submission
- July 2 test phase notification
- July 2-4 clarification period
- July 15 artifact notification
Artifact Evaluation
All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the accepted paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.
- Is the artifact consistent with the paper and the claims made by the paper?
- Are the results of the paper replicable through the artifact?
- Is the artifact complete, i.e., how many of the results of the paper are replicable?
- Is the artifact well-documented?
- Is the artifact easy to use?
The artifact evaluation is performed in the following two phases.
- In the test phase, reviewers check whether the artifact is functional, i.e., they look for setup problems (e.g., corrupted or missing files, crashes on simple examples). If any problems are detected, the authors are informed of the outcome and asked for clarification. The authors will have 96 hours to respond to the reviews in case problems are encountered.
- In the assessment phase, reviewers will try to reproduce any experiments or other activities and evaluate the artifact with respect to the questions detailed above.
Papers with accepted artifacts that are publicly available will receive an artifact-evaluation badge on the first page and may extend their paper with an additional appendix of up to 2 pages.
Artifact Submission
An artifact submission should consist of
- an abstract that summarizes the artifact and explains its relation to the paper,
- a .pdf file of the most recent version of the accepted paper, which may differ from the submitted version to take reviewers' comments into account,
- a URL from which a .zip file containing the artifact can be downloaded, and
- the SHA256 checksum of the .zip file.
These items, i.e., the abstract (including the download URL), the SHA256 checksum of the .zip file, and the .pdf file of your paper, must be submitted via EasyChair:
http://www.easychair.org/conferences/?conf=tap2019
We need the checksum to ensure the integrity of your artifact.
You can generate the checksum using the following command-line tools.
- Linux: sha256sum <file>
- Windows: CertUtil -hashfile <file> SHA256
- macOS: shasum -a 256 <file>
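For example, on Linux, assuming your archive is named artifact.zip (the file name is only a placeholder), you can record the checksum and later verify that the archive is intact:

sha256sum artifact.zip > artifact.zip.sha256     # record the checksum
sha256sum -c artifact.zip.sha256                 # verify the archive against it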
If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.
Artifact Packaging Guidelines
We expect authors to package their artifact (.zip file) and to write their instructions such that the artifact evaluation committee can evaluate the artifact within a virtual machine provided by us. The TAP 2019 virtual machine was created with VirtualBox 6.0.8 and consists of an installation of Ubuntu 19.04 with Linux 5.0.0 and the following notable packages.
- OCaml 4.07.1
- OpenJDK 1.8.0_212
- Mono 5.20.1.19
- ruby 2.5.5p157
- Python 2.7.16 and Python 3.7.3
- bash 5.0.3
- cmake 3.13.4-1
- clang 8.0.0.3
- gcc 8.3.0
- VIM 8.1
- Emacs 26.1
- Coq 8.9.1 with CoqIDE
- benchexec 1.18-1
- TeX Live 2019
- A 32-bit libc
- VirtualBox guest additions
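To test your packaged artifact under the same conditions as the reviewers, you can import and start the VM from the command line. This is only a sketch: tap2019.ova and the machine name "TAP 2019" are placeholders for the actual image we distribute.

VBoxManage import tap2019.ova              # import the appliance into VirtualBox
VBoxManage startvm "TAP 2019" --type gui   # start the imported machine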
The artifact evaluation committee will be instructed not to download software or data from external sources. Any additional software required by your artifact must be included in the .zip file, and the artifact must provide installation instructions. To include an Ubuntu package in your artifact submission, you can create a .deb file with all the necessary dependencies from inside the VM. Reviewers can then install it using sudo dpkg -i <.deb file>. You can create the necessary .deb files, for example, as follows.
- If you have only one package without dependencies, you can use
  apt-get download <packagename>
- If you have only one package without dependencies but with local modifications, e.g., particular configuration files, you can use the dpkg-repack utility.
- If you have a package with multiple dependencies, you can use wget together with apt-get to download them all and put them into a folder:
  wget $(apt-get install --reinstall --print-uris -qq <packagename> | cut -d"'" -f2)
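As an end-to-end sketch of the last variant, assume your tool needs the Ubuntu package graphviz (a stand-in for whatever package you actually require). Inside the VM you could run:

mkdir debs && cd debs                       # folder that will go into your .zip
wget $(apt-get install --reinstall --print-uris -qq graphviz | cut -d"'" -f2)
cd ..
# reviewers later install everything offline with:
sudo dpkg -i debs/*.deb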
If you think the VM is unsuitable for evaluating your artifact, please contact the artifact evaluation chairs prior to artifact submission.
Your artifact .zip file must contain the following elements.
- The main artifact, i.e., data, software, libraries, scripts, etc. required to replicate the results of your paper.
  - The review is single-blind. Please make sure that you do not (accidentally) learn the identity of the reviewers (e.g., through analytics or logging).
  - We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use it, especially to replicate your results.
- A license file. Your license needs to allow the artifact evaluation chairs to download and distribute the artifact to the artifact evaluation committee members, and the committee members must be allowed to evaluate the artifact, e.g., use, execute, and modify it for the purpose of artifact evaluation.
- A README text file that introduces the artifact to the user and guides the user through the replication of your results. Ideally, it consists of the following parts.
  - A description of the structure and content of your artifact.
  - The steps to set up your artifact within the provided TAP 2019 VM. To simplify the reviewing process, we recommend providing an installation script if necessary; a minimal sketch is shown after this list.
  - Support for the reviewers not only in the assessment phase but also in the test phase: it is helpful to provide instructions that allow installation and rudimentary testing (i.e., such that technical difficulties surface early) in as little time as possible.
- Detailed documentation of how to replicate the results of the paper.
  - Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.
  - Describe in detail the steps that need to be performed to replicate the results in the paper. To simplify the reviewing process, we recommend providing evaluation scripts where applicable; see the sketch after this list.
  - Precisely state the resource requirements (RAM, number of cores, CPU frequency, etc.) that you used to test your artifact. Your resource requirements should be modest and allow replication of the results even on laptops.
  - For each task/step of the replication, provide an estimate of how long it will take (or how long it took for you) and state which machine(s) you used.
  - For tasks that require a large amount of resources (hardware or time), we recommend providing a way to replicate a subset of the results within reasonably modest resource and time limits, e.g., within 8 hours on a reasonable personal computer. In this case, please also include a script that replicates only this subset of the results. If this is not possible, please contact the artifact evaluation chairs early, at the latest before submission.
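As an illustration of the recommended installation script, here is a minimal sketch; install.sh, debs/, and tool/ are placeholder names, not a prescribed layout.

#!/bin/bash
# install.sh -- set up the artifact inside the TAP 2019 VM (illustrative sketch)
set -e                      # abort on the first error
sudo dpkg -i debs/*.deb     # install the bundled Ubuntu packages, if any
cd tool && make             # build the tool shipped with the artifact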
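Likewise, a sketch of an evaluation script that supports both full and subset replication; replicate.sh, run_benchmarks.sh, and the directory names are illustrative only.

#!/bin/bash
# replicate.sh -- rerun the experiments from the paper (illustrative sketch)
set -e
if [ "$1" = "--subset" ]; then
  ./run_benchmarks.sh benchmarks/subset   # smaller set, finishes within a few hours
else
  ./run_benchmarks.sh benchmarks/all      # full set of experiments from the paper
fi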
Publication of Artifacts
The artifact evaluation committee uses the submitted artifact only for the artifact evaluation and will not publicize the artifact or any parts of it during or after the evaluation. Artifacts and all associated data will be deleted at the end of the evaluation process. However, to receive the artifact badge, the accepted artifact must be publicly available. We encourage authors to make their artifacts permanently available, e.g., on Zenodo or figshare.com, and to refer to them in their papers via a DOI or a link to the published artifact.
Artifact Evaluation Committee
Chairs
- Daniel Dietsch (University of Freiburg, Germany)
- Marie-Christine Jakobs (TU Darmstadt, Germany)

Members
- Martin Bromberger (MPI, Germany)
- Maryam Dabaghchian (University of Utah, USA)
- Simon Dierl (TU Dortmund, Germany)
- Rayna Dimitrova (University of Leicester, UK)
- Mathias Fleury (MPI, Germany)
- Marcel Hark (RWTH Aachen, Germany)
- Martin Jonáš (Masaryk University, Czech Republic)
- Sven Linker (University of Liverpool, UK)
- Felipe R. Monteiro (Federal University of Amazonas, Brazil)
- Marco Muñiz (Aalborg University, Denmark)
- Gabriel Radanne (University of Freiburg, Germany)
- Cedric Richter (Paderborn University, Germany)
- Asieh Salehi Fathabadi (University of Southampton, UK)
- Christian Schilling (IST Austria, Austria)
- Martin Tappler (TU Graz, Austria)