Call for Artifacts
Traditionally, only papers are published. However, claims and results described in a paper often originate from artifacts not present in the paper. Artifacts are any additional material that substantiates the claims made in the paper, and ideally makes them fully replicable. For some papers, these artifacts are as important as the paper itself because they provide crucial evidence for the quality of the results.
The goal of TAP artifact evaluation is twofold. First, we want to encourage authors to provide more substantial evidence for their papers and to reward authors who create artifacts. Second, we want to simplify the independent replication of the results presented in a paper and to ease future comparison with existing approaches.
Artifacts of interest include (but are not limited to):
- Software, Tools, or Frameworks
- Data sets
- Test suites
- Machine-checkable proofs
- Any combination of the above
- Any other artifact described in the paper
Artifact submission is optional for TAP 2020. However, all authors of accepted TAP 2020 papers are encouraged to submit an artifact for evaluation. Successful artifact submissions are rewarded with badges shown on the title page of the corresponding paper.
Important Dates
- March 23, 2020: artifact submission
- March 30, 2020: test phase notification
- March 30 - April 1, 2020: clarification period
- April 14, 2020: artifact notification
Artifact Evaluation
All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the accepted paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.
- Is the artifact consistent with the paper and the claims made by the paper?
- Are the results of the paper replicable through the artifact?
- Is the artifact complete, i.e., how many of the results of the paper are replicable?
- Is the artifact well-documented?
- Is the artifact easy to use?
The artifact evaluation is performed in the following two phases.
- In the test phase, reviewers check whether the artifact is functional, i.e., they look for setup problems (e.g., corrupted or missing files, crashes on simple examples). If problems are detected, the authors are informed of the outcome, asked for clarification, and given 3 days to respond to the reviews.
- In the assessment phase, reviewers try to reproduce any experiments or activities and evaluate the artifact with respect to the questions detailed above.
Awarding
We award up to two badges, which are granted independently of each other. Every artifact that is successfully evaluated by the artifact evaluation committee receives at least the functional badge; artifacts of very high quality receive the reusable badge instead. Independently, artifacts that are publicly available under a DOI receive the availability badge. Authors may use all granted badges on the title page of the respective paper.
Additionally, papers with successfully evaluated artifacts may extend their paper with an additional appendix of up to 2 pages.
Artifact Submission
An artifact submission should consist of
- an abstract that summarizes the artifact and explains its relation to the paper, including
  - a URL from which a .zip file containing the artifact can be downloaded (we encourage you to provide a DOI) and
  - the SHA256 checksum of the .zip file, and
- a .pdf file of the most recent version of the accepted paper, which may differ from the submitted version to take reviewers' comments into account.
Artifacts must be submitted via EasyChair:
https://easychair.org/conferences/?conf=tap2020
We need the checksum to ensure the integrity of your artifact. You can generate it with standard command-line tools, for example as shown below.
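A minimal example, assuming your artifact file is named artifact.zip (a placeholder name):
# on Linux (e.g., inside the TAP 2020 VM)
sha256sum artifact.zip
# on macOS
shasum -a 256 artifact.zip
# on Windows
CertUtil -hashfile artifact.zip SHA256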
If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.
Artifact Packaging Guidelines
We expect authors to package their artifact (.zip file) and write their instructions such that the artifact evaluation committee can evaluate the artifact within a virtual machine provided by us. Submit only the files required to replicate your results in the provided virtual machine. Do not submit a virtual machine image in the .zip file; AEC members will copy your .zip file into the provided virtual machine.
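For illustration, a submission .zip might be laid out as follows (a sketch only; all names are placeholders):
artifact.zip
├── README.txt
├── LICENSE
├── install.sh
├── tool/
└── data/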
TAP 2020 Virtual Machine
The TAP 2020 virtual machine was created with VirtualBox 6.0.16 and consists of an installation of Ubuntu 19.04 with Linux 5.0.0-38 and the following notable packages.
- OCaml 4.09.0
- OpenJDK 1.8.0_232
- OpenJDK 13 (2019-09-17)
- Mono 6.8.0.105
- Ruby 2.5.5p157
- Python 2.7.16 and Python 3.7.3
- bash 5.0.3
- cmake 3.13.4
- clang 8.0.0.3
- gcc 8.3.0
- Racket 7.5
- VIM 8.1
- Emacs 26.1
- Coq 8.9.1 with CoqIDE 8.9.1
- benchexec 2.5
- TeX Live 2019
- A 32-bit libc
- VirtualBox guest additions 6.0.16
The VM has a user tap2020 with password tap2020. The root user has the same password.
The final version of the VM will be available on ZENODO.
In order to save space, the VM does not have an active swap file. Please mention in your submission if you expect that a swap file is needed. You can activate swap for the running session using the following commands.
# create a 1 GiB file to hold the swap space
sudo fallocate -l 1G /swapfile
# make the file accessible to root only
sudo chmod 600 /swapfile
# format the file as swap space
sudo mkswap /swapfile
# enable the swap file for the running session
sudo swapon /swapfile
The artifact evaluation committee will be instructed not to download software or data from external sources. Any additional software required by your artifact must be included in the .zip file, and the artifact must include installation instructions. To include an Ubuntu package in your artifact submission, you can create a .deb file with all the necessary dependencies from inside the VM. Reviewers can then install it with sudo dpkg -i <.deb file>. You can create the necessary .deb files, for example, as follows.
- If you have only one package without dependencies, you can use apt-get download <packagename>.
- If you have only one package without dependencies but with local modifications, e.g., particular configuration files, you can use the dpkg-repack utility.
- If you have a package with multiple dependencies, you can use wget together with apt to download them all and put them into a folder:
wget $(apt-get install --reinstall --print-uris -qq <packagename> | cut -d"'" -f2)
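Putting these pieces together, an end-to-end sketch for a hypothetical package named foo (replace with your actual package name):
# download foo and everything apt would install for it into a folder
mkdir foo-debs && cd foo-debs
wget $(apt-get install --reinstall --print-uris -qq foo | cut -d"'" -f2)
cd ..
# inside the VM, reviewers can then install all downloaded packages at once
sudo dpkg -i foo-debs/*.deb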
If you think the VM is unsuitable for evaluating your artifact, please contact the artifact evaluation chairs prior to artifact submission.
Artifact Content
Your artifact .zip file must contain the following elements.
- The main artifact, i.e., data, software, libraries, scripts, etc. required to replicate the results of your paper.
- The review is single-blind. Please make sure that you do not (accidentally) learn the identity of the reviewers (e.g., through analytics or logging).
- We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use it, especially to replicate your results. For example, provide easy-to-use scripts and a detailed README document.
- A license file. Your license must allow the artifact evaluation chairs to download and distribute the artifact to the artifact evaluation committee members, and it must allow the committee members to evaluate the artifact, e.g., to use, execute, and modify it for the purpose of artifact evaluation.
- A README text file that introduces the artifact to the user and guides the user through the replication of your results. Ideally, it consists of the following parts (a sample README outline follows at the end of this list).
- We recommend describing the structure and content of your artifact.
- It should describe the steps to set up your artifact within the provided TAP 2020 VM. To simplify the reviewing process, we recommend providing an installation script (if necessary).
- We would appreciate support for the reviewers not only in the assessment phase but also in the test phase. To this end, it is helpful to provide instructions that allow installation and rudimentary testing (i.e., such that technical difficulties surface early) in as little time as possible.
- Detailed documentation of how to replicate the results of the paper.
- Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.
- Describe in detail the steps that need to be performed to replicate the results in the paper. To simplify the reviewing process, we recommend providing evaluation scripts (where applicable).
- Precisely state the resource requirements (RAM, number of cores, CPU frequency, etc.) under which you tested your artifact. Your resource requirements should be modest and allow replication of results even on laptops. If you require more resources, please contact the artifact evaluation chairs prior to submission.
- For each task/step of the replication, please provide an estimate of how long it will take, or state how long it took for you and which exact machine(s) you used.
- For tasks that require a large amount of resources (hardware or time), we recommend providing a way to replicate a subset of the results with reasonably modest resource and time limits, e.g., within 8 hours on a reasonable personal computer. In this case, please also include a script that replicates only this subset of the results. If this is not possible, please contact the artifact evaluation chairs early, at the latest before submission.
- Any additional software required by your artifact, including installation instructions (see TAP 2020 Virtual Machine above).
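To illustrate the points above, a README might be organized as follows (a sketch only; the script names are placeholders):
1. Overview: structure and content of the artifact
2. Setup: installation inside the TAP 2020 VM, e.g., via ./install.sh
3. Smoke test: a quick check that the setup works, e.g., ./run_example.sh (about 5 minutes)
4. Replication: mapping from paper claims to experiments, e.g., ./replicate_all.sh and ./replicate_subset.sh, with expected runtimes and resource requirements
5. Known limitations: claims and results that cannot be replicated, and why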
Publication of Artifacts
The artifact evaluation committee uses the submitted artifact only for the artifact evaluation and will not publicize the artifact or any parts of it during or after the evaluation. Artifacts and all associated data will be deleted at the end of the evaluation process. We encourage authors to also make their artifacts permanently available, e.g., on ZENODO or figshare.com, and to refer to them in their papers via a DOI. All artifacts for which a DOI exists that is known to the artifact evaluation committee are granted the availability badge.
Artifact Evaluation Committee
Chairs
- Daniel Dietsch (University of Freiburg, Germany)
- Marie-Christine Jakobs (TU Darmstadt, Germany)
Members
- Sadegh Dalvandi (University of Surrey, UK)
- Simon Dierl (TU Dortmund, Germany)
- Mathias Fleury (MPI, Germany)
- Ákos Hajdu (Budapest University of Technology and Economics, Hungary)
- Marcel Hark (RWTH Aachen, Germany)
- Sven Linker (University of Liverpool, UK)
- Marco Muñiz (Aalborg University, Denmark)
- Kostiantyn Potomkin (Australian National University, Australia)
- Virgile Robles (CEA List, France)
- Martin Sachenbacher (LION Smart GmbH, Germany)
- Christian Schilling (IST Austria, Austria)