Sponsored by USENIX, the Advanced Computing Systems Association.
All authors of (conditionally) accepted USENIX Security papers are encouraged to submit artifacts for Artifact Evaluation (AE). Artifacts can be submitted in the same cycle as the (conditionally) accepted paper or in any of the following cycles for 2022. Each submitted artifact will be reviewed by the Artifact Evaluation Committee (AEC). Before submitting your artifact, please check the Artifact Evaluation Information and the Artifact Submission Guidelines below.
Summer Deadline
- Notification to authors: Friday, September 3, 2021
- Final papers due: Tuesday, October 5, 2021
- Artifact submission deadline: Wednesday, October 13, 2021
- Answering AE reviewer questions: Monday, October 18 to Friday, November 5, 2021
- Artifact decisions announced: Tuesday, November 16, 2021
- Final appendix files due: Thursday, December 16, 2021
Fall Deadline
- Notification to authors: Thursday, January 20, 2022
- Artifact registration deadline: Wednesday, February 16, 2022
- Final paper files due: Tuesday, February 22, 2022
- Artifact submission deadline: Wednesday, March 2, 2022
- Answering AE reviewer questions: Monday, March 7 to Friday, March 25, 2022
- Artifact decisions announced: Tuesday, April 5, 2022
- Final appendix files due: Thursday, May 5, 2022
Winter Deadline
- Notification to authors: Monday, May 2, 2022
- Artifact registration deadline: Wednesday, June 8, 2022
- Final paper files due: Tuesday, June 14, 2022
- Artifact submission deadline: Wednesday, June 22, 2022
- Answering AE reviewer questions: Monday, June 27 to Friday, July 15, 2022
- Artifact decisions announced: Tuesday, July 26, 2022
- Final appendix files due: Thursday, August 25, 2022
Artifact Evaluation Committee
Artifact Evaluation Committee Co-Chairs
Clémentine Maurice, CNRS
Cristiano Giuffrida, VU Amsterdam
Artifact Evaluation Committee Members
Shubham Agarwal, CISPA Helmholtz Center for Information Security
Mohsen Ahmadi, University of California, Los Angeles
Nikolaos Alexopoulos, Technische Universität Darmstadt
Ranya Aloufi, Imperial College London
Amit Seal Ami, College of William & Mary
Simone Aonzo, EURECOM
Daniel Arp, Technische Universität Braunschweig
Pierre Ayoub, EURECOM
David G. Balash, The George Washington University
Jakob Bleier, TU Wien
Alejandro Cabrera Aldaya, Tampere University
Fabricio Ceschin, Federal University of Parana
Weikeng Chen, University of California, Berkeley
Hongjun Choi, Purdue University
Emilio Coppa, Sapienza University of Rome
Pascal Cotret, ENSTA Bretagne
Dipanjan Das, University of California, Santa Barbara
Giulio De Pasquale, King's College London
Matteo Dell'Amico, University of Genoa
Thorsten Eisenhofer, Ruhr-Universität Bochum
Alessandro Erba, CISPA Helmholtz Center for Information Security
Christof Ferreira Torres, University of Luxembourg
Cesar Pereida Garcia, Tampere University
Walid Ghandour, Lebanese University
Samuel Grayson, University of Illinois at Urbana–Champaign
Michele Grisafi, University of Trento
Ashish Hooda, University of Wisconsin—Madison
Shengtuo Hu, University of Michigan
Shan Huang, Stevens Institute of Technology
Mazharul Islam, University of Wisconsin—Madison
Adriaan Jacobs, imec-DistriNet, Katholieke Universiteit Leuven
Jafar Haadi Jafarian, University of Colorado Denver
Yilin Ji, Karlsruhe Institute of Technology
Kaushal Kafle, College of William & Mary
Imtiaz Karim, Purdue University
Arslan Khan, Purdue University
Soomin Kim, Korea Advanced Institute of Science and Technology (KAIST)
Daniel Klischies, Ruhr-Universität Bochum
Josiane Kouam, Inria
Joel Kuepper, The University of Adelaide
Georg Land, Ruhr-Universität Bochum and Deutsches Forschungszentrum für Künstliche Intelligenz
Ang Li, Arizona State University
Song Li, Johns Hopkins University
Eleonora Losiouk, University of Padua
Keyu Man, University of California, Riverside
Guozhu Meng, SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences
Vladislav Mladenov, Horst Görtz Institute for IT Security
Paul Olivier, EURECOM
Kexin Pei, Columbia University
Hernan Ponce de Leon, Bundeswehr University Munich
Erwin Quiring, Technische Universität Braunschweig
Vidya Lakshmi Rajagopalan, Stevens Institute of Technology
Andrew R. Reiter, Redacted
Moritz Schloegel, Ruhr-Universität Bochum
Omid Setayeshfar, University of Georgia
Johnny So, Stony Brook University
Marco Squarcina, TU Wien
Avinash Sudhodanan, HUMAN
Mauro Tempesta, TU Wien
Flavio Toffalini, EPFL
Guillaume Valadon, Quarkslab
Thijs van Ede, University of Twente
Alexios Voulimeneas, imec-DistriNet, Katholieke Universiteit Leuven
Xiaoguang Wang, Virginia Tech
Zhongjie Wang, Baidu Security
Shijia Wei, The University of Texas at Austin
Chengyu Zhang, East China Normal University
Yuchen Zhang, Stevens Institute of Technology
Zhiyuan Zhang, The University of Adelaide
Shitong Zhu, University of California, Riverside
Artifact Evaluation Information
A scientific paper consists of a constellation of artifacts that extend beyond the document itself: software, hardware, evaluation data and documentation, raw survey results, mechanized proofs, models, test suites, benchmarks, and so on. In some cases, the quality of these artifacts is as important as that of the document itself. To emphasize the importance of such artifacts and promote the reproducibility of experimental results, USENIX Security will run its third (optional) AE this year.
We believe the evaluation and dissemination of artifacts benefits our science and engineering as a whole. Their availability improves replicability and reproducibility, and enables authors to build on top of each other's work. It can also help more unambiguously resolve questions about cases not considered by the original authors.
Beyond helping the community as a whole, it confers several direct and indirect benefits to the authors themselves. The most direct benefit is, of course, the recognition that the authors accrue, with the community rewarding artifacts that sometimes take years to build. Another direct benefit is the ability to improve the final version of the paper and of the artifacts based on feedback from the AEC. In addition, the very act of creating a bundle that can be used by the AEC confers several indirect benefits:
- The same bundle can be distributed to third parties, which we believe will foster science in general. We strongly encourage authors to make their artifacts available so that other researchers can build upon them.
- A bundle can be used subsequently for later experiments (e.g., on new parameters or for responding to a journal reviewer's questions).
- The bundle is more likely to survive being put in storage between the departure of one student and the arrival of the next.
However, creating a bundle that meets all these properties can be onerous. Therefore, the process we describe below does not require an artifact to have all these properties; instead, it offers a route to evaluation that confers fewer benefits for proportionally less effort.
The process is based on experience collected at other venues that have organized such an evaluation process for several years.
The Artifact Evaluation Committee will also grant Distinguished Artifact Awards to outstanding artifacts accepted to USENIX Security 2022.
To maintain a wall of separation between paper review and the artifacts, authors will be given the option to submit their artifacts only after their papers have been (conditionally) accepted for publication at USENIX Security. The new process for the 2022 Artifact Evaluation edition gives authors 5-6 weeks after the notification to prepare their artifacts. Of course, they can (and should!) prepare their artifacts well in advance, and can already provide the artifacts to the PC via supplemental materials, as many authors already do.
At artifact-submission time, the authors choose the criteria by which their artifacts will be evaluated. The criteria correspond to three separate badges that can be awarded to a paper. In general, good artifacts are expected to be: consistent with the paper, as complete as possible, documented well, and easy to (re)use. The AEC will read the paper and then judge if the artifact meets the criteria for each of the requested badges.
Each artifact submission will be reviewed by at least two AEC members. Since we anticipate small glitches with installation and use, reviewers may communicate with authors for a period of three weeks to help resolve such issues while preserving reviewer anonymity. The AEC will then complete its evaluation and notify the authors of the outcome. Please make sure that at least one of the authors is reachable to answer questions in a timely manner.
As the Artifact Evaluation notification comes after the final paper deadline, badges cannot appear directly on the published papers. Nevertheless, authors may add the awarded badges and the Artifact Appendix to the author version of the PDF, which they can later host on their website or in an archive. Moreover, USENIX will publish the badges and the appendices on the conference website. Finally, the badges and artifact appendices will also be published by USENIX after the conference in dedicated proceedings.
Authors can request their artifact to be evaluated towards one, two, or all three of the following badges:
- Artifacts Available: To earn this badge, the AEC must judge that the artifacts associated with the paper have been made available for retrieval, permanently and publicly. We encourage authors to use Zenodo, a publicly funded long-term storage platform that also assigns a DOI to your artifact. Other valid hosting options include institutional repositories and third-party digital repositories (e.g., FigShare, Dryad, Software Heritage, GitHub, or GitLab—not personal web pages). For repositories that can evolve over time (e.g., GitHub), a stable reference to the evaluated version (e.g., a commit hash or tag) is required. Beyond making the artifacts available, this badge does not impose any further requirements on functionality, correctness, or documentation.
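For git-hosted artifacts, a stable reference can be produced with ordinary git commands. The snippet below is a hypothetical sketch: it creates a throwaway repository only so the example is self-contained, and the tag name is illustrative. In practice, you would run the tag and rev-parse commands in your own artifact repository and cite the resulting commit hash in your submission.

```shell
# Sketch: record a stable reference to the evaluated version of a git-hosted
# artifact. The throwaway repository below only makes the example runnable;
# run the tag/rev-parse commands in your real artifact repository instead.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=ae@example.org -c user.name="AE" \
    commit -q --allow-empty -m "artifact snapshot"
# Tag the exact state that the AEC will evaluate (tag name is illustrative):
git tag usenix-sec22-ae
# This is the commit hash to cite in the artifact appendix:
COMMIT=$(git rev-parse usenix-sec22-ae)
echo "Evaluated version: $COMMIT"
```

A reviewer can then check out exactly that state with `git checkout <hash>`, regardless of how the repository evolves afterwards.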
- Artifacts Functional: To earn this badge, the AEC must judge that the artifacts conform to the expectations set by the paper in terms of functionality, usability, and relevance. In short, do the artifacts work and are they useful for producing outcomes associated with the paper? The AEC will consider three aspects of the artifacts in particular.
- Documentation: are the artifacts sufficiently documented to enable them to be exercised by readers of the paper?
- Completeness: do the submitted artifacts include all of the key components described in the paper?
- Exercisability: do the submitted artifacts include the scripts and data needed to run the experiments described in the paper, and can the software be successfully executed?
- Results Reproduced: To earn this badge, the AEC must judge that it can use the submitted artifacts to obtain the main results presented in the paper. In short, is it possible for the AEC to independently repeat the experiments and obtain results that support the paper's central claims? The goal is not to reproduce the results exactly, but rather to generate results independently, within an allowed tolerance, such that the main claims of the paper are validated.
To avoid excluding some papers, the AEC will try to accept any artifact that authors wish to submit. These can be software, hardware, data sets, survey results, test suites, mechanized proofs, access to special hardware, and so on. Based on the experience of other communities, we decided not to accept paper proofs in the artifact evaluation process: the AEC lacks the time, and often the expertise, to carefully review them. Naturally, the better an artifact is packaged, the more likely the AEC can actually work with it during the evaluation process.
While we encourage open research, submitting an artifact does not grant tacit permission to make its content public. All AEC members will be instructed that they may not publicize any part of your artifact during or after the evaluation, nor retain any part of it afterwards. Thus, you are free to include models, data files, proprietary binaries, exploits under embargo, etc. in your artifact. Also note that participating in the AE does not require you to publish your artifacts later, though we strongly encourage you to do so.
In addition, we encourage you to anonymize any data files that you submit. We recognize that some artifacts may attempt to perform malicious operations by design. Such cases should be boldly and explicitly flagged in detail in the README so that AEC members can take appropriate precautions before installing and running these artifacts. The evaluation of exploits and similar results might involve additional hurdles; we are still collecting experience on how best to handle such cases. Please contact us if you have concerns, for example when submitting bug-finding tools or other types of artifacts with special requirements.
Artifact Evaluation Committee Membership
The AEC will consist of about 30-40 members. We intend the AEC members to be a combination of senior graduate students, postdocs, and researchers, identified with the help of the USENIX Security Program Committee.
Qualified graduate students are often in a much better position than many researchers to handle the diversity of systems expectations we will encounter. In addition, these graduate students represent the future of the community, so involving them in this process early will help push it forward. We are convinced that participation in the AEC provides useful insight into both the value of artifacts and the process of artifact evaluation, and helps establish community norms for artifacts. We therefore seek to include a broad cross-section of the USENIX Security community on the AEC.
Naturally, the AEC chairs will devote considerable attention to both mentoring and monitoring the junior members of the AEC, helping to educate the students on their power and responsibilities.
The AEC chairs will also grant Distinguished Artifact Reviewer Awards to AEC members who have made outstanding efforts during the Artifact Evaluation and provided constructive feedback to authors.
Artifact Submission Guidelines
Packaging and Submitting the Artifact
By the artifact registration deadline, please submit the abstract and PDF of your accepted USENIX Security 2022 paper (as well as topics and conflicts) via the submission form.
Additionally, please provide instructions on how to access a working copy of your artifact for the purpose of Artifact Evaluation. For instance, this may involve providing a URL to a repository/archive of your artifact or instructions to get remote (e.g., SSH) access to a special evaluation platform. Note that the artifact does not need to be anonymized and you may update the working copy of your artifact during the Artifact Evaluation (in response to comments from the reviewers in the author discussion phase). For your artifact to be considered, you also need to check the "ready for review" box before the finalization deadline.
If you are applying for an Artifacts Functional and/or Results Reproduced badge, the artifact should include a README file that gives detailed instructions on how to get it working, how to run your experiments, and how to satisfy all the relevant hardware/software requirements. If something is unclear, you will be contacted to resolve any questions regarding your artifact. Feel free to reach out in advance if you have any questions.
The Artifact Evaluation Committee will read your accepted paper before evaluating the artifact. Authors should also submit the unified Artifact Appendix using the following guidelines. In particular, please state concretely what claims you are making about the artifact, especially if these differ from the expectations set up by the paper. This is the place to tell us about difficulties we might encounter in using the artifact, or about its maturity relative to the content of the paper. We will still evaluate the artifact relative to the paper, but this helps set expectations upfront, especially in cases that might otherwise frustrate the reviewers.
Artifact Authors Not Anonymous
The artifact submissions are not anonymous. The reviewers will see the authors for each artifact from the start. Please do not waste your time trying to hide the artifact authors.
We ask that, during the evaluation period, you do not embed any analytics or other tracking in the Web site for the artifact or, if you cannot control this, that you do not access this data. This is important for maintaining the confidentiality of reviewers. If for some reason you cannot comply with this, please notify the chairs immediately such that we can discuss options.
Authors should consider one of the following methods to package the software components of their artifacts (though the AEC is open to other reasonable formats as well):
- Source code: If your artifact has very few dependencies and can be installed easily on several operating systems, you may submit source code and build scripts. However, if your artifact has a long list of dependencies, please use one of the other formats below.
- Virtual machine/container: A virtual machine or Docker image containing the software application, already set up with the right toolchain and intended runtime environment. For example:
- For raw data, the VM would contain the data and the scripts used to analyze it.
- For a mobile phone application, the VM would have a phone emulator installed.
- For mechanized proofs, the VM would have the right version of the theorem prover used.
- We recommend using VirtualBox or Docker; both are freely available on several platforms. An Amazon EC2 instance is also possible.
- Binary Installer: Please indicate exactly which platform and other runtime dependencies your artifact requires.
- Live instance on the Web: Ensure that it is available for the duration of the artifact evaluation process.
- Remote machine: In case your artifact requires special hardware (e.g., SGX or another type of trusted execution environment), your artifact is actually a piece of hardware, or it uses confidential/proprietary/licensed software, please make sure that the reviewers can somehow access a target evaluation machine. Some kind of VPN/SSH access to the device might be an option, but please preserve reviewers' anonymity, e.g., by explicitly asking reviewers for a public key and by disabling IP address or geolocation logging.
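As a concrete illustration of the container option above, a minimal Dockerfile might look like the sketch below. Everything in it (base image, dependency list, entry script name, image tag) is hypothetical; adapt it to your artifact's actual dependencies.

```shell
# Sketch: generate a minimal Dockerfile for a container-packaged artifact.
# The base image, packages, and run_experiments.py script are all hypothetical.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /artifact
COPY . /artifact
# Entry point that reproduces the paper's main experiment:
CMD ["python3", "run_experiments.py"]
EOF
# Reviewers would then build and run the image with:
#   docker build -t my-artifact . && docker run --rm my-artifact
echo "Dockerfile written"
```

Shipping a single entry point that regenerates the paper's main results is what makes the "Exercisability" criterion easy for the AEC to check.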
Remember that the AEC is attempting to determine whether the artifact meets the expectations set by the paper. If possible, package your artifact so the committee can easily evaluate this, for example via automated test cases that reproduce the experiments described in the paper, or step-by-step documentation that helps set up the artifact.
If you have any questions about how best to package your artifact, please do not hesitate to contact the AEC chairs, at email@example.com.
There are several sources of good advice about preparing artifacts for evaluation. These three are particularly noteworthy:
- HOWTO for AEC Submitters, by Dan Barowy, Charlie Curtsinger, Emma Tosch, John Vilk, and Emery Berger
- Artifact Evaluation: Tips for Authors, by Rohan Padhye
- How Are Award-winning Systems Research Artifacts Prepared (Part 1), by Tianyin Xu
The AE process at USENIX Security '22 is a continuation of the AE process at USENIX Security '20 and '21 and was inspired by multiple other conferences, such as OSDI, EuroSys, and several other systems conferences. See artifact-eval.org for the origins of the AE process.