An important responsibility of the NSDI PC chairs and steering committee is to continually look for ways to improve the conference. One potential means of improvement, pioneered by VLDB and since adopted by SIGMOD, PETS, SIGMETRICS, and IEEE S&P, is to offer multiple deadlines and a one-shot-revision process. We therefore plan for NSDI to adopt these changes as well, as an experiment designed to determine whether they deliver the expected benefits. The design of our new system draws heavily on that of SIGMETRICS and IEEE S&P.
Expected benefits of multiple deadlines and one-shot-revision
Multiple deadlines and one-shot revision are being introduced in the hope of achieving the benefits described below. We'll be conducting various measurements throughout this experiment to determine whether the changes actually produce these positive outcomes, and whether they have unintended negative consequences.
One reason we expect research quality to improve is that multiple deadlines give authors temporal flexibility. This will encourage them to wait until their work is complete to submit it, rather than being forced to submit at the arbitrary point where the once-a-year deadline happens to fall. Indeed, we've heard from participants in other conferences with multiple deadlines that this flexibility lets professors help multiple students submit to the same conference without everyone conflicting with everyone else at deadline time.
Another reason research quality may improve is that some papers will experience the one-shot-revision process. In these cases, the field experts on the PC will be giving authors specific guidance about how to improve their papers. Unlike in past years, when this guidance took the form of reviews accompanying a rejection, this new process will allow the improved work to be published at NSDI rather than at another venue.
One reason we expect these changes to lead to higher-quality reviews is that the one-shot-revision process provides authors the ability to get re-reviewed by the same reviewers who reviewed the earlier version of their paper. Those reviewers will be more familiar with the paper than an arbitrary set of reviewers from a future PC. They will also be focused on the same aspects and metrics, so the re-reviewing process will not be as random.
Another reason we expect to obtain higher-quality reviews is that spreading the reviewing load across multiple deadlines should make it easier for reviewers to find time in their schedules to read and review papers they're assigned. To the extent that reviewing is less rushed, we expect higher-quality reviews.
This reduction in rushed-ness should also increase reviewer satisfaction.
Another way we expect to increase reviewer satisfaction is by reducing the number of times reviewers must, in aggregate, re-review the same paper. One way we expect to achieve this is by inducing authors to wait until their work is ready before submitting it, thereby eliminating the load on the reviewer who would otherwise have had to read and reject the premature work. One factor inducing authors to wait in this way is the offer of multiple deadlines; another is the threat of a rejection leading to an 11-month embargo.
Another way reviewer load is reduced is by the use of one-shot revision. Instead of having entirely fresh, unfamiliar reviewers from a different conference review a resubmission, one-shot-revision allows us to have the same reviewers do this review, which will be less work. Furthermore, to the extent that the guidance accompanying one-shot-revision decisions clarifies exactly what changes the authors need to make, it reduces unsuccessful resubmissions and thus the need for third reviews (and more).
The following paragraph contains the wording we plan to use to describe to PC members how to use one-shot-revision:
While the new submission system does have an option for a "Revise" decision, we do not expect or want this to be a commonly exercised option. If a paper would have been accepted in previous years, it should still be an accept now. Textual changes (e.g., to properly weaken claims or position the paper relative to related work) can be handled through standard shepherd-mediated camera-ready preparations. Similarly, we should be rejecting papers that are fundamentally flawed, that lack novelty, that make an insufficient contribution, or that require changes so substantial that it is difficult to predict whether a revised version will be acceptable. Revise should be reserved for papers that are strong and exciting but flawed in some way that goes beyond what could be fixed via shepherding. In these cases, it is important to specify what would turn the paper into an acceptable, NSDI-quality paper, not just identify one or more flaws, since we aim to minimize cases where a resubmission addresses all of the reviewer comments yet is still judged to be below the bar.
Reviewers will also be instructed to not offer a one-shot-revision option if they can't determine that the paper is adequate modulo the proposed revisions. For instance, if the paper is written so sloppily that there may be a hidden deep flaw, then the paper should be rejected, not given a one-shot-revision request to fix the inadequate writing.
Here are some commonly asked questions and our responses to them.
I heard you accepted 19 papers submitted to the spring deadline. Does this mean you'll have to apply a higher bar for papers in fall to maintain the size of the conference?
Absolutely not. We're committed to applying the same bar for acceptance in fall as in spring. If due to increased interest in submitting high-quality work to NSDI, we wind up accepting substantially more papers in NSDI '23 than in NSDI '22, then we'll make appropriate adjustments to the conference timing, e.g., by shortening breaks and/or reducing the time allotted to each presentation.
Why was our paper not given a one-shot-revision decision given that the main issues the reviewers raised were with the writing?
Before giving a one-shot-revision decision, we want to be confident we'll accept the paper if it gets updated according to our revise instructions. For this reason, we're reluctant to give revision decisions to papers where the main issue is that we want to see significant changes in writing. If we can't tell whether, after the writing becomes clearer, there will be some new flaw we'll discover that wasn't evident before, then we'll reject the paper rather than give it a one-shot-revision decision.
In general, we focus our one-shot-revision decisions on cases where what's missing is new experiments or proofs, not where what's missing is clear explanation. If the main issue with the paper is writing but we're confident that the writing update will fix the paper, then what we give is an accept decision, not a one-shot-revision decision. After all, accepted papers are shepherded to help fix the writing.
How will you handle the transition between NSDI '23 and NSDI '24, for papers that receive a one-shot-revision decision for the fall deadline?
We expect that the NSDI '23 organizers will honor the implicit commitments we make with one-shot-revision decisions in the fall. To encourage this, we're making it easy for them to solicit the help of NSDI '23 PC members who were involved in making those decisions. Specifically, one of the commitments we're having people on the NSDI '23 PC make is to be available to serve as external reviewers for those one-shot-revision decision papers. This isn't much of a burden, since we expect few papers to receive such decisions.
Why would I not wait for the final deadline so I can present the most complete work?
Certainly, if your work is not ready in time for the spring deadline, you should wait to complete it and submit it to the fall deadline. But, if your work is ready in time for the spring deadline, there are several advantages to submitting it then rather than waiting for the fall deadline, including the following: (1) Your paper will receive a faster decision turnaround. (2) If your paper winds up getting a one-shot-revision decision, you still have a chance to revise it in time to have it appear at the NSDI '23 conference. (3) If your paper is rejected but the reviews are encouraging, you can resubmit elsewhere faster. (4) If your paper is accepted, you'll be able to list it on your CV and it will be available on the USENIX website sooner. You may also be able to make more timely decisions about when to graduate and/or apply for a new position.
Of course, there are also advantages to submitting to the fall deadline, so we hope there are sufficient incentives for both deadlines. One important fact is that under no circumstances will the PC chairs adjust the standard for acceptance based on the deadline (e.g., accepting more papers in the fall or spring to "fill the program," or, conversely, being more selective because the program is full).
Data from VLDB suggests submissions do tend to spread throughout the year. VLDB 2015 received 710 submissions during the year, of which just under 200 were first submitted at the final cutoff date; the other 510+ were spread relatively evenly throughout the year. (For more details, see the graph on slide 15 of this VLDB 2015 PC Presentation.)
How likely is a paper to be accepted without a one-shot revision? As an author, should I expect that outcome, since a direct accept seems really hard?
We intend for revision decisions to be alternatives to reject decisions, not accept decisions. So, if a paper would have been worthy of an accept decision in past years, it should receive an accept decision this year. Even if a paper needs shepherdable changes to reach final form, we'll accept it and ask for those changes to be shepherded, rather than asking for them to be made in a revision.
It's not clear to what extent our rate of one-shot-revision use will match that of VLDB and IEEE S&P, but it may nevertheless be useful to look at statistics for VLDB and for IEEE S&P.
How will this affect students' ability to get jobs, or professors' ability to get tenure?
Enabling students to get jobs (or faculty to get tenure) should be the consequence of performing high-quality science, and the new process should enhance research quality: other communities that have adopted this approach have seen a noted improvement in paper quality. The new process should also allow faster turnaround, making it more likely that students and professors can include a paper acceptance on a CV.
How will you achieve a two-month turnaround for reviews for the spring deadline given that NSDI has usually had a 2.5-month turnaround? Will you reduce the number of reviews that spring-deadline papers receive?
We don't plan to have any difference in the number of reviews across deadlines. We expect, based on the experiences of other conferences that have moved to a multiple-deadline model, to have substantially fewer submissions to the spring deadline than to the fall deadline. Therefore, there won't be as much work for reviewers to do for that deadline, enabling a slightly faster schedule. Also, since the entire PC (both light and heavy) will participate in the online PC meeting, we'll be able to use a broader set of reviewers during the spring reviews and thus expect to less often need to rely on third-round reviews to reach desired expertise levels. This will let us fit already-rare third-round reviews into a shorter online discussion period.
How will these changes impact the volume of submissions to NSDI?
Obviously, at this point, we can only speculate. VLDB, PETS, and IEEE S&P all saw submission volume increase after they moved to a similar model. Since we must conservatively assume this will happen for NSDI, we have increased the size of the PC. We also expect additional pressure on presentation time at the conference itself.
How will these changes impact submissions to other conferences?
This is also difficult to foresee. Hopefully, the fact that our extra deadline doesn't overlap with the deadlines of SIGCOMM, OSDI, and SOCC will mean we cause minimal interference. There may, however, be some effects, which hopefully will be beneficial. For instance, we've chosen the spring deadline to be a few weeks after the SIGCOMM author notification deadline. This should facilitate submission by authors who feel they can rapidly incorporate the reviewer feedback they receive from the SIGCOMM PC on rejected papers. Also, our spring deadline is about a month after the submission deadline for OSDI, and about two months after the submission deadline for SOCC, so the option of submitting to NSDI shortly afterward may lead some authors to hold back not-yet-ready work from those venues.
If I have a one-shot-revision, how can I manage the timing?
The one-shot-revision decision will be accompanied by guidelines and suggestions to help you. Efforts will be made (such as case-by-case extensions of the fall deadline specifically for one-shot-revision papers) to ensure that you can present your work as soon as possible.
Won't this lead to accepting more papers, thus increasing our acceptance rate and hurting the prestige of NSDI?
We may see a slight increase in the acceptance rate, owing to our ability to use the one-shot-revision process to bring a few papers up to our standard that otherwise would not have reached it. But, since we're applying the same standard for publication as in past years, we don't expect a significant change in the prestige associated with the conference.
How will this new process affect PC members?
It's hard to say, but we expect there will be several benefits to PC members, including the following. First, their reviewing load will be spread out over two deadlines, giving them more flexibility to schedule reviewing. Second, to the extent authors delay submitting their work until it's ready, reviewers will appreciate reviewing better papers. Third, we expect their re-reviewing of revised papers to be preferable to the typical alternative: re-reviewing rejected papers that different reviewers reviewed earlier.
PC members may fear that the reviewing load will be not only spread across a longer period but also greater in absolute quantity. After all, other conferences trying this new approach have received slightly more submissions than before. To compensate for this possible effect, we have increased the size of the PC.
How will we know if this new format is achieving its goals?
We have devised specific hypotheses and a concrete methodology for evaluating them. Some of our planned evaluations test whether expected improvements occur, and some test whether feared negative effects occur. We're not releasing the details of our methodology to avoid improperly influencing the outcome.
Will reviewers be able to consistently meet deadlines?
Based on conversations with PETS, VLDB, and SIGMETRICS chairs, this hasn't, thus far, been a major issue for them. They find that while the PC chairs do more "nagging" work, the reviews do generally get done on time, a process that's helped by the fact that the workload is lighter for each deadline. After all, some reviewers will always choose to procrastinate, but if the deadline is two days away, it's still quite feasible to read and review 1-3 papers, whereas at that point it's impossible to be timely with 15-20.
If a paper is accepted to the spring deadline, and then another paper covering the same ground is submitted to the fall deadline, will it be rejected by virtue of having been "scooped"?
One of our main goals is to encourage authors to wait until their work is ready before submitting it. In support of this, we'll consider work submitted to the spring deadline and the fall deadline to be "concurrent work." In other words, we won't give spring-deadline papers priority over fall-deadline papers.
Thanks, but I still have an unresolved question about the process or a specific concern as an author for submitting this year. Is there someone I can contact?
Of course! Feel free to share your thoughts and questions with the PC chairs and/or the steering committee. You can reach them at firstname.lastname@example.org and email@example.com, respectively.