


Guide to Reviewing Papers

 
This page provides guidelines for reviewers responsible for assessing submissions to CHI.

 

Key points:

  • Your primary criterion for judging a paper is: Does this submission provide a strong contribution to the field of HCI? Remember that there are many ways a paper can make a contribution to HCI, and you should review the paper appropriately. See “Contributions to CHI” for details.
  • Reviewers make an overall recommendation (Accept with Minor Revisions, Revise and Resubmit, or Reject) for each submission, accompanied by a written appraisal that supports their recommendation.
  • A high-quality review is typically about a page of written text; very short reviews are frustrating for authors, tend to be less constructive, and hurt the review process. Always put yourself in the author’s position: what level of detailed feedback would you like to see for your own work?
  • Make sure to write a detailed review whether or not you like the paper — short positive or negative reviews without justification will not support decisions about a paper during the PC meeting, especially when they are countered by well-argued opposing reviews.
  • As a reviewer, you are responsible for the content and accuracy of your reviews, including the references cited. This is especially relevant if you choose to use AI tools based on Large Language Models (LLMs), which are prone to producing plausible-sounding garbage. Such reviews can reflect negatively on the reviewer and their reputation. The ACM Policy on using AI tools in reviewing is provided here: https://www.acm.org/publications/policies/peer-review-faq.
  • Maintaining confidentiality is crucial in the review process; thus, quoting or disclosing papers under review must be avoided. Copying and pasting content from a manuscript into an LLM may pose risks to confidentiality (again, note the ACM Policy: https://www.acm.org/publications/policies/peer-review-faq). Reviewers should therefore avoid sharing content from unpublished papers with any online tools that may store or process the data in any way that could compromise confidentiality.

 

Contributions

The primary criterion for the evaluation of all papers is the submission’s contribution to HCI. In all cases, a CHI paper must make an original research contribution. It is important to recognize that a paper can make a contribution to HCI in many ways, and you should review the paper appropriately. Please see Selecting a Subcommittee (link tba) for a list of some of the types of contributions a paper can make to HCI, and Guide to a Successful Submission for the associated criteria that you can use to assess different types of contribution.

 

Paper Lengths

In CHI 2025, a paper’s length should be commensurate with its contribution. There is no arbitrary maximum (or minimum) length for papers. However, clarity and conciseness of writing are considered vital to a high-quality submission. When discussing paper lengths, we refer to the main text of the paper – not references, figures, tables, etc.

To facilitate judgements on whether or not a paper’s length is commensurate with contribution, authors are required to identify if their paper is:

  • short (less than 5,000 words),
  • long (typically 7,000–8,000 words), or
  • excessively long (more than 12,000 words).

Papers of different lengths are reviewed within the same rigorous review process and at the highest level are judged by very similar criteria (i.e., does this paper provide a strong contribution to the field of HCI?). However, it is important as a reviewer to realize that the type of content that is appropriate for a shorter paper is somewhat different than for a longer paper. A short paper should present brief and focused research contributions that are noteworthy but may not be as comprehensive or provide the same depth of results as a long paper – reviewers should not ask for more, but focus on what is needed for a short contribution.

Authors of excessively long papers (more than 12,000 words) are required to justify to reviewers why a shorter contribution is not suitable or not possible. In very exceptional cases, such submissions may be considered for full review, but authors must provide clear and strong justification for why their submission must exceed 12,000 words. The length of a typical long submission is expected to be approximately 7,000–8,000 words.

Papers whose lengths are incommensurate with their contributions will be rejected. Papers may be perceived as too long if they are repetitive or verbose, or too short if they omit important details, neglect relevant prior related work, or tamper with formatting rules to save on page count.

CHI 2025 encourages shorter, more focused papers but all papers will be reviewed through the same process.

 

Prior Publication

Content appearing at CHI should be new and groundbreaking. Therefore, material that has been previously published in widely disseminated archival publications should not be republished unless the work has been significantly revised. Guidelines for determining the “significance” of a revision are stated in the ACM Policy on Pre-Publication Evaluation and the ACM Policy on Prior Publication and Simultaneous Submissions. Roughly, a significant revision would contain more than 25% new material (i.e., material that offers new insights, new results, etc.) and would significantly amplify or clarify the original material. These are subjective measures left to the interpretation and judgment of the reviewers and committee members – authors are advised to revise well beyond the policy guidelines.

There is an exception for work that has previously been presented or published in a language other than English. Such work may be translated and published in English at CHI. The original author should typically also be the author (or co-author) of the English translation, and it should be made clear in the submission’s abstract that the paper is a translation.

Also note that non-archival venues, such as workshop presentations, posters, and CHI’s own Late-Breaking Work do not count as prior publications. Furthermore, a CHI paper should not be rejected on the grounds that it overlaps with work developed independently that was published after the CHI submission was made, during the review period. In other words, work that an author could not have known about should not count against them.

 

Replicating Work

The policy on prior publication refers only to re-publication of one’s own work; it does not preclude publication of work that replicates other researchers’ work. Constructive replication can be a significant contribution to human-computer interaction, and a new interpretation or evaluation of previously published ideas can make a good CHI paper. For future replications to be possible, however, submitted work must include sufficient information. Efforts to include complete, well-organized supplementary material facilitating replication, such as software, analysis code, and data, should be rewarded.

 

Transparency

Lack of transparency in the way research results are reported can be grounds to doubt the contribution. See the “Transparency” section in the Guide to a Successful Submission for a discussion of transparency in different contribution types.

 

Use of AI tools such as LLMs

There are many online tools reviewers might use for editing and proofreading their reviews. Such tools are not prohibited, but they must be used with care. To maintain the integrity of the review process, reviewers should avoid sharing content from unpublished papers with any online tools that may store or process the data in ways that could compromise confidentiality – it is the reviewer’s responsibility to understand the implications of using any such tool and to abide accordingly. Tools based on LLMs prioritize plausible prompt completion over accuracy. Reviewers must ensure the accuracy and relevance of their reviews and take care after using such tools for editing and proofreading.

 

Subcommittees

To improve the reviewing process, the CHI program committee is organized into approximately a dozen topic areas, which are further divided into approximately two dozen subcommittees. Each subcommittee is responsible for a topic area within HCI (see Selecting a Subcommittee (link tba) for details). Each subcommittee is chaired by two Subcommittee Chairs (SCs), who invite relevant Associate Chairs (ACs) who are knowledgeable in the topics covered by the subcommittee. As specialists in the topic area, ACs have the primary responsibility of recruiting excellent reviewers (such as you) for each submission.

However, as a reviewer, you should not judge the paper by how well it fits the subcommittee theme(s). Many papers will not cleanly fit into a particular subcommittee for a variety of reasons, and we do not want to penalize authors for this. Remember, the subcommittee organization is there only to try to improve reviewer matches and to better handle the volume of submissions. If you have a paper that does not fit the subcommittee theme, evaluate it as best you can with respect to the paper’s own quality. Any topic is valid, as long as it fits within the interests of a reasonable fraction of the overall CHI audience. The primary criterion for review is the submission’s contribution to HCI.

For more information about the overall CHI review process, see CHI Papers Review Process.

 

References

We highly recommend Ken Hinckley’s thoughtful piece on what excellent reviewing is. If we had any way to enforce this, we would make it “required reading” for CHI reviewers and ACs.

Even with great guidelines like these that we can all agree on, the debate about what makes a good CHI paper has been going on as long as the CHI conference has existed. If you are interested, the papers below touch upon this debate and contain references to additional papers that concern it.

  • Greenberg, S. and Buxton, B. 2008. Usability evaluation considered harmful (some of the time). In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). ACM, 111–120. DOI=http://doi.acm.org/10.1145/1357054.1357074
  • Olsen, D. R. 2007. Evaluating user interface systems research. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (UIST ’07). ACM, 251–258. DOI=http://doi.acm.org/10.1145/1294211.1294256
  • Dourish, P. 2006. Implications for design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’06). ACM, 541–550. DOI=http://doi.acm.org/10.1145/1124772.1124855
  • Newman, W. 1994. A preliminary analysis of the products of HCI research, using pro forma abstracts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence (CHI ’94). ACM, New York, NY, 278–284. DOI=http://doi.acm.org/10.1145/191666.191766
  • Reed, D. and Chi, E. H. 2012. Online privacy; replicating research results. Commun. ACM 55, 10 (October 2012), 8–9. DOI=https://doi.acm.org/10.1145/2347736.2347739