This post is the first in a series of updates responding to the feedback we have received on the proposed changes to the CHI papers review process for CHI 2027 and beyond. We are grateful to everyone who contributed through the community session at CHI 2026, the feedback form, and other channels, including the open letter. In this post, we summarise what we heard for each of the five proposed changes, highlight the main points of support, concern, and uncertainty, and then outline how we plan to respond. We will continue to keep the community updated as the proposals are refined and as implementation plans develop.
Cross-cutting themes across all five proposals
Several themes appeared repeatedly across the responses.
First, the community understands that CHI’s current reviewing model is under strain and that change is needed. Many respondents thanked the Steering Committee and working group for bold, thoughtful work.
Second, there is significant anxiety that the combined proposals may create a more exclusive CHI, favouring established, well-networked, well-resourced, research-intensive, Global North institutions and large labs. Respondents urged that all policies and procedures align with CHI’s stated values around diversity, inclusion, global community, transparency, and long-term sustainability.
Third, respondents repeatedly asked for clearer operational detail: how qualifications are checked, how reviewer assignments are guaranteed, how exemptions work, how appeals work, how AI use is handled, how quality is assessed, how conflicts are managed, how workload is capped, and how PCS will support the changes. We will detail these in a series of blog posts over the coming months, as we first need to address community input on the overall plan.
Fourth, many respondents urged the CHI Steering Committee not to implement all changes at once. Several recommended phased implementation, pilots, published evaluation criteria, monitoring for differential impacts, and transparent sharing of simulations or evidence. We will report on our process and pilots and publish evaluation criteria; however, many of the changes must work in concert to be successful. For example, we cannot introduce minimum qualifications for Associate Chairs (ACs) without also reducing the number of ACs needed.
Finally, there is a strong call to frame the changes less punitively. Respondents want language and processes that emphasize care, reciprocity, mentoring, accessibility, and shared stewardship rather than punishment, policing, or exclusion. We have taken this feedback on board here and will continue to do so in future blog posts.
1. Minimum Qualifications for Peer Review Roles
Feedback on minimum qualifications was generally positive. Many respondents supported the principle of clearer standards for reviewers, ACs, SCs/Senior ACs, and Paper Chairs. Several people felt this clarification was overdue, especially given concerns about very junior or underprepared reviewers being asked to assess CHI papers. Some respondents explicitly welcomed the requirement that reviewers have at least one relevant publication and some form of review training or mentoring.
At the same time, this proposal generated significant concern about gatekeeping, exclusion, and narrowing the CHI community. A recurring concern was that requiring CHI or SIGCHI publications could disadvantage researchers from the Global South, smaller institutions, teaching-focused roles, liberal arts colleges, interdisciplinary areas, industry, clinical practice, usable security, VR/AR, health, and other communities that may contribute valuable expertise but publish in venues outside SIGCHI. Several respondents worried that the policy could reinforce an “inner circle” of established CHI researchers and make it harder for newcomers, peripheral communities, or underrepresented regions to enter the CHI community.
There were also repeated questions about what counts as relevant experience. Respondents asked whether being any author on a CHI paper is sufficient, or whether lead/senior authorship should matter. Several felt that “one CHI/SIGCHI paper” was too low if it could be satisfied by being a minor author on a large author-list paper. Others felt the criterion was too narrow if it excluded people with substantial HCI-adjacent expertise from other rigorous venues.
A major implementation issue concerned the phrase “prior mentoring or training in review writing.” Many respondents asked how this would be evidenced, verified, or enforced. Several suggested that CHI, SIGCHI, or ACM should provide a low-cost, accessible, self-paced review training or certification, both to support new reviewers and to make the qualification transparent.
There was also concern about the definitions of “very junior reviewer” and pathways into reviewing and AC roles. Respondents wanted clearer guidance on whether this refers to PhD students, early-stage PhD students, people without review experience, or people without publications. Some worried that the rules could worsen the reviewer pipeline by excluding early-career researchers before they have opportunities to learn.
Several respondents also raised concerns about career breaks and non-linear careers. Publication-count and recent-publication requirements were seen as potentially disadvantaging people with parental leave, caregiving responsibilities, illness, disability, heavy teaching loads, leadership/service roles, or slower publication trajectories associated with qualitative, participatory, community-based, or marginalized-population research.
A particularly important theme was disciplinary framing. Some respondents objected to language that describes reviewing as “scientific assessment,” arguing that CHI includes design, engineering, humanities, critical, qualitative, and practice-based work that may not fit a narrow scientific model. This concern was amplified by worries about removing subcommittees, which some respondents see as protecting methodological and epistemic diversity. (More on subcommittees below).
Summary: There is support for clearer standards, but the community wants them to be more flexible, transparent, globally inclusive, and explicit about alternative forms of expertise, review training, career breaks, and pathways for newcomers.
Response and next steps

The Minimum Qualifications for Peer Review Roles policy has already been approved by the CHI Steering Committee and is the only part of the review process changes that is not currently a proposal. We nevertheless heard the feedback clearly, especially around transparency, flexibility, and inclusion. The policy sets out what counts as relevant experience for each role, and we encourage readers to consult the full policy here: https://chi.acm.org/policies-processes/minimum-qualifications-for-full-paper-peer-review-roles-v1-0/. In practice, publication experience will be assessed through author profiles and publication records, including CHI and SIGCHI specialized conference publications. Prior mentoring or training in review writing may include formal review training, documented mentoring, or equivalent prior experience; to make this more accessible and consistent, we will point reviewers to the ACM Peer Review Training and Certification course, which covers reviewer suitability, review touchstones, paper evaluation, and review submission. We will detail how we will do this in a future blog post.

We also heard requests for clearer definitions. In the policy, a “very junior reviewer” should be understood as someone who meets the minimum qualifications for reviewing but has limited prior reviewing experience; such reviewers should be included with appropriate support and balanced by more experienced reviewers on the review team. As was standard on the program committee in prior years, SCs/Senior Associate Chairs and Associate Chairs will again be responsible for ensuring that reviewers have the appropriate experience and knowledge. We also recognize the concern that publication-based criteria can disadvantage people with non-linear careers, career breaks, caregiving responsibilities, disabilities, heavy teaching loads, institutional service, industry careers, or slower publication trajectories.

The intent of the policy is not to exclude qualified community members, but to ensure that every paper is reviewed by people with appropriate expertise and support. We will therefore clarify in the policy how exceptions and equivalent experience can be considered, especially for people whose contributions to HCI, reviewing, or relevant adjacent fields are not fully captured by recent CHI publication counts.

Finally, we accept the concern that the phrase “scientific assessment” does not adequately reflect the full diversity of work published at CHI, including design, engineering, critical, qualitative, humanities-informed, practice-based, and interdisciplinary research. ACM has clear evaluation criteria: originality, correctness, novelty, importance, and clarity of exposition. We will amend the wording of the policy so that it better reflects CHI’s breadth of epistemologies, methods, and contribution types while preserving the core goal: ensuring that all submissions receive careful, informed, and fair evaluation in line with the ACM criteria.
2. Submission and Shared Review Responsibility
The principle of shared responsibility received notable support. Many respondents agreed that authors who submit to CHI should contribute to the reviewing system, and several saw this as an overdue correction to the problem of a small group carrying a disproportionate service burden. Some respondents appreciated the seriousness of the proposed consequences, arguing that strong incentives are needed. However, this proposal generated some of the strongest concerns about workload, fairness, and unintended consequences.
A repeated concern was that the four-review requirement per submission may be manageable for large labs but punitive for small teams, single-author papers, papers from primarily undergraduate institutions, early-career faculty, teaching-focused faculty, interdisciplinary teams, and authors whose co-authors are students or non-CHI specialists. Several respondents gave concrete examples where one eligible senior author might be responsible for many reviews across multiple submissions.
Many respondents worried that the proposal could increase inequity between large, well-resourced labs and smaller or less connected groups. Larger teams can distribute reviewing responsibilities more easily, while smaller teams may face a much heavier per-person burden. Some respondents also worried that this could encourage honorary authorship, authorship padding, or authorship trading, as teams seek to add qualified reviewers to papers.
A major implementation concern was the mechanism by which declared reviewers would actually receive assignments. Several respondents noted that volunteering to review does not guarantee being invited to review. They asked what happens if someone is declared but is never assigned enough reviews, or declines a review because it is outside their expertise. Respondents also asked how responsibilities would be tracked across multiple submissions and how penalties would be allocated if an author completes some, but not all, expected reviews.
The proposed penalty (desk rejection of authors’ own submissions for failure to complete reviews) was seen by some as motivating, but by many as too severe or administratively complex. Several respondents worried that junior authors could be punished for the behaviour of senior co-authors, supervisors, or collaborators. Other respondents suggested alternatives such as future submission bans for non-compliant individuals, review-credit systems, caps, or rules under which authors cannot submit again until owed reviews are completed.
The exemption process was another major theme. Respondents asked what happens when no author meets the minimum qualifications, when authors misunderstand whether they qualify, or when exceptional circumstances arise after submission. Several felt the exemption policy needs clearer adjudication, appeals, and damage-control mechanisms.
A number of respondents worried that mandating reviewing could reduce review quality. Concerns included perfunctory reviews, ghosting, poor engagement, increased use of AI-generated reviews, and additional quality-control work for ACs and Senior ACs. Some respondents argued that the problem is not only a lack of willingness to review, but poor reviewer matching and inadequate tools for identifying available qualified reviewers.
There were also concerns about conflicts of interest. Some respondents worried that drawing reviewers heavily from current submitting authors creates incentives for competitive reviewing, especially in closely related areas.
Summary: The norm of reciprocity is widely understood and often supported, but the proposed implementation is seen as potentially burdensome, inequitable for small or less-resourced teams, difficult to administer, and risky for junior authors unless there are clearer mechanisms, caps, exemptions, appeals, and individual rather than team-level accountability.
Response and next steps

We heard the concern that mandatory reviewing may place a heavier burden on individuals in small teams, on single-author papers, or in teams where only one author meets the review qualifications. At the same time, the principle behind this change is central to the proposed review model: authors who submit to CHI need to contribute to the collective work of reviewing. CHI relies on community labour, and a sustainable review process requires that those who benefit from it also help maintain it. We recognise that this expectation may make submitting to CHI more demanding, and authors will need to consider this responsibility when deciding whether CHI is the right venue for a particular submission.

We also want to clarify that this policy is not intended to require teams to add qualified authors or senior CHI researchers to their papers. Everyone remains encouraged to submit to CHI, regardless of the authorship list. If no author on a submission meets the minimum qualifications for reviewing, the submission can proceed through the exemption process. Only where one or more authors are qualified will they be expected to contribute to reviewing. Similarly, if an author is declared as qualified and willing to review but is never assigned any reviews, there will be no expectation that they complete reviews that were never assigned: the policy stipulates that authors named during submission are willing to review, so an author who receives no assignments (and thus completes zero reviews) remains in compliance with the policy.

We also heard questions about sanctions. We have considered a range of possible responses when assigned reviews are not completed, including future submission restrictions, review-credit systems, and other delayed penalties. The current approach seeks to balance fairness, clarity, and practical implementation in a system operating at CHI’s scale. Where a declared reviewer accepts or is assigned reviews and then does not complete them, consequences will be necessary for the policy to be meaningful.

We will also develop a clear exemption policy so that authors understand how to communicate exceptional circumstances that prevent them from reviewing in a given year. This will include circumstances that arise after submission, such as illness, caregiving emergencies, or other serious unforeseen events. The aim is to make the process predictable and fair, while avoiding ambiguity for authors, ACs, Senior ACs, and Paper Chairs.

Several respondents were concerned that mandated reviewing could reduce review quality. We share the view that completing a review cannot simply mean submitting text. ACs and Senior ACs will evaluate whether reviews meet expected standards of care, specificity, constructiveness, and engagement with the paper. Reviews that are clearly inadequate, perfunctory, inappropriate, or not meaningfully authored by the reviewer will be treated as not completed, and the relevant sanctions will apply.

Finally, we heard concerns about conflicts of interest and competitive reviewing when reviewers are drawn from the submitting author pool. We will draw on evidence and practice from other distributed peer review schemes, including UKRI’s ESRC Connect Awards pilot, where applicants also act as reviewers and safeguards include separating reviewer pools, avoiding reciprocal reviewing relationships, checking institutional conflicts, moderator oversight, and excluding reviews that show evidence of gaming or unjustified negative scoring. Our next step will be a dedicated blog post explaining how we will implement shared review responsibility in practice, including reviewer assignment, exemptions, sanctions, review quality checks, and conflict-of-interest safeguards.
3. Screening/Triage via Rubric-Based Assisted Desk Reject
There was broad recognition that CHI’s submission volume is unsustainable and that some form of earlier screening may be necessary. Many respondents supported more desk rejection or triage in principle, especially for clearly out-of-scope, incomplete, immature, or very low-readiness submissions. Some respondents saw this as one of the most important proposals for protecting reviewer time.
However, the proposed rubric-based ADR process attracted substantial concern. The most common concern was that the rubric is too subjective, too complex, and too demanding for a rapid skim-reading process. Respondents argued that assessing correctness, originality, importance, novelty, and clarity often requires careful reading and domain expertise. Several suggested replacing the five-point rubric with simpler binary or threshold questions, such as whether the paper has a plausible contribution, engages relevant prior work, and uses methods or reasoning appropriate to its claims.
Many respondents worried about false positives: good, unconventional, interdisciplinary, qualitative, critical, replication, validation, feminist, Global South, racially minoritized, or otherwise non-mainstream work being screened out before full review. Several respondents felt that novelty and originality are especially risky criteria for early desk rejection, because reviewers often disagree about what is novel, and because some important work may not look “novel” at first glance. Others worried that the process could favour checklist-style papers and penalize field-changing or difficult-to-classify work.
A second major concern was algorithmic reviewer assignment. Several respondents said that current reviewer-matching systems are already poor and that using algorithmic assignment for ADR could create bad matches, especially for papers requiring both topical and methodological expertise. Respondents worried this would make ADR opaque, luck-dependent, and vulnerable to poor calibration.
The proposal to use many triage reviewers also raised concerns about intellectual property, confidentiality, collusion, and abuse. Some worried that exposing unpublished work to 10 rapid reviewers increases risk, especially for immature or sensitive work. Others worried that submitting authors could use ADR to suppress competing work, particularly in close-knit areas.
The role of AI came up repeatedly. Some respondents worried that reviewers will use AI to complete triage superficially. Others suggested that AI might need to be used explicitly and transparently to help detect AI-generated or low-quality submissions, though there was no consensus. Several respondents stressed that CHI needs clear policy on acceptable and unacceptable AI use in reviewing and triage.
A number of respondents argued that ADR should remain in the hands of experienced ACs, Senior ACs, or Paper Chairs, rather than being outsourced to a broad pool of submitting authors. Some said the current ADR process can work if given more time, clearer criteria, and stronger support, rather than creating an entirely new distributed mechanism. Others suggested hybrid models: initial AC screening plus one external rapid screener, or 2–3 qualified AC-level checks rather than 10 broad triage ratings.
Several respondents requested appeal mechanisms and feedback. If ADR is used, they want authors to receive the ratings or short explanations, and some argued each team should have one opportunity to appeal an ADR decision.
Summary: There is strong support for reducing full-review load through earlier screening, but substantial concern that the proposed rubric-based, algorithmically assigned, many-reviewer ADR process could be subjective, opaque, burdensome, vulnerable to abuse, and harmful to methodological and community diversity. Respondents favour simpler criteria, more experienced oversight, transparency, appeals, and careful monitoring of differential impacts.
Response and next steps

We agree that any screening or triage process must be fair, robust, and appropriate for the diversity of work submitted to CHI. We have already conducted a pilot of the rubric-based screening process, which gave us confidence that expert reviewers can complete the task reliably and efficiently: in the pilot, experienced reviewers were able to apply the rubric in around 15 minutes per paper. This is not out of step with other expert assessment processes, such as the UK Research Excellence Framework, a large-scale expert review process used to assess research quality across UK higher education institutions, or the work of journal Editors-in-Chief. We recognise, however, that less experienced reviewers may need more time and support.

We also agree that, while the five criteria were set by the ACM before we started this process, the precise wording of the rubric may not be in its final form. The CHI 2026 panel on peer review was especially useful in pointing us toward evidence on the advantages of checklist-based approaches and how they can support higher-quality decision-making. At the same time, we recognise that developing criteria that work across all CHI papers, methods, contribution types, and epistemic traditions is difficult. We are therefore continuing to refine the rubric, and are also exploring alternative ways to conduct triage in discussion with ACM.

We heard the concern that current reviewer-matching systems are not good enough to support a process of this importance. We agree. We are working on a revised set of keywords and expertise descriptors, and any triage process we adopt will be tested before implementation. This testing will need to examine not only efficiency, but also reviewer-paper fit, consistency, transparency, and differential effects across CHI’s diverse communities.

Finally, we want to highlight the appeal process. SIGCHI has had a clear appeals process for submissions across all its venues for many years; see the SIGCHI Submission and Review Process. Authors whose papers are assisted desk-rejected through triage will have a clear route to request reconsideration, and we will provide more detail on the grounds, process, and timing for appeals in the implementation plan. Our next step will be a dedicated blog post explaining how we will implement ADR for CHI 2026.
4. Optimizing the Full Peer Review Process and Synthesis
Feedback on the move from a 1AC/2AC model to a single Primary AC model was divided. Some respondents supported removing the 2AC role, seeing it as a reasonable streamlining step that reduces overhead. A few noted that other conferences use similar models and that the 2AC review can be burdensome or uneven.
However, many respondents emphasized that the 2AC role provides checks and balances. The second AC can catch misallocation, challenge a poor or biased 1AC judgment, support fairness, and help when external reviewers disappear or disagree. Several respondents worried that giving one AC more power could worsen the effects of idiosyncratic preferences, disciplinary bias, or attempts to narrow the field.
A major concern was the workload of the new AC role. Respondents repeatedly said that finding external reviewers is harder and more stressful than writing 2AC reviews. If each AC handles 10–15 papers and must secure three external reviews per paper, several respondents said they would be unlikely to serve. Some argued that CHI should instead increase the number of ACs and reduce reliance on external reviewers, rather than reducing the committee and increasing external-review recruitment.
Respondents also questioned whether Senior ACs can realistically oversee the proposed number of ACs and papers. There was concern that oversight of hundreds of papers and reviews may be too large to ensure consistency, fairness, and quality.
The proposed removal of subcommittees generated particularly strong and emotional feedback. Some respondents welcomed the removal, but many saw subcommittees as essential infrastructure for community, identity, calibration, methodological protection, and reviewer recruitment. Respondents from qualitative, health, design, critical, and other areas worried that subcommittees protect minority methods and epistemologies from being judged by dominant norms. Several warned that removing subcommittees could create more silos, not fewer, because people may self-organize informally into trusted networks without the transparency of formal subcommittee structures.
The Steering Committee also received an open letter responding specifically to the proposal to remove formal subcommittees from the CHI review process. The letter argues that the term “thematic silos” does not accurately reflect how subcommittees function, and proposes instead understanding them as porous, overlapping “subcommunities” that provide practical support, peer mentoring, reviewer recruitment help, calibration, and a sense of intellectual home for authors and reviewers. It emphasizes that subcommittees help CHI sustain methodological and epistemic diversity, develop shared reviewing norms within different areas, support emerging subfields, and make the breadth of CHI more legible to newcomers and outsiders. The letter also asks for clearer evidence that subcommittees cause harm or inefficiency, and for a more explicit explanation of how AC bidding, peer support, difficult-case discussion, and field-building would be handled without them.
Several respondents asked how AC bidding and paper assignment would work without subcommittees. They worried about ACs having to bid across thousands of papers, loss of author agency in choosing an appropriate community, and overreliance on automated matching. Respondents wanted clearer explanation of the assignment process, including how authors can signal topic, method, contribution type, and community fit.
There were also comments about PC meetings. Some respondents valued PC meetings as spaces for calibration, accountability, and community-building, while others felt recent virtual PC meetings have been of limited value. A few suggested alternative networking or calibration mechanisms if traditional meetings are not retained.
A recurring point was that the AC should not merely summarize reviews. In a context where AI-assisted reviews may become more common, respondents stressed that ACs must exercise critical judgment, identify low-quality or AI-generated reviews, and be empowered to champion papers despite superficial reviewer concerns.
Summary: Streamlining is welcomed by some, but many respondents worry that removing 2ACs and subcommittees could weaken checks and balances, mentorship, calibration, methodological diversity, and community ties. Additional comments reinforced these concerns, emphasizing the practical and intellectual role subcommittees play in supporting ACs, reviewers, authors, difficult decisions, and the development of shared reviewing norms. The largest practical concern remains that the new AC role may become less attractive because recruiting three external reviewers across 10–15 papers is highly burdensome.
Response and next steps

We heard the concern that the new Primary AC role could make reviewer recruitment more burdensome, especially if each AC is responsible for securing three external reviews across 10–15 papers. Our expectation is that this will be easier than in the current system because the reviewer pool will include qualified people who have already agreed to review as part of their own submission responsibilities. The aim is to move reviewer recruitment away from last-minute ad hoc requests and toward a more predictable pool of available reviewers.

We also heard concerns about the workload for Senior ACs. Our modelling suggests that the proposed Senior AC role is manageable, but we will continue to test these assumptions as the implementation details are developed. We recognise that workload needs to be realistic if we are to retain experienced people in these roles. We also note that over the last 15 years, ACs have had on average 14.5 papers assigned, typically half as 1AC and half as 2AC; in some years this average was as high as 20.0 papers, and in others as low as 10.0.

We are taking seriously the feedback about the proposed removal of formal subcommittees. Our vision is not to lose the intellectual, methodological, and community functions that effective subcommittees currently provide. Rather, we expect Senior ACs to take on a role closer to subcommunity chairs: representing subcommunity norms, recruiting ACs with the right expertise, supporting calibration, running effective program committee meetings, and ensuring that areas with strong existing reviewing cultures continue to be supported. We also expect the new process to make it easier to identify emerging subcommunities and support new areas of CHI as they develop, providing a more flexible and agile structure.

In combination with the Minimum Qualifications for Peer Review Roles, the proposed model asks ACs and Senior ACs to exercise expert judgement in ways that are closer to journal associate editors and area chairs. They will not simply summarize reviews, but will be responsible for ensuring that papers are evaluated by appropriate experts, that reviews are interpreted in context, and that decisions are fair across CHI’s diverse contribution types and research traditions.

Our next step will be a dedicated blog post outlining how this model will work in practice, including the details of AC recruitment, bidding, paper assignment, subcommunity support, Senior AC responsibilities, and program committee meetings.
5. Recognizing and Rewarding Reviewing Service
This proposal received generally positive but often qualified feedback. Many respondents supported making reviewing more visible and formally acknowledging high-quality service. Several said this is a welcome step toward recognizing reviewing as scholarly labour.
However, many respondents felt that recognition needs to be more concrete and useful. Certificates, public lists, or ceremony acknowledgements were seen as nice but insufficient. Suggested material or practical rewards included reduced registration fees, ACM membership discounts, free or discounted conference items, guaranteed student volunteer slots, AC/SC lunches, service letters, public confirmation usable in annual reviews, and stronger CV-visible forms of recognition.
Some respondents emphasized that institutional recognition matters. For reviewer recognition to be meaningful, it should be documented in a way that people can use for promotion, tenure, annual evaluation, workload models, and service reporting. Respondents also noted that recognition should be accessible to people who cannot attend CHI in person.
Several respondents cautioned that recognition systems can be gamed or may incentivize quantity over quality. Concerns included nepotism, AI-generated reviews, rewarding verbose but shallow reviews, and opaque quality metrics. Some asked how review quality would be assessed, whether authors would have input, and whether AI would be used to evaluate reviews.
A few respondents argued that the best recognition is not awards but being invited into meaningful service roles, such as the Program Committee, because these roles carry more institutional weight than “outstanding reviewer” labels. Others said awards should be paired with clearer sanctions or accountability for poor reviews.
There was also a more critical theme: some respondents felt recognition alone cannot compensate for structural loss of community, workload increases, or the removal of relational incentives such as PC meetings and subcommittee belonging.
Summary: Reviewer recognition is broadly supported, but respondents want it to be concrete, transparent, useful for career evaluation, accessible to remote/non-attending reviewers, resistant to gaming, and paired with accountability for poor reviewing. Recognition is seen as helpful but not sufficient to solve motivation, workload, or community-care issues.
Response and next steps

We heard broad support for making reviewing work more visible, alongside a clear message that recognition needs to be useful beyond the conference itself. Public acknowledgement, certificates, and ceremony recognition are valuable, but they may not be enough for reviewers whose institutions need concrete evidence of service for promotion, tenure, annual review, workload allocation, or professional evaluation.

We will therefore explore stronger CV-visible forms of recognition for outstanding reviewing and reviewing service more broadly. This may include clearer public records of service, more formal documentation that reviewers can use in institutional evaluations, and ways to distinguish sustained, high-quality reviewing contributions over time. We will also consider how recognition can be made accessible to reviewers who are not able to attend CHI in person.

At the same time, we recognise that recognition must be trustworthy. Any system for acknowledging reviewing should avoid rewarding quantity over quality, should be transparent about how outstanding reviewing is identified, and should not create incentives for superficial or AI-generated reviews. Our next step is to develop a clearer recognition process that makes reviewing visible as scholarly service while maintaining confidence in the quality and integrity of the review process.
We are also producing an FAQ blog post to supplement this and other blog posts; together these will detail how the various parts of the process will be implemented. They will be posted on https://chi.acm.org/ and https://chi2027.acm.org/.