Blog

  • Peer Review Changes: April 2026 Summary of Feedback Gathered, Our Initial Responses, and Next Steps

    This post is the first in a series of updates responding to the feedback we have received on the proposed changes to the CHI papers review process for CHI 2027 and beyond. We are grateful to everyone who contributed through the community session at CHI 2026, the feedback form, and other channels, including the open letter. In this post, we summarise what we heard for each of the five proposed changes, highlight the main points of support, concern, and uncertainty, and then outline how we plan to respond. We will continue to keep the community updated as the proposals are refined and as implementation plans develop.

    Cross-cutting themes across all five proposals

    Several themes appeared repeatedly across the responses.

    First, the community understands that CHI’s current reviewing model is under strain and that change is needed. Many respondents thanked the Steering Committee and working group for bold, thoughtful work.

    Second, there is significant anxiety that the combined proposals may create a more exclusive CHI, favouring established, well-networked, well-resourced, research-intensive, Global North institutions and large labs. Respondents asked that all policies and procedures align with CHI’s stated values around diversity, inclusion, global community, transparency, and long-term sustainability.

    Third, respondents repeatedly asked for clearer operational detail: how qualifications are checked, how reviewer assignments are guaranteed, how exemptions work, how appeals work, how AI use is handled, how quality is assessed, how conflicts are managed, how workload is capped, and how PCS will support the changes. We will detail these in a series of blog posts over the coming months, as we first need to address community input on the overall plan.

    Fourth, many respondents urged the CHI Steering Committee not to implement all changes at once. Several recommended phased implementation, pilots, published evaluation criteria, monitoring for differential impacts, and transparent sharing of simulations or evidence. We will report on our process and our pilots and will create evaluation criteria; however, many of the changes must work in concert to be successful. For example, we cannot have minimum qualifications for Associate Chairs (ACs) without also reducing the number of ACs needed.

    Finally, there is a strong call to frame the changes less punitively. Respondents want language and processes that emphasize care, reciprocity, mentoring, accessibility, and shared stewardship rather than punishment, policing, or exclusion. We have taken this feedback on board here and will continue to do so in future blog posts.

    1. Minimum Qualifications for Peer Review Roles

    Feedback on minimum qualifications was generally positive. Many respondents supported the principle of clearer standards for reviewers, ACs, SCs/Senior ACs, and Paper Chairs. Several people felt this clarification was overdue, especially given concerns about very junior or underprepared reviewers being asked to assess CHI papers. Some respondents explicitly welcomed the requirement that reviewers have at least one relevant publication and some form of review training or mentoring.

    At the same time, this proposal generated significant concern about gatekeeping, exclusion, and narrowing the CHI community. A recurring concern was that requiring CHI or SIGCHI publications could disadvantage researchers from the Global South, smaller institutions, teaching-focused roles, liberal arts colleges, interdisciplinary areas, industry, clinical practice, usable security, VR/AR, health, and other communities that may contribute valuable expertise but publish in venues outside SIGCHI. Several respondents worried that the policy could reinforce an “inner circle” of established CHI researchers and make it harder for newcomers, peripheral communities, or underrepresented regions to enter the CHI community.

    There were also repeated questions about what counts as relevant experience. Respondents asked whether being any author on a CHI paper is sufficient, or whether lead/senior authorship should matter. Several felt that “one CHI/SIGCHI paper” was too low if it could be satisfied by being a minor author on a large author-list paper. Others felt the criterion was too narrow if it excluded people with substantial HCI-adjacent expertise from other rigorous venues.

    A major implementation issue concerned the phrase “prior mentoring or training in review writing.” Many respondents asked how this would be evidenced, verified, or enforced. Several suggested that CHI, SIGCHI, or ACM should provide a low-cost, accessible, self-paced review training or certification, both to support new reviewers and to make the qualification transparent.

    There was also concern about the definitions of “very junior reviewer” and pathways into reviewing and AC roles. Respondents wanted clearer guidance on whether this refers to PhD students, early-stage PhD students, people without review experience, or people without publications. Some worried that the rules could worsen the reviewer pipeline by excluding early-career researchers before they have opportunities to learn.

    Several respondents also raised concerns about career breaks and non-linear careers. Publication-count and recent-publication requirements were seen as potentially disadvantaging people with parental leave, caregiving responsibilities, illness, disability, heavy teaching loads, leadership/service roles, or slower publication trajectories associated with qualitative, participatory, community-based, or marginalized-population research.

    A particularly important theme was disciplinary framing. Some respondents objected to language that describes reviewing as “scientific assessment,” arguing that CHI includes design, engineering, humanities, critical, qualitative, and practice-based work that may not fit a narrow scientific model. This concern was amplified by worries about removing subcommittees, which some respondents see as protecting methodological and epistemic diversity. (More on subcommittees below).

    Summary: There is support for clearer standards, but the community wants them to be more flexible, transparent, globally inclusive, and explicit about alternative forms of expertise, review training, career breaks, and pathways for newcomers.

    Response and next steps

    The Minimum Qualifications for Peer Review Roles policy has already been approved by the CHI Steering Committee and is the only part of the review process changes that is not currently a proposal. We nevertheless heard the feedback clearly, especially around transparency, flexibility, and inclusion. The policy sets out what counts as relevant experience for each role, and we encourage readers to consult the full policy here: https://chi.acm.org/policies-processes/minimum-qualifications-for-full-paper-peer-review-roles-v1-0/. In practice, publication experience will be assessed through author profiles and publication records, including CHI and SIGCHI specialized conference publications. Prior mentoring or training in review writing may include formal review training, documented mentoring, or equivalent prior experience; to make this more accessible and consistent, we will point reviewers to the ACM Peer Review Training and Certification course, which covers reviewer suitability, review touchstones, paper evaluation, and review submission. We will detail how we will do this in a future blog post.

    We also heard requests for clearer definitions. In the policy, a “very junior reviewer” should be understood as someone who meets the minimum qualifications for reviewing but has limited prior reviewing experience; such reviewers should be included with appropriate support and balanced by more experienced reviewers on the review team. As was standard practice in the program committee in prior years, SCs/Senior Associate Chairs and Associate Chairs will again be responsible for ensuring that reviewers have the appropriate experience and knowledge. We also recognize the concern that publication-based criteria can disadvantage people with non-linear careers, career breaks, caregiving responsibilities, disabilities, heavy teaching loads, institutional service, industry careers, or slower publication trajectories. The intent of the policy is not to exclude qualified community members, but to ensure that every paper is reviewed by people with appropriate expertise and support. We will therefore clarify in the policy how exceptions and equivalent experience can be considered, especially for people whose contributions to HCI, reviewing, or relevant adjacent fields are not fully captured by recent CHI publication counts.

    Finally, we accept the concern that the phrase “scientific assessment” does not adequately reflect the full diversity of work published at CHI, including design, engineering, critical, qualitative, humanities-informed, practice-based, and interdisciplinary research. ACM has clear evaluation criteria: originality, correctness, novelty, importance, and clarity of exposition. We will amend the wording of the policy so that it better reflects CHI’s breadth of epistemologies, methods, and contribution types while preserving the core goal: ensuring that all submissions receive careful, informed, and fair evaluation in line with the ACM criteria.

    2. Submission and Shared Review Responsibility

    The principle of shared responsibility received notable support. Many respondents agreed that authors who submit to CHI should contribute to the reviewing system, and several saw this as an overdue correction to the problem of a small group carrying a disproportionate service burden. Some respondents appreciated the seriousness of the proposed consequences, arguing that strong incentives are needed. However, this proposal generated some of the strongest concerns about workload, fairness, and unintended consequences.

    A repeated concern was that the four-review requirement per submission may be manageable for large labs but punitive for small teams, single-author papers, papers from primarily undergraduate institutions, early-career faculty, teaching-focused faculty, interdisciplinary teams, and authors whose co-authors are students or non-CHI specialists. Several respondents gave concrete examples where one eligible senior author might be responsible for many reviews across multiple submissions.

    Many respondents worried that the proposal could increase inequity between large, well-resourced labs and smaller or less connected groups. Larger teams can distribute reviewing responsibilities more easily, while smaller teams may face a much heavier per-person burden. Some respondents also worried that this could encourage honorary authorship, authorship padding, or authorship trading, as teams seek to add qualified reviewers to papers.

    A major implementation concern was the mechanism by which declared reviewers would actually receive assignments. Several respondents noted that volunteering to review does not guarantee being invited to review. They asked what happens if someone is declared but is never assigned enough reviews, or declines a review because it is outside their expertise. Respondents also asked how responsibilities would be tracked across multiple submissions and how penalties would be allocated if an author completes some, but not all, expected reviews.

    The proposed penalty, desk rejection of authors’ own submissions for failure to complete reviews, was seen by some as motivating, but by many as too severe or administratively complex. Several respondents worried that junior authors could be punished for the behaviour of senior co-authors, supervisors, or collaborators. Other respondents suggested alternatives, including future submission bans for non-compliant individuals, review-credit systems, caps, or “you cannot submit again until owed reviews are completed.”

    The exemption process was another major theme. Respondents asked what happens when no author meets the minimum qualifications, when authors misunderstand whether they qualify, or when exceptional circumstances arise after submission. Several felt the exemption policy needs clearer adjudication, appeals, and damage-control mechanisms.

    A number of respondents worried that mandating reviewing could reduce review quality. Concerns included perfunctory reviews, ghosting, poor engagement, increased use of AI-generated reviews, and additional quality-control work for ACs and Senior ACs. Some respondents argued that the problem is not only a lack of willingness to review, but poor reviewer matching and inadequate tools for identifying available qualified reviewers.

    There were also concerns about conflicts of interest. Some respondents worried that drawing reviewers heavily from current submitting authors creates incentives for competitive reviewing, especially in closely related areas.

    Summary: The norm of reciprocity is widely understood and often supported, but the proposed implementation is seen as potentially burdensome, inequitable for small or less-resourced teams, difficult to administer, and risky for junior authors unless there are clearer mechanisms, caps, exemptions, appeals, and individual rather than team-level accountability.

    Response and next steps

    We heard the concern that mandatory reviewing may place a heavier burden on individuals in small teams, single-author papers, or teams where only one author meets the review qualifications. At the same time, the principle behind this change is central to the proposed review model: authors who submit to CHI need to contribute to the collective work of reviewing. CHI relies on community labour, and a sustainable review process requires that those who benefit from it also help maintain it. We recognise that this expectation may make submitting to CHI more demanding, and authors will need to consider this responsibility when deciding whether CHI is the right venue for a particular submission.

    We also want to clarify that this policy is not intended to require teams to add qualified authors or senior CHI researchers to their papers. Everyone remains encouraged to submit to CHI, regardless of the authorship list. If no author on a submission meets the minimum qualifications for reviewing, the submission can proceed through the exemption process. It is only where one or more authors are qualified that they will be expected to contribute to reviewing. Similarly, if an author is declared as qualified and willing to review but is not assigned any reviews, there will be no expectation that they complete reviews that were never assigned. The policy stipulates that authors named during submission are willing to review; if they are never assigned a review (and thus complete none), they remain willing and therefore in compliance with the policy.

    We also heard questions about sanctions. We have considered a range of possible responses when assigned reviews are not completed, including future submission restrictions, review-credit systems, and other delayed penalties. The current approach seeks to balance fairness, clarity, and practical implementation in a system operating at CHI’s scale. Where a declared reviewer accepts or is assigned reviews and then does not complete them, consequences will be necessary for the policy to be meaningful. We will also develop a clear exemption policy so that authors understand how to communicate exceptional circumstances that prevent them from reviewing in a given year. This will include circumstances that arise after submission, such as illness, caregiving emergencies, or other serious unforeseen events. The aim is to make the process predictable and fair, while avoiding ambiguity for authors, ACs, Senior ACs, and Paper Chairs.

    Several respondents were concerned that mandated reviewing could reduce review quality. We share the view that completing a review cannot simply mean submitting text. ACs and Senior ACs will evaluate whether reviews meet expected standards of care, specificity, constructiveness, and engagement with the paper. Reviews that are clearly inadequate, perfunctory, inappropriate, or not meaningfully authored by the reviewer will be treated as not completed, and the relevant sanctions will apply.

    Finally, we heard concerns about conflicts of interest and competitive reviewing when reviewers are drawn from the submitting author pool. We will draw on evidence and practice from other distributed peer review schemes, including UKRI’s ESRC Connect Awards pilot, where applicants also act as reviewers and safeguards include separating reviewer pools, avoiding reciprocal reviewing relationships, checking institutional conflicts, moderator oversight, and excluding reviews that show evidence of gaming or unjustified negative scoring.

    Our next step will be a dedicated blog post explaining how we will implement shared review responsibility in practice, including reviewer assignment, exemptions, sanctions, review quality checks, and conflict-of-interest safeguards.

    3. Screening/Triage via Rubric-Based Assisted Desk Reject

    There was broad recognition that CHI’s submission volume is unsustainable and that some form of earlier screening may be necessary. Many respondents supported more desk rejection or triage in principle, especially for clearly out-of-scope, incomplete, immature, or very low-readiness submissions. Some respondents saw this as one of the most important proposals for protecting reviewer time.

    However, the proposed rubric-based ADR process attracted substantial concern. The most common concern was that the rubric is too subjective, too complex, and too demanding for a rapid skim-reading process. Respondents argued that assessing correctness, originality, importance, novelty, and clarity often requires careful reading and domain expertise. Several suggested replacing the five-point rubric with simpler binary or threshold questions, such as whether the paper has a plausible contribution, engages relevant prior work, and uses methods or reasoning appropriate to its claims.

    Many respondents worried about false positives: good, unconventional, interdisciplinary, qualitative, critical, replication, validation, feminist, Global South, racially minoritized, or otherwise non-mainstream work being screened out before full review. Several respondents felt that novelty and originality are especially risky criteria for early desk rejection, because reviewers often disagree about what is novel, and because some important work may not look “novel” at first glance. Others worried that the process could favour checklist-style papers and penalize field-changing or difficult-to-classify work.

    A second major concern was algorithmic reviewer assignment. Several respondents said that current reviewer-matching systems are already poor and that using algorithmic assignment for ADR could create bad matches, especially for papers requiring both topical and methodological expertise. Respondents worried this would make ADR opaque, luck-dependent, and vulnerable to poor calibration.

    The proposal to use many triage reviewers also raised concerns about intellectual property, confidentiality, collusion, and abuse. Some worried that exposing unpublished work to 10 rapid reviewers increases risk, especially for immature or sensitive work. Others worried that submitting authors could use ADR to suppress competing work, particularly in close-knit areas.

    The role of AI came up repeatedly. Some respondents worried that reviewers will use AI to complete triage superficially. Others suggested that AI might need to be used explicitly and transparently to help detect AI-generated or low-quality submissions, though there was no consensus. Several respondents stressed that CHI needs clear policy on acceptable and unacceptable AI use in reviewing and triage.

    A number of respondents argued that ADR should remain in the hands of experienced ACs, Senior ACs, or Paper Chairs, rather than being outsourced to a broad pool of submitting authors. Some said the current ADR process can work if given more time, clearer criteria, and stronger support, rather than creating an entirely new distributed mechanism. Others suggested hybrid models: initial AC screening plus one external rapid screener, or 2–3 qualified AC-level checks rather than 10 broad triage ratings.

    Several respondents requested appeal mechanisms and feedback. If ADR is used, they want authors to receive the ratings or short explanations, and some argued each team should have one opportunity to appeal an ADR decision.

    Summary: There is strong support for reducing full-review load through earlier screening, but substantial concern that the proposed rubric-based, algorithmically assigned, many-reviewer ADR process could be subjective, opaque, burdensome, vulnerable to abuse, and harmful to methodological and community diversity. Respondents favour simpler criteria, more experienced oversight, transparency, appeals, and careful monitoring of differential impacts.

    Response and next steps

    We agree that any screening or triage process must be fair, robust, and appropriate for the diversity of work submitted to CHI. We have already conducted a pilot of the rubric-based screening process, and this gave us confidence that expert reviewers are able to complete the task reliably and efficiently. In the pilot, experienced reviewers were able to apply the rubric in around 15 minutes per paper. This is not out of step with other expert assessment processes, such as the UK Research Excellence Framework, a large-scale expert review process used to assess research quality across UK higher education institutions, or the work of Editors-in-Chief on journals. We recognise, however, that less experienced reviewers may need more time and support. We also agree that, whilst the 5 criteria were set by the ACM before we started this process, the precise wording of the rubric may not be in its final form. The CHI 2026 panel on peer review was especially useful in pointing us toward evidence on the advantages of checklist-based approaches and how they can support higher-quality decision-making. At the same time, we recognise that developing criteria that work across all CHI papers, methods, contribution types, and epistemic traditions is difficult. We are therefore continuing to refine the rubric, and are also exploring alternative ways to conduct triage in discussion with ACM.

    We heard the concern that current reviewer-matching systems are not good enough to support a process of this importance. We agree. We are working on a revised set of keywords and expertise descriptors, and any triage process we adopt will be tested before implementation. This testing will need to examine not only efficiency, but also reviewer-paper fit, consistency, transparency, and differential effects across CHI’s diverse communities.

    Finally, we want to highlight the appeal process. As for all SIGCHI venues, SIGCHI has had a clear appeals process for submissions in place for many years; see the SIGCHI Submission and Review Process. Authors whose papers are assisted desk-rejected through triage will have a clear route to request reconsideration, and we will provide more detail on the grounds, process, and timing for appeals in the implementation plan.

    Our next step will be a dedicated blog post explaining how we will implement ADR for CHI 2027.

    4. Optimizing the Full Peer Review Process and Synthesis

    Feedback on the move from a 1AC/2AC model to a single Primary AC model was divided. Some respondents supported removing the 2AC role, seeing it as a reasonable streamlining step that reduces overhead. A few noted that other conferences use similar models and that the 2AC review can be burdensome or uneven.

    However, many respondents emphasized that the 2AC role provides checks and balances. The second AC can catch misallocation, challenge a poor or biased 1AC judgment, support fairness, and help when external reviewers disappear or disagree. Several respondents worried that giving one AC more power could worsen the effects of idiosyncratic preferences, disciplinary bias, or attempts to narrow the field.

    A major concern was the workload of the new AC role. Respondents repeatedly said that finding external reviewers is harder and more stressful than writing 2AC reviews. If each AC handles 10–15 papers and must secure three external reviews per paper, several respondents said they would be unlikely to serve. Some argued that CHI should instead increase the number of ACs and reduce reliance on external reviewers, rather than reducing the committee and increasing external-review recruitment.

    Respondents also questioned whether Senior ACs can realistically oversee the proposed number of ACs and papers. There was concern that oversight of hundreds of papers and reviews may be too large to ensure consistency, fairness, and quality.

    The proposed removal of subcommittees generated particularly strong and emotional feedback. Some respondents welcomed the removal, but many saw subcommittees as essential infrastructure for community, identity, calibration, methodological protection, and reviewer recruitment. Respondents from qualitative, health, design, critical, and other areas worried that subcommittees protect minority methods and epistemologies from being judged by dominant norms. Several warned that removing subcommittees could create more silos, not fewer, because people may self-organize informally into trusted networks without the transparency of formal subcommittee structures.

    The Steering Committee also received an open letter responding specifically to the proposal to remove formal subcommittees from the CHI review process. The letter argues that the term “thematic silos” does not accurately reflect how subcommittees function, and proposes instead understanding them as porous, overlapping “subcommunities” that provide practical support, peer mentoring, reviewer recruitment help, calibration, and a sense of intellectual home for authors and reviewers. It emphasizes that subcommittees help CHI sustain methodological and epistemic diversity, develop shared reviewing norms within different areas, support emerging subfields, and make the breadth of CHI more legible to newcomers and outsiders. The letter also asks for clearer evidence that subcommittees cause harm or inefficiency, and for a more explicit explanation of how AC bidding, peer support, difficult-case discussion, and field-building would be handled without them. 

    Several respondents asked how AC bidding and paper assignment would work without subcommittees. They worried about ACs having to bid across thousands of papers, loss of author agency in choosing an appropriate community, and overreliance on automated matching. Respondents wanted clearer explanation of the assignment process, including how authors can signal topic, method, contribution type, and community fit.

    There were also comments about PC meetings. Some respondents valued PC meetings as spaces for calibration, accountability, and community-building, while others felt recent virtual PC meetings have been of limited value. A few suggested alternative networking or calibration mechanisms if traditional meetings are not retained.

    A recurring point was that the AC should not merely summarize reviews. In a context where AI-assisted reviews may become more common, respondents stressed that ACs must exercise critical judgment, identify low-quality or AI-generated reviews, and be empowered to champion papers despite superficial reviewer concerns.

    Summary: Streamlining is welcomed by some, but many respondents worry that removing 2ACs and subcommittees could weaken checks and balances, mentorship, calibration, methodological diversity, and community ties. Additional comments reinforced these concerns, emphasizing the practical and intellectual role subcommittees play in supporting ACs, reviewers, authors, difficult decisions, and the development of shared reviewing norms. The largest practical concern remains that the new AC role may become less attractive because recruiting three external reviewers across 10–15 papers is highly burdensome. 

    Response and next steps

    We heard the concern that the new Primary AC role could make reviewer recruitment more burdensome, especially if each AC is responsible for securing three external reviews across 10–15 papers. Our expectation is that this will be easier than in the current system because the reviewer pool will include qualified people who have already agreed to review as part of their own submission responsibilities. The aim is to move reviewer recruitment away from last-minute ad hoc requests and toward a more predictable pool of available reviewers.

    We also heard concerns about the workload for Senior ACs. Our modelling suggests that the proposed Senior AC role is manageable, but we will continue to test these assumptions as the implementation details are developed. We recognise that workload needs to be realistic if we are to retain experienced people in these roles. However, we also note that over the last 15 years, ACs had on average 14.5 papers assigned, typically half as 1AC and half as 2AC. In some years, this average was as high as 20.0, and in others, as low as 10.0.

    We are taking seriously the feedback about the proposed removal of formal subcommittees. Our vision is not to lose the intellectual, methodological, and community functions that effective subcommittees currently provide. Rather, we expect Senior ACs to take on a role closer to subcommunity chairs: representing subcommunity norms, recruiting ACs with the right expertise, supporting calibration, running effective program committee meetings, and ensuring that areas with strong existing reviewing cultures continue to be supported. We also expect the new process to make it easier to identify emerging subcommunities and to support new areas of CHI as they develop, providing a more flexible and agile structure.

    In combination with the Minimum Qualifications for Peer Review Roles, the proposed model asks ACs and Senior ACs to exercise expert judgement in ways that are closer to journal associate editors and area chairs. They will not simply summarize reviews, but will be responsible for ensuring that papers are evaluated by appropriate experts, that reviews are interpreted in context, and that decisions are fair across CHI’s diverse contribution types and research traditions.

    Our next step will be a dedicated blog post outlining how this model will work in practice, including the details of AC recruitment, bidding, paper assignment, subcommunity support, Senior AC responsibilities, and program committee meetings.

    5. Recognizing and Rewarding Reviewing Service

    This proposal received generally positive but often qualified feedback. Many respondents supported making reviewing more visible and formally acknowledging high-quality service. Several said this is a welcome step toward recognizing reviewing as scholarly labour.

    However, many respondents felt that recognition needs to be more concrete and useful. Certificates, public lists, or ceremony acknowledgements were seen as nice but insufficient. Suggested material or practical rewards included reduced registration fees, ACM membership discounts, free or discounted conference items, guaranteed student volunteer slots, AC/SC lunches, service letters, public confirmation usable in annual reviews, and stronger CV-visible forms of recognition.

    Some respondents emphasized that institutional recognition matters. For reviewer recognition to be meaningful, it should be documented in a way that people can use for promotion, tenure, annual evaluation, workload models, and service reporting. Respondents also noted that recognition should be accessible to people who cannot attend CHI in person.

    Several respondents cautioned that recognition systems can be gamed or may incentivize quantity over quality. Concerns included nepotism, AI-generated reviews, rewarding verbose but shallow reviews, and opaque quality metrics. Some asked how review quality would be assessed, whether authors would have input, and whether AI would be used to evaluate reviews.

    A few respondents argued that the best recognition is not awards but being invited into meaningful service roles, such as the Program Committee, because these roles carry more institutional weight than “outstanding reviewer” labels. Others said awards should be paired with clearer sanctions or accountability for poor reviews.

    There was also a more critical theme: some respondents felt recognition alone cannot compensate for structural loss of community, workload increases, or the removal of relational incentives such as PC meetings and subcommittee belonging. 

    Summary: Reviewer recognition is broadly supported, but respondents want it to be concrete, transparent, useful for career evaluation, accessible to remote/non-attending reviewers, resistant to gaming, and paired with accountability for poor reviewing. Recognition is seen as helpful but not sufficient to solve motivation, workload, or community-care issues.

    Response and next steps

    We heard broad support for making reviewing work more visible, alongside a clear message that recognition needs to be useful beyond the conference itself. Public acknowledgement, certificates, and ceremony recognition are valuable, but they may not be enough for reviewers whose institutions need concrete evidence of service for promotion, tenure, annual review, workload allocation, or professional evaluation.

    We will therefore explore stronger CV-visible forms of recognition for outstanding reviewing and reviewing service more broadly. This may include clearer public records of service, more formal documentation that reviewers can use in institutional evaluations, and ways to distinguish sustained, high-quality reviewing contributions over time. We will also consider how recognition can be made accessible to reviewers who are not able to attend CHI in person.

    At the same time, we recognise that recognition must be trustworthy. Any system for acknowledging reviewing should avoid rewarding quantity over quality, should be transparent about how outstanding reviewing is identified, and should not create incentives for superficial or AI-generated reviews. Our next step is to develop a clearer recognition process that makes reviewing visible as scholarly service while maintaining confidence in the quality and integrity of the review process.

    We are also producing an FAQ blogpost to supplement this and other blog posts that will detail how the various parts of the process will be implemented. These will be posted on https://chi.acm.org/ and https://chi2027.acm.org/ 

  • When Authorship Goes Wrong: Why SIGCHI Conferences Freeze Author Lists and What That Means

    Original Blog Post from: https://medium.com/sigchi/when-authorship-goes-wrong-why-sigchi-conferences-freeze-author-lists-and-what-that-means-ada403633cc5

    Every year at conferences, the same painful situation occurs. A student or collaborator made a significant contribution to a paper, but their name was inadvertently omitted from the submission. When the mistake is discovered, the conference says that no changes are allowed. The result feels deeply unfair, especially in a community that publicly emphasizes inclusion, mentorship, and proper credit. How can a system that claims to value ethical research allow someone who did the work to be excluded from authorship?

    To understand this, we must examine how authorship actually works in ACM publications. The answer is not found in a single rule, but in the interaction between conferences’ Calls for Papers, ACM’s authorship policy, and the submission time certification that binds authors and venues together. Only by understanding how these layers fit together does the logic behind frozen author lists become clear.

    The Call for Papers is a binding contract

    When a research team submits a paper to an ACM conference, they do not just upload a PDF. They also accept the Call for Papers, the submission rules of that venue, and ACM policies. For SIGCHI conferences, this is typically a checkbox with a text outlining the authors’ agreement; see an example text in the SIGCHI Submissions and Reviewing Policy. These rules define what authors are allowed to do and what the conference is allowed to enforce. By clicking submit, the authors enter into a contract with the venue, in which both sides agree to follow the published rules.

    These conference rules do not exist in isolation. They operate under ACM policies, such as the Policy on Authorship, which acts as a higher-level framework. A conference cannot permit something that ACM policy forbids, but it can impose stricter procedural rules as long as they are consistent with ACM policy. This is why many conferences state explicitly that author lists are frozen at submission and cannot be changed later. That rule is not arbitrary. It is how the conference operationalizes ACM’s authorship policy in a way that is enforceable and auditable.

    What ACM actually means by authorship

    ACM’s Policy on Authorship is much stricter than many people realize. To be an author, a person must fulfill all of the attributes quoted below:

    Anyone listed as an author on an ACM submission must meet all the following criteria:

    • They are an identifiable human being. Anonymous authorship is not permitted, although pseudonyms and/or pen names are permitted, provided accurate contact information is given to ACM. ACM does not currently permit collective authorship.
    • They have made substantial intellectual contributions to some components of the original Work described in the manuscript, such as contributing to the conception, design, and analysis of the study reported on in the Work and participating in the drafting and/or revision of the manuscript.
    • They take full responsibility for all content in the published Works.

    As a result, it is possible that someone is excluded from the list of authors during the submission process. This is what authors then try to rectify after the submission deadline has passed.

    However, the ACM Policy on Authorship adds a further criterion, the one most important for the question of whether ACM allows authorship changes: “They [all authors] are aware the manuscript has been submitted for publication to ACM.” Authorship is therefore not only about who helped, but about who formally stands behind the manuscript that enters the scholarly record.

    At submission time, all listed authors make a set of explicit statements to ACM. They confirm that they know the paper has been submitted, that they approve the version being reviewed, and that they agree to be held responsible for its integrity and compliance with policy. This turns authorship into a legal and ethical attestation, not just a credit line. The author list becomes part of a contract between the researchers and ACM about who is accountable for the work.

    Submission time certification and author responsibility

    This submission time certification is the cornerstone that holds the entire system together. The version of the paper that enters into peer review is the one that the listed authors approved and agreed to stand behind. Reviewers, chairs, and ACM rely on the fact that every named author endorsed that specific document. If problems later arise, these are the people who can be contacted, questioned, and held accountable.

    A person who was not listed at submission did not make that certification to ACM. From the perspective of the reviewers, chairs, and ACM, there is no record that this person saw the final version, approved it, or agreed to be accountable for it. Even if they did so within the research team, that consent was never formally communicated through the submission process. As a result, they never entered into the submission time contract that defines authorship for this particular manuscript. This is why authorship cannot simply be extended later without breaking the underlying logic of responsibility.

    Why author lists are frozen

    From this perspective, frozen author lists follow directly. If the author list could be changed freely after submission, the submission time certification would become meaningless. People who never approved the reviewed version could be added or removed to avoid responsibility if something goes wrong. The conference would no longer know who actually stood behind the paper that was evaluated.

    Freezing the author list is therefore not about being inflexible for its own sake. It is about preserving the integrity of the authorship contract. It ensures that the names on the paper correspond to the people who approved and took responsibility for the version that entered peer review. Without this, authorship would become negotiable after the fact, which would undermine both accountability and trust in the scholarly record.

    The ethical collision

    This is where the system becomes emotionally and ethically difficult. Sometimes a student or collaborator really did contribute before submission, but was accidentally left off the author list. From a moral perspective, they deserve authorship. From a procedural perspective, they cannot be added without falsifying the submission time certification that underpins the review process.

    The system knowingly prioritizes the integrity of the scholarly record over perfect fairness in individual cases. That is a harsh trade-off, but it is deliberate. Without it, authorship would become vulnerable to pressure, politics, and strategic manipulation. Acknowledging this does not make the harm disappear, but it explains why these painful edge cases exist.

    What’s next when an authorship mistake is discovered?

    Once a missing author is discovered after submission, there are only two paths forward that remain ethically and procedurally valid.

    The first option is to withdraw the paper. This resets the submission process, allowing the author list to be corrected so that all contributors can review, approve, and take responsibility for the manuscript. This produces a clean and fully ethical authorship record, but it also means losing the current review cycle and potentially the paper itself if it is not accepted again.

    The second option is to proceed with the frozen author list and acknowledge the missing contributor explicitly. This does not give them formal authorship, but it does create a public and permanent record of their contribution. However, this option undoubtedly raises ethical questions, especially if the power dynamics are not in favour of the forgotten author(s).

    There is no third option that is both policy-compliant and ethically defensible. Quietly adding someone as an author after submission would misrepresent who approved and stood behind the reviewed manuscript. Quietly leaving them out would hide their contribution. Withdrawal or transparent acknowledgment are the only two “clean” choices.

    Withdrawal is the only way to restore full authorship

    There is only one way to give someone full authorship after an omission has occurred. The paper must be withdrawn, the author list corrected, and the work resubmitted so that all authors can read, approve, and take responsibility for the manuscript, as the ACM Policy on Authorship requires for every submission. This resets the submission time certification and restores a clean authorship record.

    This option discards reviews, delays publication, and frequently harms researchers more than it benefits them. However, it is the only way to give all authors their deserved place on the author list while remaining both ethically and procedurally sound.

    Why acknowledgments are an imperfect but legitimate solution

    Using acknowledgments instead of authorship to correct an omission undoubtedly raises ethical questions, especially when power dynamics are unequal, and the forgotten contributor is a student or junior researcher. Being named in acknowledgments does not provide the same career credit, visibility, or protection as formal authorship; in such situations, the harm is real.

    Acknowledgments are, therefore, an imperfect solution to authorship mistakes. They do not give someone the formal academic credit that comes with being listed as an author, and they do not make them accountable for the submitted manuscript. For people who genuinely deserved authorship, this is understandably painful. This option should therefore be considered very carefully. However, under a frozen author list (resulting from ACM policy), acknowledgments are the only mechanism the system still allows to record a contribution without falsifying the authorship record.

    There is also a real downstream risk. Because acknowledgments are not part of the formal author list, they do not participate in automated conflict of interest detection, bibliographic databases, or authorship-based accountability systems. In principle, this creates a vulnerability: a malicious actor could attempt to conceal a conflicted contributor by relocating them from the author list to the acknowledgments. This is precisely why ACM defines ghost authorship as concealment intended to deceive.

    That is not what happens when acknowledgments are used transparently under a frozen author list. ACM defines ghost authorship as hiding a real contributor in order to deceive the reader about who was behind the work. An acknowledgment that openly states that someone contributed to the design, analysis, or writing of a paper is a form of disclosure, not concealment. The reader is explicitly informed, even if automated systems cannot act on it. For that reason, transparent acknowledgments, used because authorship cannot be corrected procedurally, do not violate ACM’s prohibition on ghost authorship.

    The formal appeals process

    None of this is meant to be handled through private negotiation or quiet exceptions. SIGCHI and ACM provide a formal appeals process for disputes about submissions and reviews. Authors can appeal to the Associate Chair/Editors, then to the Track Chairs, then to the Program Chairs or the Editor-in-Chief, and finally to ACM itself.

    This ladder exists so that conflicts, including authorship-related ones, can be reviewed transparently and on the record. If an author team believes a conference has misapplied its own rules or ACM policy, this is where the issue should be raised to increase awareness among all involved stakeholders. See SIGCHI Submissions and Reviewing Policy for the full details on the appeals process.

    Closing

    The authorship system in ACM conferences is not gentle, and it does not always produce outcomes that feel fair. But it is internally consistent. Authorship is not only about who helped. It is about who certified, approved, and took responsibility for a specific document that was submitted. Senior authors in particular should take this responsibility seriously by guiding authorship decisions early and helping junior collaborators navigate these rules. Once that is understood, frozen author lists stop looking arbitrary and start looking like the price we pay for a scholarly record that can actually be trusted.

  • Reflection on CHI Program Committee History

    Over the past two decades, the CHI conference has grown at an extraordinary pace. As the flagship venue for Human-Computer Interaction research, CHI now attracts thousands of submissions each year from across the world. This growth reflects the increasing importance of HCI as a field and the expanding range of research topics it encompasses.

    At the same time, the scale of the conference raises important questions about how the peer review process evolves and whether the current structure can sustainably support continued growth.

    To better understand these developments, we analyzed the evolution of the CHI Program Committee between 2005 and 2026. The goal of this analysis is to provide empirical evidence on how the CHI reviewing ecosystem has evolved and what structural pressures may emerge as the conference continues to grow.

    Data Sources and Methodology

    The analysis presented here is based entirely on publicly available sources, including: CHI conference websites and front matter, DBLP, ACM Digital Library, and ORCID Profiles.

    Because metadata across sources is often incomplete or inconsistent, linking records required a combination of automated data collection and manual verification. As with any large-scale data integration effort, small inaccuracies may remain. However, because the results are presented at an aggregated level, the overall trends should remain reliable. 

    I open-sourced my scripts, so if you find more data-cleaning opportunities, please send a pull request to https://github.com/sven-mayer/chi-statistics

    The Growth of the Program Committee

    The most visible change over time is simply the scale of the CHI Program Committee.

    Before 2010, CHI operated with a single program committee. In 2010, the conference introduced topical subcommittees to distribute reviewing responsibilities across research areas. Since then, the committee has grown dramatically and now includes well over one thousand researchers serving in roles such as Subcommittee Chairs (SCs) and Associate Chairs (ACs).

    Figure 1: The number of program committee members between 2005 and 2026 in relation to the submitted papers in the respective years.

    Figure 1 shows the relation between PC members (ACs, SCs, and Paper Chairs) and submitted papers. This expansion closely mirrors the growth in paper submissions, as the number of papers is directly linked to the number of Associate Chairs needed.

    Looking ahead, projections illustrate the magnitude of the challenge. Assuming approximately eleven papers per Associate Chair, similar to CHI 2026, and a submission growth rate of roughly thirty percent per year (35% in 2026, 25% in 2025), the required number of Associate Chairs would increase rapidly:

    • CHI 2027: about 9,000 submissions, requiring approximately 1,630 Associate Chairs
    • CHI 2028: about 11,700 submissions, requiring approximately 2,120 Associate Chairs
    • CHI 2029: about 15,200 submissions, requiring approximately 2,760 Associate Chairs
    • CHI 2030: about 19,700 submissions, requiring approximately 3,600 Associate Chairs
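
    The sketch below reproduces this back-of-the-envelope projection. The starting point of roughly 6,900 submissions for CHI 2026, the 30% annual growth rate, the 11 papers per Associate Chair, and the factor of two implied by the current 1AC/2AC model are assumptions read off the figures quoted in this post, not official planning numbers.

```python
# Minimal sketch of the AC projection arithmetic, under the assumptions stated above.
import math

SUBMISSIONS_2026 = 6_900   # approximate CHI 2026 submission count (assumption)
GROWTH_RATE = 0.30         # roughly 30% submission growth per year (assumption)
PAPERS_PER_AC = 11         # about eleven papers per Associate Chair, as at CHI 2026
ACS_PER_PAPER = 2          # each paper handled by a 1AC and a 2AC in the current model

submissions = SUBMISSIONS_2026
for year in range(2027, 2031):
    submissions *= 1 + GROWTH_RATE
    acs_needed = math.ceil(submissions * ACS_PER_PAPER / PAPERS_PER_AC)
    print(f"CHI {year}: ~{round(submissions, -2):,.0f} submissions, "
          f"~{acs_needed:,} Associate Chairs")
```

    Running this reproduces figures close to the list above (about 9,000 submissions and roughly 1,630 Associate Chairs for CHI 2027, up to roughly 3,600 Associate Chairs for CHI 2030).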

    At this scale, maintaining the traditional review structure becomes increasingly challenging. This naturally raises the question of how the CHI community has managed the conference’s growth over the past decades.

    To explore this, we analyzed 21 years of CHI Program Committee data. The analysis focuses on indicators of community engagement, such as prior participation in the program committee and publication activity within CHI and the broader SIGCHI ecosystem (see the full list of eligible conferences at https://sigchi.org/conferences/). These measures serve as a proxy for how well the Program Committee’s composition aligns with the experience and expertise needed to evaluate submissions at scale.

    Importantly, the Program Committee is not a homogeneous group. Associate Chairs, Subcommittee Chairs, and Paper Chairs play distinct roles in the review process and carry different levels of responsibility. We therefore analyze these groups separately to better understand how experience and community engagement are distributed across the different layers of the reviewing system.

    Reflection on the Paper Chairs

    Paper Chairs play a central role in shaping the CHI review process. They oversee the entire full paper review pipeline, coordinate the work of Subcommittee Chairs and Associate Chairs, and ultimately ensure that the review process is conducted fairly and consistently.

    Given this responsibility, Paper Chairs are typically selected from highly experienced members of the CHI community. An examination of their publication history and prior program committee participation confirms this expectation for most Paper Chairs (see Figure 2), although some Paper Chairs had no prior CHI publishing experience. Still, most Paper Chairs have substantial publishing experience at CHI and have previously served in multiple program committee roles before taking on this leadership position (see Figure 3, left).

    Figure 2: Paper Chair experience with respect to authoring CHI (top) and SIGCHI (bottom) papers.

    However, the data also reveals an interesting pattern in community participation after serving as Paper Chair. Many of the individuals who take on this role later reduce their involvement in the program committee, as shown in Figure 3 (right). In other words, once researchers reach this peak leadership position, they often step back from regular committee service.

    Figure 3: Number of years a Paper Chair was part of the program committee before they became a Paper Chair (left, yellow). Number of years a Paper Chair served on the program committee after serving as a paper chair (right, red); we ignore cases where the next year would be 2027, as we do not yet know whether they will return.

    This pattern is understandable. Serving as Paper Chair is an extremely demanding role that requires a significant investment of time and effort. Yet it also has an important structural implication for the conference. Because Paper Chairs should be among the most experienced members of the community, their reduced participation afterward means that the program committee gradually loses some of its most senior members.

    In a system that already requires continuous expansion of the program committee, this dynamic reduces the pool of highly experienced reviewers available to support the conference.

    Reflection on the Subcommittee Chairs

    Subcommittee Chairs occupy a critical position in the CHI review structure. Introduced in 2010 as part of the move toward a subcommittee-based reviewing model, they act as the bridge between the Paper Chairs and the operational work of the Associate Chairs. In practice, Subcommittee Chairs oversee large groups of Associate Chairs, guide the discussion of papers within their subcommittee, and help ensure consistency in decision-making. Therefore, the role requires both subject-matter expertise and extensive experience with the CHI review process.

    Figure 4: Subcommittee Chair experience with respect to authoring CHI (top) and SIGCHI (bottom) papers.

    Looking at historical data confirms that Subcommittee Chairs are typically drawn from the most experienced members of the CHI community (see Figure 4). Many have substantial publication records at CHI and have previously served multiple times as Associate Chairs before taking on the role.

    Figure 5: Number of years a Subcommittee Chair was part of the program committee before they became a Subcommittee Chair (left, yellow). Number of years a Subcommittee Chair served on the program committee after serving as a Subcommittee Chair (right, red); we ignore cases where the next year would be 2027, as we do not yet know whether they will return.

    However, several patterns emerge when examining Subcommittee Chairs over the past two decades.

    First, Subcommittee Chairs often reduce their involvement in the program committee after serving in this leadership role, as is also observed for Paper Chairs. While understandable given the workload involved, this again removes some of the most experienced members from the recurring reviewer pool (see Figure 5, right).

    Second, because Subcommittee Chairs operate largely within their specific research areas, they play a central role in shaping the reviewing culture of their subcommittee. Their expectations, interpretation of the review criteria, and approach to discussion management can significantly influence how submissions are evaluated in that area. 

    Across all years, Subcommittee Chairs authored between 0 and 72 CHI full papers, with a mean of 11.2 and a median of 10.0. Considering SIGCHI venues more broadly, they authored an average of 24.8 papers with a median of 20.0 (min = 0, max = 146). While these numbers confirm that most Subcommittee Chairs are highly active contributors to the community, Figure 4 also shows that some have comparatively few CHI and SIGCHI publications.

    This becomes particularly relevant in light of the CHI Steering Committee’s Minimum Qualifications for Peer Review Roles, which defines one criterion for serving as a Subcommittee Chair as having 5 or more SIGCHI papers, including at least 2 CHI papers. Applying this criterion retroactively, we find that 46 (7.9%) of Subcommittee Chairs would not qualify based solely on the minimum paper criteria. While this is a consistent issue across most years, in recent years the percentage of Subcommittee Chairs who would not qualify today has been: 7.3% (4 of 55) in 2026, 10.4% (5 of 48) in 2025, 6.4% (3 of 47) in 2024, 4.4% (2 of 45) in 2023, 4.7% (2 of 43) in 2021, 7.9% (3 of 38) in 2020, 5.9% (2 of 34) in 2019, 3.0% (1 of 33) in 2018, and 3.7% (1 of 27) in 2016.
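
    For readers who want to replicate this check, the sketch below shows one way the criterion could be expressed. The per-chair record type and the example values are hypothetical; only the thresholds (5 or more SIGCHI papers, at least 2 of them CHI papers) come from the policy quoted above.

```python
# Minimal sketch of the Subcommittee Chair minimum-paper check; data structure is illustrative.
from dataclasses import dataclass

@dataclass
class ChairRecord:
    name: str
    chi_papers: int      # CHI full papers authored
    sigchi_papers: int   # papers at any SIGCHI venue, including CHI

def meets_sc_minimum(record: ChairRecord) -> bool:
    """True if the chair meets the minimum paper criteria for the SC role."""
    return record.sigchi_papers >= 5 and record.chi_papers >= 2

# Example: share of chairs in a given year who would not meet the criteria.
chairs = [ChairRecord("A", chi_papers=1, sigchi_papers=4),
          ChairRecord("B", chi_papers=6, sigchi_papers=15)]
not_qualified = [c for c in chairs if not meets_sc_minimum(c)]
print(f"{len(not_qualified)} of {len(chairs)} "
      f"({100 * len(not_qualified) / len(chairs):.1f}%) would not qualify")
```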

    Finally, the distribution of experience among Subcommittee Chairs varies considerably across years. As the conference grows and new subcommittees are introduced, the pool of potential candidates must expand accordingly. This means that while some Subcommittee Chairs bring extensive CHI experience, others enter the role with less prior exposure to the conference’s reviewing process. Figure 6 shows that, in total, 7 SCs had no prior PC experience (1 in 2010, 2 in 2013, 1 in 2015, 1 in 2024, and 2 in 2026). Looking at which subcommittees these belong to, we also see an imbalance: 3 SCs in Design, 1 in Specific Application Areas, 1 in Technology, Systems, and Tools, 1 in Understanding People: Theory, Concepts, and Methods, and 1 in User Experience and Usability.

    Figure 6: Years of past CHI program committee experience of Subcommittee Chairs by year. (Note: it’s the same data as in Figure 5 left, but grouped by year).

    As a result, Subcommittee Chairs are not only coordinators of the review process but also key actors in shaping how standards are applied within different parts of the CHI community.

    This becomes particularly relevant when considering how subcommittees evolve over time and how differences between them may emerge.

    Reflection on the Associate Chairs

    Associate Chairs form the operational backbone of the CHI review process. While Paper Chairs and Subcommittee Chairs provide strategic oversight and coordination, Associate Chairs are responsible for managing the review of individual submissions. In practice, this includes selecting external reviewers, moderating reviewer discussions, synthesizing reviewer feedback, and contributing to final recommendations. Because of this central role, the overall composition and experience of Associate Chairs directly influence the quality and consistency of the peer review process.

    Prior CHI Program Committee Experience of Associate Chairs

    The growth of the program committee also affects the distribution of experience among Associate Chairs. Because submissions increase each year (Figure 1), the program committee must expand accordingly. As a result, a substantial portion of newly recruited Associate Chairs have no prior experience serving in this role at CHI. Looking at submission numbers over the past 21 years, CHI experienced an average annual growth rate of 13.8% (SD = 21.7%, min = −35.6%, max = 81.2%). Since the COVID-19 pandemic, the growth rate has been comparatively stable at 27.2% on average (SD = 4.8%, min = 23.4%, max = 34.1%); the two pandemic years (2021 and 2022), however, saw a temporary 9% decrease in submissions. Interestingly, these fluctuations are not reflected in the composition of the Associate Chair pool. Since 2011, the proportion of newly recruited Associate Chairs has averaged 35.5% (SD = 5.5%, min = 26.6%, max = 46.1%), and the growth in submission numbers does not appear to directly correlate with the number of newly recruited Associate Chairs (Spearman’s rs = 0.057, p = 0.840). This raises an interesting question: why does the proportion of newly recruited Associate Chairs remain consistently around 35%?
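
    For those curious how such a (lack of) association can be quantified, the snippet below computes a Spearman rank correlation between yearly submission growth and the share of newly recruited ACs; the numbers are illustrative placeholders, not the actual CHI values.

```python
from scipy.stats import spearmanr

# Illustrative yearly values (placeholders, not the actual CHI data):
# annual growth rate of submissions (%) and share of newly recruited ACs (%).
submission_growth = [12.0, 25.0, -9.0, 30.0, 27.0, 24.0]
newcomer_share    = [33.0, 36.0, 40.0, 31.0, 35.0, 38.0]

# Spearman's rank correlation between the two yearly series.
rs, p_value = spearmanr(submission_growth, newcomer_share)
print(f"Spearman's rs = {rs:.3f}, p = {p_value:.3f}")
```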

    Figure 7: Prior CHI AC experience over the last 21 years. Note: in the first years, all Associate Chairs appear as new because we have no data older than 2005; the red dashed line therefore indicates the point from which the committee composition can be considered stable.

    To better understand participation patterns among Associate Chairs, Figure 8 shows the total number of years individuals served on the CHI Program Committee. The distribution reveals a striking pattern. The largest group of Associate Chairs served only a single year on the program committee (N = 996). Participation then decreases steadily with increasing years of service. For example, 503 Associate Chairs served for 2 years, 371 for 3 years, and 224 for 4 years. Only a very small number of individuals remained on the program committee for extended periods.
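
    The distribution in Figure 8 can be derived by tallying, for each unique individual, the number of distinct years they appear on the program committee. Below is a minimal sketch of that tally, using hypothetical (name, year) pairs rather than our actual records.

```python
from collections import Counter

# Hypothetical (name, year) committee memberships; placeholders only.
memberships = [
    ("AC 1", 2024), ("AC 1", 2025), ("AC 1", 2026),
    ("AC 2", 2026),
    ("AC 3", 2025), ("AC 3", 2026),
]

# Number of distinct years each individual served on the committee.
years_served = Counter(name for name, _ in set(memberships))

# Distribution: how many individuals served exactly k years.
distribution = Counter(years_served.values())
for k in sorted(distribution):
    print(f"{distribution[k]} Associate Chairs served {k} year(s)")
```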

    Figure 8: Total number of years Associate Chairs served on the CHI Program Committee.

    Overall, this indicates that participation in the CHI program committee is typically short-lived for many Associate Chairs. While a core group of individuals contributes over multiple years, the majority of Associate Chairs appear to participate only once. This high turnover may explain the relatively stable share of newly recruited Associate Chairs each year. If many Associate Chairs serve only briefly, the program committee must continually recruit new members to maintain its size as submission numbers grow. However, experienced Associate Chairs remain essential for maintaining review quality and mentoring new committee members. Therefore, the structural need to continually expand the committee makes balancing experience and growth increasingly difficult.

    To better understand this dynamic, we now examine the distribution of Associate Chair experience within individual subcommittees. Looking at the program committee as a whole, its composition appears relatively homogeneous. As shown earlier, approximately 35% of Associate Chairs are newly recruited each year, suggesting a relatively stable structure of experience within the program committee. However, this aggregated view hides substantial differences between subcommittees.

    Figure 9 illustrates the distribution of prior CHI program committee experience among Associate Chairs across subcommittees for the last three years (CHI 2024–2026). Each bar shows the percentage of Associate Chairs within a subcommittee grouped by the number of years they previously served on the CHI program committee.

    Figure 9: Prior CHI program committee experience of Associate Chairs across subcommittees for CHI 2024–2026.

    The figure reveals that the experience composition varies considerably across subcommittees. Since the introduction of subcommittees, the proportion of newcomer Associate Chairs has ranged from 4.2% to 80.0%. Looking only at the most recent years, the range remains substantial. In 2026, the share of newcomer ACs ranged from 10.3% to 68.3% across subcommittees; in 2025, from 10.9% to 50.8%; and in 2024, from 12.0% to 61.0%, see Figure 9.

    Some subcommittees, therefore, include a substantial number of first-time Associate Chairs, while others rely more heavily on ACs with prior program committee experience. In other words, while the overall program committee may appear stable when looking at aggregate statistics, the underlying composition differs significantly across subcommittees. As a result, variations in prior program committee experience may influence how reviewing practices and expectations develop within different parts of the conference.

    In addition, these patterns are not stable over time within individual subcommittees. For example, over its 17 years of existence, the Design subcommittee has had an average newcomer AC rate of 39.7%, but the yearly rate ranged from 8.7% to 64.0%. In the last three years alone, the newcomer share was 53.0% in 2026, 42.6% in 2025, and 25.9% in 2024. A similar pattern can be observed in the relatively new Computational Interaction subcommittee (6 years old), which shows a mean newcomer rate of 29.2%, but with considerable variation (min = 4.2%, max = 43.5%).
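
    As an illustration of how these per-subcommittee newcomer rates can be computed, the sketch below groups hypothetical Associate Chair records by subcommittee and year and takes the share of first-time ACs; the column names and values are assumptions for illustration, not our dataset.

```python
import pandas as pd

# Hypothetical AC records; the columns and values are illustrative only.
acs = pd.DataFrame({
    "year":         [2024, 2024, 2025, 2025, 2026, 2026],
    "subcommittee": ["Design", "Design", "Design", "Health", "Health", "Design"],
    "is_newcomer":  [True, False, False, True, True, False],
})

# Share of first-time Associate Chairs per subcommittee and year (in %).
newcomer_rate = (
    acs.groupby(["subcommittee", "year"])["is_newcomer"]
       .mean()          # mean of booleans = fraction of newcomers
       .mul(100)
       .rename("newcomer_rate_pct")
)
print(newcomer_rate)
```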

    Taken together, these findings suggest that while new Associate Chairs are continuously required to support the growth of CHI submissions, the overall turnover within the program committee is considerably higher than what would be expected from submission growth alone. Because many Associate Chairs serve only once, the influx of newcomers exceeds the typical growth rate of submissions. This dynamic may reduce consistency in the review process across years. Furthermore, the large variation in newcomer rates across subcommittees means that some subcommittees offer a more stable reviewing experience than others.

    Prior CHI Publishing Experience of Associate Chairs

    Prior participation in the CHI program committee is only one indicator of experience within the community. Another important dimension concerns researchers’ publication history within CHI and the broader SIGCHI ecosystem. Publication activity reflects long-term engagement with the research community and familiarity with its standards and expectations. We therefore next examine the publication experience of Associate Chairs across CHI and other SIGCHI venues. Figure 10 shows the distribution of Associate Chairs’ publication records with respect to CHI papers (top) and SIGCHI-sponsored conference papers more broadly (bottom).

    Figure 10: Associate Chairs’ experience with respect to authoring CHI full papers (top) and SIGCHI full papers (bottom).

    Subcommittee | Mean | SD | Min | Max | 2024 | 2025 | 2026
    Learning, Education, and Families | 51.5 | 13.1 | 40.5 | 66.0 | 40.5 | 66.0 | 48.1
    Understanding People — Qualitative Methods | 44.2 | 2.7 | 41.2 | 46.3 | 46.3 | 41.2 | 45.0
    Privacy and Security | 43.4 | 6.9 | 36.2 | 50.0 | 50.0 | 36.2 | 43.9
    Accessibility and Aging | 42.5 | 4.2 | 38.6 | 46.9 | 41.9 | 46.9 | 38.6
    Specific Application Areas | 40.2 | 6.2 | 33.3 | 45.5 | 33.3 | 45.5 | 41.7
    Understanding People — Mixed and Alternative Methods | 37.2 | 8.2 | 30.8 | 46.5 | 30.8 | 46.5 | 34.3
    Understanding People — Statistical and Quantitative Methods | 35.3 | 8.4 | 26.1 | 42.4 | 37.5 | 26.1 | 42.4
    User Experience and Usability | 33.2 | 6.2 | 27.9 | 40.0 | 27.9 | 31.7 | 40.0
    Design | 32.5 | 9.4 | 25.0 | 43.0 | 29.6 | 25.0 | 43.0
    Critical Computing, Sustainability, and Social Justice | 32.5 | 6.4 | 27.3 | 39.6 | 39.6 | 30.6 | 27.3
    Interaction Beyond the Individual | 30.5 | 1.6 | 29.5 | 32.3 | 29.6 | 32.3 | 29.5
    Health | 27.6 | 7.5 | 20.7 | 35.6 | 26.5 | 35.6 | 20.7
    Visualization | 27.2 | 3.6 | 24.3 | 31.2 | 31.2 | 24.3 | 26.1
    Computational Interaction | 18.0 | 1.2 | 17.0 | 19.4 | 19.4 | 17.6 | 17.0
    Interacting with Devices: Interaction Techniques & Modalities | 15.4 | 4.1 | 10.7 | 18.4 | 17.1 | 18.4 | 10.7
    Games and Play | 12.1 | 1.7 | 10.5 | 13.8 | 12.0 | 10.5 | 13.8
    Developing Novel Devices: Hardware, Materials, and Fabrication | 11.1 | 3.5 | 7.4 | 14.3 | 14.3 | 7.4 | 11.5
    Blending Interaction: Engineering Interactive Systems & Tools | 5.6 | 2.7 | 3.4 | 8.7 | 8.7 | 3.4 | 4.8
    Table 1: Percentage of Associate Chairs who do not meet the Minimum Qualifications for Peer Review Roles, per subcommittee, over the last three years (CHI 2024–2026). All values are percentages.

    Across the dataset, Associate Chairs authored between 0 and 74 CHI full papers, with a mean of 5.9 and a median of 4.0. Notably, 694 Associate Chairs (9.7%) had not authored a CHI paper at the time they served in the role. When considering SIGCHI venues more broadly, Associate Chairs authored between 0 and 151 papers, with a mean of 13.9 and a median of 10.0. Even when expanding the scope to all SIGCHI-sponsored conferences, 198 Associate Chairs (2.8%) had not published a SIGCHI paper.

    The distributions shown in Figure 10 further illustrate the wide range of publication experience among Associate Chairs. While a small number of individuals have authored several dozen papers within the CHI or SIGCHI ecosystem, the majority of Associate Chairs have comparatively modest publication records.

    Having established the overall distribution of publication experience among Associate Chairs, the next question is whether this variation is stable over time or whether the composition of the program committee has changed across years. To examine this, we analyze the publication experience of Associate Chairs annually. Figures 11 and 12 show the distribution of prior publications for Associate Chairs serving each year since 2005. Figure 11 focuses on CHI full papers, while Figure 12 shows the corresponding distribution for full papers for all SIGCHI-sponsored conferences.

    Figure 11: Associate Chairs’ experience with respect to authoring CHI full papers over the years.

    Figures 11 and 12 reveal that the distribution of publication experience among Associate Chairs has remained relatively broad across the entire period. In most years, the program committee includes individuals with extensive publication records alongside others with only limited publication experience within CHI or the SIGCHI community.

    At the same time, the figures suggest gradual shifts in the composition of the program committee. In particular, the proportion of Associate Chairs with a high number of publications has increased over time, especially across SIGCHI venues. This reflects both the community’s overall growth and the increasing number of researchers with long-term publication histories in SIGCHI conferences.

    To better understand what these numbers imply for the committee’s qualification profile, we compare them against the Minimum Qualifications for Peer Review Roles defined for Associate Chairs. One criterion specifies that Associate Chairs should have authored at least five SIGCHI papers, including at least two CHI papers.

    Across the full dataset, 1026 Associate Chairs (14.3%) would not meet this requirement. This pattern appears consistently across years. Over the last decade, the share of Associate Chairs that would not meet the current qualification requirements has been 16.0% (178 of 1110) in 2026, 14.2% (122 of 857) in 2025, 14.6% (101 of 691) in 2024, 12.7% (60 of 474) in 2023, 14.0% (66 of 472) in 2022, 12.4% (65 of 526) in 2021, 12.4% (57 of 458) in 2020, 12.9% (55 of 426) in 2019, 12.4% (38 of 307) in 2018, 8.1% (20 of 246) in 2017, and 8.8% (19 of 217) in 2016.

    Taken together, these numbers indicate a gradual increase in the share of Associate Chairs who would not meet the current qualification requirements, rising from 8.8% in 2016 to 16.0% in 2026. In other words, the fraction of Associate Chairs with relatively limited prior publication records within CHI and SIGCHI has increased over time.

    This trend appears particularly notable given that the number of CHI publications has also grown substantially in recent years, with the conference publishing more papers than ever before. Thus, while the community continues to expand and produce more research output, the publication experience represented within the Associate Chair pool does not appear to increase proportionally.

    Figure 12: Associate Chairs’ experience with respect to authoring SIGCHI full papers over the years.

    While these yearly trends provide a temporal perspective on Associate Chair experience, they still aggregate across the entire program committee. Earlier, we observed that prior program committee experience varies substantially across subcommittees, even when the overall committee appears relatively stable. This raises the question of whether a similar pattern exists for publication experience. Next, we therefore examine how Associate Chair publication records differ across subcommittees.

    Again, we focus on the last three years (CHI 2024–2026) to examine the composition of Associate Chair publication experience at the level of individual subcommittees. Figure 13 shows the distribution of prior CHI full paper publications among Associate Chairs across subcommittees, while Figure 14 shows the corresponding distribution for SIGCHI-sponsored conferences more broadly.

    Figure 13: Associate Chairs’ experience with respect to authoring CHI full papers over the last 3 years per subcommittee.

    Looking first at CHI publications (Figure 13), substantial differences between subcommittees become visible. Across the examined years, the share of Associate Chairs with 10 or more prior CHI full papers per subcommittee ranges from as low as 2.5% to 44.8%, i.e., almost half. At the same time, the share of Associate Chairs with 0 or 1 prior CHI papers ranges from 3.4% to 57.4% per subcommittee; note that, under the new criteria, these individuals would not qualify to serve as ACs today. These ranges illustrate that some subcommittees are composed primarily of highly experienced CHI contributors, while others include a much larger proportion of Associate Chairs with comparatively limited CHI publication histories.

    When extending the analysis to SIGCHI venues more broadly (Figure 14), overall publication experience increases across nearly all subcommittees. Many Associate Chairs who have published relatively few CHI papers nonetheless have substantial publication records across other SIGCHI conferences. Across subcommittees, the share of Associate Chairs with 10 or more SIGCHI publications ranges from 20.0% to 85.2%, again highlighting substantial differences between communities. Looking at the lower end of the distribution (fewer than five SIGCHI papers, which would also fall below the current qualification guideline), the share of Associate Chairs ranges from 1.7% to 43.3% across subcommittees.

    These ranges make clear that the publication experience represented among Associate Chairs differs considerably between subcommittees. While some areas draw heavily on long-standing CHI contributors, others rely more strongly on researchers with shorter publication histories at CHI or within the broader SIGCHI community.

    However, the observed variation raises an important structural question for the conference. Associate Chairs play a central role in shaping the review process by selecting reviewers, moderating discussions, and interpreting evaluation criteria. If the publication experience represented across different subcommittees varies substantially, it may also influence how reviewing practices and expectations develop within those subcommittees. As a result, papers submitted to different subcommittees may be evaluated within somewhat different reviewing cultures and standards.

    This observation becomes particularly relevant when considering the qualification guidelines introduced by the CHI Steering Committee. According to the current Minimum Qualifications for Peer Review Roles, Associate Chairs are expected to have authored at least five SIGCHI papers, including two CHI papers. When applying this criterion to subcommittee-level data, the proportion of Associate Chairs who would not meet this requirement ranges from 3.4% to 66.0% across subcommittees during the examined period. Some subcommittees stand out for having a large share of ACs with little to no experience (see Table 1). 

    Taken together, these observations reinforce a broader pattern already visible in the earlier analyses. While CHI is commonly perceived as a single conference with a unified review process, in practice it operates through a set of subcommittees that differ in their community structures, reviewer pools, and publication cultures. Understanding these structural differences is therefore important when discussing how reviewing standards evolve and how consistent evaluation practices can be maintained across the conference.

    Figure 14: Associate Chairs’ experience with respect to authoring SIGCHI full papers over the last 3 years per subcommittee.

    Reflection on the Heroes

    While the previous sections focused on structural patterns and potential challenges in the organization, the program committee (Paper Chairs, Subcommittee Chairs, and Associate Chairs), the data also reveals another important aspect of the CHI community: a small group of individuals repeatedly contributes a substantial amount of service to the conference.

    Figure 15 illustrates how often the 2665 unique individuals have served on the CHI program committee over the last two decades. The distribution is highly skewed. Most people serve only once, and a smaller group serves two or three times. However, a very small number of individuals appear repeatedly over many years, taking on repeated service roles within the conference.

    Figure 15: Distribution of program committee service across individuals.

    While 38.31% (1,021) of members serve only once, a much smaller group repeatedly returns to support the conference: 15 individuals (0.56%) have served more than 12 times, as shown in Figure 15.

    Among those with the most extensive service records are researchers such as Michael Muller, Susan Fussell, Andy Wilson, Carl Gutwin, Effie Lai-Chong Law, and Patrick Baudisch, each of whom has been on the CHI program committee more than 13 times in 21 years. Other long-standing contributors include Jodi Forlizzi, John Zimmerman, Mark Rouncefield, Niklas Elmqvist, and Steven Feiner, each with 13 years of service, and Emmanuel Pietriga, Jeff Nichols, Per Ola Kristensson, and Volker Wulf, each with 12 years of service.

    The data, therefore, highlights the importance of this small group of highly dedicated contributors who repeatedly step forward to support the conference. Without this sustained commitment, it would be difficult to operate peer review at the scale CHI has reached today.

    Figure 16: Distribution of institutional affiliations of CHI program committee members. Each bar represents the number of institutions that appear a given number of times as affiliations of program committee members across the analyzed years. Selected institutions with particularly frequent representation are labeled.

    The data also reveals recurring institutional contributors to the CHI program committee (see Figure 16). While hundreds of institutions have been represented on the program committee over the years, the distribution is highly skewed. Most institutions contribute only a few program committee members, while a smaller group repeatedly contributes many members.

    Several institutions with long-standing HCI research groups appear frequently in the data, including Microsoft, Carnegie Mellon University, the University of Michigan, the University of Washington, Google, IBM, and Georgia Institute of Technology.

    The data therefore show that a relatively small group of institutions repeatedly serves on the program committee over many years. This sustained institutional engagement plays an important role in supporting the large-scale peer review process.

    Summarizing Reflection

    The Question of Qualifications

    These trends raise an important question about the program committee’s long-term sustainability: Who is qualified to serve as an Associate Chair?

    Historically, Associate Chairs have been selected based on strong publication records and prior reviewing experience. However, as the number of submissions continues to grow, the number of required Associate Chairs has increased rapidly. At the same time, this growth is not clearly reflected in a corresponding expansion of the pool of highly experienced reviewers.

    Our analysis suggests that this tension is already visible in the data. For many years, members of the community have expressed concerns about the quality of reviews, often arguing that reviewers without sufficient experience may struggle to evaluate complex research contributions. Just as undergraduate students are generally not expected to review scientific papers, Associate Chairs are expected to possess a certain level of expertise and familiarity with the field, especially at the scale at which CHI now operates. 

    Historically, the in-person program committee meeting provided an additional safeguard for review quality and reviewer selection. During these meetings, decisions were discussed collectively, and a broader group of experienced committee members could examine disagreements or unusual decisions. This process created a form of collective accountability and helped ensure that evaluation standards were applied consistently across the conference.

    Today, these physical program committee meetings are no longer part of the review process. While this change was necessary as the conference grew, it also removed an important mechanism for cross-checking decisions and aligning evaluation standards across subcommittees. As a result, other mechanisms must help ensure that the program committee collectively maintains strong expertise and accountability.

    To address this issue, the CHI Steering Committee recently introduced Minimum Qualifications for Peer Review Roles, which define baseline expectations for different peer review positions. For Associate Chairs, these guidelines include having authored at least five SIGCHI papers, including at least two CHI papers.

    When applying these criteria retrospectively to historical data, a noticeable share of past Associate Chairs would not meet these requirements today. While this situation has existed for many years, the proportion has increased over time, rising from roughly 9% in 2016 to around 16% in 2026.

    This creates a structural tension. Maintaining consistent review quality requires experienced evaluators, yet the scale of the conference demands continual expansion of the committee.

    The Hidden Issue: Subcommittee Divergence

    When examining the program committee as a whole, many metrics appear relatively stable. Measures such as publication experience, academic age, and prior committee participation show gradual trends over time. However, aggregated statistics hide an important underlying issue.

    When the data is examined at the level of individual subcommittees, the picture becomes much more uneven. Different subcommittees show substantially different patterns in publication history, reviewing experience, and participation in the program committee. Some subcommittees maintain a strong base of experienced CHI contributors, while others rely more heavily on newer researchers.

    The differences are not small. Across subcommittees, the share of Associate Chairs with extensive CHI publication records varies dramatically, as does the proportion of Associate Chairs with relatively limited prior CHI experience. Similar patterns appear when examining publication histories across the broader SIGCHI conference ecosystem.

    Subcommittees were originally introduced to ensure that submissions are evaluated by domain experts in a timely manner. However, independently evolving subcommittee communities can develop different implicit norms for evaluating research.

    In such a situation, expectations for what constitutes a strong paper may differ across subcommittees. Thus, evaluation criteria may gradually drift apart. As a result, two papers of similar quality could face very different reviewing conditions depending on the subcommittee to which they are assigned. This dynamic risks creating silos within the conference.

    A Risk of Internal Federation

    If CHI continues to grow while subcommittees evolve independently, the conference risks becoming less like a single venue and more like a loose federation of semi-independent conferences.

    SIGCHI already functions as a federation of research communities through its many conferences. Over the years, the community has created numerous specialized venues focused on particular topics within HCI. These conferences provide important spaces for focused discussion and domain-specific progress. CHI, however, has historically served a different role. It is the central gathering place for the HCI community, where ideas from across domains meet and influence one another.

    If subcommittees increasingly diverge in their reviewing cultures and expectations, CHI may gradually shift from a unified conference into a collection of loosely connected communities operating under a shared umbrella with different value systems.

    Looking Forward

    The growth of CHI is a success story for the HCI community. It reflects the expanding relevance of our field and the diversity of research being conducted worldwide. At the same time, this growth introduces structural challenges for the peer review system. The size of the program committee continues to grow, the pool of experienced reviewers remains limited, and subcommittees may gradually diverge in their reviewing cultures.

    The analysis presented here offers a historical perspective on how the composition of the program committee has evolved over the past two decades, including patterns in participation, experience, and qualifications. These observations illustrate how selection practices and qualification profiles have gradually shifted over time as the conference has grown.

    Our data-driven perspective is not meant to criticize the HCI or CHI community. Rather, it aims to support ongoing discussions on how to maintain a sustainable and fair peer-review process at scale. Decisions about the future structure of the reviewing system must ultimately consider many factors, including community diversity, domain expertise, and reviewing capacity.

    In conclusion, we aim to contribute one piece of evidence to inform broader, more holistic discussions about the future of CHI by providing a historical view of how the program committee has evolved.

  • CHI 2026

    The ACM (Association for Computing Machinery) CHI conference on Human Factors in Computing Systems is the leading international conference on Human-Computer Interaction. CHI 2026 will soon take place in Barcelona from the 13th to the 17th of April.

    The CHI Steering Committee would like to thank all of the volunteers who have been working on this conference for many years to bring our global community together in Europe this April. This conference series, and each particular conference, takes many years of work and planning. Work is now underway for CHI 2027, CHI 2028, CHI 2029, and CHI 2030. As a result of the scale of CHI, we book venues a long way out based on venue capacity projections, and our ongoing challenge is finding the right venue with this growth velocity (see the sections below).
     

    We book venues a long way out

    CHI has sold out two years running, which is a function of our accelerating growth. But it has also forced us to figure out where to go from here. The venue you experience at CHI this year was booked years ago. That is just the reality of operating at CHI conference scale. The spaces we need require booking years into the future, which requires us to make decisions based on our best read of what things will look like years from now.

    CHI has sold out before, such as in Paris in 2013, and the Steering Committee adapted its planning. When COVID-19 happened, we stepped back from our venue in Hawaii in 2020, with a promise to return. CHI 2021 pivoted to an online-only conference, and we stepped back from our venue in Yokohama, with another promise to return. Each year, the General Chairs and organising committees for CHI step into a physical location booked many years ago, along with a changing set of global circumstances. For all their hard work and grace, we thank them.

    Venue Capacity Projections

    When our site selection teams go looking for venues, they work from historical delegate numbers: how attendance has grown, where people travel from, what the program demands. It’s not a perfect science, and when growth accelerates the way it has, the projections don’t always keep up. We’re feeding everything we learn back into that process so we get better at it.

    Over the years, we have offered many opportunities for our community to get updates and contribute input, for example:

    Finding the right venue with this growth velocity is hard

    CHI is a big, complex conference with a program structure that puts heavy demands on a venue: parallel tracks, workshops, demos, the works. The venues that can handle all of that, at a registration fee that doesn’t price out our community, are few and far between. And there are fewer of them every year. We have hit a ceiling and we all need to reimagine future CHI conferences to accommodate these constraints.

    We want to hear from you!

    You can write or email the Chair of the Steering Committee at:

    Mailing Address:
    Professor Aaron Quigley
    Dean
    ANU College of Systems and Society
    The Australian National University
    Canberra ACT 2601 AUSTRALIA
    CRICOS Provider #00120C

    Email:
    aquigley@acm.org

    Or please come meet us at CHI 2026

    1. Reflections on Format Changes at CHI
      Thu, 16 Apr | 4:30 PM – 4:30 PM
    2. SIGCHI Townhall
      Wed, 15 Apr | 12:45 PM – 2:15 PM
    3. Upcoming changes to the CHI full paper peer review process: Community feedback session
      Wed, 15 Apr | 2:15 PM – 3:45 PM

    Aaron Quigley on behalf of the CHI Steering Committee

  • CHI Conference Peer Review Restructuring

    Since 2024, a working group of the CHI Steering Committee has been working on reimagining peer review. We have held community input sessions, had an online feedback forum, and have met regularly over the last 2 years. We created reports on what other conferences (within SIGCHI and the ACM, but also beyond) have been doing. We looked to other review processes (e.g., grants, faculty search, grad school application, University ranking) for inspiration. We used previous CHI submission data to run simulations. We engaged in role-playing exercises where we enacted different review structures and processes. This all culminated in an in-person meeting in January 2026, where we finalized our proposal for CHI full papers review processes moving forward. We sought input and feedback from various stakeholders and discussed it with the CHI Steering Committee and the SIGCHI Executive Committee. 

    Our proposal is a 5-point action plan, with interconnected solutions. 

    A diagram titled "CHI Conference Peer Review Restructuring: 5-Point Action Plan." It features five rectangular cards arranged horizontally, connected by forward-pointing arrows to indicate a sequence. Below the cards, a large arrow spans the width of the diagram pointing left to right, containing the text "TOWARDS AN IMPROVED CHI REVIEW ECOSYSTEM."

The five steps are:

1. QUALIFICATIONS: Features a shield icon with a checkmark. Text reads: "Ensuring Minimum Standards for all Reviewers & ACs."

2. RESPONSIBILITY: Features an icon of a handshake over a document. Text reads: "Defining Clear Policy & Expectations for Reviewers."

3. SCREENING (ADRs): Features an icon of a funnel sorting items onto a clipboard. Text reads: "Implementing Rubric-based Assisted Desk Rejects."

4. STREAMLINING: Features a flowchart icon. Sub-icons detail specific changes: a broken chain link ("Dissolving Subcommittees"), a figure with a star ("Senior ACs"), and three figures ("1AC + 3 Reviews"). Text reads: "Optimizing the Full Review Process."

5. RECOGNITION: Features a trophy icon with a star. Text reads: "Highlighting & Rewarding Outstanding Reviewers."

    Overview of the Review Restructuring

    The CHI conference is transitioning toward a fairer, more sustainable, and highly structured peer review ecosystem. This multi-stage model is designed to maintain our rigorous scientific standards at scale, while fundamentally shifting how we manage collective effort. By structurally aligning our reviewing capacity with submission volume, our goal is to foster a review culture that prioritizes community wellbeing and actively combats the toxic behaviours and reviewer burnout that can emerge in over-strained, high-stakes environments.

    Peer Review Only Works When Peers Review

    While operationalized at the moment of submission, this principle is at the core of the entire restructuring. A system in which many authors submit work but only a few contribute to evaluating it is neither fair nor sustainable. We are establishing that contributing to the review process is both a professional and ethical responsibility for anyone who benefits from it. By distributing this labour equitably, we aim for authors to provide the same level of thoughtful, constructive feedback that they themselves receive, preventing a small group of volunteers from carrying a disproportionate burden, and ensuring that our review process scales with increasing submissions. This operating principle works in tandem with all parts of the proposed restructuring. These 5 pillars are:

    1. Enforcing Minimum Qualifications

    Focus: Making clear expectations on qualifications to serve in various review roles, with a clear pathway to obtaining senior positions.

    Because decision-making power and review expectations are shifting, we must guarantee that all submissions receive quality evaluations. We have established strict, shared expectations and minimum qualifications for Reviewers, Associate Chairs, Senior Associate Chairs, and Paper Chairs. This ensures that individuals evaluating the work have the requisite experience to understand its methodological context and research contribution.

    2. Sharing Review Responsibility among Submitting Authors

    Focus: Establishing the reviewer pool at the moment of submission.

    Peer review is a collective endeavour that only functions when peers actively participate. Under the new model, contributing to the reviewer pool is no longer optional; it is a structural requirement for submission to the conference.

    When a paper is submitted, the authors must explicitly declare four qualified individuals from their author list who will collectively complete up to four full paper reviews for the conference (an author’s name can be listed multiple times). This ensures that every submitted paper provides the reviewing labour required to evaluate it. Failing to fulfill these declared review responsibilities will result in an automatic desk rejection of the submissions of the authors who failed to deliver the reviews.
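
    To make this submission-time requirement concrete, here is a minimal sketch of the kind of check a submission system could run, assuming each submission carries an author list and four declared reviewer names; the data shapes and function are our own illustration, not the PCS implementation.

```python
def validate_declared_reviewers(authors, declared, required=4):
    """Check that exactly `required` reviewer names are declared and that
    every declared name appears on the author list. Repeated names are
    allowed, since one author may take on several of the reviews."""
    problems = []
    if len(declared) != required:
        problems.append(f"expected {required} declared reviewers, got {len(declared)}")
    unknown = [name for name in declared if name not in authors]
    if unknown:
        problems.append(f"not on the author list: {', '.join(unknown)}")
    return problems  # an empty list means the declaration is valid

# Example usage with made-up names.
authors = ["A. Author", "B. Author", "C. Author"]
declared = ["A. Author", "A. Author", "B. Author", "D. Someone"]
print(validate_declared_reviewers(authors, declared))
```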

    3. Screening Papers: Rubric-Based Assisted Desk Reject (ADR)

    Focus: A rapid, distributed filter to protect reviewer time and ensure baseline quality.

    To prevent volunteers from spending thousands of hours evaluating papers that are so far from acceptable as to render external reviews unnecessary, all submissions will first pass through a rapid triage phase. The reviewers for this triage phase are drawn from submitting author volunteers. By distributing this initial triage effort across the authors who are submitting to the conference, we ensure the workload remains equitable, and scales with increasing submission numbers.

    In this phase, papers are algorithmically matched to the reviewer pool for a fast-pass evaluation. Reviewers will not write extensive text; instead, they will evaluate the paper using a standardized, 5-level rubric based on core ACM criteria: Originality, Correctness, Novelty, Importance, and Clarity of Exposition. Papers that meet a threshold will proceed to full peer review. Papers that do not will be assisted desk-rejected (ADR) by the paper chairs.

    When a paper is submitted, the authors must explicitly declare one qualified individual from their author list to triage up to 10 papers using a rubric based on the ACM criteria. Failing to deliver the assessments on time will result in an automatic desk rejection of the submissions of the authors who did not deliver them.
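
    As an illustration of the rubric-based triage step, the sketch below aggregates 1–5 ratings on the five criteria and compares the result against a pass threshold; the simple mean aggregation and the threshold value are assumptions for illustration, not the adopted policy.

```python
CRITERIA = ["Originality", "Correctness", "Novelty", "Importance", "Clarity of Exposition"]

def triage_decision(ratings, threshold=3.0):
    """Aggregate 1-5 rubric ratings (one per criterion) into a pass/ADR outcome.
    The mean aggregation and the 3.0 threshold are illustrative assumptions."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {', '.join(missing)}")
    score = sum(ratings[c] for c in CRITERIA) / len(CRITERIA)
    return "proceed to full review" if score >= threshold else "assisted desk reject (ADR)"

# Example usage with made-up ratings (mean = 3.2, so the paper proceeds).
example = {"Originality": 4, "Correctness": 3, "Novelty": 3,
           "Importance": 2, "Clarity of Exposition": 4}
print(triage_decision(example))
```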

    4. Streamlining Full Peer Review & Synthesis

    Focus: Deep-dive evaluation for papers that pass through triage.

    Papers that pass the ADR triage phase enter the traditional, rigorous peer review cycle, but with a streamlined management structure. We are shifting from the historical 1AC/2AC model to a single Primary AC model.

    The Primary AC acts as the review manager and synthesizer, similar to the old 1AC role. Papers will receive three substantive external reviews from a qualified reviewer pool. The Primary AC will then evaluate these three reviews, guide the discussion, and write a single, cohesive meta-review. If the consensus points toward an actionable revision, the paper will proceed to the Revise and Resubmit (RR) phase, as in prior years.

    5. Recognizing and Rewarding Service

    Focus: Making review efforts visible.

    This new model requires significant, dedicated effort from the community. We are committed to making this invisible labour visible. CHI is introducing forms of recognition to honour this work, including public lists of Outstanding Reviewers integrated into the proceedings’ front matter, as well as published summary statistics highlighting the scale of the collective effort.

    We Welcome Input and Feedback

    Now, we are reaching out to the larger CHI community for your input and feedback. You may provide feedback in person at CHI 2026 (Wednesday afternoon at 2:15 in the Auditorium), or via this online form.

  • Open Call 2026 for Expressions of Interest for CHI Leadership Roles (CHI 2028 and beyond)

    Call closes 16 February 2026

    TL;DR The CHI Steering Committee (SC) is seeking expressions of interest (self-nomination or nomination of others) from individuals interested in serving the CHI community in future key leadership roles, including CHI conference General Chairs and Technical Program Chairs for CHI 2028 and beyond. 

    CHI Conference Leadership 

    We invite Expressions of Interest and nominations from individuals willing to take on CHI conference leadership roles, specifically General Chair (GC) and/or Technical Program Chair (TPC), for CHI 2028 and beyond. 

    Work in these roles is rewarding yet substantial. Tasks are expected to begin ~2 years prior to the appointed conference, with more intensive work in the year leading up to the conference, and wrap-up tasks in the 6 months following the conference. 

    While we encourage all members of our community to play their part in shaping CHI, our current call is focused on identifying individuals willing to lead CHI 2028 who can demonstrate experience in supporting, cultivating, and welcoming participation from the Global South, especially South Africa, the location for CHI 2028. See below for role-specific information.

    General Chair

    The key overall responsibilities of General Chairs include:

    • Create and lead a diverse team to deliver the organisational aspects of the conference (e.g., managing 10+ chairs across venues like Papers, Meet-ups, etc.; crafting a positive experience for attendees (usually ~4.5K) and the local community; tracking and ensuring budgets, etc.).
    • Support the overall planned conference direction in line with the ‘Core to CHI’ policy and the strategic direction provided by the CHI Steering Committee.
    • Manage the CHI conference budget in conjunction with ACM.
    • Ensure that appropriate SIGCHI policies are adhered to in delivering the overall conference experience.
    • Liaise with the logistics contractor (currently Executive Events) to oversee all logistical details (e.g., managing rare/unforeseen circumstances like a pandemic, logistical decision-making related to physical spaces, etc.).
    • Work effectively with the Technical Program Chairs and other organisation committee chairs not under TPC oversight (e.g., Student Mentorship Program, Keynote Chairs) to deliver the conference experience.
    • Work effectively with the CHI Steering and SIGCHI Executive Committees by ensuring that key decisions are vetted by the SC/EC and on time.

    Technical Program Chair

    The key overall responsibilities for Technical Program Chairs include:

    • Work effectively with the CHI Steering and SIGCHI Executive Committees by ensuring that key decisions are vetted by the SC/EC and on time.
    • Create and lead a diverse team to deliver the technical program for the conference.
    • Liaise with ACM to oversee the publication process.
    • Create and manage the technical program schedule and content for the conference, leveraging existing infrastructure. 
    • Manage venue space allocation and layouts. 
    • Ensure that appropriate regulations and procedures are followed when delivering the technical content.
    • Work effectively with the General Chairs to deliver the conference program schedule.

  • Welcome to the CHI 2027 Chairs


    The CHI Steering Committee and the SIGCHI Executive Committee are pleased to announce our CHI 2027 leadership team. 

    CHI 2027 General Chairs

    Amy Ogan, Associate Professor of Learning Sciences, Human-Computer Interaction Institute at the School of Computer Science, Carnegie Mellon University

    Anind K. Dey, Professor and Dean of the Information School and Adjunct Professor in both the Allen School of Computer Science and Engineering and the Department of Human-Centered Design and Engineering, University of Washington.

    CHI 2027 Technical Program Chairs

    Rosta Farzan, Associate Dean of Engaged Scholarship, Professor, Department of Informatics and Networked Systems, University of Pittsburgh

    Sven Mayer, Head of the Chair for Human-AI Interaction, Professor of Computer Science at the TU Dortmund University.

    The 2027 ACM Conference on Human Factors in Computing Systems is scheduled for May 10-14, 2027, in Pittsburgh, PA, at the David L. Lawrence Convention Center. 

    Please join us in wishing Amy, Anind, Rosta, and Sven well in taking on these leadership roles. Please keep an eye out for their upcoming open call for organising committee member roles for 2027, as this is your opportunity to help shape the future of CHI. 

  • CHI ’25: Overview of the Post-conference Survey

    Thank you to everyone who participated in the CHI post-conference survey, which was open from May 12 to May 30. Overall, 908 people participated in the survey, representing approximately 17% of the total number of CHI attendees (5,309 in total). 

    This article provides a concise overview of the survey findings, with interpretations based on respondents’ open comments.

    Note: In this blog post, “participants” should be interpreted as survey respondents, who represent only a subset of actual participants.

    Overall experience

    The following chart shows the distribution of respondents based on their type of attendance at the conference. The green bar represents respondents who attended the conference in person, while the brown bar represents those who attended virtually. Approximately 6% of survey respondents attended virtually, which is close to the actual 8% virtual attendance for CHI ’25.

    Bar chart showing the number of attendees on the Y axis against attendance type. Two bars are shown. On the left, the bar for “attendance type = virtual” reaches almost 60. On the right, the bar for “attendance type = site” is slightly above 800.

    In the survey, 52 respondents shared their reasons for choosing virtual attendance over on-site attendance. These reasons are reported in the table below (respondents were allowed to select multiple reasons).

    What factor(s) influenced your decision to attend the conference virtually rather than in person?
    | 34.62% | My decision to attend virtually was influenced by ecological reasons. |
    | 5.77% | My decision to attend virtually was influenced by cultural reasons. |
    | 13.46% | My decision to attend virtually was influenced by medical reasons. |
    | 44.23% | My decision to attend virtually was influenced by personal reasons. |
    | 28.85% | My decision to attend virtually was for other reasons. |

    The chart below shows responses to the question “How valuable was your CHI experience?” broken down by attendance type: on-site (left) and virtual (right). On-site attendance was rated more highly, largely due to opportunities for networking, such as reconnecting with colleagues and meeting new people, and for engaging with timely, high-quality scientific content. In contrast, feedback from virtual attendees was less positive. While this year’s option for remote presenters to give livestreamed talks was appreciated, survey responses expressed a desire for greater interaction, particularly the ability for presenters to see and engage with their audience. Virtual participants also regretted that online offerings were not clearly labelled in the program; they would have liked more sessions to be livestreamed or recorded, and they found that the limited activity on the Discord server hindered networking opportunities. The value of in-person attendance is further reflected in responses to the question posed to on-site attendees, “If the conference had taken place fully online, with only virtual presentations, would you still have attended?”: 56% of on-site respondents answered no.

    Two histograms, one per attendance type, that report on the distribution of answers to the question “How valuable was your CHI experience?”. Possible values are along the x-axis with min value being “Not at all valuable” and max value being “Very valuable”. Counts per value are reported along the y-axis. The histograms show two opposite trends. The left chart shows responses from 835 on-site attendees; most rated the experience as "Valuable" or "Very valuable," with around 400 selecting "Valuable" and about 350 selecting "Very valuable." Only a small number rated it as "Not valuable" or "Not at all valuable." The right chart shows responses from 57 virtual attendees; responses are more evenly distributed across all five categories, with slightly more participants selecting "Valuable," followed by "Very valuable" and "Neutral." Fewer respondents selected "Not valuable" or "Not at all valuable." A color-coded legend indicates response categories from red (least valuable) to dark blue (most valuable).

    Conference Activities and Tools

    The survey included a series of questions in which respondents rated various aspects of the conference, including sessions and activities (such as papers, demos, posters, panels, SIGs, student competitions, workshops, and keynotes), as well as events (receptions, exhibits, job fairs, and town hall meeting).
    On-site attendees gave largely positive feedback on sessions and activities, with most venues receiving similarly high appreciation. While typical challenges of a large-scale conference like CHI (such as overcrowded rooms and numerous parallel sessions) were reported, in-person participants gave positive evaluations of nearly all activities. One exception was the opening keynote, which received mixed appreciation and was seen by some as misaligned with the CHI audience. As in previous years (see, for example, 2024 and 2023), the Job Fair was not very well received, with participants noting a lack of energy. Finally, participants expressed a desire for the SIGCHI Town Hall to allow more time for discussion.

    Feedback from virtual attendees on sessions and activities was limited, echoing concerns already mentioned: a lack of clarity about what content was available online and limited opportunities for engaging with the audience.
    Respondents also evaluated how well different tools and infrastructure met their needs, including the conference website, the Progressive Web App (PWA available at https://programs.sigchi.org/) used to navigate the program, and other elements such as livestreamed and recorded content, the Discord server, and Q&A options for both on-site and remote presenters.

    Feedback on the PWA was consistent with last year’s: participants found it useful but reported difficulties with navigation, login requirements, and reliance on a stable internet connection. Comments also pointed to suboptimal handling of Q&A. While attendees were encouraged to submit questions via the PWA, the feature seemed underutilized. On-site participants could ask questions in person, but several expressed frustration regarding the very limited time allocated for Q&A during on-site, live sessions.

    Many virtual participants would like to see more engagement on the Discord server. Some respondents even reported being unaware of its existence.

    Participants’ evaluations of these various tools and aspects are shown in the charts below, illustrating the challenges of effectively supporting diverse participation needs at CHI scale.

    Eight bar charts laid out as a 2x4 matrix that report on the distribution of answers to question “How did the following meet your needs as an attendee?”. Histograms are labeled: PWA Navigation, Website, Streamed Content, Recorded Content, In person QAs, PWA QAs, Paper videos and Discord. Each chart includes five rating categories (Poor, Fair, Neutral, Good, Excellent) and one for N/A, color-coded from red (Poor) to dark blue (Excellent). Counts are shown on the y-axis, and each chart includes the number of responses.
- PWA Navigation (n=696): Most respondents rated navigation as "Good" or "Excellent." Around one sixth selected "Poor" or "Fair."
- Website (n=698): A majority rated it as "Good" or "Excellent."
- Streamed Content (n=689): A vast majority of N/As. Other ratings lean more towards "Good" or "Excellent" than "Fair" and "Poor".
- Recorded Content (n=686): Similar to streamed content, with high ratings overall.
- In-Person Q&A (n=683): Most responses were positive; "Good" was the most frequent rating.
- PWA Q&A (n=686): A vast majority of N/As. The other ratings are pretty balanced across categories, with a slight majority for "Good".
- Paper Videos (n=684): A vast majority of N/As. The other ratings are pretty balanced across categories, with a slight majority for "Good".
- Discord (n=684): A vast majority of N/As. The other ratings are fairly balanced across categories, leaning slightly towards "Good" and "Excellent", with a slight majority for "Good".

    To support asynchronous participation, presenters were required to submit pre-recorded videos. For papers, case studies, and alt.chi, these videos were limited to 10 minutes. Posters (including LBW, student competitions, and the Doctoral Consortium) and demos were limited to 3 minutes. Survey respondents were asked to evaluate whether these durations were appropriate from both attendee and presenter perspectives, and share any comment they might have.

    Half of the respondents reported that the video format worked well for them. Among the remaining half, many stated that they had not watched any videos. Comments from attendees indicated that while the videos are good for asynchronous viewing, some questioned whether the preparation effort was worth the benefit. A few suggested the duration requirements could be flexible, as viewers can choose what to skip. From the presenters’ perspective, open comments pointed to the effort required to produce the videos, which many considered too high. Many recommended that live presentations be recorded instead of requiring separate pre-recorded videos.

    Accessibility

    Approximately 3% of survey respondents requested an accessibility-related feature for the conference. All participants had the opportunity to rate various aspects of the conference, with their responses summarized in the chart below. Open comments help contextualize these ratings, revealing frustrations with the number of requirements placed on authors during the submission process, and particularly those related to TAPS.

    Nine bar charts laid out as a 2x5 matrix that report on the distribution of answers to the question “How accessible did you find the following conferencing aspects?”. Histograms are labeled: Submission Process, Registration, Transportation to venue, Navigation inside venue, Session presentations, QAs, Social Activities, Restrooms and Catering. Each chart includes five rating categories (Poor, Fair, Neutral, Good, Excellent) and one for N/A, color-coded from red (Poor) to dark blue (Excellent). Counts are shown on the y-axis, and each chart includes the number of responses.
Registration, Transportation to venue, Session Presentations, and Restrooms share a similar profile: most respondents rated these aspects as "Good" or "Excellent", with very few "Poor" or "Fair" ratings.
Submission Process, Navigation Inside Venue, QAs, and Social Activities: most respondents rated these aspects as "Good" or "Excellent", but there is a more notable number of "Poor" or "Fair" ratings (from 25 to 50 in each category).
Catering has the most mixed profile, with about 2/3 of respondents rating it as "Good" or "Excellent" and about 1/3 as "Poor" or "Fair".

    Additional suggestions included providing more food options for special diets; some respondents suggested labelling allergens more clearly and providing a dedicated table for dietary-restricted meals. Respondents also expressed concern that social events, which are organized independently of the official CHI program, felt exclusionary to some attendees.

    The survey also asked about both positive and negative experiences related to accessibility. On the positive side, attendees especially appreciated quiet/sensory rooms, the availability of sign language interpreters, and reserved seating in paper sessions. On the negative side, respondents cited navigation challenges within the convention center, overcrowded rooms, and a general lack of seating for those unable to stand for extended periods.

    Willingness to attend CHI ’26 and CHI ’27

    The charts below show responses to the final questions regarding the likelihood of attending CHI ’26 in Barcelona (Spain) and CHI ’27 in Pittsburgh (USA). The results reveal a marked difference in projected attendance based on location, with over 100 respondents expressing significant reservations about traveling to the United States.

    Side-by-side bar charts comparing anticipated attendance at CHI conferences in 2026 and 2027 based on location.
- Left chart (CHI 2026 in Barcelona, Spain):
The most common response is "Will probably attend physically" (around 410 respondents), followed by "Will quite certainly attend physically" (about 100). Fewer respondents indicated "Will not attend" (around 80), "Will probably attend virtually" (about 35), and "Will quite certainly attend virtually" (about 15).
- Right chart (CHI 2027 in Pittsburgh, USA):
The largest group indicated "Will not attend" (around 230), followed closely by "Will probably attend physically" (about 220), and then "Will quite certainly attend physically" (around 70). Virtual attendance options were chosen less frequently: "Will probably attend virtually" (about 90) and "Will quite certainly attend virtually" (about 25).
A legend below the charts clarifies the color coding for each response category.

    ACM Open

    This year, the survey also included three questions to assess respondents’ awareness and understanding of the ACM Open initiative and its implications for Article Processing Charges (APCs). Responses are summarized in the charts below.

    Three bar charts showing responses related to ACM Open participation and awareness of Article Processing Charge (APC) coverage, with a color-coded legend "I don’t know" (gray), "No" (green), and "Yes" (blue):
- Left chart – "Does your institution participate in ACM Open?"
Out of 649 respondents: the majority (over 400) answered "Yes", about 150 responded "I don't know", around 130 responded "No".
- Middle chart – "Authors not in ACM Open institutions may be covered by APC waivers and discounts. Are you covered?"
Out of 422 respondents: a large majority (over 330) answered "No", a smaller group (about 80) answered "Yes".
- Right chart – "Has your institution said anything to you about whether or not it will cover APCs?"
Out of 548 respondents: most answered "I don't know" (over 300), about 140 said "No", around 70 said "Yes".

    The survey collected a mix of responses voicing enthusiasm, confusion, and concern around the changes that will accompany ACM Open, ACM’s plan to move to 100% open access starting January 1, 2026. The cost to authors, particularly early-career and industry authors and those from academic institutions not yet covered, remains the key point of concern, and one that SIGCHI (alongside other SIGs) is preparing to address. SIGCHI is committed to supporting the community through this transition and will share more details on the related workflows, including updates on waivers and support for covering APCs, as we approach 2026.

    Thank you!

    As a final note, we would like to thank all the respondents who took the time to provide valuable feedback. We have carefully read and analyzed all the detailed comments, categorizing them into relevant themes and summarizing the key points. The input and insights shared by the attendees are very valuable to everyone involved in SIGCHI, the CHI Steering Committee, and the overall organization of CHI.

  • CHI Steering Committee, Site Selection Consultation

    TL;DR – To provide input for this consultation, please fill out our survey. The survey will be open for responses until 15 November 2024. (As the survey notes, one aim is to look for venues in the Global South and outside of our standard rotation.)

    Selecting a site for CHI conferences requires balancing important, and often competing, concerns. Looking forward to CHI 2028, CHI 2029, and beyond, the CHI Steering Committee is seeking input on potential CHI locations, with a specific call to look beyond the obvious large cities where CHI has been held in the past. This consultation, which will be open until 15 November 2024, will help the CHI Steering Committee request proposals from a broader and more diverse range of locations for the coming years.

    The site selection process is described in more detail on the CHI Steering Committee Blog. That post notes, for example, that “As per SIGCHI EC decision, we rotate the conference location through Asia, Eastern North America, Europe, Western North America, and a wildcard year.” However, there is much more in the post.

    This is the first time the CHI Steering Committee has solicited input from the community on where to hold CHI in the future. Separately, the SIGCHI EC has had a working group looking at site selection for SIGCHI conferences more generally.

    The CHI Steering Committee has a set of core values that determine which locations will be considered, although these values are often challenging to balance.

    Safety

    The safety of attendees is a key concern for site selection, with many potential dimensions. The CHI Steering Committee’s critical values here include safety for the LGBTQIA+ community, gender equality and safety for all genders, and protection from violence and crime. Issues of safety will be considered across a broad range of dimensions and will be used to exclude potential locations.

    Access

    Access and accessibility of a location are a key concern for site selection, taking into account that accessibility regulations and standards are often specific and specialised to each location and region. Broader access, including connections by air or train, international travel restrictions and visas, and any other barriers to access, will be considered and will be used to exclude potential locations.

    Sustainability

    CHI is committed to promoting environmental sustainability wherever possible. Given the international scope of CHI attendees, one strategy we have pursued is to move the CHI location between the different population centers where CHI attendees live and work, reducing our carbon footprint where we can, while recognising that every location choice will be distant for some people and geographies. We also look for sites that will be good partners in other aspects of environmental sustainability, including sourcing local foods, shortening supply chains for conference supplies, and reducing the need for air conditioning and other power-consuming technologies.

    How can you have input?

    To provide input for this targeted consultation, please fill out our survey. The survey will be open for responses until 15 November 2024.

  • New CHI Format to launch at CHI 2026

    At CHI 2024, we brought a plan to the Town Hall for a major format change to CHI. We also discussed these changes and the motivation behind them in a blog post a few months ago. To summarize, we are planning to streamline the CHI conference experience by running the program only on weekdays and simplifying the presentation formats. Papers will be delivered in the mornings, and the afternoons will consist of interactive content divided into five categories. In total, there are six submission areas:

    1. Papers
    2. Panels
    3. Meet Up
    4. Posters
    5. Workshops 
    6. Interactive demos

    There are several reasons to make this change. It is the first of many iterations to optimize our conference experience as our community and scholarship grow. The new format opens up a wider variety of convention centers and other meeting places, which both enables us to hold the conference in different locations and reduces costs. Avoiding weekend space rental also reduces costs. As we have documented in previous blog posts, CHI has had to make several accommodations to try to break even financially, and post-pandemic it has been running at a deficit; many of the proposed changes are designed to support that mission of fiscal responsibility. Feedback from CHI participants also often mentions the complexity of the program, due in particular to the large number of different tracks. Reducing the number of submission types and renaming them to be more self-explanatory is intended to simplify the program and make it easier for newcomers to understand.

    We intend the Meet Up format to be very flexible, providing some of the value people have found in our traditional non-paper formats. For example, an organizer could propose a Meet Up organized around a specific case study or research tool, or even purely for networking purposes.

    Regarding the Doctoral Consortium track (historically an exclusive meeting between 10-20 PhD students and selected CHI mentors), we are opening this format up by creating a new Chair position that will program mentoring activities of interest to the hundreds of students who attend CHI annually, rather than focusing only on a small group of students. Moreover, SIGCHI sponsors ~25 specialized conferences annually, and most of them have a doctoral consortium track that members of the community can apply for. The advantage of these doctoral consortia is that PhD students benefit from mentors in their direct area of expertise, which has been harder to guarantee at CHI since it spans all areas of human-computer interaction.

    We presented these changes at CHI 2024, and there was an active discussion of the pros and cons of this format. One concern was that papers would have to compete with many other tracks and activities at CHI. Our new proposal mitigates this by scheduling papers in a different part of the day that does not compete with other activities. With this approach we will have a similar expected number of paper sessions, while ensuring that they do not conflict with other important aspects of the conference experience (panels, networking, workshops, etc.). People also reflected on the positives of this newly streamlined format and the flexibility it allows for organizing the conference. Outside of the Town Hall, we have held online discussion sessions with many community members and had individual conversations with CHI contributors.

    CHI is always planning a few years ahead when contracting for space. As such, we are making final decisions for CHI 2026 in Barcelona, including exactly which space we are contracting for in the conference center. With that in mind, and considering the feedback we received, the Steering Committee is moving forward with a pilot of this new format for CHI 2026.

    Many of the specific organizational details will be turned over to the team running CHI 2026, with more details to appear on their page, including CfPs. If you have questions or comments about these changes, please add a comment to this open document.

    We understand that any change to something as established and large as CHI is going to come with uncertainties and challenges. That being said, we see these changes as positive for both the intellectual and financial health of the conference, and we are confident the community will work with us to manage them and make the format productive for future generations of CHI. In classic HCI tradition, this change reflects an iterative approach: we want to understand how it fares with the community as they experience CHI 2026. This is not expected to be the final outcome, but the start of a new iteration.

    We invite everyone in the global HCI community to take advantage of these changes and to use this opportunity to help reimagine the conference experience for both new and established members.