
Volume 26, No. 3, Art. 5 – September 2025

Navigating Consensus in Team-Based Qualitative Research: Challenges and Strategies for Rigorous Analysis

Sean N. Halpin

Abstract: Many researchers presume that team-based qualitative research improves rigor, deepens meaning, and reduces bias by integrating multiple perspectives. Yet, researchers seldom challenge this belief. In the current paper, I critique these assumptions, examining challenges that arise from power imbalances, pressure to align, and bargaining within research teams. Drawing from qualitative methodology, psychology, and epistemic justice, I argue that group coding can limit meaning-making, discourage dissent, and reinforce prevailing perspectives. Examining team dynamics reveals how forced consensus weakens trustworthiness. Instead of treating coder consensus as a marker of rigor, researchers need a reflexive approach that prioritizes transparency, structured debate, and integrity. I propose strategies for reducing bias, including deliberate team structures, audit trails, and clear steps for resolving interpretive differences.

Key words: qualitative coding; qualitative health research; quality criteria; rigor; trustworthiness

Table of Contents

1. Introduction

2. The Myth of Team-Based Rigor: Assumptions vs. Reality

2.1 Assumption 1: More coders mean more valid findings

2.2 Assumption 2: Group coding enhances reflexivity

2.3 Assumption 3: Disagreements lead to richer interpretations

3. Conceptual Framework: Understanding Power, Conformity, and Bias in Team-Based Analysis

3.1 Power asymmetry in research teams

3.2 Conformity and groupthink in coding decisions

3.3 The tit-for-tat problem: Strategic compromise in coding decisions

4. How These Issues Impact Rigor and Trustworthiness

4.1 Credibility and dependability

4.2 Transferability

4.3 Transparency and reflexivity

5. Strategies for Mitigating Bias in Team-Based Analysis

5.1 Structuring team-based analysis to reduce power imbalances

5.2 Addressing conformity and false consensus

5.3 Combatting the tit-for-tat effect

5.4 Maintaining an audit trail for transparency and reflexivity

6. Discussion: Beyond Team-Based Qualitative Rigor

References

Author

Citation

 

1. Introduction

Team-based qualitative research is positioned as a method that strengthens rigor, deepens meaning, and reduces researcher bias (CRESWELL & POTH, 2016; O'CONNOR & JOFFE, 2020). Many researchers assume involving multiple coders or analysts improves validity by limiting subjectivity and introducing diverse perspectives into analysis (BRAUN & CLARKE, 2021; NOWELL, NORRIS, WHITE & MOULES, 2017). However, few have examined these assumptions or considered that team-based analysis often creates obstacles that weaken trustworthiness. Power dynamics within research teams, pressure to conform in coding, and bargaining over how to analyze data shape outcomes in ways that obscure transparency and misalign with the epistemology of much of qualitative inquiry (FRICKER, 2007; JANIS, 1982). As such, researchers must carefully assess group processes to ensure they support meaning-making rather than distort findings. Addressing these challenges clarifies how rigor operates in qualitative research and prompts a reevaluation of whether consensus among team members strengthens methodology (DENZIN & LINCOLN, 2011; TRACY, 2010). [1]

As more researchers use team-based qualitative analysis, few have examined the hidden challenges of coding in groups. While qualitative research depends on researcher insight, team-based analysis often assumes that coder agreement signals rigor rather than reflecting power struggles, back-and-forth decision-making, or compromise (BRAUN & CLARKE, 2021; O'CONNOR & JOFFE, 2020). Hierarchies within research teams can privilege certain voices, allowing senior researchers or those with greater influence to shape coding outcomes (BOURDIEU, 1988 [1984]; FRICKER, 2007). Pressure to conform may push team members to adjust their coding to fit the dominant view, limiting debate and weakening analysis (JANIS, 1982; KAHNEMAN, 2011). Analysts may also engage in unspoken tit-for-tat exchanges, where coding conflicts are traded instead of explored, reducing consistency (NOWELL et al., 2017). These challenges call into question whether team-based analysis strengthens qualitative rigor or reinforces existing views. Without deeper scrutiny, qualitative research risks adopting a positivist approach that values coder alignment over methodological clarity and research transparency (DENZIN & LINCOLN, 2011; TRACY, 2010). [2]

I challenge the common belief that team-based qualitative research improves rigor and examine methodological obstacles tied to power, pressure to conform, and bias in group analysis. While prior work has questioned whether inter-coder agreement (ICA) belongs in qualitative research (HALPIN, 2024; O'CONNOR & JOFFE, 2020), a broader critique of coding structures remains necessary. Drawing from my experiences in qualitative research teams and relevant literature, I analyze how group dynamics influence coding choices and shape meaning. Rather than rejecting team-based approaches, I advocate for a reflexive, structured framework that reduces power imbalances and forced consensus while offering strategies to address these risks (BRAUN & CLARKE, 2024; LINCOLN & GUBA, 1985). By exposing hidden challenges in team-based analysis, I aim to strengthen conversations on trustworthiness in qualitative research and call for greater transparency in structuring and justifying coding practices (MORSE, 2015; NOWELL et al., 2017). [3]

Core ideas behind team-based qualitative research remain largely unchallenged, despite a growing body of literature questioning traditional markers of rigor (BRAUN & CLARKE, 2021; O'CONNOR & JOFFE, 2020; TRACY, 2010). Many studies on qualitative rigor treat ICA, a statistical measure of the degree to which multiple researchers' coding of the same data aligns, as a marker of trustworthiness, though scholars argue ICA does not fit qualitative approaches that emphasize researcher perspective and reflexivity (HALPIN, 2024; MORSE, 2015). Beyond ICA, few have explored how group coding shapes analytic outcomes. Studies on power imbalances in research teams show how some voices hold greater sway, sidelining alternative perspectives (BOURDIEU, 1988 [1984]; FRICKER, 2007). Social psychological research on group decision-making also highlights the risks of pressure to conform and tactical compromise, raising concerns over whether team-based approaches expand or limit diverse thinking (JANIS, 1982; KAHNEMAN, 2011). In this paper, I expand critiques of ICA to the broader practice of team coding, exploring how group interactions shape qualitative methods and findings. [4]

Following the introduction, in Section 2, I challenge core ideas behind team-based qualitative analysis, questioning whether group coding strengthens rigor, depth, and validity. I draw on literature from qualitative methodology, sociology of science, and cognitive psychology to reveal risks tied to power gaps, pressure to conform, and bargaining in research teams. In Section 3, I explore how power, conformity, and bias impact team-based analysis. Next, in Section 4, I examine how the previously explored issues impact rigor and trustworthiness in qualitative research. In Section 5, I offer strategies to promote transparency, reflexivity, and inclusive decision-making in team-based analysis. Finally, I use Section 6 to discuss implications for researchers, journal reviewers, and educators, and outline directions for future work on rigor in qualitative inquiry. [5]

2. The Myth of Team-Based Rigor: Assumptions vs. Reality

Team-based qualitative research is often framed as a methodological approach that strengthens analytic rigor by incorporating multiple perspectives, reducing bias, and ensuring that researchers take ownership of coding choices (CRESWELL & POTH, 2016; O'CONNOR & JOFFE, 2020). However, this belief relies on a narrow view of rigor, prioritizing coder agreement over epistemological depth and flexibility in analysis. In this part of the paper, I critically examine three common assumptions about team-based rigor, revealing how group coding can sometimes weaken trustworthiness rather than improve qualitative inquiry. [6]

2.1 Assumption 1: More coders mean more valid findings

Many researchers claim that involving multiple coders strengthens validity by reducing bias and increasing reliability in analysis (O'CONNOR & JOFFE, 2020). This belief aligns with a positivist view in which researcher perspectives are treated as errors to correct through standardized coding and agreement metrics (KRIPPENDORFF, 2018). However, qualitative approaches that emphasize reflexivity and meaning-making do not measure validity by coder agreement but by the depth and transparency of analysis (BRAUN & CLARKE, 2024; TRACY, 2010). Consistency across coders does not necessarily indicate more accurate insights but may instead suppress alternative perspectives to create uniform themes (DENZIN & LINCOLN, 2011). When teams prioritize consistency, they risk flattening human complexity by limiting diverse viewpoints (NOWELL et al., 2017). Rather than assuming that more coders always strengthen rigor, researchers must carefully assess how team-based coding shapes analytic choices, determining whether coder consistency reflects deeper theoretical consideration or methodological shortcuts. [7]

Scholars have highlighted the complexities and nuances of collaborative qualitative analysis. MAUTHNER and DOUCET (2003) emphasized the importance of reflexivity in team-based research, noting that collaborative analysis requires researchers to be acutely aware of how their positionalities and interactions influence data interpretation. SALDAÑA (2021) provided a comprehensive guide on various coding methods, underscoring the need for flexibility and adaptability in collaborative settings to accommodate diverse analytical perspectives. Furthermore, MACPHAIL, KHOZA, ABLER and RANGANATHAN (2016) offered practical guidelines for establishing intercoder reliability in qualitative studies, illustrating the challenges and strategies for achieving consistency among multiple coders. These contributions underscore that while collaborative coding can enhance analytical depth, it also necessitates deliberate strategies to navigate potential divergences in interpretation and ensure methodological rigor. [8]

While LINCOLN and GUBA's (1985) trustworthiness criteria remain foundational in qualitative research, they have been critiqued for lacking clarity in application across diverse methodologies. MORSE (2015) contended that the criteria (credibility, transferability, dependability, and confirmability) are often applied superficially and without sufficient theoretical grounding. She called for a return to concepts like validity and reliability, adapted for qualitative contexts, and argued for tailoring rigor strategies to specific research questions and epistemological positions. NOWELL and colleagues (2017) echoed these concerns, noting that when applied rigidly, LINCOLN and GUBA's framework may obscure the contextual and interpretive dimensions of analysis, particularly in reflexive thematic analysis. These critiques highlight the need for flexible, thoughtful engagement with trustworthiness frameworks, especially in collaborative settings where epistemological tensions may surface. [9]

2.2 Assumption 2: Group coding enhances reflexivity

Reflexivity remains a hallmark of qualitative rigor, requiring researchers to examine their own perspectives, biases, and epistemological stances throughout the analytic process (BERGER, 2015; LINCOLN & GUBA, 1985). In theory, team-based analysis should strengthen reflexivity by encouraging researchers to challenge prevailing ideas and push for deeper analysis of data (TRACY, 2010). However, power structures within research teams often determine whose insights shape decisions and whose are dismissed (BOURDIEU, 1988 [1984]; FRICKER, 2007). Junior researchers or those with less standing may hesitate to offer alternative perspectives, especially when senior scholars guide the team's approach (KARNIELI-MILLER, STRIER & PESSACH, 2009). A paradox emerges: Team-based analysis aims to create space for reflexivity, yet group structures may limit open dialogue and reinforce existing hierarchies. Research on social pressure suggests that individuals in group settings tend to align with prevailing views rather than express independent thoughts, a pattern that narrows perspectives in coding (JANIS, 1982; KAHNEMAN, 2011). For example, in a study using conversation analysis to evaluate educational video content for patients undergoing stem cell transplant, HALPIN, KONOMOS and ROULSTON (2022) found that team-based interpretation of participant reactions required ongoing negotiation of analytic frames. The process illuminated how different epistemological lenses, clinical versus interactional, shaped the interpretation of what constituted confusion or understanding. These differences were productive but also highlighted the need for explicit reflexivity in collaborative analysis. [10]

2.3 Assumption 3: Disagreements lead to richer interpretations

Another common belief in team-based qualitative analysis suggests that multiple coders strengthen analytic depth by fostering debate and pushing researchers to consider varied perspectives before identifying themes (O'CONNOR & JOFFE, 2020; TRACY, 2010). However, not all coding conflicts improve methodological rigor, and how teams handle competing views can shape the trustworthiness of findings. In many cases, researchers engage in tactical compromises instead of genuine debate, exchanging analytic trade-offs to maintain group stability and accelerate decisions (NOWELL et al., 2017). For example, in a study exploring the use of patient portals to recruit pregnant individuals into maternal-child health research (HALPIN et al., 2025), our research team used a team-based coding approach that required reaching consensus across coders from different disciplinary backgrounds. Coding disagreements were often resolved through informal trade-offs, where one team member would yield a code assignment in one segment to gain support for another. These negotiations, while efficient, reflected team dynamics more than analytic rigor and were rarely documented in formal analytic memos. This pattern appears frequently in large, mixed-discipline research teams, where competing theoretical perspectives lead to coding adjustments based on group dynamics rather than rigorous inquiry into diverging views (MORSE, 2015). Furthermore, how coding conflicts play out often depends on implicit social hierarchies, with junior researchers or minority perspectives overruled in favor of more senior voices (BOURDIEU, 1988 [1984]; FRICKER, 2007). Power imbalances raise concerns about whether team-based analysis broadens perspectives or simply reinforces existing patterns. To prevent coding conflicts from turning into routine trade-offs, qualitative researchers must establish structured methods for addressing competing viewpoints, ensuring transparency and intellectual integrity in analysis. [11]

3. Conceptual Framework: Understanding Power, Conformity, and Bias in Team-Based Analysis

Examining how team-based qualitative research shapes analytic outcomes requires consideration of the various forces that influence group decision-making. Power imbalances, pressure to align, and strategic trade-offs do not arise solely from individual interactions but stem from structural hierarchies, epistemic traditions, and ingrained biases (BOURDIEU, 1988 [1984]; FRICKER, 2007; JANIS, 1982). These forces determine whose voices carry weight, how coding conflicts unfold, and which perspectives gain legitimacy in research. The following discussion presents three interconnected frameworks (power dynamics, group influence, and decision-making patterns) to provide a theoretical lens for understanding how team-based analysis creates both constraints and opportunities in qualitative research. [12]

3.1 Power asymmetry in research teams

Power shapes how knowledge takes form in research teams, determining whose perspectives hold value and how coding choices develop (BOURDIEU, 1988 [1984]; FRICKER, 2007). In team-based qualitative research, imbalances emerge when hierarchies, seniority, or epistemic privilege allow certain individuals to control coding approaches (KARNIELI-MILLER et al., 2009). Senior researchers may hold implicit or explicit control, leading junior team members to accept emerging themes rather than challenge prevailing assumptions (NOWELL et al., 2017). [13]

For example, in a study evaluating an educational intervention for multiple myeloma patients (HALPIN & KONOMOS, 2022), two team members, a senior qualitative researcher and a medical illustrator, collaboratively coded patient interviews. Despite meaningful contributions from both, the coding outcomes tended to reflect the interpretations of the more experienced researcher. The illustrator, who brought deep visual and narrative insight, often deferred to the PhD researcher during analytic discussions. While unintentional, this dynamic illustrates how epistemic privilege can quietly shape what gets coded and why. [14]

This epistemic hierarchy aligns with FRICKER's (2007) theory of epistemic injustice, in which she highlighted how marginalized voices often go unheard in knowledge creation. Research teams risk dismissing perspectives as subjective or misaligned when they challenge established analytical frameworks. BOURDIEU's (1988 [1984]) concept of cultural capital, the accumulated knowledge, credentials, and modes of expression that confer status in academic settings, offers a valuable lens for understanding how power circulates in research teams. Individuals with high cultural capital (e.g., PhD holders, senior faculty, or experienced qualitative methodologists) may be viewed as more legitimate voices in analytic discussions, while those with less institutional standing, such as early-career researchers, community partners, or practitioners, may be perceived as lacking the symbolic authority to challenge dominant interpretations. These asymmetries can lead to subtle forms of epistemic exclusion, where the perspectives of those with lower status are underweighted or dismissed during theme development and consensus-building. By tying symbolic power to team roles, we can better assess whether a group's analytic process truly fosters diverse knowledge or reproduces academic hierarchies under the guise of collaboration. [15]

While LINCOLN and GUBA's (1985) framework remains widely cited, recent scholars have questioned whether their criteria sufficiently account for power and positionality in team-based analysis (HOLMES, 2020; MORSE, 2015). To reduce power imbalances, researchers must assess whose voices guide coding and whether team structures encourage diverse perspectives or reinforce existing hierarchies. Even when collaboration is well-intentioned, team-based research can obscure individual accountability and ethical clarity. HORNER (2002) argued that participatory approaches may unintentionally mask how labor and decision-making are distributed, with some team members absorbing disproportionate analytic burdens without recognition or agency. MAUTHNER and DOUCET (2008) similarly critiqued the epistemological assumptions of collaborative analysis, noting that dividing analytic labor can fragment interpretive coherence and erode reflexivity. When researchers divide tasks mechanically, such as assigning transcripts to different team members, the nuanced interplay between researcher, data, and interpretation may be lost. These critiques highlight that collaboration itself is not inherently rigorous or ethical; without careful attention to power, voice, and analytic integrity, team-based practices can perpetuate the very hierarchies they aim to dismantle. Using structured formats, rotating facilitators, and explicitly documenting alternative viewpoints can create space for a broader range of insights (BRAUN & CLARKE, 2021; TRACY, 2010). [16]

Recent scholarship has emphasized the importance of critically examining power dynamics and epistemic injustices within research teams. HALL, MITCHEL, HALPIN and KILANKO (2023) argued that qualitative research can either reproduce or resist structural inequities depending on whether it intentionally amplifies marginalized voices and centers participant empowerment. Their work demonstrates how culturally responsive focus group design can serve as a tool for disrupting epistemic hierarchies by validating community knowledge and co-constructing meaning. ALCOFF (2010) similarly highlighted how epistemic identities shape whose perspectives are valued, calling for researchers to remain attuned to the implications of authority and voice in analysis. HOLMES (2020) emphasized the importance of reflexivity in mitigating these dynamics, arguing that researchers must engage in continuous self-assessment to prevent their own positionality from unduly shaping findings. Together, these perspectives support more equitable and transparent team-based analytic practices. [17]

3.2 Conformity and groupthink in coding decisions

In theory, team-based coding should promote diverse perspectives and richer insights. However, research from psychology and group behavior shows individuals often align with group consensus instead of voicing dissent (JANIS, 1982; KAHNEMAN, 2011). Groupthink emerges when the need for unity and efficiency overrides deeper scrutiny, leading to premature agreement and the dismissal of alternative viewpoints (NOWELL et al., 2017). [18]

Experiments on social influence demonstrate how individuals conform to majority opinions even when those opinions are demonstrably incorrect (ASCH, 1956). In team-based qualitative research, alignment with prevailing coding patterns may not always reflect genuine epistemic debates but rather implicit pressure to follow the dominant approach (O'CONNOR & JOFFE, 2020). For example, when junior team members observe senior researchers coding a certain way, they may adjust their own work to match the perceived norm. [19]

Cognitive biases such as confirmation bias, the tendency to favor information that supports pre-existing beliefs, can lead teams to overlook contradictory data that challenges emerging themes (NICKERSON, 1998; TRACY, 2010). These dynamics may encourage premature consensus, especially when team members feel pressured to align with dominant views. To counteract these effects, research teams should encourage structured dissent through several strategies: comparing coding anonymously before team discussions, assigning a devil's advocate role to challenge dominant themes, and conducting independent memo-writing before consensus-building sessions (BRAUN & CLARKE, 2021). By fostering structured disagreement rather than implicit conformity, qualitative researchers can ensure that team-based analysis strengthens rather than dilutes interpretive rigor. [20]

3.3 The tit-for-tat problem: Strategic compromise in coding decisions

Even when coding disagreements are explicitly acknowledged, the ways in which they are resolved within research teams can introduce additional methodological biases. Rather than being resolved through critical engagement with the data, coding disputes are often settled through strategic negotiation, compromise, or hierarchical decision-making (MORSE, 2015; NOWELL et al., 2017). This dynamic aligns with what KAHNEMAN (2011) described as decision fatigue, where prolonged analytic discussions lead to increased cognitive shortcuts and defaulting to majority opinion rather than sustained critical engagement. [21]

One common manifestation of strategic decision-making in research teams is the tit-for-tat effect, where coders trade thematic concessions to maintain group cohesion rather than interrogating conflicting interpretations (O'CONNOR & JOFFE, 2020). For instance, researchers might yield on one coding disagreement in exchange for having their interpretation favored in a future discussion, leading to compromise-driven rather than evidence-driven decisions (TRACY, 2010). Additionally, research teams may allow early coding decisions to set a precedent that constrains future interpretive flexibility (KRIPPENDORFF, 2018). To counteract these biases, qualitative researchers should ensure that coding disagreements are documented transparently rather than informally negotiated, use independent reviewers to audit analytic decisions and identify patterns of strategic compromise, and employ reflexivity journals to track how coding decisions evolve over time (BERGER, 2015). By recognizing and addressing strategic biases in coding negotiations, qualitative researchers can foster a more methodologically robust approach to team-based analysis that prioritizes interpretive depth over analytic expediency. [22]

4. How These Issues Impact Rigor and Trustworthiness

The challenges outlined earlier, i.e., power imbalances, group pressure, and strategic decision-making in analysis, affect the rigor and trustworthiness of qualitative research. While qualitative inquiry does not aim for neutrality in the positivist sense, transparency, reflexivity, and methodological integrity help ensure that findings remain transferable and ethically sound (LINCOLN & GUBA, 1985; TRACY, 2010). When unchecked, biases in team-based analysis can weaken these foundations, shaping outcomes in ways that favor efficiency over depth and consensus over epistemic integrity. The following discussion explores how these challenges shape credibility, transferability, and transparency in qualitative research. [23]

4.1 Credibility and dependability

In qualitative research, credibility depends on how well findings reflect participants' perspectives, while dependability ensures analytic approaches remain stable and consistent (LINCOLN & GUBA, 1985). In team-based analysis, many scholars assume that multiple coders strengthen accuracy, yet power dynamics and group pressure can shape coding choices (MORSE, 2015; NOWELL et al., 2017). When junior researchers defer to senior colleagues or when conflicts get settled through tit-for-tat compromises, the resulting themes may reflect negotiated agreement rather than depth of insight (BOURDIEU, 1988 [1984]; FRICKER, 2007). This weakens dependability, as coding follows team hierarchies rather than systematic approaches. To support transparency and depth, research teams must track coding conflicts openly and use structured guidelines to ensure analytic rigor (BRAUN & CLARKE, 2021). [24]

4.2 Transferability

Transferability refers to the extent to which qualitative findings can be applied to other contexts or populations, relying on thick description and theoretical generalizability rather than statistical representativeness (LINCOLN & GUBA, 1985; TRACY, 2010). However, when conformity pressures lead researchers to prioritize analytic convergence over divergent perspectives, important contextual nuances may be lost, reducing the applicability of findings beyond the immediate study sample (KARNIELI-MILLER et al., 2009; O'CONNOR & JOFFE, 2020). Suppressing alternative interpretations in favor of coherence may create overly homogenized findings, limiting their ability to capture the complexity of social phenomena (MORSE, 2015). Ensuring transferability requires teams to explicitly document and retain contradictory cases rather than eliminating them for the sake of coding consistency. Using reflexive memos and maintaining transparency about interpretive choices can help safeguard against artificial analytic convergence and ensure that qualitative research retains its richness and depth across different contexts (NOWELL et al., 2017). [25]

4.3 Transparency and reflexivity

Transparency and reflexivity are fundamental to qualitative rigor, requiring researchers to explicitly account for their methodological decisions and positionality throughout the research process (BERGER, 2015; TRACY, 2010). However, in team-based coding, the negotiation of analytic decisions is often undocumented, occurring informally in meetings or through implicit deference to dominant voices (BOURDIEU, 1988 [1984]; FRICKER, 2007). Without systematic documentation of coding rationales and disagreements, external reviewers, and even team members themselves, may struggle to trace the logic behind thematic choices, undermining the transparency of the research (O'CONNOR & JOFFE, 2020). Similarly, when power asymmetries limit critical reflexive engagement, researchers may fail to fully investigate how their own perspectives shape coding decisions (MORSE, 2015). To strengthen reflexivity, research teams should adopt structured memo-writing, reflexive journaling, and independent coding audits to ensure that methodological decisions are critically examined rather than implicitly accepted (BRAUN & CLARKE, 2021; NOWELL et al., 2017). [26]

5. Strategies for Mitigating Bias in Team-Based Analysis

While team-based qualitative research introduces challenges related to power asymmetry, conformity, and strategic decision-making, these biases are not insurmountable. By implementing intentional methodological strategies, research teams can enhance reflexivity, maintain analytic integrity, and promote epistemic diversity in their coding processes. This section outlines key strategies for mitigating bias in team-based analysis, emphasizing structured approaches to decision-making, documentation of coding rationale, and the role of audit trails in ensuring transparency. [27]

5.1 Structuring team-based analysis to reduce power imbalances

Power dynamics within research teams often shape whose interpretations are privileged, affecting the overall analytical process (BOURDIEU, 1988 [1984]; FRICKER, 2007). When senior researchers or dominant voices steer coding discussions, junior team members or those with less institutional authority may hesitate to challenge interpretations, leading to epistemic injustice (KARNIELI-MILLER et al., 2009). To mitigate these power imbalances, research teams can implement rotating facilitation roles so that coding discussions are not consistently led by the same individuals; encourage independent memo-writing before consensus-building, allowing all researchers to document their interpretations without immediate influence from dominant voices (TRACY, 2010); and use anonymous coding comparisons before team discussions to prevent social status from influencing coding decisions (NOWELL et al., 2017). Such practices align with the approaches of participatory health researchers, who emphasize collaborative reflexivity and recognize that power dynamics shape how quality is constructed and enacted (SPRINGETT, ATKEY, KONGATS, ZULLA & WILKINS, 2016). In short, anonymous coding comparison involves individual researchers generating new inductive codes independently and then reviewing all new codes without the coder's name attached. By adopting these practices, teams can create a research environment where multiple perspectives are valued and critically engaged, rather than being shaped by existing power hierarchies. [28]

5.2 Addressing conformity and false consensus

Social psychology research demonstrates that individuals in group settings are more likely to align with majority opinions, even when those opinions contradict their own interpretations (ASCH, 1956; JANIS, 1982). In qualitative research, this conformity effect can lead to artificial consensus, where team members default to dominant coding choices rather than defending alternative perspectives. To reduce conformity biases, research teams can assign a devil's advocate role in coding discussions, where a designated team member is responsible for challenging emerging themes and pushing for alternative interpretations (KAHNEMAN, 2011). They can also use structured disagreement protocols, requiring coders to justify their coding choices before reaching a final decision (O'CONNOR & JOFFE, 2020), and employ parallel coding, in which multiple coders analyze the same data independently before group discussion, ensuring that initial interpretations are not influenced by team dynamics (MORSE, 2015). Encouraging structured dissent and explicit justification of coding decisions ensures that research teams prioritize analytic depth rather than efficiency in decision-making. [29]

5.3 Combatting the tit-for-tat effect

Strategic negotiation in coding disagreements, where team members trade thematic concessions rather than engage in rigorous debate, can weaken qualitative rigor (NOWELL et al., 2017). This tit-for-tat effect results in compromise-driven coding that prioritizes maintaining group cohesion over critically engaging with conflicting interpretations (TRACY, 2010). To address this issue, research teams should require that coding disagreements be documented in meeting notes, ensuring that decision-making is recorded transparently rather than negotiated informally (BERGER, 2015), use external reviewers or independent auditors to assess coding decisions without the influence of internal team dynamics (MORSE, 2015), and implement iterative coding rounds, where disagreements are revisited after a cooling-off period, allowing for more deliberate engagement with the data (BRAUN & CLARKE, 2021). By formalizing disagreement resolution processes, teams can ensure that coding decisions reflect critical engagement rather than strategic bargaining. [30]

5.4 Maintaining an audit trail for transparency and reflexivity

A transparent audit trail serves as a systematic record of coding decisions, documenting how and when key interpretive choices were made throughout the analytic process (LINCOLN & GUBA, 1985; NOWELL et al., 2017). Without an audit trail, the rationale behind coding decisions can become obscured, making it difficult to determine how power dynamics, conformity, or strategic negotiation may have shaped findings. To strengthen research transparency, teams should maintain a coding decision log that documents disagreements, justifications, and resolution strategies (BERGER, 2015), use reflexivity memos to track how researchers' perspectives evolve over time, ensuring that shifts in coding are explicitly accounted for (TRACY, 2010), and employ independent audits, where an external reviewer assesses the audit trail to verify the transparency and consistency of coding decisions (MORSE, 2015). An audit trail not only enhances trustworthiness but also enables researchers to critically assess the evolution of their interpretations, strengthening the overall rigor of the analysis. [31]

6. Discussion: Beyond Team-Based Qualitative Rigor

Challenges tied to team-based qualitative research extend beyond the methodological beliefs that dominate debates on rigor. While qualitative research relies on analysis and reflexivity, the belief that group coding strengthens rigor often ignores the power dynamics, group pressure, and biases shaping coding practices (BRAUN & CLARKE, 2021; O'CONNOR & JOFFE, 2020). In this paper, I have shown how these biases weaken trustworthiness, not because team-based analysis contains flaws, but because its execution often follows positivist models rather than qualitative principles of epistemic depth (LINCOLN & GUBA, 1985; TRACY, 2010). Instead of treating coder agreement as a marker of rigor, qualitative researchers must assess how coding choices develop, who holds influence, and how the analytic process remains transparent (HALPIN, 2024). [32]

Prior critiques of ICA highlight its misalignment with qualitative epistemologies, noting that it prioritizes consistency over interpretive richness and imposes a pseudo-quantitative standard on qualitative inquiry (HALPIN, 2024; MORSE, 2015). Yet even without ICA, team-based coding can reproduce other distortions. Power hierarchies, social pressure, and tactical compromises show that moving beyond ICA does not automatically lead to stronger research. Rigor requires structured reflexivity, openness to divergent interpretations, and methodological transparency (BOURDIEU, 1988 [1984]; FRICKER, 2007). [33]

Ensuring rigor in team-based qualitative research requires a fundamental shift in how analytic collaboration is structured, particularly in applied research contexts (DENZIN & LINCOLN, 2011; HALPIN, KONOMOS & ROULSTON, 2021). Rather than assuming that consensus strengthens validity, researchers must approach collaborative coding as an opportunity for critical engagement rather than procedural verification (DENZIN & LINCOLN, 2011). The tension between procedural enforcement and methodological flexibility echoes ongoing debates in qualitative research regarding whether quality assurance promotes genuine rigor or enforces conformity (REICHERTZ, 2019). Analytic disagreements should not be viewed as obstacles to overcome but as valuable sites of reflexivity that can deepen the interpretive process (BERGER, 2015). When research teams foster open, structured dialogue about divergent interpretations, they create conditions where multiple perspectives can be examined rather than suppressed in the interest of efficiency or harmony. [34]

At the same time, ensuring that power asymmetries do not silence alternative interpretations is critical for maintaining epistemic integrity in qualitative research (BOURDIEU, 1988 [1984]; ROULSTON & HALPIN, 2022). Researchers with greater institutional authority, methodological expertise, or seniority often exert disproportionate influence on analytic decisions, whether consciously or unconsciously (BOURDIEU, 1988 [1984]; FRICKER, 2007). Addressing this requires research teams to adopt structured decision-making strategies that encourage meaningful participation from all members. Creating environments where junior researchers and those with different epistemological orientations feel empowered to voice their interpretations is essential for preserving the diversity of perspectives that qualitative inquiry depends on (KARNIELI-MILLER et al., 2009). [35]

Trustworthiness strategies should be selected and justified with reference to the methodological aims of the study. In narrative inquiries, for instance, member checking can support credibility by ensuring that findings resonate with participants' lived experiences (BIRT, SCOTT, CAVERS, CAMPBELL & WALTER, 2016). In grounded theory methodology, audit trails can enhance dependability without necessitating ICA (NOWELL et al., 2017). These strategies are not interchangeable; they must align with the epistemological commitments and analytic logic of the study. [36]

In this critique of ICA, I point to a broader imperative: Epistemic alignment (METCALFE, 2005). Rigor in qualitative research must reflect the values and logic of the paradigms being used. In traditions where meaning is co-constructed, strategies that support multiplicity, like reflexivity, thick description, and iterative analysis, are more appropriate than those borrowed from quantitative traditions. [37]

I offer these reflections not as fixed prescriptions but as an invitation for continued dialogue. How do we ensure rigor while honoring complexity? How do we design collaborative processes that resist hierarchy and foster epistemic diversity? These are methodological, ethical, and practical questions, and they deserve collective engagement across our field. [38]

References

Alcoff, Linda Martín (2010). Epistemic identities. Episteme, 7(2), 128-137.

Asch, Solomon E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1-70.

Berger, Roni (2015). Now I see it, now I don't: Researcher's position and reflexivity in qualitative research. Qualitative Research, 15(2), 219-234.

Birt, Linda; Scott, Suzanne; Cavers, Debbie; Campbell, Christine & Walter, Fiona (2016). Member checking: A tool to enhance trustworthiness or merely a nod to validation?. Qualitative Health Research, 26(13), 1802-1811.

Bourdieu, Pierre (1988 [1984]). Homo academicus. Stanford, CA: Stanford University Press.

Braun, Virginia & Clarke, Victoria (2021). Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern-based qualitative analytic approaches. Counselling and Psychotherapy Research, 21(1), 37-47.

Braun, Virginia & Clarke, Victoria (2024). Supporting best practice in reflexive thematic analysis reporting in Palliative Medicine: A review of published research and introduction to the Reflexive Thematic Analysis Reporting Guidelines (RTARG). Palliative Medicine, 38(6), 608-616, https://doi.org/10.1177/02692163241234800 [Accessed: May 29, 2025].

Creswell, John W. & Poth, Cheryl N. (2016). Qualitative inquiry and research design: Choosing among five approaches. Thousand Oaks, CA: Sage.

Denzin, Norman K. & Lincoln, Yvonna S. (2011). The Sage handbook of qualitative research (4th ed.). Thousand Oaks, CA: Sage.

Fricker, Miranda (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Hall, Jori N.; Mitchel, Nia; Halpin, Sean N. & Kilanko, Glory A. (2023). Using focus groups for empowerment purposes in qualitative health research and evaluation. International Journal of Social Research Methodology, 26(4), 409-423, https://doi.org/10.1080/13645579.2022.2049518 [Accessed: May 29, 2025].

Halpin, Sean N. (2024). Inter-coder agreement in qualitative coding: Considerations for its use. American Journal of Qualitative Research, 8(3), 23-43, https://doi.org/10.29333/ajqr/14887 [Accessed: May 29, 2025].

Halpin, Sean N. & Konomos, Michael (2022). An iterative formative evaluation of medical education for multiple myeloma patients receiving autologous stem cell transplant. Journal of Cancer Education, 37(3), 779-787.

Halpin, Sean N.; Konomos, Michael & Roulston, Kathryn (2021). Using applied conversation analysis in patient education. Global Qualitative Nursing Research, 8, 23333936211012990, https://doi.org/10.1177/23333936211012990 [Accessed: May 29, 2025].

Halpin, Sean N.; Konomos, Michael & Roulston, Kathryn (2022). Using conversation analysis to appraise how novel educational videos impact patient medical education. Patient Education and Counseling, 105(7), 2027-2032.

Halpin, Sean N.; Wright, Rebecca; Gwaltney, Angela; Frantz, Annabelle; Peay, Holly; Olsson, Emily; Raspa, Melissa; Gehtland, Lisa & Andrews, Sara M. (2025). Assessing the acceptability of using patient portals to recruit pregnant women and new mothers for maternal-child health research. JAMIA Open, 8(3), ooaf027, https://doi.org/10.1093/jamiaopen/ooaf027 [Accessed: May 29, 2025].

Holmes, Andrew G.D. (2020). Researcher positionality: A consideration of its influence and place in qualitative research. Shanlax International Journal of Education, 8(4), 1-10, https://doi.org/10.34293/education.v8i4.3232 [Accessed: May 29, 2025].

Horner, Bruce (2002). Critical ethnography, ethics, and work: Rearticulating labor. JAC, 22(3), 561-584.

Janis, Irving L. (1982). Groupthink: Psychological studies of policy decisions and fiascoes (2nd ed.). Boston, MA: Houghton Mifflin.

Kahneman, Daniel (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

Karnieli-Miller, Orit; Strier, Roni & Pessach, Liat (2009). Power relations in qualitative research. Qualitative Health Research, 19(2), 279-289.

Krippendorff, Klaus (2018). Content analysis: An introduction to its methodology (4th ed.). Thousand Oaks, CA: Sage.

Lincoln, Yvonna S. & Guba, Egon G. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage.

MacPhail, Catherine; Khoza, Nomhle; Abler, Laurie & Ranganathan, Meghna (2016). Process guidelines for establishing intercoder reliability in qualitative studies. Qualitative Research, 16(2), 198-212.

Mauthner, Natasha S. & Doucet, Andrea (2003). Reflexive accounts and accounts of reflexivity in qualitative data analysis. Sociology, 37(3), 413-431.

Mauthner, Natasha S. & Doucet, Andrea (2008). "Knowledge once divided can be hard to put together again": An epistemological critique of collaborative and team-based research practices. Sociology, 42(5), 971-985.

Metcalfe, Mike (2005). Generalisation: Learning across epistemologies. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 6(1), Art. 17, https://doi.org/10.17169/fqs-6.1.525 [Accessed: May 29, 2025].

Morse, Janice M. (2015). Critical analysis of strategies for determining rigor in qualitative inquiry. Qualitative Health Research, 25(9), 1212-1222.

Nickerson, Raymond S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.

Nowell, Lorelli S.; Norris, Jill M.; White, Deborah E. & Moules, Nancy J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1-13, https://doi.org/10.1177/1609406917733847 [Accessed: May 29, 2025].

O'Connor, Cliodhna & Joffe, Helene (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19, 1-13, https://doi.org/10.1177/1609406919899220 [Accessed: May 29, 2025].

Reichertz, Jo (2019). Method police or quality assurance? Two patterns of interpretation in the struggle for supremacy in qualitative social research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 20(1), Art. 11, https://doi.org/10.17169/fqs-20.1.3205 [Accessed: May 29, 2025].

Roulston, Kathryn & Halpin, Sean N. (2022). Designing qualitative research using interview data. In Uwe Flick (Ed.), The Sage handbook of qualitative research design (pp. 667-683). Thousand Oaks, CA: Sage.

Saldaña, Johnny (2021). The coding manual for qualitative researchers (4th ed.). Thousand Oaks, CA: Sage.

Springett, Jane; Atkey, Kayla; Kongats, Krystyna; Zulla, Rosslynn & Wilkins, Emma (2016). Conceptualizing quality in participatory health research: A phenomenographic inquiry. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 17(2), Art. 16, https://doi.org/10.17169/fqs-17.2.2568 [Accessed: May 29, 2025].

Tracy, Sarah J. (2010). Qualitative quality: Eight "big-tent" criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837-851.

Author

Sean N. HALPIN, PhD, FGSA, is a qualitative methodologist at RTI International, where he leads research on complex analytic processes in team-based qualitative studies. A Fellow of the Gerontological Society of America, Dr. HALPIN has more than a decade of experience designing and executing socio-behavioral studies across diverse clinical and public health contexts with a focus on older adults. His expertise spans interview-based research, methodological design, and mixed-methods integration, with a particular focus on the dynamics of analytic collaboration, trustworthiness, and reflexivity. Dr. HALPIN holds a PhD in qualitative research and evaluation methodologies from the University of Georgia and an MA in developmental psychology from Teachers College, Columbia University.

Contact:

Sean N. Halpin, PhD

RTI International
3040 East Cornwallis Road
Research Triangle Park, NC 27709-2194, US

E-mail: snhalpin@rti.org

Citation

Halpin, Sean N. (2025). Navigating consensus in team-based qualitative research: Challenges and strategies for rigorous analysis [38 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 26(3), Art. 5, https://doi.org/10.17169/fqs-26.3.4386.

Forum Qualitative Sozialforschung / Forum: Qualitative Social Research (FQS)

ISSN 1438-5627

Funded by the KOALA project


Creative Commons Attribution 4.0 International License