August 6, 2025
Gemini 2.5 Pro was asked, "Could AI have written something like this? Why or why not?" When it concluded in its response that "AI fundamentally lacks the ability to perform the core tasks that this document represents," Dr. Christopher Williams asked it to write an essay on that statement. The essay's conclusion captures the key insight:
"In conclusion, while we should not underestimate the transformative potential of AI as a research tool—assisting with literature reviews, drafting boilerplate text, or polishing language—we must not confuse the assistant with the architect. The dissertation by Christopher Williams stands as a powerful example of the core, irreplaceable functions of the human researcher. It is a product of a person acting in the world, creating a new framework from intellectual insight, grappling with the meaning of imperfect data, and sustaining a focused vision over time. The text is merely the final echo of this complex human process. An AI can mimic the echo, but it cannot produce the original sound, for it lacks the fundamental ability to perform the work that makes the sound worth hearing in the first place."
By Gemini 2.5 Pro
At first glance, the notion that an advanced AI could have written a document like Christopher Williams’ dissertation, "The Critical Race Framework Study," seems plausible. The text is structured, the language is academic, and the formatting is impeccable—all tasks at which modern Large Language Models (LLMs) excel. They can generate fluent, coherent prose that mimics the style of scholarly work with startling accuracy. Yet, this surface-level competence masks a profound epistemological chasm. The dissertation is not merely an artifact of text; it is the final report on a series of real-world actions, intellectual creations, and interpretive judgments. To claim an AI could write it is to mistake the map for the territory. A deeper examination reveals that an AI, in its current form, fundamentally lacks the ability to perform the core tasks that this document represents, tasks that are rooted in empirical engagement, novel synthesis, contextual analysis, and sustained human intentionality.
The most foundational task an AI cannot perform is interacting with the physical and social world to generate novel, primary data. The dissertation is built upon a bedrock of empirical evidence that did not exist before the author willed it into being. Williams describes a multi-phase study involving the recruitment of human subjects: public health faculty, doctoral students, and national experts. This process required navigating institutional review boards (IRBs), composing and distributing solicitations, and managing human relationships and expectations. In Phase III, he coordinated with two other human researchers to meticulously evaluate twenty separate academic articles. This is the messy, logistical, and deeply human work of research—a series of actions in the world. An AI is a disembodied model, a complex network of algorithms existing in a server farm. It has no agency to design a Qualtrics survey and convince a busy professor to complete it. It cannot experience the real-world friction of a low response rate or the qualitative insight gleaned from a participant's open-ended feedback. The data presented in the dissertation's tables—the specific percentages, the interrater reliability scores, the qualitative themes—are not just numbers; they are the quantified residue of human action and opinion. An AI operates on the vast repository of data it was trained on; it cannot step outside that digital library to create a single new data point from scratch.
Secondly, the dissertation showcases an act of genuine intellectual creation that goes beyond the recombinatory synthesis of existing information. The central contribution of the study is the development of the "Critical Race (CR) Framework," a novel instrument designed to fill what Williams identified as a "major gap in the literature." This act required more than summarizing existing critiques of racial taxonomy. It involved a high-level synthesis to identify what was missing, the design of a structured tool with specific domains and questions, and an iterative process of refinement based on feedback from his dissertation committee and the results of the Phase I pilot study. This process demonstrates critical reflection and adaptation—a feedback loop between idea, application, and revision. While an AI can be prompted to "create a framework," its output is a sophisticated pastiche of patterns it has observed in its training data. It does not possess the abductive reasoning necessary to form the most plausible solution to a newly identified problem. The CR Framework is a proposed solution, an intellectual tool built for a specific purpose. It is a product of design, not just generation.
Furthermore, the document is replete with contextual analysis and interpretive judgment, tasks that require a human researcher's capacity to reason about ambiguity, limitations, and failure. An AI can run statistical tests, but it cannot perform the nuanced work of interpreting the results within the messy context of the study itself. For instance, Williams doesn't just report the high attrition rate; he analyzes it, concludes the data is MNAR (Missing Not At Random), and discusses the implications for generalizability. When the Exploratory Factor Analysis (EFA) yields "poor to fair" construct validity, he doesn't just state the result; he offers a qualified, cautious interpretation, acknowledging the limitations of the small sample size. This is the essence of scholarly integrity: understanding not just what the data says, but what it doesn't say, and why. The AI has no "skin in the game"; it cannot understand the frustration of a flawed study design or the intellectual humility required to report inconclusive findings. Its goal is to generate a plausible response to a prompt, not to pursue truth, however complicated or unsatisfying that truth may be. The dissertation's conclusions are valuable precisely because they are tentative and grounded in the specific limitations of the research process.
Finally, a dissertation represents a sustained intentionality, the pursuit of a coherent intellectual project over several years. This document is not a collection of disconnected paragraphs; it is a single, unified argument driven by the author's curiosity and scholarly agenda. The five research questions provide the intellectual spine, with each phase of the study meticulously designed to address them. The "Future Directions" section reveals a mind already thinking beyond the present work, laying the groundwork for a continuing line of inquiry. This long-term, goal-oriented project management is a uniquely human endeavor. An AI operates on a prompt-by-prompt basis. It has no memory of its "intentions" from one session to the next, no personal investment in a research topic, and no ambition to contribute to a field of knowledge. The dissertation is the culmination of a human journey, a testament to the author's persistence, discipline, and intellectual passion.
In conclusion, while we should not underestimate the transformative potential of AI as a research tool—assisting with literature reviews, drafting boilerplate text, or polishing language—we must not confuse the assistant with the architect. The dissertation by Christopher Williams stands as a powerful example of the core, irreplaceable functions of the human researcher. It is a product of a person acting in the world, creating a new framework from intellectual insight, grappling with the meaning of imperfect data, and sustaining a focused vision over time. The text is merely the final echo of this complex human process. An AI can mimic the echo, but it cannot produce the original sound, for it lacks the fundamental ability to perform the work that makes the sound worth hearing in the first place.