The Cortez Backdoor Approach to Research Methods in the Age of AI: A Comprehensive Guide
Author’s Preface: Rethinking the Front Door
For decades, the dominant pedagogy in research methods has followed a linear, front-loaded model. Start with the title, then write your abstract, frame your objectives, and only then gather data. This approach, while orderly, is not how real-world research unfolds—especially in a data-rich, AI-assisted era.
I developed the Cortez Backdoor Approach to Research as a response to the increasing mismatch between how we are taught to write research and how meaningful research is actually done. In practice, the most powerful insights emerge after we’ve seen the data, run basic tests, and noticed something worth explaining. AI has only accelerated this shift. With the ability to quickly summarize datasets, visualize patterns, and even flag outliers or trends, we are no longer dependent on slow-moving, intuition-first models of research. We can now begin with results—then shape our questions, frameworks, and contributions.
This isn’t about skipping steps. It’s about respecting how inquiry organically develops. The backdoor is not a shortcut—it’s a smarter, more honest, and more iterative entry into the research process. I believe this paradigm shift is not just timely—it’s necessary.
Introduction: Why the Backdoor Approach Works
When most students and early-career researchers are taught how to write a research paper, they usually start with the front matter. That means coming up with a title, crafting an abstract, writing an introduction, formulating a problem statement, and defining objectives. While this seems logical and organized, it doesn’t reflect how research actually unfolds in real academic life.
Over the years, I’ve come to embrace what I call the “backdoor approach” to research. It’s a way of working that begins not with assumptions but with discovery. It is a method that trusts the process, lets the data speak first, and allows the structure of the research to emerge from the work itself.
There’s also something truly magical about numbers: they don’t lie. They speak for themselves, louder and clearer than any opinion ever could. That’s why I’ve always gravitated toward quantitative research methods. Every dataset holds a story, and every number is a voice waiting to be heard. My journey typically begins with descriptive statistics: a table of means, medians, and standard deviations often unlocks more questions than it answers. From there I move to panel regressions, starting with fixed- versus random-effects specifications, then xtgls models, and eventually dynamic estimation using GMM, as sketched below. But I’ve learned something important along the way: you only truly need to learn a statistical technique once you deeply understand your data.
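To make that progression concrete, here is a minimal Stata sketch. The dataset, variable names (roa, esg, size, lev), and panel identifiers (firm_id, year) are hypothetical stand-ins, and xtabond2 is a community-contributed command (ssc install xtabond2):

```stata
* Hypothetical panel: roa (profitability), esg (ESG score),
* size and lev (controls), firm_id/year as the panel structure.
use panel_data.dta, clear
xtset firm_id year

* Step 1: let the data speak first
summarize roa esg size lev, detail

* Step 2: fixed vs. random effects, adjudicated by a Hausman test
xtreg roa esg size lev, fe
estimates store fe
xtreg roa esg size lev, re
estimates store re
hausman fe re

* Step 3: xtgls when panels are heteroskedastic or serially correlated
xtgls roa esg size lev, panels(heteroskedastic) corr(ar1)

* Step 4: dynamic estimation via system GMM (community-contributed)
xtabond2 roa L.roa esg size lev, gmm(L.roa) iv(esg size lev) twostep robust
```

Each step earns its place only when the previous one raises a question it cannot answer; that is the sense in which you learn a technique once you understand your data.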
This is also why I emphasize doing it right the first time. One of the most avoidable errors in academic writing is going back to the field or database to gather more data in the middle of your analysis. It slows progress, increases inconsistency, and reflects poor planning. My rule? Sit down once. Gather all the data you need. When you’re done, you’re done. Streamlining your data gathering phase forces clarity and rigor, and helps you stay on track. There will always be new literature or more recent indicators, but the contribution lies in what you do with what you already have.
With the emergence of AI tools like ChatGPT, Scite, and Elicit, this approach becomes even more relevant. AI can suggest hypotheses, interpret regression output, summarize articles, and structure arguments—but it cannot understand your data for you. That remains your job. AI empowers speed and structure. But research still demands integrity, intuition, and depth.
Ultimately, the Cortez Backdoor Approach is not about reversing the sequence for novelty’s sake. It’s about realigning the process with how researchers think, discover, and refine. It’s about listening to the data, letting it shape your problem, and doing serious work—right from the beginning.
Before You Begin: Internal Validity is the Soul of Research
Before you gather data, before you run your first regression, before you name your variables: pause. Ask yourself if you’re truly measuring what you claim to measure. That question of construct validity underpins the internal validity of your study, and everything else in it depends on getting it right.
I learned this lesson the hard way. In a previous study, I collected ESG data from Bloomberg and linked it with financial performance metrics. Confident I had found a fresh angle, I applied for a grant, only to later find a similar study already published. Worse, the published version lumped together unrelated environmental, social, and governance variables from different sources, with no conceptual consistency. It made me realize: internal validity isn’t optional. It’s foundational.
If you claim to be studying ESG, you must define exactly what ESG means within the framework you adopt—whether it’s Bloomberg, MSCI, SASB, or another authority. If your focus is environmental performance, is it carbon emissions, energy efficiency, or fines for pollution? If you are measuring governance, are you looking at board composition, shareholder rights, or executive pay?
Construct validity demands that every variable have a conceptual definition, a logical rationale, and an empirically justified operationalization. It also requires you to ask yourself: is this the best available proxy for the concept? Would another researcher, applying the same definitions, arrive at the same measurements?
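One practical way to enforce that discipline is to write each proxy’s definition and scope directly into the dataset. A minimal Stata sketch, with hypothetical variable names:

```stata
* Hypothetical proxies: record the conceptual definition with the data.
label variable env_score   "Bloomberg environmental disclosure score (0-100)"
notes env_score: Proxy for environmental performance, built from public disclosures.
label variable board_indep "Share of independent directors on the board"
notes board_indep: Governance proxy; board composition only, not shareholder rights or executive pay.
```

Anyone reopening the file, including your future self, then inherits the definitions along with the numbers.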
In the Cortez Backdoor Approach, data may come early—but conceptual clarity must come first. Define before you download. Think before you test. Internal validity is not a box to check—it is the soul of your study.

Start with the Data: Descriptive Statistics as Groundwork
Once you’ve defined your variables, dive into the data—not to confirm a hypothesis, but to listen. Begin by calculating basic descriptive statistics. Examine the distributions, check for outliers, and make note of skewness. This is where the first signals appear.
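As an illustration, a short Stata pass over two hypothetical variables covers distributions, skewness, and outlier flags in a few lines:

```stata
* summarize, detail reports percentiles, skewness, and kurtosis at once.
summarize roa esg, detail

* Flag observations beyond the 1st/99th percentiles as potential outliers.
foreach v of varlist roa esg {
    _pctile `v', percentiles(1 99)
    local lo = r(r1)
    local hi = r(r2)
    list firm_id year `v' if `v' < `lo' | `v' > `hi'
}
```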
I typically work with datasets from proprietary sources like Bloomberg and Refinitiv, alongside government statistical websites. The Bloomberg ESG scores, for instance, are constructed from public disclosures and scored using a proprietary materiality-weighted method. Because the underlying inputs are public, the data are comparatively transparent, auditable, and standardized, making them excellent foundations for quantitative inquiry.
One of the most powerful practices is combining multiple sources. I routinely merge firm-level data with macroeconomic indicators, industry association reports, or regional policy data. This multi-source approach boosts credibility and reduces bias. It also supports generalizability and robustness.
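In Stata, such a merge might look like the following sketch; the file names and key variables are hypothetical:

```stata
* Merge hypothetical firm-level data with macro and industry sources.
use firm_level.dta, clear
merge m:1 country year using macro_indicators.dta, keep(match) nogenerate
merge m:1 industry year using industry_reports.dta, keep(match) nogenerate
save merged_panel.dta, replace
```

Keeping only matched observations (keep(match)) is one defensible choice; document whichever rule you adopt, since it shapes your final sample.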
Modern tools like Stata can export polished, publication-ready tables with ease; a sketch follows below. AI helps summarize patterns and trends, but these summaries are just starting points. Interpretation belongs to you. Insight comes from your theoretical lens and your scholarly judgment.
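For instance, a descriptive-statistics table can be exported in one short pass, assuming the community-contributed estout package:

```stata
* Requires: ssc install estout
estpost summarize roa esg size lev
esttab using descriptives.rtf, cells("mean sd min max") label replace
```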
Define Your Variables, Carefully and Clearly
After your initial data scan, revisit your variables with clarity. The independent variable is the one you believe causes a change. The dependent variable is what you’re trying to explain. Moderating variables adjust the strength or direction of the relationship, while control variables hold potential confounds constant.
But these roles must be more than technical. They must be conceptually justified. Why does ESG influence profitability? Is the link stronger in large firms? What theory explains this?
You are not just labeling variables—you are building an argument, one that is empirical, logical, and theoretically defensible.
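Empirically, the moderation question above translates into an interaction term. A minimal Stata sketch, reusing the same hypothetical variables:

```stata
* c.esg##c.size expands to esg, size, and their interaction; a significant
* interaction term suggests the ESG-profitability link varies with firm size.
xtreg roa c.esg##c.size lev, fe vce(cluster firm_id)
```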
Review of Literature: From Theory to Gap
Once your variables are clear and your data story is emerging, it’s time to turn to the literature—not to justify a topic, but to engage with a community of scholars.
Start with theory. Stakeholder theory, resource-based view, institutional theory, and agency theory are the usual foundations. These frameworks explain why ESG should influence profitability, why governance might matter more than disclosure, or why firm size can act as a moderator.
Then move to empirical studies. What methods have others used? What datasets? What were their results? Organize findings by theme, method, or geography.
Compare global and regional perspectives. ESG’s role in Japan may differ from its role in the U.S. or Southeast Asia. Local norms, reporting standards, and regulatory pressures all affect how ESG functions in context.
Finally, synthesize the literature. Don’t just summarize—connect the dots. Lead the reader to your research gap. What hasn’t been done? What question remains unanswered? That’s your contribution.
Comparing Traditional vs. Cortez Backdoor Approach
| Stage | Traditional Approach | Cortez Backdoor Approach |
| --- | --- | --- |
| Title | Written first | Written last |
| Objectives | Defined before analysis | Informed by data |
| Literature Review | Justifies topic | Frames contribution |
| Data Collection | After design | Starts early |
| Descriptive Statistics | Brief or skipped | Foundation of insight |
| Variable Choice | From previous studies | Based on findings |
| Model Testing | Done last | Refines focus |
| Role of AI | Avoided | Used ethically |
| Writing Flow | Front-to-back | Recursive |
| Framing | Theoretical | Empirical and reflective |
Now, Return to the Front
Now is the time to craft your introduction, problem statement, objectives, and abstract—after you’ve already done the analysis.
Your problem is now real and grounded in data. Your objectives are not hypothetical—they are informed by the actual models you’ve tested. The significance of your study is no longer speculative—you know what your research offers.
Your background becomes concise and relevant. Your abstract is precise, free from generalizations. Your title is clear, aligned, and honest about your findings.
Everything now fits—and that’s the beauty of writing from the backdoor.
Polish the Paper
As you near the finish line, the most critical phase begins: refining your paper with academic integrity and narrative flow.
Start by verifying every citation. Don’t rely on AI-generated references. Use trusted databases like Google Scholar, Scopus, and CrossRef. Match your in-text citations to your reference list. Manually check each DOI.
Next, humanize your writing. Read it aloud. Edit for clarity. Adjust tone. Remove awkward or mechanical phrasing. You can use AI to assist, but you are still the best humanizer of your work.
Then, apply what I call my “knitting system.” Align your objectives with your methods. Make sure your results answer your research questions. Let your conclusions emerge logically. And ensure that citations support key points throughout—not just in the literature review.
At this stage, your work transforms from draft to scholarship.
Final Note: Research is Still a Human Story
The Cortez Backdoor Approach is not just a workflow—it is a philosophy. It embraces uncertainty. It listens before it speaks. It writes last what most people write first. And it reminds us that research—real research—is built on curiosity, rigor, and human judgment.
AI may assist, but it cannot replace you. Only you can discover. Only you can decide what’s worth exploring, defending, and sharing with the world.
Let this method guide you. Let your mind lead you. And let your work speak for itself—with clarity, with confidence, and with the credibility only a human researcher can bring.