PhD Student Essentials

Early Career Funding Without Publications: How to Win Grants Anyway

The strategic playbook for securing early career funding when you lack publications—proven strategies for PhD and postdoc fellowship applicants facing the classic catch-22
14 min read · For PhD students & early-career researchers · Updated 2025

You're pursuing early career funding for your research. You've spent two years in your PhD program preparing for a fellowship application. You've mastered techniques, devoured the literature, and developed a research vision that keeps you awake at night. But when you open that application, one section stops you cold: Track Record.

The application wants publications. Peer-reviewed papers. Evidence that you can see a project through from hypothesis to journal. And you have... a thesis proposal draft, some promising preliminary data on your advisor's hard drive, and a presentation you gave at a regional conference to twelve people (three of whom were asleep). Whether you're applying for early career funding like the NSF GRFP, NIH F31, or an ERC Starting Grant, this challenge is universal.

Welcome to the feasibility paradox—the recursive trap that has frustrated every generation of doctoral students. Funding agencies want to see that you can produce results before they give you money to produce results. It feels like being asked to prove you can drive before anyone will let you touch a car.

The Feasibility Catch-22

You need funding to generate data, and you need data to get funding. This recursive trap has frustrated every PhD student who ever stared at a grant application.

But here's what nobody tells you during orientation: the absence of a publication record isn't a death sentence. It's a rhetorical challenge. And like any rhetorical challenge, it has solutions—strategies that successful early-career applicants have been using for decades, often without consciously articulating what they're doing.

This guide unpacks those strategies. Not with vague reassurances ("just believe in yourself!"), but with concrete tactics grounded in how review panels actually evaluate proposals from applicants without the traditional markers of scientific achievement.

Understanding the Reviewer's Risk Calculus for Postdoc Fellowship Applications

Before we talk solutions, we need to understand the problem at a deeper level. Grant review for postdoc fellowship applications is fundamentally an exercise in information asymmetry. You know your true capability and the reality of your project's status; the reviewer has only the proposal document.

Faced with this uncertainty, reviewers rely on heuristics—mental shortcuts. The publication record is the most powerful heuristic available. Seeing a string of papers allows the reviewer to bypass the cognitive effort of evaluating basic competence. It's shorthand for: "This person can start projects, finish them, survive peer review, and communicate results."

When you remove that shortcut by having no publications, something important happens: the reviewer is forced into slower, more analytical, more critical processing. They're no longer skimming for confirmation of competence—they're actively hunting for reasons to doubt you.

This isn't about tricking reviewers or gaming the system. It's about understanding that when the standard credibility marker is missing, you need to construct what I call a "constellation of competence"—multiple smaller signals that together achieve the same effect as a publication list. This approach works for any postdoc fellowship application, from training grants to independent research awards.

Costly Signals vs. Cheap Talk: What Actually Convinces Postdoc Fellowship Reviewers

In economics and evolutionary biology, there's a concept called "signaling theory." It explains how agents convey information about hidden qualities to others. The key insight: some signals are costly (hard to fake) and some are cheap (easy to fake). Reviewers instinctively discount cheap signals.

A peer-reviewed publication is the ultimate costly signal. It required months or years of work plus external validation from anonymous experts. But when that signal is unavailable, you need to substitute other costly signals—indicators that demonstrate competence in ways that couldn't be faked by someone who lacked the underlying capability.

Costly Signals (High Value)
  • Detailed power analysis with statistical justification
  • Comprehensive contingency plans for specific failure modes
  • Well-documented GitHub repository with tests
  • Preprint with DOI on bioRxiv/arXiv
  • Conference oral presentation (with acceptance rate)

Cheap Talk (Ignored)
  • "I am a hard-working and capable researcher"
  • "Data will be analyzed using standard methods"
  • "Manuscript in preparation" (without DOI)
  • Generic letters saying "good student"
  • "We plan to publish soon"

The distinction matters enormously. When you write "I am committed to high-quality research," a reviewer's brain barely registers the words—anyone can claim commitment. But when you present a power analysis showing you've calculated effect sizes based on pilot data and justified your sample size using specific statistical frameworks, the reviewer thinks: "This person has actually thought through the experiment."
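
To see what this looks like in practice, here is a minimal sketch of a sample-size justification in Python using statsmodels; the effect size and targets are hypothetical placeholders you would replace with estimates from your own pilot data:

```python
# Minimal sample-size justification sketch -- all numbers are
# hypothetical placeholders to replace with your own pilot estimates.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.6,          # Cohen's d estimated from pilot data (assumption)
    alpha=0.05,               # two-sided significance threshold
    power=0.80,               # conventional 80% power target
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 44-45
```

Even a short calculation like this, with its parameters justified in prose, reads as a costly signal: it shows the experiment has actually been designed, not merely promised.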

Similarly, "Manuscript in preparation" is invisible. A preprint with a DOI that the reviewer can click and read? That's verifiable evidence. The signal becomes costly because it required you to actually produce something substantial.

The Numbers That Should Reassure You

Before diving into specific strategies, let's confront the statistical reality that often gets obscured by the success stories you see on Twitter and in department newsletters.

The "No Publication" Reality Check
Mathematics PhDs with zero publications at graduation~50%

Source: Analysis of PhD graduation requirements by field

NSF GRFP applicants in 1st year (typically no pubs)~40%

Based on eligibility window data

NSF GRFP success rate (all applicants)~15-17%

Many awardees have zero publications

The anxiety around "no publications" often stems from a sampling bias. You see the CVs of the people who won every award, got into prestigious programs, and landed tenure-track positions. You don't see the equally successful researchers who started with thin CVs and built their careers through strategic positioning rather than early publication volume.

A study of NIH F31 outcomes found that reviewer critiques of unsuccessful applications cited "weak training plan," "overly ambitious aims," or "insufficient sponsor funding" far more often than "lack of student publications." The proposal problems that actually sink applications are frequently things you can control—regardless of your publication status. Understanding how to build your academic track record strategically matters more than raw publication counts; see also our funded proposal samples guide.

Need Help Structuring Your Early Career Grant?

Proposia's AI-powered workflow helps early career researchers build compelling proposals even with limited publications. Get tailored guidance for NSF GRFP, NIH F31, and ERC applications.

Strategy 1: Preprints as Visible Evidence

The single fastest way to convert "work in progress" to "verifiable output" is through preprints. The NIH explicitly allows and encourages citing preprints in grant applications and biosketches (Notice NOT-OD-17-050). The NSF follows similar policies.

A preprint posted to bioRxiv, arXiv, or your field-specific server provides a permanent DOI, a timestamp of priority, and—critically—something the reviewer can actually examine. It transforms your claim from "I'm working on something" to "Here is the something I'm working on."

Preprint Citation Format

Doe, J., & Smith, A. (2025). Novel mechanisms of chromatin remodeling. [Preprint]. bioRxiv. DOI: 10.1101/2025.01.01.123456

Include "[Preprint]" to differentiate from peer-reviewed work. If your preprint has generated engagement (downloads, citations, discussion), mention this in your personal statement.

The psychological function of a preprint goes beyond mere documentation. It signals confidence—you're willing to expose your work to the scientific community before formal peer review. It demonstrates alignment with Open Science values, which is increasingly scored explicitly in European funding (Horizon Europe, Wellcome Trust). And it removes the opacity of "unpublished data" by letting the reviewer verify your claims directly. This strategy is particularly valuable when building your preliminary data portfolio and avoiding the common preliminary data traps.

Strategy 2: Code as Credibility

In computational fields—bioinformatics, machine learning, computational social science, mathematical modeling—your code is your research output. A well-maintained GitHub repository can function as a stronger feasibility signal than a middle-author paper.

But the operative word is "well-maintained." A code dump—files uploaded the night before the deadline with no documentation—signals nothing positive. To function as a credibility marker, your repository needs to mimic the structure of a published software tool.

Repository Must-Haves
  • Comprehensive README with installation and usage
  • Commit history showing sustained development
  • CITATION.cff file for proper academic citation (minimal example below)
  • License file (typically MIT or Apache 2.0)
  • Basic tests demonstrating the code works (see the sketch at the end of this strategy)
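
The CITATION.cff file named above is plain YAML following the Citation File Format standard; a minimal sketch, where every name, date, and identifier is a placeholder, looks like this:

```yaml
# Minimal CITATION.cff sketch -- every name, date, and identifier here
# is a placeholder to replace with your repository's real metadata.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "example-simulation-toolkit"
authors:
  - family-names: Doe
    given-names: Jane
version: 0.1.0
date-released: 2025-01-15
doi: 10.5281/zenodo.0000000
```
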
Integration Into Proposal

Instead of: "We will develop a simulation model."

Write: "Preliminary code for the model core has been developed and is available at [github.com/...], demonstrating feasibility of the proposed framework."

This transforms software from a "planned activity" to a "preliminary result." The reviewer can click the link, browse the code, check the commit history, and see that you've been working on this for months—not just claiming you will. Using a strong grant proposal template ensures this evidence is presented effectively.
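
If the "basic tests" item from the checklist above sounds heavyweight, it isn't. Here is a minimal, self-contained sketch of pytest-style smoke tests; the toy function is purely illustrative and stands in for your real research code:

```python
# Minimal example of "basic tests" for a research repository.
# The toy function below is a stand-in for your actual research code;
# in a real repo it would live in your package and be imported here.
import math

def exponential_decay(x0: float, rate: float, t: float) -> float:
    """Toy model: exponential decay from an initial value."""
    return x0 * math.exp(-rate * t)

def test_decay_starts_at_initial_value():
    assert exponential_decay(2.0, 0.5, t=0.0) == 2.0

def test_decay_is_monotone_decreasing():
    assert exponential_decay(2.0, 0.5, t=1.0) > exponential_decay(2.0, 0.5, t=2.0)
```

Even two or three checks like these, run by `pytest`, tell a reviewer the repository is maintained code rather than a one-off dump.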

Strategy 3: The Data Fragment Approach

A common misconception among PhD students preparing postdoc fellowship applications is that "preliminary data" must look like Figure 1 of a published paper—a complete, polished story with beautiful error bars and statistical significance stars. In reality, for fellowship applications like the F31 or GRFP, fragments of data that demonstrate methodological competence are often sufficient.

The goal of preliminary data in this context isn't to prove your hypothesis (which would make the grant unnecessary). It's to prove you can execute the necessary techniques.

The Competence Demonstration Principle

Each preliminary figure should whisper to the reviewer: "I know what I'm doing."

Example: If your proposal involves complex flow cytometry, a single dot plot showing clean population separation with high viability is a powerful signal. It tells the reviewer: "I know how to prepare samples, run the machine, and gate the data." This removes the technical risk associated with that method.

Negative data can work too: "Initial optimization using Protocol A resulted in low yield; however, modification of buffer pH (Figure 2) restored activity." This shows troubleshooting ability—highly valued in trainees.

The art lies in strategic selection. You don't need every experiment to have worked perfectly. You need enough evidence to convince the reviewer that when things don't work, you know how to fix them. This troubleshooting capability matters more than pristine results in many postdoc fellowship evaluations.

Strategy 4: The Literature as Your Dataset

In the very early stages of a PhD, when wet-lab or field data simply doesn't exist yet, the published literature itself must be treated as a dataset. A rigorous, systematic synthesis can serve as a feasibility anchor.

Instead of writing a narrative literature review that reviewers will skim, create structured evidence. A table listing prior studies, their key findings, their specific limitations, and how your proposed work addresses each limitation. This visually demonstrates that you haven't just read the papers—you've synthesized them into a logical framework that makes your proposed experiment feel inevitable.
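
As a sketch, such a table might be structured like this; every entry is a placeholder to fill from your own reading, not a real citation:

```text
Study           | Key finding         | Limitation                    | How this proposal responds
[Author, Year]  | [one-line result]   | [specific methodological gap] | [Aim that closes the gap]
[Author, Year]  | [one-line result]   | [small sample, no controls]   | [your design choice]
```
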

Developing a novel conceptual framework or logic model based on existing literature is itself a form of intellectual output. It substitutes "intellectual feasibility" (the idea makes sense given what we know) for "empirical feasibility" (the data is already there). Strong proposals do both, but when empirical data is thin, intellectual feasibility must carry more weight—and be presented with more rigor.

As discussed in our guide to winning proposal anatomy, reviewers respond to story structure. Your literature synthesis should tell a story that ends with a gap only your research can fill. This narrative approach works particularly well for postdoc fellowship applications where innovation matters as much as track record.

Strategy 5: Methodological Granularity as Trust Signal

Vagueness is the enemy of funding. A reviewer cannot trust a student who writes "Data will be analyzed using standard statistical methods." This phrase could be produced equally well by someone who knows exactly what they're doing and someone who has no idea. Overcoming this confidence gap in grant writing requires deliberate specificity.

To establish credibility without publications, you must name the specific tools, protocols, and tests you'll use. This signals "conscious competence"—awareness of the specific logistical requirements of your experiments.

Vague (Signals Incompetence)

"We will measure protein expression using standard methods."

Specific (Signals Competence)

"Protein expression will be quantified using BCA assay for total protein normalization, followed by Western blotting with antibody clone 4G10 (Millipore), visualized on a Li-Cor Odyssey system to ensure linear dynamic range."

The specific version demonstrates that you've already thought through the experiment in detail. You know which antibody you'll use. You know what instrument will visualize the results. You've considered the technical requirements for quantitative analysis. None of this requires having run the experiment yet—it requires having planned it with genuine care.

Similarly, as we explored in our guide to scope and budget calibration, anticipating specific failure modes and documenting alternative strategies demonstrates mature scientific thinking that transcends publication count. This detailed planning is especially critical for postdoc fellowship success.

Strategy 6: Leveraging Institutional Capital

When you're an unknown quantity, reviewers look to your environment for reassurance. The logic is straightforward: "This student may be unproven, but they're in a world-class lab with excellent resources and mentorship, so they'll likely succeed."

This isn't just about listing your advisor's publications. It's about making the institutional investment in your success explicit and verifiable.

Sponsor Statement Requirements

Generic letters are fatal. Your sponsor must write a highly personalized letter addressing your specific strengths and the specific training plan. "We have established a bi-weekly meeting schedule to review raw data" beats "I will meet with the student regularly."

Core Facilities as Risk Mitigation

Don't just list equipment. State: "The Flow Cytometry Core, staffed by three PhD-level technicians, provides training and troubleshooting, ensuring the technical feasibility of Aim 2." This tells the reviewer you won't be struggling alone with unfamiliar equipment.

Training Plan as Publication Pipeline

If publications are missing, the training plan should address this explicitly: "The training plan includes a dedicated module on scientific writing, with the goal of submitting the first manuscript by Month 12." This transforms the weakness into a structured objective.

Letters from collaborators providing specific resources can substitute for your own track record. "I will provide the transgenic mouse line and train the applicant in its handling" turns a proposed method into a confirmed resource. You've effectively outsourced the technical risk to an expert. This institutional support is particularly valuable for postdoc fellowship applications where mentorship quality is explicitly evaluated.

Strategy 7: Computational Evidence as Feasibility

For wet-lab proposals, computational simulations provide a "dry run" of the experiment. They can demonstrate feasibility without giving away the biological result.

If you propose synthesizing a new molecule, a DFT calculation predicting its stability is valid preliminary data. If your proposal involves a clinical trial, a power analysis simulation using published effect sizes proves statistical feasibility. If you're modeling a biological system, the simulation itself demonstrates you understand the parameters.
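
As an illustration, a simulation-based power check of the kind described here fits in a short script. A minimal Python sketch, where the effect size and sample size are hypothetical stand-ins for the published values you would cite:

```python
# Minimal Monte Carlo power simulation -- effect size, variance, and
# sample size are hypothetical stand-ins for published values you would cite.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
effect_size, n_per_group, n_sims = 0.6, 45, 10_000

rejections = 0
for _ in range(n_sims):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)
    result = stats.ttest_ind(treated, control)
    rejections += result.pvalue < 0.05

print(f"Estimated power: {rejections / n_sims:.2f}")  # expect roughly 0.80
```

A figure or two from a run like this, with the parameter sources cited, demonstrates statistical feasibility before a single sample has been collected.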

Tailoring to Your Target Mechanism

Different funding bodies have distinct definitions of what "feasibility" means, and the importance of publications varies accordingly. Understanding these nuances is essential for strategic positioning.

Mechanism-Specific Strategies for Unpublished Applicants

NSF GRFP (~15-16% success rate)

Explicitly funds people, not projects. Your personal statement about "distance traveled" and Broader Impacts can compensate entirely for missing publications.

What reviewers value: leadership, science communication, diversity & outreach.

NIH F31 (~19-21% success rate)

Training grant logic: lack of publications can actually be framed as the reason for the fellowship—"protected time to synthesize preliminary data into manuscripts."

What reviewers value: sponsor track record, training plan specificity, independence trajectory.

ERC Starting Grant (~14% success rate)

European mechanisms favor narrative CVs over publication lists. From 2026, feasibility won't even be evaluated at Step 1—only the groundbreaking nature of your vision.

What reviewers value: bold vision over track record, the narrative CV format, the "outsider" advantage.

The mechanism selection itself is a strategic decision for postdoc fellowship applicants. If your track record is thin but your vision is bold, the ERC Starting Grant or foundation grants may be better targets than traditional NIH mechanisms. If your Broader Impacts are strong but your data is weak, the NSF GRFP is designed for exactly your profile.

As detailed in our guide to building a narrative CV, framing matters enormously. The same research program, presented differently, can look like "risky bet on an unknown" or "strategic investment in emerging talent."

Beyond the Journal Article: Your Alternative Track Record

Preprints (NIH & NSF accepted)

The distinction between "In Preparation" and "Preprint" is the difference between a promise and a product.

Strategy: Post to bioRxiv, arXiv, or ChemRxiv. Cite with DOI. Mention downloads/altmetrics if significant.

Code Repositories (computational fields)

A well-maintained GitHub repository can signal stronger feasibility than a middle-author paper.

Requirements: README documentation, commit history showing sustained effort, CITATION.cff file, not just a "code dump."

Conference Presentations ("peer review lite")

Acceptance to selective conferences implies your work passed a quality threshold judged by experts.

Maximize impact: "Accepted for oral presentation at Society for Neuroscience (top 15% of submissions)" beats a bare citation.

Controlling the Narrative: Don't Let Reviewers Fill in the Blanks

Perhaps the most important strategy is rhetorical control. You must not let the reviewer infer why there are no papers—you must tell them. Silence invites unfavorable assumptions.

In your Personal Statement or Background section, frame the absence of publications as a strategic choice or structural necessity rather than a failure.

The "Tool Builder" Narrative

"My first two years have been dedicated to the engineering and validation of a novel high-throughput screening platform. With this tool now fully operational (see Figure 2), I am positioned to generate publication-quality data at an accelerated rate in the coming year."

This frames time as infrastructure investment, not lack of productivity.

The "Field Transition" Narrative

"Transitioning from a background in Physics to Computational Neuroscience required a dedicated period of intensive coursework and coding skill acquisition. Having now mastered these tools, as evidenced by my GitHub portfolio, I am applying them to..."

This explains the gap as necessary retraining for an interdisciplinary career.

When mentioning work in progress, specificity creates credibility. "These results are currently being compiled into a manuscript titled 'Mechanism of X,' which we anticipate submitting to Journal Y in Q4 2025" is far more believable than "we plan to publish soon."

The visual presentation of your proposal also signals competence independent of publication record. Professional document design with your grant proposal template reduces cognitive load and implicitly communicates attention to detail—essential for competitive postdoc fellowship applications.

The Bottom Line for Postdoc Fellowship Success

The feasibility paradox is real, but it's also porous. It's constructed of reviewer habits and risk aversion—both of which can be managed through strategic signaling.

The absence of publications doesn't render your postdoc fellowship application unfundable. It shifts the burden of proof to other areas of your application. When you can't rely on the publication heuristic, you must:

  • Create costly signals through detailed methodology, rigorous contingency plans, and professional visuals
  • Generate verifiable outputs: preprints with DOIs, GitHub repositories with documentation, conference presentations with acceptance rates
  • Leverage your environment: sponsor reputation, institutional core facilities, collaborator letters with specific resource commitments
  • Control the narrative: frame the absence of papers as strategic positioning, not failure
  • Match mechanism to profile: choose funding programs that explicitly value potential over track record

Remember This

For the PhD student staring at a blank postdoc fellowship application and a thin CV, the message is clear: You don't need a Nature paper to win a fellowship.

You need a compelling idea, a rigorous plan, and the rhetorical skill to show the reviewer that you're a risk worth taking. The "no publication" applicant isn't an outlier—in many postdoc fellowship mechanisms, they're the norm. Your job is to be the best-prepared version of that norm.

Reviewers are humans who want to fund promising science and support the next generation of researchers. They're trapped in the same imperfect system you are. Give them a reason to fight for your postdoc fellowship proposal—a vision so well-articulated, a plan so meticulously constructed, that the preliminary data becomes the appetizer for a feast of discovery they can't wait to fund.

Ready to Build Your Feasibility Case?

Transform your thin CV into a compelling funding argument. Get AI-powered guidance tailored for early-career researchers.