Building Science Under Real Constraints
Why scientific progress depends on who is allowed to assemble, and under what conditions.
Note: This essay is adapted from material I submitted to the White House Office of Science and Technology Policy on Accelerating the American Scientific Enterprise.
Here, I expand on those ideas more personally, drawing on my experience building and operating scientific organizations under real constraints. This post inaugurates Culture Shock, a series of reflections on how science is practiced, organized, and funded in the world as it actually exists.
The Hidden Constraint in Scientific Progress
For a long time, conversations about scientific progress have focused on what we should work on: which technologies matter, which problems are urgent, which fields deserve investment. Far less attention has been paid to how scientific work is allowed to organize itself in practice. That omission matters more than it might appear.
When you try to do use-inspired, long-horizon scientific work outside narrow disciplinary lanes, you quickly encounter structural friction. Promising ideas do not fail because they lack merit, but because they do not fit existing institutional containers. Academic pathways reward novelty and publication, but are not designed for sustained tool-building or infrastructure-heavy research. Industry excels at execution for near-term commercial objectives, but is not designed for open-ended exploration. Between these poles sits a wide class of scientific problems that are real, consequential, and unresolved, yet difficult to pursue seriously within either setting.
Science has an organizational problem: who gets to assemble scientific teams around meaningful work, and under what funding conditions. Substantial resources already flow through science. The issue is not total funding, nor a lack of ideas. The constraint is how narrowly resources are coupled to specific institutional forms and career paths. In practice, the bottleneck is who is permitted to assemble people, capital, and time around problems that do not yet fit familiar molds.
Talent Is Not the Scarcity
The question is not whether people want to do science, but whether they are given credible opportunities to organize around it.
I have always found it odd that a recurring concern in metascience discussions is whether alternative research organizations can attract scientists and engineers at all. In practice, this has never been the limiting factor. When scientists are given resources, from microgrants like Fast Grants to substantial philanthropic or public funding, they have consistently been willing to take on risk, whether through PhDs, multiple postdocs, or what some now describe as venture-funded research projects.
The consistent signal is that talent is not the scarcity. The appetite to do science exceeds the number of legitimate places currently allowed to practice it.
Why Science Needs a Portfolio of Organizations
What ultimately determines scientific progress is not the absolute scale of a scientific goal, but whether the system supports a portfolio of attempts at reaching it.
It is increasingly fashionable to assemble large moonshots because the destination is easy for everyone to agree on. But scientific progress rarely follows a linear, computable path. We often do not know the right approach and must invent new technologies to tackle the problem at hand. When we went to the moon, many teams brought forth inventions that are now part of daily life, not because there was a single correct path, but because many parallel efforts were supported.
Certain scientific challenges do require large numbers of scientists coordinating around shared infrastructure over long periods of time. Others are better explored by small teams pursuing speculative ideas with the freedom to abandon unproductive directions quickly. The critical requirement is not size, but flexibility: the ability for scientists themselves to assemble at the scale the work demands, and to do so with clear processes, timelines, and expectations. When that flexibility exists, ambitious outcomes become feasible. When it does not, even well-funded efforts stall.
This perspective has been shaped by my own experience building a focused research organization. That model has proven useful for certain kinds of problems and has attracted significant attention and funding in part because it makes an ambitious claim. But it is neither the only way to do science nor a universally appropriate one. No single organizational form should dominate. If we want asymmetric upside, we are better off supporting many different kinds of organizations making many kinds of bets, ranging from clear engineering efforts to deeply speculative science, each supported in ways that match their risk profile.
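The asymmetric-upside argument can be made concrete with a toy simulation (my illustration, with assumed parameters, not a model from the essay): when payoffs are heavy-tailed, splitting a fixed budget across many independent bets sharply raises the odds that at least one of them returns an outsized multiple, compared with concentrating everything in a single effort.

```python
import random

random.seed(0)

def chance_of_outlier(n_bets, trials=10_000):
    """Toy model (illustrative assumption, not data): each funded bet
    has a heavy-tailed lognormal payoff per unit of funding. Returns
    the fraction of trials in which at least one of n_bets independent
    bets pays off at more than 10x its funding."""
    hits = 0
    for _ in range(trials):
        if any(random.lognormvariate(0, 2) > 10 for _ in range(n_bets)):
            hits += 1
    return hits / trials

for n in (1, 5, 25):
    print(f"{n:>2} bets -> P(at least one 10x outcome) ~ {chance_of_outlier(n):.2f}")
```

Under these assumed parameters a single bet clears 10x only occasionally, while 25 parallel bets almost always produce at least one such outcome. The point is qualitative, not the specific numbers: a portfolio of differently structured attempts captures tail outcomes that no single consensus effort reliably would.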
When Scale Helps and When It Hurts
Trying to build research organizations under these conditions forces a shift in perspective. Instead of asking whether an idea neatly fits an institution’s mold, a more productive question is whether the institution fits the work.
Sydney Brenner once argued that progress in science depends on new techniques, new discoveries, and new ideas, probably in that order. Scaling scientific effort before the right concepts and tools exist can lock in the wrong abstractions. Large, coordinated projects pursued too early can amplify confusion rather than insight.
From that perspective, organizational form matters as much as technique. When we do not yet understand the problem or lack the right tools, smaller, exploratory efforts are not a failure of ambition. They are how fields discover what should eventually be scaled. Scientific progress depends not only on better theories or tools, but on a diverse ecosystem of organizational forms capable of exploring uncertainty before committing to coordination.
Prediction, Prestige, and the Limits of Foresight
One of the most worrisome modes of science funding is reliance on consensus as a proxy for success. Traditional peer review is optimized to identify proposals that are defensible and aligned with prevailing expert opinion. This produces steady progress in popular areas, but it systematically filters out ideas that are uncomfortable, incomplete, or difficult to justify in advance. Many transformative advances only become obvious in retrospect, after surviving early stages that looked messy and uncertain.
This limitation is visible in how funding organizations talk about success. Funders are quick to claim credit for breakthroughs like the internet, mRNA vaccines, or artificial intelligence, yet no single agency can plausibly argue that it consistently predicted these outcomes far in advance. Scientific progress is nonlinear, and our predictive signals are heavily biased by prestige networks.
We have not yet built a diverse enough portfolio of scientific organizations to understand which approaches work best under which conditions. Allowing enterprising scientists to assemble and be funded in different ways is how we run those experiments.
Throughout my own scientific training, I also saw a gradual shift in how ambition is expressed. Increasingly, science is framed in terms of logistics: which labs can coordinate large efforts, how many samples can be processed, how comprehensively a space can be mapped, and in the age of AI, how many data points can be generated. Coordinated projects such as the Human Genome Project, ENCODE, and brain connectomics efforts reflect a mode of science where enumeration at scale becomes the central objective, rather than use or insight. These projects promise enormous value, but their impact ultimately depends on whether enumeration is paired with mechanisms, models, and interpretation.
This distinction matters because much of science funding still treats prediction as the primary signal of future success. Proposals are evaluated based on whether they should work, rather than whether they do work once given the chance. Shifting emphasis from prediction to observation does not remove risk. It relocates it to where it can be meaningfully measured. The most informative signal of scientific progress is not whether an idea is well defended in advance, but how it performs when tested in practice.
Freedom to Assemble
The practical constraint that appears repeatedly is funding continuity. Philanthropy is often willing to take early risks, but rarely structured to support sustained experimentation at scale. Public funding, meanwhile, is substantial but tightly coupled to established institutional forms. The missing layer is flexible funding lines that allow these sources to reinforce rather than substitute for one another, enabling new scientific organizations to mature without being forced prematurely into academic or commercial categories.
Recent advances in artificial intelligence make these issues more urgent. Generating hypotheses, synthesizing literature, and proposing experimental designs are becoming cheaper and more accessible. The binding constraint is no longer ideation, but execution. Discovery depends on tight feedback loops between thinking and doing, where experiments inform models and models shape the next experiment. Jake Feala’s Closed-Loop Manifesto captures this dynamic well.
This is where scientific infrastructure matters, not as a prestige asset, but as a practical one. Scientific tools that are affordable, distributed, and directly usable by small teams enable kinds of experimentation that centralized facilities alone cannot. The history of sequencing is instructive. The widespread availability of next-generation sequencing through Illumina transformed biology not because a single institute sequenced everything, but because sequencing became accessible across the scientific ecosystem at unprecedented scale.
These dynamics also shape scientific careers. The system produces far more highly trained researchers than it can absorb into traditional academic roles. This is often framed as an individual failure, but it is better understood as a structural mismatch. Many scientists want to pursue ambitious, real-world problems without abandoning research or narrowing prematurely to commercial endpoints. Creating legitimate, durable pathways for this work relieves pressure on universities rather than competing with them.
The same logic applies to national scientific infrastructure. Decades of public investment have created extraordinary capabilities within national laboratories and shared facilities. These resources are essential components of the scientific ecosystem, particularly for work that cannot be done at the scale or cost of individual laboratories.
At the same time, it is natural for research groups to rely on the facilities they know best and can access most easily. When organizational structures implicitly favor local ownership over shared infrastructure, usage patterns reflect convenience rather than intent. This is not a failure of any single institution. It is a systems-level outcome.
A portfolio view of scientific organizations changes how this is interpreted. Rather than expecting every group or institution to optimize around the same facilities, the goal becomes making different capabilities discoverable, accessible, and usable by the kinds of organizations they are best suited to support. In such a system, national laboratories are not competitors to universities or startups, but complementary infrastructure that expands the range of scientific work that can be attempted.
During my PhD and postdoctoral training in Boston, I was largely unaware of the full range of capabilities available at places like the Joint Genome Institute. That gap was not malicious or negligent; it reflected how fragmented the scientific landscape remains. When publicly funded capabilities are difficult to discover or integrate into everyday practice, we fail to realize their full return on investment. A more diverse portfolio of organizations, each designed to interface with shared infrastructure in different ways, makes those capabilities easier to access and more likely to be used.
These ideas are consistent with several recent points that surfaced the same constraint from different directions. One is Sam Arbesman’s observation that research institutes, whether for-profit or nonprofit, tend to converge on remarkably similar forms. That convergence suggests not optimality, but regression toward a narrow set of permissible organizational structures. When everything collapses to the same model, we lose the ability to explore alternatives.
Another, by Brian Halligan, focused on how to strengthen Boston's venture capital investment in biotech in response to the significant downturn in biotech jobs. That framing is reasonable, but incomplete. Boston's deeper advantage is its concentration of scientific capability. That capability is unlocked not simply by more investment, but by giving scientists the freedom to assemble around how they actually practice their work. In a city that once defined political independence through freedom of assembly, the modern constraint is whether scientists are allowed the same freedom to organize their work.
Applying the Scientific Method to Science Itself
Taken together, these observations point to a simple conclusion. Many of the hardest problems facing science today are technical, but not in the narrow sense of equations, assays, or instruments. They are technical in the deeper engineering sense. They concern how we design organizations, funding pathways, and feedback loops. These are systems problems, and like other engineering problems, they can be studied, tested, and improved.
What makes this moment different, and genuinely hopeful, is that we no longer have to debate these questions in the abstract. We can apply the scientific method to the practice of science itself. We can build alternative structures, observe what works, discard what does not, and iterate.
We are in dire need of people who have actually built and operated scientific organizations under real constraints to help guide the direction of scientific progress, alongside the theorists, administrators, and observers who currently shape much of the conversation. Optimism comes not from believing any single model will prevail, but from recognizing that progress accelerates when the structure of science itself is treated as something we are allowed to experiment on, deliberately and continuously.
Henry Lee builds and studies scientific organizations, with a focus on enabling ambitious research under real-world constraints.



Thank you. This is gold for the likes of us. One question in the service of "Culture Shock":
Do you think there’s value in studying how China is structuring and running science right now?
They’re not direct equivalents, but it’s interesting that China already has institutional forms that look somewhat like NSF's Tech Labs (CAS Strategic Priority Programs, National Labs, NRDIs), and in some cases seem to go beyond. This seems to become more important as people recognise that "Chinese scientific capacity" may end up being even bigger than its industrial prowess. (for example: China's mention in Noubar Afeyan’s 2026 annual letter).
I had Claude help pull together a few points from my scattered research notes. There are many more, and these may lack context or even be off, but they felt directionally interesting.
1) CAS Strategic Priority Program (SPP), Class B
The “Lead Scientist” functions almost like a CEO—setting direction, building the team, and shifting budgets internally, with no hard funding cap and the ability to scale resources as uncertainty plays out.
2) National Labs (e.g. Peng Cheng Lab)
These operate under a “Three Uncertainties” policy: no fixed staff size, no civil-service ranks, and fully contract-based hiring. That seems to allow market-rate pay, easier exits for underperformers, and quicker team reconfiguration.
3) New R&D Institutions (NRDIs)
Labs like Zhijiang are set up as corporate entities that can hold equity or form joint ventures, while still receiving large amounts of public CAPEX—blurring the academic / industry boundary without forcing an early commercial endpoint.
4) Lump-sum (“baogan”) funding
Instead of tightly specified grants, labs receive large block grants with high internal freedom, mainly constrained by a “negative list.”
5) Evaluation tied to shared infrastructure use
In places like Guangdong, NRDIs are ranked on things like instrument-sharing and competitiveness, with real consequences for the bottom tier.