Poly-Directional Scaling and a Recursive Mindset in Mycelium R&D
Beyond the Scale Train
In most bioprocess development workflows, scale is treated as a linear sequence (bench to pilot to production) in which each step stabilizes and de-risks the next. This deterministic approach struggles with mycelium, where scale-dependent behavior is a defining feature. Fungal growth is physically plastic and responsive in ways that shift dramatically across volumes, formats, and environmental contexts; a process that performs beautifully at bench scale can behave entirely differently at pilot. In these cases it may be more valuable to work within a multidimensional design space, where scaling up, scaling down, scaling laterally, and scaling digitally all happen in parallel as part of a coordinated, adaptive learning system.
Poly-directional scaling
I think of poly-directional scaling as a development strategy that treats scale as a multidimensional design space rather than a linear path. It involves scaling up, scaling down, scaling laterally, and integrating digital modeling in parallel, allowing for recursive learning across distinct system formats, feature targets, and scale-dependent effects. In this framework, scale isn’t something to navigate after the fundamentals are resolved; it is a set of distinct dimensions that can be resolved in parallel.
Scale Up ⇄ Down
Bi-directional scaling refers to the intentional practice of scaling both up and down in parallel during process and product development. This might mean rapidly advancing a minimum viable process to pilot scale while simultaneously developing medium- or high-throughput bench-scale systems designed to isolate specific variables or mechanisms. The value of this approach lies in its ability to engage with scale-dependent phenomena early in the development cycle while still investing in the clarity and control afforded by small-scale experimentation. Rather than waiting for scale to reveal problems after the fact, bi-directional scaling enables recursive learning between scales, where insights from the bench can inform adjustments at pilot, and emergent behaviors at pilot can guide refinement of experimental design at the bench. It treats scale not as a hierarchy of confidence but as a network of perspectives, allowing scale-dependent uncertainties to be detected early and concurrently while the fundamentals are still being resolved.
Target ⇄ Format
Lateral scaling refers to partitioning distinct response targets across tailored experimental systems to accelerate multi-objective learning. In many development scenarios (and particularly with mycelium), we’re not optimizing a single outcome but navigating a set of interconnected targets that may differ in complexity, responsiveness, and cost to interrogate. Rather than forcing all targets through a common experimental format, lateral scaling enables each to be addressed in a system matched to its constraints. For instance, a lower-cost, high-throughput format may suit a simple or well-behaved target, while more complex targets are explored in bespoke systems with greater resolution but lower throughput. This allows learning to proceed in parallel, and when coupled with adaptive learning tools, insights from the faster, cheaper target can guide experimentation on the slower, more expensive target, ultimately reducing the number of experiments needed to reach the multi-target goal. Critically, this approach depends on designing systems not for generality but for specificity (i.e. matching system to target to question). It can be meaningfully enabled by open-source tools, generative AI design assistance, and modest fabrication resources; a reasonably motivated researcher, with the tools at hand, can envision, design, and assemble bespoke culture or bioreactor systems tailored to their respective goals.
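The coupling between a cheap target and an expensive one can be sketched in a few lines. In this hypothetical example, predictions from a model trained on many cheap measurements are fed in as an extra input feature to a model trained on a handful of expensive measurements; the variable names, data, and model choices are illustrative assumptions, not a depiction of any specific toolkit.

```python
# Hypothetical sketch: let a data-rich, cheap target inform a data-poor,
# expensive one by feeding the cheap model's prediction in as a feature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Many cheap, high-throughput measurements (e.g. plate-scale growth rate vs. pH)
X_cheap = rng.uniform(4.0, 8.0, size=(60, 1))
y_cheap = np.sin(X_cheap[:, 0]) + 0.05 * rng.standard_normal(60)
cheap_model = RandomForestRegressor(n_estimators=100, random_state=0)
cheap_model.fit(X_cheap, y_cheap)

# A few expensive measurements (e.g. part strength vs. pH), assumed to
# correlate with the cheap target
X_exp = np.array([[4.5], [6.0], [7.5]])
y_exp = np.array([0.30, 0.55, 0.35])

# Augment the expensive model's inputs with the cheap model's prediction
def aug(X):
    return np.hstack([X, cheap_model.predict(X).reshape(-1, 1)])

exp_model = Ridge().fit(aug(X_exp), y_exp)

prediction = exp_model.predict(aug(np.array([[5.0]])))[0]
print(f"Predicted strength at pH 5.0: {prediction:.2f}")
```

The design choice here is deliberately simple: the cheap model acts as a learned prior, so the three expensive runs only have to correct its shape rather than learn the response from scratch.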
Physical ⇄ Digital
Adaptive learning refers to a recursive, machine learning-driven process in which each round of experimentation is designed to reduce uncertainty in the system. Rather than committing to a fixed experimental plan, adaptive learning uses model training and feedback to determine which experimental runs are likely to be most informative given the current state of knowledge. At each iteration, the model identifies regions of high uncertainty or high potential value and proposes the next set of experiments accordingly. This allows experimentation to remain target-driven as the model evolves, and helps focus physical effort on the areas of greatest learning opportunity. In practical terms, this means experiments are selected not just to confirm hypotheses but to actively shape the model’s understanding of the system relative to the goal, allowing for convergence on outcomes of high practical value while avoiding wasted effort in regions of diminishing return (i.e. futile searches). By explicitly tracking how model uncertainty evolves with each experiment, adaptive learning creates a framework where the pace of insight, rather than the pace of experimentation, becomes the measure of progress. For further reading, Intellegens provides a wonderful series of white papers on adaptive design of experiments.
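As a concrete illustration, one common way to implement this loop is uncertainty sampling against a Gaussian process surrogate: fit the model to completed runs, then propose the candidate condition the model is least certain about. The sketch below is a minimal, hypothetical example (the moisture/density data and ranges are invented for illustration), not a depiction of any particular platform.

```python
# Minimal sketch of one adaptive-learning iteration: fit a surrogate model
# to existing experiments, then pick the candidate with the highest
# predictive uncertainty as the next run. All data are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Completed experiments: e.g. substrate moisture (%) vs. measured density
X_done = np.array([[45.0], [55.0], [65.0], [75.0]])
y_done = np.array([0.21, 0.34, 0.31, 0.18])

# Candidate conditions we could run next
X_candidates = np.linspace(40, 80, 41).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0),
                              normalize_y=True).fit(X_done, y_done)
mean, std = gp.predict(X_candidates, return_std=True)

# Uncertainty sampling: propose the condition the model knows least about
next_run = X_candidates[np.argmax(std)]
print(f"Next suggested condition: {next_run[0]:.1f}% moisture")
```

In a real campaign this selection step would weigh predicted value alongside uncertainty (as in Bayesian optimization acquisition functions), but the loop structure, fit, score candidates, propose, run, refit, is the same.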
The digital dimension of poly-directional scaling emerges through the integration of adaptive learning across all physical systems and scales. As experimental data is collected across bench, pilot, and lateral formats, models can be trained to reconcile both shared and system-specific behaviors. This may involve maintaining multiple models: format-specific models to resolve discrete learnings within individual systems, and multi-system models to capture global relationships across scales. This allows for the design of efficient experiments not in isolation but in coordination, where the next best experiment in one system is informed by what has already been learned in another. As model resolution improves, it becomes possible to identify which features are most influenced by scale and which remain robust, allowing scale-dependent risks to be addressed early. Over time, more experimentation can be offloaded to in silico exploration, accelerating the selection of global optima without additional physical cost. And throughout the process, the evolution of uncertainty provides a direct way to monitor the rate of learning, helping to qualify progress, avoid futile searches, and target resources where they are most likely to produce new understanding.
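One lightweight way to realize a multi-system model is to pool runs from every format into a single training set, encoding the system itself as an input feature; the spread across an ensemble then gives a rough per-format uncertainty. The data, feature encoding, and model choice below are purely illustrative assumptions.

```python
# Sketch of a "multi-system" model: pool bench and pilot runs into one
# training set, with the system format encoded as an input feature so the
# model can learn both shared and scale-specific behavior. Data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [temperature (C), scale (0 = bench, 1 = pilot)]
X = np.array([
    [24, 0], [26, 0], [28, 0], [30, 0],   # many cheap bench runs
    [24, 1], [26, 1], [28, 1],            # fewer, costlier pilot runs
])
y = np.array([0.40, 0.55, 0.52, 0.38,     # bench yields
              0.35, 0.48, 0.50])          # pilot yields

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# The same model can now be queried per system, and the spread across
# trees gives a rough per-prediction uncertainty for each format.
query = np.array([[27, 0], [27, 1]])      # 27 C at bench and at pilot
per_tree = np.stack([t.predict(query) for t in model.estimators_])
print("mean:", per_tree.mean(axis=0), "std:", per_tree.std(axis=0))
```

Where the two formats diverge, the model's per-format predictions diverge with them, which is exactly the scale-dependent signal this section argues for surfacing early.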
Scaling as a Learning System
Taken together, I think of poly-directional scaling as a practical and adaptive framework for navigating the complexity inherent to mycelium R&D. It creates space for early engagement with scale-dependent behavior, distributes learning across multiple tailored systems, and aligns experimentation with the structure of uncertainty rather than a fixed sequence. Critically, it supports the three core organizing principles that I deeply believe in:
(1) It respects the physical plasticity of fungal systems,
(2) It acknowledges the high-dimensionality of fungal response spaces, and
(3) It makes use of adaptive learning to prioritize insight over effort.
The result is a more responsive, efficient, and resilient development process that moves faster not by skipping steps, but by learning more from every step taken.