Broad Institute of MIT and Harvard
Virtual Spatial Biology: Predicting biomarkers for autoimmune disease from standard pathology images.

The "Invisible" Signs of Cancer Risk.
The Eric and Wendy Schmidt Center at the Broad Institute and the Klarman Cell Observatory (KCO) are addressing a critical diagnostic gap in Inflammatory Bowel Disease (IBD). Patients with IBD have chronic inflammation that doubles their risk of colorectal cancer.

Currently, detecting pre-cancerous lesions (dysplasia) relies on pathologists manually reviewing H&E-stained tissue slides. However, molecular changes precede the visual changes a pathologist can see. The Broad Institute wanted to use Machine Learning to detect these invisible molecular signals. The challenge was finding a way to map standard, affordable tissue images (H&E) to complex, expensive genetic data (Spatial Transcriptomics) to enable mass screening.
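One common way to frame this mapping is as multi-output regression: extract a feature vector from each H&E image patch and predict the expression of every measured gene at that spot. The sketch below uses synthetic data in place of real patch embeddings and transcriptomics counts, and all names, dimensions, and the choice of ridge regression are illustrative assumptions, not the challenge's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 tissue spots, 128-dim image-patch
# embeddings, 50 measured genes (all dimensions are illustrative).
n_spots, n_features, n_genes = 500, 128, 50
X = rng.normal(size=(n_spots, n_features))      # H&E patch embeddings
W_true = rng.normal(size=(n_features, n_genes))
# "Measured" expression: a linear signal plus measurement noise.
Y = X @ W_true + rng.normal(scale=0.1, size=(n_spots, n_genes))

# Ridge regression (closed form): map embeddings -> expression.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Predicted expression for every gene at every spot.
Y_pred = X @ W
```

In practice the embeddings would come from a pretrained vision model applied to each patch, and the regressor would be a deep network rather than a closed-form linear map, but the input/output contract is the same.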
A Three-Phase R&D Pipeline.
Instead of a simple "accuracy" contest, the Broad Institute and Crunch designed a full R&D pipeline simulation across three phases ("Crunches"). This structure moved beyond "model building" to "biomarker discovery."
Multimodal Foundation Models.
The challenge attracted nearly 1,000 experts from 62 countries. The winning solutions (from researchers at ETH Zürich, Stanford, and Mayo Clinic) deployed advanced Multimodal Learning techniques to bridge the gap between vision and biology.
The Methodology:

Kalin Nonchev from ETH Zürich's Department of Computer Science secured first place with his DeepSpot methodology.
Applying DeepSpot generated 1,792 spatial transcriptomics samples from The Cancer Genome Atlas (TCGA) cohorts, analyzing 37 million spots across melanoma and renal cell carcinoma datasets. The method was validated across multiple cancers, including metastatic melanoma, kidney, lung, and colon cancers, achieving a significant improvement in gene correlation over existing methods.
Following the conclusion of the challenge in March 2025, Schmidt Center scientists analyzed the top-performing models and ordered a custom dysplasia gene panel based on AI predictions for cancer detection. Custom gene panels are now in manufacturing, with wet-lab validation experiments launching soon. Winners will be announced in December 2025.
From Code to Clinical Validation.
The challenge successfully validated the concept of "Virtual Spatial Biology"—using AI to infer expensive biological data from cheap images.
Key Results: