
Supercomputing Mobilizing against COVID19

Tech has been taking some heavy losses from the coronavirus pandemic. Global supply chains have been disrupted, virtually every major tech conference taking place over the next few months has been canceled, and supercomputer facilities have even begun preemptively restricting visitor access. But tech is striking back, and hard: day by day, more and more organizations are dedicating supercomputing power toward the effort to diagnose, understand and fight back against COVID-19.

Testing for COVID-19

Before supercomputers began spinning up to find a cure, researchers were scrambling to simply diagnose the disease as cases in China’s Hubei province spun out of control.

With limited (and rapidly iterated) test kits available, Chinese researchers turned to AI and supercomputing for answers. They trained an AI model on China’s first petascale supercomputer, Tianhe-1, with the aim of distinguishing between the CT scans of pneumonic patients with COVID-19 and patients with non-COVID-19 pneumonia.

In a paper, the researchers reported nearly 80% accuracy when testing this method against external datasets, dramatically outperforming both early test kits and human radiologists.
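The model described in the paper was a deep network trained on CT volumes, which is well beyond a short snippet. Purely as an illustration of the underlying idea (supervised binary classification on image-derived features), here is a minimal logistic-regression sketch in pure Python; the two-feature "scans" are synthetic stand-ins, not real CT data:

```python
import math
import random

def train_logreg(X, y, lr=0.1, epochs=200):
    """Tiny logistic-regression trainer using stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid: predicted probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0 else 0

# Synthetic stand-in for image-derived features: the "positive" class has
# higher mean values, making the two classes linearly separable.
random.seed(0)
X = [[random.gauss(0.3, 0.1), random.gauss(0.3, 0.1)] for _ in range(50)] + \
    [[random.gauss(0.7, 0.1), random.gauss(0.7, 0.1)] for _ in range(50)]
y = [0] * 50 + [1] * 50

w, b = train_logreg(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

Real CT classifiers replace the two hand-made features with learned convolutional features, but the training loop follows the same fit-then-evaluate shape.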





The Summit supercomputer: the big gun was brought out early

One of the first systems to join the fight was the world’s most powerful publicly-ranked supercomputer: Summit. Oak Ridge National Laboratory (ORNL) pitted Summit’s 148 Linpack petaflops of performance against a crucial “spike” protein on the coronavirus that researchers believe may be key to disabling its ability to infect. Testing how various compounds interact with key virus components can be an extremely time-consuming task, so the researchers – a team from ORNL’s Center for Molecular Biophysics –  were granted a discretionary time allocation on Summit, which allowed them to cycle through 8,000 compounds within a few days.

Using Summit, the research team identified 77 compounds that may be promising candidates for testing by medical researchers. “Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer,” said Jeremy Smith, director of UT/ORNL CMB and principal researcher for the study. The researchers are preparing to repeat the study using a new, higher-quality model of the spike protein recently made available.
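The actual Summit workflow used ensemble docking simulations, but the control flow of a virtual screen — score every compound, rank, keep the top candidates — can be sketched simply. The `binding_score` function below is a hypothetical placeholder for the expensive simulation step:

```python
import random

def binding_score(compound_id: int) -> float:
    """Hypothetical stand-in for one docking/molecular-dynamics run.

    In the real workflow each score comes from an expensive simulation;
    here we draw a deterministic pseudo-random energy (lower = stronger
    predicted binding, in the spirit of a kcal/mol docking score).
    """
    rng = random.Random(compound_id)   # deterministic per compound
    return rng.uniform(-12.0, 0.0)

def screen(compound_ids, top_k=77):
    """Score every compound, rank by score, and keep the strongest binders."""
    scored = [(binding_score(c), c) for c in compound_ids]
    scored.sort()                      # most negative (best) first
    return scored[:top_k]

library = range(8000)                  # 8,000 compounds, as in the study
candidates = screen(library, top_k=77)
print(f"best score: {candidates[0][0]:.2f}, "
      f"weakest kept: {candidates[-1][0]:.2f}")
```

The point of a machine like Summit is that the 8,000 calls to the real scoring function run concurrently across thousands of nodes, collapsing months of serial compute into days.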

Major organizations have opened their doors – and wallets – for coronavirus computing proposals

Last week, the National Science Foundation (NSF) issued a Dear Colleague Letter expressing interest in proposals for “non-medical, non-clinical-care research that can be used immediately to explore how to model and understand the spread of COVID-19; to inform and educate about the science of virus transmission and prevention; and to encourage the development of processes and actions to address this global challenge.” Two days later, it issued another Dear Colleague Letter specifically inviting rapid response research proposals for COVID-19 computing activities through its Office of Advanced Cyberinfrastructure. As a complement to existing funding opportunities, the NSF also invited requests for supplemental funding.

Even with its quick response, though, the NSF wasn’t the first to open its pocketbook. In January, the European Commission announced a €10 million call for expressions of interest for projects that fight COVID-19 through vaccine development, treatment and diagnostics. Then, on the same day as the latest NSF Dear Colleague Letter, the Commission announced an additional €37.5 million in funding.

€3 million of this funding has already been allocated to the Exscalate4CoV (E4C) program in Italy – one of the hardest-hit countries. E4C is operating through Exscalate, a supercomputing platform that uses a chemical library of over 500 billion molecules to conduct pathogen research.

Specifically, E4C is aiming to identify candidate molecules for drugs, help design a biochemical and cellular screening test, identify key genomic regions in COVID-19 and more.

Beyond E4C, the EU also highlighted “on-demand, large-scale virtual screening” of potential drugs and antibodies at the HPC Centre of Excellence for Computational Biomolecular Research, as well as “prioritized and immediate access” to supercomputers operated by the EuroHPC Joint Undertaking.

Presumably, as the NSF and European Commission funding opportunities are leveraged, high-performance computing will play an increasingly large role in the fight against the coronavirus.



Post by Jai Krishna Ponnappan

COVID-19 High Performance Computing Consortium



The COVID-19 High Performance Computing Consortium brings together the Federal government, industry, and academic leaders to provide access to the world’s most powerful high-performance computing resources in support of COVID-19 research: over 402 petaflops, 105,334 nodes, 3,539,044 CPU cores, 41,286 GPUs, and counting.






The world's leading medical researchers are rushing to find a treatment for COVID-19 with the help of the most powerful and advanced supercomputers in the world.
Researchers across the globe are submitting potential treatments and cures to the COVID-19 High Performance Computing Consortium.
The consortium, using a network of supercomputers and laboratories, can run simulations to narrow down or rule out drug compounds for a cure much faster than traditional methods.
"It's a means by which one can begin to analyze tremendously complex or large problems," says Dave Turek, Vice President of Technical Computing at IBM Cognitive Systems. "Pharmaceutical companies may have billions of compounds that could be potential drugs."
Any researcher can submit proposals to the consortium for the supercomputers to run through.
"So, there are very novel techniques, specifically using A.I. on these supercomputers, that are beginning to speculate about new kinds of molecules that could be created to treat COVID-19," says Turek.
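Screening workloads like these are embarrassingly parallel: each compound's simulation is independent of every other, which is why they map so well onto machines with millions of cores. A toy sketch of that fan-out pattern follows; the scoring function is a fake, hypothetical stand-in for a real simulation, and a production run would use a batch scheduler and MPI across nodes rather than a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(compound: int) -> float:
    """Placeholder for one drug-target simulation (hypothetical scoring)."""
    return -((compound * 2654435761) % 1000) / 100.0  # fake deterministic score

compounds = list(range(1000))

# Fan the independent simulations out across workers; pool.map preserves
# the input order, so scores[i] corresponds to compounds[i].
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(simulate, compounds))

# Rank by score (most negative = best) to shortlist candidates.
ranked = sorted(zip(scores, compounds))
print("top candidate:", ranked[0][1])
```

With no dependencies between tasks, doubling the worker (or node) count roughly halves the wall-clock time, which is the arithmetic behind "days instead of months."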

The COVID-19 High Performance Computing Consortium is a unique private-public effort spearheaded by the White House Office of Science and Technology Policy, the U.S. Department of Energy and IBM to bring together federal government, industry, and academic leaders who are volunteering free compute time and resources on their world-class machines.


Consortium partners include:

  • Industry
    • IBM
    • Amazon Web Services
    • AMD
    • Google Cloud
    • Hewlett Packard Enterprise
    • Microsoft
    • NVIDIA
  • Academia
    • Massachusetts Institute of Technology
    • Rensselaer Polytechnic Institute
    • University of Illinois
    • University of Texas at Austin
    • University of California - San Diego
    • Carnegie Mellon University
    • University of Pittsburgh
    • Indiana University
    • University of Wisconsin-Madison
  • Department of Energy National Laboratories
    • Argonne National Laboratory
    • Lawrence Livermore National Laboratory
    • Los Alamos National Laboratory
    • Oak Ridge National Laboratory
    • National Energy Research Scientific Computing Center
    • Sandia National Laboratories
  • Federal Agencies
    • National Science Foundation
      • XSEDE
      • Pittsburgh Supercomputing Center (PSC)
      • Texas Advanced Computing Center (TACC)
      • San Diego Supercomputer Center (SDSC)
      • National Center for Supercomputing Applications (NCSA)
      • Indiana University Pervasive Technology Institute (IUPTI)
      • Open Science Grid (OSG)
      • National Center for Atmospheric Research (NCAR)
    • NASA
Researchers are invited to submit COVID-19 related research proposals to the consortium via an online portal; proposals are then reviewed for matching with computing resources from one of the partner institutions. An expert panel of top scientists and computing researchers will work with proposers to assess the public health benefit of the work, with emphasis on projects that can ensure rapid results.
Fighting COVID-19 will require extensive research in areas like bioinformatics, epidemiology, and molecular modeling to understand the threat we’re facing and form strategies to address it. This work demands a massive amount of computational capacity. The COVID-19 High Performance Computing Consortium helps aggregate computing capabilities from the world's most powerful and advanced computers to help COVID-19 researchers execute complex computational research programs to help fight the virus.
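On the epidemiology side, one of the simplest models that researchers scale up on these machines is the classic SIR compartment model (susceptible, infected, recovered). A minimal, self-contained sketch follows; the parameters are illustrative, not fitted to COVID-19 data:

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One explicit-Euler step of the classic SIR compartment model.

    beta  - transmission rate (contacts x infection probability)
    gamma - recovery rate (1 / mean infectious period)
    """
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Illustrative parameters: basic reproduction number R0 = beta/gamma = 3.
s, i, r = 0.99, 0.01, 0.0          # fractions of the population
beta, gamma = 0.3, 0.1

peak = i
for _ in range(2000):              # integrate 200 time units
    s, i, r = sir_step(s, i, r, beta, gamma)
    peak = max(peak, i)

print(f"peak infected fraction: {peak:.3f}, final susceptible: {s:.3f}")
```

Production epidemic models add spatial structure, age stratification, and stochastic agent-level behavior, which is where the supercomputing capacity comes in; each added dimension multiplies the number of such integrations to run.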
About the Consortium, the HPC Systems & How to Join
Consortium members manage a range of computing capabilities that span from small clusters to some of the largest supercomputers in the world. As a member, you would support this crucial work by not only offering your computational resources, but also your deep technical capabilities and expertise to help COVID-19 researchers execute complex computational research programs. We hope that you will join us in this crucial mission.
We are currently providing broad access to portions of over 30 supercomputing systems, representing over 402 petaflops, 105,334 nodes, 3,539,044 CPU cores, 41,286 GPUs, and counting. Their basic specifications are described below. Additional resources will be added as our consortium grows; please check back for updates.