Description
Practical, effective use of HPC resources inevitably ends up in the high-throughput regime. No matter how well optimized a computational chemistry code base may be, most of them (e.g. VASP, GROMACS) are geared towards single calculations, while journals require results demonstrated across many systems to support general claims. To this end there has been a steady rise in workflow engines such as AiiDA, pyiron, Luigi, and Fireworks, each of which carries biases from its original use case. Here, using MongoDB databases and Jobflow with Fireworks, we demonstrate how to perform well-versioned (via DVC) high-throughput calculations across different computational centers. This also involves generating optimal binaries for different target hardware.
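The high-throughput pattern described above — fanning one independent calculation out per system and collecting results into a database — can be sketched in plain Python. This is a minimal stdlib illustration of the pattern, not the presenters' actual Jobflow/Fireworks setup; `fake_relax` and the system list are hypothetical placeholders for real DFT/MD runs.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_relax(system: str) -> dict:
    """Placeholder for a real calculation (e.g. a VASP relaxation or
    GROMACS run that a workflow engine like Fireworks would launch)."""
    return {"system": system, "energy": -1.0 * len(system)}

def run_high_throughput(systems):
    """Fan out one independent job per system and gather results as
    they complete -- the bookkeeping a workflow engine handles at
    scale, with provenance persisted to a database such as MongoDB."""
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(fake_relax, s): s for s in systems}
        for fut in as_completed(futures):
            res = fut.result()
            results[res["system"]] = res["energy"]
    return results

if __name__ == "__main__":
    print(run_high_throughput(["Si", "GaAs", "MgO"]))
```

In a production setting the executor is replaced by a workflow engine's scheduler (e.g. Fireworks pulling jobs from a MongoDB-backed LaunchPad), and the results dictionary by versioned storage so runs remain reproducible across computing centers.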