Optimize rapid deployment and updating of VM images at the remote compute cluster

Project Details

Project Lead
Jan Balewski 
Project Manager
Jan Balewski 
Project Members
Michael Poat, Justin Stevens  
Supporting Experts
Gregor von Laszewski  
Massachusetts Institute of Technology, Laboratory for Nuclear Science  
Physics (203) 


Monte Carlo simulations of future particle-physics experiments usually require complex simulation packages developed and maintained over many years by a broad community, e.g. Geant at CERN. It is relatively straightforward to deploy the most recent version of such software on a private machine and customize it to the needs of a particular experiment. It is more challenging to scale up to 10-20 compute nodes, which may have different hardware and computing environments. One often uses 'spare cycles' at a computing facility affiliated with the institutions supporting the new project. However, there is typically a tension between the stability a large facility must maintain for its core mission and the need of a new experiment to customize the environment and update libraries or the operating system.

The advent of virtual machines (VMs) allows a hardware-agnostic clone of an experiment-specific computing environment to be deployed at an arbitrary facility. This project intends to explore the practicality of this approach. We would like to investigate how easy it is to frequently build new VMs locally, ship them to a FutureGrid computing facility, deploy 10-20 copies, run for a week, and transfer back the (moderate-size) results, then erase it all and start over.
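The build, ship, deploy, collect, and erase cycle described above can be sketched as a small driver that emits the ordered steps for one iteration. This is only an illustrative outline: the image name, node count, and every step description below are placeholder assumptions, not actual FutureGrid commands or project specifics.

```python
# Sketch of one iteration of the proposed VM lifecycle.
# All step strings are hypothetical placeholders; a real deployment
# would substitute the facility's actual image-management and
# instance-launch tools.

def plan_cycle(image="experiment-sim.img", n_nodes=20, run_days=7):
    """Return the ordered list of steps for one build/run/erase cycle."""
    steps = [
        f"build VM image locally -> {image}",
        f"upload {image} to remote facility",
    ]
    # Deploy the requested number of identical VM copies.
    steps += [f"launch copy {i} of {image}" for i in range(1, n_nodes + 1)]
    steps += [
        f"run simulations for {run_days} days",
        "transfer (moderate-size) results back to home institution",
    ]
    # Tear everything down so the next iteration starts clean.
    steps += [f"terminate copy {i}" for i in range(1, n_nodes + 1)]
    steps.append(f"delete {image} from remote facility")
    return steps

if __name__ == "__main__":
    for step in plan_cycle(n_nodes=3, run_days=7):
        print(step)
```

The point of the sketch is the shape of the loop, i.e. that each iteration is fully self-contained and ends by erasing all remote state, so repeated rebuilds never depend on leftovers from a previous run.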

Intellectual Merit

Optimization of the design of the future DarkLight experiment, intended to run at JLab

Broader Impacts

Explores the usability of cloud-like resources for designing physics experiments

Scale of Use

A few weeks of CPU time on 10-20 cores, spread over a few months