Dr. A. P. J. Abdul Kalam Cray HPC award to TARANG
I am happy to share the news that I have received the Dr. A.P.J. Abdul Kalam Cray HPC Award for the development of TARANG. The citation reads: “Development of open-source code TARANG and using it for turbulence simulation at extreme scales”. I thank all the contributors to TARANG. I missed many names in my acceptance speech (given below), but I thank all of you! The award really belongs to the whole team, whose members are scattered across many places.
“Prof. Balakrishnan, Prof. Balaram, Prof. Narasimha, Dr. Di Rose, and members of Cray Computers, I thank you all for the award that means a lot to me and our turbulent group at IIT Kanpur. Many thanks!
I will briefly describe our code TARANG and its history. It is a spectral code for solving fluid flows. Spectral means that we use Fourier transforms to resolve a flow at various length scales. This method is very accurate, and hence it is a popular choice among weather modellers.
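To make the idea concrete, here is a minimal sketch (my illustration, not TARANG's actual code) of the heart of any spectral method: differentiation becomes multiplication by ik in Fourier space. It uses FFTW and differentiates u(x) = sin x on a periodic domain (compile with g++ file.cpp -lfftw3):

```cpp
#include <fftw3.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int N = 64;                          // grid points on [0, 2*pi)
    std::vector<double> u(N), dudx(N);
    // real-to-complex FFT keeps N/2 + 1 modes
    fftw_complex* uk = fftw_alloc_complex(N / 2 + 1);

    for (int i = 0; i < N; ++i)
        u[i] = std::sin(2.0 * M_PI * i / N);   // u(x) = sin(x)

    fftw_plan fwd = fftw_plan_dft_r2c_1d(N, u.data(), uk, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_c2r_1d(N, uk, dudx.data(), FFTW_ESTIMATE);

    fftw_execute(fwd);

    // differentiation: multiply mode k by i*k (domain length 2*pi, so k_x = k)
    for (int k = 0; k <= N / 2; ++k) {
        const double re = uk[k][0], im = uk[k][1];
        uk[k][0] = -k * im;
        uk[k][1] =  k * re;
    }

    fftw_execute(bwd);
    for (int i = 0; i < N; ++i) dudx[i] /= N;  // FFTW transforms are unnormalized

    std::printf("du/dx at x = 0: %.12f (exact: 1)\n", dudx[0]);

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    fftw_free(uk);
    return 0;
}
```

The derivative is exact to machine precision for all resolved modes, which is why spectral methods are so much more accurate than finite differences at the same resolution.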
Now, how is TARANG unique? There are many kinds of flows: flows of neutral fluids like water, of charged fluids as in stars and in the Earth's interior, thermal flows, etc. I wanted one flow solver that could simulate all such flows. A single solver saves a lot of time, and maintaining one code is much simpler than maintaining many codes for different applications.
The first inspiration for TARANG, and for making it open source, came from my friend Daniele Carati of Brussels. I met him for the first time in 2003. On the first day itself, he gave me their code TURBO, which was a product of several years of work. Daniele is exceptionally generous; such a culture of code sharing exists in Europe but is almost nonexistent in India.
Though TURBO is free and is a good code written in Fortran, I wanted a general-purpose partial-differential-equation solver with classes representing physical quantities like the velocity vector, curl, etc. So I chose C++ as the programming language and created objects like vector fields and scalar fields. We also planned to add a novel feature called energy transfers to our code. This feature, developed around 2000 with Gaurav Dar and Eswaran, is a very useful diagnostic tool.
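Here is a minimal sketch of what such a design can look like; this is my illustration of the idea, not TARANG's actual class hierarchy. A scalar field owns one array of Fourier modes, a vector field bundles three of them, and an operator like curl acts on whole objects in spectral space, where curl is just multiplication by ik:

```cpp
#include <complex>
#include <vector>

using Complex = std::complex<double>;
static const Complex I(0.0, 1.0);

struct ScalarField {
    int N;                         // grid size (N^3, full complex storage)
    std::vector<Complex> F;       // Fourier-space data
    explicit ScalarField(int n) : N(n), F((size_t)n * n * n) {}
    Complex& at(int i, int j, int k)       { return F[((size_t)i * N + j) * N + k]; }
    Complex  at(int i, int j, int k) const { return F[((size_t)i * N + j) * N + k]; }
};

struct VectorField {
    ScalarField x, y, z;           // three Cartesian components
    explicit VectorField(int n) : x(n), y(n), z(n) {}
};

// FFT array index -> signed integer wavenumber
inline double wavenum(int i, int N) { return (i <= N / 2) ? i : i - N; }

// curl in spectral space: (curl V)(k) = i k x V(k)
VectorField curl(const VectorField& V) {
    const int N = V.x.N;
    VectorField C(N);
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k) {
                double kx = wavenum(i, N), ky = wavenum(j, N), kz = wavenum(k, N);
                C.x.at(i, j, k) = I * (ky * V.z.at(i, j, k) - kz * V.y.at(i, j, k));
                C.y.at(i, j, k) = I * (kz * V.x.at(i, j, k) - kx * V.z.at(i, j, k));
                C.z.at(i, j, k) = I * (kx * V.y.at(i, j, k) - ky * V.x.at(i, j, k));
            }
    return C;
}

int main() {
    VectorField V(8);              // an 8^3 velocity field (all zeros here)
    VectorField W = curl(V);       // vorticity = curl of velocity
    (void)W;
    return 0;
}
```

The payoff of this design is that the same VectorField type can hold a velocity field, a magnetic field, or a vorticity field, so one set of operators serves hydrodynamic, MHD, and convection solvers alike.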
Development of TARANG started in 2003, and its first version came after four years that included many sleepless nights. Many students of our lab tested and ran the code, and we started writing papers on fluid turbulence, convection, MHD, etc. On this front, I particularly acknowledge the efforts of Mani Chandra, Supriyo Paul, Rakesh Yadav, Pankaj Mishra, and Abhishek Kumar.
The first set of runs was performed on EKA at CRL. After CRL closed down, we moved to CDAC. We received many tips from the engineers of CRL and CDAC. We are particularly thankful to Sandeep Joshi and his team, who hosted us at CDAC during several summers. There we started working on a pencil-based FFT and parallel I/O; this work was completed by Anando Chatterjee. Around this time we also started using IITK's newly arrived HPC cluster.
The biggest break, however, came when I met Ravi Samtaney of KAUST. Through a joint project, we got access to a Blue Gene/P and later a Cray XC40. KAUST maintains its clusters very well and encourages big, challenging projects. We planned to run our code on the whole of these clusters, for which we perfected the pencil-based FFT (called FFTK, for Fast Fourier Transform, Kanpur) and the parallel I/O library (h5si). We got access to the full machines on two occasions and ran the scaling tests round the clock. During this work we showed scaling of TARANG up to nearly two lakh (196608) processors.
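For readers curious why a pencil decomposition scales so far, the stage structure is shown in the serial sketch below (my illustration; FFTK's real implementation is MPI-based and far more careful): a 3D FFT becomes three passes of contiguous 1D FFTs, with a data reshuffle between passes. In the parallel code, each pass is local to a rank holding a pencil of data, and each reshuffle is an MPI all-to-all:

```cpp
#include <fftw3.h>
#include <cstddef>

// 1D FFTs along the fastest (contiguous) axis: N*N lines of length N, in place.
static void fft_fastest_axis(fftw_complex* data, int N) {
    fftw_plan p = fftw_plan_many_dft(1, &N, N * N,
                                     data, nullptr, 1, N,
                                     data, nullptr, 1, N,
                                     FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(p);
    fftw_destroy_plan(p);
}

// Cyclic axis rotation: out[k][i][j] = in[i][j][k], which makes the middle
// axis of "in" the fastest axis of "out". In a pencil-decomposed parallel
// code, this reshuffle is the MPI all-to-all step.
static void rotate(const fftw_complex* in, fftw_complex* out, int N) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k) {
                const size_t src = ((size_t)i * N + j) * N + k;
                const size_t dst = ((size_t)k * N + i) * N + j;
                out[dst][0] = in[src][0];
                out[dst][1] = in[src][1];
            }
}

int main() {
    const int N = 16;
    const size_t M = (size_t)N * N * N;
    fftw_complex* a = fftw_alloc_complex(M);
    fftw_complex* b = fftw_alloc_complex(M);
    for (size_t m = 0; m < M; ++m) { a[m][0] = (double)(m % 7); a[m][1] = 0.0; }

    fft_fastest_axis(a, N);   // pass 1: 1D FFTs along z (z-pencils)
    rotate(a, b, N);          // reshuffle: y becomes contiguous
    fft_fastest_axis(b, N);   // pass 2: 1D FFTs along y (y-pencils)
    rotate(b, a, N);          // reshuffle: x becomes contiguous
    fft_fastest_axis(a, N);   // pass 3: 1D FFTs along x (x-pencils)
    rotate(a, b, N);          // reshuffle back to the original (x,y,z) layout

    // b now holds the full 3D FFT of the original cube
    fftw_free(a);
    fftw_free(b);
    return 0;
}
```

The communication cost lives entirely in the reshuffles: with ranks arranged in a p1 x p2 grid, each all-to-all involves only one row or column of the grid. That is what lets a pencil FFT use far more than N processes, the hard limit of a slab decomposition on an N^3 grid.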
We solved many interesting turbulence problems using TARANG. The most notable, in collaboration with Abhishek Kumar, showed that turbulent thermal convection has physics similar to that of hydrodynamic turbulence. The problem involved many physics ideas, but what won the day was a 4096^3-grid simulation on the Cray XC40 that demonstrated this scaling very convincingly. This result was covered in Nature Asia news in 2017.
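To spell out the scaling (my gloss, in standard notation): the hydrodynamic form in question is Kolmogorov's kinetic-energy spectrum,

$$E_u(k) \sim \epsilon_u^{2/3}\, k^{-5/3},$$

where $E_u(k)$ is the kinetic-energy spectrum and $\epsilon_u$ the energy dissipation rate, as opposed to the steeper Bolgiano-Obukhov form traditionally expected for buoyancy-driven turbulence.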
Before closing, I add that supercomputing has opened a unique window for scientific research. We can now attempt problems that we could never have imagined earlier, e.g., simulating the brain with a billion neurons, or the large clusters of molecules that Tanushree simulates. But as a nation we are falling behind on this front, both in resources and in coordinated efforts to create large programs and creative hardware. I hope we become a leader in this field, but that dream is still some distance away.
I again thank you for the honour!”
Comments
All the best in pursuing this dream.