Design for Loading Prepartitioned Data
- The Moves List that is applied directly in HemeLB is built in OptimisedDecomposition::CompileMoveData(), which is encapsulated inside OptimisedDecomposition::PopulateMovesList().
- Fortunately, the Moves List data structure is completely independent of the ParMETIS data structure, and CompileMoveData() serves to convert the ParMETIS output to the HemeLB-specific Moves List format.
- PopulateMovesList() is called after ParMETIS has been applied, which in turn happens after the BasicDecomposition has been performed. The BasicDecomposition uses a legacy decomposition technique that still relies on the original block structure.
- Unfortunately, referencing sites through blocks is ingrained quite deeply in the HemeLB domain decomposition code, and I expect that any modification to this aspect will require a rewrite of PopulateMovesList() and CompileMoveData().
- Leave BasicDecomposition and the legacy block structure in HemeLB for the time being.
- Add a switch/option to skip the callParMETIS() function and execute a modified version of PopulateMovesList().
- The modified version of PopulateMovesList() uses a new CompileMoveDataFromFile() function, which is partially based on CompileMoveData().
- The most important difference between CompileMoveDataFromFile() and CompileMoveData() is that the former will load the contents of the vector "partitionVector" from a file that contains the complete domain decomposition information (see the sketch after this list).
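
A minimal sketch of how CompileMoveDataFromFile() could fill "partitionVector", assuming the GMY+TXT layout from the options below (one ASCII line per site: global site index, target rank). Only OptimisedDecomposition, PopulateMovesList(), CompileMoveData(), callParMETIS() and partitionVector come from the existing code described above; the helper name LoadPartitionVectorFromFile, the useDecompositionFile option and the file layout are assumptions for illustration, not HemeLB's actual API.

```cpp
#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Reads one "<global site index> <target rank>" pair per line
// (hypothetical layout) into a rank assignment indexed by global site index.
std::vector<int> LoadPartitionVectorFromFile(const std::string& path,
                                             std::size_t siteCount)
{
  std::vector<int> partitionVector(siteCount, -1);
  std::ifstream in(path);
  if (!in)
    throw std::runtime_error("Cannot open decomposition file: " + path);

  std::size_t siteIndex;
  int targetRank;
  while (in >> siteIndex >> targetRank)
  {
    if (siteIndex >= siteCount)
      throw std::runtime_error("Site index out of range in " + path);
    partitionVector[siteIndex] = targetRank;
  }
  return partitionVector;
}

// Assumed control flow around the existing entry points (pseudo-structure):
//
//   BasicDecomposition();                // unchanged, still block-based
//   if (useDecompositionFile)            // new switch proposed above
//     partitionVector = LoadPartitionVectorFromFile(path, siteCount);
//   else
//     callParMETIS();                    // existing path
//   PopulateMovesList();                 // calls CompileMoveDataFromFile()
//                                        // or CompileMoveData() accordingly
```

Everything downstream of the Moves List would stay untouched, since CompileMoveDataFromFile() only changes where partitionVector comes from, not how the moves are compiled from it.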
Here are a few options I can think of for the decomposition file format:
- HGB format, a binary format that contains the lattice sites plus the decomposition information (could be redundant if used in conjunction with GMY).
- GMY+TXT, which is the GMY file plus an ASCII file mapping each global site index to its intended core.
- GMY+HPB, which is the same as above, except that the mapping information is stored in a binary format.
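
As a concrete illustration of the GMY+TXT option, and matching the assumption made in the sketch above, the ASCII mapping file could be as simple as one line per fluid site; the exact column layout here is an assumption, not a settled format:

```
# <global site index> <target core rank>
0 0
1 0
2 1
3 1
```

The GMY+HPB variant would store the same pairs in binary, which keeps file sizes down for large geometries at the cost of no longer being inspectable or editable by hand.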