I have the following questions regarding query processing in MapD-core:
- I understand that if the data involved in query processing is larger than GPU memory, MapD-core throws the following exception:
Exception: Query couldn’t keep the entire working set of columns in GPU memory
Is there an easy way to add batched execution (dividing the input data into smaller batches so that each batch fits in GPU memory, then combining the per-batch results once all batches are done)? I need some pointers, such as which files would be involved in such a change.
- I believe that at query execution time, if the required data is not already in GPU memory, it is transferred from CPU memory. Could you please point me to the functions involved in this functionality?