MapD has two options when there is not enough GPU memory available to execute a query.
The first option is to turn the watchdog off, which allows the query to run in stages on the GPU. MapD then orchestrates the transfer of data through the various layers of our data abstraction, moving data onto the GPU as each stage executes.
The second option is to set `allow-cpu-retry`, which directs queries that do not fit in the available GPU memory to fall back and execute on the CPU.
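Both options are server settings. As a sketch, assuming the flag names `enable-watchdog` and `allow-cpu-retry` from MapD Core's server configuration (check the documentation for your version), you could set them in the server config file:

```
# mapd.conf — server configuration (flag names assumed; verify against your release)

# Disable the watchdog so large queries can run in stages on the GPU
enable-watchdog = false

# Let queries that exceed available GPU memory retry on the CPU
allow-cpu-retry = true
```

The same flags can typically be passed on the `mapd_server` command line (e.g. `--allow-cpu-retry`) if you prefer not to edit the config file.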
It is probably worth noting here that MapD Core is an in-memory database, so if you expect your common use case to exhaust the VRAM of the available GPUs, we would recommend a sizing exercise to determine what scale of installation your workload needs. MapD can scale across multiple GPUs in a single machine (up to 20 physical GPUs, the most we have found in one machine, and up to 64 using GPU virtualization tools like Bitfusion Flex). Beyond that, MapD can scale across multiple machines in a distributed model, allowing many servers each with many cards. So the size of data that can be operated on is very flexible.
If you could describe the end goal of what you are trying to determine with MapD in a broader context, including your use case and some details of your schema and data sizing, we would be able to guide you more directly to a solution rather than chasing individual tidbits.