Support for Mac OS X


Is there any support to run MapD on a MacOS?
The community binaries are currently Linux-only.



Thanks for looking into MapD.

The community edition is Linux-only currently. We will expand that over time as resources and requests settle into a manageable pattern.

If you wish to try it on a Mac, you would have to build the open-source version.



Thank you, Dwayne, for the feedback. I got it running using a VirtualBox (CentOS) VM.


I did some research on this topic, and I believe the best you can achieve with the setup you describe is CPU mode. Even if your OSX box has Nvidia graphics, the CUDA and clang libraries are currently not compatible, and have not been for quite a while. (If you run mapd in a CentOS VM on an OSX host, you will still be limited to CPU mode.) I heard a report that this might change “soon”, but since the new Macs are not using Nvidia graphics I don’t anticipate this for at least another year.


@jfb Nvidia provides webdrivers for Mac. Using these drivers from Nvidia it is possible to connect an external GPU unit containing an Nvidia GPU card. One such eGPU is the Aorus GTX 1070 gaming box. I have this setup and available on my Mac today with CUDA 8 installed and working. What’s the ETA for MapD on Mac OS X?




We are not planning any work in that direction for the immediate future, but if you have the hardware and the will, we would welcome, and help with, any contributions to the code base to get it working there.

Are you able to build and run the standard cuda samples currently?

What does nvidia-smi respond with?



Hi Dwayne. At this stage it is not urgent to get it working on MacOS. We can run it on a VirtualBox VM and have also ordered a server with Pascal GPUs.
Thanks for the response and willingness to help.


Hi Dwayne,
I’d be happy to get the code working on my MBP. Yes, I was able to build and run the samples. From my reading online, there isn’t an nvidia-smi for OS X. But here’s some output from my MBP that might help.

$ ./deviceQuery
./deviceQuery Starting…

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1070"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8192 MBytes (8589737984 bytes)
(15) Multiprocessors, (128) CUDA Cores/MP: 1920 CUDA Cores
GPU Max Clock rate: 1721 MHz (1.72 GHz)
Memory Clock rate: 4004 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 196 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS

$ ./bandwidthTest
[CUDA Bandwidth Test] - Starting…
Running on…

Device 0: GeForce GTX 1070
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 1353.8

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 2762.3

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 191734.8

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Please let me know if you need more info.




Hi - curious - what model MBP is this, please? (I have a 2015 one with a GT750M, which is tiny.) A 1070 sounds far more modern.


Hi Niviksha,
I’m connecting an eGPU to a 2016 MBP via a Thunderbolt 3 connector. The eGPU is the Aorus GTX 1070 gaming box:



Hi Bosco - Thanks. If you’re interested, I wrote up a semi-detailed version of my (successful) experiences on a Mac here: “Mac OS Sierra (2015 MacBook Pro 10.12 with Nvidia GPU) successfully working”.

Hope it helps.


Hi all,

I successfully managed to build and start mapd on macOS Sierra 10.12.6 using a new MacBook Pro and an eGPU box (Akitio Node) connected via TB3. I am using a GTX 1080 Ti as the GPU in that setup, and the performance of mapd is phenomenal!

It took a few tweaks to actually build mapd-core successfully with that setup. The biggest issue was caused by GDAL. I initially installed GDAL via Homebrew, but that installs GDAL 1.15.x, which fails to link with csvimport.

Once I switched to the newest version of GDAL (2.x), everything worked fine. Note that the 2.x version of GDAL can be installed using brew cask install gdal-framework.

I did first tests with the NYC taxi trip data for 2015 in conjunction with Immerse, and I am amazed by the fantastic response time. I will document my findings soon.



@crjaensch That’s so cool! Now I’m thinking I need to get an eGPU enclosure for my Mac. :slight_smile:

Btw we’re looking at the possibility of having a community build for Mac (i.e. so you wouldn’t have to build from source). Stay tuned on that!


@darwin It is great news to hear that you are planning a community build for macOS! Would that include the GPU dataframe feature?

I built mapd-core without it, because the current version of Apache Arrow was not compatible. I will try to build against Arrow 0.4.1 when I get a bit of extra time.

BTW: You guys have done a great job; even when my eGPU is not connected (so it falls back to CPU mode), MapD’s performance is still excellent :slight_smile: .

Below is a short comparison of hybrid CPU/GPU versus CPU-only mode running the following queries against an NYC taxi trip data set (18 columns) with 173 million records. All this on a 2017 MacBook Pro with a 2.8 GHz Kaby Lake processor and 16 GB RAM; as the GPU I used an NVIDIA GTX 1080 Ti.

Using CPU/GPU hybrid mode
Q1: 28 ms
Q2: 42 ms
Q3: 43 ms
Q4: 446 ms

Using CPU-only mode
Q1: 48 ms
Q2: 234 ms
Q3: 804 ms
Q4: 1097 ms

-- Query 1:
SELECT COUNT(*) as val FROM nyc_trip_2013 WHERE ((pickup_datetime >= CAST('2013-04-30 00:00:00' AS TIMESTAMP(0)) AND pickup_datetime <= CAST('2013-05-31 00:00:00' AS TIMESTAMP(0))))

-- Query 2:
SELECT payment_type as key0,COUNT(*) AS val FROM nyc_trip_2013 WHERE ((pickup_datetime >= CAST('2013-04-30 00:00:00' AS TIMESTAMP(0)) AND pickup_datetime <= CAST('2013-05-31 00:00:00' AS TIMESTAMP(0)))) GROUP BY key0 ORDER BY val DESC LIMIT 100

-- Query 3:
SELECT PG_EXTRACT('isodow', pickup_datetime) as key0, PG_EXTRACT('hour', pickup_datetime) as key1, COUNT(*) AS color FROM nyc_trip_2013 WHERE ((pickup_datetime >= CAST('2013-04-30 00:00:00' AS TIMESTAMP(0)) AND pickup_datetime <= CAST('2013-05-31 00:00:00' AS TIMESTAMP(0)))) GROUP BY key0, key1 ORDER BY key0,key1

-- Query 4:
SELECT cast((cast(trip_distance as float) - 0) * 0.4 as int) as key0, COUNT(*) AS val FROM nyc_trip_2013 WHERE (trip_distance >= 0 AND trip_distance <= 30) AND ((pickup_datetime >= CAST('2013-04-30 00:00:00' AS TIMESTAMP(0)) AND pickup_datetime <= CAST('2013-05-31 00:00:00' AS TIMESTAMP(0)))) GROUP BY key0 HAVING key0 >= 0 AND key0 < 12 ORDER BY key0


Thanks, Niviksha. Yes, I was able to get the eGPU working with the MBP. I didn’t get time to work on the MapD build yet, but crjaensch was successful!




@crjaensch, Thanks for posting your successful results with the MapD build on an MBP. For the GPU dataframe, did you look into using PyGDF with MapD? I checked this out on an Nvidia-Docker image (with MapD, PyGDF, H2OaiGLM) on Ubuntu and it all worked fine. I encountered some memory issues due to an older Nvidia GPU card with just 3 GB of memory.




I think you will be more pleased if you try to run queries concurrently in GPU mode, because I guess you are not saturating the GPU at all, especially in the first two queries.


Have a look also at pymapd. It is WIP, but already pretty good if you’re working in Python.


@aznable: Yes, I noticed as well that the GPU is hardly challenged when my test queries are run sequentially. That’s why I quickly investigated how to run the queries concurrently.

@billmaimone Thanks for pointing out pymapd :slight_smile: I tested it today and wrote a short Jupyter notebook (see link below) that runs SQL queries sequentially and concurrently on my local test bed, a 2017 MacBook Pro with an Nvidia GTX 1080 Ti attached as an eGPU, to get a feel for MapD’s performance capabilities.

@aznable: The concurrent query tests now demonstrate that the GPU is run at full capacity once the twenty queries (defined in the notebook) are kicked off concurrently.
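The concurrent pattern from the notebook can be sketched with just the standard library. Note this is a hedged illustration, not the notebook itself: `run_query` below is a stand-in that simulates server-side latency with a sleep (the latency value is invented, not a measured number); in the real test you would replace its body with a pymapd connection executing the SQL.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real pymapd call. In the actual notebook, this would open
# a connection and execute the SQL against mapd; here a sleep simulates
# server-side latency so sequential vs. concurrent timing can be compared.
def run_query(sql, latency=0.05):
    time.sleep(latency)
    return sql

# Twenty illustrative queries (names/comments are placeholders).
queries = ["SELECT COUNT(*) FROM nyc_trip_2013 /* q%d */" % i for i in range(20)]

# Sequential: total time is roughly the sum of the individual latencies.
t0 = time.perf_counter()
for q in queries:
    run_query(q)
sequential = time.perf_counter() - t0

# Concurrent: all twenty queries are in flight at once, so total time is
# roughly the latency of the slowest single query.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_query, queries))
concurrent = time.perf_counter() - t0

print("sequential: %.2fs, concurrent: %.2fs" % (sequential, concurrent))
```

With the simulated latencies, the concurrent run finishes far sooner than the sequential one, which matches the observation above that the GPU only saturates when queries are kicked off in parallel.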

Link to the above mentioned Jupyter notebook:


@Bosco, thanks for the hint regarding the dockerized GPU dataframe example. Unfortunately, macOS does not support Nvidia-Docker due to hypervisor limitations.

Instead, I will try to get the PyGDF example working once I manage to successfully build MapD Core with Apache Arrow. Unfortunately, this will have to wait until next weekend.