I want to analyze query execution on MapD using nvprof


#1

I would like to know which CUDA APIs are called when a query is executed on MapD.
I'd like to run the server under the nvprof command and get output in the form shown below.

The query statement that I want to analyze is as follows.

select
    l_returnflag,
    l_linestatus,
    sum(l_quantity) as sum_qty,
    sum(l_extendedprice) as sum_base_price,
    sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
    sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
    avg(l_quantity) as avg_qty,
    avg(l_extendedprice) as avg_price,
    avg(l_discount) as avg_disc,
    count(*) as count_order
from
    lineitem
where
    l_shipdate <= date '1998-12-01' - interval '116' day (3)
group by
    l_returnflag,
    l_linestatus
order by
    l_returnflag,
    l_linestatus;

[screenshot: example of the desired nvprof output format]


#2

Hi @Hyuck,

You can run nvprof on your server, e.g.:

Stop your mapd server; then, assuming the bin directory of your installation is in your PATH variable, run:

nvprof --unified-memory-profiling off mapd_server --config [your storage directory]/mapd.conf

From another terminal, connect to the database, then run your query/queries.
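For example, with the mapdql client shipped in the same bin directory (the database name, user, password, and port below are the MapD defaults; substitute your own):

mapdql mapd -u mapd -p HyperInteractive --port 9091

and paste the query from #1 at the mapdql> prompt.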

Stop the nvprof command with a Ctrl+C; you will then get a summary of the CUDA calls:

==16598== Profiling application: mapd_server --config /opt/mapd_storage/mapd.conf
==16598== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   82.20%  80.638ms        45  1.7920ms     704ns  5.0394ms  [CUDA memcpy HtoD]
                   17.47%  17.137ms         1  17.137ms  17.137ms  17.137ms  multifrag_query_hoisted_literals
                    0.28%  271.96us       136  1.9990us  1.9510us  3.8400us  init_group_by_buffer_gpu(long*, long const *, unsigned int, unsigned int, unsigned int, unsigned int, bool, char)
                    0.05%  47.102us         2  23.551us  1.4720us  45.630us  [CUDA memcpy DtoH]
      API calls:   38.05%  229.20ms       136  1.6853ms  3.5330us  228.68ms  cudaLaunch
                   27.20%  163.85ms         1  163.85ms  163.85ms  163.85ms  cuCtxCreate
                   13.78%  82.986ms        45  1.8441ms  2.9460us  5.1675ms  cuMemcpyHtoD
                    9.31%  56.058ms         1  56.058ms  56.058ms  56.058ms  cuLinkAddFile
                    4.04%  24.355ms         1  24.355ms  24.355ms  24.355ms  cuLinkComplete
                    2.87%  17.273ms         2  8.6364ms  12.069us  17.261ms  cuMemcpyDtoH
                    2.52%  15.178ms         1  15.178ms  15.178ms  15.178ms  cuGraphicsGLRegisterBuffer
                    0.87%  5.2254ms         1  5.2254ms  5.2254ms  5.2254ms  cuGLGetDevices
                    0.83%  5.0241ms         1  5.0241ms  5.0241ms  5.0241ms  cuModuleLoadDataEx
                    0.24%  1.4558ms         1  1.4558ms  1.4558ms  1.4558ms  cuMemAlloc
                    0.11%  648.09us       109  5.9450us     104ns  154.17us  cuDeviceGetAttribute
                    0.10%  595.39us         1  595.39us  595.39us  595.39us  cuLinkAddData
                    0.04%  266.78us         2  133.39us  132.71us  134.07us  cuDeviceTotalMem
                    0.02%  115.32us      1088     105ns      88ns  1.9690us  cudaSetupArgument
                    0.01%  59.173us         1  59.173us  59.173us  59.173us  cuLinkCreate
                    0.01%  43.448us        52     835ns     155ns  3.4850us  cuCtxSetCurrent
                    0.01%  40.875us         1  40.875us  40.875us  40.875us  cuDeviceGetName
                    0.00%  20.513us       136     150ns     115ns  2.8380us  cudaConfigureCall
                    0.00%  6.5210us         1  6.5210us  6.5210us  6.5210us  cuLaunchKernel
                    0.00%  5.8740us         6     979ns     248ns  4.2740us  cuEventCreate
                    0.00%  4.4340us         2  2.2170us     829ns  3.6050us  cuInit
                    0.00%  2.8530us         1  2.8530us  2.8530us  2.8530us  cuLinkDestroy
                    0.00%  2.3330us         1  2.3330us  2.3330us  2.3330us  cuDeviceGetByPCIBusId
                    0.00%  2.0680us         1  2.0680us  2.0680us  2.0680us  cuDeviceGetPCIBusId
                    0.00%     906ns         3     302ns     128ns     585ns  cuDeviceGetCount
                    0.00%     761ns         4     190ns     150ns     248ns  cuDeviceGet
                    0.00%     701ns         1     701ns     701ns     701ns  cuModuleGetFunction
                    0.00%     563ns         1     563ns     563ns     563ns  cuCtxGetCurrent
                    0.00%     366ns         1     366ns     366ns     366ns  cuCtxGetDevice
                    0.00%     363ns         1     363ns     363ns     363ns  cudaDriverGetVersion
                    0.00%     165ns         1     165ns     165ns     165ns  cuDriverGetVersion
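If you need per-call detail rather than this aggregated summary (one line per kernel launch and per memcpy), nvprof can also produce a GPU trace. A sketch, reusing the same config path as above; --print-gpu-trace and --log-file are standard nvprof options, and the log path is just an example:

nvprof --unified-memory-profiling off --print-gpu-trace --log-file /tmp/mapd_trace.log mapd_server --config /opt/mapd_storage/mapd.conf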

#3

My variables are set like this:

$MAPD_PATH = /opt/mapd
$MAPD_STORAGE = /home/mapd (a non-default, high-capacity directory)

I also re-ran $MAPD_PATH/systemd/install_mapd_systemd.sh.
Even after setting the new path, the error below appears when I run $MAPD_PATH/bin/mapd_server.

The mapd.conf file and the data are currently present in $MAPD_STORAGE.
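In shell terms, what I run is roughly the following (a sketch reconstructing my setup, using the --config form from #2):

export MAPD_PATH=/opt/mapd
export MAPD_STORAGE=/home/mapd

$MAPD_PATH/bin/mapd_server --config $MAPD_STORAGE/mapd.conf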

[screenshot: mapd_server error message]

Is recompiling mapd_server the only way to resolve that error?


#4

In the mapd.conf file you should specify /home/mapd/data as your data directory.

It looks like the software is treating the data directory as a relative path, so it is looking for it under the /installation_dir/bin directory.
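A minimal sketch of the relevant mapd.conf entries, assuming the data subdirectory under /home/mapd has already been initialized; the port values are the MapD defaults, and everything except the data line is just for context:

port = 9091
http-port = 9090
data = "/home/mapd/data"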


#5

Thank you very much; I really appreciate it, sir.
If I have any more questions, I'll post again. Please answer then as well! Thank you.