Copy rows from table to table


I usually use INSERT INTO tableA SELECT * FROM tableB in MySQL, but I'm not able to do this in MapD. Is there a similar way to do the job?


You can try the COPY utility from mapdql.

A COPY TO and a subsequent COPY FROM would do the job.

Or a COPY TO a pipe that a StreamInsert OS command is reading from (I haven't tried it).


Do you mean to export tableB to a CSV file, then copy it into tableA?
I tried 'copy tableA from tableB' or 'copy tableA from table tableB' but no hope :frowning:


Yes, I mean export and import; but I would try exporting to a pipe where a StreamInsert command is reading, to minimize disk reads. I haven't tried any of those commands with a pipe as a source or target.

This is intended as a workaround until a proper INSERT AS SELECT (IAS) is implemented.
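The pipe idea can be sketched like this; the pipe path is a placeholder, the mapdql COPY statements are shown only as comments, and plain shell commands simulate the writer and reader:

```shell
# Create a named pipe (mkfifo is equivalent to 'mknod <path> p').
rm -f /tmp/mapd_copy_pipe
mkfifo /tmp/mapd_copy_pipe

# Start the reader first; it blocks until a writer opens the pipe.
# In mapdql this would be:  copy tableA from '/tmp/mapd_copy_pipe';
cat /tmp/mapd_copy_pipe > /tmp/imported.csv &

# Now the writer; in mapdql:  copy (select * from tableB) to '/tmp/mapd_copy_pipe';
printf 'a,1\nb,2\n' > /tmp/mapd_copy_pipe

wait                      # wait for the reader to drain the pipe
rm /tmp/mapd_copy_pipe    # a pipe holds no data on disk, so cleanup is free
```

Because the rows never land on disk, this avoids the fs-cache pressure of a full CSV export.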


Thanks aznable, I will give it a try.


Do you know when it will be implemented?

Copying in and out of CSV is way too slow for me.


I am not a developer of the project, so I don't know when it will be implemented.

I tried opening one mapdql session copying from a table and writing to a named pipe (created with the Unix command mknod /path/pipe_name p), while another mapdql session copies from the pipe and writes to another table.

first mapdql:
copy (select c1_year,c2_month from flights ) to '/opt/opendata/pipe_test' ;
second mapdql:
mapdql> copy flights_pipe from '/opt/opendata/pipe_test' ;
Loaded: 123534963 recs, Rejected: 13 recs in 73.725000 secs
1 rows returned.
Execution time: 73727 ms, Total time: 73760 ms

I have to investigate the 13 rejected records.
Doing a copy to a file and a subsequent load into the table takes more time and puts pressure on the fs-cache/disks:

time mapdql -u mapd -p HyperInteractive -q -t -n <<EOF
copy (select c1_year,c2_month from flights ) to '/opt/opendata/flights_exp.csv' ;
copy flights_csv from '/opt/opendata/flights_exp.csv' ;
EOF
Loaded: 123534976 recs, Rejected: 0 recs in 14.271000 secs
1 rows returned.
Execution time: 14275 ms, Total time: 14282 ms

real 1m44.138s

Using the StreamInsert command instead of the second copy:

time cat /opt/opendata/pipe_test | ./StreamInsert --table flights_pipe --database xxxxx -u xxxxx -p xxxxxxxxx --batch 1000000 --quoted true
Field Delimiter: ,
Line Delimiter: \n
Null String: \N
Insert Batch Size: 1000000
1000000 Rows Inserted, 0 rows skipped.
123534976 Rows Inserted, 0 rows skipped.

real 2m13.018s

It's slower, but you can insert into a remote database and no records get rejected; it's also quite easy to run it in an endless loop if you plan to do multiple IAS operations during the day.
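A bounded sketch of such a loop: each pass blocks on the pipe until a writer (a mapdql COPY ... TO the pipe) produces a batch, then streams it in. Here consume() is a stand-in for the real StreamInsert invocation so the mechanism is runnable, and the paths are placeholders:

```shell
# consume() stands in for the real command, e.g.:
#   ./StreamInsert --table flights_pipe --database xxxxx -u xxxxx -p xxxxxxxxx \
#       --batch 1000000 --quoted true
consume() { cat >> /tmp/ias_loaded.csv; }

rm -f /tmp/ias_pipe
: > /tmp/ias_loaded.csv
mkfifo /tmp/ias_pipe

for i in 1 2 3; do                  # 'while true; do ...; done' in production
  consume < /tmp/ias_pipe &         # blocks until a writer opens the pipe
  # each mapdql 'copy ... to /tmp/ias_pipe' run feeds one batch:
  printf 'batch%s\n' "$i" > /tmp/ias_pipe
  wait
done
rm /tmp/ias_pipe
```

Re-opening the pipe on every iteration is what lets one long-running reader service many separate COPY runs during the day.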


Thanks @aznable, will try your way.

I'm a bit disappointed, because what I usually do in MySQL is make a temp table, fill it with data, then select:

  • Make a temp table to store some intermediate data,
    e.g. CREATE TABLE IF NOT EXISTS TEMPX ( col1, col2, col3 )

  • Then fill or update the temp table using a query:
    insert into TEMPX
    select col_ax as col1, col_bx as col2, col_cx as col3
    from tableA left join tableB …
    on duplicate key …

  • Select col1, col2 from TEMPX …

Hopefully we can do the INSERT action soon.
Thanks again and have a good day.