I have a .bak file (a Microsoft SQL Server backup) on an FTP server that I want to load into MapD Core. The file is roughly 400 GB, so I want to plan the transfer carefully.
As of now my plan is:
1. Connect to the FTP server from an EC2 instance running SQL Server
2. Transfer the .bak file, zip-compressed, from the FTP server to the SQL Server instance
3. Uncompress the zip file to recover the .bak file
4. Restore the database in SQL Server
5. Export the database (all tables) from SQL Server to MapD Core
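For what it's worth, steps 1-4 can be sketched roughly as below. This is just an outline under assumptions: the FTP host, paths, credentials, and the logical file names in the RESTORE are all placeholders, not real values from my setup.

```shell
# 1-2. Pull the compressed backup. curl can resume (-C -) if the link
# drops, which matters for a ~400 GB transfer.
curl -C - -u "$FTP_USER:$FTP_PASS" \
  "ftp://ftp.example.com/backups/mydb.zip" -o /data/mydb.zip

# 3. Uncompress to recover the .bak file.
unzip /data/mydb.zip -d /data

# 4. List the logical file names inside the backup first, then restore.
#    (mydb, mydb_Data, mydb_Log and the target paths are placeholders.)
sqlcmd -S localhost -U sa -P "$SA_PASS" \
  -Q "RESTORE FILELISTONLY FROM DISK = N'/data/mydb.bak'"

sqlcmd -S localhost -U sa -P "$SA_PASS" \
  -Q "RESTORE DATABASE mydb FROM DISK = N'/data/mydb.bak'
      WITH MOVE 'mydb_Data' TO '/sqldata/mydb.mdf',
           MOVE 'mydb_Log'  TO '/sqldata/mydb.ldf'"
```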
Advice on steps 1-4 is more than welcome, but the part I'm really looking for help with is step 5:
- What would be a good format to export the database from SQL Server so the transfer into MapD Core is as seamless as possible?
- I was also planning to put the exported files, in whatever format is recommended in question 1, on an EBS volume attached to the p2.xlarge EC2 instance running MapD, and have MapD Core read from there. Is reading from EBS faster than reading from S3?
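In case it helps frame question 1: MapD Core's bulk loader ingests delimited text via COPY FROM, so CSV/TSV seems like a natural interchange format. A sketch of one table, assuming placeholder database/table names and a default mapdql location; one caveat is that bcp in character mode does not quote fields, so a tab delimiter is often safer than a comma if text columns may contain commas.

```shell
# Export one table to delimited text with bcp (ships with SQL Server).
# Database, table, paths, and credentials are placeholders.
bcp "SELECT * FROM mydb.dbo.trips" queryout /data/trips.tsv \
  -c -t '\t' -S localhost -U sa -P "$SA_PASS"

# Load into MapD Core on the p2 instance. The target table must already
# exist with matching columns; the mapdql path is an assumption.
echo "COPY trips FROM '/data/trips.tsv' \
      WITH (delimiter='\t', header='false');" | \
  /opt/mapd/bin/mapdql mydb -u mapd -p "$MAPD_PASS"
```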
Any advice on this process would be greatly appreciated.
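For the EBS option, the pattern I had in mind is attaching and mounting a volume on the MapD instance so the loader reads from a local filesystem; the volume ID, instance ID, device name, and mount point below are placeholders.

```shell
# Attach an existing EBS volume to the MapD instance (IDs are placeholders).
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/xvdf

# First use only: create a filesystem (this erases the volume), then mount.
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /mapd-import
sudo mount /dev/xvdf /mapd-import
```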
I am using an Amazon m4.xlarge for SQL Server and a p2.xlarge running the latest MapD image from the AWS Marketplace, with S3 and EBS storage available if needed.