Sure. Here’s a quick primer on working with polys as things stand today.
I’ll preface this by saying that our current polygon support is for backend polygon rendering only. We do not yet support any geospatial querying against polygon tables (such as PostGIS’s ST_Contains, ST_Distance, etc.). That said, you can run SQL queries against polygon tables, including joins, to produce choropleths and the like.
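For example, something along these lines (the tables and columns here are made up purely for illustration; I’ve selected the poly table’s rowid since that’s what the backend renderer uses to look up geometry):

```sql
-- Hypothetical choropleth query: average fare per zipcode polygon.
-- "trips" is an imaginary fact table; "zipcodes" is an imported poly table.
-- ZCTA5CE10 is the zipcode attribute column in the census shapefile linked
-- at the end of this post; verify the column name after your own import.
SELECT zipcodes.rowid, AVG(trips.fare_amount) AS avg_fare
FROM trips
JOIN zipcodes ON trips.pickup_zipcode = zipcodes.ZCTA5CE10
GROUP BY zipcodes.rowid;
```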
We are currently working on formal geospatial query support, which should be released sometime early next year and may change some of the details below.
You can import KML, GeoJSON, or shapefiles. If you want to import data from ArcGIS, you’ll need to export it into one of these formats.
Importing poly data can be done through two interfaces: either via Immerse, or via mapdql (our command-line interface) using its geo import command (there’s a sketch of this just after the shapefile note below).
Note that when importing shapefiles (.shp), the accompanying .dbf files are also required and need to be in the same directory as the .shp. You only need to supply the .shp file via the import commands, though.
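Here’s roughly what the mapdql route looks like. I’m writing the geo import command (\copygeo) from memory, so double-check it against \h in your build; the database, credentials, path, and table name below are all placeholders:

```
$ mapdql mydb -u mapd -p HyperInteractive
mapdql> \copygeo /data/my_polys.shp my_polys
```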
We only support importing Polygon and MultiPolygon types currently. Trying to import anything else (Point, MultiPoint, LineString, etc.) will result in an error.
Vertex data must be in WGS84 lat/lon coordinates.
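If your source data is in some other projection, a standard tool like GDAL’s ogr2ogr (not part of our stack, but widely available) can reproject it to WGS84 before import:

```
# Reproject a shapefile to WGS84 (EPSG:4326); the destination file comes first.
ogr2ogr -t_srs EPSG:4326 my_polys_wgs84.shp my_polys.shp
```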
We currently do not support holes. Polygons with holes will import without error, but the holes will be filled in when rendered.
We currently only support simple polygons. Trying to import complex polygons (i.e. polygons with self-intersections) will result in an error. If exporting from ArcGIS, be sure to remove any self-intersections first (I believe ArcGIS has tools for this).
At import, 4 columns will be added internally for rendering (among them mapd_geo_coords and mapd_geo_polydrawinfo). Note these are added for rendering purposes only and should not generally be queried against. The mapd_geo_coords column contains the vertex data converted from WGS84 to Mercator.
Any other columns in the imported poly file will also be added, and these can be used to join against.
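So a quick sanity check after import might look like this ("my_polys" being whatever you named your table):

```sql
-- Eyeball the attribute columns carried over from the source file.
SELECT * FROM my_polys LIMIT 5;
```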
At present, Immerse does not have support for backend-rendered polygons. The only way to render polygons currently is to run a render_vega request against the server; see our Vega documentation for some info on what a vega that renders polys should look like.
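To give you a feel for it, here’s a rough sketch of a vega for a zipcode choropleth. I’m writing this from memory, so treat the exact property and format names as approximate; the SQL (reusing the hypothetical query from the top of this post), the scale domains (rough continental-US Mercator bounds), and the colors are all placeholders:

```
{
  "width": 1024, "height": 1024,
  "data": [{
    "name": "zips",
    "format": "polys",
    "sql": "SELECT zipcodes.rowid, AVG(trips.fare_amount) AS avg_fare FROM trips JOIN zipcodes ON trips.pickup_zipcode = zipcodes.ZCTA5CE10 GROUP BY zipcodes.rowid"
  }],
  "scales": [
    {"name": "x", "type": "linear", "domain": [-13884029, -7453304], "range": "width"},
    {"name": "y", "type": "linear", "domain": [2817774, 6338219], "range": "height"},
    {"name": "fill", "type": "linear", "domain": [0, 100], "range": ["blue", "red"]}
  ],
  "marks": [{
    "type": "polys",
    "from": {"data": "zips"},
    "properties": {
      "x": {"scale": "x", "field": "x"},
      "y": {"scale": "y", "field": "y"},
      "fillColor": {"scale": "fill", "field": "avg_fare"},
      "strokeColor": "white",
      "strokeWidth": 0.5
    }
  }]
}
```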
- The biggest caveat is table size. If you try to import a very large polygon file, you’re likely to run into performance problems, so try to work within modest polygon counts (i.e. try not to import polygon files with millions of polygons).
Lastly, if you want to play with some data, here’s a good example polygon shapefile of US zipcodes: https://www.census.gov/geo/maps-data/data/cbf/cbf_zcta.html
This shapefile is of reasonable size (approx 35K polygons) and imports fine.
Hopefully that’s enough to get you started. If you start playing with this and have more questions, let us know.
Note that all of the above caveats should be addressed when our formal geo support rolls out, so stay tuned for that.