Integrated JupyterLab

I’m running OmniSci on AWS, and pulling data from the database into JupyterLab is very slow. This is the precompiled AMI, with both JupyterLab and OmniSci running on the same instance, but there is some sort of bottleneck that is slowing things down. A SELECT on a two-column table scales linearly with row count, and 100,000 rows takes over 10 seconds. Running the same query via the SQL Editor takes a few milliseconds, as you might expect. Any ideas as to what might be the cause?


Can you supply a code example?

Something like this?

from pymapd import connect

con = connect(user="admin", password="HyperInteractive", host="", dbname="omnisci")
c = con.cursor()
c.execute("SELECT depdelay, arrdelay FROM flights LIMIT 100000")
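To see where the time actually goes, it helps to measure the execute and fetch phases separately rather than timing the whole cell. A small (hypothetical) helper like this works with any DB-API cursor, including the pymapd one above:

```python
import time

def timed_fetch(cursor, sql):
    """Run a query and report execute vs. fetch time separately,
    to see whether latency comes from the query or the data transfer."""
    t0 = time.perf_counter()
    cursor.execute(sql)            # server-side query execution
    t1 = time.perf_counter()
    rows = cursor.fetchall()       # row transfer and deserialization
    t2 = time.perf_counter()
    print(f"execute: {t1 - t0:.3f}s, fetch: {t2 - t1:.3f}s, rows: {len(rows)}")
    return rows
```

If the fetch phase dominates, the bottleneck is in moving the result set to the client rather than in the query itself.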

10,000 ms is really high, but keep in mind the SQL Editor is going to return just the first 1,000 records.
The first run can also be very slow, especially if you are using dictionary-encoded datatypes.


This is on AWS in JupyterLab:

con = omnisci_connect()
c = con.cursor()

%time c.execute('SELECT "ADDRID", "PLACEKEY" FROM placekey LIMIT 100000')

CPU times: user 9.04 s, sys: 6.23 ms, total: 9.05 s
Wall time: 9.28 s

Running the equivalent on the Mac version on my MacBook takes 11ms…

My guess is that it’s due to the connection between the containers being HTTP?




Yes, you are right. However, even the internal connection turns out to have a prolonged response time.

As a quick workaround, you can connect to the database directly in binary mode.
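A minimal sketch of what that connection could look like with pymapd, assuming the server’s default binary port 6274 and the stock admin credentials (adjust host, port, and credentials to your setup):

```python
def binary_connect_kwargs(host="localhost", user="admin",
                          password="HyperInteractive", dbname="omnisci"):
    """Connection parameters for a direct binary (Thrift over TCP) connection,
    bypassing the HTTP transport. 6274 is the default binary port."""
    return dict(user=user, password=password, host=host,
                dbname=dbname, port=6274, protocol="binary")

# With pymapd installed and the server reachable:
# from pymapd import connect
# con = connect(**binary_connect_kwargs())
```

Since JupyterLab and OmniSci run on the same instance here, `localhost` should work; the key change is `protocol="binary"`, which avoids serializing the result set over HTTP.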

This should fix the performance problem. I will ask internally if there is a more straightforward workaround and get back to you after the weekend.

