An error while running a query

Hello,

When I ran a query, this exception showed up:
“Projection query output result set on table(s): flights_2008_7M would contain 9223372036854775807 rows, which is more than the current system limit of 256000000”

Even though I have free space on disk and 102 GB free in main memory.

Is OmniSci limiting the number of rows?

Also, exactly how much space does each row take?

I really appreciate your help.

My gratitude,

Sama

Hi @missasma,

Could I ask how many rows are in the flights_2008_7m table?
We set hard limits on projections because the operation is expensive, but with just 7 million records, even if you try to project all the columns of the table, you should be able to run the query regardless of the memory you have on the machine.

As an example, I ran this query on my notebook, which has 32 GB of memory, without particular issues (though the query takes a long time to run, because the projection uses lots of columns):

omnisql> select * from flights_2008_7m;
...
7009728 rows returned.
Execution time: 21092 ms, Total time: 119800 ms

The space used by each row depends on the data types of its columns.
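As a rough sketch, you can estimate the per-row footprint by summing the storage width of each column type. The byte widths and the example schema below are assumptions based on common fixed-width encodings, not taken from your actual table, so check your own schema and encodings:

```python
# Rough per-row size estimate from column types.
# The byte widths below are assumptions for common fixed-width
# encodings (e.g. BIGINT = 8 bytes, dictionary-encoded TEXT stores
# a 4-byte dictionary id per row); verify against your schema.
TYPE_BYTES = {
    "SMALLINT": 2,
    "INTEGER": 4,
    "BIGINT": 8,
    "FLOAT": 4,
    "DOUBLE": 8,
    "TEXT ENCODING DICT(32)": 4,
}

def estimate_row_bytes(columns):
    """Sum the assumed storage width of each column in a schema."""
    return sum(TYPE_BYTES[col_type] for col_type in columns)

# Hypothetical mix of columns, not the real flights_2008_7m schema.
example_schema = ["INTEGER", "BIGINT", "DOUBLE", "TEXT ENCODING DICT(32)"]
print(estimate_row_bytes(example_schema))  # 4 + 8 + 8 + 4 = 24
```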

To get the average space each row takes on disk, you can try this:

omnisql> select count(*) from flights_2008_7m;
EXPR$0
7009728
1 rows returned.
Execution time: 18 ms, Total time: 20 ms
omnisql> show table details flights_2008_7m ;
table_id|table_name|column_count|is_sharded_table|shard_count|max_rows|fragment_size|max_rollback_epochs|min_epoch|max_epoch|min_epoch_floor|max_epoch_floor|metadata_file_count|total_metadata_file_size|total_metadata_page_count|total_free_metadata_page_count|data_file_count|total_data_file_size|total_data_page_count|total_free_data_page_count
82|flights_2008_7M|58|false|0|4611686018427387904|2000000|-1|1|1|0|0|1|16777216|4096|3868|3|1610612736|768|127

Then take total_data_file_size and divide it by the number of rows you got from count(*).

In this case, 1610612736 / 7009728, so around 229 bytes per row.
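The same arithmetic as a short Python sketch, using the numbers from the console output above:

```python
# Average on-disk bytes per row: total_data_file_size / row count,
# with the values from SHOW TABLE DETAILS and COUNT(*) above.
total_data_file_size = 1610612736
row_count = 7009728

bytes_per_row = total_data_file_size // row_count
print(bytes_per_row)  # 229
```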

Anyway, when you run a query, the in-memory footprint per row can differ from those 229 bytes, and it's likely that you will need more memory than that.

Please check the number of rows you have in the table, because that very big number in the error message looks like a bug.

Best,
Candido