The 32M row limit is an arbitrary number hard-coded into the database, so you can’t change this behavior.
Anyway, the runtime and the amount of resources required to fetch such a massive number of rows would be staggering, bringing the entire system to its knees.
What kind of workload are you running? I can’t imagine one that would require such a massive result set.
I would run multiple SQL batches, filtering (if possible) on the rowid logical column, but it’s going to take a very long time. Our database is tuned to process big datasets and return small result sets (after filtering and aggregation), like all columnar databases. At least the ones I’ve worked with aren’t stellar at returning big result sets (we’re working on it, of course).
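A rough sketch of what that rowid batching could look like, just to illustrate the idea. The table name, column names, batch size, and the assumption that rowid is a dense integer you can range-filter on are all hypothetical here; check how your database actually exposes rowid before using something like this:

```python
# Sketch: split one huge fetch into many range-filtered queries.
# "my_table", BATCH_SIZE, and dense integer rowids are assumptions.
BATCH_SIZE = 1_000_000

def batch_queries(total_rows, batch_size=BATCH_SIZE, table="my_table"):
    """Yield one SELECT per rowid range instead of a single giant fetch."""
    for start in range(0, total_rows, batch_size):
        end = start + batch_size
        yield (f"SELECT * FROM {table} "
               f"WHERE rowid >= {start} AND rowid < {end}")

# 32 million rows split into 1M-row batches -> 32 smaller queries
queries = list(batch_queries(32_000_000))
```

Each query stays well under the limit, so you can run them sequentially (or a few in parallel, if the system can take it) and stitch the results together client-side.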