Hi everyone, I connected R to Snowflake and ran the DQ dashboard. I'm curious whether the computation was done locally in R or pushed down to Snowflake's compute.
On test data with 1,000 patients and thread = 1, the DQ check took approximately 38 minutes to complete. My actual CDM data is much larger, though, with around 1,000,000 patients in the person table. What's the best approach for running the DQ check on a dataset that size?
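For context, my setup looks roughly like the sketch below (schema names, credentials, and the connection string are placeholders; this assumes the standard `DatabaseConnector::createConnectionDetails()` and `DataQualityDashboard::executeDqChecks()` interfaces, where `numThreads` controls how many checks run in parallel):

```r
library(DatabaseConnector)
library(DataQualityDashboard)

# Connection details for Snowflake (account, user, and password are placeholders)
connectionDetails <- createConnectionDetails(
  dbms = "snowflake",
  connectionString = "jdbc:snowflake://<account>.snowflakecomputing.com",
  user = "<user>",
  password = "<password>"
)

# Run the DQ checks; each check is issued as SQL against the database,
# so raising numThreads submits more checks to Snowflake concurrently
executeDqChecks(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "cdm_schema",
  resultsDatabaseSchema = "results_schema",
  cdmSourceName = "My CDM",
  numThreads = 1,  # this is the setting I used for the 38-minute test run
  outputFolder = "output"
)
```

Is increasing `numThreads` (and/or scaling up the Snowflake warehouse) the right lever here, or is there a better approach for a person table of this size?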