Bulk Loading Data into a Columnstore Table

Bulk loading is the most performant way to move data into a columnstore index because it operates on batches of rows. Bulk loading fills rowgroups to maximum capacity and compresses them directly into the columnstore. Only the rows at the end of a load that don't meet the minimum of 102,400 rows per rowgroup go to the deltastore. For background on bulk load and minimal logging, see Fastest Data Loading using Bulk Load and Minimal Logging.
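As a minimal sketch of this behavior, a BULK INSERT with a batch size at the 1,048,576-row rowgroup maximum lets each full batch compress directly into the columnstore; the table and file names here are hypothetical:

    -- Hypothetical table and file path. BATCHSIZE is set to the rowgroup
    -- maximum, so each full batch of 1,048,576 rows compresses directly.
    -- Only a trailing batch under 102,400 rows would land in the deltastore.
    BULK INSERT dbo.FactSales
    FROM 'C:\load\sales.dat'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        BATCHSIZE = 1048576
    );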

Bulk loading has these built-in performance optimizations:
1) Parallel loads. You can have multiple concurrent bulk loads (bcp or BULK INSERT) that each load a separate data file. Unlike rowstore bulk loads into SQL Server, you don't need to specify TABLOCK, because each bulk import thread loads data exclusively into a separate rowgroup (compressed or delta) and takes an exclusive lock on it. Specifying TABLOCK forces an exclusive lock on the table, and you will not be able to import data in parallel. (A sketch of a parallel load follows this list.)
2) Minimal logging. A bulk load uses minimal logging on data that goes directly to compressed rowgroups. Any data that goes to a delta rowgroup is fully logged, which includes any batch smaller than 102,400 rows. With bulk loading, however, the goal is for most of the data to bypass delta rowgroups.
3) Locking optimization. When loading into a compressed rowgroup, an X lock is acquired on the rowgroup. When bulk loading into a delta rowgroup, an X lock is also acquired on the rowgroup, but SQL Server still takes PAGE/EXTENT locks, because the X rowgroup lock is not part of the locking hierarchy.
If you have a nonclustered B-tree index on a columnstore index, there is no locking or logging optimization for that index itself, but the optimizations on the clustered columnstore index described above still apply.
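As a hedged sketch of the parallel-load pattern from item 1, the two statements below would run from two separate sessions, each loading its own file into its own rowgroup without TABLOCK; the table and file names are assumptions:

    -- Session 1: loads its own file; no TABLOCK, so it gets its own rowgroup.
    BULK INSERT dbo.FactSales FROM 'C:\load\sales_1.dat'
    WITH (FIELDTERMINATOR = ',', BATCHSIZE = 1048576);

    -- Session 2, running concurrently: loads a second file into a separate rowgroup.
    BULK INSERT dbo.FactSales FROM 'C:\load\sales_2.dat'
    WITH (FIELDTERMINATOR = ',', BATCHSIZE = 1048576);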

Set these buffer properties of the data flow in SSIS to try to insert data directly into compressed rowgroups instead of into the delta store (a worked buffer-size calculation follows this list):

  • Set AutoAdjustBufferSize to True.
  • Set the DefaultBufferMaxRows property of the data flow to 1,048,576, which is the maximum number of rows for a columnstore rowgroup.
  • Set DefaultBufferSize to 100 MB (104,857,600 bytes), which is the maximum. If (estimated row size × maximum number of rows) is bigger than 100 MB, you'll end up with fewer rows in the rowgroup.
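To make the 100 MB constraint concrete, here is a small illustrative calculation; the 120-byte estimated row size is an assumption:

    -- Hypothetical: with an estimated row size of 120 bytes, a 100 MB buffer
    -- holds about 873,813 rows, so rowgroups would be trimmed below the
    -- 1,048,576-row maximum despite DefaultBufferMaxRows.
    DECLARE @EstimatedRowBytes int = 120;  -- assumed average row size
    SELECT 104857600 / @EstimatedRowBytes AS MaxRowsPerBuffer;  -- ~873,813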

Also configure these settings in the OLE DB Destination in SSIS:

  • Fast Load, obviously
  • Rows Per Batch: 1,048,576. This sets the batch size property of the INSERT BULK command.
  • Maximum Insert Commit Size: 0. Although the default is around 2 billion, it can still force intermediate commits. In my case, I had a commit after every 150,000 rows, so I ended up with rowgroup sizes of 150,000 rows. They were directly compressed, but a tad too small. Setting this property to 0 forces a single commit at the end, so you finally get your data loaded directly into the CCI with a rowgroup size of 1,048,576.
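If a load does leave you with undersized compressed rowgroups like the 150,000-row case above, a hedged cleanup option (the index and table names are assumptions) is to reorganize the columnstore index, which in SQL Server 2016 can merge small compressed rowgroups:

    -- Hypothetical index/table names. REORGANIZE merges undersized compressed
    -- rowgroups (SQL Server 2016+); COMPRESS_ALL_ROW_GROUPS additionally forces
    -- any remaining delta rowgroups into the columnstore.
    ALTER INDEX cci_FactSales ON dbo.FactSales
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);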

Refer to this Microsoft post for more details: https://blogs.msdn.microsoft.com/sqlcat/2016/02/29/sql-server-2016-ssis-data-flow-buffer-auto-sizing-capability-benefits-data-loading-on-clustered-columnstore-tables/

The size of a rowgroup can be influenced by the following (the query after this list shows how to check the size of each rowgroup and why it was trimmed):

  • batch / commit size
  • dictionary size
  • memory pressure
  • parallelism
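As a sketch for verifying the outcome (the table name is an assumption), sys.dm_db_column_store_row_group_physical_stats reports each rowgroup's state, row count, and the reason it was trimmed below the 1,048,576-row maximum:

    -- Per-rowgroup state (COMPRESSED vs. OPEN/CLOSED delta), row count, and
    -- trim reason (e.g. BULKLOAD, MEMORY_LIMITATION, DICTIONARY_SIZE).
    SELECT row_group_id,
           state_desc,
           total_rows,
           trim_reason_desc
    FROM sys.dm_db_column_store_row_group_physical_stats
    WHERE object_id = OBJECT_ID('dbo.FactSales')  -- hypothetical table
    ORDER BY row_group_id;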