Data Loading and Manipulation in ColumnStore


ColumnStore supports several means of loading data:

  • cpimport is the fastest way to load data and allows a load to be directed at a specific Performance Module. It is usually the first choice for bulk loading (a minimal invocation is sketched after this list).
  • LOAD DATA INFILE provides another means of bulk inserting data (a basic example also follows this list).
    • By default, with autocommit on, it will internally stream the data to an instance of the cpimport process. This requires some memory overhead on the UM server, so it may be less reliable than cpimport for very large imports.
    • In transactional mode, row-level DML inserts are performed instead; these are significantly slower and also consume both binlog transaction files and ColumnStore VersionBuffer files.
  • DML, i.e. INSERT, UPDATE, and DELETE, provides row-level changes. ColumnStore is optimized for bulk modifications, so these operations are slower than they would be in, say, InnoDB.
    • Currently ColumnStore does not support operating as a replication slave target.
  • Bulk DML operations will in general perform better than multiple individual statements (a staging-table sketch follows this list).
    • INSERT INTO SELECT with autocommit behaves similarly to LOAD DATA INFILE in that it is internally mapped to cpimport for higher performance.
    • Bulk update operations based on a join with a small staging table can be relatively fast, especially if only a single column is updated.
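
As a minimal sketch of a cpimport invocation (the database name tpch, table name lineitem, file path, and '|' delimiter are all placeholder assumptions, not values from this page):

    # Load a pipe-delimited file into tpch.lineitem; -s sets the field delimiter.
    cpimport tpch lineitem /tmp/lineitem.tbl -s '|'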
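
A basic LOAD DATA INFILE example using the same placeholder table and file names; with autocommit on (the default) the statement is streamed to cpimport internally:

    -- Bulk load a pipe-delimited file; with autocommit on this is handed to cpimport.
    LOAD DATA INFILE '/tmp/lineitem.tbl'
    INTO TABLE lineitem
    FIELDS TERMINATED BY '|';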
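
The following sketch illustrates bulk DML driven from a small staging table; the tables orders and orders_staging and their columns are hypothetical:

    -- INSERT INTO SELECT with autocommit on is internally mapped to cpimport.
    INSERT INTO orders
    SELECT * FROM orders_staging;

    -- Join-based bulk update of a single column from the staging table.
    UPDATE orders o
    JOIN orders_staging s ON o.o_id = s.o_id
    SET o.o_status = s.o_status;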