clickhouse

DB::Exception: Too many partitions for single INSERT block (more than 100). The limit is controlled by 'max_partitions_per_insert_block' setting


  1. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0c86b5 in /usr/bin/clickhouse
  2. ? @ 0x8941ed0 in /usr/bin/clickhouse
  3. DB::MergeTreeDataWriter::splitBlockIntoParts(DB::Block const&, unsigned long, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::ChunkOffsets>) @ 0x144380e3 in /usr/bin/clickhouse
  4. DB::ReplicatedMergeTreeSinkImpl::consume(DB::Chunk) @ 0x14548f63 in /usr/bin/clickhouse
  5. DB::SinkToStorage::onConsume(DB::Chunk) @ 0x14a14422 in /usr/bin/clickhouse
  6. ? @ 0x14944feb in /usr/bin/clickhouse
  7. ? @ 0x14944d39 in /usr/bin/clickhouse
  8. DB::ExceptionKeepingTransform::work() @ 0x1494461f in /usr/bin/clickhouse
  9. DB::ExecutionThreadContext::executeTask() @ 0x14766aec in /usr/bin/clickhouse
  10. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic*) @ 0x1475bcfb in /usr/bin/clickhouse
  11. DB::PipelineExecutor::executeStep(std::__1::atomic*) @ 0x1475b0e8 in /usr/bin/clickhouse
  12. DB::TCPHandler::processInsertQuery() @ 0x1470c9a4 in /usr/bin/clickhouse
  13. DB::TCPHandler::runImpl() @ 0x14704022 in /usr/bin/clickhouse
  14. DB::TCPHandler::run() @ 0x14719259 in /usr/bin/clickhouse
  15. Poco::Net::TCPServerConnection::start() @ 0x1761d1b4 in /usr/bin/clickhouse
  16. Poco::Net::TCPServerDispatcher::run() @ 0x1761e3db in /usr/bin/clickhouse
  17. Poco::PooledThread::run() @ 0x177a5767 in /usr/bin/clickhouse
  18. Poco::ThreadImpl::runnableEntry(void*) @ 0x177a319d in /usr/bin/clickhouse
  19. ? @ 0x7f92266d6b43 in ?
  20. ? @ 0x7f9226768a00 in ? : While sending /var/lib/clickhouse/store/5bb/5bb957f7-65f5-47e1-a29f-caa9fae3edae/shard3_replica2/20.bin. (TOO_MANY_PARTS) (version 23.2.4.12 (official build)) last_exception_time: 2023-05-23 22:17:51

I am seeing the above error in the database. After going through multiple docs, I understood this is happening because a single INSERT block touches too many partitions. So I cancelled the query, and a new query is now executing.

However, the old insert is still being retried in the background and keeps throwing the exception. How can I get rid of this old part?

Truncating the table removes /var/lib/clickhouse/store/5bb/5bb957f7-65f5-47e1-a29f-caa9fae3edae/shard3_replica2/20.bin and the exception goes away. But truncating is not possible in prod.

So I want to know: is there a way to delete /var/lib/clickhouse/store/5bb/5bb957f7-65f5-47e1-a29f-caa9fae3edae/shard3_replica2/20.bin without truncating the table?
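For context: the "While sending ... 20.bin" part of the exception suggests this is a pending block of a Distributed table that ClickHouse keeps retrying. If that is the case, the stuck file should be visible in the system.distribution_queue table; a diagnostic query might look like this (the WHERE clause is just one way to narrow it down):

```sql
-- Inspect pending background sends of Distributed tables.
-- A stuck block shows up with a non-zero error_count and the
-- same exception text that appears in the server log.
SELECT database, table, data_path, error_count, last_exception
FROM system.distribution_queue
WHERE error_count > 0;
```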


Solution

  • To fix a single accidental insert, you can temporarily set max_partitions_per_insert_block = 0 (I actually always set max_partitions_per_insert_block = 0 on all my systems during setup).

    Or you can simply remove that N.bin file: rm /var/lib/clickhouse/store/5bb/5bb9...edae/shard...ca2/20.bin

    You don't need to truncate anything.
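The first option above can be sketched like this (whether the session-level SET reaches the background sender depends on your setup; a settings profile is the more reliable place for it):

```sql
-- Lift the per-insert partition limit; 0 disables the check entirely.
-- For the current session only:
SET max_partitions_per_insert_block = 0;

-- To make it permanent, put it in a settings profile instead
-- (e.g. under <profiles><default> in users.xml), so that
-- background distributed sends pick it up as well.
```

The second option is done at the OS level: rm the 20.bin file under the shard3_replica2 directory, after which the background queue has nothing left to retry.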