Metrics

impala-metrics

Name Value Description
catalog.server.client-cache.clients-in-use 0 The number of clients currently in use by the Catalog Server client cache.
catalog.server.client-cache.total-clients 1 The total number of clients in the Catalog Server client cache.
external-data-source.class-cache.hits 0 Number of cache hits in the External Data Source Class Cache.
external-data-source.class-cache.misses 0 Number of cache misses in the External Data Source Class Cache.
impala-server.backends.client-cache.clients-in-use 0 The number of active Impala Backend clients. These clients are for communication with other Impala Daemons.
impala-server.backends.client-cache.total-clients 3 The total number of Impala Backend clients in this Impala Daemon's client cache. These clients are for communication with other Impala Daemons.
impala.thrift-server.backend.connection-setup-queue-size 0 The number of connections to the Impala Backend Server that have been accepted and are waiting to be set up.
impala.thrift-server.backend.connections-in-use 1 The number of active Impala Backend client connections to this Impala Daemon.
impala.thrift-server.backend.timedout-cnxn-requests 0 The number of connection requests to the Impala Backend Server that timed out while waiting to be set up.
impala.thrift-server.backend.total-connections 1 The total number of Impala Backend client connections made to this Impala Daemon over its lifetime.
impala.thrift-server.beeswax-frontend.connection-setup-queue-size 0 The number of Beeswax API connections to this Impala Daemon that have been accepted and are waiting to be set up.
impala.thrift-server.beeswax-frontend.connections-in-use 0 The number of active Beeswax API connections to this Impala Daemon.
impala.thrift-server.beeswax-frontend.timedout-cnxn-requests 0 The number of Beeswax API connection requests to this Impala Daemon that timed out while waiting to be set up.
impala.thrift-server.beeswax-frontend.total-connections 13.98K The total number of Beeswax API connections made to this Impala Daemon over its lifetime.
impala.thrift-server.hiveserver2-frontend.connection-setup-queue-size 0 The number of HiveServer2 API connections to this Impala Daemon that have been accepted and are waiting to be set up.
impala.thrift-server.hiveserver2-frontend.connections-in-use 2 The number of active HiveServer2 API connections to this Impala Daemon.
impala.thrift-server.hiveserver2-frontend.timedout-cnxn-requests 0 The number of HiveServer2 API connection requests to this Impala Daemon that timed out while waiting to be set up.
impala.thrift-server.hiveserver2-frontend.total-connections 7.30K The total number of HiveServer2 API connections made to this Impala Daemon over its lifetime.
kudu-client.version kudu 1.10.0-cdh6.3.2 (rev 09cea83ce09c429de2c5c9776908db0000857495) A version string identifying the Kudu client.
mem-tracker.process.bytes-freed-by-last-gc -1.00 B The amount of memory freed by the last memory tracker garbage collection.
mem-tracker.process.bytes-over-limit -1.00 B The amount of memory by which the process was over its memory limit the last time the memory limit was encountered.
mem-tracker.process.limit 49.41 GB The process memory tracker limit.
mem-tracker.process.num-gcs 0 The total number of garbage collections performed by the memory tracker over the life of the process.
process-start-time 2024-09-12 11:37:41.386630000 The local start time of the process.
request-pool-service.resolve-pool-duration-ms Last (of 36336): 0. Min: 0, max: 48, avg: 0.0976717 Time (ms) spent resolving request pools.
rpc-method.backend.ImpalaInternalService.ExecQueryFInstances.call_duration Count: 3370, min / max: 0 / 32ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 1ms, 90th %-ile: 1ms, 95th %-ile: 1ms, 99.9th %-ile: 18ms
rpc-method.beeswax.BeeswaxService.close.call_duration Count: 152, min / max: 2ms / 9ms, 25th %-ile: 3ms, 50th %-ile: 4ms, 75th %-ile: 6ms, 90th %-ile: 7ms, 95th %-ile: 7ms, 99.9th %-ile: 9ms
rpc-method.beeswax.BeeswaxService.get_default_configuration.call_duration Count: 5580, min / max: 0 / 2ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.beeswax.BeeswaxService.get_log.call_duration Count: 5580, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.beeswax.BeeswaxService.get_state.call_duration Count: 39400, min / max: 0 / 2ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.beeswax.BeeswaxService.query.call_duration Count: 5580, min / max: 3ms / 74ms, 25th %-ile: 4ms, 50th %-ile: 4ms, 75th %-ile: 5ms, 90th %-ile: 5ms, 95th %-ile: 5ms, 99.9th %-ile: 26ms
rpc-method.beeswax.ImpalaService.CloseInsert.call_duration Count: 5428, min / max: 2ms / 62ms, 25th %-ile: 3ms, 50th %-ile: 4ms, 75th %-ile: 6ms, 90th %-ile: 7ms, 95th %-ile: 7ms, 99.9th %-ile: 11ms
rpc-method.beeswax.ImpalaService.GetRuntimeProfile.call_duration Count: 5428, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
rpc-method.beeswax.ImpalaService.PingImpalaService.call_duration Count: 5580, min / max: 0 / 19ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 10ms
rpc-method.hs2.TCLIService.CloseOperation.call_duration Count: 2277, min / max: 0 / 23ms, 25th %-ile: 1ms, 50th %-ile: 1ms, 75th %-ile: 1ms, 90th %-ile: 3ms, 95th %-ile: 5ms, 99.9th %-ile: 12ms
rpc-method.hs2.TCLIService.CloseSession.call_duration Count: 2915, min / max: 0 / 4ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.ExecuteStatement.call_duration Count: 2300, min / max: 1ms / 20s213ms, 25th %-ile: 2ms, 50th %-ile: 3ms, 75th %-ile: 15ms, 90th %-ile: 80ms, 95th %-ile: 268ms, 99.9th %-ile: 20s112ms
rpc-method.hs2.TCLIService.FetchResults.call_duration Count: 3198, min / max: 0 / 31ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 1ms, 95th %-ile: 1ms, 99.9th %-ile: 15ms
rpc-method.hs2.TCLIService.GetInfo.call_duration Count: 2066, min / max: 0 / 31ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 17ms
rpc-method.hs2.TCLIService.GetOperationStatus.call_duration Count: 2375, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.GetResultSetMetadata.call_duration Count: 1760, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.OpenSession.call_duration Count: 3099, min / max: 0 / 111ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 1ms, 95th %-ile: 1ms, 99.9th %-ile: 93ms
statestore-subscriber.registration-id 734892cd1f859425:9b09443282647cb4 The most recent registration ID for this subscriber with the statestore. Set to 'N/A' if no registration has been completed.
statestore-subscriber.statestore.client-cache.clients-in-use 0 The number of active StateStore subscriber clients in this Impala Daemon's client cache. These clients are for communication from this role to the StateStore.
statestore-subscriber.statestore.client-cache.total-clients 1 The total number of StateStore subscriber clients in this Impala Daemon's client cache. These clients are for communication from this role to the StateStore.
thread-manager.running-threads 28 The number of running threads in this process.
thread-manager.total-threads-created 79818 Threads created over the lifetime of the process.
tmp-file-mgr.active-scratch-dirs 2 The number of active scratch directories for spilling to disk.
tmp-file-mgr.active-scratch-dirs.list [/data1/impala/impalad/impala-scratch, /data2/impala/impalad/impala-scratch] The set of all active scratch directories for spilling to disk.
tzdata-path /usr/share/zoneinfo Path to the time-zone database.
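
These daemon-level metrics are served by the impalad embedded web server (default port 25000). Recent Impala releases render the debug pages as JSON when ?json is appended to the URL, which makes them easy to scrape. The Python sketch below assumes that endpoint and a nested metric_group layout with metrics and child_groups keys; both are version-dependent assumptions, and impalad-host is a placeholder.

    # Sketch: scrape an impalad's metrics page as JSON and flatten it
    # into a name -> value dict. The ?json endpoint and the
    # metric_group/child_groups layout are assumptions; verify against
    # your deployment.
    import json
    from urllib.request import urlopen

    def fetch_metrics(host="impalad-host", port=25000):
        with urlopen(f"http://{host}:{port}/metrics?json") as resp:
            return json.load(resp)

    def flatten(group, out=None):
        # Collect name -> value for this group and all child groups.
        out = {} if out is None else out
        for metric in group.get("metrics", []):
            out[metric.get("name")] = metric.get("value", metric.get("human_readable"))
        for child in group.get("child_groups", []):
            flatten(child, out)
        return out

    metrics = flatten(fetch_metrics()["metric_group"])
    print(metrics.get("impala-server.num-queries"))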

admission-controller

Name Value Description
admission-controller.agg-mem-reserved.root.admin 0 Resource Pool root.admin Aggregate Mem Reserved
admission-controller.agg-mem-reserved.root.default 0 Resource Pool root.default Aggregate Mem Reserved
admission-controller.agg-mem-reserved.root.root 0 Resource Pool root.root Aggregate Mem Reserved
admission-controller.agg-num-queued.root.admin 0 Resource Pool root.admin Aggregate Queue Size
admission-controller.agg-num-queued.root.default 0 Resource Pool root.default Aggregate Queue Size
admission-controller.agg-num-queued.root.root 0 Resource Pool root.root Aggregate Queue Size
admission-controller.agg-num-running.root.admin 0 Resource Pool root.admin Aggregate Num Running
admission-controller.agg-num-running.root.default 0 Resource Pool root.default Aggregate Num Running
admission-controller.agg-num-running.root.root 0 Resource Pool root.root Aggregate Num Running
admission-controller.local-backend-mem-reserved.root.admin 0 Resource Pool root.admin Mem Reserved by the backend coordinator
admission-controller.local-backend-mem-reserved.root.default 0 Resource Pool root.default Mem Reserved by the backend coordinator
admission-controller.local-backend-mem-reserved.root.root 0 Resource Pool root.root Mem Reserved by the backend coordinator
admission-controller.local-backend-mem-usage.root.admin 0 Resource Pool root.admin Coordinator Backend Mem Usage
admission-controller.local-backend-mem-usage.root.default 0 Resource Pool root.default Coordinator Backend Mem Usage
admission-controller.local-backend-mem-usage.root.root 0 Resource Pool root.root Coordinator Backend Mem Usage
admission-controller.local-mem-admitted.root.admin 0 Resource Pool root.admin Local Mem Admitted
admission-controller.local-mem-admitted.root.default 0 Resource Pool root.default Local Mem Admitted
admission-controller.local-mem-admitted.root.root 0 Resource Pool root.root Local Mem Admitted
admission-controller.local-num-admitted-running.root.admin 0 Resource Pool root.admin Coordinator Backend Num Running
admission-controller.local-num-admitted-running.root.default 0 Resource Pool root.default Coordinator Backend Num Running
admission-controller.local-num-admitted-running.root.root 0 Resource Pool root.root Coordinator Backend Num Running
admission-controller.local-num-queued.root.admin 0 Resource Pool root.admin Queue Size on the coordinator
admission-controller.local-num-queued.root.default 0 Resource Pool root.default Queue Size on the coordinator
admission-controller.local-num-queued.root.root 0 Resource Pool root.root Queue Size on the coordinator
admission-controller.pool-clamp-mem-limit-query-option.root.admin false If false, the mem_limit query option will not be bounded by the max/min query mem limits specified for the pool.
admission-controller.pool-clamp-mem-limit-query-option.root.default true If false, the mem_limit query option will not be bounded by the max/min query mem limits specified for the pool.
admission-controller.pool-clamp-mem-limit-query-option.root.root true If false, the mem_limit query option will not be bounded by the max/min query mem limits specified for the pool.
admission-controller.pool-max-mem-resources.root.admin 0 Resource Pool root.admin Configured Max Mem Resources
admission-controller.pool-max-mem-resources.root.default -1 Resource Pool root.default Configured Max Mem Resources
admission-controller.pool-max-mem-resources.root.root -1 Resource Pool root.root Configured Max Mem Resources
admission-controller.pool-max-query-mem-limit.root.admin 0 Resource Pool root.admin Max Query Memory Limit
admission-controller.pool-max-query-mem-limit.root.default 0 Resource Pool root.default Max Query Memory Limit
admission-controller.pool-max-query-mem-limit.root.root 0 Resource Pool root.root Max Query Memory Limit
admission-controller.pool-max-queued.root.admin 0 Resource Pool root.admin Configured Max Queued
admission-controller.pool-max-queued.root.default 200 Resource Pool root.default Configured Max Queued
admission-controller.pool-max-queued.root.root 200 Resource Pool root.root Configured Max Queued
admission-controller.pool-max-requests.root.admin 0 Resource Pool root.admin Configured Max Requests
admission-controller.pool-max-requests.root.default -1 Resource Pool root.default Configured Max Requests
admission-controller.pool-max-requests.root.root -1 Resource Pool root.root Configured Max Requests
admission-controller.pool-min-query-mem-limit.root.admin 0 Resource Pool root.admin Min Query Memory Limit
admission-controller.pool-min-query-mem-limit.root.default 0 Resource Pool root.default Min Query Memory Limit
admission-controller.pool-min-query-mem-limit.root.root 0 Resource Pool root.root Min Query Memory Limit
admission-controller.time-in-queue-ms.root.admin 0 Resource Pool root.admin Time in Queue
admission-controller.time-in-queue-ms.root.default 0 Resource Pool root.default Time in Queue
admission-controller.time-in-queue-ms.root.root 0 Resource Pool root.root Time in Queue
admission-controller.total-admitted.root.admin 0 Total number of requests admitted to pool root.admin
admission-controller.total-admitted.root.default 669 Total number of requests admitted to pool root.default
admission-controller.total-admitted.root.root 3.60K Total number of requests admitted to pool root.root
admission-controller.total-dequeued.root.admin 0 Total number of requests dequeued in pool root.admin
admission-controller.total-dequeued.root.default 0 Total number of requests dequeued in pool root.default
admission-controller.total-dequeued.root.root 0 Total number of requests dequeued in pool root.root
admission-controller.total-queued.root.admin 0 Total number of requests queued in pool root.admin
admission-controller.total-queued.root.default 0 Total number of requests queued in pool root.default
admission-controller.total-queued.root.root 0 Total number of requests queued in pool root.root
admission-controller.total-rejected.root.admin 0 Total number of requests rejected in pool root.admin
admission-controller.total-rejected.root.default 0 Total number of requests rejected in pool root.default
admission-controller.total-rejected.root.root 0 Total number of requests rejected in pool root.root
admission-controller.total-released.root.admin 0 Total number of requests that have completed and released resources in pool root.admin
admission-controller.total-released.root.default 669 Total number of requests that have completed and released resources in pool root.default
admission-controller.total-released.root.root 3.60K Total number of requests that have completed and released resources in pool root.root
admission-controller.total-timed-out.root.admin 0 Total number of requests that timed out while queued in pool root.admin
admission-controller.total-timed-out.root.default 0 Total number of requests that timed out while queued in pool root.default
admission-controller.total-timed-out.root.root 0 Total number of requests that timed out while queued in pool root.root
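
Nonzero queue sizes, rejections, or admission timeouts in the per-pool counters above usually mean a pool's max-requests or memory limits are being hit. A minimal check over a flattened metrics dict (such as the one produced by the scraper sketch earlier) might look like the following; the pool name mirrors this cluster's root.default pool, and the sample values come from the table.

    # Sketch: surface admission-control pressure for one pool.
    def check_pool(metrics, pool):
        queued = metrics[f"admission-controller.agg-num-queued.{pool}"]
        rejected = metrics[f"admission-controller.total-rejected.{pool}"]
        timed_out = metrics[f"admission-controller.total-timed-out.{pool}"]
        if queued or rejected or timed_out:
            print(f"{pool}: queued={queued} rejected={rejected} timed-out={timed_out}")
        else:
            print(f"{pool}: no admission pressure")

    sample = {
        "admission-controller.agg-num-queued.root.default": 0,
        "admission-controller.total-rejected.root.default": 0,
        "admission-controller.total-timed-out.root.default": 0,
    }
    check_pool(sample, "root.default")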

buffer-pool

Name Value Description
buffer-pool.arena-0.allocated-buffer-sizes Count: 367, min / max: 8.00 KB / 128.00 KB, 25th %-ile: 8.00 KB, 50th %-ile: 8.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 128.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-0.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-0.direct-alloc-count 23.51K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-0.local-arena-free-buffer-hits 8.56K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-0.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-0.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-0.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-0.system-alloc-time 58.002ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-1.allocated-buffer-sizes Count: 550, min / max: 8.00 KB / 128.00 KB, 25th %-ile: 64.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 128.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-1.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-1.direct-alloc-count 35.21K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-1.local-arena-free-buffer-hits 4.15K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-1.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-1.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-1.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-1.system-alloc-time 114.004ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-2.allocated-buffer-sizes Count: 354, min / max: 8.00 KB / 128.00 KB, 25th %-ile: 8.00 KB, 50th %-ile: 8.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 128.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-2.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-2.direct-alloc-count 22.71K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-2.local-arena-free-buffer-hits 8.04K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-2.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-2.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-2.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-2.system-alloc-time 37.001ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-3.allocated-buffer-sizes Count: 555, min / max: 8.00 KB / 128.00 KB, 25th %-ile: 64.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 128.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-3.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-3.direct-alloc-count 35.55K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-3.local-arena-free-buffer-hits 4.17K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-3.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-3.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-3.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-3.system-alloc-time 120.004ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-4.allocated-buffer-sizes Count: 353, min / max: 8.00 KB / 64.00 KB, 25th %-ile: 8.00 KB, 50th %-ile: 8.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 64.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-4.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-4.direct-alloc-count 22.60K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-4.local-arena-free-buffer-hits 8.54K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-4.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-4.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-4.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-4.system-alloc-time 39.001ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-5.allocated-buffer-sizes Count: 530, min / max: 8.00 KB / 128.00 KB, 25th %-ile: 64.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 128.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-5.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-5.direct-alloc-count 33.97K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-5.local-arena-free-buffer-hits 3.31K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-5.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-5.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-5.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-5.system-alloc-time 121.004ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-6.allocated-buffer-sizes Count: 347, min / max: 8.00 KB / 64.00 KB, 25th %-ile: 8.00 KB, 50th %-ile: 8.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 64.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-6.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-6.direct-alloc-count 22.22K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-6.local-arena-free-buffer-hits 8.34K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-6.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-6.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-6.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-6.system-alloc-time 24.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-7.allocated-buffer-sizes Count: 530, min / max: 8.00 KB / 128.00 KB, 25th %-ile: 64.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 64.00 KB, 99.9th %-ile: 128.00 KB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-7.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-7.direct-alloc-count 33.98K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-7.local-arena-free-buffer-hits 3.79K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-7.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-7.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-7.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-7.system-alloc-time 110.003ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.clean-page-bytes 0 Total bytes of clean page memory cached in the buffer pool.
buffer-pool.clean-pages 0 Total number of clean pages cached in the buffer pool.
buffer-pool.clean-pages-limit 4.20 GB Limit on the total bytes of clean pages cached in the buffer pool.
buffer-pool.free-buffer-bytes 0 Total bytes of free buffer memory cached in the buffer pool.
buffer-pool.free-buffers 0 Total number of free buffers cached in the buffer pool.
buffer-pool.limit 42.00 GB Maximum allowed bytes allocated by the buffer pool.
buffer-pool.reserved 0 Total bytes of buffers reserved by Impala subsystems.
buffer-pool.system-allocated 0 Total buffer memory currently allocated by the buffer pool.
buffer-pool.unused-reservation-bytes 0 Total bytes of buffer reservations by Impala subsystems that are currently unused.
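
The per-arena counters allow a rough estimate of how often the buffer pool recycled a free buffer instead of calling into the system allocator. A worked example with the arena-0 figures above (8.56K local free-buffer hits against 23.51K direct allocations):

    # Worked example: buffer recycle rate for arena 0, using the rounded
    # counter values shown in the table.
    direct_allocs = 23_510   # buffer-pool.arena-0.direct-alloc-count
    recycled = 8_560         # buffer-pool.arena-0.local-arena-free-buffer-hits
    print(f"arena-0 recycle rate: {recycled / (recycled + direct_allocs):.1%}")  # ~26.7%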

datastream-manager

Name Value Description
senders-blocked-on-recvr-creation 0 Number of senders currently waiting for the receiving fragment to initialize.
total-senders-blocked-on-recvr-creation 0 Total number of senders that have been blocked waiting for the receiving fragment to initialize.
total-senders-timedout-waiting-for-recvr-creation 0 Total number of senders that timed out waiting for the receiving fragment to initialize.

impala-server

Name Value Description
impala-server.backend-num-queries-executed 16.56K The total number of queries that executed on this backend over the life of the process.
impala-server.backend-num-queries-executing 0 The number of queries currently executing on this backend.
impala-server.ddl-durations-ms Count: 3218, min / max: 8ms / 20s211ms, 25th %-ile: 22ms, 50th %-ile: 27ms, 75th %-ile: 57ms, 90th %-ile: 253ms, 95th %-ile: 286ms, 99.9th %-ile: 20s096ms Distribution of DDL operation latencies.
impala-server.hash-table.total-bytes 0 The current size of all allocated hash tables.
impala-server.hedged-read-ops 0 The total number of hedged reads attempted over the life of the process.
impala-server.hedged-read-ops-win 0 The total number of times hedged reads were faster than regular read operations.
impala-server.io-mgr.bytes-read 426.33 MB The total number of bytes read by the IO manager.
impala-server.io-mgr.bytes-written 0 Total number of bytes written to disk by the IO manager.
impala-server.io-mgr.cached-bytes-read 0 Total number of cached bytes read by the IO manager.
impala-server.io-mgr.local-bytes-read 426.33 MB Total number of local bytes read by the IO manager.
impala-server.io-mgr.num-open-files 0 The current number of files opened by the IO Manager.
impala-server.io-mgr.remote-data-cache-dropped-bytes 0 Total number of bytes not inserted into the remote data cache due to the concurrency limit.
impala-server.io-mgr.remote-data-cache-hit-bytes 0 Total number of bytes of hits in the remote data cache.
impala-server.io-mgr.remote-data-cache-miss-bytes 0 Total number of bytes of misses in the remote data cache.
impala-server.io-mgr.remote-data-cache-total-bytes 0 Current byte size of the remote data cache.
impala-server.io-mgr.short-circuit-bytes-read 426.33 MB Total number of short-circuit bytes read by the IO manager.
impala-server.io.mgr.cached-file-handles-hit-count 7735 Number of cache hits for cached HDFS file handles.
impala-server.io.mgr.cached-file-handles-hit-ratio Last (of ): 1. Min: , max: , avg: 0.991667 HDFS file handle cache hit ratio, between 0 and 1, where 1 means all reads were served from cached file handles.
impala-server.io.mgr.cached-file-handles-miss-count 65 Number of cache misses for cached HDFS file handles.
impala-server.io.mgr.cached-file-handles-reopened 0 Number of cached HDFS file handles reopened.
impala-server.io.mgr.num-cached-file-handles 0 Number of currently cached HDFS file handles in the IO manager.
impala-server.io.mgr.num-file-handles-outstanding 0 Number of HDFS file handles that are currently in use by readers.
impala-server.mem-pool.total-bytes 0 The current size of the memory pool shared by all queries.
impala-server.num-files-open-for-insert 0 The number of HDFS files currently open for writing.
impala-server.num-fragments 11.10K The total number of query fragments processed over the life of the process.
impala-server.num-fragments-in-flight 0 The number of query fragment instances currently executing.
impala-server.num-open-beeswax-sessions 0 The number of open Beeswax sessions.
impala-server.num-open-hiveserver2-sessions 0 The number of open HiveServer2 sessions.
impala-server.num-queries 19.78K The total number of queries processed over the life of the process.
impala-server.num-queries-expired 0 Number of queries expired due to inactivity.
impala-server.num-queries-registered 0 The total number of queries registered on this Impala server instance. Includes queries that are in flight and waiting to be closed.
impala-server.num-queries-spilled 0 Number of queries for which any operator spilled.
impala-server.num-sessions-expired 0 Number of sessions expired due to inactivity.
impala-server.query-durations-ms Count: 16559, min / max: 6ms / 1s135ms, 25th %-ile: 126ms, 50th %-ile: 132ms, 75th %-ile: 146ms, 90th %-ile: 170ms, 95th %-ile: 194ms, 99.9th %-ile: 724ms Distribution of query latencies.
impala-server.ready true Indicates if the Impala Server is ready.
impala-server.resultset-cache.total-bytes 0 Total number of bytes consumed for rows cached to support HS2 FETCH_FIRST.
impala-server.resultset-cache.total-num-rows 0 Total number of rows cached to support HS2 FETCH_FIRST.
impala-server.scan-ranges.num-missing-volume-id 0 The total number of scan ranges read over the life of the process that did not have volume metadata.
impala-server.scan-ranges.total 3.90K The total number of scan ranges read over the life of the process.
impala-server.version impalad version 3.2.0-cdh6.3.2 RELEASE (build 1bb9836227301b839a32c6bc230e35439d5984ac) The full version string of the Impala Server.
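
The file handle cache counters above are self-consistent: dividing hits by total lookups reproduces the average reported by impala-server.io.mgr.cached-file-handles-hit-ratio.

    # Worked check: 7735 hits and 65 misses give the 0.991667 average
    # shown by the hit-ratio metric.
    hits, misses = 7735, 65
    print(hits / (hits + misses))   # 0.9916666...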

catalog

Name Value Description
catalog.curr-serviceid f5695436c38e42b6:b054890354a639ce Catalog service id.
catalog.curr-topic 73677 Statestore topic update version.
catalog.curr-version 37174 Catalog topic update version.
catalog.num-databases 15 The number of databases in the catalog.
catalog.num-tables 154 The number of tables in the catalog.
catalog.ready true Indicates if the catalog is ready.

jvm

Name Value Description
jvm.code-cache.committed-usage-bytes 30.94 MB Jvm code-cache Committed Usage Bytes
jvm.code-cache.current-usage-bytes 29.96 MB Jvm code-cache Current Usage Bytes
jvm.code-cache.init-usage-bytes 2.44 MB Jvm code-cache Init Usage Bytes
jvm.code-cache.max-usage-bytes 240.00 MB Jvm code-cache Max Usage Bytes
jvm.code-cache.peak-committed-usage-bytes 30.94 MB Jvm code-cache Peak Committed Usage Bytes
jvm.code-cache.peak-current-usage-bytes 30.33 MB Jvm code-cache Peak Current Usage Bytes
jvm.code-cache.peak-init-usage-bytes 2.44 MB Jvm code-cache Peak Init Usage Bytes
jvm.code-cache.peak-max-usage-bytes 240.00 MB Jvm code-cache Peak Max Usage Bytes
jvm.compressed-class-space.committed-usage-bytes 4.50 MB Jvm compressed-class-space Committed Usage Bytes
jvm.compressed-class-space.current-usage-bytes 4.34 MB Jvm compressed-class-space Current Usage Bytes
jvm.compressed-class-space.init-usage-bytes 0 Jvm compressed-class-space Init Usage Bytes
jvm.compressed-class-space.max-usage-bytes 1.00 GB Jvm compressed-class-space Max Usage Bytes
jvm.compressed-class-space.peak-committed-usage-bytes 4.50 MB Jvm compressed-class-space Peak Committed Usage Bytes
jvm.compressed-class-space.peak-current-usage-bytes 4.34 MB Jvm compressed-class-space Peak Current Usage Bytes
jvm.compressed-class-space.peak-init-usage-bytes 0 Jvm compressed-class-space Peak Init Usage Bytes
jvm.compressed-class-space.peak-max-usage-bytes 1.00 GB Jvm compressed-class-space Peak Max Usage Bytes
jvm.gc_count 840 Jvm Garbage Collection Count
jvm.gc_num_info_threshold_exceeded 0 Jvm Pause Detection Info Threshold Exceeded
jvm.gc_num_warn_threshold_exceeded 0 Jvm Pause Detection Warning Threshold Exceeded
jvm.gc_time_millis 5s764ms Jvm Garbage Collection Time
jvm.gc_total_extra_sleep_time_millis 24m8s Jvm Pause Detection Extra Sleep Time
jvm.heap.committed-usage-bytes 382.00 MB Jvm heap Committed Usage Bytes
jvm.heap.current-usage-bytes 258.03 MB Jvm heap Current Usage Bytes
jvm.heap.init-usage-bytes 394.00 MB Jvm heap Init Usage Bytes
jvm.heap.max-usage-bytes 382.00 MB Jvm heap Max Usage Bytes
jvm.heap.peak-committed-usage-bytes 0 Jvm heap Peak Committed Usage Bytes
jvm.heap.peak-current-usage-bytes 0 Jvm heap Peak Current Usage Bytes
jvm.heap.peak-init-usage-bytes 0 Jvm heap Peak Init Usage Bytes
jvm.heap.peak-max-usage-bytes 0 Jvm heap Peak Max Usage Bytes
jvm.metaspace.committed-usage-bytes 46.46 MB Jvm metaspace Committed Usage Bytes
jvm.metaspace.current-usage-bytes 45.81 MB Jvm metaspace Current Usage Bytes
jvm.metaspace.init-usage-bytes 0 Jvm metaspace Init Usage Bytes
jvm.metaspace.max-usage-bytes -1.00 B Jvm metaspace Max Usage Bytes
jvm.metaspace.peak-committed-usage-bytes 46.46 MB Jvm metaspace Peak Committed Usage Bytes
jvm.metaspace.peak-current-usage-bytes 45.81 MB Jvm metaspace Peak Current Usage Bytes
jvm.metaspace.peak-init-usage-bytes 0 Jvm metaspace Peak Init Usage Bytes
jvm.metaspace.peak-max-usage-bytes -1.00 B Jvm metaspace Peak Max Usage Bytes
jvm.non-heap.committed-usage-bytes 81.90 MB Jvm non-heap Committed Usage Bytes
jvm.non-heap.current-usage-bytes 80.11 MB Jvm non-heap Current Usage Bytes
jvm.non-heap.init-usage-bytes 2.44 MB Jvm non-heap Init Usage Bytes
jvm.non-heap.max-usage-bytes -1.00 B Jvm non-heap Max Usage Bytes
jvm.non-heap.peak-committed-usage-bytes 0 Jvm non-heap Peak Committed Usage Bytes
jvm.non-heap.peak-current-usage-bytes 0 Jvm non-heap Peak Current Usage Bytes
jvm.non-heap.peak-init-usage-bytes 0 Jvm non-heap Peak Init Usage Bytes
jvm.non-heap.peak-max-usage-bytes 0 Jvm non-heap Peak Max Usage Bytes
jvm.ps-eden-space.committed-usage-bytes 107.00 MB Jvm ps-eden-space Committed Usage Bytes
jvm.ps-eden-space.current-usage-bytes 48.99 MB Jvm ps-eden-space Current Usage Bytes
jvm.ps-eden-space.init-usage-bytes 99.00 MB Jvm ps-eden-space Init Usage Bytes
jvm.ps-eden-space.max-usage-bytes 107.00 MB Jvm ps-eden-space Max Usage Bytes
jvm.ps-eden-space.peak-committed-usage-bytes 116.00 MB Jvm ps-eden-space Peak Committed Usage Bytes
jvm.ps-eden-space.peak-current-usage-bytes 116.00 MB Jvm ps-eden-space Peak Current Usage Bytes
jvm.ps-eden-space.peak-init-usage-bytes 99.00 MB Jvm ps-eden-space Peak Init Usage Bytes
jvm.ps-eden-space.peak-max-usage-bytes 116.00 MB Jvm ps-eden-space Peak Max Usage Bytes
jvm.ps-old-gen.committed-usage-bytes 263.00 MB Jvm ps-old-gen Committed Usage Bytes
jvm.ps-old-gen.current-usage-bytes 199.61 MB Jvm ps-old-gen Current Usage Bytes
jvm.ps-old-gen.init-usage-bytes 263.00 MB Jvm ps-old-gen Init Usage Bytes
jvm.ps-old-gen.max-usage-bytes 263.00 MB Jvm ps-old-gen Max Usage Bytes
jvm.ps-old-gen.peak-committed-usage-bytes 263.00 MB Jvm ps-old-gen Peak Committed Usage Bytes
jvm.ps-old-gen.peak-current-usage-bytes 199.61 MB Jvm ps-old-gen Peak Current Usage Bytes
jvm.ps-old-gen.peak-init-usage-bytes 263.00 MB Jvm ps-old-gen Peak Init Usage Bytes
jvm.ps-old-gen.peak-max-usage-bytes 263.00 MB Jvm ps-old-gen Peak Max Usage Bytes
jvm.ps-survivor-space.committed-usage-bytes 12.00 MB Jvm ps-survivor-space Committed Usage Bytes
jvm.ps-survivor-space.current-usage-bytes 9.42 MB Jvm ps-survivor-space Current Usage Bytes
jvm.ps-survivor-space.init-usage-bytes 16.00 MB Jvm ps-survivor-space Init Usage Bytes
jvm.ps-survivor-space.max-usage-bytes 12.00 MB Jvm ps-survivor-space Max Usage Bytes
jvm.ps-survivor-space.peak-committed-usage-bytes 24.50 MB Jvm ps-survivor-space Peak Committed Usage Bytes
jvm.ps-survivor-space.peak-current-usage-bytes 19.22 MB Jvm ps-survivor-space Peak Current Usage Bytes
jvm.ps-survivor-space.peak-init-usage-bytes 16.00 MB Jvm ps-survivor-space Peak Init Usage Bytes
jvm.ps-survivor-space.peak-max-usage-bytes 24.50 MB Jvm ps-survivor-space Peak Max Usage Bytes
jvm.total.committed-usage-bytes 463.90 MB Jvm total Committed Usage Bytes
jvm.total.current-usage-bytes 338.14 MB Jvm total Current Usage Bytes
jvm.total.init-usage-bytes 380.44 MB Jvm total Init Usage Bytes
jvm.total.max-usage-bytes 1.61 GB Jvm total Max Usage Bytes
jvm.total.peak-committed-usage-bytes 485.40 MB Jvm total Peak Committed Usage Bytes
jvm.total.peak-current-usage-bytes 415.32 MB Jvm total Peak Current Usage Bytes
jvm.total.peak-init-usage-bytes 380.44 MB Jvm total Peak Init Usage Bytes
jvm.total.peak-max-usage-bytes 1.63 GB Jvm total Peak Max Usage Bytes
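
For a quick capacity check, heap pressure can be read straight off the heap gauges; here current usage is roughly two thirds of the configured maximum. The -1.00 B values on the metaspace and non-heap max gauges are how JMX reports an undefined maximum, so only pools with a real limit are meaningful for this ratio.

    # Worked example: heap utilisation from the gauges above.
    current_mb, max_mb = 258.03, 382.00
    print(f"heap used: {current_mb / max_mb:.1%}")   # ~67.5%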

memory

Name Value Description
memory.mapped-bytes 10.75 GB Total bytes of memory mappings in this process (the virtual memory size).
memory.rss 1.16 GB Resident set size (RSS) of this process, including TCMalloc, buffer pool and Jvm.
memory.thp.defrag [always] madvise never The system-wide 'defrag' setting for Transparent Huge Pages.
memory.thp.enabled [always] madvise never The system-wide 'enabled' setting for Transparent Huge Pages.
memory.thp.khugepaged-defrag 1 The system-wide 'defrag' setting for khugepaged.
memory.total-used 791.15 MB Total memory currently used by TCMalloc and buffer pool.
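
The THP values are the raw contents of the kernel's transparent_hugepage sysfs files, in which the bracketed token is the active setting. A small parse:

    # Sketch: extract the active Transparent Huge Pages mode from the
    # sysfs-style string reported by memory.thp.enabled.
    import re
    raw = "[always] madvise never"
    active = re.search(r"\[(\w+)\]", raw).group(1)
    print(active)   # "always"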

rpc

Name Value Description
mem-tracker.ControlService.current_usage_bytes 0 Memtracker ControlService Current Usage Bytes
mem-tracker.ControlService.peak_usage_bytes 17.03 KB Memtracker ControlService Peak Usage Bytes
mem-tracker.DataStreamService.current_usage_bytes 0 Memtracker DataStreamService Current Usage Bytes
mem-tracker.DataStreamService.peak_usage_bytes 431.00 B Memtracker DataStreamService Peak Usage Bytes
rpc.impala.ControlService.rpcs_queue_overflow 0 Service impala.ControlService: Total number of incoming RPCs that were rejected due to overflow of the service queue.
rpc.impala.DataStreamService.rpcs_queue_overflow 0 Service impala.DataStreamService: Total number of incoming RPCs that were rejected due to overflow of the service queue.
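
Both rpcs_queue_overflow counters should stay at zero; any increase between scrapes means inbound RPCs were dropped. A minimal delta check, assuming prev and curr come from two successive runs of the scraper sketch earlier:

    # Sketch: alert on RPC service-queue overflow between two scrapes.
    key = "rpc.impala.DataStreamService.rpcs_queue_overflow"
    prev, curr = {key: 0}, {key: 0}
    if curr[key] > prev[key]:
        print(f"{key} grew by {curr[key] - prev[key]}: inbound RPCs were rejected")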

scheduler

Name Value Description
simple-scheduler.assignments.total 13.26K The number of assignments.
simple-scheduler.initialized true Indicates whether the scheduler has been initialized.
simple-scheduler.local-assignments.total 13.26K Number of assignments operating on local data.
simple-scheduler.num-backends 3 The number of backend connections from this Impala Daemon to other Impala Daemons.
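
Dividing local assignments by total assignments gives the scan locality ratio; on this daemon every assignment operated on local data.

    # Worked check: scan locality from the two scheduler counters above.
    total, local = 13_260, 13_260
    print(f"local assignments: {local / total:.0%}")   # 100%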

statestore-subscriber

Name Value Description
rpc-method.statestore-subscriber.StatestoreSubscriber.Heartbeat.call_duration Count: 7989689, min / max: 0 / 42ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.statestore-subscriber.StatestoreSubscriber.UpdateState.call_duration Count: 83673172, min / max: 0 / 3s115ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
statestore-subscriber.connected true Whether the Impala Daemon considers itself connected to the StateStore.
statestore-subscriber.heartbeat-interval-time Last (of 20131189): 1.00004. Min: 0, max: 13.6635, avg: 1.00023 The time (sec) between Statestore heartbeats.
statestore-subscriber.last-recovery-duration 0 The amount of time the StateStore subscriber took to recover the connection the last time it was lost.
statestore-subscriber.last-recovery-time N/A The local time that the last statestore recovery happened.
statestore-subscriber.topic-catalog-update.processing-time-s Last (of 10066204): 0. Min: 0, max: 22.3768, avg: 8.66488e-05 Statestore Subscriber Topic catalog-update Processing Time
statestore-subscriber.topic-catalog-update.update-interval Last (of 10066204): 2.00007. Min: 0, max: 2.96158, avg: 2.00026 Interval between topic updates for Topic catalog-update
statestore-subscriber.topic-impala-membership.processing-time-s Last (of 200805128): 0. Min: 0, max: 4.38116, avg: 9.00222e-06 Statestore Subscriber Topic impala-membership Processing Time
statestore-subscriber.topic-impala-membership.update-interval Last (of 200805128): 0.100004. Min: 0, max: 12.7235, avg: 0.10025 Interval between topic updates for Topic impala-membership
statestore-subscriber.topic-impala-request-queue.processing-time-s Last (of 200805128): 0. Min: 0, max: 4.38116, avg: 2.396e-05 Statestore Subscriber Topic impala-request-queue Processing Time
statestore-subscriber.topic-impala-request-queue.update-interval Last (of 200805128): 0.100004. Min: 0, max: 12.7235, avg: 0.100251 Interval between topic updates for Topic impala-request-queue
statestore-subscriber.topic-update-duration Last (of 210871332): 0. Min: 0, max: 22.3768, avg: 2.75937e-05 The time (sec) taken to process Statestore subscriber topic updates.
statestore-subscriber.topic-update-interval-time Last (of 411676460): 0.100004. Min: 0, max: 12.7235, avg: 0.146709 The time (sec) between Statestore subscriber topic updates.
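
Several subscriber metrics report a sampled-stats string rather than a single number. Below is a parser for the "Last (of N): x. Min: m, max: M, avg: a" format seen above; the pattern is inferred from this page and may change between releases.

    # Sketch: parse Impala's sampled-stat value strings, e.g.
    # "Last (of 20131189): 1.00004. Min: 0, max: 13.6635, avg: 1.00023".
    import re

    _PAT = re.compile(
        r"Last \(of (?P<count>\d+)\): (?P<last>[-+.\deE]+)\. "
        r"Min: (?P<min>[-+.\deE]+), max: (?P<max>[-+.\deE]+), "
        r"avg: (?P<avg>[-+.\deE]+)"
    )

    def parse_stat(value):
        m = _PAT.match(value)
        return {k: float(v) for k, v in m.groupdict().items()} if m else None

    print(parse_stat("Last (of 20131189): 1.00004. Min: 0, max: 13.6635, avg: 1.00023"))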

tcmalloc

Name Value Description
tcmalloc.bytes-in-use 736.72 MB Number of bytes used by the application. This will not typically match the memory use reported by the OS, because it does not include TCMalloc overhead or memory fragmentation.
tcmalloc.pageheap-free-bytes 0 Number of bytes in free, mapped pages in page heap. These bytes can be used to fulfill allocation requests. They always count towards virtual memory usage, and unless the underlying memory is swapped out by the OS, they also count towards physical memory usage.
tcmalloc.pageheap-unmapped-bytes 6.10 GB Number of bytes in free, unmapped pages in page heap. These are bytes that have been released back to the OS, possibly by one of the MallocExtension "Release" calls. They can be used to fulfill allocation requests, but typically incur a page fault. They always count towards virtual memory usage, and depending on the OS, typically do not count towards physical memory usage.
tcmalloc.physical-bytes-reserved 791.15 MB Derived metric computing the amount of physical memory (in bytes) used by the process, including that actually in use and free bytes reserved by tcmalloc. Does not include the tcmalloc metadata.
tcmalloc.total-bytes-reserved 6.88 GB Bytes of system memory reserved by TCMalloc.
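
The derived metric is consistent with the others: subtracting the unmapped page-heap bytes from the total reserved bytes lands close to the physical reservation, with the small gap explained by the two-decimal GB rounding above.

    # Worked check: physical-bytes-reserved is approximately
    # total-bytes-reserved minus pageheap-unmapped-bytes.
    total_gb, unmapped_gb = 6.88, 6.10
    print((total_gb - unmapped_gb) * 1024)   # ~798 MB vs the 791.15 MB reported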