Metrics

impala-metrics

Name Value Description
catalog.server.client-cache.clients-in-use 0 The number of clients currently in use by the Catalog Server client cache.
catalog.server.client-cache.total-clients 1 The total number of clients in the Catalog Server client cache.
external-data-source.class-cache.hits 0 Number of cache hits in the External Data Source Class Cache
external-data-source.class-cache.misses 0 Number of cache misses in the External Data Source Class Cache
impala-server.backends.client-cache.clients-in-use 0 The number of active Impala Backend clients. These clients are for communication with other Impala Daemons.
impala-server.backends.client-cache.total-clients 6 The total number of Impala Backend clients in this Impala Daemon's client cache. These clients are for communication with other Impala Daemons.
impala.thrift-server.backend.connection-setup-queue-size 0 The number of connections to the Impala Backend Server that have been accepted and are waiting to be set up.
impala.thrift-server.backend.connections-in-use 2 The number of active Impala Backend client connections to this Impala Daemon.
impala.thrift-server.backend.timedout-cnxn-requests 0 The number of connection requests to the Impala Backend Server that timed out while waiting to be set up.
impala.thrift-server.backend.total-connections 2 The total number of Impala Backend client connections made to this Impala Daemon over its lifetime.
impala.thrift-server.beeswax-frontend.connection-setup-queue-size 0 The number of Beeswax API connections to this Impala Daemon that have been accepted and are waiting to be set up.
impala.thrift-server.beeswax-frontend.connections-in-use 0 The number of active Beeswax API connections to this Impala Daemon.
impala.thrift-server.beeswax-frontend.timedout-cnxn-requests 0 The number of Beeswax API connection requests to this Impala Daemon that timed out while waiting to be set up.
impala.thrift-server.beeswax-frontend.total-connections 420 The total number of Beeswax API connections made to this Impala Daemon over its lifetime.
impala.thrift-server.hiveserver2-frontend.connection-setup-queue-size 0 The number of HiveServer2 API connections to this Impala Daemon that have been accepted and are waiting to be set up.
impala.thrift-server.hiveserver2-frontend.connections-in-use 1 The number of active HiveServer2 API connections to this Impala Daemon.
impala.thrift-server.hiveserver2-frontend.timedout-cnxn-requests 0 The number of HiveServer2 API connection requests to this Impala Daemon that timed out while waiting to be set up.
impala.thrift-server.hiveserver2-frontend.total-connections 2.91K The total number of HiveServer2 API connections made to this Impala Daemon over its lifetime.
kudu-client.version kudu 1.10.0-cdh6.3.2 (rev 09cea83ce09c429de2c5c9776908db0000857495) A version string identifying the Kudu client
mem-tracker.process.bytes-freed-by-last-gc -1.00 B The amount of memory freed by the last memory tracker garbage collection.
mem-tracker.process.bytes-over-limit -1.00 B The amount of memory by which the process was over its memory limit the last time the memory limit was encountered.
mem-tracker.process.limit 49.41 GB The process memory tracker limit.
mem-tracker.process.num-gcs 0 The total number of garbage collections performed by the memory tracker over the life of the process.
process-start-time 2025-06-10 23:41:43.990525000 The local start time of the process
request-pool-service.resolve-pool-duration-ms Last (of 8200): 0. Min: 0, max: 9, avg: 0.101098 Time (ms) spent resolving request pools.
rpc-method.backend.ImpalaInternalService.ExecQueryFInstances.call_duration Count: 2099, min / max: 0 / 21ms, 25th %-ile: 0, 50th %-ile: 5ms, 75th %-ile: 6ms, 90th %-ile: 6ms, 95th %-ile: 7ms, 99.9th %-ile: 15ms
rpc-method.beeswax.BeeswaxService.get_default_configuration.call_duration Count: 420, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.beeswax.BeeswaxService.get_log.call_duration Count: 420, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.beeswax.BeeswaxService.get_state.call_duration Count: 1329, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.beeswax.BeeswaxService.query.call_duration Count: 420, min / max: 4ms / 25ms, 25th %-ile: 4ms, 50th %-ile: 5ms, 75th %-ile: 5ms, 90th %-ile: 6ms, 95th %-ile: 6ms, 99.9th %-ile: 25ms
rpc-method.beeswax.ImpalaService.CloseInsert.call_duration Count: 420, min / max: 2ms / 9ms, 25th %-ile: 3ms, 50th %-ile: 4ms, 75th %-ile: 6ms, 90th %-ile: 6ms, 95th %-ile: 6ms, 99.9th %-ile: 9ms
rpc-method.beeswax.ImpalaService.GetRuntimeProfile.call_duration Count: 420, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
rpc-method.beeswax.ImpalaService.PingImpalaService.call_duration Count: 420, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.CloseOperation.call_duration Count: 5501, min / max: 0 / 41ms, 25th %-ile: 0, 50th %-ile: 1ms, 75th %-ile: 2ms, 90th %-ile: 23ms, 95th %-ile: 27ms, 99.9th %-ile: 38ms
rpc-method.hs2.TCLIService.CloseSession.call_duration Count: 4215, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.ExecuteStatement.call_duration Count: 5501, min / max: 1ms / 1s619ms, 25th %-ile: 2ms, 50th %-ile: 3ms, 75th %-ile: 16ms, 90th %-ile: 42ms, 95th %-ile: 48ms, 99.9th %-ile: 1s158ms
rpc-method.hs2.TCLIService.FetchResults.call_duration Count: 9759, min / max: 0 / 9s982ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 1ms, 95th %-ile: 2ms, 99.9th %-ile: 5ms
rpc-method.hs2.TCLIService.GetInfo.call_duration Count: 2834, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.GetOperationStatus.call_duration Count: 90049, min / max: 0 / 6ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.GetResultSetMetadata.call_duration Count: 1927, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 1ms, 99.9th %-ile: 1ms
rpc-method.hs2.TCLIService.OpenSession.call_duration Count: 4251, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 1ms, 95th %-ile: 1ms, 99.9th %-ile: 1ms
statestore-subscriber.registration-id ce4dd8ee0fdf05a8:4336a23940884e9a The most recent registration ID for this subscriber with the statestore. Set to 'N/A' if no registration has been completed
statestore-subscriber.statestore.client-cache.clients-in-use 0 The number of active StateStore subscriber clients in this Impala Daemon's client cache. These clients are for communication from this role to the StateStore.
statestore-subscriber.statestore.client-cache.total-clients 1 The total number of StateStore subscriber clients in this Impala Daemon's client cache. These clients are for communication from this role to the StateStore.
thread-manager.running-threads 28 The number of running threads in this process.
thread-manager.total-threads-created 24876 The total number of threads created over the lifetime of the process.
tmp-file-mgr.active-scratch-dirs 2 The number of active scratch directories for spilling to disk.
tmp-file-mgr.active-scratch-dirs.list [/data1/impala/impalad/impala-scratch, /data2/impala/impalad/impala-scratch] The set of all active scratch directories for spilling to disk.
tzdata-path /usr/share/zoneinfo Path to the time-zone database
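
All of the metrics on this page are served by the Impala daemon's embedded debug web server (port 25000 by default) and can be scraped programmatically. The sketch below pulls the page as JSON and flattens the nested metric groups into one dict; the "?json" parameter and the metric_group / metrics / child_groups field names are assumptions based on recent Impala builds, so verify them against your version. Later sketches on this page take the resulting flat dict as input.

    import json
    import urllib.request

    def fetch_metrics(host="localhost", port=25000):
        # Appending "?json" asks the debug web UI to render the page as
        # JSON instead of HTML (assumed; confirm on your build).
        url = f"http://{host}:{port}/metrics?json"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)

    def walk(group, out):
        # Metric groups nest (impala-metrics, buffer-pool, jvm, ...);
        # flatten them into a single {metric-name: value} dict.
        for m in group.get("metrics", []):
            out[m["name"]] = m.get("value", m.get("human_readable"))
        for child in group.get("child_groups", []):
            walk(child, out)
        return out

    if __name__ == "__main__":
        flat = walk(fetch_metrics()["metric_group"], {})
        print(flat.get("impala-server.ready"))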

admission-controller

Name Value Description
admission-controller.agg-mem-reserved.root.default 0 Resource Pool root.default Aggregate Mem Reserved
admission-controller.agg-mem-reserved.root.root 0 Resource Pool root.root Aggregate Mem Reserved
admission-controller.agg-num-queued.root.default 0 Resource Pool root.default Aggregate Queue Size
admission-controller.agg-num-queued.root.root 0 Resource Pool root.root Aggregate Queue Size
admission-controller.agg-num-running.root.default 0 Resource Pool root.default Aggregate Num Running
admission-controller.agg-num-running.root.root 0 Resource Pool root.root Aggregate Num Running
admission-controller.local-backend-mem-reserved.root.default 0 Resource Pool root.default Mem Reserved by the backend coordinator
admission-controller.local-backend-mem-reserved.root.root 0 Resource Pool root.root Mem Reserved by the backend coordinator
admission-controller.local-backend-mem-usage.root.default 0 Resource Pool root.default Coordinator Backend Mem Usage
admission-controller.local-backend-mem-usage.root.root 0 Resource Pool root.root Coordinator Backend Mem Usage
admission-controller.local-mem-admitted.root.default 0 Resource Pool root.default Local Mem Admitted
admission-controller.local-mem-admitted.root.root 0 Resource Pool root.root Local Mem Admitted
admission-controller.local-num-admitted-running.root.default 0 Resource Pool root.default Coordinator Backend Num Running
admission-controller.local-num-admitted-running.root.root 0 Resource Pool root.root Coordinator Backend Num Running
admission-controller.local-num-queued.root.default 0 Resource Pool root.default Queue Size on the coordinator
admission-controller.local-num-queued.root.root 0 Resource Pool root.root Queue Size on the coordinator
admission-controller.pool-clamp-mem-limit-query-option.root.default true If false, the mem_limit query option will not be bounded by the max/min query mem limits specified for the pool
admission-controller.pool-clamp-mem-limit-query-option.root.root true If false, the mem_limit query option will not be bounded by the max/min query mem limits specified for the pool
admission-controller.pool-max-mem-resources.root.default -1 Resource Pool root.default Configured Max Mem Resources
admission-controller.pool-max-mem-resources.root.root -1 Resource Pool root.root Configured Max Mem Resources
admission-controller.pool-max-query-mem-limit.root.default 0 Resource Pool root.default Max Query Memory Limit
admission-controller.pool-max-query-mem-limit.root.root 0 Resource Pool root.root Max Query Memory Limit
admission-controller.pool-max-queued.root.default 200 Resource Pool root.default Configured Max Queued
admission-controller.pool-max-queued.root.root 200 Resource Pool root.root Configured Max Queued
admission-controller.pool-max-requests.root.default -1 Resource Pool root.default Configured Max Requests
admission-controller.pool-max-requests.root.root -1 Resource Pool root.root Configured Max Requests
admission-controller.pool-min-query-mem-limit.root.default 0 Resource Pool root.default Min Query Memory Limit
admission-controller.pool-min-query-mem-limit.root.root 0 Resource Pool root.root Min Query Memory Limit
admission-controller.time-in-queue-ms.root.default 0 Resource Pool root.default Time in Queue
admission-controller.time-in-queue-ms.root.root 0 Resource Pool root.root Time in Queue
admission-controller.total-admitted.root.default 1.86K Total number of requests admitted to pool root.default
admission-controller.total-admitted.root.root 420 Total number of requests admitted to pool root.root
admission-controller.total-dequeued.root.default 0 Total number of requests dequeued in pool root.default
admission-controller.total-dequeued.root.root 0 Total number of requests dequeued in pool root.root
admission-controller.total-queued.root.default 0 Total number of requests queued in pool root.default
admission-controller.total-queued.root.root 0 Total number of requests queued in pool root.root
admission-controller.total-rejected.root.default 0 Total number of requests rejected in pool root.default
admission-controller.total-rejected.root.root 0 Total number of requests rejected in pool root.root
admission-controller.total-released.root.default 1.86K Total number of requests that have completed and released resources in pool root.default
admission-controller.total-released.root.root 420 Total number of requests that have completed and released resources in pool root.root
admission-controller.total-timed-out.root.default 0 Total number of requests that timed out while queued in pool root.default
admission-controller.total-timed-out.root.root 0 Total number of requests that timed out while queued in pool root.root
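
These running totals compose into a quick per-pool health summary: admitted minus released gives the queries currently holding admission resources, while the rejected and timed-out totals should stay flat on a healthy pool. A minimal sketch over the flattened metrics dict from the first example; the pool names are the ones in this dump, so substitute your own:

    def pool_health(flat, pool):
        p = "admission-controller"
        admitted = flat[f"{p}.total-admitted.{pool}"]
        released = flat[f"{p}.total-released.{pool}"]
        return {
            "in_flight": admitted - released,   # admitted but not yet released
            "rejected":  flat[f"{p}.total-rejected.{pool}"],
            "timed_out": flat[f"{p}.total-timed-out.{pool}"],
            "queued":    flat[f"{p}.agg-num-queued.{pool}"],
        }

    # For this dump, pool_health(flat, "root.default") gives in_flight 0,
    # since total-admitted (1.86K) equals total-released (1.86K).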

buffer-pool

Name Value Description
buffer-pool.arena-0.allocated-buffer-sizes Count: 56, min / max: 8.00 KB / 2.00 MB, 25th %-ile: 16.00 KB, 50th %-ile: 32.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 512.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 2.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-0.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-0.direct-alloc-count 3.59K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-0.local-arena-free-buffer-hits 207.43K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-0.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-0.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-0.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-0.system-alloc-time 22.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-1.allocated-buffer-sizes Count: 71, min / max: 8.00 KB / 1.00 MB, 25th %-ile: 32.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 256.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 1.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-1.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-1.direct-alloc-count 4.56K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-1.local-arena-free-buffer-hits 239.26K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-1.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-1.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-1.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-1.system-alloc-time 19.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-2.allocated-buffer-sizes Count: 49, min / max: 8.00 KB / 1.00 MB, 25th %-ile: 8.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 1.00 MB, 95th %-ile: 1.00 MB, 99.9th %-ile: 1.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-2.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-2.direct-alloc-count 3.14K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-2.local-arena-free-buffer-hits 277.39K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-2.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-2.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-2.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-2.system-alloc-time 12.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-3.allocated-buffer-sizes Count: 56, min / max: 8.00 KB / 1.00 MB, 25th %-ile: 64.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 512.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 1.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-3.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-3.direct-alloc-count 3.62K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-3.local-arena-free-buffer-hits 241.62K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-3.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-3.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-3.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-3.system-alloc-time 12.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-4.allocated-buffer-sizes Count: 52, min / max: 8.00 KB / 2.00 MB, 25th %-ile: 8.00 KB, 50th %-ile: 32.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 512.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 2.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-4.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-4.direct-alloc-count 3.36K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-4.local-arena-free-buffer-hits 311.31K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-4.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-4.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-4.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-4.system-alloc-time 13.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-5.allocated-buffer-sizes Count: 59, min / max: 8.00 KB / 1.00 MB, 25th %-ile: 16.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 64.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 1.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-5.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-5.direct-alloc-count 3.83K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-5.local-arena-free-buffer-hits 227.61K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-5.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-5.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-5.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-5.system-alloc-time 30.001ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-6.allocated-buffer-sizes Count: 45, min / max: 8.00 KB / 1.00 MB, 25th %-ile: 16.00 KB, 50th %-ile: 32.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 512.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 1.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-6.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-6.direct-alloc-count 2.89K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-6.local-arena-free-buffer-hits 308.00K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-6.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-6.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-6.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-6.system-alloc-time 15.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.arena-7.allocated-buffer-sizes Count: 57, min / max: 8.00 KB / 1.00 MB, 25th %-ile: 64.00 KB, 50th %-ile: 64.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 512.00 KB, 95th %-ile: 512.00 KB, 99.9th %-ile: 1.00 MB Statistics for buffer sizes allocated from the system. Only a subset of allocations are counted in this metric to reduce overhead.
buffer-pool.arena-7.clean-page-hits 0 Number of times a clean page was evicted to fulfil an allocation.
buffer-pool.arena-7.direct-alloc-count 3.66K Number of times a new buffer was directly allocated from the system allocator to fulfil an allocation.
buffer-pool.arena-7.local-arena-free-buffer-hits 239.51K Number of times a free buffer was recycled from this core's arena to fulfil an allocation.
buffer-pool.arena-7.num-final-scavenges 0 Number of times the allocator had to lock all arenas and scavenge to fulfil an allocation.
buffer-pool.arena-7.num-scavenges 0 Number of times the allocator had to scavenge to fulfil an allocation.
buffer-pool.arena-7.numa-arena-free-buffer-hits 0 Number of times that a recycled buffer within the same NUMA node was used to fulfil an allocation.
buffer-pool.arena-7.system-alloc-time 19.000ms Total time the buffer pool spent in the system allocator for this arena.
buffer-pool.clean-page-bytes 0 Total bytes of clean page memory cached in the buffer pool.
buffer-pool.clean-pages 0 Total number of clean pages cached in the buffer pool.
buffer-pool.clean-pages-limit 4.20 GB Limit on the total bytes of clean pages cached in the buffer pool.
buffer-pool.free-buffer-bytes 0 Total bytes of free buffer memory cached in the buffer pool.
buffer-pool.free-buffers 0 Total number of free buffers cached in the buffer pool.
buffer-pool.limit 42.00 GB Maximum allowed bytes allocated by the buffer pool.
buffer-pool.reserved 0 Total bytes of buffers reserved by Impala subsystems
buffer-pool.system-allocated 0 Total buffer memory currently allocated by the buffer pool.
buffer-pool.unused-reservation-bytes 0 Total bytes of buffer reservations by Impala subsystems that are currently unused
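
The allocated-buffer-sizes rows above are histogram metrics rendered as a stats string. A sketch that parses that string back into byte counts, assuming the exact "Count: ..., min / max: ..., Nth %-ile: ..." layout shown in this dump (other builds may format it differently):

    import re

    UNIT = {"B": 1, "KB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30}

    def to_bytes(text):
        # "32.00 KB" -> 32768.0
        num, unit = text.split()
        return float(num) * UNIT[unit]

    def parse_size_histogram(s):
        out = {"count": int(re.search(r"Count: (\d+)", s).group(1))}
        lo, hi = re.search(r"min / max: (\S+ \w+) / (\S+ \w+)", s).groups()
        out["min"], out["max"] = to_bytes(lo), to_bytes(hi)
        for pct, val in re.findall(r"([\d.]+)th %-ile: (\S+ \w+)", s):
            out["p" + pct] = to_bytes(val)
        return out

    # Using the arena-0 row from this dump:
    h = parse_size_histogram(
        "Count: 56, min / max: 8.00 KB / 2.00 MB, 25th %-ile: 16.00 KB, "
        "50th %-ile: 32.00 KB, 75th %-ile: 64.00 KB, 90th %-ile: 512.00 KB, "
        "95th %-ile: 512.00 KB, 99.9th %-ile: 2.00 MB")
    assert h["p50"] == 32 * 1024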

datastream-manager

Name Value Description
senders-blocked-on-recvr-creation 0 Number of senders currently blocked waiting for the receiving fragment to initialize.
total-senders-blocked-on-recvr-creation 0 Total number of senders that have been blocked waiting for the receiving fragment to initialize.
total-senders-timedout-waiting-for-recvr-creation 0 Total number of senders that timed out waiting for the receiving fragment to initialize.
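
senders-blocked-on-recvr-creation is a point-in-time gauge, while the two totals are lifetime counters, so exchange stalls show up as deltas between polls. A sketch of such a poll loop; fetch is a placeholder for any callable returning the flattened metrics dict (e.g. built from the first example):

    import time

    def watch_sender_timeouts(fetch, interval=60):
        key = "total-senders-timedout-waiting-for-recvr-creation"
        prev = fetch()[key]
        while True:
            time.sleep(interval)
            cur = fetch()[key]
            if cur > prev:
                # Receiving fragments failed to initialize in time.
                print(f"{cur - prev} sender(s) timed out in the last {interval}s")
            prev = cur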

impala-server

Name Value Description
impala-server.backend-num-queries-executed 2.28K The total number of queries that executed on this backend over the life of the process.
impala-server.backend-num-queries-executing 0 The number of queries currently executing on this backend.
impala-server.ddl-durations-ms Count: 3642, min / max: 12ms / 1s631ms, 25th %-ile: 17ms, 50th %-ile: 18ms, 75th %-ile: 19ms, 90th %-ile: 20ms, 95th %-ile: 21ms, 99.9th %-ile: 1s246ms Distribution of DDL operation latencies
impala-server.hash-table.total-bytes 0 The current size of all allocated hash tables.
impala-server.hedged-read-ops 0 The total number of hedged reads attempted over the life of the process
impala-server.hedged-read-ops-win 0 The total number of times hedged reads were faster than regular read operations
impala-server.io-mgr.bytes-read 88.27 GB The total number of bytes read by the IO manager.
impala-server.io-mgr.bytes-written 0 Total number of bytes written to disk by the IO manager.
impala-server.io-mgr.cached-bytes-read 0 Total number of cached bytes read by the IO manager.
impala-server.io-mgr.local-bytes-read 88.27 GB Total number of local bytes read by the IO manager.
impala-server.io-mgr.num-open-files 0 The current number of files opened by the IO Manager
impala-server.io-mgr.remote-data-cache-dropped-bytes 0 Total number of bytes not inserted into the remote data cache due to the concurrency limit.
impala-server.io-mgr.remote-data-cache-hit-bytes 0 Total number of bytes of hits in the remote data cache.
impala-server.io-mgr.remote-data-cache-miss-bytes 0 Total number of bytes of misses in the remote data cache.
impala-server.io-mgr.remote-data-cache-total-bytes 0 Current byte size of the remote data cache.
impala-server.io-mgr.short-circuit-bytes-read 70.19 GB Total number of short-circuit bytes read by the IO manager.
impala-server.io.mgr.cached-file-handles-hit-count 1072122 Number of cache hits for cached HDFS file handles
impala-server.io.mgr.cached-file-handles-hit-ratio Last (of ): 1. Min: , max: , avg: 0.997988 HDFS file handle cache hit ratio, between 0 and 1, where 1 means all reads were served from cached file handles.
impala-server.io.mgr.cached-file-handles-miss-count 2161 Number of cache misses for cached HDFS file handles
impala-server.io.mgr.cached-file-handles-reopened 0 Number of cached HDFS file handles reopened
impala-server.io.mgr.num-cached-file-handles 0 Number of currently cached HDFS file handles in the IO manager.
impala-server.io.mgr.num-file-handles-outstanding 0 Number of HDFS file handles that are currently in use by readers.
impala-server.mem-pool.total-bytes 0 The current size of the memory pool shared by all queries
impala-server.num-files-open-for-insert 0 The number of HDFS files currently open for writing.
impala-server.num-fragments 5.36K The total number of query fragments processed over the life of the process.
impala-server.num-fragments-in-flight 0 The number of query fragment instances currently executing.
impala-server.num-open-beeswax-sessions 0 The number of open Beeswax sessions.
impala-server.num-open-hiveserver2-sessions 0 The number of open HiveServer2 sessions.
impala-server.num-queries 5.92K The total number of queries processed over the life of the process
impala-server.num-queries-expired 0 Number of queries expired due to inactivity.
impala-server.num-queries-registered 0 The total number of queries registered on this Impala server instance. Includes queries that are in flight and waiting to be closed
impala-server.num-queries-spilled 0 Number of queries for which any operator spilled.
impala-server.num-sessions-expired 0 Number of sessions expired due to inactivity.
impala-server.query-durations-ms Count: 2279, min / max: 7ms / 13s498ms, 25th %-ile: 30ms, 50th %-ile: 913ms, 75th %-ile: 1s004ms, 90th %-ile: 1s071ms, 95th %-ile: 1s114ms, 99.9th %-ile: 2s658ms Distribution of query latencies
impala-server.ready true Indicates if the Impala Server is ready.
impala-server.resultset-cache.total-bytes 0 Total number of bytes consumed for rows cached to support HS2 FETCH_FIRST.
impala-server.resultset-cache.total-num-rows 0 Total number of rows cached to support HS2 FETCH_FIRST.
impala-server.scan-ranges.num-missing-volume-id 0 The total number of scan ranges read over the life of the process that did not have volume metadata
impala-server.scan-ranges.total 949.26K The total number of scan ranges read over the life of the process
impala-server.version impalad version 3.2.0-cdh6.3.2 RELEASE (build 1bb9836227301b839a32c6bc230e35439d5984ac) The full version string of the Impala Server.
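
The reported cached-file-handles-hit-ratio can be cross-checked against the raw hit and miss counters above; the average is simply hits / (hits + misses):

    hits, misses = 1072122, 2161   # counters from this dump
    print(hits / (hits + misses))  # ~0.997988, matching the reported average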

catalog

Name Value Description
catalog.curr-serviceid f5695436c38e42b6:b054890354a639ce The catalog service ID.
catalog.curr-topic 1113 Statestore topic update version.
catalog.curr-version 40090 Catalog topic update version.
catalog.num-databases 15 The number of databases in the catalog.
catalog.num-tables 154 The number of tables in the catalog.
catalog.ready true Indicates if the catalog is ready.
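
Each coordinator caches its own copy of the catalog topic, so comparing catalog.curr-version across daemons is a quick way to spot one whose metadata is lagging. A sketch reusing fetch_metrics() and walk() from the first example; the host list is a placeholder:

    def catalog_version_skew(hosts, port=25000):
        versions = {}
        for h in hosts:
            flat = walk(fetch_metrics(h, port)["metric_group"], {})
            versions[h] = flat["catalog.curr-version"]
        # A large spread suggests a coordinator with stale metadata.
        return max(versions.values()) - min(versions.values()), versions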

jvm

Name Value Description
jvm.code-cache.committed-usage-bytes 30.31 MB Jvm code-cache Committed Usage Bytes
jvm.code-cache.current-usage-bytes 29.98 MB Jvm code-cache Current Usage Bytes
jvm.code-cache.init-usage-bytes 2.44 MB Jvm code-cache Init Usage Bytes
jvm.code-cache.max-usage-bytes 240.00 MB Jvm code-cache Max Usage Bytes
jvm.code-cache.peak-committed-usage-bytes 30.31 MB Jvm code-cache Peak Committed Usage Bytes
jvm.code-cache.peak-current-usage-bytes 29.99 MB Jvm code-cache Peak Current Usage Bytes
jvm.code-cache.peak-init-usage-bytes 2.44 MB Jvm code-cache Peak Init Usage Bytes
jvm.code-cache.peak-max-usage-bytes 240.00 MB Jvm code-cache Peak Max Usage Bytes
jvm.compressed-class-space.committed-usage-bytes 4.50 MB Jvm compressed-class-space Committed Usage Bytes
jvm.compressed-class-space.current-usage-bytes 4.32 MB Jvm compressed-class-space Current Usage Bytes
jvm.compressed-class-space.init-usage-bytes 0 Jvm compressed-class-space Init Usage Bytes
jvm.compressed-class-space.max-usage-bytes 1.00 GB Jvm compressed-class-space Max Usage Bytes
jvm.compressed-class-space.peak-committed-usage-bytes 4.50 MB Jvm compressed-class-space Peak Committed Usage Bytes
jvm.compressed-class-space.peak-current-usage-bytes 4.32 MB Jvm compressed-class-space Peak Current Usage Bytes
jvm.compressed-class-space.peak-init-usage-bytes 0 Jvm compressed-class-space Peak Init Usage Bytes
jvm.compressed-class-space.peak-max-usage-bytes 1.00 GB Jvm compressed-class-space Peak Max Usage Bytes
jvm.gc_count 442 Jvm Garbage Collection Count
jvm.gc_num_info_threshold_exceeded 0 Jvm Pause Detection Info Threshold Exceeded
jvm.gc_num_warn_threshold_exceeded 0 Jvm Pause Detection Warning Threshold Exceeded
jvm.gc_time_millis 3s405ms Jvm Garbage Collection Time
jvm.gc_total_extra_sleep_time_millis 3s093ms Jvm Pause Detection Extra Sleep Time
jvm.heap.committed-usage-bytes 378.50 MB Jvm heap Committed Usage Bytes
jvm.heap.current-usage-bytes 161.65 MB Jvm heap Current Usage Bytes
jvm.heap.init-usage-bytes 394.00 MB Jvm heap Init Usage Bytes
jvm.heap.max-usage-bytes 378.50 MB Jvm heap Max Usage Bytes
jvm.heap.peak-committed-usage-bytes 0 Jvm heap Peak Committed Usage Bytes
jvm.heap.peak-current-usage-bytes 0 Jvm heap Peak Current Usage Bytes
jvm.heap.peak-init-usage-bytes 0 Jvm heap Peak Init Usage Bytes
jvm.heap.peak-max-usage-bytes 0 Jvm heap Peak Max Usage Bytes
jvm.metaspace.committed-usage-bytes 46.25 MB Jvm metaspace Committed Usage Bytes
jvm.metaspace.current-usage-bytes 45.57 MB Jvm metaspace Current Usage Bytes
jvm.metaspace.init-usage-bytes 0 Jvm metaspace Init Usage Bytes
jvm.metaspace.max-usage-bytes -1.00 B Jvm metaspace Max Usage Bytes
jvm.metaspace.peak-committed-usage-bytes 46.25 MB Jvm metaspace Peak Committed Usage Bytes
jvm.metaspace.peak-current-usage-bytes 45.57 MB Jvm metaspace Peak Current Usage Bytes
jvm.metaspace.peak-init-usage-bytes 0 Jvm metaspace Peak Init Usage Bytes
jvm.metaspace.peak-max-usage-bytes -1.00 B Jvm metaspace Peak Max Usage Bytes
jvm.non-heap.committed-usage-bytes 81.06 MB Jvm non-heap Committed Usage Bytes
jvm.non-heap.current-usage-bytes 79.87 MB Jvm non-heap Current Usage Bytes
jvm.non-heap.init-usage-bytes 2.44 MB Jvm non-heap Init Usage Bytes
jvm.non-heap.max-usage-bytes -1.00 B Jvm non-heap Max Usage Bytes
jvm.non-heap.peak-committed-usage-bytes 0 Jvm non-heap Peak Committed Usage Bytes
jvm.non-heap.peak-current-usage-bytes 0 Jvm non-heap Peak Current Usage Bytes
jvm.non-heap.peak-init-usage-bytes 0 Jvm non-heap Peak Init Usage Bytes
jvm.non-heap.peak-max-usage-bytes 0 Jvm non-heap Peak Max Usage Bytes
jvm.ps-eden-space.committed-usage-bytes 102.00 MB Jvm ps-eden-space Committed Usage Bytes
jvm.ps-eden-space.current-usage-bytes 91.52 MB Jvm ps-eden-space Current Usage Bytes
jvm.ps-eden-space.init-usage-bytes 99.00 MB Jvm ps-eden-space Init Usage Bytes
jvm.ps-eden-space.max-usage-bytes 103.00 MB Jvm ps-eden-space Max Usage Bytes
jvm.ps-eden-space.peak-committed-usage-bytes 124.50 MB Jvm ps-eden-space Peak Committed Usage Bytes
jvm.ps-eden-space.peak-current-usage-bytes 124.50 MB Jvm ps-eden-space Peak Current Usage Bytes
jvm.ps-eden-space.peak-init-usage-bytes 99.00 MB Jvm ps-eden-space Peak Init Usage Bytes
jvm.ps-eden-space.peak-max-usage-bytes 124.50 MB Jvm ps-eden-space Peak Max Usage Bytes
jvm.ps-old-gen.committed-usage-bytes 263.00 MB Jvm ps-old-gen Committed Usage Bytes
jvm.ps-old-gen.current-usage-bytes 58.87 MB Jvm ps-old-gen Current Usage Bytes
jvm.ps-old-gen.init-usage-bytes 263.00 MB Jvm ps-old-gen Init Usage Bytes
jvm.ps-old-gen.max-usage-bytes 263.00 MB Jvm ps-old-gen Max Usage Bytes
jvm.ps-old-gen.peak-committed-usage-bytes 263.00 MB Jvm ps-old-gen Peak Committed Usage Bytes
jvm.ps-old-gen.peak-current-usage-bytes 58.87 MB Jvm ps-old-gen Peak Current Usage Bytes
jvm.ps-old-gen.peak-init-usage-bytes 263.00 MB Jvm ps-old-gen Peak Init Usage Bytes
jvm.ps-old-gen.peak-max-usage-bytes 263.00 MB Jvm ps-old-gen Peak Max Usage Bytes
jvm.ps-survivor-space.committed-usage-bytes 13.50 MB Jvm ps-survivor-space Committed Usage Bytes
jvm.ps-survivor-space.current-usage-bytes 11.25 MB Jvm ps-survivor-space Current Usage Bytes
jvm.ps-survivor-space.init-usage-bytes 16.00 MB Jvm ps-survivor-space Init Usage Bytes
jvm.ps-survivor-space.max-usage-bytes 13.50 MB Jvm ps-survivor-space Max Usage Bytes
jvm.ps-survivor-space.peak-committed-usage-bytes 20.00 MB Jvm ps-survivor-space Peak Committed Usage Bytes
jvm.ps-survivor-space.peak-current-usage-bytes 15.06 MB Jvm ps-survivor-space Peak Current Usage Bytes
jvm.ps-survivor-space.peak-init-usage-bytes 16.00 MB Jvm ps-survivor-space Peak Init Usage Bytes
jvm.ps-survivor-space.peak-max-usage-bytes 20.00 MB Jvm ps-survivor-space Peak Max Usage Bytes
jvm.total.committed-usage-bytes 459.56 MB Jvm total Committed Usage Bytes
jvm.total.current-usage-bytes 241.52 MB Jvm total Current Usage Bytes
jvm.total.init-usage-bytes 380.44 MB Jvm total Init Usage Bytes
jvm.total.max-usage-bytes 1.60 GB Jvm total Max Usage Bytes
jvm.total.peak-committed-usage-bytes 488.56 MB Jvm total Peak Committed Usage Bytes
jvm.total.peak-current-usage-bytes 278.31 MB Jvm total Peak Current Usage Bytes
jvm.total.peak-init-usage-bytes 380.44 MB Jvm total Peak Init Usage Bytes
jvm.total.peak-max-usage-bytes 1.63 GB Jvm total Peak Max Usage Bytes
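
Note that -1.00 B in these rows (metaspace and non-heap max) is a sentinel for "no limit configured", so guard against it before computing utilization. A worked example with the heap rows above:

    current_mb, max_mb = 161.65, 378.50   # jvm.heap.{current,max}-usage-bytes
    if max_mb > 0:                        # -1 would mean "unbounded"
        print(f"heap {current_mb / max_mb:.1%} full")   # ~42.7%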

memory

Name Value Description
memory.mapped-bytes 9.35 GB Total bytes of memory mappings in this process (the virtual memory size).
memory.rss 588.24 MB Resident set size (RSS) of this process, including TCMalloc, buffer pool and Jvm.
memory.thp.defrag [always] madvise never The system-wide 'defrag' setting for Transparent Huge Pages.
memory.thp.enabled [always] madvise never The system-wide 'enabled' setting for Transparent Huge Pages.
memory.thp.khugepaged-defrag 1 The system-wide 'defrag' setting for khugepaged.
memory.total-used 253.89 MB Total memory currently used by TCMalloc and buffer pool.
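
memory.total-used counts only TCMalloc and the buffer pool, while memory.rss covers the whole process, so the difference is roughly the embedded JVM plus other untracked native memory:

    rss_mb, tracked_mb = 588.24, 253.89   # memory.rss, memory.total-used
    print(f"untracked (JVM and other native): ~{rss_mb - tracked_mb:.0f} MB")  # ~334 MB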

rpc

Name Value Description
mem-tracker.ControlService.current_usage_bytes 0 Memtracker ControlService Current Usage Bytes
mem-tracker.ControlService.peak_usage_bytes 146.96 KB Memtracker ControlService Peak Usage Bytes
mem-tracker.DataStreamService.current_usage_bytes 0 Memtracker DataStreamService Current Usage Bytes
mem-tracker.DataStreamService.peak_usage_bytes 48.91 KB Memtracker DataStreamService Peak Usage Bytes
rpc.impala.ControlService.rpcs_queue_overflow 0 Service impala.ControlService: Total number of incoming RPCs that were rejected due to overflow of the service queue.
rpc.impala.DataStreamService.rpcs_queue_overflow 0 Service impala.DataStreamService: Total number of incoming RPCs that were rejected due to overflow of the service queue.
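
Any nonzero rpcs_queue_overflow means incoming RPCs were rejected because the service queue was full, which typically surfaces as query failures. A minimal check over the flattened dict, covering the two KRPC services shown above:

    OVERFLOW_KEYS = (
        "rpc.impala.ControlService.rpcs_queue_overflow",
        "rpc.impala.DataStreamService.rpcs_queue_overflow",
    )

    def overflowed_services(flat):
        # Returns only the services that have rejected RPCs.
        return {k: flat[k] for k in OVERFLOW_KEYS if flat[k] > 0}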

scheduler

Name Value Description
simple-scheduler.assignments.total 2.59M The total number of scan-range assignments made by the scheduler.
simple-scheduler.initialized true Indicates whether the scheduler has been initialized.
simple-scheduler.local-assignments.total 2.59M The number of scan-range assignments that operate on node-local data.
simple-scheduler.num-backends 3 The number of backend connections from this Impala Daemon to other Impala Daemons.
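
Dividing the two assignment counters gives the fraction of scan-range assignments that landed on a node holding the data; in this dump local equals total (both 2.59M), i.e. effectively 100% locality:

    def scan_locality(flat):
        total = flat["simple-scheduler.assignments.total"]
        local = flat["simple-scheduler.local-assignments.total"]
        return local / total if total else None   # ~1.0 for this dump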

statestore-subscriber

Name Value Description
rpc-method.statestore-subscriber.StatestoreSubscriber.Heartbeat.call_duration Count: 618935, min / max: 0 / 1ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
rpc-method.statestore-subscriber.StatestoreSubscriber.UpdateState.call_duration Count: 6487102, min / max: 0 / 2s276ms, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 1ms
statestore-subscriber.connected true Whether the Impala Daemon considers itself connected to the StateStore.
statestore-subscriber.heartbeat-interval-time Last (of 618935): 1.00104. Min: 0, max: 1.03504, avg: 1.00016 The time (sec) between Statestore heartbeats.
statestore-subscriber.last-recovery-duration 0 The amount of time the StateStore subscriber took to recover the connection the last time it was lost.
statestore-subscriber.last-recovery-time N/A The local time that the last statestore recovery happened.
statestore-subscriber.topic-catalog-update.processing-time-s Last (of 309477): 0. Min: 0, max: 2.27509, avg: 7.66226e-05 Statestore Subscriber Topic catalog-update Processing Time
statestore-subscriber.topic-catalog-update.update-interval Last (of 309477): 2.00008. Min: 0, max: 2.02308, avg: 2.00018 Interval between topic updates for Topic catalog-update
statestore-subscriber.topic-impala-membership.processing-time-s Last (of 6177625): 0. Min: 0, max: 0.0620025, avg: 7.61795e-06 Statestore Subscriber Topic impala-membership Processing Time
statestore-subscriber.topic-impala-membership.update-interval Last (of 6177625): 0.100004. Min: 0, max: 2.4541, avg: 0.100185 Interval between topic updates for Topic impala-membership
statestore-subscriber.topic-impala-request-queue.processing-time-s Last (of 6177625): 0. Min: 0, max: 0.0620025, avg: 1.96124e-05 Statestore Subscriber Topic impala-request-queue Processing Time
statestore-subscriber.topic-impala-request-queue.update-interval Last (of 6177625): 0.100004. Min: 0, max: 2.4531, avg: 0.100186 Interval between topic updates for Topic impala-request-queue
statestore-subscriber.topic-update-duration Last (of 6487102): 0. Min: 0, max: 2.27509, avg: 2.28963e-05 The time (sec) taken to process Statestore subscriber topic updates.
statestore-subscriber.topic-update-interval-time Last (of 12664727): 0.100004. Min: 0, max: 2.4541, avg: 0.146614 The time (sec) between Statestore subscriber topic updates.
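
A daemon that loses its statestore session re-registers and reports the outage through these metrics. A minimal liveness check; the "N/A" sentinel for last-recovery-time is taken from the row above and may differ on other builds:

    def statestore_ok(flat):
        # True only if currently connected and never forced to recover.
        return (flat["statestore-subscriber.connected"]
                and flat["statestore-subscriber.last-recovery-time"] == "N/A")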

tcmalloc

Name Value Description
tcmalloc.bytes-in-use 144.64 MB Number of bytes used by the application. This will not typically match the memory use reported by the OS, because it does not include TCMalloc overhead or memory fragmentation.
tcmalloc.pageheap-free-bytes 0 Number of bytes in free, mapped pages in page heap. These bytes can be used to fulfill allocation requests. They always count towards virtual memory usage, and unless the underlying memory is swapped out by the OS, they also count towards physical memory usage.
tcmalloc.pageheap-unmapped-bytes 5.25 GB Number of bytes in free, unmapped pages in page heap. These are bytes that have been released back to the OS, possibly by one of the MallocExtension "Release" calls. They can be used to fulfill allocation requests, but typically incur a page fault. They always count towards virtual memory usage, and depending on the OS, typically do not count towards physical memory usage.
tcmalloc.physical-bytes-reserved 253.89 MB Derived metric computing the amount of physical memory (in bytes) used by the process, including that actually in use and free bytes reserved by tcmalloc. Does not include the tcmalloc metadata.
tcmalloc.total-bytes-reserved 5.49 GB Bytes of system memory reserved by TCMalloc.
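
These rows should obey the identity physical-bytes-reserved ≈ total-bytes-reserved − pageheap-unmapped-bytes (physical memory is what TCMalloc reserved minus what it has already released back to the OS). With the rounded display values above, the two sides agree to within a few MB:

    total_gb, unmapped_gb = 5.49, 5.25   # total-bytes-reserved, pageheap-unmapped-bytes
    print(f"~{(total_gb - unmapped_gb) * 1024:.0f} MB physical")
    # ~246 MB vs the reported 253.89 MB; the gap is display rounding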