We’re currently working on redesigning Insights and the metrics that we’re going to provide. Our plan is to first keep extending the Prometheus endpoint with valuable metrics, and at some point offer connectors to the most popular metrics tools.

We’d love to hear your feedback on this. Please reach out to support to start a conversation. Thanks!

Prometheus endpoint

GrapheneDB provides a Prometheus endpoint, which you can find on the Insights tab. This endpoint can be used as a target for scraping database instance metrics.


The Prometheus configuration file should look like this:

  global:
    scrape_interval:     15s  # By default, scrape targets every 15 seconds.
    evaluation_interval: 15s

  scrape_configs:
    - job_name: 'graphenedb-prometheus'
      scheme: 'https'
      static_configs:
        - targets: ['db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780']

Please note that we have removed the /metrics part from the given URL: Prometheus expects metrics to be exposed on each target at the default path of /metrics, so only the host and port go into the targets list.
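If you prefer to make the path explicit, the scrape job also accepts a metrics_path key. The sketch below is only for illustration; /metrics is already the Prometheus default, so the key can normally be omitted:

  scrape_configs:
    - job_name: 'graphenedb-prometheus'
      scheme: 'https'
      metrics_path: '/metrics'  # Prometheus default, shown only for clarity
      static_configs:
        - targets: ['db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780']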

After Prometheus is started, you should be able to check the state of the targets at http://localhost:9090/targets.


Cluster Prometheus configuration

For Cluster plans, each node exposes its own Prometheus endpoint. You can configure Prometheus to scrape metrics from every node:

  global:
    scrape_interval:     15s  # By default, scrape targets every 15 seconds.
    evaluation_interval: 15s

  scrape_configs:
    - job_name: 'graphenedb-prometheus'
      scheme: 'https'
      static_configs:
        - targets: ['db-1-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780', 'db-2-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780', 'db-3-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']

On the Prometheus targets state page, you should see all three cluster nodes listed as up.
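Once scraping has started, the built-in up metric offers a quick health check: Prometheus sets it to 1 for every target that was scraped successfully and 0 otherwise. For example, entering this query in the expression browser at http://localhost:9090/graph should return one series per cluster node:

  up{job="graphenedb-prometheus"}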


Metrics reference

Metric name                             Description
jvm_gc_collection_seconds_count         Time spent in a given JVM garbage collector in seconds
jvm_memory_pool_bytes_used              Used bytes of a given JVM memory pool
jvm_memory_bytes_used                   Used bytes of a given JVM memory area
jvm_memory_bytes_max                    Max bytes of a given JVM memory area
jvm_memory_bytes_init                   Initial bytes of a given JVM memory area
jvm_classes_loaded_total                Total number of classes loaded since the JVM started execution
jvm_threads_current                     Current thread count of the JVM
process_open_fds                        Number of open file descriptors
jvm_classes_loaded                      Number of classes currently loaded in the JVM
station_cpu_usage_seconds_total         Total amount of CPU time
station_cpu_load_average_delta          CPU load average
station_memory_failures_total           Cumulative count of memory allocation failures
station_memory_usage_bytes              Current memory usage in bytes
station_cache_bytes                     Amount of bytes of cache
station_network_receive_bytes_total     Cumulative count of bytes received
station_scrape_error                    1 if there was an error while getting metrics, 0 otherwise
station_uptime_seconds                  Station uptime in seconds
station_last_seen                       Last time the station was seen by the exporter
station_network_transmit_bytes_total    Cumulative count of bytes transmitted
station_spec_cpu_shares                 CPU shares of the station
station_spec_memory_limit_bytes         Memory limit for the station
station_spec_memory_soft_limit_bytes    Memory soft limit for the station
station_spec_cpu_period_us              CPU period of the station, in microseconds
station_spec_cpu_quota_us               CPU quota of the station, in microseconds
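As a starting point for dashboards or alerts, here are two illustrative PromQL expressions built only from the metric names in the table above (the 5m window is just an example value):

  # Memory usage as a fraction of the station memory limit
  station_memory_usage_bytes / station_spec_memory_limit_bytes

  # Per-second network receive rate, averaged over the last 5 minutes
  rate(station_network_receive_bytes_total[5m])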

On Cluster databases, ONgDB and Neo4j Enterprise metrics will also be available. Please check the Neo4j Operations Manual for a list of the available metrics.

Grafana dashboard examples

Grafana is a popular and powerful tool for visualizing metrics. Please visit the Grafana documentation for more information.

The following links contain Grafana dashboards exported by our team, which you can use as a starting point for metrics visualization:

  • Single database metrics
  • Cluster database nodes metrics

Halin tool

Halin is a cluster-enabled monitoring tool for Neo4j that provides insights into live metrics, queries, configuration, and more.

You just need to visit https://halin.graphenedb.com and enter the parameters shown on the Insights tab.

You can read more about the Halin project here.