Insights
We are currently redesigning Insights and the metrics we provide. Our plan is to first keep extending the Prometheus endpoint with valuable metrics, and at some point offer connectors to the most popular metrics tools.
We'd love to hear your feedback on this. Please reach out to support to start a conversation. Thanks!
Prometheus endpoint
GrapheneDB provides a Prometheus endpoint, which you can find under the Insights tab. This endpoint can be used as a target for scraping database instance metrics.
The Prometheus configuration file should look like this:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'graphenedb-prometheus'
    scheme: 'https'
    static_configs:
      - targets: ['db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780']
Please note that we have removed the /metrics part from the given URL: Prometheus expects metrics to be exposed on each target under the /metrics path, so only the host and port go into targets.
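To try the configuration locally, one option is to run the official Prometheus image with Docker Compose. The following is a minimal sketch, assuming the configuration above is saved as prometheus.yml next to the compose file:

```yaml
# docker-compose.yml — minimal local Prometheus setup (illustrative)
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"   # Prometheus UI and HTTP API
    volumes:
      # Mount the scrape configuration shown above into the container
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
```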
After Prometheus is started, you should be able to check the state of the targets at http://localhost:9090/targets
Cluster Prometheus configuration
For Cluster plans, each node exposes its own Prometheus endpoint. You can configure Prometheus to scrape metrics from each node:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'graphenedb-prometheus'
    scheme: 'https'
    static_configs:
      - targets: ['db-1-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780', 'db-2-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780', 'db-3-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']
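If you want to tell the nodes apart more easily in queries and dashboards, Prometheus also lets you attach labels to each group of static targets. Here is a sketch using the same endpoints as above; the node label name and values are illustrative choices, not something GrapheneDB requires:

```yaml
scrape_configs:
  - job_name: 'graphenedb-prometheus'
    scheme: 'https'
    static_configs:
      # One static_configs entry per node, each with its own label set
      - targets: ['db-1-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']
        labels:
          node: 'db-1'
      - targets: ['db-2-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']
        labels:
          node: 'db-2'
      - targets: ['db-3-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']
        labels:
          node: 'db-3'
```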
On the Prometheus targets page, you should see all three node endpoints listed as targets.
Metrics reference
Metric name | Description |
---|---|
jvm_gc_collection_seconds_count | Time spent in a given JVM garbage collector in seconds |
jvm_memory_pool_bytes_used | Used bytes of a given JVM memory pool |
jvm_memory_bytes_used | Used bytes of a given JVM memory area |
jvm_memory_bytes_max | Max (bytes) of a given JVM memory area |
jvm_memory_bytes_init | Initial bytes of a given JVM memory area |
jvm_classes_loaded_total | The total number of classes that have been loaded since the JVM has started execution |
jvm_threads_current | Current thread count of a JVM |
process_open_fds | Number of open file descriptors |
jvm_classes_loaded | The number of classes that are currently loaded in the JVM |
station_cpu_usage_seconds_total | Total CPU time consumed, in seconds |
station_cpu_load_average_delta | CPU load average |
station_memory_failures_total | Cumulative count of memory allocation failures |
station_memory_usage_bytes | Current memory usage in bytes |
station_cache_bytes | Amount of cache memory, in bytes |
station_network_receive_bytes_total | Cumulative count of bytes received |
station_scrape_error | 1 if there was an error while getting metrics, 0 otherwise |
station_uptime_seconds | Station uptime, in seconds |
station_last_seen | Last time the station was seen by the exporter |
station_network_transmit_bytes_total | Cumulative count of bytes transmitted |
station_spec_cpu_shares | CPU share of the station |
station_spec_memory_limit_bytes | Memory limit for the station |
station_spec_memory_soft_limit_bytes | Memory soft limit for the station |
station_spec_cpu_period_us | CPU period of the station |
station_spec_cpu_quota_us | CPU quota of the station |
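These metrics can also drive alerting. Below is a sketch of a Prometheus alerting rules file that combines some of the metrics above; the group name, alert names, durations, and the 90% threshold are illustrative choices, and the file would be referenced from rule_files in prometheus.yml:

```yaml
# alerts.yml — illustrative alerting rules (names and thresholds are examples)
groups:
  - name: graphenedb-examples
    rules:
      - alert: HighMemoryUsage
        # Fire when an instance uses more than 90% of its memory limit
        expr: station_memory_usage_bytes / station_spec_memory_limit_bytes > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} is above 90% of its memory limit"
      - alert: MetricsScrapeError
        # station_scrape_error is 1 when metrics collection failed
        expr: station_scrape_error == 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Metrics collection is failing on {{ $labels.instance }}"
```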
On Cluster databases, ONgDB and Neo4j Enterprise metrics will also be available. Please check the Neo4j Operations Manual for a list of the available metrics.
Grafana dashboard examples
Grafana is a popular and powerful tool for visualizing metrics. Please visit the Grafana documentation for more information.
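To use the dashboards below, Grafana first needs the Prometheus server added as a data source. You can do this in the Grafana UI, or with a provisioning file; here is a minimal sketch, assuming Prometheus is reachable at http://localhost:9090 and the file is placed under /etc/grafana/provisioning/datasources/:

```yaml
# datasource.yml — minimal Grafana data source provisioning (illustrative)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy            # Grafana's backend proxies queries to Prometheus
    url: http://localhost:9090
    isDefault: true
```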
The following links contain Grafana dashboards exported by our team, which you can use as a starting point for visualizing metrics:
- Single database metrics
- Cluster database nodes metrics