We’re currently redesigning Insights and the metrics we provide. Our plan is to first keep extending the Prometheus endpoint with valuable metrics, and at some point offer connectors to the most popular metrics tools.
We’d love to hear your feedback on this. Please reach out to support to start a conversation. Thanks!
GrapheneDB provides a Prometheus endpoint, which you can find on the Insights tab. This endpoint can be used as a target for scraping database instance metrics.
The Prometheus configuration file should look like this:
```yaml
global:
  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'graphenedb-prometheus'
    scheme: 'https'
    static_configs:
      - targets: ['db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780']
```
Please note that we have removed the /metrics part from the given URL: Prometheus expects metrics to be available on targets at the path /metrics.
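To illustrate, Prometheus assembles the final scrape URL from the scheme, the target, and its default metrics_path. A minimal sketch (the hostname is the example target from the config above):

```python
# Sketch: how Prometheus builds the scrape URL for a target.
# metrics_path defaults to /metrics, which is why it is omitted
# from the target entry in the configuration above.
scheme = "https"
target = "db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780"
metrics_path = "/metrics"

scrape_url = f"{scheme}://{target}{metrics_path}"
print(scrape_url)
# → https://db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780/metrics
```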
After Prometheus is started, you should be able to check the state of the targets at http://localhost:9090/targets.
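The targets page is backed by Prometheus’s HTTP API, so target health can also be inspected programmatically via /api/v1/targets. A hedged sketch that parses a trimmed example response (the JSON below is illustrative, not captured from a real instance):

```python
import json

# Illustrative excerpt of a /api/v1/targets response (not real data).
response_body = """
{
  "status": "success",
  "data": {
    "activeTargets": [
      {
        "labels": {"job": "graphenedb-prometheus"},
        "scrapeUrl": "https://db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780/metrics",
        "health": "up"
      }
    ]
  }
}
"""

# Extract each active target's scrape URL and health state.
targets = json.loads(response_body)["data"]["activeTargets"]
for t in targets:
    print(t["scrapeUrl"], t["health"])
```

In a live setup you would fetch this JSON from http://localhost:9090/api/v1/targets instead of embedding it.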
For Cluster plans, each node exposes a Prometheus endpoint. You can configure Prometheus to scrape metrics from each node:
```yaml
global:
  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'graphenedb-prometheus'
    scheme: 'https'
    static_configs:
      - targets: ['db-1-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780',
                  'db-2-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780',
                  'db-3-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']
```
On the Prometheus targets page, you should see all the configured targets reported as up. Among others, the following metrics are exposed:
- Time spent in a given JVM garbage collector, in seconds
- Used bytes of a given JVM memory pool
- Used bytes of a given JVM memory area
- Maximum bytes of a given JVM memory area
- Initial bytes of a given JVM memory area
- Total number of classes loaded since the JVM started execution
- Current thread count of the JVM
- Number of open file descriptors
- Number of classes currently loaded in the JVM
- Total amount of CPU time
- CPU load average
- Cumulative count of memory allocation failures
- Current memory usage in bytes
- Amount of bytes of cache memory
- Cumulative count of bytes received
- 1 if there was an error while getting metrics, 0 otherwise
- Last time the station was seen by the exporter
- Cumulative count of bytes transmitted
- CPU share of the station
- Memory limit for the station
- Memory soft limit for the station
- CPU period of the station
- CPU quota of the station
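Under the hood, each scrape returns these metrics in Prometheus’s plain-text exposition format. A minimal sketch of reading a few sample lines (the metric values below are invented for illustration, and labelled samples are ignored for simplicity):

```python
def parse_exposition(text):
    """Parse simple Prometheus text-format lines into {metric: value}.

    Handles only un-labelled samples and skips # HELP / # TYPE
    comments -- enough for a quick look, not a full parser.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Sample scrape output (values invented for illustration).
sample = """
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 42.0
process_open_fds 128.0
"""

print(parse_exposition(sample))
```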
On Cluster databases, ONgDB and Neo4j Enterprise metrics are also available. Please check the Neo4j Operations Manual for the list of available metrics.
Grafana is a popular and powerful tool for visualizing metrics. Please visit the Grafana documentation for more information.
Please find in the following links some Grafana dashboards exported by our team as a starting point for metrics visualization:
- Single database metrics
- Cluster database nodes metrics
Halin is a cluster-enabled monitoring tool for Neo4j that provides insights into live metrics, queries, configuration, and more.
You just need to visit https://halin.graphenedb.com and enter the parameters shown on the Insights tab.
Please read more about the Halin project here.