How to monitor a custom application with mflib Prometheus

  • #6739
    Acheme Acheme
    Participant

      Hello,

      My interest is to generate metrics from my Python code and have the Prometheus server installed with mflib scrape the metrics from the application target.

      1. Can someone please show me the quick steps to achieve this?
      2. Or, specifically, how do I edit the Prometheus config file in the Docker container that mflib installs on the measurement node?

      Thank you very much.

      Regards,

      Acheme

      #6799
      Charles Carpenter
      Participant
        There are a couple of methods to export metrics from your running Python code.
        For a general overview of writing Prometheus exporters, see https://prometheus.io/docs/instrumenting/writing_exporters/
        Option 1) Create your own exporter in Python. This runs a small HTTP server and allows Prometheus to query your code every x seconds (30 seconds is usually the default). This is best for a consistently running process. Python has a prometheus_client module (pip install prometheus-client) that handles most of the work for you. You will need to add a function that will be called whenever a Prometheus instance makes the request. See https://prometheus.github.io/client_python/getting-started/three-step-demo/, https://pypi.org/project/prometheus-client/, and https://github.com/prometheus/client_python
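        A minimal sketch of that approach, assuming the prometheus_client module is installed; for simplicity it updates a Gauge in a loop rather than registering a per-request callback, and the metric name, port 8000, and sleep interval are placeholders to adapt to your application:

        # minimal_exporter.py (hypothetical example)
        from prometheus_client import start_http_server, Gauge
        import random
        import time

        # A gauge metric; the name and help text are placeholders.
        APP_VALUE = Gauge('my_app_value', 'Example value produced by my application')

        def collect_app_metrics():
            # Replace this with whatever your application actually measures.
            APP_VALUE.set(random.random())

        if __name__ == '__main__':
            # Expose /metrics over HTTP on port 8000 (any free port works).
            start_http_server(8000)
            while True:
                collect_app_metrics()
                time.sleep(15)

        Run this on the node you want to monitor; its address and port become the scrape target below.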
        You can test the running exporter using curl or wget against the exporter's address with the /metrics path.
        Next, configure the Prometheus instance on the meas_node to scrape your newly created exporter. The config file is /opt/fabric_prometheus/prometheus/prometheus_config.yml. ssh to the meas_node, run sudo vim /opt/fabric_prometheus/prometheus/prometheus_config.yml, and add the new scrape section at the end of the file.
        Something like

         

        # My Exporter
        - job_name: 'my_exporter_name'
          static_configs:
          - targets: ['my-exporter-address:port']
        The scrape will default to the /metrics path.

        Save the file and restart the Prometheus container:

        docker restart fabric_prometheus_prometheus
        Use the Explore tab in Grafana with the PromQL query {job="my_exporter_name"} to see the metrics.

         

        Option 2) Use the node_exporter's textfile collector. This is best for sporadic metrics, perhaps a cron job that runs hourly. The collector reads text files found in the /var/lib/node_exporter directory. The files need to be in the format described at https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format. There is a Python module for writing out the text files; see https://prometheus.github.io/client_python/exporting/textfile/ and https://github.com/prometheus/node_exporter#textfile-collector
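        A minimal sketch of the textfile approach using the same prometheus_client module; the metric name and the .prom file name are assumptions, but the directory is the one the collector watches:

        # write_metrics.py (hypothetical example for the textfile collector)
        from prometheus_client import CollectorRegistry, Gauge, write_to_textfile

        registry = CollectorRegistry()
        # Record when the job last finished; name and help text are placeholders.
        last_success = Gauge('my_batch_job_last_success_unixtime',
                             'Last time my batch job finished successfully',
                             registry=registry)
        last_success.set_to_current_time()

        # node_exporter's textfile collector picks up .prom files in this directory.
        write_to_textfile('/var/lib/node_exporter/my_batch_job.prom', registry)

        Run it from your cron job (or wherever your code finishes its work) and node_exporter will expose the metric on its next scrape.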

        That should get you started. Let me know if you have more questions.
        -Charles

         

        #6807
        Acheme Acheme
        Participant

          Thank you very much Charles.

          I found out the config file path is actually /opt/fabric_prometheus/prometheus/config/prometheus_config.yml

          Regards,

          Acheme
