If you are a customer of the new InterSystems IRIS® Cloud SQL and InterSystems IRIS® Cloud IntegratedML® cloud offerings and want to pull the metrics of your deployments into your own observability platform, here is a quick and dirty way to get it done by sending the metrics to Google Cloud Monitoring (formerly Stackdriver).
The Cloud portal does present some top-level, at-a-glance metrics, powered by a metrics endpoint that is exposed to you, but without some inspection you would not know it was there.
🚩 This approach is most likely taking advantage of a "to be named feature", so it is not future-proof and definitely not supported by InterSystems.
So what if you wanted a more comprehensive set exported? This technical article/example shows a technique to scrape and forward metrics to observability. It can be modified to suit your needs, to scrape ANY metrics target and send it to ANY observability platform using the OpenTelemetry Collector.
The mechanics behind the result above can be accomplished in many ways, but here we stand up a Kubernetes pod running a Python script in one container and the OpenTelemetry Collector in another to pull and push the metrics... definitely a choose-your-own-adventure, but for this example and article, Kubernetes is the actor pulling this off with Python.
Steps:
- Prereqs
- Python
- Container
- Kubernetes
- Google Cloud Monitoring
Prerequisites:
- An active subscription to IRIS® Cloud SQL
- One Deployment, running, optionally with IntegratedML
- Secrets to supply to your environment
Environment Variables (these feed the Kubernetes secret created below):
- IRIS_CLOUDSQL_USER
- IRIS_CLOUDSQL_PASS
- IRIS_CLOUDSQL_CLIENTID
- IRIS_CLOUDSQL_API
- IRIS_CLOUDSQL_DEPLOYMENTID
- IRIS_CLOUDSQL_USERPOOLID
Python:
Here is the python hackery to pull the metrics from the Cloud Portal and export them locally as metrics for the otel collector to scrape:
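What follows is a minimal sketch of that exporter rather than a hardened implementation. It assumes the portal signs you in through the Cognito user pool/client referenced by the secrets (using USER_PASSWORD_AUTH), that the resulting IdToken is accepted as a Bearer token, and that the metrics live at a hypothetical `{API}/deployments/{DEPLOYMENTID}/metrics` path; verify those details against your own portal traffic before relying on it.

```python
#!/usr/bin/env python3
"""Sketch: pull metrics from the IRIS Cloud portal and re-serve them locally
so the otel collector's prometheus receiver can scrape them.

Assumptions to verify for your environment:
  * the Cognito app client permits USER_PASSWORD_AUTH
  * the portal API accepts the Cognito IdToken as a Bearer token
  * the metrics path ({API}/deployments/{DEPLOYMENTID}/metrics) is hypothetical
"""
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

import boto3     # pip install boto3
import requests  # pip install requests

# Assumes the deployment maps the iris-cloudsql secret keys to these variables.
USER         = os.environ["IRIS_CLOUDSQL_USER"]
PASS         = os.environ["IRIS_CLOUDSQL_PASS"]
CLIENTID     = os.environ["IRIS_CLOUDSQL_CLIENTID"]
API          = os.environ["IRIS_CLOUDSQL_API"]
DEPLOYMENTID = os.environ["IRIS_CLOUDSQL_DEPLOYMENTID"]
USERPOOLID   = os.environ["IRIS_CLOUDSQL_USERPOOLID"]

# Cognito user pool ids are prefixed with their region, e.g. "us-east-2_XXXX".
REGION = USERPOOLID.split("_")[0]


def get_token() -> str:
    """Authenticate against the Cognito user pool and return an IdToken."""
    cognito = boto3.client("cognito-idp", region_name=REGION)
    resp = cognito.initiate_auth(
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": USER, "PASSWORD": PASS},
        ClientId=CLIENTID,
    )
    return resp["AuthenticationResult"]["IdToken"]


def fetch_metrics() -> bytes:
    """Pull the raw Prometheus-format metrics for this deployment."""
    url = f"{API}/deployments/{DEPLOYMENTID}/metrics"  # hypothetical path
    resp = requests.get(
        url, headers={"Authorization": f"Bearer {get_token()}"}, timeout=30
    )
    resp.raise_for_status()
    return resp.content


class MetricsHandler(BaseHTTPRequestHandler):
    """Re-expose the portal metrics on / for the otel prometheus receiver."""

    def do_GET(self):
        try:
            body = fetch_metrics()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        except Exception as exc:  # surface failures to the scraper
            self.send_error(500, str(exc))


if __name__ == "__main__":
    # Port 8000 matches the targetPort on the Service defined later.
    HTTPServer(("0.0.0.0", 8000), MetricsHandler).serve_forever()
```

The otel collector (configured below) then scrapes this process, which answers on port 8000 inside the pod.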
Docker:
Deployment:
k8s; Create us a namespace:
kubectl create ns iris
k8s; Add the secret:
kubectl create secret generic iris-cloudsql -n iris \
  --from-literal=user=$IRIS_CLOUDSQL_USER \
  --from-literal=pass=$IRIS_CLOUDSQL_PASS \
  --from-literal=clientid=$IRIS_CLOUDSQL_CLIENTID \
  --from-literal=api=$IRIS_CLOUDSQL_API \
  --from-literal=deploymentid=$IRIS_CLOUDSQL_DEPLOYMENTID \
  --from-literal=userpoolid=$IRIS_CLOUDSQL_USERPOOLID
otel; create the config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-config
  namespace: iris
data:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'IRIS CloudSQL'
              # Override the global default and scrape targets from this job every 30 seconds.
              scrape_interval: 30s
              scrape_timeout: 30s
              static_configs:
                - targets: ['192.168.1.96:5000']
              metrics_path: /
    exporters:
      googlemanagedprometheus:
        project: "pidtoo-fhir"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
k8s; Load the otel config as a configmap:
kubectl -n iris create configmap otel-config --from-file config.yaml
k8s; deploy a load balancer (definitely optional); I use MetalLB so I can scrape and inspect the exporter from outside of the cluster.
cat <<EOF | kubectl -n iris apply -f -
apiVersion: v1
kind: Service
metadata:
  name: iris-cloudsql-exporter-service
spec:
  selector:
    app: iris-cloudsql-exporter
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 8000
EOF
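Once MetalLB hands the Service an external address, a quick sanity check from outside the cluster looks like the sketch below; it assumes the 192.168.1.96:5000 address used as the scrape target in the otel config above.

```python
import requests

# External LoadBalancer address handed out by MetalLB; port 5000 maps to the
# exporter's targetPort 8000 inside the pod.
EXPORTER = "http://192.168.1.96:5000/"

resp = requests.get(EXPORTER, timeout=10)
resp.raise_for_status()

# Print the first few Prometheus exposition lines to confirm metrics are flowing.
for line in resp.text.splitlines()[:10]:
    print(line)
```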
gcp; you need keys to Google Cloud, and the service account needs to be scoped with:
roles/monitoring.metricWriter
kubectl -n iris create secret generic gmp-test-sa --from-file=key.json=key.json
k8s; the deployment/pod itself, two containers:
kubectl -n iris apply -f deployment.yaml
Running
Assuming nothing is amiss, let's peruse the namespace and see how we are doing.
✔ 2 config maps, one for GCP, one for otel
✔ 1 load balancer
✔ 1 pod, 2 containers, successful scrapes
Google Cloud Monitoring
Inspect Google Cloud Monitoring to confirm the metrics are arriving ok, and be awesome in observability!
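If you prefer to verify from code rather than the console, here is a small sketch using the google-cloud-monitoring client; it assumes Managed Service for Prometheus lands the scraped series under the prometheus.googleapis.com/ metric domain, and it reuses the pidtoo-fhir project from the otel config (swap in your own project).

```python
import time

from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

PROJECT = "pidtoo-fhir"  # same project as the googlemanagedprometheus exporter

client = monitoring_v3.MetricServiceClient()
now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now - 3600)},  # look back one hour
    }
)

# List the Prometheus-ingested metric types written in the last hour.
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT}",
        "filter": 'metric.type = starts_with("prometheus.googleapis.com/")',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.HEADERS,
    }
)
for series in results:
    print(series.metric.type)
```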