Article · March 4, 2024 · 8m read

InterSystems IRIS® CloudSQL Metrics to Google Cloud Monitoring

If you are a customer of the new InterSystems IRIS® Cloud SQL and InterSystems IRIS® Cloud IntegratedML® cloud offerings and want to send the metrics of your deployments to your own observability platform, here is a quick and dirty way to get it done by forwarding the metrics to Google Cloud Monitoring (formerly Stackdriver).

The Cloud portal does display a set of top-level, at-a-glance metrics, powered by a metrics endpoint that is exposed to you, but without some inspection you would not know it was there.

🚩 This approach most likely takes advantage of a "to be named feature," so it is not future-proof and definitely not supported by InterSystems.


So what if you wanted a more comprehensive set exported? This technical article/example shows a technique to scrape and forward metrics using the OpenTelemetry Collector; it can be modified to suit your needs, to scrape ANY metrics target and send to ANY observability platform.

The mechanics leading up to the above result can be accomplished in many ways, but here we are standing up a Kubernetes pod that runs a Python script in one container and the OTel Collector in another to pull and push the metrics... definitely a choose-your-own-adventure, but for this example and article, Kubernetes is the actor pulling it off with Python.

Steps:

  • Prereqs
  • Python
  • Container
  • Kubernetes
  • Google Cloud Monitoring

Prerequisites:

  • An active subscription to IRIS® Cloud SQL
  • One deployment, running, optionally with IntegratedML
  • Secrets to supply to your environment

Environment Variables

 
 Obtain Secrets
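The collapsed sections are not reproduced here, but the secret created further down expects six environment variables. A sketch, where each comment about a value's origin is an assumption inferred from its name:

export IRIS_CLOUDSQL_USER='...'          # Cloud portal login
export IRIS_CLOUDSQL_PASS='...'          # Cloud portal password
export IRIS_CLOUDSQL_CLIENTID='...'      # assumed: app client id used by the portal's auth
export IRIS_CLOUDSQL_API='...'           # assumed: base URL of the portal API
export IRIS_CLOUDSQL_DEPLOYMENTID='...'  # the id of your deployment in the portal
export IRIS_CLOUDSQL_USERPOOLID='...'    # assumed: Cognito user pool id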

Python:

Here is the Python hackery to pull the metrics from the Cloud Portal and re-export them locally for the otel collector to scrape:

 
iris_cloudsql_exporter.py
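The original script is collapsed above and not reproduced here; what follows is a minimal sketch of the same idea. I'm assuming the portal authenticates through a Cognito user pool (suggested by the clientid/userpoolid secrets) and that the API exposes a Prometheus-format metrics endpoint per deployment; the exact path is a guess. It re-serves whatever the portal returns on port 8000 for the collector to scrape.

import os

import boto3                         # pip install boto3
import requests                      # pip install requests
from flask import Flask, Response    # pip install flask

app = Flask(__name__)

def get_token():
    # Assumed auth flow: authenticate against the portal's Cognito user pool
    # and return an ID token for the portal API. The region prefixes the pool id.
    region = os.environ["IRIS_CLOUDSQL_USERPOOLID"].split("_")[0]
    cognito = boto3.client("cognito-idp", region_name=region)
    resp = cognito.initiate_auth(
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={
            "USERNAME": os.environ["IRIS_CLOUDSQL_USER"],
            "PASSWORD": os.environ["IRIS_CLOUDSQL_PASS"],
        },
        ClientId=os.environ["IRIS_CLOUDSQL_CLIENTID"],
    )
    return resp["AuthenticationResult"]["IdToken"]

@app.route("/")
def metrics():
    # Pull the deployment's metrics and re-serve them verbatim;
    # the /deployments/<id>/metrics path is hypothetical.
    url = (f"{os.environ['IRIS_CLOUDSQL_API']}/deployments/"
           f"{os.environ['IRIS_CLOUDSQL_DEPLOYMENTID']}/metrics")
    r = requests.get(url, headers={"Authorization": f"Bearer {get_token()}"}, timeout=30)
    r.raise_for_status()
    return Response(r.text, mimetype="text/plain")

if __name__ == "__main__":
    # Listen on 8000 to match the service targetPort used later.
    app.run(host="0.0.0.0", port=8000)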

Docker:

 
Dockerfile
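The original Dockerfile is likewise collapsed; a minimal sketch that containerizes the script above (base image and unpinned dependencies are choices, not gospel):

FROM python:3.11-slim
WORKDIR /app
COPY iris_cloudsql_exporter.py .
RUN pip install --no-cache-dir flask requests boto3
EXPOSE 8000
CMD ["python", "iris_cloudsql_exporter.py"]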


Deployment:

k8s; Create us a namespace:

kubectl create ns iris

k8s; Add the secret:

kubectl create secret generic iris-cloudsql -n iris \
    --from-literal=user=$IRIS_CLOUDSQL_USER \
    --from-literal=pass=$IRIS_CLOUDSQL_PASS \
    --from-literal=clientid=$IRIS_CLOUDSQL_CLIENTID \
    --from-literal=api=$IRIS_CLOUDSQL_API \
    --from-literal=deploymentid=$IRIS_CLOUDSQL_DEPLOYMENTID \
    --from-literal=userpoolid=$IRIS_CLOUDSQL_USERPOOLID

otel; create the config:

apiVersion: v1
data:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
          - job_name: 'IRIS CloudSQL'
            # Override the global default and scrape this job's targets every 30 seconds.
            scrape_interval: 30s
            scrape_timeout: 30s
            static_configs:
            - targets: ['192.168.1.96:5000']
            metrics_path: /

    exporters:
      googlemanagedprometheus:
        project: "pidtoo-fhir"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
kind: ConfigMap
metadata:
  name: otel-config
  namespace: iris

k8s; load the otel config as a configmap (the command below yields the manifest shown above):

kubectl -n iris create configmap otel-config --from-file config.yaml

k8s; deploy a load balancer (definitely optional), MetalLB in my case. I do this to scrape and inspect from outside of the cluster.

cat <<EOF | kubectl apply -n iris -f -
apiVersion: v1
kind: Service
metadata:
  name: iris-cloudsql-exporter-service
spec:
  selector:
    app: iris-cloudsql-exporter
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 8000
EOF

gcp; we need keys to Google Cloud, and the service account needs to be scoped with:

  • roles/monitoring.metricWriter
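If you do not have a key yet, a sketch of minting one with gcloud (the service account name gmp-test-sa matches the secret below; the project is the one from the otel config):

gcloud iam service-accounts create gmp-test-sa --project=pidtoo-fhir
gcloud projects add-iam-policy-binding pidtoo-fhir \
    --member="serviceAccount:gmp-test-sa@pidtoo-fhir.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter"
gcloud iam service-accounts keys create key.json \
    --iam-account=gmp-test-sa@pidtoo-fhir.iam.gserviceaccount.com

Then load the key into the cluster: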
kubectl -n iris create secret generic gmp-test-sa --from-file=key.json=key.json

k8s; the deployment/pod itself, two containers:

 
deployment.yaml
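The original manifest is collapsed above; here is a sketch of its likely shape: the exporter from earlier in one container and the contrib build of the OpenTelemetry Collector (which ships the googlemanagedprometheus exporter) in the other. The exporter image name and mount paths are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-cloudsql-exporter
  namespace: iris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iris-cloudsql-exporter
  template:
    metadata:
      labels:
        app: iris-cloudsql-exporter
    spec:
      containers:
      - name: exporter
        image: your-registry/iris-cloudsql-exporter:latest   # image built from the Dockerfile above
        ports:
        - containerPort: 8000
        env:
        - name: IRIS_CLOUDSQL_USER
          valueFrom:
            secretKeyRef:
              name: iris-cloudsql
              key: user
        - name: IRIS_CLOUDSQL_PASS
          valueFrom:
            secretKeyRef:
              name: iris-cloudsql
              key: pass
        # ...repeat for clientid, api, deploymentid, and userpoolid
      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:latest
        args: ["--config=/etc/otel/config.yaml"]
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/key.json
        volumeMounts:
        - name: otel-config
          mountPath: /etc/otel
        - name: gcp-key
          mountPath: /etc/gcp
          readOnly: true
      volumes:
      - name: otel-config
        configMap:
          name: otel-config
      - name: gcp-key
        secret:
          secretName: gmp-test-sa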
kubectl -n iris apply -f deployment.yaml

Running

Assuming nothing is amiss, let's peruse the namespace and see how we are doing.

✔ 2 config maps, one for GCP, one for otel

 

✔ 1 load balancer

 

✔ 1 pod, 2 containers, successful scrapes

   

Google Cloud Monitoring

Inspect Google Cloud Monitoring to see that the metrics are arriving OK, and be awesome in observability!

 

Article · March 4, 2024 · 4m read

IKO - Lessons Learned (Part 2 - The IrisCluster)

We now get to make use of the IKO.

Below we define the environment we will be creating via a Custom Resource Definition (CRD). A CRD lets us define something outside the realm of what standard Kubernetes knows (that is, objects such as pods, services, persistent volumes (and claims), configmaps, secrets, and lots more). We are building a new kind of object: an IrisCluster.

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: simple
spec:
  licenseKeySecret:
    #; to activate ISC license key
    name: iris-key-secret
  configSource:
    #; contains CSP-merge.ini, which is merged into IKO's
    #; auto-generated configuration.
    name: iris-cpf
  imagePullSecrets:
    - name: intersystems-pull-secret

  topology:
    data:
      image: containers.intersystems.com/intersystems/irishealth:2023.3
      compatibilityVersion: "2023.3"
    webgateway:
      replicas: 1
      image: containers.intersystems.com/intersystems/webgateway:2023.3
      applicationPaths:
        #; All of the IRIS instance's system default applications.
        #; For Management Portal only, just use '/csp/sys'.
        #; To support other applications, please add them to this list.
        - /csp/sys
        - /csp/broker
        - /api
        - /isc
        - /oauth2
        - /ui
        - /csp/healthshare
      alternativeServers: LoadBalancing
      loginSecret:
        name: iris-webgateway-secret

  serviceTemplate:
    #; to enable external IP addresses
    spec:
      type: LoadBalancer

The IrisCluster object oversees and facilitates the deployment of all the components of our IRIS environment. In this specific environment we will have:

  • 1 IRIS For Health Instance (in the form of a data node)
  • 1 Web Gateway (in the form of a web gateway node)

The iris-key-secret is an object of kind Secret. Here we will store our license key. To create it:

kubectl create secret generic iris-key-secret --from-file=iris.key

Note that you'll get an error if your file is not named iris.key. If you insist on naming it something else you can do this:

kubectl create secret generic iris-key-secret --from-file=iris.key=yourKeyFile.key

The iris-cpf is a configuration file. We will create it as an object of configmap kind.

kubectl create cm iris-cpf --from-file common.cpf

In the common.cpf file there is just the password hash. You can generate it using the passwordhash image as follows:

$ docker run --rm -it containers.intersystems.com/intersystems/passwordhash:1.1 -algorithm SHA512 -workfactor 10000
Enter password:
Enter password again:
PasswordHash=2b679c8c944e2cbc2c5e4b12c62b76d5dee07f28099083940b816197ca0ffbd807c36cef7d16e17bdfe4f7a2cd45a09f6e50bef1bac8f5978362eef7d2997f3a,eac33175d6268d7bb89edb48600a3fd59d9ccd4777959bbbcc31cdb726f9b956e31fedd44c016a48d0098ffc605ac6a17b5767bfdebefe01b078ef2efd40f84f,10000,SHA512

Then put the output in your common.cpf (attached). Note that the data.cpf and compute.cpf mentioned in the IKO docs are for specifying additional configuration of the data and compute nodes. That is overkill for us right now - just know that they exist.
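For reference, the resulting common.cpf is tiny; a sketch with the hash truncated (the PasswordHash parameter goes in the [Startup] section):

[Startup]
PasswordHash=2b679c8c944e2cbc...,eac33175d6268d7b...,10000,SHA512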

We just want to define a password of our own at startup. If we do not, we will be prompted to change the password the first time we sign in (the default username/password is _SYSTEM/SYS, in case you do not define one).

On to the next secret, the one for pulling the image from the registry. I use the InterSystems Container Registry (ICR), but lots of our clients have their own registries to which they push our images. That is great too. Just note that how you create your secret depends on how you access your registry. For the ICR it is as follows:

kubectl create secret docker-registry intersystems-pull-secret --docker-server=https://containers.intersystems.com --docker-username='<your username>' --docker-password='<your password>' --docker-email='<your email>'

We have one secret left, but let's just gloss over the topology first.

Topology is the IRIS environment we want to create: specifically, the data node and web gateway. Regarding the image, I see some people like to use the :latest tag, which is normally good practice to ensure the most up-to-date software. In this case, though, it is better practice to pin the version you want, since it is best practice to specify the matching compatibilityVersion. See more about that here.

As for the webgateway, we can configure how many replicas we want, which application paths should be available, and the loginSecret. This secret is how the web gateway will log into IRIS.

kubectl create secret generic iris-webgateway-secret --from-literal='username=CSPSystem' --from-literal='password=SYS'

That's our last secret, but you can read up more about them on the Kubernetes documentation.

Finally, we have the serviceTemplate.

Our process will create two services that are of significance to us (the rest are outside the scope of this article and should not concern you at this time): 1) simple and 2) simple-webgateway.

For now, all you need to know about services is that they expose applications that run on pods. By running kubectl get svc, you can see the external IPs that these two services expose. If you're running your Kubernetes cluster on Docker Desktop like me, they will be localhost.
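For illustration, on Docker Desktop the two services look roughly like this (cluster IPs and node ports here are made up; 1972 is the IRIS superserver port):

$ kubectl get svc
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
simple              LoadBalancer   10.103.44.190   localhost     1972:30571/TCP
simple-webgateway   LoadBalancer   10.98.121.37    localhost     80:31923/TCP,443:32020/TCP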

And we notice the familiar ports.

That's because these are our internal and external web servers. For example, we can go to our Management Portal through the external web server: http://localhost/csp/sys/UtilHome.csp. HTTP takes us automatically to port 80 (HTTPS to 443), which is why we don't need to specify the port here.

That's it for now. In the next article we'll take another bite out of services.
 

Announcement · March 4, 2024

[Webinar in Hebrew] Introducing InterSystems Cloud Services

Hi Community,

We're pleased to invite you to the upcoming webinar in Hebrew:

👉 Introducing InterSystems Cloud Services 👈

📅 Date & time: March 20th, 3:00 PM IDT

In this session we will review InterSystems cloud options, introduce the InterSystems Cloud Portal, and provide a quick overview of specific cloud services:

  • FHIR Server, and the FHIR SQL Builder
  • FHIR Transformation Service
  • IRIS Cloud SQL, and IntegratedML
  • IRIS Managed Cloud Service
  • Health Connect Cloud

Presenters:
🗣 @Ariel Glikman, Sales Engineer, InterSystems
🗣 @Keren Skubach, Senior Sales Engineer, InterSystems
🗣 @Tani Frankel, Sales Engineer Manager, InterSystems

➡️ Register today and enjoy!
 

Article · March 3, 2024 · 5m read

How to send messages to Microsoft Teams

Hi community,

The aim of this article is to explain how to create messaging between IRIS and Microsoft Teams.

In my company, we wanted to monitor error messages, so we used the Ens.Alert class to redirect them through a Business Operation that sent an email.
The problem was that those error messages went to a support account that received many other emails; we wanted something specific for a specific team.

So we investigated how to make these messages reach the development team directly, giving them real-time notification of an error in our production.
In our company we use Microsoft Teams as a corporate tool, so we asked ourselves: how could we make these messages reach the IRIS development team?

Previous steps

Please expand the section below to learn how to configure your Teams channel with the Incoming Webhook app.

 
Previous steps

Note: the webhook link is divided into two parts, the server and the URL; remember this when you configure the component.

https://YOURCOMPANY.webhook.office.com/webhookb2/40cc6704-1bc5-4f87-xxxx-xxxxxxxxf@5xxxxxa-643b-47a3-xxxxx-fc962cc7cdb2/IncomingWebhook/6f272d796f1844b8b0b57b61365f8961/2ff46079-ee4a-442b-a642-dc418f6c67ee
Server: YOURCOMPANY.webhook.office.com
URL: /webhookb2/40cc6704-1bc5-4f87-xxxx-xxxxxxxxf@5xxxxxa-643b-47a3-xxxxx-fc962cc7cdb2/IncomingWebhook/6f272d796f1844b8b0b57b61365f8961/2ff46079-ee4a-442b-a642-dc418f6c67ee

Calling the webhook API

The Incoming Webhook app accepts Office 365 connector cards. You can create your card using the Adaptive Cards designer.

So, I've designed a card to display an error message (Ens.AlertRequest).

 
AdaptiveCard for Ens.AlertRequest
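The designer schema is collapsed above; the payload the ObjectScript below assembles looks roughly like this (the contentType and version values are assumptions based on the Adaptive Cards documentation):

{
  "type": "message",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.adaptive",
      "content": {
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "type": "AdaptiveCard",
        "version": "1.2",
        "body": [
          {
            "type": "Container",
            "items": [
              { "type": "TextBlock", "text": "Unhandled error", "weight": "bolder", "size": "Medium" },
              { "type": "TextBlock", "text": "St.Teams.BO.MainProcess", "weight": "bolder", "size": "Small" },
              { "type": "TextBlock", "text": "ERROR <Ens>ErrFTPListFailed: ...", "wrap": true },
              {
                "type": "FactSet",
                "facts": [
                  { "title": "SessionId", "value": "111" },
                  { "title": "Time", "value": "2024-02-28 11:00:15" }
                ]
              }
            ]
          }
        ]
      }
    }
  ]
}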

Using this schema, you can create the message using the St.Teams message classes like this:

set class=##class(St.Teams.Msg.Adaptive.Request).%New()
set class.Type = "message"
set attach = ##class(St.Teams.Msg.Adaptive.Attachment).%New()
set content = ##class(St.Teams.Msg.Adaptive.Content).%New()

set container = ##class(St.Teams.Msg.Common.Item).%New()
set container.Type = "Container"
set item1=##class(St.Teams.Msg.Common.Item).%New()
set item1.Type = "TextBlock"
set item1.Text = "Unhandled error"
set item1.Weight = "bolder"
set item1.Size = "Medium"
set item2=##class(St.Teams.Msg.Common.Item).%New()
set item2.Type = "TextBlock"
set item2.Text = "St.Teams.BO.MainProcess"
set item2.Weight = "bolder"
set item2.Size = "small"
set item2.IsSubtitle = 1
set item3=##class(St.Teams.Msg.Common.Item).%New()
set item3.Type = "TextBlock"
set item3.Text = "ERROR <Ens>ErrFTPListFailed: 'Unable to open data connection to 127.0.0. on port 8080' (code=425)"
set item3.Wrap = 1
set factSet=##class(St.Teams.Msg.Common.Item).%New()
set factSet.Type = "FactSet"
set factItem1 =##class(St.Teams.Msg.Common.FactItem).%New()
set factItem1.Title = "SessionId"
set factItem1.Value = "111"
set factItem2 =##class(St.Teams.Msg.Common.FactItem).%New()
set factItem2.Title = "Time"
set factItem2.Value = "2024-02-28 11:00:15"
do factSet.Facts.Insert(factItem1)
do factSet.Facts.Insert(factItem2)

do container.Items.Insert(item1)
do container.Items.Insert(item2)
do container.Items.Insert(item3)
do container.Items.Insert(factSet)

do content.Body.Insert(container)
set attach.Content = content
do class.Attachments.Insert(attach)

This creates the JSON to call the webhook. But since we want to create the message from an Ens.AlertRequest message, the best way is to use a Data Transformation.

Then, the rule of your Ens.Alert should be like this:

It transforms the Ens.AlertRequest using St.Teams.DT.EnsAlertToAdpativeRequest and sends it to St.Teams.BO.Api.Teams.

Then you receive the message directly in your Teams group.

I hope it is as useful to you as it has been to us.

Article · March 2, 2024 · 4m read

IKO - Lessons Learned (Part 1 - Helm)

The IKO documentation is robust: a single web page that amounts to about 50 actual pages of documentation. For beginners that can be a bit overwhelming. As the saying goes: how do you eat an elephant? One bite at a time. Let's start with the first bite: Helm.

What is Helm?

Helm is to Kubernetes what the InterSystems Package Manager (IPM, formerly ObjectScript Package Manager - ZPM) is to IRIS.

It facilitates the installation of applications on the platform, in a fashion suitable for Kubernetes. That is to say, it is built so you can tailor the installation to your needs, whether it be a development, test, or production environment.

We provide everything you will need on our WRC software distribution under the IRIS Components tab - it consists of a .tar.gz. Extract it and you will get a .tar. Extract that again and you will see a folder iris_operator_<yourversion>. In here are a README with instructions, as well as 3 folders - an image of the IKO (you could also have gotten this from the InterSystems Container Registry), chart, and samples. samples is just to help you form your files and is not actually necessary for IKO installation. chart, however, is necessary. Let's take a peek.

chart
|
|-> iris-operator
               |
               | -> README.md
               | -> .helmignore
               | -> Chart.yaml
               | -> values.yaml
               | -> templates 
                      | -> _helpers.tpl
                      | -> apiregistration.yaml
                      | -> appcatalog-user-roles.yaml
                      | -> cleaner.yaml
                      | -> cluster-role.yaml
                      | -> cluster-role-binding.yaml
                      | -> deployment.yaml
                      | -> mutating-webhook.yaml
                      | -> NOTES.txt
                      | -> service.yaml
                      | -> service-account.yaml
                      | -> user-roles.yaml
                      | -> validating-webhook.yaml
               

 

This is the meat and potatoes (a funny way to say basic ingredients) of the application we will be installing. Don't worry: the only thing we care about is the values.yaml. Everything else goes on behind the scenes, thanks to Helm. Phew! But it's important to know that though our operator may seem like an ordinary pod, it is a lot more than that.

Most of the contents of the values.yaml are also going to be out of the scope of this article because you will not have to worry about them. We will care about just 4 fields (okay, 5 at most).

They are operator.registry, operator.repository, operator.tag, imagePullSecrets.name[0], and imagePullPolicy.

Where is your IKO image? Is your organization using a private repository? Are you planning on pulling from the ICR? Specify your image details in the registry, repository, and tag fields. If you are using the ICR you can leave it as is.

How will you access the ICR, or your organization's repository? Assuming it is private, you will need to specify the details with which you can access it for pulling. In the next article I touch on how to create this secret, which we can call intersystems-pull-secret instead of the standard dockerhub-secret, which is what is presently there if you downloaded the files from the WRC.

Finally, for the imagePullPolicy, we can leave it as Always, or alternatively change it to IfNotPresent or Never. I'll refer you to the Kubernetes documentation if you need clarification - here. I tend to use IfNotPresent.
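Putting those fields together, a values.yaml destined for the ICR might end up looking roughly like this (the repository path is an assumption; check the defaults in your downloaded chart):

operator:
  registry: containers.intersystems.com
  repository: intersystems/iris-operator-amd
  tag: 3.6.7.100
imagePullSecrets:
  - name: intersystems-pull-secret
imagePullPolicy: IfNotPresent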

Looks like we're good to go (assuming you already have Helm installed; if not, install it first)! Let's install the IKO. We need to tell Helm where the folder with all our goodies is (that's the iris-operator folder you see above). If you are sitting at the chart directory you can use the command

helm install intersystems iris-operator

but perhaps you're sitting a little higher. No problem. This is fine too, assuming you are sitting in a directory containing iris_operator_amd-3.6.7.100:

helm install intersystems iris_operator_amd-3.6.7.100/chart/iris-operator

You'll get a message that the installation was a success, and you can double-check that your deployment is running, as noted by the message and in our docs.

kubectl --namespace=default get deployments -l "release=intersystems, app=iris-operator"

In the next post we'll put the InterSystems Kubernetes Operator to use.
