Article
· March 11, 2024 · 3m read

Deploying IRIS For Health on OpenShift

In case you're planning on deploying IRIS For Health, or any of our containerized products, via the IKO on OpenShift, I wanted to share some of the hurdles we had to overcome.

As with any IKO-based installation, we first need to deploy the IKO itself. However, we were getting this error:

Warning FailedCreate 75s (x16 over 3m59s) replicaset-controller Error creating: pods "intersystems-iris-operator-amd-f6757dcc-" is forbidden: unable to validate against any security context constraint:

followed by a list of all the security context constraints (SCCs) it could not validate against.

If you're like me, you may be surprised to see such an error when deploying in Kubernetes, because a security context constraint is not a Kubernetes object. This comes from the OpenShift universe, which extends the regular Kubernetes definition (read more about that here). 

What happens is that when we install the IKO via helm (see more on how to do that here) we create a service account.

"User accounts are for humans. Service accounts are for application processes." (Kubernetes docs)

This service account is put in charge of creating objects, such as the IKO pod. However, it fails, because it cannot validate against any of the cluster's SCCs.

OpenShift has a wide array of security permissions that can be limited, and one way to do this is via the security context constraint. 
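You can list the SCCs that ship with your cluster (restricted, anyuid, privileged, and so on) with a quick oc call:

# List the security context constraints defined in the cluster
oc get scc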

What we needed to do was to create the following SecurityContextConstraints object:

# Create SCC for ISC resources
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  # SCCs are cluster-scoped, so no namespace is set here
  name: iris-scc
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowHostDirVolumePlugin: false
allowHostIPC: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
users:
  - system:serviceaccount:<iris namespace>:intersystems-iris-operator-amd

This lets the intersystems-iris-operator-amd service account create objects by allowing it to validate against the iris-scc.
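Assuming you saved the manifest above as iris-scc.yaml (the filename is just an example), creating and verifying the SCC looks like this:

# Create the SCC and confirm it exists
oc apply -f iris-scc.yaml
oc get scc iris-scc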

Next is to deploy the IrisCluster itself (more on that here). But this was failing too, because we needed to give the default service account access to the anyuid SCC, allowing our containers to run as any user (more specifically, we need to let the irisowner user, UID 51773, run the containers!). We do this as follows:

oc adm policy add-scc-to-user anyuid -z default -n <iris namespace>

We then create a RoleBinding granting the admin role to the service account intersystems-iris-operator-amd, giving it the ability to create and monitor objects in the namespace. In OpenShift one can do this via the console, or as explained in kubectl create rolebinding.
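For reference, a minimal kubectl version of that rolebinding might look like the following sketch (the binding name iko-admin is a placeholder):

# Grant the built-in admin role to the operator's service account in the IRIS namespace
kubectl create rolebinding iko-admin \
  --clusterrole=admin \
  --serviceaccount=<iris namespace>:intersystems-iris-operator-amd \
  --namespace <iris namespace>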

One very last thing to note is that you may see the container receiving a SIGKILL, as shown in the IRIS messages log:

Initializing IRIS, please wait...
Merging IRIS, please wait...
Starting IRIS
Startup aborted.
Unexpected failure: The target process received a termination signal 9.
Operation aborted.
[ERROR] Command "iris start IRIS quietly" exited with status 256

This could be due to resource quotas and limit ranges. Take into account that these exist at both the pod level and the container level.
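If the memory limit in play is too small for IRIS, the kernel's OOM killer terminates the process with exactly this signal 9. As a sketch of where such a limit can come from (the values are illustrative, not recommendations), a LimitRange like this caps every container in the namespace:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: <iris namespace>
spec:
  limits:
  - type: Container
    default:
      memory: 512Mi        # limit applied to containers that set none
    defaultRequest:
      memory: 256Mi        # request applied to containers that set none

If a pod was OOM-killed, kubectl describe pod will show OOMKilled as the container's last state.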

Hope this helps and happy deploying!

P.S.

You may have noted that in the values.yaml of the Helm chart, there is this snippet:

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

You can actually edit this and use a service account that already exists. For example:

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: false
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: myExistingServiceAccount

Note that this is not one-size-fits-all, but it could help if you're deploying on a strict system where you cannot create service accounts but can use some that already exist.
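As a sketch (the release name and chart reference are placeholders), the same override can also be passed on the helm command line instead of editing values.yaml:

# Point the chart at a pre-existing service account
helm install iris-operator <chart> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=myExistingServiceAccount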

Question
· March 8, 2024

Classic View Page for CCR to be deprecated - did we miss anything on the new UI?

Last year we introduced our new Angular-based View page for CCR as part of the UI refresh for the application. It has been used very effectively by close to 1000 users around the world as the default UI for viewing CCRs, and as a result we're getting ready to completely disable the "classic" View page.

Benefits of the new page include:

  • modern look and feel
  • reworked UX   
  • dynamic data updates
  • new tabbed access to reduce scrolling
  • dynamic workflow visualization (coming soon)

Before we turn off access to the old UI, we really want to make sure that users are able to do everything they need to with the new UI.  Have you found that you need to revert to using the Classic View page to accomplish certain tasks?  If so, please let us know in the comments below.

Article
· March 8, 2024 · 3m read

IKO - Lessons Learned (Part 4 - The Storage Class)

The IKO dynamically provisions storage in the form of persistent volumes, and pods claim them via persistent volume claims.

But storage comes in different shapes and sizes. The blueprint for the details of those persistent volumes comes in the form of the storage class.

This raises a question: we've deployed the IrisCluster without specifying a storage class. So what's going on?

You'll notice that with a simple

kubectl get storageclass

you'll find the storage classes that exist in your cluster. Note that storage classes are a cluster-wide resource, not per-namespace like other objects such as our pods and services.
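The output looks something like this (the names and provisioner here are illustrative; yours will differ):

NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   10d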

You'll also notice that one of the storage classes is marked as default. This is the one the IKO uses when we do not specify any. What if none is marked as default? In that case we have the following problem:

Persistent volumes cannot be created, which in turn means the persistent volume claims are never bound, and the pod is stuck in a Pending state. It's like going to a restaurant, looking at the menu, telling the waiter that you'd like to order food, then closing the menu, handing it back, and saying thanks. We need to be more specific; instructions this vague mean nothing.
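The symptom is easy to spot (sample output; names and messages are illustrative):

kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
simple-data-0-0   0/2     Pending   0          5m

kubectl describe pvc then shows an event along the lines of "no persistent volumes available for this claim and no storage class is set".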

To solve this problem you could either set a default storage class in your cluster, or set the storage class name field in the CRD (this way you don't need to change your cluster's default storage class if you choose to use a non-default one):

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: simple
spec:
  licenseKeySecret:
    #; to activate ISC license key
    name: iris-key-secret
  configSource:
    #; contains CSP-merge.ini, which is merged into IKO's
    #; auto-generated configuration.
    name: iris-cpf
  imagePullSecrets:
    - name: intersystems-pull-secret
  storageClassName: your-sc

  topology:
    data:
      image: containers.intersystems.com/intersystems/irishealth:2023.3
      compatibilityVersion: "2023.3"
      mirrored: true
      webgateway:
        image: containers.intersystems.com/intersystems/webgateway:2023.3
        type: apache
        replicas: 1
        applicationPaths:
          - /csp/sys
          - /csp/healthshare
          - /api/atelier
          - /csp/broker
          - /isc
          - /oauth2
          - /ui
        loginSecret:
          name: iris-webgateway-secret

    arbiter:
      image: containers.intersystems.com/intersystems/arbiter:2023.3
    webgateway:
      replicas: 1
      image: containers.intersystems.com/intersystems/webgateway:2023.3
      applicationPaths:
        #; All of the IRIS instance's system default applications.
        #; For Management Portal only, just use '/csp/sys'.
        #; To support other applications, please add them to this list.
        - /csp/sys
        - /csp/broker
        - /api
        - /isc
        - /oauth2
        - /ui
        - /csp/healthshare
      alternativeServers: LoadBalancing
      loginSecret:
        name: iris-webgateway-secret

  serviceTemplate:
    # ; to enable external IP addresses
    spec:
      type: LoadBalancer

Note that there are specific requirements for the storage class, as the documentation spells out:

"Any storage class you define must include the Kubernetes setting volumeBindingMode: WaitForFirstConsumer for correct operation of the IKO."

Furthermore, I like to use allowVolumeExpansion: true.

Note that the provisioner of your storage class is platform specific.
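Putting those pieces together, a storage class that satisfies the IKO might look like this sketch (the AWS EBS CSI provisioner and the default-class annotation are assumptions for illustration; substitute your platform's provisioner):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iris-storage
  annotations:
    # optionally mark this class as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com              # platform specific
volumeBindingMode: WaitForFirstConsumer   # required by the IKO
allowVolumeExpansion: true                # allows growing volumes later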

The storage class pops up all over the CRD, so remember to set it when you customize storage for your cluster, to make sure you use the storage class that's right for you.

Article
· March 6, 2024 · 9m read

Connecting to DynamoDB Using Embedded Python: A Tutorial for Using Boto3 and ObjectScript to Write to DynamoDB

Introduction

As the health interoperability landscape expands to include data exchange across on-premises as well as hosted solutions, we are seeing an increased need to integrate with services such as cloud storage. One of the most widely used and well-supported tools is the NoSQL database DynamoDB (Dynamo), provided by Amazon Web Services (AWS).

Article
· March 6, 2024 · 3m read

IKO - Lessons Learned (Part 3 - Services 101 and The Sidecars)

The IKO allows for sidecars. The idea behind them is to have direct access to a specific instance of IRIS. If we have mirrored data nodes, the web gateway will (correctly) only give us access to the primary node. But perhaps we need access to a specific instance. The sidecar is the solution.

Building on the example from the previous article, we introduce the sidecar by using mirrored data nodes and, of course, an arbiter.

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: simple
spec:
  licenseKeySecret:
    #; to activate ISC license key
    name: iris-key-secret
  configSource:
    #; contains CSP-merge.ini, which is merged into IKO's
    #; auto-generated configuration.
    name: iris-cpf
  imagePullSecrets:
    - name: intersystems-pull-secret

  topology:
    data:
      image: containers.intersystems.com/intersystems/irishealth:2023.3
      compatibilityVersion: "2023.3"
      mirrored: true
      webgateway:
        image: containers.intersystems.com/intersystems/webgateway:2023.3
        type: apache
        replicas: 1
        applicationPaths:
          - /csp/sys
          - /csp/healthshare
          - /api/atelier
          - /csp/broker
          - /isc
          - /oauth2
          - /ui
        loginSecret:
          name: iris-webgateway-secret

    arbiter:
      image: containers.intersystems.com/intersystems/arbiter:2023.3
    webgateway:
      replicas: 1
      image: containers.intersystems.com/intersystems/webgateway:2023.3
      applicationPaths:
        #; All of the IRIS instance's system default applications.
        #; For Management Portal only, just use '/csp/sys'.
        #; To support other applications, please add them to this list.
        - /csp/sys
        - /csp/broker
        - /api
        - /isc
        - /oauth2
        - /ui
        - /csp/healthshare
      alternativeServers: LoadBalancing
      loginSecret:
        name: iris-webgateway-secret

  serviceTemplate:
    # ; to enable external IP addresses
    spec:
      type: LoadBalancer

 

Notice how the sidecar is nearly identical to the 'maincar' webgateway; it's just placed within the data node spec. That's because it's a second container that sits in the pod alongside the IRIS container. This all sounds great, but how do we actually access it? The IKO nicely creates services for us, but for the sidecar that responsibility falls on us.

So how do we expose this webgateway? With a service like this:

apiVersion: v1
kind: Service
metadata:
  name: sidecar-service
spec:
  ports:
  - name: http
    port: 81
    protocol: TCP
    targetPort: 80
  selector:
    intersystems.com/component: data
    intersystems.com/kind: IrisCluster
    intersystems.com/mirrorRole: backup
    intersystems.com/name: simple
    intersystems.com/role: iris
  type: LoadBalancer

Now our 'maincar' service always points at the primary and the sidecar service at the backup. But we could just as well have created one sidecar service to expose data-0-0 and another to expose data-0-1, regardless of which is the primary or backup. Services make it possible to expose any pod we want, targeting it via the selector, which identifies a pod (or multiple pods) by its labels.
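For example, to pin a service to data-0-0 specifically, you could select on the pod-name label that Kubernetes attaches to every StatefulSet pod (the pod name simple-data-0-0 assumes the cluster is named simple, as above):

apiVersion: v1
kind: Service
metadata:
  name: data-0-0-service
spec:
  ports:
  - name: http
    port: 81
    protocol: TCP
    targetPort: 80
  selector:
    # targets exactly one pod, regardless of mirror role
    statefulset.kubernetes.io/pod-name: simple-data-0-0
  type: LoadBalancer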

We've barely scratched the surface on services and haven't even mentioned their more sophisticated partner, ingress. You can read up more about that here in the meantime.

In the next bite sized article we'll cover the storage class.
