
Article
· November 12, 2025 · 9m read

IKO Plus: VIP in Kubernetes on IrisClusters

Power your IrisCluster serviceTemplate with kube-vip

If you're running IRIS in a mirrored IrisCluster for HA in Kubernetes, the question of providing a Mirror VIP (Virtual IP) becomes relevant. A virtual IP gives downstream systems a single, stable address for interacting with IRIS: even when a failover happens, they can reconnect to the same IP address and continue working.

The lead-in above was stolen (gaffled, jacked, pilfered) from techniques @Eduard Lebedyuk shared with the community for VIPs with IRIS across the public clouds...

Articles: ☁ vip-aws | vip-gcp | vip-azure

This version strives to solve the same challenge for IRIS on Kubernetes deployed via MAAS or on-prem, and possibly, though yet to be realized, using cloud mechanics with Managed Kubernetes Services.

 

Distraction

This distraction will highlight kube-vip, where it fits into a mirrored IrisCluster, and how to enable a "floating IP" for layers 2-4 with the serviceTemplate, or one of your own. I'll walk through a quick install of the project, apply it to a mirrored IrisCluster, and attest that a mirror failover against the floating VIP is timely and functional.

IP

Snag an available IPv4 address off your network and set it aside for use as the VIP for the IrisCluster (or a range of them). For this distraction we value the predictability of a single IP address to support the workload.

192.168.1.152

This is the one address to rule them all, and in use for the remainder of the article.

Kubernetes Cluster

The cluster itself is running Canonical Kubernetes on commodity hardware: 3 physical nodes on a flat 192.x network, a home lab in the strictest definition of the term.

Nodes

You'll want to do this step through some slick hook that gets work done on the node during scheduling to implement the virtual interface/IP. Hopefully your nodes have some consistency in their NIC hardware, making node prep easy. My cluster above, however, had varying network interfaces, as its purchase spanned multiple Prime Days, so I virtualized them all by aliasing the active NIC to a vip0 interface.

I ran the following on the nodes before getting started, to add a virtual NIC on top of a physical interface and ensure it starts at boot.

 
vip0.sh
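
The script itself isn't reproduced here, so below is a minimal sketch of what it could look like: it assumes eth0 as the physical NIC (pass your own) and uses a macvlan alias plus a oneshot systemd unit; adjust for your hardware and distro.

#!/usr/bin/env bash
# vip0.sh -- alias the active NIC as a virtual interface named vip0
# and persist it across reboots. PHY_IF default is an assumption.
set -euo pipefail

PHY_IF="${1:-eth0}"

# Create vip0 as a macvlan child of the physical interface and bring it up
sudo ip link add vip0 link "${PHY_IF}" type macvlan mode bridge
sudo ip link set vip0 up

# Persist across reboots with a oneshot systemd unit
sudo tee /etc/systemd/system/vip0.service >/dev/null <<EOF
[Unit]
Description=Create vip0 virtual interface on ${PHY_IF}
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link add vip0 link ${PHY_IF} type macvlan mode bridge
ExecStart=/usr/sbin/ip link set vip0 up
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable vip0.service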

You should see the system assign the vip0 interface and tee it up to start on boot.

If your commodity network gear lets you know when something new arrives on the network, you may get a notification like this on your phone after adding those interfaces.


💫 kube-vip

A description of kube-vip from the ether:

kube-vip provides a virtual IP (VIP) for Kubernetes workloads, giving them a stable, highly available network address that automatically fails over between nodes — enabling load balancer–like or control plane–style redundancy without an external balancer.

The commercial workload use case is prevalent for secure implementations where the IP address space is limited and DNS is a bit tricky, like HSCN connectivity in England, for instance. The less important thing, but one most folks standing up clusters outside of the public cloud need to solve, is basically ALB/NLB-like connectivity to the workloads... I've solved this with Cilium and MetalLB, and now have added kube-vip to my list.

On each node, kube-vip runs as a container, via a Daemonset, that participates in a leader-election process using Kubernetes Lease objects to determine which node owns the virtual IP (VIP). The elected leader binds the VIP directly to a host network interface (for example, creating a virtual interface like eth0:vip0) and advertises it to the surrounding network. In ARP mode, kube-vip periodically sends gratuitous ARP messages so other hosts route traffic for the VIP to that node’s MAC address. When the leader fails or loses its lease, another node’s kube-vip instance immediately assumes leadership, binds the VIP locally, and begins advertising it, enabling failover. This approach effectively makes the VIP “float” across nodes, providing high-availability networking for control-plane endpoints or load-balanced workloads without relying on an external balancer.

Shorter version:

The kube-vip containers hold an election to choose a leader, which determines the node that should own the virtual IP; that node then binds the IP to its interface and advertises it to the network. This lets the IP address "float" across the nodes, bind only to healthy ones, and make services accessible via IP... all using Kubernetes-native magic.
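
You can peek at the election directly, since kube-vip records the current leader in Lease objects. A quick sketch (lease names vary by kube-vip version and mode, so list them first):

# List leases in kube-system and see which node holds each one
kubectl get lease -n kube-system
kubectl describe lease -n kube-system | grep -i holder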

The install was dead simple: no chart immediately needed, though it would be very easy to wrap up in one if desired. Here we just deploy the manifests that support its install, as specified in the getting-started docs of the project.

 
kube-vip.yaml
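
The manifest isn't inlined here; this sketch follows the project's getting-started flow, using the kube-vip container itself to render a services-mode DaemonSet bound to the vip0 interface (flags per the docs at the time of writing; verify against the version you pull):

# Fetch the RBAC manifest published by the project
curl -s https://kube-vip.io/manifests/rbac.yaml > kube-vip-rbac.yaml

# Render the DaemonSet manifest: ARP mode, services-only, bound to vip0
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases/latest | jq -r ".tag_name")
alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
kube-vip manifest daemonset --interface vip0 --services --arp --inCluster > kube-vip.yaml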

Apply it! ServiceAccount, RBAC, DaemonSet.
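
Something like:

kubectl apply -f kube-vip-rbac.yaml
kubectl apply -f kube-vip.yaml

# The DaemonSet lands in kube-system; pod names/labels may differ by version
kubectl get pods -n kube-system -o wide | grep kube-vip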



Bask in the glory of the running pods of the DaemonSet (hopefully).

IrisCluster

Nothing special here, just a vanilla mirrorMap of primary/failover.

 
IrisCluster.yaml ( abbreviated )
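
The full spec is trimmed for the article; a minimal sketch of the relevant shape (names and secrets here are placeholders for illustration):

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: ikoplus
  namespace: ikoplus
spec:
  licenseKeySecret:
    name: iris-key-secret            # placeholder secret name
  imagePullSecrets:
    - name: intersystems-pull-secret # placeholder secret name
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:latest-em
      mirrored: true                 # primary/failover mirror pair
  serviceTemplate:
    spec:
      type: LoadBalancer             # the Service kube-vip will power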

Apply it, and make sure it's running...

kubectl apply -f IrisCluster.yaml -n ikoplus

It's alive (and mirroring)!

Annotate

This binds the virtual IP to the Service, and is the trigger for kube-vip to set up the service on the VIP.

Keep in mind we can specify a range of IP addresses too, which lets you get creative with your use case: that approach skips the trigger and just pulls an address from the range. Recall we forwent it in the install of kube-vip, but check the yaml for the commented example.

kubectl annotate service nginx-lb kube-vip.io/loadbalancerIPs="192.168.1.152" --overwrite -n ikoplus
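
Then confirm the annotation took:

# EXTERNAL-IP on the annotated service should now show 192.168.1.152
kubectl get svc nginx-lb -n ikoplus -o wide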


Attestation

Let's launch a pod that continually polls the SMP URL constructed with the VIP and watch the status codes during failover; then we'll send one of the mirror members "casters up" and see how the VIP takes over on the alternate node.

 
podviptest.yaml
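
The manifest isn't shown in full; a minimal sketch of such a poller, assuming the default SMP port 52773 and the VIP above:

apiVersion: v1
kind: Pod
metadata:
  name: podviptest
  namespace: ikoplus
spec:
  restartPolicy: Never
  containers:
    - name: poller
      image: curlimages/curl:latest
      command: ["/bin/sh", "-c"]
      args:
        - |
          # Poll the SMP through the VIP once a second, logging the status code
          while true; do
            code=$(curl -sk -o /dev/null -w '%{http_code}' \
              http://192.168.1.152:52773/csp/sys/UtilHome.csp)
            echo "$(date +%T) SMP via VIP -> HTTP ${code}"
            sleep 1
          done

Watch it with kubectl logs -f podviptest -n ikoplus, then force the failover (for example, by deleting the primary data pod): the status codes should blip briefly and recover once the VIP lands on the alternate node.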

 🎉

Article
· November 12, 2025 · 3m read

VS Code-related resources

After the two webinars we held focused on VS Code ["Introduction" and "Beyond the Basics"; in Hebrew], a colleague from the English community prepared some links to relevant resources for the participants, which we sent out as a follow-up. We are sharing them here as well.
Of course, you are all invited to add more useful resources.

* Third-party extensions are not endorsed

Article
· November 12, 2025 · 6m read

Writing an OpenAPI 2.0 specification

A REST API (Representational State Transfer Application Programming Interface) is a standards-compliant way for web applications to communicate with one another using HTTP methods such as GET, POST, PUT, DELETE, etc. It is designed around resources, which can be anything from a user to a file.

Question
· November 11, 2025

HTTP post request rejected

Hi guys,

I'm looking to mimic this POST request URL where I'm sending a token:

So I created the code below, but I'm getting an "HTTP/1.1 405 Method Not Allowed" error:

Url="myurl/confirmed?id="_token
Set Httprequest1=##class(%Net.HttpRequest).%New()
Set Httprequest1.SSLConfiguration="LS2"
Set Httprequest1.Server="myserver.com" 
Set Httprequest1.Timeout=30
Set Httprequest1.Https=1
Set Httprequest1.Port=7711
set Httprequest1.ContentType="application/json"
Do Httprequest1.SetHeader("Accept","application/json")
Do Httprequest1.SetHeader("Accept-Language","en_US")
//D Httprequest1.EntityBody.Write(token)
Set tSc=Httprequest1.Post(Url)
Set StateLine=Httprequest1.HttpResponse.StatusLine
Set ^Out2($zdt($h),1)=tSc_"|"_StateLine

So what am I doing wrong?

Thanks

Article
· November 11, 2025 · 5m read

At long last: welcoming InterSystems IRIS support for Golang

Introduction

The InterSystems IRIS data platform has long been known for its performance, interoperability, and flexibility across programming languages. For years, developers could pair IRIS with Python, Java, JavaScript, and .NET, while Go (or Golang) developers could only look on with envy.

Golang Logo

That wait is finally over.

The new go-irisnative driver brings Golang support to InterSystems IRIS, implementing the standard database/sql API. This means Go developers can now build IRIS-backed applications using familiar database tooling, connection pooling, and query interfaces.


Why support Golang

Golang is a language designed for simplicity, concurrency, and performance, making it an ideal choice for cloud-native and microservice-based architectures. It powers some of the world's most scalable systems, including Kubernetes, Docker, and Terraform.

Bringing IRIS into the Go ecosystem enables:

  • Lightweight, high-performance services with IRIS as the backend
  • Native concurrency for parallel query execution or background processing
  • Seamless integration with containerized and distributed systems
  • Idiomatic database access through Go's database/sql interface

This integration makes IRIS a perfect fit for modern, cloud-ready Go applications.


Getting started

1. Installation

go get github.com/caretdev/go-irisnative

2. Connecting to IRIS

Here's how to connect using the standard database/sql API:

import (
    "database/sql"
    "fmt"
    "log"
    _ "github.com/caretdev/go-irisnative"
)

func main() {
    db, err := sql.Open("iris", "iris://_SYSTEM:SYS@localhost:1972/USER")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Simple ping to test connection
    if err := db.Ping(); err != nil {
        log.Fatal("Failed to connect:", err)
    }

    fmt.Println("Connected to InterSystems IRIS!")
}

3. Creating a table

Let's create a simple demo table:

_, err = db.Exec(`CREATE TABLE IF NOT EXISTS demo (
    id INT PRIMARY KEY,
    name VARCHAR(50)
)`)
if err != nil {
    log.Fatal(err)
}
fmt.Println("Table created.")

4. Inserting data

Multi-row inserts are not currently supported; each call inserts a single row:

_, err = db.Exec(`INSERT INTO demo (id, name) VALUES (?, ?)`, 1, "Alice")
if err != nil {
    log.Fatal(err)
}

_, err = db.Exec(`INSERT INTO demo (id, name) VALUES (?, ?)`, 2, "Bob")
if err != nil {
    log.Fatal(err)
}

fmt.Println("Data inserted.")

5. Querying data

Querying works directly through the database/sql interface:

rows, err := db.Query(`SELECT id, name FROM demo`)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()

for rows.Next() {
    var id int
    var name string
    if err := rows.Scan(&id, &name); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("ID: %d, Name: %s\n", id, name)
}

That's all you need to perform basic SQL operations in Go.


How it works

Under the hood, the go-irisnative driver uses the IRIS Native API for efficient, low-level communication with the database. The driver implements Go's standard database/sql/driver interfaces, making it compatible with existing Go tooling such as:

  • sqlx
  • gorm (with a custom dialect)
  • standard Go tooling

This gives developers a familiar API with the power and performance of native IRIS access.
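
As a quick illustration of that compatibility, here is a minimal sqlx sketch, assuming the "iris" driver name registered by the import and the demo table created earlier:

package main

import (
    "log"

    _ "github.com/caretdev/go-irisnative"
    "github.com/jmoiron/sqlx"
)

func main() {
    // sqlx wraps database/sql, so any registered driver works as-is
    db, err := sqlx.Connect("iris", "iris://_SYSTEM:SYS@localhost:1972/USER")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Select scans the whole result set straight into the slice
    var names []string
    if err := db.Select(&names, `SELECT name FROM demo`); err != nil {
        log.Fatal(err)
    }
    log.Println("names:", names)
}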


Use cases

  • Microservices: lightweight Go services that connect directly to IRIS.
  • Data APIs: expose IRIS data through REST or gRPC endpoints.
  • Integration tools: bridge IRIS data with other systems in Go pipelines.
  • Cloud-native IRIS apps: deploy IRIS-backed Go applications on Kubernetes or Docker.

Testing with Testcontainers

If you want to run automated tests without managing a live IRIS instance, you can use testcontainers-iris-go.
It spins up a disposable IRIS container for integration testing.

Example test setup:

import (
    "context"
    "database/sql"
    "flag"
    "log"
    "os"
    "testing"
    iriscontainer "github.com/caretdev/testcontainers-iris-go"
    "github.com/stretchr/testify/require"
    "github.com/testcontainers/testcontainers-go"
)

var connectionString string = "iris://_SYSTEM:SYS@localhost:1972/USER"
var container *iriscontainer.IRISContainer = nil
func TestMain(m *testing.M) {
    var (
        useContainer   bool
        containerImage string
    )
    flag.BoolVar(&useContainer, "container", true, "Use container image.")
    flag.StringVar(&containerImage, "container-image", "", "Container image.")
    flag.Parse()
    var err error
    ctx := context.Background()
    if useContainer || containerImage != "" {
        options := []testcontainers.ContainerCustomizer{
            iriscontainer.WithNamespace("TEST"),
            iriscontainer.WithUsername("testuser"),
            iriscontainer.WithPassword("testpassword"),
        }
        if containerImage != "" {
            container, err = iriscontainer.Run(ctx, containerImage, options...)
        } else {
            // or use default docker image
            container, err = iriscontainer.RunContainer(ctx, options...)
        }
        if err != nil {
            log.Println("Failed to start container:", err)
            os.Exit(1)
        }
        // NOTE: no defer here -- TestMain exits via os.Exit, which skips
        // deferred calls; the container is terminated explicitly below.
        connectionString = container.MustConnectionString(ctx)
        log.Println("Container started successfully", connectionString)
    }

    var exitCode int = 0
    exitCode = m.Run()

    if container != nil {
        container.Terminate(ctx)
    }
    os.Exit(exitCode)
}

func openDbWrapper[T require.TestingT](t T, dsn string) *sql.DB {
    db, err := sql.Open(`intersystems`, dsn)
    require.NoError(t, err)
    require.NoError(t, db.Ping())
    return db
}

func closeDbWrapper[T require.TestingT](t T, db *sql.DB) {
    if db == nil {
        return
    }
    require.NoError(t, db.Close())
}

func TestConnect(t *testing.T) {
    db := openDbWrapper(t, connectionString)
    defer closeDbWrapper(t, db)

    var (
        namespace string
        username  string
    )
    res := db.QueryRow(`SELECT $namespace, $username`)
    require.NoError(t, res.Scan(&namespace, &username))
    require.Equal(t, "TEST", namespace)
    require.Equal(t, "testuser", username)
}

This is ideal for unit tests in CI/CD pipelines, ensuring your Go application runs seamlessly against IRIS in isolation.


Conclusion

Golang support for InterSystems IRIS has arrived, and it is nothing short of a transformative step.
With go-irisnative you can now build scalable, concurrent, cloud-native applications directly on the power of IRIS.

Whether you are building microservices, APIs, or integration tools, Go gives you simplicity and performance, while IRIS delivers reliability and rich data capabilities.

👉 Give it a try!
