
Article · January 23, 2026 · 4 min read

Circumventing Default SQL Upper-Case Collation Behavior post IRIS 2024.2.0

We were testing upgrades on our internal systems recently and were initially surprised to see a change in some instances of SQL behavior. On further investigation, the changes make a lot of sense and are easy to adapt to. But, to hopefully save you some time and provide additional information in case this is something you too find curious or interesting, we thought we would share our understanding of the changes and how we worked around them to preserve the previous behavior where desired.

Starting in IRIS version 2024.2.0, slight tweaks were made to SQL UNION and SELECT DISTINCT behavior to consistently enforce the implicit DISTINCT function. The upgrade note is available in the Upgrade Impact Checklist under "DP-429949: Apply collation for UNION".

Background

Previously, for UNION operations the field values were taken as given, and the column was assumed to use EXACT collation, regardless of the fields' collation or the namespace default settings. This meant that the casing provided was the casing returned. In the new behavior, the default field collation for that column, which is typically SQLUPPER, is applied to each value.

A similar change is seen with SELECT DISTINCT calls. Previously, a SELECT DISTINCT call returned the provided casing. In the new behavior, the default field collation for that column, which is typically SQLUPPER, is applied to each value.

What should I do if I see this come up?

If you would prefer to retain the previous provided casing behavior, like we needed to in a few distinct areas, there are a number of ways to enforce this.

(1) Add %EXACT to your fields.

Adding %EXACT to either UNION or SELECT DISTINCT queries will preserve the original behavior. %EXACT can be added to any column for which you want to prevent the default SQLUPPER behavior.

See below for examples.

Returned results from a simple SELECT DISTINCT query prior to IRIS 2024.2.0:

SELECT DISTINCT FirstName FROM DNames
FirstName
David
Dimitri
Daniel
Darren
Dany


Versus the returned results with IRIS versions 2024.2.0 and later:

FirstName
DAVID
DIMITRI
DANIEL
DARREN
DANY


Instead, in IRIS 2024.2.0 or newer using SELECT DISTINCT with %EXACT:
NOTE: Because %EXACT makes the returned column an expression, you will need to add AS <column name> to retain the column name in the result.

SELECT DISTINCT %EXACT FirstName as FirstName FROM DNames
FirstName
David
Dimitri
Daniel
Darren
Dany


The same %EXACT principle applied to a UNION query:

SELECT %EXACT FirstName as AllDNames FROM DNames
UNION
SELECT %EXACT LastName FROM DNames
AllDNames
David
Dimitri
Daniel
DeStefanis
De Lorenzo


(2) Change from using UNION to UNION ALL.

UNION ALL returns the results without applying the default SQLUPPER collation.

It is important to note that UNION and UNION ALL have different behavior. UNION filters out duplicate rows in cases where all columns in the results are the same, whereas UNION ALL does not. If your dataset has no duplicate rows (for example, if each row has a unique %GUID or other identifier), then UNION ALL will return exactly the same results as UNION.

However, if your data does contain duplicates that need to be filtered, this will not work as a valid replacement.
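As a quick illustration of the duplicate handling (a hypothetical sketch, assuming DNames contains two rows whose FirstName is 'David'):

SELECT FirstName FROM DNames WHERE FirstName = 'David'
UNION
SELECT FirstName FROM DNames WHERE FirstName = 'David'
-- returns a single row: the duplicates are collapsed (and collated to DAVID on 2024.2.0+)

SELECT FirstName FROM DNames WHERE FirstName = 'David'
UNION ALL
SELECT FirstName FROM DNames WHERE FirstName = 'David'
-- returns one row per matching row in each branch, in the stored casing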

See below for examples of the collation change.

Returned results from a simple UNION query prior to 2024.2.0:

SELECT FirstName as AllDNames FROM DNames
UNION
SELECT LastName FROM DNames
AllDNames
David
Dimitri
Daniel
DeStefanis
De Lorenzo


Versus the returned results with IRIS versions 2024.2.0 and later:

FirstName
DAVID
DIMITRI
DANIEL
DESTEFANIS
DE LORENZO


Instead, in IRIS 2024.2.0 or newer using UNION ALL:

SELECT FirstName as AllDNames FROM DNames
UNION ALL
SELECT LastName FROM DNames
AllDNames
David
Dimitri
Daniel
DeStefanis
De Lorenzo


(3) Override the default SQLUPPER collation behavior namespace-wide.

From the documentation, "Each namespace has a current string collation setting. This string collation is defined for the data type in %Library.String. The default is SQLUPPER. This default can be changed." The documentation provides instructions for how to easily change this default.

Please note that because this change is namespace-wide, you should be careful to test thoroughly for unintended effects upon making the change.
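As a narrower alternative to a namespace-wide change, collation can also be set per field in the class definition. A minimal sketch, with illustrative class and property names (note that existing indices on a field generally need to be rebuilt after its collation changes):

Class Demo.DNames Extends %Persistent
{

/// EXACT collation preserves the stored casing when this field appears in DISTINCT or UNION results
Property FirstName As %String(COLLATION = "EXACT");

Property LastName As %String(COLLATION = "EXACT");

}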

Article · January 23, 2026 · 3 min read

Lean Integration via OS Delegation

Senior engineering is defined not by the volume of code produced, but by the strategic avoidance of it. In complex integration environments, the tendency to utilize general-purpose libraries for every niche requirement introduces unnecessary overhead. True architectural maturity requires a commitment to "minimalist tooling"—prioritizing resilient, battle-tested system utilities over custom logic. This assessment examines our PGP encryption/decryption pipeline to demonstrate how shifting from application-level libraries to OS-native delegation enhances system durability.

Current State: The Heavyweight Implementation

Our current MPHP.HS.PGPUtil class represents a high-friction design. While functional, the existing InterSystems IRIS Business Process is burdened by a significant dependency footprint. By bridging into Embedded Python to utilize the pgpy library, we have introduced a "weighty" stack that necessitates the Python runtime, third-party library management, and specific cryptographic binaries.

The technical core currently forces the application to manage internal cryptographic states, manual file I/O, and memory heap allocation:

Python Implementation:

ClassMethod DecryptPGP(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String [ Language = python ] {
    import pgpy
    try:
        with open(EncryptedFilePath, "rb") as enc_file:
            encrypted_content = enc_file.read()
        pgp_key, _ = pgpy.PGPKey.from_file(PrivateKeyPath)
        encrypted_message = pgpy.PGPMessage.from_blob(encrypted_content)
        with pgp_key.unlock(Passphrase) as unlocked_key:
            decrypted_message = unlocked_key.decrypt(encrypted_message)
        with open(TargetFilePath, "wb") as dec_file:
            dec_file.write(decrypted_message.message)
        return 1
    except Exception as e:
        return f"ERROR: {str(e)}"
}


Critical Friction Points

* Maintenance Debt: Every third-party library increases the security attack surface, requiring constant auditing and patching.

* Resource Inefficiency: Loading file data into the application’s memory heap for processing is less efficient than stream-based OS utilities.

* Tool Oversizing: We are utilizing a general-purpose language to solve a narrow, standardized cryptographic task.

Proposed State: Native Utility Delegation

To align with modern programming guidelines, we must move from "writing" to "delegating." In this context, the operating system provides a high-performance, battle-tested solution: GnuPG (GPG).

By replacing custom Python methods with direct OS-level calls, we eliminate the external dependency footprint entirely. Native utilities are optimized to stream data directly from the disk, significantly improving memory efficiency and reducing our codebase to a pure orchestrator.

Command Line Implementation:

ClassMethod DecryptPGP(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String {
    // Build the command to delegate to the OS-native GPG utility
    Set cmd = "gpg --batch --yes --passphrase " _ Passphrase _ " --output " _ TargetFilePath _ " --decrypt " _ EncryptedFilePath   
    // Execute at the system level; $zf(-1) returns the OS exit code
    Set exitCode = $zf(-1, cmd) 
    // Return 1 for success (exit code 0), or an error message containing the exit code
    Return $SELECT(exitCode=0:1, 1:"ERROR: GPG exit code "_exitCode)
}
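Two caveats worth noting: on recent IRIS releases $ZF(-1) is deprecated in favor of $ZF(-100), and building the command by string concatenation leaves the passphrase and file paths exposed to shell quoting issues. Below is a minimal sketch of the same delegation using $ZF(-100), which passes the program and each argument separately; the flag string and argument handling are assumptions to verify against your IRIS version, and (as in the original) the private key is assumed to already be imported into the gpg keyring, so PrivateKeyPath is unused.

ClassMethod DecryptPGPArgs(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String
{
    // Each argument is passed separately, so no shell quoting of paths or the passphrase is required
    Set exitCode = $ZF(-100, "", "gpg", "--batch", "--yes", "--passphrase", Passphrase, "--output", TargetFilePath, "--decrypt", EncryptedFilePath)
    // Return 1 for success (exit code 0), or an error message containing the exit code
    Return $SELECT(exitCode=0:1, 1:"ERROR: GPG exit code "_exitCode)
}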


Architectural Conclusion

Shifting to an OS-delegation model reinforces two vital pillars of our design philosophy:

* Suitability: Purpose-built binaries outperform general-purpose libraries for standardized tasks.

* Orchestration: Our code should serve as a thin management layer for existing utilities rather than a re-invention of established tools.

The result is a leaner, more secure, and highly maintainable enterprise data pipeline, while freeing developer talent for new technical problems.

Article · January 23, 2026 · 6 min read

Connecting IRIS to Apache Kafka securely with SSL and SASL

1-) Introduction

This article reflects my recent experience trying to connect an IRIS Business Operation to a secure Kafka Server, using SSL tunnels to encrypt the communication and SASL (Simple Authentication and Security Layer) authentication with SCRAM-SHA-512 password hashing.


2-) Background Information

Kafka implementations can be made extremely secure by using encryption and ACLs (Access Control Lists) to control access to topics and other resources in the cluster.

2.1-) SSL Encryption

The first part, encryption, can be easily implemented by configuring SSL on the Kafka Listeners.

The instructions and details can be found at this link:  https://kafka.apache.org/41/security/encryption-and-authentication-using-ssl/

From the link, it is important to notice that Kafka can implement Client Authentication through mutual authentication of the SSL tunnel, and that is the default behavior of the Kafka Listener.

However, to implement SASL (Simple Authentication and Security Layer) Authentication with SCRAM-SHA-512 (or SCRAM-SHA-256) hashing, the SSL Client Authentication must be disabled on the Listener.

To disable the Client Authentication on the SSL Listener, it is necessary to change the Kafka Server configuration:

ssl.client.auth=requested

ssl.endpoint.identification.algorithm=""

The default value for ssl.client.auth is required, which implies that the client trying to connect to Kafka has to present a digital certificate to authenticate. Although extremely secure, SSL authentication of the clients connecting to Kafka requires the overhead of issuing digital certificates for each individual client and managing their expiration. Most customers prefer not to deal with this overhead and instead configure Kafka to use SASL.

The parameter ssl.endpoint.identification.algorithm must be set to an empty string.

2.2-) SASL Authentication

To enable SASL the required information can be found at this link: https://kafka.apache.org/41/security/authentication-using-sasl/

In my test environment I configured a Listener with the SASL_SSL security protocol on Kafka, using the SCRAM-SHA-512 mechanism for password hashing.

listeners=SASL_SSL://:9093

advertised.listeners=SASL_SSL://<kafka-hostname>:9093

sasl.enabled.mechanisms=SCRAM-SHA-512

In my test environment, with three Kafka Servers in the cluster, this configuration needs to be present on all of them.
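Because the SASL_SSL listener still terminates SSL on the broker side, each broker also needs its keystore and truststore configured. A minimal sketch with placeholder paths and passwords (the file names are illustrative):

ssl.keystore.location=/path/to/kafka.broker.keystore.jks
ssl.keystore.password=<keystore-passwd>
ssl.key.password=<key-passwd>
ssl.truststore.location=/path/to/kafka.broker.truststore.jks
ssl.truststore.password=<truststore-passwd>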

2.3-) Authorizer and ACLs

To enable ACLs an Authorizer needs to be configured on each Broker and Controller of the Kafka cluster.

The relevant information can be found at this link: https://kafka.apache.org/41/security/authorization-and-acls/

My tests were executed on a cluster in KRaft mode, so I had to add the following properties to the configuration files for the Kafka Brokers and Controller:

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

allow.everyone.if.no.acl.found=true

super.users=User:broker;User:client

The authorizer.class.name points to the default Authorizer that is shipped with Kafka. 

Setting allow.everyone.if.no.acl.found=true is a permissive catch-all that allows any authenticated user to access a resource that has no specific ACL attached to it. The default value of this parameter is false, which will deny access to resources without a specific ACL.

The super.users=User:broker;User:client is a list of Principals (aka Users) that are considered Super Users, which are allowed to access any resource.
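With the Authorizer in place, specific ACLs can be granted with the Kafka CLI if you later tighten allow.everyone.if.no.acl.found. An illustrative sketch granting a principal produce access to a single topic (the principal, topic, and config file are placeholders):

$ kafka-acls.sh --bootstrap-server <Server URI> \
  --add --allow-principal User:client \
  --operation Write --topic <TOPIC> \
  --command-config <Config File for Connection>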

2.4-) Create SCRAM Credentials

It is also important to create the SASL credentials on the Kafka Brokers for each user that is going to authenticate on the Cluster, by using the Kafka CLI:

$ kafka-configs.sh --bootstrap-server <Server URI> \
  --alter --add-config 'SCRAM-SHA-512=[password=<passwd>]' \
  --entity-type users --entity-name <USER> \
  --command-config <Config File for Connection>

It is important to create the credentials with the same SCRAM mechanism that is configured on the Listener. In my tests, I configured the Listener with SCRAM-SHA-512, but it is possible to use SCRAM-SHA-256 instead.
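The <Config File for Connection> passed to --command-config is a standard Kafka client properties file. An illustrative sketch, assuming the administrative connection also goes through the SASL_SSL listener (user, passwords, and paths are placeholders):

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<USER>" password="<passwd>";
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=<truststore-passwd>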

You can verify that the SCRAM Credentials are created on the cluster by running this Kafka CLI:

$ kafka-configs.sh --bootstrap-server <Server URI> \
  --describe \
  --entity-type users \
  --command-config <Config File for Connection>

Each User (aka Principal) with associated SASL credentials will be listed in the output, like the one below:

SCRAM credential configs for user-principal '<USER>' are SCRAM-SHA-512=iterations=4096

At this point, we have done all the configuration required on the Kafka side.

3-) Configuring IRIS

The tests were executed using an IRIS version soon to be GA, 2025.1.3 (Build 457U), which includes some enhancements necessary to use advanced Kafka security options like SASL SCRAM-SHA-512 or SCRAM-SHA-256.

To test this configuration, I created a simple Business Operation, in a Production in IRIS.

3.1-) Create a Production

Create a new Interoperability Production in your Namespace; if necessary, create a Namespace that is capable of running Interoperability Productions:

Create a new Production, and select the options to enable Testing and Trace Events:

3.2-) Create a Simple Business Operation

Create a new Business Operation using the out-of-the-box class EnsLib.Kafka.Operation. I named mine “Kafka Producer”. Select the “Enable Now” option.

On the Business Operation Settings make the necessary changes to the Kafka Settings section:

The above example reflects my test environment; make the necessary adjustments to reflect your environment.

  • On the Servers setting, change the broker hostname and port number to point to a Kafka Listener that has the SASL_SSL security protocol enabled;
  • Credentials points to the Interoperability Credential holding the user ID and password created on the Kafka broker (the user for which SCRAM credentials were created); see details below;
  • SecurityProtocol has to be set to SASL_SSL;
  • SASLMechanism is set to SCRAM-SHA-512, or the mechanism that is configured on your environment;
  • TrustStoreLocation and KeyStoreLocation are the files containing the SSL TrustStore and KeyStore that are going to be used to create the SSL tunnel. Those files are created on the Kafka Server and should be copied over in jks or p12 format (a conversion sketch follows this list);
  • TrustStoreCredentials, KeyStoreCredentials and KeyCredentials point to Interoperability Credentials that hold the passwords to access the TrustStore, KeyStore and Key files; see below;
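If the stores were generated on the Kafka server in JKS format and you need PKCS12 instead, keytool can convert them. A minimal sketch with illustrative file names and placeholder passwords:

$ keytool -importkeystore \
  -srckeystore kafka.client.keystore.jks -srcstoretype JKS -srcstorepass <passwd> \
  -destkeystore kafka.client.keystore.p12 -deststoretype PKCS12 -deststorepass <passwd>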


3.3-) Create the Credentials

On the Credentials page:

Create two credentials:

The first credential is used to authenticate on the Kafka Broker, with the SCRAM credentials created in step 2.4 above.

The second credential holds the password used to access the TrustStore, KeyStore, and Key files.

In my case all files have the same password, so just one credential opens all files.

3.4-) Sending a Test Message

Now we can test the configuration settings and send a Test Message, using the Test function of the Business Operation.

If everything is configured properly you should see the Test Results instantly. If the Test Results progress bar takes some time to progress, this is an indication that some misconfiguration has occurred, and you need to double-check your settings.

Clicking on the Visual Trace shows the results:


And if you have a Kafka UI available you can see the message on the Topic, or you can see the messages using the Kafka CLI:

$ kafka-console-consumer.sh \
  --bootstrap-server <Server URI> \
  --topic <TOPIC> --from-beginning \
  --consumer.config <consumer.config.properties>

The above CLI shows all messages on the Topic <TOPIC> and runs until it is interrupted by Ctrl-C.

This concludes these quick tests.

Best of luck!

Article · January 23, 2026 · 2 min read

2024.3 Audit Highlight - Finer-Grained SQL Statement Events

Auditing is a critical capability that helps keep your server secure, and for some time now we have included the ability to audit SQL statements executed on the server.

With v2024.3 now available, we are providing finer-grained options for defining which of these events to audit.

Previously, you could decide to audit SQL statements according to the access mechanism, for example statements executed via JDBC/ODBC vs. Embedded SQL (e.g., using &sql in your code) vs. Dynamic SQL (e.g., using %SQL.Statement in code, the SQL Query Execution page in the Management Portal, or the SQL Shell in the terminal). Now, in addition, you can decide to audit only certain types of statements (per access mechanism, as was possible before). These types are:

  • DDL (Data Definition Language) - Statements that change database elements, settings, or other things that are not data. E.g., CREATE / ALTER TABLE

  • DML (Data Manipulation Language) - Statements that change data. E.g., INSERT / UPDATE / DELETE

  • Utility - Statements that change neither data nor metadata, but rather the status of the process or of machine learning models.

  • Query - Statements that execute or define a query that can return data. E.g., SELECT

This is the list of related events you can now find in the Audit Events configuration:

See more information in the related Documentation and the related Release Notes.

Article · January 23, 2026 · 5 min read

Finding your way on the API Manager roads


A question that quickly arises when configuring IAM (aka Kong Gateway) is how many routes should be created to reach all the business objects in an IRIS API.

A common mistake is to create one route per business object, unnecessarily multiplying the number of routes.

Let's take the example of the Supply Chain Orchestrator Data Model API:

/api/SC/scmodel/v1/objects
/api/SC/scmodel/v1/objects/BOM
/api/SC/scmodel/v1/objects/Carrier
/api/SC/scmodel/v1/objects/Customer
/api/SC/scmodel/v1/objects/DemandPlan
/api/SC/scmodel/v1/objects/InventoryThreshold
/api/SC/scmodel/v1/objects/Issue
/api/SC/scmodel/v1/objects/LeadtimeVariant
/api/SC/scmodel/v1/objects/Location
/api/SC/scmodel/v1/objects/MfgOrder
/api/SC/scmodel/v1/objects/Milestone
/api/SC/scmodel/v1/objects/Product
/api/SC/scmodel/v1/objects/ProductInventory
/api/SC/scmodel/v1/objects/ProductSupplier
/api/SC/scmodel/v1/objects/ProductionCapacity
/api/SC/scmodel/v1/objects/PurchaseOrder
/api/SC/scmodel/v1/objects/PurchaseOrderLine
/api/SC/scmodel/v1/objects/RouteLeg
/api/SC/scmodel/v1/objects/SCException
/api/SC/scmodel/v1/objects/SLA
/api/SC/scmodel/v1/objects/SalesOrder
/api/SC/scmodel/v1/objects/SalesOrderLine
/api/SC/scmodel/v1/objects/SalesShipment
/api/SC/scmodel/v1/objects/SalesShipmentLine
/api/SC/scmodel/v1/objects/ServiceSLA
/api/SC/scmodel/v1/objects/ShipmentMilestone
/api/SC/scmodel/v1/objects/ShipmentStop
/api/SC/scmodel/v1/objects/ShipmentTracking
/api/SC/scmodel/v1/objects/ShippingCost
/api/SC/scmodel/v1/objects/Supplier
/api/SC/scmodel/v1/objects/SupplyPlan
/api/SC/scmodel/v1/objects/SupplyShipment
/api/SC/scmodel/v1/objects/SupplyShipmentLine
/api/SC/scmodel/v1/objects/TrackingService

Creating a route in API Manager for each business object would be tedious and strongly discouraged, and would not protect you from changes during a version upgrade.


Example of a clean cut (recommended)

Route           | Usage                      | Plugins
/objects$       | global list of all objects | Cache, rate limit
/objects/{type} | BOM / Carrier / Customer   | Business Monitoring

Kong doesn't natively understand /objects/{type}.

👉 It needs to be translated using a regex :

{type} = [^/]+

We start by creating a service scmodel in Kong 

### Create a new service in Kong for the SC Model application
curl -i -X POST \
--url http://localhost:8001/services/ \
--data 'name=scmodel' \
--data 'url=http://192.168.65.1:52880'

Then we add the route to retrieve the list of all objects, using $ to anchor the exact route:

### Create a route for the SC Model service Objects
curl -i -X POST \
--url http://localhost:8001/services/scmodel/routes \
--data 'name=scmodel-objects' \
--data 'paths=~/api/SC/scmodel/v1/objects$' \
--data 'strip_path=false'

Then we create a generic route for all object types using [^/]+$ as the regex:

### Create a route for the SC Model for all object types
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-type \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/[^/]+$' \
  --data strip_path=false

If you want to monitor or manage one type of object in particular with different plugins or different auth, rate-limits, etc., you have to create a specific route for it, using regex_priority:

If multiple regex routes match, Kong chooses the highest regex_priority, otherwise… the internal (non-deterministic) order.

Example below for Customer and Product objects :

### Create a route for the SC Model service Customer
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-customer \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/Customer$' \
  --data 'regex_priority=10' \
  --data strip_path=false

### Create a route for the SC Model service Product
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-product \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/Product$' \
  --data 'regex_priority=10' \
  --data strip_path=false

You can also create a route for a limited list of object types by using a regex alternation (A|B|C).
Example below for all sales routes :

### Create a route for the SC Model Sales routes
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-sales \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/(SalesOrder|SalesOrderLine|SalesShipment|SalesShipmentLine)$' \
  --data 'regex_priority=10' \
  --data strip_path=false


You can then find the route configuration in the API Manager portal :
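You can also verify them from the Kong Admin API (assuming the same Admin API address used in the examples above):

### List the routes attached to the scmodel service
curl -s http://localhost:8001/services/scmodel/routes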

To test it, just make some calls to different routes :

### Get Object BOM
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/BOM'

### Get Object Carrier
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Carrier'

### Get Object Customer
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Customer'

### Get Object DemandPlan
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/DemandPlan'

### Get Object InventoryThreshold
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/InventoryThreshold'

### Get Object Issue
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Issue'

### Get Object Product
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Product'

### Get Object SalesOrder
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesOrder'

### Get Object SalesOrderLine
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesOrderLine'

### Get Object SalesShipment
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesShipment'

### Get Object SalesShipmentLine
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesShipmentLine'

This allows you to manipulate your Customer business object with a finer level of detail than other object types:

Analytics for all sales objects :

Analytics of the other object types :

ディスカッション (0)1
続けるにはログインするか新規登録を行ってください