Article
· January 23, 2026 2m read

How to Expand the Database Size

This article is from the InterSystems FAQ site.

This article explains how to expand the size of a database.


1. If you want to expand the database right now

2. If you want to set the size by which the database expands when it runs out of free space



1. If you want to expand the database right now

Specify the desired database size via the Management Portal or a command.

Management Portal:
[System Administration] > [Configuration] > [System Configuration] > [Local Databases]

Select the target database and, in the database properties dialog, enter the desired size (i.e., the size after expansion) in the "Current" size field.
The database is expanded immediately after you click Save.

 

In the example above, a database that was originally 11 MB was expanded to 50 MB, i.e., by 39 MB, so messages.log records entries like the following:

08/04/25-11:27:01:333 (3468) 0 [Database.StartExpansion] Starting Expansion for database c:\intersystems\iris\mgr\user\. 39 MB requested.
08/04/25-11:27:01:424 (3468) 0 [Database.FullExpansion] Expansion completed for database c:\intersystems\iris\mgr\user\. Expanded by 39 MB.


To do this from the command line:

// Switch to the %SYS namespace
Set $Namespace="%SYS"
// Set the directory of the database to modify
Set Directory="C:\InterSystems\IRIS\mgr\user"
Set db=##Class(SYS.Database).%OpenId(Directory)

// To expand the size, set the Size property (in MB) and save
Set db.Size=100
Set status=db.%Save()
// Report any error returned by %Save()
Do:$system.Status.IsError(status) $system.Status.DisplayError(status)


2. If you want to set the size by which the database expands when it runs out of free space

A database file is expanded when the database has no free blocks left and an update needs to allocate new blocks.
With the default (recommended) setting of zero (0), the database expands by 12% of its current size or 10 MB, whichever is larger.
To use a custom value, specify the database's expansion size via the Management Portal or a command.
At the next expansion, the database grows by the specified amount.
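The default rule above can be sketched as a tiny helper (illustration only; this function is not part of IRIS, and the rounding is an assumption):

```python
def default_expansion_mb(current_size_mb: float) -> int:
    """Default behavior (ExpansionSize = 0): expand by 12% of the
    current size or 10 MB, whichever is larger."""
    return max(round(current_size_mb * 0.12), 10)

# A 50 MB database expands by 10 MB (12% would only be 6 MB);
# a 1000 MB database expands by 120 MB.
print(default_expansion_mb(50), default_expansion_mb(1000))  # → 10 120
```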

Management Portal:
[System Administration] > [Configuration] > [System Configuration] > [Local Databases]

 

To do this from the command line:

// Switch to the %SYS namespace
Set $Namespace="%SYS"
// Set the directory of the database to modify
Set Directory="C:\InterSystems\IRIS\mgr\user"
Set db=##Class(SYS.Database).%OpenId(Directory)

// Change the expansion size (ExpansionSize, in MB)
Set db.ExpansionSize = 100
Set status=db.%Save()
Article
· January 23, 2026 2m read

Temporary files and singletons: cleaning up after use

I have repeatedly run into cases where I need to use a temporary file or folder and delete it later.

The most natural solution is to follow the recommendations of "Robust Error Handling and Cleanup in ObjectScript", with a try/catch/pseudo-finally block or a registered object that handles the cleanup in its destructor. The %Stream.File* classes also have a RemoveOnClose property you can set, but use it with care, since you could accidentally delete an important file. Moreover, this property is reset by calls to %Save(), so you have to set it back to 1 after each use.
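A minimal ObjectScript sketch of the RemoveOnClose pattern just described (illustrative, using %Stream.FileCharacter; not code from the article):

```objectscript
Set stream = ##class(%Stream.FileCharacter).%New()
Set stream.Filename = ##class(%Library.File).TempFilename()
Set stream.RemoveOnClose = 1
Do stream.Write("temporary contents")
Set sc = stream.%Save()
// %Save() resets RemoveOnClose, so set it back if the stream lives on
Set stream.RemoveOnClose = 1
// When the last reference to the stream goes out of scope,
// the underlying file is removed
```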

There is, however, a special case: suppose you need the temporary file to persist up the call stack. For example:

ClassMethod MethodA()
{
    Do ..MethodB(.filename)
    // Do something else with the filename
}

ClassMethod MethodB(Output filename)
{
    // Create a temp file and set filename to the file's name
    Set filename = ##class(%Library.File).TempFilename()
    
    //... and probably do some other stuff
}

You could still pass around %Stream.File* objects with RemoveOnClose set to 1, but here we are interested only in temporary files.

This is where the "Singleton" concept comes in. IPM provides a basic implementation in %IPM.General.Singleton, which you can extend to cover different use cases. The general behavior and usage pattern are as follows:

  • At a higher stack level, call %Get() on the class to obtain the single instance, which is also reachable via %Get() calls at lower stack levels.
  • When the object goes out of scope at the highest stack level that uses it, the cleanup code runs.

This approach beats a `%` variable because there is no need to check whether it is defined, and it also survives argumentless NEW calls at lower stack levels thanks to some deeper object-manipulation trickery.

For temporary files specifically, IPM also provides a singleton for temp file management. Applied to the problem above, the solution looks like this:

ClassMethod MethodA()
{
    Set tempFileManager = ##class(%IPM.Utils.TempFileManager).%Get()
    Do ..MethodB(.filename)
    // Do something else with the filename
    // The temp file is cleaned up automatically when tempFileManager goes out of scope
}

ClassMethod MethodB(Output filename)
{
    Set tempFileManager = ##class(%IPM.Utils.TempFileManager).%Get()
    // Create a temp file and set filename to the file's name
    Set filename = tempFileManager.GetTempFileName(".md")
    
    //... and probably do some other stuff
}
Question
· January 22, 2026

Email Recipients versus Alert Groups dropdown option

Hi, I have a simple email alert setup (EnsLib.EMail.AlertOperation) where, in the operation, I have the SMTP server and recipient emails configured.

I also see an Alert Groups dropdown option on operations and processes. How is this different from setting up a simple email alert with a recipients list?

Please advise.

Article
· January 22, 2026 9m read

IRIS Cloud Document - Beginner Guide & Sample : Part II - Sample (Dockerized) Java App

This is the second part of an article pair where I walk you through:

  • Part I - Intro and Quick Tour (the previous article)
    • What is it?
    • Spinning up an InterSystems IRIS Cloud Document deployment
    • Taking a quick tour of the service via the service UI
  • Part II - Sample (Dockerized) Java App (this article)
    • Grabbing the connection details and TLS certificate
    • Reviewing a simple Java sample that creates a collection, inserts documents, and queries them
    • Setting up and running the Java (Dockerized) end‑to‑end sample

As mentioned, the goal is to give you a smooth “first run” experience.

Previously we created an IRIS Cloud Document deployment (and took a quick tour); now let's see how we can interact with it from a Java app.

Assuming you want to take this for a spin and go hands-on, you'll need Docker and Git; start by hopping over to the Open Exchange app and cloning the GitHub repo.

 

4. Note the connection details

Once the deployment is running, open it and look at the Overview page. There’s a table called Making External Connections that lists:

  • Hostname (e.g. k8s-your-hostname.elb.us-east-1.amazonaws.com)
  • Port (should be 443)
  • Namespace (should be USER)
  • SQL username (should be SQLAdmin)
  • Password (you set this when creating or configuring access)

Keep those values handy; we’ll plug them into the Java demo.

(These Docs might also help)

  1. Enable external connections
    • Make sure external access is enabled, either for all IP addresses, or your client IP (or IP range) is allowed in the deployment firewall settings.
    • This is done in the Cloud Services Portal when creating the service, or in the section mentioned above.
  2. Download the TLS certificate
    • Cloud Document requires TLS. From the deployment overview there’s a link to download a self‑signed X.509 certificate for your deployment. You’ll use this certificate on your client side to establish a trusted TLS connection. Save it as something like: certs/certificateSQLaaS.pem

That’s all we need from the portal: host, port, namespace, credentials, and the certificate file.

5. Review the Sample Accessing Cloud Document from Java

In general the pattern looks like:

  1. Make a secure connection (Connecting - Docs) - Configure DataSource (server, port, namespace, user, password, TLS).
  2. Ingest some data (Using Document and Collections - Docs) - Get a Collection by name (created automatically the first time). And build JSONObject/JSONArray instances, insert them as Documents.
  3. Query / fetch data back (Querying - Docs) - Query using a ShorthandQuery (string that behaves like a WHERE clause on the collection).

If you’ve used other document databases, this should feel pretty familiar.

The Java driver for Cloud Document lives in the package com.intersystems.document. It gives you three main pieces:

  • DataSource – a connection pool to the Cloud Document server.
  • Document – base class for JSON documents; usually you’ll use its subclasses:
    • JSONObject – JSON object with put() methods for key/value pairs.
    • JSONArray – JSON array with add() methods.
  • Collection – represents a named collection; you can insert, get, getAll, drop, and run queries.

The code and data used in this sample are based directly on the examples provided within our Documentation.

5.1 Making the connection

First, the bits we need for a basic connection:

  • Hostname, port, namespace, user, password – from the deployment’s “external connections” information.
  • The deployment’s X.509 certificate, imported into a Java keystore.
  • A small SSLConfig.properties file so the driver knows which keystore to use.

Building a TLS-enabled DataSource

Here’s a compact example that focuses on the connection itself:

 
Java Connection Code

If SSLConfig.properties and keystore.jks are set up correctly, calling createDataSource() should establish a connection over TLS to your Cloud Document deployment.

The Cloud Document Java driver looks for this SSLConfig.properties file and uses it when you set connectionSecurityLevel to require TLS.

This is what this file would look like:

 
SSLConfig.properties file sample

In the Docker sample I provided there is a script that takes care of this for you.

If you're running your own samples, you can use a line like this one:

keytool -importcert -file /path/to/certs/cloud-document.pem -keystore keystore.jks
  • Answer yes when asked if you want to trust the certificate.
  • Set a password and remember it.

In the Docker sample, our script does this:

 
docker-entrypoint.sh certificate handling

5.2 Ingesting data from Java

Once we have a DataSource, we work with collections and documents.

  • Collection is the named container, like colors or demoPeople.
  • A document is a JSONObject or JSONArray extending Document.

Here’s a small “ingest” example that mirrors the colors JSON file we imported in the UI earlier.

 
Java Ingest Code

A few notes:

  • Collection.getCollection(pool, name) will create the collection on first use if it doesn’t exist.
  • insert() returns the document ID assigned by Cloud Document.
  • insert(List<Document>) does a bulk write and returns all the IDs in a BulkResponse.

This is the same basic pattern you’d use in an application ingesting JSON from a file, a queue, or an API.

5.3 Querying and fetching data

On the Java side you have two main options:

  1. Use the collection-centric APIs (getAll, createShorthandQuery, etc.).
  2. Use regular SQL (for example with JDBC directly) and JSON_TABLE when you want rich SQL projections.

For a first experience, the collection APIs are usually enough.

Listing all documents in a collection and searching for some

 
Java Fetch/Query Code

What’s happening here:

  • getAll() gives you every document in the collection as Document objects.
  • createShorthandQuery("name > 'H'") creates a query that’s conceptually similar to WHERE name > 'H' in SQL.
  • Cursor lets you iterate the results and also ask for a count.

If you later want to bring this into the SQL world, the same collections you touched here can be queried with JSON_TABLE in the SQL UI or via JDBC. That’s one of the nice aspects of Cloud Document: you don’t have to choose between “document API” and “SQL”; you get both.

6. Setting up and Running the Sample

As mentioned, I'm providing a Dockerized sample to make the experience as smooth as possible without requiring you to manually download and install various parts, but if you want you can take the same sample and run it on your own.

The Open Exchange and related GitHub repository include detailed instructions for running it, but at a high level it comes down to:

6.1 Update .env file and place TLS certificate

This is what your environment variables file might look like after you edit it:

 
Environment Variables .env Edited File (example)

6.2 Run docker compose

Just run docker compose up --build and the sample will run.

Behind the scenes, we will:

  • Stage 1: Use a Maven + JDK image to build a shaded JAR.
  • Stage 2: Use a slim JDK image, copy the JAR and SSLConfig.properties, create a keystore from your cert at container startup, then run the JAR.
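Sketched as a Dockerfile, the two stages might look roughly like this (image tags, paths, and file names are assumptions for illustration, not the repo's actual Dockerfile):

```dockerfile
# Stage 1: build a shaded JAR with Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Stage 2: slim runtime image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/app-shaded.jar app.jar
COPY SSLConfig.properties .
# The entrypoint imports the TLS certificate into a keystore, then runs the JAR
COPY docker-entrypoint.sh .
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["./docker-entrypoint.sh"]
```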

Here's a short video demonstrating this:

Wrapping up

If you’re new to InterSystems but not new to programming, the basic path to a good first experience with IRIS Cloud Document is:

  1. Bring the service up: create a deployment, note host/port/namespace/credentials, download the certificate.
  2. Kick the tires in the web portal: upload a JSON file, import into a collection, browse with the Collection Browser, and run a simple SQL query with JSON_TABLE.
  3. Wire it into Java:
    • create a TLS-enabled DataSource (with SSLConfig.properties + keystore),
    • use Collection and Document to ingest data,
    • and query with getAll and shorthand queries.

From there you can iterate toward more interesting things: updates, deletes, richer queries, combining Cloud Document data with relational data, or using other drivers like .NET.

But if you’ve followed along to this point and seen your own JSON documents come back from the Java code, you’ve already taken the most important step: you’re up and running in the InterSystems ecosystem.

Enjoy!
