
Article · October 21, 2025 · 3m read

What I’ve Learned from Multiple Data Migrations

Hello!!!

Data migration often sounds like a simple "move data from A to B" task until you actually do it. In reality, it is a complex process that blends planning, validation, testing, and technical precision.

Over several projects where I handled data migration into an HIS running on IRIS (TrakCare), I realized that success comes from a mix of discipline and automation.

Here are a few points which I want to highlight.

1. Start with a Defined Data Format.

Before you even open your first file, make sure everyone, especially data providers, clearly understands the exact data format you expect. Defining templates early avoids unnecessary back-and-forth and rework later.

While Excel or CSV formats are common, I personally feel using a tab-delimited text file (.txt) for data upload is best. It's lightweight, consistent, and avoids issues with commas inside text fields. 

PatID   DOB Gender  AdmDate
10001   2000-01-02  M   2025-10-01
10002   1998-01-05  F   2025-10-05
10005   1980-08-23  M   2025-10-15

Make sure the date format in the file is correct and consistent throughout. These files are usually exported from an Excel file, and a basic Excel user can easily hand you wrong or inconsistent date formats. Wrong date formats will cause trouble when you convert them to $HOROLOG.
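For example, a minimal sketch of checking one of those dates while converting it to horolog, using $ZDATEH with format 3 (the ODBC YYYY-MM-DD format used in the sample above), could look like this:

    ; Minimal sketch: convert a YYYY-MM-DD date to $HOROLOG and catch bad dates.
    ; Format 3 is the ODBC format (YYYY-MM-DD).
    Set dob = "2000-01-02"
    Try {
        Set h = $ZDATEH(dob, 3)
        Write "Horolog value: ", h, !
    } Catch ex {
        Write "Invalid date: ", dob, !
    }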

2. Validate data before you load it.

Never, ever skip data validation. At the very least, give the file a basic review. IRIS gives us the performance and flexibility to handle large volumes, but that is only useful if your data is clean.

ALWAYS keep a flag (0 or 1) as a parameter of your upload function, where 0 means you only want to validate the data without processing it, and 1 means process the data.

If validation fails for any record, maintain an error log that tells you exactly which record is throwing the error. If your code cannot point you to the failing record, tracking down the bad data becomes very difficult.
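As a rough illustration, here is a minimal sketch of such an upload method; the class name, file layout, and log structure are hypothetical and simply mirror the sample file above, and the process flag decides whether records are only validated or actually loaded.

    ; Minimal sketch (hypothetical names): upload method with a validate-only /
    ; process flag; failed records are written to an error log keyed by record number.
    ClassMethod UploadPatients(file As %String, process As %Boolean = 0) As %Status
    {
        Set stream = ##class(%Stream.FileCharacter).%New()
        Set sc = stream.LinkToFile(file)
        If $$$ISERR(sc) Quit sc
        Do stream.ReadLine()                      ; skip the header row
        Set recNo = 0
        While 'stream.AtEnd {
            Set line = stream.ReadLine(), recNo = recNo + 1
            Set patId = $Piece(line, $Char(9), 1)
            If (patId = "") || ($Length(line, $Char(9)) '= 4) {
                ; keyed by record number so even rows without a PatID are traceable
                Set ^LOG("xUpload", +$Horolog, recNo) = "FAIL^^Missing PatID or wrong column count"
                Continue
            }
            If process {
                ; flag = 1: perform the actual insert here (see point 4)
            }
        }
        Quit $$$OK
    }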

3. Keep detailed and searchable logs.

You can use either globals or tables to capture logs. Make sure you capture the timestamp, the filename, a record identifier (something easily traceable), and the status.

If the data is small, you can ignore success logs and capture only the error logs. Below is an example of how I store error logs.

Set ^LOG("xUpload",+$Horolog,patId)=status_"^"_SQLCODE_"^"_$Get(%msg)

Every insert sets SQLCODE, and if there is an error while inserting, %msg gives us the corresponding error message.

The same logging approach can also be used while validating data.

4. Insert data in an Efficient and Controlled Manner.

Efficiency in insertion is not just about speed; it is about data consistency, auditability, and control. Before inserting, make sure every single record has passed validation and that no mandatory fields are skipped. Missing required fields can silently break relationships or lead to rejected records later in the workflow.

When performing insert:

  • Always include InsertDateTime and UpdateDateTime fields for tracking. This helps with reconciliation, incremental updates, and debugging (see the sketch after this list).
  • Maintain a dedicated back-end user for all automated or migration-related activities. This makes it easier to trace changes in audit logs and clearly separates system actions from human input.
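For instance, a minimal embedded SQL sketch (the table and column names here are hypothetical) that stamps both audit fields and reuses the error-log structure from point 3 might look like this:

    ; Minimal sketch (hypothetical table MyApp.Patient): insert with audit
    ; timestamps and log failures in the ^LOG structure from point 3.
    &sql(INSERT INTO MyApp.Patient
            (PatID, DOB, Gender, AdmDate, InsertDateTime, UpdateDateTime)
         VALUES (:patId, :dob, :gender, :admDate, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP))
    If SQLCODE '= 0 {
        Set ^LOG("xUpload", +$Horolog, patId) = "FAIL^"_SQLCODE_"^"_$Get(%msg)
    }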

5. Reconcile after Migration/Upload.

Once the activity is completed, perform a reconciliation between source and destination:

  • Record count comparison.
  • Field-by-field checksum validation.
  • Referential integrity checks.

Even a simple hash-based comparison script can help confirm that nothing was lost or altered.
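As one example, a minimal record-count reconciliation (again with a hypothetical file path and table name) can be as simple as:

    ; Minimal sketch (hypothetical names): compare the number of data rows in the
    ; source file with the row count in the destination table.
    Set fileCount = -1                            ; -1 so the header row is not counted
    Set stream = ##class(%Stream.FileCharacter).%New()
    Do stream.LinkToFile("/data/patients.txt")
    While 'stream.AtEnd {
        Do stream.ReadLine()
        Set fileCount = fileCount + 1
    }
    &sql(SELECT COUNT(*) INTO :tableCount FROM MyApp.Patient)
    Write "Source rows: ", fileCount, "  Destination rows: ", tableCount, !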

 

These are some of the basic yet essential practices for smooth and reliable data migration. Validations, proper logging, consistent inserts, and attention to master data make a huge difference in quality and traceability.

Keep it clean, automated and well documented. The rest will fall into place.

Feel free to reach out to me for any queries, or discussions around IRIS data migration!

Article · October 21, 2025 · 2m read

Practical use of XECUTE (InterSystems ObjectScript)

If you start with InterSystems ObjectScript, you will meet the XECUTE command.
And beginners may ask: where and why might I need to use it?

The official documentation has a rich collection of code snippets, but no practical use case.
Just recently, I came across one that I'd like to share with you.

The scenario:

When you build an IRIS container with Docker, in most cases
you run the initialization script

iris session iris < iris.script 

This means you open a terminal session and feed your input line by line from the script.
That's fine and easy if you call methods, functions, or commands.
But looping over several lines is not possible.
You may argue that writing a FOR loop on a single line is no great feat.
Right, but lines are not endless, and the code should remain maintainable.

A different goal was to leave no code traces behind after setup.
So iris.script was the location to apply it.

The solution

XECUTE allowed me to cascade my multi-line code.
To avoid conflicts with variable scoping, I just used % variables.
BTW: the goal was to populate some demo lookup tables.
Just for convenience, I used method names from %PopulateUtils as table names.

   ;; generate some demo lookup tables   
   ; inner loop by table
    set %1="for j=1:1:5+$random(10) XECUTE %2,%3,%4"
    ; populate with random values
    set %2="set %key=##class(%PopulateUtils).LastName()"
    set %3="set %val=$ClassMethod(""%PopulateUtils"",%tab)"
    ; write the table
    set %4="set ^Ens.LookupTable(%tab,%key)=%val"
    set %5="set ^Ens.LookupTable(%tab)=$lb($h) write !,""LookupTable "",%tab"
    ; main loop
    XECUTE "for %tab=""FirstName"",""City"",""Company"",""Street"",""SSN"" XECUTE %1,%5"
    ;; just in Docker

The result satisfied the requirements without leaving permanent traces behind,
and it did not interfere with the code deposited in IPM.
It was used only once, during the Docker container build.
 

Announcement · October 21, 2025

Developer Community Search Update This Weekend

Hello, Community!

This weekend we will be updating the Developer Community search engine to make it faster and more accurate (or so we hope 😉).


During the update, you may experience some slowness or brief interruptions in search performance. If you notice anything unusual or run into any issues, let us know in the comments below: your feedback helps us make sure everything runs smoothly.

Thank you for your patience and for helping us make the Community even better.

Question · October 21, 2025

How to process EnsLib.RecordMap.Service.FTPService files one by one?

Hi community,

I have a service that uses EnsLib.RecordMap.Service.FTPService to capture files in an FTP directory.

Instead of loading them all at once, I need it to process them one at a time.

I have a class that extends this one because it does some preprocessing, saves everything in the RecordMap class, and then processes all the records at once.

When I invoke the BP, it does so through the call set tStatus = ..SendRequest(message, 1).

I've set the SynchronousSend flag to 1, but it continues processing all the files at once.

Is there a way to prevent the process from continuing to the next file until the BP indicates it's finished?

Best regards.
