
Article · October 10, 2025 · 3m read

Trapstar Puffer Jacket – The Streetwear Trend in Germany

 

Trapstar is a well-known London-based brand that has made a name for itself worldwide in recent years. The brand is particularly popular in Germany. The Trapstar Puffer Jacket is one of its most sought-after pieces. It combines style, comfort, and an urban look. Many young people see it not just as clothing, but also as an expression of their personality.

Why a Trapstar Puffer Jacket

A Trapstar puffer jacket offers more than just warmth. It's a statement. Streetwear is no longer just fashion these days, but a way of life. In cities like Berlin, Hamburg, or Munich, you see more and more people wearing these jackets. They pair perfectly with sneakers, hoodies, and sweatpants. The brand stands for authenticity, and anyone wearing a Trapstar puffer jacket shows that they understand and embrace street style.

Design and features

What makes a Trapstar puffer jacket special is its unique design. These jackets often feature eye-catching logos and embroidery. Some models are simple in black or gray, while others feature bold colors like red or blue.

The material is high-quality. The jacket keeps you warm without being too heavy. Many models are water-repellent, making them perfect for rainy days in Germany.

Trapstar Puffer Jacket for everyday use

In Germany, the Trapstar Puffer Jacket isn't just worn for leisure. It's versatile. Whether for a stroll around town, at a club, or meeting up with friends – it's always a good fit. Young people between 16 and 30 years old especially love this look. The streetwear crowd often pairs the jacket with jeans, caps, or the well-known Trapstar hoodies.

Trapstar in Germany

Germany is an important market for the brand. Online shops and streetwear stores in Berlin and Frankfurt regularly carry Trapstar puffer jackets. The trend originated primarily in London but quickly spread throughout Europe. Many German rappers and influencers wear Trapstar, further increasing its popularity.

Quality and comfort

In addition to its looks, the Trapstar Puffer Jacket also impresses with its comfort. It's lightweight and keeps you warm. This is a particular advantage in the German winter. The materials are durable and hard-wearing, and the inner lining ensures a comfortable fit. Many buyers praise the fit.

Prices and availability

A Trapstar puffer jacket isn't a cheap product. Prices usually range between €250 and €400. Nevertheless, many see it as an investment. Streetwear fans know that quality and brand come at a price. In Germany, the jacket is available both online and in select boutiques. Some limited-edition collections sell out quickly.

Trapstar Puffer Jacket for women and men

The jackets are unisex, making them equally appealing to both men and women. Women often pair them with skinny jeans or sneakers, while men prefer to wear them with sweatpants or caps. The versatility of the Trapstar Puffer Jacket makes it appealing to both genders.

Styling tips in Germany

A few simple styling rules create a true streetwear look:

  • Pair the Trapstar Puffer Jacket with a Trapstar Hoodie for a complete outfit.
  • Sneakers are a must, especially brands like Nike or Adidas.
  • Accessories such as caps or bags complete the outfit.

Trapstar and the music scene

Trapstar is closely linked to the rap and grime scenes. Many artists wear the brand on stage or in music videos. This influence also reaches Germany. Young rap fans see the brand as part of their culture. The Trapstar Puffer Jacket is therefore not just fashion, but also a symbol of music and lifestyle.

Sustainability and Trapstar

More and more shoppers in Germany are paying attention to sustainability. Trapstar is increasingly focusing on durable materials. Even though the brand isn't a classic eco-label, many value quality that will last for years. A high-quality Trapstar Puffer Jacket replaces many other jackets. This also fits with the idea of sustainability.

Announcement · October 10, 2025

[Video] MayVilleHop after 1 year of use: territorial coordination & population health in Mayenne

Hello Community!

Enjoy the new video on the InterSystems France YouTube channel:

📺 MayVilleHop after 1 year of use: territorial coordination & population health in Mayenne

Feedback from the GHT de la Mayenne & du Haut Anjou on its use of the MayVilleHop platform: city-hospital coordination, setting up care pathways, and population-level responsibility for better patient care across the territory, with an analysis of the results after one year of use.

Speakers:
🗣  Vincent Errera, Deputy Director of the GHT de la Mayenne et du Haut-Anjou
🗣  Émilie Boudonnet Peloin, Care Pathway Nurse at the GHT de la Mayenne et du Haut-Anjou
🗣  Nicolas Eiferman, General Manager, InterSystems France & Benelux
🗣  Dr Hervé Rivière, Medical Director, InterSystems France

Subscribe to our YouTube channel for more videos!

Article · October 10, 2025 · 9m read

IRIS install automation using Ansible

Deploying new IRIS instances can be a time-consuming task, especially when setting up multiple environments with mirrored configurations.

I’ve encountered this issue many times and want to share my experience and recommendations for using Ansible to streamline the IRIS installation process. My approach also includes handling additional tasks typically performed before and after installing IRIS.

This guide assumes you have a basic understanding of how Ansible works, so I won’t go into much detail on its fundamentals. However, if you have questions about anything mentioned here, feel free to ask in the comments below.

The examples provided in this guide were tested using Ansible 3.6 on a Red Hat 8 server, with IRIS 2023.1.1 and Red Hat 8 as the client environment. Other versions of Ansible, Red Hat (or other UNIX flavors), and IRIS may also work, but your mileage may vary.

 

Ansible install

The Ansible control node must run a Linux distribution. We use Red Hat 8 in this article, but other Linux distros and versions should work as well.

To install the Ansible packages, you must first install EPEL:

[ansible@auto01 ansible]$ yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

Then install Ansible:

[ansible@auto01 ansible]$ yum install ansible

In addition to the packages, Ansible requires SSH access to remote servers. I recommend creating an SSH key pair, which is more secure than using traditional passwords. Also, the user used to connect to remote servers must have administrative privileges (i.e., be part of the wheel group).

 

Files and folders

To maintain an organized structure, I recommend the following files and folders under the ansible directory:

[ansible@auto01 ansible]$ ls -l
total 4
-rw-r--r--. 1 ansible ansible 247 Dec  5 00:57 ansible.cfg
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 files
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 inventory
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 library
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 playbooks
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 templates
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 vars
drwxrwxr-x. 2 ansible ansible   6 Dec  5 00:56 vault
File/Folder: Description
ansible.cfg: Ansible configuration file. Contains directives on how Ansible behaves.
files: Extra files needed by playbooks, such as the IRIS install tar.gz file.
inventory: Host inventory files. You can have one large inventory file or multiple smaller ones; splitting the inventory takes more effort when running playbooks on multiple hosts.
library: Extra Ansible library files. Not required for these examples, but useful for future extensions.
playbooks: All the playbooks that you develop, including the IRIS installation playbook discussed below.
templates: Template files used by playbooks. These are transferred to servers and instantiated with the correct parameters.
vars: Variables available to all playbooks.
vault: Sensitive variables accessible only through the ansible-vault command. Useful for handling passwords.
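The layout above can be bootstrapped with a short script. A minimal Python sketch (the directory names are taken from the listing above; the script itself is not part of Ansible):

```python
import os

# Folder names from the recommended layout above
ANSIBLE_DIRS = ["files", "inventory", "library", "playbooks", "templates", "vars", "vault"]

def create_layout(base):
    """Create the recommended Ansible folder layout under `base`."""
    os.makedirs(base, exist_ok=True)
    # Touch an empty ansible.cfg so the directory is usable immediately
    open(os.path.join(base, "ansible.cfg"), "a").close()
    for d in ANSIBLE_DIRS:
        os.makedirs(os.path.join(base, d), exist_ok=True)
```
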

 

After setting up this folder structure, copy the IRIS installer and IRIS license key into the files folder. It should look like this:

[ansible@auto01 ansible]$ ls -l files/
total 759976
-rw-rw-r--. 1 ansible ansible 778207913 Dec  5 14:32 IRISHealth-2023.1.1.380.0.22870-lnxrh8x64.tar.gz
-rw-rw-r--. 1 ansible ansible      1160 Sep  5 19:13 iris.key

 

The inventory

To run playbooks in Ansible, you must define your inventory of servers. There are several ways to do this, each with its own advantages. In this article, we’ll use a single file to define all servers.

The servers.yml file will contain the entire inventory, listing each server along with the variables required for the IRIS installation. Here’s an example:

[ansible@auto01 ansible]$ cat inventory/servers.yml 
---
all:
  hosts:
    test01.mydomain:
      iris_user: irisusr
      iris_group: irisgrp
      mgr_user: irisown
      mgr_group: irismgr
      platform: lnxrh8x64
      iris_cmd: iris
      iris_instances:
        - name: TEST01
          superserver_port: 51773
          webserver_port: 52773
          binary_file: IRISHealth-2023.1.1.380.0.22870-lnxrh8x64
          key_file: iris.key
          install_dir: /test/iris
          jrnpri_dir: /test/jrnpri
          jrnsec_dir: /test/jrnsec
          config_globals: 16384
          config_errlog: 10000
          config_routines: "0,128,0,128,0,1024"
          config_gmheap: 1048576
          config_locksiz: 128057344
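With a larger inventory, it helps to sanity-check that every instance entry defines the keys the playbook expects before running anything. A hypothetical Python validator (the key names mirror the inventory above; this helper is not part of Ansible):

```python
# Keys each iris_instances entry must define for the playbook below to work
REQUIRED_INSTANCE_KEYS = {
    "name", "superserver_port", "webserver_port",
    "binary_file", "key_file", "install_dir",
    "jrnpri_dir", "jrnsec_dir",
}

def missing_keys(instance):
    """Return the required keys absent from one iris_instances entry."""
    return sorted(REQUIRED_INSTANCE_KEYS - set(instance))

# Entry matching the TEST01 example above
test01 = {
    "name": "TEST01", "superserver_port": 51773, "webserver_port": 52773,
    "binary_file": "IRISHealth-2023.1.1.380.0.22870-lnxrh8x64",
    "key_file": "iris.key", "install_dir": "/test/iris",
    "jrnpri_dir": "/test/jrnpri", "jrnsec_dir": "/test/jrnsec",
}
```
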

 

The vault

To keep passwords secure, create a vault file containing the passwords for the IRIS SuperUser and CSPSystem accounts.

Use the following command to edit the default vault file:

[ansible@auto01 ansible]$ ansible-vault edit vault/defaults.yml
---
# Default passwords
iris_user_passwd: "Ch4ngeTh!s"

 

The playbook

To perform an IRIS installation, several tasks must be executed on the target server. These tasks are grouped and ordered in a file called a playbook.
A playbook is essentially a list of tasks that are executed sequentially on the remote hosts.
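Conceptually, a playbook run is just "for each task, for each loop item, execute". A toy Python model of that control flow, including the per-instance `loop` used heavily below (this is only an illustration, not Ansible's actual engine):

```python
def run_playbook(tasks, instances):
    """Execute each task in order; tasks with loop=True run once per instance."""
    log = []
    for task in tasks:
        items = instances if task.get("loop") else [None]
        for item in items:
            task["action"](item)  # an Ansible module call in reality
            log.append((task["name"], item["name"] if item else None))
    return log

# Two hypothetical tasks mirroring the playbook's structure
instances = [{"name": "TEST01"}, {"name": "TEST02"}]
tasks = [
    {"name": "Load default passwords", "action": lambda item: None},
    {"name": "Create mgr folder", "loop": True, "action": lambda item: None},
]
```
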

Below is the playbook I developed to install IRIS:

[ansible@auto01 ansible]$ cat playbooks/install_iris.yml
#
# Playbook to install Iris
#
- hosts: all
  become: yes
  gather_facts: no
  tasks:
  - name: "Load default passwords"
    include_vars: "../vault/defaults.yml"
  ### PRE-INSTALL TASKS:
  - name: "Install required packets"
    yum:
      name: "{{ item }}"
      state: latest
    loop:
      - "httpd"
      - "java-1.8.0-openjdk"
      - "mod_auth_mellon"
      - "mod_ssl"
  - name: "Create iris group"
    group:
      name: "{{ iris_group }}"
      gid: 5005
  - name: "Create iris mgr group"
    group:
      name: "{{ mgr_group }}"
      gid: 5006
  - name: "Create iris owner user"
    user:
      name: "{{ mgr_user }}"
      uid: 5006
      group: "{{ iris_group }}"
      groups: "{{ mgr_group }}"
  - name: "Create iris user"
    user:
      name: "{{ iris_user }}"
      uid: 5005
      group: "{{ iris_group }}"
  - name: "Create mgr folder"
    file:
      path: "{{ item.install_dir }}/mgr"
      state: directory
      owner: "{{ iris_user }}"
      group: "{{ iris_group }}"
      mode: 0775
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Copy license key"
    copy:
      src: "../files/{{ item.key_file }}"
      dest: "{{ item.install_dir }}/mgr/iris.key"
      owner: "{{ iris_user }}"
      group: "{{ iris_group }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Create /install folder"
    file:
      path: "/install"
      state: directory
      mode: 0777
  - name: "Create Instances install folders"
    file:
      path: "/install/{{ item.name }}"
      state: directory
      mode: 0777
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Copy IRIS installer"
    copy:
      src: "../files/{{ item.binary_file }}.tar.gz"
      dest: "/install/{{ item.name }}/"
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Untar IRIS installer"
    command:
      cmd: "tar -xzf /install/{{ item.name }}/{{ item.binary_file }}.tar.gz"
      chdir: "/install/{{ item.name }}/"
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  ### IRIS INSTALL:
  - name: "Install Iris"
    command:
      cmd: "./irisinstall_silent"
      chdir: "/install/{{ item.name }}/{{ item.binary_file }}"
    environment:
      ISC_PACKAGE_INSTANCENAME: "{{ item.name }}"
      ISC_PACKAGE_INSTALLDIR: "{{ item.install_dir }}"
      ISC_PACKAGE_PLATFORM: "{{ platform }}"
      ISC_PACKAGE_UNICODE: "Y"
      ISC_PACKAGE_INITIAL_SECURITY: "Normal"
      ISC_PACKAGE_MGRUSER: "{{ mgr_user }}"
      ISC_PACKAGE_MGRGROUP: "{{ mgr_group }}"
      ISC_PACKAGE_USER_PASSWORD: "{{ iris_user_passwd }}"
      ISC_PACKAGE_CSPSYSTEM_PASSWORD: "{{ iris_user_passwd }}"
      ISC_PACKAGE_IRISUSER: "{{ iris_user }}"
      ISC_PACKAGE_IRISGROUP: "{{ iris_group }}"
      ISC_PACKAGE_SUPERSERVER_PORT: "{{ item.superserver_port }}"
      ISC_PACKAGE_WEBSERVER_PORT: "{{ item.webserver_port }}"
      ISC_PACKAGE_CLIENT_COMPONENTS: "standard_install"
      ISC_PACKAGE_STARTIRIS: "N"
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Remove installers"
    file:
      path: "/install/{{ item.name }}"
      state: absent
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  ### IRIS CUSTOMIZATIONS:
  - name: "Change iris.cpf"
    lineinfile:
      path: "{{ item[0].install_dir }}/iris.cpf"
      regexp: "{{ item[1].from }}"
      line: "{{ item[1].to }}"
      backup: yes
    with_nested:
      - "{{ iris_instances }}"
      - [ { from: "^TerminalPrompt=.*", to: "TerminalPrompt=8,3,2" },
          { from: "^FreezeOnError=0", to: "FreezeOnError=1" },
          { from: "^AutoParallel=.*", to: "AutoParallel=0" },
          { from: "^FastDistinct=.*", to: "FastDistinct=0" },
          { from: "^LockThreshold=.*", to: "LockThreshold=10000" },
          { from: "^EnsembleAutoStart=.*", to: "EnsembleAutoStart=1" },
          { from: "^MaxIRISTempSizeAtStart=.*", to: "MaxIRISTempSizeAtStart=300" } ]
    loop_control:
      label: "{{ item[0].name }}: {{ item[1].to }}"
  - name: "Change Journal Current Dir"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^CurrentDirectory=.*"
      line: "CurrentDirectory={{ item.jrnpri_dir }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change Journal Alternate Dir"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^AlternateDirectory=.*"
      line: "AlternateDirectory={{ item.jrnsec_dir }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change Journal Prefix name"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^JournalFilePrefix=.*"
      line: "JournalFilePrefix={{ item.name }}_"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change Globals memory"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^globals=.*"
      line: "globals=0,0,{{ item.config_globals }},0,0,0"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change errlog memory"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^errlog=.*"
      line: "errlog={{ item.config_errlog }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change routines memory"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^routines=.*"
      line: "routines={{ item.config_routines }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change gmheap memory"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^gmheap=.*"
      line: "gmheap={{ item.config_gmheap }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  - name: "Change locksiz memory"
    lineinfile:
      path: "{{ item.install_dir }}/iris.cpf"
      regexp: "^locksiz=.*"
      line: "locksiz={{ item.config_locksiz }}"
      backup: yes
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
  ### START IRIS:
  - name: "Start Iris"
    command: "iris start {{ item.name }}"
    loop: "{{ iris_instances }}"
    loop_control:
      label: "{{ item.name }}"
...

As you can see, there are multiple tasks in this playbook—most of them self-explanatory by name. The comments indicate pre-install tasks, installation, and post-install customizations. After executing this playbook, you will have a new IRIS instance installed on the target system, customized with memory and other settings.
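The iris.cpf customization tasks all follow the same pattern that `lineinfile` implements: replace the line matching a regex with a new line. In plain Python, the core of that pattern looks roughly like this (a sketch of the idea, not Ansible's implementation; real `lineinfile` also handles appending when no line matches):

```python
import re

def line_in_file(text, pattern, replacement):
    """Replace every line matching `pattern` with `replacement`."""
    regex = re.compile(pattern)
    out = []
    for line in text.splitlines():
        out.append(replacement if regex.match(line) else line)
    return "\n".join(out)

# A fragment of an iris.cpf-style file
cpf = "FreezeOnError=0\nLockThreshold=1000"
```
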

 

Run the installation!

After configuring the inventory, vault, and playbooks, you’re ready to execute the IRIS installation using Ansible.
To do so, run the following command:

[ansible@auto01 ansible]$ ansible-playbook -K --ask-vault-pass -i inventory/servers.yml playbooks/install_iris.yml
BECOME password: 
Vault password: 

PLAY [all] **************************************************************************************************************************************************
. . .

Once the playbook execution finishes, you’ll receive a summary of task statuses where you can verify that everything completed successfully.

And that’s it — you’ve just installed IRIS using Ansible! 😁

Article · October 9, 2025 · 6m read

Writing a REST API service for exporting the generated patient data in .csv

Hi,

 

It's me again 😁. Recently I have been working on generating some fake patient data in Python for testing purposes, with the help of ChatGPT. At the same time, I would like to share my learning journey. 😑

First of all, building a custom REST API service is easy: just extend %CSP.REST.

Creating a REST Service Manually

Let's start! 😂

1. Create a class datagen.restservice that extends %CSP.REST

Class datagen.restservice Extends %CSP.REST
{
Parameter CONTENTTYPE = "application/json";
}

 

2. Add a function genpatientcsv() to generate the patient data and package it into a CSV string

Class datagen.restservice Extends %CSP.REST
{
Parameter CONTENTTYPE = "application/json";
ClassMethod genpatientcsv() As %String [ Language = python ]
{
    # w ##class(datagen.restservice).genpatientcsv()
    # python.exe -m pip install faker
    # python.exe -m pip install pandas

    from faker import Faker
    import random
    import pandas as pd
    from io import StringIO

    # Initialize Faker
    fake = Faker()

    def generate_patient(patient_id):
        return {
            "PatientID": patient_id,
            "Name": fake.name(),
            "Gender": random.choice(["Male", "Female"]),
            "DOB": fake.date_of_birth(minimum_age=0, maximum_age=100).strftime("%Y-%m-%d"),
            "City": fake.city(),
            "Phone": fake.phone_number(),
            "Email": fake.email(),
            "BloodType": random.choice(["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]),
            "Diagnosis": random.choice(["Hypertension", "Diabetes", "Asthma", "Healthy", "Flu"]),
            "Height_cm": round(random.uniform(140, 200), 1),
            "Weight_kg": round(random.uniform(40, 120), 1),
        }

    # Generate 10 patients
    patients = [generate_patient(i) for i in range(1, 11)]

    # Convert to DataFrame
    df = pd.DataFrame(patients)

    # Convert to CSV string (without saving to file)
    csv_buffer = StringIO()
    df.to_csv(csv_buffer, index=False)
    csv_string = csv_buffer.getvalue()

    return csv_string
}
}

You may test the function in the terminal by typing:

w ##class(datagen.restservice).genpatientcsv()

3. Add a function GetMyDataCSV() that serves the CSV string as a downloadable file through the REST service. This can be achieved by:
   3.1. calling the patient data generation function to get the CSV string
   3.2. setting %response.ContentType = "text/csv"
   3.3. setting the "Content-Disposition" header to "attachment; filename=mydata.csv"
   3.4. writing the generated CSV string as output
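The same four steps map onto any HTTP framework: build the CSV, set the content type, set the disposition header, write the body. A hypothetical plain-Python sketch of the response assembly (names like build_csv_response are made up for illustration):

```python
def build_csv_response(csv_string, filename="mydata.csv"):
    """Return (headers, body) for a CSV file download, mirroring steps 3.1-3.4."""
    headers = {
        "Content-Type": "text/csv",                                 # step 3.2
        "Content-Disposition": f"attachment; filename={filename}",  # step 3.3
    }
    return headers, csv_string                                      # step 3.4

headers, body = build_csv_response("PatientID,Name\n1,Jane Doe\n")
```
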

Remember to pip install the related libraries.

Class datagen.restservice Extends %CSP.REST
{
Parameter CONTENTTYPE = "application/json";
ClassMethod GetMyDataCSV() As %Status
{
    // Build CSV string
    Set tCSVString = ##class(datagen.restservice).genpatientcsv()

    //Set headers and output CSV
    Set %response.ContentType = "text/csv"
    Do %response.SetHeader("Content-Disposition","attachment; filename=mydata.csv")
    
    // Output the data
    W tCSVString

    Quit $$$OK
}

ClassMethod genpatientcsv() As %String [ Language = python ]
{
    # w ##class(datagen.restservice).genpatientcsv()
    # python.exe -m pip install faker
    # python.exe -m pip install pandas

    from faker import Faker
    import random
    import pandas as pd
    from io import StringIO

    # Initialize Faker
    fake = Faker()

    def generate_patient(patient_id):
        return {
            "PatientID": patient_id,
            "Name": fake.name(),
            "Gender": random.choice(["Male", "Female"]),
            "DOB": fake.date_of_birth(minimum_age=0, maximum_age=100).strftime("%Y-%m-%d"),
            "City": fake.city(),
            "Phone": fake.phone_number(),
            "Email": fake.email(),
            "BloodType": random.choice(["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]),
            "Diagnosis": random.choice(["Hypertension", "Diabetes", "Asthma", "Healthy", "Flu"]),
            "Height_cm": round(random.uniform(140, 200), 1),
            "Weight_kg": round(random.uniform(40, 120), 1),
        }

    # Generate 10 patients
    patients = [generate_patient(i) for i in range(1, 11)]

    # Convert to DataFrame
    df = pd.DataFrame(patients)

    # Convert to CSV string (without saving to file)
    csv_buffer = StringIO()
    df.to_csv(csv_buffer, index=False)
    csv_string = csv_buffer.getvalue()

    return csv_string
}
}

4. Add the route to this function and compile the class

Class datagen.restservice Extends %CSP.REST
{
Parameter CONTENTTYPE = "application/json";
XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
        <Route Url="/export/patientdata" Method="GET" Call="GetMyDataCSV"/>
</Routes>
}

ClassMethod GetMyDataCSV() As %Status
{
    // Build CSV string
    Set tCSVString = ##class(datagen.restservice).genpatientcsv()

    //Set headers and output CSV
    Set %response.ContentType = "text/csv"
    Do %response.SetHeader("Content-Disposition","attachment; filename=mydata.csv")
    
    // Output the data
    W tCSVString

    Quit $$$OK
}

ClassMethod genpatientcsv() As %String [ Language = python ]
{
    # w ##class(datagen.restservice).genpatientcsv()
    # python.exe -m pip install faker
    # python.exe -m pip install pandas

    from faker import Faker
    import random
    import pandas as pd
    from io import StringIO

    # Initialize Faker
    fake = Faker()

    def generate_patient(patient_id):
        return {
            "PatientID": patient_id,
            "Name": fake.name(),
            "Gender": random.choice(["Male", "Female"]),
            "DOB": fake.date_of_birth(minimum_age=0, maximum_age=100).strftime("%Y-%m-%d"),
            "City": fake.city(),
            "Phone": fake.phone_number(),
            "Email": fake.email(),
            "BloodType": random.choice(["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]),
            "Diagnosis": random.choice(["Hypertension", "Diabetes", "Asthma", "Healthy", "Flu"]),
            "Height_cm": round(random.uniform(140, 200), 1),
            "Weight_kg": round(random.uniform(40, 120), 1),
        }

    # Generate 10 patients
    patients = [generate_patient(i) for i in range(1, 11)]

    # Convert to DataFrame
    df = pd.DataFrame(patients)

    # Convert to CSV string (without saving to file)
    csv_buffer = StringIO()
    df.to_csv(csv_buffer, index=False)
    csv_string = csv_buffer.getvalue()

    return csv_string
}
}

 

 

OK, now our code is ready. 😁 The next step is to register the REST service as a web application

Enter your path, namespace, and REST service class name, then Save

Assign the proper application role to this web application (because I am lazy, I simply assigned %All for testing 🤐)

 

OK, everything is ready!! 😁 Let's test the REST API!!! 😂

Enter the following path in a browser:

http://localhost/irishealth/csp/mpapp/export/patientdata

It triggers a file download; the file name is mydata.csv 😗

Let's check the file 😊

 

Yeah!!! It works well!! 😁😁

Thank you so much for reading. 😉

Article · October 9, 2025 · 4m read

Extend ObjectScript's ability to process YAML

The ObjectScript language has excellent JSON support through classes such as %DynamicObject and %JSON.Adaptor. That support reflects the immense popularity of the JSON format, which displaced XML's earlier dominance. JSON made data representation less verbose and more readable for the humans who had to interpret it. To reduce verbosity and increase readability even further, the YAML format was created. Very easy to read, YAML quickly became the most popular format for representing configuration and parameterization, thanks to its readability and minimal verbosity.

While XML is now rarely used for parameterization and configuration, with YAML's rise JSON is gradually being confined to data interchange rather than configuration, parameterization, and metadata representation; all of that is now done with YAML. The primary language of InterSystems technologies therefore needs broad support for YAML processing, on the same level as its support for JSON and XML. For this reason, I have released a new package that turns ObjectScript into a powerful YAML processor. The package is called yaml-adaptor.

Let's start by installing the package

1. Via IPM: open the IRIS Terminal and run:

USER>zpm "install yaml-adaptor"

2. Via Docker: clone or pull the yaml-adaptor repository into a local folder:

$ git clone https://github.com/yurimarx/yaml-adaptor.git

3. Open a terminal in that folder and run:

$ docker-compose build

4. Start the project's IRIS container:

$ docker-compose up -d

Why use the package?

With this package you can read, write, and transform YAML to and from dynamic objects, JSON, and XML, bidirectionally. It lets you read and generate data, configuration, and parameterization in the market's most popular formats dynamically, with little code, high performance, and in real time.

The package in action!

It is very simple. The capabilities are:

1. Convert a YAML string to a JSON string

ClassMethod TestYamlToJson() As %Status
{
    Set sc = $$$OK
    set yamlContent = ""_$CHAR(10)_
        "user:"_$CHAR(10)_
        "    name: 'Jane Doe'"_$CHAR(10)_
        "    age: 30"_$CHAR(10)_
        "    roles:"_$CHAR(10)_
        "    - 'admin'"_$CHAR(10)_
        "    - 'editor'"_$CHAR(10)_
        "database:"_$CHAR(10)_
        "    host: 'localhost'"_$CHAR(10)_
        "    port: 5432"_$CHAR(10)_
        ""
    Do ##class(dc.yamladapter.YamlUtil).yamlToJson(yamlContent, .jsonContent)
    Set jsonObj = {}.%FromJSON(jsonContent)
    Write jsonObj.%ToJSON()

    Return sc
}

2. Generate a JSON file from a YAML file

ClassMethod TestYamlFileToJsonFile() As %Status
{

    Set sc = $$$OK
    Set yamlFile = "/tmp/samples/sample.yaml"
    Set jsonFile = "/tmp/samples/sample_result.json"
    Write ##class(dc.yamladapter.YamlUtil).yamlFileToJsonFile(yamlFile,jsonFile)
    

    Return sc
}

3. Convert a JSON string to a YAML string

ClassMethod TestJsonToYaml() As %Status
{
    Set sc = $$$OK
    set jsonContent = "{""user"":{""name"":""Jane Doe"",""age"":30,""roles"":[""admin"",""editor""]},""database"":{""host"":""localhost"",""port"":5432}}"
    Do ##class(dc.yamladapter.YamlUtil).jsonToYaml(jsonContent, .yamlContent)
    Write yamlContent

    Return sc
}
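To make the JSON-to-YAML direction concrete, here is a minimal hand-rolled Python emitter for nested dicts and lists. It is only an illustration of the transformation; it is not how yaml-adaptor is implemented, and it skips quoting and many YAML edge cases:

```python
import json

def to_yaml(value, indent=0):
    """Emit a tiny YAML subset (nested dicts/lists of scalars) as a string."""
    pad = "    " * indent
    lines = []
    if isinstance(value, dict):
        for key, val in value.items():
            if isinstance(val, (dict, list)):
                lines.append(f"{pad}{key}:")
                lines.append(to_yaml(val, indent + 1))
            else:
                lines.append(f"{pad}{key}: {val}")
    elif isinstance(value, list):
        for val in value:
            lines.append(f"{pad}- {val}")
    return "\n".join(lines)

# Same shape as the jsonContent example above
doc = json.loads('{"user": {"name": "Jane Doe", "age": 30, "roles": ["admin", "editor"]}}')
```
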

4. Generate a YAML file from a JSON file

ClassMethod TestJsonFileToYamlFile() As %Status
{

    Set sc = $$$OK
    Set jsonFile = "/tmp/samples/sample.json"
    Set yamlFile = "/tmp/samples/sample_result.yaml"
    Write ##class(dc.yamladapter.YamlUtil).jsonFileToYamlFile(jsonFile, yamlFile)
    

    Return sc
}

5. Load a dynamic object from a YAML string or YAML file

ClassMethod TestYamlFileToDynamicObject() As %Status
{
    Set sc = $$$OK
    Set yamlFile = "/tmp/samples/sample.yaml"
    Set dynamicYaml = ##class(YamlAdaptor).CreateFromFile(yamlFile)

    Write "Title: "_dynamicYaml.title, !
    Write "Version: "_dynamicYaml.version, !

    Return sc
}

6. Generate YAML from a dynamic object

ClassMethod TestDynamicObjectToYaml() As %Status
{
    Set sc = $$$OK
    Set dynaObj = {}
    Set dynaObj.project = "Project A"
    Set dynaObj.version = "1.0"
    Set yamlContent = ##class(YamlAdaptor).CreateYamlFromDynamicObject(dynaObj)

    Write yamlContent

    Return sc
}

7. Generate a YAML file from an XML file

ClassMethod TestXmlFileToYamlFile() As %Status
{

    Set sc = $$$OK
    Set xmlFile = "/tmp/samples/sample.xml"
    Set yamlFile = "/tmp/samples/sample_xml_result.yaml"
    Write ##class(dc.yamladapter.YamlUtil).xmlFileToYamlFile(xmlFile, yamlFile)
    

    Return sc
}

8. Generate an XML file from a YAML file

ClassMethod TestYamlFileToXmlFile() As %Status
{

    Set sc = $$$OK
    Set yamlFile = "/tmp/samples/sample.yaml"
    Set xmlFile = "/tmp/samples/sample_result.xml"
    Write ##class(dc.yamladapter.YamlUtil).yamlFileToXmlFile(yamlFile, "sample", xmlFile)
    

    Return sc
}