Discussion · March 12, 2024

[Water Cooler Talk] Is ChatGPT effective in providing ObjectScript guidance?

Hi Community!

As an AI language model, ChatGPT can perform a variety of tasks: translating languages, writing songs, answering research questions, and even generating computer code. With these abilities, ChatGPT has quickly become a popular tool for applications ranging from chatbots to content creation.
But despite its advanced capabilities, ChatGPT cannot access your personal data. To have it answer questions over your own documents, you can build a custom ChatGPT-style assistant with the LangChain framework.

Below are the steps to build a custom ChatGPT:

  • Step 1: Load the document

  • Step 2: Split the document into chunks

  • Step 3: Embed the chunks to convert them into vectors

  • Step 4: Save the vectors to a vector database

  • Step 5: Take the user's question and compute its embedding

  • Step 6: Connect to the vector database and run a semantic search

  • Step 7: Retrieve the relevant chunks and send them, with the question, to the LLM (ChatGPT)

  • Step 8: Get the answer from the LLM and send it back to the user

 

  For more details, please read this article.
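To make the pipeline concrete, here is a minimal Python sketch of the eight steps. It is an illustration only: it assumes the langchain, openai, and faiss-cpu packages (0.1-era imports, which change often), a local objectscript_docs.txt file, and an OpenAI API key in the environment.

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Steps 1-2: load the document and split it into chunks
docs = TextLoader("objectscript_docs.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Steps 3-4: embed the chunks and save the vectors to a vector store
vectordb = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Steps 5-7: embed the user's question, run a semantic search, pass the hits to the LLM
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=vectordb.as_retriever())

# Step 8: return the answer to the user
print(qa.run("How do I iterate over a subscripted global in ObjectScript?"))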


My personal conclusion
 

In my opinion, ChatGPT is effective at providing ObjectScript code and examples, especially for simpler tasks and basic programming concepts. It can generate code snippets, explain programming concepts, and propose solutions to specific coding problems. Its effectiveness varies, however, with the complexity of the task and the specific language involved.

6 Comments
Article · March 12, 2024 · 5m read

Orchestrating Secure Management Access in InterSystems IRIS with AWS EKS and ALB

As an IT and cloud team manager with 18 years of experience with InterSystems technologies, I recently led our team in the transformation of our traditional on-premises ERP system to a cloud-based solution. We embarked on deploying InterSystems IRIS within a Kubernetes environment on AWS EKS, aiming to achieve a scalable, performant, and secure system. Central to this endeavor was the utilization of the AWS Application Load Balancer (ALB) as our ingress controller. 

However, our challenge extended beyond the initial cluster and application deployment; we needed to establish an efficient and secure method to manage the various IRIS instances, particularly when employing mirroring for high availability.

This post will focus on the centralized management solution we implemented to address this challenge. By leveraging the capabilities of AWS EKS and ALB, we developed a robust architecture that allowed us to effectively manage and monitor the IRIS cluster, ensuring seamless accessibility and maintaining the highest levels of security. 

In the following sections, we will delve into the technical details of our implementation, sharing the strategies and best practices we employed to overcome the complexities of managing a distributed IRIS environment on AWS EKS. Through this post, we aim to provide valuable insights and guidance to assist others facing similar challenges in their cloud migration journeys with InterSystems technologies.

Configuration Summary

Our configuration capitalized on the scalability of AWS EKS, the automation of the InterSystems Kubernetes Operator (IKO) 3.6, and the routing proficiency of AWS ALB. This combination provided a robust and agile environment for our ERP system's web services.

Mirroring Configuration and Management Access

We deployed mirrored IRIS data servers to ensure high availability. These servers, alongside a single application server, were each equipped with a Web Gateway sidecar pod. Establishing secure access to these management portals was paramount, and we achieved it through meticulous network and service configuration.

Detailed Configuration Steps

Initial Deployment with IKO:

  • Leveraging IKO 3.6, we deployed the IRIS instances, ensuring they adhered to our high-availability requirements.

Web Gateway Management Configuration:

  • We created server access profiles within the Web Gateway Management interface. These profiles, named data00 and data01, were crucial in establishing direct and secure connectivity to the respective Web Gateway sidecar pods associated with each IRIS data server.
  • To achieve precise routing of incoming traffic to the appropriate Web Gateway, we utilized the DNS pod names of the IRIS data servers. By configuring the server access profiles with the fully qualified DNS pod names, such as iris-svc.app.data-0.svc.cluster.local and iris-svc.app.data-1.svc.cluster.local, we ensured that requests were accurately directed to the designated Web Gateway sidecar pods.

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

 

IRIS Terminal Commands:

  • To align the CSP settings with the newly created server profiles, we executed the following commands in the IRIS terminal:
    • d $System.CSP.SetConfig("CSPConfigName","data00")  ; on data00
    • d $System.CSP.SetConfig("CSPConfigName","data01")  ; on data01

https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI...

NGINX Configuration:

  • The NGINX configuration was updated to respond to /data00 and /data01 paths, followed by creating Kubernetes services and ingress resources that interfaced with the AWS ALB, completing our secure and unified access solution.
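For illustration, the location blocks might look like the sketch below. This assumes the Web Gateway's NGINX module, whose CSP directive hands requests for a path to the gateway; your build, directives, and paths may differ.

location /data00 {
    CSP On;   # route /data00 requests through the Web Gateway
}
location /data01 {
    CSP On;   # route /data01 requests through the Web Gateway
}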

Creating Kubernetes Services:

  • I initiated the setup by creating Kubernetes services for the IRIS data servers and the SAM:
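The article's own manifests appear as images; as a stand-in, a Service for one data server's Web Gateway sidecar might look like this hypothetical sketch (name, namespace, selector label, and ports are assumptions, not the article's originals):

apiVersion: v1
kind: Service
metadata:
  name: iris-svc-data00
  namespace: app
spec:
  type: ClusterIP
  # Select the first mirrored data pod; the Web Gateway sidecar shares its pod network
  selector:
    statefulset.kubernetes.io/pod-name: iris-data-0
  ports:
    - name: http
      port: 80
      targetPort: 80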

 

Ingress Resource Definition:

  • Next, I defined the ingress resources, which route traffic to the appropriate paths using annotations to secure and manage access.

Explanations for the Annotations in the Ingress YAML Configuration:

  • alb.ingress.kubernetes.io/scheme: internal
    • Specifies that the Application Load Balancer should be internal, not accessible from the internet.
    • This ensures that the ALB is only reachable within the private network and not exposed publicly.
  • alb.ingress.kubernetes.io/subnets: subnet-internal, subnet-internal
    • Specifies the subnets where the Application Load Balancer should be provisioned.
    • In this case, the ALB will be deployed in the specified internal subnets, ensuring it is not accessible from the public internet.
  • alb.ingress.kubernetes.io/target-type: ip
    • Specifies that the target type for the Application Load Balancer should be IP-based.
    • This means that the ALB will route traffic directly to the IP addresses of the pods, rather than using instance IDs or other target types.
  • alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
    • Enables sticky sessions (session affinity) for the target group.
    • When enabled, the ALB will ensure that requests from the same client are consistently routed to the same target pod, maintaining session persistence.
  • alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    • Specifies the ports and protocols that the Application Load Balancer should listen on.
    • In this case, the ALB is configured to listen for HTTPS traffic on port 443.
  • alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:il-
    • Specifies the Amazon Resource Name (ARN) of the SSL/TLS certificate to use for HTTPS traffic.
    • The ARN points to a certificate stored in AWS Certificate Manager (ACM), which will be used to terminate SSL/TLS connections at the ALB.

These annotations provide fine-grained control over the behavior and configuration of the AWS Application Load Balancer when used as an ingress controller in a Kubernetes cluster. They allow you to customize the ALB's networking, security, and routing settings to suit your specific requirements.
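Assembled into a manifest, the annotations might sit in an ingress like the hypothetical sketch below (service names and paths are illustrative; the certificate ARN is truncated exactly as in the original):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: iris-mgmt
  namespace: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/subnets: subnet-internal, subnet-internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:il-   # truncated as in the original
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /data00
            pathType: Prefix
            backend:
              service:
                name: iris-svc-data00
                port:
                  number: 80
          - path: /data01
            pathType: Prefix
            backend:
              service:
                name: iris-svc-data01
                port:
                  number: 80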

After configuring NGINX with the location settings to respond to the paths for our data servers, the final step was to extend this setup to include SAM by defining its service and adding the route in the ingress file.

Security Considerations

We meticulously aligned our approach with cloud security best practices, particularly the principle of least privilege, granting only the access rights necessary to perform a given task.

(The original article illustrates the resulting Management Portal access for DATA00, DATA01, and SAM with screenshots.)

Conclusion: 

This article shared our journey of migrating our application to the cloud using InterSystems IRIS on AWS EKS, focusing on creating a centralized, accessible, and secure management solution for the IRIS cluster. By leveraging security best practices and innovative approaches, we achieved a scalable and highly available architecture.

We hope that the insights and techniques shared in this article prove valuable to those embarking on their own cloud migration projects with InterSystems IRIS. If you apply these concepts to your work, we'd be interested to learn about your experiences and any lessons you discover throughout the process.

3 Comments
Question · March 12, 2024

Best way to translate an XML String to an Ens.Response-derived class?

Hello,
First of all thanks for your time and help with this question.

We wonder how we could convert a String representing an XML document into a class that extends Ens.Response.

Our context is a REST Operation, where we currently split the String with $PIECE and set each property as follows:

        set codigo = $PIECE($PIECE(httpResponse,"<error><codigo>",2),"</codigo><descripcion>",1)
        set descripcion = $PIECE($PIECE(httpResponse,"<descripcion>",2),"</descripcion>",1)
        set codigoSERAM = $PIECE($PIECE(httpResponse,"</error><codigo>",2),"</codigo></resultado>",1)
        
        set pResponse = ##class(Mensajes.Response.Radiologia.NumeroOrdenAcodigoSERAMResponse).%New()
        set pResponse.resultado =  ##class(EsquemasDatos.Radiologia.Resultado).%New()
        set pResponse.resultado.error =  ##class(EsquemasDatos.Seguridad.Error).%New()
        set pResponse.resultado.error.codigo = codigo
        set pResponse.resultado.error.descripcion = descripcion
        set pResponse.resultado.codigo = codigoSERAM


Is there a recommended way to convert the XML String, which looks like this:

    <resultado>
        <error>
            <codigo>0</codigo>
            <descripcion>Proceso realizado correctamente</descripcion>
        </error>
        <codigo>06050301</codigo>
    </resultado>


    
into the Ens.Response-derived class whose structure is:

Class Mensajes.Response.Radiologia.NumeroOrdenAcodigoSERAMResponse Extends Ens.Response
{

Property resultado As EsquemasDatos.Radiologia.Resultado;

}

Class EsquemasDatos.Radiologia.Resultado Extends (%SerialObject, %XML.Adaptor) [ ProcedureBlock ]
{

Property error As EsquemasDatos.Seguridad.Error;

Property codigo As %String(MAXLEN = "");

}

Class EsquemasDatos.Seguridad.Error Extends (%Persistent, %XML.Adaptor, %JSON.Adaptor) [ ClassType = persistent, ProcedureBlock ]
{

Property codigo As %Numeric;

Property descripcion As %String(MAXLEN = "");

}
If the String written in the response were JSON, we could do:

        set tSC= claseAux.%ConvertJSONToObject(httpRequest.HttpResponse.Data,"Mensajes.Response.Radiologia.NumeroOrdenAcodigoSERAMResponse",.pResponse,1)    


Or we could use something like:

    ##class(%JSON.Adaptor).%JSONImport(input, %mappingName As %String = "")


    

But what is the recommended way to parse an XML String into an Ens.Response?
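For reference, the closest we have found in the class documentation is %XML.Reader with element correlation; an untested sketch against the classes above might look like this:

    // Untested sketch: correlate <resultado> with the %XML.Adaptor-enabled class
    set reader = ##class(%XML.Reader).%New()
    set tSC = reader.OpenString(httpResponse)
    if $$$ISERR(tSC) quit tSC
    do reader.Correlate("resultado","EsquemasDatos.Radiologia.Resultado")
    if reader.Next(.resultado,.tSC) {
        set pResponse = ##class(Mensajes.Response.Radiologia.NumeroOrdenAcodigoSERAMResponse).%New()
        set pResponse.resultado = resultado
    }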

Thanks in advance for your time, help, and support.

5 Comments
Article · March 11, 2024 · 2m read

How to handle $LIST() data in Embedded Python

This article is from the InterSystems FAQ site.

As of this writing (March 2024), you can use the Python library iris-dollar-list, published on the community, to work with IRIS $LIST() data as a Python list.

Note: this is not a standard tool, but you are welcome to use it. For details, see the community article "Another implementation of $ListBuild(): the Python library iris-dollar-list".

To use it with IRIS installed on Windows, install iris-dollar-list as follows.

(For IRIS installed on platforms other than Windows, you can install it in the usual way with the pip command.)

Open a command prompt and run the following (the directory shown assumes a default IRIS installation):

> cd C:\InterSystems\IRIS\bin
> irispip install --target C:\InterSystems\IRIS\mgr\python iris-dollar-list

An example run looks like this:

USER>set ^ListTest=$LISTBUILD("test","あいうえお",101)

USER>:py

Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type quit() or Ctrl-D to exit this shell.
>>> from iris_dollar_list import DollarList
>>> glo=iris.gref("^ListTest")
>>> pythonlist=DollarList.from_bytes(glo[None].encode('ascii')).to_list()
>>> pythonlist
['test', 'あいうえお', 101]
>>>

Alternatively, you can use the ToList() method of the %SYS.Python class, although at the time of writing it only handles alphanumeric data, not Japanese (Japanese support is planned for a future version).

Update 2024/6/28: As of version 2024.1, $LISTBUILD() lists containing Japanese are now converted to Python lists correctly. For details, see the example in the reply posted by @Ayumu Tanaka.

An example run looks like this:

USER>set ^ListTest2=$LISTBUILD(123,"hello")

USER>:py

Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type quit() or Ctrl-D to exit this shell.
>>> glo=iris.gref("^ListTest2")
>>> pythonlist=iris.cls("%SYS.Python").ToList(glo[None])
>>> pythonlist
[123, 'hello']
>>> 
1 Comment
Article · March 11, 2024 · 8m read

Generating meaningful test data using Gemini

We all know that having a set of proper test data before deploying an application to production is crucial for ensuring its reliability and performance. It allows us to simulate real-world scenarios and identify potential issues or bugs before they impact end users. Moreover, testing with representative data sets allows us to optimize performance, identify bottlenecks, and fine-tune algorithms or processes as needed. Ultimately, a comprehensive set of test data helps deliver a higher-quality product, reducing the likelihood of post-production issues and enhancing the overall user experience.

In this article, let's look at how to use generative AI, namely Gemini by Google, to generate (hopefully) meaningful data for the properties of multiple objects. To do this, I will use Gemini's RESTful service to generate data in JSON format and then use the received data to create objects.

This leads to an obvious question: why not use the methods from %Library.PopulateUtils to generate all the data? The answer is quite obvious as well once you've seen the class's method list: there aren't many methods that generate meaningful data.
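For example, %PopulateUtils can hand you a random name or city, but nothing resembling a dish description or a calorie count. The output below is illustrative only, since the values are random:

USER>write ##class(%PopulateUtils).Name()
Quigley,Ted A.
USER>write ##class(%PopulateUtils).City()
Albany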

So, let's get to it.

Since I'll be using the Gemini API, I first need to generate an API key. To do this, open aistudio.google.com/app/apikey, click Create API key, and create the key in a new project.

After this is done, you just need to write a REST client to get and transform data and come up with a query string to a Gemini AI. Easy peasy 😁

To keep this example simple, let's work with the following class:

Class Restaurant.Dish Extends (%Persistent, %JSON.Adaptor)
{
Property Name As %String;
Property Description As %String(MAXLEN = 1000);
Property Category As %String;
Property Price As %Float;
Property Currency As %String;
Property Calories As %Integer;
}

In general, it would be really simple to use the built-in %Populate mechanism and be done with it; a one-line sketch follows. But in bigger projects you will have a lot of properties that are not so easily populated with meaningful data automatically.
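For the simple cases, that built-in route is essentially one line, assuming a hypothetical variant of the class above that also extends %Populate (not part of this example):

// Hypothetical: add %Populate to Restaurant.Dish's superclass list, recompile, then
do ##class(Restaurant.Dish).Populate(10)   // creates and saves 10 random Dish objects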

Anyway, now that we have the class, let's think about the wording of a query to Gemini. Let's say we write the following query:

{"contents": [{
    "parts":[{
      "text": "Write a json object that contains a field Dish which is an array of 10 elements. Each element contains Name, Description, Category, Price, Currency, Calories of the Restaurant Dish."}]}]}

If we send this request to https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=APIKEY we will get something like:

 
(The full JSON response is collapsed under a spoiler in the original post.)

Already not bad. Not bad at all! Now that I have the wording of my query, I need to generate it as automatically as possible, call it and process the result.

Next step: generating the query. Using the very useful article on how to get the list of properties of a class, we can generate most of the query automatically.

ClassMethod GenerateClassDesc(classname As %String) As %String
{
    // Build a comma-separated list of the class's property names
    set cls=##class(%Dictionary.CompiledClass).%OpenId(classname,,.status)
    set x=cls.Properties
    set profprop = $lb()
    // Start at 3 to skip compiled entries that are not the class's own data properties
    for i=3:1:x.Count() {
        set prop=x.GetAt(i)
        set $list(profprop, i-2) = prop.Name
    }
    quit $listtostring(profprop, ", ")
}

ClassMethod GenerateQuery(qty As %Numeric) As %String [ Language = objectscript ]
{
    set classname = ..%ClassName(1)
    set str = "Write a json object that contains a field "_$piece(classname, ".", 2)_
        " which is an array of "_qty_" elements. Each element contains "_
        ..GenerateClassDesc(classname)_" of a "_$translate(classname, ".", " ")_". "
    quit str
}

When dealing with complex relationships between classes, it may be easier to use the object constructor to link different objects together, or to use the built-in mechanism of %Library.Populate.

The following step is to call the Gemini RESTful service and process the resulting JSON.

ClassMethod CallService() As %Status
{
 // GetLink() (not shown here) is assumed to return a %Net.HttpRequest
 // already pointed at generativelanguage.googleapis.com
 set request = ..GetLink()
 set query = "{""contents"": [{""parts"":[{""text"": """_..GenerateQuery(20)_"""}]}]}"
 do request.EntityBody.Write(query)
 set request.ContentType = "application/json"
 set sc = request.Post("v1beta/models/gemini-pro:generateContent?key=<YOUR KEY HERE>")
 if $$$ISOK(sc) {
    // Drill down to candidates[0].content.parts[0].text of the Gemini response
    set response = request.HttpResponse.Data.Read()
    set p = ##class(%DynamicObject).%FromJSON(response)
    set iter = p.candidates.%GetIterator()
    do iter.%GetNext(.key, .value, .type )
    set iter = value.content.parts.%GetIterator()
    do iter.%GetNext(.key, .value, .type )
    // Strip the Markdown code fence that Gemini wraps around the JSON payload
    set obj = ##class(%DynamicObject).%FromJSON($Extract(value.text,8,*-3))

    // Create and save a Dish object for every element of the generated array
    set dishes = obj.Dish
    set iter = dishes.%GetIterator()
    while iter.%GetNext(.key, .value, .type ) {
        set dish = ##class(Restaurant.Dish).%New()
        set sc = dish.%JSONImport(value.%ToJSON())
        set sc = dish.%Save()
    }
 }
 quit sc
}

Of course, since it's just an example, don't forget to add status checks where necessary.

Now, when I run it, I get a pretty impressive result in my database. Let's run a SQL query to see the data.

The description and category correspond to the name of each dish, and the prices and calories look plausible as well. This means I actually get a database filled with reasonably realistic-looking data, and the results of the queries I run against it will resemble real results.

Of course, this approach has a real drawback: you have to write a query to a generative AI, and generating the result takes time. But the quality of the data may be worth it. Anyway, that is for you to decide 😉

 
P.S. The first image in the original post is how Gemini imagines the "AI that writes a program to create test data" 😆

4 Comments