Question · December 6, 2023

HTTP access to list S3 buckets

I am looking for any examples of how to use HTTP requests to connect to S3, list buckets, and loop through them to download files recursively.
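If it helps, here is a minimal sketch in ObjectScript using %Net.HttpRequest. Note that real S3 requests against private buckets must be signed with AWS Signature Version 4 (not shown here), so this plain GET only works against a bucket that allows anonymous listing; the bucket name and SSL configuration name are hypothetical:

```objectscript
 // Sketch: fetch an S3 object listing over HTTP(S) with %Net.HttpRequest.
 // Assumes a bucket that permits anonymous listing; otherwise you must add
 // AWS Signature V4 headers, which are omitted here.
 Set req = ##class(%Net.HttpRequest).%New()
 Set req.Server = "my-bucket.s3.amazonaws.com"     // hypothetical bucket endpoint
 Set req.Https = 1
 Set req.SSLConfiguration = "SSLClient"            // an SSL/TLS client config you created
 Set sc = req.Get("/?list-type=2")                 // ListObjectsV2: response body is XML
 If $$$ISOK(sc) {
     // req.HttpResponse.Data is a stream with the ListBucketResult XML.
     // Parse out each <Key>, then issue one GET per key to download the files.
     Write req.HttpResponse.Data.Read(32000)
 }
```

Looping "recursively" then amounts to following the <Key> (and continuation-token) values from the XML and repeating the GET for each object.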

Article · December 4, 2023 · 10m read

Generic RESTful Business Services and Business Operations

1. Generic RESTful Business Services and Business Operations


InterSystems IRIS provides a set of generic RESTful business service and business operation classes, so users can expose RESTful services and invoke external RESTful APIs directly, without developing custom business service or business operation classes.

BS EnsLib.REST.GenericService — generic REST business service
BS EnsLib.REST.SAMLGenericService — REST business service that checks the signature and timestamp of SAML tokens
BO EnsLib.REST.GenericOperation — generic REST business operation
BO EnsLib.REST.GenericOperationInProc — generic REST business operation for pass-through mode

 

2. Generic RESTful Messages

 

The generic RESTful business service and business operation classes use a generic RESTful message class, EnsLib.REST.GenericMessage, which is a subclass of EnsLib.HTTP.GenericMessage. Both share the same structure:

HTTPHeaders — array recording the HTTP headers
Stream — stream holding the HTTP body
Type — type of the stream, e.g. character or binary; assigned automatically, no need to set it
Attributes — array recording attributes
OriginalFilename — not used
OutputFolder — not used
OutputFilename — not used

Therefore both EnsLib.REST.GenericMessage and EnsLib.HTTP.GenericMessage can be used by the generic RESTful business operations and business services.
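As an illustration (a sketch, not taken from the original article; the JSON payload is hypothetical), such a message can also be constructed in ObjectScript code:

```objectscript
 // Sketch: build a generic REST message in code.
 Set body = ##class(%Stream.GlobalCharacter).%New()
 Do body.Write("{""PatientId"":""123""}")            // hypothetical JSON body
 Set msg = ##class(EnsLib.HTTP.GenericMessage).%New(body)
 Do msg.HTTPHeaders.SetAt("POST","httprequest")      // HTTP method
 Do msg.HTTPHeaders.SetAt("application/json","content-type")
 Do msg.HTTPHeaders.SetAt("*/*","Accept")
```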

 

3. Generic RESTful Business Operations

With a generic RESTful business operation you can connect to any third-party RESTful server and invoke its RESTful API.

3.1 Adding a generic RESTful business operation to the production

To add one, simply add EnsLib.REST.GenericOperation to the operations on the Production configuration page.

When adding it to the production, it is recommended to give the business operation a name that describes the business it serves; for example, an operation connecting to the RESTful service of a LIS could be named RESTtoLIS (a possible naming convention: interface type + business system). If no name is given, the class name is used as the operation name by default.

3.2 Configuring the generic RESTful business operation

The three main settings are:

1. HTTP Server: host name or IP address of the target RESTful server

2. HTTP Port: port on which the target RESTful server provides its RESTful API

3. URL: service endpoint of the RESTful API

Once the business operation is enabled, you can access the external RESTful API.

Note that the URL can be left blank here and specified dynamically via an HTTP header instead.

3.3 Testing the generic RESTful business operation

Once enabled, the generic RESTful business operation can be tested. Because the REST message body of EnsLib.HTTP.GenericMessage is a stream-typed property, we add a business process to make it easier to enter this data during testing.

1. Create a new business process and set its request message to Ens.StringRequest, which is used to pass in the REST body data during testing. Add a property named DataBody of type %Stream.GlobalCharacter (a persistable character stream) to its context:

2. Add a code activity (<code>) to the business process that writes the string data of the request message into the DataBody stream of the context:

 Do context.DataBody.Write(request.StringValue)

 Note the leading space at the start of the line.

 

3. Then add a call activity (<call>) to the business process to invoke the business operation already added to the production, e.g. RESTtoLIS, and set its request and response messages to EnsLib.REST.GenericMessage or EnsLib.HTTP.GenericMessage.

4. Configure the request message (Request) of the RESTtoLIS business operation

You can click the Request Builder button and build the request message graphically by drag and drop:

4.1 Drag DataBody from the context on the left onto the Stream property of callrequest;

4.2 Assign values to the HTTPHeaders of callrequest. It is an array with string elements, representing the headers of the HTTP request. The following three HTTP headers are required:

HTTP header | Subscript | Example
HTTP method | "httprequest" | e.g. "POST"
Content type of the HTTP message body | "content-type" | e.g. "application/json"
Content type the client expects to receive | "Accept" | e.g. "*/*"

These three array elements can be assigned by choosing Set from the add-action drop-down list.

4.3 (Optional) If the URL was not configured in 3.2, you can set the URL in HTTPHeaders here instead. For example, when the URL must be determined dynamically, configure it here, as follows:

Note that if "URL" is set here, it overrides the URL in the business operation (BO) settings.

4.4 (Optional) If URL parameters need to be passed, e.g. /fhir/r4/MedicationRequest?Patient=0a8eebfd-a352-522e-89f0-1d4a13abdebc&_elements=medicationReference requires two URL parameters, Patient and _elements, they can be set with the IParams subscripts of HTTPHeaders.

The method is:

First determine how many parameters there are and put that count in IParams; here there are 2, so set IParams to 2;

Then set IParams_i for each parameter, where i is the parameter index, e.g. set IParams_1 to "Patient=0a8eebfd-a352-522e-89f0-1d4a13abdebc";

and so on.
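Expressed in code rather than in the graphical Request Builder, the equivalent assignments on callrequest would look roughly like this (a sketch following the IParams convention described above):

```objectscript
 // Sketch: pass two URL parameters via the IParams convention.
 Do callrequest.HTTPHeaders.SetAt(2,"IParams")       // number of URL parameters
 Do callrequest.HTTPHeaders.SetAt("Patient=0a8eebfd-a352-522e-89f0-1d4a13abdebc","IParams_1")
 Do callrequest.HTTPHeaders.SetAt("_elements=medicationReference","IParams_2")
```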

 

5. Add the business process to the production and test it

Make sure the production settings allow testing. On the Production configuration page, select this business process, choose the Test button on the Actions tab on the right, enter test data in the test message page that pops up, and click Invoke Testing Service.

Then you can inspect the message trace of the test and confirm that the REST message body and HTTP headers were correctly delivered to the target REST API.

 

 

4. Generic RESTful Business Services

With the generic RESTful business service, you can publish a RESTful endpoint that can handle any RESTful API request.

4.1 Adding the generic RESTful business service to the production

On the Production configuration page, click the plus sign next to Services. In the wizard that pops up, choose EnsLib.REST.GenericService as the service class; enter a service name, preferably one that describes the component's function, e.g. RESTforHIS for a REST service exposed to the HIS system; and check Enable Now.

The generic RESTful business service can expose RESTful APIs in two ways: through a web server, or through a dedicated TCP port of the IRIS server. The second way does not depend on a separate web server, but a web server is recommended for better performance and security.

Here we use a web server to provide the REST service, so the Port setting of the business service is left blank. For the Target Config Name, select the business process or business operation that will receive the RESTful API requests; here we test with an empty business process. Click Apply to activate these settings.

4.2 Creating a web application that exposes the RESTful API

Publishing a RESTful service involves not only the URL at which the service is published, but also security. We manage and control both by creating a dedicated web application.

In the IRIS Management Portal > System Administration > Security > Applications > Web Applications, click the Create New Web Application button and configure the new web application as follows:

1. Name: the planned service endpoint, e.g. /IRISRESTServer. Note the leading /

2. Namespace: the namespace the production runs in

3. Check Enable REST and set the dispatch class to EnsLib.REST.GenericService

4. Configure the Security Settings section according to your needs. For ease of testing, Allowed Authentication Methods is set to Unauthenticated here. In a production environment, or when running performance or stress tests, you should choose Password or Kerberos authentication!

Note: make sure that within one namespace there is only one REST-type web application whose dispatch class is EnsLib.REST.GenericService.

 

4.3 Testing the RESTful business service

Now the RESTful business service can be tested. It can respond to any REST API request; how to respond is the job of the downstream business processes/operations.

Its full RESTful URL is: [web server address]:[web server port]/[web application name]/[configuration name of the generic REST service in the production]/[API name and parameters]. For example, to call the PlaceLabOrder API of the generic REST business service created above, on port 52773 of the private Apache of my local IRIS instance (note: we never implemented a PlaceLabOrder API, yet the service still responds instead of returning a 404 error), the full REST URL is:

127.0.0.1:52773/IRISRESTServer/RESTforHIS/PlaceLabOrder

Open POSTMAN and invoke the above REST API with the POST method:

In IRIS you will get a message trace similar to this. If you have not implemented a business process that handles REST API requests, you will get a 500 error, but you can still inspect the content of the EnsLib.HTTP.GenericMessage message generated by IRIS:

The generic RESTful business service converts the REST request into an EnsLib.HTTP.GenericMessage message and sends it to the target business operation/business process. By parsing the message content, you therefore get the full information of the REST API request:

1. Stream contains the POSTed data

2. The "HttpRequest" subscript of HTTPHeaders is the HTTP method

3. The "URL" subscript of HTTPHeaders is the full API path, which includes the service endpoint (under the "CSPApplication" subscript), the REST business service name (under the "EnsConfigName" subscript) and the API

Downstream business processes can respond to the REST API request based on these data.
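For example, inside a downstream <code> activity these pieces can be read back from the request message; a minimal sketch:

```objectscript
 // Sketch: extract the REST request details in a business process.
 Set method = request.HTTPHeaders.GetAt("HttpRequest")    // e.g. "POST"
 Set url    = request.HTTPHeaders.GetAt("URL")            // full API path
 Set app    = request.HTTPHeaders.GetAt("CSPApplication") // service endpoint
 Set body   = request.Stream.Read(request.Stream.Size)    // POSTed data
```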

4.4 Using a business process to route REST API calls

With the EnsLib.HTTP.GenericMessage messages generated by the generic RESTful business service, we can route REST API requests using message routing rules or a business process. Here I demonstrate routing with a business process.

Build a new business process whose request and response messages are both EnsLib.REST.GenericMessage or EnsLib.HTTP.GenericMessage. Add a string property named ReturnMsg to the context and set its default value to: "{""Code"":-100,""Msg"":""API not implemented""}".

Add a <switch> activity to the business process, then add two condition branches under the <switch>:

Name: Place lab order. Condition: the "URL" HTTP header is PlaceLabOrder and the "HttpRequest" HTTP header is POST:

(request.HTTPHeaders.GetAt("URL")="/IRISRESTServer/RESTforHIS/PlaceLabOrder") && (request.HTTPHeaders.GetAt("HttpRequest")="POST")

Name: Query lab items. Condition: the "URL" HTTP header is GetLabItems and the "HttpRequest" HTTP header is GET:

(request.HTTPHeaders.GetAt("URL")="/IRISRESTServer/RESTforHIS/GetLabItems") && (request.HTTPHeaders.GetAt("HttpRequest")="GET")

In the two branches, add a <code> activity each to produce the REST message content to return:

 Set context.ReturnMsg="{""Code"":200,""Msg"":""Lab order placed successfully""}"
 Set context.ReturnMsg="{""Code"":200,""Msg"":""Lab items queried successfully""}"

Finally, add a <code> activity after the <switch> to build the response message:

 // Initialize the response message
 Set response = ##class(EnsLib.REST.GenericMessage).%New()
 // Initialize the stream of the response message
 Set response.Stream = ##class(%Stream.GlobalCharacter).%New()
 // Write the REST return data into the stream
 Do response.Stream.Write(context.ReturnMsg)

Compile the business process and add it to the production.

Then modify the settings of the generic RESTful business service, changing the Target Config Name to this newly created business process.

Now test the various APIs again with POSTMAN and check the REST responses returned:

In a real project, replace the <code> activities in the <switch> branches above with API-handling business processes or business operations as appropriate.

 

Summary: with the generic RESTful business operations and business services, you can invoke external RESTful APIs and expose RESTful API services without creating custom RESTful business component classes, reducing development and implementation costs and enabling low-code development.

 

Postscript: on CORS (Cross-Origin Resource Sharing) support in EnsLib.REST.GenericService

CORS is an HTTP-header-based mechanism that lets a server indicate origins (domain, protocol and port) other than its own from which a browser should permit loading resources.
So, to make EnsLib.REST.GenericService support CORS, its response message must carry the HTTP headers that enable CORS. I will not explain the meaning of these headers in detail here; see the W3C website or a search engine for their exact definitions. In the simplest case, replace the response-message initialization code in 4.4 above with the following:

  // Set the headers of the HTTP response
  Set tHttpRes=##class(%Net.HttpResponse).%New()
  Set tHttpRes.Headers("Access-Control-Allow-Origin")="*"
  Set tHttpRes.Headers("Access-Control-Allow-Headers")="*"
  Set tHttpRes.Headers("Access-Control-Allow-Methods")="*"
  // Initialize the response message
  Set response = ##class(EnsLib.REST.GenericMessage).%New(,,tHttpRes)

 

关于Cookies:
很多REST API需要事先登陆,获取会话信息并保存到Cookies中,从而作为后续API调用的认证信息。

这种情况下,需要开启Cookies。开启方法是打开REST业务操作组件的配置项“使用Cookie”,见下图。

在开启后,上次API调用返回的消息中的Cookies信息会被保存到业务组件实例中,并在下次API调用时自动加到HTTP头中。

注意,这是针对于同一个REST业务操作组件的。Cookies并不会跨不同的REST业务操作组件共享!


About garbled characters in returned messages:

If the returned REST message shows garbled Chinese characters, uncheck "Read Raw Mode", shown in the figure below. It is checked by default, which means no transcoding is performed regardless of the charset declared in the HTTP headers, making it well suited to pass-through. If you uncheck it, IRIS transcodes the message based on the charset in the HTTP headers into IRIS's internal Unicode encoding, so the message trace page displays the Chinese text correctly.

About HTTPS:

If the IRIS REST client accesses the REST server over HTTPS, an SSL client must be configured on the IRIS instance:

In the IRIS Management Portal > System Administration > Security > SSL/TLS Configurations, click "Create New Configuration":

Enter a "Configuration Name";

set "Type" to "Client";

and under "This client's credentials" select the certificate file and private key file. If you do not have them at hand, you can generate a pair with openssl.

After saving, you can run a test.

Then, on the BO configuration page under "Connection Settings" > SSLConfig, select the name of the SSL client created in the previous step; also make sure HTTPPort contains the correct port number (the default HTTPS port is 443):

Question · December 4, 2023

Custom Application Metric

I made a custom application metric, imported it to the USER namespace and used:

set status = ##class(SYS.Monitor.SAM.Config).AddApplicationClass("historymonitor.errorSensor", "USER")

to add it. When I do 'w status' it returns 1 so it is added but I still can't see the custom metric in the api/monitor/metrics endpoint. Even though I added %DB_USER in the application roles for api/monitor.

Does anyone know where the problem might be that the metrics endpoint still doesn't show my metric?

Article · December 1, 2023 · 13m read

"What's taking so long?" - Process Sampling for Performance Analysis

When there's a performance issue, whether for all users on the system or a single process, the shortest path to understanding the root cause is usually to understand what the processes in question are spending their time doing.  Are they mostly using CPU to dutifully march through their algorithm (for better or worse); or are they mostly reading database blocks from disk; or mostly waiting for something else, like LOCKs, ECP or database block collisions?

Tools to help answer the questions above have always been available in various forms. You start with ^JOBEXAM or the Management Portal's Process view to see a process's Routine, its State, and other tidbits, refreshing frequently to get a sense of what is dominating the process's time. You might then use ^mgstat or ^GLOSTAT to measure total system throughput, or use ^LOCKTAB or ^BLKCOL to see if there are sources of LOCK conflicts or block collisions, though it's not always clear how observations at this level reflect on the processes in question.  Lower-level tools like 'iris stat' or OS-level profiling can provide more direct evidence, but involve making inferences about what's going on inside the database kernel. Debuggers and ^%SYS.MONLBL can surely answer a lot of these questions but usually aren't appropriate for use on working production systems.

I created ^PERFSAMPLE to make narrowing in on the root cause of performance issues in the wild quicker and more straightforward. It's been available in InterSystems IRIS since version 2021.1. PERFSAMPLE samples the state of a set of processes at high frequency, then sorts and counts the sampled data along various dimensions: the process's current routine and namespace, its state string (e.g. GSETW), whether the state is one that indicates waiting or using CPU, the wait state within the database kernel if any, and the PID being sampled (if multiple).  The UI then allows you to see the sorted values for each dimension and dig into them in an order of your choosing.

Using PERFSAMPLE doesn't change the behavior of the processes being sampled.  It samples information that each process always stores in shared memory, so it has no impact on their performance, and is therefore safe to use on a live system.  The process running PERFSAMPLE itself does of course consume CPU - more as the sample rate or number of processes to sample is increased - but never more than a single CPU thread.

I'm hopeful that this tool might offer you a little more insight into the performance of your application and help make the most of InterSystems IRIS. 

A Simple Single-Process Example

Take a simple example of one process that is performing slowly.  We'll sample it and perhaps start by looking at which routines are seen most in the samples. In other words, what routine is it spending the most time executing?  Is that expected for this application, or is it surprising?  Then we might look at the most common State (as ^JOBEXAM or %SYS.ProcessQuery would report).  Is it mostly doing global references (e.g. GGET or GSET), doing device IO (READ/WRITE), waiting on a lock (LOCKW), etc.?  Maybe it's mostly doing global references, so we can look at the Kernel Wait State to see whether it's mostly waiting, and if it is, for what: disk reads, block collisions, journal writes, another internal resource, etc.  ^PERFSAMPLE lets you aggregate these dimensions of analysis in a hierarchy you choose, like a pivot table.

Here's what PERFSAMPLE looks like for one process sampled while doing some application activity. We'll look at it first in two dimensions: Using CPU? -> State.  Using CPU says whether the sampled state would indicate that the process is running, or at least it could be assuming CPU is available at the system level, as opposed to waiting for something else. 

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 11:26:46
8842 samples  |  CPULoad* 0.91
-----------------------------'?' for help-------------------------------
Using CPU? [100 %-total]
 > yes                [90.7 %-total]
   no                 [9.33 %-total] 

So this process was spending 90.7% of its time in states where we'd expect it to be using CPU (and indeed this matches the true measure of its CPU time at the operating system level). Now digging in to that 90.7% we find the following states.

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 11:26:46
8842 samples  |  CPULoad* 0.91
-----------------------------'?' for help-------------------------------
Using CPU? [yes] -> Process State [90.7 %-total]
 > RUN                [67.0 %-total]
   GGET               [15.6 %-total]
   GDEF               [5.77 %-total]
   GORD               [1.82 %-total]
   LOCK               [0.509 %-total]

Here we see that some of its CPU time is spent accessing globals (getting values, $order, etc.), but most of it is in other application logic (the general RUN state).  What about the time when it wasn't using CPU?  We go back and dig into Using CPU? [no].

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 11:26:46
8842 samples  |  CPULoad* 0.91
-----------------------------'?' for help-------------------------------
Using CPU? [no] -> Process State [9.33 %-total]
 > GDEF               [4.89 %-total]
   GGET               [4.42 %-total]
   GORD               [0.0226 %-total]

We see that the time when it wasn't using CPU was all spent in global accesses, but that doesn't tell us why, so we go back up to Using CPU? [no] and add the Kernel Wait State.

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 11:26:46
8842 samples  |  CPULoad* 0.91
-----------------------------'?' for help-------------------------------
Using CPU? [no] -> Kernel Wait State [9.33 %-total]
 > diskio             [9.33 %-total]

Now we can see that this portion of its time was reading database blocks from disk.  

So, what's our conclusion in this simple example?  This process is spending roughly 10% of its time reading the database from disk, 20% of its time doing the rest of what's required for accessing globals, and 70% in other logic.  This particular example shows a reasonable mix that suggests it might be performing about as expected given the application algorithm.  If that's too slow, we'll need to understand the application code that it's running and perhaps look for opportunities for improvement or parallelization.  If, on the other hand, we had seen that this process was dominated by the diskio wait state, questions about global buffer configuration and underlying storage hardware would come to mind, along with considering opportunities for parallelization or $prefetchon in the application.

In either case, the immediate next step in data collection might be that we end up back in ^JOBEXAM to see exactly what globals it's referencing, but now better informed with the shape of its performance profile.  Or we might even decide to use ^TRACE (a new utility in 2021.2+) to follow the exact sequence of global references that it's doing and at what lines of application code.

Multiple Processes

PERFSAMPLE can sample multiple or all processes, and the PID from which each sample came is available as a dimension of analysis.  So, for example, choosing to analyze Using CPU? -> PID would show the highest CPU users, and Routine -> PID would get the top routines and then the top processes found running each of them.  Choosing to analyze the dimensions in the opposite order, with the PID first, allows you to see the data for the other dimensions sorted separately for each individual process out of the multiple processes sampled.

Here's what PERFSAMPLE looks like after sampling all processes on the system under some particular application load.  I chose the option to Ignore samples where the process appears idle (READ, HANG, etc) to filter out processes that aren't likely to be interesting.  As a result, and as highlighted below, the captured samples are only 27.4% of the total.  We'll start by looking at the Routines that were running in our samples.  At the same time, note the CPULoad metric, which is simply expressing the average number of jobs that were Using CPU? [yes] across the samples; if the system had a sufficient number of CPUs to schedule all these jobs and if the performance was fairly uniform across the sample period, this would closely match the number of CPU threads reported busy for IRIS processes at the OS level (e.g. if this system had 8 cores with hyperthreading, the OS might show about 25% utilization: ~4 CPU threads utilized on average out of 16).

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 12:05:26
103655 events in 378240 samples [27.4 %-total]  |  CPULoad* 3.96
Multiple jobs included: 9456 samples per job
-----------------------------'?' for help-------------------------------
Routine [27.4 %-total]
 > RECOVTEST          [25.0 %-total]
   JRNDMN             [2.38 %-total]
   c                  [0.0402 %-total]
   shell              [0.0280 %-total]
   %SYS.WorkQueueMgr  [0.00132 %-total]
   %SYS.Monitor.AbstractSensor.1          [0.000529 %-total]
   SYS.Database.1     [0.000529 %-total]
   %Library.ResultSet.1                   [0.000264 %-total]

When we're sampling every process on the system, the display above, expressed as percentages of all samples, isn't always the most helpful view.  The system could have a large number of largely idle processes, with the bulk of the application activity being only a small percentage of the total. We can press 'c' to cycle the display to show the counts as a percentage of the subset.

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 12:05:26
103655 events in 378240 samples [27.4 %-total]  |  CPULoad* 3.96
Multiple jobs included: 9456 samples per job
-----------------------------'?' for help-------------------------------
Routine [27.4 %-total]
 > RECOVTEST          [91.1 %-subset]
   JRNDMN             [8.67 %-subset]
   c                  [0.147 %-subset]
   shell              [0.102 %-subset]
   %SYS.WorkQueueMgr  [0.00482 %-subset]
   %SYS.Monitor.AbstractSensor.1          [0.00193 %-subset]
   SYS.Database.1     [0.00193 %-subset]
   %Library.ResultSet.1                   [0.000965 %-subset]

Or, we can press 'c' again to display the counts in terms of the number of processes they represent.  It's simply the number of matching samples divided by the number of samples per process, but it's useful because if the performance was fairly uniform across the sample period, this can closely match the number of processes really observed in that state at any one time. Pressing 'c' can also cycle to raw counts. Here's what the top two routines look like in those alternative displays.

Routine [27.4 %-total]
 > RECOVTEST          [9.98 jobs]
   JRNDMN             [0.951 jobs]
...
Routine [103655]
 > RECOVTEST          [94396]
   JRNDMN             [8991]
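These job figures are simply each count divided by the per-job sample count (9456 in this run); for instance:

```objectscript
 // jobs = matching samples / samples per job
 Write 94396/9456,!   // ~9.98 jobs running RECOVTEST
 Write 8991/9456,!    // ~0.951 jobs: the single journal daemon, non-idle in almost every sample
```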

Note that the JRNDMN data point is immediately interesting in this view.  We know there's only one journal daemon, and with the sampled job count very close to 1 (0.951), it was seen as non-idle in almost every sample - remember we told PERFSAMPLE to ignore samples that looked idle (and if we hadn't, it would of course be exactly 1). So we immediately learn that there was substantial journal activity. While there are much more direct ways to measure journal activity if we were looking for it, this is the sort of detail that can jump out when we slice the samples in a certain way.

Now, let's focus in on that RECOVTEST routine that dominated about 90% of the non-idle samples. In a real application, the routine names alone would be more telling and might immediately point you to an area of interest, but in my simple example, the load I generated was indeed almost all from this one large routine, so we need to look further into what it's doing.  With the '>' cursor pointing at RECOVTEST, we'll press '+' and add the State dimension...

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 12:05:26
103655 events in 378240 samples [27.4 %-total]  |  CPULoad* 3.96
Multiple jobs included: 9456 samples per job
-----------------------------'?' for help-------------------------------
Routine [RECOVTEST] -> Process State [25.0 %-total]
 > GSETW              [55.1 %-subset]
   RUN                [23.1 %-subset]
   BSETW              [7.75 %-subset]
   GGETW              [6.82 %-subset]
   GSET               [3.76 %-subset]
   INCR               [1.71 %-subset]
   BSET               [0.767 %-subset]
   INCRW              [0.407 %-subset]
   GGET               [0.316 %-subset]
   LOCK               [0.279 %-subset]

In the above we see that 55% of this routine's time was spent in GSETW, which means it's doing a global SET, but the W means that it's sleeping while waiting for something (see the class reference for the State property of %SYS.ProcessQuery).  We press '+' again and add the Kernel Wait State.  Notice that we're still looking just under these samples of the RECOVTEST routine in the GSETW state; we may be interested in going up to start a new analysis with Kernel Wait State as the top dimension, but for now we're looking only for the explanation of this one particular set of data points.

PERFSAMPLE for Local Process Activity.  11.00s at 12/01/2023 12:05:26
103655 events in 378240 samples [27.4 %-total]  |  CPULoad* 3.96
Multiple jobs included: 9456 samples per job
-----------------------------'?' for help-------------------------------
Routine [RECOVTEST] -> Process State [GSETW] -> Kernel Wait State [13.8 %-total]
 > inusebufwt         [99.9 %-subset]
   resenqPer-BDB      [0.0577 %-subset]

The 'inusebufwt' state (see the ^PERFSAMPLE documentation) means that this process was waiting due to block collisions: the block that this process wanted to modify was momentarily in use by another process, so this one had to wait.  Either multiple processes are SETting, KILLing or fetching the same global variable name (global subscript) simultaneously, or there's a "false sharing" pattern where different subscripts that are modified and fetched simultaneously happen to be colocated in the same block.  Returning to start a new analysis of Kernel Wait State -> Routine would show all routines that were found in the 'inusebufwt' state.  From there, inspection of the application code and use of ^BLKCOL or ^TRACE would identify the contending global references, while ^REPAIR would let you see which subscripts are colocated in the blocks in question.

Sampling ECP Requests to the Data Server

PERFSAMPLE includes a special sampling mode for ECP Data Servers. When run on an ECP Data Server and the Sample ECP Server Requests option is used, it samples the incoming ECP requests that the data server is currently processing, including the global or lock name and its subscripts. This can be very helpful in understanding what application activity contributes the most to the load on the data server from the ECP Application Servers.  It also samples the process state of the ECP server daemon processing the request, so that the State and Kernel Wait State are available just as in the above examples.

Question · November 29, 2023

How can we set the properties of a package in VSCode?

In the new versions of IRIS, Studio is going to be deprecated. In Studio, when editing classes, there is an option to add information at the package level, via the "Package Information" option that shows this dialog:
 

 

Is there an option in VSCode to add/edit this package information? If not, how can one add/edit this information without Studio?

Thanks.
