
Announcement
· 4 hr ago

InterSystems Full Stack Contest 2026

Hi developers,

We're pleased to announce the first InterSystems online programming contest of the year:

🏆 InterSystems Full Stack Contest 🏆

Duration: February 2 – March 1, 2026

Prize pool: $12,000


The topic

Develop a full stack solution using InterSystems IRIS, InterSystems IRIS for Health, or IRIS Cloud Service as the backend. By full stack, we mean a frontend web or mobile application that inserts, updates, or deletes data in InterSystems IRIS via a REST API, the Native API, ODBC/JDBC, or Embedded Python.
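
As a rough illustration only (not an official template), the Embedded Python route can be as small as a couple of SQL calls from inside IRIS; the table and column names below are placeholders:

# Minimal Embedded Python sketch (runs inside IRIS, where the built-in `iris`
# module exposes iris.sql.exec / iris.sql.prepare). Table and column names are
# placeholders, not part of the contest requirements.
import iris

def add_task(title):
    # Insert a row that the frontend sent to the backend.
    stmt = iris.sql.prepare("INSERT INTO Demo.Task (Title) VALUES (?)")
    stmt.execute(title)

def list_tasks():
    # Return all rows for display in the frontend.
    return [row[0] for row in iris.sql.exec("SELECT Title FROM Demo.Task")]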

General requirements:

  1. An application or library must be fully functional. It must not be an import or a direct interface to an already existing library in another language (except for C++, where creating an interface for IRIS really does take a fair amount of work). It must not be a copy-paste of an existing application or library.
  2. Accepted applications: new applications on Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
  3. The application must work on IRIS Community Edition or IRIS for Health Community Edition. Both can be downloaded as host (Mac, Windows) versions from the evaluation site, or used as containers pulled from the InterSystems Container Registry or Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest.
  4. The application must be open source and published on GitHub or GitLab.
  5. The application's README file must be in English, contain the installation steps, and include a demo video and/or a description of how the application works.
  6. Only 3 entries per developer are allowed.

Note: our experts will have the final say on whether an application is approved for the contest, based on the criteria of complexity and usefulness. Their decision is final and not subject to appeal.

Prizes

  1. Expert Nomination - a specially selected jury will determine the winners:

🥇 1st place: $5,000
🥈 2nd place: $2,500
🥉 3rd place: $1,000
🏅 4th place: $500
🏅 5th place: $300
🌟 6th–10th places: $100

  2. Community Winners - the applications that receive the most votes overall:

🥇 1st place - $1,000
🥈 2nd place - $600
🥉 3rd place - $300
🏅 4th place - $200
🏅 5th place - $100

❗ If several participants receive the same number of votes, they are all considered winners and the cash prize is split among them.
❗ Cash prizes are awarded only to those who can verify their identity. If there is any doubt, the organizers will get in touch and request additional information about the participant(s).

Who can participate?

Any Developer Community member, except InterSystems employees (ISC contractors are allowed to participate). Create an account!

Developers can team up to create a collaborative application. Teams of 2 to 5 developers are allowed.

Don't forget to highlight your team members in your application's README - DC user profiles.

Important dates:

🛠 Application development and registration phase:

  • February 2, 2026 (00:00 EST): Contest begins.
  • February 22, 2026 (23:59 EST): Deadline for submitting applications.

✅ Voting period:

  • February 23, 2026 (00:00 EST): Voting begins.
  • March 1, 2026 (23:59 EST): Voting ends.

Note: developers can improve their applications throughout the registration and voting periods.

Helpful resources:

✓ Example applications:

✓ Templates we recommend to start from:

✓ For beginners with IRIS:

✓ For beginners with ObjectScript Package Manager (IPM):

✓ How to submit your application to the contest:

Need help?

Join the contest channel on the InterSystems Discord server or talk to us in the comments of this post.

We're waiting for YOUR project – join our coding marathon to win!


By participating in this contest, you agree to the competition terms set out here. Please read them carefully before proceeding.

Article
· 7 hr ago · 5m read

What Are Custom Mailer Boxes and How Do They Work?

Custom mailer boxes have become a popular packaging solution for businesses that ship products directly to customers. These boxes are designed to protect items during transit while offering a neat and organized presentation. Unlike generic shipping cartons, mailer boxes are often customized in size, structure, and material to match specific product needs. Their growing use in e-commerce and retail shows how packaging has evolved beyond simple protection.

In simple terms, custom mailer boxes are folding cartons made to fit products snugly and ship them safely without requiring additional outer packaging. They are commonly used for lightweight to medium-weight items and are shipped flat before being assembled. Because of their smart design and ease of use, these boxes are now a standard choice for brands looking for both protection and efficiency in shipping.


Understanding Custom Mailer Boxes

Custom mailer boxes are usually made from corrugated cardboard or kraft material. They are designed to be self-locking, meaning no tape or glue is required to assemble them. Once folded, the box holds its shape securely, making it suitable for shipping through courier and postal services.

These boxes are widely used by online stores, subscription services, and small businesses. Their structure allows products to stay in place, reducing movement and the risk of damage. Since they can be tailored to exact dimensions, businesses avoid using oversized boxes, which helps lower shipping costs and material waste.

Another important aspect is consistency. When products are shipped in the same type of packaging every time, handling becomes easier for both sellers and logistics providers.


How Custom Mailer Boxes Work in Shipping

The working mechanism of custom mailer boxes is simple but effective. First, the box is manufactured according to the product’s size and weight requirements. Once delivered to the business, the boxes are stored flat, saving warehouse space.

During packing, the box is folded along pre-scored lines. The locking tabs and flaps interlock to form a rigid structure. The product is placed inside, often with minimal additional padding if needed. After closing the lid, the box is ready for labeling and shipping.

Because of their sturdy construction, these boxes can withstand stacking, handling, and transportation pressures. This makes them ideal for shipping products directly to customers without needing a second outer box.


Key Features of Custom Mailer Boxes

Custom mailer boxes offer several practical features that make them suitable for modern shipping needs.

Main Features Include:

  • Self-locking design for quick assembly
  • Custom sizing to reduce empty space
  • Durable materials for product protection
  • Lightweight structure to manage shipping costs
  • Easy stacking and storage before use

These features help businesses streamline their packaging process while ensuring that products reach customers in good condition.


Materials Used in Custom Mailer Boxes

The choice of material plays a major role in how mailer boxes function. Corrugated cardboard is the most commonly used material because it provides strength without adding excessive weight. It consists of a fluted layer between two linerboards, offering cushioning and durability.

Kraft paper is another popular option, especially for businesses looking for a natural and simple appearance. It is strong, recyclable, and suitable for a wide range of products. Depending on the shipping needs, different flute sizes can be selected to provide varying levels of protection.

Material selection ensures that the box can handle pressure, vibration, and temperature changes during transit.


Custom Mailer Boxes vs Standard Shipping Boxes

Understanding the difference between custom mailer boxes and standard shipping boxes helps explain how mailer boxes work more efficiently for certain products.

Feature        | Custom Mailer Boxes    | Standard Shipping Boxes
Design         | Self-locking, foldable | Requires tape or glue
Size Fit       | Product-specific       | Often oversized
Storage        | Ships and stores flat  | Takes more space
Assembly Time  | Quick and simple       | Time-consuming
Shipping Use   | Single-box shipping    | Often needs inner packaging

This comparison shows why many businesses prefer mailer boxes for direct-to-customer deliveries.


Why Businesses Use Custom Mailer Boxes

One of the main reasons businesses use custom mailer boxes is efficiency. These boxes simplify the packing process and reduce the need for extra materials like bubble wrap or filler. When a box fits the product well, it minimizes movement and lowers the chance of damage.

Another reason is consistency in shipping. Using the same box size and structure helps businesses standardize operations. This leads to faster packing times and fewer errors during order fulfillment.

Mailer boxes also help manage shipping costs. Their lightweight nature and compact size reduce dimensional weight charges, which are common in courier pricing models.


Role of Custom Mailer Boxes in E-commerce

E-commerce relies heavily on packaging that can handle frequent shipping. Custom mailer boxes are designed to meet this demand. They are strong enough for long-distance transport and simple enough for quick order processing.

For subscription-based businesses, these boxes are especially useful. Products are shipped regularly, and having a reliable packaging solution ensures consistency across shipments. Customers also find these boxes easy to open and dispose of, which improves overall satisfaction.

As online shopping continues to grow, the role of mailer boxes in daily shipping operations becomes even more important.


Environmental Considerations

Many custom mailer boxes are made from recyclable materials, making them a more responsible packaging choice. Because they are designed to fit products closely, they reduce material waste and unnecessary fillers.

Using right-sized packaging also helps lower carbon emissions during transportation. Smaller and lighter boxes mean more efficient shipping, which benefits both businesses and the environment.

This practical approach aligns well with modern packaging trends focused on sustainability and efficiency.


Conclusion

Custom mailer boxes are a smart and functional packaging solution designed to protect products and simplify shipping. Their self-locking structure, durable materials, and custom sizing allow businesses to ship items securely without added complexity. By understanding how these boxes work, businesses can make better packaging decisions that support efficient operations and reliable deliveries.

As shipping needs continue to evolve, custom mailer boxes remain a dependable choice for businesses seeking practical, well-designed packaging solutions.

Digest
· 7 hr ago

[Weekly Digest] Developer Community posts for 1/19 – 1/25

1/19 – 1/25 · Week at a Glance · InterSystems Developer Community
Article
· 8 hr ago · 14m read

IRIS Agents: Building Agents on IRIS!

 

Ever since I started using IRIS, I have wondered whether we could create agents on IRIS. It seemed obvious: we have an Interoperability GUI that can trace messages, and we have an underlying object database that can store relational (SQL) data, vectors, and even Base64 images. We have a Python SDK that allows anyone to interface with the platform using Python, although this is where I felt somewhat limited as a Python developer. This was my attempt to create a Python SDK that leverages several parts of IRIS to support the development of agentic systems.

First, I set out to define the functional requirements:

  • Developers should code primarily in Python
  • Developers should not have to set configuration settings on Management Portal
  • Developers should not need to code in ObjectScript

Luckily, the existing Python SDK does allow quite a bit of interfacing with the IRIS data platform. Let's explore how we can leverage it to manage context, register tools, observe messages, and build agents.

Here's how I envision the SDK being used:

from iris.agents import Agent, Production, Chat, Prompt
from iris.agents.tools import Calendar, Weather
from pydantic import BaseModel

class Response(BaseModel):
	text: str  
	reasoning: str  
	
weather_conversation = Chat('WeatherDiscussion')
molly_system = Prompt('MollySystemPrompt').build(scale='Fahrenheit')
alex_system = Prompt(name='AlexSystemPrompt',
					text='You are an excellent assistant')

molly = Agent(
	name='Molly',
	description='General Assistant Agent',
	system_prompt=molly_system,
	model='gpt-5',
	response_format=Response)

alex = Agent(
	name='Alex',
	description='General Assistant Agent',
	system_prompt=alex_system,
	model='gpt-5',
	tools=[Calendar, Weather],
	response_format=Response)


prod = Production(name='AgentSpace', agents=[molly, alex]).start()

molly("What's the weather in Boston today?",
	  chat=weather_conversation)

Let's start by defining the structure of an agent in the IRIS context. Every agent in IRIS is construed as a Business Process with its own Business Service. Every tool is construed as a Business Operation. Some tools come out of the box, such as one to query the IRIS database using SQL (also used for vector search) and one to call an LLM. The underlying database is used to store knowledge bases, prompts, agent configurations, production specs, user information, and logged information such as agent reasoning.

Before we dive into the Agents themselves, let's look at how Messages are handled. I converted each Pydantic BaseModel (useful for structured outputs) into an ObjectScript Message class stored in the "Agents" package. If the developer defines a new BaseModel with an existing name, the new structure overrides the previous one. These Message classes are converted back into Pydantic BaseModels in the LLM Business Operation, which makes the call via Embedded Python using the appropriate libraries.

# Assumes `iris` is the InterSystems Python module (used for IRISReference below)
# and get_connection() is the author's helper returning a connection / Native API handle.
import iris
from pydantic import BaseModel

class Message:
    def __init__(self, name, model: BaseModel):
        self.name = name
        self.build_message(model)
    
    def build_message(self, model:BaseModel):
        model_json = model.model_json_schema()

        cls_name = f'Agents.Message.{self.name}'
        cls_text = f'''Class {cls_name} Extends (Ens.Request, %JSON.Adaptor)

        {{

        '''

        for prop_name, prop_attribs in model_json['properties'].items():
            cls_text += f'''Property {prop_name} As {r'%Double' if prop_attribs['type'] == 'number' else r'%String'}; \n'''
        cls_text += '}'
        irispy = get_connection(True)

        stream = irispy.classMethodObject('%Stream.GlobalCharacter', '%New')
        stream.invoke('Write', cls_text)
        stream.invoke('Rewind')

        errorlog = iris.IRISReference(None)
        loadedlist = iris.IRISReference(None)

        sc = irispy.classMethodValue(
            '%SYSTEM.OBJ', 'LoadStream',
            stream,
            'ck',
            errorlog,
            loadedlist,
            0,
            '',
            f'{cls_name}.cls',
            'UTF-8'
        )

        if sc != 1:
            raise RuntimeError(irispy.classMethodValue("%SYSTEM.Status", "GetErrorText", sc))

        return 'Successful'
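
For example, the Response model from the envisioned usage above would be registered roughly like this (a sketch; the exact import path of Message is whatever module the SDK ends up exposing):

from pydantic import BaseModel

class Response(BaseModel):
    text: str
    reasoning: str

# Compiles Agents.Message.Response on the IRIS side; defining another BaseModel
# with the same name later simply overrides the stored structure.
msg = Message('Response', Response)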

Once Messages are taken care of, here's how I created my Agent class:

# Relies on pandas plus the same helpers as above (get_connection, Prompt, Chat, Tool).
import pandas as pd

class Agent:
    def __init__(self,
                 name: str,
                 description: str | None = None,
                 system_prompt: Prompt | None = None,
                 model: str | None = None,
                 tools: list[Tool] | None = None,
                 response_format: BaseModel | None = None,
                 chat: Chat | None = None,
                 override: bool = True):
        conn = get_connection()
        cur = conn.cursor()
        response_format = Message(response_format.__name__, response_format)

        sql = '''SELECT TABLE_NAME
                    FROM INFORMATION_SCHEMA.Tables
                    WHERE TABLE_TYPE='BASE TABLE'
                    AND TABLE_SCHEMA='SQLUser' '''
        if 'Agent' not in pd.read_sql_query(sql, conn)['TABLE_NAME'].to_list():
            sql = '''CREATE TABLE Agent (
                        agent_name VARCHAR(200) NOT NULL PRIMARY KEY,
                        description VARCHAR(4000),
                        system_prompt_id VARCHAR(200),
                        model VARCHAR(200),
                        tools VARCHAR(4000),
                        response_format VARCHAR(4000),
                        chat_id VARCHAR(200)
                        )'''
            cur.execute(sql)
            conn.commit()

        # 2) Check if agent exists
        sql = f"SELECT * FROM Agent WHERE agent_name = '{name}'"
        agent_df = pd.read_sql_query(sql, conn)

        if agent_df is not None and len(agent_df) > 0:
            row = agent_df.iloc[0]

            if not override:
                self.name = row['agent_name']
                self.description = row['description']
                self.system_prompt = Prompt(row['system_prompt_id']) if row['system_prompt_id'] else None
                self.model = row['model']
                self.tools = row['tools']
                self.response_format = row['response_format']
                self.chat_id = row['chat_id']
                return
            sp_id = system_prompt.name if system_prompt else row['system_prompt_id']
            chat_id = chat.id if chat else row['chat_id']

            sql = f'''UPDATE Agent SET
                        description = '{description}',
                        system_prompt_id = '{sp_id}' ,
                        model = '{model}',
                        tools = '{str(tools)}',
                        response_format = '{response_format.name if response_format else None}',
                        chat_id = '{chat_id}'
                        WHERE agent_name = '{name}' '''
            cur.execute(sql)
            conn.commit()

            self.name = name
            self.description = description
            self.system_prompt = Prompt(sp_id) if sp_id else None
            self.model = model
            self.tools = tools
            self.response_format = response_format
            self.chat = chat
            return
        # 3) Agent does not exist → create or error
        if any(x is None for x in (description, model, response_format)):
            raise KeyError("Missing required fields to create a new agent.")

        sp_id = system_prompt.name if system_prompt else None
        chat_id = chat.id if chat else None
        sql = f'''INSERT INTO Agent
                    (agent_name, description, system_prompt_id, model, tools, response_format, chat_id)
                    VALUES
                    ('{name}', '{description}', '{sp_id}', '{model}', '{str(tools)}', '{response_format.name if response_format else None}', '{chat_id}')'''
        cur.execute(sql)
        conn.commit()

        self.name = name
        self.description = description
        self.system_prompt = Prompt(sp_id) if sp_id else None
        self.model = model
        self.tools = tools
        self.response_format = response_format
        self.chat = chat

    def __repr__(self) -> str:
        return f"Agent(name={self.name!r}, model={self.model!r}, system_prompt={getattr(self.system_prompt,'name',None)!r})"
    def __call__(self, message: str, chat: Chat | None = None) -> str:
        # TODO: API call to this agent's gateway Business Service
        pass

When an Agent is initialized for the first time, most of its parameters are required. Once an agent has been defined, it can be fetched with a simple Agent("Name") call and its specs are loaded from the database, or they can be overridden by providing different specs.

For Prompts, I created a versioning system where prompts can be identified by their names (similar to Agents and Messages), but subsequent changes are versioned and stored, with the latest version being fetched when called. The prompt can also be "built" at runtime, which might allow users to inject details into a prompt template depending on the use case. All Prompts are persisted in tables.
 

class Prompt:
    def __init__(self, name:str, text:str|None=None, iris_args:dict[str,str]|None=None):
        conn = get_connection()
        cur = conn.cursor()

        sql = '''SELECT TABLE_SCHEMA, TABLE_NAME from INFORMATION_SCHEMA.Tables WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = 'SQLUser' '''
        if 'Prompt' not in pd.read_sql_query(sql, conn)['TABLE_NAME'].to_list():

            sql = '''CREATE TABLE Prompt (
                prompt_id    VARCHAR(200) NOT NULL,
                prompt_text   VARCHAR(200) NOT NULL,
                version INT NOT NULL,
                PRIMARY KEY (prompt_id, version))'''
            cur.execute(sql)
            conn.commit()

        # IRIS SQL uses TOP rather than LIMIT to restrict the result set
        sql = f'''SELECT TOP 1 * FROM Prompt WHERE prompt_id = '{name}' ORDER BY version DESC'''
        prompt_df = pd.read_sql_query(sql, conn)

        last_text = None
        version = 0
        if prompt_df is not None and len(prompt_df) > 0:
            name, last_text, version = prompt_df.iloc[0].tolist()
        self.name = name
        self.text = last_text
        self.version = version

        if not last_text and not text:
            raise KeyError(f'No prompt text found for \'{name}\', and no \'text\' was provided.')
        
        if text:
            sql = f'''INSERT INTO Prompt (prompt_id, prompt_text, version) VALUES ('{name}', '{text}', {version + 1})'''
            cur.execute(sql)
            conn.commit()
            self.text = text
            self.version += 1
    def __repr__(self) -> str:
        return f'Prompt(name={self.name!r}, version={self.version}, text={self.text!r})'
    def __str__(self) -> str:
        return self.text or ''
    def build(self, **vars) -> str:
        import string
        vars_req = {var for _, var, _, _ in string.Formatter().parse(self.text) if var}
        missing = vars_req - vars.keys()
        if missing:
            raise KeyError(f'Missing variables {sorted(missing)} for the selected prompt')
        return self.text.format(**vars)
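
To make the versioning behaviour concrete, a typical flow would look something like this (the prompt text is illustrative only):

# First definition stores version 1 of 'MollySystemPrompt'.
Prompt('MollySystemPrompt',
       text='You are a helpful assistant. Report temperatures in {scale}.')

# Passing text again inserts a new row as version 2 ...
Prompt('MollySystemPrompt',
       text='You are a concise assistant. Report temperatures in {scale}.')

# ... while omitting text fetches the latest stored version.
latest = Prompt('MollySystemPrompt')
system_text = latest.build(scale='Fahrenheit')  # KeyError if a placeholder is left unfilled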

Finally, the Production itself. The production creates the production configuration as well as the dispatch class needed to pass the REST calls to the correct Business Service (depending on which agent is being invoked).

class Production:
    def __init__(self, 
                 name: str,
                 agents: list[Agent], 
                 tools: list[Tool] | None = None):
        self.name = name
        self.agents = agents
        self.build_production()
        self.create_dispatch()

    def create_class(self, name, text):
        irispy = get_connection(True)

        stream = irispy.classMethodObject('%Stream.GlobalCharacter', '%New')
        stream.invoke('Write', text)
        stream.invoke('Rewind')

        errorlog = iris.IRISReference(None)
        loadedlist = iris.IRISReference(None)

        sc = irispy.classMethodValue(
            '%SYSTEM.OBJ', 'LoadStream',
            stream,
            'ck',
            errorlog,
            loadedlist,
            0, 
            '',
            f'{name}.cls',
            'UTF-8'
        )

        if sc != 1:
            raise RuntimeError(irispy.classMethodValue("%SYSTEM.Status", "GetErrorText", sc))
        
    def create_gateway(self, name:str):
        cls_text = f'''Class Agents.Gateway.{name}Service Extends Ens.BusinessService
            {{
            Method OnProcessInput(pInput As Agents.Message.Request, Output pOutput As Agents.Message.Response) As %Status
            {{
                Set sc = ..SendRequestSync("{name}", pInput, .pResponse)
                Set pOutput = pResponse.%ConstructClone(0)
                Quit sc
            }}
            ClassMethod OnGetConnections(Output pArray As %String, pItem As Ens.Config.Item)
            {{
                Do ##super(.pArray, pItem)
                Set pArray("{name}") = ""
            }}
            }}
            '''
        self.create_class(f'Agents.Gateway.{name}Service', cls_text)

    def create_process(self, name:str, response_format:str):
        cls_text = f'''Class Agents.Process.{name} Extends Ens.BusinessProcessBPL
            {{
            
            ClassMethod BuildChatJSON(pText As %String) As %String
            {{
                Set arr = ##class(%DynamicArray).%New()
                Set obj = ##class(%DynamicObject).%New()
                Do obj.%Set("role","user")
                Do obj.%Set("content", pText)
                Do arr.%Push(obj)
                Quit arr.%ToJSON()
            }}
            
            /// BPL Definition
            XData BPL [ XMLNamespace = "http://www.intersystems.com/bpl" ]
            {{
            <process language='objectscript' request='Agents.Message.Request' response='Agents.Message.{response_format}'>
            <context>
            <property name='LLMResponse' type='Agents.Message.LLMResponse' instantiate='0' />
            <property name='ChatJSON' type='%String' instantiate='0' />
            </context>

            <sequence>
            <switch>
            <case name='LLM' condition='1'>
            <assign property="context.ChatJSON"
                action="set"
                languageOverride="objectscript"
                value="##class(Agents.Process.{name}).BuildChatJSON(request.Message)" />


            <call name='CallLLM' target='LLM' async='0'>
            <request type='Agents.Message.LLMRequest' >
            <assign property="callrequest.responseType" value="&quot;Agents.Message.{response_format}&quot;" action="set" />
            <assign property="callrequest.chat" value="context.ChatJSON" action="set" />
            </request>
            <response type='Agents.Message.LLMResponse' >
            <assign property="context.LLMResponse" value="callresponse" action="set"/>
            </response>
            </call>

            <assign property="response.Message" value="context.LLMResponse.message" action="set"/>
            </case>

            <default>
            <assign property="response.Message" value="&quot;Hello&quot;" action="set"/>
            </default>
            </switch>
            </sequence>
            </process>
            }}

            }}'''
        self.create_class(f'Agents.Process.{name}', cls_text)



    def build_production(self):
        prod_xml = f'''<Production Name="{self.name}" LogGeneralTraceEvents="false">
            <Description></Description>
            <ActorPoolSize>1</ActorPoolSize>
            '''
        for agent in self.agents:

            self.create_gateway(agent.name)

            self.create_process(agent.name, agent.response_format.name)

            prod_xml += f'<Item Name="{agent.name}Gateway" ClassName="Agents.Gateway.{agent.name}Service" PoolSize="1" Enabled="true"/>\n' + \
                f'<Item Name="{agent.name}" ClassName="Agents.Process.{agent.name}" PoolSize="1" Enabled="true"/>\n'
        prod_xml += '<Item Name="LLM" ClassName="Agents.Operation.LLM" PoolSize="1" Enabled="true"/>\n</Production>'
        cls_text = f"""Class {self.name} Extends Ens.Production
        {{
        XData ProductionDefinition
        {{
        {prod_xml}
        }}
        }}
        """
        self.create_class(self.name, cls_text)

    def start(self):
        # Stop existing Production
        irispy = get_connection(True)
        sc = irispy.classMethodValue("Ens.Director", "StopProduction", 10, 1)
        if sc != 1:
            print(irispy.classMethodValue("%SYSTEM.Status","GetErrorText", sc))
        

        irispy = get_connection(True)
        sc = irispy.classMethodValue("Ens.Director", "StartProduction", self.name)
        if sc != 1:
            raise RuntimeError(irispy.classMethodValue("%SYSTEM.Status", "GetErrorText", sc))

        print("Created/compiled/started:", self.name)

    def create_dispatch(self):
        cls_text = r'''
        Class Agents.REST.Dispatch Extends %CSP.REST
        {

        XData UrlMap
        {
        <Routes>
            <Route Url="/:agentName" Method="POST" Call="Agent" Cors="false" />
        </Routes>
        }

        /// POST /csp/agents/{agentName}
        ClassMethod Agent(agentName As %String) As %Status
        {
            Set %response.ContentType="application/json"

            Set body = %request.Content.Read()
            If body = "" {
                Do %response.SetStatus(400)
                Quit $$$OK
            }

            Set req = ##class(Agents.Message.Request).%New()
            Do req.%JSONImport(body)

            Set itemName = agentName _ "Gateway"
            Set sc = ##class(Ens.Director).SendRequestSync(itemName, .req, .resp)

            If sc '= 1 {
                Do %response.SetStatus(500)
                Quit $$$OK
            }

            Do %response.SetStatus(200)
            // The Message classes extend %JSON.Adaptor, so serialize via %JSONExportToString
            Do resp.%JSONExportToString(.json)
            Do %response.Write(json)
            Quit $$$OK
        }

        }
        '''
        self.create_class('Agents.REST.Dispatch', cls_text)
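
Once the web server issue is sorted out, invoking an agent through the dispatch class should look roughly like this from the client side (the base URL, web application path, and credentials are placeholders for however the REST application gets configured):

import requests

# The dispatch class routes POST /csp/agents/{agentName} to the "{agentName}Gateway"
# item; the JSON body is %JSONImport-ed into Agents.Message.Request.
resp = requests.post(
    'http://localhost:52773/csp/agents/Molly',
    json={'Message': "What's the weather in Boston today?"},
    auth=('_SYSTEM', 'SYS'),
    timeout=30,
)
print(resp.status_code, resp.text)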

The work here is far from over: I am currently exploring ways to send requests to the system through the dispatch class without needing to work in the Management Portal (currently it seems the web server is blocking my requests before they can reach the dispatch class). Once that is fixed, we need a few more elements to make this super useful:

NL2SQL Tool: This tool profiles a table, including creating descriptions, vectors, and the like. I have already created the algorithm to do this, but I intend to make it into a tool that can be called directly from Python to profile new tables, which can then be leveraged by the LLM to create SQL statements.

SQL Business Operation: This tool would query the database and return the information. This would also be used by a higher level Vector Search and Index SDK that would query the database using SQL statements.

Passthrough: For Vector Search and NL2SQL profiles, a passthrough process would exist to serve the information to appropriate business services without involving agents.

Chat: Chat would exist as a table containing messages alongside chat_ids. A call to an Agent can be parameterized with a chat_id to dynamically query the database and construct the past conversation before making the LLM call. If no Chat is provided, the agentic flow remains standalone.
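
As a rough sketch of that last piece (the table layout and column names here are my assumptions, mirroring the Agent and Prompt tables above):

# Hypothetical Chat storage: one row per message, keyed by chat_id and replayed
# in order to reconstruct the conversation before the LLM call. Uses the same
# get_connection/pandas helpers as the rest of the SDK.
conn = get_connection()
cur = conn.cursor()
cur.execute('''CREATE TABLE Chat (
                 chat_id VARCHAR(200) NOT NULL,
                 seq     INT NOT NULL,
                 role    VARCHAR(50),
                 content VARCHAR(4000),
                 PRIMARY KEY (chat_id, seq))''')
conn.commit()

history = pd.read_sql_query(
    "SELECT role, content FROM Chat WHERE chat_id = 'WeatherDiscussion' ORDER BY seq", conn)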

 

Note on why IRIS is uniquely positioned to help agentic application development

A typical agentic application development flow is messy: MCP tooling, context retrieval from a database, a separate vector DB, and observability (reasoning logged to inform prompt optimization) spread across yet another database and platforms like Langfuse, which itself uses multiple databases under the hood. IRIS offers a single platform to develop agents end to end, observe messages and traces in the Management Portal, and enable developers in ways few (if any) platforms can. I hope to publish this project on Open Exchange once it is finalized and packaged appropriately.

I hope you have enjoyed reading this article. If you have any questions, I'm always happy to discuss ideas, especially those that can change the world for the better!

Article
· 9 hr ago · 5m read

ClassMethods Are Not Your Friends

There seems to be a generous use of ClassMethods in ObjectScript code generally. I hope my own experiences aren't representative, but I bet they are. Forgive me for giving away the ending of this article, but in short: don't use them. Unless you can make a pretty convincing case that you have to, just never use them.1

What is a ClassMethod? In an ObjectScript class, you can define methods in two different ways: a Method requires an instance of the class in order to be called, while a ClassMethod can be called without instantiating the class. Of course, in a ClassMethod, you don't have access to any properties of the object (because there's no object), but you can access globals (they are global, after all) and Parameters (which are class constants).

It seems that the default development practice is to prefer a ClassMethod over a Method whenever the method doesn't reference class properties. After all, if there is no dependency on the state of the class, why not reflect the lack of dependency by declaring it a ClassMethod? That sounds like "functional programming", and that's really fashionable now, right?

The answer is: because the problem with "state" in functional programming is global state, and by declaring your method a ClassMethod, you have forced your function's definition into your user's global state. Take the following example:

Class MyClass Extends %RegisteredObject {
ClassMethod MyMethod() As %String {
    ...
}
}

Now, consider how this method would be used:

Class MyOtherClass Extends %RegisteredObject {
ClassMethod MyOtherMethod() {
    ...
    Return ##class(MyClass).MyMethod()
}
}

In this snippet, the phrase "##class(MyClass).MyMethod" is, for all intents and purposes, a global variable, and now MyOtherClass does have dependencies on the global state. All of our concerns about "functional programming" are now thwarted.

So, what? What's the difference? Consider the possibility that MyMethod's logic accesses a network resource to get its value. You've now made it much more difficult to test MyOtherClass.MyOtherMethod, because there's no way to stop it from accessing the network resource when you run your %UnitTest.TestCase, which you definitely wrote so that you aren't in danger of introducing code with a bug in it, right? But, if you wrote MyClass with a Method instead, it's easy:

Class MyOtherClass Extends %RegisteredObject {
Property myClassAccessObject As MyClass;
Method %OnNew(accessObject As MyClass = "") As %Status {
    if accessObject '= "" {
        Set ..myClassAccessObject = accessObject
    } else {
        Set ..myClassAccessObject = ##class(MyClass).%New()
    }
    Return $$$OK
}
Method MyOtherMethod() {
    Set value = ..myClassAccessObject.MyMethod()
    Return value
}
}

Class MockMyClass Extends %RegisteredObject {
Property calls As %Integer;
Property returnValue As %String;
Method %OnNew(returnValue As %String = "") As %Status {
    Set ..calls = 0
    Set ..returnValue = returnValue
    Return $$$OK
}
Method MyMethod() As %String {
    Set ..calls = ..calls + 1
    Return ..returnValue
}
}

Class MyOtherClassTest Extends %UnitTest.TestCase {
Method TestSimple() {
    Return ..ExerciseMethod("simple value")
}
Method TestComplex() {
    Return ..ExerciseMethod("more complicated value")
}

Method ExerciseMethod(expectedValue As %String) As %Status {
    Set mock = ##class(MockMyClass).%New(expectedValue)
    Set sut = ##class(MyOtherClass).%New(mock) // Because there's no point in fixing it in MyClass and not MyOtherClass
    Set actual = sut.MyOtherMethod()
    Do $$$AssertEquals(mock.calls, 1, "mock called exactly once")
    Do $$$AssertEquals(actual, expectedValue, "value passed through unchanged")
    Return $$$OK
}
}

Now, it's easy to test multiple different values coming from the network resource in your system without having to make changes to your production data every time you make a change to MyOtherClass and run your tests.

I hear you asking, "but what if MyClass doesn't access a network resource? This seems like a lot of bother when I know what MyClass does, and it doesn't do that." The point is that with those 5 simple letters, C-l-a-s-s, you've guaranteed that MyClass will never access network data for the duration of your software's existence, or take a long time to run, or require complex setup to get it to return a value that you want it to return so that you can test MyOtherClass with different return values from MyClass. You haven't just violated good functional programming practices, you've pretended to know the future by putting limits on your future self, and now you've gotten in trouble with the Alethi church.2

Seriously, it looks like a pain to do all of that plumbing code of instantiating the default value and implementing an OnNew method, but think about your code for a minute. I'm guessing it doesn't take a lot of looking to see a lot of code that's only there to work around the fact that you aren't doing this. Or your day is full of activity messing around with downstream systems to try and test your code with the right values. Or your day is full of pointless runaround because your code is buggy because you think you can make "one small fix" without testing because it's such a pain to exercise it. The process of development takes more brain space and concentration, and it's exhausting to get it to work.

So when should you use ClassMethods? Don't! Seriously! More seriously, the rule of thumb is that Methods should be the default and ClassMethods should only be used when you are using a Singleton pattern, because that's essentially what you've established by defining a ClassMethod: a Singleton with no state. Is this an example of "Speculative Generality", the "code smell" where you're making things more complicated by anticipating more than will ever happen? No! It's the opposite; by making it a ClassMethod, you have made assumptions about what the code will never do in ways that your users will have to code around and compensate for. If you can't justify it with the Singleton pattern or some other really good reason, just don't use ClassMethods. Use Methods.

1. This is a well-written article talking about the same subject. I don't know anything about the author; I'm linking because it agrees with me. It was written in 2022, but it also references a book by the great Robert Martin that dates to 2008, and I learned this as a "best-practice" as soon as I started working professionally, and all I'll say is I started college in a year starting with "19".
2. This is a joke from the Stormlight Archive books by Brandon Sanderson. I'm not apologizing, and I'm not taking it out.
