
Question
· June 25, 2025

Crystal Energy in Healthcare: Ancient Wisdom for Modern Healing

When Traditional Medicine Meets Earth’s Energy

At 3 AM during a long shift, Nurse Sarah noticed her patient’s blue mood ring had darkened to stormy gray—the same moment their vitals began fluctuating. Coincidence? Or something deeper?

At Crystal Browser, we explore how healthcare workers are safely integrating crystal energy into their self-care and patient support routines. Not as alternatives to medicine, but as adjuncts to holistic care.


1. Crystals with Clinical Relevance

Evidence-Based Mineral Support

Crystal | Documented Properties | Healthcare Applications
Amethyst | Stress reduction (NIH studies) | At nurses’ stations for shift resilience
Black Tourmaline | EMF absorption (Physics reports) | Near ICU monitoring equipment
Lepidolite | Lithium content (Geological data) | In staff break rooms for mood support
Clear Quartz | Focus enhancement (Anecdotal) | At clinician workstations

Note: Always prioritize evidence-based medicine. These are complementary tools.

Case Study: A palliative care unit reported improved family satisfaction scores after introducing Crystal Browser’s "comfort stone lending library."


2. Implementing Crystal Protocols Safely

A. For Clinicians

  • Stethoscope Charm: Small hematite for grounding during chaotic shifts
  • Code Blue Prep: Keep carnelian in your pocket for focus under pressure

B. For Patients

  • Consent-First Approach: "Would you like to hold this smooth stone while we wait for results?"
  • Sanitation Matters: Use non-porous stones (like quartz) that can be sterilized

Download Our Free Guide:
Crystal Safety in Clinical Settings


3. Why Healthcare Needs Both Science and Serenity

The Data on Healing Environments

  • Studies show natural elements in hospitals reduce patient pain perception
  • 68% of nurses in our Crystal Browser survey reported crystals helped their own stress management

A Balanced Approach

"We use amethyst in our meditation room—not the OR."
— Dr. Rachel K., Integrative Medicine Director


Community Discussion

Poll: Where could crystals ethically complement healthcare?

  • 🏥 Staff wellness programs
  • 🧘 Patient relaxation initiatives
  • ⚠️ Nowhere—they’re inappropriate

Share Your Experience:
"Has your facility experimented with holistic elements? We’d love to feature respectful case studies on Crystal Browser."

Announcement
· June 25, 2025

Join us online for our Developer Ecosystem Ready 2025 session!

Hi Community!

We have super awesome news for you! We're going to try to broadcast our Developer Ecosystem session from InterSystems Ready 2025:

👥 InterSystems Developer Ecosystem: New Resources and Tools You Need to Know

📅 Wednesday, June 25, 2025

🕑 14:25 - 14:45 EDT

Join us for the latest resources and tools available in the Developer Ecosystem. And stay till the end for the fun quiz (we love them!) with great prizes! 

Join us online or in person in Bonnet Creek X 🤩


P.S. Don't worry, it doesn't matter whether you play the quiz online or offline; you're still in the game!

Article
· June 25, 2025 · 9m read

InterSystems for dummies – Machine learning II

   

 

Previously, we trained our model using machine learning. However, the sample data we utilized was generated directly from insert statements. 

Today, we will learn how to load this data straight from a file.

Dump Data

Before loading the data from your file, check which header names the fields have.

In this case, the file is called “Sleep_health_and_lifestyle_dataset.csv” and is located in the data/csv folder.

This file contains 374 records plus a header (375 lines).

The header includes the following names and positions:

  1. Person ID
  2. Gender
  3. Age
  4. Occupation
  5. Sleep Duration
  6. Quality of Sleep
  7. Physical Activity Level
  8. Stress Level
  9. BMI Category
  10. Systolic
  11. Diastolic
  12. Heart Rate
  13. Daily Steps
  14. Sleep Disorder


It is essential to know the names of column headers.

The class St.MLL.insomnia02 has different column names; therefore, we need to load the data by mapping each header name in the file to the corresponding column in the table.

LOAD DATA FROM FILE '/opt/irisbuild/data/csv/Sleep_health_and_lifestyle_dataset.csv'
INTO St_MLL.insomnia02 
(Gender,Age,Occupation,SleepDuration,QualitySleep,PhysicalActivityLevel,
StressLevel,BMICategory,Systolic,Diastolic,HeartRate,DailySteps,SleepDisorder)
VALUES ("Gender","Age","Occupation","Sleep Duration","Quality of Sleep","Physical Activity Level",
"Stress Level","BMI Category","Systolic","Diastolic","Heart Rate","Daily Steps","Sleep Disorder")
USING {"from":{"file":{"header":true}}}

 

All the information makes sense, but… What is the last instruction?

{
  "from": {
    "file": {
      "header": true
    }
  }
}

This is an instruction that tells the LOAD DATA command how to interpret the file (whether or not it has a header, whether the column separator is a different character, etc.).

You can find more information about the JSON options by checking out the following links: 

LOAD DATA (SQL)

LOAD DATA jsonOptions 

Since the column names in the file do not match those in the table, it is necessary to indicate that the file has a header line, because by default this value is “false”.
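
The same options object can describe other characteristics of the file. For example, if a file used semicolons instead of commas as the column separator, the options could look like the sketch below (the exact option names, such as columnseparator, should be confirmed in the LOAD DATA jsonOptions documentation linked above):

{
  "from": {
    "file": {
      "header": true,
      "columnseparator": ";"
    }
  }
}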

Now, we will drill our model once more. With much more data in hand, it will be way more efficient at this point.

TRAIN MODEL insomnia01AllModel FROM St_MLL.insomnia02
TRAIN MODEL insomnia01SleepModel FROM St_MLL.insomnia02
TRAIN MODEL insomnia01BMIModel FROM St_MLL.insomnia02

Populate the St_MLL.insomniaValidate02 table with 50% of St_MLL.insomnia02 rows:

INSERT INTO St_MLL.insomniaValidate02(
Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic)
SELECT TOP 187
Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic
FROM St_MLL.insomnia02
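
With the validation table populated, each model can be validated against it and the resulting metrics reviewed. The sketch below assumes the three models created in the previous article and the INFORMATION_SCHEMA views provided by IntegratedML:

VALIDATE MODEL insomnia01AllModel FROM St_MLL.insomniaValidate02
VALIDATE MODEL insomnia01SleepModel FROM St_MLL.insomniaValidate02
VALIDATE MODEL insomnia01BMIModel FROM St_MLL.insomniaValidate02

SELECT * FROM INFORMATION_SCHEMA.ML_VALIDATION_METRICS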

Next, populate the St_MLL.insomniaTest02 table, which we will use to test the predictions:

INSERT INTO St_MLL.insomniaTest02(
Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic)
SELECT TOP 50
Age, BMICategory, DailySteps, Diastolic, Gender, HeartRate, Occupation, PhysicalActivityLevel, QualitySleep, SleepDisorder, SleepDuration, StressLevel, Systolic
FROM St_MLL.insomnia02

Proceeding with our previous example (a 29-year-old female nurse), we can check what prediction we get this time.

Note: The following queries will be focused exclusively on this type of person.

SELECT *, PREDICT(insomnia01AllModel) FROM St_MLL.insomnia02
WHERE age = 29 and Gender = 'Female' and Occupation = 'Nurse'

SURPRISE!!! The result is identical to the one with less data. We thought that training our model with more data would improve the outcome, but we were wrong.

For a change, I executed the probability query instead, and I got a pretty interesting result:

SELECT Gender, Age, SleepDuration, QualitySleep, SleepDisorder, PREDICT(insomnia01SleepModel) As SleepDisorderPrediction, PROBABILITY(insomnia01SleepModel FOR 'Insomnia') as ProbabilityInsomnia,
PROBABILITY(insomnia01SleepModel FOR 'Sleep Apnea') as ProbabilityApnea
FROM St_MLL.insomniaTest02
WHERE age = 29 and Gender = 'Female' and Occupation = 'Nurse'

 
According to the data (sex, age, sleep quality, and sleep duration), the probability of having insomnia is only 46.02%, whereas the chance of having sleep apnea is 51.46%.

Our previous data training provided us with the following percentages: insomnia - 34.63%, and sleep apnea - 64.18%.

What does this mean? The more data we have, the more accurate the results we obtain.

Time Is Money

Now, let's try another type of training, using time series.

Following the same steps we took to build the insomnia table, I created a class called WeatherBase:

Class St.MLL.WeatherBase Extends %Persistent
{

/// Date and time of the weather observation in New York City
Property DatetimeNYC As %DateTime;
/// Measured temperature in degrees
Property Temperature As %Numeric(SCALE = 2);
/// Apparent ("feels like") temperature in degrees
Property ApparentTemperature As %Numeric(SCALE = 2);
/// Relative humidity (0 to 1)
Property Humidity As %Numeric(SCALE = 2);
/// Wind speed in appropriate units (e.g., km/h)
Property WindSpeed As %Numeric(SCALE = 2);
/// Wind direction in degrees
Property WindBearing As %Numeric(SCALE = 2);
/// Visibility distance in kilometers
Property Visibility As %Numeric(SCALE = 2);
/// Cloud cover fraction (0 to 1)
Property LoudCover As %Numeric(SCALE = 2);
/// Atmospheric pressure in appropriate units (e.g., hPa)
Property Pressure As %Numeric(SCALE = 2);
}

Then, I built two classes extending WeatherBase (Weather and WeatherTest). This allowed me to have the same columns in both tables.
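
The article does not show these two subclasses, but since they only inherit the columns from WeatherBase, they can be as minimal as the sketch below (the real WeatherTest also defines the Populate() classmethod used later, which is omitted here):

Class St.MLL.Weather Extends St.MLL.WeatherBase
{
}

Class St.MLL.WeatherTest Extends St.MLL.WeatherBase
{
}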

There is a file named “NYC_WeatherHistory.csv” in the csv folder. It contains the temperature, humidity, wind speed, and pressure for New York City in 2015. That is a wealth of data! For that reason, we will load the file into our table using what we have just learned about loading data from a file.

LOAD DATA FROM FILE '/opt/irisbuild/data/csv/NYC_WeatherHistory.csv'
INTO St_MLL.Weather 
(DatetimeNYC,Temperature,ApparentTemperature,Humidity,WindSpeed,WindBearing,Visibility,LoudCover,Pressure)
VALUES ("DatetimeNYC","Temperature","ApparentTemperature","Humidity","WindSpeed","WindBearing","Visibility","LoudCover","Pressure")
USING {"from":{"file":{"header":true}}}

📣NOTE: The names of the columns in the file and the fields in the table are the same; therefore, we can use the following statement instead.
LOAD DATA FROM FILE '/opt/irisbuild/data/csv/NYC_WeatherHistory.csv'
INTO St_MLL.Weather 
USING {"from":{"file":{"header":true}}}
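
To double-check the load, a quick row count can be run; it should report the 8760 hourly records for 2015 mentioned later in this article (this query is just an optional sanity check):

SELECT COUNT(*) AS TotalRows FROM St_MLL.Weather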

Now we will create our model, but we will do it in a particular way.

CREATE TIME SERIES MODEL WeatherForecast 
PREDICTING (Temperature, Humidity, WindSpeed, Pressure) 
BY (DatetimeNYC) FROM St_MLL.Weather
USING {"Forward":3}

 

If we wish to create a prediction series, we should take into account the recommendations below:

  1. The date field must be datetime.
  2. Try to sort the data chronologically.
📣NOTE: This advice comes from Luis Angel Perez, thanks to his great experience in Machine Learning.

The last clause, USING {"Forward":3}, sets the number of timesteps to forecast for the time series.

The USING clause accepts other parameters as well:

forward specifies the number of timesteps in the future that you would like to forecast, as a positive integer. Forecasted rows will appear after the latest time or date in the original dataset. You may specify both this and the backward setting simultaneously.

Example: USING {"Forward":3}

backward defines the number of timesteps in the past that you would like to predict as a positive integer. Forecasted rows will appear before the earliest time or date in the original dataset. Remember that you can indicate both this and the forward setting at the same time. The AutoML provider ignores this parameter.
Example: USING {"backward":5}

frequency determines both the size and unit of the predicted timesteps, as a positive integer followed by a letter denoting the unit of time. If this value is not specified, the most common timestep found in the data is used.

Example: USING {"Frequency":"d"}

This parameter is case-insensitive.

The letter abbreviations for units of time are outlined in the following table:

Abbreviation | Unit of Time
y | year
m | month
w | week
d | day
h | hour
t | minute
s | second

Now… training. You already know the command for that:

TRAIN MODEL WeatherForecast

 

Be patient! This training took 1391 seconds, which is approximately 23 minutes!

Now, populate the St_MLL.WeatherTest table with the Populate() method.

Do ##class(St.MLL.WeatherTest).Populate()

It includes the first 5 days of January 2025 (a quick check of the loaded range is sketched below). When it completes, query the predictions using the model and the test table.
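
The optional check below simply confirms the range that Populate() inserted; it is a sketch, since the Populate() implementation itself is not listed in this article:

SELECT COUNT(*) AS TotalRows, MIN(DatetimeNYC) AS FirstEntry, MAX(DatetimeNYC) AS LastEntry
FROM St_MLL.WeatherTest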

📣Remember: It is crucial to have at least three values to be able to make a prognosis.
SELECT WITH PREDICTIONS (WeatherForecast) * FROM St_MLL.WeatherTest

Well, it shows us the forecast for the next 3 hours of January 2, 2025. This happens because we defined our model to forecast 3 records ahead, and our data contains a record for every hour of every day (00:00, 01:00, 02:00, etc.).

If we want to see a daily outlook, we should create another model trained by day.

Let's create the following model to see the 5-day forecast.

CREATE TIME SERIES MODEL WeatherForecastDaily 
PREDICTING (Temperature, Humidity, WindSpeed, Pressure) 
BY (DatetimeNYC) FROM St_MLL.Weather
USING {"Forward":5, "Frequency":"d"}

 

Now, repeat the same steps… training and displaying the forecast:

TRAIN MODEL WeatherForecastDaily
SELECT WITH PREDICTIONS (WeatherForecastDaily) * FROM St_MLL.WeatherTest

Wait! This time, it throws the following error:

[SQLCODE: <-400>:<Fatal error occurred>]
[%msg: <PREDICT execution error: ERROR #5002: ObjectScript error: <PYTHON EXCEPTION> *<class 'ValueError'>: forecast_length is too large for training data. What this means is you don't have enough history to support cross validation with your forecast_length. Various solutions include bringing in more data, alter min_allowed_train_percent to something smaller, and also setting a shorter forecast_length to class init for cross validation which you can then override with a longer value in .predict() This error is also often caused by errors in inputing of or preshaping the data. Check model.df_wide_numeric to make sure data was imported correctly. >]

What has happened?

As the error says, it is due to the lack of data to make a prediction. You might think that it needs more data in the Weather table and training, but it has 8760 records… so what is wrong?

If we want to forecast the weather for a large number of days, we need a lot of data in the model. Loading all of that data and training on it requires extensive training time and a very powerful machine. Therefore, since this is a basic tutorial, we will build a model for 3 days only.
 
Don’t forget to remove the model WeatherForecastDaily before following the instructions.

DROP MODEL WeatherForecastDaily

I am not going to include all the images of those changes, but I will give you the instructions on what to do:

CREATE TIME SERIES MODEL WeatherForecastDaily 
PREDICTING (Temperature, Humidity, WindSpeed, Pressure) 
BY (DatetimeNYC) FROM St_MLL.Weather
USING {"Forward":3, "Frequency":"d"}

TRAIN MODEL WeatherForecastDaily

SELECT WITH PREDICTIONS (WeatherForecastDaily) * FROM St_MLL.WeatherTest

Important Note

The Docker container containers.intersystems.com/intersystems/iris-community-ml:latest-em is no longer available, so you have to use the iris-community container.

This container is not initialized with the AutoML configuration, so the following command must be executed first:

pip install --index-url https://registry.intersystems.com/pypi/simple --no-cache-dir --target /usr/irissys/mgr/python intersystems-iris-automl
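
Once the package is installed, you can also confirm that the AutoML provider is the active ML configuration. The statement below is a sketch; %AutoML is the configuration name that IRIS provides by default:

SET ML CONFIGURATION %AutoML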

If you are using a Dockerfile to deploy your Docker image, remember to include the pip install step in the build instructions, for example:

ARG IMAGE=containers.intersystems.com/intersystems/iris-community:latest-em
FROM $IMAGE
USER root
WORKDIR /opt/irisbuild
RUN chown ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/irisbuild
RUN pip install --index-url https://registry.intersystems.com/pypi/simple --no-cache-dir --target /usr/irissys/mgr/python intersystems-iris-automl

For more information, please visit the website below:

https://docs.intersystems.com/iris20251/csp/docbook/DocBook.UI.Page.cls?KEY=GIML_Configuration_Providers#GIML_Configuration_Providers_AutoML_Install 

 

InterSystems Official
· June 25, 2025

Announcing the release of InterSystems API Manager (IAM) 3.10

InterSystems is pleased to announce the release of IAM 3.10. As the first significant release in roughly 18 months, IAM 3.10 includes many important new features that were not available in IAM 3.4, notably:

  • Added support for incremental configuration synchronization for hybrid-mode deployments. Instead of sending the entire entity configuration to the data planes on every update, incremental synchronization sends only the changed configuration to the data planes.
  • Added the new admin_gui_csp_header configuration parameter to Gateway, which controls the Content-Security-Policy (CSP) header served with Kong Manager. This parameter is disabled by default and can be enabled to strengthen security in Kong Manager.
  • AI RAG Injector (ai-rag-injector): added the AI RAG Injector plugin, which automatically injects documents to simplify the creation of RAG pipelines.
  • AI Sanitizer (ai-sanitizer): added the AI Sanitizer plugin, which sanitizes personal information from requests before they are processed by AI Proxy or AI Proxy Advanced.
  • Kafka Consume (kafka-consume): introduced the Kafka Consume plugin, which adds Kafka consumption capabilities to Kong Gateway.
  • Redirect (redirect): introduced the Redirect plugin, which redirects requests to another location.
  • … and much more

Customers upgrading from earlier versions of IAM must obtain a new IRIS license key to use IAM 3.10. Kong has changed its licensing such that we must provide you with new license keys. When upgrading IAM, you will need to install the new IRIS license key on your IRIS server before starting IAM 3.10.

IAM 2.8 has reached end of life, and existing customers are strongly encouraged to upgrade as soon as possible. IAM 3.4 will reach end of life in 2026, so start planning that upgrade soon.


IAM is an API gateway between your InterSystems IRIS servers and applications. It provides tools to monitor, control, and manage HTTP traffic effectively at scale. IAM is available as a free add-on to your InterSystems IRIS license.

IAM 3.10 can be downloaded from the Components section of the WRC software distribution site.

Follow the Installation Guide to learn how to download, install, and get started with IAM. The full IAM 3.10 documentation provides more information about IAM and how to use it with InterSystems IRIS. Our partner Kong provides complementary documentation on using IAM in the Kong Gateway (Enterprise) 3.10 documentation.

IAM is available only in OCI (Open Container Initiative) format, also known as a Docker container. Container images are available for OCI-compliant runtime engines for Linux x86-64 and Linux ARM64, as detailed in the Supported Platforms document.

The build number for this release is IAM 3.10.0.2.

This release is based on Kong Gateway (Enterprise) version 3.10.0.2.
