Mapping HRV metrics to standard concepts

Hello everyone,
I’m a university student working on my thesis. As part of it, we are mapping ECG signals into the OMOP CDM; in particular, we are currently focused on matching some HRV (heart rate variability) metrics to the appropriate standard concepts.
Here’s where we are at, after using Usagi and Athena to find the following concepts:

| Source name | Concept ID | Concept name | Notes |
| --- | --- | --- | --- |
| AVNN (or mean RR, mean heart rate, mean inter-beat interval): average NN interval duration | 3006307 | R-R interval (Mean value during study) by EKG | |
| SDNN: standard deviation of NN (or RR, or inter-beat) interval durations | 21491502 | R-R interval.standard deviation (Heart rate variability) | |
| RMSSD: square root of the mean of the summed squares of NN (or RR, or inter-beat) interval differences | 0 | Unmapped | |
| TOTAL_POWER: total power in all three bands combined | 21490734 | Total power of power spectrum on EEG | Inexact equivalence |
| VLF_POWER: power in the VLF (very low frequency) band | 4240339 | Low frequency | |
| VLF_POWER | 4125550 | Very low | |
| VLF_POWER | 3013078 | R-R interval by EKG | |
| LF_POWER: power in the LF (low frequency) band | 4240339 | Low frequency | |
| LF_POWER | 3013078 | R-R interval by EKG | |
| HF_POWER: power in the HF (high frequency) band | 4100481 | High frequency | |
| HF_POWER | 3013078 | R-R interval by EKG | |
| VLF_NORM: 100 × ratio of VLF (very low frequency) power to total power | 4240339 | Low frequency | |
| VLF_NORM | 4125550 | Very low | |
| VLF_NORM | 8554 | percent | |
| VLF_NORM | 3013078 | R-R interval by EKG | |
| LF_NORM: 100 × ratio of LF (low frequency) power to total power (or to the sum of LF and HF power) | 4240339 | Low frequency | |
| LF_NORM | 8554 | percent | |
| LF_NORM | 3013078 | R-R interval by EKG | |
| HF_NORM: 100 × ratio of HF (high frequency) power to total power (or to the sum of LF and HF power) | 4100481 | High frequency | |
| HF_NORM | 8554 | percent | |
| HF_NORM | 3013078 | R-R interval by EKG | |
| LF_TO_HF: ratio of LF (low frequency) to HF (high frequency) power | 4240339 | Low frequency | |
| LF_TO_HF | 4042999 | Ratio | |
| LF_TO_HF | 4100481 | High frequency | |
| LF_PEAK: frequency of the highest peak in the LF (low frequency) band | 4240339 | Low frequency | |
| LF_PEAK | 4114683 | Peak | |
| HF_PEAK: frequency of the highest peak in the HF (high frequency) band | 4100481 | High frequency | |
| HF_PEAK | 4114683 | Peak | |
| SD1: Poincaré plot SD1 descriptor (std. dev. of intervals along the line perpendicular to the line of identity) | 0 | Unmapped | |
| SD2: Poincaré plot SD2 descriptor (std. dev. of intervals along the line of identity) | 0 | Unmapped | |
| alpha1: log-log slope of DFA (detrended fluctuation analysis) in the low-scale region | 0 | Unmapped | |
| alpha2: log-log slope of DFA in the high-scale region | 0 | Unmapped | |
| SampEn: sample entropy | 0 | Unmapped | |

We are not completely satisfied with some of them, particularly the frequency-domain ones, but to be honest, we couldn’t find anything more appropriate. Are there alternative concepts that might work better?

I’d also like to ask your advice on how to proceed with the source terms for which we couldn’t find a mapping to a standard concept. Did we miss them, or do they simply not exist yet?

In cases where the appropriate standard concepts don’t currently exist, my understanding is that we should simply map them to “concept_id = 0”. Are there alternative approaches for handling these situations?
For example, I came across discussions in the forums about creating custom concepts (ID > 2B). However, as I understand it, these can only be used locally and can only be assigned as “x_source_concept_id”, which doesn’t seem like a solution in this context.
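In code form, the fallback we currently have in mind looks roughly like this (a minimal sketch of a MEASUREMENT-style row; the field subset and all values are made up for illustration):

```python
# Fallback for a source term with no standard concept: map to concept_id 0
# but keep the original term in measurement_source_value.
# All values below are illustrative, not from our actual data.
unmapped_row = {
    "person_id": 1,
    "measurement_concept_id": 0,          # no standard concept available
    "measurement_date": "2024-06-01",
    "value_as_number": 42.0,              # hypothetical RMSSD value in ms
    "measurement_source_value": "RMSSD",  # original source term preserved
}
print(unmapped_row)
```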

Does anybody have experience in mapping HRV metrics?

I apologize for asking so many questions but I’m a bit confused. Any advice or suggestions would be greatly appreciated.

Thank you!

Hi @Emaau. The specific concepts for heart rate variability in your source data are absent from the currently available OHDSI standard vocabularies. So if your purpose is to use these concepts for local analytics, the simplest way is to create a set of custom concepts with concept_ids greater than two billion.
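For illustration, here is a minimal sketch of what such custom concept rows could look like. Everything in it is an assumption for the example: the in-memory SQLite database stands in for a real CDM instance, and the concept names, codes, the “Local HRV” vocabulary, and the dates are made up; only the convention of using concept_ids above 2,000,000,000 comes from the advice above.

```python
import sqlite3

# Illustrative local database; in practice this would be your CDM instance.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE concept (
        concept_id       INTEGER PRIMARY KEY,
        concept_name     TEXT,
        domain_id        TEXT,
        vocabulary_id    TEXT,
        concept_class_id TEXT,
        standard_concept TEXT,
        concept_code     TEXT,
        valid_start_date TEXT,
        valid_end_date   TEXT,
        invalid_reason   TEXT
    )
""")

# Hypothetical custom concepts for the unmapped HRV metrics.
# concept_ids above 2,000,000,000 mark them as local, non-standard additions.
custom_hrv_concepts = [
    (2_000_000_001, "RMSSD (heart rate variability)", "Measurement",
     "Local HRV", "Clinical Observation", None, "HRV-RMSSD",
     "2024-01-01", "2099-12-31", None),
    (2_000_000_002, "Poincare plot SD1 (heart rate variability)", "Measurement",
     "Local HRV", "Clinical Observation", None, "HRV-SD1",
     "2024-01-01", "2099-12-31", None),
]
conn.executemany("INSERT INTO concept VALUES (?,?,?,?,?,?,?,?,?,?)",
                 custom_hrv_concepts)
conn.commit()
```

In a real ETL you would typically also register the custom vocabulary in the VOCABULARY table, and keep all local IDs strictly above two billion so they can never collide with OHDSI-distributed concepts.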

Hi @MSalavei, thank you for your reply!
I appreciate the clarification regarding the absence of specific HRV concepts.
Do you know where I could find more detailed information about the custom concepts approach? I’ve only come across brief mentions of it in the forums.

I’d like to ask two more questions, if you and others don’t mind:

  • In cases when one source term maps to multiple Standard Concepts, is it correct to store multiple rows, one for each target concept, in the corresponding table?

  • What are your thoughts on our approach to mapping the frequency-domain metrics, such as “LF_POWER”? We’ve included concept 3013078 (R-R interval by EKG) to provide additional context for the data. We acknowledge that these mappings are not perfect, but we are hoping for at least a workable solution.

Thank you in advance!

Submit the concepts to a standards development organization (SDO), e.g. the SNOMED CT US extension or LOINC.

If a concept is not used in routine healthcare and not covered by an SDO, your next option is a research Common Data Element (CDE).

Yes, it’s correct. In the case of one-to-many mappings, you have to store as many rows as you have target concepts for a given source code.
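A small sketch of what that expansion could look like (the concept IDs come from your table above; the person, date, value, and field subset are made up for the example):

```python
# One source HRV value that maps to several standard concepts
# (LF_TO_HF -> Low frequency, Ratio, High frequency, per the table above).
LF_TO_HF_TARGETS = [4240339, 4042999, 4100481]

def expand_measurement(person_id, value, source_value, target_concept_ids, date):
    """Return one MEASUREMENT-style row per target concept (one-to-many mapping)."""
    return [
        {
            "person_id": person_id,
            "measurement_concept_id": concept_id,
            "measurement_date": date,
            "value_as_number": value,
            "measurement_source_value": source_value,
        }
        for concept_id in target_concept_ids
    ]

# Illustrative usage: one source record becomes three rows,
# identical except for measurement_concept_id.
for row in expand_measurement(person_id=1, value=1.8, source_value="LF_TO_HF",
                              target_concept_ids=LF_TO_HF_TARGETS,
                              date="2024-06-01"):
    print(row)
```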

You may even map to less granular target concepts (so-called “uphill” mapping), but you have to keep in mind that these less granular concepts lose some information that may be important for further analysis in your use case.