
CONCEPT_ID (to NULL or not to NULL) [THEMIS WG3]

From a mapping quality point of view, the null/0 distinction is important. For non-required concept_ids like value_as_concept_id, a null represents that no attempt was made to map a source concept to a standard target concept, while a 0 means that no suitable mapping was found. When generating mapping statistics, we ignore the nulls and only look at concept_ids >= 0.

So in that perspective I would argue to allow null in those fields.

2 Likes

@MaximMoinat, what are situations where you do not attempt to map and why do you need to keep track of this? How are you using this in an analysis? Or are you only using this for mapping statistics? You still should be able to get the answer you need by using the SOURCE_VALUE columns.

Friends:

Setting @aostropolets’ problem aside:

Are you getting ready to make a recommendation to the THEMIS session? Are we using NULL or 0? I have seen good arguments for both (speed vs. logic).

I agree that we shouldn’t capture “intent to capture”. It matters what happened to the patient.

I think we are. Here is still where I am.

  1. Submit recommendation to the CDM WG to make all CONCEPT_IDs NOT NULL columns.

  2. Update language in the WIKI under CONCEPT_ID

Foreign key into the Standardized Vocabularies (i.e., the standard_concept attribute for the corresponding term is true), which serves as the primary basis for all standardized analytics. For example, condition_concept_id = 31967 contains the reference value for the SNOMED concept 'Nausea'. Set the value to 0 ('No matching concept') when a mapping to a standard concept cannot be found.

THEMIS WG3 is bringing this up at the THEMIS F2F Day 1 this week.


RULE
Foreign key into the Standardized Vocabularies (i.e., the standard_concept attribute for the corresponding term is true), which serves as the primary basis for all standardized analytics. For example, condition_concept_id = 31967 contains the reference value for the SNOMED concept 'Nausea'. Set the value to 0 ('No matching concept') when a mapping to a standard concept cannot be found.

ACTION

  1. Submit recommendation to the CDM WG to make all CONCEPT_IDs NOT NULL columns.
  2. Update language in the WIKI under CONCEPT_ID
    https://github.com/OHDSI/CommonDataModel/wiki/Data-Model-Conventions

I would like to add to this. Rather than the vague requirement that the concept_id is a ‘Foreign key into the Standardized Vocabularies’, the CDM should define the Domain and, if there are additional constraints on the concept set, the Concept Class. This should be the definition for all ‘_concept_id’ columns.

@DTorok I like this idea. I think the ‘foreign key to a standard concept’ phrase that is used across the CDM documentation is too vague for comfort and specifying the domain would give a little more structure. This may be a separate THEMIS issue, however (or just something I could tackle myself), as I think what @ericaVoss is most interested in here is setting all *CONCEPT_ID fields to NOT NULL and setting the value to 0 when there is no map to a standard concept.

Sorry for my late reply, @Chris_Knoll actually described this very well already:

So here we consider three concept_id fields that should be allowed to be NULL in the measurement table, as they can be missing:

  • value_as_concept_id (if the measurement only contains a numeric value)
  • unit_concept_id (if the measurement has only a concept value)
  • operator_concept_id

These fields might be left empty because there is no data to fill them with, not because there is “No matching concept”. There is no source concept to match to the OMOP vocabulary.

On the other hand, it does make sense to set the cause_concept_id in the death table to NOT NULL.

And in V6 we want to do that properly. Not with the wishy-washy domain to table correlation.

Right. We want to even nail the list of Concepts better for the various fields.

@MaximMoinat:

Agree full-heartedly. We should limit the @ericaVoss proposal to only apply to the proper *_concept_id fields of each Domain table. Values, units and operators should absolutely be nullable.

3 Likes

You can still set all of these to 0 because that is what the SOURCE_VALUE column is for. Store 0 in your CONCEPT_ID and store the NULL in the SOURCE_VALUE column (linking MEASUREMENT for reference).

  • value_as_concept_id --> value_source_value
  • unit_concept_id --> unit_source_value
  • operator_concept_id -->

Looks like OPERATOR_CONCEPT_ID doesn’t have a SOURCE_VALUE column, but I would argue that it should and that should be a recommendation to the CDM WG.

I feel like this complicates things: you have to interpret the 0 in the unit_concept_id as meaning there wasn’t an actual unit by checking the unit_source_value for NULL. And what if we have one of those sources that are afraid of NULLs and instead store empty strings (’’) in their data? Then we have to check for both NULL and empty string to understand whether there was even a value to map from.
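A minimal sketch of the extra check this forces (SQLite via Python; the table shape and rows are made up for illustration, not real CDM content):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE measurement "
    "(measurement_id INTEGER, unit_concept_id INTEGER, unit_source_value TEXT)"
)
cur.executemany(
    "INSERT INTO measurement VALUES (?, ?, ?)",
    [
        (1, 0, None),    # no unit in the source at all
        (2, 0, ""),      # NULL-averse source stored an empty string
        (3, 0, "mmHg"),  # a real unit that failed to map
    ],
)
# To find rows where there was truly no unit to map from, we must
# normalize '' to NULL and then test for NULL:
truly_missing = cur.execute(
    """
    SELECT COUNT(*) FROM measurement
    WHERE unit_concept_id = 0
      AND NULLIF(unit_source_value, '') IS NULL
    """
).fetchone()[0]
print(truly_missing)  # -> 2
```

Without the NULLIF normalization, a plain `unit_source_value IS NULL` test would miss the empty-string row.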

Why not just put null in the columns where there wasn’t a source value to map from? That’s what null represents: the absence of value.

I personally prefer the consistency of CONCEPT_ID always being a concept (0 representing the absence of a map) and then storing the true raw value in the SOURCE_VALUE column. We aren’t going to perform standard analytics on NULL. At Janssen we have been doing this for years and have never had a use case for storing a NULL in CONCEPT_ID. The only time I utilize the NULLs from the raw data is for ETL debugging, and the SOURCE_VALUE does just fine for this.

Will we be performing standard analytics on 0?

Not anything more than using it for reviewing what got mapped to 0, which @Ajit_Londhe, @clairblacketer, and I do do. If something didn’t map, we don’t care whether the raw data had NULL, ‘’, or garbage - we review it to figure out if we can fix it. 0 tells us in a standard way that we didn’t map to standard terminology, and since the NULL is in the SOURCE_VALUE column you can always use that to review what the raw data was.

This is the essence of what @MaximMoinat, @Christian_Reich and I have been saying: 0 implies you got something that failed to map; null means there wasn’t anything in the first place. This isn’t about analytical use cases, it’s about what the data represents. I’m surprised that you don’t see the benefit of 0 vs. null… If you just wanted to know what didn’t have a map, and you allow nulls in those cases, then searching for 0 finds everything where a map was attempted but failed. I’m guessing that ‘valid source value but no concept in the vocabulary’ is much less prevalent than ‘no value to map’, and that you are reviewing a lot of needless rows with concept 0 that you could have ignored outright because there was nothing to map on that row anyway. Of course, the source_value column could help you refine your search, but speaking strictly about the concept column: if I see a 0, I assume there was some value there that failed to map to a CDM concept…

Yes, I agree, but I still ask: why would you differentiate between NULL, ‘’, ’ ', and an unmapped source code? What happens when the raw data actually sends the text ‘NULL’ or ‘NA’? In that case I would say 0, but a true NULL stays NULL? That just seems inconsistent.

In your example of searching for CONCEPT_ID IS NULL, this wouldn’t even reliably bring back the values you wanted unless the ETL has additional logic to discern between “true” NULLs and values that failed to map. We’d need logic to handle “flavors of NULL”, which I don’t see value in.

I think it is much clearer and consistent to say CONCEPT_ID is 0 when not mapped and store what the raw data gave you in the SOURCE_VALUE column.

One example where I think NULL doesn’t work is if I’m linking DRUG_EXPOSURE to the Vocabulary I would automatically lose records when CONCEPT_ID IS NULL on a JOIN. You would be blind to the fact that there are rows that didn’t map with that JOIN.
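The lost-rows point can be demonstrated with a toy sketch (SQLite via Python; the concept_ids and table shapes are simplified stand-ins, not real vocabulary content):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE concept (concept_id INTEGER, concept_name TEXT)")
cur.executemany("INSERT INTO concept VALUES (?, ?)",
                [(0, "No matching concept"), (123, "Some drug")])
cur.execute(
    "CREATE TABLE drug_exposure "
    "(drug_exposure_id INTEGER, drug_concept_id INTEGER)"
)
cur.executemany("INSERT INTO drug_exposure VALUES (?, ?)",
                [(1, 123),    # mapped
                 (2, 0),      # unmapped, stored as 0
                 (3, None)])  # unmapped, stored as NULL
rows = cur.execute("""
    SELECT de.drug_exposure_id, c.concept_name
    FROM drug_exposure de
    JOIN concept c ON c.concept_id = de.drug_concept_id
    ORDER BY de.drug_exposure_id
""").fetchall()
# Row 2 survives the inner join because concept 0 exists in the concept
# table; row 3 (NULL concept_id) silently disappears.
print(rows)  # -> [(1, 'Some drug'), (2, 'No matching concept')]
```

With the 0 convention, unmapped records remain visible in the joined result; with NULL they vanish unless every query remembers to use an outer join.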

It really depends on what you’re mapping.

Well after 5+ years of doing 13+ CDMs that don’t have NULLs in CONCEPT_ID I haven’t run into a problem or a need for it to be otherwise. :slight_smile:

I think that’s the best example of NULL vs. 0. If they are telling you it’s not a value, or NA (which I’m assuming means ‘not applicable’), then saying that it is not a value is being consistent with the original data.

I’m sorry, I don’t understand this point. But you could substitute ‘flavors of unmapped concepts’ and your statement would make my point.

This is what a left join is for, and it explicitly says ‘bring me back rows even if there is no value in the right-side table.’ The counter to your example is: you do a join on a table and get records back that you didn’t want, so you have to exclude concept IDs that are 0, but you don’t know whether that 0 meant there wasn’t a value vs. something that failed to map…
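The left-join behavior, in the same toy style as before (a hypothetical SQLite sketch, not real vocabulary content):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE concept (concept_id INTEGER, concept_name TEXT)")
cur.execute("INSERT INTO concept VALUES (123, 'Some drug')")
cur.execute(
    "CREATE TABLE drug_exposure "
    "(drug_exposure_id INTEGER, drug_concept_id INTEGER)"
)
cur.executemany("INSERT INTO drug_exposure VALUES (?, ?)",
                [(1, 123),    # mapped
                 (2, None)])  # nothing in the source to map
rows = cur.execute("""
    SELECT de.drug_exposure_id, c.concept_name
    FROM drug_exposure de
    LEFT JOIN concept c ON c.concept_id = de.drug_concept_id
    ORDER BY de.drug_exposure_id
""").fetchall()
# The LEFT JOIN keeps row 2 even though its concept_id is NULL;
# the unmatched side simply comes back as NULL.
print(rows)  # -> [(1, 'Some drug'), (2, None)]
```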

I know you’re just being silly so I won’t comment on that! :wink:

If I may throw a technical point into this argument of to NULL or not to NULL.

DBMSs with MPP architecture apparently don’t like NULLs: NOT NULL columns supposedly perform significantly better than nullable columns, and JOINs and WHERE clauses work significantly better on NOT NULL columns. If you ask me for a reference – I heard this someplace and it was stored in my memory someplace. Maybe @Chris_Knoll knows! Maybe it is how ‘distribute on’ works – maybe MPP can’t distribute NULLs. Also, not having gone to formal SQL school, I learned the hard way that NULL is not equal to NULL.
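That last point is easy to demonstrate (SQLite here, but the same three-valued logic applies on any ANSI SQL engine):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
# In SQL's three-valued logic, NULL = NULL evaluates to NULL (unknown),
# not TRUE, and an IN list never matches a NULL; only IS NULL does.
row = cur.execute(
    "SELECT NULL = NULL, NULL IS NULL, NULL IN (1, 2)"
).fetchone()
print(row)  # -> (None, 1, None)
```

This is exactly why `concept_id IN (...)` filters silently skip NULL rows.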

I am in favor of some flavor of integer – maybe negative integers to represent flavors of NULL, like -1, -2, -3, etc., because all our concept_ids are positive integers anyway… That way we can clearly define each flavor of special information without any ambiguity and develop conventions on when to use what. These could then live in the OMOP vocabulary as OMOP-generated concept_ids with a vocabulary_id of ‘Special Values’, domain ‘Special Values’, and concept_class_id ‘NULL flavors’. Then all negative concept_ids are some form of special concept, and we could support a lot of such special situations using the OMOP vocabulary.

This may make writing parameterized SQL and R packages easier, because instead of having to do stuff like

where concept_id in (33423, 32432342) or concept_id is null

we can just do

where concept_id in (33423, 32432342, -1)
1 Like

I have not heard this, and I have done lots of work optimizing queries on MPP systems. The only place where MPP performance suffers is in the use of left joins (which has nothing to do with nullable columns). In a left join, all records from the left table (which may be distributed across multiple nodes) have to be joined to all records found in the right side. This causes data movement across nodes, which is what you want to avoid in MPP environments. If it were not a left join, you could leverage partitioning information from the ON clause to limit the comparison to records living on the same node.

Based on Azure guidance, you use hashing on columns that have no or very few NULL values. But this is partitioning guidance (all NULLs will go to the same partition, which is the same thing as all 0s going to the same partition). Nothing in there says ‘nullable columns slow down performance’; you just need to be smart about what you partition on. Ref: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-distribute.

The first one reads ‘give me rows with these specified concepts, or nothing at all’.

The second one reads ‘give me rows with these specified concepts… and I don’t know what the last one refers to, except that it has a negative sign’.

I think the topic of ‘flavors of missing data’ is confusing the primary point here of NULL vs. 0. The null question is very binary: is there a value or is there not a value, that is the question. Null is very clear: there is no value. If you use 0, then you have to refer to other columns to understand what this 0 means, and you may even have to dig back into the ETL logic to see how they decided to code 0 for those cases of ‘’, ‘NULL’ and ‘NA’ that @ericaVoss mentioned. I might be misunderstanding something, but isn’t the point of Themis that you don’t have to go back to the ETL logic on a study-by-study basis, and that instead there are expectations that can be assumed from an ETL that follows the Themis standards?
