It is and it isn’t. I agree that significant digits are for indicating how many digits are used (so 3 significant digits for the speed of light gives 3.00x10^8 m/s, while 5 gives 2.9979x10^8 m/s). So you could say that significant digits are some indication of uncertainty, but the intervals are (in my view) really what you are talking about.
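As a quick sketch of what I mean by "number of digits used" (the helper name here is my own, not anything from the CDM), Python's `g` format spec rounds to a count of significant digits:

```python
# Sketch: format a value to a given number of significant digits.
# to_sig_digits is my own illustrative helper, not a CDM utility.
def to_sig_digits(value: float, digits: int) -> str:
    """Round and format value to `digits` significant digits."""
    return f"{value:.{digits}g}"

c = 299_792_458.0  # speed of light, m/s
print(to_sig_digits(c, 3))  # 3e+08
print(to_sig_digits(c, 5))  # 2.9979e+08
```

Note that both strings describe the same underlying measurement; only the implied precision differs.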
For example, you state that the measurement was done +/- 10 mg/dL and that this therefore implies some use of significant digits, but would you make the same claim if it were +/- 7 mg/dL?
I also agree that there’s some problem with unit and measurement standardization in the CDM: when you look at a body measurement, is it always in kg (a standardized unit), and how many significant digits are you using (so you don’t have to bother with 10.39482 if everything is standardized to 2 decimal places)? But I don’t think that’s what you’re really going after here.
I think the intervals you’re using describe confidence around the actual value. To use your ‘sometime in the summer of 2022’ example, I’d represent that with an interval centered at the midpoint of summer and extending +/- half the number of days in summer. That interval covers every day in summer: you don’t have confidence about when it is, but if you said ‘June 1’ or ‘August 1’, both would be ‘in the range’ to hit this observation.
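A minimal sketch of that representation (assuming, for illustration, summer runs June 1 through August 31; `in_interval` is my own name, not a CDM field):

```python
# Sketch: represent "sometime in summer 2022" as midpoint +/- half the span,
# then test whether a candidate date lands inside the interval.
from datetime import datetime

start = datetime(2022, 6, 1)
end = datetime(2022, 8, 31)
half_span = (end - start) / 2       # uncertainty: +/- half the season
midpoint = start + half_span        # centre of summer

def in_interval(d: datetime) -> bool:
    """True when d falls within midpoint +/- half_span."""
    return abs(d - midpoint) <= half_span

print(in_interval(datetime(2022, 6, 1)))   # True
print(in_interval(datetime(2022, 8, 1)))   # True
print(in_interval(datetime(2022, 10, 1)))  # False
```

The midpoint/half-width pair and the start/end pair are interchangeable; the point is that any single date inside the span is consistent with the observation.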
I think considering the confidence interval along with the significant digits entangles two different things. As I said above, you can specify that we only care about 2 decimal places when recording weight, and that’s not making any statement about a confidence interval (beyond the implicit +/- .005 rounding error). You could then say that all of our weight intervals are +/- .005, but I think the point of the uncertainty is to account for the device that took the measurement.
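To make that distinction concrete, here’s a sketch (the `Interval` class and `overlaps` method are my own illustration, not CDM structures): a 2-decimal weight carries an implicit +/- .005 rounding interval, while a device-level uncertainty is typically much wider, and comparisons behave differently under each.

```python
# Sketch: rounding error vs device uncertainty as half-width intervals.
from dataclasses import dataclass

@dataclass
class Interval:
    value: float
    uncertainty: float  # half-width, e.g. 0.005 for 2-decimal rounding

    @property
    def low(self) -> float:
        return self.value - self.uncertainty

    @property
    def high(self) -> float:
        return self.value + self.uncertainty

    def overlaps(self, other: "Interval") -> bool:
        """Two measurements are compatible when their intervals overlap."""
        return self.low <= other.high and other.low <= self.high

rounded = Interval(72.40, 0.005)  # rounding error only (2 decimal places)
device = Interval(72.40, 0.1)     # hypothetical scale accurate to +/- 0.1 kg

print(device.overlaps(Interval(72.45, 0.005)))   # True: within device error
print(device.overlaps(Interval(72.60, 0.005)))   # False: outside device error
```

This is also where the processing cost shows up: every equality check on a measurement becomes an overlap check on two intervals.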
Disclaimer: I am not a professional lab tech / material scientist, so everything above is just the musings of a data scientist. I do handle a lot of complicated code, though, and comparing values with arbitrary precision/confidence intervals greatly complicates both the data model and the processing logic needed to deal with those data considerations.