It is and it isn't. I agree that significant digits are for indicating the number of digits that are used (so 3 significant digits for the speed of light is 3.00x10^8 m/s, while 5 would be 2.9979x10^8 m/s). So you could say that significant digits could be seen as some indication of uncertainty, but the intervals are (in my view) really what you are talking about.
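To make the distinction concrete, here's a minimal sketch of rounding to n significant digits (the helper name `round_sig` is mine, not anything from the CDM):

```python
from math import floor, log10

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits (illustrative helper)."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the magnitude of x.
    return round(x, n - 1 - floor(log10(abs(x))))

c = 299_792_458.0  # speed of light, m/s
print(round_sig(c, 3))  # 300000000.0, i.e. 3.00x10^8
print(round_sig(c, 5))  # 299790000.0, i.e. 2.9979x10^8
```

Note that this says how many digits are reported, not how wide the uncertainty interval actually is.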
For example, you state that the measurement was done +/- 10 mg/dL and therefore involves some use of significant digits, but would you make the same claim if it was +/- 7 mg/dL?
I also agree that there's some problem with unit and measurement standardization in the CDM: when you look at a body measurement, is it always in kg (a standardized unit), and how many significant digits are you using (so you don't have to bother with 10.39482 if everything is standardized to 2 decimal places)? But I don't think that's what you're really going after here.
I think the intervals you're using describe confidence around the actual value. To use your "sometime in the summer of 2022" example, I'd represent that with an interval centered on the midpoint of summer, +/- half the number of days in summer. That interval covers every day in summer… you don't have confidence about when it is, but if you said "June 1" or "August 1", both would be "in the range" and hit this observation.
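A rough Python sketch of that midpoint +/- half-width idea (the June 1 to August 31 bounds for "summer 2022" are my assumption, and `in_range` is an illustrative name, not a CDM field):

```python
from datetime import datetime, timedelta

# Assumed bounds for "summer 2022" (meteorological summer).
summer_start = datetime(2022, 6, 1)
summer_end = datetime(2022, 8, 31)

# Represent the observation as midpoint +/- half-width.
half_width = (summer_end - summer_start) / 2
midpoint = summer_start + half_width

def in_range(d: datetime) -> bool:
    """True if d falls inside midpoint +/- half_width."""
    return abs(d - midpoint) <= half_width

print(in_range(datetime(2022, 6, 1)))   # True
print(in_range(datetime(2022, 8, 1)))   # True
print(in_range(datetime(2022, 12, 25))) # False
```

The point is that every date in the window satisfies the observation equally; the interval encodes "we don't know which day," not a best guess.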
I think considering the confidence interval along with the significant digits is a bit entangling. As I said above, you can specify that we only care about 2 decimal places when recording weight, and that's not making any statement about a confidence interval, except that you could have a +/- 0.005 rounding error. You could then say that all of our weight intervals are +/- 0.005, but I think the point of the uncertainty is to account for the device that took the measurement.
Disclaimer: I am not a professional lab tech / materials scientist, so everything above is just the musings of a data scientist. But I do handle a lot of complicated code, and comparing values with arbitrary precision/confidence intervals greatly complicates both the data model and the processing logic needed to deal with those data considerations.
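To show what I mean by "complicates the processing logic": once values carry intervals, "equal" becomes "intervals overlap," and overlap isn't even transitive. A hedged sketch (the `Uncertain` class and its names are mine, purely illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Uncertain:
    value: float
    half_width: float  # e.g. 0.005 for values rounded to 2 decimal places

    def overlaps(self, other: "Uncertain") -> bool:
        # "Equality" becomes an overlap check between two intervals.
        return abs(self.value - other.value) <= self.half_width + other.half_width

a = Uncertain(10.39, 0.005)
b = Uncertain(10.40, 0.005)
c = Uncertain(10.41, 0.005)
print(a.overlaps(b))  # True
print(b.overlaps(c))  # True
print(a.overlaps(c))  # False -- a~b and b~c, but not a~c
```

That non-transitivity is exactly the kind of thing that leaks into joins, dedup logic, and indexing once the data model carries per-value uncertainty.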