You can construct a synthetic date of death from deterministic logic rules or from the outputs of a prediction model.
If mortality is well understood in a subset of the data, is missing at random (MAR) in the rest, and the estimate you need is wide (year of death) rather than narrow (second of death within a day), you could consider ML or (weighted) multiple imputation (MI) approaches. Your conclusions would then be model-based rather than directly observed, but that should be acceptable provided cases were enrolled at similar start points, for similar reasons, and from similar places (i.e. etiology, geography, and enrollment are controlled).
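As a rough illustration of the MI idea for a coarse "year of death" field under MAR, here is a hedged sketch using scikit-learn's IterativeImputer with posterior sampling to generate multiple stochastic imputations. All data, column meanings, and the number of imputations are invented for the example, not taken from any real cohort:

```python
# Minimal multiple-imputation sketch for a coarse "year of death" variable,
# assuming missingness-at-random (MAR). Everything here is synthetic.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(50, 90, n)
comorbidity = rng.integers(0, 10, n)
# Simulated year of death, correlated with the observed covariates
year_of_death = 2000 + (95 - age) * 0.4 + (10 - comorbidity) * 0.8 + rng.normal(0, 1, n)
X = np.column_stack([age, comorbidity, year_of_death])

# Mask ~30% of the year-of-death values (MAR given the observed covariates)
mask = rng.random(n) < 0.3
X_missing = X.copy()
X_missing[mask, 2] = np.nan

# m imputed data sets; sample_posterior=True gives distinct stochastic draws,
# which is what makes this *multiple* imputation rather than single imputation
m = 5
imputations = []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    imputations.append(imp.fit_transform(X_missing)[:, 2])

# Pool the point estimate across imputations (the simple-mean part of
# Rubin's rules; variance pooling is omitted in this sketch)
pooled = np.mean(imputations, axis=0)
```

In a real analysis you would also pool the variance across the m data sets (Rubin's rules) so your standard errors reflect the imputation uncertainty, not just the within-data-set noise.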
If this is a must, I would recommend: (1) using national mortality (life-table) data to weight the risk of death among the cases whose deaths are observed; (2) using ML to learn the relationships between the variables shared by the observed-death and unobserved-death cases, fitting on the set where death is observed; and (3) using the ML output to re-weight the likelihood that a death would be observable in your records, then imputing with both a 'probability of observation within demography' weight from the observed cases and a 'probability of death given clinical condition' weight (e.g. from the Charlson comorbidity index) to improve the MI solution.
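The weighting pipeline above can be sketched roughly as follows. This is a toy under loud assumptions: the "demographic hazard" is a made-up exponential curve standing in for a real national life-table lookup, the Charlson scores are random integers, and the observation model is a plain logistic regression on hypothetical columns:

```python
# Hedged sketch: combine a demographic baseline hazard (stand-in for a
# national life table) with an ML model of which records have observed
# deaths, producing per-record imputation weights. All numbers are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(50, 90, n)
charlson = rng.integers(0, 12, n)  # illustrative Charlson comorbidity index

# Step 1: demographic weight -- a toy annual mortality rate rising with age,
# standing in for a lookup against national life-table rates
demographic_hazard = 0.005 * np.exp(0.08 * (age - 50))

# Step 2: ML model of P(death observed | covariates), fit on records where
# we know whether a death was captured (synthetic labels here)
death_observed = (rng.random(n) < 0.3 + 0.04 * charlson).astype(int)
features = np.column_stack([age, charlson])
clf = LogisticRegression().fit(features, death_observed)
p_observe = clf.predict_proba(features)[:, 1]

# Step 3: combine into an imputation weight -- risk of death scaled by the
# inverse probability that a death would appear in your records
weights = demographic_hazard / np.clip(p_observe, 0.01, 1.0)
```

These weights would then feed into the MI step (e.g. as case weights in a weighted imputation model) rather than being used on their own.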
If you frame your prediction broadly (vaguely), you have a higher chance of being right: 'dead within five years of study start' will be more accurate, when predicted from weighted, imputed records, than 'dead within one year of study start'.
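A quick synthetic check makes the point concrete: for the same noisy imputed death times, agreement with the truth is higher for a five-year window than a one-year window, because less probability mass sits near the broad cutoff. The distributions and noise level are arbitrary choices for illustration:

```python
# Toy demonstration that a broad window ("dead within five years") is
# verified more often than a narrow one ("dead within one year") for the
# same imputed death times. Purely synthetic numbers.
import numpy as np

rng = np.random.default_rng(2)
true_years = rng.exponential(3.0, 1000)            # years from enrollment to death
imputed = true_years + rng.normal(0, 1.5, 1000)    # noisy imputed estimates

# Fraction of cases where the imputed value lands on the correct side
# of each cutoff
hit_broad = float(np.mean((imputed < 5) == (true_years < 5)))
hit_narrow = float(np.mean((imputed < 1) == (true_years < 1)))
```

Under these settings `hit_broad` exceeds `hit_narrow`, echoing the claim that coarser targets are easier to get right from imputed data.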
The best option is still to get a death certificate, but you can 'wing it' provided you either know the person died or know when people who look a lot like them died.