Machine-Readable Prevalence Estimates

May 12th, 2023
bio, nao
Cross-posted from my NAO Notebook

In Estimating Norovirus Prevalence I wrote up an estimate of how many people had Norovirus at various times in the past, describing some of my work on the P2RA project at the NAO. That post is prose, though, with inline calculations, which has a few drawbacks:

  • The calculations are manual, which makes it harder to catch errors.

  • It's hard to tell exactly where each input comes from when it's being pulled in from elsewhere.

  • You might want to produce multiple estimates as the inputs change over time.

At the NAO we also started with prose estimates, in Google Docs, but in addition to the issues above we found that the review tooling wasn't a good fit for the kind of deep reviews we wanted. After some initial struggles we switched to Python to represent our estimates; you can see the code on github.

An estimate is a combination of inputs (numbers we get from somewhere else) and calculations (how we combine those inputs). Most of the effort is in the inputs: making sure it's clear where the numbers come from. For example, here's how we represent that the CDC estimates that there were 1.2M people with HIV in the US in 2019:

us_infected_2019 = PrevalenceAbsolute(
  infections=1.2e6,
  country="United States",
  date="2019",
  active=Active.LATENT,
  source="https://www.cdc.gov/hiv/library/reports/hiv-surveillance/vol-26-no-2/content/national-profile.html#:~:text=Among%20the%20estimated-,1.2%20million%20people,-living%20with%20HIV",
)
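
These input types don't need to be anything elaborate. As a rough sketch, a class like PrevalenceAbsolute could be declared along the following lines; the field names follow the example above, but this is my own illustration rather than the repo's actual definition:

from dataclasses import dataclass
from enum import Enum

class Active(Enum):
  # Whether the number counts currently-active infections or everyone
  # carrying the pathogen, including latent infections.
  ACTIVE = "active"
  LATENT = "latent"

@dataclass
class PrevalenceAbsolute:
  infections: float  # estimated number of infected people
  country: str       # where the estimate applies
  date: str          # when the estimate applies
  active: Active     # active vs. latent infections
  source: str        # URL documenting where the number came from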

An absolute prevalence isn't useful to us without connecting it to a population. Here's how we could represent the corresponding population:

us_population_2019 = Population(
  people=328_231_337,
  country="United States",
  date="2019-01-01",
  source="https://www.census.gov/newsroom/press-releases/2019/new-years-population.html",
)
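
Continuing that sketch, Population can be a similarly small record (again, hypothetical definitions matching the fields used above):

@dataclass
class Population:
  people: int    # number of people in the population
  country: str   # where the count applies
  date: str      # when the count applies
  source: str    # URL for the census release or other source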

And we can connect these to get a Prevalence:

us_prevalence_2019 = us_infected_2019.to_rate(us_population_2019)

The to_rate method checks that the locations and dates are compatible, does the division, and gives us a Prevalence.
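
As an illustration of what that could look like (not the repo's actual implementation), here's a sketch that repeats the PrevalenceAbsolute fields from above, adds a to_rate method, and assumes a Prevalence type storing infections per 100,000 people. In this sketch "compatible" just means the countries match and the dates fall in the same year:

@dataclass
class Prevalence:
  infections_per_100k: float  # prevalence expressed per 100,000 people
  country: str
  date: str
  active: Active
  source: str

@dataclass
class PrevalenceAbsolute:
  infections: float
  country: str
  date: str
  active: Active
  source: str

  def to_rate(self, population: Population) -> Prevalence:
    # Only combine numbers that describe the same place and time.
    assert self.country == population.country
    assert self.date[:4] == population.date[:4]  # same year
    return Prevalence(
      infections_per_100k=self.infections * 100_000 / population.people,
      country=self.country,
      date=self.date,
      active=self.active,
      source=self.source,
    )

With this sketch, the us_prevalence_2019 example would come out to about 366 infections per 100,000 people, or roughly 0.37% of the population.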

For a more complex example, you could look at norovirus.py. This is doing the calculation from the previous blog post, with the addition of estimates for the Norovirus Genogroup I and II subtypes.

For each estimate, one team member creates an initial version and then we use GitHub's code review process for a line-by-line review. This includes validating that all inputs match what's listed on the external site and that we're using the data source correctly, in addition to checking that the overall structure of the estimate is reasonable.

I think there's a decent chance that this isn't the final form this will take: as we get further along in the project we'll want to make the details of the prevalence calculation available to the model so it can understand uncertainty. Most of the work, however, is in determining the inputs and reviewing the structure of the estimate, so if we do need to make changes like that I expect they won't be too bad.

This post describes in-progress work at the NAO and covers the work of a team. The Python-based estimation framework is mostly my work, with help from Simon Grimm and Dan Rice.

