Imagine you have a goal of identifying a novel disease by the time some small fraction of the population has been infected. Many of the signs you might use to detect something unusual, however, such as doctor visits or shedding into wastewater, will depend on the number of people currently infected. How do these relate?
Bottom line: if we limit our consideration to the time before anyone has noticed something unusual, where people aren't changing their behavior to avoid the disease, the vast majority of people are still susceptible, and spread is likely approximately exponential, then:
\[ \text{incidence} = \text{cumulative infections} \times \frac{\ln(2)}{\text{doubling time}} \]
Let's derive this! We'll call "cumulative infections" \(c(t)\), and "doubling time" \(T_d\). So here's cumulative infections at time \(t\):

\[ c(t) = 2^{t/T_d} \]
The math will be easier with natural exponents, so let's define \(k = \ln(2)/T_d\) and switch our base:

\[ c(t) = 2^{t/T_d} = e^{\ln(2)\,t/T_d} = e^{kt} \]
Let's call "incidence" \(i(t)\), which will be the derivative of \(c(t)\):

\[ i(t) = \frac{d}{dt} c(t) = \frac{d}{dt} e^{kt} = k e^{kt} \]
And so:
\[ \frac{i(t)}{c(t)} = \frac{k e^{kt}}{e^{kt}} = k = \frac{\ln(2)}{T_d} \]

Which means:

\[ i(t) = c(t) \cdot \frac{\ln(2)}{T_d} \]
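If you want to double-check the calculus, here's a quick symbolic sketch using sympy (sympy isn't used anywhere else in this post; this is just a verification aid):

```python
import sympy as sp

# Sanity check of the derivation above: with c(t) = e^(k t) and
# k = ln(2)/T_d, the ratio i(t)/c(t) should simplify to ln(2)/T_d.
t, T_d = sp.symbols("t T_d", positive=True)
k = sp.ln(2) / T_d
c = sp.exp(k * t)
i = sp.diff(c, t)          # incidence is the derivative of cumulative infections
print(sp.simplify(i / c))  # prints log(2)/T_d
```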
What does this look like? Here's a chart of weekly incidence at the time when cumulative incidence reaches 1%:
For example, if it's doubling weekly, then when 1% of people have ever been infected, 0.69% of people became infected in the last seven days, representing 69% of everyone who has ever been infected. If it's doubling every three weeks, then when 1% of people have ever been infected, 0.23% of people became infected this week: 23% of cumulative infections.
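Those numbers fall straight out of the formula; here's a small sketch (the variable names are mine, not from the chart) that reproduces them:

```python
import math

# Weekly incidence at the moment cumulative infections reach 1%,
# using incidence = cumulative * ln(2) / doubling_time.
cumulative = 0.01
for doubling_time_weeks in (1, 3):
    weekly = cumulative * math.log(2) / doubling_time_weeks
    print(f"doubling every {doubling_time_weeks} week(s): "
          f"{weekly:.2%} infected this week, "
          f"{weekly / cumulative:.0%} of cumulative infections")
```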
Is this really right, though? Let's check our work with a bit of very simple simulation:
def simulate(doubling_period_weeks):
    # Stop once 1% of the population has ever been infected.
    cumulative_infection_threshold = 0.01
    initial_weekly_incidence = 0.000000001
    cumulative_infections = 0
    current_weekly_incidence = 0
    week = 0
    while cumulative_infections < cumulative_infection_threshold:
        week += 1
        # Weekly incidence grows exponentially with the given doubling period.
        current_weekly_incidence = initial_weekly_incidence * 2**(
            week / doubling_period_weeks)
        cumulative_infections += current_weekly_incidence
    # Weekly incidence during the week the threshold was crossed.
    return current_weekly_incidence

for f in range(50, 500):
    doubling_period_weeks = f / 100
    print(doubling_period_weeks, simulate(doubling_period_weeks))
This looks like:
The simulated line is jagged, especially for short doubling periods, but that's not especially meaningful: it comes from running the calculation a week at a time, so some weeks end up just above or just below the (arbitrary) 1% threshold.
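If you'd like to see the agreement numerically rather than on a chart, you could print the simulated and analytic values side by side. This comparison isn't part of the original code, and it assumes the simulate function defined above is already in scope:

```python
import math

# Compare the simulated weekly incidence at the 1% threshold with the
# analytic value 0.01 * ln(2) / doubling_time, for a few doubling periods.
for f in range(50, 500, 50):
    doubling_period_weeks = f / 100
    simulated = simulate(doubling_period_weeks)
    analytic = 0.01 * math.log(2) / doubling_period_weeks
    print(f"{doubling_period_weeks:.1f} weeks: "
          f"simulated {simulated:.3%}, analytic {analytic:.3%}")
```

The two should land close to each other, with the simulated values wobbling around the analytic ones for the reasons described above.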