We took the efficacy measure mentioned in the Ferretti et al paper and reasoned about the constituent parts that affect the probability of detecting 30-second segments of any contact event.
The paper is now available on medRxiv here: https://www.medrxiv.org/content/10.1101/2020.11.07.20227447v1
The resultant formula, together with the definition of each of its terms, is given in the paper linked above.
There was no standardised way to measure proximity detection and continuity of coverage of a contact event other than simple, high-level metrics in the style of “percentage of devices detected during an entire test”.
Whilst useful, such simplistic metrics miss a variety of low-level operating system, hardware, and especially protocol behaviours, which can lead to efficacy being misrepresented or misunderstood in epidemiological terms.
In short, the simple measure tracks ‘detection of phones’ rather than ‘detection of risk exposure’. The fair efficacy formula presented in our paper (under peer review) measures the latter.
Epidemiologists need to calculate exposure risk based on the time and distance of any contact event. The closer people are together, the higher the risk. The length of the ‘exposure window’ determines how fine-grained such risk scores are, and performance across the range of devices prevalent in a particular population determines a protocol’s maximum reach.
All of these measures need to be understood in order to truly score and compare the effectiveness of any Proximity Detection Protocol and therefore the contact tracing applications and health service responses built on this underpinning.
We took a step back and considered what physically happens between two mobile devices during a contact event. We then reasoned about what could prevent a part (window) of a contact event from being correctly recorded.
We then created dummy data for a simple theoretical contact event and applied an existing risk scoring model to show how a ‘perfect protocol’, versus a protocol exhibiting some deliberate limitation, would modify the observed risk.
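As a loose illustration of this effect, the sketch below scores a theoretical contact event in 30-second windows and compares complete detection against a protocol that misses every other window. This is a toy model of our own devising for this FAQ; the window weighting and all values are assumptions, not the risk scoring model used in the paper.

```python
# Toy illustration only: not the risk scoring model used in the paper.
# Risk per 30 s window is weighted by inverse squared distance (assumed
# weighting); windows the protocol fails to record contribute nothing.

WINDOW_SECONDS = 30

def window_risk(distance_m):
    """Hypothetical per-window risk: closer contact contributes more."""
    return WINDOW_SECONDS / max(distance_m, 0.5) ** 2

def contact_risk(distances, detected):
    """Sum risk over windows; detected[i] is False when window i was missed."""
    return sum(window_risk(d) for d, seen in zip(distances, detected) if seen)

# A five-minute contact at 2 m: ten 30-second windows.
distances = [2.0] * 10
perfect = contact_risk(distances, [True] * 10)
lossy = contact_risk(distances, [i % 2 == 0 for i in range(10)])
print(perfect, lossy)  # prints 75.0 37.5
```

Even though both protocols “detected the phone”, the lossy one reports only half of the true exposure, which is exactly the kind of gap a phone-detection metric hides.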
This then led us to create a formula that takes into account all of the limiting factors a user of such a contact tracing app could encounter on a particular day.
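To convey the intuition behind combining limiting factors (and only the intuition; the published formula has its own exact form, which is given in the paper), note that if each factor independently reduces the chance of a window being recorded, the combined per-window detection probability is their product. The factor names and values below are hypothetical.

```python
# Intuition sketch only, not the published fair efficacy formula: treat each
# limiting factor as an independent per-window survival probability and
# multiply them to get the combined chance that a window is recorded.

def combined_detection_probability(factors):
    """Product of independent per-window detection probabilities."""
    p = 1.0
    for f in factors:
        p *= f
    return p

# Hypothetical example factors: OS background throttling, hardware detection
# rate, and protocol-level loss.
p = combined_detection_probability([0.9, 0.95, 0.8])
print(round(p, 3))  # prints 0.684
```

The point of the sketch is that several individually modest losses compound: three factors in the 80–95% range already drop nearly a third of all windows.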
We then applied this formula to a brand new Bluetooth protocol called Herald. Thanks to the fair efficacy formula, and to approaching the problem in an epidemiologically data-driven way, we were able to identify issues and rapidly tune this protocol in five weeks.
No. Although it is true that some of the authors of our paper and protocol worked on previous COVID-19 app efforts for governments, we built this measure independently of any particular protocol, and before we wrote our new protocol from scratch. We wanted to be sure we could encourage all teams to use our measure to communicate their own protocols’ effectiveness to their country’s population. The hope is to increase the use of, and trust in, contact tracing applications worldwide - no matter whose protocol they use.
Once we had completed the design of the formula, we created a new protocol based on our new knowledge of what affects efficacy, used the formula to test and measure its performance, and directed our efforts at the mechanisms that would provide the most epidemiological advantage.
Our protocol therefore has a high score on the measure, but that’s because we’ve been using the formula and testing approach for longer. The formula wasn’t designed to show our protocol in a good light. We would encourage all teams to apply our formula to their work and iterate rapidly to improve their own efficacy. This way more lives can be saved.
The paper does mention some items outside the scope of our research that others are already working on, or where published material already exists. These include:-
The synthetic data and calculations used in the paper can be found in the Synthetic Data Spreadsheet (ODS)
From the paper: “Now a low-level protocol that works across a large range of devices exists in the Herald protocol, the author aims to suggest a payload to transfer over this protocol that allows for its use in either a centralised or decentralised contact tracing application. This will provide international interoperability whilst allowing local jurisdictions to tailor their approach to one acceptable by its residents.”
We also believe that more work needs to be done to ensure that any RSSI-to-distance estimation formula takes account of the fact that some phone pairings appear to use a log-distance approach to scale their RSSI values, whereas others use an inverse-distance-squared approach. This discrepancy leads to inaccuracies around the 2.5 m mark.
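A short sketch of why the two scalings disagree: both can be written as `d = 10^((RSSI_1m − RSSI) / (10 n))`, with the inverse-distance-squared (free-space) case being `n = 2` and indoor log-distance fits typically using a larger exponent. The 1 m calibration RSSI and the exponent below are assumed values for illustration, not measurements from the paper.

```python
# Illustration only: two common RSSI-to-distance models diverge as distance
# grows. The calibration constant and exponent here are assumed values.

RSSI_AT_1M = -60.0  # hypothetical RSSI at 1 m for this phone pairing

def dist_log(rssi, n=3.0):
    """Log-distance path-loss model; n = 3 is a typical indoor assumption."""
    return 10 ** ((RSSI_AT_1M - rssi) / (10 * n))

def dist_inverse_square(rssi):
    """Inverse-distance-squared (free-space) model: the log model with n = 2."""
    return 10 ** ((RSSI_AT_1M - rssi) / 20)

# At -72 dBm the models disagree by well over a metre (~2.5 m vs ~4.0 m), so a
# distance formula assuming the wrong scaling mis-classifies contacts that sit
# near a 2.5 m risk threshold.
print(round(dist_log(-72), 2), round(dist_inverse_square(-72), 2))
```

Which model a given phone pairing follows would need to be established empirically per device pair, which is why we flag this as further work rather than prescribing either form.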
Since the first draft of the paper was distributed, Oxford University’s Big Data Institute (BDI) has released OpenABM-COVID19, a Jupyter notebook simulation of COVID-19 cases, spread, and hospitalisation statistics given a population size, an existing number of cases, and settings for various control methods.
Our paper now includes a new section detailing how to take the output of the fair efficacy formula and apply it to an OpenABM simulation of COVID-19 spread.
Our team has created an extension to Oxford’s work that simulates the disease spread curve given the efficacy results produced by the Fair Efficacy Formula. This work is currently undergoing review; once verified, the results and spread control charts will be published here.
To help you get started, see the documentation.