We have a new episode of the Algorithmic Futures Podcast out, and it has me thinking back to my earlier post on objectivity, and the challenges of figuring out what we know — and how we know it. In the podcast episode, our guest, Lyndon Llewellyn from AIMS, presents the kind of scenario that keeps scientists up at night: you create a new method of measuring something — a method that promises to be better in some way. What if that method comes up with a different answer than you might have gotten previously? Is the new method right? Is it wrong? Is everything you’ve done called into question? And if so, what does that mean? These are big, scary questions, particularly when the integrity of your work is at stake.
In the podcast, Lyndon’s speaking specifically about the simultaneous promise and peril of using autonomous technologies as data collection and analysis tools. In his world, such technologies are likely to augment, and in some cases replace, information currently collected by highly trained human divers, who are towed over specific areas of the reef year after year. In Lyndon’s story there is respect for the work those divers have done, and for the idea that the body of work this team has amassed over almost 40 years represents a useful truth.
It’s gotten me thinking about parallels to my own past life in nuclear physics. For me, the knowledge I created was always mediated by technology. It had to be. I couldn’t detect the radioactivity I was interested in directly — at least not in any safe, useful fashion — so I had no choice but to use detectors and signal processing electronics and computers and code. During an experiment, the detection system operated “autonomously” to collect valid data, where valid was defined by conditions we specified. But it hadn’t always been that way.
In the lab I was trained in, there was once a giant blue ball that took up a large portion of our experimental hall. My predecessors in that lab used the ball for experiments, generating nuclear reactions at its centre and detecting the products of those reactions on giant sheets of photographic paper. The data points, I was once told, were tiny dots of light in a sea of black. A team of women would count the dots on each sheet, and these dots — and their position in the ball — would become the observations physicists would use to understand the structure of the nuclei they had produced.

(Yale University – Wright Nuclear Structure Laboratory, 2005ish)
With the introduction of computers, better radiation detectors, and a heap of signal processing electronics (all the tools one needs to create an automated data collection and analysis system), the ball became a relic of another time.
Listening to Lyndon’s story, I now wonder what the transition from photographic paper to computer-collected data looked like.
I imagine a bit of nervousness set in as those first detectors were installed. Were the data reasonable? Would they reveal something we couldn’t see before? If the answers didn’t agree, what was right? What were the consequences? In truth, we ask these questions with every experiment. Surprise in an experiment is not a eureka moment. Eureka moments only come after every cable and line of code has been checked, and every detail verified (as I found for my own work in this paper here; we thought the data were wrong at first). But when the technology you are using to seek truth is new, the only normal you know is the understanding you’ve built up using previous methods.
In physics, we build trust in the equipment that makes our science possible by finding ways to verify what we “see”. This generally happens in two ways. We can make sure the data we collect, once processed, conforms with the laws of physics, and we can make sure we reproduce old (tried and true) results. There are inherent assumptions in either method: that the laws of physics are correct, that our previous results are correct, and that we haven’t somehow tuned our data analysis approach to produce what we expect to see. Some of these assumptions are more likely to be true than others. As a researcher, I trust the laws of physics because they’ve been verified so many times. I trust my own prior results or analysis far less. It’s easy to make mistakes when there’s a detector, a bunch of electronics and 10,000 lines of code between what I seek to understand and what I observe. This is why, if you find my former colleagues in the beam hall during an experiment, they’re constantly looking for oddities. What is that funny thing there? Why does this look a bit off? Can we explain this shape in the data here? The underlying questions are these: Do we understand our detection system? Is what it’s “telling us” right?
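To make the second kind of check a little more concrete, here’s a toy sketch of what “reproduce the old results” can look like in practice. Everything in it is invented for illustration (the numbers, the quantity being measured, the function name); it is not drawn from any real analysis code. The idea is simply to ask whether a value measured with the new system agrees with the established one within their combined uncertainties.

```python
import math

def consistent(new_value, new_unc, ref_value, ref_unc, n_sigma=3.0):
    """Do a new measurement and an established reference agree
    within n_sigma combined standard uncertainties?"""
    combined_unc = math.sqrt(new_unc**2 + ref_unc**2)
    return abs(new_value - ref_value) <= n_sigma * combined_unc

# Invented example: a half-life measured with a new detection system
# versus the previously accepted value (both in seconds).
new_t_half, new_unc = 12.46, 0.05
ref_t_half, ref_unc = 12.32, 0.02

if consistent(new_t_half, new_unc, ref_t_half, ref_unc):
    print("New system reproduces the old result (within 3 sigma).")
else:
    print("Disagreement: time to check every cable and line of code.")
```

Real cross-checks involve whole spectra and careful accounting of systematic uncertainties rather than a single number, but the habit is the same: the new system earns trust by reproducing what the old one established.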
In all my time in physics, that quest to get things right was fundamental to the culture of the place. It’s a quest I sense in Lyndon’s answers here — his desire to get things right, to seek a truth that he perceives to exist, even if it can’t truly be known.
These are big questions, and I don’t think there are simple answers. Science is about truth-seeking, but there are always going to be biases in the way we seek truth. Some are surmountable; others not so much. I think those biases matter less than our attitude towards our own findings. Are we suspicious enough of our own work? Do we question ourselves, and do we allow others to question us (respectfully)? Are we aiming to learn?
I think our answers to these questions matter for science, for policy, for everything we do, really. I suspect the introduction of new (autonomous) technologies into AIMS’s practice could be seen as an opportunity to test the robustness of the science, and potentially enhance it in unexpected ways. If the conclusions drawn by the new technology match the old ones, that’s great. But if they don’t? That’ll be tough, but it’ll also pose new creative challenges, ones I suspect Lyndon and his colleagues will be well equipped for.
Featured image – Great Barrier Reef from above, by USGS on Unsplash