Preventing Biological Threats Through AI Research: An Interview With Jake Pencharz

“I got this job because I had a really weird amalgamation of experience—not because I had some linear path that very clearly ended up here.”

How does someone find their way into the niche fields of AI safety and biosecurity? Jake Pencharz is currently a researcher at the UK’s AI Safety Institute, where he investigates how AI could democratize research and development in biology and chemistry. His path to this role has been anything but straightforward, shaped by evolving ambitions and unexpected opportunities. Jake’s journey highlights the value of exploring different interests, building career capital, and following curiosity. We recently spoke with Jake about his career path and his advice for others looking to enter the field. This conversation has been edited for clarity and brevity.

What did you start out wanting to do with your career and how did that change?

The theme of my career, as you rightly picked up, is one of confusion. When I was 18, I really wasn’t sure what I wanted to do. I actually thought about becoming an artist, so I applied for both fine arts and medicine at university. In the end, I was convinced to pursue medicine. Just before starting, though, I received a call from a professor who mentioned a program in biomedical engineering and thought I’d be a great fit. Because I was so unsure about my future, the idea of combining medicine and engineering seemed perfect. The program allowed me to study both for three years, after which I could choose to continue with medicine or switch to an engineering track.

I had always vaguely wanted to do something helpful, which is why I initially applied for medicine. However, during the program, I realized that a lot of the real value in healthcare was coming from the people designing the medical devices and software, not just the doctors. I saw that doctors often relied heavily on these devices, like PET scanners, without fully understanding how they worked. This made me more interested in being the person who designed the devices.

What happened after undergrad?

I started getting interested in working on brain-computer interfaces after watching a TED talk on sensory substitution, which I found incredibly inspiring. This led me to apply for a master’s program in Germany that focused on brain-computer interfaces, combining machine learning, neuroscience, and electrical engineering.

Initially, I thought this was the frontier of science. I think I was a bit naive. I believed we were on the verge of creating a world where humans could be augmented by high-bandwidth communication with computers. I worried that this would create a digital divide between the rich and the poor, where the wealthy would almost become a different species due to their augmented capabilities. My original ambition going into the master’s was to build low-cost EEG devices to help bridge this divide.

Quite quickly, I realized we were much further away from the future I had envisioned. When I started working with EEG devices, it took me weeks to determine whether the signals I was receiving were generated by neural activity or just noise—which was really disheartening. In general, my take on neurotechnology now is that it’s primarily a hardware problem. We can’t extract much information from the current hardware we have. On top of that, I noticed that many companies working in this space were focused either on optimizing websites for selling things more effectively or creating incredibly expensive prosthetics that only a few wealthy individuals could afford. I just didn’t find that compelling.

How did this realization lead to a shift in your career goals?

I’d say my career took a major turn when I realized that my initial path wasn’t right for me. Unsure of what I wanted to do, I literally Googled, “How to have a meaningful career?” This was around the time that the pandemic hit, and pandemic prevention became a very compelling cause area for me. I felt strongly about the mismanagement of vaccine rollout, particularly in South Africa, where the surveillance was patchy at best. That experience really motivated me to work in the biosecurity space.

While I was still pursuing my master’s degree, I was also working full time at a pharmaceutical company. Wanting some hands-on experience, I had applied for a position in a machine learning group there. Strangely enough, I got that job because, before my master’s, I had worked at an AI consulting company for a year. It wasn’t planned; it just happened because the academic calendars in South Africa and Europe are out of sync, and I couldn’t start my master’s right away.

During that year, I did some full-stack engineering and web design, which ended up being exactly what the machine learning group needed. They were looking for a student with web experience whom they could hire for a low wage. So, I kind of lucked into that position. That seems to be a theme in my career—random bits of experience helping me land jobs that open up new doors. There was no way I could have known that doing web development would eventually lead to this opportunity, but it did. 

I worked in that group for about two years on several interesting projects. One of the key projects I worked on involved antibody structure prediction. You might have heard of AlphaFold, a model developed by DeepMind. It’s amazing, but it turns out it’s not very good at predicting the structures of antibodies, which are now the most common class of therapeutics. Pharmaceutical companies are keen on having efficient and accurate computational pipelines for designing these drugs, and structure prediction is a crucial part of that pipeline. We worked on adapting AlphaFold specifically for antibodies. I was particularly drawn to this project because it connected to medical countermeasures for pandemics, which was my main motivation for getting involved in the first place.

Without a PhD, I don’t think I could have ever secured full-time employment with the machine learning group. It was great to get exposure and experience there, though. At the time, I was still enrolled in my master’s but doing very little—just watching lectures in the evenings and occasionally taking exams. Eventually, the work I was doing at the company became the basis for my master’s thesis, which worked out nicely.

What did you do after completing your master’s?

At that point, I was still interested in working in biosecurity. I ended up applying for a grant to work on metagenomic sequencing. I was drawn to this project because I believed the technology had great potential for large-scale population monitoring, especially in places like Sub-Saharan Africa and India. These are densely populated areas with limited infrastructure, so if you can monitor things like wastewater to detect pandemic-grade pathogens early, that’s hugely beneficial.

I received funding for a four-month independent research project to explore metagenomic sequencing. Early on, we realized that the technology wasn’t cheap enough to be widely adopted. So the project shifted focus to exploring how to shape the market to create incentives for cheaper sequencing devices. We were trying to find “beachhead markets” for metagenomic sequencing—essentially, initial markets that would pave the way for broader adoption. While we didn’t quite nail down a clear strategy in that timeframe, it was an interesting exercise in thinking about how to build surveillance systems that could piggyback on existing ones, like the flu surveillance system in the U.S., to detect novel viruses.

This project was also a signal of my genuine interest in biosecurity, which later helped me land my current job. After that, I took a position at a sequencing company, Oxford Nanopore Technologies. They specialize in DNA and RNA sensing. I joined the company because of my interest in biosecurity, but I ended up on a different team, working on protein sequencing. It was a bit of a sidestep from what I initially wanted to do, but still a fascinating project that could help diagnose diseases.

Interestingly, the only reason I got that job was my previous work at the pharmaceutical company. Since antibodies are proteins, they assumed I knew what I was doing. Again, my career path has often felt muddled, with random experiences opening up unexpected doors. During this time, I had moved to Oxford and connected with several people in the biosecurity community. One of them was an early employee at the UK AI Safety Institute. They knew I had worked on antibody structure prediction and as a data scientist in biotech, which fit their need to build out a biosecurity evaluation team. It also signaled that I was serious about biosecurity. I likely wouldn’t have known about or applied for my current role had I not made this connection.

What does your work at the AI Safety Institute look like?

The goal of the institute is to evaluate AI broadly and identify the risks it poses to society and to British national security, both of which are genuine concerns. The work is divided into different streams. One focuses on societal harms, asking questions like, “How might AI radicalize people?” Another stream looks at cybersecurity, exploring the risks AI presents in cyberspace. There’s also a safeguards workstream dedicated to understanding how large language models (LLMs) can be jailbroken and how to prevent that.

My team investigates possible ways in which AI could be used in chemistry and biology research, focusing on dual-use contexts. A lot of our work involves evaluating LLMs for their ability to synthesize and communicate complex information in chemistry and biology, which could assist people with little experience in those fields.

The institute has strong relationships with AI developers, allowing us to access models before their public release. We then provide feedback on how safe we believe the models are, helping developers address potential risks before launching them.

So basically, you’re trying to understand whether systems like ChatGPT can help people run biological experiments? 

Yes, we’re evaluating whether these models can provide detailed, step-by-step instructions to accomplish complex tasks. LLMs are excellent at retrieving and synthesizing complicated information. So instead of needing to read countless papers and fully understand their methodologies, the model could potentially package that information into a clear recipe for the user. However, even then, someone without wet lab experience might struggle to follow these protocols, because working in a lab requires a lot of tacit, hands-on knowledge.

While my work is more technical, the UK AI Safety Institute is split between technical researchers and policy staff, and there’s a tight feedback loop between our evaluations and policy discussions. Ultimately, our goal is to provide the UK government with accurate information and risk assessments. It would then be up to the government to decide what to do with that information.

So in terms of your specific role, what does your typical day-to-day look like?

Yeah, so a couple of different things. I spend time writing documents to detail our findings, and I write code to actually build these evaluations. One cool thing about my role is getting to work with a software package called Inspect, which was developed by the UK AI Safety Institute. It’s becoming an industry standard, which is quite exciting. We use Inspect internally for our evaluations, and it’s also gaining traction with other companies and third-party evaluators. Much of my time is spent working with this software, exploring questions like which capabilities we need to assess, how these relate to the risks we’re concerned about, and how to set up evaluations that focus precisely on those capabilities and yield meaningful results. So in a nutshell, my daily tasks involve writing and coding.
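For readers curious what an evaluation built with Inspect can look like, here is a minimal sketch loosely following the package’s public quick-start pattern (https://inspect.aisi.org.uk/). The task name, question, and scorer below are illustrative placeholders rather than an actual AI Safety Institute evaluation, and exact parameter names may vary between versions of the package.

```python
# Illustrative Inspect task -- a toy example, not an actual AISI evaluation.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate


@task
def lab_knowledge_check():
    # Each Sample pairs a prompt with the target answer that the scorer
    # will look for in the model's response.
    dataset = [
        Sample(
            input="Which spectroscopic technique is most commonly used to "
                  "confirm the structure of a small organic molecule?",
            target="NMR",
        ),
    ]
    return Task(
        dataset=dataset,
        solver=generate(),  # simply ask the model for a completion
        scorer=includes(),  # correct if the target string appears in the answer
    )
```

A task like this would typically be run from the command line with something like `inspect eval lab_knowledge_check.py --model <provider/model-name>`, after which the results can be browsed in Inspect’s log viewer.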

What qualities or preferences do you think would make someone enjoy this type of work?

Some of what makes this job enjoyable is what would make anyone enjoy their job: it’s a great culture, full of really smart people I can learn from and feel inspired by. To do well in this work, you need to be someone who likes working hard and tackling challenges. You need to be excited about being at the forefront of something and potentially making a lot of mistakes along the way. You also need to have a research-oriented mindset and enjoy building things that don’t already exist.

If you’re aligned with the mission of addressing significant risks posed by AI, particularly in areas like chem-bio, and are motivated by a commitment to solving these issues, you’ll find the work fulfilling. Because it is a small field, it’s exciting to be world-leading. But again, I got this job because I had a really weird amalgamation of experience—not because I had some linear path that very clearly ended up here.

What advice would you give your younger self or someone interested in getting into biosecurity?

If I had to give advice, I’d say there are some core skills you can develop early in your career that are valuable almost anywhere. Being able to write well and communicate effectively is always important. If you’re more analytically inclined, learning statistics is a very useful skill set, and this is actually a gap in my own knowledge. If I could go back and give advice to my younger self, I’d tell myself to focus more on statistics and math. In general, having a solid foundation in math is essential for anyone wanting to work in a technical field. Most of the people I see excelling around me have studied math or statistics, and it gives them a lot of confidence when discussing analytical topics, designing experiments, and dealing with quantitative issues.

As for biosecurity specifically, my path into the field was pretty unconventional, so instead I’d recommend pursuing something like clinical studies, epidemiology, or virology. These areas would be super helpful. I definitely wouldn’t recommend electrical engineering unless you’re interested in something like sequencing. I guess if sequencing is something you’re passionate about, even though it would be unusual to be obsessed with sequencing at 18, then maybe materials science or electrical engineering could be a good fit. Ultimately, though, you really just have to follow whatever you’re genuinely interested in.

Keep exploring

Want to figure out how to make a bigger impact in your own career journey?
Check out these articles and resources: