The importance of infosec
In previous posts I’ve wondered what things I should try to learn as a senior data scientist with LLM experience. The hard part is projecting which skills will still be useful more than six months ahead of time.
In the past few weeks I’ve been thinking that cyber security is going to become more important. In this post I give my reasons for this, the ways I could be wrong, and also some next steps.
Real world context
Out in the real world, in the last few months in the UK, there have been major cyber incidents at Marks and Spencer and the Co-op, which have affected many people.
In my computer world, there’s a lot of online discussion about progress in LLM capability, and whether models will be able to act more autonomously. OpenAI called 2025 the “year of agents”, though personally I haven’t seen many applications yet. No one knows how rapidly LLMs will become more capable, but it seems likely that they’ll use more agentic approaches, where an LLM can call multiple tools to act more autonomously.
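To make “agentic approaches” concrete, here is a minimal sketch of the basic loop: a model repeatedly chooses a tool, observes the result, and stops when it decides the task is done. Everything here is hypothetical and self-contained — `toy_model` is a stand-in for a real LLM call, and the tools are trivial fakes.

```python
# Minimal sketch of an agentic tool-use loop (hypothetical; no real LLM API).

def toy_model(task, history):
    """Stand-in for an LLM call: returns (tool_name, argument), or None when done."""
    if not history:
        return ("search", task)       # first step: gather information
    if len(history) == 1:
        return ("summarise", history[-1])  # second step: condense it
    return None                        # decide the task is complete

# Hypothetical tools the "model" is allowed to call.
TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "summarise": lambda text: f"summary of {text!r}",
}

def run_agent(task):
    """Loop: ask the model for a tool call, execute it, feed back the result."""
    history = []
    while (step := toy_model(task, history)) is not None:
        tool_name, argument = step
        history.append(TOOLS[tool_name](argument))
    return history[-1]

print(run_agent("cyber incident timeline"))
```

The security-relevant point is that the loop, not a human, decides which tool runs next — which is exactly why more capable agents raise both offensive and defensive stakes.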
AI Safety
Meanwhile I’ve continued my interest in AI safety. Some of the roles in this field are about evaluating dangerous capabilities of models. Probably the most obvious way that people + AI models could inflict harm is via cyber attacks, since this doesn’t require (much) physical infrastructure.
Safety-focused roles like the Evaluations Software Engineer at Apollo Research mention LLM and agentic skills, and also list “Infosecurity / cybersecurity experience” as a bonus skill.
This role at AISI lists skills such as:
- Proven experience related to cyber-security red-teaming such as:
- Penetration testing
- Cyber range design
- Competing in or designing CTFs
- Developing automated security testing tools
- Bug bounties, vulnerability research, or exploit discovery and patching
- Communicating the outcomes of cyber security research to a range of technical and non-technical audiences
- Familiarity with cybersecurity tools and platforms such as Wireshark, Metasploit or Ghidra
- Software skills in one or more relevant domains such as network engineering, secure application development, or binary analysis
So this is making me think that safety-focused AI research organisations are concerned with how capable new LLMs will be at completing offensive cyber tasks.
Ways I could be wrong
It’s possible that advances in LLMs could make people safer, because you can have more automated defence. But my sense is that cyber attacks have increased as technology has become more complex and powerful.
It’s also possible that LLM providers like OpenAI will stop their models from performing offensive cyber tasks, and I think they try to. But it’ll likely remain possible to run your own models locally, even agentic ones, where organisations will have a limited ability to regulate them.
Another way I could be wrong is if security really should be provided by institutions, network providers, and players like Amazon and Google, rather than being something to tinker with individually. I think this is maybe the best argument against ML folks learning more about infosec.
Next steps
I personally want to learn more about information security. So I plan to adapt my learning plan to include more content about computer security, maybe using these resources, or this MIT class (course page here and videos here).
I’m also planning to learn more about cyber security at the places I work, and to try to help make that security more robust in a world where there may be more attacks from people using AI tools.