LLMs x Security

Helping security researchers use LLMs in 50 lines of code or less.

privesc.py:

    class LinuxPrivesc(Agent):
        conn: SSHConnection = None  # SSH connection to the target system

        def init(self):
            super().init()
            # register the capabilities the LLM can invoke;
            # SSHRunCommand is the default action
            self.add_capability(SSHRunCommand(conn=self.conn), default=True)
            self.add_capability(SSHTestCredential(conn=self.conn))
            # prompt template used to ask the LLM for the next command
            self.add_template("next_cmd.md")
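The snippet above relies on hackingBuddyGPT's `Agent` base class and capability classes. As a rough illustration of the underlying pattern, here is a minimal, self-contained sketch of how an agent registers named capabilities and dispatches to a default one. All class and method internals below are illustrative stand-ins, not the library's actual implementation:

```python
# Minimal sketch of the agent/capability pattern (illustrative only,
# not hackingBuddyGPT's real API).

class Capability:
    """A named action the LLM is allowed to invoke."""
    def __init__(self, name):
        self.name = name

    def run(self, cmd):
        return f"{self.name} executed: {cmd}"


class Agent:
    """Keeps a registry of capabilities, one of which is the default."""
    def __init__(self):
        self._capabilities = {}
        self._default = None

    def add_capability(self, cap, default=False):
        # register a capability under its name so the LLM can call it
        self._capabilities[cap.name] = cap
        if default:
            self._default = cap

    def run_default(self, cmd):
        return self._default.run(cmd)


class LinuxPrivesc(Agent):
    def init(self):
        self.add_capability(Capability("ssh_run_command"), default=True)
        self.add_capability(Capability("ssh_test_credential"))


agent = LinuxPrivesc()
agent.init()
print(agent.run_default("whoami"))  # → "ssh_run_command executed: whoami"
```

In the real project, the capabilities wrap an SSH connection to the target and the agent loop feeds the LLM's chosen command through them.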

Introduction

Getting started

Helping Ethical Hackers use LLMs in 50 Lines of Code or Less.

HackingBuddyGPT helps security researchers use LLMs to discover new attack vectors and save the world (or earn bug bounties) in 50 lines of code or less. In the long run, we hope to make the world a safer place by empowering security professionals to get more hacking done by using AI. The more testing they can do, the safer all of us will get.

Start your own security research using LLMs!

Installation and Quickstart

Step-by-step guides to setting up hackingBuddyGPT and running it against a test target.

Architecture guide

Learn how the internals work and contribute.

Existing Use Cases

Look at our existing agents/use cases or write your own.

Contribute (:

Contribute your own use cases/agents to hackingBuddyGPT.

Continue with Setting up and Running your own Hacking Agent!


Getting help

If you need help or want to chat about using AI for security or education, please join our Discord server where we talk about all things AI + Offensive Security!

Main Contributors

The project originally started with Andreas asking himself a simple question during a rainy weekend: Can LLMs be used to hack systems? Initial results were promising (or disturbing, depending on whom you ask) and led to the creation of our motley group of academics and professional pen-testers at TU Wien's IPA-Lab.

Over time, more contributors joined: