

Black Hat 2025 and DEFCON 33 debriefing

Hacker Summer Camp is over for 2025 and the experience was unique. A few keynotes, vendor meetings, hacking villages and more than 30 hours of travel time later, I can confidently say it was all worth it. Being based in Europe, I have experienced nothing similar: an event that brings so many cybersecurity enthusiasts from around the world to one place to learn, connect and advance the field.

The organization, given the size of both conferences, was near perfect, and any inconveniences, such as long queues, were mainly due to conference center and room limitations. The presentations at both conferences, at least the ones I attended, were brilliant. I watched researchers hacking Apple CarPlay, Cursor, Microsoft Copilot and all sorts of AI agents and assistants, and experienced truly inspiring keynotes by Mikko Hyppönen and Nicole Perlroth. I was especially inspired by Perlroth's keynote, which bridged geopolitics, national security and real life with the work done in cybersecurity over the last decades, using unguided, direct and engaging storytelling.

DEFCON simply exceeded my expectations for a few key reasons:
- the people, the culture, the attitude
- the number of insanely well-run villages
- the actual potential to see and learn new techniques and engage with fellow hackers

I am still in the process of putting my notes and thoughts in order, because the overall experience and the amount of information received is overwhelming. However, if I had to highlight a few key points from this year's conferences, they would be the following:
- AI cannot solve everything, and it will probably not cost much less when done properly. It needs AppSec, proper architecture, specific context and a lot of penetration testing.
- Prompt injection is a real threat and something you have to account for if you are building generative AI systems. But it's not the only threat to consider. Containing a prompt for security is hard, so keep in mind that, as with everything in security, every input is a potential attack vector.
- Expose yourself, share your ideas and engage with people. Ask for opinions and look at what others do.

Looking forward to the next Hacker Summer Camp, whenever that may be.

Preparing for Black Hat and DEFCON 2025

It's almost time to board the plane and head to Las Vegas to attend the two most famous cybersecurity conferences in the world. I thought I'd drop a few lines to capture my thoughts before the conferences begin.

Black Hat

Agentic AI and AI/LLM security are everywhere. Judging by the pre-conference product presentations, only a handful of products are missing AI capabilities. What drew my attention, though, is the number of companies offering identity management, access control and AI governance for AI systems, which suggests that AI adoption is evolving in an uncontrolled way inside each company. There are also companies offering complete agentic, automated SOC analysts and AppSec architects, so it seems that this year, once again, everything will be dominated by AI. Not entirely unexpected or unjustified...

There are also presentations on LLM exploitation, AI 0-day exploits and secure AI architecture that look really promising. I am looking forward to James Kettle's talk "HTTP/1.1 Must Die! The Desync Endgame" and to Nicole Perlroth's keynote; she is the author of one of my favorite books, "This Is How They Tell Me the World Ends". The Black Hat app is really neat and helps organize the experience, although it feels a bit outdated.

DEFCON

This is really a "dream come true" for me and I think that for a good few hours I will be wandering around like a child in a candy shop. When reality hits me, I will for sure attend both the social engineering village and the AppSec village, which have amazing things planned. I believe the social engineering village will be integrating AI assistants to help with the hacking, and I'm sure the results will be great. What I am hoping for is to meet people interested in agentic AI development and hacking.

Langgraph notes on state and memory

Learning langgraph is cool, but clearing up the basic terminology is important. These are my notes on State and Memory: what they are, how to use them and when.

State

The state consists of a schema and the reducer functions. The state is what is passed between nodes in a single run and is subject to transformations by the reducer functions. It will include the latest messages, variables and so on. If you re-execute the workflow, the graph's state is re-initialized; it does not persist between executions. That's where memory comes in handy!

The schema is a more structured way of defining what is passed around the nodes and transformed by the reducer functions. The easiest way to define a schema for your state is like this:

from typing_extensions import TypedDict
from langgraph.graph import StateGraph

class MyState(TypedDict):
    my_var: str

...
# Build your graph
builder = StateGraph(MyState)
...
graph = builder.compile()
...

So in the previous example you pass your custom state schema to the graph and ask that the graph's state conform to what you defined: a dictionary with the key my_var. You can then interact with the state like this:

def node(state):
    return {"my_var": state["my_var"] + " hey!"}

Observe that langgraph knows how to bind the MyState schema class to the state variable passed as input to the node.

MessagesState

Well, if you don't want to do all of that and you just need to rely on passing the system and chat messages around the nodes, you can do exactly that with the prebuilt MessagesState. Here is an example from the langgraph docs:

from langgraph.graph import START, StateGraph, MessagesState
...

# Node
def assistant(state: MessagesState):
   return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}

# Build graph
builder = StateGraph(MessagesState)

Memory

In contrast to the state, memory can persist across multiple runs. But before looking at memory, it's time to define checkpoints.

Checkpoints

Checkpoints are snapshots of the state. Remember a state is transient and changes between nodes due to reducer functions, so taking snapshots of it to keep it in memory makes sense. From the langgraph documentation: Checkpoints are persisted and can be used to restore the state of a thread at a later time.

Memory store

We can now define memory in langgraph as follows:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
...
graph_memory = builder.compile(checkpointer=memory)

You then have to define a thread_id, which identifies a thread of checkpoints, i.e. the collection of all the saved states. From langgraph's docs:

# Specify a thread
config = {"configurable": {"thread_id": "1"}}

...

messages = graph_memory.invoke({"messages": messages}, config)

Observe how the invocation is associated with the config; the checkpointer then stores the state in memory automatically.

There are more advanced topics related to memory that I will cover with another set of notes.

Thoughts on AI and the future of AppSec

A lot is changing due to assistive AI and agentic workflows, clearly affecting the state of Cybersecurity and AppSec. Today it is a real effort to find a tool without AI enhancements, even if that is done just to keep it relevant. Would you buy a tool without an AI assistant today?