Many companies use artificial intelligence as an assistance or automation tool. At the same time, new approaches are emerging with agent-based systems in which AI independently assumes roles and actively helps shape work processes. This shift is changing not only the technology itself, but the overall organisation of work. Oskar Trautmann, Manager Strategy Agentic AI at Plan.Net Studios, supports this transformation strategically. In an interview with the IBA Forum, he talks about Agentic AI, new forms of division of labour, and why dialogue between generations remains crucial.
Oskar, you were on stage at the last Wherever Whenever – Work Culture Festival together with your dad for “Zoomer meets Boomer”. Looking back today, what has changed in your view of work since then, especially with your new role in Agentic AI?
I feel like the world of work is changing faster than ever before. A year ago, we were still talking a lot about how hybrid models work or how to design meaningful face-to-face collaboration. Today, the focus is much more on how AI directly intervenes in work processes. The office is becoming less of a mandatory place and more of an experimental space. Topics like AI can’t be solved purely digitally. Teams need to come together, try things out, and learn from each other. At the same time, I see roles shifting. Younger employees tend to have a very natural relationship with AI, while more experienced colleagues bring contextual knowledge and judgement. This combination is becoming increasingly important.
Many people know you as a podcaster and a voice of Generation Z. Today you work as Manager Strategy Agentic AI. What personally appeals to you about this intersection of culture, technology and strategy and how do these areas fit together for you?
For me, they’re inseparable. AI isn’t an isolated technology, it’s a cross-cutting technology, similar to electricity or the internet. It doesn’t just change tools. It changes entire organisational logics. When the division of labour changes, it automatically affects culture, leadership and strategy. That’s exactly what fascinates me: not just introducing individual applications, but thinking about how organisations can fundamentally work differently. I’d rather actively shape developments than just watch them happen from the outside. And especially in Europe, we have the opportunity to use AI in a more human-centred and responsible way.
Many companies use tools like ChatGPT or Copilot. What distinguishes these tools from AI agents?
Generative AI such as ChatGPT or Copilot primarily responds to individual requests. You enter a prompt and receive an answer, a text or an analysis. That’s helpful, but it remains selective. AI agents go a step further. The system assumes a clearly defined role and handles tasks more independently and continuously. I like to explain it using three components. First, there’s a role description, for example: “You are my research agent, you work critically and back up your statements with sources.” Second, there’s the language model in the background (like ChatGPT) that generates and evaluates content. Third, there are the tools the agent is allowed to access, such as web searches, internal data or other systems. This combination makes the difference. Agents don’t just respond, they work in cycles of planning, executing and reflecting. They pursue a target state and complete tasks within a defined framework, more like a team member with responsibilities. Many workflows are labelled as agents today but are essentially just automations. Agentic AI, however, comes closer to human work logic and therefore truly changes how work is organised.
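The three components described above — a role description, a language model, and a set of tools, wired together in a plan-execute-reflect loop — can be sketched in a few lines of Python. This is a minimal illustration, not any real product: the "model" here is a trivial stub that picks a tool by keyword, standing in for a real LLM call.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str          # component 1: the role description
    tools: dict        # component 3: name -> callable the agent may access
    max_steps: int = 3

    def model(self, prompt: str, used: set) -> str:
        """Component 2: stand-in for the language model.

        A real agent would send the prompt to an LLM here. This stub just
        picks the first not-yet-used tool whose name appears in the prompt.
        """
        for name in self.tools:
            if name in prompt and name not in used:
                return f"USE {name}"
        return "DONE"

    def run(self, goal: str) -> list:
        """Plan-execute-reflect loop: work toward a target state, step by step."""
        log, used = [], set()
        for _ in range(self.max_steps):
            decision = self.model(f"{self.role}\nGoal: {goal}", used)  # plan
            if decision == "DONE":
                break
            tool_name = decision.split(maxsplit=1)[1]
            log.append((tool_name, self.tools[tool_name](goal)))       # execute
            used.add(tool_name)                                        # reflect
        return log

# Usage: a "research agent" with a stubbed web-search tool.
agent = Agent(
    role="You are my research agent; you work critically and cite sources.",
    tools={"search": lambda q: f"3 sources found for: {q}"},
)
steps = agent.run("search competitor pricing")
```

The point of the sketch is the structure, not the stub: the agent pursues a goal across multiple steps and decides which tool to invoke next, rather than answering a single prompt.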
Was there a moment when you realised that AI is no longer just a background tool but is changing how organisations function?
Yes, definitely. For me, it was the point when AI no longer simply accelerated individual tasks but began connecting different systems and sources of knowledge. Suddenly, it wasn’t just about writing texts faster, but about how information flows through the organisation. When a system can simultaneously access documents, project statuses, communication and data, and independently derive recommendations for action, it changes structures. Silos dissolve, responsibilities shift and processes become more fluid. At that moment, it becomes clear: this is no longer just an additional tool. It’s a new layer of infrastructure. And infrastructure always transforms the entire organisation, not just individual workplaces.
Many companies are currently experimenting with AI tools. When does automation actually become a new division of labour between humans and machines?
Automation often stops at efficiency gains. You save time on routine tasks, but the underlying work logic remains unchanged. A new division of labour only emerges when AI not only supports but takes responsibility for clearly defined tasks. This usually happens through a visible lighthouse project, an application that delivers noticeable value for an entire team or department. Ideally, these are tasks that nobody particularly enjoys anyway, such as research, reporting or documentation. When employees experience real relief rather than control, their attitude changes. Only then do organisations consciously redistribute processes. What does the human do? What does the machine take over? That’s where real transformation begins.
Where do we already see concrete examples of agents acting more like team members rather than background software?
We’re currently developing and testing specialised agents with clearly defined roles. One example is the so-called Opinionated Agent, which we internally call Hannah. Hannah is deliberately data-driven and critical, and works with criteria for evaluating and prioritising information. In everyday work, we integrate her quite pragmatically, for example, by copying her in on emails or including her in projects. If I request a market or competitor analysis, Hannah recognises the task, researches independently, gathers sources, structures the results and delivers a reliable report shortly afterwards, which she critically reviews herself. That roughly corresponds to the work of a very careful junior to intermediate employee. Transparency is important: we clearly communicate that Hannah is an agent, not a human. Still, collaboration noticeably changes. Tasks are genuinely delegated rather than simply automated. The agent takes over research and routine activities, while the team can focus more on evaluation, prioritisation and decision-making. That’s when a tool becomes a real actor in the process.
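A role definition like the one described for Hannah might be captured in a configuration along these lines. This is purely illustrative — the field names and values are assumptions for the sketch, not Plan.Net's actual setup:

```python
# Hypothetical role configuration for an "opinionated" research agent.
# All field names and values are illustrative, not a real product's schema.
hannah_config = {
    "name": "Hannah",
    "role": (
        "You are an opinionated research agent. You are data-driven and "
        "critical, prioritise primary sources, and review your own output "
        "before delivering it."
    ),
    # Criteria the agent applies when evaluating and prioritising information.
    "evaluation_criteria": ["source reliability", "recency", "relevance"],
    # Tools the agent is allowed to access.
    "tools": ["web_search", "internal_documents", "email"],
    # Transparency rule: the agent must disclose that it is not a human.
    "disclosure": "Always identify yourself as an AI agent, not a human.",
}
```

The split mirrors the three components from the earlier answer: the role description constrains behaviour, the criteria shape how the underlying model judges results, and the tool list defines the boundaries within which the agent may act.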
If AI agents take over tasks, work shifts from execution to decision-making and design. How does this change collaboration in teams?
I think we’re facing a similar situation to when email or cloud tools were introduced. We gain time, but we have to consciously decide how to use it. When AI handles operational tasks, space opens up for conceptual work like prioritising, evaluating, designing and leading. These activities are less standardised and require more alignment within the team. That means collaboration becomes more dialogic: less pure execution, more discussion about direction and quality. Leadership changes too. It’s less about controlling tasks and more about providing orientation and context. In the long term, this could make work more human, if organisations consciously use the newly gained freedom instead of filling it with even more meetings.
You work a lot on generational topics. Who finds it easier to work with AI: Gen Z or experienced employees, and why?
The numbers clearly show that younger employees use AI very naturally. Adoption rates in Generation Z are extremely high, while they’re still much lower among many experienced employees. The approach is often more intuitive, experimental and less inhibited. At the same time, the strengths of the older generation are frequently underestimated. They bring contextual knowledge, quality awareness and strategic experience. That’s crucial when working with AI, because results need to be assessed, risks evaluated and decisions taken responsibly. There’s also a stronger awareness of data security, governance and system boundaries, topics that can quickly become critical in AI projects. From my perspective, it’s not an either-or. The strongest teams combine both: the speed and tool competence of younger employees with the judgement and experience of older colleagues.
Are there strengths of the boomer generation in dealing with AI that we often underestimate?
Absolutely. Their critical distance is extremely valuable. Issues such as data protection, security and long-term consequences are often questioned much more consciously. And above all, they bring enormous experience. AI can provide information, but it cannot replace intuition, deep industry knowledge or holistic strategic assessments. When this experience is combined with AI, it creates a very powerful lever. That’s often underestimated because the debate focuses too heavily on technical skills.
In the podcast “Zoomer meets Boomer”, everything revolves around intergenerational dialogue. What misunderstandings about AI do you encounter most often and how can they be resolved?
A common misunderstanding arises when AI is seen either as a threat or a miracle cure. Some experienced employees initially react with scepticism or rejection because they fear losing control or quality. Younger employees, on the other hand, use AI very naturally without always reflecting on its limitations. It becomes problematic when this leads to bans or accusations. Then valuable learning potential is lost. What we need are shared learning spaces, formats where people openly show how they use AI, discuss results and learn from each other. That dialogue is essential so that AI strengthens collaboration rather than dividing teams.
What advice would you give people who think: “Everyone is talking about AI, but I don’t know where to start”?
Start with a concrete use case. Either something that’s time-consuming and annoying, or something you genuinely enjoy. Both work. Don’t wait for the perfect strategy. You don’t learn AI theoretically, you learn it by experimenting. It usually takes several iterations before prompts and workflows work well. Once you feel the first benefit, routine develops quickly and new ideas emerge almost automatically.
If you had to name three things every organisation should have clarified about AI by 2026, what would they be?
First: a shared data and digital foundation. Without clean structures, AI doesn’t work properly. Second: training. Employees need to understand how systems work and where their limits lie. Third: concrete use cases with visible value. Technology for its own sake isn’t enough. Only real benefits create real change.
Oskar, thank you for the interview.