Marinela Profi: Building the Trust Frontier, or How Agentic AI Is Redefining Enterprise Decision-Making
On this episode of Scouting For Growth, Sabine VdL welcomes Marinela Profi, Global Market Strategy Lead for AI, GenAI, and Agentic AI at SAS, for a sharp, grounded conversation on what’s actually happening in enterprise AI right now—and what leaders need to prepare for next.
Together, they cut through the noise surrounding generative AI and focus on what comes after the chatbot era: agentic AI. If generative AI is the talented communicator in the room, agentic AI is the one who not only speaks—but takes action, executes workflows, and delivers outcomes. Marinela puts it simply: Generative AI talks. Agentic AI does.
The episode begins by reframing a major misconception: LLMs alone don’t solve business problems. While generative AI chatbots are excellent at answering questions, summarizing content, and producing text, they typically stop at conversation. Business transformation, however, requires systems that can reason, make decisions, interact with data, follow rules, coordinate across tools, and carry tasks through to completion. That’s where agentic AI steps in—combining large language models with analytics, policies, data pipelines, governance frameworks, and real operational logic.
Marinela explains that AI agents aren't a futuristic fantasy—they're a practical evolution of automation, made smarter through contextual understanding and orchestrated decision-making. To help business leaders and technical teams understand what "agent behavior" looks like in real life, she shares her five-step lifecycle framework, a clear model for how agents operate end to end (a minimal code sketch follows the list):
Perception – sensing signals from users, systems, or environments
Cognition – reasoning, interpreting context, and forming intent
Decisioning – selecting the best course of action based on goals and constraints
Action – executing tasks across workflows and tools
Learning – improving over time through feedback and outcomes
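To make the lifecycle concrete, here is a minimal, illustrative sketch of the five steps as a single pass through an agent loop. Every class, function, and threshold below is a hypothetical placeholder chosen for this example; it is not SAS code or a reference implementation.

```python
# Illustrative sketch of the five-step agent lifecycle described above.
# All names and thresholds are hypothetical, not an SAS API or product design.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)  # feedback collected for the Learning step

    def perceive(self, signal: dict) -> dict:
        """Perception: capture a signal from a user, system, or environment."""
        return {"signal": signal, "goal": self.goal}

    def reason(self, context: dict) -> str:
        """Cognition: interpret the context and form an intent."""
        return "resolve" if context["signal"].get("type") == "customer_query" else "escalate"

    def decide(self, intent: str, risk: float) -> str:
        """Decisioning: pick an action, constrained by goals and a risk policy."""
        if intent == "resolve" and risk < 0.3:
            return "answer_automatically"
        return "route_to_human"

    def act(self, action: str) -> str:
        """Action: execute the chosen task across workflows and tools."""
        return f"executed:{action}"

    def learn(self, outcome: str, feedback: str) -> None:
        """Learning: record outcomes and feedback for later, governed updates."""
        self.history.append((outcome, feedback))


# Usage: one pass through the lifecycle for a low-risk customer query.
agent = Agent(goal="resolve support tickets")
ctx = agent.perceive({"type": "customer_query", "text": "Where is my order?"})
intent = agent.reason(ctx)
action = agent.decide(intent, risk=0.1)
outcome = agent.act(action)
agent.learn(outcome, feedback="customer satisfied")
print(outcome)  # executed:answer_automatically
```

The point of the sketch is the shape of the loop: sense, interpret, choose under constraints, execute, and feed the outcome back in, rather than stopping at a generated answer.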
But the most important message in this episode isn’t just that agents are powerful—it’s that autonomy must be designed responsibly.
Marinela emphasizes that the real leap forward for enterprises won’t come from more impressive demos. It will come from governance, because trust is becoming the true competitive advantage in AI. She forecasts that by 2026, governance boards will increasingly resemble digital oversight committees—not just approving AI deployments, but ensuring agents are safe, accountable, explainable, auditable, and continuously monitored.
A critical insight: governance doesn’t end when an agent is launched. Performance and behavior must be monitored continuously, particularly as agents learn from human feedback loops. Marinela warns that learning mechanisms can’t be left unchecked—because allowing an agent to “self-update” in uncontrolled ways is not innovation, it’s operational risk wearing a futuristic costume.
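One way to picture that guardrail, under purely illustrative assumptions, is a gate on self-updates: behavior changes learned from feedback sit in a queue and are deployed only after an offline evaluation and a human sign-off. The names, scores, and thresholds below are hypothetical.

```python
# Hypothetical sketch of a governed learning loop: proposed updates from feedback
# are held and applied only after passing an evaluation check and human approval.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProposedUpdate:
    description: str
    eval_score: float            # offline evaluation score of the updated behavior
    approved_by: Optional[str]   # reviewer sign-off, required before rollout


def can_deploy(update: ProposedUpdate, baseline: float = 0.85) -> bool:
    """Gate self-updates: require evaluation at or above baseline plus human approval."""
    return update.eval_score >= baseline and update.approved_by is not None


pending = ProposedUpdate("refund-handling behavior learned from feedback", 0.91, None)
print(can_deploy(pending))   # False: evaluation passed, but no human sign-off yet
pending.approved_by = "risk_committee"
print(can_deploy(pending))   # True: evaluated, monitored, and approved
```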
The conversation also tackles one of the biggest leadership questions emerging right now: How autonomous should an AI agent be? Marinela’s answer is refreshingly practical: most of the time, it depends on the risk and impact of the task. Low-risk activities may allow higher autonomy, while high-impact decisions demand constraints, oversight, and transparency. As she highlights throughout the episode, autonomy without accountability is a risk multiplier.
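A simple, hypothetical way to encode that rule of thumb is an autonomy tier chosen from a task's impact and reversibility; the tiers and thresholds below are illustrative assumptions, not a standard or an SAS policy.

```python
# Illustrative sketch of risk-tiered autonomy: the higher a task's impact,
# the more oversight is required before the agent may act.
def autonomy_level(impact: float, reversible: bool) -> str:
    """Map a task's impact (0.0-1.0) and reversibility to an oversight tier."""
    if impact < 0.2 and reversible:
        return "full_autonomy"             # agent acts and logs
    if impact < 0.6:
        return "act_then_review"           # agent acts; humans audit afterwards
    return "human_approval_required"       # agent proposes; a human decides


for task, impact, reversible in [
    ("summarize meeting notes", 0.1, True),
    ("issue a customer refund", 0.5, True),
    ("change a credit limit", 0.9, False),
]:
    print(task, "->", autonomy_level(impact, reversible))
```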
Ultimately, this episode is a strategic guide for leaders who want to move beyond AI experimentation into reliable execution. The future isn’t just about faster answers—it’s about autonomous, governed intelligence that can explain what it’s doing, why it’s doing it, and who is responsible when it does.
If your organization is wondering what comes after GenAI pilots, how to build AI trust at scale, or what enterprise AI will look like by 2026—this is the conversation to listen to.
Because the winners in AI won't be the ones with the flashiest demos; they'll be the ones who earn the most trust.