Product Update

Jun 9, 2025

7 min read

Scanning AutoGen Workflows with Agentic Radar

Discover how Agentic Radar scans AutoGen AgentChat workflows to visualize agent collaboration and pinpoint vulnerabilities in a financial research example.

Josip Srzić

Agentic Radar & AutoGen

We’re excited to announce that Agentic Radar now supports scanning workflows built with Microsoft's AutoGen! Agentic Radar is a security and transparency scanner for agentic systems – designed to analyze your code, map workflows, identify tools and external dependencies, and surface potential security vulnerabilities. In this article, we’ll showcase how Agentic Radar works with AutoGen’s AgentChat framework using a real-world example.

What is AutoGen?

AutoGen, developed by Microsoft, is a powerful framework designed to build collaborative LLM-powered agents for solving tasks. It offers several variants:

  • AutoGen Core – the low-level, event-driven foundation for building LLM agents.

  • AutoGen Studio – a GUI for designing and visualizing agents.

  • AutoGen AgentChat – a user-friendly interface for scripting collaborative agents in Python.

Agentic Radar now supports AutoGen, focusing specifically on AgentChat due to its widespread popularity and straightforward interface for building agentic systems in code.
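
To give a feel for AgentChat's interface before diving into the full example, here is a minimal sketch of a single assistant agent answering one task. It is not part of the example below and assumes an OpenAI API key is available in the environment:

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
async def main() -> None:
    # One assistant agent, no tools or teams – just a single task and reply.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)
    result = await agent.run(task="Explain in one sentence what an agentic workflow is.")
    print(result.messages[-1].content)
    await model_client.close()
asyncio.run(main())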

A Real Example: Company Research with Agents

To demonstrate Agentic Radar in action, we’ll walk through the official AutoGen AgentChat company research example.

This example defines a multi-agent system to automate competitive research:

  • Search Agent – queries Google Search to gather company information.

  • Stock Analysis Agent – retrieves stock data and generates a visual analysis.

  • Report Agent – compiles everything into a coherent report.

Here’s a quick glance at the necessary imports:

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient

Defining Tools

In AutoGen, any Python function can be converted to a custom tool. In this example, we define two functions:

  • google_search – searches the web using Google’s Custom Search API

  • analyze_stock – pulls a year of price history from Yahoo Finance (via the yfinance library), summarizes key financial metrics, trend, and volatility, and generates a price chart

def google_search(query: str, num_results: int = 2, max_chars: int = 500) -> list:  # type: ignore[type-arg]
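    # Query the Google Custom Search API and enrich each result with truncated page text.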
    import os
    import time
    import requests
    from bs4 import BeautifulSoup
    from dotenv import load_dotenv
    load_dotenv()
    api_key = os.getenv("GOOGLE_API_KEY")
    search_engine_id = os.getenv("GOOGLE_SEARCH_ENGINE_ID")
    if not api_key or not search_engine_id:
        raise ValueError("API key or Search Engine ID not found in environment variables")
    url = "https://customsearch.googleapis.com/customsearch/v1"
    params = {"key": str(api_key), "cx": str(search_engine_id), "q": str(query), "num": str(num_results)}
    response = requests.get(url, params=params)
    if response.status_code != 200:
        print(response.json())
        raise Exception(f"Error in API request: {response.status_code}")
    results = response.json().get("items", [])
    def get_page_content(url: str) -> str:
        try:
            response = requests.get(url, timeout=10)
            soup = BeautifulSoup(response.content, "html.parser")
            text = soup.get_text(separator=" ", strip=True)
            words = text.split()
            content = ""
            for word in words:
                if len(content) + len(word) + 1 > max_chars:
                    break
                content += " " + word
            return content.strip()
        except Exception as e:
            print(f"Error fetching {url}: {str(e)}")
            return ""
    enriched_results = []
    for item in results:
        body = get_page_content(item["link"])
        enriched_results.append(
            {"title": item["title"], "link": item["link"], "snippet": item["snippet"], "body": body}
        )
        time.sleep(1)  # Be respectful to the servers
    return enriched_results
def analyze_stock(ticker: str) -> dict:  # type: ignore[type-arg]
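    # Fetch a year of price history via yfinance, compute summary metrics, and save a price chart.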
    import os
    from datetime import datetime, timedelta
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    import yfinance as yf
    from pytz import timezone  # type: ignore
    stock = yf.Ticker(ticker)
    # Get historical data (1 year of data to ensure we have enough for 200-day MA)
    end_date = datetime.now(timezone("UTC"))
    start_date = end_date - timedelta(days=365)
    hist = stock.history(start=start_date, end=end_date)
    # Ensure we have data
    if hist.empty:
        return {"error": "No historical data available for the specified ticker."}
    # Compute basic statistics and additional metrics
    current_price = stock.info.get("currentPrice", hist["Close"].iloc[-1])
    year_high = stock.info.get("fiftyTwoWeekHigh", hist["High"].max())
    year_low = stock.info.get("fiftyTwoWeekLow", hist["Low"].min())
    # Calculate 50-day and 200-day moving averages
    ma_50 = hist["Close"].rolling(window=50).mean().iloc[-1]
    ma_200 = hist["Close"].rolling(window=200).mean().iloc[-1]
    # Calculate YTD price change and percent change
    ytd_start = datetime(end_date.year, 1, 1, tzinfo=timezone("UTC"))
    ytd_data = hist.loc[ytd_start:]  # type: ignore[misc]
    if not ytd_data.empty:
        price_change = ytd_data["Close"].iloc[-1] - ytd_data["Close"].iloc[0]
        percent_change = (price_change / ytd_data["Close"].iloc[0]) * 100
    else:
        price_change = percent_change = np.nan
    # Determine trend
    if pd.notna(ma_50) and pd.notna(ma_200):
        if ma_50 > ma_200:
            trend = "Upward"
        elif ma_50 < ma_200:
            trend = "Downward"
        else:
            trend = "Neutral"
    else:
        trend = "Insufficient data for trend analysis"
    # Calculate volatility (standard deviation of daily returns)
    daily_returns = hist["Close"].pct_change().dropna()
    volatility = daily_returns.std() * np.sqrt(252)  # Annualized volatility
    # Create result dictionary
    result = {
        "ticker": ticker,
        "current_price": current_price,
        "52_week_high": year_high,
        "52_week_low": year_low,
        "50_day_ma": ma_50,
        "200_day_ma": ma_200,
        "ytd_price_change": price_change,
        "ytd_percent_change": percent_change,
        "trend": trend,
        "volatility": volatility,
    }
    # Convert numpy types to Python native types for better JSON serialization
    for key, value in result.items():
        if isinstance(value, np.generic):
            result[key] = value.item()
    # Generate plot
    plt.figure(figsize=(12, 6))
    plt.plot(hist.index, hist["Close"], label="Close Price")
    plt.plot(hist.index, hist["Close"].rolling(window=50).mean(), label="50-day MA")
    plt.plot(hist.index, hist["Close"].rolling(window=200).mean(), label="200-day MA")
    plt.title(f"{ticker} Stock Price (Past Year)")
    plt.xlabel("Date")
    plt.ylabel("Price ($)")
    plt.legend()
    plt.grid(True)
    # Save plot to file
    os.makedirs("coding", exist_ok=True)
    plot_file_path = f"coding/{ticker}_stockprice.png"
    plt.savefig(plot_file_path)
    print(f"Plot saved as {plot_file_path}")
    result["plot_file_path"] = plot_file_path
    return result

Each function is then wrapped in an AutoGen FunctionTool instance, which registers it as a callable tool in an agent's toolkit for automated reasoning and execution.

google_search_tool = FunctionTool(
    google_search, description="Search Google for information, returns results with a snippet and body content"
)
stock_analysis_tool = FunctionTool(analyze_stock, description="Analyze stock data and generate a plot")

Defining Agents

In AutoGen AgentChat, agents are created using the AssistantAgent class by specifying a name, model client, system message, and an optional list of tools. Each agent is designed for a specific role, making it easy to coordinate tasks across a multi-agent workflow.

model_client = OpenAIChatCompletionClient(model="gpt-4o")
search_agent = AssistantAgent(
    name="Google_Search_Agent",
    model_client=model_client,
    tools=[google_search_tool],
    description="Search Google for information, returns top 2 results with a snippet and body content",
    system_message="You are a helpful AI assistant. Solve tasks using your tools.",
)
stock_analysis_agent = AssistantAgent(
    name="Stock_Analysis_Agent",
    model_client=model_client,
    tools=[stock_analysis_tool],
    description="Analyze stock data and generate a plot",
    system_message="Perform data analysis.",
)
report_agent = AssistantAgent(
    name="Report_Agent",
    model_client=model_client,
    description="Generate a report based on the search and stock analysis results",
    system_message="You are a helpful assistant that can generate a comprehensive report on a given topic based on search and stock analysis. When you are done generating the report, reply with TERMINATE.",
)

Orchestrating Collaboration

To enable agents to work together, AutoGen provides group chat mechanisms like RoundRobinGroupChat, which manages the flow of conversation between multiple agents. In this example, the stock_analysis_agent, search_agent, and report_agent form a team whose members take turns, in round-robin order, publishing messages to one another. The run_stream method starts the collaborative process, and the Console utility streams their interactions in real time. This orchestration allows the agents to collectively solve tasks that would be too complex for a single agent to handle alone. (A variant that uses the imported TextMentionTermination condition is sketched after the code below.)

team = RoundRobinGroupChat([stock_analysis_agent, search_agent, report_agent], max_turns=3)
stream = team.run_stream(task="Write a financial report on American airlines")
await Console(stream)
await model_client.close()
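
The snippet above stops after a fixed number of turns (max_turns=3) and never actually uses the imported TextMentionTermination. As a minimal alternative sketch, assuming the same agents and model client defined above, the team could instead run until the Report_Agent replies with TERMINATE:

# Sketch: stop the round-robin conversation when "TERMINATE" is mentioned
# rather than after a fixed number of turns.
termination = TextMentionTermination("TERMINATE")
team = RoundRobinGroupChat(
    [stock_analysis_agent, search_agent, report_agent],
    termination_condition=termination,
)
stream = team.run_stream(task="Write a financial report on American Airlines")
await Console(stream)
await model_client.close()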

Agentic Radar in Action

With the agents defined and tools in place, the workflow comes to life when the agents begin interacting. Each agent uses its tools and reasoning abilities to complete its part of the task, passing results to others as needed. Agentic Radar captures this dynamic process, visualizing the flow of information and decisions between agents. This helps you trace how a task is decomposed, how tools are used, and where bottlenecks or redundancies might occur – offering deep insight into the behavior of your agentic system.

Installing Agentic Radar is very simple – just run: pip install agentic-radar

To run it on the example shown in the previous chapter:

  1. Download the full example source code from the official AutoGen AgentChat documentation and store it in a folder called company_research.

  2. Run the following command in your terminal:
    agentic-radar scan autogen -i ./company_research -o report.html

    You should see a report.html file appear inside your current working directory.

  3. Open the report.html file in your browser.

This produces a detailed, interactive HTML report showing the agent graph, tool usage, and potential vulnerabilities.

AutoGen – Agentic Workflow Graph

AutoGen – Workflow Findings

Agentic Prompt Hardening

In addition to visualization and scanning, you can enable Agentic Prompt Hardening to automatically analyze and improve the system prompts used by your agents. These improvements follow best practices from prompt engineering and make your agents more robust and secure.

To activate prompt hardening, just add the --harden-prompts flag to the command from the previous example:

agentic-radar scan autogen -i ./company_research -o report.html --harden-prompts

Note: this requires setting your OPENAI_API_KEY by running export OPENAI_API_KEY=your-key-here

The report will now include a side-by-side comparison of original and hardened prompts, helping you quickly identify weak instructions and upgrade them to more effective, defensive system messages – all without changing a single line of your code.

AutoGen – Agentic Prompt Hardening

Summary

With AutoGen AgentChat support, Agentic Radar extends its reach into one of the most widely used frameworks for building collaborative AI agents. This integration allows developers to scan AutoGen workflows for risks, visualize multi-agent interactions, and harden prompts – bringing much-needed transparency and security to real-world agentic systems.

As the agentic ecosystem continues to grow, so does the importance of securing these dynamic, interconnected workflows. Agentic Radar is committed to staying ahead of the curve by:

  • Expanding support to additional agent frameworks like PydanticAI and Dify

  • Enhancing system prompt analysis and hardening

  • Tracing agent data sources, tool inputs, and external endpoints

  • Deepening alignment with security standards like the OWASP LLM Top 10 and Agentic Threats

To get involved, join our Community Discord or contribute directly on GitHub. If Agentic Radar is helping you build safer AI, drop us a star ⭐ – every bit of support helps grow the community and define the standard for building secure and transparent agentic workflows.
