Unlocking the Power of Multi-Agent AI with CrewAI

Artificial Intelligence (AI) has evolved rapidly over the last few years. From single-task large language models (LLMs) to entire systems of autonomous agents, the AI ecosystem is now enabling new classes of intelligent workflows. In this blog post, we’ll build a multi-agent AI assistant that takes in a resume profile, a resume document, and a job description link, then produces a tailored resume and interview questions. We’ll explore how to do this using CrewAI, a Python-based multi-agent framework, and run it against both local models via OLLAMA and remote LLMs like OpenAI’s API.

What are AI Agents?

An AI agent is a software program that performs tasks independently, makes decisions based on data inputs, and interacts with humans or other agents. Agents can be categorized into:

  • Single-agent systems: designed for a specific task, such as language translation or facial recognition.
  • Multi-agent systems: multiple agents interact with each other to achieve complex tasks, such as coordination in robotics or negotiation in e-commerce.

AI agents have numerous applications across various industries:

  • Customer Service Chatbots: Automated support bots handling routine queries and routing users to human representatives when necessary.
  • Robotics and Automation: AI-powered robots coordinating with each other for assembly lines, warehouse management, or search-and-rescue missions.
  • Healthcare: AI agents assisting medical professionals in diagnosis, treatment planning, and patient communication.
  • Resume Tailoring and Job Prep: AI agents customizing a resume for a specific role and generating interview questions (as we’ll see here).

What are Multi-Agent Workflows?

A single agent has limitations. Multi-agent systems overcome them by dividing complex work among specialized agents that collaborate, delegate, and verify one another’s output, creating a more powerful and flexible system. The benefits of multi-agent systems include:

  • Task parallelization: multiple tasks complete faster by running concurrently
  • Modularity: agent systems can be built and scaled independently
  • Extensibility: the workflow can grow by adding new agents
  • Specialization: each agent focuses on the specific task it does best

This can involve:

  • Task Delegation: Agents delegate tasks to one another based on their respective capabilities.
  • Information Sharing: Agents share knowledge or data with one another.
  • Coordination Mechanisms: Agents utilize coordination mechanisms, such as auctions, negotiations, or voting, to reach a consensus.

Introducing CrewAI

CrewAI is an open-source Python library for building multi-agent systems. It allows you to:

  • Define agents with roles and goals
  • Create workflows where agents collaborate to solve problems
  • Assign tasks to agents
  • Integrate various AI models, including language-processing and computer-vision models, for more accurate results
  • Design custom workflows that combine multiple agents with various data sources

Key concepts:

  1. Agents: Individual AI agents designed to perform specific tasks or interact with humans.
  2. Task: A particular objective assigned to one or more agents.
  3. Crew: A group of agents that collaborates on tasks.
  4. Pipelines: Customizable workflows that combine multiple agents and data sources.
  5. Roles: Agents can assume roles within a pipeline, including leader, follower, or coordinator.

Setting Up Your Environment

Install Python dependencies
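The original install commands aren’t shown; assuming the standard `crewai` and `crewai-tools` packages from PyPI, a typical setup looks like this:

```shell
# CrewAI requires a recent Python (3.10+); a virtual environment keeps things isolated
python -m venv .venv && source .venv/bin/activate

# CrewAI core plus its bundled tools (web scraping, search, file reading)
pip install crewai crewai-tools
```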

Set up OLLAMA with LLaMA locally

Install OLLAMA and then run:
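The exact command is missing from the extracted post; assuming the `llama3` model tag, the usual incantation is:

```shell
# Download the LLaMA 3 weights, then start the local server on port 11434
ollama pull llama3
ollama serve
```

On macOS, the OLLAMA desktop app starts the server for you, in which case `ollama run llama3` alone is enough to download and test the model.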

This downloads and serves the LLaMA 3 model locally at http://localhost:11434.

Set up ChatGPT API

  1. Get your OpenAI API key from https://platform.openai.com/account/api-keys
  2. Set it in your environment:
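The command itself is missing here; on macOS/Linux it would be an `export` in your shell (the `SERPER_API_KEY` is mentioned later in the post for the web-search tool):

```shell
# Add these to your shell profile (e.g. ~/.zshrc) so they persist across sessions
export OPENAI_API_KEY="sk-..."
export SERPER_API_KEY="..."
```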

Generate a CrewAI Project Skeleton with CLI

CrewAI offers a CLI that can bootstrap a new project:
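The bootstrap command isn’t shown in the extracted text; with current CrewAI CLI versions it is:

```shell
# Scaffold a new crew project named ai-resume
crewai create crew ai-resume
```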

This will generate a standard directory structure with example files for agents, tasks, and workflows:
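The generated layout looks roughly like this (sketched to match the paths used later in this post; names vary slightly by CrewAI version):

```
ai-resume/
├── pyproject.toml
└── src/
    ├── ai-resume/
    │   ├── main.py
    │   └── crew.py
    └── config/
        ├── agents.yaml
        └── tasks.yaml
```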

Code: Multi-Agent Resume Builder with CrewAI

Here is the Python code for the multi-agent AI that:

  • Takes a resume profile and text
  • Downloads a job description from a URL
  • Tailors the resume
  • Generates sample interview questions

src/ai-resume/crew.py

For OpenAI, please ensure you have a paid account with API credits, and that the environment variables OPENAI_API_KEY and SERPER_API_KEY are set. Avoid hard-coding them in your source files.

Update the folder paths in the code to match your local setup.

src/ai-resume/main.py
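Again, the original listing isn’t reproduced here; this is a sketch of a typical CrewAI entry point. The file path, profile string, URL, and input keys (`profile`, `resume_text`, `job_url`) are illustrative assumptions, and the `crew` object is assumed to be assembled in crew.py as described above.

```python
# src/ai-resume/main.py -- illustrative sketch; paths and inputs are placeholders
from pathlib import Path

from crew import crew  # the Crew assembled in crew.py

# Update this path to your local resume file (the post notes this explicitly)
RESUME_PATH = Path("data/resume.txt")


def main() -> None:
    inputs = {
        "profile": "Senior software engineer, 10 years in distributed systems",
        "resume_text": RESUME_PATH.read_text(),
        "job_url": "https://example.com/jobs/backend-engineer",
    }
    # kickoff() interpolates the inputs into the task/agent prompt templates
    result = crew.kickoff(inputs=inputs)
    print(result)


if __name__ == "__main__":
    main()
```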

The agents.yaml and tasks.yaml files

The agents.yaml and tasks.yaml files are used in CrewAI projects to define agent and task configurations in a structured, declarative format. These YAML files serve as an alternative to defining agents and tasks directly in Python code.

  • Separation of Concerns – They allow you to separate configuration from logic. This is especially helpful in larger projects or when collaborating with non-developers.
  • CLI Integration – CrewAI’s CLI (crewai run, crewai init, etc.) uses these files to load agent and task definitions, enabling a full pipeline without writing much Python.
  • Readability and Reusability – YAML files are easy to read, modify, and version-control. They support rapid iteration and reuse across different workflows.

src/config/agents.yaml
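The original YAML isn’t shown; a representative agents.yaml, following CrewAI’s role/goal/backstory convention with illustrative wording, might look like:

```yaml
# src/config/agents.yaml -- illustrative agent definitions
job_researcher:
  role: >
    Job Researcher
  goal: >
    Extract the key requirements from the job posting at {job_url}
  backstory: >
    An analyst who distills job descriptions into concise skill lists.

resume_strategist:
  role: >
    Resume Strategist
  goal: >
    Tailor the candidate's resume to the job requirements
  backstory: >
    A career coach who rewrites resumes for specific roles.
```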

src/config/tasks.yaml
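Likewise, a representative tasks.yaml (descriptions and names are illustrative) could be:

```yaml
# src/config/tasks.yaml -- illustrative task definitions
research_task:
  description: >
    Scrape {job_url} and summarize the required skills and experience.
  expected_output: >
    A bulleted list of the job's key requirements.
  agent: job_researcher

tailor_task:
  description: >
    Rewrite the resume ({resume_text}) to emphasize the researched requirements.
  expected_output: >
    A tailored resume in markdown.
  agent: resume_strategist
```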

 

To run the code
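The run commands were lost in extraction; assuming the CLI-generated project layout, either of these should work from the project root:

```shell
# Use the CrewAI CLI...
crewai run

# ...or invoke the entry point directly
python src/ai-resume/main.py
```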

In closing…

CrewAI makes it easy to orchestrate agents into cooperative systems. This multi-agent pattern can be extended beyond resumes to applications such as coding assistants, research assistants, or customer support bots.

A side note: I was using VS Code with the GitHub Copilot AI assistant. While writing the agents and tasks YAML files, the assistant kept trying to write the prompts for me. That might seem like a great idea, but the suggestions were more often completely useless, offering the same sentence with just a few words changed. I share this to keep us grounded: these are still early days, so take what you get from an AI assistant with a big grain of salt and double-check it. Another observation: the local LLaMA runtime was much slower on my Mac M3 than calling the remote OpenAI API (which you pay dollars for). I do like that I can run LLaMA locally using OLLAMA, though. I also use Msty, a local UI app that can serve as the frontend for the OLLAMA-hosted models I run locally.

