Agentic Coordination
Agentic engineering continues by expanding from implementation into agentic coordination. Once a workplan has been developed and coding begins, the next challenge is managing how multiple AI tools and agents interact within a project.

Modern engineering workflows rarely involve a single tool working in isolation. Instead, multiple agents may analyze data, generate code, run experiments, and produce documentation simultaneously. The engineer becomes the coordinator of these activities, ensuring that the work remains consistent, reproducible, and aligned with the original plan. Agentic coordination allows parallel work while maintaining engineering rigor.
Coordination and Roles
Automation can increase productivity, but it can also introduce new sources of complexity. When multiple agents operate without coordination, they may duplicate work, make inconsistent assumptions, or overwrite each other's results.
Coordination provides structure so that agents operate within defined boundaries. Tasks are sequenced logically, responsibilities are clear, and outputs are verified before integration. In practice, the human software engineer monitors progress and ensures that results from different parts of the workflow remain compatible.
Human Owner
Role: Final decision authority and quality gate.
This is the human engineer who approves plans, reviews diffs, validates results, and decides what gets merged and deployed. The human owner sets acceptance criteria, enforces standards, and stops unsafe or low-quality changes.
This role is always present, even when most work is delegated to agents.
AI Agent Roles
Orchestrator Agent
Role: Break work into tasks, route tasks to specialists, and aggregate outputs.
The Orchestrator converts a high-level request into a queue of tasks, assigns tasks to the right agents, checks progress, and merges outputs into a coherent deliverable. It also enforces workflow rules such as branch strategy, test requirements, and definition-of-done.
Best practice is to keep orchestration instructions concise and stable so you refine the system over time rather than rewriting prompts each week.
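The orchestration pattern described above can be sketched in a few lines of code. This is a minimal illustration under assumed names, not a real framework: the `Task` and `Orchestrator` classes, the role labels, and the stub specialists (plain functions standing in for real agents) are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    description: str
    role: str            # which specialist should handle this task
    done: bool = False
    output: str = ""

@dataclass
class Orchestrator:
    # Maps a role name to a callable that performs the work.
    # In a real system each callable would invoke a specialist agent.
    specialists: dict[str, Callable[[str], str]]
    queue: list[Task] = field(default_factory=list)

    def submit(self, description: str, role: str) -> None:
        self.queue.append(Task(description, role))

    def run(self) -> list[Task]:
        for task in self.queue:
            worker = self.specialists[task.role]  # route to the right agent
            task.output = worker(task.description)
            task.done = True
        return self.queue

# Usage: lambdas stand in for real frontend/backend agents.
orc = Orchestrator(specialists={
    "frontend": lambda d: f"[frontend] {d}",
    "backend":  lambda d: f"[backend] {d}",
})
orc.submit("build login form", "frontend")
orc.submit("add /login endpoint", "backend")
results = orc.run()
```

A real orchestrator would also enforce workflow rules (branch strategy, required tests) before marking a task done; here the loop only routes and collects outputs.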
Planner Agent
Role: Clarify requirements, propose scope, and produce an implementation plan.
This agent turns a vague idea into an actionable plan: user stories, success criteria, milestones, risks, and test strategy. It should explicitly state assumptions and list the decisions that require human approval.
The output of this agent is a plan that other agents execute.
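The plan handed off to other agents can be represented as structured data rather than free text, which makes assumptions and approval points explicit. The field names below are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str
    user_stories: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)           # stated explicitly
    needs_human_approval: list[str] = field(default_factory=list)  # decisions escalated to the owner

plan = Plan(
    goal="Add user authentication",
    user_stories=["As a user, I can log in with email and password"],
    success_criteria=["Login succeeds for valid credentials", "Tests pass in CI"],
    assumptions=["Email is the unique user identifier"],
    needs_human_approval=["Choice of password-hashing library"],
)
```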
Frontend Agent
Role: Build and refine the user interface.
This agent implements UI components, layout, accessibility, responsive design, and client-side behavior. It should propose UI structure, reuse shared components, and include basic interaction tests when appropriate.
Boundaries: It should not change backend contracts without coordination, and it should not introduce insecure client-side patterns.
Backend Agent
Role: Implement server-side logic, endpoints, and integrations.
This agent builds APIs, service logic, business rules, background jobs, and integrations. It should keep interfaces stable, add server tests, and document endpoints.
Boundaries: Avoid schema changes without involving the database agent, and avoid auth changes without involving the security agent.
Database Agent
Role: Own schema design, migrations, indexing, and data correctness.
This agent designs database schemas, manages migrations, optimizes queries, and ensures referential integrity. It should propose performance considerations and data retention policies.
Outputs should include migration scripts and a clear description of schema changes.
Security Agent
Role: Threat modeling, secure defaults, and vulnerability review.
This agent reviews proposed changes for security risks: authentication/authorization, injection risks, secrets handling, PII policies, logging safety, dependency risk, and secure configuration. It should maintain a checklist and produce a short “security sign-off” note.
Boundaries: It should be allowed to block merges until critical risks are addressed.
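A blocking sign-off can be sketched as a checklist of named checks over a proposed change. The checks below are hypothetical toy examples (real secret and injection detection is far more involved); the point is the shape of the gate: any failed check blocks the merge.

```python
# Hypothetical security checklist: each item is a named check that
# returns True when the change looks safe in that respect.
CHECKLIST = {
    "no hard-coded secrets":   lambda diff: "API_KEY=" not in diff,
    "no raw SQL string concat": lambda diff: "+ sql" not in diff,
}

def security_sign_off(diff: str) -> tuple[bool, list[str]]:
    """Return (approved, failed_checks); the merge is blocked if any check fails."""
    failed = [name for name, check in CHECKLIST.items() if not check(diff)]
    return (len(failed) == 0, failed)

# This change concatenates user input into SQL, so the gate blocks it.
ok, failed = security_sign_off('query = "SELECT * FROM users WHERE id=" + sql_id')
```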
Testing Agent
Role: Build tests, reproduce bugs, and confirm acceptance criteria.
This agent writes unit/integration tests, creates minimal repro cases, and verifies fixes. It ensures that the system passes required checks and that changes are not “demo-only.”
Outputs should include test coverage notes and any known gaps or risks.
DevOps Agent
Role: Packaging, CI/CD, environments, and deployment safety.
This agent manages Docker/configuration, CI pipelines, environment variables, staging/production rollouts, and rollback strategy. It should produce deployment checklists and verify observability hooks.
Boundaries: It should not deploy without human approval unless explicitly authorized.
Observability Agent
Role: Monitoring, logging, performance profiling, and reliability.
This agent adds metrics and tracing, reviews logs for usefulness and privacy, identifies bottlenecks, and proposes performance improvements. It should define SLOs (even simple ones) and recommend alerts.
Outputs should include what to monitor and how to detect failures early.
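Even a simple SLO can be expressed as a check over measured data. The sketch below assumes a latency SLO ("99% of requests under 500 ms"); the function name and thresholds are illustrative.

```python
def slo_report(latencies_ms: list[float], target_ms: float = 500.0,
               target_fraction: float = 0.99) -> dict:
    """Check a latency SLO: target_fraction of requests must finish within target_ms."""
    within = sum(1 for latency in latencies_ms if latency <= target_ms)
    achieved = within / len(latencies_ms)
    return {"achieved": achieved, "target": target_fraction,
            "alert": achieved < target_fraction}

# 98 fast requests and 2 slow ones: 98% within target, below the 99% SLO.
report = slo_report([120.0] * 98 + [900.0, 950.0])
```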
Documentation Agent
Role: Produce documentation that enables reuse and reproducibility.
This agent writes documentation, API docs, setup instructions, and release notes. It should capture decisions, assumptions, and how to run tests and deploy. It is responsible for making the project understandable to a new developer.
Outputs should be short, structured, and easy to follow.
Many modern workflows divide work among specialized agents. One agent may prepare data, another may build models, while another runs simulations or generates documentation. This division of labor allows different parts of a project to move forward at the same time.
However, parallel work introduces new responsibilities for the engineer. Data sources must remain consistent, assumptions must align across modules, and results must be integrated into a coherent final system. Successful multi-agent workflows depend less on automation itself and more on how clearly tasks are defined and coordinated.
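The integration responsibility can be sketched in code: run specialists in parallel, but have each one report the assumption it worked under, and check those assumptions before merging results. The specialist functions and the `schema_version` field are illustrative stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative specialists; each returns its output plus the
# assumption it worked under, so the engineer can check consistency.
def prepare_data():
    return {"output": "clean dataset", "schema_version": 2}

def build_model():
    return {"output": "trained model", "schema_version": 2}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(prepare_data), pool.submit(build_model)]
    results = [f.result() for f in futures]

# Integration step: assumptions must align across modules before merging.
versions = {r["schema_version"] for r in results}
if len(versions) > 1:
    raise RuntimeError(f"inconsistent schema assumptions: {versions}")
```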
Rules and Skills
Agentic systems often rely on two types of guidance: persistent rules and task-specific skills.
Rules define stable expectations for the entire workflow, such as coding standards, repository structure, and testing requirements. They provide consistency across all agents and reduce repeated mistakes.
Skills represent specialized capabilities that are loaded when relevant. These may include domain-specific analysis routines, automation hooks, or structured procedures for running experiments. Skills allow agents to perform complex operations without requiring the full project context every time they run. Rules and skills help maintain consistency while still allowing agents to operate flexibly.
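The rules-versus-skills distinction can be illustrated with a small context-assembly function: rules are always included, while skills load only when the task calls for them. The rule texts, skill names, and matching heuristic below are all hypothetical.

```python
# Persistent rules are always in context; skills load only when relevant.
RULES = [
    "All code must pass the test suite before merge.",
    "Follow the repository's module layout.",
]

SKILLS = {
    "run_experiment": "Procedure: fix the random seed, log parameters, save results.",
    "data_audit": "Procedure: check nulls, ranges, and duplicate keys before modeling.",
}

def build_context(task_description: str) -> str:
    """Assemble agent instructions: all rules, plus skills the task mentions."""
    parts = list(RULES)
    for name, procedure in SKILLS.items():
        if name.replace("_", " ") in task_description.lower():
            parts.append(procedure)
    return "\n".join(parts)

ctx = build_context("Please run experiment 3 with the new features")
```

Only the experiment-running skill is loaded here; the data-audit procedure stays out of context, keeping the prompt small.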
Context and Memory
Large engineering projects contain far more information than any single model can process at once. Effective coordination therefore depends on careful management of context.
Rather than providing every file to the model, engineers selectively include only the most relevant information. Well-organized repositories and clear naming conventions make it easier for agents to locate the information they need.
Many systems also track memory of prior conversations or results so that agents can recall earlier decisions and avoid repeating work. Managing context effectively is one of the most important skills for scaling agentic workflows.
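Selective context plus a memory of prior decisions can be sketched together. The toy repository, the keyword-overlap scoring, and the `record`/`select_context` helpers are all illustrative; real systems use embeddings or retrieval indexes rather than substring matching.

```python
# A toy repository: file paths mapped to their contents.
REPO = {
    "data/loader.py": "def load(): ...",
    "models/train.py": "def train(): ...",
    "docs/history.md": "old notes",
}

MEMORY: list[str] = []  # decisions recorded across sessions

def record(decision: str) -> None:
    MEMORY.append(decision)

def select_context(task: str, max_files: int = 2) -> list[str]:
    """Pick only files whose path shares a keyword with the task."""
    scored = [(sum(word in path for word in task.lower().split()), path)
              for path in REPO]
    scored.sort(reverse=True)  # highest keyword overlap first
    return [path for score, path in scored[:max_files] if score > 0]

record("Use schema v2 for all feature tables")
files = select_context("train the model on new data")
# Prompt context = remembered decisions + only the relevant files.
prompt_context = MEMORY + [REPO[f] for f in files]
```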
Instructor Perspective
Students often assume that faster code generation automatically means faster progress. In reality, the largest challenges in AI-assisted workflows usually appear during coordination.
When multiple agents contribute to a project, the engineer must ensure that the pieces fit together correctly. This requires reviewing outputs carefully, checking assumptions, and maintaining a clear structure for the project.
In many ways the engineer’s role begins to resemble that of a project manager or systems architect. Instead of writing every line of code, the engineer ensures that the overall system remains coherent and reliable.
Integrating Coordination into the Course Project
In the Machine Learning for Engineers project, coordination becomes important as the project grows beyond a single script or experiment. Students should structure their work so that data preparation, modeling, testing, and reporting remain organized and reproducible.
AI tools can accelerate individual tasks, but the overall workflow must remain understandable and maintainable. Clear organization and consistent assumptions will matter more than the amount of automation used.
Key Takeaways
Agentic coordination connects planning and coding into a complete engineering workflow. Instead of interacting with a single AI assistant, software engineers manage systems of tools and agents working together.
The most effective engineers learn to guide these systems deliberately, ensuring that parallel work remains organized, consistent, and verifiable. In the next lectures, we will extend this workflow further by focusing on visualization and communication so that AI-assisted engineering produces clear insights and meaningful results.