
docs: add agent-loop.md explaining the tool-use loop and completion signals#1010

Merged
patniko merged 2 commits into github:main from shravanmn:main
Apr 6, 2026
Conversation

@shravanmn
Contributor

Summary

Adds docs/features/agent-loop.md — a guide explaining how the Copilot CLI processes a user message through its agentic tool-use loop.

Closes #1009

What's covered

  • Architecture: App → SDK → CLI → LLM relationship
  • The tool-use loop: flowchart showing how turns chain via toolRequests
  • Turns: one turn = one LLM API call, no hidden calls
  • Event flow: concrete multi-turn example with mermaid diagrams
  • session.idle vs session.task_complete: why idle is the reliable signal and task_complete is best-effort
  • Autopilot mode: how the CLI nudges the model to call task_complete
  • Counting LLM calls: turn pairs = API calls
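The "turn pairs = API calls" rule can be sketched against a session event log. This is a hedged sketch: it assumes a hypothetical JSONL log with a `type` field per event and a made-up filename; the real Copilot CLI log schema may differ.

```shell
# Hypothetical JSONL event log; the real CLI's log schema may differ.
log=session-events.jsonl
cat > "$log" <<'EOF'
{"type":"assistant.turn_start"}
{"type":"tool.execution_start"}
{"type":"tool.execution_end"}
{"type":"assistant.turn_end"}
{"type":"assistant.turn_start"}
{"type":"assistant.turn_end"}
{"type":"session.idle"}
EOF

# One turn = one LLM API call, so matched start/end pairs give the call count.
starts=$(grep -c '"type":"assistant.turn_start"' "$log")
ends=$(grep -c '"type":"assistant.turn_end"' "$log")
echo "turn pairs: $starts starts / $ends ends"
```

In this example log the session made two LLM API calls: one turn that requested a tool, and a second turn that produced the final answer before `session.idle`.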

Changes

  • docs/features/agent-loop.md — new guide
  • docs/features/index.md — added entry to the features table

@shravanmn shravanmn requested a review from a team as a code owner April 5, 2026 22:52
Copilot AI review requested due to automatic review settings April 5, 2026 22:52
Contributor

Copilot AI left a comment


Pull request overview

Adds a new documentation guide explaining the Copilot CLI’s agentic tool-use loop and completion signals, and links it from the Features index to help SDK consumers reason about turn sequencing, tool execution, and when a session is “done”.

Changes:

  • Added docs/features/agent-loop.md describing the tool-use loop, turns, multi-turn event flow, and completion signals (session.idle vs session.task_complete).
  • Updated docs/features/index.md to include the new guide in the features list.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File Description
docs/features/index.md Adds a new entry pointing readers to the agent loop guide.
docs/features/agent-loop.md New guide documenting the CLI’s turn/tool loop, example event flows, and guidance on completion signals and LLM-call counting.

Comment on lines +1 to +4
# The Agent Loop

How the Copilot CLI processes a user message end-to-end: from prompt to `session.idle`.

Copy link

Copilot AI Apr 5, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This file appears to be committed with CRLF line endings (visible as \r in the diff/context), while other docs in this repo use LF. Please normalize this file to LF to avoid noisy diffs and inconsistent formatting across platforms.
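One way to perform the suggested normalization is a `tr` pass over the file; this is only a sketch (the filename stands in for the committed doc), and `dos2unix` or a `.gitattributes` rule with `git add --renormalize` are common alternatives.

```shell
# Create a file with CRLF endings to stand in for the doc, then strip the \r.
printf 'line one\r\nline two\r\n' > agent-loop.md
tr -d '\r' < agent-loop.md > agent-loop.md.tmp && mv agent-loop.md.tmp agent-loop.md

# Verify no carriage returns remain.
if grep -q "$(printf '\r')" agent-loop.md; then
  echo "still CRLF"
else
  echo "normalized to LF"
fi
```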


The model sees the **full conversation history** on each call — system prompt, user message, and all prior tool calls and results.

Copilot AI Apr 5, 2026


The doc says the model sees the full conversation history on each call. However, the CLI can compact/summarize history when the context window fills (see session.compaction_* events), so the model may only see a compacted subset rather than the full persisted history. Consider rephrasing to clarify that the model sees whatever the CLI includes in the current context window (potentially compacted).

Suggested change

Before:

> The model sees the **full conversation history** on each call — system prompt, user message, and all prior tool calls and results.

After:

> On each call, the model sees whatever conversation state the CLI includes in the current context window — typically the system prompt, user message, and prior tool calls and results, but potentially a compacted or summarized version of earlier history if the context window fills.
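The compaction caveat can be made concrete with a small check over a session log: if a compaction event appears, the model saw summarized rather than full history. The event name `session.compaction_complete` comes from the review comment; the JSONL shape and filename here are assumptions.

```shell
# Assumed JSONL log; session.compaction_complete is mentioned in the review,
# but the exact record shape shown here is hypothetical.
log=compaction-events.jsonl
cat > "$log" <<'EOF'
{"type":"user.message"}
{"type":"assistant.turn_start"}
{"type":"session.compaction_complete","compactionTokensUsed":1234}
{"type":"assistant.turn_end"}
EOF

if grep -q '"type":"session.compaction_complete"' "$log"; then
  echo "history was compacted: the model saw a summarized transcript"
else
  echo "no compaction: the model saw the full transcript"
fi
```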

Comment on lines +175 to +180
The number of `assistant.turn_start` / `assistant.turn_end` pairs in the event log equals the total number of LLM API calls made. There are no hidden calls for planning, evaluation, or completion checking.

To inspect turn count for a session:

```bash
# Count turns in a session's event log
```

Copilot AI Apr 5, 2026


The “no hidden calls” / “turn pairs == total LLM API calls” claim looks inaccurate: the CLI can make additional LLM calls outside assistant turns (e.g., context compaction is tracked via session.compaction_complete.compactionTokensUsed). Consider scoping this section to assistant response turns only, or update the counting guidance to include other LLM-call sources (compaction, etc.).

Suggested change

Before:

> The number of `assistant.turn_start` / `assistant.turn_end` pairs in the event log equals the total number of LLM API calls made. There are no hidden calls for planning, evaluation, or completion checking.
>
> To inspect turn count for a session:
>
> `# Count turns in a session's event log`

After:

> The number of `assistant.turn_start` / `assistant.turn_end` pairs in the event log equals the number of **assistant response turns** in the session.
>
> This is a useful way to count how many times the CLI asked the model to produce an assistant turn, but it is not a complete accounting of every model-related operation in the session. For example, the CLI may perform work outside assistant turns, such as context compaction, so you should not treat turn pairs as the total number of all LLM calls or token-consuming operations.
>
> To inspect assistant turn count for a session:
>
> `# Count assistant turns in a session's event log`

Comment on lines +141 to +145
In **autopilot mode** (headless/autonomous operation), the CLI actively tracks whether the model has called `task_complete`. If the tool-use loop ends without it, the CLI injects a synthetic user message nudging the model:

> *"You have not yet marked the task as complete using the task_complete tool. If you were planning, stop planning and start implementing. You aren't done until you have fully completed the task."*

This effectively restarts the tool-use loop — the model sees the nudge as a new user message and continues working. The nudge also instructs the model **not** to call `task_complete` prematurely:

Copilot AI Apr 5, 2026


The autopilot “nudge” is presented as an exact quoted synthetic user message. Since this wording is CLI-implementation detail and may change between CLI versions, it’s likely to become stale. Consider marking it explicitly as an example/approximation (or linking to a versioned source) rather than quoting a fixed string.

Suggested change

Before:

> In **autopilot mode** (headless/autonomous operation), the CLI actively tracks whether the model has called `task_complete`. If the tool-use loop ends without it, the CLI injects a synthetic user message nudging the model:
>
> > *"You have not yet marked the task as complete using the task_complete tool. If you were planning, stop planning and start implementing. You aren't done until you have fully completed the task."*
>
> This effectively restarts the tool-use loop — the model sees the nudge as a new user message and continues working. The nudge also instructs the model **not** to call `task_complete` prematurely:

After:

> In **autopilot mode** (headless/autonomous operation), the CLI actively tracks whether the model has called `task_complete`. If the tool-use loop ends without it, the CLI injects a synthetic user message nudging the model. The exact wording is a CLI implementation detail and may vary by version; conceptually, it looks something like:
>
> > *For example: "You have not yet marked the task as complete using the task_complete tool. If you were planning, stop planning and start implementing. You aren't done until you have fully completed the task."*
>
> This effectively restarts the tool-use loop — the model sees the synthetic follow-up as a new user message and continues working. That follow-up also instructs the model **not** to call `task_complete` prematurely:
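The autopilot condition that triggers the nudge can be sketched as a check over the session log. This assumes a hypothetical JSONL log and a made-up shape for a `task_complete` tool-call record; only the detection is shown, not the actual injection logic.

```shell
# Hypothetical log of a session that went idle without calling task_complete.
log=autopilot-events.jsonl
cat > "$log" <<'EOF'
{"type":"assistant.turn_start"}
{"type":"assistant.turn_end"}
{"type":"session.idle"}
EOF

# In autopilot, the CLI would inject a synthetic user message here;
# this sketch only detects the condition that triggers it.
if grep -q '"tool":"task_complete"' "$log"; then
  echo "task_complete called: session is done"
else
  echo "no task_complete before idle: CLI would inject a nudge and resume the loop"
fi
```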

@patniko patniko merged commit da9921e into github:main Apr 6, 2026
3 of 4 checks passed


Development

Successfully merging this pull request may close these issues.

Documentation: add guide explaining the agentic tool-use loop, turns, and completion signals

3 participants