
LangSmith sample: starter cleanup, plugin on Worker, chatbot Response decoupling #295

Merged
xumaple merged 3 commits into main from maplexu/langsmith-starter-cleanup
Apr 23, 2026

Conversation


@xumaple xumaple commented Apr 22, 2026

Summary

Follow-ups to #292.

basic/starter.py — move @traceable directly onto main and delete the nested run_workflow closure (addresses a reviewer comment). main now returns the workflow result so LangSmith captures it as the trace output.
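Why the return value matters: @traceable records whatever the decorated function returns as the trace output, so an implicit None return leaves the trace's output field empty. A toy stand-in for the real langsmith decorator (names here are illustrative, not the sample's actual code) shows the mechanic:

```python
import asyncio
import functools

# Toy stand-in for langsmith's @traceable: it captures the wrapped
# function's return value the way the trace output would be captured.
trace_output = {}

def traceable(fn):
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        result = await fn(*args, **kwargs)
        trace_output[fn.__name__] = result  # return value becomes the trace output
        return result
    return wrapper

@traceable
async def main() -> str:
    # Before the fix, the workflow ran but `main` implicitly returned None,
    # so the recorded output was empty. Returning the result fixes that.
    result = "workflow response"
    return result

asyncio.run(main())
print(trace_output["main"])  # prints: workflow response
```

The real decorator does far more (spans, metadata, project routing), but the output-capture behavior is the piece this PR relies on.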

basic/worker.py, chatbot/worker.py — move LangSmithPlugin from the Client to the Worker. Recommended pattern: plugin on the Worker in worker code, plugin on the Client in client code.
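The wiring this implies looks roughly like the sketch below. The LangSmithPlugin import path and the Worker's plugins parameter are assumptions based on the PR description, not verified against the sample; a real run also needs a Temporal server listening on the target address, so this is illustrative only.

```python
# Sketch of the recommended placement: plugin on the Worker in worker code.
# Import path for LangSmithPlugin is an assumption; mirror the sample's import.
import asyncio

from temporalio.client import Client
from temporalio.worker import Worker

from langsmith import LangSmithPlugin  # hypothetical import path

async def run_worker() -> None:
    # No plugin on the Client here: this is worker code, so the plugin
    # goes on the Worker instead.
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="langsmith-sample",   # name assumed for illustration
        workflows=[...],                  # the sample's workflow classes
        activities=[...],                 # the sample's activity functions
        plugins=[LangSmithPlugin()],
    )
    await worker.run()

if __name__ == "__main__":
    asyncio.run(run_worker())
```

In client-side code (a starter script, say), the same plugin would instead be passed to Client.connect, which is the other half of the recommended pattern.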

chatbot/activities.py, chatbot/workflows.py (+ tests) — fix a pre-existing bug that broke every chatbot workflow task. The activity returned openai.types.responses.Response. OpenAI's API returns "prompt_cache_retention": "in_memory" (underscore), but openai SDK v2.32.0 declares that field as Literal["in-memory", "24h"]. The openai client parses laxly so the activity succeeds, but Temporal's pydantic_data_converter uses strict TypeAdapter(Response).validate_json on the way into the workflow and rejects the underscore, failing the task. Defined minimal ChatResponse and ToolCall pydantic models exposing only the fields the workflow uses (id, output_text, tool_calls), and have the activity project the openai Response down to that shape. The sample is no longer coupled to SDK drift in fields it doesn't use.
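The failure mode and the fix can both be sketched in a few lines, assuming pydantic v2. The model fields below follow the PR description (id, output_text, tool_calls); the ToolCall field names and the StrictSdkShape model are illustrative stand-ins, not the sample's actual definitions. The key point is that Literal validation is exact-match, so a strict (or even lax) TypeAdapter rejects the underscore variant the openai client tolerated, while a projected model that never declares the field is immune:

```python
import json
from typing import Literal

from pydantic import BaseModel, TypeAdapter, ValidationError

# Minimal projection models in the spirit of the fix: expose only
# the fields the workflow actually uses.
class ToolCall(BaseModel):
    call_id: str   # field names here are illustrative
    name: str
    arguments: str

class ChatResponse(BaseModel):
    id: str
    output_text: str
    tool_calls: list[ToolCall] = []

# Toy reproduction of the SDK mismatch: a Literal field that only
# admits the hyphenated spelling, fed the underscore spelling.
class StrictSdkShape(BaseModel):
    prompt_cache_retention: Literal["in-memory", "24h"]

payload = json.dumps({"prompt_cache_retention": "in_memory"})  # what the API returns
try:
    TypeAdapter(StrictSdkShape).validate_json(payload)
except ValidationError:
    print("validate_json rejects the underscore variant")

# The projected model round-trips cleanly because it never declares
# the drifting field at all.
resp = ChatResponse(id="resp_1", output_text="hi")
assert TypeAdapter(ChatResponse).validate_json(resp.model_dump_json()) == resp
```

Projecting down to a sample-owned model is what decouples the workflow's data-converter boundary from whatever the openai SDK declares in fields the sample never reads.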

Test plan

  • poe lint clean
  • poe format clean
  • pytest tests/langsmith_tracing/ — 3 passed

- basic/starter.py: move `@traceable` decorator directly onto `main`,
  removing the nested `run_workflow` closure (addresses reviewer comment)
- basic/worker.py, chatbot/worker.py: move `LangSmithPlugin` from the
  `Client` to the `Worker`, matching our recommended pattern (plugin on
  the worker in worker code; on the client in client code)
@xumaple xumaple requested a review from a team as a code owner April 22, 2026 21:55
xumaple added 2 commits April 22, 2026 18:07
`@traceable` captures the decorated function's return value as the
LangSmith trace output, so implicitly returning None left the trace's
output field empty. Return `result` (and annotate the return type) so
the trace shows the workflow response.
The activity previously returned `openai.types.responses.Response`
directly. The OpenAI API currently returns
`"prompt_cache_retention": "in_memory"` (underscore), but openai SDK
v2.32.0 declares that field as `Literal["in-memory", "24h"]`. The openai
client parses laxly so the activity succeeds, but Temporal's
`pydantic_data_converter` uses strict `TypeAdapter(Response).validate_json`
on the way into the workflow and rejects the underscore value, failing
every workflow task.

Define minimal `ChatResponse` and `ToolCall` pydantic models in
`activities.py` exposing only the fields the workflow uses (id,
output_text, tool_calls). The activity projects the openai Response
down to this shape so the sample is no longer coupled to SDK drift in
fields it doesn't use.

Update the workflow loop to iterate `response.tool_calls` directly and
the test mocks/helpers to build `ChatResponse` instead of constructing
openai Response objects.
@xumaple xumaple changed the title from "LangSmith sample: inline @traceable on main, move plugin to Worker" to "LangSmith sample: starter cleanup, plugin on Worker, chatbot Response decoupling" on Apr 22, 2026
@xumaple xumaple merged commit ba9e9a1 into main Apr 23, 2026
11 checks passed
@xumaple xumaple deleted the maplexu/langsmith-starter-cleanup branch April 23, 2026 16:12