LangSmith sample: starter cleanup, plugin on Worker, chatbot Response decoupling#295
Merged
Conversation
- `basic/starter.py`: move the `@traceable` decorator directly onto `main`, removing the nested `run_workflow` closure (addresses reviewer comment)
- `basic/worker.py`, `chatbot/worker.py`: move `LangSmithPlugin` from the `Client` to the `Worker`, matching our recommended pattern (plugin on the worker in worker code; on the client in client code)
JasonSteving99
approved these changes
Apr 22, 2026
`@traceable` captures the decorated function's return value as the LangSmith trace output, so implicitly returning `None` left the trace's output field empty. Return `result` (and annotate the return type) so the trace shows the workflow response.
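The mechanism can be sketched with a minimal stand-in for `@traceable` (hypothetical, not langsmith's actual implementation) that records whatever the decorated function returns as the trace output:

```python
import functools


def traceable_stub(fn):
    """Minimal stand-in for langsmith's @traceable: records the return value."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        # The recorded value plays the role of the trace's "output" field.
        wrapper.last_trace_output = result
        return result

    return wrapper


@traceable_stub
def main_without_return():
    result = "workflow response"
    # No return statement: the function implicitly returns None,
    # so the recorded trace output is empty.


@traceable_stub
def main_with_return() -> str:
    result = "workflow response"
    return result  # The trace output now shows the workflow response.


main_without_return()
main_with_return()
```

Running both shows the difference: the first variant records `None`, the second records the workflow response.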
The activity previously returned `openai.types.responses.Response` directly. The OpenAI API currently returns `"prompt_cache_retention": "in_memory"` (underscore), but openai SDK v2.32.0 declares that field as `Literal["in-memory", "24h"]`. The openai client parses laxly so the activity succeeds, but Temporal's `pydantic_data_converter` uses strict `TypeAdapter(Response).validate_json` on the way into the workflow and rejects the underscore value, failing every workflow task. Define minimal `ChatResponse` and `ToolCall` pydantic models in `activities.py` exposing only the fields the workflow uses (id, output_text, tool_calls). The activity projects the openai Response down to this shape so the sample is no longer coupled to SDK drift in fields it doesn't use. Update the workflow loop to iterate `response.tool_calls` directly and the test mocks/helpers to build `ChatResponse` instead of constructing openai Response objects.
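The decoupling and the validation failure it avoids can be sketched as follows, assuming pydantic v2. The `ToolCall` field names here are illustrative assumptions (the PR names only `id`, `output_text`, and `tool_calls` on `ChatResponse`), and `SdkResponse` is a hypothetical stand-in for the drifting SDK field:

```python
from typing import Literal, Optional

from pydantic import BaseModel, TypeAdapter, ValidationError


class ToolCall(BaseModel):
    # Illustrative fields; the real sample exposes only what the workflow uses.
    name: str
    arguments: str


class ChatResponse(BaseModel):
    # Minimal projection of openai.types.responses.Response.
    id: str
    output_text: str
    tool_calls: list[ToolCall] = []


class SdkResponse(BaseModel):
    # Stand-in for the SDK-declared field that drifted from the live API.
    prompt_cache_retention: Optional[Literal["in-memory", "24h"]] = None


# What the API actually sends: the underscore variant.
payload = '{"prompt_cache_retention": "in_memory"}'

# Validation on the workflow side (as pydantic_data_converter applies it)
# rejects the underscore value, since it is not a member of the Literal.
try:
    TypeAdapter(SdkResponse).validate_json(payload, strict=True)
    rejected = False
except ValidationError:
    rejected = True

# The projected model never carries the drifting field, so it round-trips
# cleanly through serialization and strict validation.
chat = ChatResponse(id="resp_1", output_text="hello", tool_calls=[])
restored = ChatResponse.model_validate_json(chat.model_dump_json())
```

Because `ChatResponse` declares only the three fields the workflow reads, future additions or renames elsewhere in the openai `Response` schema cannot fail the data converter.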
Summary
Follow-ups to #292.
- `basic/starter.py` — move `@traceable` directly onto `main` and delete the nested `run_workflow` closure (reviewer comment). `main` now returns the workflow result so LangSmith captures it as the trace output.
- `basic/worker.py`, `chatbot/worker.py` — move `LangSmithPlugin` from the `Client` to the `Worker`. Recommended pattern: plugin on the `Worker` in worker code, plugin on the `Client` in client code.
- `chatbot/activities.py`, `chatbot/workflows.py` (+ tests) — fix a pre-existing bug that broke every chatbot workflow task. The activity returned `openai.types.responses.Response`. OpenAI's API returns `"prompt_cache_retention": "in_memory"` (underscore), but openai SDK v2.32.0 declares that field as `Literal["in-memory", "24h"]`. The openai client parses laxly so the activity succeeds, but Temporal's `pydantic_data_converter` uses strict `TypeAdapter(Response).validate_json` on the way into the workflow and rejects the underscore, failing the task. Defined minimal `ChatResponse` and `ToolCall` pydantic models exposing only the fields the workflow uses (`id`, `output_text`, `tool_calls`), and have the activity project the openai Response down to that shape. The sample is no longer coupled to SDK drift in fields it doesn't use.

Test plan
- `poe lint` clean
- `poe format` clean
- `pytest tests/langsmith_tracing/` — 3 passed