# Claudette
---
# Source: https://claudette.answer.ai/async.html.md
# The async version
## Setup
``` python
from cachy import enable_cachy
```
``` python
enable_cachy()
```
``` python
from IPython.display import display,Image
# Assumed imports for the examples below: `models`, `mk_msg`, `mk_msgs`, etc.
# come from claudette, and `AsyncAnthropic` from the official Anthropic SDK.
from claudette import *
from anthropic import AsyncAnthropic
```
## Async SDK
``` python
model = models[1]
cli = AsyncAnthropic()
```
``` python
prompt = "I'm Jeremy"
m = mk_msg(prompt)
r = await cli.messages.create(messages=[m], model=model, max_tokens=100)
r
```
Hello Jeremy! Nice to meet you. How can I help you today?
- id: `msg_0197EaNqjqZtco5uSw6rYu34`
- content:
`[{'citations': None, 'text': 'Hello Jeremy! Nice to meet you. How can I help you today?', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
msgs = mk_msgs([prompt, r, "I forgot my name. Can you remind me please?"])
msgs
```
[{'role': 'user', 'content': "I'm Jeremy"},
{'role': 'assistant',
'content': [TextBlock(citations=None, text='Hello Jeremy! Nice to meet you. How can I help you today?', type='text')]},
{'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
``` python
await cli.messages.create(messages=msgs, model=model, max_tokens=200)
```
Of course! Your name is Jeremy.
- id: `msg_01UkN7e1xbcLnvW3ir6nLKAb`
- content:
`[{'citations': None, 'text': 'Of course! Your name is Jeremy.', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 42, 'output_tokens': 11, 'server_tool_use': None, 'service_tier': 'standard'}`
------------------------------------------------------------------------
source
### AsyncClient
``` python
def AsyncClient(
model, cli:NoneType=None, log:bool=False, cache:bool=False
):
```
*Async Anthropic messages client.*
Exported source
``` python
class AsyncClient(Client):
def __init__(self, model, cli=None, log=False, cache=False):
"Async Anthropic messages client."
super().__init__(model,cli,log,cache)
if not cli: self.c = AsyncAnthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'})
```
``` python
c = AsyncClient(model)
```
``` python
c._r(r)
c.use
```
In: 10; Out: 18; Cache create: 0; Cache read: 0; Total Tokens: 28; Search: 0
------------------------------------------------------------------------
source
### AsyncClient.\_\_call\_\_
``` python
def __call__(
msgs:list, # List of messages in the dialog
sp:str='', # The system prompt
temp:int=0, # Temperature
maxtok:int=4096, # Maximum tokens
maxthinktok:int=0, # Maximum thinking tokens
prefill:str='', # Optional prefill to pass to Claude as start of its response
stream:bool=False, # Stream response?
stop:NoneType=None, # Stop sequence
tools:Optional=None, # List of tools to make available to Claude
tool_choice:Optional=None, # Optionally force use of some tool
cb:NoneType=None, # Callback to pass result to when complete
cli:NoneType=None, log:bool=False, cache:bool=False
):
```
*Make an async call to Claude.*
Exported source
``` python
@asave_iter
async def _astream(o, cm, prefill, cb):
async with cm as s:
yield prefill
async for x in s.text_stream: yield x
o.value = await s.get_final_message()
await cb(o.value)
```
Exported source
``` python
@patch
@delegates(Client)
async def __call__(self:AsyncClient,
msgs:list, # List of messages in the dialog
sp='', # The system prompt
temp=0, # Temperature
maxtok=4096, # Maximum tokens
maxthinktok=0, # Maximum thinking tokens
prefill='', # Optional prefill to pass to Claude as start of its response
stream:bool=False, # Stream response?
stop=None, # Stop sequence
tools:Optional[list]=None, # List of tools to make available to Claude
tool_choice:Optional[dict]=None, # Optionally force use of some tool
cb=None, # Callback to pass result to when complete
**kwargs):
"Make an async call to Claude."
msgs,kwargs = self._precall(msgs, prefill, sp, temp, maxtok, maxthinktok, stream,
stop, tools, tool_choice, kwargs)
m = self.c.messages
f = m.stream if stream else m.create
res = f(model=self.model, messages=msgs, **kwargs)
async def _cb(v):
self._log(v, prefill=prefill, msgs=msgs, **kwargs)
if cb: await cb(v)
if stream: return _astream(res, prefill, _cb)
res = await res
try: return res
finally: await _cb(res)
```
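The `cb` parameter isn't demonstrated elsewhere in this section, so here's a minimal sketch of using it (assuming a running event loop, e.g. in a notebook; `print_usage` is a hypothetical helper). As the exported source shows, the callback must be awaitable, and fires after the result is logged:

``` python
async def print_usage(v): print('tokens used:', v.usage)  # runs once the final Message is available

r = await c('Hi', cb=print_usage)
```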
``` python
c = AsyncClient(model, log=True)
c.use
```
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Search: 0
``` python
c.model = models[1]
await c('Hi')
```
Hello! How can I help you today?
- id: `msg_01QXCxYb2yRGsP7sia4UF71w`
- content:
`[{'citations': None, 'text': 'Hello! How can I help you today?', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 8, 'output_tokens': 12, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
c.use
```
In: 8; Out: 12; Cache create: 0; Cache read: 0; Total Tokens: 20; Search: 0
``` python
q = "Very concisely, what is the meaning of life?"
pref = 'According to Douglas Adams,'
await c(q, prefill=pref)
```
According to Douglas Adams, it’s 42.
More seriously: Create meaning through connections, growth, and
contribution to something beyond yourself.
- id: `msg_01AyipEe57GjCpju56iQtqRr`
- content:
`[{'citations': None, 'text': "According to Douglas Adams, it's 42.\n\nMore seriously: Create meaning through connections, growth, and contribution to something beyond yourself.", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 24, 'output_tokens': 27, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
c.use
```
In: 32; Out: 39; Cache create: 0; Cache read: 0; Total Tokens: 71; Search: 0
``` python
r = await c(q, prefill=pref, stream=True)
async for o in r: print(o, end='')
r.value
```
According to Douglas Adams, it's 42.
More seriously: Create meaning through connections, growth, and contribution to something beyond yourself.
According to Douglas Adams, it’s 42.
More seriously: Create meaning through connections, growth, and
contribution to something beyond yourself.
- id: `msg_01HwpyRuSLFi66AuuCuGSwLU`
- content:
`[{'citations': None, 'text': "According to Douglas Adams, it's 42.\n\nMore seriously: Create meaning through connections, growth, and contribution to something beyond yourself.", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 24, 'output_tokens': 27, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
c.use
```
In: 56; Out: 66; Cache create: 0; Cache read: 0; Total Tokens: 122; Search: 0
``` python
def sums(
a:int, # First thing to sum
b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
"Adds a + b."
print(f"Finding the sum of {a} and {b}")
return a + b
```
``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
```
``` python
tools=[sums]
choice = mk_tool_choice('sums')
choice
```
{'type': 'tool', 'name': 'sums'}
``` python
msgs = mk_msgs(pr)
r = await c(msgs, sp=sp, tools=tools, tool_choice=choice)
r
```
[ToolUseBlock(id='toolu_019n1R5kwrbTSmGZ1TcrU8b4', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
- id: `msg_01GuiEDm9UCKoZGhW9vqBAc9`
- content:
`[{'id': 'toolu_019n1R5kwrbTSmGZ1TcrU8b4', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 712, 'output_tokens': 57, 'server_tool_use': None, 'service_tier': 'standard'}`
------------------------------------------------------------------------
source
### mk_funcres_async
``` python
def mk_funcres_async(
fc, ns
):
```
*Given tool use block `fc`, get tool result, and create a tool_result
response.*
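There's no standalone example of `mk_funcres_async` in this section; a minimal sketch, assuming `r` is the tool-use response above:

``` python
fc = r.content[0]  # the ToolUseBlock requesting `sums`
# Calls `sums` and wraps its output in a tool_result block
await mk_funcres_async(fc, ns=globals())
```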
------------------------------------------------------------------------
source
### mk_toolres_async
``` python
def mk_toolres_async(
r:Mapping, # Tool use request response from Claude
ns:Optional=None, # Namespace to search for tools
):
```
*Create a `tool_result` message from response `r`.*
``` python
tr = await mk_toolres_async(r, ns=globals())
tr
```
Finding the sum of 604542 and 6458932
[{'role': 'assistant',
'content': [{'id': 'toolu_019n1R5kwrbTSmGZ1TcrU8b4',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_019n1R5kwrbTSmGZ1TcrU8b4',
'content': '7063474'}]}]
``` python
msgs += tr
r = contents(await c(msgs, sp=sp, tools=sums))
r
```
'The sum of 604542 + 6458932 = **7,063,474**'
## Structured Output
------------------------------------------------------------------------
source
### AsyncClient.structured
``` python
def structured(
msgs:list, # List of messages in the dialog
tools:Optional=None, # List of tools to make available to Claude
ns:Optional=None, # Namespace to search for tools
sp:str='', # The system prompt
temp:int=0, # Temperature
maxtok:int=4096, # Maximum tokens
maxthinktok:int=0, # Maximum thinking tokens
prefill:str='', # Optional prefill to pass to Claude as start of its response
stream:bool=False, # Stream response?
stop:NoneType=None, # Stop sequence
tool_choice:Optional=None, # Optionally force use of some tool
cb:NoneType=None, # Callback to pass result to when complete
metadata:MetadataParam | Omit=omit,
service_tier:Literal['auto', 'standard_only'] | Omit=omit,
stop_sequences:SequenceNotStr[str] | Omit=omit,
system:Union[str, Iterable[TextBlockParam]] | Omit=omit,
temperature:float | Omit=omit,
thinking:ThinkingConfigParam | Omit=omit,
top_k:int | Omit=omit,
top_p:float | Omit=omit,
extra_headers:Headers | None=None, # Additional parameters to pass to the API that aren't available via kwargs; values given here take precedence over values defined on the client or passed to this method
extra_query:Query | None=None, extra_body:Body | None=None,
timeout:float | httpx.Timeout | None | NotGiven=NOT_GIVEN
):
```
*Return the value of all tool calls (generally used for structured
outputs)*
``` python
await c.structured(pr, sums)
```
Finding the sum of 604542 and 6458932
[7063474]
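As the docstring notes, this is generally used for structured outputs. A sketch of that use case, with a hypothetical `person` extractor function whose arguments Claude fills in from the prompt:

``` python
def person(
    name:str, # The person's full name
    age:int # The person's age in years
): # Dict of extracted fields
    "Record information about a person"
    return dict(name=name, age=age)

await c.structured("Extract the person: Jeremy Howard is 50 years old.", [person])
# e.g. [{'name': 'Jeremy Howard', 'age': 50}]
```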
``` python
c
```
[ToolUseBlock(id='toolu_01WRUTfxTaXDbxfar32GqnMP', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 4,304 | 0.012912 |
| Output tokens | 342 | 0.005130 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 4,646 | $0.018042 |
## AsyncChat
------------------------------------------------------------------------
source
### AsyncChat
``` python
def AsyncChat(
model:Optional=None, # Model to use (leave empty if passing `cli`)
cli:Optional=None, # Client to use (leave empty if passing `model`)
sp:str='', # Optional system prompt
tools:Optional=None, # List of tools to make available to Claude
temp:int=0, # Temperature
cont_pr:Optional=None, # User prompt to continue an assistant response
cache:bool=False, # Use Claude cache?
hist:list=None, # Initialize history
ns:Optional=None, # Namespace to search for tools
):
```
*Anthropic async chat client.*
Exported source
``` python
@delegates()
class AsyncChat(Chat):
def __init__(self,
model:Optional[str]=None, # Model to use (leave empty if passing `cli`)
cli:Optional[Client]=None, # Client to use (leave empty if passing `model`)
**kwargs):
"Anthropic async chat client."
super().__init__(model, cli, **kwargs)
if not cli: self.c = AsyncClient(model)
```
``` python
sp = "Always use tools if available, and calculations are requested."
chat = AsyncChat(model, sp=sp)
chat.c.use, chat.h
```
(In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Search: 0, [])
------------------------------------------------------------------------
source
### AsyncChat.\_\_call\_\_
``` python
def __call__(
pr:NoneType=None, # Prompt / message
temp:NoneType=None, # Temperature
maxtok:int=4096, # Maximum tokens
maxthinktok:int=0, # Maximum thinking tokens
stream:bool=False, # Stream response?
prefill:str='', # Optional prefill to pass to Claude as start of its response
tool_choice:Union=None, # Optionally force use of some tool
kw:VAR_KEYWORD
):
```
*Call self as a function.*
Exported source
``` python
@patch
async def _append_pr(self:AsyncChat, pr=None):
prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user' if no history
if pr and prev_role == 'user': await self()
self._post_pr(pr, prev_role)
```
Exported source
``` python
@patch
async def __call__(self:AsyncChat,
pr=None, # Prompt / message
temp=None, # Temperature
maxtok=4096, # Maximum tokens
maxthinktok=0, # Maximum thinking tokens
stream=False, # Stream response?
prefill='', # Optional prefill to pass to Claude as start of its response
tool_choice:Optional[Union[str,bool,dict]]=None, # Optionally force use of some tool
**kw):
if temp is None: temp=self.temp
await self._append_pr(pr)
async def _cb(v):
self.last = await mk_toolres_async(v, ns=limit_ns(self.ns, self.tools, tool_choice))
self.h += self.last
return await self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, maxthinktok=maxthinktok, tools=self.tools, tool_choice=tool_choice, cb=_cb, **kw)
```
``` python
await chat("I'm Jeremy")
await chat("What's my name?")
```
Your name is Jeremy! You told me that at the start of our conversation.
- id: `msg_011gVfJUTyqUBreT3u9ej2xM`
- content:
`[{'citations': None, 'text': 'Your name is Jeremy! You told me that at the start of our conversation.', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 47, 'output_tokens': 19, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
q = "Very concisely, what is the meaning of life?"
pref = 'According to Douglas Adams,'
await chat(q, prefill=pref)
```
According to Douglas Adams, it’s 42.
More seriously: to find purpose through connection, growth, and
contributing something meaningful to others.
- id: `msg_01PEtrNFwjv7daER26fdzatn`
- content:
`[{'citations': None, 'text': "According to Douglas Adams, it's 42. \n\nMore seriously: to find purpose through connection, growth, and contributing something meaningful to others.", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 86, 'output_tokens': 29, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
chat = AsyncChat(model, sp=sp)
r = await chat("I'm Jeremy", stream=True)
async for o in r: print(o, end='')
r.value
```
Hello Jeremy! Nice to meet you. How can I help you today?
Hello Jeremy! Nice to meet you. How can I help you today?
- id: `msg_01C3ZDw8mwZioFsp4Xm8qPSx`
- content:
`[{'citations': None, 'text': 'Hello Jeremy! Nice to meet you. How can I help you today?', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 21, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
pr = f"What is {a}+{b}?"
chat = AsyncChat(model, sp=sp, tools=[sums])
r = await chat(pr)
r
```
Finding the sum of 604542 and 6458932
[ToolUseBlock(id='toolu_012wP54FgYSiSiQ2Sy6ivNtD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
- id: `msg_01CsTZMd2xg82kqTz6N6Ui4t`
- content:
`[{'id': 'toolu_012wP54FgYSiSiQ2Sy6ivNtD', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 624, 'output_tokens': 72, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
await chat()
```
The sum of 604542 + 6458932 = **7,063,474**
- id: `msg_01HGhwCoQN4sbWZimSbky4ck`
- content:
`[{'citations': None, 'text': 'The sum of 604542 + 6458932 = **7,063,474**', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 712, 'output_tokens': 24, 'server_tool_use': None, 'service_tier': 'standard'}`
AsyncChat handles missing tools gracefully. When a tool is called but
not found in the namespace, it returns an error message rather than
crashing.
In this test, we intentionally empty the namespace (`chat.ns={}`) to
simulate what would happen if Claude hallucinated a tool or if a tool
was missing. The
[`limit_ns`](https://claudette.answer.ai/core.html#limit_ns) function
(used in the `_cb` callback) would normally filter out hallucinated
tools, but here we’re testing the fallback behavior when tools aren’t
available.
``` python
pr = f"What is {a}+{b}?"
chat = AsyncChat(model, sp=sp, tools=[sums])
chat.ns={}
r = await chat(pr)
r
```
[ToolUseBlock(id='toolu_012wP54FgYSiSiQ2Sy6ivNtD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
- id: `msg_01CsTZMd2xg82kqTz6N6Ui4t`
- content:
`[{'id': 'toolu_012wP54FgYSiSiQ2Sy6ivNtD', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 624, 'output_tokens': 72, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
chat.h
```
[{'role': 'user', 'content': 'What is 604542+6458932?'},
{'role': 'assistant',
'content': [{'id': 'toolu_012wP54FgYSiSiQ2Sy6ivNtD',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_012wP54FgYSiSiQ2Sy6ivNtD',
'content': 'Error - tool not defined in the tool_schemas: sums'}]}]
``` python
fn = Path('samples/puppy.jpg')
img = fn.read_bytes()
Image(img)
```
*(image output: `samples/puppy.jpg`)*
``` python
q = "In brief, what color flowers are in this image?"
msg = mk_msg([img, q])
await c([msg])
```
The flowers in this image are **purple**.
- id: `msg_01JPnMvi1UmuspTeQYG8RQiC`
- content:
`[{'citations': None, 'text': 'The flowers in this image are **purple**.', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 12, 'server_tool_use': None, 'service_tier': 'standard'}`
Add `cache=True` to automatically add messages to Claude’s KV cache.
``` python
chat = AsyncChat(model, sp=sp, cache=True)
await chat("Lorem ipsum dolor sit amet" * 150)
```
I notice you’ve sent me the phrase “Lorem ipsum dolor sit amet” repeated
many times. “Lorem ipsum” is placeholder text commonly used in design
and publishing to demonstrate visual form without meaningful content.
Is there something specific I can help you with? For example:
- Do you have a question or task you’d like assistance with?
- Were you testing something?
- Did you mean to send different content?

I’m here to help with a wide range of tasks including answering
questions, analysis, writing, problem-solving, and more. Please let me
know what you need!
- id: `msg_015it5oCQhxJAYuZzSqag6DC`
- content:
`[{'citations': None, 'text': 'I notice you\'ve sent me the phrase "Lorem ipsum dolor sit amet" repeated many times. "Lorem ipsum" is placeholder text commonly used in design and publishing to demonstrate visual form without meaningful content.\n\nIs there something specific I can help you with? For example:\n- Do you have a question or task you\'d like assistance with?\n- Were you testing something?\n- Did you mean to send different content?\n\nI\'m here to help with a wide range of tasks including answering questions, analysis, writing, problem-solving, and more. Please let me know what you need!', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 1063}, 'cache_creation_input_tokens': 1063, 'cache_read_input_tokens': 0, 'input_tokens': 3, 'output_tokens': 125, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
chat.use
```
In: 3; Out: 125; Cache create: 1063; Cache read: 0; Total Tokens: 1191; Search: 0
In this followup call, nearly all the tokens are cached, so only the
new tokens are charged at the full rate.
``` python
await chat("Whoops, sorry about that!")
```
No problem at all! These things happen. 😊
How can I help you today? Feel free to ask me anything or let me know
what you’re working on!
- id: `msg_01GmADYrSiU3m84ACDf3Wvr6`
- content:
`[{'citations': None, 'text': "No problem at all! These things happen. 😊\n\nHow can I help you today? Feel free to ask me anything or let me know what you're working on!", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 1199}, 'cache_creation_input_tokens': 1199, 'cache_read_input_tokens': 0, 'input_tokens': 3, 'output_tokens': 39, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
chat.use
```
In: 6; Out: 164; Cache create: 2262; Cache read: 0; Total Tokens: 2432; Search: 0
## Extended Thinking
Let’s call the model without extended thinking enabled.
``` python
chat = AsyncChat(model)
await chat("Write a sentence about Python!")
```
Python is a versatile, high-level programming language known for its
clean syntax and readability, making it popular for everything from web
development to data science and machine learning.
- id: `msg_013GJfKGhDZwj9n9cNEoYRW5`
- content:
`[{'citations': None, 'text': 'Python is a versatile, high-level programming language known for its clean syntax and readability, making it popular for everything from web development to data science and machine learning.', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 13, 'output_tokens': 38, 'server_tool_use': None, 'service_tier': 'standard'}`
Now, let’s call the model with extended thinking enabled.
``` python
r = await chat("Write a sentence about Python!", maxthinktok=1024)
r
```
Python’s extensive standard library and vast ecosystem of third-party
packages make it an excellent choice for rapidly developing applications
across diverse domains.
Thinking
The user is asking me to write another sentence about Python. I should
provide a different sentence than before to keep it interesting and
informative.
- id: `msg_01Y4N6eR7socsGLeVJs6oapd`
- content:
`[{'signature': 'EsACCkYIChgCKkC+YytasVsIu50+vbFqeiRvPJF8hAUKM6cBDM1n6UwzdeHo6ueIax7YzrcBwYyQeyaxRYBA9oGmIB7Om5+cQNQIEgzUO2dJLD+Q4RfuICwaDLed+ZrwJ+ghnB9lpCIw8LzGix1ZyAUUBgu7Z7IBBsrAHi9JLu8IazIGFQ7dlaNg0xCx1TruTDOtO/ne4BkMKqcBeC/gMvJu56gzQqnlyJmhAOU4YWKjoI6APmdNhe9lTLBMKlWcwGBqoyFSJqLdk5UTQQTrz94JQddkHAYCuYW8XfPcu75puFzn2xi/ulu37N8FuZk4gDzFImVNteeMz9FST29vsVywdM85Pt1H3n7EMS0EhkT5XZOhbrkZmKjsA4a9FdmOJAIEBbD6IxFuE5FV+gMcvO5aoY3X8bteAM6KHEyyzYfje8UYAQ==', 'thinking': 'The user is asking me to write another sentence about Python. I should provide a different sentence than before to keep it interesting and informative.', 'type': 'thinking'}, {'citations': None, 'text': "Python's extensive standard library and vast ecosystem of third-party packages make it an excellent choice for rapidly developing applications across diverse domains.", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 89, 'output_tokens': 65, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
r.content
```
[ThinkingBlock(signature='EsACCkYIChgCKkC+YytasVsIu50+vbFqeiRvPJF8hAUKM6cBDM1n6UwzdeHo6ueIax7YzrcBwYyQeyaxRYBA9oGmIB7Om5+cQNQIEgzUO2dJLD+Q4RfuICwaDLed+ZrwJ+ghnB9lpCIw8LzGix1ZyAUUBgu7Z7IBBsrAHi9JLu8IazIGFQ7dlaNg0xCx1TruTDOtO/ne4BkMKqcBeC/gMvJu56gzQqnlyJmhAOU4YWKjoI6APmdNhe9lTLBMKlWcwGBqoyFSJqLdk5UTQQTrz94JQddkHAYCuYW8XfPcu75puFzn2xi/ulu37N8FuZk4gDzFImVNteeMz9FST29vsVywdM85Pt1H3n7EMS0EhkT5XZOhbrkZmKjsA4a9FdmOJAIEBbD6IxFuE5FV+gMcvO5aoY3X8bteAM6KHEyyzYfje8UYAQ==', thinking='The user is asking me to write another sentence about Python. I should provide a different sentence than before to keep it interesting and informative.', type='thinking'),
TextBlock(citations=None, text="Python's extensive standard library and vast ecosystem of third-party packages make it an excellent choice for rapidly developing applications across diverse domains.", type='text')]
---
# Source: https://claudette.answer.ai/toolloop.html.md
# Tool loop
``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```
``` python
from IPython.display import display, Markdown, clear_output
from pprint import pprint
from cachy import enable_cachy
# Assumed import: `models`, `Chat`, `contents`, etc. come from claudette.
from claudette import *
```
``` python
enable_cachy()
```
``` python
model = models[1]
model
```
'claude-sonnet-4-5'
## Problem setup
Anthropic provides an [interesting
example](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)
of using tools to mock up a hypothetical ordering system. We’re going to
take it a step further, and show how we can dramatically simplify the
process, whilst completing more complex tasks.
We’ll start by defining the same mock customer/order data as in
Anthropic’s example, plus create an entity relationship between
customers and orders:
``` python
def _get_orders_customers():
orders = {
"O1": dict(id="O1", product="Widget A", quantity=2, price=19.99, status="Shipped"),
"O2": dict(id="O2", product="Gadget B", quantity=1, price=49.99, status="Processing"),
"O3": dict(id="O3", product="Gadget B", quantity=2, price=49.99, status="Shipped")}
customers = {
"C1": dict(name="John Doe", email="john@example.com", phone="123-456-7890",
orders=[orders['O1'], orders['O2']]),
"C2": dict(name="Jane Smith", email="jane@example.com", phone="987-654-3210",
orders=[orders['O3']])
}
return orders, customers
```
``` python
orders, customers = _get_orders_customers()
```
We can now define the same functions from the original example – but
note that we don’t need to manually create the large JSON schema, since
Claudette generates it automatically from the functions themselves.
We’ll also add some extra functionality to update order details when
cancelling.
``` python
def get_customer_info(
customer_id:str # ID of the customer
): # Customer's name, email, phone number, and list of orders
"Retrieves a customer's information and their orders based on the customer ID"
print(f'- Retrieving customer {customer_id}')
return customers.get(customer_id, "Customer not found")
def get_order_details(
order_id:str # ID of the order
): # Order's ID, product name, quantity, price, and order status
"Retrieves the details of a specific order based on the order ID"
print(f'- Retrieving order {order_id}')
return orders.get(order_id, "Order not found")
def cancel_order(
order_id:str # ID of the order to cancel
)->bool: # True if the cancellation is successful
"Cancels an order based on the provided order ID"
print(f'- Cancelling order {order_id}')
if order_id not in orders: return False
orders[order_id]['status'] = 'Cancelled'
return True
```
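To see the schema Claudette derives from these definitions, you can inspect one directly; a quick sketch, assuming toolslm’s `get_schema` (the library Claudette uses under the hood for function calling):

``` python
from toolslm.funccall import get_schema
from pprint import pprint
# The JSON schema is built from the signature and docments comments
pprint(get_schema(get_customer_info))
```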
We’re now ready to start our chat.
## Manual tool use
``` python
tools = [get_customer_info, get_order_details, cancel_order]
```
``` python
chat = Chat(model, tools=tools)
```
We’ll start with the same request as Anthropic showed:
``` python
r = chat('Can you tell me the email address for customer C1?')
print(r.stop_reason)
r.content
```
- Retrieving customer C1
tool_use
[ToolUseBlock(id='toolu_01LJ2mkQDqRdToAFHbCosv26', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
Claude asks us to use a tool. Claudette handles that automatically by
just calling it again:
``` python
r = chat()
contents(r)
```
'The email address for customer C1 (John Doe) is **john@example.com**.'
Let’s consider a more complex case than in the original example – what
happens if a customer wants to cancel all of their orders?
``` python
chat = Chat(model, tools=tools)
r = chat('Please cancel all orders for customer C1 for me.')
print(r.stop_reason)
r.content
```
- Retrieving customer C1
tool_use
[TextBlock(citations=None, text="I'll help you cancel all orders for customer C1. First, let me retrieve the customer's information to see what orders they have.", type='text'),
ToolUseBlock(id='toolu_01G48VxPvsqRmUfRNWbz5JAf', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
## Tool loop
This is the start of a multi-stage tool use process. Doing it manually
step by step is inconvenient, so let’s write a function to handle this
for us:
------------------------------------------------------------------------
source
### Chat.toolloop
``` python
def toolloop(
pr, # Prompt to pass to Claude
max_steps:int=10, # Maximum number of tool requests to loop through
cont_func:callable=noop, # Function that stops loop if returns False
final_prompt:str='You have no more tool uses. Please summarize your findings. If you did not complete your goal please tell the user what further work needs to be done so they can choose how best to proceed.', # Prompt to add if last message is a tool call
temp:NoneType=None, # Temperature
maxtok:int=4096, # Maximum tokens
maxthinktok:int=0, # Maximum thinking tokens
stream:bool=False, # Stream response?
prefill:str='', # Optional prefill to pass to Claude as start of its response
tool_choice:Optional=None, # Optionally force use of some tool
):
```
*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*
Exported source
``` python
_final_prompt = "You have no more tool uses. Please summarize your findings. If you did not complete your goal please tell the user what further work needs to be done so they can choose how best to proceed."
```
Exported source
``` python
@patch
@delegates(Chat.__call__)
def toolloop(self:Chat,
pr, # Prompt to pass to Claude
max_steps=10, # Maximum number of tool requests to loop through
cont_func:callable=noop, # Function that stops loop if returns False
final_prompt=_final_prompt, # Prompt to add if last message is a tool call
**kwargs):
"Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages"
@save_iter
def _f(o):
init_n = len(self.h)
r = self(pr, **kwargs)
yield r
if len(self.last)>1: yield self.last[1]
for i in range(max_steps-1):
if self.c.stop_reason!='tool_use': break
r = self(final_prompt if i==max_steps-2 else None, **kwargs)
yield r
if len(self.last)>1: yield self.last[1]
if not cont_func(*self.h[-3:]): break
o.value = self.h[init_n+1:]
return _f()
```
`toolloop` returns an iterable of the new messages – assistant replies and tool results:
``` python
chat = Chat(model, tools=tools)
pr = 'Can you tell me the email address for customer C1?'
r = chat.toolloop(pr)
for o in r: display(o)
```
- Retrieving customer C1
[ToolUseBlock(id='toolu_01LJ2mkQDqRdToAFHbCosv26', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')]
- id: `msg_01F1ruk8y7TsTrhpWTkBc67e`
- content:
`[{'id': 'toolu_01LJ2mkQDqRdToAFHbCosv26', 'input': {'customer_id': 'C1'}, 'name': 'get_customer_info', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 757, 'output_tokens': 58, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': "{'name': 'John Doe', 'email': 'john@example.com', "
"'phone': '123-456-7890', 'orders': [{'id': 'O1', "
"'product': 'Widget A', 'quantity': 2, 'price': "
"19.99, 'status': 'Shipped'}, {'id': 'O2', "
"'product': 'Gadget B', 'quantity': 1, 'price': "
"49.99, 'status': 'Processing'}]}",
'tool_use_id': 'toolu_01LJ2mkQDqRdToAFHbCosv26',
'type': 'tool_result'}],
'role': 'user'}
```
The email address for customer C1 (John Doe) is **john@example.com**.
- id: `msg_019Z6rMioA6RExGgXKGnusiM`
- content:
`[{'citations': None, 'text': 'The email address for customer C1 (John Doe) is **john@example.com**.', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 953, 'output_tokens': 24, 'server_tool_use': None, 'service_tier': 'standard'}`
The full set of tool loop messages is stored in the `value` attr:
``` python
pprint(r.value, width=120)
```
[{'content': [{'id': 'toolu_01LJ2mkQDqRdToAFHbCosv26',
'input': {'customer_id': 'C1'},
'name': 'get_customer_info',
'type': 'tool_use'}],
'role': 'assistant'},
{'content': [{'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': "
"[{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, "
"{'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': "
"'Processing'}]}",
'tool_use_id': 'toolu_01LJ2mkQDqRdToAFHbCosv26',
'type': 'tool_result'}],
'role': 'user'},
{'content': [{'text': 'The email address for customer C1 (John Doe) is **john@example.com**.', 'type': 'text'}],
'role': 'assistant'}]
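The `.value` mechanics come from the `save_iter` decorator (and `asave_iter` in the async client). A minimal sketch of the pattern, not Claudette’s exact implementation:

``` python
class _SavedIter:
    "Iterable wrapper exposing a `value` attribute the generator can set."
    def __init__(self, gen_fn): self.gen = gen_fn(self)
    def __iter__(self): return iter(self.gen)

def save_iter(f):
    "Pass the wrapper itself to `f` as `o`, so the generator can set `o.value`."
    return lambda *args, **kw: _SavedIter(lambda o: f(o, *args, **kw))

@save_iter
def _demo(o):
    yield 1; yield 2
    o.value = 'done'   # available on the wrapper after iteration

it = _demo()
print(list(it), it.value)  # [1, 2] done
```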
Let’s see if it can handle the multi-stage process now:
``` python
orders, customers = _get_orders_customers()
```
``` python
chat = Chat(model, tools=tools)
r = chat.toolloop('Please cancel all orders for customer C1 for me.')
for o in r: display(o)
```
- Retrieving customer C1
I’ll help you cancel all orders for customer C1. First, let me retrieve
the customer’s information to see what orders they have.
- id: `msg_01XdX15ZJuePtveCDsM41WMm`
- content:
`[{'citations': None, 'text': "I'll help you cancel all orders for customer C1. First, let me retrieve the customer's information to see what orders they have.", 'type': 'text'}, {'id': 'toolu_01G48VxPvsqRmUfRNWbz5JAf', 'input': {'customer_id': 'C1'}, 'name': 'get_customer_info', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 757, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': "{'name': 'John Doe', 'email': 'john@example.com', "
"'phone': '123-456-7890', 'orders': [{'id': 'O1', "
"'product': 'Widget A', 'quantity': 2, 'price': "
"19.99, 'status': 'Shipped'}, {'id': 'O2', "
"'product': 'Gadget B', 'quantity': 1, 'price': "
"49.99, 'status': 'Processing'}]}",
'tool_use_id': 'toolu_01G48VxPvsqRmUfRNWbz5JAf',
'type': 'tool_result'}],
'role': 'user'}
```
- Cancelling order O1
- Cancelling order O2
Now I can see that customer C1 (John Doe) has 2 orders:
- Order O1: Widget A (Status: Shipped)
- Order O2: Gadget B (Status: Processing)

Let me cancel both orders for you.
- id: `msg_01CWangwZyHeqyk5m8MeSwP4`
- content:
`[{'citations': None, 'text': 'Now I can see that customer C1 (John Doe) has 2 orders:\n- Order O1: Widget A (Status: Shipped)\n- Order O2: Gadget B (Status: Processing)\n\nLet me cancel both orders for you.', 'type': 'text'}, {'id': 'toolu_01Lm2DE8sU5kBASiTXDa77zP', 'input': {'order_id': 'O1'}, 'name': 'cancel_order', 'type': 'tool_use'}, {'id': 'toolu_01CQ7U8kyyvWRE8kTGfxVto5', 'input': {'order_id': 'O2'}, 'name': 'cancel_order', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 981, 'output_tokens': 154, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': 'True',
'tool_use_id': 'toolu_01Lm2DE8sU5kBASiTXDa77zP',
'type': 'tool_result'},
{ 'content': 'True',
'tool_use_id': 'toolu_01CQ7U8kyyvWRE8kTGfxVto5',
'type': 'tool_result'}],
'role': 'user'}
```
Perfect! I’ve successfully cancelled all orders for customer C1 (John Doe):
- ✓ Order O1 (Widget A) - Cancelled
- ✓ Order O2 (Gadget B) - Cancelled

Both orders have been cancelled successfully.
- id: `msg_0133SZjYiBzUkej4NSzChNWh`
- content:
`[{'citations': None, 'text': "Perfect! I've successfully cancelled all orders for customer C1 (John Doe):\n- ✓ Order O1 (Widget A) - Cancelled\n- ✓ Order O2 (Gadget B) - Cancelled\n\nBoth orders have been cancelled successfully.", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1200, 'output_tokens': 65, 'server_tool_use': None, 'service_tier': 'standard'}`
OK Claude thinks the orders were cancelled – let’s check one:
``` python
for o in chat.toolloop('What is the status of order O2?'): display(o)
```
- Retrieving order O2
Let me check the current status of order O2 for you.
- id: `msg_016cS3SURc48upqKf4hBLPMX`
- content:
`[{'citations': None, 'text': 'Let me check the current status of order O2 for you.', 'type': 'text'}, {'id': 'toolu_01Kzj8EBHHxbTAnrohSNt7vk', 'input': {'order_id': 'O2'}, 'name': 'get_order_details', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1277, 'output_tokens': 73, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': "{'id': 'O2', 'product': 'Gadget B', 'quantity': "
"1, 'price': 49.99, 'status': 'Cancelled'}",
'tool_use_id': 'toolu_01Kzj8EBHHxbTAnrohSNt7vk',
'type': 'tool_result'}],
'role': 'user'}
```
Order O2 is now showing a status of **Cancelled**. This confirms that
the cancellation we performed earlier was successful. The order details are:
- Order ID: O2
- Product: Gadget B
- Quantity: 1
- Price: $49.99
- Status: Cancelled
- id: `msg_01NLEeZkoDPTVyps2dJoYC4x`
- content:
`[{'citations': None, 'text': 'Order O2 is now showing a status of **Cancelled**. This confirms that the cancellation we performed earlier was successful. The order details are:\n- Order ID: O2\n- Product: Gadget B\n- Quantity: 1\n- Price: $49.99\n- Status: Cancelled', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1406, 'output_tokens': 71, 'server_tool_use': None, 'service_tier': 'standard'}`
Let’s see what happens if we run out of tool-loop steps:
``` python
def mydiv(a:float, b:float):
"Divide two numbers"
return a / b
```
``` python
chat = Chat(model, tools=[mydiv])
r = chat.toolloop('Please calculate this sequence using your tools: 43/23454; 652/previous result; 6843/previous result; 321/previous result', max_steps=2)
for o in r: display(o)
```
I’ll calculate this sequence step by step, using the result from each
division as the divisor for the next operation.
- id: `msg_012jpk6qp7u58WMkKv5VWwsv`
- content:
`[{'citations': None, 'text': "I'll calculate this sequence step by step, using the result from each division as the divisor for the next operation.", 'type': 'text'}, {'id': 'toolu_01TcqArsz5skzgeZSHACNpFL', 'input': {'a': 43, 'b': 23454}, 'name': 'mydiv', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 607, 'output_tokens': 95, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': '0.001833375969983798',
'tool_use_id': 'toolu_01TcqArsz5skzgeZSHACNpFL',
'type': 'tool_result'}],
'role': 'user'}
```
I was able to complete 2 out of 4 steps in the sequence before running
out of tool uses. Here’s what was calculated:

**Completed:**
1. 43 ÷ 23454 = 0.001833375969983798
2. 652 ÷ 0.001833375969983798 = 355628.0930232558

**Still needed:**
3. 6843 ÷ 355628.0930232558 = (not calculated)
4. 321 ÷ (result from step 3) = (not calculated)

To complete this sequence, you would need to:
- Divide 6843 by 355628.0930232558
- Then divide 321 by that result to get the final answer

Would you like me to continue with these remaining calculations?
- id: `msg_01GNzpAuTG1LtRm8z8rjnXV4`
- content:
`[{'citations': None, 'text': "I was able to complete 2 out of 4 steps in the sequence before running out of tool uses. Here's what was calculated:\n\n**Completed:**\n1. 43 ÷ 23454 = 0.001833375969983798\n2. 652 ÷ 0.001833375969983798 = 355628.0930232558\n\n**Still needed:**\n3. 6843 ÷ 355628.0930232558 = (not calculated)\n4. 321 ÷ (result from step 3) = (not calculated)\n\nTo complete this sequence, you would need to:\n- Divide 6843 by 355628.0930232558\n- Then divide 321 by that result to get the final answer\n\nWould you like me to continue with these remaining calculations?", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 875, 'output_tokens': 198, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
chat = Chat(model, tools=mydiv)
r = chat.toolloop('Try dividing 1 by 0 and see what the error result is')
for o in r: display(o)
```
I’ll try dividing 1 by 0 to see what happens:
- id: `msg_01Y8hq6xRjtB3jpnKUjcShHH`
- content:
`[{'citations': None, 'text': "I'll try dividing 1 by 0 to see what happens:", 'type': 'text'}, {'id': 'toolu_016iCAFGa523mQc3JKCiWQRH', 'input': {'a': 1, 'b': 0}, 'name': 'mydiv', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 592, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': 'Traceback (most recent call last):\n'
' File '
'"/Users/jhoward/aai-ws/toolslm/toolslm/funccall.py", '
'line 215, in call_func\n'
' try: return func(**inps)\n'
' ^^^^^^^^^^^^\n'
' File '
'"/Users/jhoward/aai-ws/claudette/claudette/core.py", '
'line 439, in wrapper\n'
' return func(*new_args, **new_kwargs)\n'
' ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n'
' File '
'"/var/folders/51/b2_szf2945n072c0vj2cyty40000gn/T/ipykernel_73559/246724137.py", '
'line 3, in mydiv\n'
' return a / b\n'
' ~~^~~\n'
'ZeroDivisionError: division by zero\n',
'tool_use_id': 'toolu_016iCAFGa523mQc3JKCiWQRH',
'type': 'tool_result'}],
'role': 'user'}
```
As expected, dividing by zero produces a **ZeroDivisionError** with the
message “division by zero”. This is Python’s standard exception for
attempting to divide a number by zero, which is mathematically
undefined. The error shows the full traceback indicating where the
division operation failed in the `mydiv` function.
- id: `msg_01RcdGpMqujYsp2zEh13aKvf`
- content:
`[{'citations': None, 'text': 'As expected, dividing by zero produces a **ZeroDivisionError** with the message "division by zero". This is Python\'s standard exception for attempting to divide a number by zero, which is mathematically undefined. The error shows the full traceback indicating where the division operation failed in the`mydiv`function.', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 882, 'output_tokens': 70, 'server_tool_use': None, 'service_tier': 'standard'}`
## Streaming
With `stream=True`, `toolloop` yields a mix of plain messages (tool
results and `Message` objects) and streaming iterators for the
assistant turns, so the consuming loop below type-checks each item
before printing.
``` python
orders, customers = _get_orders_customers()
```
``` python
chat = Chat(model, tools=tools)
r = chat.toolloop('Please cancel all orders for customer C1 for me.', stream=True)
for o in r:
if isinstance(o, (dict,Message,list)): print(o)
else:
for x in o: print(x, end='')
display(o.value)
```
I'll help you cancel all orders for customer C1. First, let me retrieve the customer's information to see what orders they have.- Retrieving customer C1
I’ll help you cancel all orders for customer C1. First, let me retrieve
the customer’s information to see what orders they have.
- id: `msg_01Qz5tYWK1jA8kete3Ub2dNt`
- content:
`[{'citations': None, 'text': "I'll help you cancel all orders for customer C1. First, let me retrieve the customer's information to see what orders they have.", 'type': 'text'}, {'id': 'toolu_019nveAUbHYAQ11nprLdWAQR', 'input': {'customer_id': 'C1'}, 'name': 'get_customer_info', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 757, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}`
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_019nveAUbHYAQ11nprLdWAQR', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}
Now I can see that customer C1 (John Doe) has 2 orders:
- Order O1: Widget A (Status: Shipped)
- Order O2: Gadget B (Status: Processing)
Let me cancel both orders for you.- Cancelling order O1
- Cancelling order O2
Now I can see that customer C1 (John Doe) has 2 orders:
- Order O1: Widget A (Status: Shipped)
- Order O2: Gadget B (Status: Processing)

Let me cancel both orders for you.
- id: `msg_01MhvyKhRtSdsCmTfbSuWwE7`
- content:
`[{'citations': None, 'text': 'Now I can see that customer C1 (John Doe) has 2 orders:\n- Order O1: Widget A (Status: Shipped)\n- Order O2: Gadget B (Status: Processing)\n\nLet me cancel both orders for you.', 'type': 'text'}, {'id': 'toolu_01DheUnWexZKYKSi211SgGLM', 'input': {'order_id': 'O1'}, 'name': 'cancel_order', 'type': 'tool_use'}, {'id': 'toolu_01HhVbewBrXzyGSfeWMeTEEt', 'input': {'order_id': 'O2'}, 'name': 'cancel_order', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 981, 'output_tokens': 154, 'server_tool_use': None, 'service_tier': 'standard'}`
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01DheUnWexZKYKSi211SgGLM', 'content': 'True'}, {'type': 'tool_result', 'tool_use_id': 'toolu_01HhVbewBrXzyGSfeWMeTEEt', 'content': 'True'}]}
Perfect! I've successfully cancelled all orders for customer C1 (John Doe):
- ✓ Order O1 (Widget A) - Cancelled
- ✓ Order O2 (Gadget B) - Cancelled
Both orders have been cancelled successfully.
Perfect! I’ve successfully cancelled all orders for customer C1 (John Doe):
- ✓ Order O1 (Widget A) - Cancelled
- ✓ Order O2 (Gadget B) - Cancelled

Both orders have been cancelled successfully.
- id: `msg_01NgJQktChmU3RrU4xUpXTKq`
- content:
`[{'citations': None, 'text': "Perfect! I've successfully cancelled all orders for customer C1 (John Doe):\n- ✓ Order O1 (Widget A) - Cancelled\n- ✓ Order O2 (Gadget B) - Cancelled\n\nBoth orders have been cancelled successfully.", 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1200, 'output_tokens': 65, 'server_tool_use': None, 'service_tier': 'standard'}`
## Async tool loop
------------------------------------------------------------------------
source
### AsyncChat.toolloop
``` python
def toolloop(
pr, # Prompt to pass to Claude
max_steps:int=10, # Maximum number of tool requests to loop through
cont_func:callable=noop, # Function that stops loop if returns False
final_prompt:str='You have no more tool uses. Please summarize your findings. If you did not complete your goal please tell the user what further work needs to be done so they can choose how best to proceed.', # Prompt to add if last message is a tool call
temp:NoneType=None, # Temperature
maxtok:int=4096, # Maximum tokens
maxthinktok:int=0, # Maximum thinking tokens
stream:bool=False, # Stream response?
prefill:str='', # Optional prefill to pass to Claude as start of its response
tool_choice:Union=None, # Optionally force use of some tool
):
```
*Add prompt `pr` to dialog and get a response from Claude, automatically
following up with `tool_use` messages*
Exported source
``` python
@patch
@delegates(AsyncChat.__call__)
def toolloop(
self: AsyncChat,
pr, # Prompt to pass to Claude
max_steps=10, # Maximum number of tool requests to loop through
cont_func: callable = noop, # Function that stops loop if returns False
final_prompt = _final_prompt, # Prompt to add if last message is a tool call
**kwargs
):
"Add prompt `pr` to dialog and get a response from Claude, automatically following up with `tool_use` messages"
@save_iter
async def _f(o):
init_n = len(self.h)
r = await self(pr, **kwargs)
yield r
if len(self.last)>1: yield self.last[1]
for i in range(max_steps-1):
if self.c.stop_reason != 'tool_use': break
r = await self(final_prompt if i==max_steps-2 else None, **kwargs)
yield r
if len(self.last)>1: yield self.last[1]
if not cont_func(*self.h[-3:]): break
o.value = self.h[init_n+1:]
return _f()
```
``` python
orders, customers = _get_orders_customers()
```
``` python
tools = [get_customer_info, get_order_details, cancel_order]
chat = AsyncChat(model, tools=tools)
r = chat.toolloop('Can you tell me the email address for customer C1?')
async for o in r: print(o)
```
- Retrieving customer C1
Message(id='msg_01F1ruk8y7TsTrhpWTkBc67e', content=[ToolUseBlock(id='toolu_01LJ2mkQDqRdToAFHbCosv26', input={'customer_id': 'C1'}, name='get_customer_info', type='tool_use')], model='claude-sonnet-4-5-20250929', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 757; Out: 58; Cache create: 0; Cache read: 0; Total Tokens: 815; Search: 0)
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01LJ2mkQDqRdToAFHbCosv26', 'content': "{'name': 'John Doe', 'email': 'john@example.com', 'phone': '123-456-7890', 'orders': [{'id': 'O1', 'product': 'Widget A', 'quantity': 2, 'price': 19.99, 'status': 'Shipped'}, {'id': 'O2', 'product': 'Gadget B', 'quantity': 1, 'price': 49.99, 'status': 'Processing'}]}"}]}
Message(id='msg_019Z6rMioA6RExGgXKGnusiM', content=[TextBlock(citations=None, text='The email address for customer C1 (John Doe) is **john@example.com**.', type='text')], model='claude-sonnet-4-5-20250929', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 953; Out: 24; Cache create: 0; Cache read: 0; Total Tokens: 977; Search: 0)
``` python
pprint(r.value)
```
[{'content': [{'id': 'toolu_01LJ2mkQDqRdToAFHbCosv26',
'input': {'customer_id': 'C1'},
'name': 'get_customer_info',
'type': 'tool_use'}],
'role': 'assistant'},
{'content': [{'content': "{'name': 'John Doe', 'email': 'john@example.com', "
"'phone': '123-456-7890', 'orders': [{'id': 'O1', "
"'product': 'Widget A', 'quantity': 2, 'price': "
"19.99, 'status': 'Shipped'}, {'id': 'O2', "
"'product': 'Gadget B', 'quantity': 1, 'price': "
"49.99, 'status': 'Processing'}]}",
'tool_use_id': 'toolu_01LJ2mkQDqRdToAFHbCosv26',
'type': 'tool_result'}],
'role': 'user'},
{'content': [{'citations': {},
'text': 'The email address for customer C1 (John Doe) is '
'**john@example.com**.',
'type': 'text'}],
'role': 'assistant'}]
## Code interpreter
Here is an example of using `toolloop` to implement a simple code
interpreter with additional tools.
``` python
from toolslm.shell import get_shell
from fastcore.meta import delegates
import traceback
```
``` python
@delegates()
class CodeChat(Chat):
imps = 'os, warnings, time, json, re, math, collections, itertools, functools, dateutil, datetime, string, types, copy, pprint, enum, numbers, decimal, fractions, random, operator, typing, dataclasses'
def __init__(self, model: Optional[str] = None, ask:bool=True, **kwargs):
super().__init__(model=model, **kwargs)
self.ask = ask
self.tools.append(self.run_cell)
self.shell = get_shell()
self.shell.run_cell('import '+self.imps)
```
`CodeChat` takes one additional parameter beyond what we pass to
[`Chat`](https://claudette.answer.ai/core.html#chat): `ask` – if that’s
`True`, we’ll prompt the user before running code.
``` python
@patch
def run_cell(
self:CodeChat,
code:str, # Code to execute in persistent IPython session
)->str:
"""Asks user for permission, and if provided, executes python `code` using persistent IPython session.
Returns: Result of expression on last line (if exists); '#DECLINED#' if user declines request to execute"""
confirm = f'Press Enter to execute, or enter "n" to skip?\n```\n{code}\n```\n'
if self.ask and input(confirm): return '#DECLINED#'
try: res = self.shell.run_cell(code)
except Exception as e: return traceback.format_exc()
return res.stdout if res.result is None else res.result
```
We simply pass requests to run code along to the shell’s implementation.
Claude often prints results instead of leaving them as the last
expression, so we capture stdout in those cases.
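To see this return behavior directly, we can call `run_cell` ourselves, with `ask=False` so no confirmation is requested – a quick check outside of any chat, not part of the original example:

``` python
cc = CodeChat(model, ask=False)
cc.run_cell('1+2')          # last line is an expression: returns res.result, i.e. 3
cc.run_cell('print("hi")')  # res.result is None: returns captured stdout, 'hi\n'
```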
``` python
sp = f'''You are a knowledgeable assistant. Do not use tools unless needed.
Don't do complex calculations yourself -- use code for them.
The following modules are pre-imported for `run_cell` automatically:
{CodeChat.imps}
Never mention what tools you are using. Note that `run_cell` interpreter state is *persistent* across calls.
If a tool returns `#DECLINED#`, report to the user that the attempt was declined and no further progress can be made.
In that case, do *not* attempt to run any further code -- stop execution *IMMEDIATELY* and tell the user it was declined.
When using a tool, *ALWAYS* before every use of every tool, tell the user what you will be doing and why.'''
```
``` python
def get_user()->str:
"Get the username of the user running this session"
print("Looking up username")
return 'Jeremy'
```
In order to test out multi-stage tool use, we create a mock function
that Claude can call to get the current username.
``` python
model = models[1]
chat = CodeChat(model, tools=[get_user], sp=sp, ask=True, temp=0.3)
```
Providing a callable as `toolloop`’s `trace_func` lets us print out
information during the loop. Its `cont_func` parameter lets us provide a
function which, if it returns `False`, stops the loop – it’s called with
the last three messages of the history, as seen in the source above:
``` python
def _cont_decline(call, resp, asst): return resp['content'][0].get('content') != '#DECLINED#'
```
Now we can try our code interpreter. We start by asking for a function
to be created, which we’ll use in the next prompt to test that the
interpreter state is persistent.
``` python
pr = '''Create a 1-line function `checksum` for a string `s`,
that multiplies together the ascii values of each character in `s` using `reduce`.'''
for o in chat.toolloop(pr, cont_func=_cont_decline): display(o)
```
I’ll create a 1-line `checksum` function that uses `reduce` to multiply
the ASCII values of each character in a string.
- id: `msg_01NfL3YNWTZ2Chst88nr3iPu`
- content:
`[{'citations': None, 'text': "I'll create a 1-line`checksum`function that uses`reduce`to multiply the ASCII values of each character in a string.", 'type': 'text'}, {'id': 'toolu_01GgTqM917BzHJ4MjpwZFwTF', 'input': {'code': '\nchecksum = lambda s: functools.reduce(operator.mul, (ord(c) for c in s), 1)\n\n# Test it\nprint(checksum("abc"))\nprint(checksum("hello"))\nprint(checksum("A"))\n'}, 'name': 'run_cell', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 927, 'output_tokens': 141, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': '941094\n13599570816\n65\n',
'tool_use_id': 'toolu_01GgTqM917BzHJ4MjpwZFwTF',
'type': 'tool_result'}],
'role': 'user'}
```
Perfect! I’ve created the `checksum` function as a one-liner that:

- Uses `functools.reduce` with `operator.mul` to multiply values together
- Converts each character to its ASCII value using `ord(c)`
- Uses a generator expression `(ord(c) for c in s)` to iterate through the string
- Starts with an initial value of `1` (the identity for multiplication)

The test shows it working correctly:

- `"abc"` → 97 × 98 × 99 = 941,094
- `"hello"` → 104 × 101 × 108 × 108 × 111 = 13,599,570,816
- `"A"` → 65
- id: `msg_01L1Et5ywM28AckqGct1Ce4A`
- content:
`[{'citations': None, 'text': 'Perfect! I\'ve created the `checksum` function as a one-liner that:\n- Uses `functools.reduce` with `operator.mul` to multiply values together\n- Converts each character to its ASCII value using `ord(c)`\n- Uses a generator expression `(ord(c) for c in s)` to iterate through the string\n- Starts with an initial value of `1` (the identity for multiplication)\n\nThe test shows it working correctly:\n- `"abc"` → 97 × 98 × 99 = 941,094\n- `"hello"` → 104 × 101 × 108 × 108 × 111 = 13,599,570,816\n- `"A"` → 65', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1090, 'output_tokens': 175, 'server_tool_use': None, 'service_tier': 'standard'}`
By asking for a calculation to be done on the username, we force it to
use multiple steps:
``` python
pr = 'Use it to get the checksum of the username of this session.'
for o in chat.toolloop(pr): display(o)
```
Looking up username
I’ll get the username of this session and then calculate its checksum.
- id: `msg_01HrZAji8SwDbRhbb79cUZHo`
- content:
`[{'citations': None, 'text': "I'll get the username of this session and then calculate its checksum.", 'type': 'text'}, {'id': 'toolu_01MzJHgGgQdJ5Pc9e5s35ddF', 'input': {}, 'name': 'get_user', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1282, 'output_tokens': 52, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': 'Jeremy',
'tool_use_id': 'toolu_01MzJHgGgQdJ5Pc9e5s35ddF',
'type': 'tool_result'}],
'role': 'user'}
```
Now I’ll calculate the checksum for “Jeremy”:
- id: `msg_01Fdq1ViEfUg6kzGPrYeeVa6`
- content:
`[{'citations': None, 'text': 'Now I\'ll calculate the checksum for "Jeremy":', 'type': 'text'}, {'id': 'toolu_016XwiSjzxo2sgJTiZw2gFH8', 'input': {'code': '\nusername = "Jeremy"\nresult = checksum(username)\nprint(f"Username: {username}")\nprint(f"Checksum: {result}")\n'}, 'name': 'run_cell', 'type': 'tool_use'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1347, 'output_tokens': 101, 'server_tool_use': None, 'service_tier': 'standard'}`
``` python
{ 'content': [ { 'content': 'Username: Jeremy\nChecksum: 1134987783204\n',
'tool_use_id': 'toolu_016XwiSjzxo2sgJTiZw2gFH8',
'type': 'tool_result'}],
'role': 'user'}
```
The checksum of the username “Jeremy” is **1,134,987,783,204**.
This is calculated by multiplying the ASCII values: 74 × 101 × 114 × 101
× 109 × 121 = 1,134,987,783,204
- id: `msg_01CfuVuK1hbYq6o61cv7ddp8`
- content:
`[{'citations': None, 'text': 'The checksum of the username "Jeremy" is **1,134,987,783,204**.\n\nThis is calculated by multiplying the ASCII values: 74 × 101 × 114 × 101 × 109 × 121 = 1,134,987,783,204', 'type': 'text'}]`
- model: `claude-sonnet-4-5-20250929`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1475, 'output_tokens': 69, 'server_tool_use': None, 'service_tier': 'standard'}`
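We can verify the arithmetic directly – a quick sanity check, computed the same way as the `checksum` function defined above:

``` python
import functools, operator
functools.reduce(operator.mul, (ord(c) for c in "Jeremy"), 1)
```

1134987783204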