
Basic Usage

During a persona session, you can make the persona speak a specific response using the talk method. This is useful when the user interacts with a UI element, or when you use your own LLM instead of Anam’s built-in models.
Both talk() and createTalkMessageStream() require an active streaming session. Call stream() or streamToVideoElement() first.
await anamClient.talk("Hello, how are you?");
This sends the text to the persona, which then speaks it aloud.
To learn more about using the talk method with your own LLM, see the Custom LLMs guide.

Streaming Talk Input

For lower latency, stream messages to the persona in chunks. This works well when streaming output from a custom LLM.
const talkMessageStream = anamClient.createTalkMessageStream();

const chunks = ["He", "l", "lo", ", how are you?"];

for (const chunk of chunks) {
  if (talkMessageStream.isActive()) {
    await talkMessageStream.streamMessageChunk(
      chunk,
      chunk === chunks[chunks.length - 1] // endOfSpeech: true on last chunk
    );
  }
}
Each TalkMessageStream represents one conversation turn. Once the turn ends, create a new stream for the next turn.
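Since each stream covers exactly one turn, a common pattern is a small helper that splits a turn's text into chunks and flags the final one, so endOfSpeech lands on the last chunk automatically. A minimal sketch (the chunkTurn helper and its chunk size are illustrative, not part of the SDK):

```typescript
// Illustrative helper (not part of the Anam SDK): split one turn's text
// into fixed-size chunks and mark the last chunk as the end of speech.
function chunkTurn(
  text: string,
  size = 8
): Array<{ content: string; endOfSpeech: boolean }> {
  const parts: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    parts.push(text.slice(i, i + size));
  }
  return parts.map((content, i) => ({
    content,
    endOfSpeech: i === parts.length - 1, // true only on the final chunk
  }));
}

// Per turn: create a fresh stream, send the chunks, and let the last
// chunk end the turn.
// const talkMessageStream = anamClient.createTalkMessageStream();
// for (const { content, endOfSpeech } of chunkTurn(reply)) {
//   await talkMessageStream.streamMessageChunk(content, endOfSpeech);
// }
```

For the next turn, call createTalkMessageStream() again rather than reusing the ended stream.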

Available Methods

The TalkMessageStream object provides these methods:
streamMessageChunk(content, endOfSpeech): Send a text chunk. Set endOfSpeech: true on the final chunk.
endMessage(): End the stream without sending more content. Alternative to endOfSpeech: true.
isActive(): Returns true if the stream can still accept chunks.
getState(): Returns the current state: UNSTARTED, STREAMING, INTERRUPTED, or ENDED.
getCorrelationId(): Returns the correlation ID for this stream.

Ending a Stream

End a conversation turn in one of two ways:

Option 1: Set endOfSpeech on the last chunk
await talkMessageStream.streamMessageChunk("final text", true);
Option 2: Call endMessage() separately
await talkMessageStream.streamMessageChunk("final text", false);
await talkMessageStream.endMessage();

Handling Interruptions

When a user speaks during a stream, the SDK emits AnamEvent.TALK_STREAM_INTERRUPTED and closes the stream:
import { AnamEvent } from "@anam-ai/js-sdk";

anamClient.addListener(AnamEvent.TALK_STREAM_INTERRUPTED, (event) => {
  console.log("Stream interrupted:", event.correlationId);
  // Handle the interruption - e.g., stop your LLM generation
});

Checking Stream State

Check whether a stream can still accept chunks:
if (talkMessageStream.isActive()) {
  await talkMessageStream.streamMessageChunk(chunk, false);
}

// Or check the specific state
const state = talkMessageStream.getState();
// Returns: UNSTARTED | STREAMING | INTERRUPTED | ENDED

Error Handling

The streamMessageChunk method throws an error if the stream is not active:
try {
  await talkMessageStream.streamMessageChunk("text", false);
} catch (error) {
  // Stream is in INTERRUPTED or ENDED state
  console.error("Cannot send chunk:", error.message);
}

Correlation IDs

Attach a correlation ID to track streams; this is especially useful for matching interruption events:
const correlationId = "request-123";
const talkMessageStream = anamClient.createTalkMessageStream(correlationId);

// Later, when handling interruptions:
anamClient.addListener(AnamEvent.TALK_STREAM_INTERRUPTED, (event) => {
  if (event.correlationId === correlationId) {
    // This specific stream was interrupted
  }
});
Use unique correlation IDs for each TalkMessageStream. The ID appears in TALK_STREAM_INTERRUPTED events, helping you identify which stream was interrupted.
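One way to guarantee uniqueness is to generate a fresh ID for each stream, for example with randomUUID. A sketch (the newCorrelationId helper and its prefix are illustrative; any unique string works):

```typescript
import { randomUUID } from "crypto"; // Node; browsers expose crypto.randomUUID()

// Illustrative helper: generate a fresh correlation ID per turn's stream.
function newCorrelationId(prefix = "turn"): string {
  return `${prefix}-${randomUUID()}`;
}

// const talkMessageStream =
//   anamClient.createTalkMessageStream(newCorrelationId());
```

Because every stream gets a distinct ID, a TALK_STREAM_INTERRUPTED event can be matched back to exactly one stream.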

Next Steps

Audio Control

Learn how to control audio input in your Anam sessions