
Transparency Mode

Set model=transparency when connecting to wss://transfer.navtalk.ai/wss/v2/realtime-chat. Upon connection, you receive a conversation.connected.success event containing the session ID and ICE server configuration. No AI response is generated; the avatar simply mirrors whatever the user says.
{
  "type": "input_audio_buffer.append",
  "audio": "x//z//X/JQAwA/"
}
Push PCM16 audio chunks like the one above; the server acknowledges each buffer, and the spoken turn plays back over the WebRTC channel. After receiving the conversation.connected.success event with the session ID, establish the WebRTC connection over the same unified WebSocket (in v2, WebRTC signaling is handled on the same connection). The avatar animates the user's sentence without any AI reasoning.

Use transparency mode for mirror demos, caption QA, or scripted playback. To resume normal AI replies, close the transparency session and reconnect without model=transparency.
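Building the append message amounts to base64-encoding a raw PCM16 chunk and wrapping it in the JSON envelope shown above. A minimal sketch, assuming 16 kHz mono little-endian samples (the sample rate and endianness are assumptions, not stated above):

```python
import base64
import json
import struct

def append_message(pcm16_bytes: bytes) -> str:
    """Wrap a raw PCM16 chunk in an input_audio_buffer.append message."""
    return json.dumps({
        "type": "input_audio_buffer.append",
        "audio": base64.b64encode(pcm16_bytes).decode("ascii"),
    })

# 10 ms of silence at an assumed 16 kHz mono = 160 zero samples.
chunk = struct.pack("<160h", *([0] * 160))
msg = append_message(chunk)
```

Each resulting string is sent as a text frame on the already-open WebSocket; the server treats successive chunks as one continuous audio buffer.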