Beta Feature: The Realtime Assistants API is currently in beta. This tutorial uses the latest features available.
What We’ll Build
In this tutorial, we’ll create a voice assistant that can:
- Have natural conversations with users
- Execute custom tools (weather lookup, jokes)
- Update its behavior dynamically
- Visualize voice activity in real-time
Prerequisites
Before starting, you’ll need:
- Node.js 16+ installed
- An UpliftAI account (sign up for free)
- Basic React knowledge
Step 1: Create Your Assistant
First, create an assistant using the API or the UpliftAI platform. Note the assistant's ID; you'll enter it in the demo app to connect.
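If you go the API route, the call might look like the sketch below. The endpoint path and request fields (`name`, `instructions`) are assumptions for illustration, not confirmed API details; check the API reference for the exact contract.

```javascript
// Hypothetical sketch: the create endpoint and its request fields are assumptions.
// The request is built separately from the call so it is easy to inspect or test.
function buildCreateAssistantRequest(apiKey, { name, instructions }) {
  return {
    url: 'https://api.upliftai.org/v1/realtime-assistants',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ name, instructions }),
    },
  };
}

async function createAssistant(apiKey, config) {
  const { url, options } = buildCreateAssistantRequest(apiKey, config);
  const response = await fetch(url, options);
  if (!response.ok) throw new Error(`Create failed: ${response.status}`);
  return response.json(); // expected to include the new assistant's ID
}
```

Keep the API key server-side; this call should never run in the browser.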
Step 2: Set Up Your React Project
Create a new React application and install the required dependencies:
```bash
# Create a new React app
npx create-react-app voice-assistant-demo
cd voice-assistant-demo

# Install the UpliftAI SDK and dependencies
npm install @upliftai/assistants-react @livekit/components-react livekit-client
```
Step 3: Create the Connection Logic
Create a component to handle assistant connection:
```jsx
import { useState } from 'react';
import { UpliftAIRoom } from '@upliftai/assistants-react';
import AssistantView from './AssistantView';
import './App.css';

function App() {
  const [sessionData, setSessionData] = useState(null);
  const [assistantId, setAssistantId] = useState('');
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  const connectToAssistant = async () => {
    if (!assistantId.trim()) {
      setError('Please enter an Assistant ID');
      return;
    }

    setLoading(true);
    setError(null);

    try {
      // For public assistants, we can call directly from the frontend
      const response = await fetch(
        `https://api.upliftai.org/v1/realtime-assistants/${assistantId}/createPublicSession`,
        {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            participantName: 'Demo User',
          }),
        }
      );

      if (!response.ok) {
        throw new Error(`Failed to create session: ${response.status}`);
      }

      const data = await response.json();
      setSessionData(data);
    } catch (err) {
      setError(err.message);
    } finally {
      setLoading(false);
    }
  };

  if (!sessionData) {
    return (
      <div className="connect-screen">
        <h1>Voice Assistant Demo</h1>
        <input
          type="text"
          placeholder="Enter Assistant ID"
          value={assistantId}
          onChange={(e) => setAssistantId(e.target.value)}
        />
        <button onClick={connectToAssistant} disabled={loading}>
          {loading ? 'Connecting...' : 'Connect'}
        </button>
        {error && <p className="error">{error}</p>}
      </div>
    );
  }

  return (
    <UpliftAIRoom
      token={sessionData.token}
      serverUrl={sessionData.wsUrl}
      connect={true}
      audio={true}
      video={false}
    >
      <AssistantView />
    </UpliftAIRoom>
  );
}

export default App;
```
Step 4: Build the Assistant Interface
Create the main assistant view with voice visualization:
```jsx
import {
  useUpliftAIRoom,
  useVoiceAssistant,
  BarVisualizer,
  TrackToggle,
  DisconnectButton,
  AudioTrack,
  useTracks,
} from '@upliftai/assistants-react';
import { Track } from 'livekit-client';

function AssistantView() {
  const { isConnected, agentParticipant } = useUpliftAIRoom();
  const { state } = useVoiceAssistant();

  // Get audio tracks for visualization
  const tracks = useTracks([Track.Source.Microphone], {
    onlySubscribed: true,
  });
  const agentTrack = tracks.find((t) => !t.participant.isLocal);

  return (
    <div className="assistant-container">
      {/* Connection status */}
      <div className="status-bar">
        <span className={`status ${isConnected ? 'connected' : 'disconnected'}`}>
          {isConnected ? '🟢 Connected' : '🔴 Disconnected'}
        </span>
        {agentParticipant && (
          <span>Agent: {agentParticipant.identity}</span>
        )}
      </div>

      {/* Voice visualization */}
      <div className="visualizer">
        {agentTrack && (
          <>
            <AudioTrack trackRef={agentTrack} />
            <BarVisualizer
              state={state}
              trackRef={agentTrack}
              barCount={20}
              className="bar-visualizer"
            />
          </>
        )}

        {/* Assistant state */}
        <div className="agent-state">
          {state === 'speaking' && '🗣️ Speaking...'}
          {state === 'thinking' && '🤔 Thinking...'}
          {state === 'listening' && '👂 Listening...'}
        </div>
      </div>

      {/* Controls */}
      <div className="controls">
        <TrackToggle source={Track.Source.Microphone}>
          🎤 Microphone
        </TrackToggle>
        <DisconnectButton>
          End Call
        </DisconnectButton>
      </div>
    </div>
  );
}

export default AssistantView;
```
Step 5: Add Custom Tools
Extend your assistant with custom functionality by defining tools it can call during a conversation:
```tsx
import {
  UpliftAIRoom,
  ToolConfig,
} from '@upliftai/assistants-react';

// Define custom tools
const customTools: ToolConfig[] = [
  {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'City and state, e.g., San Francisco, CA',
        },
      },
      required: ['location'],
    },
    timeout: 10,
    handler: async (data) => {
      const payload = JSON.parse(data.payload);
      const { location } = payload.arguments.raw_arguments;

      // Simulate a weather API call
      const weather = {
        location,
        temperature: Math.floor(Math.random() * 30 + 50),
        condition: ['sunny', 'cloudy', 'rainy'][Math.floor(Math.random() * 3)],
      };

      return JSON.stringify({
        result: weather,
        presentationInstructions:
          `The weather in ${location} is ${weather.temperature}°F and ${weather.condition}`,
      });
    },
  },
  {
    name: 'tell_joke',
    description: 'Tell a random joke',
    parameters: {
      type: 'object',
      properties: {},
      required: [],
    },
    timeout: 5,
    handler: async () => {
      const jokes = [
        "Why don't scientists trust atoms? Because they make up everything!",
        "What do you call a bear with no teeth? A gummy bear!",
        "Why did the math book look so sad? It had too many problems!",
      ];
      const joke = jokes[Math.floor(Math.random() * jokes.length)];
      return JSON.stringify({
        joke,
        presentationInstructions: joke,
      });
    },
  },
];

function EnhancedAssistant({ sessionData }) {
  return (
    <UpliftAIRoom
      token={sessionData.token}
      serverUrl={sessionData.wsUrl}
      connect={true}
      audio={true}
      video={false}
      tools={customTools} // Add tools here
    >
      {/* AssistantView from Step 4, or an extended version of it */}
      <AssistantViewWithTools />
    </UpliftAIRoom>
  );
}
```
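Because each handler receives a JSON-string payload and returns a JSON string, you can unit-test handlers without a live session. The payload shape below mirrors the `get_weather` handler's parsing (`arguments.raw_arguments`); treat it as illustrative rather than a documented contract.

```javascript
// A get_weather-style handler, exercised outside a live session.
// The payload shape mirrors the handler above (arguments.raw_arguments);
// it is illustrative, not a documented wire format.
const getWeatherHandler = async (data) => {
  const payload = JSON.parse(data.payload);
  const { location } = payload.arguments.raw_arguments;
  const weather = { location, temperature: 72, condition: 'sunny' };
  return JSON.stringify({
    result: weather,
    presentationInstructions:
      `The weather in ${location} is ${weather.temperature}°F and ${weather.condition}`,
  });
};

async function main() {
  // Simulate the tool call the assistant would make
  const fakeCall = {
    payload: JSON.stringify({
      arguments: { raw_arguments: { location: 'Lahore' } },
    }),
  };
  const reply = JSON.parse(await getWeatherHandler(fakeCall));
  console.log(reply.result.location); // "Lahore"
}
main();
```

Testing handlers this way catches payload-parsing mistakes before they surface as silent tool failures mid-conversation.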
Step 6: Dynamic Instruction Updates
Add the ability to change assistant behavior on the fly:
```jsx
import { useState } from 'react';
import { useUpliftAIRoom } from '@upliftai/assistants-react';

function InstructionManager() {
  const { updateInstruction } = useUpliftAIRoom();
  const [instructions, setInstructions] = useState('');

  const handleUpdate = async () => {
    try {
      await updateInstruction(instructions);
      alert('Instructions updated!');
    } catch (error) {
      console.error('Failed to update:', error);
    }
  };

  return (
    <div className="instruction-manager">
      <h3>Customize Behavior</h3>
      <textarea
        value={instructions}
        onChange={(e) => setInstructions(e.target.value)}
        placeholder="e.g., 'Speak like a pirate' or 'Be extra helpful with math'"
        rows={3}
      />
      <button onClick={handleUpdate}>
        Update Instructions
      </button>
    </div>
  );
}
```
Step 7: Add Styling
Create a simple stylesheet (the App.css imported in Step 3) for your assistant:
```css
.connect-screen {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  min-height: 100vh;
  gap: 20px;
}

.connect-screen input {
  padding: 10px;
  font-size: 16px;
  border: 1px solid #ddd;
  border-radius: 4px;
  width: 300px;
}

.connect-screen button {
  padding: 10px 20px;
  font-size: 16px;
  background: #16A34A;
  color: white;
  border: none;
  border-radius: 4px;
  cursor: pointer;
}

.connect-screen button:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

.assistant-container {
  max-width: 800px;
  margin: 0 auto;
  padding: 20px;
}

.status-bar {
  display: flex;
  justify-content: space-between;
  padding: 10px;
  background: #f3f4f6;
  border-radius: 8px;
  margin-bottom: 20px;
}

.status.connected {
  color: #16A34A;
}

.status.disconnected {
  color: #dc2626;
}

.visualizer {
  background: #1f2937;
  border-radius: 8px;
  padding: 40px;
  margin: 20px 0;
  text-align: center;
}

.bar-visualizer {
  height: 100px;
  margin: 20px 0;
}

.agent-state {
  color: white;
  font-size: 18px;
  margin-top: 20px;
}

.controls {
  display: flex;
  gap: 10px;
  justify-content: center;
}

.controls button {
  padding: 10px 20px;
  font-size: 16px;
  border-radius: 4px;
  cursor: pointer;
}

.error {
  color: #dc2626;
  margin-top: 10px;
}
```
Step 8: Test Your Assistant
1. Start your development server: `npm start`
2. Open http://localhost:3000 in your browser
3. Enter your Assistant ID and click Connect
4. Start talking! Try:
   - "Hello, how are you?"
   - "What's the weather in New York?"
   - "Tell me a joke"
   - "Can you help me with math?"
Production Considerations
When moving to production, consider:
Security
For private assistants, create sessions from your backend so your API key never reaches the browser:

```javascript
// Backend session creation for private assistants
app.post('/api/create-session', authenticate, async (req, res) => {
  const response = await fetch(
    `https://api.upliftai.org/v1/realtime-assistants/${ASSISTANT_ID}/createSession`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.UPLIFTAI_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        participantName: req.user.name,
      }),
    }
  );

  const sessionData = await response.json();
  res.json(sessionData);
});
```
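On the frontend, you would then request session credentials from your own route instead of calling `createPublicSession` directly. A minimal sketch, assuming the `/api/create-session` path from the backend route above and the `{ token, wsUrl }` shape that `UpliftAIRoom` consumes:

```javascript
// Sketch: fetch session credentials from your own backend route
// (the /api/create-session path matches the Express handler above;
// treat the exact path and response shape as assumptions for your app).
async function fetchSession() {
  const response = await fetch('/api/create-session', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
  });
  if (!response.ok) {
    throw new Error(`Session request failed: ${response.status}`);
  }
  return response.json(); // { token, wsUrl } as passed to <UpliftAIRoom>
}
```

The browser never sees your UpliftAI API key; it only receives a short-lived session token scoped to one conversation.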
Error Handling
```jsx
// Surface connection problems in the UI
const { error, connectionState } = useConnectionState();

if (error) {
  return <ErrorView error={error} onRetry={reconnect} />;
}
```
```tsx
// Production-ready tool with error handling
{
  name: 'get_custom_data',
  description: 'Query user data',
  parameters: { /* ... */ },
  handler: async (data) => {
    try {
      const result = await query_your_api(data);
      return JSON.stringify({ result });
    } catch (error) {
      console.error('Tool error:', error);
      return JSON.stringify({
        error: 'Unable to complete request',
        presentationInstructions: 'I encountered an error. Please try again.',
      });
    }
  },
}
```
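To avoid repeating the try/catch in every tool, you could wrap handlers in a small helper. The helper name `withErrorHandling` is our own, not part of the SDK:

```javascript
// Hypothetical helper (not part of the SDK): wraps a tool handler so a
// thrown error becomes a well-formed JSON error reply instead of a rejection.
function withErrorHandling(handler) {
  return async (data) => {
    try {
      return await handler(data);
    } catch (error) {
      console.error('Tool error:', error);
      return JSON.stringify({
        error: 'Unable to complete request',
        presentationInstructions: 'I encountered an error. Please try again.',
      });
    }
  };
}

// Usage: wrap the raw handler when defining a tool.
const safeHandler = withErrorHandling(async () => {
  throw new Error('backend down');
});
```

This keeps the error-reply format consistent across tools, so the assistant always has something coherent to say when a backend call fails.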
Next Steps
- Explore Advanced Features: learn about building complex tools and integrations
- View Complete Example: check out the full example with all features
- SDK Reference: deep dive into the React SDK documentation
- Deploy Your Assistant: learn how to deploy to production
Troubleshooting
- Verify your Assistant ID is correct
- Ensure the assistant is marked as public
- Check the browser console for errors
- Verify microphone permissions are granted
- Check browser audio permissions
- Verify the STT, LLM, and TTS configuration in your assistant settings
- Make sure the referenced model IDs actually exist
- Contact us at founders@upliftai.org; we'd love to improve the experience or fix any issues you run into
- You can also just tell us about your use case!