Jobs and takes
Jobs and takes are the core concepts that power the Move API workflow. Understanding these concepts is essential for building effective motion capture applications.
What is a job?
A job is a processing task that converts video input into motion capture data. Think of it as a "work order" that tells the Move API what to process and how to process it.
Job lifecycle
Every job follows this lifecycle:
- Created - Job is submitted with video files and parameters
- Pending - Job is queued for processing
- Processing - AI models are analyzing the video
- Completed - Processing finished, take is ready
- Failed - Processing encountered an error
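The lifecycle above can be modeled as a small set of states with two terminal ones. This is an illustrative sketch, not part of the Move API client; the status strings follow the list above:

```python
from enum import Enum

class JobStatus(str, Enum):
    """Lifecycle states a job moves through, per the list above."""
    CREATED = "created"
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"

# Terminal states: once reached, the job will not change again.
TERMINAL = {JobStatus.COMPLETED, JobStatus.FAILED}

def is_done(status: str) -> bool:
    """Return True when polling can stop."""
    return JobStatus(status) in TERMINAL
```

A polling loop can call `is_done(job.status)` instead of checking each terminal status separately.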
Job properties
Each job contains:
```json
{
  "id": "job_123456",
  "status": "completed",
  "model": "s1",
  "created_at": "2024-01-15T10:30:00Z",
  "completed_at": "2024-01-15T10:35:00Z",
  "input": {
    "videos": ["video1.mp4", "video2.mp4"],
    "parameters": {...}
  },
  "output": {
    "take_id": "take_789012"
  }
}
```
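Because the job record is plain JSON, extracting the resulting take is a one-liner. A minimal sketch using a trimmed version of the sample payload above (field names as shown; everything else is illustrative):

```python
import json

# A trimmed version of the sample job record above.
job_json = """
{
  "id": "job_123456",
  "status": "completed",
  "output": {"take_id": "take_789012"}
}
"""

job = json.loads(job_json)

# Only completed jobs carry a take_id in their output.
take_id = job["output"]["take_id"] if job["status"] == "completed" else None
print(take_id)  # take_789012
```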
What is a take?
A take is the processed motion capture data output from a completed job. It contains the 3D skeletal animation that can be used in games, animations, or analysis.
Take contents
A take includes:
- Skeletal Data: 3D positions of body keypoints
- Frame Data: Motion capture data for each frame of the original video
- Metadata: Information about the capture (duration, frame rate, etc.)
- Export Formats: Ready-to-use files (FBX, BVH, USDC, USDZ, GLB, Blend, C3D, JSON, CSV, Render Video, Sync Data)
Take properties
```json
{
  "id": "take_789012",
  "job_id": "job_123456",
  "duration": 5.2,
  "frame_count": 156,
  "frame_rate": 30,
  "model_used": "s1",
  "exports": {
    "fbx": "https://api.move.ai/exports/take_789012.fbx",
    "bvh": "https://api.move.ai/exports/take_789012.bvh",
    "json": "https://api.move.ai/exports/take_789012.json"
  }
}
```
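Note that the take's metadata is internally consistent: `duration` equals `frame_count` divided by `frame_rate` (156 / 30 = 5.2 s). A quick sanity check, assuming those field names:

```python
take = {"duration": 5.2, "frame_count": 156, "frame_rate": 30}

def duration_consistent(take: dict, tolerance: float = 1e-6) -> bool:
    """Check that duration matches frame_count / frame_rate."""
    return abs(take["frame_count"] / take["frame_rate"] - take["duration"]) < tolerance

print(duration_consistent(take))  # True
```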
Working with jobs
Creating a job
```python
from move_ai import MoveAI

client = MoveAI(api_key="your-api-key")

# Upload video first
video_id = client.files.upload("dance_video.mp4")

# Create job
job = client.jobs.create(
    model="s1",
    videos=[video_id],
    name="Dance Performance"
)
print(f"Job created: {job.id}")
```
Monitoring job status
```python
# Check job status
job = client.jobs.get(job_id)

if job.status == "completed":
    print("Job completed! Take ready for download.")
elif job.status == "processing":
    print("Job still processing...")
elif job.status == "failed":
    print(f"Job failed: {job.error_message}")
```
Polling for completion
```python
import time

while True:
    job = client.jobs.get(job_id)
    if job.status == "completed":
        print("Job completed!")
        break
    elif job.status == "failed":
        print(f"Job failed: {job.error_message}")
        break
    print("Still processing...")
    time.sleep(10)  # Wait 10 seconds before checking again
```
Working with takes
Downloading a take
```python
# Get take details
take = client.takes.get(job.take_id)

# Download in different formats
fbx_data = client.takes.download(take.id, format="fbx")
bvh_data = client.takes.download(take.id, format="bvh")

# Save to file
with open("motion.fbx", "wb") as f:
    f.write(fbx_data)
```
Listing takes
```python
# Get all takes
takes = client.takes.list()
for take in takes:
    print(f"Take {take.id}: {take.duration}s, {take.frame_count} frames")
```
Job and take relationships
- One-to-One: Each job produces exactly one take
- Job ID: Every take has a `job_id` that references its source job
- Take ID: Every completed job has a `take_id` in its output
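Because the relationship is one-to-one in both directions, you can index takes by `job_id` to go from a job to its take without extra lookups. A sketch over plain dicts (record shapes follow the samples above; the second pair of IDs is invented for illustration):

```python
jobs = [
    {"id": "job_123456", "output": {"take_id": "take_789012"}},
    {"id": "job_654321", "output": {"take_id": "take_210987"}},
]
takes = [
    {"id": "take_789012", "job_id": "job_123456"},
    {"id": "take_210987", "job_id": "job_654321"},
]

# One-to-one: each job_id maps to exactly one take.
take_by_job = {t["job_id"]: t for t in takes}

print(take_by_job["job_123456"]["id"])  # take_789012
```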
Best practices
Job management
- Monitor Status: Always check job status before proceeding
- Handle Errors: Implement error handling for failed jobs
- Polling: Use reasonable intervals when polling for completion
- Cleanup: Delete old jobs and takes to manage storage
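For the polling guidance above, a common refinement is exponential backoff with a cap, so long-running jobs are not polled at a tight fixed interval. A sketch (the interval values are illustrative, not API guidance):

```python
import itertools

def backoff_delays(base: float = 2.0, cap: float = 60.0, factor: float = 2.0):
    """Yield an increasing sequence of poll intervals, capped at `cap` seconds."""
    delay = base
    while True:
        yield delay
        delay = min(delay * factor, cap)

# First few intervals: 2, 4, 8, 16, 32, then capped at 60.
print(list(itertools.islice(backoff_delays(), 7)))
```

In a polling loop, `time.sleep(next(delays))` replaces the fixed `time.sleep(10)`.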
Take usage
- Format Selection: Choose the right export format for your use case
- Caching: Cache takes locally to avoid repeated downloads
- Validation: Verify take data before using in production
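The caching advice can be as simple as checking for a local file before downloading. A minimal sketch with a stand-in `download` callable (in practice that would wrap `client.takes.download`):

```python
from pathlib import Path

def fetch_take(take_id: str, fmt: str, cache_dir: Path, download) -> bytes:
    """Return take data from the local cache, downloading only on a miss."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    path = cache_dir / f"{take_id}.{fmt}"
    if path.exists():
        return path.read_bytes()  # cache hit: no network call
    data = download(take_id, fmt)  # e.g. client.takes.download(take_id, format=fmt)
    path.write_bytes(data)
    return data
```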
Common patterns
Batch processing
```python
# Process multiple videos
videos = ["video1.mp4", "video2.mp4", "video3.mp4"]
jobs = []

for video in videos:
    video_id = client.files.upload(video)
    job = client.jobs.create(model="s1", videos=[video_id])
    jobs.append(job)

# Monitor all jobs
for job in jobs:
    ...  # monitoring logic
```
Error recovery
```python
import time

def wait_for_take(job_id, timeout=300):  # 5-minute timeout
    """Poll a job until it completes, fails, or times out."""
    start_time = time.time()
    while time.time() - start_time < timeout:
        job = client.jobs.get(job_id)
        if job.status == "completed":
            return job.take_id
        elif job.status == "failed":
            raise Exception(f"Job failed: {job.error_message}")
        time.sleep(10)
    raise Exception("Job timeout")

try:
    job = client.jobs.create(model="s1", videos=[video_id])
    take_id = wait_for_take(job.id)
except Exception as e:
    print(f"Error: {e}")
    # Implement retry logic or fallback
```
Next steps
- Multicam fundamentals - Multi-camera setup and calibration
- Motion data format - Understanding the output data structure
- API reference - Detailed API documentation