Quickstart
Get started with the Move API in minutes. This guide covers both single-camera and multi-camera motion capture workflows.
Single-camera motion capture
Single-camera motion capture for basic applications:
You can use either our GraphQL API or our Python SDK.
The process for creating mocap outputs for singlecam involves the following steps:
Prerequisites
- Python SDK
- GraphQL
- Curl
To start using the Python SDK, install the package from PyPI:
- pip
- poetry
pip install move-ugc-python
poetry add move-ugc-python
To get information about your key:
from move_ugc import MoveUgc
ugc = MoveUgc(api_key="<YOUR API KEY>", endpoint_url="https://api.move.ai/ugc/graphql")
# Test the connection
ugc.client.retrieve()
query Client {
client {
created
id
metadata
name
portal
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR API KEY>" \
-d '{"query": "query Client { client { created id metadata name portal } }"}'
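If you prefer not to install anything, the same request can be made from plain Python. This sketch mirrors the curl example above (endpoint, headers, and JSON query body); it uses only the standard library and is not part of the SDK.

```python
# Sketch of the raw GraphQL request that the SDK and curl examples perform.
# The endpoint and Authorization header format are taken from the curl
# example above.
import json
import urllib.request

API_URL = "https://api.move.ai/ugc/graphql"

def graphql(api_key: str, query: str) -> dict:
    """POST a GraphQL query and return the decoded JSON response."""
    payload = json.dumps({"query": query}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": api_key,
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example (requires a valid key):
# graphql("<YOUR API KEY>", "query Client { client { id name } }")
```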
Creating a file
First you need to create a file entity for the video file you want to process.
To create a file, you need to specify the type of file you want to create:
- Python SDK
- GraphQL
- Curl
ugc.files.create(file_type="mp4", name="test_video_file")  # assuming your video files are mp4
mutation CreateFile {
file: createFile(type: "mp4") {
id
presignedUrl
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR API KEY>" \
-d '{"query": "mutation CreateFile { createFile(type: \"mp4\") { id presignedUrl } } "}'
This will return a presigned HTTP PUT URL that you can use to upload your video file. Any HTTP client can PUT the file to this presigned URL. See the sample response below.
- JSON Response
{
"data": {
"createFile": {
"id": "file-2be2463e-ffa3-419b-beb4-ea0f99c79592",
"presignedUrl": "https://file-production-storage.s3.amazonaws.com/cfde-bfd1-4b7b-b134-d2d07455af8c.mp4?AWSAccessKeyId=ASIAZJHW76V2R6BKYGE4&Signature=...&x-amz-security-token=...&Expires=1687361581"
}
}
}
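The presignedUrl in the response is a standard S3 presigned PUT target, so any HTTP client can upload to it. A minimal standard-library sketch (the Content-Type header value is an assumption; match it to your file type):

```python
# Sketch: upload a local video to the presigned PUT URL returned by createFile.
import urllib.request

def upload_video(presigned_url: str, path: str) -> int:
    """PUT the file bytes to the presigned URL; return the HTTP status code."""
    with open(path, "rb") as fh:
        body = fh.read()
    request = urllib.request.Request(
        presigned_url,
        data=body,
        method="PUT",
        headers={"Content-Type": "video/mp4"},  # assumed; match your file type
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # 200 indicates a successful upload

# upload_video("<presignedUrl from the createFile response>", "test_video_file.mp4")
```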
Creating a singlecam take
Now, create the singlecam take using the file IDs that you have created and uploaded.
This will return a take ID which you can process any time you want by using our singlecam jobs interface.
- Python SDK
- GraphQL
- Curl
file_id = "<FILE ID FROM FILE CREATION>"
# Create a take with the file id
ugc.takes.create(file_id=file_id)
mutation CreateSingleCamTake {
take: createSingleCamTake(
sources: [{
deviceLabel:"cam01",
fileId: "<FILE ID FROM FILE CREATION>", // (1)
format:MP4
}]
) {
id
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: YOUR_API_KEY" \
-d '{"query":"mutation CreateSingleCamTake { take: createSingleCamTake(sources: [{ deviceLabel:\"human-readable-device-label\", fileId:\"<FILE ID FROM FILE CREATION>\", format:MP4 }]) { id } }"}'
Creating a singlecam job
Finally, you can create a job to process the video, using the take ID received from the previous step.
Processing with default outputs
- Python SDK
- GraphQL
- Curl
ugc.jobs.create_singlecam(take_id="<TAKE ID FROM TAKE CREATION>")
mutation CreateSingleCamJob {
job: createSingleCamJob(takeId: "<TAKE ID FROM TAKE CREATION>") {
id
created
progress {
state
percentageComplete
}
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: YOUR_API_KEY" \
-d '{
"query": "mutation CreateSingleCamJob { job: createSingleCamJob(takeId: \"<TAKE ID FROM TAKE CREATION>\") { id created progress { state percentageComplete } } }"
}'
Processing with specific outputs
You can also specify the outputs you want to generate by passing the outputs
parameter. You can find the list of available outputs here.
- Python SDK
- GraphQL
- Curl
ugc.jobs.create_singlecam(take_id="<TAKE ID FROM TAKE CREATION>", outputs=[RENDER_VIDEO, MAIN_FBX])
mutation CreateSingleCamJob {
job: createSingleCamJob(takeId: "<TAKE ID FROM TAKE CREATION>", outputs: [RENDER_VIDEO, MAIN_FBX]) {
id
created
progress {
state
percentageComplete
}
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: YOUR_API_KEY" \
-d '{
"query": "mutation CreateSingleCamJob { job: createSingleCamJob(takeId: \"<TAKE ID FROM TAKE CREATION>\", outputs: [RENDER_VIDEO, MAIN_FBX]) { id created progress { state percentageComplete } } }"
}'
Processing with specific options
You can also specify processing options by passing the options parameter. You can find the list of available options here.
- Python SDK
- GraphQL
- Curl
# Specifying options on a singlecam job
ugc.jobs.create_singlecam(
    take_id="<TAKE ID FROM TAKE CREATION>",
    options=JobOptions(trackFingers=True, floorPlane=True, mocapModel="S1"),
)
mutation CreateSingleCamJob {
job: createSingleCamJob(takeId: "<TAKE ID FROM TAKE CREATION>", options: {mocapModel: "S1", trackFingers: true}) {
id
created
progress {
state
percentageComplete
}
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: YOUR_API_KEY" \
-d '{
"query": "mutation CreateSingleCamJob { job: createSingleCamJob(takeId: \"<TAKE ID FROM TAKE CREATION>\", options: {mocapModel: \"S1\", trackFingers: true}) { id created progress {state percentageComplete}} }"
}'
Check status of job
You can check the status of the job using the getJob
query. See here for all available attributes.
The job status will return FINISHED
when your job is complete. You should now be able to get the outputs.
The output files will return a "not found" error message if the job is not completed.
- Python SDK
- GraphQL
- Curl
ugc.jobs.retrieve(id="<YOUR JOB ID>")
{
getJob(jobId: "<YOUR JOB ID>") { // (1)
id
name
progress {
state
percentageComplete
}
inputs {
options {
mocapModel
trackBall
trackFingers
}
numberOfActors
}
outputs {
key
file{
id
presignedUrl
created
}
}
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: YOUR_API_TOKEN" \
-d '{
"query": "query { getJob(jobId: \"<YOUR JOB ID>\") { id name progress { state percentageComplete } inputs { options { mocapModel trackBall trackFingers } numberOfActors } outputs { key file { id presignedUrl created } } } }"
}'
Job state
The lifecycle of a job is:
- NOT_STARTED - submitted but not started
- STARTED - has been sent to a server for processing
- RUNNING - is running on the server
- FINISHED - has produced some outputs (this has no relation to the quality of the output, just that some output was generated)
- FAILED - we couldn't process the output
Processing time
Four main factors drive the time it takes for a take to process:
- Duration of video
- Resolution
- Frame rate
- Availability of processing servers
For a 10-second FHD video at 60fps with a server immediately available, processing should complete within 5 minutes. If no server is available, the same video may take as long as 30 minutes. We make efforts to ensure that this happens as rarely as possible, but at certain times, especially as we release updates to the processing engine, delays may be more common. This is part of the reason we advise you to avoid polling in production and use webhooks.
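For quick experiments, a simple polling loop is still convenient; webhooks remain the right choice in production. A sketch (the retrieve call follows the SDK status example above, the attribute path progress.state mirrors the getJob response shape, and the interval and timeout values are arbitrary):

```python
# Sketch: poll a job until it reaches a terminal state. Prefer webhooks in
# production; this loop is only for quick experiments.
import time

def wait_for_job(ugc, job_id: str, interval_s: float = 30.0, timeout_s: float = 1800.0):
    """Return the job once its state is FINISHED or FAILED."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = ugc.jobs.retrieve(id=job_id)  # method name follows the SDK status example
        if job.progress.state in ("FINISHED", "FAILED"):
            return job
        time.sleep(interval_s)
    raise TimeoutError(f"job {job_id} not finished after {timeout_s}s")

# job = wait_for_job(ugc, "<YOUR JOB ID>")
```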
Multi-camera motion capture
Multi-camera motion capture for higher accuracy applications:
Before you begin, make sure you have read the quickstart guide. It provides useful hints and tips on what equipment you need and how to shoot, as well as general advice to get the most out of the multicam API.
A multicam job also needs a take as its input. A multicam take is what defines a recording session with multiple cameras: it is a collection of videos from multiple cameras.
The process for creating mocap outputs for a multicam take has a few more steps than for singlecam, but the output is generally of much higher quality.
The steps are:
- Create the calibration files
- Create a calibration volume
- Create the multicam files
- Create the multicam take
- Create the multicam job
Note: This quickstart guide is for 2 cameras.
Prerequisites
The prerequisites are the same as for single-camera capture: install the Python SDK from PyPI and verify your API key, as shown in the single-camera section above.
Create calibration files
First, you need to calibrate your volume. Once you have set up your cameras and recorded your calibration videos, you can upload them as normal using the files API. See here for how to upload the files, our articles here for how best to set up your cameras, and here for how to record a calibration. Create one file for each camera you are shooting with.
- Python SDK
- GraphQL
- Curl
ugc.files.create(file_type="mp4", name="calib_video_file1")  # assuming video files are mp4
ugc.files.create(file_type="mp4", name="calib_video_file2")  # assuming video files are mp4
mutation CreateFile {
videoFile1: createFile(type: "mp4") {
id
presignedUrl
}
videoFile2: createFile(type: "mp4") {
id
presignedUrl
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR_API_KEY>" \
-d '{
"query": "mutation CreateFile { videoFile1: createFile(type: \"mp4\") { id presignedUrl } videoFile2: createFile(type: \"mp4\") { id presignedUrl } }"
}'
Create calibration volume
Once you have created your calibration files you can now create the volume. See the API docs here for more information on the attributes of the createVolumeWithHuman mutation. Only certain camera lenses are supported at the moment. See [here](/move-ugc-api/getting-started/multicam/lenses) for a complete list.
- Python SDK
- GraphQL
- Curl
from move_ugc.schemas.sources import SourceIn

volume_sources = [
    SourceIn(
        device_label="cam01",
        file_id="<FILE ID OF FIRST CALIBRATION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",  # the camera lens these videos were shot with; check supported lenses
        },
    ),
    SourceIn(
        device_label="cam02",
        file_id="<FILE ID OF SECOND CALIBRATION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",
        },
    ),
]
ugc.volume.create_human_volume(sources=volume_sources, human_height=1.77, name="Test volume")
mutation createVolumeMutation{
createVolumeWithHuman(
areaType:NORMAL,
clipWindow:{
startTime:0.1,
endTime:1.4
},
humanHeight:1.77, // (1)
syncMethod:{
clapWindow:{
startTime:2.0,
endTime:4.0
}
},
sources : [{
deviceLabel:"cam01", // (4)
cameraSettings:{
lens:"goprohero10-fhd"
},
fileId:"file-2be2463e-ffa3-419b-beb4-ea0f99c79592", // (2)
format:MP4
},{
deviceLabel:"cam02",
cameraSettings:{
lens:"goprohero10-fhd"
},
fileId:"file-edcf5b93-24b4-45b8-91b2-0985c4c44665", // (3)
format:MP4
},],
){
areaType,
client{
id
name
},
created,
humanHeight,
id,
metadata,
sources{
cameraSettings{
lens
},
deviceLabel,
file{
presignedUrl
},
format
},
state
}
}
- The height of the human in metres
- This is the file from cam01 that was created in step 1
- This is the file from cam02 that was created in step 1
- A human readable label identifying this camera
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR_API_KEY>" \
-d '{
"query": "mutation createVolumeMutation { createVolumeWithHuman(areaType: NORMAL, clipWindow: { startTime: 0.1, endTime: 1.4 }, humanHeight: 1.77, syncMethod: { clapWindow: { startTime: 2.0, endTime: 4.0 } }, sources: [ { deviceLabel: \"cam01\", cameraSettings: { lens: \"goprohero10-fhd\" }, fileId: \"file-2be2463e-ffa3-419b-beb4-ea0f99c79592\", format: MP4 }, { deviceLabel: \"cam02\", cameraSettings: { lens: \"goprohero10-fhd\" }, fileId: \"file-edcf5b93-24b4-45b8-91b2-0985c4c44665\", format: MP4 } ]) { areaType client { id name } created humanHeight id metadata sources { cameraSettings { lens } deviceLabel file { presignedUrl } format } state } }"
}'
Create multicam files
You can now shoot your take using the same camera configuration as you used for the calibration. Upload the files in the same way as you did for the calibration videos.
Create multicam take
Before creating the take, ensure that the volume has finished processing. Use the volume ID provided from the creation of the volume in step 2.
- Python SDK
- GraphQL
- Curl
# Check that the volume has finished processing
client.get_volume(volume.id)
Once the volume has finished processing and the files for the take are uploaded you can then create a take object.
action_sources = [
    SourceIn(
        device_label="cam01",
        file_id="<FILE ID OF FIRST ACTION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",  # the camera lens these videos were shot with; check supported lenses
        },
    ),
    SourceIn(
        device_label="cam02",
        file_id="<FILE ID OF SECOND ACTION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",
        },
    ),
]
ugc.takes.create_multicam(
action_sources,
volume_id="<VOLUME ID>",
sync_method=SyncMethodInput(
clap_window={
"start_time": 2,
"end_time": 4,
},
),
name="Test take"
)
{
getVolume(id: "<CALIBRATION VOLUME ID>"){
... on Volume {
...VolumeFields
}
}
}
fragment VolumeFields on HumanVolume {
id
state
}
Once the volume has finished processing and the files for the take are uploaded you can then create a take object.
mutation createMultiCamTake {
take: createMultiCamTake(
volumeId:"<CALIBRATION VOLUME ID>", // (1)
syncMethod:{
clapWindow:{
startTime:2.0,
endTime:4.0
}
},
sources : [{
deviceLabel:"cam01", // (4)
cameraSettings:{
lens:"goprohero10-fhd"
},
fileId:"<ACTION VIDEO FILE 1>", // (2)
format:MP4
},{
deviceLabel:"cam02",
cameraSettings:{
lens:"goprohero10-fhd"
},
fileId:"<ACTION VIDEO FILE 2>", // (3)
format:MP4
},],
) {
id
metadata
created
client{
id
name
}
sources
{
cameraSettings
{
lens
}
,
deviceLabel,
file
{
presignedUrl
id
}
,
format
}
}
}
- This is the id of the volume returned in step 2
- This is the file ID of the file created in step 3
- This is the file ID of the file created in step 3
- The same human readable name as used in step 2. It is crucial that these match the names used in the volume creation
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR_API_KEY>" \
-d '{
"query": "query { getVolume(id: \"<VOLUME ID>\") { ... on Volume { ...VolumeFields } } } fragment VolumeFields on HumanVolume { id state }"
}'
Create Multicam Job
You can now create the job which will generate the mocap output for the take, using the take ID generated in step 4. See the API docs here for more information on the attributes of the createMultiCamJob mutation.
Processing with default outputs
Unless specified otherwise, multicam runs will generate the following default output files: render_video, main_fbx, main_usdc, main_usdz, main_blend and motion_data.
- Python SDK
- GraphQL
- Curl
ugc.jobs.create_multicam(take_id="<TAKE ID CREATED FOR MULTICAM ACTION>", number_of_actors=1, name="Test multicam job")
mutation Jobs {
createMultiCamJob(
takeId: "<TAKE ID CREATED FOR MULTICAM ACTION>",
numberOfActors:1
) {
id
state
created
metadata
client{
id
}
take{
client{
id
}
created
id
metadata
}
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR_API_KEY>" \
-d '{
"query": "mutation Jobs { createMultiCamJob(takeId: \"<TAKE ID CREATED FOR MULTICAM ACTION>\", numberOfActors: 1) { id state created metadata client { id } take { client { id } created id metadata } } }"
}'
Processing with specific outputs
You can also specify the outputs you want to generate by passing the outputs
parameter. You can find the list of available outputs here.
- Python SDK
- GraphQL
- Curl
ugc.jobs.create_multicam(take_id="<TAKE ID CREATED FOR MULTICAM ACTION>", number_of_actors=1, name="Test multicam job", outputs=[RENDER_VIDEO, MAIN_FBX, MAIN_USDC, MAIN_GLB])
mutation Jobs {
createMultiCamJob(
takeId: "<TAKE ID CREATED FOR MULTICAM ACTION>",
numberOfActors:1,
outputs: [RENDER_VIDEO, MAIN_FBX, MAIN_USDC, MAIN_GLB]
) {
id
state
created
metadata
client{
id
}
outputs {
format
file {
presignedUrl
}
}
}
}
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR_API_KEY>" \
-d '{
"query": "mutation Jobs { createMultiCamJob(takeId: \"<TAKE ID CREATED FOR MULTICAM ACTION>\", numberOfActors: 1, outputs: [RENDER_VIDEO, MAIN_FBX, MAIN_USDC, MAIN_GLB]) { id state created metadata client { id } outputs { format file { presignedUrl } } } }"
}'
Retargeting to a specific rig
- Python SDK
- GraphQL
- Curl
ugc.jobs.create_multicam(take_id="<TAKE ID CREATED FOR MULTICAM ACTION>", rig="move_ve", number_of_actors=1, name="Test multicam job")
mutation Jobs {
createMultiCamJob(
takeId: "<TAKE ID CREATED FOR MULTICAM ACTION>"
numberOfActors:1,
rig: "move_ve" // (1)
) {
id
state
created
metadata
client{
id
}
outputs {
format
file {
presignedUrl
}
}
}
}
- The name of the rig to use for retargeting.
curl -X POST https://api.move.ai/ugc/graphql \
-H "Content-Type: application/json" \
-H "Authorization: <YOUR_API_KEY>" \
-d '{
"query": "mutation Jobs { createMultiCamJob(takeId: \"<TAKE ID CREATED FOR MULTICAM ACTION>\", numberOfActors: 1, rig: \"move_ve\") { id state created metadata client { id } outputs { format file { presignedUrl } } } }"
}'
Model selection
Choose the right model for your use case:
- s1: Basic single-camera capture (1st generation)
- s2: High-quality single-camera capture (2nd generation)
- m1: Basic multi-camera capture (1st generation)
- m2: Professional multi-camera capture (2nd generation, includes Dex)
Next steps
- API reference - Complete API documentation
- Models - Detailed model comparison
- Authentication - API key setup