
Quickstart

Get started with Move API in minutes. This guide covers both single-camera and multi-camera motion capture workflows.

Single-camera motion capture

Single-camera motion capture for basic applications:

You can use either the GraphQL API directly or our Python SDK.

The process for creating mocap outputs for singlecam involves the following steps:

  1. Creating a file
  2. Creating a singlecam take
  3. Creating a singlecam job

Prerequisites

To start using the Python SDK, install the package from PyPI:

pip install move-ugc-python

To initialise the client and get information about your key:

from move_ugc import MoveUgc

ugc = MoveUgc(api_key="<YOUR API KEY>", endpoint_url="https://api.move.ai/ugc/graphql")

# Test the connection
ugc.client.retrieve()

Creating a file

First you need to create a file entity for the video file you want to process.

To create a file, you need to specify the type of file you want to create:

ugc.files.create(file_type="mp4", name="test_video_file")  # Assuming video files are mp4

This will return a presigned HTTP PUT URL that you can use to upload your video file. Check here to see how you can upload your videos to these presigned URLs. See the sample response below:

{
  "data": {
    "createFile": {
      "id": "file-2be2463e-ffa3-419b-beb4-ea0f99c79592",
      "presignedUrl": "https://file-production-storage.s3.amazonaws.com/cfde-bfd1-4b7b-b134-d2d07455af8c.mp4?AWSAccessKeyId=ASIAZJHW76V2R6BKYGE4&Signature=5BSdYtrKRp6DnmpYtw2RKVd2YNI%3D&x-amz-security-token=...&Expires=1687361581"
    }
  }
}
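As an illustration, uploading to the presigned URL can be done with an HTTP PUT from Python's standard library. This is a sketch, not part of the SDK; `build_put_request` and `upload_to_presigned_url` are hypothetical helper names:

```python
import urllib.request


def build_put_request(presigned_url: str, data: bytes) -> urllib.request.Request:
    """Build an HTTP PUT request carrying the raw video bytes."""
    return urllib.request.Request(presigned_url, data=data, method="PUT")


def upload_to_presigned_url(presigned_url: str, path: str) -> int:
    """Upload a local file to the presigned URL; returns the HTTP status code."""
    with open(path, "rb") as fh:
        request = build_put_request(presigned_url, fh.read())
    with urllib.request.urlopen(request) as response:
        return response.status  # 200 indicates a successful upload
```

Any HTTP client (curl, requests, etc.) works equally well, as long as the request method is PUT and the body is the raw file.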

Creating a singlecam take

Now, create the singlecam take using the file ID that you created and uploaded.

This will return a take ID which you can process any time you want by using our singlecam jobs interface.

file_id = "<FILE ID FROM FILE CREATION>"

# Create a take with the file id
ugc.takes.create(file_id=file_id)

Creating a singlecam job

Finally, you can create a job to process the video, using the take ID received from the previous step.

Processing with default outputs

ugc.jobs.create_singlecam(take_id="<TAKE ID FROM TAKE CREATION>")

Processing with specific outputs

You can also specify the outputs you want to generate by passing the outputs parameter. You can find the list of available outputs here.

ugc.jobs.create_singlecam(take_id="<TAKE ID FROM TAKE CREATION>", outputs=[RENDER_VIDEO, MAIN_FBX])

Processing with specific options

You can also specify processing options by passing the options parameter. You can find the list of available options here.

# Specifying options on a singlecam job
ugc.jobs.create_singlecam(
    take_id="<TAKE ID FROM TAKE CREATION>",
    options=JobOptions(trackFingers=True, floorPlane=True, mocapModel="S1"),
)

Check status of job

You can check the status of the job using the getJob query. See here for all available attributes.

The job status will return FINISHED when your job is complete. You should now be able to get the outputs.

The output files will return a "not found" error message if the job is not completed.

ugc.jobs.retrieve(id="<YOUR JOB ID>")

Job state

The lifecycle of a job is:

  • NOT_STARTED - submitted but not started
  • STARTED - has been sent to a server for processing
  • RUNNING - is running on the server
  • FINISHED - has produced some outputs (this has no relation to the quality of the output, just some output was generated)
  • FAILED - we couldn’t process the output

Processing time

Four main factors drive the time it takes for a take to process:

  • Duration of video
  • Resolution
  • Frame rate
  • Availability of processing servers

For 10s of FHD video at 60fps with a server immediately available, processing should complete within 5 minutes. If there isn't a server available, the same video may take as long as 30 minutes. We make efforts to ensure that this happens as rarely as possible, but at certain times, especially as we release updates to the processing engine, delays may be more common. This is part of the reason we advise you to avoid polling in production and use webhooks instead.
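For development, a minimal polling loop with a timeout might look like the sketch below. This is not an SDK helper; the `state` attribute name is assumed from the job lifecycle described above, and webhooks remain the recommended approach in production:

```python
import time


def wait_for_job(ugc, job_id, poll_seconds=30, timeout_seconds=1800):
    """Poll a job until it reaches a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        job = ugc.jobs.retrieve(id=job_id)
        # FINISHED and FAILED are the terminal states in the job lifecycle
        if job.state in ("FINISHED", "FAILED"):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {job_id} not finished after {timeout_seconds}s")
```

A 30-second interval keeps request volume low while staying well inside the typical 5-to-30-minute processing window.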

Multi-camera motion capture

Multi-camera motion capture for higher accuracy applications:

Before you begin, make sure you have read the quickstart guide. It provides useful hints and tips on what equipment you need and how to shoot, as well as general advice on getting the most out of the multicam API.

A multicam job also needs a take as its input. A multicam take is what defines a recording session with multiple cameras: it is a collection of videos from multiple cameras.

The process for creating mocap outputs for a multicam take has a few more steps than for singlecam - but the output is generally of much higher quality.

The steps are:

  1. Create the calibration files
  2. Create a calibration volume
  3. Create the multicam files
  4. Create the multicam take
  5. Create the multicam job

Note: This quickstart guide is for 2 cameras.

Prerequisites

To start using the Python SDK, install the package from PyPI:

pip install move-ugc-python

To initialise the client and get information about your key:

from move_ugc import MoveUgc

ugc = MoveUgc(api_key="<YOUR API KEY>", endpoint_url="https://api.move.ai/ugc/graphql")

# Test the connection
ugc.client.retrieve()

Create calibration files

First, you need to calibrate your volume. Once you have set up your cameras and recorded your calibration videos, you can upload them as normal using the files API. See here for how to upload the files, here for how best to set up your cameras, and here for how to record a calibration. Create one calibration file for each camera you are shooting with.

ugc.files.create(file_type="mp4", name="calib_video_file1")  # Assuming video files are mp4
ugc.files.create(file_type="mp4", name="calib_video_file2")  # Assuming video files are mp4

Create calibration volume

Once you have created your calibration files, you can create the volume. See the API docs here for more information on the attributes of the createVolumeWithHuman mutation. Only certain camera lenses are supported at the moment; see [here](/move-ugc-api/getting-started/multicam/lenses) for a complete list.

from move_ugc.schemas.sources import SourceIn

volume_sources = [
    SourceIn(
        device_label="cam01",
        file_id="<FILE ID OF FIRST CALIBRATION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",  # specify the camera lens with which these videos were shot, check supported lenses
        },
    ),
    SourceIn(
        device_label="cam02",
        file_id="<FILE ID OF SECOND CALIBRATION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",  # specify the camera lens with which these videos were shot, check supported lenses
        },
    ),
]

ugc.volumes.create_human_volume(sources=volume_sources, human_height=1.77, name="Test volume")

Create multicam files

You can now shoot your take using the same camera configuration as you used for the calibration. Upload the files in the same way as you did for the calibration files.

Create multicam take

Before creating the take, ensure that the volume has finished processing. Use the volume ID provided from the creation of the volume in step 2:

ugc.volumes.retrieve(id="<VOLUME ID>")

Once the volume has finished processing and the files for the take are uploaded you can then create a take object.

action_sources = [
    SourceIn(
        device_label="cam01",
        file_id="<FILE ID OF FIRST ACTION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",  # specify the camera lens with which these videos were shot, check supported lenses
        },
    ),
    SourceIn(
        device_label="cam02",
        file_id="<FILE ID OF SECOND ACTION VIDEO FILE>",
        format="MP4",
        camera_settings={
            "lens": "goprohero10-fhd",  # specify the camera lens with which these videos were shot, check supported lenses
        },
    ),
]


ugc.takes.create_multicam(
    action_sources,
    volume_id="<VOLUME ID>",
    sync_method=SyncMethodInput(
        clap_window={
            "start_time": 2,
            "end_time": 4,
        },
    ),
    name="Test take",
)

Create Multicam Job

You can now create the job which will generate the mocap output for the take, using the take ID generated in step 4. See the API docs here for more information on the attributes of the createMultiCamJob mutation.

Processing with default outputs

Unless specified otherwise, multicam runs will generate the following default output files: render_video, main_fbx, main_usdc, main_usdz, main_blend and motion_data.

ugc.jobs.create_multicam(take_id="<TAKE ID CREATED FOR MULTICAM ACTION>", number_of_actors=1, name="Test multicam job")

Processing with specific outputs

You can also specify the outputs you want to generate by passing the outputs parameter. You can find the list of available outputs here.

 ugc.jobs.create_multicam(take_id="<TAKE ID CREATED FOR MULTICAM ACTION>", number_of_actors=1, name="Test multicam job", outputs=[RENDER_VIDEO, MAIN_FBX, MAIN_USDC, MAIN_GLB])

Retargeting to a specific rig

  ugc.jobs.create_multicam(take_id="<TAKE ID CREATED FOR MULTICAM ACTION>", rig="move_ve", number_of_actors=1, name="Test multicam job")

Model selection

Choose the right model for your use case:

  • s1: Basic single-camera capture (1st generation)
  • s2: High-quality single-camera capture (2nd generation)
  • m1: Basic multi-camera capture (1st generation)
  • m2: Professional multi-camera capture (2nd generation, includes Dex)
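As an example, the choice above can be encoded in a small helper. This is a hypothetical convenience function, assuming the uppercase model codes accepted by the mocapModel option (as in the earlier JobOptions example):

```python
def pick_model(multi_camera: bool, high_quality: bool) -> str:
    """Map a capture setup to one of the model codes listed above."""
    if multi_camera:
        return "M2" if high_quality else "M1"
    return "S2" if high_quality else "S1"
```

For instance, a high-quality single-camera workflow would pass `pick_model(multi_camera=False, high_quality=True)` as the mocapModel option.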

Next steps