Vionlabs operates a distributed system consisting of the Vionlabs Cloud and an Edge Processing Site, which typically runs in the customer’s AWS or GCP infrastructure.
For Edge Processing, Vionlabs provides prebuilt VM/AMI images containing all required components for secure video access, decryption, frame extraction, and feature generation. The components are deployed in the customer’s cloud account, with all configuration parameters provisioned via Terraform scripts supplied by Vionlabs.
The extracted features and intermediate artifacts are uploaded from the Edge Processing Site to a Vionlabs-managed GCP or S3 bucket, making them available for downstream inference and analysis in the Vionlabs Cloud.
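The exact bucket layout and upload mechanism are Vionlabs internals; as a rough sketch (the key scheme, function names, and the choice of boto3 below are assumptions), an edge worker might derive a deterministic object key per asset and upload each artifact to the managed bucket:

```python
from pathlib import Path


def build_artifact_key(asset_id: str, feature_name: str, filename: str) -> str:
    """Hypothetical key scheme: group artifacts by asset and feature type."""
    return f"edge-artifacts/{asset_id}/{feature_name}/{filename}"


def upload_artifact(bucket: str, asset_id: str, feature_name: str, path: Path) -> str:
    """Upload a local feature artifact to the Vionlabs-managed bucket (sketch).

    A GCP deployment would use google-cloud-storage instead of boto3.
    """
    import boto3  # assumed dependency for the S3 case

    key = build_artifact_key(asset_id, feature_name, path.name)
    boto3.client("s3").upload_file(str(path), bucket, key)
    return key
```

The deterministic key makes uploads idempotent: re-running a failed edge task overwrites the same object rather than creating duplicates.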
The Vionlabs Cloud orchestrates the analysis workflows. Each workflow consists of multiple tasks defined as a Directed Acyclic Graph (DAG) and executed in a distributed manner; the actual inference and analysis run in the Vionlabs Cloud. Vionlabs AB also provides the APIs for video asset registration and retrieval of analysis results.
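The concrete workflow engine is not specified here; as a minimal illustration (task names and dependencies are invented for the example), a DAG of tasks resolves into a valid execution order in which every task runs only after its dependencies:

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: each task maps to the set of tasks it depends on.
workflow = {
    "register_asset": set(),
    "extract_features": {"register_asset"},
    "run_inference": {"extract_features"},
    "publish_results": {"run_inference"},
}


def execution_order(dag: dict[str, set[str]]) -> list[str]:
    """Resolve a DAG of tasks into a dependency-respecting execution order."""
    return list(TopologicalSorter(dag).static_order())
```

In a distributed setting the scheduler would dispatch tasks whose dependencies are satisfied to workers in parallel; the linear order above is the single-worker degenerate case.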
Communication between the distributed components is handled via an event-driven messaging model (e.g., SNS/SQS or Pub/Sub). Through this channel, control commands are issued, tasks are triggered, status updates are transmitted, and feature artifacts are exchanged between the Edge Processing Site and the Vionlabs Cloud.
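The concrete message schema is not published; the sketch below shows a generic event envelope of the kind that could travel over SNS/SQS or Pub/Sub, where all field names and event types are assumptions for illustration:

```python
import json
import uuid
from datetime import datetime, timezone


def make_event(event_type: str, asset_id: str, payload: dict) -> str:
    """Serialize a control/status event for the edge<->cloud channel (sketch)."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),     # dedup key for at-least-once delivery
        "event_type": event_type,          # e.g. "task.triggered", "task.status"
        "asset_id": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })


def parse_event(message: str) -> dict:
    """Decode an event received from the queue or subscription."""
    return json.loads(message)
```

Because SNS/SQS and Pub/Sub both deliver at-least-once, consumers would typically use the event id to deduplicate and treat handlers as idempotent.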
Sizing & Performance
The required sizing of an Edge Site depends on the current processing backlog. When autoscaling is enabled for Edge Processing, the scheduler will automatically adjust the processing capacity based on demand.
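The scheduler's actual scaling policy is internal to Vionlabs; a simplified model of backlog-driven capacity might look as follows, where the throughput figure, target window, and parameter names are all illustrative assumptions:

```python
import math


def desired_workers(backlog_minutes: float,
                    minutes_per_worker_hour: float = 120.0,
                    target_hours: float = 4.0,
                    max_workers: int = 20) -> int:
    """Estimate workers needed to drain the backlog within a target window.

    backlog_minutes: total minutes of video awaiting processing.
    minutes_per_worker_hour: assumed per-worker processing throughput.
    """
    if backlog_minutes <= 0:
        return 0
    capacity_per_worker = minutes_per_worker_hour * target_hours
    return min(max_workers, math.ceil(backlog_minutes / capacity_per_worker))
```

The cap on worker count mirrors the instance limits a customer would normally configure in their own cloud account.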
Overall processing time is influenced by several factors, including the number of files, video length, quality (bitrate, FPS), and other technical parameters. The products configured for processing can also affect duration, although significant synergies can be achieved when multiple products are processed simultaneously.
Enabling GPU acceleration can greatly increase processing speed. When enabled, GPUs are used only for specific capability components within downstream models, ensuring cost-efficient utilization of GPU resources.
For more detailed insights on expected processing duration for your specific setup, please contact your integration lead.
Terraform Packages
https://gitlab.com/vionlabspublic/gcp-gks-terraform
https://gitlab.com/vionlabspublic/aws-vm-terraform