Designing a tablet-first control interface for professional robotic camera systems — simplifying complex multi-device workflows into a single, intuitive experience for film operators.
Client
Under NDA
Role
Product Design Lead
Duration
2 years
Scope
Tablet-first, desktop
SISU Lab is a professional software platform for robotic camera control, used by cinematographers and operators on high-precision film productions. The platform enables complex camera movements — arcs, orbits, and multi-keyframe sequences — with millimetre-level accuracy.
As Product Design Lead, I worked embedded with the engineering team over 2 years to redesign the core control experience. My focus was on reducing the cognitive load of operating complex robotic systems on set, where speed, accuracy, and reliability are non-negotiable.
Video: SISU Labs preview setup

Image: SISU Labs preview setup
Professional robotic camera operators were juggling multiple hardware controllers, software interfaces, and physical inputs simultaneously — often mid-shot, under time pressure on set.
The existing system forced operators to switch between devices to control movement, timing, and camera parameters. This fragmentation created three core problems:
Cognitive overload
Operators had to mentally track too many inputs at once, leaving less focus for the creative shot itself.
Setup time
Configuring complex multi-keyframe sequences required significant pre-production effort, slowing down shoots.
Error risk
The more systems involved, the higher the chance of an input error during a live take.
Designed for real-time use on tablets, ensuring controls are accessible, readable, and easy to operate under pressure.

Introduced touch and hand-based interactions to make camera movement feel direct and natural, reducing reliance on specialised hardware inputs.
Structured the experience around a familiar timeline, making it easier to create, edit, and refine camera movements.
Delivered a tablet-first control system for professional robotic camera operators over a 2-year engagement with SISU Robotics.
Consolidated multi-device controls into a single timeline-based workflow, reducing setup complexity and enabling operators to execute complex multi-keyframe shots without switching between systems.
Gesture-driven interactions replaced technical hardware inputs, allowing operators to focus on creative execution rather than system configuration.