Role:
UX Designer
Timeline: January - June 2024
Team: 1 PM & Researcher, 3 Designers, 2 Developers
Tools: Figma

AI Director

An all-in-one AI video generation & editing tool that lets users effortlessly create complete videos from simple text descriptions.

We collaborated with Decohere, a Y Combinator-backed startup revolutionizing AI-generated video production by transforming text descriptions into engaging video clips. Their current product, Turbo, generates 4-second video clips from text; it has attracted many new users but suffers from low retention rates.

Users find the short clips limiting and the process lacks advanced features necessary for a comprehensive video production experience. Our main responsibility in this project was to develop a more advanced product to enhance the paid user experience by providing more functionality and flexibility, thereby increasing conversion and retention rates.

Project Overview

Problem

How can we design an AI video production tool that enables users to effortlessly and quickly create complete videos, given the limitations and constraints of current video and visual LLMs (as of February 2024)?

Solution

We developed an all-in-one platform that streamlines video creation: from prompt ideation to text-to-image, image-to-video, one-click generation, adding transitions, and exporting in a ready-to-share format.

This new platform ensures ease of use, a minimal learning curve, and supports iterative refinement.

Project Timeline

2 Quarters · 4 People · 1 Product

As our capstone project, we had two full academic quarters to complete the work before handing it off for development. Our project timeline consisted of three main phases: research, design, and evaluation. We also scheduled regular check-in meetings with Decohere's management team and engineering support. That said, we had significant freedom and responsibility in planning the project ourselves.

Of course, the design process was never going to be linear. Here is a rough overview of our project timeline:

Research Overview

What did we do to get insights?

Competitive Analysis
We conducted a UX audit of Decohere’s existing product, comparing it with competitors such as ChatGPT, Runway, Pika, and Genmo to identify strengths, shortcomings, and trends in AI video generation and editing. This helped us develop a specific UX strategy and prioritize our design approach for user satisfaction and innovation.

In-Depth Interviews
We interviewed current Decohere users to gain insights into their perspectives and expectations regarding AI video editing tools and Decohere’s future direction.

Contextual Inquiries
We recruited several new users familiar with AI technologies but new to Decohere to interact with the platform in their natural environments.

Netnography
We analyzed AI-generated content on TikTok and YouTube, examining over 25 videos from each platform created by AI enthusiasts and content creators.

What did we learn from research?

From our research, we found that users expect AI video generators to produce longer videos (2-3 minutes) instead of animated clips (4 seconds) and need better transitions to maintain a consistent style. New users struggle with prompting and need more guidance, while experienced users handle it better.

Many users are unaware that creating an image is the first step and want more control over video elements. Producing satisfactory outputs requires multiple iterations, and users seek a more dynamic process and tools like negative prompts for refinement.

Who are we targeting?

After synthesizing all the research insights and communicating with Decohere about their future business strategy, we decided to target part-time social media content creators for this new product.

These creators have limited video editing knowledge and can't spend too much time on production. They regularly share content on their channels, are enthusiastic about leveraging AI technology to enhance their work, and need the ability to create videos ranging from 10 seconds to 2 minutes. They seek efficient tools that simplify the video production process while allowing them to maintain a consistent posting schedule and engage their audience with high-quality content.

Final Design Solution

The north star design

We built an MVP design and handed it off for development. Building on that, we expanded and created the North Star version, envisioning additional features for the product. This version is planned for launch in Q3.

Wait... you might be wondering, how did we get here?

Let me walk you through our decision process and design rationale ☺

Concepts & Ideas

Deciding on a concept.

To maximize the current AI technology and address user pain points, we focused on ideating concepts to solve the question: How can we design a seamless layout for switching between scene clips as the product's foundational structure?

After thorough analysis, we selected Concept 2 as it best addresses all user pain points identified during the research stage. Despite requiring more development effort, Concept 2 meets the overall business and user needs more effectively. This decision was based on comprehensive evaluation from user experience, business, and engineering perspectives.

Concept 2 offers a seamless layout for switching between scene clips, utilizes real-time image generation, and provides users with an overview of the storyboard, including scene descriptions and previews. This reduces the number of clicks and enhances user satisfaction.

By aligning with Decohere's future business strategy and addressing user needs for a more dynamic and flexible generation process, Concept 2 ensures a superior product that supports iterative refinement and long-term user engagement.

What did we do to persuade stakeholders to accept this concept?

Initially, we received limited support from Decohere’s management team regarding the decision to choose Concept 2: the Storyboard idea. Their primary concerns were focused on the business implications and the engineering resources required for this product. To address these concerns, we spent an additional week conducting market research.

We then presented detailed positioning maps that illustrated the competitive landscape, potential market advantages, and how Concept 2 aligns with both user needs and business goals.

This comprehensive approach helped us to persuade the stakeholders of the value and feasibility of Concept 2.

Information Architecture

Feature and content organizations.

We divided features into project, overview, scene edit, and prompting levels, with the video creation process occurring within the overview and scene edit levels.

Initial Prototyping

The VERY FIRST versions of our design and prototype.

You may notice how different it is from our final design. However, this rough mid-to-high-fidelity prototype served to convey our original concepts and discuss feasibility with engineering. We then built a more complete wireframe for user testing.

Testing & Iteration

Usability testing: findings & iterations

We conducted guided usability tests with 6 individuals, focusing on three primary user flows: video creation, one-click generation, and the AI script assistant. Our aim was to assess the clarity, comprehension, and completion of these tasks.

Overall, all participants responded positively to our design. However, we observed consistent issues with the one-click generation feature: the name “Batch Generate” does not accurately reflect the feature's functionality, leading users to interpret it as combining clips into one video. The button's placement too close to the export option caused users to perceive it as the final step rather than an intermediate one, and its distance from the generate button in the scene card further confused users about its purpose.

Furthermore, users lacked clear expectations of the batch generate function before opening it, and they struggled to comprehend the supportive text in the pop-up, which made it read like a task-completion step rather than an editing step.

Design System

Creating consistency ✿.

We converted the existing color and typography system from code into a design system in Figma to streamline the design process and minimize confusion during development. To expedite future engineering efforts, we also built a design system for commonly used icons and components, ensuring the engineering team can easily reuse them in future projects.

Promotion Video

Product launch video.

We also created this short promotional video to attract more users and serve as a guide/onboarding when the product first launches.

✿ Lesson Learned ✿

01: Share everything with stakeholders

In terms of stakeholder management, being transparent at every stage of the design process often saves time. Stakeholders understand that the design process is not linear, so don't be afraid to show high-fidelity designs during the ideation stages. This transparency can save you and your team a significant amount of time.

02: Refer to important AI principles

Designing for an AI product is a new and evolving challenge, but we must still adhere to principles like Responsible AI. It's crucial to understand how this new product could potentially be harmful to users and design with those risks in mind.

By prioritizing these principles, we can create a safer and more ethical AI product.

Last but not least...

Meet the team! ☺☺☺

We brought our product to UW's Human Centered Design & Engineering department showcase and took this great team pic! From left to right: Deeksha, Wenxin, Maomao, Haotian.

Say hi and let's create something amazing together!