Updated February 15th, 2024 at 13:25 IST

Apple’s latest AI model capable of generating animations based on user inputs

Through a process involving GPT-4, Keyframer generates CSS animation code to animate Scalable Vector Graphic (SVG) images based on user prompts.

Reported by: Business Desk

Apple has expanded its AI capabilities with the introduction of a new tool that uses large language models (LLMs) to animate static images based on user-provided text prompts. In a recent research paper titled "Keyframer: Empowering Animation Design Using Large Language Models," Apple outlines the development of this tool. Unlike existing text-to-image systems such as DALL·E and Midjourney, which are primarily focused on generating static images, Keyframer is tailored specifically for creating animations, which demand additional user considerations such as timing and coordination.

By integrating principles of language-based design prompting with the code-generation abilities of LLMs, Keyframer allows users to generate animated illustrations from static 2D images using natural language commands. Through a process involving GPT-4, Keyframer generates CSS animation code to animate Scalable Vector Graphic (SVG) images based on user prompts.


Users can upload an SVG image and provide a text prompt describing the desired animation, such as "create three designs where the sky transitions into different colours and stars twinkle." Keyframer then generates CSS animation code accordingly, which users can further refine either by directly editing the code or by providing additional text prompts.
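To illustrate the kind of output described above, the following is a hypothetical sketch of CSS animation code that a tool like Keyframer might produce for the "sky transitions" and "stars twinkle" prompt. The selector names (`#sky`, `.star`), colours, and timings here are assumptions for illustration, not actual Keyframer output; they presume the uploaded SVG contains elements with those IDs and classes.

```css
/* Hypothetical example of generated CSS targeting elements in an SVG.
   Assumes the SVG has a background shape with id="sky" and
   star shapes with class="star". */

/* Gradually shift the sky's fill colour back and forth */
#sky {
  animation: sky-shift 8s ease-in-out infinite alternate;
}

@keyframes sky-shift {
  from { fill: #0b1d51; } /* deep night blue */
  to   { fill: #f28c38; } /* warm dusk orange */
}

/* Fade each star's opacity in and out to create a twinkle */
.star {
  animation: twinkle 2s ease-in-out infinite;
}

@keyframes twinkle {
  0%, 100% { opacity: 1; }
  50%      { opacity: 0.2; }
}
```

Because the output is ordinary CSS, a user could refine it directly in a code editor — for example, changing the `2s` duration to make the twinkle faster — or, as the paper describes, issue a follow-up text prompt instead.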

The iterative design process facilitated by Keyframer allows users to refine their animations sequentially, avoiding the need to conceive the entire design upfront. This approach streamlines the animation design process, as highlighted by feedback from professional animation designers and engineers involved in the research.


This AI model builds on previous advancements, including a recent AI model enabling pixel-level edits on images using multimodal LLMs. Moreover, Apple has demonstrated progress in deploying LLMs on devices with limited memory, enhancing the accessibility of AI-powered features on iPhones and other Apple devices.


Published February 15th, 2024 at 13:25 IST