Using generative artificial intelligence (AI), researchers have reconstructed “high-quality” video solely from brain activity, according to a new study.

The researchers, Juan Helen Zhou, Zijiao Chen, and Jiaxin Qing of The Chinese University of Hong Kong and the National University of Singapore, built a model called MinD-Video that generates video from recordings of brain activity.

MinD-Video is a “two-module pipeline designed to bridge the gap between image and video brain decoding,” according to the researchers. It builds on existing fMRI data and Stable Diffusion, the AI model capable of generating images from text.
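
To make the two-module idea concrete, here is a minimal PyTorch sketch. This is not the authors' code: the class names, layer sizes, and the `pipe` stand-in are illustrative assumptions. Module 1 learns to map fMRI scans into the embedding space a text-to-image diffusion model already conditions on; module 2 hands those embeddings to the generator one frame at a time.

```python
# Minimal sketch of a two-module fMRI-to-video pipeline (illustrative,
# not the authors' implementation). All shapes and names are assumptions.
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Module 1: map a flattened fMRI scan to conditioning embeddings."""
    def __init__(self, n_voxels: int, embed_dim: int = 768, n_tokens: int = 77):
        super().__init__()
        self.n_tokens = n_tokens
        self.embed_dim = embed_dim
        self.proj = nn.Sequential(
            nn.Linear(n_voxels, 2048),
            nn.GELU(),
            nn.Linear(2048, n_tokens * embed_dim),
        )

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        # fmri: (batch, n_voxels) -> (batch, n_tokens, embed_dim),
        # the shape a Stable Diffusion text encoder would produce
        return self.proj(fmri).view(-1, self.n_tokens, self.embed_dim)

def decode_video(fmri_windows, encoder, pipe):
    """Module 2 (sketch): condition a pretrained generator on each scan.

    `pipe` stands in for a text-to-image diffusion pipeline that accepts
    precomputed conditioning embeddings (e.g. the `prompt_embeds`
    argument in the `diffusers` Stable Diffusion pipeline). The real
    model also enforces temporal consistency across frames, which this
    sketch omits.
    """
    frames = []
    for scan in fmri_windows:          # one fMRI window per output frame
        cond = encoder(scan.unsqueeze(0))
        frames.append(pipe(prompt_embeds=cond))
    return frames

# Module 1 runs on its own with dummy data:
encoder = FMRIEncoder(n_voxels=4500)       # voxel count is a placeholder
scans = torch.randn(3, 4500)               # 3 fake fMRI windows
embeds = encoder(scans)                    # -> (3, 77, 768)
```

One appeal of this kind of design is that only the comparatively small fMRI encoder has to be trained; the pretrained image generator is reused through its existing conditioning interface.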

The study is posted on arXiv, and an accompanying website shows side-by-side comparisons between the actual video clips shown to subjects and the versions the AI reconstructed from their brain activity. You can check it out here: https://mind-video.com

While the AI's reconstructed videos were not exact matches of the source clips, they largely captured the content, showing similar figures and color palettes.

The reconstructions were reportedly 85 percent accurate, a better outcome than previously published techniques achieved.
