This AI Generates Video From Brain Signals

Monkey See

Researchers from the National University of Singapore and The Chinese University of Hong Kong claim to have created an AI that can reconstruct "high-quality" video from brain signals.

As the researchers explain in a yet-to-be-peer-reviewed paper, the AI model, dubbed MinD-Video, is "co-trained" on publicly available fMRI data — specifically, recordings of individuals' brain activity taken while they were shown videos — and an augmented version of the AI image generator Stable Diffusion.

Using this "two-module pipeline designed to bridge the gap between image and video brain decoding," the researchers were able to generate "high-quality" AI reconstructions of the videos originally shown to the participants, based purely on their brain readings. According to the researchers, the model reconstructed these videos with an average accuracy of 85 percent, as measured by "various semantic and pixel-level metrics."

"Understanding the information hidden within our complex brain activities is a big puzzle in cognitive neuroscience," the paper reads. "We show that high-quality videos of arbitrary frame rates can be reconstructed with Mind-Video using adversarial guidance."

[Video comparison: input vs. reconstructed output. Credit: Chen et al.]

The new paper builds on the researchers' previous efforts to use AI to recreate still images by analyzing brain waves alone. The AI's new video renderings are, on the whole, pretty impressive, as demonstrated in direct side-by-side comparisons of the original and "reconstructed" videos on the researchers' website. For instance, a video of a crowd of people walking down a busy street…