EAST Real-Time VOD System Based on MDSplus

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original ArXiv source.

As EAST (Experimental Advanced Superconducting Tokamak) experimental data are analyzed by a growing number of collaborators, the experimental videos, which directly reflect the real conditions inside the vacuum vessel, are attracting more and more researchers' attention. The real-time VOD (Video On Demand) system based on MDSplus allows users to read video frames in real time, in the same way as the signal data that is also stored in the MDSplus database. Users can play back plasma discharge videos and analyze them frame by frame through jScope or our VOD web station. The system mainly comprises frame storage and frame display. The frame-storage application first receives shot information over a TCP socket, then reads video frames through disk mapping, and finally stores them in MDSplus. The display process is implemented with a B/S (Browser/Server) framework, using PHP and JavaScript to realize the VOD function and to read frame information from MDSplus. The system offers a unified way to access and back up experimental data and video during EAST experiments, and it benefits EAST experimenters by surpassing the former VOD system in both VOD functionality and real-time performance.


💡 Research Summary

The paper presents a novel real‑time Video‑On‑Demand (VOD) system for the Experimental Advanced Superconducting Tokamak (EAST) that integrates high‑speed camera footage directly into the MDSplus data infrastructure. Traditional EAST video handling stored raw video files on a RAID array and kept timestamps in separate XML files, which made real‑time access cumbersome and required a dedicated video player. By leveraging MDSplus—a hierarchical, self‑describing database already used for plasma diagnostic signals—the authors store each video frame as a signal node, enabling uniform access to both experimental data and video through the same APIs (jScope, MDSplus client libraries).

The architecture consists of four main components:

1. The Central Control System (CCS), which triggers the cameras at the start of a discharge.
2. An Acquisition Server, which captures raw video, splits it into individual frames, writes timestamps to a text file, and sends the shot number to the Storage Server via TCP.
3. A Storage Server equipped with RAID storage, which runs two threads: one listens for shot numbers and queues them; the other processes the queue by memory-mapping the frame files, inserting the frames into MDSplus, and simultaneously synthesizing an FLV video using FFmpeg. Shot metadata is also recorded in a MySQL database for the web interface.
4. A VOD Server, which provides a browser-based UI (PHP/JavaScript) for searching, playing, and downloading videos, as well as a frame-by-frame analysis page that reads frames and timestamps from MDSplus.
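The Storage Server's two-thread, queue-based design can be illustrated with a minimal stdlib sketch. This is an assumption-laden mock, not the authors' code: the port number, the ASCII wire format for shot numbers, and the `process_shot` placeholder (which in the real system would memory-map frame files, write into MDSplus, and invoke FFmpeg) are all hypothetical.

```python
import queue
import socket
import threading

shot_queue: "queue.Queue[int]" = queue.Queue()
processed: list[int] = []  # stand-in for the real storage side effects

def process_shot(shot: int) -> None:
    # Placeholder for the real work: memory-map the frame files, insert
    # frames into MDSplus, and synthesize an FLV with FFmpeg.
    processed.append(shot)

def listener(host: str = "127.0.0.1", port: int = 9500) -> None:
    """Thread 1: accept TCP connections from the Acquisition Server
    and queue each incoming shot number."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(64)  # shot number assumed sent as ASCII text
        if data:
            shot_queue.put(int(data.decode().strip()))

def worker() -> None:
    """Thread 2: drain the queue, so slow shot processing never
    blocks the listener from accepting the next shot."""
    while True:
        process_shot(shot_queue.get())
        shot_queue.task_done()

threading.Thread(target=listener, daemon=True).start()
threading.Thread(target=worker, daemon=True).start()
```

Decoupling reception from processing through the queue is what gives the pipeline its non-blocking behavior: a long-pulse shot being encoded does not delay the acknowledgement of the next shot number.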

A key technical contribution is the use of disk‑mapping to read large frame files without copying data into user space, dramatically reducing I/O overhead. The system also adopts a queue‑based, multi‑threaded design to ensure that shot processing does not block incoming data, thereby achieving near‑real‑time performance. The web UI allows users to select camera types, input shot numbers, adjust playback speed, and navigate frames with “previous”, “next”, and “show” controls. In parallel, the authors integrated the system with jScope, MDSplus’s waveform viewer, enabling simultaneous display of video frames and plasma diagnostic signals. This synchronized visualization aids researchers in correlating visual phenomena with quantitative measurements.
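The disk-mapping idea can be sketched with Python's `mmap` module. The frame dimensions, one-byte-per-pixel layout, and file format here are toy assumptions for illustration; the point is that the mapped region is paged in by the OS on demand rather than copied wholesale into user-space buffers.

```python
import mmap
import tempfile

FRAME_W, FRAME_H = 4, 3            # toy dimensions; real camera frames are far larger
FRAME_BYTES = FRAME_W * FRAME_H    # assuming 8-bit grayscale, one byte per pixel

def read_frames_mmap(path: str, n_frames: int) -> list[bytes]:
    """Slice fixed-size raw frames out of a memory-mapped file.
    The OS pages data in lazily instead of bulk-copying the file."""
    frames = []
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for i in range(n_frames):
                frames.append(bytes(mm[i * FRAME_BYTES:(i + 1) * FRAME_BYTES]))
    return frames

# Demo: write two synthetic frames, then map the file and slice them back out.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(bytes(range(FRAME_BYTES)) + bytes([255] * FRAME_BYTES))
    frame_file = tmp.name

frames = read_frames_mmap(frame_file, 2)
```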

Performance evaluation, summarized in Table 1, shows that the new system reduces processing time by roughly a factor of three compared with the legacy approach. For example, a 9.79‑second discharge (shot 77213) that required 30 seconds in the old system is processed in about 10.5 seconds with the new pipeline. The most dramatic improvement is observed for long‑pulse shots, such as shot 73999 (≈105 seconds), where processing time drops from 271 seconds to under 92 seconds.
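A quick check of the two quoted shots confirms the claimed roughly threefold speedup:

```python
# Processing times in seconds for the two example shots quoted above.
old_times = {"77213": 30.0, "73999": 271.0}
new_times = {"77213": 10.5, "73999": 92.0}

speedup = {shot: old_times[shot] / new_times[shot] for shot in old_times}
# Both shots work out to a speedup close to a factor of three.
```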

The authors acknowledge current limitations: video synthesis still relies on CPU‑intensive FFmpeg encoding, and true low‑latency streaming is not yet supported. Multi‑camera synchronous playback is also absent, restricting complex multi‑view analyses. Planned future work includes GPU‑accelerated encoding, adoption of low‑latency streaming protocols (e.g., WebRTC), and development of plugins to manage video metadata within MDSplus more efficiently.

In conclusion, the EAST real‑time VOD system built on MDSplus provides a unified, user‑friendly platform for storing, retrieving, and analyzing experimental video alongside traditional diagnostic data. It delivers significant speed gains, simplifies data backup and sharing, and offers flexible web and jScope interfaces that enhance researchers’ ability to study plasma events in real time. The system’s modular design and reliance on widely adopted open‑source tools position it for easy extension as EAST incorporates additional high‑speed cameras and advanced diagnostics.

