A high-capacity linguistic steganography based on entropy-driven rank-token mapping


📝 Original Info

  • Title: A high-capacity linguistic steganography based on entropy-driven rank-token mapping
  • ArXiv ID: 2510.23035
  • Date: 2025-10-27
  • Authors: not specified in the provided paper metadata

📝 Abstract

Linguistic steganography enables covert communication through embedding secret messages into innocuous texts; however, current methods face critical limitations in payload capacity and security. Traditional modification-based methods introduce detectable anomalies, while retrieval-based strategies suffer from low embedding capacity. Modern generative steganography leverages language models to generate natural stego text but struggles with limited entropy in token predictions, further constraining capacity. To address these issues, we propose an entropy-driven framework called RTMStega that integrates rank-based adaptive coding and context-aware decompression with normalized entropy. By mapping secret messages to token probability ranks and dynamically adjusting sampling via context-aware entropy-based adjustments, RTMStega achieves a balance between payload capacity and imperceptibility. Experiments across diverse datasets and models demonstrate that RTMStega triples the payload capacity of mainstream generative steganography, reduces processing time by over 50%, and maintains high text quality, offering a trustworthy solution for secure and efficient covert communication.
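To make the abstract's core idea concrete, here is a minimal, illustrative sketch of rank-based embedding gated by normalized entropy. It is not the paper's implementation: the token list, probabilities, threshold, and helper names are all hypothetical, and a real system would take the ranked candidates from a language model's next-token distribution at each generation step.

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of the distribution, normalized to [0, 1]."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(len(probs)) if len(probs) > 1 else 0.0

def embed_step(ranked_tokens, probs, bitstream, threshold=0.5):
    """Embed secret bits by emitting the token at the rank the bits encode.

    When normalized entropy falls below `threshold` (a hypothetical cutoff),
    embed nothing and emit the top-ranked token, keeping the text natural.
    """
    if normalized_entropy(probs) < threshold or not bitstream:
        return ranked_tokens[0], bitstream  # no capacity at this step
    # Consume as many bits as the candidate pool can address (4 ranks -> 2 bits).
    n_bits = min(len(bitstream), int(math.log2(len(ranked_tokens))))
    rank = int(bitstream[:n_bits], 2)
    return ranked_tokens[rank], bitstream[n_bits:]

# Toy step: candidates ranked by model probability, near-uniform (high entropy),
# so this step can carry 2 bits of the secret bitstream "101".
tokens = ["sunny", "cloudy", "rainy", "windy"]
probs = [0.30, 0.28, 0.22, 0.20]
token, remaining = embed_step(tokens, probs, "101")
print(token, remaining)  # rank 0b10 = 2 -> "rainy", one bit left
```

Decoding would invert the mapping: the receiver, holding the same model, re-ranks the candidates at each step and reads the chosen token's rank back into bits. The entropy gate is what trades capacity for imperceptibility: low-entropy steps, where any deviation from the top token would look unnatural, carry no payload.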


Reference

This content is AI-processed based on open access ArXiv data.
