์ด๋”๋ฆฌ์›€ ๊ฑฐ๋ž˜ ๊ฒฝ์ œ์  ์˜๋„ ํŒŒ์•…์„ ์œ„ํ•œ TxSum ๋ฐ์ดํ„ฐ์…‹๊ณผ MATEX ๋ฉ€ํ‹ฐ์—์ด์ „ํŠธ ์‹œ์Šคํ…œ

์ด๋”๋ฆฌ์›€ ๊ฑฐ๋ž˜ ๊ฒฝ์ œ์  ์˜๋„ ํŒŒ์•…์„ ์œ„ํ•œ TxSum ๋ฐ์ดํ„ฐ์…‹๊ณผ MATEX ๋ฉ€ํ‹ฐ์—์ด์ „ํŠธ ์‹œ์Šคํ…œ

Understanding the economic intent of Ethereum transactions is critical for user safety, yet current tools expose only raw on-chain data, leading to widespread 'blind signing' (approving transactions without understanding them). Through interviews with 16 Web3 users, we find that effective explanatio…

Dual Reasoning Training: Strengthening Scientific Reasoning in Large Language Models by Combining Affirmative and Negative Logic

Large Language Models (LLMs) have transformed natural language processing and hold growing promise for advancing science, healthcare, and decision-making. Yet their training paradigms remain dominated by affirmation-based inference, akin to modus ponens, where accepted premises yield predicted conse…
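The abstract's contrast between affirmation-based inference and a combined affirmative-negative regime can be written schematically. Modus ponens is named in the abstract; modus tollens is shown here as the standard negation-based counterpart, which is my reading of the "negative logic" in the title, not a claim about the paper's actual training objective:

```latex
% Affirmation-based inference (modus ponens): an accepted premise yields its consequence.
\frac{P \rightarrow Q \qquad P}{\therefore\; Q}

% Negation-based counterpart (modus tollens, assumed here as the "negative" direction):
% rejecting a consequence rejects the premise that would have produced it.
\frac{P \rightarrow Q \qquad \neg Q}{\therefore\; \neg P}
```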

์ž๋™ํ™”๋œ MDP ๋ชจ๋ธ๋ง๊ณผ ์ •์ฑ… ์ƒ์„ฑ์„ ์œ„ํ•œ ์—์ด์ „ํŠธํ˜• LLM ํ”„๋ ˆ์ž„์›Œํฌ Aโ€‘LAMP

์ž๋™ํ™”๋œ MDP ๋ชจ๋ธ๋ง๊ณผ ์ •์ฑ… ์ƒ์„ฑ์„ ์œ„ํ•œ ์—์ด์ „ํŠธํ˜• LLM ํ”„๋ ˆ์ž„์›Œํฌ Aโ€‘LAMP

Applying reinforcement learning (RL) to real-world tasks requires converting informal descriptions into a formal Markov decision process (MDP), implementing an executable environment, and training a policy agent. Automating this process is challenging due to modeling errors, fragile code, and misali…
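The formal MDP the abstract refers to is the standard tuple (S, A, P, R, γ). A minimal sketch of that structure, with a single Bellman optimality backup, is shown below; the `MDP` dataclass and the toy two-state chain are illustrative assumptions, not A-LAMP's actual schema:

```python
from dataclasses import dataclass

# Minimal formal MDP -- illustrative only, not A-LAMP's internal representation.
@dataclass
class MDP:
    states: list        # S
    actions: list       # A
    transition: dict    # P[(s, a)] -> list of (probability, next_state)
    reward: dict        # R[(s, a)] -> float
    gamma: float = 0.9  # discount factor

def bellman_backup(mdp, values, s):
    """One Bellman optimality backup: V(s) = max_a [R(s,a) + gamma * E[V(s')]]."""
    return max(
        mdp.reward[(s, a)]
        + mdp.gamma * sum(p * values[s2] for p, s2 in mdp.transition[(s, a)])
        for a in mdp.actions
    )

# Two-state toy chain: "go" moves s0 -> s1 and pays reward 1, everything else pays 0.
toy = MDP(
    states=["s0", "s1"],
    actions=["stay", "go"],
    transition={("s0", "stay"): [(1.0, "s0")], ("s0", "go"): [(1.0, "s1")],
                ("s1", "stay"): [(1.0, "s1")], ("s1", "go"): [(1.0, "s1")]},
    reward={("s0", "stay"): 0.0, ("s0", "go"): 1.0,
            ("s1", "stay"): 0.0, ("s1", "go"): 0.0},
)
v = {"s0": 0.0, "s1": 0.0}
print(bellman_backup(toy, v, "s0"))  # max(0.0, 1.0) = 1.0
```

Converting an informal task description into exactly this kind of explicit tuple is the modeling step the framework automates.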

์ €์กฐ๋„ ๊ตํ†ต ์˜์ƒ ํ–ฅ์ƒ์„ ์œ„ํ•œ ๋ฌด์ง€๋„ ํ•™์Šต ๋‹ค๋‹จ๊ณ„ ํ”„๋ ˆ์ž„์›Œํฌ

์ €์กฐ๋„ ๊ตํ†ต ์˜์ƒ ํ–ฅ์ƒ์„ ์œ„ํ•œ ๋ฌด์ง€๋„ ํ•™์Šต ๋‹ค๋‹จ๊ณ„ ํ”„๋ ˆ์ž„์›Œํฌ

Enhancing low-light traffic imagery is a critical requirement for achieving reliable perception in autonomous driving, intelligent transportation, and urban surveillance systems. Traffic scenes captured under nighttime or dimly lit conditions often suffer from complex visual degradations arising fro…

์ฃผ๊ฐ€ ์˜ˆ์ธก์—์„œ KAN๊ณผ LSTM ์„ฑ๋Šฅ ๋น„๊ต ์ •ํ™•๋„์™€ ํ•ด์„ ๊ฐ€๋Šฅ์„ฑ์˜ ๊ท ํ˜•

์ฃผ๊ฐ€ ์˜ˆ์ธก์—์„œ KAN๊ณผ LSTM ์„ฑ๋Šฅ ๋น„๊ต ์ •ํ™•๋„์™€ ํ•ด์„ ๊ฐ€๋Šฅ์„ฑ์˜ ๊ท ํ˜•

This paper compares Kolmogorov-Arnold Networks (KAN) and Long Short-Term Memory networks (LSTM) for forecasting non-deterministic stock price data, evaluating predictive accuracy versus interpretability trade-offs using Root Mean Square Error (RMSE). LSTM demonstrates substantial superiority across…
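RMSE, the metric the comparison rests on, is the square root of the mean squared deviation between predictions and observations. A minimal sketch with toy price values (the numbers are illustrative, not from the paper):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error between two equal-length sequences."""
    assert len(y_true) == len(y_pred) and y_true
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy closing prices vs. model predictions (illustrative values only).
actual    = [100.0, 102.0, 101.5, 103.0]
predicted = [100.5, 101.0, 102.0, 103.5]
print(round(rmse(actual, predicted), 4))  # 0.6614
```

Lower RMSE means tighter fit to the price series, which is the sense in which one model "outperforms" the other here.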

Cube Bench: Evaluating Spatial and Sequential Reasoning in Multimodal Large Language Models

We introduce Cube Bench, a Rubik's-cube benchmark for evaluating spatial and sequential reasoning in multimodal large language models (MLLMs). The benchmark decomposes performance into five skills: (i) reconstructing cube faces from images and text, (ii) choosing the optimal next move, (iii) predict…

Prism World Models: Mode-Separated Mixture of Experts for Hybrid Robot Dynamics

Model-based planning in robotic domains is fundamentally challenged by the hybrid nature of physical dynamics, where continuous motion is punctuated by discrete events such as contacts and impacts. Conventional latent world models typically employ monolithic neural networks that enforce global conti…

ํ”Œ๋ผ์Šคํ‹ฑ์„ฑ ํšŒ๋ณต์„ ์œ„ํ•œ ํŠธ์œˆ ๋„คํŠธ์›Œํฌ ๊ธฐ๋ฐ˜ ๋ฆฌ์…‹ ๊ธฐ๋ฒ• AltNet

ํ”Œ๋ผ์Šคํ‹ฑ์„ฑ ํšŒ๋ณต์„ ์œ„ํ•œ ํŠธ์œˆ ๋„คํŠธ์›Œํฌ ๊ธฐ๋ฐ˜ ๋ฆฌ์…‹ ๊ธฐ๋ฒ• AltNet

Neural networks have shown remarkable success in supervised learning when trained on a single task using a fixed dataset. However, when neural networks are trained on a reinforcement learning task, their ability to continue learning from new experiences declines over time. This decline in learning a…

ํ”ฝ์…€ ๋™๋“ฑ ์ž ์žฌ ํ•ฉ์„ฑ์œผ๋กœ ๊ตฌํ˜„ํ•˜๋Š” ๊ณ ํ’ˆ์งˆ ์ด๋ฏธ์ง€ ์ธํŽ˜์ธํŒ…

ํ”ฝ์…€ ๋™๋“ฑ ์ž ์žฌ ํ•ฉ์„ฑ์œผ๋กœ ๊ตฌํ˜„ํ•˜๋Š” ๊ณ ํ’ˆ์งˆ ์ด๋ฏธ์ง€ ์ธํŽ˜์ธํŒ…

Latent inpainting in diffusion models still relies almost universally on linearly interpolating VAE latents under a downsampled mask. We propose a key principle for compositing image latents: Pixel-Equivalent Latent Compositing (PELC). An equivalent latent compositor should be the same as compositin…
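The baseline the abstract criticizes, linear interpolation of VAE latents under a downsampled mask, can be sketched in a few lines. The function name, nearest-neighbour downsampling, and the 8x pixel-to-latent factor are assumptions for illustration; real pipelines vary in how the mask is resized:

```python
import numpy as np

def naive_latent_composite(z_known, z_generated, pixel_mask, factor=8):
    """Baseline latent compositing: downsample the pixel-space mask to the latent
    grid (nearest-neighbour here) and linearly blend the two latents.
    z_known, z_generated: (C, H, W) latents; pixel_mask: (H*factor, W*factor),
    1.0 where the region is to be inpainted."""
    m = pixel_mask[::factor, ::factor]  # crude nearest-neighbour downsample
    m = m[None, :, :]                   # broadcast the mask over channels
    return m * z_generated + (1.0 - m) * z_known

# Toy example: 4-channel 8x8 latents standing in for a 64x64 image.
rng = np.random.default_rng(0)
z_a = rng.normal(size=(4, 8, 8))
z_b = rng.normal(size=(4, 8, 8))
mask = np.zeros((64, 64))
mask[:, 32:] = 1.0                      # inpaint the right half of the image
out = naive_latent_composite(z_a, z_b, mask)
print(out.shape)  # (4, 8, 8)
```

Because VAE latents are not pixel values, this linear blend is exactly the step PELC argues fails to be equivalent to compositing in pixel space, especially along mask boundaries where the downsampled mask is coarse.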

3D Gaussian Splatting that Imagines Natural Video from Sparse Inputs

Given just a few glimpses of a scene, can you imagine the movie playing out as the camera glides through it? That's the lens we take on sparse-input novel view synthesis: not only as filling spatial gaps between widely spaced views, but also as completing a natural video unfolding throug…

< Category Statistics (Total: 5017) >

General Relativity: 66
General Research: 781
HEP-EX: 25
HEP-LAT: 7
HEP-PH: 79
HEP-TH: 67
MATH-PH: 79
NUCL-EX: 6
NUCL-TH: 14
Quantum Physics: 67
