TML-Interaction-Small is a multimodal interaction model created by Thinking Machines Lab that enables continuous real-time collaboration between humans and AI across audio, video, and text modalities. The model is designed to move beyond traditional turn-based AI systems by supporting native interaction capabilities such as simultaneous listening and speaking, proactive interjections, visual cue awareness, real-time responses, and ongoing contextual collaboration.

TML-Interaction-Small processes interactions through a time-aligned micro-turn architecture that continuously exchanges 200 ms streams of input and output, allowing the model to maintain conversational presence while reasoning, responding, and acting concurrently. The system pairs this interaction model with an asynchronous background model that handles deeper reasoning, tool use, browsing, and long-running workflows while the interaction layer continues communicating with the user in real time. Together, these components let users collaborate through speech, video, messaging, and other multimodal inputs without waiting at rigid conversational turn boundaries.

Thinking Machines Lab developed the model to improve human-AI collaboration by keeping people actively involved during AI workflows rather than relying solely on autonomous agents. TML-Interaction-Small supports live translation, contextual interruptions, visual-based reactions, concurrent speech processing, time awareness, tool calling, web browsing, and multimodal streaming interaction. The system also introduces encoder-free early fusion, streaming inference optimization, and reinforcement learning strategies tuned for interactive responsiveness and stability.
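The micro-turn exchange can be pictured as a fixed-cadence loop. Below is a minimal sketch in Python; the names (`Chunk`, `InteractionModel`, `micro_turn_loop`) and the echo behavior are illustrative assumptions rather than the actual TML API, but the 200 ms cadence and the always-listening-while-speaking structure follow the description above.

```python
import asyncio
from dataclasses import dataclass

MICRO_TURN_MS = 200  # the 200 ms exchange interval described above


@dataclass
class Chunk:
    """One micro-turn of multimodal data (audio samples, a video frame, text)."""
    audio: bytes = b""
    video: bytes = b""
    text: str = ""


class InteractionModel:
    """Hypothetical stand-in for the streaming interaction model."""

    async def step(self, incoming: Chunk) -> Chunk:
        # The real model would run one micro-turn of multimodal inference;
        # echoing keeps the sketch self-contained and runnable.
        await asyncio.sleep(0)
        return Chunk(text=f"heard {len(incoming.audio)} audio bytes")


async def micro_turn_loop(model: InteractionModel,
                          mic: asyncio.Queue,
                          speaker: asyncio.Queue) -> None:
    """Exchange one input chunk and one output chunk every MICRO_TURN_MS.

    Capture and playback share one clock, so the model is always listening
    while it speaks: no turn boundary forces either side to go silent.
    """
    loop = asyncio.get_running_loop()
    while True:
        tick = loop.time()
        incoming = await mic.get()             # latest 200 ms of input
        outgoing = await model.step(incoming)  # one micro-turn of inference
        await speaker.put(outgoing)            # next 200 ms of output
        # Sleep out the rest of the 200 ms budget to stay time-aligned.
        await asyncio.sleep(max(0.0, MICRO_TURN_MS / 1000 - (loop.time() - tick)))


async def demo() -> None:
    mic: asyncio.Queue = asyncio.Queue()
    speaker: asyncio.Queue = asyncio.Queue()
    for _ in range(3):
        await mic.put(Chunk(audio=b"\x00" * 3200))  # pretend: 200 ms of audio
    task = asyncio.create_task(micro_turn_loop(InteractionModel(), mic, speaker))
    for _ in range(3):
        print((await speaker.get()).text)
    task.cancel()


asyncio.run(demo())
```

In the real system each chunk would presumably carry encoded audio and video tokens and `step` would be a model forward pass; the point of the sketch is only the fixed 200 ms exchange cadence.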
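The split between the live interaction layer and the asynchronous background model can be sketched the same way. Everything here (`background_model`, the filler messages, the timings) is an assumed illustration of the pattern, not TML's implementation: slow work is launched as a concurrent task, and the foreground keeps producing micro-turns until the result is ready to fold back into the conversation.

```python
import asyncio


async def background_model(request: str) -> str:
    """Stand-in for the slower background model (deep reasoning, tools, browsing)."""
    await asyncio.sleep(1.0)  # pretend this is a long-running workflow
    return f"background result for {request!r}"


async def interaction_layer(pending: asyncio.Task) -> None:
    """Keep the conversation alive while background work is in flight.

    Rather than blocking on the slow task, the interaction layer keeps
    producing micro-turns (status updates, answers to side questions) and
    folds the background result in once it arrives.
    """
    while not pending.done():
        print("interaction layer: still here, working on it...")
        await asyncio.sleep(0.3)  # one more micro-turn of live conversation
    print(f"interaction layer: {pending.result()}")


async def main() -> None:
    # Kick off deep work without blocking the conversation.
    task = asyncio.create_task(background_model("summarize this page"))
    await interaction_layer(task)


asyncio.run(main())
```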