Former OpenAI CTO Mira Murati’s AI startup, Thinking Machines Lab, has unveiled its first product aimed at helping developers easily fine-tune large language models (LLMs).

Called Tinker, the API-based product lets developers “write training loops in Python on your laptop” that run on the company’s distributed GPUs, Thinking Machines said in a blog post on October 1. Scheduling, resource allocation, and failure recovery are also handled by the company itself.
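To illustrate the shape of such a workflow, here is a minimal sketch of a client-side training loop of the kind the quote describes. Everything in it is hypothetical: `FakeFineTuneClient` and its `forward_backward`/`optim_step` methods are invented stand-ins that run locally on a toy one-parameter model, not Tinker’s real interface.

```python
# Hypothetical sketch of a client-driven fine-tuning loop.
# FakeFineTuneClient is an invented placeholder, NOT Tinker's API;
# in a real service, forward_backward and optim_step would run remotely.

class FakeFineTuneClient:
    """Stand-in for a remote fine-tuning service (runs locally here)."""

    def __init__(self, base_model: str):
        self.base_model = base_model
        self.weight = 0.0  # toy one-parameter "model"

    def forward_backward(self, batch):
        # Toy squared-error loss against a fixed target of 1.0.
        preds = [self.weight * x for x in batch]
        loss = sum((p - 1.0) ** 2 for p in preds) / len(batch)
        grad = sum(2 * (p - 1.0) * x for p, x in zip(preds, batch)) / len(batch)
        return loss, grad

    def optim_step(self, grad, lr=0.1):
        # Simple gradient-descent update.
        self.weight -= lr * grad


client = FakeFineTuneClient("example/base-model")
for step in range(50):
    loss, grad = client.forward_backward([1.0, 1.0, 1.0])
    client.optim_step(grad)

print(round(client.weight, 2))  # converges toward 1.0
```

The point of the design the article describes is that only this small loop lives on the developer’s machine, while the heavy lifting (GPU scheduling, resource allocation, failure recovery) happens server-side.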

Using Tinker, developers can fine-tune a wide range of large and small open-weight models, including large mixture-of-experts (MoE) models such as Alibaba’s Qwen-235B-A22B, as well as Meta’s Llama family of models. It is currently in private beta for waitlisted researchers and developers.