Google has unveiled a pint-sized new addition to its "open" large language model lineup: Gemma 3 270M.

Weighing in at 270 million parameters and requiring around 550MB of memory, it's aimed squarely at on-device deployment and rapid model iteration, though the usual caveats still apply: hallucinations, shaky output, and the probable copyright entanglements baked into its training data.
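For a sense of how small that is in practice, here's a minimal sketch of loading the model on CPU with Hugging Face's transformers library. The `google/gemma-3-270m` model ID is assumed from Google's usual naming conventions, and the memory arithmetic is a back-of-the-envelope estimate rather than a measured figure.

```python
# Minimal sketch: load Gemma 3 270M on CPU and estimate its weight footprint.
# Assumptions: the Hugging Face model ID "google/gemma-3-270m" and a recent
# transformers release that includes the Gemma 3 architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 2 bytes per parameter
)

# Back-of-the-envelope: 270M parameters x 2 bytes lands near the quoted ~550MB.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters, ~{n_params * 2 / 1e6:.0f}MB in bfloat16")

prompt = "Small language models are useful on-device because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```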

Google launched the original Gemma family in February 2024, and at the time offered two flavours: a two-billion-parameter version designed for on-CPU execution and a more capable seven-billion-parameter version targeting systems with GPU- or TPU-based accelerators.

While positioned as "open" models, in contrast to the company's proprietary Gemini family, they, like most competing "open" models, did not include their training data, and Google's custom licence terms fell short of most accepted definitions of open source.
