Researchers behind a new study say that the methods used to evaluate AI systems’ capabilities routinely oversell AI performance and lack scientific rigor.

The study, led by researchers at the Oxford Internet Institute in partnership with over three dozen researchers from other institutions, examined 445 leading AI tests, called benchmarks, often used to measure the performance of AI models across a variety of topic areas.

AI developers and researchers use these benchmarks to evaluate model abilities and tout technical progress, referencing them to make claims on topics ranging from software engineering performance to abstract-reasoning capacity. However, the paper found that many of these benchmarks lack the scientific rigor needed to support such claims.
