Our technology transitions from monolithic "black box" models to a distributed architecture in which intelligence is orchestrated across a coordinated system of small models.
The model uses adaptive test-time compute to scale reasoning depth dynamically, and integrates structural guardrails to ensure the precision and auditability required for enterprise and industrial workflows.
Giotto is a portable, configurable AI model with advanced reasoning capabilities, combining open and proprietary models, datasets, and tools to deliver high performance, adaptability, robustness, and multi-agent support.
Test-time training extends the model's capabilities by adapting it on the fly to the specific query context.
We retrain the model in real time, improving accuracy and reducing hallucinations in the final response.
Decoding is the process by which a language model generates responses token by token, shaping both quality and diversity of outputs.
At Giotto we move away from the standard next-most-probable-token paradigm and instead explore token paths as a branching structure, dynamically expanding the most informative directions based on uncertainty.
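A minimal sketch of such uncertainty-guided branching follows. The toy next-token table, entropy threshold, and branching factor are our assumptions for illustration, not Giotto's decoder: where the next-token distribution is confident the search stays greedy, and where it is uncertain (high entropy) the path branches into several candidates.

```python
import math

# Toy next-token model: prefix (tuple of tokens) -> {token: probability}.
# The table and thresholds are illustrative assumptions only.
MODEL = {
    (): {"the": 0.9, "a": 0.1},
    ("the",): {"cat": 0.4, "dog": 0.35, "sky": 0.25},  # high uncertainty
    ("the", "cat"): {"sat": 0.95, "ran": 0.05},        # low uncertainty
    ("the", "dog"): {"ran": 0.9, "sat": 0.1},
    ("the", "sky"): {"fell": 1.0},
    ("a",): {"cat": 1.0},
}

def entropy(dist):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def branching_decode(max_len=3, entropy_threshold=0.7, branch_k=3):
    """Greedy by default, but branch into the top-k tokens wherever the
    next-token distribution is uncertain (entropy above the threshold)."""
    frontier = [((), 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        next_frontier = []
        for seq, logp in frontier:
            dist = MODEL.get(seq)
            if dist is None:  # no continuation: sequence is finished
                next_frontier.append((seq, logp))
                continue
            k = branch_k if entropy(dist) > entropy_threshold else 1
            top = sorted(dist.items(), key=lambda kv: -kv[1])[:k]
            for tok, p in top:
                next_frontier.append((seq + (tok,), logp + math.log(p)))
        frontier = next_frontier
    return sorted(frontier, key=lambda sl: -sl[1])

for seq, logp in branching_decode():
    print(" ".join(seq), round(logp, 2))
```

Here the decoder stays greedy at the confident first step, then fans out into three paths at the uncertain second step, yielding multiple candidates to score downstream.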
Once multiple candidates are generated during decoding, scoring determines which output is most reliable without relying on external supervision.
Our proprietary scoring system relies on sophisticated ranking methods based on intrinsic markers that characterise output quality.
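To make the idea concrete, here is a hedged sketch of supervision-free candidate scoring. The two markers used (length-normalized log-probability and self-consistency across candidates) and their weighting are common illustrative choices of ours, not Giotto's proprietary markers:

```python
from collections import Counter

def score(candidates, consistency_weight=0.5):
    """Rank candidates without external supervision, combining two
    intrinsic markers: average token log-probability (model confidence)
    and self-consistency (agreement with the other candidates).
    candidates: list of (answer, token_logprobs). Returns the best pair
    (score, answer). Markers and weight are illustrative assumptions."""
    votes = Counter(ans for ans, _ in candidates)
    scored = []
    for ans, logps in candidates:
        avg_logp = sum(logps) / len(logps)        # fluency / confidence
        agreement = votes[ans] / len(candidates)  # self-consistency
        scored.append((avg_logp + consistency_weight * agreement, ans))
    return max(scored)

# Three candidates from the decoding stage: two agree on "42".
candidates = [
    ("42", [-0.2, -0.1, -0.3]),
    ("42", [-0.5, -0.4, -0.2]),
    ("17", [-0.1, -0.1, -0.1]),
]
best_score, best_answer = score(candidates)
print(best_answer)  # → 42
```

The confident but isolated "17" loses to the answer that both reads fluently and agrees with the majority of candidates, which is the kind of intrinsic signal the paragraph above refers to.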
The Abstraction and Reasoning Corpus (ARC) is a benchmark designed to measure progress toward Artificial General Intelligence (AGI). Created by François Chollet in 2019, ARC evaluates a system's ability to acquire new skills, a key trait of general intelligence. Unlike typical AI benchmarks that test specific skills, ARC challenges AI to reason and abstract in ways that come naturally to humans but are exceptionally difficult for machines.
In 2025, we achieved unprecedented results on the ARC benchmark leveraging our proprietary approach and technology.
Check out our report →
Go to ARC prize website →
On November 4, 2025, we closed the competition in 2nd place out of ~1,500 teams.
27.6%
27.1%
21.7%
Have a question for us? Please fill out the form below, and our team will get back to you promptly.