Optimizing AI model inference performance and maximizing hardware efficiency to deliver world-leading capabilities
Providing the most versatile SW stack and tools for AI model bring-up and optimization, targeting the world's leading model inference performance
We are developing a full SW stack for top AI model inference performance and utilization
We are developing an all-around AI model development SDK for the easiest and fastest AI model deployment
We are developing NPU SW using two distinct NPU SW stacks, optimizing for “Generality” and “Performance”
We are developing the most flexible and compatible frontend and backend AI model compiler for the world's best AI model inference performance
Source: Official website of Tenstorrent Inc. BOS and Tenstorrent are collaborating on joint AI SW development.