Kompact AI’s CPU-Powered LLM Claims Face Scrutiny from Tech Community
Kompact AI, a collaboration between IIT Madras and Bengaluru-based startup Ziroh Labs, has claimed a significant breakthrough in artificial intelligence: enabling large language models to run efficiently on central processing units rather than the graphics processing units traditionally used for the task. The initiative, branded as Kai VM, reportedly achieved an inference speed of 43 tokens per second on a 24-core Intel Xeon Silver 4510 CPU with 46 GB of RAM. The development has been touted as a potential game-changer for AI accessibility, particularly in regions with limited access to high-end GPU infrastructure.
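Throughput figures like the 43 tokens per second cited above are typically derived by timing a generation run and dividing the number of tokens produced by the wall-clock time. A minimal sketch of that measurement, using a stub generator in place of a real model (the `fake_generate` function and its output are illustrative assumptions, not Kompact AI's code):

```python
import time


def tokens_per_second(token_count: int, elapsed_seconds: float) -> float:
    """Standard inference-throughput metric: tokens generated per wall-clock second."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_seconds


def measure_throughput(generate, prompt: str, max_tokens: int) -> float:
    """Time one generation call and report tokens/second.

    `generate` is any callable taking (prompt, max_tokens) and
    returning the list of generated tokens.
    """
    start = time.perf_counter()
    tokens = generate(prompt, max_tokens)
    elapsed = time.perf_counter() - start
    return tokens_per_second(len(tokens), elapsed)


# Illustrative stand-in for a real model's generate() call.
def fake_generate(prompt: str, max_tokens: int) -> list:
    return ["tok"] * max_tokens


if __name__ == "__main__":
    # At the reported 43 tokens/s, a 500-token reply would take roughly 11.6 s.
    print(f"{500 / 43:.1f} s for 500 tokens at 43 tok/s")
    print(f"{measure_throughput(fake_generate, 'hello', 100):.0f} tok/s (stub)")
```

Note that a single tokens-per-second number conflates prompt processing and generation, and says nothing about batch size or response latency, which is one reason such headline figures invite closer scrutiny.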

However, the tech community has expressed scepticism regarding the novelty and practicality of this achievement. A detailed critique by an independent programmer writing under the pseudonym 'TheSeriousProgrammer' challenges the originality of Kompact AI's claims. Using Intel's publicly available inference stack on a 12-core Xeon processor, the programmer replicated the reported performance within three hours. This replication suggests that the performance gains may stem not from proprietary innovations but from existing technologies.

The core of the criticism lies in the assertion that Kompact AI’s demonstration may have leveraged Intel’s inference software without significant modifications or enhancements. If true, this raises concerns about the authenticity of the claimed breakthrough. The programmer argues that repackaging existing solutions without substantial innovation does not constitute a technological advancement. Moreover, the endorsement of this project by IIT Madras, a prestigious institution known for rigorous research standards, adds to the controversy, prompting calls for greater transparency and technical disclosure.

The lack of transparency regarding the underlying technology of Kompact AI has been a point of contention. Without open-source access or detailed technical documentation, it is challenging for the developer community to assess the system’s capabilities and validate its purported advantages. This opacity hinders collaborative development and critical evaluation, essential components in the advancement of AI technologies.

In the broader context of AI development, running LLMs on CPUs is not unprecedented. Companies like ThirdAI have introduced models such as BOLT2.5B, which was pre-trained exclusively on CPUs, achieving fine-tuning speeds of approximately 50 tokens per second on standard desktop configurations. Similarly, industry leaders like Intel and Ampere have demonstrated the feasibility of CPU-based LLM inference, albeit with certain limitations in performance compared to GPU-based systems.

Source: Arabian Post, https://thearabianpost.com/kompact-ais-cpu-powered-llm-claims-face-scrutiny-from-tech-community/