Established in 2016 to accelerate AI inference, Groq has delivered significant cost savings and reduced operational overhead for its customers. Groq is proud to partner on this key industry launch, making the latest Llama 3.1 models, including 70B Instruct and 8B Instruct, available on its platform.
Documentation for how to use Groq with the AI SDK is available. The LPU™ Inference Engine by Groq is a hardware and software platform that delivers exceptional compute speed, quality, and energy efficiency. Start learning about LPU technology and its benefits today.
The Llama 3.1 model suite is now available on Groq.