Hi Leo Chow,
Currently, the DeepSeek-R1 model is in Preview mode, and it supports a maximum context length of 128k tokens. This extended context length enables the model to excel at complex reasoning tasks, including language understanding, scientific reasoning, and coding.
Hope this helps. Do let us know if you have any further queries.
If this answers your query, please click "Accept Answer" and mark "Yes" for "Was this answer helpful".
DeepSeek R1 is a powerful AI model, and with Groq's high-speed inference you can get lightning-fast responses. If you're looking to integrate a DeepSeek R1 distilled model (here, deepseek-r1-distill-llama-70b) with Groq, here's how to do it.
Direct model link: https://console.groq.com/playground?model=deepseek-r1-distill-llama-70b
Set Up the API Request
You need to send a POST request to Groq’s API endpoint:
📌 URL: https://api.groq.com/openai/v1/chat/completions
📌 Headers:
Authorization: Bearer <your-api-key>
Content-Type: application/json
📌 Request Body (JSON format):
{ "messages": [ { "role": "system", "content": "Please answer in English only" }, { "role": "user", "content": "Deepseek R1 vs OpenAI O1" } ], "model": "deepseek-r1-distill-llama-70b", "temperature": 0.6, "max_completion_tokens": 4096, "top_p": 0.95, "stream": false, "stop": null } 👉 Replace <your-api-key> with your actual API key.
Why Use Groq for DeepSeek R1?
✅ Faster Inference – Groq’s hardware accelerates LLM responses significantly.
✅ Easy API Integration – Works seamlessly with OpenAI-style API requests.
✅ High Token Limit – The distilled model on Groq supports a context window of up to 131,072 (128K) tokens.
💡 Pro Tip: Adjust the temperature and top_p parameters to fine-tune response randomness and creativity.
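For instance, reusing the payload from the sketch above, you could nudge the sampling parameters in either direction (these particular values are illustrative assumptions, not official recommendations):

    # Illustrative presets; the exact numbers are assumptions, not tuned defaults.
    payload["temperature"] = 0.2   # lower temperature -> more focused, repeatable answers
    payload["top_p"] = 0.8         # narrower nucleus sampling

    # ...or, for more varied, creative output:
    payload["temperature"] = 1.0
    payload["top_p"] = 0.95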
Have you tried using DeepSeek R1 via Groq? Share your experiences in the comments! 🚀
Download the n8n template: https://drive.google.com/file/d/1ImStl41g32DD7RdcKP0YYAqO4q18jhWI/view?usp=download