POST https://api.groq.com/openai/v1/chat/completions
Creates a model response for the given chat conversation.
A list of messages comprising the conversation so far.
ID of the model to use. For details on which models are compatible with the Chat API, see available models.
Deprecated: Use search_settings.exclude_domains instead. A list of domains to exclude from the search results when the model uses a web search tool.
For internal use only
This is not yet supported by any of our models. Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present.
Deprecated in favor of tools.
A list of functions the model may generate JSON inputs for.
Deprecated: Use search_settings.include_domains instead. A list of domains to include in the search results when the model uses a web search tool.
Whether to include reasoning in the response. If true, the response will include a reasoning field. If false, the model's reasoning will not be included in the response. This field is mutually exclusive with reasoning_format.
This is not yet supported by any of our models. Modify the likelihood of specified tokens appearing in the completion.
This is not yet supported by any of our models.
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
Deprecated in favor of max_completion_tokens.
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
This parameter is not currently supported.
How many chat completion choices to generate for each input message. Note that at the moment, only n=1 is supported. Other values will result in a 400 response.
Whether to enable parallel function calling during tool use.
This is not yet supported by any of our models. Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
This field is only available for Qwen3 models. Set to 'none' to disable reasoning. Set to 'default' or null to let Qwen reason.
hidden, raw, parsed
Specifies how to output reasoning tokens. This field is mutually exclusive with include_reasoning.
An object specifying the format that the model must output. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. The json_schema response format is only available on supported models. Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
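For example, a request could opt into Structured Outputs like this (a minimal sketch; the schema shown is illustrative, not part of the API):
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "answer",
"schema": {
"type": "object",
"properties": { "answer": { "type": "string" } },
"required": ["answer"]
}
}
}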
Settings for web search functionality when the model uses a web search tool.
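As a sketch, using the include_domains and exclude_domains fields named elsewhere in this reference (other fields may exist):
"search_settings": {
"include_domains": ["example.com"],
"exclude_domains": ["example.org"]
}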
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
auto, on_demand, flex, performance, null
The service tier to use for the request. Defaults to on_demand. auto will automatically select the highest tier available within the rate limits of your organization. flex uses the flex tier, which will succeed or fail quickly.
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
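For example, a request may pass up to four stop sequences (values are illustrative):
"stop": ["END", "\n\n"]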
This parameter is not currently supported.
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Example code.
Options for streaming response. Only set this when you set stream: true.
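A minimal streaming request looks like this (a sketch; each server-sent event carries one chat completion chunk object):
curl https://api.groq.com/openai/v1/chat/completions -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "llama-3.3-70b-versatile",
"messages": [{"role": "user", "content": "Hello"}],
"stream": true
}'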
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present.
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
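As a sketch, a request that offers the model a single function and lets it decide whether to call it (the get_weather function is illustrative):
curl https://api.groq.com/openai/v1/chat/completions -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "llama-3.3-70b-versatile",
"messages": [{"role": "user", "content": "What is the weather in Boston?"}],
"tools": [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a city",
"parameters": {
"type": "object",
"properties": { "city": { "type": "string" } },
"required": ["city"]
}
}
}],
"tool_choice": "auto"
}'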
This is not yet supported by any of our models.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
A unique identifier representing your end-user, which can help us monitor and detect abuse.
Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.
curl https://api.groq.com/openai/v1/chat/completions -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "llama-3.3-70b-versatile",
"messages": [{
"role": "user",
"content": "Explain the importance of fast language models"
}]
}'
{
"id": "chatcmpl-f51b2cd2-bef7-417e-964e-a08f0b513c22",
"object": "chat.completion",
"created": 1730241104,
"model": "llama3-8b-8192",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Fast language models have gained significant attention in recent years due to their ability to process and generate human-like text quickly and efficiently. The importance of fast language models can be understood from their potential applications and benefits:\n\n1. **Real-time Chatbots and Conversational Interfaces**: Fast language models enable the development of chatbots and conversational interfaces that can respond promptly to user queries, making them more engaging and useful.\n2. **Sentiment Analysis and Opinion Mining**: Fast language models can quickly analyze text data to identify sentiments, opinions, and emotions, allowing for improved customer service, market research, and opinion mining.\n3. **Language Translation and Localization**: Fast language models can quickly translate text between languages, facilitating global communication and enabling businesses to reach a broader audience.\n4. **Text Summarization and Generation**: Fast language models can summarize long documents or even generate new text on a given topic, improving information retrieval and processing efficiency.\n5. **Named Entity Recognition and Information Extraction**: Fast language models can rapidly recognize and extract specific entities, such as names, locations, and organizations, from unstructured text data.\n6. **Recommendation Systems**: Fast language models can analyze large amounts of text data to personalize product recommendations, improve customer experience, and increase sales.\n7. **Content Generation for Social Media**: Fast language models can quickly generate engaging content for social media platforms, helping businesses maintain a consistent online presence and increasing their online visibility.\n8. **Sentiment Analysis for Stock Market Analysis**: Fast language models can quickly analyze social media posts, news articles, and other text data to identify sentiment trends, enabling financial analysts to make more informed investment decisions.\n9. **Language Learning and Education**: Fast language models can provide instant feedback and adaptive language learning, making language education more effective and engaging.\n10. **Domain-Specific Knowledge Extraction**: Fast language models can quickly extract relevant information from vast amounts of text data, enabling domain experts to focus on high-level decision-making rather than manual information gathering.\n\nThe benefits of fast language models include:\n\n* **Increased Efficiency**: Fast language models can process large amounts of text data quickly, reducing the time and effort required for tasks such as sentiment analysis, entity recognition, and text summarization.\n* **Improved Accuracy**: Fast language models can analyze and learn from large datasets, leading to more accurate results and more informed decision-making.\n* **Enhanced User Experience**: Fast language models can enable real-time interactions, personalized recommendations, and timely responses, improving the overall user experience.\n* **Cost Savings**: Fast language models can automate many tasks, reducing the need for manual labor and minimizing costs associated with data processing and analysis.\n\nIn summary, fast language models have the potential to transform various industries and applications by providing fast, accurate, and efficient language processing capabilities."
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"queue_time": 0.037493756,
"prompt_tokens": 18,
"prompt_time": 0.000680594,
"completion_tokens": 556,
"completion_time": 0.463333333,
"total_tokens": 574,
"total_time": 0.464013927
},
"system_fingerprint": "fp_179b0f92c9",
"x_groq": { "id": "req_01jbd6g2qdfw2adyrt2az8hz4w" }
}
POST https://api.groq.com/openai/v1/responses
Creates a model response for the given input.
Text input to the model, used to generate a response.
ID of the model to use. For details on which models are compatible with the Responses API, see available models.
Inserts a system (or developer) message as the first item in the model's context.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
Custom key-value pairs for storing additional information. Maximum of 16 pairs.
Enable parallel execution of multiple tool calls.
Configuration for reasoning capabilities when using compatible models.
auto, default, flex
Specifies the latency tier to use for processing the request.
Response storage flag. Note: Currently only supports false or null values.
Enable streaming mode to receive response data as server-sent events.
Controls randomness in the response generation. Range: 0 to 2. Lower values produce more deterministic outputs, higher values increase variety and creativity.
Response format configuration. Supports plain text or structured JSON output.
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present.
List of tools available to the model. Currently supports function definitions only. Maximum of 128 functions.
Nucleus sampling parameter that controls the cumulative probability cutoff. Range: 0 to 1. A value of 0.1 restricts sampling to tokens within the top 10% probability mass.
auto, disabled
Context truncation strategy. Supported values: auto or disabled.
Optional identifier for tracking end-user requests. Useful for usage monitoring and compliance.
Returns a response object, or a streamed sequence of response events if the request is streamed.
curl https://api.groq.com/openai/v1/responses -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "gpt-oss",
"input": "Tell me a three sentence bedtime story about a unicorn."
}'
{
"id": "resp_01k1x6w9ane6d8rfxm05cb45yk",
"object": "response",
"status": "completed",
"created_at": 1754400695,
"output": [
{
"type": "message",
"id": "msg_01k1x6w9ane6eb0650crhawwyy",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "When the stars blinked awake, Luna the unicorn curled her mane and whispered wishes to the sleeping pine trees. She galloped through a field of moonlit daisies, gathering dew like tiny silver pearls. With a gentle sigh, she tucked her hooves beneath a silver cloud so the world slept softly, dreaming of her gentle hooves until the morning.",
"annotations": []
}
]
}
],
"previous_response_id": null,
"model": "llama-3.3-70b-versatile",
"reasoning": {
"effort": null,
"summary": null
},
"max_output_tokens": null,
"instructions": null,
"text": {
"format": {
"type": "text"
}
},
"tools": [],
"tool_choice": "auto",
"truncation": "disabled",
"metadata": {},
"temperature": 1,
"top_p": 1,
"user": null,
"service_tier": "default",
"error": null,
"incomplete_details": null,
"usage": {
"input_tokens": 82,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 266,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 348
},
"parallel_tool_calls": true,
"store": false
}
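A fuller request that combines the parameters above, such as instructions and temperature, might look like this (values are illustrative):
curl https://api.groq.com/openai/v1/responses -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"model": "gpt-oss",
"instructions": "Answer in one short sentence.",
"input": "Why is the sky blue?",
"temperature": 0.2
}'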
POST https://api.groq.com/openai/v1/audio/transcriptions
Transcribes audio into the input language.
ID of the model to use. whisper-large-v3 and whisper-large-v3-turbo are currently available.
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. Either a file or a URL must be provided. Note that the file field is not supported in Batch API requests.
The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.
json, text, verbose_json
The format of the transcript output, in one of these options: json, text, or verbose_json.
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: word and segment. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
The audio URL to translate/transcribe (supports Base64URL). Either a file or a URL must be provided. For Batch API requests, the URL field is required since the file field is not supported.
Returns an audio transcription object.
curl https://api.groq.com/openai/v1/audio/transcriptions \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@./sample_audio.m4a" \
-F model="whisper-large-v3"
{
"text": "Your transcribed text appears here...",
"x_groq": {
"id": "req_unique_id"
}
}
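To get word-level timestamps, request verbose_json output; a sketch (the multipart field name timestamp_granularities[] follows the OpenAI-compatible convention and is an assumption here):
curl https://api.groq.com/openai/v1/audio/transcriptions \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@./sample_audio.m4a" \
-F model="whisper-large-v3" \
-F response_format="verbose_json" \
-F "timestamp_granularities[]=word"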
POST https://api.groq.com/openai/v1/audio/translations
Translates audio into English.
ID of the model to use. whisper-large-v3 and whisper-large-v3-turbo are currently available.
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
An optional text to guide the model's style or continue a previous audio segment. The prompt should be in English.
json, text, verbose_json
The format of the transcript output, in one of these options: json, text, or verbose_json.
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
The audio URL to translate/transcribe (supports Base64URL). Either file or url must be provided. When using the Batch API only url is supported.
Returns an audio translation object.
curl https://api.groq.com/openai/v1/audio/translations \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@./sample_audio.m4a" \
-F model="whisper-large-v3"
{
"text": "Your translated text appears here...",
"x_groq": {
"id": "req_unique_id"
}
}
POST https://api.groq.com/openai/v1/audio/speech
Generates audio from the input text.
The text to generate audio for.
One of the available TTS models.
The voice to use when generating the audio. List of voices can be found here.
flac, mp3, mulaw, ogg, wav
The format of the generated audio. Supported formats are flac, mp3, mulaw, ogg, and wav.
8000, 16000, 22050, 24000, 32000, 44100, 48000
The sample rate for the generated audio.
The speed of the generated audio.
Returns an audio file in wav format.
curl https://api.groq.com/openai/v1/audio/speech \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "playai-tts",
"input": "I love building and shipping new features for our users!",
"voice": "Fritz-PlayAI",
"response_format": "wav"
}'
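A variant that also sets the sample rate and speed, using values from the supported lists above (the JSON field names sample_rate and speed are assumed from the parameters documented above):
curl https://api.groq.com/openai/v1/audio/speech \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "playai-tts",
"input": "I love building and shipping new features for our users!",
"voice": "Fritz-PlayAI",
"response_format": "wav",
"sample_rate": 24000,
"speed": 1.0
}'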
GET https://api.groq.com/openai/v1/models
Lists the currently available models.
Returns a list of model objects.
curl https://api.groq.com/openai/v1/models \
-H "Authorization: Bearer $GROQ_API_KEY"
{
"object": "list",
"data": [
{
"id": "gemma2-9b-it",
"object": "model",
"created": 1693721698,
"owned_by": "Google",
"active": true,
"context_window": 8192,
"public_apps": null
},
{
"id": "llama3-8b-8192",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 8192,
"public_apps": null
},
{
"id": "llama3-70b-8192",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 8192,
"public_apps": null
},
{
"id": "whisper-large-v3-turbo",
"object": "model",
"created": 1728413088,
"owned_by": "OpenAI",
"active": true,
"context_window": 448,
"public_apps": null
},
{
"id": "whisper-large-v3",
"object": "model",
"created": 1693721698,
"owned_by": "OpenAI",
"active": true,
"context_window": 448,
"public_apps": null
},
{
"id": "llama-guard-3-8b",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 8192,
"public_apps": null
},
{
"id": "distil-whisper-large-v3-en",
"object": "model",
"created": 1693721698,
"owned_by": "Hugging Face",
"active": true,
"context_window": 448,
"public_apps": null
},
{
"id": "llama-3.1-8b-instant",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 131072,
"public_apps": null
}
]
}
GET https://api.groq.com/openai/v1/models/{model}
Get detailed information about a model.
A model object.
curl https://api.groq.com/openai/v1/models/llama3-8b-8192 \
-H "Authorization: Bearer $GROQ_API_KEY"
{
"id": "llama3-8b-8192",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 8192,
"public_apps": null,
"max_completion_tokens": 8192
}
POST https://api.groq.com/openai/v1/batches
Creates and executes a batch from an uploaded file of requests. Learn more.
The time frame within which the batch should be processed. Durations from 24h to 7d are supported.
/v1/chat/completions
The endpoint to be used for all requests in the batch. Currently /v1/chat/completions is supported.
The ID of an uploaded file that contains requests for the new batch. See upload file for how to upload a file. Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. The file can be up to 100 MB in size.
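Each line of the JSONL input is one self-contained request. A minimal sketch of a single line (field names follow the OpenAI-compatible batch format; the custom_id and body values are illustrative):
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "llama-3.1-8b-instant", "messages": [{"role": "user", "content": "Hello"}]}}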
Optional custom metadata for the batch.
A created batch object.
curl https://api.groq.com/openai/v1/batches \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input_file_id": "file_01jh6x76wtemjr74t1fh0faj5t",
"endpoint": "/v1/chat/completions",
"completion_window": "24h"
}'
{
"id": "batch_01jh6xa7reempvjyh6n3yst2zw",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file_01jh6x76wtemjr74t1fh0faj5t",
"completion_window": "24h",
"status": "validating",
"output_file_id": null,
"error_file_id": null,
"finalizing_at": null,
"failed_at": null,
"expired_at": null,
"cancelled_at": null,
"request_counts": {
"total": 0,
"completed": 0,
"failed": 0
},
"metadata": null,
"created_at": 1736472600,
"expires_at": 1736559000,
"cancelling_at": null,
"completed_at": null,
"in_progress_at": null
}
GET https://api.groq.com/openai/v1/batches/{batch_id}
Retrieves a batch.
A batch object.
curl https://api.groq.com/openai/v1/batches/batch_01jh6xa7reempvjyh6n3yst2zw \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
{
"id": "batch_01jh6xa7reempvjyh6n3yst2zw",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file_01jh6x76wtemjr74t1fh0faj5t",
"completion_window": "24h",
"status": "validating",
"output_file_id": null,
"error_file_id": null,
"finalizing_at": null,
"failed_at": null,
"expired_at": null,
"cancelled_at": null,
"request_counts": {
"total": 0,
"completed": 0,
"failed": 0
},
"metadata": null,
"created_at": 1736472600,
"expires_at": 1736559000,
"cancelling_at": null,
"completed_at": null,
"in_progress_at": null
}
GET https://api.groq.com/openai/v1/batches
List your organization's batches.
A list of batch objects.
curl https://api.groq.com/openai/v1/batches \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
{
"object": "list",
"data": [
{
"id": "batch_01jh6xa7reempvjyh6n3yst2zw",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file_01jh6x76wtemjr74t1fh0faj5t",
"completion_window": "24h",
"status": "validating",
"output_file_id": null,
"error_file_id": null,
"finalizing_at": null,
"failed_at": null,
"expired_at": null,
"cancelled_at": null,
"request_counts": {
"total": 0,
"completed": 0,
"failed": 0
},
"metadata": null,
"created_at": 1736472600,
"expires_at": 1736559000,
"cancelling_at": null,
"completed_at": null,
"in_progress_at": null
}
]
}
POST https://api.groq.com/openai/v1/batches/{batch_id}/cancel
Cancels a batch.
A batch object.
curl -X POST https://api.groq.com/openai/v1/batches/batch_01jh6xa7reempvjyh6n3yst2zw/cancel \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
{
"id": "batch_01jh6xa7reempvjyh6n3yst2zw",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file_01jh6x76wtemjr74t1fh0faj5t",
"completion_window": "24h",
"status": "cancelling",
"output_file_id": null,
"error_file_id": null,
"finalizing_at": null,
"failed_at": null,
"expired_at": null,
"cancelled_at": null,
"request_counts": {
"total": 0,
"completed": 0,
"failed": 0
},
"metadata": null,
"created_at": 1736472600,
"expires_at": 1736559000,
"cancelling_at": null,
"completed_at": null,
"in_progress_at": null
}
POST https://api.groq.com/openai/v1/files
Upload a file that can be used across various endpoints.
The Batch API only supports .jsonl files up to 100 MB in size. The input also has a specific required format. Please contact us if you need to increase these storage limits.
The File object (not file name) to be uploaded.
batch
The intended purpose of the uploaded file. Use "batch" for the Batch API.
The uploaded File object.
curl https://api.groq.com/openai/v1/files \
-H "Authorization: Bearer $GROQ_API_KEY" \
-F purpose="batch" \
-F "file=@batch_file.jsonl"
{
"id": "file_01jh6x76wtemjr74t1fh0faj5t",
"object": "file",
"bytes": 966,
"created_at": 1736472501,
"filename": "batch_file.jsonl",
"purpose": "batch"
}
GET https://api.groq.com/openai/v1/files
Returns a list of files.
curl https://api.groq.com/openai/v1/files \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
{
"object": "list",
"data": [
{
"id": "file_01jh6x76wtemjr74t1fh0faj5t",
"object": "file",
"bytes": 966,
"created_at": 1736472501,
"filename": "batch_file.jsonl",
"purpose": "batch"
}
]
}
DELETE https://api.groq.com/openai/v1/files/{file_id}
Delete a file.
A deleted file response object.
curl -X DELETE https://api.groq.com/openai/v1/files/file_01jh6x76wtemjr74t1fh0faj5t \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
{
"id": "file_01jh6x76wtemjr74t1fh0faj5t",
"object": "file",
"deleted": true
}
GET https://api.groq.com/openai/v1/files/{file_id}
Returns information about a file.
A file object.
curl https://api.groq.com/openai/v1/files/file_01jh6x76wtemjr74t1fh0faj5t \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
{
"id": "file_01jh6x76wtemjr74t1fh0faj5t",
"object": "file",
"bytes": 966,
"created_at": 1736472501,
"filename": "batch_file.jsonl",
"purpose": "batch"
}
GET https://api.groq.com/openai/v1/files/{file_id}/content
Returns the contents of the specified file.
The file content.
curl https://api.groq.com/openai/v1/files/file_01jh6x76wtemjr74t1fh0faj5t/content \
-H "Authorization: Bearer $GROQ_API_KEY" \
-H "Content-Type: application/json"
GET https://api.groq.com/v1/fine_tunings
Lists all previously created fine-tunings. This endpoint is in closed beta. Contact us for more information.
The list of fine-tunes.
curl https://api.groq.com/v1/fine_tunings -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY"
{
"object": "list",
"data": [
{
"id": "string",
"name": "string",
"base_model": "string",
"type": "string",
"input_file_id": "string",
"created_at": 0,
"fine_tuned_model": "string"
}
]
}
POST https://api.groq.com/v1/fine_tunings
Creates a new fine-tuning from already uploaded files. This endpoint is in closed beta. Contact us for more information.
base_model is the model that the fine-tune is trained on.
input_file_id is the ID of the file that was uploaded via the /files API.
name is the name given to the fine-tuned model.
type is the fine-tuning format, such as "lora".
The newly created fine-tune.
curl https://api.groq.com/v1/fine_tunings -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY" \
-d '{
"input_file_id": "<file-id>",
"name": "test-1",
"type": "lora",
"base_model": "llama-3.1-8b-instant"
}'
{
"id": "string",
"object": "object",
"data": {
"id": "string",
"name": "string",
"base_model": "string",
"type": "string",
"input_file_id": "string",
"created_at": 0,
"fine_tuned_model": "string"
}
}
GET https://api.groq.com/v1/fine_tunings/{id}
Retrieves an existing fine-tuning by ID. This endpoint is in closed beta. Contact us for more information.
A fine-tune metadata object.
curl https://api.groq.com/v1/fine_tunings/:id -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY"
{
"id": "string",
"object": "object",
"data": {
"id": "string",
"name": "string",
"base_model": "string",
"type": "string",
"input_file_id": "string",
"created_at": 0,
"fine_tuned_model": "string"
}
}
DELETE https://api.groq.com/v1/fine_tunings/{id}
Deletes an existing fine-tuning by ID. This endpoint is in closed beta. Contact us for more information.
A confirmation of the deleted fine-tune.
curl -X DELETE https://api.groq.com/v1/fine_tunings/:id -s \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GROQ_API_KEY"
{
"id": "string",
"object": "fine_tuning",
"deleted": true
}