How to use the ChatGPT API with Golang (with example code)

Date
Jan 31, 2024
Tags
Go
backend
basics
GPT
I’m new to Go, but it is quickly becoming my preferred choice for all backend work.
Most GPT endpoint docs and tutorial examples are written in Python, so when I wanted to accomplish the same in Go, I struggled to find good resources.
This blog post aims to address that gap by showing how to call GPT from Golang.

Pre-requisites:

  1. Register for an OpenAI API key (a sketch for loading it from the environment follows this list).
  2. Set up a Go project.
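
I keep the API key out of the source and read it from an environment variable. A minimal sketch, assuming the key is exported as `OPENAI_API_KEY` (the variable name is just a convention; use whatever your `.env` tooling sets):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Read the key from the environment so it never ends up in source control
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		fmt.Println("OPENAI_API_KEY is not set")
		return
	}
	fmt.Println("key loaded, length:", len(apiKey))
}
```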
 
Here’s the code
```go
type ResponseFromGPT struct {
	Id      string   `json:"id"`
	Object  string   `json:"object"`
	Created int64    `json:"created"`
	Model   string   `json:"model"`
	Choices []Choice `json:"choices"`
	Usage   Usage    `json:"usage"`
}

type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

type Choice struct {
	Message      Message `json:"message"`
	Index        int     `json:"index"`
	FinishReason string  `json:"finish_reason"`
}

type Message struct {
	Role    string `json:"role"`
	Content any    `json:"content"`
}
```
main.go
The following code downloads multiple images, base64-encodes them, sends them to GPT, and receives back a response depending on your prompt.
```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	openAIAPIKey := os.Getenv("OPENAI_API_KEY") // load your key from the environment / .env
	systemPrompt := "your system prompt"

	// Images hosted on cloud storage; they are downloaded and base64-encoded below
	imageURLs := []string{
		"https://yourimageoncloudstorage.co/123.jpg",
		"https://yourimageoncloudstorage.co/1234.jpg",
		"https://yourimageoncloudstorage.co/12345.jpg",
	}

	base64EncodedImages := []string{}
	for _, imageURL := range imageURLs {
		// os.ReadFile only works for local paths, so fetch remote images over HTTP first
		imageResp, err := http.Get(imageURL)
		if err != nil {
			fmt.Println("Error downloading image:", err)
			return
		}
		imageData, err := io.ReadAll(imageResp.Body)
		imageResp.Body.Close()
		if err != nil {
			fmt.Println("Error reading image data:", err)
			return
		}
		base64EncodedImages = append(base64EncodedImages, base64.StdEncoding.EncodeToString(imageData))
	}

	// Each image becomes one "image_url" content part in the user message
	userContentPayload := []map[string]interface{}{}
	for _, base64Image := range base64EncodedImages {
		userContentPayload = append(userContentPayload, map[string]interface{}{
			"type": "image_url",
			"image_url": map[string]interface{}{
				"url": "data:image/jpeg;base64," + base64Image,
			},
		})
	}

	payload := map[string]interface{}{
		"model":      "gpt-4-vision-preview",
		"max_tokens": 2000,
		"messages": []map[string]interface{}{
			{
				"role": "system",
				"content": []map[string]interface{}{
					{
						"type": "text",
						"text": systemPrompt,
					},
				},
			},
			{
				"role":    "user",
				"content": userContentPayload,
			},
		},
	}

	// Convert JSON data to bytes
	jsonData, err := json.Marshal(payload)
	if err != nil {
		fmt.Println("Error encoding JSON:", err)
		return
	}

	postUrl := "https://api.openai.com/v1/chat/completions"
	request, err := http.NewRequest("POST", postUrl, bytes.NewBuffer(jsonData))
	if err != nil {
		fmt.Println("Error creating request:", err)
		return
	}
	request.Header.Set("Content-Type", "application/json")
	request.Header.Set("Authorization", "Bearer "+openAIAPIKey)

	client := &http.Client{}
	resp, err := client.Do(request)
	if err != nil {
		fmt.Println("Error sending request:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Error reading response body:", err)
		return
	}

	var response ResponseFromGPT
	err = json.Unmarshal(body, &response)
	if err != nil {
		fmt.Println("Error unmarshalling response:", err)
		return
	}

	// Guard against error responses, which have no choices
	if resp.StatusCode != http.StatusOK || len(response.Choices) == 0 {
		fmt.Println("Unexpected response from OpenAI:", resp.Status, string(body))
		return
	}

	fmt.Print(response.Choices[0].Message.Content)
}
```
response.Choices[0].Message.Content will look something like this:
````
```json
{ <THE JSON OUTPUT THAT YOU WANT> }
```
````
So the backticks (and the `json` marker) need to be stripped before sending the data to the client.
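
Since `Message.Content` is declared as `any`, you would first assert it to a string (`content.(string)`) and then trim the fence. A minimal sketch; `stripJSONFence` is a helper name I made up, and it assumes the whole JSON object came back in one fenced string:

```go
package main

import (
	"fmt"
	"strings"
)

// stripJSONFence removes a surrounding ```json ... ``` wrapper, if present.
func stripJSONFence(content string) string {
	s := strings.TrimSpace(content)
	s = strings.TrimPrefix(s, "```json")
	s = strings.TrimPrefix(s, "```")
	s = strings.TrimSuffix(s, "```")
	return strings.TrimSpace(s)
}

func main() {
	raw := "```json\n{\"answer\": 42}\n```"
	fmt.Println(stripJSONFence(raw)) // prints: {"answer": 42}
}
```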
 
With selected models, it is possible to instruct GPT to return only a JSON object
by adding `"response_format": { "type": "json_object" }` to the request body.
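
In the `payload` map from `main.go`, that looks something like the sketch below. This is an assumption-laden example, not the exact payload from above: `gpt-4-1106-preview` is used here as one model that supported JSON mode at the time of writing, and the API also expects the word "JSON" to appear somewhere in your messages when JSON mode is on.

```go
package main

import "fmt"

func main() {
	// Same shape as the payload in main.go, with response_format added
	payload := map[string]interface{}{
		"model":      "gpt-4-1106-preview", // example of a model with JSON mode
		"max_tokens": 2000,
		"response_format": map[string]string{
			"type": "json_object",
		},
		"messages": []map[string]interface{}{
			// JSON mode requires mentioning JSON in the prompt
			{"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
			{"role": "user", "content": "Return a JSON object with one key, answer, for: what is 2+2?"},
		},
	}
	fmt.Println(payload["response_format"]) // marshal and POST exactly as in main.go
}
```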
More features are available (though not for all models):
Function calling: