API
- GoogleProvider
- generate_content
- count_tokens
- embed_content
- list_models
GoogleGenAI.GoogleProvider
GoogleGenAI.count_tokens
GoogleGenAI.embed_content
GoogleGenAI.generate_content
GoogleGenAI.list_models
GoogleGenAI.GoogleProvider — Type

    Base.@kwdef struct GoogleProvider <: AbstractGoogleProvider
        api_key::String = ""
        base_url::String = "https://generativelanguage.googleapis.com"
        api_version::String = "v1beta"
    end

A configuration object used to set up and authenticate requests to the Google Generative Language API.

Fields
- api_key::String: Your Google API key.
- base_url::String: The base URL for the Google Generative Language API. The default is set to "https://generativelanguage.googleapis.com".
- api_version::String: The version of the API you wish to access. The default is set to "v1beta".
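A minimal sketch of constructing a provider, assuming the key is stored in an environment variable (the variable name GOOGLE_API_KEY is only this example's convention):

```julia
using GoogleGenAI

# Build a provider from a key held in the environment; all other fields keep
# their defaults (base_url and api_version as shown in the struct above).
provider = GoogleProvider(api_key=ENV["GOOGLE_API_KEY"])
```

Because the struct is defined with Base.@kwdef, any field can be overridden by keyword, for example passing api_version to target a different API version.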
GoogleGenAI.generate_content — Function

    generate_content(provider::AbstractGoogleProvider, model_name::String, prompt::String, image_path::String; kwargs...) -> GoogleTextResponse
    generate_content(api_key::String, model_name::String, prompt::String, image_path::String; kwargs...) -> GoogleTextResponse

    generate_content(provider::AbstractGoogleProvider, model_name::String, conversation::Vector{Dict{Symbol,Any}}; kwargs...) -> GoogleTextResponse
    generate_content(api_key::String, model_name::String, conversation::Vector{Dict{Symbol,Any}}; kwargs...) -> GoogleTextResponse

Generate content from a text prompt, optionally combined with an image.

Arguments
- provider::AbstractGoogleProvider: The provider instance for API requests.
- api_key::String: Your Google API key as a string.
- model_name::String: The model to use for content generation.
- prompt::String: The text prompt to accompany the image.
- image_path::String (optional): The path to the image file to include in the request.

Keyword Arguments
- temperature::Float64 (optional): Controls the randomness in the generation process. Higher values result in more random outputs. Typically ranges between 0 and 1.
- candidate_count::Int (optional): The number of generation candidates to consider. Currently, only one candidate can be specified.
- max_output_tokens::Int (optional): The maximum number of tokens that the generated content should contain.
- stop_sequences::Vector{String} (optional): A list of sequences where the generation should stop. Useful for defining natural endpoints in generated content.
- safety_settings::Vector{Dict} (optional): Settings to control the safety aspects of the generated content, such as filtering out unsafe or inappropriate content.

Returns
- GoogleTextResponse: The generated content response.
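A minimal sketch of a text-only call. The model name "gemini-pro" and access to the generated text through a text field on the response are assumptions of this example, not guarantees of this page:

```julia
using GoogleGenAI

provider = GoogleProvider(api_key=ENV["GOOGLE_API_KEY"])

# Text-only generation; "gemini-pro" is an assumed model name,
# substitute any model returned by list_models.
response = generate_content(provider, "gemini-pro", "Hello";
                            temperature=0.5, max_output_tokens=100)

# Accessing the generated text via a `text` field is also an assumption.
println(response.text)
```

For a simple greeting like this, the package's example output is along the lines of "Hello there! How may I assist you today? Feel free to ask me any questions you may have or give me a command. I'm here to help! 😊"; actual replies vary between calls.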
GoogleGenAI.count_tokens — Function

    count_tokens(provider::AbstractGoogleProvider, model_name::String, prompt::String) -> Int
    count_tokens(api_key::String, model_name::String, prompt::String) -> Int

Calculate the number of tokens generated by the specified model for a given prompt string.

Arguments
- provider::AbstractGoogleProvider: The provider instance containing API key and base URL information.
- api_key::String: Your Google API key as a string.
- model_name::String: The name of the model whose tokenizer is used.
- prompt::String: The prompt string whose tokens are counted.

Returns
- Int: The total number of tokens that the given prompt string would be broken into by the specified model's tokenizer.
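A short sketch, again assuming a "gemini-pro" model name:

```julia
using GoogleGenAI

# Ask the API how many tokens the prompt occupies for this model.
n = count_tokens(ENV["GOOGLE_API_KEY"], "gemini-pro",
                 "The quick brown fox jumps over the lazy dog.")
println(n)  # an Int
```

This is useful for checking a prompt against a model's context window before calling generate_content.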
GoogleGenAI.embed_content — Function

    embed_content(provider::AbstractGoogleProvider, model_name::String, prompt::String) -> GoogleEmbeddingResponse
    embed_content(api_key::String, model_name::String, prompt::String) -> GoogleEmbeddingResponse

Generate an embedding for the given prompt text using the specified model.

Arguments
- provider::AbstractGoogleProvider: The provider instance containing API key and base URL information.
- api_key::String: Your Google API key as a string.
- model_name::String: The name of the model to use for generating the embedding.
- prompt::String: The prompt text to embed.

Returns
- GoogleEmbeddingResponse
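A sketch of embedding a single string; the model name "embedding-001" and the field holding the vector are assumptions and may differ in your version of the package:

```julia
using GoogleGenAI

# "embedding-001" is assumed to be an available embedding model.
embedding = embed_content(ENV["GOOGLE_API_KEY"], "embedding-001", "Hello")

# The GoogleEmbeddingResponse wraps the embedding vector; the `values`
# field name below is an assumption, so inspect the struct if it errors.
println(length(embedding.values))
```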
GoogleGenAI.list_models — Function

    list_models(provider::AbstractGoogleProvider) -> Vector{Dict}
    list_models(api_key::String) -> Vector{Dict}

Retrieve a list of available models along with their details from the Google AI API.

Arguments
- provider::AbstractGoogleProvider: The provider instance containing API key and base URL information.
- api_key::String: Your Google API key as a string.

Returns
- Vector{Dict}: A list of dictionaries, each containing details about an available model.
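A sketch of listing the models visible to your key; the dictionary keys are not specified on this page, so the :name key used below is an assumption:

```julia
using GoogleGenAI

models = list_models(ENV["GOOGLE_API_KEY"])

# Inspect the keys of the first entry before relying on any of them.
println(keys(first(models)))

for m in models
    # :name is assumed; fall back to printing the whole Dict if absent.
    println(get(m, :name, m))
end
```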
This document was generated with Documenter.jl version 1.2.1 on Saturday 24 February 2024. Using Julia version 1.10.1.