
call_vertex(prompt: str, model: str = 'gemini-1.5-pro-latest') → str

Calls the Vertex AI Gemini model with the given prompt.

Parameters:
  • prompt (str) – The input prompt for the LLM.

  • model (str, optional) – The name of the Gemini model to use. Defaults to ‘gemini-1.5-pro-latest’.

Returns:

The LLM’s response text.

Return type:

str

Raises:

generation_types.StopCandidateException – If generation terminates for a reason other than a normal stop (e.g. safety filtering or recitation).
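The entry above describes a thin wrapper around a Gemini text-generation call. A minimal sketch of that shape (the client-construction details are assumptions; the generation call is injected here so the sketch runs without credentials):

```python
def call_vertex(prompt: str,
                model: str = 'gemini-1.5-pro-latest',
                generate=None) -> str:
    """Sketch of the wrapper documented above.

    `generate` is injected so the sketch runs offline. The real function
    presumably builds a Gemini client for `model` and calls it, e.g.
    genai.GenerativeModel(model).generate_content(prompt).text (assumption).
    """
    if generate is None:
        # No real client in this sketch: require an injected callable.
        raise NotImplementedError("no Gemini client configured")
    return generate(model, prompt)

# Offline usage, with a stub standing in for the Gemini call:
reply = call_vertex("Say hi", generate=lambda m, p: f"[{m}] hi")
```

Injecting the call also makes the wrapper easy to unit-test without network access.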

call_google_palm(prompt, max_attempts=10, model='text-bison', max_tokens=800, verbose=True) → str

Calls the Google PaLM API with the given prompt.

Parameters:
  • prompt (str) – The input prompt for the LLM.

  • max_attempts (int, optional) – The maximum number of times to attempt the API call before giving up. Defaults to 10.

  • model (str, optional) – The name of the PaLM model to use. Defaults to ‘text-bison’.

  • max_tokens (int, optional) – The maximum number of tokens to generate. Defaults to 800.

  • verbose (bool, optional) – Whether to print verbose output. Defaults to True.

Returns:

The LLM’s response text.

Return type:

str
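The max_attempts parameter implies a retry loop around the API call. A sketch of that retry behavior (the backoff timing and the injected `call` are assumptions; the docs above only specify the attempt count):

```python
import time

def call_with_retries(call, prompt, max_attempts=10, verbose=True, delay=0.0):
    """Retry `call(prompt)` up to `max_attempts` times, re-raising the
    last error if every attempt fails. `call` is injected so the sketch
    runs without API access; `delay` stands in for a backoff policy."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call(prompt)
        except Exception as exc:
            if verbose:
                print(f"attempt {attempt} failed: {exc}")
            if attempt == max_attempts:
                raise
            time.sleep(delay)

# A flaky stub that fails twice, then succeeds on the third attempt:
attempts = []
def flaky(prompt):
    attempts.append(prompt)
    if len(attempts) < 3:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_retries(flaky, "hello", verbose=False)
```

With the stub above, the helper returns "ok" after exactly three attempts.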

call_codey(prompt, max_attempts=10, model='code-bison-32k', max_tokens=800, verbose=True) → str

Calls the Codey API (Vertex AI Code Generation) with the given prompt.

Parameters:
  • prompt (str) – The input prompt for the LLM.

  • max_attempts (int, optional) – The maximum number of times to attempt the API call before giving up. Defaults to 10.

  • model (str, optional) – The name of the Codey model to use. Defaults to ‘code-bison-32k’.

  • max_tokens (int, optional) – The maximum number of tokens to generate. Defaults to 800.

  • verbose (bool, optional) – Whether to print verbose output. Defaults to True.

Returns:

The LLM’s response text.

Return type:

str
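Code-generation models often wrap their answer in a Markdown fence, so callers of call_codey typically strip it before use. A small post-processing helper (the fenced-output behavior is our assumption, not something the docs above specify):

```python
import re

def extract_code(response: str) -> str:
    """Return the body of the first ```-fenced block in `response`,
    or the whole response (stripped) if no fence is present."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", response, re.DOTALL)
    return match.group(1).rstrip() if match else response.strip()

fenced = "Here you go:\n```python\nprint('hi')\n```"
extract_code(fenced)          # body of the fenced block only
extract_code("print('hi')")   # unfenced responses pass through
```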