# Logging of Prompt Requests and Responses with Model Metrics #60
This data should be available:

```php
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-sonnet')
    ->withPrompt('Explain quantum computing.')
    ->generate();
```

Total token usage for the request/response:

```php
$response->usage->promptTokens
$response->usage->completionTokens
```

If you are using multi-step generation, you can get the usage of each step:

```php
foreach ($response->steps as $step) {
    echo "Prompt tokens: {$step->usage->promptTokens}";
    echo "Completion tokens: {$step->usage->completionTokens}";
}
```

You can also see all the messages by inspecting each step. Each step is an instance of:

```php
class TextResult
{
    /**
     * @param array<int, ToolCall> $toolCalls
     * @param array<int, ToolResult> $toolResults
     * @param array{id: string, model: string} $response
     * @param array<int, Message> $messages
     */
    public function __construct(
        public readonly string $text,
        public readonly FinishReason $finishReason,
        public readonly array $toolCalls,
        public readonly array $toolResults,
        public readonly Usage $usage,
        public readonly array $response,
        public readonly array $messages = [],
    ) {}
}
```

Was there something specific that you would like that we aren't returning?
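For example, a quick sketch of walking those per-step structures (the `['model']` and `['id']` keys come from the `array{id: string, model: string}` shape in the docblock above; the rest reuses the fields already shown):

```php
foreach ($response->steps as $step) {
    // Raw response metadata, per the array{id: string, model: string} shape.
    echo "Model: {$step->response['model']}\n";
    echo "Response ID: {$step->response['id']}\n";

    // Every message exchanged during this step.
    foreach ($step->messages as $message) {
        var_dump($message);
    }
}
```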
---

I'd like the package to automatically save requests and responses, along with usage, directly to a database or file.
---

Seems like a duplicate of #56.
---

@houssam38 at the moment, I don't have plans to support storage mechanisms.

@pushpak1300 I think #56 is more about exposing raw request/response data. As I understand it, this request is about adding storage of parts of the request/response lifecycle.
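That said, nothing stops you from persisting these fields yourself today. A minimal user-land sketch, assuming a Laravel app (the `prism_logs` table and its columns are illustrative, not something the package provides):

```php
use Illuminate\Support\Facades\DB;

// Illustrative user-land persistence: prism_logs and its columns are
// assumptions; the fields read from $response are the ones shown above.
DB::table('prism_logs')->insert([
    'model'             => 'claude-3-sonnet',
    'prompt'            => 'Explain quantum computing.',
    'response_text'     => $response->text,
    'prompt_tokens'     => $response->usage->promptTokens,
    'completion_tokens' => $response->usage->completionTokens,
    'created_at'        => now(),
]);
```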
---

Hi,
Add a logging feature to capture prompt requests and responses, including details such as the model name, input token count, and output token count.
This feature would provide valuable metrics on model usage, helping track and analyze how each model is utilized. Having these metrics available could also aid in optimizing prompt inputs, monitoring token usage, and managing costs.
Thx
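For illustration, the kind of structured entry this request describes could already be written with Laravel's `Log` facade (the message and context keys here are assumptions, not an existing package API):

```php
use Illuminate\Support\Facades\Log;

// Illustrative only: a structured log line carrying the requested metrics.
Log::info('prism.generation', [
    'model'             => 'claude-3-sonnet',
    'prompt_tokens'     => $response->usage->promptTokens,
    'completion_tokens' => $response->usage->completionTokens,
]);
```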