
Feat(prompt) system prompt #40

Merged: 6 commits into main on Nov 4, 2024
Conversation

@ZHallen122 (Collaborator) commented on Nov 2, 2024

Add a basic system prompt to the llm-server.

Summary by CodeRabbit

  • New Features

    • Enhanced chat streaming functionality to accept a more structured input, allowing for improved message handling.
    • Introduced a new system prompt for the AI, enhancing its capabilities in code generation.
  • Bug Fixes

    • Updated error handling for model tag fetching to provide clearer error messages.
  • Documentation

    • Added a new interface GenerateMessageParams to define the structure for message generation parameters.

@ZHallen122 requested a review from Sma1lboy on November 2, 2024 at 23:06

coderabbitai bot commented Nov 2, 2024

Walkthrough

This pull request introduces several modifications across multiple files to enhance the chat streaming functionality and its related components. Key changes include updating method parameters to accept a more structured input type, GenerateMessageParams, which incorporates model specifications alongside messages. The error handling in the ChatResolver has been simplified, and a new systemPrompts object has been added to support dynamic prompt generation. Overall, these changes improve the clarity and flexibility of the chat system's interactions.

Changes

  • backend/src/chat/chat.resolver.ts: Updated triggerChatStream to use input instead of input.message; modified error handling in getAvailableModelTags to throw a generic error.
  • backend/src/chat/chat.service.ts: Changed the streamChat method parameter from string to ChatInput; updated logging and the HTTP POST request to send structured data.
  • llm-server/src/llm-provider.ts: Updated generateStreamingResponse to accept GenerateMessageParams instead of a string.
  • llm-server/src/main.ts: Enhanced handleChatRequest to expect a model field in the request body and build a params object for generateStreamingResponse.
  • llm-server/src/model/llama-model-provider.ts: Changed generateStreamingResponse to accept destructured parameters from GenerateMessageParams.
  • llm-server/src/model/model-provider.ts: Updated generateStreamingResponse to accept GenerateMessageParams instead of a string.
  • llm-server/src/model/openai-model-provider.ts: Modified generateStreamingResponse to accept destructured parameters and include a system prompt.
  • llm-server/src/prompt/systemPrompt.ts: Added a systemPrompts object defining system prompts for AI interactions.
  • llm-server/src/type/GenerateMessage.ts: Introduced the GenerateMessageParams interface defining the expected parameters for message generation (a rough sketch of both new files follows this list).
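
For orientation, the two new llm-server files take roughly the following shape, reconstructed from the review comments below. The interface fields mirror what the reviewer quotes; the prompt text is a placeholder, not the wording that was merged.

// llm-server/src/type/GenerateMessage.ts (shape as quoted in the review)
export interface GenerateMessageParams {
  model: string;
  message: string;
  role?: 'user' | 'system' | 'assistant' | 'tool' | 'function';
}

// llm-server/src/prompt/systemPrompt.ts (sketch only; placeholder prompt wording)
export const systemPrompts: Record<string, { systemPrompt: string }> = {
  'codefox-basic': {
    systemPrompt:
      'You are CodeFox, an AI assistant focused on code generation.', // placeholder text
  },
};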


Possibly related PRs

  • issue#21 add chat apis #22: Changes in the ChatResolver class regarding model tags are related to modifications in the ChatService and ChatProxyService, which are also part of this PR.

Suggested reviewers

  • Sma1lboy: Suggested due to involvement with related features and potential insights on the changes made.

Poem

🐰 In the land of code where rabbits roam,
New prompts and streams make the chat feel like home.
With models to choose and messages clear,
CodeFox now listens, oh so near!
Hopping through changes, we celebrate bright,
A world of chat magic, taking flight! 🌟



@ZHallen122 marked this pull request as ready for review on November 3, 2024 at 20:02

coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Outside diff range and nitpick comments (10)
llm-server/src/type/GenerateMessage.ts (2)

1-5: Enhance interface documentation and type safety.

The interface provides a good foundation but could benefit from the following improvements:

  1. Add comprehensive JSDoc documentation:
+/**
+ * Parameters for generating a message response from an LLM model.
+ */
 export interface GenerateMessageParams {
+  /**
+   * The identifier of the LLM model to use.
+   * @example 'gpt-3.5-turbo', 'gpt-4'
+   */
   model: string;
+  /**
+   * The input message to be processed by the model.
+   */
   message: string;
+  /**
+   * The role of the message sender.
+   * @default 'user'
+   */
   role?: 'user' | 'system' | 'assistant' | 'tool' | 'function';
 }
  2. Consider adding validation constraints:
import { z } from 'zod';  // Add at top of file

export const GenerateMessageParamsSchema = z.object({
  model: z.string().min(1),
  message: z.string().min(1),
  role: z.enum(['user', 'system', 'assistant', 'tool', 'function']).optional().default('user'),
});

4-4: Document the default role behavior.

The role field is optional, but there's no explicit documentation about what value is used when it's omitted.

Consider either:

  1. Adding a comment specifying the default value
  2. Making it required if there's no sensible default
  3. Adding runtime validation with a default value (a usage sketch follows this list)
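
If the zod schema sketched above were adopted, the default role could be applied at parse time instead of by convention. A minimal usage sketch, assuming GenerateMessageParamsSchema is exported from GenerateMessage.ts (that export is not part of this PR):

// Usage sketch, assuming the zod schema from the previous comment is exported.
import { GenerateMessageParamsSchema } from '../type/GenerateMessage';

const params = GenerateMessageParamsSchema.parse({
  model: 'gpt-3.5-turbo',
  message: 'Generate a REST endpoint',
  // role omitted: zod fills in the declared default of 'user'
});

console.log(params.role); // 'user'
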
llm-server/src/main.ts (1)

46-46: Enhance error handling for streaming response.

While the parameter passing is correct, consider improving error handling for the streaming response to provide more specific error messages to the client.

-      await this.llmProvider.generateStreamingResponse(params, res);
+      try {
+        await this.llmProvider.generateStreamingResponse(params, res);
+      } catch (error) {
+        if (error.name === 'ModelError') {
+          res.status(400).json({ error: `Model error: ${error.message}` });
+        } else if (error.name === 'StreamingError') {
+          res.status(503).json({ error: 'Streaming service unavailable' });
+        } else {
+          throw error; // Let the outer catch handle unexpected errors
+        }
+      }
llm-server/src/model/llama-model-provider.ts (2)

37-39: Consider enhancing type safety.

The current implementation could benefit from stronger type safety.

Consider adding:

enum Role {
  System = 'system',
  User = 'user'
}

interface GenerateMessageParams {
  model: string;
  message: string;
  role?: Role;
}

Line range hint 48-57: Improve error handling in streaming response.

The current implementation sends chunks without proper error boundaries.

Consider wrapping each chunk in a try-catch and implementing a proper error protocol:

-      await session.prompt(message, {
+      await session.prompt(message, {
         onTextChunk: chunk => {
+          try {
             chunkCount++;
             this.logger.debug(`Sending chunk #${chunkCount}: "${chunk}"`);
             res.write(
               `data: ${JSON.stringify({ role: 'bot', content: chunk })}\n\n`,
             );
+          } catch (error) {
+            this.logger.error(`Error processing chunk #${chunkCount}:`, error);
+            throw error;
+          }
         },
       });
backend/src/chat/chat.resolver.ts (3)

Line range hint 56-80: Add error handling for message operations and implement stream safeguards.

The streaming implementation could benefit from additional error handling and safeguards:

  1. The message saving operations should have specific error handling
  2. Consider implementing a timeout mechanism for the stream
  3. Add validation for the accumulated content before saving

Consider implementing these improvements:

 const iterator = this.chatProxyService.streamChat(input);
 let accumulatedContent = '';
+const streamTimeout = setTimeout(() => {
+  throw new Error('Stream timeout exceeded');
+}, 30000); // 30 second timeout

 try {
   for await (const chunk of iterator) {
     if (chunk) {
       const enhancedChunk = {
         ...chunk,
         chatId: input.chatId,
       };

       await this.pubSub.publish(`chat_stream_${input.chatId}`, {
         chatStream: enhancedChunk,
       });

       if (chunk.choices[0]?.delta?.content) {
         accumulatedContent += chunk.choices[0].delta.content;
       }
     }
   }

+  if (!accumulatedContent.trim()) {
+    throw new Error('Empty response from model');
+  }

   await this.chatService.saveMessage(
     input.chatId,
     accumulatedContent,
     MessageRole.Model,
   );
 } catch (error) {
   this.logger.error('Stream processing error:', error);
+  // Attempt to save partial response if available
+  if (accumulatedContent.trim()) {
+    await this.chatService.saveMessage(
+      input.chatId,
+      accumulatedContent + '\n[Error: Stream interrupted]',
+      MessageRole.Model,
+    );
+  }
   throw error;
 } finally {
+  clearTimeout(streamTimeout);
 }

Line range hint 92-99: Improve error handling and type safety in model tag fetching.

The current error handling masks potentially useful error information and lacks proper logging. Additionally, the response mapping assumes a specific structure without type validation.

Consider implementing these improvements:

 @Query(() => [String], { nullable: true })
 async getAvailableModelTags(
   @GetUserIdFromToken() userId: string,
 ): Promise<string[]> {
   try {
     const response = await this.chatProxyService.fetchModelTags();
+    if (!response?.models?.data) {
+      throw new Error('Invalid response structure from model tags service');
+    }
     return response.models.data.map((model) => model.id);
   } catch (error) {
+    this.logger.error('Failed to fetch model tags:', error);
+    if (error instanceof Error) {
+      throw new Error(`Failed to fetch model tags: ${error.message}`);
+    }
     throw new Error('Failed to fetch model tags');
   }
 }

Line range hint 1-99: Consider implementing a more robust streaming architecture.

While the current implementation fulfills the basic requirements, consider these architectural improvements for better scalability and reliability:

  1. Implement a dead letter queue for failed message saves
  2. Add retry mechanisms for transient failures (a sketch follows this list)
  3. Consider implementing backpressure handling for the stream
  4. Add metrics collection for stream performance monitoring
  5. Implement rate limiting per user/chat
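
As an illustration of item 2, a retry helper with exponential backoff could wrap transient operations such as the saveMessage call. This is a minimal sketch, not part of the PR; the attempt count and delays are arbitrary.

// Sketch only: a generic retry helper for transient failures.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
        );
      }
    }
  }
  throw lastError;
}

// Hypothetical usage inside the resolver:
// await withRetry(() =>
//   this.chatService.saveMessage(input.chatId, accumulatedContent, MessageRole.Model),
// );
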
backend/src/chat/chat.service.ts (2)

71-71: Consider using a consistent request payload structure.

The request payload uses content for the message while the input uses message. Consider aligning these field names for consistency.

-       { content: input.message, model: input.model },
+       { message: input.message, model: input.model },

Line range hint 134-152: Remove hardcoded model value in done chunk.

The model value is hardcoded to 'gpt-3.5-turbo' in the done chunk. Consider using the model from the input parameter for consistency.

            const doneChunk: ChatCompletionChunk = {
              id: 'done',
              object: 'chat.completion.chunk',
              created: Date.now(),
-             model: 'gpt-3.5-turbo',
+             model: input.model,
              systemFingerprint: null,
              choices: [],
              status: StreamStatus.DONE,
            };

Apply the same change to the error handler's done chunk creation.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 92e5065 and c24518f.

📒 Files selected for processing (9)
  • backend/src/chat/chat.resolver.ts (1 hunks)
  • backend/src/chat/chat.service.ts (3 hunks)
  • llm-server/src/llm-provider.ts (2 hunks)
  • llm-server/src/main.ts (2 hunks)
  • llm-server/src/model/llama-model-provider.ts (3 hunks)
  • llm-server/src/model/model-provider.ts (1 hunks)
  • llm-server/src/model/openai-model-provider.ts (2 hunks)
  • llm-server/src/prompt/systemPrompt.ts (1 hunks)
  • llm-server/src/type/GenerateMessage.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • llm-server/src/prompt/systemPrompt.ts
🔇 Additional comments (10)
llm-server/src/type/GenerateMessage.ts (1)

2-2: Verify supported model values.

The model field accepts any string, which could lead to runtime errors if an unsupported model is specified.

Let's check what models are currently supported:

llm-server/src/model/model-provider.ts (2)

2-2: LGTM!

Clean import of the new type definition.


7-7: LGTM! Verify implementations.

The change from string to GenerateMessageParams improves type safety and parameter structure. This aligns well with the system prompt implementation objective.

Let's verify that all implementations of ModelProvider have been updated:

✅ Verification successful

All implementations correctly use GenerateMessageParams

Based on the shell script results, both implementations of ModelProvider (OpenAIModelProvider and LlamaModelProvider) have been properly updated to use GenerateMessageParams in their generateStreamingResponse method signatures. The ripgrep search for the old signature pattern with content: string returned no results, confirming no legacy implementations remain.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any implementations of ModelProvider to ensure they've been updated
# to use GenerateMessageParams

# Search for class declarations that extend ModelProvider
ast-grep --pattern 'class $_ extends ModelProvider {
  $$$
}'

# Also check for any remaining uses of the old signature pattern
rg "generateStreamingResponse.*content:\s*string.*Response"

Length of output: 16115

llm-server/src/llm-provider.ts (2)

6-6: LGTM: Clean import addition

The import of GenerateMessageParams is well-placed and necessary for the type safety improvements.


36-39: Consider enhancing error handling and documentation

While the signature change to use GenerateMessageParams improves type safety, there are a few considerations:

  1. The method lacks error handling for invalid params or failed responses
  2. The relationship between GenerateMessageParams and existing interfaces (ChatMessageInput, ChatMessage) is unclear

Let's verify the structure of GenerateMessageParams and its usage:

Consider adding:

  1. Error handling for invalid parameters
  2. JSDoc documentation explaining the relationship between different message types
  3. Type validation before passing to the model provider

Example enhancement:

  async generateStreamingResponse(
    params: GenerateMessageParams,
    res: Response,
  ): Promise<void> {
+   try {
+     if (!params.message || !params.model) {
+       throw new Error('Invalid parameters: message and model are required');
+     }
      await this.modelProvider.generateStreamingResponse(params, res);
+   } catch (error) {
+     this.logger.error(`Failed to generate response: ${error.message}`);
+     res.status(500).json({ error: 'Failed to generate response' });
+   }
  }
llm-server/src/main.ts (2)

4-4: LGTM: Import statement is appropriate.

The import of GenerateMessageParams type supports the enhanced type safety for message generation parameters.


31-46: Verify system prompt implementation.

While the code adds model support and allows for role specification, it's unclear how system prompts are being handled. The PR objective mentions introducing a basic system prompt, but the implementation doesn't show where these system prompts are defined or managed.

✅ Verification successful

Let's examine the actual implementation files to verify how system prompts are being handled in the codebase.


System prompt implementation is present and functional

The implementation correctly handles system prompts:

  • System prompts are defined in systemPrompt.ts with a default 'codefox-basic' prompt
  • The OpenAI provider properly integrates the system prompt by:
    • Importing and using the system prompts from the configuration
    • Including it as the first message in the conversation with role: 'system'
    • Falling back gracefully when no system prompt is available
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for system prompt definitions or configurations
echo "Searching for system prompt configurations..."
rg -l "systemPrompt|SYSTEM_PROMPT|system_prompt" --type ts

# Search for role-related configurations
echo "Searching for role configurations..."
ast-grep --pattern 'role: "system"'

Length of output: 366


Script:

#!/bin/bash
# Check the content of system prompt implementation
echo "Content of systemPrompt.ts:"
cat llm-server/src/prompt/systemPrompt.ts

echo -e "\nChecking OpenAI provider implementation:"
cat llm-server/src/model/openai-model-provider.ts

Length of output: 4456

llm-server/src/model/llama-model-provider.ts (1)

11-11: LGTM: Import statement for GenerateMessageParams.

The import aligns with the method signature changes and follows proper module organization.

backend/src/chat/chat.service.ts (2)

8-12: LGTM! Clean import structure.

The imports are well-organized and properly typed, supporting the transition to structured input types.


24-29: 🛠️ Refactor suggestion

Add input validation for the model parameter.

While the transition to ChatInput type improves type safety, consider adding runtime validation for the model parameter to ensure it's supported by the completion endpoint.

  streamChat(
    input: ChatInput,
  ): CustomAsyncIterableIterator<ChatCompletionChunk> {
+   if (!input.model) {
+     throw new Error('Model parameter is required');
+   }
+   // Consider maintaining a list of supported models
+   const supportedModels = ['gpt-3.5-turbo', 'gpt-4']; // Add your supported models
+   if (!supportedModels.includes(input.model)) {
+     throw new Error(`Unsupported model: ${input.model}`);
+   }
    this.logger.debug(
      `Request chat input: ${input.message} with model: ${input.model}`,
    );

Comment on lines +31 to +33
const { content, model } = req.body as ChatMessageInput & {
  model: string;
};

🛠️ Refactor suggestion

Add type validation for request body.

The type assertion using as bypasses runtime type checking. Consider adding validation to ensure the request body matches the expected structure.

-      const { content, model } = req.body as ChatMessageInput & {
-        model: string;
-      };
+      if (!req.body?.content || typeof req.body.content !== 'string') {
+        throw new Error('Invalid content in request body');
+      }
+      const { content, model = 'gpt-3.5-turbo' } = req.body;

Comment on lines +35 to +39
const params: GenerateMessageParams = {
  model: model || 'gpt-3.5-turbo', // Default to 'gpt-3.5-turbo' if model is not provided
  message: content,
  role: 'user',
};

🛠️ Refactor suggestion

Refactor model handling and role assignment.

Several concerns with the current implementation:

  1. Model defaulting should be handled at a configuration level
  2. No validation for supported model types
  3. Hardcoded 'user' role might limit the system prompt functionality mentioned in PR objectives

Consider these improvements:

  1. Move model configuration to a central location
  2. Add model validation
  3. Allow role to be specified in the request for system prompts
+      const supportedModels = ['gpt-3.5-turbo', 'gpt-4']; // Move to config
+      if (model && !supportedModels.includes(model)) {
+        throw new Error(`Unsupported model: ${model}`);
+      }
       const params: GenerateMessageParams = {
-        model: model || 'gpt-3.5-turbo', // Default to 'gpt-3.5-turbo' if model is not provided
+        model: model || process.env.DEFAULT_MODEL || 'gpt-3.5-turbo',
         message: content,
-        role: 'user',
+        role: req.body.role || 'user', // Allow system prompts
       };

Committable suggestion skipped: line range outside the PR's diff.

@@ -33,7 +34,7 @@ export class LlamaModelProvider extends ModelProvider {
   }
 
   async generateStreamingResponse(
-    content: string,
+    { model, message, role = 'user' }: GenerateMessageParams,

⚠️ Potential issue

Unused 'model' parameter in method signature.

The destructured model parameter is not being utilized within the method implementation. Either remove it if unnecessary or implement model-specific logic.
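
One way to make the parameter meaningful, rather than deleting it, is to use it when looking up the system prompt, in line with the reviewer's suggestions for both providers. A sketch, assuming systemPrompts entries were keyed by model name (in this PR they are keyed by 'codefox-basic'):

// Sketch: use the destructured `model` instead of a hardcoded prompt key.
// Assumes systemPrompts is keyed by model name, which this PR does not do.
const systemPrompt = systemPrompts[model]?.systemPrompt;
if (!systemPrompt) {
  this.logger.warn(`No system prompt configured for model "${model}"; using default`);
}
const effectivePrompt =
  systemPrompt ?? systemPrompts['codefox-basic']?.systemPrompt ?? '';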

Comment on lines +37 to 39
{ model, message, role = 'user' }: GenerateMessageParams,
res: Response,
): Promise<void> {

🛠️ Refactor suggestion

Add input validation and implement system prompt support.

Several improvements are needed:

  1. Add validation for required parameters
  2. Implement system prompt support as per PR objectives

Consider this implementation:

 async generateStreamingResponse(
   { model, message, role = 'user' }: GenerateMessageParams,
   res: Response,
 ): Promise<void> {
+  if (!message?.trim()) {
+    throw new Error('Message cannot be empty');
+  }
+
+  const systemPrompt = 'You are a helpful AI assistant.'; // Consider making this configurable
+  const formattedMessage = role === 'system' ? message : `${systemPrompt}\n\nUser: ${message}`;
+
   this.logger.log('Generating streaming response with Llama...');
   const session = new LlamaChatSession({
     contextSequence: this.context.getSequence(),
   });
   this.logger.log('LlamaChatSession created.');
   let chunkCount = 0;
   const startTime = Date.now();
   try {
-    await session.prompt(message, {
+    await session.prompt(formattedMessage, {

Also applies to: 48-48

Comment on lines +35 to +36
// Get the system prompt based on the model
const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';

🛠️ Refactor suggestion

Suggestion: Use the model parameter to select the system prompt

Currently, the system prompt is fetched using the hardcoded key 'codefox-basic'. To support dynamic selection of system prompts based on the model being used, consider using the model parameter as the key.

Apply this diff to utilize the model parameter:

-const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
+const systemPrompt = systemPrompts[model]?.systemPrompt || '';

Comment on lines 39 to 41
const messages: ChatCompletionMessageParam[] = systemPrompt
  ? [{ role: 'system', content: systemPrompt }]
  : [{ role: role as 'user' | 'system' | 'assistant', content: message }];

⚠️ Potential issue

Critical Issue: User message is omitted when a system prompt is available

When a system prompt is available, the messages array includes only the system prompt and excludes the user's message. This will result in the assistant not receiving the user's input, leading to an incorrect or lack of response.

Apply this diff to include both the system prompt and the user's message:

 const messages: ChatCompletionMessageParam[] = systemPrompt
-  ? [{ role: 'system', content: systemPrompt }]
-  : [{ role: role as 'user' | 'system' | 'assistant', content: message }];
+  ? [
+      { role: 'system', content: systemPrompt },
+      { role: role as 'user' | 'system' | 'assistant', content: message },
+    ]
+  : [{ role: role as 'user' | 'system' | 'assistant', content: message }];

This commit refactors the OpenAIModelProvider class to improve the streaming response when generating chat completions with OpenAI. It updates the messages array to include the system prompt and user message, and removes unnecessary comments. This change aims to enhance the overall performance and reliability of the streaming response feature.

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
llm-server/src/model/openai-model-provider.ts (2)

35-41: Add error logging for missing system prompts

While the fallback to empty string for missing system prompts is good, consider logging a warning when a system prompt is not found for better debugging and monitoring.

-const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
+const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt;
+if (!systemPrompt) {
+  this.logger.warn(`No system prompt found for key: codefox-basic`);
+}
+const finalSystemPrompt = systemPrompt || '';

Line range hint 68-73: Enhance error handling with specific OpenAI error types

The current error handling could be improved to provide more specific error messages based on the type of OpenAI error encountered.

     } catch (error) {
-      this.logger.error('Error during OpenAI response generation:', error);
-      res.write(`data: ${JSON.stringify({ error: 'Generation failed' })}\n\n`);
+      let errorMessage = 'Generation failed';
+      if (error instanceof OpenAI.APIError) {
+        this.logger.error(`OpenAI API Error: ${error.status} - ${error.message}`);
+        errorMessage = `OpenAI Error: ${error.message}`;
+      } else {
+        this.logger.error('Error during OpenAI response generation:', error);
+      }
+      res.write(`data: ${JSON.stringify({ error: errorMessage })}\n\n`);
       res.write(`data: [DONE]\n\n`);
       res.end();
     }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between c24518f and 4451534.

📒 Files selected for processing (1)
  • llm-server/src/model/openai-model-provider.ts (4 hunks)
🔇 Additional comments (2)
llm-server/src/model/openai-model-provider.ts (2)

5-7: LGTM! Imports are well-structured

The new imports are properly typed and necessary for the enhanced functionality.


22-22: LGTM! Method signature improvement

The change to use GenerateMessageParams type improves type safety and parameter organization. The default role of 'user' is appropriate.

Comment on lines +45 to +46
model,
messages,

🛠️ Refactor suggestion

Add model validation before API call

Consider validating the model parameter against the list of available models to fail fast and provide better error messages.

+    // Validate model before making the API call
+    try {
+      await this.openai.models.retrieve(model);
+    } catch (error) {
+      this.logger.error(`Invalid model: ${model}`);
+      throw new Error(`Invalid model: ${model}`);
+    }
+
     const stream = await this.openai.chat.completions.create({
       model,
       messages,


coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (3)
llm-server/src/model/llama-model-provider.ts (3)

11-13: Remove unused import ChatCompletionMessageParam.

The ChatCompletionMessageParam type from 'openai/resources/chat/completions' is imported but never used in the code.

 import { systemPrompts } from '../prompt/systemPrompt';
-import { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
 import { GenerateMessageParams } from '../type/GenerateMessage';

53-61: Add input validation for message content.

While the message formatting is well-structured, there's no validation to ensure the message content is not empty or just whitespace.

Consider adding validation before processing:

+    if (!message?.trim()) {
+      throw new Error('Message content cannot be empty');
+    }
+
     const messages = [
       { role: 'system', content: systemPrompt },
       { role: role as 'user' | 'system' | 'assistant', content: message },
     ];

Line range hint 39-64: Consider extracting prompt handling logic.

The current implementation mixes prompt formatting logic within the model provider. Consider extracting this into a dedicated prompt service/formatter class to:

  1. Improve separation of concerns
  2. Make prompt handling more testable
  3. Allow for easier extensions of prompt formatting logic

This would make it easier to:

  • Add different prompt formats for different models
  • Implement more complex prompt chains
  • Test prompt formatting independently of the model provider (a minimal sketch follows this list)
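
A minimal sketch of such an extraction, assuming prompts stay keyed the way systemPrompt.ts defines them; the PromptFormatter name and API are illustrative, not part of this PR:

// Sketch of a dedicated prompt formatter, decoupled from the model providers.
import { systemPrompts } from '../prompt/systemPrompt';

type ChatRole = 'user' | 'system' | 'assistant';

export class PromptFormatter {
  constructor(private readonly promptKey: string = 'codefox-basic') {}

  // Builds the message list used by chat-style providers (e.g. OpenAI).
  buildMessages(message: string, role: ChatRole = 'user') {
    const systemPrompt = systemPrompts[this.promptKey]?.systemPrompt ?? '';
    return systemPrompt
      ? [
          { role: 'system' as const, content: systemPrompt },
          { role, content: message },
        ]
      : [{ role, content: message }];
  }

  // Flattens the same messages into a single string for text-prompt providers (e.g. Llama).
  buildTextPrompt(message: string, role: ChatRole = 'user'): string {
    return this.buildMessages(message, role)
      .map(({ role, content }) => `${role}: ${content}`)
      .join('\n');
  }
}
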
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 4451534 and d6eee3c.

📒 Files selected for processing (1)
  • llm-server/src/model/llama-model-provider.ts (3 hunks)

Comment on lines +50 to +61
// Get the system prompt based on the model
const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';

const messages = [
  { role: 'system', content: systemPrompt },
  { role: role as 'user' | 'system' | 'assistant', content: message },
];

// Convert messages array to a single formatted string for Llama
const formattedPrompt = messages
  .map(({ role, content }) => `${role}: ${content}`)
  .join('\n');

🛠️ Refactor suggestion

Enhance system prompt handling with proper validation.

The current implementation has several potential issues:

  1. Hardcoded model name 'codefox-basic'
  2. No validation if the model exists in systemPrompts
  3. Silent fallback to empty string could mask configuration issues

Consider this improved implementation:

-    // Get the system prompt based on the model
-    const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
+    // Get the system prompt based on the provided model
+    if (!systemPrompts[model]) {
+      throw new Error(`System prompt not found for model: ${model}`);
+    }
+    const systemPrompt = systemPrompts[model].systemPrompt;

@ZHallen122 merged commit 1d706f1 into main on Nov 4, 2024
2 of 3 checks passed
@ZHallen122 linked an issue on Nov 4, 2024 that may be closed by this pull request
@coderabbitai bot mentioned this pull request on Jan 6, 2025

Successfully merging this pull request may close these issues.

feat(prompt): system prompt
2 participants