Feat(prompt) system prompt #40
Conversation
Walkthrough
This pull request introduces several modifications across multiple files to enhance the chat streaming functionality and its related components. Key changes include updating method parameters to accept a more structured input type.
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (10)
llm-server/src/type/GenerateMessage.ts (2)
1-5: Enhance interface documentation and type safety. The interface provides a good foundation but could benefit from the following improvements:
- Add comprehensive JSDoc documentation:
+/**
+ * Parameters for generating a message response from an LLM model.
+ */
 export interface GenerateMessageParams {
+  /**
+   * The identifier of the LLM model to use.
+   * @example 'gpt-3.5-turbo', 'gpt-4'
+   */
   model: string;
+  /**
+   * The input message to be processed by the model.
+   */
   message: string;
+  /**
+   * The role of the message sender.
+   * @default 'user'
+   */
   role?: 'user' | 'system' | 'assistant' | 'tool' | 'function';
 }
- Consider adding validation constraints:
import { z } from 'zod'; // Add at top of file

export const GenerateMessageParamsSchema = z.object({
  model: z.string().min(1),
  message: z.string().min(1),
  role: z.enum(['user', 'system', 'assistant', 'tool', 'function']).optional().default('user'),
});
4-4: Document the default role behavior. The role field is optional, but there's no explicit documentation about what value is used when it's omitted. Consider either:
- Adding a comment specifying the default value
- Making it required if there's no sensible default
- Adding runtime validation with a default value
llm-server/src/main.ts (1)
46-46: Enhance error handling for streaming response. While the parameter passing is correct, consider improving error handling for the streaming response to provide more specific error messages to the client.
-    await this.llmProvider.generateStreamingResponse(params, res);
+    try {
+      await this.llmProvider.generateStreamingResponse(params, res);
+    } catch (error) {
+      if (error.name === 'ModelError') {
+        res.status(400).json({ error: `Model error: ${error.message}` });
+      } else if (error.name === 'StreamingError') {
+        res.status(503).json({ error: 'Streaming service unavailable' });
+      } else {
+        throw error; // Let the outer catch handle unexpected errors
+      }
+    }

llm-server/src/model/llama-model-provider.ts (2)
37-39: Consider enhancing type safety. The current implementation could benefit from stronger type safety.
Consider adding:
enum Role {
  System = 'system',
  User = 'user',
}

interface GenerateMessageParams {
  model: string;
  message: string;
  role?: Role;
}
Line range hint 48-57: Improve error handling in streaming response. The current implementation sends chunks without proper error boundaries.
Consider wrapping each chunk in a try-catch and implementing a proper error protocol:
-    await session.prompt(message, {
+    await session.prompt(message, {
       onTextChunk: chunk => {
+        try {
           chunkCount++;
           this.logger.debug(`Sending chunk #${chunkCount}: "${chunk}"`);
           res.write(
             `data: ${JSON.stringify({ role: 'bot', content: chunk })}\n\n`,
           );
+        } catch (error) {
+          this.logger.error(`Error processing chunk #${chunkCount}:`, error);
+          throw error;
+        }
       },
     });

backend/src/chat/chat.resolver.ts (3)
Line range hint 56-80: Add error handling for message operations and implement stream safeguards. The streaming implementation could benefit from additional error handling and safeguards:
- The message saving operations should have specific error handling
- Consider implementing a timeout mechanism for the stream
- Add validation for the accumulated content before saving
Consider implementing these improvements:
 const iterator = this.chatProxyService.streamChat(input);
 let accumulatedContent = '';
+const streamTimeout = setTimeout(() => {
+  throw new Error('Stream timeout exceeded');
+}, 30000); // 30 second timeout
 try {
   for await (const chunk of iterator) {
     if (chunk) {
       const enhancedChunk = {
         ...chunk,
         chatId: input.chatId,
       };
       await this.pubSub.publish(`chat_stream_${input.chatId}`, {
         chatStream: enhancedChunk,
       });
       if (chunk.choices[0]?.delta?.content) {
         accumulatedContent += chunk.choices[0].delta.content;
       }
     }
   }
+  if (!accumulatedContent.trim()) {
+    throw new Error('Empty response from model');
+  }
   await this.chatService.saveMessage(
     input.chatId,
     accumulatedContent,
     MessageRole.Model,
   );
 } catch (error) {
   this.logger.error('Stream processing error:', error);
+  // Attempt to save partial response if available
+  if (accumulatedContent.trim()) {
+    await this.chatService.saveMessage(
+      input.chatId,
+      accumulatedContent + '\n[Error: Stream interrupted]',
+      MessageRole.Model,
+    );
+  }
   throw error;
 } finally {
+  clearTimeout(streamTimeout);
 }
Line range hint 92-99: Improve error handling and type safety in model tag fetching. The current error handling masks potentially useful error information and lacks proper logging. Additionally, the response mapping assumes a specific structure without type validation.
Consider implementing these improvements:
 @Query(() => [String], { nullable: true })
 async getAvailableModelTags(
   @GetUserIdFromToken() userId: string,
 ): Promise<string[]> {
   try {
     const response = await this.chatProxyService.fetchModelTags();
+    if (!response?.models?.data) {
+      throw new Error('Invalid response structure from model tags service');
+    }
     return response.models.data.map((model) => model.id);
   } catch (error) {
+    this.logger.error('Failed to fetch model tags:', error);
+    if (error instanceof Error) {
+      throw new Error(`Failed to fetch model tags: ${error.message}`);
+    }
     throw new Error('Failed to fetch model tags');
   }
 }
Line range hint 1-99: Consider implementing a more robust streaming architecture. While the current implementation fulfills the basic requirements, consider these architectural improvements for better scalability and reliability (a sketch of a simple retry helper follows the list below):
- Implement a dead letter queue for failed message saves
- Add retry mechanisms for transient failures
- Consider implementing backpressure handling for the stream
- Add metrics collection for stream performance monitoring
- Implement rate limiting per user/chat
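To illustrate only the retry point, here is a minimal generic retry helper with exponential backoff; the helper name, delay values, and the commented call site are assumptions for illustration and not part of this PR.

// Hypothetical helper for retrying transient failures (e.g., message saves).
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Exponential backoff: 200ms, 400ms, 800ms, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
        );
      }
    }
  }
  throw lastError;
}

// Usage sketch (assumed call site):
// await withRetry(() => this.chatService.saveMessage(input.chatId, accumulatedContent, MessageRole.Model));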
backend/src/chat/chat.service.ts (2)
71-71: Consider using a consistent request payload structure. The request payload uses 'content' for the message while the input uses 'message'. Consider aligning these field names for consistency.
-      { content: input.message, model: input.model },
+      { message: input.message, model: input.model },
Line range hint 134-152: Remove hardcoded model value in done chunk. The model value is hardcoded to 'gpt-3.5-turbo' in the done chunk. Consider using the model from the input parameter for consistency.
 const doneChunk: ChatCompletionChunk = {
   id: 'done',
   object: 'chat.completion.chunk',
   created: Date.now(),
-  model: 'gpt-3.5-turbo',
+  model: input.model,
   systemFingerprint: null,
   choices: [],
   status: StreamStatus.DONE,
 };

Apply the same change to the error handler's done chunk creation.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (9)
- backend/src/chat/chat.resolver.ts (1 hunks)
- backend/src/chat/chat.service.ts (3 hunks)
- llm-server/src/llm-provider.ts (2 hunks)
- llm-server/src/main.ts (2 hunks)
- llm-server/src/model/llama-model-provider.ts (3 hunks)
- llm-server/src/model/model-provider.ts (1 hunks)
- llm-server/src/model/openai-model-provider.ts (2 hunks)
- llm-server/src/prompt/systemPrompt.ts (1 hunks)
- llm-server/src/type/GenerateMessage.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- llm-server/src/prompt/systemPrompt.ts
🔇 Additional comments (10)
llm-server/src/type/GenerateMessage.ts (1)
2-2: Verify supported model values.
The model field accepts any string, which could lead to runtime errors if an unsupported model is specified.
Let's check what models are currently supported:
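As an illustrative sketch only (the model list below is an assumption, not the project's confirmed set), the field could be narrowed to a union type once the supported models are known:

// Hypothetical narrowing of the 'model' field; replace the list with the models the server actually supports.
export type SupportedModel = 'gpt-3.5-turbo' | 'gpt-4';

export interface GenerateMessageParams {
  model: SupportedModel;
  message: string;
  role?: 'user' | 'system' | 'assistant' | 'tool' | 'function';
}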
llm-server/src/model/model-provider.ts (2)
2-2: LGTM!
Clean import of the new type definition.
7-7: LGTM! Verify implementations.
The change from string to GenerateMessageParams improves type safety and parameter structure. This aligns well with the system prompt implementation objective.
Let's verify that all implementations of ModelProvider have been updated:
✅ Verification successful
All implementations correctly use GenerateMessageParams
Based on the shell script results, both implementations of ModelProvider (OpenAIModelProvider and LlamaModelProvider) have been properly updated to use GenerateMessageParams in their generateStreamingResponse method signatures. The ripgrep search for the old signature pattern with content: string returned no results, confirming no legacy implementations remain.
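For orientation, a minimal sketch of the shared contract these signatures imply; the import paths and the abstract class body below are inferred from the snippets quoted in this review, not copied from model-provider.ts.

// Sketch of the abstract contract both providers implement after this change (assumed shape).
import { Response } from 'express';
import { GenerateMessageParams } from '../type/GenerateMessage';

export abstract class ModelProvider {
  abstract generateStreamingResponse(
    params: GenerateMessageParams,
    res: Response,
  ): Promise<void>;
}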
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any implementations of ModelProvider to ensure they've been updated
# to use GenerateMessageParams
# Search for class declarations that extend ModelProvider
ast-grep --pattern 'class $_ extends ModelProvider {
$$$
}'
# Also check for any remaining uses of the old signature pattern
rg "generateStreamingResponse.*content:\s*string.*Response"
Length of output: 16115
llm-server/src/llm-provider.ts (2)
6-6: LGTM: Clean import addition
The import of GenerateMessageParams is well-placed and necessary for the type safety improvements.
36-39: Consider enhancing error handling and documentation
While the signature change to use GenerateMessageParams improves type safety, there are a few considerations:
- The method lacks error handling for invalid params or failed responses
- The relationship between GenerateMessageParams and existing interfaces (ChatMessageInput, ChatMessage) is unclear
Let's verify the structure of GenerateMessageParams and its usage:
Consider adding:
- Error handling for invalid parameters
- JSDoc documentation explaining the relationship between different message types
- Type validation before passing to the model provider
Example enhancement:
async generateStreamingResponse(
params: GenerateMessageParams,
res: Response,
): Promise<void> {
+ try {
+ if (!params.message || !params.model) {
+ throw new Error('Invalid parameters: message and model are required');
+ }
await this.modelProvider.generateStreamingResponse(params, res);
+ } catch (error) {
+ this.logger.error(`Failed to generate response: ${error.message}`);
+ res.status(500).json({ error: 'Failed to generate response' });
+ }
}
llm-server/src/main.ts (2)
4-4: LGTM: Import statement is appropriate.
The import of the GenerateMessageParams type supports the enhanced type safety for message generation parameters.
31-46: Verify system prompt implementation.
While the code adds model support and allows for role specification, it's unclear how system prompts are being handled. The PR objective mentions introducing a basic system prompt, but the implementation doesn't show where these system prompts are defined or managed.
✅ Verification successful
Let's examine the actual implementation files to verify how system prompts are being handled in the codebase.
System prompt implementation is present and functional
The implementation correctly handles system prompts:
- System prompts are defined in systemPrompt.ts with a default 'codefox-basic' prompt (a sketch of the presumed file shape follows this list)
- The OpenAI provider properly integrates the system prompt by:
  - Importing and using the system prompts from the configuration
  - Including it as the first message in the conversation with role: 'system'
  - Falling back gracefully when no system prompt is available
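The actual contents of systemPrompt.ts are not quoted in this review, so the following is only a sketch of the presumed shape, based on the 'codefox-basic' key and systemPrompt field referenced above; the prompt text is a placeholder, not the real prompt.

// Hypothetical shape of llm-server/src/prompt/systemPrompt.ts (assumed, not copied from the repo).
export interface SystemPromptConfig {
  systemPrompt: string;
}

export const systemPrompts: Record<string, SystemPromptConfig> = {
  'codefox-basic': {
    // Placeholder text, not the repository's actual prompt.
    systemPrompt: 'You are CodeFox, a helpful coding assistant.',
  },
};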
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for system prompt definitions or configurations
echo "Searching for system prompt configurations..."
rg -l "systemPrompt|SYSTEM_PROMPT|system_prompt" --type ts
# Search for role-related configurations
echo "Searching for role configurations..."
ast-grep --pattern 'role: "system"'
Length of output: 366
Script:
#!/bin/bash
# Check the content of system prompt implementation
echo "Content of systemPrompt.ts:"
cat llm-server/src/prompt/systemPrompt.ts
echo -e "\nChecking OpenAI provider implementation:"
cat llm-server/src/model/openai-model-provider.ts
Length of output: 4456
llm-server/src/model/llama-model-provider.ts (1)
11-11: LGTM: Import statement for GenerateMessageParams.
The import aligns with the method signature changes and follows proper module organization.
backend/src/chat/chat.service.ts (2)
8-12: LGTM! Clean import structure.
The imports are well-organized and properly typed, supporting the transition to structured input types.
24-29: 🛠️ Refactor suggestion
Add input validation for the model parameter.
While the transition to the ChatInput type improves type safety, consider adding runtime validation for the model parameter to ensure it's supported by the completion endpoint.
streamChat(
input: ChatInput,
): CustomAsyncIterableIterator<ChatCompletionChunk> {
+ if (!input.model) {
+ throw new Error('Model parameter is required');
+ }
+ // Consider maintaining a list of supported models
+ const supportedModels = ['gpt-3.5-turbo', 'gpt-4']; // Add your supported models
+ if (!supportedModels.includes(input.model)) {
+ throw new Error(`Unsupported model: ${input.model}`);
+ }
this.logger.debug(
`Request chat input: ${input.message} with model: ${input.model}`,
);
const { content, model } = req.body as ChatMessageInput & {
  model: string;
};
🛠️ Refactor suggestion
Add type validation for request body.
The type assertion using 'as' bypasses runtime type checking. Consider adding validation to ensure the request body matches the expected structure.
- const { content, model } = req.body as ChatMessageInput & {
- model: string;
- };
+ if (!req.body?.content || typeof req.body.content !== 'string') {
+ throw new Error('Invalid content in request body');
+ }
+ const { content, model = 'gpt-3.5-turbo' } = req.body;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (!req.body?.content || typeof req.body.content !== 'string') {
  throw new Error('Invalid content in request body');
}
const { content, model = 'gpt-3.5-turbo' } = req.body;
const params: GenerateMessageParams = {
  model: model || 'gpt-3.5-turbo', // Default to 'gpt-3.5-turbo' if model is not provided
  message: content,
  role: 'user',
};
🛠️ Refactor suggestion
Refactor model handling and role assignment.
Several concerns with the current implementation:
- Model defaulting should be handled at a configuration level
- No validation for supported model types
- Hardcoded 'user' role might limit the system prompt functionality mentioned in PR objectives
Consider these improvements:
- Move model configuration to a central location
- Add model validation
- Allow role to be specified in the request for system prompts
+ const supportedModels = ['gpt-3.5-turbo', 'gpt-4']; // Move to config
+ if (model && !supportedModels.includes(model)) {
+ throw new Error(`Unsupported model: ${model}`);
+ }
const params: GenerateMessageParams = {
- model: model || 'gpt-3.5-turbo', // Default to 'gpt-3.5-turbo' if model is not provided
+ model: model || process.env.DEFAULT_MODEL || 'gpt-3.5-turbo',
message: content,
- role: 'user',
+ role: req.body.role || 'user', // Allow system prompts
};
Committable suggestion skipped: line range outside the PR's diff.
@@ -33,7 +34,7 @@ export class LlamaModelProvider extends ModelProvider {
   }

   async generateStreamingResponse(
-    content: string,
+    { model, message, role = 'user' }: GenerateMessageParams,
Unused 'model' parameter in method signature.
The destructured model parameter is not being utilized within the method implementation. Either remove it if unnecessary or implement model-specific logic.
  { model, message, role = 'user' }: GenerateMessageParams,
  res: Response,
): Promise<void> {
🛠️ Refactor suggestion
Add input validation and implement system prompt support.
Several improvements are needed:
- Add validation for required parameters
- Implement system prompt support as per PR objectives
Consider this implementation:
async generateStreamingResponse(
{ model, message, role = 'user' }: GenerateMessageParams,
res: Response,
): Promise<void> {
+ if (!message?.trim()) {
+ throw new Error('Message cannot be empty');
+ }
+
+ const systemPrompt = 'You are a helpful AI assistant.'; // Consider making this configurable
+ const formattedMessage = role === 'system' ? message : `${systemPrompt}\n\nUser: ${message}`;
+
this.logger.log('Generating streaming response with Llama...');
const session = new LlamaChatSession({
contextSequence: this.context.getSequence(),
});
this.logger.log('LlamaChatSession created.');
let chunkCount = 0;
const startTime = Date.now();
try {
- await session.prompt(message, {
+ await session.prompt(formattedMessage, {
Also applies to: 48-48
// Get the system prompt based on the model
const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
🛠️ Refactor suggestion
Suggestion: Use the model parameter to select the system prompt
Currently, the system prompt is fetched using the hardcoded key 'codefox-basic'. To support dynamic selection of system prompts based on the model being used, consider using the model parameter as the key.
Apply this diff to utilize the model parameter:
-const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
+const systemPrompt = systemPrompts[model]?.systemPrompt || '';
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Get the system prompt based on the model
const systemPrompt = systemPrompts[model]?.systemPrompt || '';
const messages: ChatCompletionMessageParam[] = systemPrompt
  ? [{ role: 'system', content: systemPrompt }]
  : [{ role: role as 'user' | 'system' | 'assistant', content: message }];
Critical Issue: User message is omitted when a system prompt is available
When a system prompt is available, the messages array includes only the system prompt and excludes the user's message. This will result in the assistant not receiving the user's input, leading to an incorrect response or no response at all.
Apply this diff to include both the system prompt and the user's message:
const messages: ChatCompletionMessageParam[] = systemPrompt
- ? [{ role: 'system', content: systemPrompt }]
- : [{ role: role as 'user' | 'system' | 'assistant', content: message }];
+ ? [
+ { role: 'system', content: systemPrompt },
+ { role: role as 'user' | 'system' | 'assistant', content: message },
+ ]
+ : [{ role: role as 'user' | 'system' | 'assistant', content: message }];
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const messages: ChatCompletionMessageParam[] = systemPrompt
  ? [
      { role: 'system', content: systemPrompt },
      { role: role as 'user' | 'system' | 'assistant', content: message },
    ]
  : [{ role: role as 'user' | 'system' | 'assistant', content: message }];
This commit refactors the OpenAIModelProvider class to improve the streaming response when generating chat completions with OpenAI. It updates the messages array to include the system prompt and user message, and removes unnecessary comments. This change aims to enhance the overall performance and reliability of the streaming response feature. Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
llm-server/src/model/openai-model-provider.ts (2)
35-41: Add error logging for missing system prompts
While the fallback to an empty string for missing system prompts is good, consider logging a warning when a system prompt is not found, for better debugging and monitoring.
-const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
+const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt;
+if (!systemPrompt) {
+  this.logger.warn(`No system prompt found for key: codefox-basic`);
+}
+const finalSystemPrompt = systemPrompt || '';
Line range hint 68-73: Enhance error handling with specific OpenAI error types
The current error handling could be improved to provide more specific error messages based on the type of OpenAI error encountered.
 } catch (error) {
-  this.logger.error('Error during OpenAI response generation:', error);
-  res.write(`data: ${JSON.stringify({ error: 'Generation failed' })}\n\n`);
+  let errorMessage = 'Generation failed';
+  if (error instanceof OpenAI.APIError) {
+    this.logger.error(`OpenAI API Error: ${error.status} - ${error.message}`);
+    errorMessage = `OpenAI Error: ${error.message}`;
+  } else {
+    this.logger.error('Error during OpenAI response generation:', error);
+  }
+  res.write(`data: ${JSON.stringify({ error: errorMessage })}\n\n`);
   res.write(`data: [DONE]\n\n`);
   res.end();
 }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- llm-server/src/model/openai-model-provider.ts (4 hunks)
🔇 Additional comments (2)
llm-server/src/model/openai-model-provider.ts (2)
5-7: LGTM! Imports are well-structured
The new imports are properly typed and necessary for the enhanced functionality.
22-22: LGTM! Method signature improvement
The change to use the GenerateMessageParams type improves type safety and parameter organization. The default role of 'user' is appropriate.
model,
messages,
🛠️ Refactor suggestion
Add model validation before API call
Consider validating the model parameter against the list of available models to fail fast and provide better error messages.
+ // Validate model before making the API call
+ try {
+ await this.openai.models.retrieve(model);
+ } catch (error) {
+ this.logger.error(`Invalid model: ${model}`);
+ throw new Error(`Invalid model: ${model}`);
+ }
+
const stream = await this.openai.chat.completions.create({
model,
messages,
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Validate model before making the API call
try {
  await this.openai.models.retrieve(model);
} catch (error) {
  this.logger.error(`Invalid model: ${model}`);
  throw new Error(`Invalid model: ${model}`);
}

const stream = await this.openai.chat.completions.create({
  model,
  messages,
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
llm-server/src/model/llama-model-provider.ts (3)
11-13: Remove unused import ChatCompletionMessageParam.
The ChatCompletionMessageParam type from 'openai/resources/chat/completions' is imported but never used in the code.
 import { systemPrompts } from '../prompt/systemPrompt';
-import { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
 import { GenerateMessageParams } from '../type/GenerateMessage';
53-61: Add input validation for message content.
While the message formatting is well-structured, there's no validation to ensure the message content is not empty or just whitespace.
Consider adding validation before processing:
+    if (!message?.trim()) {
+      throw new Error('Message content cannot be empty');
+    }
+
     const messages = [
       { role: 'system', content: systemPrompt },
       { role: role as 'user' | 'system' | 'assistant', content: message },
     ];
Line range hint 39-64: Consider extracting prompt handling logic.
The current implementation mixes prompt formatting logic within the model provider. Consider extracting this into a dedicated prompt service/formatter class (a sketch follows the lists below) to:
- Improve separation of concerns
- Make prompt handling more testable
- Allow for easier extensions of prompt formatting logic
This would make it easier to:
- Add different prompt formats for different models
- Implement more complex prompt chains
- Test prompt formatting independently of the model provider
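As a rough illustration only, here is one way such an extraction could look; the class name, constructor, and method below are hypothetical and not part of the codebase.

// Hypothetical prompt formatter; names and structure are illustrative.
type ChatRole = 'user' | 'system' | 'assistant';

export class PromptFormatter {
  constructor(private readonly systemPrompt: string) {}

  // Builds the role-prefixed prompt string that the Llama session consumes.
  format(role: ChatRole, message: string): string {
    const messages = [
      { role: 'system', content: this.systemPrompt },
      { role, content: message },
    ];
    return messages
      .map(({ role, content }) => `${role}: ${content}`)
      .join('\n');
  }
}

The provider would then only call formatter.format(role, message), which keeps prompt formatting testable in isolation.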
// Get the system prompt based on the model
const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';

const messages = [
  { role: 'system', content: systemPrompt },
  { role: role as 'user' | 'system' | 'assistant', content: message },
];

// Convert messages array to a single formatted string for Llama
const formattedPrompt = messages
  .map(({ role, content }) => `${role}: ${content}`)
  .join('\n');
🛠️ Refactor suggestion
Enhance system prompt handling with proper validation.
The current implementation has several potential issues:
- Hardcoded model name 'codefox-basic'
- No validation if the model exists in systemPrompts
- Silent fallback to empty string could mask configuration issues
Consider this improved implementation:
- // Get the system prompt based on the model
- const systemPrompt = systemPrompts['codefox-basic']?.systemPrompt || '';
+ // Get the system prompt based on the provided model
+ if (!systemPrompts[model]) {
+ throw new Error(`System prompt not found for model: ${model}`);
+ }
+ const systemPrompt = systemPrompts[model].systemPrompt;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Get the system prompt based on the provided model
if (!systemPrompts[model]) {
  throw new Error(`System prompt not found for model: ${model}`);
}
const systemPrompt = systemPrompts[model].systemPrompt;

const messages = [
  { role: 'system', content: systemPrompt },
  { role: role as 'user' | 'system' | 'assistant', content: message },
];

// Convert messages array to a single formatted string for Llama
const formattedPrompt = messages
  .map(({ role, content }) => `${role}: ${content}`)
  .join('\n');
Add a basic system prompt in llm server.
Summary by CodeRabbit
- New Features
- Bug Fixes
- Documentation
- Introduced GenerateMessageParams to define the structure for message generation parameters.