feat: add global language instruction setting for AI responses #500
Conversation
Add a new global setting that allows users to configure the language in which AI agents respond.

Features include:
- Language dropdown with 12 predefined templates (English, German, Spanish, French, Portuguese, Italian, Dutch, Polish, Russian, Japanese, Chinese, Korean)
- Editable instruction text that gets prepended to all system prompts
- Toggle to enable/disable the language instruction
- Reset button to restore the default template for the selected language

The language instruction is automatically applied to:
- Agent chat responses
- Backlog planning prompts
- Enhancement prompts (improve, technical, simplify, etc.)

Technical changes:
- New `LanguageInstruction` interface in `@automaker/types`
- Language templates in `@automaker/prompts`
- `prependLanguageInstruction()` utility in `merge.ts`
- Settings helper integration for prompt injection
- New UI section in Settings -> Response Language
- Store and settings sync integration

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
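To make the description above concrete, here is a minimal sketch of what a `prependLanguageInstruction()` utility could look like. The field names on the interface and the exact joining behavior are assumptions for illustration, not the actual `@automaker/types` or `merge.ts` code.

```typescript
// Hypothetical shape of the setting stored by the UI described above.
interface LanguageInstruction {
  enabled: boolean; // toggle from the settings UI
  language: string; // e.g. "German", one of the 12 dropdown entries
  instruction: string; // editable text prepended to all system prompts
}

// Sketch: prepend the instruction to a base system prompt when enabled.
function prependLanguageInstruction(
  basePrompt: string,
  languageInstruction?: LanguageInstruction
): string {
  // Disabled, absent, or blank instructions leave the base prompt untouched.
  if (!languageInstruction?.enabled || !languageInstruction.instruction.trim()) {
    return basePrompt;
  }
  return `${languageInstruction.instruction.trim()}\n\n${basePrompt}`;
}

const german: LanguageInstruction = {
  enabled: true,
  language: "German",
  instruction: "Respond in German.",
};

console.log(prependLanguageInstruction("You are a helpful agent.", german));
```

Prepending (rather than appending) keeps the language directive at the top of the prompt, where models tend to weight instructions most heavily.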
📝 Walkthrough

Adds a LanguageInstruction type, language templates, UI for editing and persisting a global language instruction, a prepend utility in the prompts library, and server integrations to apply the stored language instruction to system prompts before model calls.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant UI as Client UI
    participant Store as App Store
    participant Server as API Server
    participant Settings as SettingsService
    participant Prompts as Prompts Library
    participant Model as LLM
    UI->>Store: setLanguageInstruction(...)
    Store->>Server: syncSettingsToServer()
    Server->>Settings: read GlobalSettings
    Settings-->>Server: GlobalSettings (includes languageInstruction)
    Server->>Prompts: prependLanguageInstruction(basePrompt, languageInstruction)
    Prompts-->>Server: finalPrompt
    Server->>Model: send(finalPrompt)
    Model-->>Server: response
    Server-->>UI: result
```
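The server-side half of the diagram above can be sketched in a few lines. Everything here is a stand-in modeled on the diagram's participants; the real automaker settings and provider APIs are not shown in this PR page, so the names and signatures are assumptions.

```typescript
// Hypothetical shapes standing in for the diagram's GlobalSettings flow.
type LanguageInstruction = { enabled: boolean; instruction: string };
interface GlobalSettings {
  languageInstruction?: LanguageInstruction;
}

// Sketch of the Server -> Settings -> Prompts -> Model sequence.
async function handleModelCall(
  readSettings: () => Promise<GlobalSettings>, // Server->>Settings: read GlobalSettings
  sendToModel: (prompt: string) => Promise<string>, // Server->>Model: send(finalPrompt)
  basePrompt: string
): Promise<string> {
  const settings = await readSettings();
  const li = settings.languageInstruction;
  // Server->>Prompts: prependLanguageInstruction(basePrompt, languageInstruction)
  const finalPrompt = li?.enabled ? `${li.instruction}\n\n${basePrompt}` : basePrompt;
  return sendToModel(finalPrompt);
}
```

The key point the diagram makes is that the instruction is read fresh from settings on each call, so a change in the UI takes effect on the next model request without a restart.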
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Summary of Changes

Hello @SeoFood, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a new feature that lets users control the language in which AI agents respond. The global setting allows users to ensure AI output matches their preferred language, improving accessibility and user experience. The implementation includes a UI for configuration and backend logic to inject language instructions into AI system prompts, giving consistent linguistic behavior across the various AI interactions.
Code Review
This pull request introduces a well-implemented global language instruction setting for AI responses. The changes are comprehensive, covering type definitions, prompt merging logic, settings persistence, and a user-friendly UI component. The feature enhances the user experience by allowing customization of AI response languages while maintaining technical accuracy. The code is clean, well-structured, and follows good practices for modularity and maintainability.
Extends the global language instruction setting to additional endpoints that were generating content in English regardless of user preference:

- generate-title.ts: auto-generated feature titles
- generate-suggestions.ts: feature suggestions
- describe-file.ts: context file descriptions
- describe-image.ts: context image descriptions
- generate-features-from-spec.ts: features from app spec
- generate-spec.ts: app specification generation
- validate-issue.ts: fix import path for prependLanguageInstruction

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Additional Changes

Extended the language instruction setting to all AI query endpoints that were previously generating content in English regardless of user preference. Also includes a bug fix: the import path for `prependLanguageInstruction` in validate-issue.ts was corrected.
This ensures that when a user sets their language to German (or any other language), all AI-generated content respects that preference. |
Actionable comments posted: 1
In `@apps/server/src/routes/github/routes/validate-issue.ts`:
- Around line 46-50: The imports getLanguageInstruction and
prependLanguageInstruction are unused; either remove them or apply language
instruction support to the issue validation flow: fetch languageInstruction via
getLanguageInstruction(settingsService), wrap the base
ISSUE_VALIDATION_SYSTEM_PROMPT (or finalPrompt) with
prependLanguageInstruction(basePrompt, languageInstruction), and pass that
resulting prompt into streamingQuery (where
finalPrompt/ISSUE_VALIDATION_SYSTEM_PROMPT is currently used); update references
in validate-issue.ts (e.g., getLanguageInstruction, prependLanguageInstruction,
ISSUE_VALIDATION_SYSTEM_PROMPT, finalPrompt, streamingQuery) accordingly.
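The fix the comment asks for could look roughly like this. The helper names mirror the comment, but the bodies below are stand-in stubs for illustration, not the real automaker implementations.

```typescript
// Hypothetical stand-ins for the names referenced in the review comment.
type LanguageInstruction = { enabled: boolean; instruction: string };
interface SettingsServiceLike {
  getLanguageInstruction(): LanguageInstruction | undefined;
}

const ISSUE_VALIDATION_SYSTEM_PROMPT = "You validate GitHub issues against the codebase.";

// Stub standing in for the settings-helpers export.
async function getLanguageInstruction(
  settingsService: SettingsServiceLike
): Promise<LanguageInstruction | undefined> {
  return settingsService.getLanguageInstruction();
}

// Stub standing in for the @automaker/prompts export.
function prependLanguageInstruction(
  basePrompt: string,
  instruction?: LanguageInstruction
): string {
  if (!instruction?.enabled) return basePrompt;
  return `${instruction.instruction}\n\n${basePrompt}`;
}

// The wiring the comment suggests: build finalPrompt from the base system
// prompt plus the stored instruction, then hand it to streamingQuery.
async function buildValidationPrompt(settingsService: SettingsServiceLike): Promise<string> {
  const languageInstruction = await getLanguageInstruction(settingsService);
  return prependLanguageInstruction(ISSUE_VALIDATION_SYSTEM_PROMPT, languageInstruction);
}
```

Applying the instruction here (rather than deleting the imports) keeps issue validation consistent with the other endpoints this PR touches.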
🧹 Nitpick comments (1)

apps/server/src/routes/features/routes/generate-title.ts (1)

69-77: Use the `systemPrompt` option in `simpleQuery` for clearer prompt separation.

The `simpleQuery` function supports a dedicated `systemPrompt` parameter (defined in the `SimpleQueryOptions` interface). Separating system and user prompts improves clarity and aligns with how provider APIs handle them differently. The current approach concatenates everything into `prompt`, which works but mixes concerns.

♻️ Optional refactor to use the `systemPrompt` option:

```diff
   // Use simpleQuery - provider abstraction handles all the streaming/extraction
   const result = await simpleQuery({
-    prompt: `${effectiveSystemPrompt}\n\n${userPrompt}`,
+    prompt: userPrompt,
+    systemPrompt: effectiveSystemPrompt,
     model: CLAUDE_MODEL_MAP.haiku,
     cwd: process.cwd(),
     maxTurns: 1,
     allowedTools: [],
   });
```
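A runnable version of the separation the nitpick suggests, with `simpleQuery` replaced by a stub so the example is self-contained. The `SimpleQueryOptions` shape below is an assumption sketched from the review text, not the real `providers/simple-query-service.ts`.

```typescript
// Assumed subset of the options interface mentioned in the review.
interface SimpleQueryOptions {
  prompt: string;
  systemPrompt?: string;
  maxTurns?: number;
}

// Stub: a real provider would send systemPrompt and prompt as separate roles.
async function simpleQuery(options: SimpleQueryOptions): Promise<{ text: string }> {
  return { text: `[system: ${options.systemPrompt ?? "none"}] ${options.prompt}` };
}

async function generateTitle(effectiveSystemPrompt: string, userPrompt: string) {
  // Before: prompt: `${effectiveSystemPrompt}\n\n${userPrompt}` (mixed concerns).
  // After: system and user prompts travel separately, as provider APIs expect.
  return simpleQuery({
    prompt: userPrompt,
    systemPrompt: effectiveSystemPrompt,
    maxTurns: 1,
  });
}
```

Keeping the two prompts separate also lets a provider cache or weight the system prompt differently from the user turn.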
📜 Review details
📒 Files selected for processing (9)

- apps/server/src/index.ts
- apps/server/src/routes/app-spec/generate-features-from-spec.ts
- apps/server/src/routes/app-spec/generate-spec.ts
- apps/server/src/routes/context/routes/describe-file.ts
- apps/server/src/routes/context/routes/describe-image.ts
- apps/server/src/routes/features/index.ts
- apps/server/src/routes/features/routes/generate-title.ts
- apps/server/src/routes/github/routes/validate-issue.ts
- apps/server/src/routes/suggestions/generate-suggestions.ts
🧰 Additional context used

📓 Path-based instructions (3)

**/*.{ts,tsx,js,jsx} (📄 CodeRabbit inference engine, CLAUDE.md):
Always import from shared packages (`@automaker/*`), never from old relative paths.
Files: all nine files under review.

apps/server/src/**/*.{ts,tsx} (📄 CodeRabbit inference engine, CLAUDE.md):
Use `createEventEmitter()` from `lib/events.ts` for all server operations to emit events that stream to the frontend via WebSocket.
Files: all nine files under review.

**/*.{ts,tsx} (📄 CodeRabbit inference engine, CLAUDE.md):
Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names.
Files: all nine files under review.

🧠 Learnings (1)

📚 Learning: 2025-12-28T05:07:48.147Z
Learnt from: CR
Repo: AutoMaker-Org/automaker PR: 0
File: CLAUDE.md:0-0
Learning: Store project-specific rules in `.automaker/context/` and load them into agent prompts via `loadContextFiles()` from automaker/utils
Applied to files:
- apps/server/src/routes/app-spec/generate-spec.ts
- apps/server/src/routes/suggestions/generate-suggestions.ts
🧬 Code graph analysis (8)

- apps/server/src/routes/context/routes/describe-image.ts → apps/server/src/lib/settings-helpers.ts: `getLanguageInstruction` (299-313)
- apps/server/src/routes/app-spec/generate-spec.ts → apps/server/src/lib/settings-helpers.ts: `getLanguageInstruction` (299-313)
- apps/server/src/routes/suggestions/generate-suggestions.ts → apps/server/src/lib/settings-helpers.ts: `getLanguageInstruction` (299-313)
- apps/server/src/routes/features/index.ts → apps/server/src/services/settings-service.ts: `SettingsService` (106-782); apps/server/src/routes/features/routes/generate-title.ts: `createGenerateTitleHandler` (41-114)
- apps/server/src/routes/features/routes/generate-title.ts → apps/server/src/lib/settings-helpers.ts: `getLanguageInstruction` (299-313); apps/server/src/providers/simple-query-service.ts: `simpleQuery` (105-167)
- apps/server/src/index.ts → apps/server/src/routes/features/index.ts: `createFeaturesRoutes` (19-45)
- apps/server/src/routes/app-spec/generate-features-from-spec.ts → apps/server/src/lib/settings-helpers.ts: `getLanguageInstruction` (299-313)
- apps/server/src/routes/context/routes/describe-file.ts → apps/server/src/lib/settings-helpers.ts: `getLanguageInstruction` (299-313)
🔇 Additional comments (15)
apps/server/src/index.ts (1)

222-222: LGTM! The addition of `settingsService` to `createFeaturesRoutes` follows the established pattern used by other routes (e.g., `createeEnhancePromptRoutes`-style factories such as `createEnhancePromptRoutes`, `createWorktreeRoutes`, `createSuggestionsRoutes`) and enables language instruction support for the features API.

apps/server/src/routes/app-spec/generate-spec.ts (2)

24-25: LGTM! Imports follow the coding guideline to use shared packages (`@automaker/prompts`).

70-85: LGTM! Clean implementation of language instruction integration:
- The base prompt is clearly defined
- The language instruction is loaded asynchronously from settings
- The final prompt is composed using `prependLanguageInstruction`

This pattern is consistent with other routes in this PR.

apps/server/src/routes/context/routes/describe-image.ts (2)

22-26: LGTM! Imports are properly structured and follow the coding guideline to use shared packages (`@automaker/prompts`).

286-294: LGTM! Language instruction integration follows the established pattern. The `instructionText` is correctly used in both the Cursor text prompt path (line 304) and the Claude multi-part prompt path (line 309).

apps/server/src/routes/app-spec/generate-features-from-spec.ts (2)

17-18: LGTM! Imports follow the coding guideline to use shared packages (`@automaker/prompts`).

57-92: LGTM! Clean implementation that follows the same pattern as other routes:
- Define `basePrompt` with the feature generation instructions
- Load the language instruction via `getLanguageInstruction(settingsService)`
- Create the final `prompt` using `prependLanguageInstruction`

The implementation is consistent across the codebase.

apps/server/src/routes/suggestions/generate-suggestions.ts (2)

18-19: LGTM! Imports correctly use shared packages (`@automaker/prompts`) and local settings helpers as required by coding guidelines.

205-212: LGTM! The language instruction integration follows the established pattern:
- Load the instruction asynchronously via `getLanguageInstruction`
- Prepend it to the base prompt using `prependLanguageInstruction`
- Use the enhanced `basePrompt` for both structured and non-structured output paths

This ensures consistent language handling regardless of model type.

apps/server/src/routes/features/index.ts (2)

7-7: LGTM! The type-only import for `SettingsService` is appropriate and follows best practices.

19-22: LGTM! The optional `settingsService` parameter maintains backward compatibility while enabling language instruction support. Only `generate-title` receives it, which is correct since the other routes handle CRUD operations without AI text generation. Also applies to: 42-42.

apps/server/src/routes/context/routes/describe-file.ts (2)

22-26: LGTM! Imports are properly organized:
- Settings-related functions grouped together from `settings-helpers.js`
- `prependLanguageInstruction` correctly imported from `@automaker/prompts`

139-150: LGTM! Clean integration of the language instruction:
- Renamed to `basePrompt` for clarity
- Language instruction loaded asynchronously
- Final `prompt` created via `prependLanguageInstruction`

The prompt is correctly passed to `simpleQuery` at line 174.

apps/server/src/routes/features/routes/generate-title.ts (2)

11-14: LGTM! Imports correctly use shared packages as per coding guidelines.

41-43: LGTM! The optional parameter maintains backward compatibility while enabling language instruction support.
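The backward-compatible optional parameter praised in the comments above can be illustrated with a small sketch. `createGenerateTitleHandler` and the settings shape here are assumptions modeled on the review, not the real automaker code.

```typescript
// Hypothetical stand-ins for the reviewed types.
type LanguageInstruction = { enabled: boolean; instruction: string };
interface SettingsServiceLike {
  getLanguageInstruction(): LanguageInstruction | undefined;
}

// The parameter is optional, so callers that predate the feature can keep
// calling createGenerateTitleHandler() with no arguments and get the old behavior.
function createGenerateTitleHandler(settingsService?: SettingsServiceLike) {
  return (basePrompt: string): string => {
    const instruction = settingsService?.getLanguageInstruction();
    if (!instruction?.enabled) return basePrompt;
    return `${instruction.instruction}\n\n${basePrompt}`;
  };
}
```

Making the new dependency optional is what lets this land without touching every existing call site at once.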
Use the language instruction setting for GitHub issue validation so that validation responses respect the user's language preference. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add language instruction to IdeationService for chat and suggestions
- Add language instruction to commit message generation
- Apply language instruction to all Auto Mode prompts in settings-helpers

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary

Add a global language instruction setting so AI agents respond in the user's preferred language.

Features

- Language dropdown with 12 predefined templates
- Editable instruction text prepended to all system prompts
- Enable/disable toggle and per-language reset button

Technical Changes

- New `LanguageInstruction` interface in `@automaker/types`
- Language templates in `@automaker/prompts`
- `prependLanguageInstruction()` utility in `merge.ts`

Test plan

- `npm run dev:web`

🤖 Generated with Claude Code
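The language templates mentioned in the summary could be organized as a simple lookup table, as sketched below. The exact template wording in `@automaker/prompts` is not shown in this PR page, so these strings are placeholders; only the set of 12 languages comes from the PR description.

```typescript
// Placeholder templates for the 12 languages listed in the PR description.
const LANGUAGE_TEMPLATES: Record<string, string> = {
  English: "Respond in English.",
  German: "Respond in German (Deutsch).",
  Spanish: "Respond in Spanish (Español).",
  French: "Respond in French (Français).",
  Portuguese: "Respond in Portuguese (Português).",
  Italian: "Respond in Italian (Italiano).",
  Dutch: "Respond in Dutch (Nederlands).",
  Polish: "Respond in Polish (Polski).",
  Russian: "Respond in Russian (Русский).",
  Japanese: "Respond in Japanese (日本語).",
  Chinese: "Respond in Chinese (中文).",
  Korean: "Respond in Korean (한국어).",
};

// The Reset button described in the PR would restore the template for the
// currently selected language; unknown languages fall back to English here.
function getDefaultTemplate(language: string): string {
  return LANGUAGE_TEMPLATES[language] ?? LANGUAGE_TEMPLATES.English;
}
```

Because the instruction text stays editable, the templates only seed the field; users can refine the wording and the Reset button recovers the default.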