5 changes: 3 additions & 2 deletions package.json
@@ -24,8 +24,8 @@
"build": "babel --out-dir=dist source",
"dev": "babel --out-dir=dist --watch source",
"clean": "npm run clean:linux",
"clean:linux": "rm -rf dist",
"clean:windows": "rmdir /s /q dist",
"clean:linux": "rm -rf dist || echo Directory not found, skipping removal",
"clean:windows": "rmdir /s /q dist || echo Directory not found, skipping removal",
"setup": "npm run setup:linux",
"setup:linux": "npm run clean:linux && npm run build && npm i -g .",
"setup:windows": "npm run clean:windows && npm run build",
@@ -47,6 +47,7 @@
"ink-select-input": "^6.0.0",
"is-git-repository": "^2.0.0",
"meow": "^11.0.0",
"ollama": "^0.5.9",

💡 Verification agent

🧩 Analysis chain

Verify Ollama dependency version and security.

The new Ollama dependency supports the AI model integration. Please ensure this version is stable and free from known vulnerabilities.


🌐 Web query:

What is the latest stable version of the ollama npm package and are there any known security vulnerabilities in version 0.5.9?

💡 Result:

The latest stable version of the ollama npm package is 0.5.16, released approximately one month ago. (security.snyk.io)

Regarding version 0.5.9, there are no known security vulnerabilities associated with it. Snyk's vulnerability database indicates that version 0.5.16, which is the latest release, has no direct vulnerabilities. (security.snyk.io)

It's important to note that while version 0.5.9 does not have reported vulnerabilities, it is an older release. For the most up-to-date features and security patches, it's recommended to use the latest stable version, 0.5.16.


Upgrade Ollama to the latest stable version

Ollama v0.5.9 has no known security vulnerabilities, but the latest stable release is v0.5.16 (also free of reported issues). To benefit from new features and security patches, please update:

  • File: package.json, line 50
  • Change:
-   "ollama": "^0.5.9",
+   "ollama": "^0.5.16",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"ollama": "^0.5.9",
"ollama": "^0.5.16",
🤖 Prompt for AI Agents
In package.json at line 50, update the Ollama dependency version from "^0.5.9"
to "^0.5.16" to use the latest stable release. This involves changing the
version string to reflect the newer version while keeping the caret to allow
compatible updates.

"openai": "^4.28.4",
"react": "^18.2.0",
"readline": "^1.3.0"
47 changes: 3 additions & 44 deletions source/app.js
@@ -1,48 +1,7 @@
import React from 'react';
import {Text, Newline} from 'ink';
import BigText from 'ink-big-text';
import Gradient from 'ink-gradient';
import isGit from 'is-git-repository';
import isCommitterSet from './utils/errors.js';
import info from './utils/info.js';
import askForCommitMessage from './utils/commit.js';
import { getOpenAIKey, setOpenAIKey, deleteOPenAIKey } from './utils/api.js';
import Logo from './utils/logo.js';

export default function App({flags}) {
if(flags.setopenai) {
setOpenAIKey(flags.setopenai);
}
if(flags.delopenai) {
deleteOPenAIKey();
}
if (!getOpenAIKey()) {
console.log('Please provide an OpenAI API key.');
console.log('You can get one from https://platform.openai.com/account/api-keys')
console.log('Run `magicc --setopenai=<api-key>` to save your API key and try again.');
} else {
console.log('You have an OpenAI API key, you can now generate a commit message.');
const gitCheck = isGit();
const committerCheck = isCommitterSet();
if (gitCheck && committerCheck) {
askForCommitMessage();
} else {
console.log('This is not a git repository.');
}
}
return (
<>
<Gradient name='passion'>
<BigText text='Magicc' />
<Text>You can do `magicc`, you can build anything that you desire. 🪄</Text>
</Gradient>
<Text>
Version: <Text color='green'>{info('version')}</Text> |
Author: <Text color='blue'>{info('author')}</Text><Newline/>
<Text>
Need Help? <Text color="cyan">magicc --help</Text>
</Text><Newline/>
==================================================
</Text>
</>
);

return <Logo flags={flags} />;
}
Comment on lines 1 to 7

⚠️ Potential issue

Prop naming inconsistency with Logo component

The simplification looks good, but there's a naming issue. The Logo component receives flags as a prop, but in its implementation (source/utils/logo.js), the parameter is named children and used as props spread ({...children}). This is confusing since children typically refers to React children elements, not props.

Consider updating the Logo component to use a more appropriate parameter name:

-export default function Logo(children) {
+export default function Logo(props) {
 	return (
 		<>
 			...
-			<ModelSelection {...children} />
+			<ModelSelection {...props} />
 		</>
 	);
}
📝 Committable suggestion


Suggested change
import React from 'react';
import {Text, Newline} from 'ink';
import BigText from 'ink-big-text';
import Gradient from 'ink-gradient';
import isGit from 'is-git-repository';
import isCommitterSet from './utils/errors.js';
import info from './utils/info.js';
import askForCommitMessage from './utils/commit.js';
import { getOpenAIKey, setOpenAIKey, deleteOPenAIKey } from './utils/api.js';
import Logo from './utils/logo.js';
export default function App({flags}) {
if(flags.setopenai) {
setOpenAIKey(flags.setopenai);
}
if(flags.delopenai) {
deleteOPenAIKey();
}
if (!getOpenAIKey()) {
console.log('Please provide an OpenAI API key.');
console.log('You can get one from https://platform.openai.com/account/api-keys')
console.log('Run `magicc --setopenai=<api-key>` to save your API key and try again.');
} else {
console.log('You have an OpenAI API key, you can now generate a commit message.');
const gitCheck = isGit();
const committerCheck = isCommitterSet();
if (gitCheck && committerCheck) {
askForCommitMessage();
} else {
console.log('This is not a git repository.');
}
}
return (
<>
<Gradient name='passion'>
<BigText text='Magicc' />
<Text>You can do `magicc`, you can build anything that you desire. 🪄</Text>
</Gradient>
<Text>
Version: <Text color='green'>{info('version')}</Text> |
Author: <Text color='blue'>{info('author')}</Text><Newline/>
<Text>
Need Help? <Text color="cyan">magicc --help</Text>
</Text><Newline/>
==================================================
</Text>
</>
);
return <Logo flags={flags} />;
}
export default function Logo(props) {
return (
<>
...
<ModelSelection {...props} />
</>
);
}
🤖 Prompt for AI Agents
In source/app.js lines 1 to 7, the App component passes a prop named 'flags' to
the Logo component, but Logo's implementation uses 'children' as the parameter
name and spreads it as props, causing confusion. To fix this, update the Logo
component in source/utils/logo.js to rename the parameter from 'children' to
'props' or 'flags' as appropriate, and adjust the usage inside Logo to match
this new parameter name instead of 'children'.

10 changes: 6 additions & 4 deletions source/cli.js
@@ -1,12 +1,14 @@
#!/usr/bin/env node
import React from 'react';
import {render} from 'ink';
import meow from 'meow';
import App from './app.js';
import Logo from './utils/logo.js';
import { render } from 'ink';

render(<Logo />, {patchConsole: false});

const cli = meow(
`
Usage
`Usage
$ magicc

Options
@@ -33,4 +35,4 @@ const cli = meow(
},
);

render(<App flags={cli.flags}/>);
render(<App flags={cli.flags} />, {patchConsole: false});
36 changes: 36 additions & 0 deletions source/models/ollama.js
@@ -0,0 +1,36 @@
import Ollama from 'ollama'; // Import the Ollama model
import config from '../utils/config.json';

async function ollamaModel(model, flags, diffContent) {
try {
// Use the prompt from the config file emoji and send to Ollama
const categoryResponse = await Ollama.chat({
messages: [
{role: 'system', content: config.commitConfig.emoji},
{role: 'user', content: diffContent},
],
model,
});
// Use the prompt from the config file message and send to Ollama

const messageResponse = await Ollama.chat({
messages: [
{role: 'system', content: config.commitConfig.message},
{role: 'user', content: diffContent},
],
model,
});
console.log('categoryResponse', categoryResponse);
console.log('messageResponse', messageResponse);
Comment on lines +23 to +24

🛠️ Refactor suggestion

Remove debug console.log statements

Debug logging should be removed or replaced with proper logging mechanism if needed.

-		console.log('categoryResponse', categoryResponse);
-		console.log('messageResponse', messageResponse);
📝 Committable suggestion


Suggested change
console.log('categoryResponse', categoryResponse);
console.log('messageResponse', messageResponse);
🤖 Prompt for AI Agents
In source/models/ollama.js at lines 23 to 24, remove the debug console.log
statements for categoryResponse and messageResponse. If logging is necessary,
replace these with a proper logging mechanism instead of using console.log.

return {
category: categoryResponse?.message?.content,
message: messageResponse?.message?.content,
};
} catch (error) {
throw new Error(
'Failed to connect to local Ollama instance. To start Ollama, first download it at https://ollama.ai.',
);
}
Comment on lines +29 to +33

⚠️ Potential issue

Preserve original error for better debugging

The generic error message loses valuable debugging information from the original error.

 	} catch (error) {
+		console.error('Ollama API error:', error);
 		throw new Error(
-			'Failed to connect to local Ollama instance. To start Ollama, first download it at https://ollama.ai.',
+			`Failed to connect to local Ollama instance: ${error.message}. To start Ollama, first download it at https://ollama.ai.`,
 		);
 	}
📝 Committable suggestion


Suggested change
} catch (error) {
throw new Error(
'Failed to connect to local Ollama instance. To start Ollama, first download it at https://ollama.ai.',
);
}
} catch (error) {
console.error('Ollama API error:', error);
throw new Error(
`Failed to connect to local Ollama instance: ${error.message}. To start Ollama, first download it at https://ollama.ai.`,
);
}
🤖 Prompt for AI Agents
In source/models/ollama.js around lines 29 to 33, the catch block throws a new
generic error message, which discards the original error details. Modify the
code to include the original error information by either appending the original
error message to the new error or by rethrowing the original error with
additional context. This preserves the original error for better debugging while
still providing the custom message.
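Another way to keep the original failure details is the `cause` option on `Error` (available since Node 16.9); the helper name below is hypothetical, shown only to make the pattern testable:

```javascript
// Wrap a low-level failure in a user-friendly message while
// preserving the original error on the `cause` property.
function wrapOllamaError(error) {
	return new Error(
		`Failed to connect to local Ollama instance: ${error.message}. ` +
			'To start Ollama, first download it at https://ollama.ai.',
		{cause: error},
	);
}

// In the catch block: throw wrapOllamaError(error);
```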

}

export default ollamaModel;
54 changes: 54 additions & 0 deletions source/models/openai.js
@@ -0,0 +1,54 @@
import OpenAI from 'openai';
import config from '../utils/config.json';
import dotenv from 'dotenv';

import {getOpenAIKey, setOpenAIKey, deleteOPenAIKey} from '../utils/api.js';

dotenv.config();

async function openAiModel(model, flags, diffContent) {
if (flags.setopenai) {
setOpenAIKey(flags.setopenai);
}
if (flags.delopenai) {
deleteOPenAIKey();
}
Comment on lines +10 to +15

🛠️ Refactor suggestion

API key management should be handled separately

Setting and deleting API keys within the model function violates single responsibility principle. This should be handled at a higher level (e.g., in the CLI or UI layer).

Consider moving API key management to the appropriate layer and keeping this function focused solely on generating commit messages.

🤖 Prompt for AI Agents
In source/models/openai.js around lines 10 to 15, the code for setting and
deleting API keys is mixed within the model function, violating the single
responsibility principle. Refactor by removing the API key management logic from
this function and move it to a higher-level layer such as the CLI or UI layer.
Keep this function focused only on generating commit messages without handling
API key operations.
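A sketch of what the extracted key handling could look like as a standalone helper; the function name and the injected `api` parameter are illustrative, not part of the PR. In the CLI layer, `api` would be the real `{setOpenAIKey, deleteOPenAIKey}` from `utils/api.js`:

```javascript
// Handle the key-management flags once, before any model code runs.
// `api` is injected so the helper stays testable in isolation.
function handleApiKeyFlags(flags, api) {
	if (flags.setopenai) {
		api.setOpenAIKey(flags.setopenai);
		return 'set';
	}
	if (flags.delopenai) {
		api.deleteOPenAIKey();
		return 'deleted';
	}
	return 'noop';
}
```

Calling this once from `cli.js` would let `openAiModel` assume the key state is already settled.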

if (!getOpenAIKey()) {
return {
message:
'Please provide an OpenAI API key.\n' +
'You can get one from https://platform.openai.com/account/api-keys\n' +
'Run `magicc --setopenai=<api-key>` to save your API key and try again.',
};
} else {
console.log(
'You have an OpenAI API key, you can now generate a commit message.',
);

const apiKey = await getOpenAIKey();
const openai = new OpenAI({apiKey: apiKey});

const category = await openai.chat.completions.create({
messages: [
{role: 'system', content: config.commitConfig.emoji},
{role: 'user', content: diffContent},
],
model,
});
// use the prompt from the config file message and send to openai
const message = await openai.chat.completions.create({
messages: [
{role: 'system', content: config.commitConfig.message},
{role: 'user', content: diffContent},
],
model,
});

return {
category: category.choices[0].message.content,
message: message.choices[0].message.content,
};
Comment on lines +31 to +50

⚠️ Potential issue

Add error handling for API calls

The OpenAI API calls lack error handling, which could cause unhandled exceptions.

+		try {
 		const category = await openai.chat.completions.create({
 			messages: [
 				{role: 'system', content: config.commitConfig.emoji},
 				{role: 'user', content: diffContent},
 			],
 			model,
 		});
 		// use the prompt from the config file message and send to openai
 		const message = await openai.chat.completions.create({
 			messages: [
 				{role: 'system', content: config.commitConfig.message},
 				{role: 'user', content: diffContent},
 			],
 			model,
 		});
 
 		return {
 			category: category.choices[0].message.content,
 			message: message.choices[0].message.content,
 		};
+		} catch (error) {
+			console.error('OpenAI API error:', error);
+			throw new Error(`Failed to generate commit message: ${error.message}`);
+		}
📝 Committable suggestion


Suggested change
const category = await openai.chat.completions.create({
messages: [
{role: 'system', content: config.commitConfig.emoji},
{role: 'user', content: diffContent},
],
model,
});
// use the prompt from the config file message and send to openai
const message = await openai.chat.completions.create({
messages: [
{role: 'system', content: config.commitConfig.message},
{role: 'user', content: diffContent},
],
model,
});
return {
category: category.choices[0].message.content,
message: message.choices[0].message.content,
};
try {
const category = await openai.chat.completions.create({
messages: [
{role: 'system', content: config.commitConfig.emoji},
{role: 'user', content: diffContent},
],
model,
});
// use the prompt from the config file message and send to openai
const message = await openai.chat.completions.create({
messages: [
{role: 'system', content: config.commitConfig.message},
{role: 'user', content: diffContent},
],
model,
});
return {
category: category.choices[0].message.content,
message: message.choices[0].message.content,
};
} catch (error) {
console.error('OpenAI API error:', error);
throw new Error(`Failed to generate commit message: ${error.message}`);
}
🤖 Prompt for AI Agents
In source/models/openai.js between lines 31 and 50, the OpenAI API calls do not
have error handling, risking unhandled exceptions. Wrap the calls to
openai.chat.completions.create in try-catch blocks or a single try-catch to
catch any errors thrown during the API requests. In the catch block, handle or
log the error appropriately and ensure the function returns a meaningful
fallback or propagates the error safely.

}
}

export default openAiModel;
72 changes: 62 additions & 10 deletions source/utils/commit.js
@@ -1,12 +1,13 @@
import generatePrompt from './openai.js';
import generateCommitMessage from './generateCommitMessage.js';
import {execa} from 'execa';
import readline from 'readline';
import React from 'react';
import {Box, render, Text, useApp} from 'ink';
import SelectInput from 'ink-select-input';
import Logo from './logo.js';

async function askForCommitMessage() {
const prompt = await generatePrompt();
async function askForCommitMessage(flags, model) {
const prompt = await generateCommitMessage(flags, model);

const rl = readline.createInterface({
input: process.stdin,
@@ -24,10 +25,10 @@ async function askForCommitMessage() {
.catch(error => {
console.error('Failed to commit changes:', error);
});
}
else {
} else {
console.log('Changes not committed.');
}
rl.close();
exit();
};

@@ -43,13 +44,14 @@
];

return (
<Box flexDirection="column">
<Text>{`Suggested commit message: ${prompt}\nDo you want to proceed?`}</Text>
<SelectInput items={items} onSelect={handleSelect} />
</Box>
<Logo>
<Box flexDirection="column">
<Text>{`Suggested commit message: ${prompt}\nDo you want to proceed?`}</Text>
<SelectInput items={items} onSelect={handleSelect} />
</Box>
</Logo>
);
};

if (prompt) {
render(<SelectSuggestedCommit />);
} else {
@@ -58,4 +60,54 @@ async function askForCommitMessage() {
}
}

export async function initGit() {
try {
await execa('git', ['restore', '--staged', '.']);
} catch (error) {
console.error(error);
}
Comment on lines +65 to +68

🛠️ Refactor suggestion

Improve error handling in git functions

Errors are caught and logged but not re-thrown, making it difficult for callers to handle failures properly.

 	} catch (error) {
-		console.error(error);
+		console.error('Git operation failed:', error);
+		throw error;
 	}

Apply this pattern to all three functions: initGit, gitStatus, and gitDiff.

Also applies to: 97-99, 107-109

🤖 Prompt for AI Agents
In source/utils/commit.js at lines 65-68, 97-99, and 107-109, the catch blocks
log errors but do not re-throw them, preventing callers from handling failures
properly. Modify each catch block in the initGit, gitStatus, and gitDiff
functions to re-throw the caught error after logging it, ensuring that error
propagation is maintained for proper handling upstream.

}

// git status to see if there are any changes
// if there's any changes add the first file in the list of changes
let firstFilePath = '';

🛠️ Refactor suggestion

Avoid module-level mutable state

Using a module-level variable firstFilePath can lead to race conditions and makes the code harder to test and reason about.

Consider returning the first file path from gitStatus() and passing it through the functions that need it, or encapsulating it in a class/closure.

🤖 Prompt for AI Agents
In source/utils/commit.js at line 73, the module-level mutable variable
`firstFilePath` should be removed to avoid race conditions and improve
testability. Refactor the code so that `gitStatus()` returns the first file path
directly, and pass this value explicitly to any functions that require it, or
encapsulate the state within a class or closure to maintain proper scope and
avoid shared mutable state.


export async function gitStatus() {
try {
const {stdout: status} = await execa('git', ['status', '--porcelain']);
if (status) {
// get the first file path in the list of changes
const lines = status.split('\n');
const filePaths = lines
.map(line => line.split(' ').slice(2).join(' ').trim())
.filter(filePath => filePath !== '')
.concat(
lines
.filter(line => line.startsWith('??'))
.map(line => line.split(' ').slice(1).join(' ').trim()),
);
Comment on lines +81 to +88

💡 Verification agent

🧩 Analysis chain

Complex file path parsing logic

The file path parsing logic is complex and might not handle all git status formats correctly (e.g., renamed files, files with spaces).

Consider using a more robust git parsing library or simplifying the logic. Also, verify that this handles all git status codes correctly:


🏁 Script executed:

#!/bin/bash
# Test various git status formats
echo "Testing git status parsing with different file states..."

# Create test files with various states
touch "test file with spaces.txt"
git add "test file with spaces.txt"
git status --porcelain | head -5

# Show git status format documentation
echo -e "\nGit status format reference:"
git status --porcelain | head -5
echo "First two characters indicate: XY where X=index, Y=worktree"

Length of output: 558


Refine git status parsing to handle spaces and renames

The current logic in source/utils/commit.js (lines 81–88) splits each status line on spaces and then slices/join—this:

  • Retains surrounding quotes for file names with spaces (e.g. "test file with spaces.txt")
  • Ignores rename (R), copy (C) or other multi-path entries
  • Only special-cases ?? entries, but not staged vs. unstaged changes

To fix this, replace the ad-hoc splitting with one of the following:

• Use git status --porcelain -z and split on \0 so file names (including spaces/newlines) are handled correctly.
– Each entry is <XY> <path>\0 or <XY> <src>\0<dst>\0 for renames/copies.
– You can then parse XY to include or exclude renamed files and pull the correct path.

• Or leverage a well-tested library (e.g. simple-git, nodegit) to parse porcelain output instead of manual string operations.

Making this change will ensure all status codes (A, M, D, R, C, ??, etc.) and file names with arbitrary characters are parsed reliably.

🤖 Prompt for AI Agents
In source/utils/commit.js around lines 81 to 88, the current git status parsing
uses space splitting which fails to handle file names with spaces, renames,
copies, and other status codes properly. Replace this logic by using 'git status
--porcelain -z' to get null-separated entries, then parse each entry by reading
the XY status code and the associated paths accordingly, handling single-path
and rename/copy two-path entries. Alternatively, integrate a library like
simple-git or nodegit to parse the porcelain output reliably, ensuring all file
names and status codes are correctly processed.
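For reference, parsing the NUL-separated output could be isolated in a pure function along these lines. This is a sketch assuming `git status --porcelain -z` output, where each entry is `XY <path>` and a rename/copy entry is followed by a second NUL-terminated field holding the original path:

```javascript
// Parse `git status --porcelain -z` output into an array of file paths.
// Entries are NUL-separated; paths with spaces need no quoting in -z mode.
function parsePorcelainZ(output) {
	const fields = output.split('\0').filter(f => f !== '');
	const paths = [];
	for (let i = 0; i < fields.length; i++) {
		const entry = fields[i];
		const xy = entry.slice(0, 2); // two-character status code
		paths.push(entry.slice(3)); // path starts after "XY "
		if (xy[0] === 'R' || xy[0] === 'C') {
			i++; // skip the "original path" field of a rename/copy
		}
	}
	return paths;
}
```

`gitStatus()` could then feed `execa('git', ['status', '--porcelain', '-z'])` output through this instead of splitting on spaces.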

// git add the first file in the list of changes
firstFilePath = filePaths[0];
await execa('git', ['add', firstFilePath]);
console.log(`${firstFilePath} has been added to the staging area.`);
} else {
console.log('No changes to commit.');
return false;
}
} catch (error) {
console.error(error);
}
}

// get the diff of the staged changes
export async function gitDiff() {
try {
const {stdout: gitDiff} = await execa('git', ['diff', '--staged']);
return gitDiff;
} catch (error) {
console.error(error);
}
}


export default askForCommitMessage;
24 changes: 20 additions & 4 deletions source/utils/config.json
@@ -1,6 +1,22 @@
{
"emoji": "YYou are the author of the commit message. Your task is to select the appropriate category for the git diff based on the changes. Use the following categories (emoji category name => usage): 📦 new => for new files or new features; ✨ tweak => for enhancements or updates to the codebase; ☕ chore => for updates or changes outside the project codebase, including README.md; 🐞 fix => for fixing code bugs and errors. Please reply with the category name only.",
"message": "You are the author of the changes, you are going to provide a professional git commit message that is no longer than 25 characters in imperative present tense. Stricly no emojis are allowed and no conventional commit message as prefix is already provided. For example, instead of 'fix: fix a bug' make it 'fix a bug'. The message should be in lower case and no period at the end.",
"default_model": "gpt-4o-mini",
"maxDiffSize": 4000
"commitConfig": {
"emoji": "You are the author of the commit message. Your task is to select the appropriate category for the git diff based on the changes. Use the following categories (emoji category name => usage): 📦 new => for new files or new features; ✨ tweak => for enhancements or updates to the codebase; ☕ chore => for updates or changes outside the project codebase, including README.md; 🐞 fix => for fixing code bugs and errors. Please reply with the category name only.",
"message": "You are the author of the changes, and you will provide a professional git commit message that is no longer than 25 characters in imperative present tense. Strictly no emojis are allowed, and no conventional commit message prefix is provided. For example, instead of 'fix: fix a bug,' make it 'fix a bug.' The message should be in lowercase and should not have a period at the end."
},

"models": [
{
"title": "GPT-4o Mini",
"provider": "openai",
"model": "gpt-4o-mini",
"maxDiffSize": 4000,
"apiKey": ""
},
{
"title": "Llama 3.1 8B",
"provider": "ollama",
"model": "llama3.1:8b",
"maxDiffSize": 4000
}
]
Comment on lines +9 to +21

💡 Verification agent

🧩 Analysis chain

Validate model configuration completeness.

The OpenAI model has an empty apiKey field which may cause runtime issues. Consider documenting the expected configuration or adding validation.


🏁 Script executed:

#!/bin/bash
# Check if there are any references to apiKey validation in the codebase
rg -A 3 -B 3 "apiKey" --type js

Length of output: 954


🏁 Script executed:

#!/bin/bash
# Inspect OpenAI model integration
sed -n '1,200p' source/models/openai.js
echo "-----"
# Inspect API key utility functions
sed -n '1,200p' source/utils/api.js

Length of output: 2663


Clarify and resolve unused apiKey in JSON config

Your code already handles OpenAI key presence and validity via the --setopenai flag, getOpenAIKey(), and isValidOpenAIKey(). The apiKey field in source/utils/config.json (lines 9–21) is never read at runtime and may confuse users. To fix this:

  • Remove the "apiKey": "" entry from the OpenAI model in source/utils/config.json
  • Or, if you wish to support JSON-based keys, update openAiModel (in source/models/openai.js) to fall back to config.models[…].apiKey before calling getOpenAIKey()
🤖 Prompt for AI Agents
In source/utils/config.json lines 9 to 21, the "apiKey" field for the OpenAI
model is unused and may confuse users. To fix this, remove the "apiKey": ""
entry from the OpenAI model configuration. Alternatively, if you want to support
keys from the JSON config, update the openAiModel code in
source/models/openai.js to check for and use config.models[…].apiKey as a
fallback before calling getOpenAIKey().
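If JSON-based keys are kept, the fallback could be sketched as a pure function; the names here are illustrative, with `getStoredKey` standing in for `getOpenAIKey`:

```javascript
// Resolve an API key for a model entry: prefer the separately stored
// key, fall back to the (optional) key embedded in config.json.
function resolveApiKey(modelEntry, getStoredKey) {
	const stored = getStoredKey();
	if (stored) {
		return stored;
	}
	return modelEntry.apiKey || null;
}
```

A `null` result would signal the existing "please provide an OpenAI API key" path.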

}
61 changes: 61 additions & 0 deletions source/utils/generateCommitMessage.js
@@ -0,0 +1,61 @@
import openAiModel from '../models/openai.js';
import ollamaModel from '../models/ollama.js';
import config from './config.json';
import {gitDiff, gitStatus, initGit} from './commit.js';

// remove any staged changes in git

async function generateCommitMessage(flags, model) {
const maxDiffSize = config.maxDiffSize;

await initGit();
const status = await gitStatus();

const gitDiffContent = await gitDiff();
const {category, message} = await getModelResponse(
model,
flags,
gitDiffContent,
);

if (gitDiffContent.length > maxDiffSize) {
console.log('Diff content is too large. Skipping OpenAI request.');
return `✨ tweak (${firstFilePath}): update ${firstFilePath}`;
}
Comment on lines +8 to +24

⚠️ Potential issue

Critical: Diff size check happens after API calls

The diff size validation occurs after making expensive API calls, which wastes resources and time. Also, firstFilePath is used without being imported or declared.

 async function generateCommitMessage(flags, model) {
-	const maxDiffSize = config.maxDiffSize;
+	// Get model-specific configuration
+	const modelConfig = config.models.find(m => m.name === model);
+	const maxDiffSize = modelConfig ? modelConfig.maxDiffSize : config.maxDiffSize;
 
 	await initGit();
 	const status = await gitStatus();
+	
+	if (!status) {
+		return false;
+	}
 
 	const gitDiffContent = await gitDiff();
+	
+	// Check diff size before making API calls
+	if (gitDiffContent.length > maxDiffSize) {
+		console.log(`Diff content is too large for ${model}. Skipping AI request.`);
+		const firstFilePath = await getFirstFilePath(); // Need to import or access this properly
+		return `✨ tweak (${firstFilePath}): update ${firstFilePath}`;
+	}
+	
 	const {category, message} = await getModelResponse(
 		model,
 		flags,
 		gitDiffContent,
 	);
 
-	if (gitDiffContent.length > maxDiffSize) {
-		console.log('Diff content is too large. Skipping OpenAI request.');
-		return `✨ tweak (${firstFilePath}): update ${firstFilePath}`;
-	}
-
-	if (status !== false) {
+	const firstFilePath = await getFirstFilePath(); // Need to import or access this properly
+	if (category && message) {
 		return `${category} (${firstFilePath}): ${message}`;
 	} else {
 		return false;
 	}

Also need to import or properly access firstFilePath from the commit module.

Committable suggestion skipped: line range outside the PR's diff.



if (status !== false) {
return `${category} (${firstFilePath}): ${message}`;
} else {
return false;
}
}

async function getModelResponse(model, flags, gitDiffContent) {
let response;

try {
switch (model) {
case 'gpt-4o-mini':
response = await openAiModel(model, flags, gitDiffContent);
break;
case 'llama3.1:8b':
response = await ollamaModel(model, flags, gitDiffContent);
break;
default:
throw new Error('Unsupported model selected');
}
console.log('response', response);

if (response && response.category && response.message) {
// Destructure and return the required fields
const {category, message} = response;
return {category, message};
} else {
throw new Error(response.message);
}
Comment on lines +49 to +55

⚠️ Potential issue

Use optional chaining and fix error handling

The static analysis correctly suggests using optional chaining. The error-throwing logic is also flawed: `throw new Error(response.message)` will itself fail when response is null or undefined, masking the real problem.

-		if (response && response.category && response.message) {
+		if (response?.category && response?.message) {
 			// Destructure and return the required fields
 			const {category, message} = response;
 			return {category, message};
 		} else {
-			throw new Error(response.message);
+			throw new Error('Invalid response from AI model: missing category or message');
 		}
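Optional chaining makes the validity check safe even when the model returns null or undefined. A minimal standalone illustration (`parseResponse` is a hypothetical helper name, not part of the project):

```javascript
// Hypothetical helper: extract fields safely with optional chaining.
// `response?.category` evaluates to undefined (not a TypeError) when
// response is null or undefined, so the guard never crashes.
function parseResponse(response) {
	if (response?.category && response?.message) {
		return {category: response.category, message: response.message};
	}
	// A fixed message avoids reading .message off a possibly-null response.
	throw new Error('Invalid response from AI model: missing category or message');
}
```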
🧰 Tools
🪛 Biome (1.9.4)

[error] 49-49: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)


} catch (error) {
console.log(error.message);
}
Comment on lines +56 to +58

⚠️ Potential issue

Error is caught but not re-thrown

Catching and logging errors without re-throwing them causes silent failures. The caller won't know that an error occurred.

 	} catch (error) {
-		console.log(error.message);
+		console.error('Model response error:', error.message);
+		throw error;
 	}
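The log-and-rethrow pattern keeps diagnostics local while still surfacing the failure to callers. A small standalone sketch (`getResponse` and `fetchFn` are illustrative names):

```javascript
// Sketch: log for local diagnostics, then re-throw so the caller can react.
async function getResponse(fetchFn) {
	try {
		return await fetchFn();
	} catch (error) {
		console.error('Model response error:', error.message);
		// Without this re-throw the function would silently resolve to
		// undefined and the caller would never learn an error occurred.
		throw error;
	}
}
```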

}

export default generateCommitMessage;