
Conversation


@itzlambda itzlambda commented Jan 3, 2026

Replaces monolithic rullm-core with dedicated provider crates:

  • rullm-chat-completion - OpenAI-compatible chat completion API (works with OpenAI, Gemini, and other compatible endpoints)
  • rullm-anthropic - Native Anthropic Messages API implementation

Adds Gemini support via OpenAI-compatible endpoint.

Summary by CodeRabbit

  • New Features

    • Added comprehensive Anthropic Claude API integration with streaming support and token counting.
    • Added unified Chat Completions client supporting multiple providers (OpenAI, Groq, OpenRouter, Gemini).
  • Documentation

    • Updated chat examples to use Claude (anthropic) instead of Gemini.
    • Added provider comparison guides for Chat Completions API.
  • Chores

    • Reorganized internal crate structure for better provider isolation.
    • Renamed Google provider to Gemini in CLI for consistency.
    • Updated authentication handling across provider implementations.



coderabbitai bot commented Jan 3, 2026

📝 Walkthrough

This PR replaces the monolithic rullm-core crate with two new provider crates (rullm-anthropic, rullm-chat-completion), updates the CLI to use them (renames Google → Gemini), introduces new typed clients/config/error/streaming layers, and removes the old core provider implementations and examples. Public APIs change accordingly.

Changes

Cohort / File(s) Summary
Workspace & Manifests
Cargo.toml, crates/*/Cargo.toml
Workspace members updated to add crates/rullm-anthropic and crates/rullm-chat-completion; serde feature set changed; many crate manifests switched to workspace = true.
New Anthropic Crate
crates/rullm-anthropic/Cargo.toml, crates/rullm-anthropic/src/*
New crate implementing Anthropic client: Client, MessagesClient, config/builder (ClientBuilder, ClientConfig, RequestOptions), HttpTransport, error types (AnthropicError, ErrorObject), messages types, SSE stream parser, accumulator, and tests.
New Chat Completion Crate
crates/rullm-chat-completion/Cargo.toml, crates/rullm-chat-completion/src/*, crates/rullm-chat-completion/spec/*
New OpenAI-compatible chat completions crate: ChatCompletionsClient, ChatRequestBuilder, ClientConfig, ApiResponse, error types (ClientError, ApiError, StreamError), full types module, SSE streaming + accumulator, and provider comparison docs.
CLI Integration & Surface
crates/rullm-cli/src/*
CLI refactored to use new crates; unified error type CliError introduced; many function and enum signatures changed to accept/return CliError; Provider::Google renamed to Provider::Gemini; provider construction and streaming paths updated.
Removed monolith: rullm-core
crates/rullm-core/Cargo.toml, crates/rullm-core/src/**, crates/rullm-core/examples/**
Entire rullm-core crate removed: provider implementations (OpenAI, Anthropic, Google), compat types, config, LlmError, SSE utils, and all example binaries/docs deleted.
Docs & Readme
README.md, spec/*, crates/rullm-chat-completion/spec/*
README examples adjusted (gemini → claude/claude examples); added provider comparison and difference docs for chat completions; removed older spec file(s).
Aliases / CLI UX
crates/rullm-cli/src/aliases.rs, crates/rullm-cli/src/args.rs
Alias resolution and CLI model examples adjusted; alias error types and signatures changed from LlmError to CliError; model completer/provider lists updated.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant CLI as CLI
    participant Manager as CliClient
    participant Chat as ChatCompletionsClient
    participant Anth as AnthropicClient
    participant Transport as HTTPTransport
    participant API as Provider API

    CLI->>Manager: create_client(provider, model, creds)
    alt OpenAI-compatible / Gemini / Groq / OpenRouter
        Manager->>Chat: ClientConfig -> new()
        Chat->>Transport: init(config)
        Transport-->>Chat: http client
        Manager-->>CLI: CliClient (Chat-backed)
    else Anthropic
        Manager->>Anth: ClientBuilder/from_env -> new()
        Anth->>Transport: init(config)
        Transport-->>Anth: http client
        Manager-->>CLI: CliClient (Anthropic-backed)
    end

    CLI->>Manager: chat(request)
    alt non-streaming
        Manager->>Chat/Anth: create(request)
        Chat/Anth->>Transport: post_json(path, body, opts)
        Transport->>API: HTTP POST
        API-->>Transport: JSON response
        Transport-->>Chat/Anth: parsed response + meta
        Chat/Anth-->>Manager: ApiResponse
        Manager-->>CLI: result
    else streaming
        Manager->>Anth: stream(MessagesRequest)
        Anth->>Transport: post_stream(path, body, opts)
        Transport->>API: HTTP POST (SSE)
        API-->>Transport: bytes stream (SSE)
        Transport-->>Anth: bytes stream
        Anth->>Anth: parse_sse_stream -> StreamEvent
        Anth->>Anth: MessageAccumulator -> final Message
        Anth-->>Manager: MessageStream
        Manager-->>CLI: stream of deltas / final result
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 I hopped through crates both old and new,

Split the core, gave each provider its due,
Anthropic sings and Chat streams bright,
Gemini renamed to fit the light,
CLI twitches whiskers — ready to chew!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the main change: refactoring to separate API implementations into dedicated provider crates instead of a monolithic rullm-core.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


@itzlambda itzlambda marked this pull request as ready for review January 3, 2026 18:50

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/rullm-cli/src/aliases.rs (1)

198-210: Update pattern inference to include newly added providers or document the intentional limitation.

The pattern inference at line 199 only checks OpenAI and Anthropic, but the Provider enum includes Groq, OpenRouter, and Gemini with defined aliases. Users of these new providers must use explicit provider:model format (e.g., groq:mixtral-8x7b) rather than pattern inference (e.g., groq-mixtral-8x7b), creating inconsistent UX.

Either extend the pattern inference to all providers or add a comment explaining why the feature is limited to only OpenAI and Anthropic.
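A minimal sketch of the extended inference (provider names come from the Provider enum mentioned above; the matching strategy shown here is an assumption for illustration, not the crate's actual logic):

```rust
// Hypothetical helper: infer (provider, model) from a "provider-model"
// pattern such as "groq-mixtral-8x7b", covering all providers instead
// of just OpenAI and Anthropic.
fn infer_provider(pattern: &str) -> Option<(&'static str, &str)> {
    const PROVIDERS: &[&str] = &["openai", "anthropic", "groq", "openrouter", "gemini"];
    PROVIDERS.iter().find_map(|p| {
        pattern
            .strip_prefix(p)
            .and_then(|rest| rest.strip_prefix('-'))
            .map(|model| (*p, model))
    })
}
```

Callers could fall back to requiring the explicit provider:model form when this returns None.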

🧹 Nitpick comments (18)
crates/rullm-anthropic/src/messages/mod.rs (1)

12-12: Consider explicit re-exports for better API clarity (optional).

The wildcard re-export pub use types::*; works but makes the public API surface less discoverable. Explicitly listing the exported types (similar to the stream re-exports on lines 8-11) would improve maintainability and make it clear to users what's available without checking types.rs.

Example: Explicit re-exports
-pub use types::*;
+pub use types::{
+    MessagesRequest, MessagesResponse, ContentBlock, Message,
+    // ... other types as needed
+};
crates/rullm-cli/src/client.rs (1)

19-36: Consider providing user feedback for invalid option values.

The option parsing silently ignores parse failures (lines 23-25, 28-30) and unknown keys (lines 32-34). This could lead to user confusion when they mistype option values or keys without receiving any feedback.

Example: Add validation feedback
 for (key, value) in &cli.option {
     match key.as_str() {
         "temperature" => {
-            if let Ok(temp) = value.parse::<f32>() {
-                config.temperature = Some(temp);
+            match value.parse::<f32>() {
+                Ok(temp) => config.temperature = Some(temp),
+                Err(_) => eprintln!("Warning: Invalid temperature value '{}', ignoring", value),
             }
         }
         "max_tokens" => {
-            if let Ok(max_tokens) = value.parse::<u32>() {
-                config.max_tokens = Some(max_tokens);
+            match value.parse::<u32>() {
+                Ok(max_tokens) => config.max_tokens = Some(max_tokens),
+                Err(_) => eprintln!("Warning: Invalid max_tokens value '{}', ignoring", value),
             }
         }
         _ => {
-            // Ignore unknown options for now
+            eprintln!("Warning: Unknown option '{}', ignoring", key);
         }
     }
 }
crates/rullm-cli/src/error.rs (1)

37-40: Consider renaming unknown to other for consistency.

The method unknown creates an Other variant. For better consistency between the variant name and the constructor method, consider renaming it to other.

🔎 Proposed refactor
-    /// Create a generic error
-    pub fn unknown(msg: impl Into<String>) -> Self {
+    /// Create a generic/unknown error
+    pub fn other(msg: impl Into<String>) -> Self {
         Self::Other(msg.into())
     }
crates/rullm-chat-completion/spec/chat-completion.md (2)

74-77: Specify language for fenced code block.

The fenced code block should have a language identifier for proper syntax highlighting and markdown compliance.

🔎 Proposed fix
 Optional project header:
-```
+```http
 OpenAI-Project: YOUR_PROJECT_ID
842-848: Specify language for fenced code block.

The fenced code block should have a language identifier. Consider using `text` or `markdown` for this type of content.

🔎 Proposed fix
-```
+```text
 1. Wait: min(2^attempt * 1000ms, 60000ms) + random_jitter
 2. Max attempts: 5
 3. On 429: Check x-ratelimit-reset-* headers
 4. On 5xx: Always retry
 5. Always log x-request-id for diagnostics
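The retry schedule quoted in that spec block is simple enough to sketch directly (jitter is passed in rather than generated, to keep the sketch deterministic; the function name is an assumption):

```rust
use std::time::Duration;

// Sketch of the documented schedule: wait = min(2^attempt * 1000ms, 60000ms) + jitter.
fn backoff_delay(attempt: u32, jitter_ms: u64) -> Duration {
    // Clamp the shift so large attempt counts cannot overflow the u64.
    let base_ms = 1000u64.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(base_ms.min(60_000) + jitter_ms)
}
```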

crates/rullm-chat-completion/spec/chat-completion-difference.md (5)

53-65: Add blank line before table for markdown compliance.

The table should be surrounded by blank lines per markdown best practices.

🔎 Proposed fix
 ### Parameter support matrix
+
 | Category | Supported | Notes |
 |----------|-----------|-------|

109-121: Add blank line before table for markdown compliance.

The table should be surrounded by blank lines per markdown best practices.

🔎 Proposed fix
 ### Parameter support matrix
+
 | Category | Supported | Notes |
 |----------|-----------|-------|

167-182: Add blank line before table for markdown compliance.

The table should be surrounded by blank lines per markdown best practices.

🔎 Proposed fix
 ### Parameter support matrix
+
 | Category | Supported | Notes |
 |----------|-----------|-------|

218-233: Add blank line before table for markdown compliance.

The table should be surrounded by blank lines per markdown best practices.

🔎 Proposed fix
 ### Parameter support matrix
+
 | Category | Supported | Notes |
 |----------|-----------|-------|

265-278: Add blank line before table for markdown compliance.

The table should be surrounded by blank lines per markdown best practices.

🔎 Proposed fix
 ### Parameter support matrix
+
 | Category | Supported | Notes |
 |----------|-----------|-------|
crates/rullm-chat-completion/src/streaming.rs (1)

59-67: Consider logging when UTF-8 replacement occurs.

String::from_utf8_lossy silently replaces invalid UTF-8 sequences with the replacement character. For debugging streaming issues, you may want to trace when this happens.
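One possible shape for surfacing the lossy decode (the logging hook here is hypothetical, standing in for whatever tracing setup the crate adopts):

```rust
// Sketch: decode a streamed chunk, reporting where invalid UTF-8 begins
// instead of replacing it silently.
fn decode_chunk(bytes: &[u8], log_invalid: &mut dyn FnMut(usize)) -> String {
    match std::str::from_utf8(bytes) {
        Ok(s) => s.to_owned(),
        Err(e) => {
            // Report the offset of the first invalid byte, then fall back
            // to lossy decoding so the stream keeps flowing.
            log_invalid(e.valid_up_to());
            String::from_utf8_lossy(bytes).into_owned()
        }
    }
}
```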

crates/rullm-anthropic/src/messages/stream.rs (3)

163-174: Consider VecDeque for more efficient queue operations.

Using Vec::remove(0) is O(n) as it shifts all remaining elements. For a queue pattern where items are added at the end and removed from the front, VecDeque provides O(1) operations for both.

🔎 Suggested improvement
+use std::collections::VecDeque;

 pub struct SseParser<S> {
     stream: S,
     buffer: String,
-    event_queue: Vec<String>,
+    event_queue: VecDeque<String>,
 }

 // In poll_next:
-if !self.event_queue.is_empty() {
-    return Poll::Ready(Some(Ok(self.event_queue.remove(0))));
-}
+if let Some(event) = self.event_queue.pop_front() {
+    return Poll::Ready(Some(Ok(event)));
+}

359-362: Avoid eprintln! in library code; consider structured logging or error propagation.

Using eprintln! directly in library code bypasses the application's logging configuration and may be unexpected for library consumers. Consider using tracing::warn! or propagating the error.

🔎 Suggested alternatives
 StreamEvent::Error { error } => {
-    // Log error but don't fail accumulation
-    eprintln!("Stream error: {}: {}", error.error_type, error.message);
+    // Option 1: Use tracing (if available)
+    tracing::warn!(error_type = %error.error_type, message = %error.message, "Stream error received");
+    
+    // Option 2: Store error for later retrieval
+    // self.errors.push(error.clone());
 }

411-414: Silent fallback to Value::Null may hide parsing issues.

If partial_json contains malformed JSON, silently returning Null could mask bugs. Consider logging or tracking when this fallback occurs.

crates/rullm-anthropic/src/config.rs (1)

241-247: Same silent failure pattern for beta header values.

If beta_value contains invalid characters, the header silently won't be added. Consider logging or returning an error for consistency with a potential fix in apply_to_headers.
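A sketch of the validation this suggests, using a plain ASCII check rather than any particular HTTP crate's types (the function and error shape are assumptions):

```rust
// HTTP header values must be visible ASCII plus space/tab; report anything
// else instead of dropping the header silently.
fn check_header_value(value: &str) -> Result<(), String> {
    match value
        .bytes()
        .find(|b| !(*b == b'\t' || (0x20u8..=0x7e).contains(b)))
    {
        None => Ok(()),
        Some(bad) => Err(format!("invalid byte 0x{bad:02x} in header value")),
    }
}
```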

crates/rullm-chat-completion/src/client.rs (2)

116-121: Silent failure when Bearer token contains invalid characters.

Same pattern as noted in the Anthropic config - if the token contains invalid characters, authentication silently won't be applied.


319-325: Using HttpError for validation ("Model is required") is semantically imprecise.

Consider using a more appropriate error variant (e.g., ValidationError) for builder validation failures. This helps callers distinguish between network issues and usage errors.
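A sketch of the suggested split (variant and builder names are assumptions, not the crate's actual API):

```rust
// Separate validation failures from transport failures so callers can
// match on the cause.
#[derive(Debug, PartialEq)]
enum ClientError {
    Http(String),
    Validation(String),
}

struct ChatRequestBuilder {
    model: Option<String>,
}

impl ChatRequestBuilder {
    // Builder validation now produces Validation, never Http.
    fn build(self) -> Result<String, ClientError> {
        self.model
            .ok_or_else(|| ClientError::Validation("model is required".into()))
    }
}
```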

crates/rullm-cli/src/cli_client.rs (1)

99-110: Consider making Anthropic beta features configurable or document their maintenance burden.

Anthropic's API documentation explicitly states that beta features "may be changed or removed and can be deprecated without notice." The hard-coded versioned beta headers (oauth-2025-04-20, claude-code-20250219, interleaved-thinking-2025-05-14, fine-grained-tool-streaming-2025-05-14) will cause the client to fail if Anthropic retires any of these versions. Either extract these to a configurable constant/environment variable to allow runtime updates without code changes, or add a comment documenting that these require maintenance as the Anthropic API evolves.
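One way to make the list overridable without a code change (the default values are the headers quoted above; everything else here, including how the override is supplied, is an assumption):

```rust
// Default beta features, overridable at runtime. In the CLI the override
// could come from e.g. env::var("RULLM_ANTHROPIC_BETAS").ok() -- that
// variable name is hypothetical.
const DEFAULT_BETAS: &str = "oauth-2025-04-20,interleaved-thinking-2025-05-14";

fn beta_features(override_csv: Option<&str>) -> Vec<String> {
    override_csv
        .unwrap_or(DEFAULT_BETAS)
        .split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}
```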

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9d24a31 and aad324a.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (70)
  • Cargo.toml
  • README.md
  • crates/rullm-anthropic/Cargo.toml
  • crates/rullm-anthropic/spec/message-api.md
  • crates/rullm-anthropic/src/client.rs
  • crates/rullm-anthropic/src/config.rs
  • crates/rullm-anthropic/src/error.rs
  • crates/rullm-anthropic/src/lib.rs
  • crates/rullm-anthropic/src/messages/mod.rs
  • crates/rullm-anthropic/src/messages/stream.rs
  • crates/rullm-anthropic/src/messages/types.rs
  • crates/rullm-anthropic/src/transport.rs
  • crates/rullm-chat-completion/Cargo.toml
  • crates/rullm-chat-completion/spec/chat-completion-comparison.md
  • crates/rullm-chat-completion/spec/chat-completion-difference.md
  • crates/rullm-chat-completion/spec/chat-completion.md
  • crates/rullm-chat-completion/src/client.rs
  • crates/rullm-chat-completion/src/config.rs
  • crates/rullm-chat-completion/src/error.rs
  • crates/rullm-chat-completion/src/lib.rs
  • crates/rullm-chat-completion/src/streaming.rs
  • crates/rullm-chat-completion/src/types.rs
  • crates/rullm-cli/Cargo.toml
  • crates/rullm-cli/src/aliases.rs
  • crates/rullm-cli/src/args.rs
  • crates/rullm-cli/src/auth.rs
  • crates/rullm-cli/src/cli_client.rs
  • crates/rullm-cli/src/client.rs
  • crates/rullm-cli/src/commands/auth.rs
  • crates/rullm-cli/src/commands/chat/mod.rs
  • crates/rullm-cli/src/commands/info.rs
  • crates/rullm-cli/src/commands/mod.rs
  • crates/rullm-cli/src/error.rs
  • crates/rullm-cli/src/main.rs
  • crates/rullm-cli/src/provider.rs
  • crates/rullm-core/Cargo.toml
  • crates/rullm-core/examples/README.md
  • crates/rullm-core/examples/anthropic_simple.rs
  • crates/rullm-core/examples/anthropic_stream.rs
  • crates/rullm-core/examples/basic_usage.rs
  • crates/rullm-core/examples/gemini_stream.rs
  • crates/rullm-core/examples/google_simple.rs
  • crates/rullm-core/examples/openai_basic.rs
  • crates/rullm-core/examples/openai_config.rs
  • crates/rullm-core/examples/openai_conversation.rs
  • crates/rullm-core/examples/openai_simple.rs
  • crates/rullm-core/examples/openai_stream.rs
  • crates/rullm-core/examples/test_all_providers.rs
  • crates/rullm-core/src/compat_types.rs
  • crates/rullm-core/src/config.rs
  • crates/rullm-core/src/error.rs
  • crates/rullm-core/src/lib.rs
  • crates/rullm-core/src/providers/anthropic/client.rs
  • crates/rullm-core/src/providers/anthropic/config.rs
  • crates/rullm-core/src/providers/anthropic/mod.rs
  • crates/rullm-core/src/providers/anthropic/types.rs
  • crates/rullm-core/src/providers/google/client.rs
  • crates/rullm-core/src/providers/google/config.rs
  • crates/rullm-core/src/providers/google/mod.rs
  • crates/rullm-core/src/providers/google/types.rs
  • crates/rullm-core/src/providers/mod.rs
  • crates/rullm-core/src/providers/openai/client.rs
  • crates/rullm-core/src/providers/openai/mod.rs
  • crates/rullm-core/src/providers/openai/types.rs
  • crates/rullm-core/src/providers/openai_compatible/config.rs
  • crates/rullm-core/src/providers/openai_compatible/mod.rs
  • crates/rullm-core/src/utils/mod.rs
  • crates/rullm-core/src/utils/sse.rs
  • crates/rullm-core/src/utils/test_helpers.rs
  • spec/chat-completion2.md
💤 Files with no reviewable changes (36)
  • crates/rullm-cli/src/commands/info.rs
  • crates/rullm-core/src/providers/google/config.rs
  • crates/rullm-core/src/providers/openai/mod.rs
  • crates/rullm-core/examples/google_simple.rs
  • crates/rullm-core/examples/openai_basic.rs
  • spec/chat-completion2.md
  • crates/rullm-core/src/providers/google/mod.rs
  • crates/rullm-core/src/utils/mod.rs
  • crates/rullm-core/examples/basic_usage.rs
  • crates/rullm-core/src/providers/mod.rs
  • crates/rullm-core/src/lib.rs
  • crates/rullm-core/src/error.rs
  • crates/rullm-core/src/utils/sse.rs
  • crates/rullm-core/src/compat_types.rs
  • crates/rullm-core/src/providers/anthropic/config.rs
  • crates/rullm-core/src/providers/google/client.rs
  • crates/rullm-core/src/providers/openai/types.rs
  • crates/rullm-core/src/providers/openai_compatible/config.rs
  • crates/rullm-core/src/providers/anthropic/client.rs
  • crates/rullm-core/src/providers/openai/client.rs
  • crates/rullm-core/Cargo.toml
  • crates/rullm-core/examples/anthropic_stream.rs
  • crates/rullm-core/examples/README.md
  • crates/rullm-core/src/providers/anthropic/mod.rs
  • crates/rullm-core/examples/openai_simple.rs
  • crates/rullm-core/examples/test_all_providers.rs
  • crates/rullm-core/src/config.rs
  • crates/rullm-core/examples/gemini_stream.rs
  • crates/rullm-core/examples/openai_stream.rs
  • crates/rullm-core/src/utils/test_helpers.rs
  • crates/rullm-core/examples/openai_conversation.rs
  • crates/rullm-core/examples/anthropic_simple.rs
  • crates/rullm-core/src/providers/google/types.rs
  • crates/rullm-core/src/providers/anthropic/types.rs
  • crates/rullm-core/src/providers/openai_compatible/mod.rs
  • crates/rullm-core/examples/openai_config.rs
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Define provider configuration using configuration traits and builders in `config.rs`

Applied to files:

  • crates/rullm-cli/src/auth.rs
  • crates/rullm-cli/src/commands/auth.rs
  • crates/rullm-cli/src/provider.rs
  • crates/rullm-chat-completion/src/config.rs
  • crates/rullm-cli/src/aliases.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Use `compat_types.rs` for OpenAI-compatible message and response types across all providers

Applied to files:

  • crates/rullm-cli/src/auth.rs
  • crates/rullm-anthropic/src/messages/mod.rs
  • crates/rullm-chat-completion/src/config.rs
  • crates/rullm-chat-completion/src/error.rs
  • crates/rullm-anthropic/src/messages/types.rs
  • crates/rullm-chat-completion/src/types.rs
  • crates/rullm-cli/src/cli_client.rs
  • crates/rullm-chat-completion/src/lib.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Organize CLI commands in the `commands/` directory with separate modules for each command (auth.rs, chat.rs, models.rs, alias.rs, templates.rs)

Applied to files:

  • crates/rullm-cli/src/args.rs
  • crates/rullm-cli/src/cli_client.rs
  • crates/rullm-chat-completion/src/lib.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Implement server-sent event (SSE) parsing in `utils/sse.rs` for streaming support

Applied to files:

  • crates/rullm-anthropic/src/messages/mod.rs
  • crates/rullm-chat-completion/src/streaming.rs
  • crates/rullm-anthropic/src/messages/stream.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Define comprehensive error handling using the `LlmError` enum in `error.rs`

Applied to files:

  • crates/rullm-cli/src/client.rs
  • crates/rullm-cli/src/commands/mod.rs
  • crates/rullm-cli/src/main.rs
  • crates/rullm-chat-completion/src/error.rs
  • crates/rullm-cli/src/aliases.rs
  • crates/rullm-cli/src/error.rs
  • crates/rullm-cli/src/cli_client.rs
  • crates/rullm-anthropic/src/error.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Use `cargo build --all` to build all crates in the workspace

Applied to files:

  • crates/rullm-chat-completion/Cargo.toml
  • Cargo.toml
🧬 Code graph analysis (14)
crates/rullm-cli/Cargo.toml (5)
crates/rullm-core/src/providers/openai_compatible/mod.rs (2)
  • chat_completion_stream (319-420)
  • chat_completion (281-316)
crates/rullm-core/examples/anthropic_stream.rs (1)
  • main (7-240)
crates/rullm-core/src/providers/openai/client.rs (1)
  • chat_completion_stream (80-148)
crates/rullm-core/examples/openai_stream.rs (1)
  • main (5-166)
crates/rullm-core/src/providers/mod.rs (1)
  • openai_compatible (5-5)
crates/rullm-cli/src/args.rs (1)
crates/rullm-core/examples/anthropic_stream.rs (1)
  • main (7-240)
crates/rullm-anthropic/src/messages/mod.rs (3)
crates/rullm-anthropic/src/client.rs (1)
  • stream (151-167)
crates/rullm-anthropic/src/messages/stream.rs (1)
  • parse_sse_stream (211-222)
crates/rullm-core/src/providers/anthropic/mod.rs (1)
  • types (28-28)
crates/rullm-cli/src/client.rs (1)
crates/rullm-cli/src/cli_client.rs (1)
  • gemini (159-175)
crates/rullm-cli/src/main.rs (2)
crates/rullm-cli/src/output.rs (1)
  • error (92-100)
crates/rullm-core/src/lib.rs (1)
  • error (153-153)
crates/rullm-chat-completion/src/config.rs (1)
crates/rullm-chat-completion/src/error.rs (4)
  • from (34-39)
  • from (43-45)
  • from (107-113)
  • from (117-119)
crates/rullm-chat-completion/spec/chat-completion-difference.md (3)
crates/rullm-core/src/providers/openai_compatible/mod.rs (2)
  • chat_completion (281-316)
  • chat_completion_stream (319-420)
crates/rullm-core/src/providers/openai/client.rs (1)
  • chat_completion_stream (80-148)
crates/rullm-core/src/providers/openai/types.rs (1)
  • ChatCompletionRequest (11-85)
crates/rullm-anthropic/src/messages/stream.rs (3)
crates/rullm-anthropic/src/error.rs (1)
  • serialization (46-54)
crates/rullm-core/src/utils/sse.rs (4)
  • S (53-106)
  • SseParser (15-19)
  • S (21-51)
  • sse_lines (8-13)
crates/rullm-core/src/providers/anthropic/client.rs (1)
  • messages_stream (77-133)
crates/rullm-chat-completion/src/error.rs (2)
crates/rullm-core/src/providers/openai/client.rs (4)
  • chat_completion_stream (80-148)
  • e (140-145)
  • data (132-146)
  • chat_completion (42-77)
crates/rullm-core/src/providers/openai/types.rs (2)
  • ChatCompletionChunk (361-375)
  • ChatCompletionRequest (11-85)
crates/rullm-cli/src/aliases.rs (2)
crates/rullm-cli/src/error.rs (1)
  • validation (33-35)
crates/rullm-cli/src/provider.rs (1)
  • from_alias (61-64)
crates/rullm-anthropic/src/config.rs (3)
crates/rullm-anthropic/src/client.rs (3)
  • new (58-63)
  • from_env (52-55)
  • base_url (73-75)
crates/rullm-anthropic/src/error.rs (1)
  • configuration (62-64)
crates/rullm-core/src/providers/anthropic/config.rs (3)
  • AnthropicConfig (9-16)
  • AnthropicConfig (18-37)
  • ProviderConfig (39-85)
crates/rullm-chat-completion/src/types.rs (1)
crates/rullm-core/src/providers/openai/types.rs (4)
  • ChatCompletionChunk (361-375)
  • ChatMessage (89-108)
  • ChatMessage (462-502)
  • Delta (394-404)
crates/rullm-cli/src/cli_client.rs (7)
crates/rullm-anthropic/src/config.rs (1)
  • api_key (171-174)
crates/rullm-chat-completion/src/client.rs (1)
  • model (209-212)
crates/rullm-anthropic/src/messages/types.rs (1)
  • content (789-792)
crates/rullm-anthropic/src/messages/stream.rs (1)
  • text_stream (472-490)
crates/rullm-core/src/providers/openai/client.rs (2)
  • chat_completion_stream (80-148)
  • chat_completion (42-77)
crates/rullm-core/src/providers/anthropic/client.rs (1)
  • AnthropicClient (19-278)
crates/rullm-core/src/providers/openai_compatible/mod.rs (1)
  • chat_completion (281-316)
crates/rullm-anthropic/src/error.rs (3)
crates/rullm-core/src/providers/anthropic/client.rs (3)
  • AnthropicClient (13-17)
  • e (172-174)
  • AnthropicClient (19-278)
crates/rullm-core/src/providers/anthropic/types.rs (1)
  • ErrorData (336-340)
crates/rullm-core/src/providers/anthropic/config.rs (1)
  • AnthropicConfig (9-16)
🪛 LanguageTool
crates/rullm-anthropic/spec/message-api.md

[style] ~52-~52: To form a complete sentence, be sure to include a subject.
Context: ...e is `user` or `assistant`. `content` can be: - A string (shorthand for a single ...

(MISSING_IT_THERE)


[style] ~82-~82: To form a complete sentence, be sure to include a subject.
Context: ..., name }`. `disable_parallel_tool_use` can be set to force a single tool use. ## ...

(MISSING_IT_THERE)

🪛 markdownlint-cli2 (0.18.1)
crates/rullm-chat-completion/spec/chat-completion-comparison.md

99-99: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


181-181: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


188-188: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

crates/rullm-chat-completion/spec/chat-completion.md

75-75: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


842-842: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

crates/rullm-chat-completion/spec/chat-completion-difference.md

54-54: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


110-110: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


168-168: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


219-219: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


266-266: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)

🔇 Additional comments (65)
crates/rullm-cli/src/commands/auth.rs (1)

225-225: LGTM!

The Gemini provider display mapping is correctly implemented and aligns with the Google → Gemini rename across the codebase.

crates/rullm-chat-completion/Cargo.toml (1)

1-28: LGTM!

The crate manifest properly integrates with the workspace and declares appropriate dependencies for an HTTP-based async chat completion client. Consistent use of workspace = true ensures unified version management across the workspace.

Cargo.toml (1)

2-2: LGTM!

The addition of rullm-chat-completion and rullm-anthropic workspace members properly structures the refactored provider crates and aligns with the PR's objective to decompose the monolithic core into specialized provider implementations.

crates/rullm-anthropic/Cargo.toml (1)

11-11: The rc feature is correctly included and actively used.

The codebase extensively uses Arc<str> throughout the messages, error, stream, and configuration types—many of which have serde derives (Serialize/Deserialize). Examples include ErrorObject, MessagesRequest, CustomTool, MessagesResponse, and many others. The serde rc feature is necessary to support serialization and deserialization of these Arc<T> types.
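For reference, the manifest line this implies would look roughly like the following (version number assumed):

```toml
[dependencies]
serde = { version = "1", features = ["derive", "rc"] }
```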

crates/rullm-cli/Cargo.toml (2)

11-13: LGTM! Clean refactoring to provider-specific crates.

The replacement of the monolithic rullm-core dependency with dedicated rullm-anthropic and rullm-chat-completion crates aligns well with the PR's architectural goals.


16-16: LGTM! Workspace dependencies added appropriately.

The addition of async-stream and thiserror as workspace dependencies supports the new provider-crate architecture.

Also applies to: 34-34

crates/rullm-chat-completion/src/config.rs (3)

1-95: LGTM! Well-structured configuration system.

The ClientConfig and builder pattern implementation is clean and follows Rust idioms. The use of Arc<str> for shared string data is efficient.


97-131: LGTM! Flexible authentication configuration.

The AuthConfig enum elegantly handles multiple authentication schemes (bearer tokens, custom headers, query params), supporting diverse provider requirements.
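The scheme described above could look roughly like this (field shapes are assumed; the real crate may differ):

```rust
// Sketch: one enum covering the three auth schemes; the variant decides
// whether credentials travel as a header or as a query parameter.
enum AuthConfig {
    Bearer(String),
    Header { name: String, value: String },
    QueryParam { name: String, value: String },
}

impl AuthConfig {
    /// Returns (headers, query_params) to attach to a request.
    fn apply(&self) -> (Vec<(String, String)>, Vec<(String, String)>) {
        match self {
            AuthConfig::Bearer(token) => (
                vec![("Authorization".into(), format!("Bearer {token}"))],
                vec![],
            ),
            AuthConfig::Header { name, value } => {
                (vec![(name.clone(), value.clone())], vec![])
            }
            AuthConfig::QueryParam { name, value } => {
                (vec![], vec![(name.clone(), value.clone())])
            }
        }
    }
}
```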


150-162: LGTM! Comprehensive rate limit parsing.

The RateLimitInfo structure correctly extracts rate limit metadata from response headers with proper fallback handling.

crates/rullm-cli/src/commands/chat/mod.rs (1)

29-29: LGTM! Documentation updated to reflect new provider focus.

The example format change from Gemini to Anthropic aligns with the PR's provider restructuring and provides users with a current, relevant example.

crates/rullm-cli/src/auth.rs (1)

101-101: LGTM! Consistent provider rebranding from Google to Gemini.

The field rename and updated match arms correctly reflect the PR's objective to rebrand the Google provider as Gemini. All references are updated consistently across the AuthConfig implementation.

Also applies to: 167-167, 178-178

crates/rullm-anthropic/src/messages/mod.rs (1)

1-11: LGTM!

The module structure is clean and follows Rust conventions. The selective re-exports from stream provide a well-defined public API.

crates/rullm-cli/src/main.rs (1)

12-12: LGTM!

The error module declaration enables the new CliError type used throughout the CLI for unified error handling, aligning with the PR's refactoring goals.

crates/rullm-cli/src/commands/mod.rs (2)

4-4: LGTM!

The error type migration from LlmError to CliError is consistent across the function signature, imports, and error conversions. This aligns with the PR's goal of unified error handling across providers.

Also applies to: 120-120, 165-165


36-36: LGTM!

The example update to use anthropic/claude-3-sonnet aligns with the PR's provider restructuring and provides users with a current, relevant example.

crates/rullm-cli/src/args.rs (1)

26-26: LGTM!

The example updates consistently reflect the shift to Anthropic models in the CLI documentation, aligning with the provider restructuring in this PR.

Also applies to: 121-121

crates/rullm-cli/src/client.rs (2)

5-5: LGTM!

The error type migration from LlmError to CliError is consistent with the broader refactoring across the CLI crate.

Also applies to: 15-15


43-43: LGTM!

The Gemini provider integration correctly routes to the CliClient::gemini() method, which uses Google's OpenAI-compatible endpoint. This aligns with the PR's goal of adding Gemini support via the OpenAI-compatible API.

crates/rullm-cli/src/provider.rs (1)

11-11: LGTM! Consistent provider rename with backward compatibility.

The refactoring from Google to Gemini is consistently applied across all implementations (enum variant, Display, ValueEnum, aliases, and environment key). The backward compatibility is preserved by keeping "google" as an alias, which is a thoughtful design decision.

Also applies to: 21-21, 34-34, 44-44, 57-57, 90-90

crates/rullm-anthropic/spec/message-api.md (1)

1-130: LGTM! Comprehensive and well-structured API documentation.

This documentation provides a clear, implementation-focused overview of the Anthropic Messages API. The organization is logical, covering all essential aspects from authentication to streaming behavior. The static analysis hints about sentence structure are false positives; the terse phrasing is appropriate for technical documentation.

crates/rullm-anthropic/src/error.rs (3)

1-42: Well-structured error enum with comprehensive variants.

The error hierarchy covers the expected failure modes for an API client. Using Arc<str> for string fields is efficient for cloning. The #[from] derive on Transport enables ergonomic ? usage with reqwest errors.


83-96: Good retry classification logic.

The is_retryable implementation correctly identifies rate limits (429), Anthropic's custom overloaded status (529), the overloaded_error type, and transport-level transient failures (timeout, connection errors).


99-140: Clean error object model matching Anthropic's API structure.

The ErrorObject and ApiErrorResponse types align with Anthropic's documented error format. The Display implementation provides useful context including the optional param field.

crates/rullm-chat-completion/src/lib.rs (2)

1-65: Excellent documentation with practical examples.

The module-level documentation provides clear quick-start examples for both synchronous and streaming usage. The no_run attribute appropriately prevents example execution during doc tests.


66-86: Well-organized module structure and re-exports.

The separation into client, config, error, streaming, and types modules follows good practices. The comprehensive re-exports at crate root provide a convenient flat API surface for consumers.

crates/rullm-chat-completion/src/streaming.rs (5)

16-45: Clean stream abstraction with proper pinning.

The ChatCompletionStream wrapper properly encapsulates the boxed stream and implements Stream by delegating to the inner stream. The accumulator() method provides a convenient transition to collected results.


86-101: Robust error fallback during SSE parsing.

The fallback to parse an SseErrorWrapper when ChatCompletionChunk parsing fails is a good defensive pattern that handles API errors embedded in the stream.


159-165: Accumulator correctly handles stream collection.

The collect method properly consumes all chunks before building the final ChatCompletion. Error propagation via ? ensures failures surface appropriately.


322-329: Unused index field is acceptable here.

The #[allow(dead_code)] annotation is appropriate since index is used as the HashMap key but not needed in the struct itself after construction.


371-451: Good test coverage for SSE parsing and accumulation.

The tests cover the key scenarios: multi-chunk parsing with role, content deltas, finish_reason, and the accumulator producing the correct final text. Consider adding a test for error handling in the stream.

crates/rullm-anthropic/src/lib.rs (2)

1-80: Comprehensive documentation with authentication guidance.

The module docs clearly explain both authentication methods (API key and OAuth token) with environment variable names. The streaming example properly demonstrates pinning the text stream.


81-126: Well-organized public API surface.

The module structure and re-exports provide a clean API. The grouped comments in the re-export block (Content blocks, Tools, Streaming, etc.) improve readability.

crates/rullm-anthropic/src/client.rs (3)

35-76: Clean client architecture with Arc-wrapped transport.

The Arc<HttpTransport> enables efficient cloning of the Client and MessagesClient. The sub-client pattern via messages() provides good API organization.


151-167: Stream method correctly forces streaming mode.

Setting request.stream = Some(true) before sending ensures the API returns a streaming response regardless of the caller's original request configuration.


202-235: Good unit test coverage for builders and helpers.

The tests verify client builder, request builder, and message helper functionality. Consider adding integration tests with a mock server for the async methods.

crates/rullm-chat-completion/src/error.rs (4)

5-23: Comprehensive client error hierarchy.

The ClientError enum cleanly separates transport (Http), API response (Api), parsing (Deserialize), and streaming (Stream) failure modes. The #[from] derives enable ergonomic error propagation.


60-75: Useful retry classification helpers.

The is_rate_limit, is_server_error, and is_retryable methods provide a clean API for implementing retry logic. The 5xx range check via (500..600).contains() is idiomatic.


77-95: ApiErrorBody matches OpenAI's error format.

The structure with message, type (as error_type), param, and code fields aligns with OpenAI's documented error response format.


122-140: Stream-specific error variants are well-defined.

StreamError covers the key SSE failure modes: malformed lines, JSON parse failures, embedded API errors, and unexpected stream closure.

crates/rullm-anthropic/src/transport.rs (6)

21-30: Transport initialization properly configures timeout.

The reqwest client is built with the configured timeout, and transport errors are correctly mapped to AnthropicError::Transport.


42-75: Flexible request building with options override.

The post method correctly applies default headers first, then overlays per-request extra_headers, extra_query, and timeout override from RequestOptions.


77-97: Proper error handling in post_json.

The method correctly checks for non-success status before attempting to parse the response body, and uses parse_error_response to construct structured errors.


155-177: Robust error response parsing with fallback.

parse_error_response attempts to parse the API's error format, falling back to a synthetic ErrorObject with status-derived error type when parsing fails. This ensures errors are never lost.


179-192: Status code mapping covers Anthropic's documented errors.

The mapping includes the custom 529 status code for "overloaded" which is specific to Anthropic's API. The fallback to "api_error" for unmapped codes is appropriate.


194-231: Unit tests verify key transport behaviors.

Tests cover ResponseMeta defaults, transport creation, and status code mapping. Consider adding tests for the error parsing path with mock responses.

crates/rullm-anthropic/src/messages/stream.rs (3)

17-111: LGTM - Well-structured streaming event types.

The StreamEvent enum and supporting data structures are well-designed with appropriate serde attributes for Anthropic's SSE format. Good use of Arc<str> for efficient string handling.


471-490: LGTM - Clean separation between text streaming and accumulation.

The text_stream() method provides a convenient way to extract only text chunks for display, while final_message() handles full accumulation. The design is sound.


532-623: LGTM - Good test coverage for SSE parsing and accumulation.

Tests cover single, multiple, and chunked SSE events, plus the accumulator's text-building flow. Consider adding tests for error scenarios (malformed JSON, transport errors) in a follow-up.

crates/rullm-anthropic/src/config.rs (2)

54-99: LGTM - Clean per-request options structure.

The RequestOptions struct provides a flexible way to override client defaults on a per-request basis. The builder methods are intuitive.


262-366: LGTM - Well-encapsulated config with comprehensive tests.

The ClientConfig properly encapsulates validated configuration, and the tests cover key scenarios including auth precedence and beta header concatenation.

crates/rullm-cli/src/aliases.rs (2)

17-98: LGTM - Clean migration to CliError with preserved functionality.

The error type migration is consistent throughout, and the alias validation and persistence logic remains correct. Good handling of edge cases (empty files, missing directories).


218-228: LGTM - Thread-safe global resolver initialization.

The OnceLock<RwLock<AliasResolver>> pattern provides safe lazy initialization with concurrent read access, appropriate for CLI usage.

crates/rullm-anthropic/src/messages/types.rs (5)

13-215: LGTM - Comprehensive request builder with proper field handling.

The MessagesRequest and MessagesRequestBuilder are well-implemented. The stream field being set to None in build() is correct since streaming is controlled by the transport layer.


217-419: LGTM - Comprehensive message and content type support.

Good coverage of text, image, document, and tool result content types with cache control support. The helper constructors (Message::user(), ContentBlockParam::text(), etc.) provide a clean API.


421-541: LGTM - Complete output content block and system prompt types.

All Anthropic response content types are covered, including thinking blocks and web search results. System content properly supports both simple text and cache-controlled blocks.


543-716: LGTM - Comprehensive tool type definitions.

Good coverage of custom tools and server-provided tools (bash, text_editor, web_search) with appropriate versioning. The ToolChoice helpers provide a clean API.


718-910: LGTM - Complete response type definitions with helpful accessors.

The MessagesResponse helpers (text(), tool_uses()) provide convenient access patterns. Token counting types are included for the count tokens endpoint.

crates/rullm-cli/src/cli_client.rs (1)

375-387: LGTM - Clean helper trait for system content conversion.

The SystemContentExt trait provides a convenient way to normalize SystemContent to blocks format for the OAuth flow.

crates/rullm-chat-completion/src/client.rs (2)

20-78: LGTM - Clean client implementation with proper error handling.

The client correctly handles HTTP errors, preserves raw response bodies for debugging, and returns typed ApiResponse with metadata. Good security practice in the Debug impl to not expose auth credentials.


171-373: LGTM - Comprehensive fluent builder API.

The ChatRequestBuilder provides an intuitive API for constructing requests with all necessary parameters. The send() and stream() methods cleanly execute the built request.

crates/rullm-chat-completion/src/types.rs (5)

10-91: LGTM - Clean newtype patterns for ModelId and Role.

The transparent serde serialization, Deref impls, and factory methods provide a good balance of type safety and ergonomics.


96-263: LGTM - Comprehensive message types with useful helpers.

The Message struct covers all OpenAI message fields, and the helper constructors (system(), user(), etc.) reduce boilerplate. The text() method correctly concatenates content from multipart messages.


265-509: LGTM - Comprehensive content parts and tool definitions.

Good support for multimodal content (images, audio, files) and tools. The ToolCall::arguments_as<T>() method provides convenient typed argument extraction.


657-874: LGTM - Complete request/response types with convenient helpers.

The ChatCompletionRequest covers all OpenAI parameters, and ChatCompletion provides helpful accessors like first_text(), tool_calls(), and parse_json<T>().


973-1070: LGTM - Correct streaming chunk types.

The streaming types (ChatCompletionChunk, ChunkDelta, ToolCallDelta) correctly model OpenAI's SSE format with appropriately optional fields for incremental updates.

Previously system messages were silently filtered out. Now they're
extracted and passed via the top-level system field per the API spec.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
crates/rullm-cli/src/commands/auth.rs (1)

23-23: Previous concern resolved: help text now includes Gemini.

The help text has been updated to include "gemini" in the provider list, addressing the previous review feedback.

Also applies to: 28-28

🧹 Nitpick comments (1)
crates/rullm-cli/src/cli_client.rs (1)

294-302: Consider logging a warning for unknown roles.

The fallback to ChatMessage::user for unknown roles (line 299) silently converts unexpected input. This could mask bugs where an unexpected role is passed. Consider adding a tracing::warn! for the default case.

🔎 Suggested improvement
                 for (role, content) in &messages {
                     let msg = match role.as_str() {
                         "system" => ChatMessage::system(content.as_str()),
                         "user" => ChatMessage::user(content.as_str()),
                         "assistant" => ChatMessage::assistant(content.as_str()),
-                        _ => ChatMessage::user(content.as_str()),
+                        other => {
+                            tracing::warn!("Unknown message role '{}', treating as user", other);
+                            ChatMessage::user(content.as_str())
+                        }
                     };
                     builder = builder.message(msg);
                 }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aad324a and ceef28b.

📒 Files selected for processing (3)
  • crates/rullm-cli/src/args.rs
  • crates/rullm-cli/src/cli_client.rs
  • crates/rullm-cli/src/commands/auth.rs
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Organize provider implementations in the `providers/` module using a trait-based provider system with Simple API (string-based) and Advanced API (ChatRequestBuilder) levels
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Define comprehensive error handling using the `LlmError` enum in `error.rs`

Applied to files:

  • crates/rullm-cli/src/cli_client.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Use `compat_types.rs` for OpenAI-compatible message and response types across all providers

Applied to files:

  • crates/rullm-cli/src/cli_client.rs
📚 Learning: 2025-12-21T10:42:39.384Z
Learnt from: CR
Repo: itzlambda/rullm PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-21T10:42:39.384Z
Learning: Organize CLI commands in the `commands/` directory with separate modules for each command (auth.rs, chat.rs, models.rs, alias.rs, templates.rs)

Applied to files:

  • crates/rullm-cli/src/cli_client.rs
🔇 Additional comments (11)
crates/rullm-cli/src/commands/auth.rs (1)

225-225: Line 225 implementation is correct and complete.

Verification confirms:

  • The Provider enum properly includes the Gemini variant in all required implementations
  • The env_key() method correctly returns "GEMINI_API_KEY" for Gemini
  • No remaining references to the old Provider::Google variant exist; backward compatibility is maintained via the "google" alias in aliases()
  • The display mapping Provider::Gemini => "Gemini" is consistent with other providers
crates/rullm-cli/src/args.rs (3)

26-26: Example updated to showcase Anthropic provider.

The example change from Gemini to Anthropic aligns with the PR's introduction of the native Anthropic implementation while maintaining Gemini support in the provider list.


121-121: Help text updated to reflect new provider examples.

The model description now highlights OpenAI and Anthropic as example providers, consistent with the CLI examples and the PR's focus on the new provider architecture.


184-184: Provider rename is complete and consistent.

The change from "google:" to "gemini:" at line 184 is part of a comprehensive provider rename that has been properly implemented across the codebase. The Provider enum includes the Gemini variant, environment variables have been updated to GEMINI_API_KEY, and all client implementations support the new provider. Backward compatibility is maintained by accepting "google" as an alias during parsing.

crates/rullm-cli/src/cli_client.rs (7)

6-16: LGTM!

The imports are well-organized, cleanly separating the Anthropic-specific types from the unified ChatCompletionsClient types. The constant for OAuth identification is appropriately scoped.


18-53: Well-designed helper functions for system content handling.

The extract_system_content function properly concatenates multiple system messages with clear separation, and prepend_claude_code_system handles all SystemContent variants correctly. This addresses the previous concern about system messages being silently dropped.


62-90: Clean unification of provider variants.

The enum structure effectively consolidates OpenAI-compatible providers under ChatCompletionsClient while maintaining the separate Anthropic implementation for its native API. The is_oauth field for Anthropic is appropriately preserved.


199-262: LGTM!

The unified handling for OpenAI-compatible providers is clean, and the Anthropic-specific path correctly manages OAuth system content. The default of 1024 max_tokens for Anthropic is reasonable.


329-371: System message handling is now properly implemented.

The extraction and combination logic for system content correctly handles all four cases (user system ± OAuth). This addresses the previous review concern about system messages being silently skipped.


419-431: LGTM!

The SystemContentExt trait provides a clean internal abstraction for converting SystemContent variants into the block representation needed by the API builders.


119-131: All Anthropic OAuth beta identifiers are current and correct. The beta strings (oauth-2025-04-20, claude-code-20250219, interleaved-thinking-2025-05-14, and fine-grained-tool-streaming-2025-05-14) are all active, legitimate identifiers from Anthropic's public beta program, covering OAuth support, Claude Code tooling, interleaved thinking, and fine-grained tool streaming, respectively.

@itzlambda itzlambda merged commit 19d1cdf into main Jan 4, 2026
6 checks passed
@itzlambda itzlambda deleted the separate-chat-crate branch January 4, 2026 03:53