Cherry picked Linux fixes and improvements from pr #1096 #1199
base: master
Walkthrough

Reworks SCD to instance-based headers and per-stream StreamHeaders/StreamFiles/MediaFiles, with codec-specific decoding for codecs 6 and 12. Adds an EGS Extract outputMode to control output paths. Introduces a KH1 Cvbl mesh/vertex/joint loader. Adds KH2 Bgm and Bop parsers and an empty Wd class. Minor docs header-to-link tweak.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant App
    participant Scd as Scd.Read
    participant Codec as CodecHandler
    App->>Scd: Read(stream)
    Scd->>Scd: Parse FileHeader, tableOffsetHeader
    loop For each stream
        Scd->>Scd: Read StreamHeader + raw bytes
        alt Codec 6 (Ogg)
            Scd->>Codec: XOR decrypt + extract vorbis headers
            Codec-->>Scd: Ogg bytes -> MediaFiles
        else Codec 12 (MSADPCM)
            Scd->>Codec: Wrap RIFF/WAV
            Codec-->>Scd: WAV bytes -> MediaFiles
        else Other
            Scd-->>Scd: MediaFiles add null
        end
    end
    Scd-->>App: Return Scd (FileHeader, StreamHeaders, StreamFiles, MediaFiles)
```

```mermaid
sequenceDiagram
    autonumber
    participant Tool as EgsTools.Extract
    participant FS as File System
    Note over Tool: outputMode: SeparateRoot | Adjacent
    Tool->>Tool: Compute originalPath (based on outputMode)
    Tool->>Tool: Compute remasteredPath (based on outputMode)
    Tool->>FS: Write original file
    Tool->>FS: Write remastered file (if present)
```

```mermaid
sequenceDiagram
    autonumber
    participant App
    participant BGM as Bgm.Read
    App->>BGM: Read(stream)
    BGM->>BGM: Read header
    loop TrackCount
        BGM->>BGM: Read length + payload
        BGM-->>BGM: Tracks.Add(raw)
    end
    BGM-->>App: Return Bgm (header + Tracks)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 9
🧹 Nitpick comments (14)
OpenKh.Kh2/Wd.cs (1)
3-6: Empty public type: consider sealing and documenting intent

If this is a placeholder, make the intent explicit and prevent unintended inheritance. Adding XML docs will also help future contributors.

```diff
-    public class Wd
+    /// <summary>
+    /// Placeholder for KH2 WD container type.
+    /// TODO: Add file layout and parsing once confirmed.
+    /// </summary>
+    public sealed class Wd { }
```

OpenKh.Egs/EgsTools.cs (2)
71-78: Good API extension with backward-compatible default

Adding ExtractFileOutputMode with a default keeps existing callers working while enabling the new “Adjacent” layout. Consider adding XML docs to the enum and parameter to clarify the two layouts and their resulting paths.
Would you like me to add XML documentation and update any CLI/help text that exposes this option?
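A minimal sketch of the pattern described above, under stated assumptions: only ExtractFileOutputMode, SeparateRoot, and Adjacent come from the PR; the helper name and path layout below are illustrative, not the actual EgsTools implementation.

```csharp
using System;
using System.IO;

public enum ExtractFileOutputMode
{
    SeparateRoot, // <output>/original/<path>, with a sibling "remastered" tree (assumed layout)
    Adjacent      // <output>/<path>, remastered assets next to the originals (assumed layout)
}

public static class ExtractPathsSketch
{
    // Defaulting the new parameter keeps existing call sites source-compatible.
    public static string ResolveOriginal(
        string outputDir, string relativePath,
        ExtractFileOutputMode mode = ExtractFileOutputMode.SeparateRoot) =>
        mode switch
        {
            ExtractFileOutputMode.SeparateRoot => Path.Combine(outputDir, "original", relativePath),
            ExtractFileOutputMode.Adjacent => Path.Combine(outputDir, relativePath),
            _ => throw new ArgumentOutOfRangeException(nameof(mode)),
        };
}
```

Because the parameter is optional, `ResolveOriginal("out", "a.bin")` behaves exactly like the pre-PR code path.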
89-93: Path resolution logic is sound; consider an optional skip for remastered assets when unchanged

The switch expressions correctly resolve output paths for both layouts. If doNotExtractAgain is intended to also skip re-writing remastered assets, consider an additional check against the remastered output directory before emitting assets, to reduce unnecessary IO on re-runs.
Confirm whether Patch() should also support the Adjacent layout. Today Patch expects original/remastered subfolders; extracting with Adjacent may not round-trip without an intermediate conversion step.
Also applies to: 105-110
OpenKh.Kh2/Bop.cs (1)
47-47: Prefer read-only exposure for entries

Exposing List as a public mutable field invites external mutation. Consider an init-only property returning IReadOnlyList to preserve invariants.
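The encapsulation the comment asks for can be sketched as follows; the uint entry type and member names are assumptions for illustration, not Bop's actual layout.

```csharp
using System.Collections.Generic;

public class BopSketch
{
    private readonly List<uint> _entries = new();

    // External callers can enumerate but cannot replace or mutate the backing list.
    public IReadOnlyList<uint> Entries => _entries;

    // Parsing code inside the assembly still appends through the private field.
    internal void AddEntry(uint value) => _entries.Add(value);
}
```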
OpenKh.Bbs/Scd.cs (2)
64-69: Public field naming consistency

FileHeader, StreamHeaders, StreamFiles, and MediaFiles are PascalCase; tableOffsetHeader uses camelCase. Align the name to PascalCase for public fields, or convert these to properties for API consistency.
70-80: Validate magic/version and header invariants early

The constants MagicCode, FileVersion, and SSCFVersion are defined but not enforced. Validating them after reading the headers will improve robustness and error reporting on bad inputs.
I can add minimal checks (and unit tests) to validate header values and fail fast on unexpected files if you’d like.
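A minimal fail-fast check along the lines proposed might look like this; the parameter shape is hypothetical, and the real constants live in Scd itself.

```csharp
using System.IO;

public static class ScdHeaderChecksSketch
{
    // Compare the values read from the stream against the class's declared constants.
    public static void Validate(ulong magic, ulong expectedMagic, uint version, uint expectedVersion)
    {
        if (magic != expectedMagic)
            throw new InvalidDataException($"Not an SCD file (magic 0x{magic:X16}).");
        if (version != expectedVersion)
            throw new InvalidDataException($"Unsupported SCD version {version}.");
    }
}
```

Calling this immediately after BinaryMapping reads the headers turns a cryptic downstream failure into a clear "not an SCD file" error.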
OpenKh.Kh2/Bgm.cs (4)
6-6: Remove unused using.

`using OpenKh.Kh2.SystemData;` is not referenced in this file. Apply this diff:

```diff
-using OpenKh.Kh2.SystemData;
```
29-29: Encapsulate Tracks to prevent external mutation.

Expose Tracks as a read-only property to avoid accidental external replacement of the list reference. Apply this diff:

```diff
-    public List<byte[]> Tracks = new();
+    public List<byte[]> Tracks { get; private set; } = new();
```
31-34: Remove unused nested type or integrate it in Read().

BgmTrack is declared but never used. Either remove it, or have Read() populate a collection of BgmTrack if that is the intended representation. Apply this diff to remove it:

```diff
-    public class BgmTrack
-    {
-        [Data] public byte[] Raw { get; set; }
-    }
```

If you intend to keep it, consider adding public List<BgmTrack> TrackList { get; private set; } and populating it in Read().
40-40: Confirm endianness of per-track length.

ReadUInt32() in StreamExtensions uses the machine endianness. If the BGM format is defined as little-endian (likely), this is fine on x86/x64 but will misread on big-endian architectures. Consider using an explicit little-endian reader or documenting the assumption. Would you like a small helper ReadUInt32LE() implemented in StreamExtensions and used here?

OpenKh.Kh1/Cvbl.cs (4)
206-215: Avoid full-buffer copy when not needed.

ReadAllBytes() copies the entire stream into memory and resets the original stream position. If the input is already seekable, you can parse directly, reducing allocations and side effects. Consider gating this behavior by stream.CanSeek. Possible refactor:

```diff
-    var file = stream.ReadAllBytes();
-    var str = new MemoryStream(file);
+    var str = stream.CanSeek ? stream : new MemoryStream(stream.ReadAllBytes());
```
342-346: Parsing flow control: avoid silent partial parsing.

For an unknown subsectionType, the code breaks the loop, potentially leaving the mesh partially parsed with no signal. Prefer setting run = false, or throwing/logging, to distinguish “end-of-mesh” from “unknown type”. Apply this diff:

```diff
-    else if (subsectionType == 32768) run = false;
-    else break;
+    else if (subsectionType == 32768) { run = false; }
+    else
+    {
+        Console.WriteLine($"Warning: Unknown subsection type {subsectionType} at offset {subsectionOffset}, length {subsectionLength}. Aborting mesh parse.");
+        run = false;
+    }
```
236-241: Initialize jointSlots capacity and prefer explicit tuple names.

Minor: pre-size the dictionary and clarify the tuple elements (JointId, ToLocalTransform) for readability. Apply this diff:

```diff
-    var jointSlots = new Dictionary<uint, (uint, Matrix4x4)>();
+    var jointSlots = new Dictionary<uint, (uint JointId, Matrix4x4 ToLocalTransform)>(48);
     for (var i = 0u; i < 48; i++)
     {
-        jointSlots[i] = (0, Matrix4x4.Identity);
+        jointSlots[i] = (0u, Matrix4x4.Identity);
     }
```
257-274: Validate subsection sizes before reading.

Parsing assumes enough bytes exist for headers, vertex counts, and matrices. Add checks using subsectionLength to prevent out-of-range seeks on malformed data. Would you like me to add precise length guards for:

- type == 1: require subsectionLength >= 24 + numVerts * VertexSize
- type == 17: require subsectionLength >= the minimum per jointStyle (including a 16x4-byte matrix for style 9)

I can draft the exact constants if you confirm the section layout.
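A generic guard helper for these cases might be sketched as below; the per-type minimums above still need confirmation, so the helper only centralizes the check itself.

```csharp
using System.IO;

public static class SubsectionGuardSketch
{
    // Throw before any read that would run past the declared subsection length.
    public static void Require(uint subsectionLength, uint needed, string what)
    {
        if (subsectionLength < needed)
            throw new InvalidDataException(
                $"Subsection too short for {what}: need {needed} bytes, have {subsectionLength}.");
    }
}
```

Usage would look like `SubsectionGuardSketch.Require(subsectionLength, 24 + numVerts * vertexSize, "type 1 vertices");`, where vertexSize is whatever the confirmed layout defines.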
📒 Files selected for processing (7)

- OpenKh.Bbs/Scd.cs (2 hunks)
- OpenKh.Egs/EgsTools.cs (3 hunks)
- OpenKh.Kh1/Cvbl.cs (1 hunk)
- OpenKh.Kh2/Bgm.cs (1 hunk)
- OpenKh.Kh2/Bop.cs (1 hunk)
- OpenKh.Kh2/Wd.cs (1 hunk)
- docs/kh2/file/model.md (1 hunk)
🧰 Additional context used

🧬 Code graph analysis (4)

OpenKh.Kh2/Bop.cs (1)
- OpenKh.Common/StreamExtensions.cs: ReadUInt32 (89-95)

OpenKh.Kh2/Bgm.cs (1)
- OpenKh.Common/StreamExtensions.cs: ReadUInt32 (89-95)

OpenKh.Kh1/Cvbl.cs (2)
- OpenKh.Kh1/Mdls.cs: MdlsJoint (144-182)
- OpenKh.Common/StreamExtensions.cs: ReadAllBytes (187-192), ReadUInt32 (89-95), ReadSingle (113-119)

OpenKh.Bbs/Scd.cs (1)
- OpenKh.Common/StreamExtensions.cs: ReadUInt32 (89-95)
⏰ Context from checks skipped due to timeout (2)
- GitHub Check: build
- GitHub Check: Analyze (csharp)
🔇 Additional comments (1)
docs/kh2/file/model.md (1)
64-64: Header → link conversion looks good

Turning “UVSC option” into an anchor link improves navigation. No other changes needed here.
```csharp
{
    var extradataOffset = 0u;
    if (streamInfo.AuxChunkCount > 0) extradataOffset += BitConverter.ToUInt32(st.Skip(0x04).Take(4).ToArray(), 0);

    var encryptionKey = st[extradataOffset + 0x02];
    var seekTableSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x10).Take(4).ToArray(), 0);
    var vorbHeaderSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x14).Take(4).ToArray(), 0);

    var startOffset = extradataOffset + 0x20 + seekTableSize;

    var decryptedFile = st.ToArray();

    var endPosition = startOffset + vorbHeaderSize;

    for (var i = startOffset; i < endPosition; i++) decryptedFile[i] = (byte)(decryptedFile[i] ^ encryptionKey);

    var oggSize = vorbHeaderSize + streamInfo.StreamSize;

    scd.MediaFiles.Add(decryptedFile.Skip((int)startOffset).Take((int)oggSize).ToArray());
    break;
```
🛠️ Refactor suggestion
Bounds checks needed for codec 6 (.ogg) parsing to avoid OOB access
Indexing st at computed offsets (extradataOffset + 0x02, +0x10, +0x14, and decrypt loop) assumes well-formed data. Malformed SCDs can throw IndexOutOfRangeException or expose incorrect data. Add bounds checks before reading/looping and validate oggSize fits within st.
Example guard pattern:
```diff
-    var extradataOffset = 0u;
-    if (streamInfo.AuxChunkCount > 0) extradataOffset += BitConverter.ToUInt32(st.Skip(0x04).Take(4).ToArray(), 0);
+    var extradataOffset = 0u;
+    if (streamInfo.AuxChunkCount > 0)
+    {
+        if (st.Length < 8) throw new InvalidDataException("SCD OGG extradata header truncated.");
+        extradataOffset += BitConverter.ToUInt32(st.AsSpan(0x04, 4));
+    }
     var encryptionKey = st[extradataOffset + 0x02];
     var seekTableSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x10).Take(4).ToArray(), 0);
     var vorbHeaderSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x14).Take(4).ToArray(), 0);
     var startOffset = extradataOffset + 0x20 + seekTableSize;
+    if (startOffset > (uint)st.Length) throw new InvalidDataException("SCD OGG start offset beyond payload.");
     var decryptedFile = st.ToArray();
     var endPosition = startOffset + vorbHeaderSize;
+    if (endPosition > (uint)decryptedFile.Length) throw new InvalidDataException("SCD OGG vorb header truncated.");
     for (var i = startOffset; i < endPosition; i++) decryptedFile[i] = (byte)(decryptedFile[i]^encryptionKey);
     var oggSize = vorbHeaderSize + streamInfo.StreamSize;
+    if (startOffset + oggSize > (uint)decryptedFile.Length) throw new InvalidDataException("SCD OGG size exceeds payload.");
```
📝 Committable suggestion

```csharp
{
    var extradataOffset = 0u;
    if (streamInfo.AuxChunkCount > 0)
    {
        if (st.Length < 8)
            throw new InvalidDataException("SCD OGG extradata header truncated.");
        extradataOffset += BitConverter.ToUInt32(st.AsSpan(0x04, 4));
    }
    var encryptionKey = st[extradataOffset + 0x02];
    var seekTableSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x10).Take(4).ToArray(), 0);
    var vorbHeaderSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x14).Take(4).ToArray(), 0);
    var startOffset = extradataOffset + 0x20 + seekTableSize;
    if (startOffset > (uint)st.Length)
        throw new InvalidDataException("SCD OGG start offset beyond payload.");
    var decryptedFile = st.ToArray();
    var endPosition = startOffset + vorbHeaderSize;
    if (endPosition > (uint)decryptedFile.Length)
        throw new InvalidDataException("SCD OGG vorb header truncated.");
    for (var i = startOffset; i < endPosition; i++)
        decryptedFile[i] = (byte)(decryptedFile[i] ^ encryptionKey);
    var oggSize = vorbHeaderSize + streamInfo.StreamSize;
    if (startOffset + oggSize > (uint)decryptedFile.Length)
        throw new InvalidDataException("SCD OGG size exceeds payload.");
    scd.MediaFiles.Add(decryptedFile.Skip((int)startOffset).Take((int)oggSize).ToArray());
    break;
}
```
🤖 Prompt for AI Agents
In OpenKh.Bbs/Scd.cs around lines 100-119, add defensive bounds checks before
indexing into st: verify extradataOffset is within st.Length and, if
streamInfo.AuxChunkCount > 0, that the additional 4-byte read at offset 0x04 is
available; ensure (extradataOffset + 0x02) < st.Length before reading
encryptionKey, and that reading 4-byte values at (extradataOffset + 0x10) and
(extradataOffset + 0x14) is only done if (extradataOffset + 0x10 + 4) <=
st.Length and (extradataOffset + 0x14 + 4) <= st.Length; compute startOffset and
endPosition only after these checks and ensure endPosition <= st.Length and
startOffset <= endPosition before running the decryption loop; validate oggSize
is non-negative and that startOffset + oggSize <= st.Length before slicing; on
any failure either throw a clear exception or skip the malformed entry instead
of indexing out of bounds.
```csharp
    var length = streamSize + (0x4e - 0x8);

    var msadpcm = Array.Empty<byte>()
        .Concat(BitConverter.GetBytes(0x46464952)) //"RIFF"
        .Concat(BitConverter.GetBytes(length)) //overall file size - 8
        .Concat(BitConverter.GetBytes(0x45564157)) //"WAVE"
        .Concat(BitConverter.GetBytes(0x20746D66)) //"fmt "
        .Concat(BitConverter.GetBytes(0x32))
        .Concat(st.Take(0x32))
        .Concat(BitConverter.GetBytes(0x61746164)) //"data"
        .Concat(BitConverter.GetBytes((int)streamSize))
        .Concat(st.Skip(0x32))
        .ToArray();

    scd.MediaFiles.Add(msadpcm);
```
Bug: RIFF chunk size written as 8 bytes (long) instead of 4 bytes
length is inferred as long due to uint + int, so BitConverter.GetBytes(length) emits 8 bytes, breaking the RIFF header. Use a 32-bit value.
```diff
-    var length = streamSize + (0x4e - 0x8);
+    var length = checked((int)(streamSize + 0x46)); // overall RIFF size minus 8
     var msadpcm = Array.Empty<byte>()
         .Concat(BitConverter.GetBytes(0x46464952)) //"RIFF"
-        .Concat(BitConverter.GetBytes(length)) //overall file size - 8
+        .Concat(BitConverter.GetBytes(length)) // overall file size - 8 (4 bytes)
         .Concat(BitConverter.GetBytes(0x45564157)) //"WAVE"
```
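The underlying hazard is that BitConverter.GetBytes resolves its overload from the compile-time type, so the width of the emitted size field silently tracks the inferred type of length. A small self-contained illustration (RiffChunkSize is a hypothetical helper, not the Scd code):

```csharp
using System;

public static class RiffSizeSketch
{
    // Overload resolution decides the field width.
    public static int WidthOf(long value) => BitConverter.GetBytes(value).Length; // 8 bytes
    public static int WidthOf(int value) => BitConverter.GetBytes(value).Length;  // 4 bytes

    // Forcing 32-bit arithmetic guarantees the 4-byte chunk-size field RIFF requires.
    public static byte[] RiffChunkSize(uint streamSize) =>
        BitConverter.GetBytes(checked((int)(streamSize + 0x46)));
}
```

If length ever infers as long (or ulong), the header gains four stray bytes and every following chunk offset is shifted, which is why pinning the type explicitly is safer than relying on inference.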
📝 Committable suggestion

```csharp
    var length = checked((int)(streamSize + 0x46)); // overall RIFF size minus 8
    var msadpcm = Array.Empty<byte>()
        .Concat(BitConverter.GetBytes(0x46464952)) //"RIFF"
        .Concat(BitConverter.GetBytes(length)) // overall file size - 8 (4 bytes)
        .Concat(BitConverter.GetBytes(0x45564157)) //"WAVE"
        .Concat(BitConverter.GetBytes(0x20746D66)) //"fmt "
        .Concat(BitConverter.GetBytes(0x32))
        .Concat(st.Take(0x32))
        .Concat(BitConverter.GetBytes(0x61746164)) //"data"
        .Concat(BitConverter.GetBytes((int)streamSize))
        .Concat(st.Skip(0x32))
        .ToArray();
    scd.MediaFiles.Add(msadpcm);
```
🤖 Prompt for AI Agents
In OpenKh.Bbs/Scd.cs around lines 127 to 141, the RIFF "chunk size" is being
written using BitConverter.GetBytes(length) where length is a 64-bit long
(resulting in 8 bytes) because of mixed uint + int arithmetic; replace length
with a 32-bit value (cast to int or uint) so BitConverter.GetBytes produces a
4-byte little-endian value, and ensure the casted value accurately represents
the overall file size - 8 before writing to the header.
```csharp
    for (var i = 0u; i < joints.Count; i++)
    {
        var j = joints[(int)i];

        var scaleMatrix = Matrix4x4.CreateScale(new Vector3(j.ScaleX, j.ScaleY, j.ScaleZ));

        var rotationMatrixX = Matrix4x4.CreateRotationX(j.RotateX);
        var rotationMatrixY = Matrix4x4.CreateRotationY(j.RotateY);
        var rotationMatrixZ = Matrix4x4.CreateRotationZ(j.RotateZ);
        var rotationMatrix = rotationMatrixX * rotationMatrixY * rotationMatrixZ;

        var translationMatrix = Matrix4x4.CreateTranslation(new Vector3(j.TranslateX, j.TranslateY, j.TranslateZ));

        dict[i] = scaleMatrix * rotationMatrix * translationMatrix;
    }
    return dict;
}
```
Use actual joint indices from MdlsJoint instead of loop index.
MdlsJointsToDictionary keys the dictionary by the loop counter i, not MdlsJoint.Index. Later lookups use jointId read from the stream, which likely refers to the model’s joint index. If Index differs from positional order, lookups will be wrong.
Apply this diff:
```diff
-    var dict = new Dictionary<uint, Matrix4x4>();
-    for (var i = 0u; i < joints.Count; i++)
-    {
-        var j = joints[(int)i];
+    var dict = new Dictionary<uint, Matrix4x4>(joints.Count);
+    for (var k = 0; k < joints.Count; k++)
+    {
+        var j = joints[k];
+        var id = j.Index; // Use the joint's declared index
         var scaleMatrix = Matrix4x4.CreateScale(new Vector3(j.ScaleX, j.ScaleY, j.ScaleZ));
         var rotationMatrixX = Matrix4x4.CreateRotationX(j.RotateX);
         var rotationMatrixY = Matrix4x4.CreateRotationY(j.RotateY);
         var rotationMatrixZ = Matrix4x4.CreateRotationZ(j.RotateZ);
         var rotationMatrix = rotationMatrixX * rotationMatrixY * rotationMatrixZ;
         var translationMatrix = Matrix4x4.CreateTranslation(new Vector3(j.TranslateX, j.TranslateY, j.TranslateZ));
-        dict[i] = scaleMatrix * rotationMatrix * translationMatrix;
+        dict[id] = scaleMatrix * rotationMatrix * translationMatrix;
     }
```

Optional: If MdlsJoint.Index is not unique or can be sparse, add guards and/or fall back to the positional index if id collides.
📝 Committable suggestion

```csharp
    // Use declared joint indices as dictionary keys
    var dict = new Dictionary<uint, Matrix4x4>(joints.Count);
    for (var k = 0; k < joints.Count; k++)
    {
        var j = joints[k];
        var id = j.Index; // Use the joint's declared index
        var scaleMatrix = Matrix4x4.CreateScale(new Vector3(j.ScaleX, j.ScaleY, j.ScaleZ));
        var rotationMatrixX = Matrix4x4.CreateRotationX(j.RotateX);
        var rotationMatrixY = Matrix4x4.CreateRotationY(j.RotateY);
        var rotationMatrixZ = Matrix4x4.CreateRotationZ(j.RotateZ);
        var rotationMatrix = rotationMatrixX * rotationMatrixY * rotationMatrixZ;
        var translationMatrix = Matrix4x4.CreateTranslation(new Vector3(j.TranslateX, j.TranslateY, j.TranslateZ));
        dict[id] = scaleMatrix * rotationMatrix * translationMatrix;
    }
    return dict;
```
🤖 Prompt for AI Agents
In OpenKh.Kh1/Cvbl.cs around lines 188 to 204, the dictionary is currently keyed
by the loop counter i instead of the joint's actual MdlsJoint.Index which later
lookups expect; change the key to use j.Index when inserting (e.g.,
dict[j.Index] = ...), and add a guard to handle non-unique or sparse indices
(check if dict already contains j.Index and either skip/overwrite with a warning
or fall back to using the positional i as the key when j.Index is invalid),
ensuring the dictionary maps the model joint indices to their transform
matrices.
```csharp
    for (var i = 0; i < Header.NumMeshes; i++) meshEntries.Add(BinaryMapping.ReadObject<MeshEntry>(str));

    MeshEntries = meshEntries;
```
🛠️ Refactor suggestion
Guard against large mesh counts and invalid offsets.
Header.NumMeshes is uint, cast implicitly in the for condition. Add bounds checks and explicit casts to avoid overflow and to validate meshEntriesOffset.
Apply this diff:
```diff
-    for (var i = 0; i < Header.NumMeshes; i++) meshEntries.Add(BinaryMapping.ReadObject<MeshEntry>(str));
+    if (Header.NumMeshes > int.MaxValue)
+        throw new InvalidDataException($"NumMeshes too large: {Header.NumMeshes}");
+    for (var i = 0; i < (int)Header.NumMeshes; i++)
+    {
+        meshEntries.Add(BinaryMapping.ReadObject<MeshEntry>(str));
+    }
```

And consider:

```diff
-    var meshEntriesOffset = 16 + (Header.HasUnknownEntries == 1 ? Header.NumUnknownEntries * 32 : 0);
+    var meshEntriesOffset = 16u + (Header.HasUnknownEntries == 1 ? (uint)Header.NumUnknownEntries * 32u : 0u);
+    if ((ulong)meshEntriesOffset > (ulong)str.Length)
+        throw new EndOfStreamException($"meshEntriesOffset {meshEntriesOffset} beyond stream length {str.Length}");
```
📝 Committable suggestion

```csharp
    if (Header.NumMeshes > int.MaxValue)
        throw new InvalidDataException($"NumMeshes too large: {Header.NumMeshes}");
    for (var i = 0; i < (int)Header.NumMeshes; i++)
    {
        meshEntries.Add(BinaryMapping.ReadObject<MeshEntry>(str));
    }
    MeshEntries = meshEntries;
```
🤖 Prompt for AI Agents
In OpenKh.Kh1/Cvbl.cs around lines 223-226, the loop uses Header.NumMeshes (a
uint) without bounds checks and performs implicit casting that can overflow;
validate Header.NumMeshes by explicitly casting to an int after checking it does
not exceed a reasonable maximum (e.g., a safe cap like 1_000_000), verify
MeshEntriesOffset is within the stream length and that MeshEntriesOffset +
(count * MeshEntrySize) does not overflow and stays <= stream length, throw a
clear exception if checks fail, then seek to the validated offset and iterate
using the safe int count to ReadObject<MeshEntry> and add to the list.
```csharp
    if (joints != null)
        vert.Position = RelativeToGlobalVertex(vert.Position, joints, vert.Joints.Select(j => (uint)j).ToArray(), vert.Weights,
            vert.Joints.Select(j => jointSlots[j].Item2).ToArray());
```
🛠️ Refactor suggestion
Bounds check joint lookups and weight counts.
RelativeToGlobalVertex assumes weights[i] and jointIds[i] exist and that joints[jointId] is present. Add validation to avoid KeyNotFoundException/Index errors with malformed data.
Apply this diff:
```diff
-    if (joints != null)
-        vert.Position = RelativeToGlobalVertex(vert.Position, joints, vert.Joints.Select(j => (uint)j).ToArray(), vert.Weights,
-            vert.Joints.Select(j => jointSlots[j].Item2).ToArray());
+    if (joints != null)
+    {
+        var jointIds = vert.Joints.Select(j => (uint)j).ToArray();
+        var toLocal = vert.Joints.Select(j => jointSlots.ContainsKey(j) ? jointSlots[j].Item2 : Matrix4x4.Identity).ToArray();
+        var weights = vert.Weights ?? Array.Empty<float>();
+        if (jointIds.Length > 0 && jointIds.Length == weights.Length && jointIds.All(id => joints.ContainsKey(id)))
+            vert.Position = RelativeToGlobalVertex(vert.Position, joints, jointIds, weights, toLocal);
+    }
```
📝 Committable suggestion

```csharp
    if (joints != null)
    {
        var jointIds = vert.Joints.Select(j => (uint)j).ToArray();
        var toLocal = vert.Joints
            .Select(j => jointSlots.ContainsKey(j)
                ? jointSlots[j].Item2
                : Matrix4x4.Identity)
            .ToArray();
        var weights = vert.Weights ?? Array.Empty<float>();
        if (jointIds.Length > 0
            && jointIds.Length == weights.Length
            && jointIds.All(id => joints.ContainsKey(id)))
        {
            vert.Position = RelativeToGlobalVertex(
                vert.Position,
                joints,
                jointIds,
                weights,
                toLocal
            );
        }
    }
```
```csharp
    switch (vert.Face)
    {
        case 1:
            mesh.Faces.Add([totalVertCount + i - 2, totalVertCount + i - 1, totalVertCount + i]);
            break;
        case 2:
            mesh.Faces.Add([totalVertCount + i, totalVertCount + i - 1, totalVertCount + i - 2]);
            break;
    }
```
Fix uint underflow when emitting triangle indices.
totalVertCount + i - 2 underflows for i < 2 because i/totalVertCount are uint. This yields huge indices and corrupts faces.
Apply this diff:
```diff
-    switch (vert.Face)
-    {
-        case 1:
-            mesh.Faces.Add([totalVertCount + i - 2, totalVertCount + i - 1, totalVertCount + i]);
-            break;
-        case 2:
-            mesh.Faces.Add([totalVertCount + i, totalVertCount + i - 1, totalVertCount + i - 2]);
-            break;
-    }
+    if (vert.Face != 0 && i >= 2)
+    {
+        var a = totalVertCount + i - 2;
+        var b = totalVertCount + i - 1;
+        var c = totalVertCount + i;
+        if (vert.Face == 1)
+            mesh.Faces.Add([a, b, c]);
+        else if (vert.Face == 2)
+            mesh.Faces.Add([c, b, a]);
+    }
```

Alternatively, switch i/totalVertCount to int for index arithmetic and cast back to uint at storage time.
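For reference, the wraparound itself is easy to demonstrate in isolation; the helper names below are illustrative, but the arithmetic mirrors the snippet in question.

```csharp
using System;

public static class StripIndexSketch
{
    // Unsigned subtraction wraps: for i < 2 the "previous vertex" index
    // becomes a huge value instead of a negative one.
    public static uint Buggy(uint totalVertCount, uint i) => totalVertCount + i - 2;

    // Safe variant: do the arithmetic in signed ints, guard, cast back at storage time.
    public static bool TryFirstIndex(uint totalVertCount, uint i, out uint index)
    {
        int baseIndex = (int)totalVertCount + (int)i;
        if (baseIndex < 2) { index = 0; return false; }
        index = (uint)(baseIndex - 2);
        return true;
    }
}
```

Buggy(0, 0) wraps to 4294967294, which is exactly the corrupt face index the comment warns about; the guarded variant simply refuses to emit a triangle until two prior strip vertices exist.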
📝 Committable suggestion

```csharp
    if (vert.Face != 0 && i >= 2)
    {
        var a = totalVertCount + i - 2;
        var b = totalVertCount + i - 1;
        var c = totalVertCount + i;
        if (vert.Face == 1)
            mesh.Faces.Add([a, b, c]);
        else if (vert.Face == 2)
            mesh.Faces.Add([c, b, a]);
    }
```
🤖 Prompt for AI Agents
In OpenKh.Kh1/Cvbl.cs around lines 287-295, index arithmetic uses uints so
expressions like totalVertCount + i - 2 underflow when i < 2; change the
arithmetic to use signed ints (e.g. compute int baseIndex = (int)totalVertCount
+ (int)i) then compute the triangle vertex indices with baseIndex - 2, baseIndex
- 1, baseIndex (or reverse for winding), validate they are >= 0, and finally
cast each index back to uint when adding to mesh.Faces; this avoids unsigned
underflow while preserving storage as uint.
```csharp
    public static Bgm Read(Stream stream)
    {
        var bgm = BinaryMapping.ReadObject<Bgm>(stream);

        bgm.Tracks = Enumerable.Range(0, bgm.TrackCount).Select(_ => stream.ReadBytes((int)stream.ReadUInt32())).ToList();

        return bgm;
    }
```
🛠️ Refactor suggestion
Validate track counts and lengths when parsing.
Current parsing trusts TrackCount and the per-track length. Add basic sanity checks (seekable stream, bounds) to harden against malformed inputs.
Apply this diff:
public static Bgm Read(Stream stream)
{
- var bgm = BinaryMapping.ReadObject<Bgm>(stream);
-
- bgm.Tracks = Enumerable.Range(0, bgm.TrackCount).Select(_ => stream.ReadBytes((int)stream.ReadUInt32())).ToList();
-
- return bgm;
+ var bgm = BinaryMapping.ReadObject<Bgm>(stream);
+
+ // Basic validation and safe reads
+ if (bgm.TrackCount > 64) // arbitrary upper guardrail; adjust if spec allows more
+ throw new InvalidDataException($"Unreasonable track count: {bgm.TrackCount}");
+
+ var tracks = new List<byte[]>(bgm.TrackCount);
+ for (int i = 0; i < bgm.TrackCount; i++)
+ {
+ var len = stream.ReadUInt32();
+ if (len > int.MaxValue)
+ throw new InvalidDataException($"Track {i}: length too large: {len}");
+ if (stream.CanSeek && stream.Position + len > stream.Length)
+ throw new EndOfStreamException($"Track {i}: insufficient data for length {len} at pos {stream.Position}.");
+ tracks.Add(stream.ReadBytes((int)len));
+ }
+ bgm.Tracks = tracks;
+ return bgm;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
public static Bgm Read(Stream stream)
{
    var bgm = BinaryMapping.ReadObject<Bgm>(stream);
    bgm.Tracks = Enumerable.Range(0, bgm.TrackCount).Select(_ => stream.ReadBytes((int)stream.ReadUInt32())).ToList();
    return bgm;
}
public static Bgm Read(Stream stream)
{
    var bgm = BinaryMapping.ReadObject<Bgm>(stream);
    // Basic validation and safe reads
    if (bgm.TrackCount > 64) // arbitrary upper guardrail; adjust if spec allows more
        throw new InvalidDataException($"Unreasonable track count: {bgm.TrackCount}");
    var tracks = new List<byte[]>(bgm.TrackCount);
    for (int i = 0; i < bgm.TrackCount; i++)
    {
        var len = stream.ReadUInt32();
        if (len > int.MaxValue)
            throw new InvalidDataException($"Track {i}: length too large: {len}");
        if (stream.CanSeek && stream.Position + len > stream.Length)
            throw new EndOfStreamException($"Track {i}: insufficient data for length {len} at pos {stream.Position}.");
        tracks.Add(stream.ReadBytes((int)len));
    }
    bgm.Tracks = tracks;
    return bgm;
}
stream.ReadUInt32(); //the number 8
var fileSize = stream.ReadUInt32();
var count = (int)(fileSize / 0x68);

bop.Entries = Enumerable.Range(0, count).Select(_ => BinaryMapping.ReadObject<BopEntry>(stream)).ToList();
💡 Verification agent
🧩 Analysis chain
Guard against malformed headers and validate entry count
Current parsing assumes the first UInt32 is 8 and that fileSize is a clean multiple of 0x68. Add validation to fail fast on corrupt data and avoid partial reads.
- stream.ReadUInt32(); //the number 8
- var fileSize = stream.ReadUInt32();
- var count = (int)(fileSize / 0x68);
+ var preamble = stream.ReadUInt32(); // expected 8
+ var fileSize = stream.ReadUInt32();
+ if (preamble != 8)
+ throw new InvalidDataException($"Unexpected BOP preamble: {preamble} (expected 8).");
+ if (fileSize % 0x68 != 0)
+ throw new InvalidDataException($"BOP size {fileSize} is not a multiple of entry size 0x68.");
+ var count = checked((int)(fileSize / 0x68));
🏁 Script executed:
#!/bin/bash
# Locate Bop.cs within the repository
fd -t f -g '*Bop.cs'
Length of output: 40
🏁 Script executed:
#!/bin/bash
# Print the first 200 lines of OpenKh.Kh2/Bop.cs for context
sed -n '1,200p' OpenKh.Kh2/Bop.cs
Length of output: 2312
Guard against malformed BOP headers and validate entry count
The Read method in OpenKh.Kh2/Bop.cs currently assumes the first 32-bit word is always 8 and that the following size is an exact multiple of 0x68. A corrupted or malicious file could violate these assumptions, leading to silent truncation, out-of-range casts, or partial reads. We should:
- Verify the preamble matches the expected magic value before proceeding.
- Ensure fileSize divides evenly by the entry size (0x68) and fail fast otherwise.
- Use a checked cast when converting the entry count to int to catch overflows.
--- a/OpenKh.Kh2/Bop.cs
+++ b/OpenKh.Kh2/Bop.cs
@@ public static Bop Read(Stream stream)
- stream.ReadUInt32(); //the number 8
- var fileSize = stream.ReadUInt32();
- var count = (int)(fileSize / 0x68);
+ // Validate header and entry count
+ var preamble = stream.ReadUInt32(); // expected 8
+ if (preamble != 8)
+ throw new InvalidDataException(
+ $"Unexpected BOP preamble: {preamble} (expected 8).");
+
+ var fileSize = stream.ReadUInt32();
+ if (fileSize % 0x68 != 0)
+ throw new InvalidDataException(
+ $"BOP size {fileSize} is not a multiple of entry size 0x68.");
+
+ // Checked cast prevents integer overflow
+ var count = checked((int)(fileSize / 0x68));
- Location: OpenKh.Kh2/Bop.cs, method Read(Stream), replace lines 56–61.
📝 Committable suggestion
stream.ReadUInt32(); //the number 8
var fileSize = stream.ReadUInt32();
var count = (int)(fileSize / 0x68);
bop.Entries = Enumerable.Range(0, count).Select(_ => BinaryMapping.ReadObject<BopEntry>(stream)).ToList();
// Validate header and entry count
var preamble = stream.ReadUInt32(); // expected 8
if (preamble != 8)
    throw new InvalidDataException(
        $"Unexpected BOP preamble: {preamble} (expected 8).");
var fileSize = stream.ReadUInt32();
if (fileSize % 0x68 != 0)
    throw new InvalidDataException(
        $"BOP size {fileSize} is not a multiple of entry size 0x68.");
// Checked cast prevents integer overflow
var count = checked((int)(fileSize / 0x68));
bop.Entries = Enumerable.Range(0, count)
    .Select(_ => BinaryMapping.ReadObject<BopEntry>(stream))
    .ToList();
🤖 Prompt for AI Agents
In OpenKh.Kh2/Bop.cs around lines 56 to 61, the code assumes the first UInt32 is
8 and that fileSize is an exact multiple of 0x68; instead read and validate the
preamble/magic equals the expected value (throw InvalidDataException if not),
read fileSize and check fileSize % 0x68 == 0 (throw InvalidDataException if
not), compute the entry count using a checked conversion to int to catch
overflows, and only then read that many BopEntry objects; replace the current
unguarded reads with these validations and fail fast on any mismatch.
Actionable comments posted: 0
♻️ Duplicate comments (3)
OpenKh.Bbs/Scd.cs (3)
86-96: Bug: last-chunk size computed incorrectly; can over-read and corrupt parse
For the last entry you use stream.Length as an absolute end offset but treat it as a size. Compute endOffset first, then chunkSize = endOffset - off. Also defend against undersized headers.

- var off = soundOffsets[index];
- var next = (int)(index == soundOffsets.Count - 1 ? stream.Length : soundOffsets[index + 1] - off);
+ var off = soundOffsets[index];
+ var endOffset = (int)(index == soundOffsets.Count - 1 ? stream.Length : soundOffsets[index + 1]);
+ var chunkSize = endOffset - (int)off;
+ if (chunkSize < 0x20)
+     throw new InvalidDataException($"SCD stream chunk too small: {chunkSize} bytes at offset 0x{off:X}.");
  stream.Seek(off, SeekOrigin.Begin);
  var streamInfo = BinaryMapping.ReadObject<StreamHeader>(stream);
  scd.StreamHeaders.Add(streamInfo);
- var st = stream.ReadBytes(next - 0x20);
+ var st = stream.ReadBytes(chunkSize - 0x20);
  scd.StreamFiles.Add(st);
100-121: Add defensive bounds checks for codec 6 (.ogg) parsing to avoid OOB access
Multiple indices into st assume well-formed data. Malformed SCDs can cause IndexOutOfRange or bad slices. Validate before indexing and decrypting.

case 6:
{
-     var extradataOffset = 0u;
-     if (streamInfo.AuxChunkCount > 0) extradataOffset += BitConverter.ToUInt32(st.Skip(0x04).Take(4).ToArray(), 0);
-     var encryptionKey = st[extradataOffset + 0x02];
-     var seekTableSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x10).Take(4).ToArray(), 0);
-     var vorbHeaderSize = BitConverter.ToUInt32(st.Skip((int)extradataOffset + 0x14).Take(4).ToArray(), 0);
-     var startOffset = extradataOffset + 0x20 + seekTableSize;
-     var decryptedFile = st.ToArray();
-     var endPosition = startOffset + vorbHeaderSize;
-     for (var i = startOffset; i < endPosition; i++) decryptedFile[i] = (byte)(decryptedFile[i]^encryptionKey);
-     var oggSize = vorbHeaderSize + streamInfo.StreamSize;
-     scd.MediaFiles.Add(decryptedFile.Skip((int)startOffset).Take((int)oggSize).ToArray());
+     var extradataOffset = 0u;
+     if (streamInfo.AuxChunkCount > 0)
+     {
+         if (st.Length < 8)
+             throw new InvalidDataException("SCD OGG extradata header truncated.");
+         extradataOffset += BitConverter.ToUInt32(st, 0x04);
+     }
+     var keyIndex = (int)extradataOffset + 0x02;
+     if (keyIndex >= st.Length)
+         throw new InvalidDataException("SCD OGG encryption key offset beyond payload.");
+     var encryptionKey = st[keyIndex];
+     var seekTabIndex = (int)extradataOffset + 0x10;
+     var vorbSizeIndex = (int)extradataOffset + 0x14;
+     if (seekTabIndex + 4 > st.Length || vorbSizeIndex + 4 > st.Length)
+         throw new InvalidDataException("SCD OGG header fields truncated.");
+     var seekTableSize = BitConverter.ToUInt32(st, seekTabIndex);
+     var vorbHeaderSize = BitConverter.ToUInt32(st, vorbSizeIndex);
+     var startOffset = (long)extradataOffset + 0x20 + seekTableSize;
+     if (startOffset < 0 || startOffset > st.Length)
+         throw new InvalidDataException("SCD OGG start offset beyond payload.");
+     var decryptedFile = (byte[])st.Clone();
+     var endPosition = startOffset + vorbHeaderSize;
+     if (endPosition > decryptedFile.Length)
+         throw new InvalidDataException("SCD OGG vorb header truncated.");
+     for (var i = (int)startOffset; i < (int)endPosition; i++)
+         decryptedFile[i] = (byte)(decryptedFile[i] ^ encryptionKey);
+     var oggSize = (long)vorbHeaderSize + streamInfo.StreamSize;
+     if (startOffset + oggSize > decryptedFile.Length)
+         throw new InvalidDataException("SCD OGG size exceeds payload.");
+     scd.MediaFiles.Add(decryptedFile.AsSpan((int)startOffset, (int)oggSize).ToArray());
      break;
}
122-144: Bug: RIFF chunk size emitted as 8 bytes (long) instead of 4 bytes; also guard fmt chunk size
var length = streamSize + (0x4e - 0x8) promotes to long due to uint + int, so BitConverter.GetBytes(length) writes 8 bytes, breaking the RIFF header. Cast to 32-bit. Also ensure st has at least 0x32 bytes before slicing.

  var streamSize = streamInfo.StreamSize;
- var channelCount = streamInfo.ChannelCount;
- var sampleRate = streamInfo.SampleRate;
+ var channelCount = streamInfo.ChannelCount; // (currently unused)
+ var sampleRate = streamInfo.SampleRate; // (currently unused)
- var length = streamSize + (0x4e - 0x8);
+ if (st.Length < 0x32)
+     throw new InvalidDataException("MSADPCM fmt chunk truncated (need 0x32 bytes).");
+ var length = checked((int)(streamSize + 0x46)); // overall RIFF size minus 8
  var msadpcm = Array.Empty<byte>()
      .Concat(BitConverter.GetBytes(0x46464952)) //"RIFF"
-     .Concat(BitConverter.GetBytes(length)) //overall file size - 8
+     .Concat(BitConverter.GetBytes(length)) // overall file size - 8 (4 bytes)
      .Concat(BitConverter.GetBytes(0x45564157)) //"WAVE"
      .Concat(BitConverter.GetBytes(0x20746D66)) //"fmt "
      .Concat(BitConverter.GetBytes(0x32))
      .Concat(st.Take(0x32))
      .Concat(BitConverter.GetBytes(0x61746164)) //"data"
      .Concat(BitConverter.GetBytes((int)streamSize))
      .Concat(st.Skip(0x32))
      .ToArray();

Optional: remove channelCount/sampleRate locals if intentionally unused to avoid warnings.
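The size mismatch at the heart of this bug can be shown in isolation (a standalone sketch, not repository code):

```csharp
using System;

class RiffSizeDemo
{
    static void Main()
    {
        uint streamSize = 0x1000;
        // uint + int has no common operator, so both operands promote to long
        // and GetBytes emits 8 bytes...
        var promoted = streamSize + (0x4e - 0x8);
        Console.WriteLine(BitConverter.GetBytes(promoted).Length);      // 8 bytes
        // ...while a 32-bit value emits the 4 bytes a RIFF size field expects.
        Console.WriteLine(BitConverter.GetBytes((int)promoted).Length); // 4 bytes
    }
}
```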
🧹 Nitpick comments (6)
OpenKh.Bbs/Scd.cs (6)
5-5: Optional: avoid LINQ allocations in binary parsing hot path
Skip/Take/Concat allocate and are used inside per-entry parsing. Consider using spans and BitConverter overloads instead; you'll reduce GC pressure during bulk SCD reads. If you adopt the span approach in codec parsing (see suggested diffs below), using System.Linq may become unnecessary here.
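A small sketch of what the span-based reads could look like (hypothetical locals `st`, `extradataOffset`, `startOffset`, and `oggSize` taken from the codec-6 path above):

```csharp
// Sketch: read fields directly from the byte[] instead of Skip/Take/ToArray chains.
var seekTableSize = BitConverter.ToUInt32(st, (int)extradataOffset + 0x10);
var vorbHeaderSize = BitConverter.ToUInt32(st, (int)extradataOffset + 0x14);
// Slice without LINQ; AsSpan avoids the intermediate enumerator allocations.
var ogg = st.AsSpan((int)startOffset, (int)oggSize).ToArray();
```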
45-45: Prefer a strongly-typed enum for codec values instead of magic numbers
Improves readability and prevents accidental misuse.
Example:
+ public enum ScdCodec : uint { Ogg = 6, MsAdpcm = 12 }
- [Data] public uint Codec { get; set; } //6 = .ogg, 12 = MSADPCM .wav
+ [Data] public ScdCodec Codec { get; set; } // 6 = Ogg, 12 = MSADPCM (.wav)
This will require updating switch (streamInfo.Codec) cases accordingly.
65-70: Public mutable fields and C# 12 collection expressions: verify language version; consider tightening mutability
Two points:
- [] collection expressions require C# 12. Confirm the repo's LangVersion/TFM targets support it, or switch to new() for broader compatibility.
- Exposing these collections and headers as public mutable fields invites accidental external mutation. Consider properties with private setters or exposing IReadOnlyList<T> for consumers.
Minimal compatibility tweak:
- public Header FileHeader = new(); - public TableOffsetHeader tableOffsetHeader = new(); - public List<StreamHeader> StreamHeaders = new(); - public List<byte[]> StreamFiles = new(); - public List<byte[]> MediaFiles = new(); + public Header FileHeader { get; private set; } = new(); + public TableOffsetHeader TableOffsets { get; private set; } = new(); + public List<StreamHeader> StreamHeaders { get; } = new(); + public List<byte[]> StreamFiles { get; } = new(); + public List<byte[]> MediaFiles { get; } = new();Follow-up: update references to
tableOffsetHeaderin this file if you adopt the property rename.
73-80: Fail fast on invalid headers (magic/version) before deeper parsingValidate
MagicCode,FileVersion, andSSCFVersionimmediately after reading. This prevents cascading errors on wrong/truncated inputs.var scd = new Scd { FileHeader = BinaryMapping.ReadObject<Header>(stream), - tableOffsetHeader = BinaryMapping.ReadObject<TableOffsetHeader>(stream), + tableOffsetHeader = BinaryMapping.ReadObject<TableOffsetHeader>(stream), }; + + if (scd.FileHeader.MagicCode != MagicCode) + throw new InvalidDataException($"Invalid SCD magic: 0x{scd.FileHeader.MagicCode:X16}."); + if (scd.FileHeader.FileVersion != FileVersion || scd.FileHeader.SSCFVersion != SSCFVersion) + throw new InvalidDataException($"Unsupported SCD version (File={scd.FileHeader.FileVersion}, SSCF=0x{scd.FileHeader.SSCFVersion:X}).");
81-85: Bounds-check the offset table to avoid reading past end-of-streamGuard the read of
Table1Offsetand the offsets array size (Table1ElementCount * 4).- stream.Seek(scd.tableOffsetHeader.Table1Offset, SeekOrigin.Begin); - - var soundOffsets = new List<uint>(); - for (var i = 0; i < scd.tableOffsetHeader.Table1ElementCount; i++) soundOffsets.Add(stream.ReadUInt32()); + var offsetsPos = (long)scd.tableOffsetHeader.Table1Offset; + if (offsetsPos < 0 || offsetsPos > stream.Length) + throw new InvalidDataException($"Table1Offset points outside stream: 0x{offsetsPos:X}."); + var requiredBytes = (long)scd.tableOffsetHeader.Table1ElementCount * sizeof(uint); + if (offsetsPos + requiredBytes > stream.Length) + throw new InvalidDataException($"Offsets table truncated: need {requiredBytes} bytes from 0x{offsetsPos:X}."); + + stream.Seek(offsetsPos, SeekOrigin.Begin); + var soundOffsets = new List<uint>((int)scd.tableOffsetHeader.Table1ElementCount); + for (var i = 0; i < scd.tableOffsetHeader.Table1ElementCount; i++) + soundOffsets.Add(stream.ReadUInt32());
145-150: Confirm downstream consumers expect null in MediaFiles for unsupported codecsIf not required, consider
Array.Empty<byte>()to avoid null-checks by callers.- scd.MediaFiles.Add(null); + scd.MediaFiles.Add(Array.Empty<byte>());
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
OpenKh.Bbs/Scd.cs (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
OpenKh.Bbs/Scd.cs (1)
OpenKh.Common/StreamExtensions.cs (1)
ReadUInt32 (89-95)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Analyze (csharp)
- GitHub Check: build
About the build error
It comes from the collection expressions; see "The array reference type - C# reference" on Microsoft Learn.
Fix it with a traditional expression.
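For instance (field names taken from the Scd.cs diff above), the failing initializer and its traditional equivalent look like:

```csharp
// Collection expression: requires C# 12 / a recent LangVersion.
public List<StreamHeader> StreamHeaders = [];

// Traditional expression: compiles on earlier language versions too.
public List<StreamHeader> StreamHeaders = new List<StreamHeader>();
```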

As that PR is still a draft, I have cherry-picked all the non-Godot code from the branch, as it has some nice improvements for native Linux support and cleanups. Unfortunately I don't know how to do a partial cherry-pick and keep Frozenreflex as the author of the other commits.
Summary by CodeRabbit
New Features
Documentation