
Jnewhouse/write path #3

Open

jacksonrnewhouse wants to merge 4 commits into markovejnovic/some-decent-performance from jnewhouse/write_path

Conversation

@jacksonrnewhouse
Collaborator

This has my changes so far, namely

  • Use async fn for the SsfsBackend implementation. This is cleaner and requires less copying.
  • Detect the uid and gid of the current user rather than hardcoding them.
  • First version of write support. There is currently a race because writes are pushed to the remote in a separate thread, so you can get stale reads. That's what I'll fix next.

jacksonrnewhouse and others added 4 commits January 30, 2026 11:04
Convert the MesaBackend implementation of SsfsBackend from explicit
future construction (cloning self fields and wrapping in async move)
to native async fn syntax. This simplifies the code by removing
unnecessary clones and the manual async move block.

The trait definition retains `impl Future<...> + Send` for the Send
bound requirement, but implementations can use async fn which
desugars compatibly.
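
As a rough illustration of that pattern (the trait and method names below are hypothetical stand-ins, not the actual SsfsBackend definition): the trait keeps the explicit `impl Future<...> + Send` return type, and the implementation is a plain `async fn`.

```rust
use std::future::Future;

// Hypothetical stand-in for the real backend trait.
trait Backend {
    // Declared with an explicit return type so the Send bound can be stated.
    fn read_file(&self, path: String) -> impl Future<Output = std::io::Result<Vec<u8>>> + Send;
}

struct MemoryBackend {
    contents: Vec<u8>,
}

impl Backend for MemoryBackend {
    // The impl can use `async fn` directly: it desugars to the same
    // `impl Future` return type, and the compiler checks that the returned
    // future is Send. No field clones or manual `async move` wrapper needed.
    async fn read_file(&self, _path: String) -> std::io::Result<Vec<u8>> {
        Ok(self.contents.clone())
    }
}
```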

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace hardcoded uid/gid values (1000) with the actual uid/gid of
the current process using nix::unistd::getuid/getgid. This ensures
files appear owned by the user running the filesystem, which is
necessary for proper permission checking.
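
For reference, a minimal sketch of how that lookup can be done with the nix crate (the helper name here is illustrative):

```rust
use nix::unistd::{getgid, getuid};

// Numeric uid/gid of the process running the filesystem, used in place
// of the previously hardcoded 1000/1000 when reporting file attributes.
fn current_ownership() -> (u32, u32) {
    (getuid().as_raw(), getgid().as_raw())
}
```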

Also refactors the From<INodeHandle> trait implementation to a
standalone function for better flexibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add writable mode (`--writable` flag) that allows creating, modifying,
and deleting files. Changes are immediately committed to the remote
repository via the Mesa API.

Key changes:
- Add `--writable` CLI flag that enables write operations
- Read git config user.name/email for commit authorship
- Implement FUSE operations: create, write, unlink, setattr, fsync
- Add background task that processes commit requests asynchronously
- Extend SsfsBackend trait with create_file, update_file, delete_file
  methods (default to ReadOnly error for backwards compatibility)
- Add SsFs helper methods for cache manipulation: insert_file,
  update_file_size, remove_file, get_path, backend accessor
- Add CommitEncoding enum to mesa-dev for specifying content encoding

When mounted in writable mode, file permissions show 0o644 instead of
0o444 to indicate writeability. The filesystem uses a channel to queue
commit requests, which are processed asynchronously by a background task.
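
A rough sketch of the enqueue side of that channel, with hypothetical names (the real CommitRequest and FUSE handler carry more variants and fields):

```rust
use tokio::sync::mpsc::UnboundedSender;

// Hypothetical request type; see the worker sketch further below for the
// receiving side.
enum CommitRequest {
    Update { path: String, content: Vec<u8> },
}

struct WriteHandler {
    commit_tx: UnboundedSender<CommitRequest>,
}

impl WriteHandler {
    // The FUSE callback only enqueues the commit and returns immediately;
    // the background worker performs the actual Mesa API call later.
    fn on_write(&self, path: &str, new_contents: Vec<u8>) {
        let _ = self.commit_tx.send(CommitRequest::Update {
            path: path.to_string(),
            content: new_contents,
        });
    }
}
```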

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move the background commit task from MesaFS::new into a separate
commit_worker module. This improves code organization by:

- Isolating the commit processing logic in its own module
- Introducing CommitWorkerConfig to bundle worker parameters
- Making the commit request types and spawn function reusable

The CommitRequest enum and spawn_commit_worker function are now
public, allowing potential reuse in other contexts.
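
A sketch of what that public surface might look like (the config fields and enum variants are guesses, not the actual module contents):

```rust
use tokio::sync::mpsc;
use tokio::task::JoinHandle;

// Hypothetical request and config shapes for the commit worker.
pub enum CommitRequest {
    Create { path: String, content: Vec<u8> },
    Update { path: String, content: Vec<u8> },
    Delete { path: String },
}

pub struct CommitWorkerConfig {
    pub author_name: String,
    pub author_email: String,
}

// Spawns the background task and returns the sender the FUSE layer uses
// to enqueue commits, plus the task handle.
pub fn spawn_commit_worker(
    config: CommitWorkerConfig,
) -> (mpsc::UnboundedSender<CommitRequest>, JoinHandle<()>) {
    let (tx, mut rx) = mpsc::unbounded_channel();
    let handle = tokio::spawn(async move {
        while let Some(request) = rx.recv().await {
            // In the real module each request becomes a Mesa API commit
            // authored as config.author_name / config.author_email.
            let _ = (&config, request);
        }
    });
    (tx, handle)
}
```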

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

@mesa-dot-dev mesa-dot-dev bot left a comment

Performed full review of 670f0cc...125015d

Analysis

  1. The design lacks a local transaction layer: every FUSE mutation directly triggers an asynchronous API call, with no buffering or batching.
  2. There are no read-after-write consistency guarantees, since file contents are always reconstructed from remote state rather than from the local cache.
  3. Multi-block operations are inefficiently split into many independent commits with no ordering guarantees.
  4. Critical filesystem operations like fsync are implemented as no-ops, failing to provide the expected durability semantics.
  5. There is no error-propagation mechanism from the commit worker back to the FUSE layer, which can lead to silent failures.


0 files reviewed | 2 comments

crates/git-fs/src/mesafuse.rs (around line 518):

};

// Read current content
let current_content = match self.ssfs.read(ino) {

High

Because each write() call rebuilds the file from self.ssfs.read(ino) (lines 518‑548) and then enqueues the whole blob via backend.update_file (line 559), multi-chunk writes are effectively lossy. FUSE will typically send a large write as several 4 KiB chunks; after the first chunk we enqueue a commit but the remote repo still contains the old version when the next chunk arrives, so self.ssfs.read returns stale bytes and the new buffer only contains that one chunk. The final commit therefore only includes the last chunk that arrived before the worker caught up, corrupting any non-trivial write. We also never mutate the in-memory cache beyond update_file_size, so read-after-write sees the same stale data.

Please keep the dirty data local until all chunks for that file descriptor have been applied (e.g., track a per-inode buffer and update it directly), and only enqueue a commit once the file contents reflect the combined write. At minimum, stop re-fetching from the remote backend while commits are still outstanding.
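
A minimal sketch of the kind of per-inode buffering this suggests (the types and names below are hypothetical, not existing MesaFS structures):

```rust
use std::collections::HashMap;

// Hypothetical per-inode write buffer: incoming chunks are applied locally,
// and the combined contents are only handed to the commit queue on flush/release.
struct DirtyFiles {
    buffers: HashMap<u64, Vec<u8>>,
}

impl DirtyFiles {
    // `base` supplies the cached file contents, fetched once per inode rather
    // than re-read from the remote on every 4 KiB chunk.
    fn write(&mut self, ino: u64, offset: usize, data: &[u8], base: impl FnOnce() -> Vec<u8>) {
        let buf = self.buffers.entry(ino).or_insert_with(base);
        if buf.len() < offset + data.len() {
            buf.resize(offset + data.len(), 0);
        }
        buf[offset..offset + data.len()].copy_from_slice(data);
    }

    // Called from flush/release: returns the full contents to commit, if dirty.
    // Reads should also consult this buffer first for read-after-write consistency.
    fn take(&mut self, ino: u64) -> Option<Vec<u8>> {
        self.buffers.remove(&ino)
    }
}
```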


crates/git-fs/src/commit_worker.rs (around line 65):

while let Some(request) = commit_rx.recv().await {
    let (message, files) = match request {
        CommitRequest::Create { path, content } => {
            // Use "." for empty files to work around Mesa API bug with empty content

Medium

This "workaround" replaces every empty file with a single '.' byte before base64 encoding (lines 65‑78 and 82‑95). That means any zero-length file we create or truncate is committed with one byte of content, so the file is no longer empty when it reaches Mesa/GitHub. Unless the server really does a special-case replacement on Lg==, this corrupts empty files.

If the Mesa API cannot accept an empty string today, we need a server-side fix or a different protocol knob (e.g., send content: Some(String::new()), or add an explicit flag) rather than mutating user data on the client.
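
For illustration, encoding whatever the user actually wrote with the base64 crate (a sketch; the real encoding path may differ):

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};

// Encode the real content; an empty file encodes to an empty string,
// with no client-side substitution of a '.' sentinel byte.
fn encode_content(content: &[u8]) -> String {
    STANDARD.encode(content)
}

#[test]
fn empty_content_is_not_rewritten() {
    assert_eq!(encode_content(b""), "");
    assert_eq!(encode_content(b"."), "Lg=="); // the workaround's sentinel
}
```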


