nerdculture.de is one of the many independent Mastodon servers you can use to participate in the fediverse.
#filesystem


#ReleaseWednesday Just pushed a new version of thi.ng/block-fs, now with additional multi-command CLI tooling to convert & bundle a local file system tree into a single block-based binary blob (e.g. for bundling assets, distributing a virtual filesystem as part of a web app, snapshot testing, or as a bridge for WASM interop etc.)

Also new, the main API now includes a `.readAsObjectURL()` method to wrap files as URLs to binary blobs with associated MIME types, thereby making it trivial to use the virtual filesystem for sourcing stored images and other assets for direct use in the browser...

(Ps. For more context see other recent announcement: mastodon.thi.ng/@toxi/11426498)

#ThingUmbrella #ReleaseTuesday... New package (initial alpha release):

thi.ng/block-fs provides highly customizable & extensible block-based storage with an optional hierarchical filesystem layer. This is useful anywhere you might need a virtual filesystem, though the storage providers can also be used without the filesystem layer (e.g. for #Forth-style block data/editors).

The default configuration provides:

- arbitrarily nested directories
- filenames (UTF-8) of max. 31 bytes per directory level
- max. 32 owner IDs
- file locking
- creation/modification timestamps (64 bit)
- efficient append writes

Currently included storage providers: TypedArray-based in-memory and host filesystem based file storage (one block per file). More are planned (e.g. IndexedDB, remote endpoint)...
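The general idea of a block layer with pluggable storage providers can be sketched in a few lines (a language-agnostic illustration only; this is not thi.ng/block-fs's actual TypeScript API, and all names here are made up):

```python
# Sketch of block-based storage with swappable providers: the filesystem
# layer only ever asks for "block N", and a provider decides where those
# bytes actually live (memory, one file per block, IndexedDB, ...).
from abc import ABC, abstractmethod

BLOCK_SIZE = 512

class BlockProvider(ABC):
    @abstractmethod
    def read_block(self, n: int) -> bytes: ...

    @abstractmethod
    def write_block(self, n: int, data: bytes) -> None: ...

class MemoryProvider(BlockProvider):
    """In-memory provider, analogous to a TypedArray-backed one."""
    def __init__(self, num_blocks: int):
        self.buf = bytearray(num_blocks * BLOCK_SIZE)

    def read_block(self, n: int) -> bytes:
        return bytes(self.buf[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

    def write_block(self, n: int, data: bytes) -> None:
        assert len(data) <= BLOCK_SIZE
        self.buf[n * BLOCK_SIZE:n * BLOCK_SIZE + len(data)] = data

store = MemoryProvider(num_blocks=16)
store.write_block(3, b"hello")
print(store.read_block(3)[:5])  # b'hello'
```

A host-filesystem provider would implement the same two methods against one file per block, which is why the layers above it don't need to change.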

The readme is currently still lacking various diagrams to illustrate the filesystem internals. I will add those ASAP...

Linux 6.15’s exFAT file deletion performance boosted

A recent development has been spotted in the upcoming Linux 6.15 kernel: a big improvement to the exFAT file system implementation in how it deletes files when the "discard" mount option is used. The improvement saves significant time; in a test, deleting a large file took 1.6 seconds after the merge, compared to a total of more than 4 minutes before.

The pull request ensures that, upon file deletion, contiguous clusters (that is, clusters that are next to each other) are discarded in batches instead of one by one. In prior kernels, such as 6.14, "if the discard mount option is enabled, the file's clusters are discarded when they are freed. Discarding clusters one by one will significantly reduce performance. Poor performance may cause soft lockup when lots of clusters are freed."
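The batching idea itself is simple and can be sketched in a few lines of Python (illustrative only; the actual change is C code inside the kernel's exFAT driver):

```python
def contiguous_runs(clusters):
    """Group cluster numbers into (start, length) runs of contiguous clusters."""
    runs = []
    for c in sorted(clusters):
        if runs and c == runs[-1][0] + runs[-1][1]:
            # Extends the current run by one cluster.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # Gap found: start a new run.
            runs.append((c, 1))
    return runs

# Instead of issuing one discard request per freed cluster,
# the driver can now issue one request per contiguous run:
freed = [10, 11, 12, 40, 41, 99]
print(contiguous_runs(freed))  # [(10, 3), (40, 2), (99, 1)]
```

For a large file whose clusters are mostly contiguous, this collapses millions of discard requests into a handful, which is where the minutes-to-seconds improvement comes from.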

The change was introduced in commit a36e0ab. Since then, the pull request has been merged into the kernel and will be included in the first release candidate of Linux 6.15. A simple performance benchmark was run with the following commands:

# truncate -s 80G /mnt/file
# time rm /mnt/file

Without this commit, performance is poor: about 4 minutes and 46 seconds of real time, with 12 seconds of system time. With the patched kernel, the same deletion takes about 1 second of real time, with 17 milliseconds of system time.

It’s a huge improvement!


SysV filesystem is being removed from Linux 6.15

In the old Unix days, there was a filesystem driver that implemented the Xenix FS, Coherent Unix FS, and SystemV/386 FS. It let applications on Linux access mass storage formatted with these legacy filesystems, including their files and folders.

The former maintainer of this filesystem support in Linux orphaned it back in 2023, saying that there was no way to test it and that removal was likely in the future.

The future has come: Jan Kara from the SUSE team has pushed a commit to the VFS git tree that removes all SysV filesystem code from Linux, which confirms that, starting with Linux 6.15, you won't be able to access these legacy filesystems. The removal was prompted in part by a bug that Google's Linux kernel fuzzer, syzkaller, automatically reported back in 2023, in which SysV code called a sleeping function from an invalid context.

As practically nobody uses this filesystem in a Linux installation, it's safe to remove its support from the kernel. This only affects computers that have both Linux and a legacy Unix system using this antique filesystem installed, and the number of such computers is vanishingly small.

Once Linux 6.15 gets released, you won’t be able to use any partitions that use this filesystem.

https://audiomack.com/aptivi/song/sysv-filesystem-is-being-removed-from-linux-615

> https://github.com/tuxera/ntfs-3g/wiki/Manual#alternate-data-streams-ads

Wait, so #NTFS does some weird files-as-objects-with-slots but still untyped binary streams thing?

Damn, with that and #Transactional NTFS, it really *is* the closest thing to the #database #filesystem I wish for (which would be typed, of course).

Shame that got deprecated. (It'd still be lacking #integrity features too but damn, so close yet so far.)
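The "files as objects with named, untyped binary slots" model can be sketched abstractly (a toy model for illustration, nothing like NTFS's on-disk format; on Windows, alternate data streams are addressed as `file.txt:streamname`):

```python
# Toy model of NTFS-style alternate data streams (ADS): a file is an
# object holding several named, untyped byte streams. The unnamed ("")
# stream is what ordinary tools see as "the file's content".
class AdsFile:
    def __init__(self, data: bytes = b""):
        self.streams: dict[str, bytes] = {"": data}

    def write(self, data: bytes, stream: str = "") -> None:
        self.streams[stream] = data

    def read(self, stream: str = "") -> bytes:
        return self.streams[stream]

f = AdsFile(b"visible content")
f.write(b"hidden note", stream="Zone.Identifier")
print(f.read())           # b'visible content'
print(sorted(f.streams))  # ['', 'Zone.Identifier']
```

The "untyped" complaint above is visible here: every slot is just bytes, with the MIME type or schema left entirely to convention.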

hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.

I like ZFS, but the fact that it's not in-tree in the kernel is an issue for me. What I need most here is reliability and stability (specifically regarding parity); integrity is the need. Reads/writes don't have to be blazingly fast (not that I'm mad about it).

I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; in the VM, it's formatted as XFS. That "seems" fine in limited testing thus far (and seems fast?, so it does seem like the defaults got the striping correct), but I kind of hate how I have multiple levels of abstraction here.

I don't think there's been any change on the #BTRFS front re: RAID-like array stability (I like and use BTRFS for single-disk filesystems), although I would love for that to be different.

I'm open to #LVM, etc., or whatever might help me stay in-tree and up to date. Thank you! Boosts appreciated and welcome.

#techPosting

#btrfs-progs 6.13 is out:

lore.kernel.org/all/2025021423

github.com/kdave/btrfs-progs/r

Some highlights:

mkfs:
* new option to enable compression
* updated summary (subvolumes, compression)

scrub:
* start: new option --limit to set the bandwidth limit for the duration of the run

btrfstune:
* add option to remove squota

other:
* a bit more optimized crc32c code


Ok... this is weird. One of my #Raspis decided to not start some services after a reboot. Turns out the / file system was read only. Somehow, /etc/fstab got a "o" at the end of the line for the root file system.

Remounted rw, removed this "o" from fstab, rebooted and everything is back to normal.

e2fsck showed no errors so... ??? Maybe modifying files via an SSH client on a tiny smartphone screen is not the best idea.
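A stray character in /etc/fstab is easy to miss by eye; a tiny sanity check can flag malformed entries like that stray "o" (a quick sketch assuming the common 6-field form; real fstab parsing has more corner cases, e.g. entries with the dump/pass fields omitted):

```python
def check_fstab_line(line: str):
    """Return a problem description for an fstab entry, or None if it looks sane."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # blank line or comment
    fields = line.split()
    if len(fields) != 6:
        return f"expected 6 fields, got {len(fields)}"
    dump, passno = fields[4], fields[5]
    # The last two fields (dump frequency, fsck pass number) must be numeric,
    # so a stray trailing character like "1o" shows up here.
    if not (dump.isdigit() and passno.isdigit()):
        return f"dump/pass fields must be numeric, got {dump!r} {passno!r}"
    return None

good = "UUID=abcd / ext4 defaults,noatime 0 1"
bad  = "UUID=abcd / ext4 defaults,noatime 0 1o"  # stray trailing "o"
print(check_fstab_line(good))  # None
print(check_fstab_line(bad))
```

Running something like this (or simply `findmnt --verify`) after editing fstab on a tiny screen would have caught the typo before the reboot.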