Can we please put this to rest: XFS does not explicitly null files on a crash, it is not doing this out of security concerns, it is not wishing it ran on super-conducting flux-capacitor enhanced hard drives, and the old behavior, which has almost always been wrongly explained, has been fixed for over a year now.
From: Lachlan McIlroy <lachlan@sgi.com>
Date: Tue, 8 May 2007 03:49:46 +0000 (+1000)
Subject: [XFS] Fix to prevent the notorious 'NULL files' problem after a crash.
X-Git-Tag: v2.6.22-rc1~353^2~5

[XFS] Fix to prevent the notorious 'NULL files' problem after a crash.

The problem that has been addressed is that of synchronising updates of the file size with writes that extend a file. Without the fix the update of a file's size, as a result of a write beyond eof, is independent of when the cached data is flushed to disk. Often the file size update would be written to the filesystem log before the data is flushed to disk. When a system crashes between these two events and the filesystem log is replayed on mount the file's size will be set but since the contents never made it to disk the file is full of holes. If some of the cached data was flushed to disk then it may just be a section of the file at the end that has holes.

There are existing fixes to help alleviate this problem, particularly in the case where a file has been truncated, that force cached data to be flushed to disk when the file is closed. If the system crashes while the file(s) are still open then this flushing will never occur.

The fix that we have implemented is to introduce a second file size, called the in-memory file size, that represents the current file size as viewed by the user. The existing file size, called the on-disk file size, is the one that gets written to the filesystem log and we only update it when it is safe to do so. When we write to a file beyond eof we only update the in-memory file size in the write operation. Later when the I/O operation that flushes the cached data to disk completes, an I/O completion routine will update the on-disk file size. The on-disk file size will be updated to the maximum offset of the I/O or to the value of the in-memory file size if the I/O includes eof.
Thanks for the explanation!
Hi Eric,
I still don’t trust XFS with my data. See: http://blog.flameeyes.eu/2008/12/09/filesystems-take-two/ for one recent instance.
The powers that be are aligning behind ext4, and maybe Btrfs in the future. That leaves JFS, XFS and Reiser3/4 in a precarious position. ext4, at least in theory, should deliver performance similar to or better than any of them, and has a good number of advanced technologies as well. Btrfs surpasses even XFS in ambition, and has broad vendor support behind it as well.
The roadmap on xfs.org essentially shows more of the same with continued maintenance but no real plan forward (which is probably the right step and is for example what JFS is doing). Is this the correct message to take away, or are there plans to revitalize XFS and keep it competitive in the coming years?
Reading the blog you link to, it’s a vague reference to an unspecified problem in which the author mentions XFS. If you wish to use that as a basis for not trusting it, that’s your prerogative. :)
As for the roadmap, there are some intriguing things such as checksumming most everything for error detection, and parent pointers for efficient fault isolation & repair. Not super-sexy ZFS-killer features, but nonetheless items that should keep XFS useful for a long time to come if they come to fruition.
I don’t agree that XFS is in particular need of “revitalization” at this point; it’s unlikely to get many bolt-ons to make it flashy and new; it does what it does today well, which is to handle extremely large files & filesystems at full disk bandwidth. For many workloads there’s not much that competes with it on Linux today.
Hello,
I need to recover data from XFS on Linux; which open-source tool do you recommend?
regards
Depends on what you mean by “recovery” and what the damage is. You’d be best served by asking your question, with as much detail about your situation as possible, on the mailing list at linux-xfs@vger.kernel.org
Thanks for the answer. By mistake I deleted a subdirectory from the console, and I have no backup of it. Thanks to the people at Mozilla, who decided to migrate Firefox Sync, I have now lost my whole local profile, which has a lot of data I need to recover. This is also the first time I have used XFS, on the recommendation of many people who work in IT, and I have not found anyone who knew enough to give me advice on the matter.
There is no official tool or method to recover deleted files; something like photorec may be your best bet. Do not mount or use the filesystem in the meantime; doing so will only increase the chances that more data gets overwritten.
Thanks for the answer again. I tried this method but it did not recover anything. I also tried a tool recommended in another forum, called R-Explorer, but since it is licensed software it is very limited, and it did not bring back anything recent either. Thank you for your help, but I am giving the data up for lost. I will stop trusting the people at Mozilla; migrating something like this without warning their registered users leaves much to be desired.
Thanks for everything