Unix / Linux / Ubuntu Forum
Defragment

  Date: Dec 19    Category: Unix / Linux / Ubuntu    Views: 207
  

I am using HH and, being new to Linux, I want to know: is there a way to
defrag the drives, and is it even necessary? I did a search and saw there is a
defrag for the server, but is there one needed for PC users like myself?


10 Answers Found

 
Answer #1    Answered On: Dec 19    

Linux does not need to be defragged. The file system is set up differently than
Windows. I don't know all the reasons why, but Linux assigns extra space on the
disk for a program, and it is always kept in the same place. The Windows file
system may spread the program out over the disk, wherever it can find a place to
put it back after use. You might be able to find more info on the Ubuntu forums
that may explain it better. I read somewhere that defragging could mess up your
system.

 
Answer #2    Answered On: Dec 19    

I thought it might work like that, but being a Linux
newbie, I wanted to make sure I understood.

 
Answer #3    Answered On: Dec 19    

The only time you might have problems is when your drive(s) start to get very
close to full, in which case the file system has no option but to fragment
files. I'm not sure, though; I think once you resolve the space issues it takes
care of the rest.

 
Answer #4    Answered On: Dec 19    

That was one of my first questions when I started using Ubuntu. That's what's
good about this group. I sometimes get answers just from reading posts, without
even having to ask.

 
Answer #5    Answered On: Dec 19    

One nice thing about Ubuntu is that if your computer's drive does get
fragmented, the operating system will repair the unmountable volume and then
boot up. It will also run a disk check every 50 or so boot cycles.

 
Answer #6    Answered On: Dec 19    

Mine runs a disk check every 33 boots.
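The interval between those checks is a tunable property of the ext filesystem itself. A minimal sketch of finding it, assuming an ext2/ext3 root partition; the device name is discovered rather than assumed, and the tune2fs commands are only printed here so nothing on your disk is modified:

```shell
# Find the device backing the root filesystem.
DEV=$(df -P / | awk 'NR == 2 {print $1}')
echo "Root filesystem device: $DEV"

# tune2fs -l prints, among other things, "Mount count" and
# "Maximum mount count" -- when the first reaches the second,
# a check is scheduled on the next boot. We only print the
# commands here; actually running them needs root.
echo "Inspect with:  sudo tune2fs -l $DEV | grep -i 'mount count'"
echo "Change with:   sudo tune2fs -c 50 $DEV   # check every 50 mounts"
```

tune2fs is part of e2fsprogs and only applies to ext-family filesystems.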

 
Answer #7    Answered On: Dec 19    

Please tell me, where can I find some programs and source code written in
Gambas? Gambas is really wonderful. But I will study Python further. Right now
I feel rather discouraged.

 
Answer #8    Answered On: Dec 19    

This is a very hot topic on the net lately. A user on his own website posted how
to defrag on Linux which is not easy to do. His starting point was that it is a
mistake to believe that Linux drives do not get fragmented. Then he got a lot of
people saying that it was unnecessary and flaming him for even suggesting it.


While it is a contentious subject to be sure, most people agree that you do not
need to worry about it provided you don't let your drive get over 80% full.
After that, large files can become fragmented, as the file system has no
alternative but to put the file in the available spots, breaking it up as
necessary. However, if you delete some files, the file system will move the
large files in the background and defragment the drive in the process.
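A quick way to keep an eye on that threshold with standard tools (the 80% figure is the rule of thumb from this thread, not a hard limit):

```shell
# Print any mounted filesystem that is more than 80% full.
# $5 is df's "Capacity" column (e.g. "91%"); adding 0 strips the "%".
df -P | awk 'NR > 1 && $5+0 > 80 {print $6 " is " $5 " full"}'
```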

The big difference is that Linux file systems (ext3 and ReiserFS) are
journaled, and they spread files across the drive, leaving space for each file
to grow. Journaling is a way of keeping track of the file system and the
changes made to it: all changes are saved to a log, so if the system crashes,
the journal helps to protect against truncated files.

 
Answer #9    Answered On: Dec 19    

Yes, defragging can mess you up. You need to work on an unmounted file system
for starters, and if you have only one bootable partition then you must use a
live disk or something similar, and check to make sure the disk is not mounted
before working on it.
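A minimal sketch of that safety check, using /proc/mounts; /dev/sdb1 is a placeholder device name, and the fsck line is left commented out:

```shell
# Refuse to fsck a device that is currently mounted.
DEV=/dev/sdb1   # placeholder -- substitute the device you want to check
if grep -q "^$DEV " /proc/mounts; then
    echo "$DEV is mounted; unmount it first (sudo umount $DEV)"
else
    echo "$DEV is not mounted; safe to check"
    # sudo fsck -n "$DEV"   # -n: report problems but change nothing
fi
```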

Basically Windows file systems are based on older technology. Most people see
NTFS as newer than FAT32, but NTFS came out in 1993 with the advent of NT (the
same year as ext2), while FAT32 came out in 1996. It is of the same generation
as ext2 which is not a journaled file system either. Both ext3 (1999) and
ReiserFS (2001) are newer and ext4 and Reiser4 are both in development.

However, the prevalence of NTFS on servers and its longevity is a sign that it
is a durable file system which can be depended on in the long run. It may not be
the best for everyday users who do things to their system that may cause them
problems later, such as shutting down improperly, but it is certainly a worthy
file system for what it was made for: servers. In the age of TB drives, it may
be time for M$ to look at updating their file system, which has so far stood
them in good stead, but a lot has happened since 1993.

Recent Linux file systems (ext3 and ReiserFS) are all journaled which means they
are more resilient because changes are written to the log file or journal before
they are written to disk. If the file and journal disagree then the file will be
purged. This can happen if a system goes down in between the journaling and the
file change being committed to disk. In Windows files are truncated or worse
still the file table can become corrupted. The one thing not to do in Linux is
to use fsck on a mounted disk or it can mess things up royally. No file system
can protect the user if the user is careless.

Ext2 allows the user to recover deleted files, but ext3 does not. When a file is
deleted, the file locations are zeroed out in ext3 and one must use grep to
recover file parts as best as one can.
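That grep trick works because grep -a will scan arbitrary binary data, including a raw block device, as if it were text. A sketch of the idea on a scratch file; the device name and search phrase are placeholders:

```shell
# Simulate scanning a raw disk image for the remains of a deleted file.
printf 'binary\0junk\nsome unique phrase from the lost file\nmore\0junk\n' > /tmp/disk.img

# -a treats binary data as text; -B/-A grab context lines around the match.
grep -a -B 1 -A 1 'unique phrase' /tmp/disk.img

# Against a real (unmounted!) ext2 partition the pattern is the same:
#   sudo grep -a -B 2 -A 20 'unique phrase' /dev/sdb1 > recovered.txt
rm /tmp/disk.img
```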

Journaling on Linux explanation:
www.ibm.com/developerworks/library/l-fs.html
FAT and NTFS explained:
support.microsoft.com/default.aspx

The other big difference is that Windows file systems store data in sequence
from the beginning of the disk. As a file is added it is put in the first
available location, next to the previous one. If that previous file is added to
afterwards, the extra bits are moved to a new location causing the file to
become fragmented and a gap will exist if the file is reduced in size.

Linux does it differently: it starts not at the beginning but in the middle,
and leaves space to grow when it adds a file. The file system then monitors the
files and moves them silently if they appear to be no longer in use. However,
as the disk fills, even Linux must scramble to fit large files into the
available spaces and will fragment files. When fragmentation is an issue on
Linux it usually shows up on servers, and since there is no decent defrag tool
it can become a problem. The best solution is not to fill your drives up too
full. Windows users at least have a good tool for managing fragmentation.

I am told by file system aficionados that a file system to watch is ZFS from
Sun, which is used on Solaris. Not being so inclined, I have so far resisted
the temptation to try OpenSolaris. I prefer ReiserFS to ext3, but there is
nothing wrong with ext3. Both support volumes up to 16 TB, so there is lots of
potential still in these file systems.

 
Answer #10    Answered On: Dec 19    

I had to do a fsck in HH a while back. What it did was purge 4 or 5
conf files. It moved them all to lost+found, so you now have a way to try and
recover them. Mine were all conf files that the system reconfigured. It at
least lets you know what's messed up.

 



