
Solved: Defragging and Superfetch.

2K views 7 replies 5 participants last post by  uhaligani 
#1 ·
I usually try to offer help on this forum whenever I am competent to do so. I visit several such forums, but have a lot of respect for some of the "boffins" who live here. So I want to throw a coal on the fire for the experts. We know, to different degrees of understanding, what defragging does. It appears to me that Superfetch would tend to turn this process around again, obviating its use. Perhaps I should contact, say, Seagate for an opinion?
Any comments?
 
#5 ·
The Vista defragger uses the data from the layout.ini file, derived from usage data gathered from prefetch, to decide how best to optimize the layout of files on the drive. Some other defraggers, like O&O, optionally use the layout.ini data, too.
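To picture what that layout.ini-driven optimisation amounts to, here's a toy sketch: put the most-used files first so they end up on the fastest part of the disk. The file names and usage counts are made up for illustration; this is the general idea, not the actual Vista algorithm.

```python
# Illustrative only: order files by recorded usage so the hottest ones
# are placed first (nearest the fast outer tracks). Names/counts are
# invented, not real layout.ini contents.
usage = {"ntdll.dll": 120, "winword.exe": 40, "rare_tool.exe": 2}
layout = sorted(usage, key=usage.get, reverse=True)
print(layout)  # ['ntdll.dll', 'winword.exe', 'rare_tool.exe']
```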

Superfetch is a memory-management feature that expands the capabilities of prefetch. Not only does superfetch load parts of your most frequently used programs to memory so that they will load faster, but it keeps a long-term log of your program usage, time of day, frequency, and pattern of usage, to try to predict which programs you will use at any particular time and preload them so that they are more readily available to you. It does not rearrange any files on the hard drive. ReadyBoost is primarily superfetch data.
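As a rough mental model of that long-term usage log, imagine recording which program is launched at which hour of the day, then preloading whichever one is most frequent at the current hour. This is a deliberately simplified sketch of the idea, not Windows' real prediction logic, and the program names are made up.

```python
from collections import Counter, defaultdict

# Toy model of Superfetch-style prediction: log launches by hour of day,
# then suggest the historically most common program for a given hour.
class UsageLog:
    def __init__(self):
        self.by_hour = defaultdict(Counter)

    def record(self, hour, program):
        self.by_hour[hour][program] += 1

    def predict(self, hour):
        counts = self.by_hour.get(hour)
        if not counts:
            return None  # no history for this hour: nothing to preload
        return counts.most_common(1)[0][0]

log = UsageLog()
for h, p in [(9, "outlook"), (9, "outlook"), (9, "word"), (20, "game")]:
    log.record(h, p)
print(log.predict(9))   # -> outlook (most-launched program at 9 a.m.)
```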
 
#6 ·
Thanks Evandil. That is a more informative answer.
That leads me to another question, which was really, I guess, what I was coming around to. If you are, in that case, using prefetch, will this feature grab some of the memory at start-up? I know that, even with 4 gigs of memory, we have plenty to spare, but perhaps marginal-memory users could be cautioned, if my thoughts are correct.
The thing began bugging me when, after frequent off/on experiments, I felt that the prefetch was slowing me down.
 
#7 ·
The memory used doesn't matter so long as the Windows VM (virtual memory manager) isn't having to free pages. It is actually disk contention that could be the issue.

What you absolutely don't want is pre-loading of regularly used programs being done at the expense of what the user is actually asking for right now. If all the commonly used stuff is moved together, on the fastest part of the disk, then when you do something different from usual you'd get read starvation if the OS is optimised for throughput. The user's application blocks on a page fault, waits a long time, then hits another, and so on. Similarly when reading data files: it blocks until the data is fetched, and in the meantime the disk arm may have been moved some distance to "optimise" an anticipated access.

Defragging is going to be necessary on a Windows-style filesystem, because it likes to keep things together rather than leave holes for expansion room. The sparser layout suited to multi-user systems, as used by most common UNIX/Linux filesystems, trades some performance but usually avoids serious fragmentation; the side effect is that writing is only fast while the filesystem is not very full, since using those last blocks becomes very expensive.
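The packed-versus-sparse trade-off can be shown with a toy allocator: packed placement puts files back to back, so a file that later grows has to put its new blocks somewhere else (a second extent), while leaving an expansion gap lets it grow in place. This is purely illustrative, not how any real filesystem allocates.

```python
# Toy contrast: count extents for file "a" after it grows by 2 blocks,
# given how much expansion room (gap_blocks) was left after each file.
def fragments_after_growth(gap_blocks):
    disk = {}        # block number -> file id
    starts = {}
    cursor = 0
    for fid in ("a", "b"):          # write two 4-block files in order
        starts[fid] = cursor
        for _ in range(4):
            disk[cursor] = fid
            cursor += 1
        cursor += gap_blocks        # optional expansion room
    # grow file "a" by 2 blocks: contiguous only if the blocks
    # immediately after it are still free
    end_a = starts["a"] + 4
    grew_contiguously = all(b not in disk for b in (end_a, end_a + 1))
    return 1 if grew_contiguously else 2   # extents for file "a"

print(fragments_after_growth(0))  # packed layout -> 2 (fragmented)
print(fragments_after_growth(2))  # sparse layout -> 1 (contiguous)
```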

Whilst IDE disks nowadays will try to anticipate requests, this always works much better for sequential reads; if a file is very fragmented you get near-random access, which is very slow due to all the disk head movements. An "intelligent" drive might actually offer worse performance in this case, because it makes bad guesses, operating at too low a level to see the real access pattern.
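A back-of-the-envelope model shows why fragmentation hurts so much: every extra fragment costs roughly one seek. The numbers below are assumptions I've picked for illustration (about 10 ms average seek plus rotational latency, about 100 MB/s sequential transfer), not measurements of any particular drive.

```python
# Rough read-time model: seeks dominate once a file is fragmented.
SEEK_MS = 10.0           # assumed average seek + rotational latency
THROUGHPUT_MB_S = 100.0  # assumed sequential transfer rate

def read_time_ms(file_mb, fragments):
    transfer = file_mb / THROUGHPUT_MB_S * 1000.0  # streaming time
    return fragments * SEEK_MS + transfer          # one seek per fragment

contig = read_time_ms(100, 1)    # one seek, then pure streaming
frag = read_time_ms(100, 500)    # 500 fragments -> 500 seeks
print(f"contiguous: {contig:.0f} ms, fragmented: {frag:.0f} ms")
# contiguous: 1010 ms, fragmented: 6000 ms
```

Under these assumptions the fragmented read takes roughly six times as long, entirely from head movement, which is why a near-sequential layout matters.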

The low-tech solution is to turn the machine on, then go and have a coffee.
 