Change Auto-Refresh Rate
Whatever the refresh rate is, I have folders large enough to cause Xplorer2 to be unresponsive because it is ALWAYS refreshing. Is there a way to change the refresh timeout? Yes, I know I can turn it off, but that does not serve my purpose.
I am inclined to believe that having only inactive tabs skip refreshing will not be enough. I can have one panel displaying a folder with > 10K files and a fairly active file change rate, and Xplorer2 becomes very hard to work with. I have found I can use Alt-F1 to navigate to another drive away from this folder and Xplorer2 will eventually put up the drive list, but if this is one of the folders of interest, that is not really a solution.
What would help me more is being able to tell Xplorer2 to wait longer before fetching the folder contents. That way I have a window of opportunity to accomplish something.
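The behavior being requested here amounts to debouncing: coalesce change notifications and only rescan the folder after a quiet period. This is not how xplorer2 actually works internally; the sketch below is just a minimal illustration of the idea in Python, with an invented `DebouncedRefresh` class.

```python
import threading

class DebouncedRefresh:
    """Run `action` only after `delay` seconds with no new notifications.

    Each call to notify() cancels any pending timer and starts a new one,
    so a burst of file-change events triggers a single refresh at the end.
    """

    def __init__(self, delay: float, action):
        self.delay = delay
        self.action = action
        self._timer = None
        self._lock = threading.Lock()

    def notify(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # restart the quiet-period countdown
            self._timer = threading.Timer(self.delay, self.action)
            self._timer.daemon = True
            self._timer.start()
```

With a delay of, say, 5 seconds, a folder receiving constant change notifications would be rescanned at most once per quiet period, leaving the UI responsive in between.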
- FrizzleFry
- Platinum Member
- Posts: 1241
- Joined: 2005 Oct 16, 19:09
Just a suggestion for Clay... the work that X2 goes through to keep an updated view of a folder with thousands of files should give you an idea of how much work your system is performing just to store them there. You might find that the effort of using a different directory layout brings you a real performance gain.
We have a production system which processes a couple hundred thousand files a day, shuttling them through different directory structures as they're processed. We use a simple filename-tree structure to keep any directory from getting too large (file 123456789 might be stored, for example, under 123/456/). Our filenames increase sequentially, so we benefit from a lot of directory caching.
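The layout Jamie describes can be sketched like this, assuming fixed-width numeric filenames; the helper name and parameters are illustrative, not part of any real system:

```python
def tree_path(filename: str, levels: int = 2, width: int = 3) -> str:
    """Map a filename onto a shallow directory tree.

    Takes `levels` slices of `width` characters from the front of the
    name as nested directories, so no single directory grows too large.
    """
    parts = [filename[i * width:(i + 1) * width] for i in range(levels)]
    return "/".join(parts + [filename])

# tree_path("123456789") -> "123/456/123456789"
```

With sequential names, consecutive files land in the same leaf directory, which is why this scheme benefits so much from directory caching.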
This has worked pretty well over the years, although once we didn't bother making a tree and didn't notice that our purging logic for the directory was broken. A few weeks later, our system suddenly ground to an almost complete halt when the directory's size reached 16 MB... our old UnixWare system apparently stops caching directory contents beyond that limit, and the system had to physically reread the entire directory for each new file! It took about a day to delete it.
That's anecdotal & extreme, but I think if you managed to reduce your actively changing directories to a couple hundred files, you might find that your system was able to perform a lot more work.
Merry Christmas, Jamie