This used to be the default before 1.5, but for some reason the default
changed in 1.5 and 1.6. Changing it back now, because the graph really
is useful, and there's still enough space for the filename even in
smaller terminals.
This solution is far cleaner. Thanks to Ben North for pointing me to the
*-width-specifier that has apparently been built into the printf-family
functions for, well, quite a while, it seems.
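For reference, a minimal sketch of that specifier in plain standard C
(the width value here is made up):

    #include <stdio.h>

    int main(void) {
      int width = 30; /* e.g. derived from the terminal width at runtime */
      /* The '*' tells printf to read the field width from the argument
       * list, so no format string has to be built at runtime. */
      printf("%-*s %5d KiB\n", width, "some-filename.txt", 1024);
      return 0;
    }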
The memory for this format is now statically allocated as well. I
was under the impression its size would depend on wincols, but this is
only the format string we're talking about; it does not have to hold
the actual line contents. I must have been sleeping again...
Oh well, this is a slight performance improvement, although it doesn't
seem to be the cause of the browsing slowness when running under
valgrind. (Obviously running ncdu with valgrind is supposed to be
slower, but the current performance is rather bad...)
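A sketch of what "statically allocated format" amounts to, assuming the
format is built with snprintf() (the names and the reserved column
count are illustrative, not the actual ncdu code):

    #include <stdio.h>

    /* The format only ever holds something like "%-123s", so a small
     * fixed-size buffer is enough no matter how wide the terminal is. */
    static char fmt[16];

    static void update_fmt(int wincols) {
      /* hypothetical: leave 30 columns for the size and the graph */
      snprintf(fmt, sizeof(fmt), "%%-%ds", wincols - 30);
    }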
Rather than storing a pointer to another memory allocation in the
struct. This saves some memory and improves performance by significantly
decreasing the number of calls to [c|m]alloc() and free().
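One way to get a single allocation per item is a C99 flexible array
member; a minimal sketch of the idea (field names are mine, not the
actual ncdu struct):

    #include <stdlib.h>
    #include <string.h>

    struct item {
      long long size;
      char name[];  /* stored in-line, directly after the struct */
    };

    /* One calloc() now covers both the struct and the name, where
     * before there were two allocations plus a pointer to manage. */
    static struct item *item_new(const char *name) {
      struct item *it = calloc(1, sizeof(*it) + strlen(name) + 1);
      if (it != NULL)
        strcpy(it->name, name);
      return it;
    }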
Here is the new multi-page listing functionality I promised in
5db9c2aea1.
It may look easy, but getting it to work right wasn't, unfortunately.
This optimizes a few actions (though not all), and makes the code easier
to understand and expand.
The behaviour of the browser has changed a bit with regard to
multi-page listings. Personally I don't like this change much, so I'll
probably fix that later on.
The displayed directory sizes are now fully correct, although in its
current state it's not all that intuitive because:
directory size != sum(size of all files and subdirectories)
This should probably be fixed later on by splitting the sizes into a
shared and non-shared part.
Also, the sizes displayed after a recalculation or deletion are
incorrect; I'll fix this later on.
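A sketch of the split hinted at above (the field names and semantics
are my assumptions, not the actual ncdu layout):

    /* 'size' counts every hard link once; 'shared' is the part of
     * 'size' that hard links also contribute to other directories,
     * so the mismatch with the sum can be shown explicitly. */
    struct dir {
      long long size;
      long long shared;
    };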
The directory sizes are now incorrect as hard links will be counted
twice again (as if there wasn't any detection in the first place), but
this will get fixed by adding a shared size field.
This method of keeping track of hard links is a lot faster and allows
adding an interface which lists the found links.
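For context, hard links are normally recognised by their
(st_dev, st_ino) pair; a self-contained sketch of such tracking
follows (a linear scan for brevity, where a hash table would be
faster; the actual ncdu bookkeeping may differ):

    #include <sys/stat.h>
    #include <stdlib.h>

    struct linkkey { dev_t dev; ino_t ino; };

    static struct linkkey *seen;
    static size_t nseen;

    /* Returns 1 if this inode was counted before. Only files with
     * st_nlink > 1 can be hard-link duplicates at all. */
    static int is_duplicate(const struct stat *st) {
      size_t i;
      struct linkkey *p;
      if (st->st_nlink <= 1)
        return 0;
      for (i = 0; i < nseen; i++)
        if (seen[i].dev == st->st_dev && seen[i].ino == st->st_ino)
          return 1;
      p = realloc(seen, (nseen + 1) * sizeof(*seen));
      if (p == NULL)
        return 0;  /* out of memory: fall back to counting it again */
      seen = p;
      seen[nseen].dev = st->st_dev;
      seen[nseen].ino = st->st_ino;
      nseen++;
      return 0;
    }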
These ideas may not be very easy to implement, and may not be worth
the performance penalty they might introduce. But that's something that
still needs to be investigated.
When interrupting the calculation process by pressing 'q' while
it's looping through a directory, or when a directory could be opened
but not chdir()'ed into, closedir() wasn't called.
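Roughly, the fixed control flow (heavily simplified; input_q() is a
stub standing in for ncdu's actual key handling):

    #include <dirent.h>
    #include <unistd.h>

    static int input_q(void) { return 0; }  /* stub for the sketch */

    static int calc_dir(const char *path) {
      DIR *dir = opendir(path);
      if (dir == NULL)
        return -1;
      if (chdir(path) < 0) {
        closedir(dir);   /* this call was missing before */
        return -1;
      }
      while (readdir(dir) != NULL) {
        if (input_q()) {
          closedir(dir); /* and so was this one */
          return 1;
        }
        /* ... process the entry ... */
      }
      closedir(dir);
      return 0;
    }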
Setting FF_BSEL after calling browse_init() causes two items to be
selected, as browse_init() makes sure something will be selected,
while calc_process() assumes nothing is, because the previously
selected item had just been deleted.
Hard link detection is now done in a separate pass on the in-memory tree,
and duplicates can be 'removed' and 're-added' on the fly. When making any
changes in the tree, all hard links are re-added before the operation and
removed again afterwards.
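In outline, every tree operation might be bracketed like this (the
function names are mine, not ncdu's; both passes walk the whole tree):

    struct dir;                           /* in-memory tree node */
    void links_readd(struct dir *root);   /* hypothetical pass: count
                                           * every hard link again */
    void links_remove(struct dir *root);  /* hypothetical pass: subtract
                                           * the duplicates once more */

    /* Bracketing each modification keeps the bookkeeping consistent,
     * at the cost of two full tree passes per operation. */
    static void tree_modify(struct dir *root, void (*op)(struct dir *)) {
      links_readd(root);
      op(root);
      links_remove(root);
    }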
While this guarantees that all hard link information is correct, it does
have a few drawbacks. I can currently think of two:
1. It's not the most efficient way to do it, and may be quite slow on
large trees. Will have to do some benchmarks later to see whether
it is anything to be concerned about.
2. The first encountered item is considered as 'counted' and all items
encountered after that are considered as 'duplicate'. Because the
order in which we traverse the tree doesn't always have to be the
same, the items that will be considered as 'duplicate' can vary with
each deletion or re-calculation. This might cause confusion for
people who aren't aware of how hard links work.