

wli points out that shrink_slab inverts the sense of shrinker->seeks: caches
whose objects require more seeks to reestablish are shrunk harder.  That's
wrong - they should be shrunk less.

So fix that up, but scale the result so that the patch is actually a no-op at
this time, because all caches currently use DEFAULT_SEEKS (2).
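
To see why it is a no-op today: with seeks == DEFAULT_SEEKS == 2, the old
expression scanned * seeks and the new 4 * (scanned / seeks) both come out
to 2 * scanned.  A minimal standalone sketch (plain userspace C, not kernel
code; the scanned value is made up for illustration):

#include <stdio.h>

#define DEFAULT_SEEKS 2

int main(void)
{
	unsigned long scanned = 1000;		/* illustrative: pages scanned this pass */
	unsigned long seeks = DEFAULT_SEEKS;

	/* Old formula: more seeks -> larger delta -> shrunk harder (the bug). */
	unsigned long long old_delta = (unsigned long long)scanned * seeks;

	/*
	 * New formula: more seeks -> smaller delta -> shrunk less.  The
	 * factor of 4 rescales things so that seeks == 2 yields exactly
	 * the old value: 4 * (1000 / 2) == 1000 * 2 == 2000.
	 */
	unsigned long long new_delta = 4ULL * (scanned / seeks);

	printf("old=%llu new=%llu\n", old_delta, new_delta);
	return 0;
}

Once some cache registers a seeks value greater than 2, the new formula
starts handing it proportionally less scanning pressure instead of more.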



 mm/vmscan.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff -puN mm/vmscan.c~shrink_slab-seeks-fix mm/vmscan.c
--- 25/mm/vmscan.c~shrink_slab-seeks-fix	2003-12-24 10:21:03.000000000 -0800
+++ 25-akpm/mm/vmscan.c	2003-12-24 10:21:18.000000000 -0800
@@ -154,7 +154,7 @@ static int shrink_slab(long scanned, uns
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
 
-		delta = scanned * shrinker->seeks;
+		delta = 4 * (scanned / shrinker->seeks);
 		delta *= (*shrinker->shrinker)(0, gfp_mask);
 		do_div(delta, pages + 1);
 		shrinker->nr += delta;

_
