
From: Trond Myklebust <trond.myklebust@fys.uio.no>

The following reversion is what fixes my regression.  That puts the
sequential read numbers back to the 2.6.0 values of ~140MB/sec (from the
current 2.6.1 values of 14MB/sec)...

We were triggering I/O of the `ahead' window only when we hit the last
page in the `current' window.  That's bad because it gives no pipelining
at all: the ahead I/O is submitted just as the application is about to
need those pages, so it stalls waiting for the disk on every window.

So go back to full pipelining.
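
As a minimal sketch of the two trigger policies (plain C, illustration
only: the struct and function below are hypothetical stand-ins, though
the field names match those used in mm/readahead.c), the lazy check
submits ahead I/O only on the current window's last page, while the
restored eager check submits it as soon as no ahead window is
outstanding:

	/*
	 * Illustrative model, not kernel code.  With lazy == 1 the ahead
	 * window is submitted only when the last page of the current
	 * window is touched; with lazy == 0 it is submitted on the first
	 * in-window access, so I/O overlaps page consumption.
	 */
	struct ra_window {
		unsigned long start, size;		/* current window */
		unsigned long ahead_start, ahead_size;	/* ahead window */
		unsigned long next_size;		/* size of next window */
	};

	static void in_window_access(struct ra_window *ra, unsigned long offset,
				     int lazy)
	{
		int last_page = (offset == ra->start + ra->size - 1);

		if (ra->ahead_start == 0 && (!lazy || last_page)) {
			ra->ahead_start = ra->start + ra->size;
			ra->ahead_size = ra->next_size;
			/* submit readahead I/O for the ahead window here */
		}
	}

With the eager form, the ahead window's I/O is already in flight while the
application is still crunching through the current window, which is the
pipelining referred to above.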

It's not at all clear why this change made a 10x difference in NFS
throughput.


---

 mm/readahead.c |   14 +++-----------
 1 files changed, 3 insertions(+), 11 deletions(-)

diff -puN mm/readahead.c~readahead-revert-lazy-readahead mm/readahead.c
--- 25/mm/readahead.c~readahead-revert-lazy-readahead	2004-01-15 18:47:15.000000000 -0800
+++ 25-akpm/mm/readahead.c	2004-01-15 18:54:02.000000000 -0800
@@ -449,12 +449,8 @@ do_io:
 			  * accessed in the current window, there
 			  * is a high probability that around 'n' pages
 			  * shall be used in the next current window.
-			  *
-			  * To minimize lazy-readahead triggered
-			  * in the next current window, read in
-			  * an extra page.
 			  */
-			ra->next_size = preoffset - ra->start + 2;
+			ra->next_size = preoffset - ra->start + 1;
 		}
 		ra->start = offset;
 		ra->size = ra->next_size;
@@ -474,13 +470,9 @@ do_io:
 		/*
 		 * This read request is within the current window.  It is time
 		 * to submit I/O for the ahead window while the application is
-		 * about to step into the ahead window.
-		 * Heuristic: Defer reading the ahead window till we hit
-		 * the last page in the current window. (lazy readahead)
-		 * If we read in earlier we run the risk of wasting
-		 * the ahead window.
+		 * crunching through the current window.
 		 */
-		if (ra->ahead_start == 0 && offset == (ra->start + ra->size -1)) {
+		if (ra->ahead_start == 0) {
 			ra->ahead_start = ra->start + ra->size;
 			ra->ahead_size = ra->next_size;
 			actual = do_page_cache_readahead(mapping, filp,

