Just had someone doing performance testing on one of our dodgier projects tell me he's getting results showing that running certain processes produces 9000% disk access. According to Microsoft you can get 100% per disk, so with 6 disks you could in theory hit 600%; to get 9000% you'd need 90 disks. That test rig doesn't have 90 disks in it; I think it's 2x6 and 4x2, so 20 disks max, and that's including the online spares etc.

He asked me to draw on my great knowledge of this kind of thing (tbh I thought he was taking the piss, but he was serious) and come up with a possible answer. The only thing I could think of was that each disk was waiting on something being processed on another disk or server beforehand, creating a daisy-chain effect, possibly even a loop. He went off nodding and saying that makes sense, but I'm sat here thinking I've just spouted total bullshit and sent him off on a wild goose chase. Anyone got any ideas on how that's possible, or am I a genius and just didn't know it?
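For what it's worth, here's the back-of-the-envelope arithmetic as a quick Python sketch. It only illustrates the numbers already in the post (the 100%-per-disk ceiling is the Microsoft figure quoted above, and the disk counts are the rig as I described it), nothing more:

```python
import math

PER_DISK_CEILING = 100  # percent; the "100% per disk" figure from Microsoft

def min_disks_for(aggregate_pct: float) -> int:
    """Smallest number of disks that could legitimately produce this aggregate reading."""
    return math.ceil(aggregate_pct / PER_DISK_CEILING)

# The rig as described: two 6-disk arrays and four 2-disk arrays, spares included.
rig_disks = 2 * 6 + 4 * 2            # = 20 disks
rig_ceiling = rig_disks * PER_DISK_CEILING  # = 2000% theoretical maximum

reading = 9000  # the reported figure, in percent

print(f"Minimum disks needed for {reading}%: {min_disks_for(reading)}")  # 90
print(f"Rig ceiling with {rig_disks} disks: {rig_ceiling}%")             # 2000%
print("Reading physically plausible:", reading <= rig_ceiling)           # False
```

So on that arithmetic the reading is out by a factor of 4.5 even against the rig's theoretical maximum, which is what makes me suspect the problem is in how the counter is being measured or summed rather than in the disks themselves.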