I wrote:

> I have an A1000 under control of RaidManager 6, set up for RAID 5, with 12
> disks on one LUN. Both one of the 11 data disks *and* the hotspare have
> failed.
>
> I replaced the hotspare with a new disk, thinking it would automagically
> become active and take over for the other failed disk. RaidManager showed
> the LUN being reconstructed after installing the new disk, but now shows
> the hotspare in "standby" status.
>
> The LUN is still showing its status as "degraded," with 10 good disks and
> one failed one.
>
> How do I make the current hotspare take over? Just pull the bad one?

Thanks to:

	Helmut Kreft <kreft@belwue.de>
	JV <jv711@yahoo.com>
	mike.salehi@kodak.com

Consensus was that I should have just replaced the failed data disk with the
new one, rather than replacing the (also-failed) hotspare. I used RaidManager
to delete the new hotspare, then moved the new disk into the failed data
disk's slot. The array attempted to rebuild the LUN, but that, too, failed.
Troubleshooting in RaidManager indicated errors on one or more of the other
disks, which is what caused the rebuild to fail. fsck-ing the filesystem
turned up several dozen unreadable blocks, as did ufsdump-ing it.

Before I could troubleshoot any further, however, the entire LUN failed.
RaidManager now shows two failed disks (not counting the previously failed
hotspare). Attempting to fsck the filesystem came back with the superblock,
including the alternate locations, unreadable.

Looks like I'm s-o-l here. The boss is replacing the array with a new Linux
server with its own RAID array.

--
Tim Evans, TKEvans.com, Inc.     | 5 Chestnut Court
tkevans@tkevans.com              | Owings Mills, MD 21117
http://www.tkevans.com/          | 443-394-3864
http://www.come-here.com/News/   |