Welcome to Media Center Master!
A powerful solution for mastering your digital media library.
Supporting Emby, Kodi/XBMC/OSMC, Plex, Windows Media Center, and more!


All times are UTC - 7 hours [ DST ]



Post new topic Reply to topic  [ 28 posts ]  Go to page 1, 2, 3  Next
Author Message
 Post subject: My new NAS, part II
PostPosted: June 7th, 2012, 10:21 am 
Developer/Owner

Joined: May 1st, 2009, 10:12 am
Posts: 11315
Location: Meridian, ID, USA
The original Juggernaut build has long been full, even though it's barely a year old. So I'm building a new NAS!

Juggernaut mark II will support up to four clusters of eight drives each, but I'm starting out with two clusters. Each cluster will have eight 3 TB SATA III drives installed, connected to the host system via USB 3.0 (5 Gbps of bandwidth, versus eSATA's roughly 3 Gbps). Each set of two clusters will sit in one large ZFS pool running RAID-Z2 under FreeNAS again, which puts the estimated formatted, post-RAID capacity around 35 TiB -- more than double the old Juggernaut's 14.9 TiB (which, after overhead, turned out to be closer to 14.1 TiB).

Parts list (~ US $3,750 at the time of this writing):
Seagate Barracuda ST3000DM001 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive -Bare Drive x 16
Mediasonic H82-SU3S2 3.5" Black USB3.0 & eSATA ProBox 8 Bay External Hard Drive Enclosure x 2 (retired)
LSI SAS 9211-8i HBA controller card x 2 (added)
Intel Core i5-2310 Sandy Bridge 2.9GHz (3.2GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Proc...
GIGABYTE GA-Z77-D3H LGA 1155 Intel Z77 HDMI SATA 6Gb/s USB 3.0 ATX Intel Motherboard
G.SKILL Ripjaws X Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) F3-10666CL9D-8GBXL
CORSAIR Builder Series CX500 V2 500W ATX12V v2.3 80 PLUS Certified Active PFC Power Supply
Kingston DataTraveler G3 32GB USB 2.0 Flash Drive (White & Red) Model DTIG3/32GBZ
APEVIA CF6025S 60mm Case Fan x 2
MASSCOOL FD08025S1M4 80mm Case Fan
HEC 7106BB Black 1.0mm Thickness ATX Desktop Computer Case

NewEgg wish list (old)
NewEgg wish list (current)

UPDATE 6/21/2012: some minor modifications were needed to make this build work. FreeNAS wouldn't support USB 3 at all (kernel panics on a variety of builds) and Windows Home Server refuses to make external media into dynamic volumes -- which is fine; I didn't want to forgo ZFS anyway. I could have tried other NAS OSes such as OpenIndiana or straight-up FreeBSD (they actually ship a newer version of ZFS), but I imagine I would have hit the same issues.

The motherboard doesn't support port multiplication on the onboard SATA II or SATA III ports, so I started looking for simple eSATA expansion cards that would do the trick. The problem? I couldn't find specs for how many drives they would support. The enclosures I'm using hold 8 drives each behind a single USB 3.0 or eSATA cable, so I needed cards that could multiply all 8. There's no real standard for eSATA port multipliers -- some claim to support "up to 15" drives, but many only handle 4. I found a SATA III port-multiplier PCIe x1 card at NewEgg and bought two... however, it turns out they only support 6 drives each.

So what to do? Find other cards? Change the plan for this NAS to use 12 drives instead of 16? Take some drives out of the enclosures and put them in a new enclosure? Take some drives out and plug them straight into the motherboard? I wound up choosing that last option and currently have two empty drive bays in each of the two enclosures, with the remaining four drives plugged directly into the motherboard (some on SATA II and some on SATA III). The setup isn't exactly ideal, but it works and lets me proceed with the zpool/array creation and file copy while I consider better ways to present these drives to FreeNAS.

I've got all sixteen drives in a RAID-Z2 zpool (similar to RAID-6: fault tolerant for two simultaneous drive failures). The formatted capacity, as seen by Windows, is 35.1 TB.

As for performance? It's awful... just awful. Using dd locally to benchmark raw reads and writes against the zpool, I average around 150 MB/s write. Writing over the network, I get around 52 MB/s from my personal PC over SMB/CIFS (1 Gbps LAN), and when using the system itself to rsync-pull from the old NAS through SSH, it's even worse: about 30 MB/s (without compression). I'm still tweaking to fine-tune performance (there's a lot to tweak when it comes to ZFS, ARC caching, etc.), but it doesn't look like it'll get much better. I'm hesitant to blame anything specific right now, but if I had to guess, it's probably the enclosures or eSATA port multipliers bottlenecking (even though they claim full NCQ support).
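
For the record, the pull in question is just rsync over SSH against the old NAS; the host and paths below are hypothetical, and the runnable part is a local stand-in so the snippet is self-contained:

```shell
# The real pull would look something like (hypothetical host/path):
#   rsync -a user@juggernaut1:/mnt/storage/ /mnt/storage/
# On a fast LAN, skipping -z (compression) is usually the right call.
# Local stand-in so this snippet runs anywhere:
src=$(mktemp -d); dst=$(mktemp -d)
echo "sample payload" > "$src/file.dat"
rsync -a "$src/" "$dst/"        # archive mode, no compression
cat "$dst/file.dat"             # -> sample payload
rm -rf "$src" "$dst"
```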

UPDATE 9/15/2012: I've decided to ditch the two external enclosures. They're wonderful enclosures for a standalone, unmanaged NAS -- support up to 32 TB (each) via 8x 4 TB drives, USB 3 or eSATA/SATA III, etc. But trying to access individual drives via FreeNAS (which can only use generic drivers for the device) means I'm talking to each drive in the enclosure one at a time. That's the huge bottleneck in my performance right now.

I'm going with two of these controller cards now: Supermicro UIO MegaRAID AOC-USAS2-L8i. All sixteen drives will sit outside the case in a custom-built enclosure I'll be working on (since there's no way I can fit them all in a single case).

Even with the motherboard only running the second PCIe x16 slot in x4 mode, the theoretical transfer cap on that slot is 4x 500 MB/s (not Mbps) since it's PCIe 2.0, giving the card up to 2,000 MB/s (16 Gbps) to and from the motherboard. Each port (of which there are 8 per card, supplied by the 2x mini-SAS connectors) tops out at 600 MB/s (the SATA III limit), but mechanical drives sustain nowhere near that, so eight drives per card won't come close to saturating the x4 link in practice.
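
A quick back-of-the-envelope check on those link numbers (the ~150 MB/s sustained figure per 7200 RPM drive is my assumption, not a measured spec):

```shell
# PCIe 2.0 moves ~500 MB/s per lane, so an x4 link to the HBA gives:
echo $(( 4 * 500 ))     # 2000 MB/s card-to-host
# Eight mechanical drives at a realistic ~150 MB/s sustained each:
echo $(( 8 * 150 ))     # 1200 MB/s, comfortably inside the x4 link
```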

If my math is right, local transfer rates should jump to a good 600-800 MB/s in practice. Given that the machine only has a single gigabit LAN connection, the network will cap that at 100-125 MB/s, which becomes the new bottleneck to address -- but even that is FAR better than the 30-50 MB/s I'm getting now (due partly to a poor design choice on my side and partly to the lack of non-generic drivers in FreeNAS).

I'll update again when the hardware arrives on Tuesday.

UPDATE 9/18/2012: well, the drive speed did increase substantially. Or... at least reads did. Write speeds have dropped by half (from ~150 MB/s down to 80-85 MB/s), but read speeds are about what I expected at ~680-700 MB/s. Sadly, reading across the network is still god-awful slow (I've tried SMB, FTP, and NFS and can't read beyond 30 MB/s now). I'm going through the process of killing all the tunables, but have no idea where the performance issue is at this point.

UPDATE 9/19/2012: as it turns out, I was benchmarking with dd (which I already knew would give me inconsistent numbers) using input from /dev/random, which apparently can't keep up with these drives' write speed: something I hadn't considered. If I write from /dev/zero instead, I get speeds much closer to what I expected. Here are my averaged results with dd using 10 GiB files:

Ten runs of:
# dd if=/dev/zero of=/mnt/storage/test.dat bs=1024k count=10000
Result: 10485760000 bytes transferred in 20.661325 secs (507506659 bytes/sec)
Result: 10485760000 bytes transferred in 10.961360 secs (956611225 bytes/sec)
Result: 10485760000 bytes transferred in 21.324609 secs (491721090 bytes/sec)
Result: 10485760000 bytes transferred in 17.952482 secs (584084140 bytes/sec)
Result: 10485760000 bytes transferred in 18.162130 secs (577341971 bytes/sec)
Result: 10485760000 bytes transferred in 18.342647 secs (571660129 bytes/sec)
Result: 10485760000 bytes transferred in 14.981934 secs (699893615 bytes/sec)
Result: 10485760000 bytes transferred in 12.795745 secs (819472402 bytes/sec)
Result: 10485760000 bytes transferred in 17.678354 secs (593141193 bytes/sec)
Result: 10485760000 bytes transferred in 12.945719 secs (809978959 bytes/sec)
Average: 10485760000 bytes transferred in 15.860093 secs (661141138 bytes/sec)
Apparent write speed: 630.5 MB/s (4.93 Gbps)

Ten runs of:
# dd of=/dev/null if=/mnt/storage/test.dat bs=1024k count=10000
Result: 10485760000 bytes transferred in 15.790833 secs (664040966 bytes/sec)
Result: 10485760000 bytes transferred in 15.537288 secs (674877111 bytes/sec)
Result: 10485760000 bytes transferred in 15.482166 secs (677279908 bytes/sec)
Result: 10485760000 bytes transferred in 20.563456 secs (509922066 bytes/sec)
Result: 10485760000 bytes transferred in 22.052992 secs (475480150 bytes/sec)
Result: 10485760000 bytes transferred in 14.512513 secs (772542334 bytes/sec)
Result: 10485760000 bytes transferred in 22.058579 secs (475359724 bytes/sec)
Result: 10485760000 bytes transferred in 24.815716 secs (422545132 bytes/sec)
Result: 10485760000 bytes transferred in 21.631180 secs (484752102 bytes/sec)
Result: 10485760000 bytes transferred in 22.539017 secs (465227034 bytes/sec)
Average: 10485760000 bytes transferred in 18.651210 secs (562202653 bytes/sec)
Apparent read speed: 536.1 MB/s (4.19 Gbps)


This brings my local I/O to tolerable and expected levels: about 630 MB/s write and 536 MB/s read.

Running iozone (also with 10 GiB files) yields these results for 4k records (sequential only):

Code:
# iozone -i 0 -i 1 -r 4k -s 10000m -t 1 -F /mnt/storage/iozone.tmp
        Iozone: Performance Test of File I/O
                Version $Revision: 3.397 $
                Compiled for 64 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
                     Ben England.

        Run began: Wed Sep 19 10:08:00 2012

        Record Size 4 KB
        File size set to 10240000 KB
        Command line used: iozone -i 0 -i 1 -r 4k -s 10000m -t 1 -F /mnt/storage/iozone.tmp
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 1 process
        Each process writes a 10240000 Kbyte file in 4 Kbyte records

        Children see throughput for  1 initial writers  =  354975.84 KB/sec
        Parent sees throughput for  1 initial writers   =  330521.05 KB/sec
        Min throughput per process                      =  354975.84 KB/sec
        Max throughput per process                      =  354975.84 KB/sec
        Avg throughput per process                      =  354975.84 KB/sec
        Min xfer                                        = 10240000.00 KB

        Children see throughput for  1 rewriters        =  158032.20 KB/sec
        Parent sees throughput for  1 rewriters         =  156623.40 KB/sec
        Min throughput per process                      =  158032.20 KB/sec
        Max throughput per process                      =  158032.20 KB/sec
        Avg throughput per process                      =  158032.20 KB/sec
        Min xfer                                        = 10240000.00 KB

        Children see throughput for  1 readers          =  559883.19 KB/sec
        Parent sees throughput for  1 readers           =  559866.46 KB/sec
        Min throughput per process                      =  559883.19 KB/sec
        Max throughput per process                      =  559883.19 KB/sec
        Avg throughput per process                      =  559883.19 KB/sec
        Min xfer                                        = 10240000.00 KB

        Children see throughput for 1 re-readers        =  879164.69 KB/sec
        Parent sees throughput for 1 re-readers         =  879057.94 KB/sec
        Min throughput per process                      =  879164.69 KB/sec
        Max throughput per process                      =  879164.69 KB/sec
        Avg throughput per process                      =  879164.69 KB/sec
        Min xfer                                        = 10240000.00 KB


And iozone with 64k records:

Code:
# iozone -i 0 -i 1 -r 64k -s 10000m -t 1 -F /mnt/storage/iozone.tmp
        Iozone: Performance Test of File I/O
                Version $Revision: 3.397 $
                Compiled for 64 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
                     Ben England.

        Run began: Wed Sep 19 10:05:01 2012

        Record Size 64 KB
        File size set to 10240000 KB
        Command line used: iozone -i 0 -i 1 -r 64k -s 10000m -t 1 -F /mnt/storage/iozone.tmp
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 1 process
        Each process writes a 10240000 Kbyte file in 64 Kbyte records

        Children see throughput for  1 initial writers  =  348337.31 KB/sec
        Parent sees throughput for  1 initial writers   =  331179.70 KB/sec
        Min throughput per process                      =  348337.31 KB/sec
        Max throughput per process                      =  348337.31 KB/sec
        Avg throughput per process                      =  348337.31 KB/sec
        Min xfer                                        = 10240000.00 KB

        Children see throughput for  1 rewriters        =  222259.89 KB/sec
        Parent sees throughput for  1 rewriters         =  218475.16 KB/sec
        Min throughput per process                      =  222259.89 KB/sec
        Max throughput per process                      =  222259.89 KB/sec
        Avg throughput per process                      =  222259.89 KB/sec
        Min xfer                                        = 10240000.00 KB

        Children see throughput for  1 readers          =  758806.94 KB/sec
        Parent sees throughput for  1 readers           =  758685.94 KB/sec
        Min throughput per process                      =  758806.94 KB/sec
        Max throughput per process                      =  758806.94 KB/sec
        Avg throughput per process                      =  758806.94 KB/sec
        Min xfer                                        = 10240000.00 KB

        Children see throughput for 1 re-readers        =  774870.88 KB/sec
        Parent sees throughput for 1 re-readers         =  774749.29 KB/sec
        Min throughput per process                      =  774870.88 KB/sec
        Max throughput per process                      =  774870.88 KB/sec
        Avg throughput per process                      =  774870.88 KB/sec
        Min xfer                                        = 10240000.00 KB


However, my network speed to/from this system (and only this system) is hamstrung somewhere. I can easily saturate my gigabit network reading and writing files to any other system (I have a modern VelociRaptor HDD in one HTPC and modern SSDs in three PCs on this network to test with), reaching 80-110 MB/s, but only 30-50 MB/s to and from FreeNAS (over CIFS/SMB, NFS, or FTP -- I regularly test against all three).

I'm still playing with tunables, etc., but dialing in the performance issue has been an exercise in frustration.


UPDATE 9/19/2012 (#2):
Making progress:
Image


UPDATE 9/21/2012:
And we've reached a decent write-speed now!
Image

Happy to report a fast and healthy NAS finally! :D


UPDATE 9/25/2012:
I've built a rather crummy but functional external mount/enclosure for the drives and wanted to share some pictures of the whole thing:

Inside the server:
Image

Because the mounting brackets for these cards are UIO, I found some longer screws and used a combination of information found here and here to adjust how they secure in the mounting area:
Image

16 x 3 TB drives sitting in the enclosure:
Image

The drives, all wired up:
Image

The back:
Image

The whole thing:
Image

And for cooling the drives, a standard, oscillating floor fan (although I've made sure it doesn't oscillate).
Image

_________________
Peter Souza IV
stable version 2.16.11117.1299 / April 21st, 2017
Media Center Master on Facebook!


 Post subject: Re: My new NAS, part II
PostPosted: June 7th, 2012, 7:59 pm 
Original BluRay

Joined: January 7th, 2011, 8:19 pm
Posts: 1295
Location: Melbourne, Australia
Impressive amount of space you'll have there. I'm always tempted to get a dedicated NAS, but never can seem to justify it.

_________________
"An important reward for a job well done is a personal sense of worthwhile achievement."
Image


 Post subject: Re: My new NAS, part II
PostPosted: June 11th, 2012, 12:39 pm 
High-Def MKV

Joined: November 24th, 2010, 9:31 pm
Posts: 194
Location: Lima, OH
I was never happy with FreeNAS performance, compared to something a bit more commercial like OpenIndiana.

If I replace mine, I will definitely go server board and ECC memory as well, at probably twice the cost of a regular board and standard memory.

Mine is just 6x 2 TB drives in 2x RAID-Z, with my folders striped across both sets. The only bad thing about adding sets down the road is that if your old vdevs are full, you're just writing to the new vdev, so you won't get as good performance. I suppose if you added a vdev you could create new ZFS folders, then cut from your old ones (striped across, say, 2 vdevs) and paste to your new ones (striped across, say, 4 vdevs), which should even things out somewhat.

_________________
OmniOS based SAS
Dual E5 clustered ESXi farm (32 cores, 128 GB RAM)


 Post subject: Re: My new NAS, part II
PostPosted: June 11th, 2012, 8:23 pm 
High-Def MKV

Joined: July 5th, 2010, 11:07 am
Posts: 111
Nice setup.

My boss just ordered me a Thecus NAS server with 16x 4 TB drives in a RAID 60 configuration, giving me 48 TB of storage before formatting overhead.

Should finally resolve the storage limitation I ran into with my current Adaptec 5805 RAID controller.


 Post subject: Re: My new NAS, part II
PostPosted: September 15th, 2012, 2:41 pm 
Developer/Owner

Joined: May 1st, 2009, 10:12 am
Posts: 11315
Location: Meridian, ID, USA
Updated again. Anyone want to buy the old enclosures? ;)

 Post subject: Re: My new NAS, part II
PostPosted: September 15th, 2012, 10:18 pm 
High-Def MKV

Joined: November 24th, 2010, 9:31 pm
Posts: 194
Location: Lima, OH
I'd have suggested a couple of $75 M1015 controllers off eBay; for the money they're great, and just about every OS supports them.

Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
Start a double-parity RAIDZ (raidz2) configuration at 5 disks (3+2)
Start a triple-parity RAIDZ (raidz3) configuration at 8 disks (5+3)
(N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 8

Might want to compare your layout with those suggestions. I would HIGHLY (and I can't stress that enough) recommend that you don't go with one large raidz2 vdev, but break it down into at least two raidz or raidz2 vdevs. I realize you'll lose more drives to parity that way, but your reads and writes should improve as you spread I/O across both vdevs.
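
To put rough numbers on that trade-off with the 16x 3 TB drives in this thread (plain arithmetic, not pool commands):

```shell
# One 16-wide raidz2 vs two 8-wide raidz2 vdevs, 3 TB drives:
single=$(( (16 - 2) * 3 ))      # 42 TB usable, all I/O hits one vdev
split=$(( 2 * (8 - 2) * 3 ))    # 36 TB usable, I/O striped over two vdevs
echo "one raidz2 vdev: ${single} TB; two raidz2 vdevs: ${split} TB"
```

So the split layout costs 6 TB of capacity but doubles the number of vdevs ZFS can stripe reads and writes across.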

Also, I have always maintained that FreeNAS is the "easy" solution for ZFS, but not the fastest. OpenIndiana + napp-it is the freaking bomb.

 Post subject: Re: My new NAS, part II
PostPosted: September 18th, 2012, 7:06 pm 
Developer/Owner

Joined: May 1st, 2009, 10:12 am
Posts: 11315
Location: Meridian, ID, USA
Updated again.

cw823: I tried OpenIndiana -- gave it the better part of four hours. It refused to automatically detect 64-bit from either the Live CD or USB, and it refused to install to a USB drive; you're required to dedicate a hard drive to it. I managed to make room for a 17th drive in this desktop system and found a work-around to hack the boot menu to force 64-bit, but every time I installed to the drive it would throw an "installation failed" at 99% and revert all of my settings. I tried booting from the failed install anyway and it never finished booting. I also gave it a go on different hardware and hit similar issues. In fact, the only place I could get it to boot was in a VM, which hardly helps. I won't be pursuing this option going forward.

Additionally, there's no further redundancy for this NAS and I'm not prepared to lose 17.4 TiB of data (it's about half full). I won't be breaking the current zpool or buying more drives to act as a secondary backup at this time (although you're welcome to donate to such a cause if you'd really like), but honestly, the solution to the problem shouldn't be "nuke everything and start over" anyway. I might gain some marginal performance by restructuring my zpool, but we're talking about very drastic performance issues right now. My first NAS, with 10 SATA II drives (again, all in one zpool), runs circles around this thing.

 Post subject: Re: My new NAS, part II
PostPosted: September 19th, 2012, 7:38 am 
High-Def MKV

Joined: November 24th, 2010, 9:31 pm
Posts: 194
Location: Lima, OH
Ah, I see. Oddly enough, I've never had an install problem with it. I'm in the process of virtualizing everything at home on an i7 box with 48 GB of RAM, which has been quite the process.

I see your point on the FreeNAS vs. OpenIndiana install debate. I know it's nice to save a mechanical hard drive, and the USB install option is one of FreeNAS's best features imho. I've been using spare 2.5" drives for my OpenIndiana installs and then mirroring the root pool.

I do love the M1015 cards though; for $75 you're getting a SATA III controller with 8 SATA ports (via 2x SAS connectors), and I've tried the Intel/Supermicro varieties too. We all have our preferred brands, and sometimes there's little to no difference between them.

Odd speed issue, though -- it's the same problem I had when I was browsing NAS OSes, and the main reason I settled on OpenIndiana for mine: I just couldn't get FreeNAS's speed where I wanted it (maxing gigabit). Hoping that virtualizing that box and giving it 10G connections to other VMs (like my MCM/torrent machine) will speed things up AND consolidate rack space/heating.

 Post subject: Re: My new NAS, part II
PostPosted: September 19th, 2012, 11:54 am 
Developer/Owner

Joined: May 1st, 2009, 10:12 am
Posts: 11315
Location: Meridian, ID, USA
Another few updates -- I've got the local I/O speed solid now, and iperf tests max out my gigabit LAN, so I'm fairly sure that with the right tweaks I can get SMB performance where I want it.

Using SMB, I'm now at 90-92 MB/s on reads from the NAS and about 50-52 MB/s on writes to the NAS -- quite a difference from the ~25 MB/s reads and 45-50 MB/s writes from before.

 Post subject: Re: My new NAS, part II
PostPosted: September 19th, 2012, 3:50 pm 
Flash Video

Joined: March 1st, 2012, 3:44 pm
Posts: 12
Interesting read. Keep at the tweaking and you should be able to get to 90+ MB/s on both read and write with that config.

I have a much smaller NAS (went with Windows Storage Server after testing just about every NAS option out there, since it gave the best performance). It's nowhere near your specs, but I've managed to hit that performance level. Don't rule out your cabling and network equipment: I had a Dell switch in between that killed things until I removed it. With one central switch between the NAS and my clients, each hits that mark pretty regularly (with the exception of a slower desktop on a single 7200 RPM drive that probably can't sustain that transfer speed). My main PC (RAIDed 15K SAS drives) hits over 100 MB/s writing to the NAS across the LAN pretty regularly.

Good luck, and man am I jealous of that space. I've been looking into some WD Red NAS drives, but nothing of that size...

